How to Implement RAG locally using LM Studio and AnythingLLM

This video shows a step-by-step process to locally implement RAG Pipeline with LM Studio and AnythingLLM with local model offline and for free.
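For readers who want to see what such a pipeline does under the hood, here is a minimal, self-contained sketch of the retrieval step in Python. It uses a toy bag-of-words similarity purely for illustration; a real setup (like the one in the video) would get embeddings and completions from LM Studio's local OpenAI-compatible server, which by default listens on http://localhost:1234/v1. The function names and scoring here are illustrative assumptions, not AnythingLLM's actual implementation.

```python
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding" for illustration only; a real pipeline
    # would request vectors from an embedding model (e.g. via LM Studio's
    # OpenAI-compatible /v1/embeddings endpoint).
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=2):
    # Rank documents by similarity to the query and keep the top k.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query, docs):
    # Stuff the retrieved context into the prompt (the "A" in RAG
    # happens when the model answers grounded in this context).
    context = "\n".join(retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "LM Studio serves local models over an OpenAI-compatible API.",
    "AnythingLLM manages document workspaces for RAG.",
    "Bananas are yellow.",
]
prompt = build_prompt("What does LM Studio do?", docs)
# The prompt would then be POSTed to the local chat completions endpoint,
# e.g. http://localhost:1234/v1/chat/completions, with no cloud calls.
```

AnythingLLM handles this chunking, embedding, and prompt assembly for you; the sketch only shows why the model can answer questions about your PDFs without ever being fine-tuned on them.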

🔥 Get a 50% discount on any A6000 or A5000 GPU rental; use the following link and coupon:

Coupon code: FahdMirza

#lmstudio #anythingllm

PLEASE FOLLOW ME:

RELATED VIDEOS:

All rights reserved © 2021 Fahd Mirza
Comments

You are my favorite YouTuber! This is amazing. I got to know about LM Studio through you, and now I am going to try this out. I was trying to set up RAG with Llama 3 but ran into a lot of errors. Since this is a simpler method, I should finally be able to chat with my PDFs.

bomsbravo

Another useful and informative video, thank you.

publicsectordirect

I'm getting an error while uploading a PDF or other file... any remedy?

PushpendraKumar-itwf

In my opinion, the latest release of Msty is much more functional and has a better UI. AnythingLLM's advantage is that it connects to LM Studio.

Alex

Sir, nice video. Could you please tell me which benchmarks are used to measure an LLM's performance and compare it with other LLMs in terms of performance and privacy?

bhawanirathore

Thank you, it's exactly what I was searching for. I have a question: is there a local way to make my model search and use data inside an SQL database?

dinamohamed

Hi, thanks for the tutorial. Under Settings, what did you choose for Chat... and Agent...: LM Studio or Llama?

blazar

Is there a way to have Vision as well? That would be amazing!

frosti

Very nice solution for RAG using a local model.

I was attempting to do this with Streamlit, but this appears to be a very clean approach.

How can we use Colab to point to a public URL with Localtunnel? I seem to be having trouble getting that working.

Thanks for sharing.

SolidBuildersInc

Thanks for the video. Exactly what I was looking for. Question: can we still use the LM Studio interface to chat with a model after we've added new content, or do we need to use AnythingLLM exclusively? In other words, does LM Studio see the added content, or does only AnythingLLM see it? Thanks again!

dennisg

I'm trying to hook AnythingLLM into a Slack chatbot, because you can use multiple models for docs and websites (even Google search, I think). While LM Studio has a server port, I don't think AnythingLLM does, does it?

JohnPamplin

It's been reported that AnythingLLM has a critical security flaw. Just FYI.

longboardfella