Testing local RAG (document search/querying) with Llama3 and Nomic's embedding models!

In this video I play with LM Studio's new embeddings endpoint to run a FULLY offline document search solution with powerful AI models like Llama3 (just released) and Nomic's embed model!
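The pipeline shown in the video boils down to: embed each document chunk, embed the query, rank chunks by cosine similarity, then hand the top matches to the chat model. Here is a minimal sketch of the retrieval step. To keep it runnable offline it uses a toy bag-of-words embedder in place of the real API call; the commented-out endpoint URL and model name are assumptions based on LM Studio's OpenAI-compatible defaults, not taken from the video.

```python
import math
from collections import Counter

# Toy stand-in embedder so this sketch runs without a server.
# With LM Studio running, you would instead POST to its
# OpenAI-compatible endpoint, e.g. (names are assumptions):
#   requests.post("http://localhost:1234/v1/embeddings",
#                 json={"model": "nomic-embed-text-v1.5", "input": text})
def embed(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    # Rank all chunks against the query embedding, keep the top k.
    qv = embed(query)
    return sorted(chunks, key=lambda c: cosine(qv, embed(c)), reverse=True)[:k]

chunks = [
    "Llama3 is a large language model released by Meta.",
    "Nomic embed is an open embedding model for text retrieval.",
    "LM Studio can serve models through a local OpenAI-compatible API.",
]
print(retrieve("which embedding model is used for retrieval?", chunks, k=1))
```

The retrieved chunks would then be pasted into the Llama3 prompt as context for the final answer.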
Comments

Hi, I think all Llama3 models come with their own embedding technique (llm2vec), which can also be used to set up RAG. Do you see Nomic's embedder outperforming it significantly, or is it still better to use the inherent llm2vec for text embeddings?

remyrflt

Did you upload the .py files anywhere? I'd like to try the same with Nomic + Phi-3 Mini.

Sbato

We can't see the code you're speaking to because it's behind you.

JustinJohnson