Discover LlamaIndex: Bottoms-Up Development with LLMs (Part 4, Embeddings)

In this video, we introduce embedding models: models that generate text embeddings, numerical representations of text that enable semantic search. We give an overview of different embedding models and discuss how they are benchmarked (the MTEB leaderboard). We also demonstrate how to use different embedding models, such as OpenAI and Instructor embeddings, and show how to plug them into LlamaIndex.
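
For readers following along, here is a minimal sketch of swapping embedding models in LlamaIndex. It assumes llama-index >= 0.10 with the Settings API (the video may use the older ServiceContext pattern), the llama-index-embeddings-openai and llama-index-embeddings-huggingface packages, and illustrative model names and data paths.

```python
# A minimal sketch of switching embedding models in LlamaIndex.
# Assumes llama-index >= 0.10; model names and the "./docs" path are illustrative.
from llama_index.core import Settings, SimpleDirectoryReader, VectorStoreIndex
from llama_index.embeddings.huggingface import HuggingFaceEmbedding
from llama_index.embeddings.openai import OpenAIEmbedding

# Option 1: OpenAI embeddings (requires OPENAI_API_KEY in the environment).
Settings.embed_model = OpenAIEmbedding(model="text-embedding-3-small")

# Option 2: a local Hugging Face model, e.g. a small BGE model.
# Settings.embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5")

# Build an index; the configured embed_model embeds each document chunk.
documents = SimpleDirectoryReader("./docs").load_data()
index = VectorStoreIndex.from_documents(documents)

# Queries are embedded with the same model, then matched against stored vectors.
response = index.as_query_engine(similarity_top_k=3).query("What are embeddings?")
print(response)
```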
Comments

Hi Logan, I watched all 4 parts today. They are awesome! I have been using LlamaIndex for a few months now, and there are still things I can learn from them. Hope to see more videos from you! Great job!

jma

Same error: "TypeError: Can't instantiate abstract class InstructorEmbeddings with abstract method class_name"

gaborberei
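
A likely cause of this TypeError is that newer LlamaIndex releases declare class_name as an abstract method on BaseEmbedding, so a custom subclass must implement it. Below is a hedged sketch of an InstructorEmbeddings class that adds it; the import paths, default model name, and instruction strings are assumptions based on llama-index >= 0.10 and the InstructorEmbedding package, and may differ from the version used in the video.

```python
# A hedged sketch of a custom Instructor embedding class for LlamaIndex.
# Assumes llama-index >= 0.10 and the InstructorEmbedding package; the key
# fix for the TypeError above is the class_name() classmethod, which newer
# releases declare as an abstract method on BaseEmbedding.
from typing import Any, List

from InstructorEmbedding import INSTRUCTOR
from llama_index.core.bridge.pydantic import PrivateAttr
from llama_index.core.embeddings import BaseEmbedding


class InstructorEmbeddings(BaseEmbedding):
    _model: Any = PrivateAttr()
    _instruction: str = PrivateAttr()

    def __init__(
        self,
        instructor_model_name: str = "hkunlp/instructor-large",
        instruction: str = "Represent the document for retrieval:",
        **kwargs: Any,
    ) -> None:
        super().__init__(**kwargs)
        self._model = INSTRUCTOR(instructor_model_name)
        self._instruction = instruction

    @classmethod
    def class_name(cls) -> str:
        # Implementing this satisfies the abstract method and avoids the error.
        return "instructor_embeddings"

    def _get_text_embedding(self, text: str) -> List[float]:
        embeddings = self._model.encode([[self._instruction, text]])
        return embeddings[0].tolist()

    def _get_text_embeddings(self, texts: List[str]) -> List[List[float]]:
        embeddings = self._model.encode([[self._instruction, t] for t in texts])
        return [e.tolist() for e in embeddings]

    def _get_query_embedding(self, query: str) -> List[float]:
        embeddings = self._model.encode([[self._instruction, query]])
        return embeddings[0].tolist()

    async def _aget_query_embedding(self, query: str) -> List[float]:
        return self._get_query_embedding(query)
```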

Thanks for the videos. Can I confirm I understand correctly how this works? Once embeddings are constructed via the OpenAI API, LlamaIndex performs a cosine similarity check locally(?) and picks the top-k vectors. It then retrieves the actual text behind those embeddings and sends those chunks of text to OpenAI. If my understanding is correct and LlamaIndex does not send the embeddings to the OpenAI LLM, but rather the already-retrieved chunks of text, then the only reason I see to use OpenAI embeddings as opposed to a local embedding model would be that their quality or dimensionality is a better fit for the downstream OpenAI language-understanding step.

MrSmilev
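
That description matches the default behavior: with the built-in in-memory vector store, similarity is computed locally over the stored vectors, and only the retrieved text chunks (plus the question) are sent to the LLM. Below is a hedged sketch of how one might inspect that retrieval step, assuming llama-index >= 0.10, an already-built VectorStoreIndex named `index`, and an illustrative question string.

```python
# A hedged sketch of inspecting the retrieval step described above.
# Assumes llama-index >= 0.10 and an existing VectorStoreIndex named `index`.
from llama_index.core import VectorStoreIndex


def show_retrieval(index: VectorStoreIndex, question: str, top_k: int = 3) -> None:
    # The retriever embeds the query, scores it against the stored chunk
    # vectors (cosine similarity for the default in-memory store), and
    # returns the top-k nodes, i.e. the original text chunks, not vectors.
    retriever = index.as_retriever(similarity_top_k=top_k)
    nodes = retriever.retrieve(question)
    for node in nodes:
        print(node.score, node.node.get_content()[:80])

    # The query engine then packs these retrieved text chunks (plus the
    # question) into a prompt for the LLM; the embeddings themselves are
    # never sent to the completion endpoint.
    response = index.as_query_engine(similarity_top_k=top_k).query(question)
    print(response)
```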