Embeddings in Depth - Part of the Ollama Course

Dive into the world of embeddings and their crucial role in modern AI applications, particularly in enhancing search capabilities and information retrieval. This video, part of our comprehensive Ollama course, explains:
- What embeddings are and how they differ from traditional text-matching searches
- The importance of embeddings in Retrieval Augmented Generation (RAG)
- How to create and use embeddings with Ollama's API (see the sketch after this list)
- A practical comparison of different embedding models, including:
  - nomic-embed-text
  - mxbai-embed-large
  - all-minilm
  - snowflake-arctic-embed
  - bge-m3
  - bge-large
  - llama3.1
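If you want to experiment before watching, here is a minimal sketch of calling Ollama's embed endpoint from Python. It assumes a local Ollama server on the default port (11434), that the models have already been pulled, and that the requests library is installed; the helper name and sample sentences are illustrative, not taken from the video.

```python
import requests

# Ollama's batch-capable embedding endpoint (assumes a local server on the default port).
OLLAMA_EMBED_URL = "http://localhost:11434/api/embed"

def embed(model: str, texts: list[str]) -> list[list[float]]:
    """Return one embedding vector per input text."""
    response = requests.post(OLLAMA_EMBED_URL, json={"model": model, "input": texts})
    response.raise_for_status()
    return response.json()["embeddings"]

# Compare the vector sizes that two of the models listed above produce.
for model in ["nomic-embed-text", "all-minilm"]:
    vectors = embed(model, ["What are embeddings?", "How does RAG work?"])
    print(f"{model}: {len(vectors)} vectors, {len(vectors[0])} dimensions each")
```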
We'll demonstrate real-world applications, discuss performance considerations, and explore the nuances of working with embeddings. Whether you're new to AI or looking to deepen your understanding, this video provides valuable insights into this powerful technology.
Join us as we uncover how embeddings are transforming the way we interact with and retrieve information in the age of AI.
#AI #MachineLearning #Embeddings #Ollama #InformationRetrieval
(They have a pretty URL because they pay at least $100 per month for Discord. If you help bring more viewers to this channel, I'll be able to afford that too.)
Join this channel to get access to perks:
00:00 - Start
00:36 - There is another way
00:47 - Welcome to the course
01:25 - How do embeddings fit in
01:45 - What does the actual embedding
02:12 - Dimensions
02:39 - Similarity Search
03:39 - How to create the embedding
03:58 - The 3 endpoints
04:43 - The right endpoint to use
05:25 - Python and JS/TS libraries
05:54 - Let's look at a simple example
06:09 - The sample I used to embed
06:50 - Which is faster
07:34 - Let's look at the answers
08:55 - Where to find the example code
09:08 - Some of the variables to play with
09:32 - Frustrations
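As a companion to the Similarity Search chapter above, here is a hedged sketch of ranking a few documents against a query by cosine similarity. It is self-contained, so it re-declares a small embed() helper; numpy is assumed to be installed, and the documents, query, and model choice are made up for illustration. The video's own example code may differ.

```python
import numpy as np
import requests

def embed(model: str, texts: list[str]) -> list[list[float]]:
    """One embedding vector per text, via a local Ollama server on the default port."""
    r = requests.post("http://localhost:11434/api/embed",
                      json={"model": model, "input": texts})
    r.raise_for_status()
    return r.json()["embeddings"]

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two embedding vectors."""
    a, b = np.asarray(a), np.asarray(b)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

documents = [
    "Embeddings map text to vectors that capture meaning.",
    "Ollama runs large language models locally.",
    "RAG retrieves relevant chunks before generating an answer.",
]

model = "nomic-embed-text"  # any embedding model from the list above
doc_vectors = embed(model, documents)
query_vector = embed(model, ["How do I ground a model in my own documents?"])[0]

# Rank documents by similarity to the query, most similar first.
for doc, vec in sorted(zip(documents, doc_vectors),
                       key=lambda pair: cosine_similarity(query_vector, pair[1]),
                       reverse=True):
    print(f"{cosine_similarity(query_vector, vec):.3f}  {doc}")
```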