Ollama Embedding: How to Feed Data to AI for Better Response?

🌟 Welcome to our deep dive into Ollama Embedding for AI applications! In this comprehensive tutorial, we unlock the power of Ollama embeddings to significantly improve your app's responses. 🌟 Feed your data to the AI for better responses.

🔍 What We Cover:
Introduction to Ollama Embedding and its advantages.
Step-by-step guide to ingesting data from URLs, converting it into embeddings, and storing them in a vector database using ChromaDB.
Integration with Nomic Embed Text for superior embedding performance.
Using retrieval-augmented generation (RAG) for data retrieval.
Building a user-friendly interface with Gradio.
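The flow covered above (ingest → embed → store → retrieve) can be sketched in plain Python with no dependencies. This is a toy illustration only: a bag-of-words counter stands in for the nomic-embed-text embeddings, and a plain list stands in for a ChromaDB collection; all names here are made up for the example.

```python
import math
import re
from collections import Counter

def embed(text):
    # Toy embedding: bag-of-words term frequencies.
    # In the video, this role is played by nomic-embed-text via Ollama.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a, b):
    # Cosine similarity between two sparse term-frequency vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# "Vector store": (embedding, original chunk) pairs, standing in for ChromaDB.
chunks = [
    "Ollama runs large language models locally.",
    "ChromaDB stores embeddings for fast retrieval.",
    "Gradio builds simple web interfaces for ML apps.",
]
store = [(embed(c), c) for c in chunks]

def retrieve(query, k=1):
    # Return the k stored chunks most similar to the query.
    q = embed(query)
    ranked = sorted(store, key=lambda pair: cosine(q, pair[0]), reverse=True)
    return [text for _, text in ranked[:k]]

print(retrieve("Which database stores embeddings?"))
# → ['ChromaDB stores embeddings for fast retrieval.']
```

In the real app the only change is swapping `embed` for the Ollama embedding call and `store` for a ChromaDB collection; the retrieve-then-answer shape stays the same.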


🛠️ Setup Steps:
Installation of the necessary packages (LangChain, ChromaDB, etc.)
Detailed walkthrough for setting up your application file.
Splitting data, converting to embeddings, and database storage.
Creating and integrating the user interface with Gradio.
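The "splitting data" step above chunks long documents into overlapping windows before embedding, so each chunk fits the embedding model's context and neighbouring chunks share some text. A minimal sketch of such a splitter, roughly what LangChain's character splitters do (the function name is made up for this example):

```python
def split_text(text, chunk_size=100, overlap=20):
    """Split text into chunks of at most chunk_size characters,
    each overlapping the previous chunk by `overlap` characters."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap  # how far the window advances each time
    chunks = []
    for start in range(0, len(text), step):
        chunks.append(text[start:start + chunk_size])
        if start + chunk_size >= len(text):
            break  # last window already reached the end of the text
    return chunks

doc = "x" * 250
parts = split_text(doc, chunk_size=100, overlap=20)
print([len(p) for p in parts])  # → [100, 100, 90]
```

The overlap is what lets a sentence that straddles a chunk boundary still be retrieved whole from at least one chunk.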


👀 Why Watch:
Learn to create AI applications with enhanced performance.
Understand the benefits of using Nomic Embed Text over other models.
Gain insights into creating efficient user interfaces for your AI apps.

📌 Don't forget to subscribe and hit the bell icon to stay updated with our latest videos on Artificial Intelligence. Like this video to help spread knowledge to more enthusiasts like you!

Timestamps:
0:00 - Introduction to Ollama Embedding
0:39 - Benefits of Nomic Embed Text
1:00 - User Interface Preview
1:04 - Subscription Reminder
1:21 - Setting Up Your Application
3:00 - Understanding the RAG Process
4:01 - Running the Code
5:00 - Adding User Interface with Gradio

#OllamaEmbedding #Local #Nomic #OllamaEmbeddings #OllamaNomic #OllamaNomicEmbedding #NomicEmbedding #NomicEmbeddings #NomicOllama #EmbeddingOllama #Embed #Embedding #LocalRAG #OllamaLocalRAG
#ollama #runollamalocally #howtoinstallollama #ollamaonmacos #installingollama #localllm #mistral7b #installllmlocally #opensource #llms #opensourceLLM #custommodel #localagents #opensourceAI #llmlocal #localAI #llmslocally #opensource #Olama #Mistral #OllamaMistral #Chroma #ChromaDB #LangChain
Comments

This channel is gold. Short, to the point, dev-focused, latest AI... Thank you so much for taking the time to upload almost daily.

santiagolarrain

Thanks so much! I was just stuck in a project and was trying to work my way through RAG, and now I've got the solution!

technobabble

Superb tutorial! Concise, clear and produces results - perfect!

stiofanmacthomais

Fantastic tutorial, mate! Most examples I've seen use the OpenAI embedding models, which unit-normalise the vectors, so when you switch to Ollama embeddings the distance numbers are very large (and you can't use the same similarity metrics).

This workflow has given me a way forward ☺️

tperham
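The normalisation point in the comment above is easy to see with toy numbers: Euclidean (L2) distance grows with vector magnitude, while cosine similarity is scale-invariant, which is why unnormalised embeddings need cosine similarity (or explicit normalisation) rather than L2 thresholds tuned for unit vectors. These are illustrative values, not real embedding outputs:

```python
import math

def l2(a, b):
    # Euclidean distance: sensitive to vector magnitude.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def cosine(a, b):
    # Cosine similarity: depends only on direction, not magnitude.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def normalise(v):
    # Scale a vector to unit length, as OpenAI embeddings already are.
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

a, b = [1.0, 2.0, 3.0], [2.0, 3.0, 4.0]
big_a = [10 * x for x in a]  # same direction, 10x magnitude
big_b = [10 * x for x in b]

print(l2(a, b), l2(big_a, big_b))          # L2 grows 10x with magnitude
print(cosine(a, b), cosine(big_a, big_b))  # cosine is unchanged
print(l2(normalise(big_a), normalise(big_b)))  # normalising restores small L2
```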

Amazing, Mervin, you've got another subscriber here. Nice content ❤

Van-Helssen

What a fantastic video and tutorial. Many, many thanks. I'm currently trying to transition from my job as a software dev into AI/ML work, and this tutorial is very valuable for getting a grip on all the new stuff. I'm more of a trial-and-error type of learner. Thanks a lot.

gnashermedia

Oh buddy, this is a super cool tutorial. As simple as a stick :) Thanks a lot!

mirosawh.

you are full metal no fluff. the best.

kinnaa

Your content is 🔥. I would love to see you combine Ollama utilizing Groq, Ollama embeddings, RAG with document uploads and url, and a Gradio UI.

Builder-pwqn

Thanks! I always thought that the embedding model and the chat model have to be the same, especially that their dimensions must match, but your example shows they can differ and it still works. This is new to me.

jonzh
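The observation in the comment above is right: the embedding model is only used to index and look up chunks, and the chat model only ever sees the retrieved text pasted into its prompt, so the two models' vector dimensions are independent. A schematic sketch (the `build_prompt` helper and its wording are illustrative, not code from the video):

```python
def build_prompt(question, retrieved_chunks):
    # The chat model only ever sees text: the retrieved chunks are pasted
    # into the prompt as-is. The embedding model's vector dimension
    # (e.g. 768 for nomic-embed-text) plays no role at this point.
    context = "\n".join(retrieved_chunks)
    return (
        "Use only the context below to answer.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )

# Pretend these chunks came back from the vector store:
chunks = ["Paris is the capital of France."]
prompt = build_prompt("What is the capital of France?", chunks)
print(prompt)
```

Any chat model that accepts a text prompt can consume this, regardless of which embedding model built the index.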

❤ very very very simple, clear, and beautiful, thanks bro

yongjintang

Really helpful!
Thanks for the video.

amirulbrinto

It is crazy simple and efficient, thx man!

syciciel

Great content! Super useful embedding. Does this mean we need to use the Nomic API from now on to use the embedding?

sam.sleepwell

This is an amazing video. Do you know how to evaluate the RAG results?

hunkims

Good news! Would love to see an easy way to set up conversational memory with Ollama.

lvhklzz

Thank you very much for your efforts. Your videos have been incredibly helpful to me! I have a question: In my experience, RAG's performance in extracting information from tables or images in PDFs is quite poor. Is there any way to improve this?

MikewasG

With regular RAG you can examine the snippets you get to see if they are actually relevant, and then correct if not; that is "corrective RAG". How do you do that when using embeddings? Does it become a black box that is no longer steerable?

MattJonesYT

Great video. Would you use a simpler model, e.g. tinydolphin, for embeddings, given the greater speed? Or would embedding quality suffer too much?

It seems to take a very long time to do embeddings on large files, and they produce very large output files: e.g. a 21 MB SEC EDGAR 10-K took about 90 minutes to index and mushroomed into about 1.1 GB of index files (!).

PoGGiE

Is there any way I can do function calling with a tiny LLM? It's even OK if I need to fine-tune it.

gvnurhx