Ollama 0.1.26 Makes Embedding 100x Better

Embedding has always been part of Ollama, but before 0.1.26, it kinda sucked. Now it’s amazing, and could be the best tool for the job.

Yes, I know I flubbed the line about Bun. It's not an alternative to JS; it's a whole new runtime for JS/TS, which makes TypeScript, already a better JS, even better than it was.
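For anyone who wants to try the embedding support the video covers, here is a minimal sketch (not from the video itself) that calls a local Ollama server's `/api/embeddings` endpoint with the `nomic-embed-text` model a commenter below mentions. The default host/port and the helper names are assumptions:

```python
import json
import urllib.request


def embed(text, model="nomic-embed-text", host="http://localhost:11434"):
    """Request an embedding vector from a locally running Ollama server.

    Assumes Ollama's default port and that the model has already been
    pulled (e.g. `ollama pull nomic-embed-text`).
    """
    payload = json.dumps({"model": model, "prompt": text}).encode()
    req = urllib.request.Request(
        f"{host}/api/embeddings",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["embedding"]


def cosine_similarity(a, b):
    """Compare two embedding vectors; closer to 1.0 means more similar."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(y * y for y in b) ** 0.5
    return dot / (norm_a * norm_b)
```

Typical usage would be embedding a document and a query, then ranking by `cosine_similarity(embed(doc), embed(query))`.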

Comments

I am here for it. Let's goooo! And yes, videos on vector DBs would be amazing.

Slimpickens

You are a great teacher!! I want to see more videos of yours. Thanks for your service🙇

ChetanVashistth

Thanks a lot for all your videos; this one really helped me a lot. I just started with Ollama and local LLMs a week ago and was using llama2 for embeddings, which was painfully slow, and I didn't even know it could be faster until I watched this video yesterday evening. I just changed it to "nomic-embed-text" and I love it :-) Thanks and keep up the good work! I also really like your humor!!

guidoschmutz

Could you share the source code of the examples you use in your videos?

disturb

Your voice is amazing. I could listen to you present on anything man. Amazing video

archamondearchenwold

Hi Matt, thanks for making these videos. They are very informative and helpful.

joan_arc

Really nice video, Matt. We're thinking about doing a similar video testing the top 5-10 vector DBs.

JoshuaMcQueen

I'm loving your videos! I really like that they're to the point. Out of all the YouTubers doing videos in this AI/LLM space, I enjoy yours the most. Keep them coming! Tell your family this is more important! Lol 😮. I'm kidding. 😂

Turbozilla

Thank you, I really appreciate your work and support. Can't wait for the next video.

NLPprompter

Looking forward to when tools to embed documents into models become available, thanks for all you do.

joeburkeson

A video on vector databases would be great. As always, please do not forget to include a brief how-to; those well-thought-out snippets in your videos really do make a difference. Thanks!

martinisj

Great video! Embeddings take Ollama to the next level! And I love that you don't say a word about Gemma ;)

trsd

Hi Matt, love your content. Super stuff, thank you; this is exactly what I was looking for, and you explain it so well. I am working on an open-source RAG search project for a big genomics effort, providing specific information to users of the service (really detailed information about which test to request, etc.), so this video came at just the right time 👍

janduplessis

This is absolutely brilliant. Also, to answer your question about vector databases: I think a useful distinction is whether they support ColBERT-style embeddings, because ColBERT is clearly the way forward when you want high-quality embeddings.

JulianHarris

Thanks for posting these videos mate. I'm finding them so helpful in orienting myself in the world of AI tooling 🎉

sunt

This is such good content. Can you do a full video tutorial on a production case of the best RAG strategy? There are so many out there.

karanv

I personally struggle to understand and use embeddings effectively, so this video is highly appreciated! Please do a deep dive on the differences between vector DB providers. I'll definitely like and share if you do!

riftsassassin

Definitely do the side-by-side for the DB options in the context of Ollama on something like an M2. Our work machines for the public school system are M2s with only 8 GB of RAM, as a reference point. The potential for a local teaching assistant is definitely close.

brandonheaton

A great addition to Ollama. Hopefully, batching will be supported soon. As of now, it is one API call per string, which makes it less suitable for larger datasets.
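As the comment notes, the endpoint takes one prompt per call, so larger datasets have to be embedded in a loop. A hedged sketch of working around the missing batch support by issuing the per-string calls concurrently; the function names are hypothetical and `embed_fn` stands in for any single-string embedding call:

```python
from concurrent.futures import ThreadPoolExecutor


def embed_all(texts, embed_fn, max_workers=4):
    """Embed many strings against a single-prompt embeddings endpoint.

    With no batch endpoint available, each string costs one request;
    running the requests on a small thread pool hides some of the
    per-call latency. pool.map preserves input order, so results[i]
    corresponds to texts[i].
    """
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(embed_fn, texts))
```

Since the work is network-bound, threads are enough here; the right `max_workers` depends on what the local server can handle.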

colliander

Vids keep getting better. And thanks, I had overlooked the embeddings because of Gemma!

nicholasdudfield