Embeddings: What they are and why they matter

Comments

These 38 minutes changed my next 38 years

nursenayeneredit

This is the best fundamental way of describing embeddings.

tutacat

This has been a timely and really useful presentation. Thank you for posting it!

martinjohnpeck

Thank you, Simon Willison. Great information. I especially like how you demonstrated the development of your own tools. Finally, my thought is that your presentation, in an executive-summary format, would educate policy makers in both the enterprise and government sectors who seem to fear AI. For example, my company has an existing early policy that employees are not allowed to use AI or ChatGPT. At the same time, my use case of leveraging RAG to augment our LLM was accepted by our AI Review Committee. My thought is that enterprise companies will be careful and prudent in the rollout of LLMs and AI tools because they will want "security rails" in place. Thank you.

energyexecs

"Vibes-based search", lol. Love the term you invented.

zgintasz

Thanks Simon for sharing your knowledge. This video is so underrated.

thanhquachable

I’m totally new to embeddings and this video inspired me to want to learn even more!

Clammer

Great demo, thanks for sharing!

This is an excellent example of a practical use of embeddings.

_ramen

That was awesome. Thank you for uploading it!

loicleray

A nice and very inspiring presentation! Thank you!

ygiomum

Great talk. Thoroughly enjoyed it. Thanks!

donpark

Impressive fast talking and fast scrolling. A lot of knowledge and experience for sure. I guess I'll have to do some digging if I want to really benefit from this lecture.

curtisblake

Great work! I will definitely follow up on your website. Some of the clustering results are really remarkable. Have you thought about hierarchical clustering on the embeddings to see if a sensible taxonomy emerges?

vmstanford

What does the semantic vectorization of a word look like, in a mathematical sense? Is it like every word has its spatial ID (coordinate) and gets kind of multiplied with a vector array of associative IDs?

miikalewandowski

Can you show us some Imagebind unix-fu?

BuFuO

This is what Microsoft Recall wants to do.
