How to Build an LLM Query Engine in 10 Minutes | LlamaIndex x Ray Crossover

Join Jerry, co-founder and CEO of LlamaIndex, and Amog, a Ray developer at Anyscale, as they discuss key challenges in building and scaling LLM applications.

Then, follow along as Amog takes you through a code tutorial that you can try and read more about in their jointly-written blog!
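The tutorial's basic pipeline (load documents, embed them in parallel, then answer a query by similarity search) can be sketched with just the Python standard library. This is an illustrative toy, not the tutorial's actual code: the character-frequency "embedding" stands in for a real embedding model, and `ThreadPoolExecutor` stands in for Ray, which scales the same parallel-map step across a cluster.

```python
import math
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

def embed(text):
    # Toy "embedding": a normalized character-frequency vector.
    # A real pipeline would call an embedding model here.
    counts = Counter(text.lower())
    norm = math.sqrt(sum(c * c for c in counts.values()))
    return {ch: c / norm for ch, c in counts.items()}

def cosine(a, b):
    # Cosine similarity between two sparse vectors stored as dicts.
    return sum(v * b.get(k, 0.0) for k, v in a.items())

def build_index(docs, workers=4):
    # Embed all documents in parallel -- this is the step Ray
    # distributes across many machines in the blog's version.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        vectors = list(pool.map(embed, docs))
    return list(zip(docs, vectors))

def query(index, question, top_k=1):
    # Rank documents by similarity to the question and return the best.
    qv = embed(question)
    ranked = sorted(index, key=lambda dv: cosine(qv, dv[1]), reverse=True)
    return [doc for doc, _ in ranked[:top_k]]
```

In the real tutorial, LlamaIndex handles the loading, chunking, and querying, and Ray parallelizes the embedding and indexing stages; the structure above is only the shape of that flow.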


#llm #machinelearning #ray #deeplearning #distributedsystems #python
Comments

Thank you, Jerry and Amog! Nice demo! Keep up the good work!

Andromeda_

Always great videos! I love this topic, but without OpenAI: I would love to see something built using only open-source technology.

fabsync

What advantages does this have over taking all your data sources, creating embeddings for all of them, and storing them in a vector store? Then a single query would return the top_k responses, and summarizing those would give an answer. I can see that the question explicitly defines the data sources, but I struggle to see the utility in this.

davidwynter

Is there a place where a beginner can learn about Ray from scratch?

zahabkhan

Can subsequent SFT and RLHF with different, additional, or reduced content change the character of a GPT model, improve it, or degrade it? Can you modify a GPT model? How?

amparoconsuelo

What is the difference between Ray and Embedchain?

rayhanpatel