LlamaIndex overview & use cases | LangChain integration


00:00 Intro
01:04 LlamaHub
02:32 Basic query functionalities
03:31 Document management
04:38 Query multiple documents
06:26 Router query engine
08:37 Hypothetical document embeddings
10:12 Use LlamaIndex with LangChain
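The basic query flow covered at 02:32 comes down to three steps: load documents, build an index, query it. A minimal sketch, assuming a recent llama-index (0.10+) install with OPENAI_API_KEY set and a local ./data folder of files; older releases import the same names from llama_index instead of llama_index.core:

```python
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

# Load every file in ./data into Document objects
documents = SimpleDirectoryReader("./data").load_data()

# Build an in-memory vector index (chunks are embedded with the default embedding model)
index = VectorStoreIndex.from_documents(documents)

# Ask a question; the engine retrieves relevant chunks and sends them to the LLM
query_engine = index.as_query_engine()
response = query_engine.query("What is this document about?")
print(response)
```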

⭐ Stay in touch ⭐
💚 #AI
Comments

What a delightful combination of knowledge, objectivity and calm energy. Thank you!

ronaldokun

LlamaIndex for long-term memory and LangChain for chaining and agents.

This all happened within a year or so. What a time to be alive!

matten_zero

Thank you for the fantastic overview of LlamaIndex functionalities 😊

SimonMariusGalyan

Love your videos. Always in my queue! Thanks for sharing!

kevon

I like how you added additional info in the yellow text overlays. That helps with retaining info. Great job 👏

matten_zero

Really useful stuff! Thanks for making this video.

jerryoverton

Too bad that I can't like your videos twice! Thank you, very informative videos. 🙏

eck

Another informative video; I didn't know about this.

talhayousuf

Great tutorial, thank you very much!
My question is: how can we persist the indices and load them from a storage directory in the "query multiple documents" example? I could not find any example in the llama-index documentation about persisting several indices (from several documents) and loading them individually to replicate the SubQuestionQueryEngine example.

juanjesusizquierdo
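One way to approach the persistence question above (a sketch, not something shown in the video): give each document's index its own persist_dir, reload each one with load_index_from_storage, and rebuild the SubQuestionQueryEngine from the reloaded indices. Assumes llama-index 0.10+; the file paths, document names, and tool descriptions are placeholders:

```python
from llama_index.core import (
    SimpleDirectoryReader,
    StorageContext,
    VectorStoreIndex,
    load_index_from_storage,
)
from llama_index.core.query_engine import SubQuestionQueryEngine
from llama_index.core.tools import QueryEngineTool, ToolMetadata

doc_names = ["report_2022", "report_2023"]  # placeholder document names

# 1) Build and persist one index per document, each in its own directory
for name in doc_names:
    docs = SimpleDirectoryReader(input_files=[f"./docs/{name}.pdf"]).load_data()
    index = VectorStoreIndex.from_documents(docs)
    index.storage_context.persist(persist_dir=f"./storage/{name}")

# 2) Later: reload each index individually and rebuild the sub-question engine
tools = []
for name in doc_names:
    storage = StorageContext.from_defaults(persist_dir=f"./storage/{name}")
    index = load_index_from_storage(storage)
    tools.append(
        QueryEngineTool(
            query_engine=index.as_query_engine(),
            metadata=ToolMetadata(name=name, description=f"Questions about {name}"),
        )
    )

sub_question_engine = SubQuestionQueryEngine.from_defaults(query_engine_tools=tools)
print(sub_question_engine.query("Compare the two reports."))
```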

Great that you mention those tools are not mutually exclusive. For simpler use cases, though, one of them will probably be enough.

wiktorwysocki

Great content, thanks! Please give us more on combining LlamaIndex with LangChain. Thanks in advance!

ahmedmusawir
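On combining the two, one common pattern (a hedged sketch, not necessarily the setup from the video) is to wrap a LlamaIndex query engine as a LangChain tool so an agent can decide when to call it. Assumes llama-index 0.10+, the langchain-openai package, and a LangChain version that still ships initialize_agent; the folder, tool name, and model are placeholders:

```python
from langchain.agents import AgentType, Tool, initialize_agent
from langchain_openai import ChatOpenAI
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

# Build a LlamaIndex query engine over a folder of local notes
index = VectorStoreIndex.from_documents(SimpleDirectoryReader("./notes").load_data())
query_engine = index.as_query_engine()

# Expose it as a LangChain tool so the agent can decide when to consult the notes
notes_tool = Tool(
    name="notes_search",
    func=lambda q: str(query_engine.query(q)),
    description="Answers questions about the contents of the local notes.",
)

agent = initialize_agent(
    tools=[notes_tool],
    llm=ChatOpenAI(model="gpt-4o-mini", temperature=0),  # placeholder model
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
)
print(agent.run("Summarize what the notes say about onboarding."))
```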

Please create a breakdown video about the different kinds of models: chat models, instruction models, etc.

eck

Hey Sophia, when I try to query across multiple documents I get this error: "InvalidRequestError: Must provide an 'engine' or 'deployment_id' parameter to create a " Do you know why this happens?

TroxerLiveCops
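That InvalidRequestError usually means the OpenAI client is pointed at Azure, where every call also needs the Azure deployment name. A hedged sketch of one way to configure this in current llama-index (0.10+, with the llama-index-llms-azure-openai and llama-index-embeddings-azure-openai packages); the endpoint, deployment names, key, and API version are placeholders to replace with your own:

```python
from llama_index.core import Settings
from llama_index.embeddings.azure_openai import AzureOpenAIEmbedding
from llama_index.llms.azure_openai import AzureOpenAI

# Point both the LLM and the embedding model at your Azure deployments
Settings.llm = AzureOpenAI(
    model="gpt-35-turbo",
    deployment_name="my-gpt35-deployment",                    # placeholder
    azure_endpoint="https://my-resource.openai.azure.com/",   # placeholder
    api_key="...",
    api_version="2024-02-01",
)
Settings.embed_model = AzureOpenAIEmbedding(
    model="text-embedding-ada-002",
    deployment_name="my-embedding-deployment",                # placeholder
    azure_endpoint="https://my-resource.openai.azure.com/",   # placeholder
    api_key="...",
    api_version="2024-02-01",
)
```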

Hi Sophia, my team and I found this video very helpful during Berkeley's AI/LLM hackathon last week :D
I'm a rising junior studying CS, and most of my experience thus far has been all sorts of programming NOT related to AI/ML/DS. However, I'm now really interested in experimenting with building meaningful tools/apps on top of the ML layer through frameworks such as LlamaIndex, Pinecone, etc. I just wanted to ask if you think it's necessary to have a strong understanding of AI/ML fundamentals before moving forward with these tools.
Thanks!

jamesxiao

Hi, how can I do in-context learning from a txt file with LangChain, but using Falcon-7B instead of OpenAI? I'm trying to query a txt file with LangChain, not with OpenAI as the model, but with Falcon-7B.

tantomanontroppo
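A rough sketch of one way to query a txt file with LangChain and Falcon-7B instead of OpenAI (not covered in the video): serve the model through a local transformers pipeline and use open-source embeddings so nothing touches the OpenAI API. The file name, model ids, and parameters are illustrative, and import paths may differ across LangChain versions:

```python
from langchain.chains import RetrievalQA
from langchain_community.document_loaders import TextLoader
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.llms import HuggingFacePipeline
from langchain_community.vectorstores import FAISS
from langchain_text_splitters import RecursiveCharacterTextSplitter

# Load and chunk the local text file
docs = TextLoader("my_notes.txt").load()
chunks = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50).split_documents(docs)

# Open-source embeddings + FAISS, so no OpenAI key is needed
vectorstore = FAISS.from_documents(
    chunks,
    HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2"),
)

# Falcon-7B-Instruct served by a local transformers pipeline (needs a GPU with enough memory)
llm = HuggingFacePipeline.from_model_id(
    model_id="tiiuae/falcon-7b-instruct",
    task="text-generation",
    pipeline_kwargs={"max_new_tokens": 256},
)

qa = RetrievalQA.from_chain_type(llm=llm, retriever=vectorstore.as_retriever())
print(qa.run("What does the file say about pricing?"))
```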

Thanks for explaining LlamaIndex. 👍 Is it also a vector store, like ChromaDB?

henkhbit
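Not exactly: LlamaIndex is the indexing and retrieval layer that sits on top of a vector store, and ChromaDB can be plugged in as that store. A minimal sketch, assuming llama-index 0.10+ with the llama-index-vector-stores-chroma package and chromadb installed; the paths and collection name are placeholders:

```python
import chromadb
from llama_index.core import SimpleDirectoryReader, StorageContext, VectorStoreIndex
from llama_index.vector_stores.chroma import ChromaVectorStore

# Chroma holds the embeddings; LlamaIndex handles chunking, retrieval, and LLM synthesis
client = chromadb.PersistentClient(path="./chroma_db")
collection = client.get_or_create_collection("my_docs")
vector_store = ChromaVectorStore(chroma_collection=collection)

storage_context = StorageContext.from_defaults(vector_store=vector_store)
index = VectorStoreIndex.from_documents(
    SimpleDirectoryReader("./data").load_data(),
    storage_context=storage_context,
)
print(index.as_query_engine().query("What is covered in these documents?"))
```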

Hey, is there any way to query a MongoDB collection? Something similar to LangChain's SQL chains?

shrey
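For MongoDB, llama-index ships a simple reader that pulls documents out of a collection so they can be indexed and queried; it is a document loader rather than a text-to-query chain like LangChain's SQL chains. A hedged sketch, assuming the llama-index-readers-mongodb package and that SimpleMongoReader still takes a URI plus db/collection/field names (worth checking against the current API reference); the connection string and names are placeholders:

```python
from llama_index.core import VectorStoreIndex
from llama_index.readers.mongodb import SimpleMongoReader

# Pull the chosen text fields from a collection into Document objects
reader = SimpleMongoReader(uri="mongodb://localhost:27017")
documents = reader.load_data(
    db_name="mydb",                  # placeholder database
    collection_name="articles",      # placeholder collection
    field_names=["title", "body"],   # placeholder text fields
)

# Index the documents and ask natural-language questions over them
index = VectorStoreIndex.from_documents(documents)
print(index.as_query_engine().query("What topics do the articles cover?"))
```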

I tried to use LlamaIndex with a Hugging Face model, but ran into all kinds of issues. It seems model kwargs cannot be passed to the HF transformer model?

mzty
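On passing kwargs to a Hugging Face model: llama-index's HuggingFaceLLM wrapper exposes model_kwargs for loading and generate_kwargs for generation, which is usually where those settings have to go. A sketch assuming llama-index 0.10+ with the llama-index-llms-huggingface package and a GPU; the model name and values are illustrative:

```python
import torch
from llama_index.core import Settings
from llama_index.llms.huggingface import HuggingFaceLLM

Settings.llm = HuggingFaceLLM(
    model_name="HuggingFaceH4/zephyr-7b-beta",       # illustrative model
    tokenizer_name="HuggingFaceH4/zephyr-7b-beta",
    context_window=2048,
    max_new_tokens=256,
    device_map="auto",
    model_kwargs={"torch_dtype": torch.float16},              # passed to from_pretrained(...)
    generate_kwargs={"temperature": 0.7, "do_sample": True},  # passed to model.generate(...)
)
```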

I used LangChain's data loader and then converted the result into a LlamaIndex index. I had to do this because the LlamaIndex BeautifulSoup scraper would return 403s, since it wouldn't let me set the user-agent header. The challenge I'm having with both SDKs is that they are putting out so much good stuff, but unfortunately the testing seems lacking. YIPPEE!!

My question: what if I want to make a local podcast about things happening in my town that get buried? Some things might come from a Wikipedia entry, some from our town website buried in council minutes, some from police reports, more investigative. For quality results, should everything be indexed? Or when does one just rely on a web search? Also, what are good methods for fact-checking? Funny, when I asked why it came up with an answer, ChatGPT-4 told me "Because it makes a good story." That was interesting and made sense that it would do that, but I'd like to have a better handle on fact and fiction. Thank you.

happyday.mjohnson
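On whether everything should be indexed: one pattern, close in spirit to the router query engine shown at 06:26, is to keep one index per source and let a router decide where each question goes; a live web-search tool could sit in the same list. A sketch assuming llama-index 0.10+; the folder names and descriptions are placeholders:

```python
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex
from llama_index.core.query_engine import RouterQueryEngine
from llama_index.core.selectors import LLMSingleSelector
from llama_index.core.tools import QueryEngineTool

# One index per source, each described so the router can choose between them
sources = {
    "wikipedia": "Background articles about the town",       # placeholder folders
    "council_minutes": "Town council minutes and agendas",
    "police_reports": "Published police blotter and reports",
}

tools = []
for folder, description in sources.items():
    docs = SimpleDirectoryReader(f"./sources/{folder}").load_data()
    index = VectorStoreIndex.from_documents(docs)
    tools.append(
        QueryEngineTool.from_defaults(
            query_engine=index.as_query_engine(),
            name=folder,
            description=description,
        )
    )

# The selector asks the LLM which source a question belongs to
router = RouterQueryEngine(
    selector=LLMSingleSelector.from_defaults(),
    query_engine_tools=tools,
)
print(router.query("What did the council decide about the park renovation?"))
```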

Very nice!! Thank you, but can you make more videos about how to structure the data in the documents, and how we can tell LlamaIndex about the data structure in the documents? For example, to tell it that the document has a Q&A template structure separated by (----).

And if you do a video about how to make the results better, that would be awesome.

Thank you and good

ProfessorMuwaffaq
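On the Q&A template question above: one approach (a sketch, not from the video) is to split the file on the ---- separator yourself and hand each Q&A pair to the index as its own Document, so no chunk ever cuts across two answers. Assumes llama-index 0.10+; the file name and metadata are illustrative:

```python
from pathlib import Path

from llama_index.core import Document, VectorStoreIndex

# Split the file on the '----' separator so each Q&A pair becomes one Document
raw = Path("faq.txt").read_text(encoding="utf-8")
qa_blocks = [block.strip() for block in raw.split("----") if block.strip()]

documents = [
    Document(text=block, metadata={"source": "faq.txt", "qa_pair": i})
    for i, block in enumerate(qa_blocks)
]

# Each retrieved chunk is now a complete question+answer, which tends to improve results
index = VectorStoreIndex.from_documents(documents)
print(index.as_query_engine().query("How do I reset my password?"))
```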