RAG with LlamaParse, Qdrant and Groq | Step By Step

In this video, I will show you how to create an effective RAG pipeline with LlamaParse, Qdrant, and Groq. I will explain what LlamaParse is and briefly walk you through their recent blog post. LlamaParse is a state-of-the-art parser designed specifically to unlock RAG over complex documents. This video covers:
- How to parse a document using LlamaParse
- How to create embeddings from the parsed document and store them in Qdrant Cloud
- How to use models from Groq for very fast inference
- How to use OllamaEmbeddings and models from Ollama
- Some extra tips to make RAG better

👉🏼 Links:

NOTE: The README file on GitHub has additional links.

------------------------------------------------------------------------------------------

⏰ Timestamps
00:00 Introduction
01:15 LlamaParse blogpost walkthrough
04:16 Getting Started (GitHub code)
06:13 LlamaParse
10:00 Qdrant Cloud
11:28 FastEmbedEmbeddings
12:44 Models from Groq
17:16 Asking Questions
24:57 Make RAG better (tips)
26:23 Conclusion
------------------------------------------------------------------------------------------

------------------------------------------------------------------------------------------
🔗 🎥 Other videos you might find helpful:

------------------------------------------------------------------------------------------
🤝 Connect with me:

#llamaindex #llamaparser #llamacloud #qdrant #groq #datasciencebasics
Comments

Hey! FastEmbed creator here, thanks for the shout out!

NirantK

How can we show relevant images along with the text response in the output? Please provide some information or an implementation approach.

AdarshMamidpelliwar

Great video! Can you please build a UI around this implementation, using Chainlit or something similar?

THE-AI_INSIDER

Around the 16-minute mark, you suggest commenting out vector_store, storage_context, and index, and uncommenting the

# storage_context =
# index =

lines once the storage has been created.

I observed that we then get the error ImportError: cannot import name 'load_index_from_storage' from 'llama_index' (unknown location) with the latest version of llama-index.

So this is problematic. I found that llama-index versions roughly 0.6 through 0.8 recognize load_index_from_storage, but the latest versions don't; you might want to look into this.

If you find a solution and the correct version, please let me know.

THE-AI_INSIDER

How can I add LlamaParse to Open WebUI, so that I can parse documents and then feed them to Ollama models?

AmanKumar-uzri

Does LlamaParse support image retrieval as well? E.g., I need to store all the image embeddings and retrieve images based on user questions.

Jeganbaskaran

It's a fantastic video on LlamaParse. Is it possible to extract tables?

venkatkumar