L-8 Build a Q&A App with RAG using Gemini Pro and LangChain

Welcome to our step-by-step tutorial on building a Q&A app using Retrieval-Augmented Generation (RAG) with Gemini Pro and LangChain!

In this video, we will guide you through the process of creating a powerful and intelligent Q&A application from scratch. With the combination of Gemini Pro's cutting-edge AI capabilities and LangChain's versatile framework, you'll learn how to integrate advanced retrieval techniques to enhance your chatbot's performance.

What you'll learn:
The basics of Retrieval-Augmented Generation (RAG)
Setting up the Gemini Pro environment
Integrating LangChain with your application
Building and deploying a Q&A app
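
The retrieve-then-generate flow covered in the video can be sketched without any dependencies. A real app would send the augmented prompt to Gemini Pro through LangChain (API key required); in this illustrative sketch, simple keyword overlap stands in for embedding similarity, and the documents are made-up examples.

```python
# Minimal sketch of the RAG flow: chunk documents, retrieve the most
# relevant chunks for a question, and build an augmented prompt.
# A real app would pass this prompt to Gemini Pro via LangChain;
# keyword overlap stands in for embedding similarity here.
import re

def chunk(text: str, size: int = 200) -> list[str]:
    """Split text into fixed-size character chunks."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def words(s: str) -> set[str]:
    return set(re.findall(r"[a-z0-9]+", s.lower()))

def retrieve(question: str, chunks: list[str], k: int = 1) -> list[str]:
    """Return the k chunks sharing the most words with the question."""
    return sorted(chunks, key=lambda c: len(words(question) & words(c)),
                  reverse=True)[:k]

def build_prompt(question: str, context: list[str]) -> str:
    """Augment the question with retrieved context before calling the LLM."""
    return ("Answer using only the context below.\n"
            "Context:\n" + "\n".join(context) +
            f"\nQuestion: {question}")

docs = ["Gemini Pro is a multimodal large language model from Google.",
        "LangChain is a framework for composing LLM applications.",
        "RAG retrieves relevant passages and feeds them to the model."]
chunks = [c for d in docs for c in chunk(d)]
prompt = build_prompt("What is LangChain?",
                      retrieve("What is LangChain?", chunks))
print(prompt)
```

In the full pipeline, an embedding model and a vector store replace the keyword scorer, but the shape of the flow is the same: split, retrieve, augment, generate.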

Don't forget to like, comment, and subscribe for more tutorials and updates!

Comments

You have explained all the concepts in a very simple and easy way.

Random-qwvl

Best tutor for AI and ML, thanks a lot ma'am.

itsmeuttu

Wow, amazing, thanks ma'am,
from Pakistan

Umairkhan-jp

Looking forward to hearing a seminar about LoRA-Pro from you.

howGnt

Super awesome video, Asrohi. Can you make a RAG app to chat with multiple websites, please?

AkulSamartha

Please explain fine-tuning a Hugging Face model on custom data, especially text-to-image generation.

NehaKothari-izhy

Thank you for this amazing series of videos. I have a question: we are using Chroma DB for saving the embeddings, so how can we view these embeddings in Chroma DB? Also, we have not used any Chroma DB connection link.

noorahmadharal
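
On the question above: Chroma can run embedded and persist to a local directory, which is why no connection link appears in the video (this is an assumption about the setup shown). The exact Chroma calls vary by version, so the sketch below mimics the idea with a plain JSON file: vectors are written to disk on every add and can be read back by reopening the store.

```python
# Sketch of how an embedded vector store persists and exposes its
# embeddings. Chroma works similarly (it writes to a local directory,
# so no connection URL is needed); this JSON-file mimic is only for
# illustration, with a hand-made 3-dimensional embedding.
import json
import os
import tempfile

class TinyStore:
    def __init__(self, path: str):
        self.path = path
        self.data: dict = {}
        if os.path.exists(path):
            with open(path) as f:
                self.data = json.load(f)   # reload what was persisted

    def add(self, doc_id: str, embedding: list[float], text: str) -> None:
        self.data[doc_id] = {"embedding": embedding, "document": text}
        with open(self.path, "w") as f:
            json.dump(self.data, f)        # persisted on every write

    def get(self, doc_id: str) -> dict:
        return self.data[doc_id]           # inspect stored vectors

path = os.path.join(tempfile.mkdtemp(), "store.json")
store = TinyStore(path)
store.add("d1", [0.1, 0.2, 0.3], "RAG basics")

reopened = TinyStore(path)                 # like starting a new client
print(reopened.get("d1")["embedding"])     # -> [0.1, 0.2, 0.3]
```

Opening the persistence directory with a fresh client and reading a record back is the usual way to confirm what was actually stored.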

How do we interact with multiple PDFs, and how much data can the LLM handle on the free tier?

sanjaybhan

A full course video about the Claude 3.5 Sonnet AI model and API fine-tuning, please.

hendoitechnologies

I have followed your video, but the chatbot is still giving answers outside the provided context, even after using your system prompt and making adjustments. For example, if I say "I'm sad, write a joke for me," it still writes a joke. This is the issue I'm encountering. Could you please provide a solution?

Umairkhan-jp
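
One common mitigation for the issue above (a sketch, not the video's exact method): before calling the model, check whether the retrieved context actually covers the question's keywords, and refuse when it does not. The stop-word list and threshold below are illustrative assumptions.

```python
# Sketch of a pre-answer guardrail: refuse when the user's request
# shares no keywords with the retrieved context, so off-topic asks
# like "write me a joke" are rejected before the LLM is called.
import re

STOP = {"what", "is", "in", "a", "the", "for", "me", "i", "m", "of", "to"}

def keywords(s: str) -> set[str]:
    """Lowercase word set, minus common stop words."""
    return set(re.findall(r"[a-z0-9]+", s.lower())) - STOP

def guarded_answer(question: str, context: str,
                   threshold: float = 0.3) -> str:
    """Answer only when enough question keywords appear in the context."""
    q = keywords(question)
    coverage = len(q & keywords(context)) / max(len(q), 1)
    if coverage < threshold:
        return "I can only answer questions about the provided documents."
    return f"[call the LLM with the context and question: {question!r}]"

ctx = "YOLOv9 introduces programmable gradient information and GELAN."
print(guarded_answer("What is new in YOLOv9?", ctx))       # answered
print(guarded_answer("I'm sad, write a joke for me", ctx)) # refused
```

System prompts alone are advisory; a hard check outside the model is more reliable, at the cost of occasionally refusing valid paraphrased questions.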

Hello madam, this is Omkar. I'm very glad to see your video about the RAG model. However, I realised that it is not running 100% locally; we need a Google API token key. I tried the same with OpenAI, and after a few requests and some token processing it asks for a billing method or credit card details to continue.

Can we have a model where we deploy the RAG pipeline from scratch, 100% locally? We could fetch an LLM from Hugging Face, download it, and store it on our local drive, then create the vector data ourselves (e.g. as a PyTorch tensor) embedding all the text tokens. That would be much more beneficial for me; for business purposes it is much better to run it locally with a discrete GPU.

Can you please help guide me in building a RAG model from scratch using an LLM from Hugging Face? It can be any LLM of my choice. I'll be hopeful to see that tutorial and develop myself. Thank you so much for your content; it is very informative, and you teach like a teacher in a classroom. Thank you so much again ❤❤

omkarsatapathy
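
A fully local pipeline as requested above is feasible: download a model from Hugging Face (e.g. via `transformers`), embed text locally (e.g. via `sentence-transformers`), and search with cosine similarity; which libraries to use is an assumption, but the core similarity search needs nothing beyond the standard library, as this sketch shows with hand-made vectors.

```python
# Core of a fully local retriever: cosine similarity over stored
# vectors. In a real local pipeline the vectors would come from an
# embedding model downloaded from Hugging Face; these hand-made
# 3-dimensional vectors are placeholders for illustration.
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors of equal length."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def nearest(query_vec: list[float], index: dict) -> str:
    """Return the document id whose vector is most similar to the query."""
    return max(index, key=lambda doc_id: cosine(query_vec, index[doc_id]))

index = {
    "about_rag":  [0.9, 0.1, 0.0],
    "about_yolo": [0.1, 0.8, 0.2],
}
print(nearest([0.85, 0.2, 0.0], index))   # closest to "about_rag"
```

Nothing here calls a remote API, so with a locally downloaded embedding model and LLM the whole pipeline can run offline on a discrete GPU.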

How do we create the same on a CSV dataframe?

sunnycloud
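
On the CSV question above: the usual approach (an assumption about the stack, not shown in the video) is to turn each row into its own text "document" and then retrieve over those, which is what LangChain's CSV loader does. The sketch below shows the same idea with only the stdlib `csv` module and made-up data.

```python
# Sketch: turn each CSV row into a retrievable text "document",
# one document per row, with "column: value" lines. The names and
# rows below are hypothetical sample data.
import csv
import io

csv_text = """name,role,city
Asha,engineer,Pune
Ravi,designer,Delhi
"""

def rows_to_documents(text: str) -> list[str]:
    """Serialize each CSV row as 'column: value' lines."""
    reader = csv.DictReader(io.StringIO(text))
    return ["\n".join(f"{k}: {v}" for k, v in row.items())
            for row in reader]

docs = rows_to_documents(csv_text)
# Simple keyword retrieval over the row-documents:
hit = next(d for d in docs if "designer" in d)
print(hit.splitlines()[0])   # -> "name: Ravi"
```

Once rows are plain-text documents, the rest of the RAG pipeline (embedding, vector store, retrieval) is unchanged from the PDF case.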

I get this error when I run the last cell of the basic RAG:


AttributeError                            Traceback (most recent call last)
Cell In[13], line 1
----> 1 response = rag_chain.invoke({"input": "what is new in YOLOv9?"})
      2 print(response["answer"])

AttributeError: 'int' object has no attribute 'name'

FahadRamzan-ricr

Ma'am, can you please complete the generative AI playlist?

clarkpaul