Build RAG application with Gemini using Langchain | How to use Gemini with Langchain | Karndeep Singh

Video explains the usage of Gemini Pro with Langchain for text generation, multimodal generation (using text and image together), and building a RAG application to extract answers from a PDF using Gemini Pro.
Chat with your documents using Gemini Pro and Langchain.
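The RAG flow the video describes (chunk the PDF text, embed the chunks, retrieve the best match, prompt the model with it) can be sketched end to end. This is a toy stand-in, not the video's actual code: bag-of-words vectors replace Gemini's embedding model, and the final prompt is only built, not actually sent to Gemini Pro, so no API key is needed.

```python
# Toy sketch of the RAG flow: chunk the source text, embed the chunks,
# retrieve the best match for a question, and build the prompt that would
# be sent to Gemini Pro. Bag-of-words vectors stand in for real embeddings.
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    # Stand-in embedding: lower-cased word counts instead of a dense vector.
    return Counter(re.findall(r"\w+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question: str, chunks: list[str], k: int = 1) -> list[str]:
    # Rank chunks by similarity to the question; keep the top k.
    q = embed(question)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

chunks = [
    "Gemini Pro is accessed with a Google AI Studio API key.",
    "Chroma stores the chunk embeddings in a local vector database.",
    "A text splitter cuts the PDF text into overlapping chunks.",
]
question = "Which vector database stores the embeddings?"
context = retrieve(question, chunks)[0]
# In the real pipeline this prompt goes to the Gemini Pro chat model.
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
print(prompt)
```

In the real pipeline, `embed` would be a Gemini embedding model, the chunk list would come from a PDF loader plus text splitter, and the prompt would be passed to the Gemini Pro chat model.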

Connect with me on:

Creative Commons CC BY-SA 3.0

#geminiai #langchain #google #llms #embedding
Comments

Watched dozens of videos on RAG. One of the best tutorials. Thanks.

meetarpitjain

Can I train Gemini on custom data and export the new model into an online chatbot app?

_mohamedesmat

Hello Sir, I want to make a social media caption generator web app using the Gemini API, but for some inputs it gives answers in a random language. I'm not sure why that is happening, because I have specified that I want the output in English.

thefurreverfriends

Hi, if I pass 10 PDFs, can I get the name of the PDF the answer is being retrieved from via the source documents?

PratheekBabu
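On the source-attribution question above: in LangChain this is typically done through document metadata — PDF loaders attach a `source` field to each chunk, and `RetrievalQA` can hand back the matched documents when `return_source_documents=True` is set. A stand-in sketch with plain dicts instead of LangChain `Document` objects:

```python
# Each chunk carries metadata naming the PDF it came from, mirroring the
# metadata a LangChain PDF loader attaches to its Document objects. After
# retrieval, that metadata tells you which of the PDFs the answer used.
chunks = [
    {"text": "Revenue grew 12% in Q3.", "metadata": {"source": "report_q3.pdf"}},
    {"text": "The warranty lasts two years.", "metadata": {"source": "manual.pdf"}},
]

def retrieve(question: str, chunks: list[dict]) -> dict:
    # Stand-in retriever: pick the chunk sharing the most words with the question.
    q = set(question.lower().split())
    return max(chunks, key=lambda c: len(q & set(c["text"].lower().split())))

hit = retrieve("how long is the warranty", chunks)
print(hit["metadata"]["source"])  # the PDF the matched chunk came from
```

With the real chain, the equivalent lookup is reading `doc.metadata["source"]` on each entry of the returned source documents.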

How can I get an answer related to just the previously asked question, to maintain context with RAG?
Submitting the previous question-answer pair would consume more tokens and might differ in context..


Thanks❤

SagarAhirrao
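On keeping context without resending everything: one common pattern is a rolling window — keep only the most recent turns that fit a token budget, drop (or summarize) the rest, and prepend the survivors to the next RAG prompt. A minimal sketch, using word counts as a crude stand-in for tokens:

```python
# Rolling chat memory: keep only as many past (question, answer) turns as
# fit a rough token budget, dropping the oldest first. The surviving turns
# are prepended to the next RAG prompt so follow-ups stay in context.
def trim_history(history: list[tuple[str, str]], budget: int = 50) -> list[tuple[str, str]]:
    kept, used = [], 0
    for q, a in reversed(history):              # walk newest turns first
        cost = len(q.split()) + len(a.split())  # crude stand-in for token count
        if used + cost > budget:
            break
        kept.append((q, a))
        used += cost
    return list(reversed(kept))                 # restore chronological order

history = [
    ("What is RAG?", "Retrieval-augmented generation: retrieve chunks, then answer."),
    ("Which vector store was used?", "Chroma, storing Gemini embeddings locally."),
    ("Is it free?", "Gemini Pro had a free tier via Google AI Studio."),
]
recent = trim_history(history, budget=20)
print(recent)
```

LangChain ships memory classes that implement variants of this idea (windowed and summarizing conversation memory), so in practice you would plug one of those into the chain rather than hand-rolling it.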

Hi. Nice explanation. You used RetrievalQA. My question is: what is the use of load_qa_chain?

venkateshpolisetty

Hi, instead of PDFs, can we embed a bunch of images and retrieve them based on the prompt's similarity with the images? If the prompt says "red saree", the result should be the images of red sarees from the vector DB. If it's possible, any recommended embedding models for that?

bharanidharansundar

In the way you implemented it, is the model capable of knowing what was previously asked? Or does it only retrieve documents, but not the content of previous interactions?

Alessandro-undr

Hi Karn, this is a great video. I had a question: suppose I have some customer conversation data from the chat application on our website, and I want a question-answering system where I can ask questions about the data, like "What are the top concerns customers come to chat about?" or "How can we improve the customer experience?". Do you suggest going with the RAG approach, or is there a better way? The reason I'm asking is that in this case the data is not going to be structured the way it would be in a PDF document. Looking forward to your reply.

AJITHKODAKATERIPUDHIYAVEETIL

Hello, can we use a conversational QA chain (ConversationalRetrievalChain) instead of RetrievalQA?

memesthatifoundonreddit

Will the model always access Gemini's API to generate answers, or can it answer FAQs from the knowledge base rather than going to the API every time?

ananyaredhu

Suppose I only have tabular data in a PDF file. Will the same code be able to generate answers?

ishavmahajan

Is the Gemini Pro model you used here free, or chargeable like GPT-4?

shaktidharreddy

What about using Pinecone as the vector DB? I tried to switch from ChromaDB to Pinecone, but it doesn't work.

markjoshua

Nice. What is the purpose of RAG? Why do we need to use it? Can you please explain?

karthikb.s.k.

I used the exact same code as yours, but the model is not generating answers. Sometimes it generates answers, but most of the time it shows "I don't know the answer". (I used the same PDF as yours.)

ananyaredhu