End To End Document Q&A RAG App With Gemma And Groq API

Gemma is a family of lightweight, open models built from the research and technology that Google used to create the Gemini models. In this video we will create an end-to-end Document Q&A RAG app with Google Gemma and the Groq API.
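
For reference, here is a minimal sketch of the kind of pipeline the video builds, assuming LangChain with PyPDFDirectoryLoader, FAISS, GoogleGenerativeAIEmbeddings, and ChatGroq; the model names, directory path, and prompt wording are illustrative placeholders, not necessarily the exact code from the video.

```python
# Minimal sketch of the Document Q&A RAG pipeline (assumed LangChain APIs;
# model names, paths, and the prompt are placeholders, not the video's exact code).
import os

from langchain_community.document_loaders import PyPDFDirectoryLoader
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain_community.vectorstores import FAISS
from langchain_google_genai import GoogleGenerativeAIEmbeddings
from langchain_groq import ChatGroq
from langchain_core.prompts import ChatPromptTemplate
from langchain.chains.combine_documents import create_stuff_documents_chain
from langchain.chains import create_retrieval_chain

# 1. Load the PDFs and split them into chunks.
docs = PyPDFDirectoryLoader("./pdfs").load()
chunks = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200).split_documents(docs)

# 2. Embed the chunks and build a FAISS index.
embeddings = GoogleGenerativeAIEmbeddings(model="models/embedding-001")  # needs GOOGLE_API_KEY
vectorstore = FAISS.from_documents(chunks, embeddings)
retriever = vectorstore.as_retriever()

# 3. Answer questions with Gemma served through the Groq API.
llm = ChatGroq(model_name="gemma-7b-it", groq_api_key=os.environ["GROQ_API_KEY"])
prompt = ChatPromptTemplate.from_template(
    "Answer the question based only on the provided context.\n"
    "<context>\n{context}\n</context>\n"
    "Question: {input}"
)
document_chain = create_stuff_documents_chain(llm, prompt)
retrieval_chain = create_retrieval_chain(retriever, document_chain)

print(retrieval_chain.invoke({"input": "What is this document about?"})["answer"])
```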
----------------------------------------------------------------------------------------------------------
Support me by joining the membership so that I can upload more videos of this kind.
-----------------------------------------------------------------------------------

►Data Science Projects:

►Learn In One Tutorials

End To End RAG LLM APP Using LlamaIndex And OpenAI- Indexing And Querying Multiple Pdf's

►Learn In a Week Playlist

---------------------------------------------------------------------------------------------------
My Recording Gear
Comments

Thanks for the informative video. And again, reiterating my request: please upload videos on LLM evaluation. It's badly needed.

ssen

Thank you sir!! In the end we are doing a similarity search, in which the LLM first takes the context of the input and gives some more results related to that input context.

achrajpachauri
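
To make that retrieval step concrete, here is a small hedged sketch reusing the `vectorstore` from the pipeline sketch above: the similarity search runs against the vector store's embeddings, and only the retrieved chunks are then handed to the LLM as context.

```python
# Sketch of the retrieval step: the similarity search happens in the vector store,
# before the LLM sees anything; the matched chunks become the prompt's context.
relevant_chunks = vectorstore.similarity_search("What does the document say about Gemma?")
for doc in relevant_chunks:
    print(doc.page_content[:200])  # these chunks are what gets stuffed into the prompt
```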

Thank you Krish. Please use the Llama 3 model and show us how to use it as the LLM for Q&A and RAG.

mohsenghafari

Thank you for all your efforts, Sir.
Your videos have helped us a lot!

aneeshamanke

I really appreciate your efforts and excellent content. Thank you 😊

DAvgGamer

Thanks Krish for all the value you share with us! For the end-to-end projects, could you also include various alternatives for deploying the AI solutions (Azure, Google Cloud, AWS, ...)?

dordekodzic

Thank you Krish!! This is really helpful for my project.

A couple of questions:

1. My data is in Google Drive, OneDrive, or other cloud storage. How can I load these documents or create pipelines to load data from my Google Drive?

2. What change do I make in the code to get the top 3 (k=3) similarity search results?

anshulesh
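
Two hedged sketches that may help with these questions: GoogleDriveLoader from the langchain-google-community package can load documents straight from a Drive folder (the folder ID and credentials path below are placeholders and require Google Cloud OAuth setup), and the k parameter controls how many chunks the similarity search returns from the existing vector store.

```python
# 1. Loading documents from Google Drive (assumed GoogleDriveLoader; the folder ID
#    and credentials path are placeholders).
from langchain_google_community import GoogleDriveLoader

drive_docs = GoogleDriveLoader(
    folder_id="your-drive-folder-id",      # placeholder
    credentials_path="credentials.json",   # OAuth client credentials
).load()

# 2. Getting the top 3 (k=3) similarity search results.
top3 = vectorstore.similarity_search("your question", k=3)
# or, when building the retrieval chain:
retriever = vectorstore.as_retriever(search_kwargs={"k": 3})
```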

Krish, I really appreciate your videos. Can you please create a video on multimodal model creation?

ashishkumarbajpai

Very nice, you are an institution. Thanks for the good work.

globeaz

Sir, how about adding a memory or follow-up feature in one of the RAG videos? That'd be really helpful.

ayushmishra

Great video! Really, really helpful. Can you please modify this code to add a conversational feature so that we can ask follow-up questions?

SagnikSarkar
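
One hedged way to add follow-up questions, assuming the `llm`, `retriever`, and `document_chain` from the basic pipeline sketch above and LangChain's history-aware retriever; the rephrasing prompt wording is illustrative.

```python
# Hedged sketch of a conversational (follow-up) layer on top of the existing chain.
from langchain.chains import create_history_aware_retriever, create_retrieval_chain
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.messages import AIMessage, HumanMessage

# Rewrite each follow-up into a standalone question using the chat history.
rephrase_prompt = ChatPromptTemplate.from_messages([
    MessagesPlaceholder("chat_history"),
    ("human", "{input}"),
    ("human", "Rewrite the question above as a standalone question."),
])
history_aware_retriever = create_history_aware_retriever(llm, retriever, rephrase_prompt)
conversational_chain = create_retrieval_chain(history_aware_retriever, document_chain)

chat_history = []
first = conversational_chain.invoke({"input": "What is Gemma?", "chat_history": chat_history})
chat_history += [HumanMessage(content="What is Gemma?"), AIMessage(content=first["answer"])]

# The follow-up question can now rely on the previous turn.
follow_up = conversational_chain.invoke(
    {"input": "How is it different from Gemini?", "chat_history": chat_history}
)
```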

How do we use the Llama 3 model in production? Locally we have downloaded the Llama 3 8B model, but how do we deploy the model live? Please teach me bro ❤

__john

Thanks Krishna, for the amazing video; I appreciate your smart work. I have one doubt: did you miss passing the context in the prompt? I saw you pass only the question, or is it optional? 29:13

vivek
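
Regarding the 29:13 doubt, a hedged sketch of the pattern the video appears to use (reusing `llm` and `retriever` from the sketch above): with create_stuff_documents_chain plus create_retrieval_chain, the {context} placeholder is filled automatically with the retrieved documents, so only the question is passed at invoke time.

```python
# Sketch: the chain fills {context} with the retrieved chunks by itself,
# which is why only the question ("input") is passed when invoking.
from langchain_core.prompts import ChatPromptTemplate
from langchain.chains.combine_documents import create_stuff_documents_chain
from langchain.chains import create_retrieval_chain

prompt = ChatPromptTemplate.from_template(
    "Answer the question based only on the provided context.\n"
    "<context>\n{context}\n</context>\n"
    "Question: {input}"
)
document_chain = create_stuff_documents_chain(llm, prompt)      # fills {context}
retrieval_chain = create_retrieval_chain(retriever, document_chain)
response = retrieval_chain.invoke({"input": "your question"})   # no manual context needed
```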

Hello Krish, can you make a video on how to deal with structured data formats like Excel using AWS Knowledge Bases and LLM models?

jallaaswini

Great session indeed. But I'm still curious about one thing: you created a "venv" conda environment in which you installed the necessary libraries, but at the 12:10 timeframe, when you accessed cmd, it was showing a different environment (development). How is this possible?

SwaminathanSekar-ksvb

Will Groq use the data that we are passing to the LLMs to train models?

sarangakumarapeli

Hi, the document provided by you works, but my personal document does not, and I need to add memory. How can I add it?

jtgzdft

Please make an LLM evaluation of this RAG using this Groq model.

priyank

Can we load any PDF and get our output?

devloper.hs

When I load a PDF, is it possible to load tables and images, or does it get all the elements? I think it can only accept text elements.

muhammedyaseenkm
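
For what it's worth, a hedged sketch: the PyPDF-based loaders extract text only, so tables and images need a different loader, for example UnstructuredPDFLoader in "elements" mode (this assumes the unstructured package is installed, and results vary by PDF).

```python
# Hedged sketch: PyPDFLoader returns plain text per page; UnstructuredPDFLoader in
# "elements" mode also tags titles, tables, etc. (requires the `unstructured` package).
from langchain_community.document_loaders import PyPDFLoader, UnstructuredPDFLoader

text_docs = PyPDFLoader("sample.pdf").load()
element_docs = UnstructuredPDFLoader("sample.pdf", mode="elements").load()
print({d.metadata.get("category") for d in element_docs})  # e.g. Title, NarrativeText, Table
```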