Getting Started With NVIDIA NIM: Building a RAG Document Q&A App With NVIDIA NIM and LangChain

Explore the latest community-built AI models with an API optimized and accelerated by NVIDIA, then deploy anywhere with NVIDIA NIM inference microservices.
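For a concrete sense of what calling one of these hosted NIM endpoints looks like from LangChain, here is a minimal sketch (the langchain-nvidia-ai-endpoints package and the meta/llama3-70b-instruct model id are assumptions based on NVIDIA's API catalog; the key comes from build.nvidia.com):

    import os
    from langchain_nvidia_ai_endpoints import ChatNVIDIA

    # Hosted NIM endpoints authenticate with an API key from build.nvidia.com
    os.environ["NVIDIA_API_KEY"] = "nvapi-..."  # placeholder key

    # Pick one of the community models served behind NIM (assumed model id)
    llm = ChatNVIDIA(model="meta/llama3-70b-instruct")
    print(llm.invoke("What is NVIDIA NIM?").content)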
-------------------------------------------------------------------------------------------
Support me by joining the channel membership so that I can keep uploading videos like these.
-----------------------------------------------------------------------------------

►Data Science Projects:

►Learn In One Tutorials

End-To-End RAG LLM App Using LlamaIndex And OpenAI: Indexing And Querying Multiple PDFs

►Learn In a Week Playlist

---------------------------------------------------------------------------------------------------
My Recording Gear
Comments
Author

Check out my Udemy course on machine learning and NLP with end-to-end projects.

krishnaik
Author

Such an excellent video with three awesome components: a very tight LangChain RAG implementation on Streamlit, highlighting the capabilities of NVIDIA's NIM inference.

Thank you for sharing!

IdPreferNot
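For context, the app being praised boils down to roughly the following (a minimal sketch, assuming a ./docs folder of PDFs and the meta/llama3-70b-instruct model id; the exact prompt and loaders in the video may differ):

    import streamlit as st
    from langchain_nvidia_ai_endpoints import ChatNVIDIA, NVIDIAEmbeddings
    from langchain_community.document_loaders import PyPDFDirectoryLoader
    from langchain_community.vectorstores import FAISS
    from langchain.text_splitter import RecursiveCharacterTextSplitter
    from langchain.chains.combine_documents import create_stuff_documents_chain
    from langchain.chains import create_retrieval_chain
    from langchain_core.prompts import ChatPromptTemplate

    llm = ChatNVIDIA(model="meta/llama3-70b-instruct")  # assumed model id

    # Build the vector index once per session instead of on every rerun
    if "vectors" not in st.session_state:
        docs = PyPDFDirectoryLoader("./docs").load()
        chunks = RecursiveCharacterTextSplitter(
            chunk_size=700, chunk_overlap=50
        ).split_documents(docs)
        st.session_state.vectors = FAISS.from_documents(chunks, NVIDIAEmbeddings())

    prompt = ChatPromptTemplate.from_template(
        "Answer the question using only the context.\n"
        "<context>\n{context}\n</context>\nQuestion: {input}"
    )
    chain = create_retrieval_chain(
        st.session_state.vectors.as_retriever(),
        create_stuff_documents_chain(llm, prompt),
    )

    question = st.text_input("Ask a question about the documents")
    if question:
        st.write(chain.invoke({"input": question})["answer"])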
Author

How is this different from the Groq API inference service, where LPUs are much faster than GPUs for inference? NIM is more like hosting HF models inside the NVIDIA ecosystem with an inference facility alongside. But since Groq also has pretty good models, using the Groq API just for inference would be the wiser option for speed on the same kinds of models.

kumargaurav
Author

Krish sir, thank you for all the RAG videos you shared. I added a Roman Hindi feature to this application, where the user can ask in Hindi and get the response in Hindi too.

syedfarhanahmed
Author

I love you, Krish. I always pick up my phone to see if you have posted something new.

victorakakpo
Author

Thank you for the nice overview. I somehow keep getting a 400 error on inference; any pointers on how I could sort it out?

yvonne
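A 400 from the hosted endpoint is usually a request problem rather than a bug in the chain; two quick things to check before digging deeper (hedged; the nvapi- prefix is just the convention of keys issued by build.nvidia.com):

    import os

    # 1) The API key must be set and well-formed
    key = os.getenv("NVIDIA_API_KEY", "")
    assert key.startswith("nvapi-"), "NVIDIA_API_KEY missing or malformed"

    # 2) The model id must exactly match a catalog entry, e.g.
    #    "meta/llama3-70b-instruct"; a misspelled id or an over-long
    #    prompt/context can also surface as a 400 Bad Request.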
Author

Try creating a RAG deployment using the GenerativeAIExamples repo with an NVIDIA NIM container for LLM inference.

rahulpatil
Author

How do we ensure that the digital humans or NIMs are accountable and striving for a higher NPS?

davh
Author

Hello sir, instead of a Streamlit application all the time, can you sometimes show how we can just run it locally through the terminal?

uniqueavi
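Dropping Streamlit is mostly a matter of replacing the text box with input(); a minimal terminal-loop sketch (swap llm.invoke for the retrieval chain's chain.invoke to keep the RAG part):

    from langchain_nvidia_ai_endpoints import ChatNVIDIA

    # Same model as the Streamlit app, minus the UI (assumed model id)
    llm = ChatNVIDIA(model="meta/llama3-70b-instruct")

    while True:
        question = input("Q (blank line to quit): ").strip()
        if not question:
            break
        print(llm.invoke(question).content)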
Author

A request: please pin the versions of the libraries used in requirements.txt, for ease of running the application.

dimpleagrawal
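Pinning versions takes one command in the environment where the app already runs; the version numbers below are purely illustrative, not the ones used in the video:

    pip freeze > requirements.txt
    # produces pinned lines such as:
    #   langchain==0.2.5
    #   langchain-nvidia-ai-endpoints==0.1.2
    #   streamlit==1.35.0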
Author

Sir, in this you are first creating an OpenAI 'client' object and then using a Llama 3 model from it. How?

_satyamrai
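The trick behind that pattern is that NVIDIA's hosted endpoints speak the OpenAI chat-completions protocol, so the openai SDK works once base_url is overridden (a sketch; the base URL and model id follow NVIDIA's API catalog documentation):

    from openai import OpenAI

    # The OpenAI SDK is just an HTTP client; pointing base_url at NVIDIA's
    # OpenAI-compatible gateway makes it serve Llama 3 instead of GPT models.
    client = OpenAI(
        base_url="https://integrate.api.nvidia.com/v1",
        api_key="nvapi-...",  # placeholder NVIDIA key
    )

    resp = client.chat.completions.create(
        model="meta/llama3-70b-instruct",  # assumed catalog model id
        messages=[{"role": "user", "content": "Hello from Llama 3 via NIM"}],
    )
    print(resp.choices[0].message.content)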
Author

Does the Udemy course offer anything beyond his previous playlists? I mean, why should I choose the course over the free playlist, apart from the certification? Anyone?

rcky
Author

Does your laptop have GPUs? How did you ensure that the code runs on GPUs and not CPUs?

anandasekhar
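Worth noting: with the hosted NIM API, the heavy inference runs on NVIDIA's GPUs, not the laptop, so local hardware only matters if you self-host the NIM container. A quick local check, assuming PyTorch is installed:

    import torch

    # Only relevant for a self-hosted NIM container; the hosted API runs
    # on NVIDIA's infrastructure regardless of local hardware.
    print("CUDA available:", torch.cuda.is_available())
    if torch.cuda.is_available():
        print("Device:", torch.cuda.get_device_name(0))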
Author

Sir, when we create embeddings they are obviously stored in a local directory. If we deploy this application, could you please tell us where we should save these embeddings if we use Nomic embeddings and Chroma?

Compact
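For the persistence question, Chroma can write its index to a directory that a deployment mounts as a volume (a minimal sketch; NVIDIAEmbeddings stands in for the Nomic embeddings the commenter mentions):

    from langchain_community.vectorstores import Chroma
    from langchain_core.documents import Document
    from langchain_nvidia_ai_endpoints import NVIDIAEmbeddings

    embeddings = NVIDIAEmbeddings()  # swap in Nomic embeddings if preferred
    docs = [Document(page_content="example chunk")]

    # Build once, persisting to disk; in a deployment this path should be
    # a mounted volume (or use a managed/remote Chroma server instead).
    vectordb = Chroma.from_documents(docs, embeddings, persist_directory="./chroma_db")

    # Later runs reload the saved index instead of re-embedding everything
    vectordb = Chroma(persist_directory="./chroma_db", embedding_function=embeddings)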
Author

Have you compared the inference speed of Groq Cloud with NVIDIA NIM?

JebliMohamed
Author

I am getting "ModuleNotFoundError: No module named …". What am I missing?

amitbandyopadhyay
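The module name is cut off in the comment, so this is only a guess at the usual culprits for this stack; install whichever package the traceback actually names:

    pip install langchain-nvidia-ai-endpoints langchain-community faiss-cpu streamlit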
Author

Cheering for 1M subscribers for your channel!

sergeistadnik
Author

Hi Krish, can we use this to extract information from a PDF, like the invoice number, invoice dates, or a description that contains a table?

bodhisattadas
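Yes in principle: the same model can be prompted for structured fields instead of free-form answers, though table extraction from PDFs is hit-or-miss. A rough sketch (the field list and JSON instruction are illustrative, not from the video):

    from langchain_nvidia_ai_endpoints import ChatNVIDIA

    llm = ChatNVIDIA(model="meta/llama3-70b-instruct")  # assumed model id

    invoice_text = "..."  # page text pulled from the PDF loader goes here
    prompt = (
        "From the invoice below, extract the invoice number, invoice date, "
        "and each line-item description. Reply as JSON only.\n\n" + invoice_text
    )
    print(llm.invoke(prompt).content)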
Author

Please make a video on Chainlit authentication, sir.

khadarvali
Author

Sir, can you tell us about Lightning AI, please?

pradhyumnasg