Python Langchain Tutorial: Use 3 Different LLMs in 10 Mins

Learn how to easily switch between LLMs in LangChain for your Python applications: OpenAI's GPT-4, Amazon Bedrock (Claude V2), and Google Gemini Pro.

Or if none of these suit your needs, you can also implement your own interface.
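Switching providers boils down to constructing a different chat model behind one shared interface. A minimal sketch of that idea (the `get_llm` factory is illustrative, and each branch assumes that provider's credentials are already configured):

```python
def get_llm(provider: str):
    """Return a LangChain chat model for the given provider name."""
    if provider == "openai":
        from langchain_openai import ChatOpenAI  # needs OPENAI_API_KEY
        return ChatOpenAI(model="gpt-4")
    if provider == "bedrock":
        from langchain_aws import ChatBedrock  # needs AWS credentials
        return ChatBedrock(model_id="anthropic.claude-v2")
    if provider == "gemini":
        from langchain_google_genai import ChatGoogleGenerativeAI  # needs GOOGLE_API_KEY
        return ChatGoogleGenerativeAI(model="gemini-pro")
    raise ValueError(f"Unknown provider: {provider!r}")

# All three models expose the same interface, so the rest of the app
# is unchanged no matter which one you pick:
#   response = get_llm("openai").invoke("Summarize my spending this month.")
#   print(response.content)
```

Because the imports are deferred into each branch, you only need the package for the provider you actually select.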

📚 Chapters
00:00 Introduction
01:43 Getting Started
03:18 Creating the Base App
05:33 How To Use OpenAI GPT-4
07:42 How To Use Claude/Llama2 (via AWS)
09:33 How To Use Google Gemini Pro
10:27 Custom LLMs

👉 Links
Comments
Fantastic video! I would love more tutorials about setting this up on a local LLM. I love the example used in this video but I don’t like the idea of having to send bank statements to LLMs in the cloud. 😅

Midnightmicroscope
@pixegami TBH this channel has given me so much interest and motivation to do something new, and your content is really impressive. 🔥

crbnx
"Nvidia Inference Microservice" demo will be a good addition

NathanSTLPillai
Excellent video and content in general. I'll leave an idea: do a similar video on funcchain, a Pythonic library that encapsulates some of LangChain's complexities.

raonitimo
Would love to see how Hugging Face can be used here. If possible, please drop a video describing it, thanks!

bec_Divyansh
When you run the export command, what is supposed to happen? I'm tangled up with this. When I try to invoke the LLM, it doesn't work because something is wrong with what you called the environment variable. Please help.

glenilame
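
For anyone hitting the same issue: `export` only sets the variable for the current shell session, so Python must be started from that same terminal afterwards. A quick check, assuming the `OPENAI_API_KEY` variable name used for OpenAI (substitute whichever name you exported):

```python
import os

# None here means the export didn't reach this Python process.
key = os.environ.get("OPENAI_API_KEY")
if key is None:
    print("OPENAI_API_KEY is not set; run `export OPENAI_API_KEY=...` in the "
          "same terminal session you launch Python from, before starting it.")
else:
    print(f"Key found ({len(key)} characters).")
```

If the key prints as found but invoking the LLM still fails, the problem is likely the key's value rather than the variable.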
And how can I use big models from Hugging Face? I can't load them into memory because many of them are bigger than 15 GB, and some are 130 GB+. Any thoughts?

botondvasvari
Hey man! Please do a tutorial on LLM agents and using a custom API as a tool.

rossholland
Excellent video.
What developer tools are needed to build these applications?
In general, for GenAI development.

NathanSTLPillai
LangChain is a great tool, but is there any use for it if I will only ever use one LLM, i.e., GPT?

dxb
You didn't use the Azure OpenAI LLM, even though most companies use OpenAI through Azure. It would help us learn Azure OpenAI services if you created a video using the Azure OpenAI LLM: a multimodal RAG application (input multiple PDFs with images, tables, and text) integrated with Streamlit, using an Azure OpenAI API key, Azure OpenAI embeddings, and AzureChatOpenAI for the multi-PDF RAG application. There are no videos on YouTube that use the Azure OpenAI endpoint, so it would be helpful. 😊

shahnaz