Getting Started with GPT-3 vs. Open Source LLMs - LangChain #1

LangChain is a popular framework that allows users to quickly build apps and pipelines around Large Language Models. It integrates directly with OpenAI's GPT-3 and GPT-3.5 models, as well as open-source alternatives hosted on Hugging Face, such as Google's flan-t5 models.
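As a rough illustration of what those integrations look like in code, here is a minimal sketch. It assumes an early (pre-0.1) LangChain release, API keys set via the OPENAI_API_KEY and HUGGINGFACEHUB_API_TOKEN environment variables, and illustrative model parameters:

```python
# Minimal sketch: import paths and parameters assume an early (pre-0.1)
# LangChain release and may differ in newer versions.
from langchain.llms import OpenAI, HuggingFaceHub

# OpenAI's GPT-3 (text-davinci-003) via the paid OpenAI API
davinci = OpenAI(model_name="text-davinci-003", temperature=0.7)

# Google's flan-t5-xl served through the Hugging Face Hub inference API
flan_t5 = HuggingFaceHub(
    repo_id="google/flan-t5-xl",
    model_kwargs={"temperature": 1e-10, "max_length": 64},
)

# Both LLM objects are directly callable on a prompt string
print(flan_t5("Which NFL team won the Super Bowl in the 2010 season?"))
```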

It can be used for chatbots, Generative Question-Answering (GQA), summarization, and much more.

The core idea of the library is that we can "chain" together different components to create more advanced use cases around LLMs. Chains may consist of multiple components from several modules. We'll explore all of this in these videos.
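As a concrete, hedged example of that chaining idea, a prompt template fed into an LLM already gives a simple question-answering pipeline. The sketch below assumes the classic LLMChain API from early LangChain releases and an OPENAI_API_KEY environment variable:

```python
# Hedged sketch of a basic chain: PromptTemplate -> LLM via the classic
# LLMChain class (API may differ in newer LangChain releases).
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

template = """Question: {question}

Answer: """
prompt = PromptTemplate(template=template, input_variables=["question"])

# Chain the prompt template and the LLM into a single reusable component
llm_chain = LLMChain(prompt=prompt, llm=OpenAI(model_name="text-davinci-003"))
print(llm_chain.run("Which NFL team won the Super Bowl in the 2010 season?"))
```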

🌲 Article:

📌 LangChain Handbook Code:

🤖 AI Dev Studio:

🎉 Subscribe for Article and Video Updates!

👾 Discord:

00:00 Getting LangChain
01:14 Four Components of LangChain
06:43 Using Hugging Face and OpenAI LLMs in LangChain
07:13 LangChain Hugging Face LLM
13:51 OpenAI LLMs in LangChain
18:58 Final results from GPT-3
Comments

That's so sick. I'm coming from being a crypto dev, and seeing the open-source AI stack get so strong so fast is awesome.

thomasmeta

I really like your content, man! It's so nice to have someone go in-depth with this stuff, not the usual ~10 tutorials that don't go beyond the very basics.
Excited for the rest of the series!

Galmion

Thank you for the great content. I am excited to see where this series will go! I appreciate how you include diagrams and flowcharts in the videos, they are excellent for visualising what you are teaching!

AIQuestX

That's a very comprehensive one. Looking forward to the rest of the series, together with other tools and use cases.

seattletcp

James is the most amazing YouTuber, just making awesome videos and teaching with quality content. I am really looking forward to some free, open-source ones to learn from, using Pinecone rather than the paid OpenAI ones. :) Thanks, James.

sriramkrishna

Perfect, James, thanks for the introduction!

pavellegkodymov

I really enjoyed watching your video today. Keep up the great work. I look forward to seeing what will happen next.

NelsLindahl

@James, I'm getting an xl timeout error using your notebook directly on a T4 and A100 GPU in Colab. Any ideas? Really like your stuff, this is a super helpful intro video.

redhatravi

Wonderful content! ... I will follow the entire playlist
Thank you very much, James Briggs!

LearningWorldChatGPT

James, great tutorial as usual. Perhaps you could create another tutorial about using LangChain for the recently enhanced in-context learning from the GPT-3 tutorial you made, i.e. whether its components help improve and automate the in-context learning flow.

cloudshoring

🎯 Key Takeaways for quick navigation:

01:34 *🧩 Prompt templates in LangChain are templates for different types of prompts, facilitating various interactions with large language models such as question answering and summarization.*
02:43 *🤖 Large language models in LangChain, like GPT-3, are capable of performing incredible tasks such as text generation and understanding.*
03:12 *⚙️ Agents in LangChain are processors that use large language models to determine actions based on queries or instructions, enabling logical operations like web search or calculations.*
05:29 *🧠 LangChain supports short-term and long-term memory for models, aiding in tasks like conversation buffering and data augmentation for better domain-specific responses (a minimal memory sketch follows this list).*
06:50 *🛠️ Getting started with LangChain involves installing the library and using it for basic tasks like text generation with large language models from providers like Hugging Face and OpenAI.*
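To make the memory component concrete, here is a minimal sketch (not taken from the video; class names assume an early LangChain release) of a conversation chain with buffer memory:

```python
# Hedged sketch of conversational memory: ConversationBufferMemory stores the
# raw chat history and injects it into every prompt (early LangChain API).
from langchain.llms import OpenAI
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory

llm = OpenAI(model_name="text-davinci-003", temperature=0)
conversation = ConversationChain(llm=llm, memory=ConversationBufferMemory())

conversation.predict(input="Hi, I'm working through the LangChain handbook.")
# The second turn can refer back to the first because of the buffer memory
print(conversation.predict(input="What did I say I was working through?"))
```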

Made with HARPA AI

eugenmalatov

I get an error on the generate method saying the attribute doesn't exist on the LLMChain object.
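For reference, a minimal sketch of how LLMChain.generate is called in the classic API (it takes a list of input dicts and returns an LLMResult); if the attribute is missing, the installed LangChain version may expose a different interface than the one shown in the video:

```python
# Hedged sketch of LLMChain.generate in the classic API: it takes a list of
# input dicts (one per prompt) and returns an LLMResult. If the attribute is
# missing, the installed LangChain version may differ from the video's.
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

prompt = PromptTemplate(
    template="Question: {question}\n\nAnswer: ",
    input_variables=["question"],
)
llm_chain = LLMChain(prompt=prompt, llm=OpenAI(model_name="text-davinci-003"))

result = llm_chain.generate([
    {"question": "Which NFL team won the Super Bowl in the 2010 season?"},
    {"question": "If I am 6 ft 4 inches, how tall am I in centimetres?"},
])
print(result.generations[0][0].text)  # first generation for the first prompt
```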

tradingwithwill

Great tutorial, James! On a separate note, I observed that even if I remove the word "the" from the same question as in the video, the flan-t5-xl model is unable to answer! It's as if the question is hard-coded in the model with the exact number of words needed to generate an answer. Bizarre!

arjunbaidya

Eugene Cernan was the 12th person to 'leave' the moon, because he was the last to board again. Harrison Schmitt was the 12th person to set foot on the moon, since he stepped out shortly after Eugene. Eugene comes up a lot as the last person to have set foot on the moon to this day, which often gets read as "last to leave the moon". So, it's a bit of a fiddly question.

graham

Super interesting vid! Is the Hugging Face Hub LLM expensive to run, or is it free? Bit confused about it...

I understand OpenAI is paid, and it can be replaced with the Hugging Face Hub Flan-T5-xl (or xxl, but that times out??). Can Bloom also be used? Is it free/better than Flan-T5-xl?

hiranga

Hi James! Great content! One question... is it possible to use open-source LLMs like flan-t5-xl for "question answering over docs"? Every example I have found uses OpenAI, and I have been unable to rewrite them for open-source LLMs...

blazkocevar

Thanks! Can you create a tutorial from the Start screen? I don't know how to get to whatever screen you are on.

BlackStudies

Thank you very much! That's amazing. Looking forward to your next video!

rubickwilde

Excellent start to what I hope will be an equally excellent rest of the course, which I am eager to follow and hand-code (rather than use your code from GitHub). However: the 2010 season was won by the New Orleans Saints. ;) Also, how could I do the course using a locally installed LLM from HF? What would change in the LLM instantiation call to do so?

nikosterizakis

Thank you so much for doing this. I never knew this existed.

arjungoalset