Prompt Templates for GPT 3.5 and other LLMs - LangChain #2

In the second part of our LangChain series, we'll explore PromptTemplates, FewShotPromptTemplates, and example selectors. These are key features in LangChain that support prompt engineering for LLMs like OpenAI's GPT-3, Cohere's models, and Hugging Face's open-source alternatives.
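To give a rough idea of what this looks like in code, here is a minimal PromptTemplate sketch (the template text and query are purely illustrative, and import paths can differ between LangChain versions):

```python
from langchain import PromptTemplate

# A reusable prompt with a single {query} placeholder filled in at runtime.
template = """Answer the question below. If the question cannot be answered
with the information you have, respond with "I don't know".

Question: {query}

Answer: """

prompt = PromptTemplate(input_variables=["query"], template=template)

# format() substitutes the variables and returns a plain prompt string.
print(prompt.format(query="Which libraries help build LLM applications?"))
```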

LangChain is a popular framework that allows users to quickly build apps and pipelines around Large Language Models. It integrates directly with OpenAI's GPT-3 and GPT-3.5 models and Hugging Face's open-source alternatives like Google's flan-t5 models.
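For reference, a minimal sketch of how those two backends are typically initialised (assuming OPENAI_API_KEY and HUGGINGFACEHUB_API_TOKEN are set in the environment; class names and model IDs may differ in newer LangChain releases):

```python
from langchain.llms import OpenAI, HuggingFaceHub

# OpenAI completion model from the GPT-3 / GPT-3.5 family.
openai_llm = OpenAI(model_name="text-davinci-003", temperature=0.7)

# Open-source alternative hosted on the Hugging Face Hub.
flan_llm = HuggingFaceHub(
    repo_id="google/flan-t5-xl",
    model_kwargs={"temperature": 0.1, "max_length": 64},
)
```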

It can be used for chatbots, Generative Question-Answering (GQA), Retrieval Augmented Generation (RAG), summarization, and much more.

The core idea of the library is that we can "chain" together different components to create more advanced use cases around LLMs. Chains may consist of multiple components from several modules. We'll explore all of this in these videos.
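As a small sketch of that chaining idea, a prompt template and an LLM can be wired into a single runnable chain (the topic string is hypothetical; the exact chain API depends on your LangChain version):

```python
from langchain import PromptTemplate, LLMChain
from langchain.llms import OpenAI

prompt = PromptTemplate(
    input_variables=["topic"],
    template="Give a one-sentence summary of {topic}.",
)

# LLMChain formats the prompt, sends it to the model, and returns the completion.
chain = LLMChain(llm=OpenAI(temperature=0), prompt=prompt)
print(chain.run(topic="retrieval augmented generation"))
```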

📌 Code notebook:

🌲 Pinecone article:

🎉 Subscribe for Article and Video Updates!

👾 Discord:

00:00 Why prompts are important
02:42 Structure of prompts
04:10 LangChain code setup
05:56 LangChain's PromptTemplates
08:34 Few-shot learning with LLMs
13:04 Few-shot prompt templates in LangChain
16:09 Length-based example selectors (see the sketch after this list)
21:19 Other LangChain example selectors
22:12 Final notes on prompts + LangChain
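For the few-shot and example-selector chapters above, here is a minimal sketch of how FewShotPromptTemplate and LengthBasedExampleSelector fit together (the Q/A pairs are made up, and import paths may vary between LangChain versions):

```python
from langchain import PromptTemplate, FewShotPromptTemplate
from langchain.prompts.example_selector import LengthBasedExampleSelector

# Made-up Q/A pairs for the model to imitate.
examples = [
    {"query": "What is a prompt template?",
     "answer": "A reusable prompt with placeholders."},
    {"query": "What is few-shot learning?",
     "answer": "Teaching the model through examples placed in the prompt."},
    {"query": "What does an example selector do?",
     "answer": "It decides which examples get included in the prompt."},
]

# How each individual example is rendered.
example_prompt = PromptTemplate(
    input_variables=["query", "answer"],
    template="Q: {query}\nA: {answer}",
)

# Include only as many examples as fit within roughly 50 words of prompt.
example_selector = LengthBasedExampleSelector(
    examples=examples,
    example_prompt=example_prompt,
    max_length=50,
)

few_shot_prompt = FewShotPromptTemplate(
    example_selector=example_selector,
    example_prompt=example_prompt,
    prefix="Answer the question in the same style as the examples below.",
    suffix="Q: {query}\nA:",
    input_variables=["query"],
)

print(few_shot_prompt.format(query="Why use LangChain for prompting?"))
```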
Comments

Probably the best video explaining FewShotPromptTemplate and the other ones. Thanks

bpraghu

I am an ML dev in the LLM space and really like your videos. Keep up the great work 🙌

rohitsaluja

You are really dropping some interesting graphics into the videos to make it more than just you talking to the camera. Nice work!

NelsLindahl

Amazing video, as always. A great content niche mirroring this that no one has hit on yet is how to test whether output generated from context is being biased by the underlying model. For example, I'm adding context from Carl Jung's books to the Davinci model to try and understand his writing better, but many of his ideas are not politically correct and I can't tell when the model is 'self-censoring.' I think this will be a growing problem with pretrained models and many of the more interesting use cases.

AltcoinAnalysis

Mind-blowing stuff. I wish I had seen this 2 months ago when I was looking for it.

subodh.r

Good presentation, James. Thank you. It seems the LLM companies should create a graduated-feed prompt system where you could submit portions of your background prompt, context, examples, and question in stages. A session would start with the initial background prompt, identified as such to the model and broken into portions so as not to exceed the token limit, with these portions submitted until the entire initial background prompt is presented. Then follow with the context prompt(s), examples, and the question. The LLM would remember each stage, so that more, or different, context could be presented anywhere in the session, and the model would interpret it against the initial background prompt. Same with more examples against the context, etc.

georgeallen

Insightful as always. Loving the LangChain series. Thanks! 🙏🏻

dikshyakasaju

I've been following your channel for a while, really good work!

mikoajkacki

Great walkthrough! Thanks! Would love to see more on LangChain.

niklase

Thanks for the reference to LangChain.

rajivmehtapy

Your explanation was fantastic and I have a question!

I'm trying to build a chatbot that can extract information from a pandas dataframe. It will be necessary to create filters and operations that the agent can already do! Nonetheless, the user is not a data expert and may ask a question that is not DIRECTLY a data science task. Ok, the question can be formatted using `FewShotPromptTemplate` before passing the question to the agent. It allows us to create a context and set an example. However, the agent still gets confused and makes mistakes.

I would like to know how I can create a `FewShotPromptTemplate` inside the agent where I can create a context and, most importantly, pass some code examples. Is it possible?

luanorionbarauna

I'm not sure why we would supply examples of questions and answers. I mean - LLMs inherently respond to our questions. They don't need to be told to do that. I can understand we might want them to respond in a certain format at times, or with a certain amount of verbiage, but the examples in the video and elsewhere I have seen were not addressing these specific requirements.

TropicalCoder

A much needed video, thank you so much ❤️

shamaldesilva

I am expecting my LLM to return constant JSON output like my examples, but I am getting an error. Please reply and explain how to do that.

GiridharReddy-hbnv

The Colab notebook always gives an error when I try to run the print(openai... part of the code.
Any solutions?

ayushgautam

Thank you very much, very useful. I wonder how to select Hugging Face models to try with these?

fucyxth

For dealing with the context window size limit, can one go hierarchical? One template to index. Other templates for the branches. A common use case might be to load a company's FAQ page, and see if the LLM can handle the Q&A.

mintakan

Could you come up with a way to compress the prompts into fewer tokens? Then, as part of the original prompt, tell it how to decode and encode using the mapping. That way you could send a lot fewer tokens.

Chris-senc

The similarity ExampleSelector, mentioned last in the video, seems very powerful. Has anyone had success with this? Would love to hear about it!

ward

But we can create this template directly, so why do we need LangChain… can anyone explain?

faisalamdani