You should use LangChain's Caching!

LangChain provides a caching mechanism for LLMs (large language models). The benefits of caching in your LLM development are:
1. It saves you money by reducing the number of API calls you make to the LLM provider (like OpenAI, Cohere, etc.) if you often request the same completion multiple times.
2. It speeds up your application, again by cutting out repeated API calls to the provider.
We look at how LangChain implements this caching mechanism and how we can use it in our own LLM development process.
- Watch PART 1 of the LangChain / LLM series:
Build a GPT Q&A on your own data
- Watch PART 2 of the LangChain / LLM series:
LangChain + OpenAI to chat w/ (query) own Database / CSV!
- Watch PART 3 of the LangChain / LLM series
LangChain + HuggingFace's Inference API (no OpenAI credits required!)
- Watch PART 4 of the LangChain / LLM series
Understanding Embeddings in LLMs (ft. LlamaIndex + Chroma DB)
- Watch PART 5 of the LangChain / LLM series
Query any website with GPT-3 and LlamaIndex
- Watch PART 6 of the LangChain / LLM series
Locally-hosted, offline LLM w/ LlamaIndex + OPT (an open-source, instruction-tuned LLM)
- Watch PART 7 of the LangChain / LLM series
Building an AI language tutor: Pinecone + LlamaIndex + GPT-3 + BeautifulSoup
- Watch PART 8 of the LangChain / LLM series
Building a queryable journal 💬 w/ OpenAI, markdown & LlamaIndex 🦙
- Watch PART 9 of the LLM series
- Watch PART 10 of the LLM series
GPT builds entire app from prompt (ft. SMOL Developer)
- Watch Part 11 (Prompt Engineering / Prompt Design)
A language for LLM Prompt Design: Guidance
All the code for the LLM (large language models) series featuring GPT-3, ChatGPT, LangChain, LlamaIndex and more is on my GitHub repository, so go and ⭐ star or 🍴 fork it. Happy Coding!