Constraining LLMs with Guidance AI
In this video, we'll learn how to use the Guidance library to control and constrain text generation by large language models, integrating it with the llama.cpp library and the Mistral 7B model.
We'll build an emotion detector with help from functions like select, which restricts generation to a fixed set of values, and gen, which can be constrained with regular expressions.
We'll also learn how to create reusable components and output results in JSON format.
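A reusable component in Guidance is a function decorated with @guidance that takes and returns the model state. The sketch below shows one hypothetical component that emits a JSON object whose emotion field is constrained by select; the helper names, labels, and model path are assumptions for illustration, and the model call is guarded so the sketch degrades gracefully without Guidance installed.

```python
import json
import os

try:
    import guidance
    from guidance import models, select
    HAVE_GUIDANCE = True
except ImportError:
    HAVE_GUIDANCE = False

# Illustrative label set; the video's exact list may differ.
EMOTIONS = ["happiness", "sadness", "anger", "fear", "surprise"]

# Hypothetical local path to a GGUF quantization of Mistral 7B.
MODEL_PATH = "mistral-7b-instruct.Q4_K_M.gguf"


def json_skeleton(text, emotion):
    # The fixed JSON structure the component fills in; only the
    # "emotion" value is left to the (constrained) model.
    return json.dumps({"text": text, "emotion": emotion})


if HAVE_GUIDANCE and os.path.exists(MODEL_PATH):

    @guidance
    def detect_emotion(lm, text):
        # Reusable component: appends a JSON object whose "emotion"
        # value is restricted to EMOTIONS via select().
        lm += f'{{"text": {json.dumps(text)}, "emotion": "'
        lm += select(EMOTIONS, name="emotion")
        lm += '"}'
        return lm

    lm = models.LlamaCpp(MODEL_PATH)
    lm += detect_emotion("The sunset was breathtaking.")
    print(json_skeleton("The sunset was breathtaking.", lm["emotion"]))
```

Keeping the JSON brackets and keys as literal prompt text while constraining only the value slots is what makes the output reliably machine-parseable.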
#LLMs #Mistral7B #Guidance
Resources
Constraining LLMs with Guidance AI
Constrain and Filter LLM Output with Guidance Locally
Constraining your LLM Generation with Guidance - Yuval Mazor
A language for LLM prompt design | Guidance
ORPO: NEW DPO Alignment and SFT Method for LLM
Guidance: Make your Models Behave | BRK257
Guidance Language for Controlling LLMs
Quick overview of the Guidance python library for LLMs
Pydantic is all you need: Jason Liu
New AI cascade of LLMs - FrugalGPT (Stanford)
Lessons From A Year Building With LLMs
What is Prompt Tuning?
Control Tone & Writing Style Of Your LLM Output
Prompt Engineering Tutorial – Master ChatGPT and LLM Responses
RouteLLM - Uses The Best AI Based On Your Task - Super Intelligence In The Making?
Serve a Custom LLM for Over 100 Customers
Lessons Learned on LLM RAG Solutions
LMQL Programming Large Language Models
Coercing LLMs to Do and Reveal (Almost) Anything with Jonas Geiping - 678
Master the Perfect ChatGPT Prompt Formula (in just 8 minutes)!
Using LangChain Output Parsers to get what you want out of LLMs
AI Won't Be AGI, Until It Can At Least Do This (plus 6 key ways LLMs are being upgraded)
How to Build with LLMs — Practical Lessons Learned (DockerCon 2023)
Practical Fine-Tuning of LLMs