Explained: Techniques to Parse/Format LLM Output Using LangChain in Python (TypedDict, Pydantic, JSON)

This video explains different techniques to parse the output of LLMs using LangChain (sketched in the code after this list):
1. Pydantic
2. TypedDict
3. JSON Schema
4. Custom Extraction
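
For quick reference, here is a minimal sketch of what the four techniques can look like. The chat model name, schema fields, and prompt text below are illustrative assumptions, not code from the video.

```python
# Minimal sketch only: the model name, schema fields, and prompts are assumptions.
from typing_extensions import Annotated, TypedDict

from pydantic import BaseModel, Field
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import PydanticOutputParser

llm = ChatOpenAI(model="gpt-4o-mini")  # assumed model

# 1. Pydantic: the model's reply is validated into a Pydantic object.
class Review(BaseModel):
    sentiment: str = Field(description="positive, negative, or neutral")
    summary: str = Field(description="one-line summary of the review")

pydantic_llm = llm.with_structured_output(Review)
print(pydantic_llm.invoke("The battery is great but the screen is dim."))

# 2. TypedDict: same idea, but the result comes back as a plain dict.
class ReviewDict(TypedDict):
    sentiment: Annotated[str, ..., "positive, negative, or neutral"]
    summary: Annotated[str, ..., "one-line summary of the review"]

typeddict_llm = llm.with_structured_output(ReviewDict)

# 3. JSON Schema: pass a raw schema dict instead of a class.
review_schema = {
    "title": "Review",
    "description": "Structured fields extracted from a product review.",
    "type": "object",
    "properties": {
        "sentiment": {"type": "string"},
        "summary": {"type": "string"},
    },
    "required": ["sentiment", "summary"],
}
json_llm = llm.with_structured_output(review_schema)

# 4. Custom extraction: put the parser's format instructions into the prompt
#    and parse the raw text reply at the end of the chain.
parser = PydanticOutputParser(pydantic_object=Review)
prompt = ChatPromptTemplate.from_messages(
    [("system", "Extract the fields.\n{format_instructions}"), ("human", "{text}")]
).partial(format_instructions=parser.get_format_instructions())
chain = prompt | llm | parser
print(chain.invoke({"text": "The battery is great but the screen is dim."}))
```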

Comments

In the few-shot prompting section, since we are already defining the structure in the prompt itself, we don't need to keep the "structured_llm" at the end of the chain, right? We can simply use the llm?
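
A rough sketch of the difference the question is about, with assumed names rather than the video's code: few-shot examples only show the model the desired shape, so a plain llm still returns raw text, while the structured_llm returns an already-parsed object.

```python
# Rough sketch with assumed names, not the video's code. Few-shot examples only
# show the model the desired shape; the plain llm still returns text, while the
# structured_llm returns a validated object you can use directly.
from pydantic import BaseModel
from langchain_openai import ChatOpenAI

class Joke(BaseModel):
    setup: str
    punchline: str

llm = ChatOpenAI(model="gpt-4o-mini")                 # assumed model
structured_llm = llm.with_structured_output(Joke)

print(type(llm.invoke("Tell me a joke").content))     # <class 'str'>
print(type(structured_llm.invoke("Tell me a joke")))  # <class '__main__.Joke'>
```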

Superb video, bro. So much info in this video, with great clarity!

asitnayak

11:00 - Since we have already used the parser's formatting instructions while building the prompt, why do we need to pass the "parser" again at the end of the chain? The prompt itself already contains the formatting instructions.
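
A rough sketch of why the parser stays at the end of the chain, with assumed names rather than the video's exact code: the format instructions only tell the model what text to produce; the model still replies with a string, and the parser is what turns that string into a Python object.

```python
# Rough sketch with assumed names, not the video's exact chain.
from pydantic import BaseModel
from langchain_openai import ChatOpenAI
from langchain_core.prompts import PromptTemplate
from langchain_core.output_parsers import PydanticOutputParser

class Person(BaseModel):
    name: str
    age: int

parser = PydanticOutputParser(pydantic_object=Person)
prompt = PromptTemplate(
    template="Extract the person's details.\n{format_instructions}\n{text}",
    input_variables=["text"],
    partial_variables={"format_instructions": parser.get_format_instructions()},
)
llm = ChatOpenAI(model="gpt-4o-mini")  # assumed model

# Without the parser the chain ends at the model: the formatting instructions
# shape the reply, but it is still just a string.
raw = (prompt | llm).invoke({"text": "Alice is 30 years old."})
print(type(raw.content))   # <class 'str'>

# With the parser at the end, that string is parsed and validated into Person.
parsed = (prompt | llm | parser).invoke({"text": "Alice is 30 years old."})
print(type(parsed))        # <class '__main__.Person'>
```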

asitnayak