Advanced RAG control flow with Mistral and LangChain: Corrective RAG, Self-RAG, Adaptive RAG

Comments
Author

Guys, this is crazy good! Please don't stop your demos and explaining of concepts. If you read this - can you explain a lil bit more the concepts of action tools (usage, own implementations and so on). Thx in advance!

nikitakuznetsov
Author

🎯 Key points for quick navigation:

00:00 *- Advanced RAG control flow with Mistral and LangChain*
00:12 *- Combining small steps into a comprehensive control flow for large language model applications*
00:25 *- Flow engineering uses a flow diagram to check response intent and construct the answer iteratively*
01:06 *- Corrective RAG uses a retrieval evaluator to assess document quality and trigger web search for additional information*
02:14 *- Hallucination node checks whether the answer is supported by the documents, and the answer node checks whether the generated answer addresses the question*
10:31 *- Bind MRAW to schema*
10:43 *- Convert JSON output*
10:59 *- Mock retrieval example*
11:12 *- Grading document relevance*
11:25 *- Confirm binary score*
11:39 *- Define RAG chain*
12:05 *- Graph State explained*
21:11 *- Adversarial Tax Routing*
21:52 *- Hallucination grader defined*
22:18 *- Router conditional edge*
22:47 *- Web search fallback*
24:03 *- Control flow implemented*

Made with HARPA AI
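
For readers following along, here is a minimal sketch of the corrective-RAG control flow summarized in the key points above, using LangGraph's StateGraph. The node bodies are placeholders (the actual notebook wires in a Mistral grader, a retriever, and a web search tool), so the node names and state fields below are illustrative assumptions rather than the video's exact code:

```python
# Minimal sketch of a corrective-RAG control flow; node bodies are placeholders.
from typing import List, TypedDict

from langgraph.graph import END, StateGraph


class GraphState(TypedDict):
    question: str
    documents: List[str]
    generation: str
    web_search_needed: bool


def retrieve(state: GraphState) -> dict:
    # Placeholder: fetch chunks from a vector store for state["question"].
    return {"documents": ["retrieved chunk about LangGraph"]}


def grade_documents(state: GraphState) -> dict:
    # Placeholder: an LLM grader would assign a binary relevance score per document.
    relevant = [d for d in state["documents"] if "LangGraph" in d]
    return {"documents": relevant, "web_search_needed": len(relevant) == 0}


def web_search(state: GraphState) -> dict:
    # Placeholder: call a web search tool and append its results.
    return {"documents": state["documents"] + ["web search result"]}


def generate(state: GraphState) -> dict:
    # Placeholder: the RAG chain would produce a grounded answer here.
    return {"generation": f"answer based on {len(state['documents'])} documents"}


def decide_to_generate(state: GraphState) -> str:
    # Conditional edge: fall back to web search when no retrieved document is relevant.
    return "web_search" if state["web_search_needed"] else "generate"


workflow = StateGraph(GraphState)
workflow.add_node("retrieve", retrieve)
workflow.add_node("grade_documents", grade_documents)
workflow.add_node("web_search", web_search)
workflow.add_node("generate", generate)

workflow.set_entry_point("retrieve")
workflow.add_edge("retrieve", "grade_documents")
workflow.add_conditional_edges(
    "grade_documents",
    decide_to_generate,
    {"web_search": "web_search", "generate": "generate"},
)
workflow.add_edge("web_search", "generate")
workflow.add_edge("generate", END)

app = workflow.compile()
print(app.invoke({"question": "What is corrective RAG?", "documents": []}))
```

The conditional edge after grading is what makes the flow "corrective": when no retrieved document is judged relevant, the graph detours through web search before generating.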

hxxzxtf
Author

Thanks for this! Learned a ton of good stuff, very well explained, will definitely be playing with your notebooks 😊 You’re fantastic for sharing such high-quality work

nicolaspellerin
Author

This is an amazing tutorial. So much valuable information packed into 30 minutes. Subscribed, thank you!

NarendraChennamsetty
Author

It would be even better if you could store several versions of a perspective and calculate the benefit of one perspective over another, because in the academic world there are several perspectives for solving a problem. What you build here only enhances a single perspective. Even so, we should all appreciate that this is a BIG STEP forward in the field of AI knowledge. Cheers...

TheInternet
Author

In the last part, when the flow went to the web search tool twice, it basically searched on the same query, so how did it produce a valid result the second time and not the first? How do you ensure that it does not get stuck in a loop, since it basically does the same thing again and again without changing anything, hoping to get the correct result?

kuldeepsinhjadeja
Author

Thank you for the wonderful insights into the latest RAG developments. Can someone explain in simple terms the benefit of implementing "LangGraph"? From what I understand, it allows for more accurate LLM executions by limiting the "routes" the output of a given LLM flows through, improving its reliability in execution. But why can't we empower LangChain "Agents" with the same functionality? Wouldn't the ideal agent have LangGraph capabilities built in?

awakenwithoutcoffee
Author

Awesome job! Thank you for sharing! What's the best way to do RAG over a relational database? We need to understand the question, go to the correct table of the database, and find the most relevant records. It looks like we should support both keyword search and semantic search. For the keyword search, we need to extract parameters like the keyword, the date in the question, the person who created the record, etc.
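
One common pattern for this (a rough, hypothetical sketch, not from the video) is to extract structured filters from the question, apply them as a SQL WHERE clause for the keyword/metadata part, and then rank the filtered rows by embedding similarity for the semantic part. The table layout, the embed helper, and the pre-extracted author/since parameters below are all made up for illustration:

```python
# Hypothetical sketch: SQL metadata filtering first, then semantic re-ranking.
import sqlite3

import numpy as np


def embed(text: str) -> np.ndarray:
    # Placeholder embedding; swap in a real embedding model (e.g. Mistral embeddings).
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.random(8)


def search_records(conn: sqlite3.Connection, question: str, author: str, since: str):
    # Step 1: keyword/metadata filter (in practice extracted from the question by the LLM).
    rows = conn.execute(
        "SELECT id, body FROM records WHERE author = ? AND created_at >= ?",
        (author, since),
    ).fetchall()

    # Step 2: semantic ranking of the filtered rows against the question.
    q_vec = embed(question)
    scored = [(float(np.dot(q_vec, embed(body))), rid, body) for rid, body in rows]
    return sorted(scored, reverse=True)


conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE records (id INTEGER, body TEXT, author TEXT, created_at TEXT)")
conn.executemany(
    "INSERT INTO records VALUES (?, ?, ?, ?)",
    [
        (1, "Quarterly revenue report", "alice", "2024-01-15"),
        (2, "Incident postmortem notes", "alice", "2024-02-02"),
    ],
)

print(search_records(conn, "What did revenue look like?", author="alice", since="2024-01-01"))
```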

yvcqwkm
Author

What happens if the graph gets stuck in a loop? (Web search > not useful > web search > not useful > ...)
Do I have to add a "tries" counter to my state and end after x tries to prevent an infinite loop?
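
A retry counter in the state is indeed the usual guard. A rough sketch (the retries field and MAX_RETRIES budget are assumptions, not from the video) of a conditional edge that gives up and routes to generation after a fixed number of web-search attempts:

```python
# Sketch: cap web-search retries by tracking an attempt counter in the graph state.
from typing import List, TypedDict

MAX_RETRIES = 2


class GraphState(TypedDict):
    question: str
    documents: List[str]
    retries: int  # incremented every time the flow falls back to web search


def web_search(state: GraphState) -> dict:
    # Placeholder: simulate a search that keeps failing to find useful documents.
    return {"documents": [], "retries": state.get("retries", 0) + 1}


def decide_next(state: GraphState) -> str:
    # Conditional edge: stop looping once the retry budget is spent.
    if state.get("retries", 0) >= MAX_RETRIES:
        return "generate"  # or route to END with a "could not find an answer" message
    return "web_search" if not state["documents"] else "generate"


# Tiny simulation of the loop guard, outside of any graph.
state: GraphState = {"question": "q", "documents": [], "retries": 0}
while decide_next(state) == "web_search":
    state.update(web_search(state))
print(state["retries"], "retries before giving up")
```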

aipt
Author

Does structured output work with LLM calls using Bedrock?
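
Structured output in LangChain is a model-level feature, so it depends on the Bedrock model's tool-calling support and on your langchain-aws version rather than on this notebook. An untested sketch, assuming access to a Claude model on Bedrock (the model id below is just an example; check the langchain-aws docs for your model):

```python
# Rough, untested sketch: structured output via a tool-calling model on Bedrock.
from langchain_aws import ChatBedrock
from pydantic import BaseModel, Field


class GradeDocuments(BaseModel):
    """Binary relevance score, mirroring the grader used in the video."""

    binary_score: str = Field(description="'yes' if the document is relevant, else 'no'")


llm = ChatBedrock(
    model_id="anthropic.claude-3-sonnet-20240229-v1:0",  # example model id
    region_name="us-east-1",
)
structured_llm = llm.with_structured_output(GradeDocuments)

result = structured_llm.invoke(
    "Document: LangGraph builds stateful LLM apps.\nQuestion: What is LangGraph?"
)
print(result.binary_score)
```

If the model does not support tool calling, prompting for JSON and parsing the output is the usual fallback.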

RUSHABHPARIKH-vyey
Author

Can't wait to incorporate Mistral into Taskade in our next Multi-Agent update :)

Taskade
Author

Hey, can we implement them all together?

sergiovasquez
Author

What if the document is a CSV file? How can I do that?
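
For a CSV source, a minimal sketch using LangChain's CSVLoader ("data.csv" is a placeholder path); each row becomes one Document that can be chunked and indexed the same way as the web pages used in the video:

```python
# Minimal sketch: load a CSV so each row becomes a Document for the vector store.
# "data.csv" is a placeholder path; requires the langchain-community package.
from langchain_community.document_loaders import CSVLoader

loader = CSVLoader(file_path="data.csv")
docs = loader.load()  # one Document per row, with columns joined into page_content

print(len(docs), docs[0].page_content[:100] if docs else "no rows")
```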

luanorionbarauna