Retrieval Augmented Generation (RAG) with Data Streaming
How do you prevent hallucinations from large language models (LLMs) in GenAI applications?
LLMs need real-time, contextualized, and trustworthy data to generate the most reliable outputs. Kai Waehner, Global Field CTO at Confluent, explains how RAG and a data streaming platform with Apache Kafka and Flink make that possible.
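The RAG pattern the video describes can be sketched in a few lines: events arriving from a stream are retrieved by relevance, and the LLM prompt is grounded in that fresh context. This is a minimal, hypothetical illustration (naive word-overlap scoring, a static list standing in for a Kafka topic), not Confluent's implementation.

```python
def score(query: str, doc: str) -> int:
    """Naive relevance score: count of shared lowercase words."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k most relevant documents for the query."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Ground the LLM prompt in retrieved context to reduce hallucinations."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

# In a real deployment these records would arrive via a Kafka topic and be
# kept fresh and enriched by a Flink job; here they are a static stand-in.
events = [
    "Order 42 shipped from warehouse B at 10:05",
    "Order 42 was placed by customer Alice",
    "Warehouse B inventory restocked",
]
print(build_prompt("status of order 42", events))
```

A production version would replace the word-overlap scorer with vector similarity search and feed the assembled prompt to an LLM; the streaming platform's role is keeping the retrieved context current and trustworthy.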
RESOURCES
CHAPTERS
0:00 - What is RAG?
2:19 - Why Apache Kafka and Flink
3:40 - RAG with a Data Streaming Platform
8:54 - Use Cases
10:34 - Summary
ABOUT CONFLUENT
#GenAI #LLM #RAG #confluent #apachekafka #kafka #apacheflink #flink #cloud