Applied Generative AI with AIS: Haunting Hallucinations

In episode 5 of Applied Generative AI with AIS, Vishwas Lele hosts a special Halloween edition on taking the scare out of AI. We don't mean to spook you, but large language models can sometimes produce false or misleading information, known as hallucinations. Don't let them haunt you...

What You'll Learn:
- Hallucinations Overview: What they are, implications, and why they occur.
- Examples: Hallucination types explained with sample prompts and responses.
- Measuring: Methods for tracking hallucinations in LLMs using evaluator models and human review (a minimal judge-model sketch follows this list).
- Best Practices: How to structure prompts, use RAG patterns for grounding, and other ways to improve accuracy (see the grounding sketch after this list).
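
The episode doesn't publish its evaluation code, but the model-based measuring approach it mentions is commonly implemented as an "LLM-as-judge" check: a second model is asked whether a response is supported by a trusted source text. The sketch below is a minimal illustration under that assumption; `call_llm` is a hypothetical stand-in for whatever chat-completion client you use.

```python
# Minimal LLM-as-judge sketch for flagging hallucinations.
# `call_llm` is a hypothetical placeholder: replace it with a call to
# your own chat-completion client (Azure OpenAI, OpenAI, etc.).

JUDGE_PROMPT = """You are a strict fact-checker.

Source text:
{source}

Candidate answer:
{answer}

Is every claim in the candidate answer supported by the source text?
Reply with exactly one word: SUPPORTED or HALLUCINATED."""


def call_llm(prompt: str) -> str:
    """Demo stub; swap in a real model call in practice."""
    return "SUPPORTED"


def is_hallucinated(source: str, answer: str) -> bool:
    """Ask a judge model whether `answer` is grounded in `source`."""
    verdict = call_llm(JUDGE_PROMPT.format(source=source, answer=answer))
    return verdict.strip().upper().startswith("HALLUCINATED")
```

Judge models can err too, so periodic human review of a sample of flagged and unflagged responses is the usual complement to an automated check like this.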
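
Likewise, the RAG grounding pattern mentioned under best practices boils down to retrieving relevant passages and instructing the model to answer only from them. This is a generic sketch of that pattern, not code from the episode; the toy keyword-overlap retriever stands in for a real vector search (e.g., Azure AI Search).

```python
# Toy RAG grounding sketch: retrieve passages, then build a prompt
# that confines the model to the retrieved context.

def retrieve(query: str, passages: list[str], k: int = 3) -> list[str]:
    """Toy retriever: rank passages by keyword overlap with the query.
    A real system would use embeddings and a vector index instead."""
    q_words = set(query.lower().split())
    scored = sorted(passages,
                    key=lambda p: len(q_words & set(p.lower().split())),
                    reverse=True)
    return scored[:k]


def grounded_prompt(query: str, passages: list[str]) -> str:
    """Instruct the model to answer strictly from retrieved context."""
    context = "\n---\n".join(retrieve(query, passages))
    return (
        "Answer the question using ONLY the context below. "
        "If the context does not contain the answer, say 'I don't know.'\n\n"
        f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    )
```

Supplying retrieved context and an explicit "I don't know" escape hatch are the two levers that most directly reduce hallucinated answers.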

🔗 Useful Links:
- Marketplace Offerings: Azure Marketplace

Check Out Our Full Mini Video Series: "Applied Generative AI"

🔔 Subscribe to our channel for more updates and hit the bell icon to get notifications about our latest videos!