Combining LLMs with Knowledge Bases to Prevent Hallucinations // Scott Mackie // LLMs in Prod Con 2

// Abstract
Large Language Models (LLMs) have shown remarkable capabilities in domains such as question-answering and information recall, but every so often, they just make stuff up. In this talk, we'll take a look at “LLM Hallucinations” and explore strategies to keep LLMs grounded and reliable in real-world applications.

We’ll start by walking through an example implementation of an "LLM-powered Support Center" to illustrate the problems caused by hallucinations. Next, I'll demonstrate how leveraging a searchable knowledge base can ensure that the assistant delivers trustworthy responses. We’ll wrap up by exploring the scalability of this approach and its potential impact on the future of AI-driven applications.
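The core idea of grounding an assistant in a searchable knowledge base can be sketched in a few lines. The snippet below is a minimal, illustrative sketch only, not the implementation shown in the talk: the KNOWLEDGE_BASE contents, the naive keyword-overlap search, and the call_llm stub are all placeholder assumptions standing in for a real document store, retriever, and model client.

```python
# Minimal sketch: ground a support assistant's answers in retrieved knowledge-base articles.
# KNOWLEDGE_BASE, search_knowledge_base, build_grounded_prompt, and call_llm are illustrative names.

KNOWLEDGE_BASE = [
    {"id": "kb-1", "title": "Resetting your password",
     "text": "Go to Settings > Account > Reset Password and follow the email link."},
    {"id": "kb-2", "title": "Exporting your data",
     "text": "Admins can export workspace data from Settings > Workspace > Export."},
]

def search_knowledge_base(query: str, top_k: int = 2) -> list[dict]:
    """Naive keyword-overlap retrieval; a real system would use embeddings or full-text search."""
    query_terms = set(query.lower().split())
    scored = [
        (len(query_terms & set(doc["text"].lower().split())), doc)
        for doc in KNOWLEDGE_BASE
    ]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for score, doc in scored[:top_k] if score > 0]

def build_grounded_prompt(question: str, docs: list[dict]) -> str:
    """Constrain the model to the retrieved articles to reduce hallucinated answers."""
    context = "\n\n".join(f"[{d['id']}] {d['title']}\n{d['text']}" for d in docs)
    return (
        "Answer the support question using ONLY the articles below. "
        "If the articles do not contain the answer, say you don't know.\n\n"
        f"Articles:\n{context}\n\nQuestion: {question}\nAnswer:"
    )

def call_llm(prompt: str) -> str:
    # Stub so the sketch runs end to end; swap in a real LLM client here.
    return f"(model response generated from a {len(prompt)}-character grounded prompt)"

def answer(question: str) -> str:
    docs = search_knowledge_base(question)
    if not docs:
        return "I couldn't find anything on that in the knowledge base."
    return call_llm(build_grounded_prompt(question, docs))

if __name__ == "__main__":
    print(answer("How do I reset my password?"))
```

The key design point is that the model is asked to answer only from the retrieved context and to say "I don't know" otherwise, so the assistant's responses stay anchored to documents you can audit rather than to whatever the model recalls from pretraining.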

// Bio
Scott is a Staff Engineer at Mem, the AI-powered workspace that is personalized for you.

As one of Mem's founding engineers, Scott has played a crucial role in developing the engineering platform, working on everything from DevOps and Web Development to recruitment. Lately, his focus has been on scaling the LLM pipeline system that drives the AI workspace.

Prior to joining Mem, Scott worked as a Senior Software Engineer at Instacart, where he helped launch and scale up Instacart's Enterprise Engineering Team in Toronto.
// Comments

This is really interesting :) I'm currently studying Data Science and wanted to ask: in an LLM conversational application, which methods can we use to grant or deny access to certain (internal business) data based on an employee's level? Can we use guardrails for this as well? Thank you a lot in advance for your quick answer!

laehmo