Coze | 3 ways to reduce hallucination for your AI chatbot

If you're building a chatbot for Q&A use cases (e.g. customer service), you want your bot's answers to be as reliable as possible. Unfortunately, LLMs have a hallucination problem. On Coze, there are several ways you can configure your bot to reduce the chance of hallucination.

TIMESTAMPS
00:00 Uploading Excel files as Knowledge
03:37 Adjusting the LLM temperature
05:50 How to make the bot recall Knowledge more frequently
08:52 Summary
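
The temperature adjustment covered at 03:37 is a slider in Coze's bot settings rather than code, but the same parameter exists in most LLM APIs. As a rough illustration of the effect, here is a minimal sketch using the OpenAI Python SDK; the model name, prompts, and question are placeholders, not anything from the video. A value near 0 makes sampling close to deterministic, so the bot is less likely to improvise details:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        # Pairing low temperature with a "stick to the context" instruction
        # mirrors the video's advice of grounding answers in Knowledge.
        {"role": "system", "content": "Answer only from the provided context. If unsure, say you don't know."},
        {"role": "user", "content": "What is the refund window for international orders?"},
    ],
    temperature=0.1,  # low temperature -> less random sampling, fewer made-up details
)
print(response.choices[0].message.content)
```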


Join our communities to share your bots and connect with other builders:

Comments

Great stuff, especially the image retrieval!

michaelwu

Guys, keep it up!!! Don't stop 👋👋👋

diman

Hello, I'm working on a workflow to create a function that can search for and recommend products.
Here's the specific idea: first, build a product knowledge base containing SKU, name, price, etc. Then use a large language model (LLM) to analyze the keywords entered by the user and generate multiple synonyms. Next, use this expanded keyword set to search the knowledge base for relevant products. Finally, output the SKUs of the matching products.
However, we've encountered a challenge with the knowledge base format. While a .txt upload where each product is listed on its own line (e.g., SKU, name, price) works fine, we run into issues with the CSV table format: during the workflow run, the result returned is either 0 or null. Any suggestions or insights on resolving this?

jeyler
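
For anyone hitting the same CSV problem, here is a minimal Python sketch of the workflow jeyler describes, under a few assumptions: the knowledge base is exported as a plain CSV with a SKU,name,price header row, and the synonym expansion is hard-coded where the real workflow would call an LLM. One common cause of empty results is an Excel-exported CSV carrying a UTF-8 BOM, which silently corrupts the first header name; opening the file with encoding="utf-8-sig" sidesteps that.

```python
import csv

def expand_keywords(keyword: str) -> list[str]:
    # Hypothetical synonym expansion: in the described workflow an LLM
    # generates these; hard-coded here to keep the sketch self-contained.
    synonyms = {"laptop": ["laptop", "notebook", "ultrabook"]}
    return synonyms.get(keyword.lower(), [keyword])

def search_products(csv_path: str, keyword: str) -> list[str]:
    terms = [t.lower() for t in expand_keywords(keyword)]
    skus = []
    # DictReader needs the header row (SKU,name,price); a missing or
    # mismatched header is one common reason a CSV lookup returns nothing.
    # "utf-8-sig" strips a BOM that would otherwise turn "SKU" into "\ufeffSKU".
    with open(csv_path, newline="", encoding="utf-8-sig") as f:
        for row in csv.DictReader(f):
            name = (row.get("name") or "").lower()
            if any(term in name for term in terms):
                skus.append(row["SKU"])
    return skus

if __name__ == "__main__":
    # Assumes a local products.csv with columns SKU,name,price.
    print(search_products("products.csv", "laptop"))
```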