Why AI hallucinations are here to stay | Ep. 151

As businesses look to deploy artificial intelligence, many are concerned with ensuring the systems are 100% accurate in their responses and that "AI hallucinations," where the system appears to make up answers, are eliminated. However, there are cases where AI hallucinations can actually benefit a business. Keith chats with Ryan Welsh, Field CTO for Generative AI at Qlik, about how companies can determine the right level of accuracy for their AI needs, and whether hallucinations are acceptable in certain situations.

Follow TECH(talk) for the latest tech news and discussion!

----------------------------------

Keith Shaw

Comments

I don't know that much about AI, but it sounds to me like the basic algorithms and data structures are flawed.

jonnash

The limitations lie in the illogical structure of English. Try the much more precise Esperanto (built for AI and communication with/between robots) and see if it hallucinates.

DimitarBerberu