Why Generative AI hallucinates and gives different answers

I explain how to draw answers out of LLMs much as a detective would interrogate a witness about what really happened. I hope you find this analogy useful when you think about leveraging LLMs in your own business or context.

I am deeply grateful to my friends of over 40 years, Srinivasan Sundhararajan and K. Govindarajan. This collaborative project brought back our playfulness as we worked together across the world over Zoom every Sunday, a touch point we still keep, often taking our ideas in tangential directions. They bring back the curiosity and inquisitiveness of our younger minds, if I may say so.

We had a jolly good time running RAG experiments, writing and critiquing code, and fixing the traditional "it works on my computer" problems. Our insightful conversations have been instrumental in shaping the ideas presented in this video.

------------------
I help businesses tell effective stories for digital transformation, so they can drive results.

Comments

How do you see GPT models advancing in the construction industry?

mystudy

Great. Just great. Very pedagogically explained. Thanks.

SatishPatel

Great analogy, and it highlights my concern. Gen AI as a dog makes a lot of sense. From my own experience and bias, I am very fearful of it, because I haven't seen much evidence lately from large corporations that they would use this technology in a way that (to put it bluntly) wouldn't ruin everything. I have friends who work in AI and fully believe it has beneficial capacity and can be a useful tool. However, that is drowned out by those viral clips of executives displaying a zealous disregard for human suffering. I'm aware I am biased, and those clips are viral for a reason, so I'm willing to be talked off the anxiety ledge. Are there any guardrails in place to regulate this technology?

Zappbrannigan

On a more technical level, hallucination has a lot to do with temperature (how much variability is forced on the AI) and with RLHF, a process where the model is taught to respond in ways humans 'like'. Since large numbers of lower-paid annotators do this rating, they tend to prefer answers that sound 'confident' over ones that say 'I don't really know the answer' or 'there is too much vagueness here to answer'.

michaelnurse
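
To illustrate the temperature point in the comment above, here is a minimal sketch (illustrative, not from the video) of temperature-scaled softmax sampling. The logits are toy values standing in for a model's scores over candidate tokens; the same scores give near-deterministic output at low temperature and increasingly varied output at high temperature.

```python
import numpy as np

def sample_next_token(logits, temperature=1.0, rng=None):
    """Sample a token index from temperature-scaled softmax probabilities."""
    rng = rng or np.random.default_rng()
    scaled = np.asarray(logits, dtype=float) / max(temperature, 1e-8)
    scaled -= scaled.max()          # subtract max for numerical stability
    probs = np.exp(scaled)
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

# Toy logits for four candidate tokens; token 0 is the model's favorite.
logits = [4.0, 2.0, 1.0, 0.5]
for t in (0.2, 1.0, 2.0):
    draws = [sample_next_token(logits, temperature=t) for _ in range(1000)]
    share = draws.count(0) / len(draws)
    print(f"temperature={t}: top token chosen {share:.0%} of the time")
```

At temperature 0.2 the top token wins almost every draw; at 2.0 the distribution flattens and other tokens appear often, which is exactly the run-to-run variability the comment describes.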

Thank you for sharing your excellent insights. Your videos give me a much better perspective on using AI and deserve a much wider audience.

janzandberg