Are Hallucinations Popping the AI Bubble?


AI stocks have been dropping, leading many to believe the AI bubble has finally burst. But in this video I want to make the case that the bubble currently bursting is not that of AI per se, but that of a specific type of AI: Large Language Models.


#science #sciencenews #artificialintelligence #technews
Comments

The issue isn’t with LLMs being wrong when asked a question. The issue is with corporations pushing them into every service imaginable while *knowing that.*

stchaltin

“The ability to speak does not make you intelligent”
Qui-Gon Jinn

citywitt

One thing that people may find interesting: LLMs hallucinating isn't a bug, it's the way they work. An LLM completes patterns it hasn't seen before, using statistics it has learned from other texts. That is indistinguishable from hallucination, since the output is always made-up stuff, the best guess the LLM has about what might come next. Sometimes this hallucination is useful, because it follows rules we agree with. Sometimes it follows wrong rules, rules we disagree with. Either way, it's just the LLM hallucinating what the "correct" answer might be.

gJonii
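
To make the point in the comment above concrete, here is a minimal, purely illustrative sketch of the sampling step being described; the prompt, tokens and probabilities are made up, not taken from any real model. The model only ever produces a probability distribution over possible next tokens and draws from it; whether the continuation later gets labelled "correct" or a "hallucination", the generating mechanism is the same.

    import random

    # Hypothetical next-token distribution for the prompt "The capital of France is"
    # (made-up numbers, for illustration only).
    next_token_probs = {
        "Paris": 0.62,
        "Lyon": 0.21,
        "Berlin": 0.12,
        "Narnia": 0.05,  # low-probability, but still a possible continuation
    }

    tokens = list(next_token_probs.keys())
    weights = list(next_token_probs.values())

    # The model's "answer" is just a weighted draw from this distribution.
    choice = random.choices(tokens, weights=weights, k=1)[0]
    print("The capital of France is", choice)  # usually "Paris", occasionally not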

I liked it much better when everyone was careful with the terminology and did not mix AI with ML and expert systems.

JorgTheElder

Colin Fraser: "Solve the wolf, goat and cabbage problem."
LLM1: "Shoot the wolf."
Colin Fraser: "No!"
LLM2: "Shoot the goat."
Colin Fraser: "No!"
LLM3: "The farmer?"
Colin Fraser: "Definitely not!!"
Baby LLM4: "Shoot the cabbage. And then, the boat."

MictheEagle

4:51 "Imagine AI being able to win every logical argument. Reddit would become a ghost town overnight"
You're assuming people on Reddit use logic. That's quite a bold statement.

exapsy

I once saw a banner in a school classroom: "All human errors come from misunderstanding the laws of physics." It seems this is now also true of AI.

arctic_haze

So, LLMs have reached the level of our politicians, who blather content-free word salad, and to make them intelligent we need to teach them math and physics. Good luck with that.

jeffryborror

Although "hallucination" is a popular word to characterize unrealistic assertions from LLMs, a technically better term of art is "confabulation."

BarryKort

Regarding the maths Olympiad, the result is a bit less surprising when you consider that the problems were manually translated into Lean by humans, the model acted as the solver, and the output was then transferred back to the system to phrase it properly. Lean was run against the problem an additional time to verify the solution. Oh, and it took up to 60 hours per question, which is far longer than any human gets.

jttcosmos
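
As an aside on what "translated into Lean" means in the comment above: Lean is a proof assistant whose kernel mechanically checks formal proofs. Below is a toy illustration, nothing like an actual Olympiad problem, and the theorem name is just an example, of the kind of statement-plus-proof that Lean can verify:

    -- A trivial formal statement: addition of natural numbers commutes.
    -- Lean's kernel accepts the proof only if every step checks out.
    theorem add_comm_example (a b : Nat) : a + b = b + a := by
      exact Nat.add_comm a b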

Working in AI, I can say that it is already established that LLMs are not the alpha and omega. That debate has been put to rest. While companies keep training bigger models for commercial reasons, researchers have already shifted their attention to cognitive features: memory, reflection, planning, abstractions... It's not true that the industry is stuck with LLMs.

AlValentini

We don't have sentient AI. It's a language model with access to a huge database; it is not self-aware or alive, it's a data center.

stevensteven

As Karpathy said: hallucinations are the main output of LLMs, since they are next-word predictors. Not hallucinating is the anomaly.

Gilotopia

Thank you, Sabine, for this eye-opener. It got me thinking about how we learn to speak at an early age and only do some math later.

carlbrenninkmeijer

I just asked ChatGPT the following: "A farmer with a wolf, a goat and a Brathering must cross a river by boat. The boat can carry only the farmer and a single item. How can they cross the river without anything being eaten?"
I used "Brathering" because it looks like an English word, but isn't. It's German and means fried herring, so something the farmer might enjoy, but not the goat or the wolf.

ChatGPT gave me the same answer as shown in the video, including a mysterious cabbage.

Asking questions about the results reveals that ChatGPT knew that Brathering is a dish, that goats don't eat it, and that the whole puzzle is pointless, yet it still gave the "return with one item" answer. If asked again, ChatGPT will not speak about cabbages, but will still provide the "return with one item" answer.

julian
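
For anyone who wants to rerun this kind of probe programmatically rather than in the chat interface, here is a rough sketch assuming the OpenAI Python SDK; the model name and the exact prompt wording are assumptions for illustration, not something taken from the video or the comment.

    from openai import OpenAI

    client = OpenAI()  # expects the OPENAI_API_KEY environment variable to be set

    prompt = (
        "A farmer with a wolf, a goat and a Brathering must cross a river by boat. "
        "The boat can carry only the farmer and a single item. "
        "How can they cross the river without anything being eaten?"
    )

    # Single-turn request; the model name is an assumption, swap in whichever model you test.
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )

    print(response.choices[0].message.content)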

The status of AI currently?

We have reached the level of Artificial Idiot.

Agnemons

I am not as confident that AI won't cause serious chaos in the near future. I miss the '80s and '90s more and more every single day 😢

PaulKrista

This topic is very close to my heart. It seems that we're so facile with language that, in most cases, people can get along by using the symbol (word) without remembering that the word is not the THING. At a simple level, like "please pass the salt", language works fine, but as soon as a topic gets complex, nuances of meaning get lost in the journey from speaker to listener and, if undetected, unexpected errors occur. This is not a problem when two people are speaking a language that is not their primary tongue -- the interlocutors KNOW there's going to be slippage. But when they're both speaking their native language, they don't realize how fragile our communication system is.

I often picture the problem as using language to describe a painting or musical composition -- complete with its emotional content and historical context. Language just isn't the appropriate tool to get the experience from speaker to listener. You can also consider watching a movie as a stream of ones and zeroes and using your mind to compose what the movie actually is.

Yet words are so deceptive in their apparent clarity that we mistake the picture our minds make for the thing itself. Of course, when you see an object, it's not like the object is going into your eye. We just see reflected light and "calculate/deduce" that the thing must be a tree. We don't see the tree; we only sense light. But language allows us to say "I see a tree", and we all jiggle the concept around in order to construct a picture.

Failing to see that everything we learn, or even think, is actually just the product of emergence can cause some pretty strange results. Hence life as we know it.

dactylntrochee

Labeling something a hallucination instead of simply an error has been an excellent linguistic strategy for anthropomorphizing a cute computer program.

Refractualism

I'm a business student. For two years, everyone wrote flashy papers on AI. I could only find engineers to talk about this with. It's like business people don't want to know what they don't know. I'm a bit of the black sheep of my program, but by now I mind it less. The AI bubble bursting was, at face value, a predictable outcome. Maybe now I have grounds for people to listen to my research. I feel like I'm screaming into a corporate void. I had a bit of a crisis because of it; I was saddened, because I naively thought science was open and curious. I hope we move on to better times. Thank you for this video.

angelikaolscher