Why AI Is Incredibly Smart and Shockingly Stupid | Yejin Choi | TED

Computer scientist Yejin Choi is here to demystify the current state of massive artificial intelligence systems like ChatGPT, highlighting three key problems with cutting-edge large language models (including some funny instances of them failing at basic commonsense reasoning). She welcomes us into a new era in which AI is becoming almost like a new intellectual species -- and identifies the benefits of building smaller AI systems trained on human norms and values. (Followed by a Q&A with head of TED Chris Anderson)

Follow TED!

#TED #TEDTalks #ai
Comments

It's great that she's bringing this topic up, as surely civilisation as a whole wouldn't want AI tech to lie in the hands of a few mega corporations inside a black box. She's fighting for transparency.

gavinlew

We should always remind ourselves that GPT is a parrot-like chatbot that only synthesizes knowledge rather than producing it.

m_messaoud

This reminds me of the Einstellung Effect, where what you've learned in the past makes you blind to more obvious solutions. The AI expects that every piece of information it gets is relevant, and that's why it sees that "Oh, one quantity is associated with another quantity. Based on the data I have, I (incorrectly) infer that the two are directly proportional and that one quantity getting increased by five times means the other quantity would be increased by five times." (I did ask ChatGPT, and it did give this reasoning).

In fact, the water jug thing is very similar to the experiment used to showcase the Einstellung Effect. Participants were asked to measure out a certain amount of water using three jugs. Half of the participants were given 5 practice problems, in which the answer was always "Fill Jug B, pour from it into Jug A, then pour from it into Jug C, empty Jug C and pour into it again; the water remaining in Jug B is the amount you want." So when those participants were given values that could only be solved via simpler methods (like just adding the amounts of Jugs A and C), they were blind to them and couldn't figure it out.

You can also compare this to the famous question: "I bought a pencil and an eraser for $1.10. If the pencil cost $1 more than the eraser, how much does the eraser cost?" A surprising number of people will say 10 cents.
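The two puzzles above can be checked with a few lines of arithmetic. This is a hypothetical sketch; the specific jug sizes are illustrative values in the style of the Luchins experiment, not the ones actually used.

```python
# The drilled Einstellung rule: fill Jug B, pour off Jug A once and Jug C twice.
def practiced_rule(a, b, c):
    return b - a - 2 * c

# A practice problem (illustrative values): jugs of 21, 127, 3 units, target 100.
print(practiced_rule(21, 127, 3))  # 100

# A later problem with a simpler solution, just A + C (illustrative values):
a, b, c, target = 15, 39, 3, 18
print(a + c == target)  # True, yet "set" participants reach for the long rule

# The pencil-and-eraser question: pencil + eraser = 1.10 and pencil = eraser + 1,
# so e + (e + 1) = 1.10, giving e = 0.05 rather than the intuitive 0.10.
eraser = (1.10 - 1.00) / 2
print(round(eraser, 2))  # 0.05
```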

byeguyssry

Seems like an opportunity to change how we educate people. Fact regurgitation is now less valuable. Common sense, or better yet critical thinking, is more important than ever.

I wish Yejin had addressed ways to stop AI from running amok and how to “tag” its output to reveal the origin as inhuman (versus relying on humans to explicitly label their AI-generated material).

jaymacpherson

14:00 "ability that make hypothesis, verify by its own, make abstraction from observation" - the way we truly learn

azir

Notice how quickly we have normalized the idea of machines being able to learn. That itself is legit mind blowing. It took only a few weeks/months for humans to take for granted the current emergent "abilities" brought about by LLMs, and to scoff at imperfections. Looking forward to more advancements in this exciting field!

eelincahbergerak

0:53: 🤔 Artificial intelligence is a powerful tool but still has limitations and challenges.
3:44: 🤔 AI passing the bar exam does not mean it is robust at common sense.
6:43: 🤖 Common sense is crucial for AI to understand human values and avoid harmful actions.
10:00: 📚 AI systems are powered by specialized data crafted and judged by human workers, which should be open and publicly available for inspection.
13:47: 🤔 The speaker discusses the idea of common sense and the limitations of current language models in acquiring it.
Recapped using Tammy AI

Eric-zowo

Inspiring talk that makes people think about what else may be needed on top of "scale is all you need" - e.g. letting AI actively make its own hypotheses (the scientific hypotheses we learned about in high school) and verify them itself.

tommytao

I just tried your examples in GPT-4 today, and it basically got them all right now.

brandodurham

I think one problem is that these models have only been fed data consisting of text and images. When these models are allowed to see the real world, they will develop a much better common-sense understanding of their surroundings.

mahmedmaq

Absolutely incredible talk, and a gateway to even more advanced reasoning towards intelligence amplification!

anakobe

Good to see someone point out the limitations of large language models. I feel like it's kind of overhyped right now because investors need to find something new to pump and dump. The fear of AI also adds to this inflation of what is starting to look like the next bubble.

makendoo

I thoroughly enjoyed this talk. I can't quite find the words - just, yes, please, thank you for saying all this out loud on stage. It's needed. 🙏🏻

samanthamckay

The examples she gave are extreme cases from the early days. I asked the same question of GPT-3.5 and got a much smarter answer, like: "If you can dry one cloth in 2 hours, and you have 12 cloths, it will take you 2 hours to dry each cloth separately, which means it will take a total of 24 hours to dry all 12 cloths.

However, if you can dry multiple cloths simultaneously, you can reduce the time required to dry all 12 cloths. For example, if you can dry 3 cloths at a time, it will take you 2 hours to dry 3 cloths, so you would need to repeat the process 4 times to dry all 12 cloths. ……
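The batching arithmetic GPT-3.5 describes above can be sketched in a few lines (a minimal illustration, assuming each batch takes the full 2 hours regardless of how many items fit in it):

```python
import math

def drying_time(items, per_batch, hours_per_batch=2):
    """Total drying time when only `per_batch` items can dry simultaneously."""
    return math.ceil(items / per_batch) * hours_per_batch

print(drying_time(12, 1))   # 24 hours, one at a time
print(drying_time(12, 3))   # 8 hours: 4 batches of 3
print(drying_time(12, 12))  # 2 hours, all at once
```

Of course, the commonsense answer in the talk is the last case: clothes drying in the sun all dry in parallel.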

martinli

I have used ChatGPT for some incredibly complicated questions, and it is mostly correct as long as I am precise and careful with my question, but I was astonished when I gave it the 30 shirts question and it said 30 hours! However, after I told it why that was wrong, it accepted the reasoning and then gave me the correct answer.

Greguk

Excellent talk, great examples and analogies, and I couldn't agree more with the message.

junka

0:16
0:56
1:16
2:57
5:32
10:42
13:01

lynn

Enjoyed the talk overall quite a bit. Thank you!
What I don't quite understand is the focus on "common sense" which as correctly quoted isn't common at all. The very fact that it is not common and LLMs are trained on all the knowledge publicly available implies that, given that it is a statistical language model at it's core, it will give you the distillation of uncommon common sense of humanity. Depending on your phrasing and wording precision it will provide answers ranging from the correct solution, something that is correct but wildy inefficient, something that is almost correct to a completely misunderstood incorrect answer.

Let's just agree that the current challenge with LLMs is to teach them correct reasoning.

the_curious

As someone who once obsessed over IQ tests as a measure of intelligence, I think I can safely say that being able to pass some sort of test isn't equivalent to being smart; it just means you know how to answer specific questions.

That's why I was confused by the obsession of OpenAI over GPT-4's ability to pass different tests. Of course it could pass a college exam, you fed it the answers!

michaelh

Common sense is the ability to analyze various outcomes and decide which outcome best suits a given need or satisfaction. How does one teach a computer 'satisfaction'?

ThisOldMan-ya