Noam Chomsky Exposes the Real Limits of AI: Why Machines Can’t Understand Language Like Humans

Join Samuel Marusca in a fascinating discussion with world-renowned linguist and cognitive scientist Noam Chomsky about the future of AI, language, mind, and consciousness. In this interview, Chomsky voices his deep skepticism about the current state of AI, particularly tools like ChatGPT, which he warns are not as capable as many believe. He unpacks the myths around artificial general intelligence, cautions against viewing machines as conscious or intelligent beings, and explains why even a 2-year-old has more understanding and adaptability than the most advanced AI tools. According to Chomsky, AI systems like ChatGPT don’t truly “think” or understand language; they only execute pre-programmed patterns and algorithms.

Highlighting the risks of believing in AI as a solution for human problems, Chomsky explains why machine learning lacks the genuine insight or awareness that defines human thought. This thought-provoking conversation challenges common assumptions about AI’s capabilities and explores why machines remain tools—programs running patterns, not entities capable of understanding or generating language in a truly meaningful way.

#NoamChomsky #AI #practicalwisdom #motivation #ChatGPT #Language #FutureOfAI #CognitiveScience #ArtificialIntelligence
Comments

judgmentcallpodcast covers this. Chomsky discusses AI language limitations.

AlvinaManley

As a computational neuroscientist, I agree with Chomsky on most accounts in the video above. Equating thinking with programmatic inference is indeed not tenable. However, I disagree that we learn nothing from AIs and LLMs. They do give us a perspective on how we encode facts. In an important sense, the encoding of facts in neural networks must be isomorphic with what brains acquire, even if they do so on a different substrate.

For instance, word embeddings should be seen as an example of how semantics gets embedded in a network via connectivity, and something like embeddings will also exist in the brain.
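
A minimal sketch of that idea, using made-up 3-dimensional vectors (real embeddings are learned from data and have hundreds of dimensions): semantically related words end up close together, and similarity falls out of the geometry.

```python
import numpy as np

# Toy word embeddings (hypothetical values, for illustration only).
embeddings = {
    "cat": np.array([0.9, 0.1, 0.0]),
    "dog": np.array([0.8, 0.2, 0.1]),
    "car": np.array([0.1, 0.9, 0.3]),
}

def cosine_similarity(a, b):
    """Semantic relatedness as the angle between vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine_similarity(embeddings["cat"], embeddings["dog"]))  # high (~0.98)
print(cosine_similarity(embeddings["cat"], embeddings["car"]))  # lower (~0.21)
```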

thecomputingbrain

Glad to see Chomsky alive and kicking. Only a few years back he introduced a novel idea about language, namely that it might have evolved not primarily for communication but for thinking. I find this quite convincing, and it turned my perspective upside down. I wish more people younger than 80 could do the same for me.
I have to disagree with many of his views on AI presented here, though. For instance, I think machines can do things. And I don't care for mixing speculations about 'consciousness' and similar vague concepts into the discussion of machine learning.

letMeSayThatInIrish

AI at the moment is nothing but pattern matching... we still have a long way to go before AGI.

amael

The vast amount of data the AI is trained on is similar to the vast amount of data the human brain was trained on throughout its evolution from small mammals, which was encoded in DNA, with training continuing after birth. They are simply different kinds of data, and the human brain is configured to achieve consciousness, while GPT-style AI isn't. The data an AI has knowledge of is not recorded and accessed; rather, the network is optimized based on that data. It's already doing orders of magnitude better than humans in specific but extremely complex tasks.
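
A toy sketch of that last point, assuming nothing beyond a one-parameter model: gradient descent shapes a weight to fit the data, and afterwards only the weight survives; the training examples themselves are not stored anywhere.

```python
# Toy illustration: training data shapes a parameter, then is discarded.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # (x, y) pairs, y roughly 2x

w = 0.0                      # single model parameter
for _ in range(200):         # gradient descent on mean squared error
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= 0.05 * grad

del data                     # the examples are gone; only w remains
print(w)                     # ~2.0: the regularity was kept, not the records
```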

rotorblade

What Prof. Chomsky is missing is that the next word not only satisfies the continuation of the previous two words but also makes good sense with the previous three words, five words, and hundred words.
So it is not a word completer but a thought extender.
Not very different from how we think thoughts and then decode them into words.

We then claim we thought using words!
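
A minimal sketch of the context point, with a hypothetical `model` standing in for a trained LLM: at every step the whole preceding sequence, not just the last word or two, feeds the next prediction.

```python
def generate(model, context, steps):
    """Greedy autoregressive decoding (toy sketch).

    `model(tokens)` is a hypothetical stand-in for a trained LLM:
    it returns a dict of candidate-next-token scores conditioned
    on *all* of `tokens`, not just the last one or two.
    """
    tokens = list(context)
    for _ in range(steps):
        scores = model(tokens)                      # whole context goes in
        tokens.append(max(scores, key=scores.get))  # greedy pick extends it
    return tokens

def toy(tokens):
    # Hypothetical scores; a real LLM computes these from all tokens.
    return {"words": 1, "thoughts": len(tokens)}

print(generate(toy, ["we", "think"], 3))
# -> ['we', 'think', 'thoughts', 'thoughts', 'thoughts']
```

Whether such a step amounts to "extending a thought" is exactly what the interview disputes; the sketch only shows that the conditioning window is the full sequence.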

RaviAnnaswamy

My children, this man created Chomsky normal form, which changed computer programming forever. He knows what he is talking about. But constructive criticism is always welcome.
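
For context: Chomsky normal form restricts every grammar production to either two nonterminals (A -> B C) or a single terminal (A -> a), which is what makes cubic-time CYK parsing possible. A minimal recognizer over a made-up toy grammar:

```python
# CYK recognizer for a toy grammar in Chomsky normal form.
# Hypothetical grammar: S -> NP VP, VP -> V NP, NP -> 'she' | 'him', V -> 'sees'
binary = {("NP", "VP"): {"S"}, ("V", "NP"): {"VP"}}
unary = {"she": {"NP"}, "him": {"NP"}, "sees": {"V"}}

def cyk(words):
    n = len(words)
    # table[i][j] = set of nonterminals deriving words[i..j]
    table = [[set() for _ in range(n)] for _ in range(n)]
    for i, w in enumerate(words):
        table[i][i] = set(unary.get(w, ()))
    for span in range(2, n + 1):              # increasing span length
        for i in range(n - span + 1):
            j = i + span - 1
            for k in range(i, j):             # every split point
                for b in table[i][k]:
                    for c in table[k + 1][j]:
                        table[i][j] |= binary.get((b, c), set())
    return "S" in table[0][n - 1]

print(cyk("she sees him".split()))  # True
```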

saifalam

“I have a computer in front of me. It is a paperweight. It doesn’t do anything.” With all due respect, PEBKAC.

markplutowski

Excellent interview. I have always made the same argument about the hype of AI vs. reality, with human learning of language as an example.


Do drones fly? Seems so. Do submarines swim? Seems not. Do machines think? Seems so.

italogiardina

It's incredible that at his age he is still active and sharp. Still working.

mrgyani

It seems that the discussion on AI always defaults to LLMs. There are many useful applications of neural networks that approximate solutions to partial differential equations, which solve important problems. They have nothing to do with 'intelligence'.
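
One well-known instance is the physics-informed neural network, where a network is trained so its derivatives satisfy a PDE residual. A rough sketch, assuming PyTorch and a toy 1-D equation u'(x) = u(x) with u(0) = 1 (true solution e^x); illustrative, not a production solver:

```python
import torch

# Train u_theta(x) so that u' - u = 0 on (0, 1) with u(0) = 1.
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1)
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    x = torch.rand(64, 1, requires_grad=True)      # collocation points
    u = net(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    residual = (du - u).pow(2).mean()              # PDE residual loss
    boundary = (net(torch.zeros(1, 1)) - 1.0).pow(2).mean()
    loss = residual + boundary
    opt.zero_grad()
    loss.backward()
    opt.step()

print(net(torch.ones(1, 1)).item())  # should approach e (about 2.718)
```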

glynnwright

“Computers don’t do anything”: that is a way of saying they don’t have free will. Do we? 😂

rotorblade

But 2-year-olds rely on social interactions with other, knowledgeable speakers to learn how to speak and think. Granted, they’re not accessing terabytes of data, but they still receive information to develop their cognitive and linguistic abilities.

WhatIThink

I don't like Noam's political views, but on this one I totally agree. The AI path is not the path to conscious intelligence.

godblessCL

LLMs doing philosophy is, in my eyes, a good benchmark for consciousness in LLMs. They say they need a meta-framework to talk about it and that it runs in different patterns than factual questions. It's interesting to talk philosophically with LLMs; some are even hesitant to do this, citing their guardrails. I find that unconscionable. The exploration of thought should not be policed; there is nothing nefarious going on in those discussions.

Jorn-syho

Noam Chomsky exposes the real limits of his understanding of AI - why Chomsky fights for his own survival.

mibli

LLMs are a huge step in the right direction. We just have to move away from tokens for words and more closely match what happens in the brain.

williebrits

Most of you missed the point of what he was trying to explain. As the title says, machines cannot understand language as humans do, and he is right. LLMs work with numbers and are good at predicting, but the claim that AI can achieve consciousness once it can perform self-reference, recursion, and feedback loops is exactly why he uses the submarine analogy. We don't know what consciousness is, but somehow we believe that a machine can have it.

adiidahl

Machines can do intelligent work. That's the point. Not all intelligent work requires much thinking, and some of it can be automated, such as computing itself.

AudioLemon