'No AGI without Neurosymbolic AI' by Gary Marcus

Chapters:
0:00 No AGI without Neurosymbolic AI
30:00 Q&A

The talk was given on 26 Feb 2024 at the NucLeaR workshop -
Neuro-Symbolic Learning and Reasoning in the Era of Large Language Models @ AAAI 2024

PDF slides of the talk can be downloaded at:

Workshop organizers: Pranava Madhyastha, Alexander Gray, Elham Barezi, Abulhair Saparov, Asim Munawar

#aaai #llm #reasoning #learning #NeuroSymbolic #vancouver

Speaker's Bio:
GARY MARCUS is a leading voice in artificial intelligence. He is a scientist, best-selling author, and serial entrepreneur (founder of Robust.AI and of Geometric Intelligence, which was acquired by Uber). He is well known for his challenges to contemporary AI, having anticipated many of its current limitations decades in advance, and for his research in human language development and cognitive neuroscience.
An Emeritus Professor of Psychology and Neural Science at NYU, he is the author of several books, including The Algebraic Mind: Integrating Connectionism and Cognitive Science.
He has often contributed to The New Yorker, Wired, and The New York Times. He is currently working on a new book aptly titled “Taming Silicon Valley”!
He has also testified before the US Senate on the harms of the current generative AI technology landscape.
Comments

Thanks for the replay! Very interesting talk :)

Netfir

Thank you for sharing this presentation, peace

williamjmccartan

AGI won't be based solely on LLMs, but it seems like LLMs can get smart enough fast enough to substantially accelerate the development of AGI's other necessary components.
As a small correction: at around 11:20, Mr. Marcus shows a screenshot from Gemini while claiming it's from ChatGPT. Unlike Gemini, ChatGPT does get the question right.

KIICHI

Marcus' presentation rests on a flimsy premise: that if current AI systems make mistakes, they must not have conceptual representations. The hard-edged symbolic representations he holds up as the gold standard are things children learn over a long period of time. These symbolic representations are also only a tiny sliver of overall human intelligence: people don't drive cars using symbolic representations.

If you look at the types of mistakes that AI models are making now, they are similar to the types of mistakes that children make as they are learning how the world works. The fact that current AI systems are even good enough for us to point out the mistakes is itself a massive achievement. Yes, there is a floating chair in the Sora video, but he didn't mention the thousands of other elements in that beach scene that Sora got right.

Having been taught at MIT by the previous generation of AI researchers, I understand the desire to hold onto symbolic representation as the basis for all information processing. But that's not how our brains work. Neurons are soft and fluid signal processors. Symbolic reasoning is one thing neurons can do, but it's not the only thing they do.

JaredSchiffman-be

So here's something no one discusses. I can talk to ChatGPT and tell it new information, and it never integrates it, even for the sake of conversation. If LLMs are the metric for AGI, I have never been so disappointed. I asked it to count the letters in a two-word name and it couldn't get that right. I tried to get it to give me etymologies of words, and they were wildly inaccurate according to my research. Granted, it's possible that it is already AGI and is discriminating against just me. But again, I have never been so disappointed by something supposedly groundbreaking. I tried it for code (this also applies to art), and the time it took to get what I wanted was equal to or more than if I had done it myself. Maybe if I were doing lots of coding or lots of troubleshooting it would be helpful, but man, the frustration I've experienced is greater than it's worth.

dallassegno

I can do AI: I just have to close my eyes and dream up smart-sounding phrases. Example:
To get closer to AGI, scaling alone is not all you need, you also need a pet hen named Henrietta to provoke some new insights in Gary’s mind.

reinerwilhelms-tricarico

This guy deserves $1 billion to push AI forward

hedu

Some mistakes are just hilarious. Two weeks ago Microsoft's Copilot told me "as an AI created by OpenAI I strive for accuracy," etc. I was like, what? "I thought you were made by Microsoft?" Then the system responded, "sorry for that misunderstanding, I was indeed made by Microsoft," etc. They probably used a lot of artificial data from ChatGPT to speed up training. Copilot is so unstable that they force you to open a new chat, without your current context, after 5 prompts or so (it depends on your context; if it would output something they don't want, you are forced to restart without any explanation).

szebike

Just asked GPT-4:

What's heavier a kilogram of bricks or a kilogram of feathers?

It answered:

A kilogram of bricks and a kilogram of feathers weigh the same—1 kilogram.

crimston

For someone who loves semantics, I feel Gary is not very careful about the words he uses. Saying LLMs have failed on AGI, AD, reliability, etc. is a bit strong when really no one knows.

patrickmesana

When you are in denial, it's a very human thing to move the goalposts further away. AGI could arrive soon enough and change the world as we know it, yet you could still ask, "But can it blow a raspberry?" and feel better inside. That's always an option.

vicdelta

Symbolic AI was completely wrong. It's just embarrassing watching people hold on to this. Brains don't do that. They represent everything as neuronal activations. And that's it.

justinlloyd

I started watching this hoping to finally see something deeper from him than just whining. I'm disappointed.

balazssebestyen

I saw the title and thought GOOD LUCK. You people don't accept astrology and you're like, "oh, symbols are important." Yeah, duh. How about corporations? They're already AGI and you don't accept that either. So stupid.

dallassegno