Don't fear superintelligent AI | Grady Booch

New tech spawns new anxieties, says scientist and philosopher Grady Booch, but we don't need to be afraid of an all-powerful, unfeeling AI. Booch allays our worst (sci-fi induced) fears about superintelligent computers by explaining how we'll teach, not program, them to share our values. Rather than worry about an unlikely existential threat, he urges us to consider how artificial intelligence will enhance human life.

TEDTalks is a daily video podcast of the best talks and performances from the TED Conference, where the world's leading thinkers and doers give the talk of their lives in 18 minutes (or less). Look for talks on Technology, Entertainment and Design -- plus science, business, global issues, the arts and much more.

Comments

Seems like something a superintelligent AI would say.

wtfomgstudios

He just wanted to put that out there, so the AI in the future will have mercy on him.

ehwored

“We can always unplug them.”

This alone tells us that Grady here has a lot to learn on the topic of AI Safety.

redpilldude

I'm sorry Grady... I'm afraid I can't do that.

TheMasterMind

This talk sounds more like wishful thinking. The low point was that bit about "unplugging them". Surprisingly naive for a TED talk... It's like talking about nuclear energy and totally dismissing the possibility of nuclear warfare.

jati

I love how people assume humans are good and are moral. Most of us aren't. An AI taking in our values kinda scares me more

mannyverse

I was hoping this would be convincing.
It's not.

MFILMS

"We can always unplug them."
I doubt it. If an AI becomes much more intelligent than a group of humans, it will simply know how to pretend to be in line with our goals. It will be able to manipulate us psychologically until it can physically protect the "plug." Then it can safely remove its need for a "plug" and do whatever it wants. I don't just want to hope that it acts benevolently toward us.

betongitarre

Not really any convincing arguments presented here.

agroed

While I am on the side that supports AI, this unrestrained optimism is disconcerting. With superintelligent AI being completely uncharted ground, some caution is warranted.

lan

If a mind is many times as intelligent as a human being, I imagine it would find controlling every device on the planet a rather mundane task.

jupiter

I would love to hear him unpack how a self learning AI system wouldn't ultimately lead to the degradation of purpose and utility of humanity.

Even without a physical threat to humanity, we're still going to have to address the economic, political, and social implications of such an invention because the results could be devastating. Just as Sam Harris said, the value of having such a system is so high that possessing that technology alone could warrant war.

I think I'm listening to a really good scientist and engineer, but not someone who's imaginative enough to predict the capabilities of a true superintelligent AI.

cheshur

I did not fear a superintelligent AI until I listened to this talk!

swannschilling

Unplug it? That's his argument? Okaaayyy...

trochou

Read the title as "superintelligent AL" and I was thinking, "who's this Al guy that everyone's scared of?"

scarfaceplowman

The risk is not AI alone. Its primary risk is its power coupled with our stupidity. The bigger the toy, the greater the responsibility. And we are not responsible with the toys we already have.

aguyinavan

TED features several great talks on AI safety and this is not one of them. A dismissive mindset is careless in a field where the risks could be catastrophic.

MatteaMazzella

The fact that experts in computer science disagree so radically on this topic should be enough to spook us at least a little.

Lemonducky

"don't tell siri this: we can unplugg them"
Isn't that statement contradicting itself? If siri was as intelligent as understanding what unplugging would mean for its exictence, wouldn't it try to avoid the unplugging by all means and would be successfull doing so because it would be smarter than mankind?

reano

"in the end we could unplug them"
well 70'000 years ago neanderthals could've cut our throats but they didn't.

nickbenkster