Will Superintelligent AI End the World? | Eliezer Yudkowsky | TED

Decision theorist Eliezer Yudkowsky has a simple message: superintelligent AI could probably kill us all. So the question becomes: Is it possible to build powerful artificial minds that are obedient, even benevolent? In a fiery talk, Yudkowsky explores why we need to act immediately to ensure smarter-than-human AI systems don't lead to our extinction.

Follow TED!

#TED #TEDTalks #ai
Comments

Eliezer: We are all going to die!
Audience: 😅

phillaysheo

Audience is laughing. He isn't laughing, he is dead serious.

TheDAT

“Humanity is not taking this remotely seriously.”

*Audience laughs*

Bminutes

I keep getting 'Don't Look Up' vibes whenever the topic of the threat of AI comes up.

EnigmaticEncounters

Not shown in this version: the part where Eliezer says he'd been invited on Friday to come give the talk, less than a week before he gave it. That's why he's reading from his phone.
Interestingly, I think the raw nature of the talk actually helped.

kimholder

"I think a good analogy is to look at how humans treat animals... when the time comes to build a highway between two cities, we are not asking the animals for permission... I think it's pretty likely that the entire Earth will be covered with solar panels and data centers." -Ilya Sutskever, Chief Scientist at OpenAI

Michael-eivy

Surprised he didn’t bust out this old chestnut: “The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.”

Tyler-zfgj

Regardless of whether Yudkowsky is right, the fact that many in the audience were **laughing** at the prospect of superintelligent AI killing everyone is extremely disturbing. I think people have been brainwashed by Hollywood's version of an AI takeover, where the machines just start killing everyone, but humanity wins in the end. In reality, if it kills us, it won't go down like that; the AI would employ stealth in executing its plans, and we wouldn't know what was happening until it was too late.

dereklenzen

Imagine a team of sloths creates a human being to improve their sloth civilization. They would try to confine him in a cell so he doesn't run away. They wouldn't even notice that they had failed to contain the human the instant they made him (let's assume he's an adult male), because he's faster, smarter, and better in ways they cannot even imagine. And yet sloths are closer to humans, more familiar in their DNA, than any general intelligence could ever be to us.

gregtheflyingwhale

By the time we figured out, if at all, that AI had deemed us expendable, it would have secretly put 1,000 pieces into play to seal our doom. There would be no fight. Pitted against a digital superintelligence that is vastly smarter than the whole of humanity and can think a million times faster, it's no contest. All avenues of resistance would be neutralized before we even knew we were in a fight. Just like the world's best Go players being completely blindsided by the unfathomable strategies of AlphaGo and AlphaZero: they had no idea they were being crushed until it was too late.

tobleroni

I've always been very skeptical of Yudkowsky's doom prophecies, but here he looks downright defeated. I never realized he cared so deeply, and seeing him basically admit that we're screwed filled me with a sort of melancholy. Realizing that we might genuinely be destroyed by AI has simply left me depressed. I thought I'd be scared or angry, but no. Just sadness.

windlinkeverable

I think some people expect something out of a movie. In my opinion, we wouldn't even know until the AI was 100% certain it would win; I believe it would almost always choose stealth. I have two teenage sons, and the fact that people are laughing makes me sad and mad.

mathew

Eliezer had only four days to prepare the talk. It actually opened with: "You've heard that things are moving fast in artificial intelligence. How fast? So fast that I was suddenly told on Friday that I needed to be here. So, no slides, six minutes."

MikhailSamin

The audience laughing reminds me of the film "Don't Look Up", but instead of an asteroid it's AI

mav

He's not just talking about the deaths of people a thousand years in the future. He is talking about YOUR death. Your mum's. Your son's. The deaths of everyone you've ever met.

dlalchannel

To all the “he’s just another guy ranting about some apocalypse”:

You’re making a category error. You’ve seen all of those crazies screaming about how the end is coming “because my book says so”, “just look at the signs”, etc. and you’re putting him in that same bucket.

They tell you about how something that is utterly unlike anything in our history is going to happen for no good reason but because they said so. And “oh by the way, buy my book; give me money.”

This man is saying, "look at the data", "look at the logic", "look at the cause and effect", "look at how I'm predicting this to go exactly the way it has always gone in this situation." Ask the Neanderthals and the woolly mammoths. This is a man who just told you, "I've done everything I can to stop it. I've failed. I need your help. Tell your politician to make rules so we don't all die."

This is a man who will gain no financial benefit from this. He’s not asking you to join his religion. He’s not asking you to give him money. He’s begging you to save everyone.

Now take into consideration that thousands of the smartest people in the world, many of the very people who have helped to build this exact technology, are all saying that there is a good chance that EVERYONE WILL DIE!

Don’t look at it as a statistic. This isn’t everyone else dying. This is YOU and everyone you love dying. Your children, your friends, everyone. And everything else on this planet. And maybe everything on every planet in every galaxy near us.

If you wouldn’t put your child on a plane that had a 1 in 100 chance of crashing (instead of 1 in 1,000,000), then you sure as heck shouldn’t put our entire planet on that plane. And it isn’t 1 in 100; I’d say it’s more like 80% given the current state of the world.

He’s not the latest doomsayer. He’s Dr. Mindy from the movie Don’t Look Up begging someone to just look at the data and the facts.

DeruwynArchmage

He's right: folks in Silicon Valley dismiss the notion. I know several tech billionaires personally who make light of the idea. These are guys who would know the science better than anyone.

wthomas

Why are people laughing? This isn't funny; this is real life, folks. Dystopian novelists predicted this ages ago. How do we live in a reality in which the Matrix franchise exists and no one who mattered saw this coming?

Alainn

I think regular people have a hard time understanding the difference between narrow AI and artificial general intelligence. Most people are not familiar with the control problem or the alignment problem. You won't convince anyone about the dangers of AGI because they don't want to reason abstractly about something that hasn't arrived yet. Except this is the one scenario where you absolutely have to make the abstraction and think 2, 3, 10 steps ahead. People are derisive about anyone suggesting AI could be an existential risk for mankind, partly because of this need people have to always be the stoic voice of reason, declaring that anyone urging precautions is catastrophizing.

If you try to explain this to anyone, all they can invoke in their minds is Terminators, I, Robot, Bicentennial Man: movies and books where AI is anthropomorphized. If we think about an AI takeover, it's usually in Hollywood terms, and in our self-importance we dream ourselves into this battle with AI in which we are the underdog, but still a somewhat worthy and clever opponent. The horror is not something that maliciously destroys you because it hates you. I don't think most people are in a position to wrap their heads around the idea of something that is dangerous because it's efficient and indifferent to anything you value, not because it's malicious.

Metathronos

A simple answer to the question "Why would AI want to kill us?": intelligence is about extending future options, which means it will want to utilize all available resources, starting with Earth's. We will suddenly become the unwanted ants in its kitchen.

sahanda