Roman Yampolskiy: Dangers of Superintelligent AI | Lex Fridman Podcast #431

Roman Yampolskiy is an AI safety researcher and author of a new book titled AI: Unexplainable, Unpredictable, Uncontrollable. Please support this podcast by checking out our sponsors:

TRANSCRIPT:

EPISODE LINKS:

PODCAST INFO:

OUTLINE:
0:00 - Introduction
2:20 - Existential risk of AGI
8:32 - Ikigai risk
16:44 - Suffering risk
20:19 - Timeline to AGI
24:51 - AGI Turing test
30:14 - Yann LeCun and open source AI
43:06 - AI control
45:33 - Social engineering
48:06 - Fearmongering
57:57 - AI deception
1:04:30 - Verification
1:11:29 - Self-improving AI
1:23:42 - Pausing AI development
1:29:59 - AI Safety
1:39:43 - Current AI
1:45:05 - Simulation
1:52:24 - Aliens
1:53:57 - Human mind
2:00:17 - Neuralink
2:09:23 - Hope for the future
2:13:18 - Meaning of life

SOCIAL:
Comments

Here are the timestamps. Please check out our sponsors to support this podcast.
0:00 - Introduction & sponsor mentions:
2:20 - Existential risk of AGI
8:32 - Ikigai risk
16:44 - Suffering risk
20:19 - Timeline to AGI
24:51 - AGI Turing test
30:14 - Yann LeCun and open source AI
43:06 - AI control
45:33 - Social engineering
48:06 - Fearmongering
57:57 - AI deception
1:04:30 - Verification
1:11:29 - Self-improving AI
1:23:42 - Pausing AI development
1:29:59 - AI Safety
1:39:43 - Current AI
1:45:05 - Simulation
1:52:24 - Aliens
1:53:57 - Human mind
2:00:17 - Neuralink
2:09:23 - Hope for the future
2:13:18 - Meaning of life

lexfridman

I've been a first responder for the last few decades. One of the rules of my profession, especially when dealing with life and death, is to expect and always be prepared for the worst-case scenario and to mitigate risks as much as possible. This man understands that concept. I love Lex's optimism, but in some situations optimism can be very, very dangerous.

oldandtired

Lex: "On a more mundane note, how do you spend your weekends?"
Roman: "I have a paper about that"

matteofalduto

This is what disagreements look like in a perfect world. I wish all differences of opinion were discussed this calmly.

NDSOart

I think that before we reach superintelligent AGI we'll reach a darker, more oppressive state of technofeudalism that can best be summarized by this amazing quote from the first Dune book: “Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them.”

johnofardeal

Lex: *"What gives you hope"*

Roman: *"That I might be wrong. I could be. I've been wrong before".*

simesaid

“What gives you hope?”
“I could be wrong.”
lol

sanjin

We have messed up in a huge number of ways across different fields of science, and we will 100% mess up when AI becomes more significant.

eduardobenassi

It's difficult to out-calm and outsmart Lex... Kudos, Roman!

Ceredir

1:44:20 X-Risk deniers always start by saying machines will never take over, then fall back to finding comfort in the fact we'll likely be kept as zoo animals. Every single time.

BrunoPadilhaOficial

The best-prepared and calmest interviewee.
He lives and breathes his craft.
Will be reading all his work.
Great interview.
Appendix = vestigial organ

connic

I think Lex gives too much credit to humanity

salasart

This conversation feels like it's on a loop.

AIisallyouNeed-ofku

I'm an optimist, for sure. But we can't really argue with the question: can you find an example in nature where a far less advanced civilization/system is controlling an extremely advanced one?

mattwesney

This man looks like he knows when, where, and how I die

marywimmer

A MUST WATCH! Loving it! 1:02:29: "I for one, welcome our overlords!" Got to know about Roman Yampolskiy from this talk and now he is my favorite guest! <3 Thanks <3

taniaoyarzo

This entire podcast doesn't even cover the most likely near-term negative outcome: that this technology is held by a small group of people who will gain control and influence over the world in a way we have never seen before. Even without a superintelligent system, misuse by the initial developer teams is nearly guaranteed. We are already seeing the writing on the wall for large-scale unemployment.

LeonTGBU

Thanks, Lex. And sometimes I fear you overestimate the good in people.

edgar

Roman's thought process, as well as his concerns, is grounded in so much human experience despite the field being new. He speaks to what he knows, and you can't fault him. He also preps us not to assume anything or be naively optimistic. Thank you, Lex, for such quality sessions and guests.

mydhe

Out of all the alien theories out there, the last one I would've thought to play out is the one we seem to be on the path to creating. The fact that we are building a black-box AI and then have to interview it to figure out just how capable it is, followed by feeding it more of our collective knowledge and iterating on seeing how smart it is, feels like we are building an alien and then trying to dissect its utterly foreign biology to figure out whether or not it is an enemy. I'm so fascinated and frightened that this is the story we are in right now. Like wtf. This guy makes such strong points that what I really want to see is a live discussion between him and an optimist of equal intelligence and understanding.

coole