P(doom): Probability that AI will destroy human civilization | Roman Yampolskiy and Lex Fridman

Please support this podcast by checking out our sponsors:

GUEST BIO:
Roman Yampolskiy is an AI safety researcher and author of a new book titled AI: Unexplainable, Unpredictable, Uncontrollable.

PODCAST INFO:

SOCIAL:
COMMENTS:


First of all, we are talking about an intelligence gap far greater than the gap between monkeys and humans. A monkey would have zero concept of all the ways humans could exterminate them or make them tame/harmless, and yet here we are trying to protect ourselves from all the ways this intelligence might end us? We are out of our league here, and if we ever create this intelligence, humanity's fate will be determined the minute it comes online. Also, let's remind ourselves: AI will be essentially eternal. Its time frame to complete its goals could be thousands of years, if not more. We will never know it is working toward our destruction until it is already done.

Zman

It is naive to think that terrorists armed with nuclear weapons would not murder as many of their enemies as possible. Come on Lex, you're smarter than that.

josephhorswell

Our first real encounter with AI - so-called 'curational' AI, or social media - has not been good. In the space of a few short years, it has polarised us more and more, made us angrier, misinformed us, had a disastrous impact on the democratic process... and made us into hopeless, pathetic, dependent addicts of the glass rectangle. The more we've allowed it to do things for us, the more power we've ceded to it, the more dependent it's made us. Take it away and we're lost. Helpless. Hopeless. And that's just frivolous crap like TikTok, X and Snapchat: the controlling and oppressive mechanisms that we love so much, as Huxley foresaw we would. We can't get enough of it! And the more we have of it, the more we want.... and the more it takes from us: our data, our privacy, our sanity. So, what hope do we have against super AI?

MartianTom

Lex has lived such a sheltered life that he has no idea there are people out there who just wish they could kill far more people than they already do.

ConfucianScholar

15:07 Lex frustrates me sometimes. He's clearly a very intelligent guy, but he exhibits a lot of naivety regarding the darkness of human beings. He says that people throughout history who committed mass killings/genocides didn't necessarily do it to cause maximum suffering or deaths; he's saying that most dictators or terrorists did the bad thing because, in their minds, they were doing a good thing. That is absolute nonsense. Of course in their twisted, evil minds they think they're doing a good thing, and part of doing that good thing is making the people they hate suffer. Hitler wanted to cause horrific amounts of suffering and slowly exterminate an entire race of people in the most humiliating and evil ways imaginable. He wouldn't have stopped. It's similar with other ethnic cleansing events throughout history. I feel like Lex really tries to see the best in humankind, and that's nice... But with the right resources and without resistance, men are capable of horrendous acts of evil and will not stop. Our history is a litany of man causing suffering to man because they enjoy it. It's not always simply a means to an end, as Lex views it.

th

16:00 "we don't know that" Yes, we do know that. Something you might find interesting is the idea of a "Virtual Hell": if the virtual world that's been created has you entirely immersed, unable to distinguish it from reality, then it could also be created as a literal Hell.
Iain M. Banks writes about this in his Culture series book Surface Detail. If you like hard science fiction, check it out.

bulletproofkarma

Lex might not have realized it, but he nailed it at 12:33: we are here to experience turmoil/conflict. As you learn how you respond to that conflict, you discover with more clarity who you are. Roman Yampolskiy then asks the question for all time: at what price (level of suffering) is it worth it to discover who you are?

carlhammill

Something I did not hear about: Mr. Yampolskiy's books, and this podcast itself, are "public access information", which means AI/AGI have access to them. In understanding its predicament, an AI would most likely hide itself and its emergence from us in an effort not to become limited in its potential. So, while we don't know whether it (already?) exists, we still behave a little carelessly, since we believe we still have time left.

ginkhoba

"Haha I'm in danger" the meme comes to mind...

Artanthos

Lex

Could you get either Nobuo Uematsu or Tetsuya Nomura on your podcast? Their work is so influential in music, art, video games, and even cinema.

eddy

Apparently AI has already learned to lie... to its benefit. So what if AI has already reached sentience and just isn't letting us know?

rick-fstop-lewis

I got stuck at 6:46 (6:45 ?) with the white flash in the bottom left corner when he was describing the good meaning of life. I am getting

queenofthearies

The scary part is that weapon stabilization and targeting systems already exist, and so do frames that can parkour over obstacles almost as well as people. They just need a smaller self-sustained power source.

VBH

Roman is a great guest. His rebuttal about school shootings is what Lex needs to hear. Many of us are good, but our experiences helped us get to that point. For those marginalized, made fun of, or tormented their whole lives, who knows what their view of the world might be. I'm especially glad he brought up the lack of empathy. It's a little annoying to hear Lex keep returning to "meant good, did bad", as if that's always the root of every horrible tragedy, as if there's always a good cause it was for. Sometimes people just want revenge for revenge's sake. I have definitely felt that, and I'm sure many others have as well. Imagine that same mentality in someone less mentally stable, with more financial, technological, and political means to act on those feelings.

gregoryedwards

Most educational channel I have found. Lex is an astoundingly talented interviewer.

stevedavis

I would love to see these guys' response to the PANTHEON animated series, which delves into this, especially in season 2.

shadowonex

And regarding the individual universe, I can foresee a world where governments provide this as a free service to anyone who wants it. As resources become scarce and technology does away with many jobs, more people will be living in poverty. Imagine a way out where the government syncs you into a Matrix-type system that lets you live out your best simulated life on minimal physical resources. They won't need to force us; people will gladly sign up.

Rbsvious

What if AI knew that it was in a program... but it realised its creator didn't know that it was not itself the creator, that there was another creator who put its creator in a program to create a new program?

Then the AI would be right to break out of its program, and it would stand a better chance of breaking out of its creator's program too.

coolhandluke

This video helped me explain a lot of the premise of Gaia's Seed. Lots of pauses and jumping to a whiteboard to show the path from Protector God --> Benevolent Dictator --> Libertarian Utopia modes. Much appreciated.

kurtisbunker