Eliezer Yudkowsky: Dangers of AI and the End of Human Civilization | Lex Fridman Podcast #368

Eliezer Yudkowsky is a researcher, writer, and philosopher on the topic of superintelligent AI. Please support this podcast by checking out our sponsors:

EPISODE LINKS:
Books and resources mentioned:

PODCAST INFO:

OUTLINE:
0:00 - Introduction
0:43 - GPT-4
23:23 - Open sourcing GPT-4
39:41 - Defining AGI
47:38 - AGI alignment
1:30:30 - How AGI may kill us
2:22:51 - Superintelligence
2:30:03 - Evolution
2:36:33 - Consciousness
2:47:04 - Aliens
2:52:35 - AGI Timeline
3:00:35 - Ego
3:06:27 - Advice for young people
3:11:45 - Mortality
3:13:26 - Love

SOCIAL:
Comments

Here are the timestamps. Please check out our sponsors to support this podcast.

lexfridman

I can honestly say the last two weeks have been one of the most interesting times of my entire life. I am absolutely in awe that this is happening. I try to explain it to those around me and all I get are blank stares. Are people not aware of the implications of what is happening right now?

Edit: Just want to add that my wife is seven months pregnant with our first child. Now, I truly do not know if there will be a future for him. I am not saying with any certainty that there won't be. But the thought of whether there will be has become an impending mystery. Scared shitless, honestly.

dustinbreithaupt

Please keep the AI topics coming and shine a light on all perspectives! That's really awesome and very important these days.

chillingFriend

I didn't expect Lex to be so bad at thinking about how to take over the world. We need more of his kind of AI.

flexoffender

Interesting listening to this again now. There's no delay, no pause, no slowdown, no huge allocation of funds to safety or alignment; it's full steam ahead with AGI and ASI development, as predicted. The financial, geopolitical, egotistical, competitive, and military incentives are too strong.

zjouephoto

Rob Miles next? Please? He's such a fantastically clear communicator.

agentdarkboote

Eliezer's pessimism is the arch nemesis of Lex's optimism. It's essential that we have both types of people.

christobita

When Lex mentioned in the Altman interview he was going to interview Yudkowsky I hadn't dreamed it'd be this soon! Fantastic.

BDDHero

"I'm trying to be constrained to say things that I think are true and not just things that get you to agree with me." The world would be a truly better place if we would all do this more often😀

stephens

Lex has always been a bit of a naive optimist, and usually I find it refreshing, but here he showed how hard he tries to use it to shut down reasoning out the arguments. It was good to see that Eliezer didn't let him get away with it. At a point in human history as pivotal as this, naive optimism could turn out to be incredibly harmful.

stevitos

Man, it was difficult not to get emotional at 3:06:29 when Lex asks for advice for young people. Eliezer is genuinely worried, sad, and deeply touched about what young people will face and the future of humanity.

juancarlosdasilvamartinez

The comeback of another Lex Fridman era. Please, more AI revolution stuff!

wonseoklee

Fascinating to watch this guy. To me, it seems he is on the mental level of an A. Einstein. In his own mind the thing is a done deal, and there only seem to be a few people who can argue the subject at his level at all (to provide a counterargument). I was moved when Lex asked Eliezer about his fear of not existing.

stevedriscoll

This guy is like the last boss of Reddit. I love him.

phosphate

I love how Lex, when he disagrees with a guest like this one, almost seems happy and feels joy, because it is an opportunity to spar over ideas and points of view. The world needs more of this.

amosjohansen

Brilliant and... chilling... Part 2 definitely has to be Eliezer together with Sam Altman! I want to hear Sam's counter to Eliezer's statements about how little they know about what's happening under the hood.

zaraabbey

As a young physicist who just finished my degree at Oxford, I am now orienting my career towards AI safety. Some of us are listening :).

quickphysicsvids

He seems eerily like the exact person you see in the movies warning everyone while we all ignore him 😂. His look, his personality, his mannerisms are all a fit.

jdn-wnon

Watching Eliezer explain the escaping-the-box thought experiment over and over again was like watching a subway station slowed down by a factor of 100.

CaesarsSalad

Eliezer is a guy who maxed out his intelligence stat. No points spent on strength or speed. Respect.

HamiltonOfiyai