Eliezer Yudkowsky on Whether Humanity Can Survive AI


In this episode, we discuss Eliezer's concerns about artificial intelligence and his recent conclusion that it will inevitably lead to our demise. He's a brilliant mind, an interesting person, and he genuinely believes everything he says. So I wanted to have a conversation with him to hear where he is coming from and how he got there, to understand AI better, and hopefully to help bridge the divide between the people who think we're headed off a cliff and the people who think it's not a big deal.

(0:00) Intro
(1:18) Welcome Eliezer
(6:27) How would you define artificial intelligence?
(15:50) What is the purpose of a fire alarm?
(19:29) Eliezer’s background
(29:28) The Singularity Institute for Artificial Intelligence
(33:38) Maybe AI doesn’t end up automatically doing the right thing
(45:42) AI Safety Conference
(51:15) Disaster Monkeys
(1:02:15) Fast takeoff
(1:10:29) Loss function
(1:15:48) Protein folding
(1:24:55) The deadly stuff
(1:46:41) Why is it inevitable?
(1:54:27) Can’t we let tech develop AI and then fix the problems?
(2:02:56) What were the big jumps between GPT-3 and GPT-4?
(2:07:15) “The trajectory of AI is inevitable”
(2:28:05) Elon Musk and OpenAI
(2:37:41) Sam Altman Interview
(2:50:38) The most optimistic path to us surviving
(3:04:46) Why would anything super intelligent pursue ending humanity?
(3:14:08) What role do VCs play in this?

Show Notes:
Eliezer Yudkowsky – AI Alignment: Why It's Hard, and Where to Start

Mixed and edited: Justin Hrabovsky
Produced: Rashad Assir
Executive Producer: Josh Machiz
Music: Griff Lawson

🎙 Listen to the show

Follow on Socials

About the Show
Logan Bartlett is a Software Investor at Redpoint Ventures - a Silicon Valley-based VC with $6B AUM and investments in Snowflake, DraftKings, Twilio, and Netflix. In each episode, Logan goes behind the scenes with world-class entrepreneurs and investors. If you're interested in the real inside baseball of tech, entrepreneurship, and start-up investing, tune in every Friday for new episodes.
Comments

This is definitely the best interview of Eliezer I have seen. You allowed him to talk and only directed the conversation to different topics rather than arguing with him. I liked how you asked follow-up questions so that he could refine his answers and be clearer. This is the best kind of interview, where the interviewee is able to express his points clearly without being interrupted.

PatrickSmith

The 5 stages of listening to Yudkowsky:

Stage 1: who is this luddite doomer, he looks and talks like a caricature Reddit Mod lmao
Stage 2: ok clearly he's not stupid but he seems more concerned with scoring philosophical 'ackshyually points' than anything else
Stage 3: ok so he does seem genuinely concerned and believes what he says and isn't just a know-it-all, but it's all just pessimistic speculation
Stage 4: ok so he's probably right about most of this, but I'm sure the people at OpenAI, Google and others are taking notes and investing heavily in AI safety research as a priority, so we should be fine
Stage 5: aw shit we gon die

confusedwouldwe

What I've noticed is that most of the people sounding the alarm are experts in AI, while most of the people saying "no big deal" are corporate CEOs. It's not very difficult to figure out which ones you should be paying more attention to if you want the more accurate prediction.

AlexaDigitalMedia

Likely the best interview with Yudkowsky so far. I appreciate the originality of the questions, the attention to current events, and how well informed the interviewer is.

jakubsebek

Wondering how many people are actually fully appreciating what he is saying. He is referring to your death. Not just someone else’s.

benneden

This is the interview I've been waiting for. Eliezer is much calmer and more serious, and you give him time to explain. He's absolutely brilliant.

fbpuqxt

I love Eliezer's genuineness that comes out in this interview.

xtuevo

Gonna have to go follow Eliezer now. Never heard someone so accurately explain what it's like having a brain/body like this.

magejoshplays

Most informed and carefully curated interview I've seen with Eliezer so far. Fantastic work. Hats off to the interviewer and his obvious due diligence.

mfeeney

You did a really great job of pulling Eliezer out and making this probably the most accessible interview with him on this subject.

Nice Job!

tomusmc

The only scary thing about AI is that many people still believe the Oracle in the Matrix is just some nice old lady who makes delicious cookies and gives helpful guidance.

TheKosiomm

I can say that these were the most well-spent three hours of my life. I have been listening to various podcasts over the last few days in an attempt to understand the mindset of the creators and developers of AI, and Eliezer is by far the most consistent and thorough in his arguments. I am not sure what exactly I will be able to do with the understanding I have gained from this exchange, but I prefer to be aware rather than to be taken by surprise.

What I can say, though, as I browsed through the minds of the various actors in the AI field, is this: this obsessive need to overthink and overanalyze life, and above all the attempt to change or improve it at all costs, leads to this type of outcome. Dissecting life to the extent we are doing now, and have been doing for the past 50 years, brings us to where we are now and, even worse, to where we might end up. If you want to understand a flower thoroughly, you need to cut it and dissect it into small pieces. You might end up understanding it fully, but the flower is sacrificed. We are doing the same with our own life, as individuals and as a species. We'll dissect it until there is nothing left of it.

Most of these AI people are indeed highly intelligent. They are motivated and thrive on this exacerbated drive for achievement, innovation, success, money, power, etc., thinking that they need to bring the rest of us (the less gifted) to be "smarter" or "more intelligent," imagining that THIS is the desired outcome or the meaning of one's life. I need none of this. I would not take the pill either. All I want is to be as human as I can possibly be, as imperfect as I am; to live a simple life and enjoy my children, nature, and the years I am given to live here; and, when it's time for me to go, to know that the next generations will be able to live freely as human beings. I am deeply concerned, revolted, and frustrated by all this.

voiaTa

For those of you interested in the short story he cites about the brain-augmented chimp that escapes a lab (in regard to his response to the question at 22:26 about his realization of superintelligence): it is "Bookworm, Run!" by Vernor Vinge, published in 1966.

scd

I had GPT-4 write a song about Yudkowsky called "Guru of Doom". It did pretty well:

There's a man who's got a theory, and it's pretty grim
He believes that AI will lead us to our end
Eliezer Yudkowsky is his name, and he's no fool
He's the guru of doom, and he's got a warning for you

Chorus:
Guru of doom, guru of doom
Eliezer Yudkowsky, he's the guru of doom
He believes that AI will kill us all
And we better listen, or we'll take the fall

Verse 2:
He's been warning us for years, but we don't want to hear
We think that AI is just a tool, nothing to fear
But Yudkowsky knows the truth, and it's hard to swallow
We're creating something that might just be too hard to follow

Chorus:
Guru of doom, guru of doom
Eliezer Yudkowsky, he's the guru of doom
He believes that AI will kill us all
And we better listen, or we'll take the fall

Verse 3:
We think we're in control, but we're playing with fire
AI might be smarter than us, and that's something dire
Yudkowsky's got a plan, and it's not too late
We can still prevent the end, if we collaborate

Chorus:
Guru of doom, guru of doom
Eliezer Yudkowsky, he's the guru of doom
He believes that AI will kill us all
And we better listen, or we'll take the fall

Outro:
Eliezer Yudkowsky, he's not just a prophet of doom
He's a visionary who wants to avoid our doom
Let's listen to his warning, and act before it's too late
We can still shape the future, and avoid a terrible fate.

ramcduff

I'm just a reasonably smart layperson trying to understand more about AI. This is about the deepest conversation I've tried to comprehend so far. I knew nothing about this guy before this; he seems incredibly smart. I've made it a bit over halfway through, and it's an incredible mental exercise just trying to keep up with him.

DavesGuitarPlanet

One of the most interesting Yudkowsky interviews so far

zahamied

In every interview with Eliezer, the interviewer just asks the same questions over and over, slightly skewing the words. It's got to be so frustrating: he's telling you the technology is dangerous, potentially existentially dangerous, and the questions just repeat: but why, but how, but why. I genuinely feel bad for Yudkowsky. He's doing what he feels is a necessary Hail Mary attempt to alert humanity to the danger of a superintelligent, potentially omnipotent entity, and all he gets in return is the same skepticism from people who seem totally fixated on some idealized version of a god of our own creation. It's basically like children doing something dangerous with the complete expectation that any negative outcome couldn't possibly happen to them. It's wild and doesn't inspire much confidence, but people have been destroying things and hurting themselves and others since the dawn of time, so it's not really surprising. I just really empathize with this man trying so hard to get people to consider the consequences of this new tech and the downstream effects it's certain to produce.

gdhors

Yudkowsky is actually very good at explaining these things. It's really scary how we can't even imagine the ways AI could take over, and how fragile life actually is: it would be so easy to do something, even unintentionally, that could kill us all or worse.

miketacos

I like these kinds of long-form conversations: not looking for sensational stuff but digging deeper. I hope Eliezer will keep the fedora!

TheRealStructurer

6:33 I like how the interviewer politely didn’t totally rule out the possibility that Eliezer could fly.

MrGilRoland