Why the imminent arrival of AGI is dangerous to humanity

I had the good fortune to speak with Daniel Kokotajlo, an ex-OpenAI safety researcher. He is known for taking a stance on the dangers of AGI, even going so far as to sacrifice OpenAI equity to be able to speak out.

Daniel believes that AGI will arrive within 3 years. Only one primary cluster of skills still has to be developed: long-horizon tasks and agency. Daniel also assigns a 70% probability that AGI will go catastrophically wrong. We need to change our trajectory to have a good chance of survival.

To communicate the absurdity of the situation, Daniel gave two analogies, each an alternative timeline. One imagines that human cloning proceeded unhindered, and we now have 12-year-old super geniuses. The other imagines that octopuses were bred for intelligence, and companies now have octopus customer support instead of humans. Strange as these scenarios are, our real situation is actually worse than either of them.

#agi #alignment #openai

Daniel Kokotajlo

Daniel Kokotajlo: “Unclear how to value the equity I gave up, but it probably would have been about 85%...”

OpenAI Insiders Warn of a ‘Reckless’ Race for Dominance

A Right to Warn about Advanced Artificial Intelligence

OpenAI expands lobbying team to influence regulation

OpenAI Bolsters Lobbying Arm Amid Regulatory Pressure

Alex Blechman: “Sci-Fi Author: In my book I invented the Torment Nexus...”

0:00 Intro
0:30 Contents
0:38 Part 1: When AGI will come
0:44 Daniel Kokotajlo is a safety researcher
1:05 Non-disparagement agreement
1:30 Daniel walked away from $1.7 million
1:56 OpenAI changes course after publicity
2:15 Right to warn about AI, New York times
2:51 I had a conversation with Daniel directly
3:14 Background and "What 2026 looks like"
4:03 Daniel is producing in-depth AI reports
4:30 Current projection for AGI: within 3 years
5:28 What is left for AGI?
6:16 Agents capable of operating autonomously
6:47 Part 2: Why AGI is dangerous
7:07 AGI could backfire of its own accord
7:50 We don't know how to control these systems
8:29 Why AI is more challenging than historical examples
8:54 Tech workers are too close to the tech
9:20 Torment Nexus and other bad examples
10:10 Two alternative world analogies
10:19 Analogy 1: Human cloning
11:02 12-year-old super geniuses
11:43 Society getting used to clones, too late to change
12:21 Analogy 2: Superintelligent octopuses
12:48 Octopus tanks for customer support
13:10 Fast breeding cycle
13:29 Science fiction book recommendation
13:48 How similar are these timelines to our reality?
14:26 Physical bodies vs observable compute
15:24 Tell your friends
15:34 Part 3: The moment of creation
15:47 Intelligence explosion likely to start in secret
16:23 High-stakes decisions in secret
16:51 Government involvement needs to increase
17:31 We've been getting boiled along with the frog
18:26 Desire for unnecessary level of secrecy
19:03 Proposal: keep the public informed at a high level
19:39 Impose reporting requirements on companies
20:37 Why the dead man's switch is helpful
21:24 Conclusion
21:55 AGI is likely to be kept secret
22:38 What we can do about it
22:58 Outro
Comments

I've finally moved and started my new job as an AI safety researcher!

Octopuses or octopi? (I blame github for popularizing the latter.)

DrWaku

Dr. Waku, your delivery of complex subject matter for the layman is astonishingly good. You are a pleasure to listen to, and watch, regardless of what you happen to be covering. Thank you for being you.

santolarussa

Having a look at the hundreds of YouTube subscriptions I have, YOURS is the one I count as most precious to me. The intelligent dissection of complex issues with an alignment to my own personal morals and point of view makes your site my most valued and shared amongst friends. Thanks for all you do for society.

georgedres

I was just thinking that about embodied agents. If we don't have enough training data, they can create their own like we do. The more capable they become, the better they'll get at obtaining and sharing higher quality data.

JB

When everyone is talking about the problem of the 3 bodies and no one talks about the problem of 2 intelligent species living together on the same planet 💀

azhuransmx

Pretty crazy to hear Daniel say that we're just missing long-horizon task capabilities to reach AGI. And just last week Reuters released an article about "Strawberry", which seems to be what they renamed Q* and is meant to give AI the capability to perform long-horizon tasks.

nomadv

Beautiful background, it's better than the previous one! Congrats on the new job! It seems like no one is better for this.

alexlanayt

Wow, octopus intelligence, 3 years to AGI, and high-stakes decisions made in secret, all in one video? I guess that's what getting boiled alive feels like. SMH. Lots to think about. Great video.

dylan_curious

Nice background & great video. Glad to know that you have joined the field with the likes of David Shapiro who has been doing such research for a while. Wishing you good health and more updates.

trycryptos

Hi Dr. Waku!
First let me say that I really appreciate your videos! I have a bit of a personal question: how do you stay so positive despite your awareness of these huge existential risks? I get such a good vibe from you from these videos, you come across as an optimistic and cheerful person, even though a lot of what you're saying has got me very worried.
Thanks again for the good work, keep it up!

fonsleduck

Great video! And the interior looks awesome! Keep up the great work!

ichidyakin

Thank you so much for your work Dr Waku! I really enjoy your videos and find them so refreshing and well-explained, and your style is partly why I started making them myself. Absolutely love the octopus analogy too!

aiforculture

Your wave at the end is my favorite part. It feels like a very genuine part of you.
If you wear foundation, perhaps you might consider using a little less. It looks flawless, but combined with the perfectly coordinated colors—the way your shirt, gloves, hair, hat, skin tone, and lighting all match—it feels almost too perfect, almost like a marketing image.
I really enjoyed your unedited interview. It was a pleasure to get a better sense of the real you.

MNhandle

Thanks for all you do Waku!!! Love your grounded takes

TheExodusLost

Can you talk about the future of education in the age of AGI? ChatGPT is already changing how people learn.

bornrun

@Dr Waku, the background at the new location looks very suitable for video recording. How is the new place for you? Do you live there? Does it support you better in your special bodily needs and ameliorate your physical state? I wish you good health, stay strong! ❤

Slaci-vlio

70 percent within what time period? You should always give a window. Love Dr. Waku and this channel ❤

susymay

Yeah, I'm liking the new background, very pleasant and bright, airy.
Congrats on your new job.
Thanks for shining another light on Daniel. He deserves more recognition -- as does the topic he's dug in his heels over.
Really enjoy your channel, your talks. Keep it up!

Je-Lia

You’re the best channel on this subject. Thank you 🫀

dauber

Serious question: anybody here familiar with Arthur C. Clarke's "Childhood's End"? The novel? TV adaptation?

ingoreimann