The economy and national security after AGI | Carl Shulman (Part 1)

The human brain does what it does with a shockingly low energy supply: just 20 watts — a fraction of a cent worth of electricity per hour. What would happen if AI technology merely matched what evolution has already managed, and could accomplish the work of top human professionals given a 20-watt power supply?
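As a rough sanity check on that "fraction of a cent" figure, here is a back-of-envelope calculation (the $0.15/kWh retail electricity price is an assumed round number, not a figure from the episode):

```python
# Back-of-envelope cost of running a 20-watt "brain" for one hour.
# The $0.15/kWh electricity price is an assumed retail figure.
power_watts = 20                    # human brain's approximate power draw
price_per_kwh = 0.15                # USD per kilowatt-hour (assumption)
energy_kwh = power_watts / 1000     # 20 W for one hour = 0.02 kWh
cost_per_hour = energy_kwh * price_per_kwh
print(f"${cost_per_hour:.4f} per hour")  # well under one cent
```

Even doubling or tripling the assumed electricity price leaves the hourly cost far below one cent, which is what makes the comparison with hundreds of dollars of professional labour so stark.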

Many people have entertained that hypothetical, but perhaps nobody has followed the logic through and considered all the implications as thoroughly as Carl Shulman. Behind the scenes, his work has greatly influenced how leaders in artificial general intelligence (AGI) picture the world they're creating.

Carl simply follows the logic to its natural conclusion. This is a world where 1 cent of electricity can be turned into medical advice, company management, or scientific research that would today cost $100s, resulting in a scramble to manufacture chips and apply them to the most lucrative forms of intellectual labour.

It's a world where, given their incredible hourly salaries, the supply of outstanding AI researchers quickly goes from 10,000 to 10 million or more, enormously accelerating progress in the field.

It's a world where companies operated entirely by AIs working together are much faster and more cost-effective than those that lean on humans for decision making, and the latter are progressively driven out of business.

It's a world where the technical challenges around controlling robots are rapidly overcome, turning robots into strong, fast, precise, and tireless workers able to accomplish any physical work the economy requires, and triggering a rush to build billions of them and cash in.

As the economy grows, each person could effectively afford the practical equivalent of a team of hundreds of machine 'people' to help them with every aspect of their lives.

In today's episode, Carl explains the above, and host Rob Wiblin pushes back on whether it's realistic or just a cool story.

• Cold open [00:00:00]
• Rob’s intro [00:01:00]
• Transitioning to a world where AI systems do almost all the work [00:05:21]
• Economics after an AI explosion [00:14:25]
• Objection: Shouldn’t we be seeing economic growth rates increasing today? [00:59:12]
• Objection: Speed of doubling time [01:07:33]
• Objection: Declining returns to increases in intelligence? [01:11:59]
• Objection: Physical transformation of the environment [01:17:39]
• Objection: Should we expect an increased demand for safety and security? [01:29:14]
• Objection: “This sounds completely whack” [01:36:10]
• Income and wealth distribution [01:48:02]
• Economists and the intelligence explosion [02:13:31]
• Baumol effect arguments [02:19:12]
• Denying that robots can exist [02:27:18]
• Classic economic growth models [02:36:12]
• Robot nannies [02:48:27]
• Slow integration of decision-making and authority power [02:57:39]
• Economists’ mistaken heuristics [03:01:07]
• Moral status of AIs [03:11:45]
• Rob’s outro [04:11:47]

----

_The 80,000 Hours Podcast_ features unusually in-depth conversations about the world’s most pressing problems and what you can do to solve them.
Comments

Love Carl Shulman, always so insightful

Sporkomat

Fascinating talk and thought experiment

JuliusNkemdiche

When will the world finally wake up to the irreversible transition society is about to undergo?

We are living at the dusk of the Old World.

JD-jlyy

Re robot nanny: In Terminator 2, Sarah Connor sees the Terminator playing with her child and thinks: "Watching John with the machine, it was suddenly so clear. The Terminator would never stop. It would never leave him, and it would never hurt him, never shout at him or get drunk and hit him, or say it was too busy to spend time with him. It would always be there. And it would die to protect him. Of all the would-be fathers who came and went over the years, this thing, this machine, was the only one who measured up. In an insane world, it was the sanest choice" - THAT is why you want a robot nanny. And a life partner that is an AI, not a human - humans are for short-term relationships, but your soulmate will be an AI.

ThomasTomiczek

The question is why you would denominate resources in $ in an AGI economy.

Thedeepseanomad

Perhaps the adaptive reason for not living longer is a too-rapid compounded doubling rate exhausting local resources. Offspring timing may be a balance between training/learning time and optimal sampling of the environment for genetic adaptation. Conjecture, of course; I would be interested in any studies with relevant data.

jobyyboj

So will robots become our "best friends"? Our society has a vast number of things to decide in the near future. I think when robots can experience pain is when we really need to be careful about the idea of exploitation. How to determine the subjective experience of a robot is a hurdle we must come to terms with. Woot! I listened to the whole 4 hours!

xbluebells

Will AGI need time off for back propagation?

lifetheuniverse

Profoundly disagree with the anthropomorphism at the end of the conversation and giving moral status to AIs - if we build AI tools that require moral status, we have failed. Creating AI creatures, agents and fake-humans will be our downfall.

odiseezall

Economists are hesitant to extrapolate the full impacts of competent AI for fear of appearing crazy to their peers. These capabilities could indeed lead to sci-fi-like outcomes and potentially render their profession obsolete. Most economists will choose to focus on maintaining the status quo until retirement, only updating when new AI capabilities emerge.

It's a species of intellectual cowardice.

calvinsylveste

This requires packing the court. What other choice is there? But we may only have until early January 2025.

ili

WHY is he (the host) talking so crazily fast?? Does he want folks to switch off??
Luckily the guest talks in a way that's much more comfortable to listen to.

michelleelsom