AGI by 2027, Insider Warnings, and Altman's Dealings: The AI Argument EP12

It's been a quieter week for AI releases, but there's no shortage of debate. Join Justin and Frank as they tackle the growing concerns about AI safety and the potential backlash.

→ Increased talk of AI safety and potential dangers – is the backlash real or just a filter bubble?
→ A startling prediction from a senior Anthropic manager: will any of us need to work in five years?
→ Scrutiny of Sam Altman’s investments – ethical or just savvy business?
→ The parallel between OpenAI and Facebook – is AI's mission to benefit humanity just a façade?

The more AI insiders express concerns, the more the general public will eventually share those fears. If the experts are spooked, how long before everyone else is too? This episode is a must-watch for anyone interested in the ethical and societal impacts of AI.

Leopold Aschenbrenner, previously at OpenAI, believes artificial general intelligence (AGI) could emerge by 2027 based on the rapid progress in the field. This AGI would be as capable as a PhD student, a remarkable and concerning milestone.

The accelerating pace of AI advancements has experts sounding the alarm. Some argue for the "right to warn," stressing that AI insiders have an obligation to speak out about the potential risks. But will the public be able to fully comprehend the gravity of these warnings? Justin and Frank discuss the challenges in bridging this gap.

The repercussions of AGI's arrival could be immense, from widespread job displacement to economic disruption. Our political and business leaders must start grappling with these weighty issues now to guide society through the turbulence ahead. Proactive leadership and open public discourse will be essential.

The ethics of AI development are also under scrutiny. Justin and Frank debate the concerns surrounding OpenAI CEO Sam Altman's investments and business dealings, as well as the pushback Meta faces over plans to train AI models on Facebook user data without explicit consent.

► LINKS TO CONTENT WE DISCUSSED

► SUBSCRIBE Don't forget to subscribe to our channel to stay updated on all things marketing and AI.

► YOUR INPUT Do you think AI insiders should have the right to warn the public about potential dangers? Why or why not? Share your thoughts in the comments!

00:17 Increased talk of safety and dangers of AI - is there a backlash?
01:52 Will any of us be working in 5 years' time?
08:08 Is OpenAI the new Facebook? Is its goal really to benefit all of humanity?
11:49 Leopold Aschenbrenner - will we have AGI by 2027?
19:49 Should AI insiders have the right to warn about the dangers of AI?
21:43 What kind of leadership is needed to guide us through AI risks?
27:25 Should Meta be allowed to train AI on user-generated content?
► COMMENTS

25:34 Do you think monetary stimulus would be superior to fiscal interventions (like increasing corporate income taxes)?

I would expect that if corporations translate AI cost savings into consumer price reductions, then there should be a deflationary effect that can balance out the monetary stimulus and prevent demand-pull inflation (from too much money chasing too few goods).
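
A rough sketch of that balancing act, using the standard quantity theory of money (M × V = P × Q, so the price level P = M·V/Q); all figures below are hypothetical and chosen purely to illustrate the argument:

    # Quantity theory of money: M * V = P * Q, hence P = M * V / Q.
    # All numbers are hypothetical, picked only to illustrate the
    # commenter's balancing argument.

    def price_level(money_supply: float, velocity: float, real_output: float) -> float:
        """Price level implied by the quantity theory: P = M * V / Q."""
        return money_supply * velocity / real_output

    # Baseline economy.
    M, V, Q = 100.0, 2.0, 200.0
    print(price_level(M, V, Q))              # 1.00 -- baseline price level

    # AI hyper-efficiency expands real output (supply) by 30%.
    print(price_level(M, V, Q * 1.3))        # ~0.77 -- deflationary pressure

    # Matching monetary stimulus grows the money supply by 30%.
    print(price_level(M * 1.3, V, Q * 1.3))  # 1.00 -- the two effects cancel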

But if corporations pocket the AI windfall, then I think you will need to use increased taxation and government spending to force the same deflationary outcome.

And ultimately, hyper-deflation is exactly what a socially acceptable intelligence explosion should yield for society: the cost of all goods and services just keeps falling as AI hyper-efficiency makes supply ever cheaper, converging in the limit on a state of material abundance (and massive real wealth for everyone).

Ironically, this would look just like Karl Marx's original vision of "communism": effectively a post-labour economy, in which capital eventually takes over all production from labour.

af.tatchell