Episode #38 TRAILER “France vs. AGI” For Humanity: An AI Risk Podcast

In Episode #38 TRAILER, host John Sherman talks with Maxime Fournes, Founder, Pause AI France. With the third AI “Safety” Summit coming up in Paris in February 2025, we examine France’s role in AI safety and find it to be among the very worst countries when it comes to taking AI risk seriously. How deep does madman Yann LeCun’s influence run in French society and government? And would France even join an international treaty? The conversation covers the potential for international treaties on AI safety, the psychological factors influencing public perception, and the power dynamics shaping AI’s future.

Please Donate Here To Help Promote For Humanity

This podcast is not journalism. But it’s not opinion either. It is a long-form public service announcement. This show simply strings together the existing facts and underscores the unthinkable, probable outcome: the end of all life on Earth.

For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans; no tech background required. Our show focuses solely on the threat of human extinction from AI.

Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as little as two years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

For Humanity Theme Music by Josef Ebner

RESOURCES:

SUBSCRIBE TO LIRON SHAPIRA’S DOOM DEBATES on YOUTUBE!!

BUY STEPHEN HANSON’S BEAUTIFUL AI RISK BOOK!!!

JOIN THE FIGHT, help Pause AI!!!!
Pause AI

Join the Pause AI Weekly Discord Thursdays at 2pm EST
/ discord

22 Word Statement from Center for AI Safety
Statement on AI Risk | CAIS

Best Account on Twitter: AI Notkilleveryoneism Memes

TIMESTAMPS
Trust in AI Awareness in France (00:00:00)
Discussion on France being uninformed about AI risks compared to other countries with AI labs.
International Treaty Concerns (00:00:46)
Speculation on France's reluctance to sign an international AI safety treaty.
Personal Reflections on AI Risks (00:00:57)
Speaker reflects on the dilemma of believing in AI risks and having to choose between action and enjoyment.
Underestimating Impact (00:01:13)
The tendency of people to underestimate their potential impact on global issues.
Researching AI Risks (00:01:50)
Speaker shares their journey of researching AI risks and finding only weak counterarguments.
Critique of Counterarguments (00:02:23)
Discussion on the absurdity of opposing views on AI risks and societal implications.
Existential Dread and Rationality (00:02:42)
Connection between existential fear and irrationality in discussions about AI safety.
Shift in AI Safety Focus (00:03:17)
Concerns about the diminishing focus on AI safety in upcoming summits.
Quality of AI Strategy Report (00:04:11)
Criticism of a recent French AI strategy report and plans to respond critically.
Optimism about AI Awareness (00:05:04)
Belief that understanding among key individuals can resolve AI safety issues.
Power Dynamics in AI Decision-Making (00:05:38)
Discussion on the disproportionate influence of a small group on global AI decisions.
Cultural Perception of Impact (00:06:01)
Reflection on societal beliefs that inhibit individual agency in effecting change.
COMMENTS

Thank you John and Maxime for your important work!

PauseAI

You said it yourself: it’s about the transition, and you can contribute to making it smoother.

antoine.-

John, so important. At about minute 2, Maxime says something very, very important: the at-risk argument is BACKED by science, while the no-risk bunch are just in la-la land with wishful thinking. Maxime also said he got involved because he decided that one man can make a difference. This is you too, for the people who don't really understand this stuff, people like me. So, thanks!!! BTW, a movement built around "what gives them the right to play with such fire when they have no idea what Frankensteins they're creating?" could use a smart product-safety angle: lawfare against products that put out erroneous and dangerous information, which the chat products actually do. There are bases to stop this madness.

davidlasoff

As a programmer with a background in AI, I agree about the risks involved. This blind "race to the bottom" has to stop! We need to pause and reflect before continuing on a course that could end what it means to be human, alongside humanity itself! Read James Barrat's "Our Final Invention". #PauseAI

DaGamerTom

Is the guy talking in the right-side window an AI character?

arslanhaider