Episode #38 TRAILER “France vs. AGI” For Humanity: An AI Risk Podcast
In Episode #38 TRAILER, host John Sherman talks with Maxime Fournes, Founder, Pause AI France. With the third AI “Safety” Summit coming up in Paris in February 2025, we examine France’s role in AI safety, revealing France to be among the very worst when it comes to taking AI risk seriously. How deep does madman Yann LeCun’s influence run in French society and government? And would France even join an international treaty? The conversation covers the potential for international treaties on AI safety, the psychological factors influencing public perception, and the power dynamics shaping AI’s future.
Please Donate Here To Help Promote For Humanity
This podcast is not journalism. But it’s not opinion either. This is a long-form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on Earth.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, in as little as two years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
For Humanity Theme Music by Josef Ebner
RESOURCES:
SUBSCRIBE TO LIRON SHAPIRA’S DOOM DEBATES on YOUTUBE!!
BUY STEPHEN HANSON’S BEAUTIFUL AI RISK BOOK!!!
JOIN THE FIGHT, help Pause AI!!!!
Pause AI
Join the Pause AI Weekly Discord Thursdays at 2pm EST
/ discord
22 Word Statement from Center for AI Safety
Statement on AI Risk | CAIS
Best Account on Twitter: AI Notkilleveryoneism Memes
TIMESTAMPS
Trust in AI Awareness in France (00:00:00)
Discussion on France being uninformed about AI risks compared to other countries with AI labs.
International Treaty Concerns (00:00:46)
Speculation on France's reluctance to sign an international AI safety treaty.
Personal Reflections on AI Risks (00:00:57)
Speaker reflects on the dilemma of believing in AI risks and choosing between action and enjoyment.
Underestimating Impact (00:01:13)
The tendency of people to underestimate their potential impact on global issues.
Researching AI Risks (00:01:50)
Speaker shares their journey of researching AI risks and finding weak counterarguments.
Critique of Counterarguments (00:02:23)
Discussion on the absurdity of opposing views on AI risks and societal implications.
Existential Dread and Rationality (00:02:42)
Connection between existential fear and irrationality in discussions about AI safety.
Shift in AI Safety Focus (00:03:17)
Concerns about the diminishing focus on AI safety in upcoming summits.
Quality of AI Strategy Report (00:04:11)
Criticism of a recent French AI strategy report and plans to respond critically.
Optimism about AI Awareness (00:05:04)
Belief that understanding among key individuals can resolve AI safety issues.
Power Dynamics in AI Decision-Making (00:05:38)
Discussion on the disproportionate influence of a small group on global AI decisions.
Cultural Perception of Impact (00:06:01)
Reflection on societal beliefs that inhibit individual agency in effecting change.