Don't Look Up - The Documentary: The Case For AI As An Existential Threat

In recent months, a growing number of AI experts have warned that the current AI arms race between OpenAI, Google, and other tech companies is very dangerous and might lead to the destruction of the human race. Yet somehow, most people are not aware of this danger.
In this video I try to make the case for the existential danger using the words of some of these world-renowned experts, in the hope that many more people will become aware of the danger these companies are putting us all in.

A call for action:
The following organizations all work to give humanity a chance in the face of AI risk. Please visit their sites and see how you can help:

Pause AI:
Future of Life Institute(FLI):
Campaign for AI Safety:
Center for AI Safety(CAIS):
Center for Human Compatible Artificial Intelligence:

Creator (Concept, Editing, Sound Design & all the rest):
Dagan Shani

Music:
Highway Run by Ace from Artlist
Candy Apple Red by Ace from Artlist
Rhythm Revolution by Rewind Kid from Artlist
Odd Numbers by Curtis Cole from Artlist
Aware by Adrián Berenguer from Artlist
Return to Oasis by Aleksey Chistilin from Artlist
Chaos Rhythm by Skygaze from Artlist
Presto by Adrián Berenguer from Artlist
Revelations by Tristan Barton from Artlist

Footage:
Movie "I, Robot"
Movie "The Terminator"
Movie "Thelma & Louise"
Movie "Don't Look Up"
Movie "Cast Away"

Licensed Footage by:
Artgrid

Some long-form interviews used in the video (note that all the footage from YouTube used in the video is credited in the video itself):

Bankless - 159 - We’re All Gonna Die with Eliezer Yudkowsky:
Bankless - How We Prevent the AI’s from Killing us with Paul Christiano
Max Tegmark: The Case for Halting AI Development | Lex Fridman Podcast #371
Eliezer Yudkowsky: Dangers of AI and the End of Human Civilization | Lex Fridman Podcast #368
Center for Humane Technology: The A.I. Dilemma - March 9, 2023
Unveiling the Darker Side of AI | Connor Leahy | Eye on AI #122
Future of Life Institute - Connor Leahy on AGI and Cognitive Emulation
ReasonTV - Hit PAUSE on AI research before it's too late?
Imagination in Action: Breakthrough potential of AI | Sam Altman | MIT 2023
Commonwealth Club of California - Beyond ChatGPT: Stuart Russell on the Risks and Rewards of A.I.
The Logan Bartlett Show - Eliezer Yudkowsky on if Humanity can Survive AI
Novara Media - God-like AI is Closer Than You Think | Aaron Bastani Meets Ian Hogarth
Theories of Everything with Curt Jaimungal - Daniel Schmachtenberger: AI Extinction & The Metacrisis
Connie Loizos: StrictlyVC in conversation with Sam Altman, part two (OpenAI):
Comments

There's suddenly massive momentum for changing course. Since I released the video several days ago, I have found many amazing people and organizations trying to make a difference, but they need your help. I put links to these organizations in the description. Please go to their sites and see how you can help!

DaganOnAI

I'd like to see 50 more versions of this type of video, made by a range of creators, with a good number of them two minutes long or less. And they should be played as ads here on YouTube; we can't be sure the reach will be adequate otherwise.

kimholder

Yes! Still the best, most exciting AI safety documentary on YouTube, holding up eight months later.

masonlee

This stuff is scary, and it can make us feel like all hope is lost. But there is nothing inevitable about a small group of AI researchers racing towards a superintelligence. It's not as if atoms magically rearrange themselves into GPU clusters. We can stop it from happening. We can regulate. Over 60% of people are in favor of pausing AGI development. This puts the measure in the middle of the Overton window. All that's needed is for some politicians to step up and take the initiative. This is where you come in: take responsibility and reach out to your representatives. Write about these risks. Show them this movie. Make them feel the fear and panic that they should. We need adults in the room. It's up to you to wake them up.

PauseAI

Concerning fearmongering: I guess this is a prevalent thing before any false catastrophe... as well as before most of the real ones. If anyone had told you on the morning of August 6th, 1945, that there was a bomb which could make 200,000 people evaporate in an instant, you would have accused him of fearmongering. Please note: when you say that something is unimaginable or inconceivable, you are saying nothing about actual reality; rather, you are commenting on the limits of your own imagination.

DaganOnAI

Excellent work! I have watched many of these clips separately, but to have them all together is incredibly impactful. Thank you.

kashaglazebrook

Great edit of recent interviews on AI posing an existential threat. Thanks for making this, Dagan!

WilliamKiely

Incredibly well put together. I did not think it was possible to accurately summarize the situation we are all in right now in just 17 minutes, but somehow you managed to do it in a way that was flawlessly engaging both intellectually and emotionally. From the bottom of my heart, thank you DaganOnAI for making this. I most certainly hope it goes viral.

perivarfriborg

You’re really doing your part to raise awareness, Dagan. Excellent production, deserves to go viral - I will share it.

I have some faith that alignment will get corrected and balanced as the US tech companies develop this. The wildcard is always the competing bad-actor countries racing against the US.

There are so many points of failure in trying to control this technology, and human nature is right at the top of the list.

Good luck to us all…Let the Moloch games begin.

cyb_structure

Dagan, this is fantastic; thank you so much! I hope that a video in this style can help raise awareness that experts believe we are facing an existential threat. I look forward to a "Part 2" that will help fully connect the dots from super intelligence to human extinction.

masonlee

Very well done, and I hope to see more. Momentum has unfortunately shifted away from doomers (a label I have gladly adopted) at this point, so reboots of momentum are necessary. Thanks for your work.

flickwtchr

Awesome video.

I remember watching "Do You Trust This Computer?" back in 2018, shortly after reading Life 3.0, and it was so alarming; but it still seemed that we had a few decades to figure this out.

Now, five years later, it is so much closer. Thanks for raising the flag.

smithstock

Fantastic job! You picked out some of the best moments from the last half year of alignment discussions and made the subject entertaining and approachable. I have been obsessed with the alignment problem, and with understanding the underlying issues that lead to these kinds of multipolar traps and races to the bottom, for a few years now. Please let me know if there is anything at all I can do to help with future content, or if you feel like chatting with someone about these issues. I'd be happy to throw some money your way if you want to link a payment platform somewhere.

CollapseKitty

If this doesn't go viral, it's because it scares people too much.

Sprngm

I was here before this went viral. Amazingly done!

praguevara

Fantastic summary of where we're at in late May 2023. I'd love to see a part 2 where the alignment problem is expanded on and the explanation of how AGI ends humanity is conveyed. Folks I talk to can't grasp this crucial part the way they could a meteor headed for Earth.

williamwright

This video is the best at making the case; thank you so much, this is deeply meaningful. If we get through this, one day they'll look back at the cornerstones, and this could be viewed as one of them.

jorgesandoval

The guys at the end of the video are both freaks! Dear god, the first one has a little baby and feels like he has cancer; the second one tells the young ones not to expect a long life. What are those guys trying to do? A mass suicide? Crazy. AI is very concerning and needs to be discussed, but this ending is very hysterical and alarmist.

AndreGuedesCartoon

I love the collation of these interviews and perspectives, though it leans heavily towards AI being a net negative. I have heard other perspectives that do give us hope; it would be nice to hear more of those.

ebswift

This is a great summary of where we are today. It would be great to see a follow up with some of the scenarios leading to extinction.

HC-xlen