Why you shouldn't believe the AI extinction lie

They warn us that future artificial intelligence will wipe out humanity. This may be a lie with ulterior motives.

There is a powerful motivation to keep you thinking that AI is an existential threat. That we should treat it with the same level of urgency as a nuclear war or a global pandemic.
And yet, we shouldn’t stop developing AI. We should accelerate it as fast as we can, as long as it’s the select few good guys who get to control it. We must not let AI fall into the wrong hands.

But there is growing opposition to this movement. A conflict is arising between those who want to keep AI closed off and tightly controlled and those who want to leave it open and accessible to all. Even though both sides claim they are doing this for humanity, only one of them is right. This is an arms race over who gets to dominate AI development and who will be left out.

Sources

Credits
Music by White Bat Audio

Follow me:

The footage and images featured in the video were used for critical analysis, commentary, and parody, which are protected under the fair use provisions of the United States Copyright Act of 1976.
Comments
If AI is too dangerous for open source AI development, then it's 100 times too dangerous for proprietary AI development by Google, Microsoft, Amazon, Meta, Apple, etc.

LibertyEver
In the US it's called lobbying; the rest of the world calls it corruption. They pulled surprise mechanics before EA.

weshuiz
Most of the regulations have nothing to do with preventing AGI from destroying humanity.

Love-xhdg
Never would have imagined Zuck to be the Llama-AI-Gaib in this timeline

settingsun
I'm a physicist who has been studying AI and its problems in depth for the past two years. I agree with most of what you say in this video, but the conclusion is not correct. We should all be concerned about what Silicon Valley is building. While we don't know whether the current paradigm of AI will scale to super-intelligence, we do know that if we create anything super-intelligent anytime soon, we have no way to understand or control it, regardless of who built it.

These companies do not have humanity's best interests at heart: OpenAI just disbanded its entire safety team after its leaders and top scientists quit because they lost faith that OpenAI cares about safety. The answer to the danger of AI is not to give companies regulatory capture, or to open-source it; it is to not FUCKING BUILD IT BEFORE WE UNDERSTAND WHAT WE ARE DOING.

This is the point in the movie where the scientists are all warning that we shouldn't build the torment nexus and everyone is like "let's build the torment nexus!", and we make fun of the writers for how unrealistic it is that everyone just ignores the scientists. Please, everyone, for the love of humanity, just listen for once; we need to get our shit together or we will all lose. The only hope we have at this point is to regulate the bottlenecks of advanced chip manufacturing and prevent ANYONE from training even bigger and more capable AI models before it is too late. AI has the potential to fundamentally reshape everything for the better if we use it wisely, but the current Wild West arms race is going to go terribly for everyone.

maxwinga
Of course they want to kill open source AI. Who would pay their monthly subscriptions when you can run a smaller model on your own hardware, with no internet connection required, and still get very good results?

that_is_not_me
It's the first time I've seen the Hated One bringing hopeful news and I welcome it. For the question: release the sinkhole!

DavidPereiraLima
Just finished reading an interesting paper "The mechanisms of AI hype and its planetary and social costs". It's on Springer on open access.

cristianst
It doesn't matter how strict it is; it's about safety.
Punishments should be scaled, and large companies should be held to the same accountability.
You act like scientific breakthroughs aren't a reasonable expectation. Just because things are slowing down now doesn't mean it won't get worse in the future, or that the risk isn't there.

LBoomsky
I feel that attempts to regulate the ability to develop AI are akin to regulations against piracy. They don't work, and never will, because the people making the rules don't understand how any of it actually works, or why anyone actually does it.

THE-X-Force
I'm glad Meta is putting out really good open source models

young
I kinda dislike how AI has become almost synonymous with magic in the minds of people who have never even coded in their lives.

What's even worse is the business people purposefully trying to scare the public with threats of doom to keep the hype cycle going.

fullmetaltheorist
When computers were first being developed, someone suggested that there was a global market for perhaps five computers, and that they would be owned by the richest kings on the planet. The elite loved this vision and are constantly looking for a return to that structure.

capitalistdingo
The most important part of open source is the freedom to modify and create derivatives. Without it, products can be free and even have their source code available, but they are not truly open source; they are better termed "source available."

talhaakram
If true AI is created, it could be really bad. But we aren't anywhere close to creating "true AI" that can think for itself. What we have now is just a series of computer programs that perform predictable tasks, even if they sometimes do so in unpredictable ways. It only does what it's programmed to do.

shaunrosenberg
Please make a video on the longtermist effective altruism movement! Would love to know your thoughts on it.

priyanshipal
If one scenario will lead to a potential extinction, it's when AI is only in the hands of a few.
Imagine a super AI manages to "break free." There would be almost no way to stop it, since all development happened behind closed doors and hardly anyone could do anything about it.
Now imagine AI was open source and everyone could develop AI as they pleased. Even if someone managed to make a harmful AI, there would be plenty of other people able to make counter-AIs, or who had already made an AI specially designed to fight bad AI actors.

dafff
My first reaction to reading this was that it's just a PR stunt to make AI appear more important than it is. How are AI developers qualified to make predictions about AI's effects on the world, which have to do with sociology, economics, etc.?

ths
I just started catching up with the new vids after some years of not watching your content. I used to follow your series on degoogling and related topics. I like how you've changed your tone of voice to a more friendly, relatable one, which I think will greatly help get new people hooked and actually start caring about the important things you make videos about.

MetaPikachu
I'm so sick of AI grift omg, preach mate!

EdLrandom