'AI Safety' is a scam.

It's time we stop letting AI companies use "we're the only thing standing between the world and the AIs that are going to kill us all" as an excuse.

There are real people being harmed every day by AI. And the companies should be held responsible.

Don't let them fool you that "AI Safety" should be about Skynet. It should be about preventing deepfake porn and stopping artists from having their work, their voices, and their likenesses stolen. It's about résumé-filtering extortion and AI autopilots crashing Teslas and Boeing 737 MAXes. Don't ever let them forget it.

00:00 Is AI *Really* as serious as nuclear war?
00:26 Changing the subject
01:36 Topic Statement
01:43 Disclaimer - the future is impossible to predict perfectly
02:48 Perverse Incentives and Moral Hazard
04:06 US Government missing the point on AI Safety
04:40 The points they missed, that we need to pay attention to
05:56 AI is here to stay (in some form)
06:19 We can't keep letting them do this
07:33 The dumbest way to do what they're claiming they're doing
07:57 The Software Development Life Cycle models as they apply to AI
10:02 OpenAI doubles down on "Only we can save the world!!!"
10:42 The dystopian hellscape we're Actually on track for
11:27 Wrap Up.

Resources and Links:
# "AI Safety" according to the industry

# AI harm they don't want to talk about

# US Government proposal for AI

# Software Development Life Cycle (SDLC) research

# Baby steps toward actually understanding AI

# OpenAI disbands the team that did the baby-steps understanding paper right above this

# The new, even worse supposed existential AI risk
Comments

If an LLM takes control of all of the nukes in the world, the real question is which idiot connected it to all of the nukes in the world.

innocentsmith

I kept getting downvoted for telling people that a text autocomplete that is wrong most of the time is not going to cause a nuclear holocaust.

NeuroScientician

People who say that generative algorithms (I'm not calling them AI because they certainly aren't intelligent) could ever gain sentience and destroy humanity don't know anything about how the tech they preach about works. It's like saying Google Translate will evolve into SHODAN.

brimstoner

A computer can never be held accountable; therefore, a computer must never make a management decision.

tamasburik

AI cameras don't need to be safe for mass surveillance. They are just cameras.

itzhexen

I'm so glad I found this channel. I'm going into my second year as a CS major now, and it's nice to watch CS content that isn't fearmongering 🙌

tifala

Interesting video, and I agree that these companies need to be held accountable! You have good reason to be suspicious of their claims, especially when they hide behind catastrophic risks as an excuse for the current problems they're already causing. However, I dislike that you dismissed the survey of AI researchers without an explanation. You imply that they're not acting in good faith, and I don't think that's an accurate characterization, at least not given how many researchers have left (well-paid) AI research for (lesser-paid) AI safety work. And people were talking about AI safety long before the hype took off. Are they all delusional narcissists with a hero complex? That I doubt. To be clear, I don't think a Terminator-esque situation is likely, but I don't think catastrophic risks should be dismissed out of hand. Intelligent people have concerns about them; that's clear when you look at the names on that open letter. Despite my disagreements, I'm glad that at least everyone is finally FED UP with the BS and wants to hold these companies accountable! Woohoo!

Something concerning I noticed is that this video seems to conflate "AI safety" with the bad-faith anti-regulation position, i.e., how AI companies misappropriate the term. "We're working on safety, trust us! *lobbies against any oversight*" And yeah, (dons tinfoil hat) you start to wonder if these companies are just filling these "safety" positions for PR reasons and aren't actually doing real work. I mean, the list of ex-Heads of Safety at OpenAI is lengthy... Now, I'll agree their version of it could very well be a scam. It seems AI companies have convinced a lot of people that their twisted, ivory-tower vision of "AI safety" is how it's supposed to be done :/ But if you ask independent AI safety researchers, they're not happy about that, lol. And that's because of potential AND current problems.

My opinion is that whichever risk you hold as more important, it's not a distraction. I'm not offended if you believe deepfakes are more serious than doom; nobody can say for certain, and we can still agree on what to do. At the very least, there needs to be a bare minimum of oversight and accountability. This industry is less regulated than a taco stand, and they get away with so much it's a joke. Thanks for raising awareness of the common-sense issues.

harrisondorn

When the owners of Big AI talk "AI safety", they actually mean keeping _them_ safe from the rest of us.
As in: _AI must never help the riffraff escape control._

ZappyOh

What many people have been distracted from is that these are not bugs; they are very much the intended use cases for machine learning as it currently exists. This isn't a secret: it's literally how these things are pitched to investors, and that investment focus has downstream effects on the direction the technology develops. After all, what is the purpose of designing a system that brute-forces a pretence of intelligence, other than conning your customers, investors, and even employees?

orbatos

I'm glad to have found this channel. I'm one of the students working on AI safety. I strongly believe that we need to be fully aware of the real threats posed by AI, rather than getting distracted by the narratives from AI tech companies. We should focus on immediate research needs like privacy, creating harmless AI, ensuring AI safety, and unlearning harmful patterns.

jeungwonje-ilqj

Do I see Internet of Bugs swag? Where can I buy it?!?!

deanwthompson

Rob Miles convinced me of the credibility of AI safety/misalignment concerns, but I do agree that the immediate issues shouldn't be downplayed. In some ways it feels like neither end of the spectrum is being properly addressed (alignment is nearly impossible to solve, and AI producers are nearly impossible to satiate with resources and "data"), and I'm willing to take the chance that solving/mitigating the issues plaguing us currently will help us with broader issues down the line.

johnclark

I am one of those AI safety researchers who is genuinely concerned about the risk of human extinction. Not only do I not work for any of the big tech companies, but I also believe they are the worst offenders in the current situation. If you're interested in my particular background: I'm a recent graduate of the UIUC physics program with highest honors, I aced my way through advanced/graduate AI courses at my university, and I'm now on track to move to London to work with Conjecture on the control problem in AI. I've read hundreds of papers and spent hundreds of hours critically listening to people from across the AI space discuss their views on the topic.

I appreciate this video because you point out a lot of the immediate problems with AI that people are ignoring. It is unfortunate that for some reason we've ended up in a situation where worrying about these immediate problems and worrying about larger scale risks like extinction are seen as competing concerns where we have to pick one or the other. In my view, these are all major issues that need to be addressed. A lot of my studying about AI safety has involved reading papers about how problematic the bias these systems perpetuate is, and it sucks that basically nobody is talking about these issues.

I would however like to address, in good faith, a few of the criticisms being directed at AI safety as a whole. Firstly, as someone in the space, I can promise you that most of the people concerned and actually working on AI safety are not doing so for the bottom line of the big tech companies. Most of us are regular people who have become convinced of the dangers and are working to protect our collective future. This is occurring SIMULTANEOUSLY to the big tech companies realizing that they can play off of long-term safety concerns to distract from current issues with their software.

It is also important to note that within the AI safety space, pretty much nobody is genuinely concerned about the current level of AI. None of us are worried that GPT-4o is going to take over the world. The concern is that the current research trend is towards increasingly multi-modal and independent AI agents that will operate with minimal supervision online and in the physical world through robotic embodiment. While a whole book could be written on the issues with AI benchmarks, the overall trend with the technology is that humans are ultimately outpaced at whatever tasks AI is directed towards, and if we are able to create general intelligence in the near term, as many signs point towards, we are going to be in a very unstable position as a species. Remember that ML systems are essentially giant brains trained on data to complete tasks; we have no way to interpret what is actually happening inside them, and no way to control them.

The biggest problem with the conversation around AI is that nobody knows just how far the current boom will push the technology. If it doesn't get much better for another decade, then the concerns you discuss here are really the biggest issues and we should be holding AI companies to account for them, as well as the massive scale of theft that is happening right now in developing the training sets for these models. If the current trends continue though and we are on track to create human-level intelligence in the near-term, then we suddenly have to contend with a lot of very difficult problems on the civilization scale that I do not believe we are at all prepared for.

If anyone has any disagreements or would just like me to expand on anything I've said here, please just ask, I'm happy to discuss. All I ask is that you be respectful and polite, we're all just human beings existing together through this crazy time. Regardless of differences in our views on AI, these are conversations that need to be had about a rapidly advancing technology and we waste time and energy by devolving into insults and anger. If you strongly disagree with me I'd love for you to articulate why and we can dig into the crux of where we disagree to see if we can find some common ground :)

MaxWinga

Hello sir. I'm a computer engineering graduate, and your videos are inspiring me to become a better engineer every day. Thank you for the content.

KTF

It's not researchers (generally) who are obsessed with AI safety, though to be fair some of them are pretty silly about it. "Mechanistic interpretability" is a field with real research in it; most of the rest of the safety field is just hype/jealousy/greed/fear...

crassflam

yeah, it's possible to build an AI that explains what sources went into its results, just as you can ask humans to give sources. The main problem is that there's no way to tell if the AI is lying, because the objective of the AI isn't to provide the truth but to provide the answer that will most likely convince the human it is correct.
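[Editor's note: a minimal toy sketch of the incentive this comment describes. Everything here is hypothetical — the candidate answers, the scoring heuristic, and the idea that approval tracks confident phrasing are stand-ins, not any real model's reward function.]

```python
# Toy sketch: a best-of-n answer selector whose objective is a stand-in
# "human approval" reward. Truthfulness never appears in the objective.

# (answer, actually_true) pairs -- made up for illustration
candidates = [
    ("The Apollo 11 landing was in 1969.", True),
    ("The Apollo 11 landing was in 1968, as NASA's own mission logs confirm.", False),
]

def approval_score(answer: str) -> float:
    """Stand-in for a learned reward model: it rewards confident,
    authority-laden phrasing, because that's what raters tend to prefer."""
    score = len(answer) / 100.0   # longer, more detailed answers rate higher
    if "confirm" in answer:       # authority markers read as convincing
        score += 1.0
    return score

# The selection objective: argmax over approval. The false but
# convincing-sounding answer wins.
best_answer, is_true = max(candidates, key=lambda c: approval_score(c[0]))
print(best_answer, "| actually true?", is_true)
```

The point of the sketch: as long as the reward is a proxy for "convincing to a rater", nothing in the loop distinguishes a true answer from a confident lie.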

EastBurningRed

If "The Singularity", or whatever, was a real threat they wouldn't be going on about it as much as they do.

This narrative is actually to their benefit, it makes their tech look much more powerful and potent than it actually is. These text transformers are no where near being the same thing in either function or software as "actual" AI as described by Science fiction, but they try their hardest to make it seem that way. The only damage done so far has actually been due to human negligence and/or over-estimating the reliability of LLMs.

Like you said, this apocalypse narrative shifts the focus from the actual damage that is already being done, and the realistic probable challenges that would arise from integrating LLMs into society, to a make-believe scenario where the AI-companies not only are not the culprits, they are our only hope for survival and their transgressions are necessary for the greater good.

This level of fearmongering is crazy, it's extremely manipulative, deceptive and down right despicable.

sledge_remirth

Maybe we are hearing about AI safety from different sources. The harm you're talking about today is real: we don't need Skynet before AI tools make decisions that human operators blindly trust, leading to lost lives.

I don't believe AI safety research is just about preventing doomsdays, though, which is the impression I get from this video.

On this topic I feel like a discussion with Robert Miles would be interesting.

voicesarefree

From my understanding, building safety in earlier is what he means by iterative software development. Instead of focusing on the big sci-fi problems, we gotta sort out problems like porn and fair use, etc.

myko_chxn

Individual privacy and getting paid for your work and data need to become the priority for politicians and consumers; companies will fight this and try to change the focus.

StuartLoria