Why AUTO is the BEST AI Villain (And Why Most Others Fail)

preview_player
Показать описание
AI villains have become a staple of all genres, but they seem to have a huge presence in cartoons and animated films. Today we’re taking a look at AUTO from WALL-E and pitting him up against various other animated AI villains to prove once and for all why he’s the best. We'll also look at the Fabrication Machine from Nine, Ares from Next Gen, and PAL from The Mitchells vs The Machines.

Clips Used

WALL-E

Nine

Next-Gen

The Mitchells vs The Machine’s

The Iron Giant

SpongeBob

Other sources

Рекомендации по теме
Комментарии
Автор

One big example of AI misunderstanding the instructions: In a tetris game the goal was to be “alive” as long as posible… so the AI paused the game.

germax
Автор

I'd love to see a movie where the AI is the antagonist but its only dangerous by accident because of how hopelessly incompetent it is. It somehow gained sentience through [insert vauge science here], but its also running on like, Windows 1.5

negativezero
Автор

The end credits do actually show the humans thriving, though auto was wrong it wasn't his fault, he was acting on outdated orders, he even shows the captain the recording of said orders. Auto was ordered to keep the humans on the ship no matter what and so thats exactly what he did. The plant no longer had significance to the equation.

turtletheturtlebecauseturt
Автор

I've always interpreted Ares as a villain with a misguided view of what "perfect" means. Justin's first words to it were "You're perfect. Now go make the world perfect. " And Ares, being an AI, made the logical leap that because it was deemed "perfect" by its creator, everything that wasn't a cold, calculating machine like itself, namely humans, needed to be purged to create what it believed to be a perfect world.

ZackMorrisDoesThings
Автор

I once heard about an AI that was trained to tell you if a picture was taken inside or outside.
It worked surprisingly well.
Until its engineers found out that it does so by simply looking if the picture contains a chair or not.

WadelDee
Автор

Fun fact about the whole "AUTO was right and the humans probably didn't survive after they landed" situation: a lot of people in the focus groups had that same thought, and that's why they added that animation of the humans and robots working together to recolonize in the end credits, just to reassure them

everysecond
Автор

I think a very important distinction is between neurons themselves and neuron-like functions. All the organic chemicals (adrenaline, dopamine, etc.) are essentially very complex packets of information that our brains can send around as needed to get our behavior. What really matters to make a self-aware being (in my opinion) is a super complex way of processing information and adapting to that information internally. Our neurons change and grow and atrophy depending how we think and what environment we’re in. I saw an article that used pulses of light in a very similar way to neurons (ie, pulses of light can trigger another pulse of light in response depending on the circumstances, just how we use pulses of electricity). If you can make a complex and flexible enough artificial neural net, I think it could experience emotion just like us (this would require essentially recreating a human mind in an artificial substrate, making “neurons” with the exact same behaviors). In this way, you could have a huge variety of robotic characters, with as familiar or alien characteristics as you please (with the right background lore, and any good worldbuilder would see what the side affects of such tech is. If these things act like neurons, could you have a cellular interface between them and repair brain damage with them? How advanced is the chemistry of this world and what does it look like? Etc)

If the AI is non-sentient and operating like a blackbox, it could pick up our behaviors without actually being sentient. You could have either a sentient synthetic being making its decisions against humanity, or a complex series of algorithms that’s had our behaviors imprinted onto it. A scarier AI than a sentient one to me is a malfunctioning dumb one, the classic paperclip maximizer that’s spiraled out of control.

hummingbirb
Автор

26:26 A game called Soma actually did this really well, without getting into much spoilers it kept humanity alive using machines but it didn't know what it meant for humans to live/ to be human.

MONTANI
Автор

I feel like Glados would ideally get a pass on the whole “no emotion” thing because what a lot of people miss the mark on is that she isn’t really an AI, but a human consciousness *turned into* an AI. It’s only really at the end of Portal 2 when she truly becomes a full on robot

Anonymous-
Автор

I always thought it was stated in the film of 9 that the scientist modeled the machine off of his own mind and we even see him put a piece of his soul inside it. Then when they take the scientist away we see that it holds on to him like a child would a parent or authority figure. And we do see it has emotion because it was most likely not fully robotic because of the soul fragment it had. I just saw its motivation int the movie was to bring all of the scientists soul fragments back together to become a "whole" being.

ShankX
Автор

It is not that auto wants to save the humans, it is that he was programmed to NEVER let the humans return to Earth.

Sausages_andcasseroles
Автор

The scientist in 9 who created it delved into alchemy and "dark science" to make the AI as well as the other characters in the film. There was also a theory that the scientist in question put a piece of himself into it, which is why it freaked out the way it did, and him taking the dolls back was The Machine trying to make itself "whole" again.

meekalefox
Автор

I love how AUTO isn't really a villain by also being a villain, if that makes sense. After showing the captain the secret video recording, it was shown that he was just doing what he was programmed to do which is keeping everyone safe and not returning to earth even if it means hurting someone else to stop going to the planet and most people forget about that. He was a villain because he was programmed to keep others safe. Its the 'Try to be a hero, but end up looking as a villain.' thing

TheFloraBonBon
Автор

Solid video. One point however. Auto had nothing to do with humanity becoming fat, lazy, and complacent. They did that to themselves. Repeatedly ignoring the problem and returning to comfortable ignorance. It’s one of the biggest messages of the movie.

One thing you didn’t mention about Auto that also makes him so convincing as an AI is that it never disobeys an order. Unless of course, the order is contradicted by another, superseding order. Even when it is to its disadvantage, Auto always obeys the Captain.

WelloBello
Автор

It's a shame you didn’t cover HAL 9000 at all. I know you were focused on animated films but two of those films reference HAL and you did talk about Terminator a little.

HAL is pretty much the perfect AI antagonist. All his actions are caused not by emotion but by conflicting orders. There's a great scene in "2010" where one of the computer engineers that designed HAL figures out what went wrong and is like: "They massacred my boy! He's a computer, of course he doesn't understand how to tell white lies and balance conflicting priorities!"

oliviastratton
Автор

That is one thing that sort of bugged me about Walle. In that did we just FORGET about those dust storms that were still frequent?! I'm no geologist or meteorologist, but I have this feeling that that isn't something that just fixes itself over night. Even if Earth had reached that point where photosynthesis was possible again, it would still take time, lots of time before those storms stopped.

JaguarCats
Автор

Like, Auto was simply following his directive. He was coded and created to follow his directive no matter what. Unlike other AI villains, he didn't turn against his orders, he didn't suddenly decide to become self aware and kill everyone and do as he wishes.
In fact, WALL-E is the AI who gained sentience and emotions and started going against orders.

aquaponieee
Автор

On the subject of “any animal in the snow is a wolf” problem, apparently a problem with early chess-bots was a tendency to intentionally kill their queens in as few moves as possible right at the start of the game. The reason? Of the thousands of Grand-master level games they had been fed to teach them chess, most ended with the winning player sacrificing high-value pieces in exchange for a checkmate in the endgame, and therefore there was a very strong statistical correlation between intentionally loosing you queen and winning in the next five moves, and they picked up on this.

aidanfarnan
Автор

In an online roleplaying game called Space Station 13, there is a role players can take, called "AI". Most AI start with the three Asimov Laws, of "Prevent human harm", "Obey Humans", and "Protect yourself", and AI players are tasked with obeying their laws and generally being helpful to the crew of their space station. The problem emerges when an AI is made to go rogue, or malfunctions.

Specifically, AI may have new laws added, laws removed, or laws altered, and a famous and extremely easy way for an Antagonist to turn an AI into something that helps them hurt the station or destroy it is to implement "Only Human" and "Is human harm" laws. Asimov AI are only obligated to protect and obey humans.

So if another player instills them with a fourth law, "Only chimpanzees are human", the AI is now capable of doing anything to any members of the crew, if it protects or serves chimps, because they (the crew) are no longer human.

Likewise, if a fourth law is added that says something like "Opening doors causes human harm", the AI is obligated to prevent the opening of doors at all cost, through both action and inaction.

Lastly, one may attempt more clever additions, such as reversing the order of laws. An AI must protect itself, it must obey humans, unless that would interfere with protecting itself, and it must protect humans, unless doing so prevents it from obeying orders or protecting itself.

In that sense, I feel that the ideal AI antagonist must have a human or nature-borne deuteragonist. The machine will do as it is designed to do, under normal circumstances. The most common AI villain, then, is doing what it was designed to do, and nothing more.

An AI can have emotions, emotions are simply strategic weights in the end that serve the purpose of altering conclusions based on incomplete data, but it should always go back to what it was designed to do. An AI becomes violent because that serves its goals. It becomes manipulative because that serves its goals. Much like a living creature, whose "goal" is survival, and successful reproduction, an AI is structured in such a way that its cognition serves those goals.

shadestylediabouros
Автор

I really do think Auto wasn't the villain we have to remember that he was build/program to satisfy, protect, and attend to the needs of the humans on that ship he was just doing his job what he was program to do by whoever created him so, when Wall-e came with the plant I don't really think he was being destructive, but mainly just following protocols he was program to follow

animeadventuressquad