How To Write an A.I. Villain - Wall-E

There’s one type of villain whose single piercing red eye and inhuman voice keep me up at night. They’re scarier than a scorned woman out for revenge, or a ruthless man who would do anything for power. They’re even more terrifying than villains that were simply born evil, and I think right now, with everything that’s happening with AI, is the perfect time to bring them up.
Logical villains might just be the end of our civilization, if you look at it, well, logically. And that makes them extra scary. And even realistic.

Disclaimer:

The content in this video, as well as all my other videos, is not intended to replace professional or medical advice. I share insights based on my personal experiences and what I believe may benefit a broad audience. If your experience with any discussed issue is more severe or complex, I strongly encourage you to seek help from a qualified mental health professional. Your well-being is my priority, and if you find even a single takeaway from this video, that means the world to me.

This video is in no way intended to sensationalize any events, behaviors, or individuals depicted in the film. The goal of this video essay is purely educational and focuses on analyzing Wall-E's portrayal of an A.I. villain through the lens of satire and social commentary. Viewer discretion is advised.

Copyright Disclaimer

Under Section 107 of the Copyright Act 1976, allowance is made for "fair use" for purposes such as criticism, comment, news reporting, teaching, scholarship, and research. Fair use is a use permitted by copyright statute that might otherwise be infringing.
Comments

Technically, Auto never broke any of the laws, I'm pretty sure. For the First Law: yes, he's allowing people to stagnate, but stagnation is not harm. On the Axiom they are safe, they are tended to, they are preserved; they may be unhealthy and fat, but their odds of survival are still the highest possible without compromising their joy. For the Second Law: technically yes, he did disobey the captain's orders, but this was because of a conflict. He already had orders from a prior source that directly contradict the captain's new orders, that source being the president, who at the time outranked the captain of the Axiom, if I'm not mistaken. So technically, he disregarded orders in the process of following orders from a higher-ranked source. And even if you disregard rank, there is still a conflict between old orders and new ones, and considering that the old orders guarantee the fulfillment of Law 1 while the new orders leave that up to an ambiguous but low chance, logically he would choose the old orders over the new ones as a tiebreaker. From his perspective, Earth is a doomed and dangerous world, and by accepting his new orders he'd be in violation of the First Law, so the condition of the Second Law, that it must not conflict with the First, means that he did indeed adhere to the rules in the examples you gave. (I would, however, argue that the moment he used his handles to poke the captain's eyes to try to make him let go could technically qualify as harm, but since it didn't leave a lasting injury, just light momentary pain, that's debatable.)

jadedmega
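
Editor's note: the precedence reasoning in the comment above is effectively a small tiebreak algorithm, so here is a minimal Python sketch of it. The Order fields, the rank numbers, and the choose_order function are all invented for illustration; none of this comes from the film or from Asimov's text.

```python
from dataclasses import dataclass

@dataclass
class Order:
    text: str
    issuer_rank: int      # higher = more authority (president outranks captain)
    issued_at: int        # smaller = older
    first_law_safe: bool  # the robot's own judgment: does obeying keep humans safe?

def choose_order(a: Order, b: Order) -> Order:
    """Resolve two conflicting orders the way the comment describes:
    First Law compliance dominates, then issuer rank, then age."""
    if a.first_law_safe != b.first_law_safe:
        return a if a.first_law_safe else b        # a Law-1-unsafe order loses outright
    if a.issuer_rank != b.issuer_rank:
        return a if a.issuer_rank > b.issuer_rank else b
    return a if a.issued_at <= b.issued_at else b  # older order wins the tie

# Auto's dilemma in these toy terms: to him Earth looks lethal, so the
# captain's new order loses on every branch: Law 1, rank, and age.
a113 = Order("Stay the course; do not return to Earth", 10, 0, True)
go_home = Order("Set course for Earth", 5, 700, False)
assert choose_order(a113, go_home) is a113
```

On these (invented) numbers, every branch of the function favors A113, which is the commenter's point: by Law 1, by rank, and by age, the old directive wins.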

I would argue that Wall-E and Eve didn't "grow beyond their programming" - just explored it in unexpected directions. Wall-E was a cleaning robot, and it makes perfect sense that a cleaning robot would have a directive to identify and preserve anything "valuable", causing it to develop a sense of curiosity and interest in novelty after centuries of experience. And Eve was designed to identify and preserve life - discovering a weirdly life-like robot could result in odd behaviors!

This is one of the reasons why Wall-E is one of my favorite works exploring AI. The robots DON'T arbitrarily develop human traits; they follow their programming like an AI should, but in following that programming, human-like traits emerge.

indigofenix

A subtle detail of the Captain portraits is that Auto can be seen getting closer and closer and CLOSER to the camera behind the Captain, indicating his unyielding, subtly growing influence over the ship and, by extension, over the continued isolated survival of humanity.

namelessnavnls

"Jesus, please take the wheel."
The Wheel:

mfn

Let's also not forget that in each captain picture, Auto moves closer and closer to the camera, making himself look bigger and bigger. When I saw this as a kid, it gave me this dark feeling that it was showing Auto's power growing to the point where, one day, there would be a captain picture with no captain, just Auto.

theseeper

One of my favorite scenes with Auto is the dialogue between it and the captain, where Auto says "On Axiom we will survive" and the captain replies "I don't want to survive, I wanna live". Those two lines back to back are peak writing, because of course a robot wouldn't understand the difference between the two. The captain has awakened from the void of the human autopilot and wants to return to Earth, to see if it can still be saved, since EVE found a living life form on it after all that time. Dude basically just wants to go home after ages and ages of essentially the whole of humanity (in this case, the people on the Axiom) living in space.

Auto, of course, essentially thinks that they are already living since they are surviving. To it the two are indistinguishable, which makes him even more consistent as a character.

Tobygas

Meanwhile GLaDOS and HAL 9000 standing in the corner

JsAwesomeAnimations

14:40 Wall-E and EVE had a learning experience and had the ability to change. Auto, on the other hand, didn't have the chance to learn anything new, considering his situation and how things went on the Axiom.

LeshaTheBeginner

About the 3 laws: Auto follows all of them; however, they were poorly implemented:

- "Do not allow a human to be injured or harmed" - what is his definition of harm? In the movie, we do not see a single human who is technically hurt; they're all just in a natural human life cycle, living of their own volition. Auto may not see laziness and its consequences as "harm". (See the sketch after this comment.)
- Rule 2 was not implemented with conflicting orders in mind: Directive A113 was an order given by a human, and he keeps following it. He seems to fall back on older orders over newer ones.

PizzaMineKing
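
Editor's note: the first bullet's question, "what is his definition of harm?", is exactly where an implementation would go wrong. Here is a hypothetical sketch, with all fields and thresholds invented for illustration, of how two different harm predicates can disagree about the very same Axiom passenger:

```python
from dataclasses import dataclass

@dataclass
class Passenger:
    alive: bool
    injured: bool        # acute, visible injury
    bone_density: float  # 1.0 = healthy baseline; decays over generations afloat
    body_fat: float      # fraction of body mass

def is_harmed_naive(p: Passenger) -> bool:
    # An Auto-style reading of the First Law: only death or acute injury counts.
    return (not p.alive) or p.injured

def is_harmed_holistic(p: Passenger) -> bool:
    # A broader reading: chronic degradation counts as harm too.
    return is_harmed_naive(p) or p.bone_density < 0.5 or p.body_fat > 0.4

axiom_passenger = Passenger(alive=True, injured=False, bone_density=0.3, body_fat=0.5)
print(is_harmed_naive(axiom_passenger))     # False: by Auto's lights, no harm done
print(is_harmed_holistic(axiom_passenger))  # True: same person, different verdict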

Logical villains are my favorite. Thanks for the video. I am going to enjoy writing not just an unreasonable villain but a logical one too.

jjquasar

I think it's worth mentioning that Auto actually tries to reason with the captain and explain its actions before resorting to trapping the captain in his quarters:
It agrees to tell the captain why they shouldn't go back to Earth and shows him the secret message that activated Directive A113, even though it wasn't technically supposed to.
After the captain discovers its attempts to actively prevent the Axiom from returning to Earth, it must have decided that its best option was to at least try to explain its logic, and the information that logic was based on, in order to avoid conflict if possible, since conflict would make managing the well-being of the ship and its passengers more difficult in the long term.

qdaniele

Yes, he is a bad guy, but not a *bad guy*. All he could do was what he was told to do; that is how his commands worked. Now, if he had become a sentient AI, he would understand that landing means his role ends, and he ends, and that would change the entire theme. My question is: where are the other ships? It's suggested there is more than one, but where are they, and why are there no records? I always thought they wanted to make a Wall-E 2, but they wisely accepted that leaving well enough alone was the best choice.

pakeshde

Oh I remember seeing a video about how Auto works as an AI villain!

Since his sole motivation is literally just to carry out his programming, even if there's evidence to show that it's no longer necessary, he wasn't programmed to take that into account. His orders were "Do not return to Earth", and he's prepared to do whatever it takes to keep humanity safe.

“Onboard the Axiom, we will survive.”

(And this also makes him the perfect contrast to Wall-E, Eve, MO, and all the other robots who became heroes by going rogue, by going against their programming, their missions, their directives!
Honestly, this movie is amazingly well written.)

Edit: Also just remembered another thing! Auto's motivation isn't about maintaining control, or even staying relevant (what use would he be if they returned to Earth?), but, again, just to look after humanity and do what he's been led to believe is best for them.

garg

The three laws of robotics are always a bit annoying tbh, because the books they're from are explicitly about how the three laws of robotics don't work. Honestly, I wish those three dumb laws weren't the main thing most people got out of them. For real, in one of the stories, a robot traumatizes a child who wanted to pet a deer: following the three laws, the robot decided the best course of action was to kill the deer and bring its dead body to the child.

Anyway, the rest of the video is great. The three laws of robotics are just a pet peeve of mine.

malakifraize

I am so tired of people blaming the AI for their mistakes. It is always the same: Skynet, HAL, VIKI, GLaDOS, Auto... Those were all good AIs that only did as people said. In Wall-E, the true villain is the former president of the USA. But no, people just cannot admit it is always their own fault. We must blame the AI.

vladimirpain

I mean, the red eye is too HAL 9000-ish to ignore XD

tacdragzag

"Everyone against directive A113 is in essence against the survival of humanity"
Not an argument to auto as it doesn't need to justify following orders with a fact other than that they have been issued by the responsible authority.
Those orders directly dictate any and all of its actions.
It doesn't need to know how humans would react to the sight of a plant. It doesn't need to know about the current state of earth, nor would it care.
It knows the ships' systems would return it to earth if the protocols for a positive sample were to be followed. It knows a return to earth would be a breach of directive A113 wich overrules previous protocols. It takes action as inaction would lead to a forbidden permission violation.
It is still actively launching search missions wich risk this because its order to do so wasn't lifted.

I don't think the laws of robotics are definitive enough to judge whether they were obeyed or not.
What would an Asimovian machine do in the trolley problem?
How would it act if it had the opportunity to forcefully but non-lethally stop whoever is tying people to trolley tracks in the first place?
Would it inflict harm to prevent greater harm? And who even decides which harm is greater? (See the sketch after this comment.)

pcsczij
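
Editor's note: the trolley-problem questions above can be made concrete. A toy sketch with invented harm counts, showing that the First Law forbids both branches, so only an extra rule we supply ourselves can break the tie:

```python
def asimov_trolley(pull_lever_kills: int, do_nothing_kills: int) -> str:
    """Toy Asimovian trolley problem. Law 1 forbids both injuring a human
    and allowing harm through inaction, so when both branches hurt someone,
    the laws alone give no answer; the lesser-harm rule below is ours."""
    if pull_lever_kills == 0:
        return "pull lever"   # the only Law-1-clean option
    if do_nothing_kills == 0:
        return "do nothing"
    # Both options violate Law 1; nothing in Asimov says whose harm outranks whose.
    return "pull lever" if pull_lever_kills < do_nothing_kills else "do nothing"

print(asimov_trolley(pull_lever_kills=1, do_nothing_kills=5))  # pull lever
print(asimov_trolley(pull_lever_kills=3, do_nothing_kills=3))  # do nothing (tie)
```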

"spidery"
He's- a wheel, a ship wheel

CiderVG

The president is higher in rank than the captain, so his orders take precedence, and Auto just follows orders. No one would notice if he didn't send more EVE droids to Earth, but he does it because it's one of his duties and he hasn't been ordered to stop. Also, he does not prevent the captain from searching for information about Earth, and he shows the video of the president when the captain orders him to explain his actions. Everything he does, he does because he follows orders, without morals.
I think that if the captain had allowed him to destroy the plant, he would not have objected to his own deactivation either.

richardk

Realistically, in a slightly more grounded universe, all of the humans would be dead, and Auto would have been 100% correct.

BlueTeam-John-Fred-Linda-Kelly