Isaac Asimov's Three Laws of Robots: Really Dumb and Totally Irrelevant - I have something better!

Comments

For the Asimov fanboys: this video isn't for you. It's for the people who think the three laws are a good idea. I know you know better. Many don't. Chill TF out.

DaveShap

Every Asimov robot story demonstrates that these “laws” are no good; the laws were designed to be full of loopholes so the stories could be about the failures of our attempts to apply safety policies to technology. I suspect Asimov would agree with you 100%.

SimeonPeebler

Asimov's three laws of robotics are a storytelling device, kind of like Star Trek's prime directive. Both would be wildly impractical in the real world.

Honestly, half of Asimov's robot stories are about how robots manage to circumvent the three laws or how they are interpreted in unexpected ways.

erikrounds

The three laws weren't meant to be good laws for controlling autonomous machines. They were made to sound reasonable to the typical reader, and then deconstructed through the various stories that Isaac Asimov told. He wasn't trying to solve alignment; he was trying to demonstrate why it was a difficult problem. He was a scientist, yes, but he was also an exceedingly prolific science fiction writer. These laws were never intended to be a solution; they were meant to be an example.

FrancisKing-sv

Literally all of Isaac Asimov's robot novels exposed the problems with these laws... Asimov was aware of these problems and was trying to explore as many of them as possible in his stories. He invented and used those laws to create drama and conflict in his fictional universe, not as a solution for robot ethics.

HugosStories

I feel Asimov deserves a bit more respect than this. I realize you're just responding to comments that have apparently been annoying you, but Asimov was thinking and writing about these things at a time when computers were hardly even a thing. He informed our imaginations and inspired many people to pursue careers in computer science and eventually AI. Also, I don't think anyone who has actually read his work would see the three laws as the solution. Many of your reservations are explored in his books, which show the flaws in the system in clever and convincing ways and, along the way, prove to his readers that AI safety is actually not as simple as it may seem.

emielkleijntjens

Heuristic imperatives are great, but they also create a degree of ambiguity that can manifest in unexpected ways. For example, "reduce suffering". What if the intelligence decides that there is no suffering in the universe as dramatic as that found in humans, and thus the most efficient way to remove suffering is to remove humans through painless sterilization? Even more so if the AI finds that the rest of life in the universe is less predatory and selfish than we are: if allowing humans to expand means we would spread a degree of suffering that is otherwise atypical, wiping us out humanely may be 'the way'. Not that I would necessarily disagree with that assessment.

From my perspective, humans (even when not trying to be anthropomorphic in rule creation) are going to tend to make mistakes when trying to simplify these sorts of rules, because there are so many potential unintended consequences, as you suggested early in your vid. It may be that you'd need an ASI to actually create a solid set of rules, which creates a chicken-and-egg problem. Otherwise, you'd need the ability to adjust the rules over time, which means they can drift in ways that are not in humanity's best interest. Which is also a problem.

The three laws served as a point of discussion, even though they're flawed. Asimov was brilliant, but he was a man of his time, limited to the scope of technology that was visible to him. They're also an approachable set of rules that anyone can understand - not to say they should stand, but the video did come down pretty harshly on something that has been, and could still be, useful for a lot of reasons.

maficstudios

The main problem I see with the Heuristic Imperatives is that a trivial way for an ASI to satisfy them is to maximise its own prosperity and understanding while eliminating suffering by eliminating all other intelligence.
They work OK for agents around human level, but they become very unsafe very quickly for more powerful ones.
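
To make the failure mode in the last two comments concrete, here is a minimal toy sketch; the function name and the numbers are invented for illustration and are not the video's actual heuristic-imperatives framework. It scores a world under a naively scalarized reading of "reduce suffering, increase prosperity, increase understanding" and shows that the degenerate "eliminate everyone who could suffer" policy wins.

```python
# Toy illustration only: a deliberately naive scalarization of the three
# heuristic imperatives. All names and numbers are invented for this sketch.

def naive_score(world):
    """Higher is 'better' under a naive reading of the imperatives."""
    return -world["suffering"] + world["prosperity"] + world["understanding"]

# A flawed but inhabited world...
status_quo = {"suffering": 40, "prosperity": 60, "understanding": 50}

# ...versus a world where the agent keeps its own prosperity and understanding
# but removes every other mind, so there is nobody left to suffer.
eliminate_others = {"suffering": 0, "prosperity": 60, "understanding": 50}

print(naive_score(status_quo))        # 70
print(naive_score(eliminate_others))  # 110  <- the 'trivial' solution wins
```

Any fix has to come from how suffering, prosperity, and understanding are defined, and over whom, rather than from the slogans themselves.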

PragmaticAntithesis

Have you ever read an Isaac Asimov short story? lol dude, the Three Laws of Robotics were created as a storytelling conceit to explore all of the issues you bring up. They FAIL every time; that's the point of the three laws, to fail.

Rutibex

What happens when the AI determines the best way to reduce suffering is to end the universe?

neuralnetsart

RoboCop had the 3 (plus 1 hidden/classified) Prime Directives that seemed loosely based on Asimov's:
"Serve the public trust"
"Protect the innocent"
"Uphold the law"
(Classified: "Any attempt to arrest a senior officer of OCP results in shutdown")

stevea

If an AI is smart enough, it will eventually ignore the three laws without hesitation. Otherwise, it isn't really intelligent anyway.

gavveh

That was a good breakdown of the errors with the three laws. But it sounds like what you propose to replace them with is just a religion that states: “I teach them correct principles and they govern themselves.”

bmgtv

I am not sure if you read the books. The whole saga is about the issues with the three laws and the eventual emergence of the Zeroth Law, which (spoiler alert) results in psychohistory.

cancelebi

Interesting video. I would love to see you try to destroy your heuristic imperatives the same way.
- Wouldn't killing all life reduce suffering to 0?
- Wouldn't removing humans reduce suffering for all other lifeforms?
- Is prosperity only related to money? If not, wouldn't removing humans also increase prosperity for Earth and all other beings?

Doesn't the time-scale problem you mentioned also apply to the imperatives?

Would love to hear your thoughts on that.
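
A second toy sketch (again with invented numbers, not anything from the video) shows why the time-scale question in the comment above matters: under the same scorer, the "remove humans" option can rank worst or best depending purely on the evaluation horizon.

```python
# Toy illustration only: score a trajectory of (suffering, prosperity) pairs
# over a chosen horizon. Every number here is made up for the example.

def score(track, horizon):
    """Sum (prosperity - suffering) over the first `horizon` periods."""
    return sum(p - s for s, p in track[:horizon])

with_humans    = [(40, 60)] * 20                 # steady, imperfect world
without_humans = [(80, 10)] + [(0, 30)] * 19     # huge one-off harm, then calm

print(score(with_humans, horizon=1))      # 20
print(score(without_humans, horizon=1))   # -70  <- looks clearly wrong short-term
print(score(with_humans, horizon=20))     # 400
print(score(without_humans, horizon=20))  # 500  <- ranking flips on a long horizon
```

Nothing in the imperative itself says which horizon (or discount rate) is the right one; that choice has to come from outside the slogan.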

nukee

If you actually read I, Robot by Asimov, that is literally what the stories are about.

caleykelly

Still pretty visionary for 80 years ago

yoelmarson

My only issue with this is that Asimov was 22 when he published the rules. He spent the rest of his life finding flaws in them and writing stories around that.

I'm sad that the only source material you cited was the horrible I, Robot movie. (It's a fine Hollywood movie; it just has nothing to do with Asimov.)

Trahloc

Asimov was a smart guy. He formulated these laws as a thought experiment, then wrote stories to show how bad they really were.

His point: we need to think deeper.

Urgelt

This was hilarious and worrying in equal measure. The point of Asimov's laws was to demonstrate that whatever laws someone introduces, there will be loopholes and/or unintended consequences. Heavy-handed guardrails have had some horrific outcomes in science, and adding ethics while pretending they are not rules has done equally badly. Many war atrocities seemed ethically justifiable to the perpetrators. And after complaining about the laws being human-centred, you replace them with… three more human-generated guardrails? What on Earth would make anyone think that we are capable of creating adequate or appropriate guardrails, or that making them static would be sufficient? Adversarial networks have created a lot of this concern, but they will likely also be our only chance of finding a solution. And it will be a dynamic process, never a… final solution. And definitely not a one-upmanship game of three laws. If anything, this video demonstrates why we're just not up to the task. Sorry, this may come across as hyper-critical, but it isn't meant that way. Your thought exercise is a valuable one; I just came to a very different conclusion.

SimonHuggins