Why Asimov's Laws of Robotics Don't Work - Computerphile

Three or four laws to make robots and AI safe - should be simple, right? Rob Miles on why these simple laws are so complicated.

Thanks to Nottingham Hackspace for the location

This video was filmed and edited by Sean Riley.

Comments

He mostly focused on the difficulty of defining "Human", but I think it's much much more difficult to define "Harm", the other word he mentioned. Some of the edge cases of what can be considered human could be tricky, but what constitutes harm?


If I smoke too much, is that harmful? Will an AI be obligated to restrain me from smoking?
Or, driving? By driving, I increase the probability that I or others will die by my action. Is that harm?
What about poor workplace conditions?
What about insults, does psychological harm count as harm?


I think the difficulties of defining "Harm" are even more illustrative of the problem that he's getting at.
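
The specification problem this comment raises can be made concrete with a sketch. Assuming a purely hypothetical `is_harm` predicate (none of these names come from any real system), the clear-cut cases are easy, but every contested case forces an arbitrary value judgment:

```python
# A naive attempt at the First Law's "harm" predicate. The Ellipsis
# (...) return values mark branches where no uncontroversial answer
# exists, and those branches dominate.

def is_harm(action):
    """Return True if `action` harms a human (but by whose definition?)."""
    if action == "stabbing":
        return True   # clear-cut physical harm
    if action == "smoking":
        return ...    # self-inflicted, slow, and voluntary: harm?
    if action == "driving":
        return ...    # slightly raises everyone else's risk of death
    if action == "insult":
        return ...    # does psychological harm count?
    return False

undecided = [a for a in ["smoking", "driving", "insult"]
             if is_harm(a) is Ellipsis]
print(undecided)  # all three contested cases remain undecided
```

Filling in any of those `...` branches is an ethical decision, not an engineering one; that is the point.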

arik_dev

"You are an AI developer. You did not sign up for this". Brilliant quote!!!

KirilStanoev

The laws exist to create a paradox around which to construct a narrative.

LordOfNihil

I didn't realize that people took Asimov's Three Laws seriously, considering that nearly every work they're featured in involves them going wrong.

dalton

"Optimized for story writing." I can't express how much I love that sentiment.

DVSPress

The problem with Asimov's laws is probably that they're just well known enough that people have heard of them, but not well known enough for people to remember the context they appeared in and how they always failed.

ThePCguy

Asimov: You can't control robots with three simple laws.

Everyone: Yes, we will use three simple laws, got it.

rubenhayk

Yes, that was Asimov's intention all along. The whole point of the laws of robotics in the books is that they are incomplete and cause logical and ethical contradictions. All the stories revolve around this.

This is worth emphasizing, as most people seem to think Asimov proposed them as serious safeguards. The comments at the beginning of the video illustrate this misconception well.

Thanks for bringing this up, Rob!

ucasvb

"if Goingtoturnevil

don't"

txqea

So the problem of ensuring that technology only acts in humanity's best interests isn't between human and technology, but between human and self. We cannot properly articulate what kind of world we actually want to live in, in a way that everyone agrees with. So no one can write a computer program that gets us there automatically.

Sewblon

I kinda wish this video just kept going.

AliJardz

This was sort of Asimov's point in the first place if you actually go back and read his original stories instead of the modern remakes that mistakenly think the rules were meant to be "perfect." He always designed them as flawed in the first place, and the stories were commentary on how you *can't* have a "perfect law of robotics" or anything, as well as pondering the nature of existence/what it means to be sentient/why should that new "life" have any less value than biological life/etc.

DeathBringer

This brings to mind the Bertrand Russell quote in Nick Bostrom's book.

_"Everything is vague to a degree you do not realize till you have tried to make it precise."_

bigflamarang

"[The laws are] optimized for story writing" spoken like a true programmer

AstroTibs

A story about robot necromancy sounds kind of cool though. 🤔🤖☠️

DJCallidus

In fact, Asimov's whole point in writing I, Robot was to show the problem with these laws (and therefore the futility in creating one-size-fits-all rules to apply in all cases).

shanedk

"I didn't sign up for this" - made my day

Jet-Pack

Does psychological harm count as harm? If so, by destroying someone's house, or just slightly altering it, you would harm them.

MasreMe

Asimov didn't intend for them to work; you said it yourself: the 3 laws of robotics go wrong.

fortytwo

Asimov was not a fool, and these are clearly ethical rules, and as such are in the field of moral philosophy. It's blindingly clear that they aren't design rules; rather, they point to the problem of the inherent ambiguity of morality and ethical standards, which always have subjective elements. However, human beings have to deal with these issues all the time. Ethical standards are embedded into all sorts of social systems in human society, either implicitly or explicitly in the form of secular, professional and religious laws and rules. So the conundrum for any artificial autonomous being would be real.

To me this points out the chasm between the technological state of what we call Artificial Intelligence, which is based on algorithms and statistical analysis, and what we might call human intelligence (not that the psychologists have done much better). Asimov got round this by dumping the entire complexity into his "positronic brains", thereby bypassing the issue.

In any event, there are real situations coming up where human morality and ethical systems are getting bound up with AI systems. Consider the thought experiments currently doing the rounds over self-driving cars and whether they will be programmed to protect their occupants first over, say, a single pedestrian. As we can't even come to an agreed human position on such things (should we save the larger number of people in the vehicle, or the innocent pedestrian who had no choice about the group choosing to travel in a potential death-dealing machine?), even this isn't solvable in algorithmic terms. It sits in a different domain, and not one that AI is even close to being able to resolve.
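
The stalemate described above can be sketched in code. The point is not the numbers, which are invented, but that two perfectly defensible objective functions rank the same outcomes differently, so choosing between them is an ethical decision rather than an algorithmic one. All names and figures here are hypothetical:

```python
# Two candidate "ethics modules" for a self-driving car, applied to
# the same hypothetical crash scenario. Both are internally consistent;
# they simply encode different moral priorities.

outcomes = {
    "swerve":   {"occupants_killed": 1, "pedestrians_killed": 0},
    "straight": {"occupants_killed": 0, "pedestrians_killed": 3},
}

def minimise_total_deaths(o):
    # utilitarian rule: fewest deaths overall, whoever they are
    return o["occupants_killed"] + o["pedestrians_killed"]

def protect_occupants_first(o):
    # lexicographic rule: occupant deaths always outweigh pedestrian deaths
    return (o["occupants_killed"], o["pedestrians_killed"])

choice_a = min(outcomes, key=lambda k: minimise_total_deaths(outcomes[k]))
choice_b = min(outcomes, key=lambda k: protect_occupants_first(outcomes[k]))
print(choice_a, choice_b)  # prints "swerve straight": the two rules disagree
```

Neither function is "wrong"; the disagreement between them is exactly the unresolved moral question, and no amount of programming skill makes it go away.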


The language adopted in the video is all that of computer science and mathematics. The definition of hard boundaries for what is a "human" is a case in point. That's not how human intelligence appears to work, and I recall struggling many years back with expert systems and attempting to encode into rigorous logic the rather more human-centred conceptualisation used by human experts. Mostly, when turned into logic, it only dealt with closed problems of an almost trivial nature.

TheEulerID