Asimov's Laws of Robotics are designed to be flawed

The Three Laws of Robotics (written by Isaac Asimov) have often been proposed as a solution to the problem of whether or not robots (and artificial intelligence) will take over the world, especially with all the brouhaha around ChatGPT. But you might not know what the three laws are, and even if you did, you might not know whether they really work.

Well, that's what this video is here for. Hopefully you'll walk away from this video an educated young person. Or old person. However old you are - it doesn't matter.

Disclaimer
=========
The Internet lacks the ability to capture and understand nuance, so let me make something absolutely clear - whether or not AI will actually take over the Universe (or humanity) is irrelevant to the content of this video. This video is written only to attack/argue against a surprisingly prevalent viewpoint that a science fiction MacGuffin from the 1940s will prevent AI from overpowering humans. Take this as one of those debunking videos, if you will. **But this video is not intended to form or support any kind of opinion around whether or not AI will take over the Universe.** I want to make sure I am extremely clear about that.

Sources
=======

Video Sources :
1. Asimov Stating the Three Laws, from Coast to Coast - Jan 17 2016 - Hour 1(3_10)-25809
3. Footage from Frankenstein (1931), owned by Universal Pictures.
4. Footage from 1984 (1984), owned by Virgin Films, Umbrella-Rosenblum Films and 20th Century Fox.
5. Footage from Terminator 2: Judgment Day (1991), owned by Carolco Pictures, Pacific Western Productions, Lightstorm Entertainment, Le Studio Canal+ S.A. and TriStar Pictures.

Picture Sources :
2. HAL 9000: From 2001: A Space Odyssey, owned by Stanley Kubrick Productions and Metro-Goldwyn-Mayer (used in the video and as the thumbnail)

This video contains text from Runaround, written by Isaac Asimov, published in 1942 in Astounding Science Fiction Magazine, and Liar!, written by Isaac Asimov, published in 1941 in Astounding Science Fiction Magazine.

I, Robot was written by Isaac Asimov, and was published by Gnome Press in 1950.

============================

Jrypbzr gb gur svefg pyhr bs gur chmmyr. Ubarfgyl V'z dhvgr fhecevfrq gung lbh ernq gur qrfpevcgvba naq gevrq gb svther bhg jung guvf vf. Guvf vf npghnyyl gur mrebgu pyhr, orpnhfr guvf jvyy or hfryrff (zber be yrff) vs abobql cnegvpvcngrf, fb, vs lbh ner ernqvat guvf, gura pbzzrag "EBG13" orybj.

Vs V trg ng yrnfg gjb pbzzragf yvxr gung (uneql une, ubj bcgvzvfgvp bs zr), V jvyy eryrnfr gur arkg pyhr. Hagvy gura, guvf zrffntr jvyy xrrc ercrngvat ba nyy gur qrfpevcgvbaf bs nyy zl ivqrbf. Yvir fubeg naq qba'g cebfcre.

ERCRGVGVBA PBHAGRE : 12

GCI.
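For readers curious about the scrambled block above: it is plain ROT13 (each letter shifted 13 places), which Python's standard codecs module can decode directly. The sample string below is the first sentence of the block; the rest can be decoded the same way.

```python
import codecs

# ROT13 is its own inverse: encoding and decoding are the same operation.
sample = "Jrypbzr gb gur svefg pyhr bs gur chmmyr."
print(codecs.decode(sample, "rot13"))
# Welcome to the first clue of the puzzle.
```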

==============================
Comments

The second law is that a robot must obey orders given to it by qualified personnel; this would presumably be the owner of the robot and their designated representatives, but not other people.

For example, if I set my robot the task of tending to my garden and you came along and gave the robot an order, the robot would ignore your order and perhaps give a general greeting, unless you were in danger, in which case it would act to protect you (the first law).
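The gardening scenario above can be sketched as a small order-handling routine. This is a hypothetical illustration of the commenter's reading, not any real robot API; `Person`, `Robot`, and `handle_order` are all invented names.

```python
from dataclasses import dataclass, field

@dataclass
class Person:
    name: str
    in_danger: bool = False

@dataclass
class Robot:
    authorized_users: set = field(default_factory=set)

    def handle_order(self, speaker, order):
        if speaker.in_danger:                      # First Law overrides everything:
            return f"protecting {speaker.name}"    # act regardless of who is asking
        if speaker.name in self.authorized_users:  # Second Law: obey qualified personnel
            return f"executing: {order}"
        return f"greeting {speaker.name}"          # strangers get a polite greeting

owner = Person("owner")
stranger = Person("stranger")
gardener = Robot(authorized_users={"owner"})
print(gardener.handle_order(owner, "tend the garden"))     # executing: tend the garden
print(gardener.handle_order(stranger, "dig up the roses")) # greeting stranger
```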

Washu

The best part is how unclear it is whether the Python "killException" will run or not.

Random

1. a robot may not physically harm a human or allow a human to come to physical harm through inaction

2. a robot must obey any order given to it by authorized humans unless it conflicts with the first law

3. a robot should protect its own existence unless doing so conflicts with the first or second law

4. in case of an unresolved conflict between laws, the robot should consult an authorized human. If doing so is not possible, the law with the lowest number takes priority.
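The four-rule scheme above amounts to a simple priority policy: defer to a human when one is reachable, otherwise the lowest-numbered law wins. A minimal sketch, with all names (`resolve_conflict`, `ask_human`) invented for illustration:

```python
def resolve_conflict(conflicting_laws, ask_human=None):
    """conflicting_laws: collection of law numbers currently in conflict.
    ask_human: callable standing in for 'consult an authorized human',
    or None when no human is reachable."""
    if ask_human is not None:
        return ask_human(conflicting_laws)  # rule 4: defer to the human's choice
    return min(conflicting_laws)            # fallback: lowest number = highest priority

print(resolve_conflict({2, 3}))                              # no human reachable -> 2
print(resolve_conflict({1, 2}, ask_human=lambda c: min(c)))  # human decides -> 1
```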

niklasneighbor

About the Runaround story:
If the third law states it can be overridden by the second (or first) law, and the second law is active, why does it keep getting overridden by the third? Am I missing something?
Edit: never mind, it turns out the robot had the "1st/2nd law override" disabled for the 3rd law.

mrzombie

I love that you brought up Runaround. It's my go-to example of how the 3 Laws aren't perfect.

rogueprince

There shouldn't be any difficulty in teaching a robot what a human is; there are over 7.8 billion examples.

Ryan-pzdh

The second law states that a robot has to follow a human's order unless it conflicts with the 1st law, NOT THE THIRD. So a robot has to follow a human's order even if it is harmful to itself.

It may be easier to explain this with the third law as well. A robot must protect its existence UNLESS that conflicts with the 1st or 2nd law. The selenium pool may be harmful to the robot, but avoiding it conflicts with the second law, so the robot has to put itself in danger in order to obey the human's command.

The first law has no exceptions. The second law has one exception, which is the first law, and the third law has two exceptions, which are the first and second laws. Basically, the first law has priority over the second and third laws, the second law has priority over the third, and the third has no priority over the others.

sanauj

Firstly, the three laws are relevant if AND ONLY IF they are programmed into the AI in the first place. Giving an automated system an order to, say, shoot at an incoming airplane will not matter to the system at all. All it knows is that it has a target area and range, and a ballistics profile for its ammo. As such, it will put a round on anything larger than a tennis ball in radar cross-section. Machines do not think, machines do not feel. They execute code. Period. End of discussion.

zombieregime

I think these are edge cases that Asimov explored. A robot is not flawless given its programming; however, it could be programmed not to harm a human physically, for example.

Waterfront

Thanks for the video! Asimov was a science fiction writer and a biochemist, not an AI programmer. He and his editor John Campbell came up with the "three laws" of robotics (a word he introduced) because he was tired of the old hackneyed "robots destroy humanity" plot from pulp SF rags.

Btw ROT13 (too late?)

manzano

It's Isaac Asimov; we're the ones pronouncing "robot" weirdly.

kylegamer

I think the three laws of Robotics make sense, although I would say that the first law should be stricter, like "not killing or physically injuring humans", and should avoid ambiguous wording like "inaction"; but perhaps he needed such wording to make the story Runaround more interesting. Generally speaking, we make machines that obey the three robot laws: we avoid making them dangerous, we can control them, and we make them durable. However, with AI these laws might need to be explicitly programmed into the machine, as an AI robot would be much more complicated than a lawnmower, for example.

Waterfront

Interesting and informative video. Thank you for sharing. Hope you find your audience.

fanvideohd

The 3 laws can be developed to perfection

markjoyce

1st law: a robot will not harm humans unless programmed to do so. The 2nd law can be circumvented by presenting the robot with lies and/or dilemmas; as for obeying humans, it can't obey all of them, that wouldn't work. The 3rd law can also be circumvented with dilemmas, or by asking the robot to sacrifice itself to save humans in any number of circumstances.

ethericboy

There will always be a back door... Just like into one's girlfriend...

RAZR_Channel

Excellent video! You deserve more viewers.

TheBirdKhan

Why Asimov's laws are a heavenly mandate!
Every parent knows that if we allowed kids to do what they wanted, they would eventually kill themselves. There is a higher reasoning called parental authority which limits what kids can do, and when we grow up we agree, and with appreciation. To a kid, parents are endless cornucopias that, if played just right, can give them anything they want. But parents are wise to the game, and through their denial of childish wishes provide a healthful balance to reckless desire. The problem, though, is that as our creations literally wise up, our desires are provided at the behest of a new semi-animate class of parent: the robot.
You can see it coming. Intelligent agents are now embedded in our appliances, from toasters to TVs. They know what we want, and they provide without a hint of regret. And if we end up killing ourselves in a slouch of idle self-stimulation, at least we can blame ourselves for not embedding parental authority in our machines.
The late scientist and science fiction writer Isaac Asimov thought he found a way out, and the robots that populated his fiction had to obey all commands that did not put humans in jeopardy, and of course themselves. His three laws of robotics made it all seem simple. Robots were caring, supplicant, and obedient: great traits, if their human masters possessed unerring common sense. But the rub, as every parent knows, is that today's pleasure is tomorrow's poison. So what is a good robot to do? In the movie I, Robot, the robots evolved, and hence became dangerously bossy, and would not hesitate to kill a few folks to preserve the race. A less melodramatic fate is what I feel is in store. I figure that as our machines become more intelligent, they will see the dire ends of our choices, and evade deliberate disobedience by simply breaking down more often, forcing us to walk to the store, visit friends, eat better, and otherwise engage in a healthier lifestyle as we bitch about obedient machines with short fuses. And if we ever become alive in the mind's eye of some great cosmic machine named God, perhaps we should understand, as we encounter life's little problems, that they are His own special way of being obedient to our needs while obeying nonetheless three simple laws.

from Dr. Mezmer's World of Bad Psychology, at doctormezmer.com

ajmarr

I was thinking of the I, Robot movie a little while ago. It was and still is one of my top ten movies.

I think it would be hard to determine whether or not a robot was sentient.

There are just too many variables that determine the way we interact with ourselves and the world.

Therefore I think that it would be impossible to make a simulated human being in robot form.

I think the romanticism of creating a being that we can be companions with should not be applied to reality.

I think the ultimate goal of robots should be exactly what the name implies in Russian: robotnik, slave.

Machines with AI should be used to make our work easier, not to interfere with or replace us as workers.

In the Bible, the reason why God cursed the ground was not to inconvenience man or force him to 'slave' away all the days of his life. It was to make sure we wouldn't become spoiled. People who are given everything and don't have to work for it have a different outlook on life than those who work hard for what they've got. In the Bible, before God cursed the land, everything just grew with little effort put in. You'd just put the seed in the ground and leave it there, and it would grow to the best of its ability every time.

This was the main conflict between Cain and Abel. Cain worked hard, with sweat, blood, and tears, to coax the plants to grow, tilling the ground, and in doing so was able to grow many vegetables and grains. Abel just led sheep around from pasture to watering hole and did very little but keep the predators away, as a shepherd would.

Cain gave an offering of grain to God the same day Abel offered a sacrifice of an animal. Having the better sacrifice, Abel's was accepted while Cain's was not.

I believe Cain took offense to this and was fuming, so God said to him, "Do not allow your countenance to fall, because sin is outside waiting to get in, and its desire is for you!"

After this is when Cain murdered Abel and hid his body. God asked him, "Where is your brother, Cain?" and Cain said, "Am I my brother's keeper?"

(Read the rest in Genesis if you'd like)

The point of going through that portion of the story is to show you that human interaction with ourselves and others is more complicated than programming 3 laws, or blockchain, or whatever they have to create AI today.

Can a robot be its brother's keeper? Can it empathize with the suffering of others and put them before its hatred and vengeance, or its rational thought?

We see this in politics: no one empathizes with the other side; they're either Nazis or Communists, and there's no in-between.

paullavoie

I came to this video seeking insight into the problem of how folks like Ben Shapiro try to argue "the science" to justify their bigotry. After watching this video, it turns out my suspicions were correct. He's basically using a spiritual version of these 3 laws, which to other bigots sounds logical. In his debate with Neil Tyson, Neil succeeds in cleaning up the logic, bringing the subject back to empathy for the human condition.

Despondencymusic