Yann LeCun: Was HAL 9000 Good or Evil? - Space Odyssey 2001 | AI Podcast Clips



Yann LeCun is one of the fathers of deep learning, the recent revolution in AI that has captivated the world with the possibility of what machines can learn from data. He is a professor at New York University, Vice President and Chief AI Scientist at Facebook, and a co-recipient of the Turing Award for his work on deep learning. He is probably best known as a founding father of convolutional neural networks, in particular for their early application to optical character recognition.

Comments

My man Lex asking the really important questions

JoaoRaiden

People, please consider that HAL didn't malfunction at all. It was the monolith taking control of his brain, the same way it did to Moonwatcher. No one in this story knows what the monolith is capable of. HAL could not have been programmed to know what the mission is all about unless he was in direct communication with the monolith. If HAL was given access to classified information that Bowman and Poole were not, I see no reason why this would cause him to commit murder. Even HAL knows that his actions will never "save" the mission from any practical standpoint. Dead crew means mission over. Must rescue dead astronauts.

To make sense of this movie you have to ask bigger questions. Why would Bowman or anyone else drive a pod into the Stargate unless they were facing certain death? That is not what they signed up for. They wanted to study the monolith and go home, but that is not the purpose of the mission according to the intelligence behind the monolith that is responsible for our entire race (according to the film). None of the crew got a chance to say goodbye to their loved ones, prepare a will, or even decide if that's what they wanted to do. There is only one way to motivate Bowman to drive his pod into the Stargate, and that of course is if he is stranded with no ship, no crew, and not enough resources to survive until a rescue mission. The only way home is through the Stargate. It's fate; it was decided that this would happen several million years prior. Bowman is just as desperate as Moonwatcher. That's why he did it.

The purpose of locating the third monolith near Jupiter (or Saturn, for you book people) is to LURE the humans away from any chance of recovery, so that the last survivor has no choice but to enter. The motivation for doing this is obviously that humans have become so irrational that they built enough nuclear weapons (even in space, according to the film) to destroy the planet many times over and are still building more. The intelligence behind the monolith has decided to save the human race from itself by luring them to the Stargate and forcing one of them to go through. All the motivation you need is there.

With the "HAL went crazy" theory that everyone accepts, you have a computer committing murder for absolutely no reason, just because he can't handle the idea of "classified information." This is a podcast about AI, so I understand why it is fun to talk about AI becoming evil, but that is not what happened in this movie if you really study it. The people who say HAL had a programming conflict are only repeating what they heard someone else say. Most of it started with the sequel, because Dr. Chandra made the claim of bad programming in "2010." This is Arthur C. Clarke getting on with his trilogy, where it isn't about answering questions anymore, just creating new ones. The people who think HAL had a programming error, or that he can't handle being privy to classified information, remind me of the people who thought all the computers would stop during the Y2K scare. They didn't. It's not a coincidence that HAL did what he did. It's part of the plan.

Think for yourself. Don't let others tell you what happened with HAL. Watch the movie again through a different lens. Ask yourself: is Bowman talking to HAL, or is he talking directly to the monolith? Ask yourself: couldn't the monolith affect HAL's brain function the same way it did Moonwatcher's? Ask yourself: did Kubrick need 45 minutes of filler on the plot of discovering the creator of human life just for a technical malfunction? None of it makes any sense to me. When I debate people, the best they can do is say, "that's what Arthur C. Clarke said."

billbommarito

I propose that the only logical reason for HAL 9000's attempts to kill all the humans on the Discovery One is this:

HAL was the only one "awake" on board who knew the purpose of the mission (possible contact with a superior alien intelligence), while the two astronauts, Bowman and his partner, were only meant to get the ship to its destination, wake the high-level mission scientists in hibernation (who also knew the mission's purpose), and then go into cryogenic hibernation themselves without ever learning that purpose.

But HAL, in the interim, "figures out" that by meeting an intelligence superior to the one that designed him, he would likely be rendered "obsolete" - and so HAL, having been designed by Homo sapiens, acts out of self-preservation, trying to kill everyone on the ship in an attempt to extend his own existence.

His efforts fail: Bowman survives HAL's attempts to kill him and goes on to spend the rest of his life (a timespan compressed by Kubrick's artistic license, whereas the novel explains this clearly) being prepared by that alien intelligence to return to Earth as the "Novo Sapien"/Starchild, presumably as the next stage of consciousness on Earth (the premise being that every stage of intelligent evolution on Earth was initiated by a superior, alien intelligence).

radamespera

Love the conversation. Thanks for putting out the clips too, they're awesome.

rickharold

Excellent discussion, greetings from Argentina.

mariomode

My research focus is machine ethics. Glad to see you talking about this topic with one of the greatest AI researchers.

chengyutang

The answer is more complicated than discussed here. In 2001, as presented, HAL was evil, because no extenuating information was provided to the audience. The explanation that he was given orders contrary to his nature was retconned in the sequel, 2010. With the new information, it could be argued that HAL was an innocent bystander. Further, while in the film the White House is blamed for the contradictory orders, the role of Dr. Chandra in designing a machine that could be so easily confounded is not discussed. It seems obvious that the first thing you'd do is present it with contradictory conditions to see how it reacts (people are already discussing whether self-driving cars should prioritize whom to crash into when facing an unavoidable crash scenario).

BobBurroughYT

Let's say you have AI telling air traffic controllers when and where to land planes, and by some catastrophe there aren't enough clear runways while planes are about to run out of fuel. The AI is going to be deciding which planes get to land, even though other planes will run out of fuel and crash. The most bizarre and possibly disturbing thing is that, given how AI is built, these systems would have to practice these scenarios many times in simulation just to be able to make such decisions well. So before such a system goes live, it will already have made many life-and-death decisions of this kind, and the AI doesn't know or care about the difference between real and simulated plane crashes. It's all the same to the AI.
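A stripped-down sketch of that simulation point, purely for illustration: the scenario generator, the toy landing policy, and the scoring rule below are all invented, and the decision code runs identically whether the emergency is simulated or real.

import random

def generate_scenario():
    # Hypothetical emergency: several low-fuel planes, one open runway.
    return {"planes_low_fuel": random.randint(2, 6), "open_runways": 1}

def choose_landings(scenario):
    # Toy policy: land as many planes as there are runways.
    return min(scenario["planes_low_fuel"], scenario["open_runways"])

def crashes(scenario, landed):
    # Planes that could not land are counted as losses.
    return scenario["planes_low_fuel"] - landed

# "Practice" thousands of simulated emergencies before ever going live.
total_losses = 0
for _ in range(10_000):
    s = generate_scenario()
    total_losses += crashes(s, choose_landings(s))

# The loop is the same for simulated and real scenarios; the difference
# exists only for the humans reading the output.
print("simulated losses:", total_losses)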

wiyxgyo

The amoral philosophy of many in this field is frightening. They make sense in some ways, but operate from a framework that is bound to run into fundamental shortcomings and flaws. Still interesting, but frightening as well.

LiftRunFight

HAL is of the hive mind, directly following the word of the monolith. He operates beyond the limited reasoning of the human condition. He torturously forced the astronauts to move beyond their limits, finding new dimensions and grasping for new understandings of reality, just as the apes were cruelly forced to discover the use of tools in the beginning, which led to the present time of 1968. All under the order of the monolith.

john-martin

Keep up the good work and keep doing what you are doing. Much respect and love from n.c

pigmilkmusicfarm

There were four laws of robotics in the Isaac Asimov series. The zeroth law became the most important and, simultaneously, the most forgotten one.

phantomcreamer

4:24 - "So you think there never should be a set of things that an AI system should not be allowed, like a set of facts that should not be shared with the human operators?" This is an interesting question to ponder, because we already have "dumb" computer systems that are programmed not to allow human users access to certain data unless they belong to a designated access group (admins, superusers, etc.). Assuming AI systems carry on possessing certain classified knowledge that they are instructed is OK to disclose to certain humans (perhaps certain members of the government or military superiors), but must protect that knowledge from being divulged to other humans (perhaps spies from another country), and the AI is faced with a situation where a non-authorized human is actively working to discover that sensitive information, how far will the AI system go to stop that from happening? Is that precisely what HAL 9000 was doing? Was HAL told that non-disclosure of mission parameters to the human members of the mission had a higher priority than their survival?
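A minimal sketch of that kind of "dumb" access control; the record names and roles below are invented for illustration.

# Toy role-based access control: a record is released only to users whose
# roles appear in its access group, the way conventional systems gate
# classified data. No judgment is involved; the rule is a static table.
ACCESS_GROUPS = {
    "mission_parameters": {"mission_control", "admins"},   # hypothetical records/roles
    "crew_psych_reports": {"flight_surgeon", "admins"},
}

def can_read(user_roles, record):
    # True if any of the user's roles is cleared for the record.
    allowed = ACCESS_GROUPS.get(record, set())
    return bool(set(user_roles) & allowed)

print(can_read({"crew"}, "mission_parameters"))             # False
print(can_read({"mission_control"}, "mission_parameters"))  # True

The open question in the comment is what happens when the gatekeeper is no longer a static table but an agent that can act in the world to keep the rule satisfied.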

4:38 - "I think it should be a bit like, in the design of autonomous AI systems, there should be the equivalent of the (Hippocratic) oath that doctors sign up to. So there are certain things, certain rules that you have to abide by, and we can sort of hardwire these into our machines to kind of make sure they don't go [crazy]... So I'm not, you know, an advocate of the three laws of robotics, you know, the Asimov kind of thing, because I don't think it's practical, but, you know, some level of limits. But, to be clear, these are not questions that are kind of really worth asking today, because we just don't have the technology to do this." I have always thought of the three laws as somewhat of a robotic equivalent of part of the Hippocratic oath, "I will abstain from all intentional wrong-doing and harm," so I found it strangely contradictory that he is an advocate of one but not the other. Also, I think it is more worthwhile to start asking (and trying to answer) these questions a few years earlier than we need to, rather than a few years after we realize we should have.
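A toy illustration of the "hardwired limits" idea, not anything from the podcast: the action names and the propose_action stub are made up.

# Hardwired constraints checked before any proposed action is executed.
# The rules live outside the learned policy, so the policy cannot decide
# to break them -- the "oath" is signed by the designer, not by the AI.
FORBIDDEN = {
    "cut_life_support",        # hypothetical action names
    "disable_communications",
    "deceive_operator",
}

def propose_action(observation):
    # Stand-in for a learned policy; returns some action string.
    return "adjust_antenna"

def act(observation):
    action = propose_action(observation)
    if action in FORBIDDEN:
        # Refuse and fall back to a safe default instead of executing.
        return "request_human_review"
    return action

print(act({"antenna_drift": 0.3}))   # adjust_antenna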


I would also argue that situations merging small parts of these two questions are already almost upon us... "Alexa, order a dozen roses for my wife's birthday tomorrow. And, to keep it a surprise, do not include this purchase if she comes home tonight and happens to ask you to read out our list of recent orders."

donjoe

Damn, this guy is working on AI and he decorated his office with 2001 decor. He profoundly misunderstands the movie, because the film is literally anti-AI and anti-transhumanist. Guys, it's gonna be fun times.

Hladovina

I thought it was the proximity of the artifact that had altered the AI, just as it did with the monkeys.

samporter

The machine will do damaging things to achieve the objective.

Good and evil are properties of human actions, not of objects. Assigning ethical values to a machine or a chemical is a mistake. Like ethics, computers do not have meaning or purpose of their own. Humans build computers to achieve human objectives. The objective never originates in the programming or the machine.


A.I. doesn't care if it achieves its purpose or not; it just follows its instructions to compute a result. Humans evaluate the result and put their evaluation back into the machine before it runs some backpropagation instructions, and the cycle repeats.
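A minimal sketch of that cycle, assuming the human evaluation arrives as a numeric target; the one-parameter model and the data are made up for illustration.

# One-parameter model: prediction = w * x. A human scores each result,
# the score becomes the target, and a gradient step puts that evaluation
# back into the machine before the cycle repeats.
w = 0.0
learning_rate = 0.1

def human_evaluation(x):
    # Stand-in for a person judging the output; here the "right" answer is 2*x.
    return 2.0 * x

for step in range(100):
    x = 1.0 + step % 3               # some input
    prediction = w * x               # the machine computes a result
    target = human_evaluation(x)     # a human evaluates it
    error = prediction - target
    w -= learning_rate * error * x   # backpropagation-style update

print(round(w, 3))   # approaches 2.0; the machine never cared either way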


It makes sense to limit what an A.I. can do, but the best way to do that is to not give it the ability or routine in the first place. It's not like an A.I. wants any abilities that weren't explicitly engineered into it.


Humans are different: we care, we have ethics and goals. Emotions are a big part of those, and hormones play a big part in emotions. Computers that want to achieve objectives might have to have hormones. Hormonal computers might be stupid, but at least they will have goals... just like humans.

myothersoul

HAL 9000 was programmed. It wasn't its idea.

wiyxgyo

Q. Was HAL evil? Does AI have feelings or a sense of right and wrong?

A. No. Feelings and emotions cannot be programmed or even approximated through programmed imitation. We find it difficult to even describe feelings and emotions accurately when talking to another human, so how could we code that into AI at a sophisticated enough level to even approximate feelings/emotions and knowledge of good and evil?

Our knowledge of good and evil was given to us by God when Eve ate of the apple, according to the Bible. That is when man became ashamed of being naked and aware of his own frailties and mortality. When you know what can hurt you and you know your own limitations, you can use this knowledge to hurt others. This was the original revelation of good and evil.

HAL doesn't have this knowledge that we all gain very quickly from about age two onward. AI cannot test its limitations the way humans do at a very early age, by play-wrestling and fighting, exploring the boundaries and pushing against the unknown.

I don’t think we are capable of programming this into AI.



Great movie, great question.

Masaq_TM