The danger of AI is weirder than you think | Janelle Shane

The danger of artificial intelligence isn't that it's going to rebel against us, but that it's going to do exactly what we ask it to do, says AI researcher Janelle Shane. Sharing the weird, sometimes alarming antics of AI algorithms as they try to solve human problems -- like creating new ice cream flavors or recognizing cars on the road -- Shane shows why AI doesn't yet measure up to real brains.

Comments

They definitely painted the walls of my high school with "suffer".

iambiggus

Human: Solve world hunger
AI: OK. **kills starving humans**
Human: Why did you kill them?
AI: Dead people aren’t hungry.

PythonPlusPlus

6:41 reminds me of a machine learning algorithm classifying images of wolves and dogs. One particular dog kept being classified as a wolf no matter what the scientists did. They eventually figured out it was because of the white background: the wolf photos in the training data almost all had snowy backgrounds, so the machine had effectively learned "if it's standing on snow, it's a wolf." You never quite know which features will be deemed significant.

There was also the time the AI kept sinking its own ships in a battle game because that seemed to be the quickest way to get those ships out of the way.

The AI is only as good as the data it is fed.

umarahmed

“The danger is that it does exactly what we tell it to do” has been the risk all along, if you ask any sci-fi writer. And if you’re not using it to think outside the box, you’re not using it right.

orangecrush

"What color do you want your walls painted?"
"Suffer."

williambarnes

- Hey AI, what’s the fastest way to get from point A to point B?
- Start from point B.
- No, you can’t do that!
- You can’t, I can.

riccello

Recently I’ve been reading and watching a lot about the real nature of AI, and it’s truly gotten me to appreciate the “artificial” in artificial intelligence: you wouldn’t expect a natural intelligence to make these mistakes, because it has a more “well-rounded” and general structure. Modern AI is like taking an abstraction of one very specific part of the brain and trying to make it do all the things that are normally handled by multiple parts of the brain working together.

geraldkenneth

AI is like a genie. It does exactly what you asked for, but not necessarily what you wanted.

StardropGaming

So as of 2019, AI will not intentionally kill us, just accidentally kill us. Got it!

briandrake

A.I. Engineers: "A.I. will do exactly what you ask it to do, not what you want it to do."

Computer Programmers: "Welcome to the club."

undead

I'm reminded of HAL's famous words from 2001: A Space Odyssey: "This kind of thing has come up before and it has always been due to human error."

dcterr

Human: make me happy
AI: How do I tell if you're happy?
Human: I'll be laughing and smiling
AI: **ties them to a chair and gives them laughing gas**

LokeshKumar-mtki

Old Dr. Who quote - "Answers are easy. Asking the right questions is hard."

nbi

Isn’t it a trope in old legends and fairy tales that the helper fairy or mischievous god gives you exactly what you ask for, and not what you want or need?

benjaminbrewer

They should make a series on AI failures; it's interesting. AI thinks outside the box, which is exactly what humans have a hard time doing.

seana

There was a project I ran into a while back where researchers were attempting to teach an AI in the same manner as they would teach a child. They do this by interacting with the AI through a small robot vessel, which gives the AI the appearance of a child. This is supposed to help the researchers treat the AI like a child, and also to reduce the negative feedback it may encounter when it does something wrong.

I think this is a pretty intuitive idea, as it teaches the AI by building its knowledge in small increments in the same way a person is expected to learn. This should help it to understand more nuanced ideas and human principles that would not be as apparent to a traditionally trained AI.

Of course, this requires a lengthy amount of time to complete, even if it isn't one-to-one with a child's developmental timeline. It may also learn human responses to certain problems, such as anger, frustration, sadness, or even laziness. Still, what better way to study AI than to teach it to simulate human responses?

timothyt.

“And what is your favorite color?”
“*Suffer*”

happychappiejr

Axiom of computer programming: garbage in equals garbage out.

bradleyp

Finally, a realistic assessment of what we need to worry about with AI, rather than some future speculation of what they'll be able to do based entirely on fear and poor reasoning.

TheMrmoc

Plot twist: the AI is super smart, but it's just trolling us.

adityabagdi