Breaking DeepMind's Game AI System | Two Minute Papers #135

The paper "Adversarial Attacks on Neural Network Policies" is available here:

WE WOULD LIKE TO THANK OUR GENEROUS PATREON SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE:
Claudio Fernandes, Daniel John Benton, Dave Rushton-Smith, Sunil Kim, VR Wizard.

Károly Zsolnai-Fehér's links:
Comments

It makes me wonder if a similar system could be used to fool natural neural networks, like human or animal brains. Kind of like the perception filter from Doctor Who, where you can hide things in plain sight or make them be mistaken for completely different things.

KohuGaly

This sounds terrifying if someone manages to trick a self-driving car or some other machine learning system that directly impacts the physical world.

cowshrptrn

Very interesting. I suspect the reason it seems so simple to fool these AIs just by adding small noise to their inputs is that they never had to face noise during their learning process.

It would be like threatening a caveman with a handgun: without that knowledge, he would just think you are goofing around.

MrJTom
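MrJTom's hypothesis above suggests the simplest countermeasure: show the network noisy inputs during training. A minimal sketch in PyTorch, where policy, loss_fn, and the data tensors are placeholders for your own setup:

    import torch

    def noisy_training_step(policy, optimizer, loss_fn, states, actions, sigma=0.01):
        # Perturb each observation with small Gaussian noise so the
        # policy learns from imperfect inputs, not only clean ones.
        noisy_states = states + sigma * torch.randn_like(states)
        optimizer.zero_grad()
        loss = loss_fn(policy(noisy_states), actions)
        loss.backward()
        optimizer.step()
        return loss.item()

Worth noting that random noise augmentation alone is known to be a weak defense: adversarial perturbations are crafted along the loss gradient, not drawn at random.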

The calm ending music is a welcome change. Kinda loud though.

gafeht

That an additional ball would confuse an AI that's trained to pay attention to only a single ball doesn't surprise me much. The global noise is pretty ridiculous though.
It goes to show that, to properly deal with our new AI-driven robot overlords, all we have to do is show them some pretty pictures :o)

Kram

3:00 @TwoMinutePapers, do you not believe that there will be some "easy" solution that will make these kinds of attacks obsolete soon? Like how residual networks solved the vanishing gradient problem. Can you give a reason for what you're saying at the 3-minute mark?

FelheartX
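For readers who haven't met them: a residual block routes the input around the learned transformation via an identity shortcut, which is what eases vanishing gradients. A minimal sketch (the layer choice and sizes are arbitrary):

    import torch
    import torch.nn as nn

    class ResidualBlock(nn.Module):
        def __init__(self, dim):
            super().__init__()
            self.body = nn.Sequential(
                nn.Linear(dim, dim),
                nn.ReLU(),
                nn.Linear(dim, dim),
            )

        def forward(self, x):
            # The identity shortcut gives gradients a direct path
            # around self.body during backpropagation.
            return torch.relu(x + self.body(x))

Whether an equally "easy" fix exists for adversarial examples is exactly the open question the video raises.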

Maybe you could disrupt some of the trading algorithms by introducing some of these techniques to the market?

betadryl

When neural networks become super ubiquitous, you gotta wonder if advanced adversarial algorithms will be considered a weapon and be export-controlled like encryption algorithms.

Reavenk

A 'live pixel' is something that can actually happen to a machine's cameras. Just think of all the ways image sensors fail.

BurnabyAlex
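BurnabyAlex's point is easy to test, since a stuck ("live") pixel is just a one-pixel perturbation. A quick sketch for probing a vision policy's sensitivity to it; the policy itself is a placeholder:

    import numpy as np

    def stuck_pixel(frame, y, x, value=255):
        # Copy an H x W x C image and force one pixel to a fixed
        # value, mimicking an always-on element on a failing sensor.
        out = np.array(frame, copy=True)
        out[y, x, :] = value
        return out

    # Compare policy(frame) with policy(stuck_pixel(frame, 10, 20))
    # to see how much one defective pixel shifts the action scores.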

When we can't compete with AI anymore, we need AI to compete with it.

Lugmillord

Sobering news: 90 km/h = 25 m/s = 50 human body widths per second. If a self-driving vehicle is 'distracted' for a little more than one body width (0.02 s), it could kill you, and would almost certainly injure you. All it takes at 90 km/h is 0.02 seconds (less than the blink of an eye) of lost computation or signal to potentially shred a human (one full body displacement in 0.02 seconds / 2500 G). Gotta love AI and its successor AGI: inevitable human obsolescence. Now review the video.

almostbutnotentirelyunreas
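The speed arithmetic in this comment checks out; here is a quick verification (the ~0.5 m body width is an assumption of mine, and the g-force figure is left unchecked):

    # 90 km/h converted to metres per second
    v = 90 / 3.6                  # 25.0 m/s
    body_width = 0.5              # metres, rough assumption
    print(v / body_width)         # 50.0 body widths per second
    print(v * 0.02)               # 0.5 m in 0.02 s, about one body width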

It's like an optical illusion for AI.

AakashKalaria

If this is possible, shouldn't one be able to FIX an AI that was trained for a game with some small differences? Like making it for Pong, but when it is presented with Pong with 10 dots it can't work, so fix it with some noise instead of retraining it.

MrRyanroberson
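One way to "fix it without retraining", in the spirit of MrRyanroberson's suggestion, is an input-transformation defense: filter the observation before the unchanged policy sees it. A hypothetical sketch using a median filter:

    from scipy.ndimage import median_filter

    def patched_policy(policy, frame):
        # Smooth the observation first; small perturbations and stray
        # dots tend to wash out. No retraining needed, but this is not
        # robust against an attacker who knows about the filter.
        return policy(median_filter(frame, size=3))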

Perhaps this can grow into a system akin to a GAN for reinforcement learning.

sinharoy
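sinharoy's GAN analogy can be made concrete by alternating between an attacker network that learns bounded perturbations and a policy that trains on them. A loose sketch, written as a supervised proxy rather than a full RL loop; every name here is a placeholder:

    import torch

    def minimax_step(policy, attacker, p_opt, a_opt,
                     states, actions, loss_fn, eps=0.01):
        # Attacker step: learn a bounded perturbation that raises
        # the policy's loss (gradient ascent via a negated loss).
        delta = eps * torch.tanh(attacker(states))
        a_loss = -loss_fn(policy(states + delta), actions)
        a_opt.zero_grad()
        a_loss.backward()
        a_opt.step()

        # Policy step: train on freshly perturbed inputs, GAN-style.
        delta = eps * torch.tanh(attacker(states)).detach()
        p_loss = loss_fn(policy(states + delta), actions)
        p_opt.zero_grad()
        p_loss.backward()
        p_opt.step()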

That noise sample is a real example of how easy it is to fool these machines. I wonder how many humans who watched this video were fooled by that noise.

TheOswald
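The noise in the video is not random: the paper crafts it with the fast gradient sign method (FGSM), stepping along the sign of the loss gradient with respect to the input. A minimal sketch, where policy and loss_fn are placeholders:

    import torch

    def fgsm_noise(policy, state, action, loss_fn, eps=0.003):
        # Gradient of the loss w.r.t. the *input*, then one signed step.
        # A tiny eps keeps the perturbation nearly invisible to humans
        # while still changing the policy's preferred action.
        state = state.clone().requires_grad_(True)
        loss = loss_fn(policy(state), action)
        loss.backward()
        return eps * state.grad.sign()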

Could come in handy when Skynet takes over.

EDIT: imagine Skynet vs. Skynet 2 (the fooler)

snaawflake

There is more than one type of overfitting.

SirCutRy

Isn't that a simple case of overfitting by the learning algorithm?

nraynaud

Does anyone know of a channel like this that covers genetic advancements, with stuff like CRISPR?

Killadog

Aww, I miss the old outro music :(

Oh well, great vid as always.

kipper