TensorFlow Tutorial #12 Adversarial Noise for MNIST

How to create a single noise pattern that fools a neural network into misclassifying all input images as a desired target class. Demonstrated on the MNIST dataset.

This tutorial does NOT work with TensorFlow 2 and later versions, and it would take too much time and effort to update it.
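For readers skimming this page, the core idea can be sketched in a few lines of TensorFlow 1.x code. This is not the tutorial's exact code; the layer sizes, learning rate and noise_limit value are illustrative assumptions. A single trainable noise image is added to every input, and only that noise is optimized so the classifier predicts the chosen target class:

import tensorflow as tf  # TensorFlow 1.x, as used in the tutorial

img_size_flat = 28 * 28   # MNIST images, flattened
num_classes = 10
noise_limit = 0.35        # keep the noise pattern small (illustrative value)

x = tf.placeholder(tf.float32, [None, img_size_flat])
y_target = tf.placeholder(tf.float32, [None, num_classes])

# The single adversarial noise pattern, shared by all input images.
# trainable=False keeps it out of the normal training variables.
adv_noise = tf.Variable(tf.zeros([img_size_flat]),
                        name='adversarial_noise', trainable=False)

# Add the noise to every image and keep pixel values in [0, 1].
x_noisy = tf.clip_by_value(x + adv_noise, 0.0, 1.0)

# A small illustrative classifier (the tutorial uses a convolutional network).
hidden = tf.layers.dense(x_noisy, 128, activation=tf.nn.relu)
logits = tf.layers.dense(hidden, num_classes)

loss = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits_v2(labels=y_target, logits=logits))

# Optimize ONLY the noise variable towards the target class.
optimizer_adv = tf.train.AdamOptimizer(1e-2).minimize(loss, var_list=[adv_noise])

# Re-clip the noise after each step so it stays within the allowed limit.
clip_noise = tf.assign(adv_noise,
                       tf.clip_by_value(adv_noise, -noise_limit, noise_limit))

During optimization the same one-hot target class would be fed for every image, alternating runs of optimizer_adv and clip_noise, so one bounded noise pattern emerges that pushes all inputs towards the target class.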
Comments

You are doing a great job with all these tutorials, thank you!!

alexfuster

Wow, very nice explanation. Thank you for this contribution, Magnus.

ktyewgy

Absolutely amazing. You just saved me a lot of time. Thank you for your hard work.

TuNguyen-oxlt

Respect and thanks for all of the tutorials. Just a small note about this warning: "please use GLOBAL_VARIABLES instead; VARIABLES will be removed after 2017-03-02."
In cell [18], you can use tf.GraphKeys.GLOBAL_VARIABLES instead of tf.GraphKeys.VARIABLES. I tried tf.GraphKeys.VARIABLES and it still works.
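For context, a minimal sketch of what that change looks like (TensorFlow 1.x assumed; the variable name and shape here are illustrative, not the notebook's exact code):

import tensorflow as tf  # TensorFlow 1.x

# tf.GraphKeys.VARIABLES is a deprecated alias for GLOBAL_VARIABLES, which is
# why the old name still works but prints the deprecation warning.
noise = tf.Variable(tf.zeros([784]), name='x_noise')
global_vars = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES)  # preferred key

with tf.Session() as session:
    session.run(tf.variables_initializer(global_vars))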

mertsefatarhan

I have a question: what are the uses of adversarial noise in real life? Please reply. By the way, nice explanations in all the tutorials.

sahiljindal