Adversarial Images
Today's video is on adversarial images: images designed to fool image recognition algorithms with tiny changes, while still looking like the original object to the human eye.
I found this fascinating, as I didn't expect such minor changes to produce such drastically different classifications (e.g. the turtle-to-rifle example on the labsix page below).
Hopefully this will lead to improvements in these algorithms, especially in bleeding-edge fields such as self-driving cars.
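The "tiny changes" are usually computed from the model's own gradients: nudge every pixel a small amount in whichever direction increases the classifier's loss. A minimal sketch of that idea (the Fast Gradient Sign Method, covered in some of the videos listed below) is shown here on a toy logistic-regression "classifier" rather than a real image network; the weights, inputs, and epsilon are made-up illustrative values, not anything from the video.

```python
import numpy as np

def fgsm_perturb(x, w, b, y_true, eps):
    """One FGSM step against a logistic-regression classifier.

    x      : input feature vector (stand-in for image pixels)
    w, b   : model weights and bias
    y_true : correct label (0.0 or 1.0)
    eps    : per-feature perturbation budget
    Returns the adversarial input x + eps * sign(dLoss/dx).
    """
    # Forward pass: predicted probability of class 1.
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))
    # Gradient of the binary cross-entropy loss w.r.t. the input x
    # (closed form for logistic regression: (p - y) * w).
    grad_x = (p - y_true) * w
    # Move each feature by +/- eps in the direction that raises the loss.
    return x + eps * np.sign(grad_x)

# Toy example: the model correctly predicts class 1 for x...
w, b = np.array([1.0, -1.0]), 0.0
x, y = np.array([0.3, 0.1]), 1.0
# ...but after a small eps=0.15 nudge, the prediction flips,
# even though no feature moved by more than 0.15.
x_adv = fgsm_perturb(x, w, b, y, eps=0.15)
```

The same recipe scales up to real networks: backpropagation supplies the gradient of the loss with respect to every pixel, and a visually imperceptible epsilon is often enough to flip the label.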
Labsix page (includes video):
BBC article:
The Verge article:
Tom's Hardware article:
MIT paper:
Su Jiawei/Kyushu University paper:
Google paper/response:
=================================================
Subscribe if you want to see more. Hit like if you liked it, comment if you have questions or suggestions.
==================================================
Adversarial Attacks on Neural Networks - Bug or Feature?
Adversarial images
Adversarial Image Attack Demo
Adversarial Attack Demo
What are GANs (Generative Adversarial Networks)?
Fooling Image Recognition with Adversarial Examples
Understanding Adversarial Examples From the Mutual Influence of Images and Perturbations
Adversarial Examples, Optical Illusions and Neural Networks
Adversarial Attacks + Re-training Machine Learning Models EXPLAINED + TUTORIAL
Adversarial Attack | FGSM | deep learning model | image classification
Tutorial on the Fast Gradient Sign Method for Adversarial Samples
Adversarial attacks on deep learning models: Konda Reddy Mopuri
Breaking Deep Learning Systems With Adversarial Examples | Two Minute Papers #43
Adversarial examples for humans
Adversarial Imaging Pipelines
AI Thinks This Dragonfly is a Manhole Cover | Natural Adversarial Images #shorts
Adversarial Images Against Super Resolution Convolutional Neural Networks for Free
Universal Adversarial Perturbations
Adversarial images and attacks with Keras and TensorFlow | PyImageSearch | Deep Learning Part -14
All You Need is RAW: Defending Against Adversarial Attacks with Camera Image Pipelines
[9B] Adversarial Images Against Super-Resolution Convolutional Neural Networks for Free
Generative Adversarial Network (GAN) to generate face images
Learning From Simulated and Unsupervised Images Through Adversarial Training
Towards Large Yet Imperceptible Adversarial Image Perturbations With Perceptual Color Distance