What is an Autoencoder? | Two Minute Papers #86

Autoencoders are neural networks that learn compressed representations of their input data and can therefore be used for image compression. Denoising autoencoders, after learning these representations, can be presented with noisy images and reconstruct clean ones. Even better is a variant called the variational autoencoder, which not only learns these representations but can also generate entirely new images. We can, for instance, ask it to create new handwritten digits and actually expect the results to make sense!
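The compress-then-reconstruct idea described above can be sketched as a tiny autoencoder in plain NumPy. Everything below is an illustrative assumption (layer sizes, learning rate, synthetic data), not a detail from the video or the paper:

```python
import numpy as np

# Toy linear autoencoder: 8-dimensional inputs are squeezed through a
# 2-unit bottleneck and reconstructed, trained by gradient descent on
# mean squared reconstruction error.
rng = np.random.default_rng(0)

# Synthetic data that lies close to a 2-D subspace, so a 2-unit
# bottleneck is enough to represent it well.
codes = rng.normal(size=(200, 2))
mixing = rng.normal(size=(2, 8))
X = codes @ mixing + 0.01 * rng.normal(size=(200, 8))

W_enc = rng.normal(scale=0.1, size=(8, 2))  # encoder weights
W_dec = rng.normal(scale=0.1, size=(2, 8))  # decoder weights

def reconstruct(X):
    z = X @ W_enc          # compressed bottleneck code
    return z, z @ W_dec    # reconstruction from the code

def mse(X):
    _, X_hat = reconstruct(X)
    return np.mean((X - X_hat) ** 2)

loss_before = mse(X)
lr = 0.05
for _ in range(1000):
    z, X_hat = reconstruct(X)
    g = 2.0 * (X_hat - X) / X.size        # dMSE / dX_hat
    W_dec -= lr * (z.T @ g)               # decoder gradient step
    W_enc -= lr * (X.T @ (g @ W_dec.T))   # encoder gradient step

loss_after = mse(X)
print(loss_before, loss_after)  # reconstruction error should drop sharply
```

A denoising autoencoder follows the same recipe, except the encoder is fed a corrupted copy of `X` while the loss still compares the reconstruction against the clean `X`.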

_____________________________

The paper "Auto-Encoding Variational Bayes" is available here:

Recommended for you:

Andrej Karpathy's convolutional neural network that you can train in your browser:

Sentdex's Youtube channel is available here:

Francois Chollet's blog post on autoencoders:

More reading on autoencoders:

WE WOULD LIKE TO THANK OUR GENEROUS PATREON SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE:
David Jaenisch, Sunil Kim, Julian Josephs, Daniel John Benton, Dave Rushton-Smith, Benjamin Kang.

Károly Zsolnai-Fehér's links:
Comments

Just discovered this channel. Would call it my best online discovery ever. Thanks a lot for this. :)

salman

Came here from The Coding Train, and now you are sending me to sentdex. I knew about you all; it means I am on the right track.

niaei

I think you explain it much better than some of the others.

feraudyh

The next episode is going to be about Two Minute Papers itself, and after that, we'll be back to the usual visual fireworks. :)

TwoMinutePapers

It's really nice of you to promote a good channel like sentdex.

varunmahanot

I love what you are doing. Pleasure to watch your videos!

TheAwesomeDudeGuy

Wow. It's fascinating to see what this channel was like when it was just starting out. The style is largely the same, but less fine-tuned. Károly has since learned a lot more about engaging speech, and the icon looks just a little different. Also, we now have two favorite phrases that have basically become a culture: 1) "Hold on to your papers" (and variations thereof), and 2) "Just two more papers down the line" (and variations thereof).

jfk_the_second

I'm glad I found this channel, thank you!

CopperHermit

A great application could be in denoising before vectorisation of mid-lines or in animation when you need to automatically morph complex shapes. It seems to do that with quite a lot of understanding of what lines are.

Ludifant

I think the main advantage of AE compression over standard compression techniques is that it is possibly a bit more general, as opposed to something like JPEG, which is limited to images.

atrumluminarium

Thanks for pointing us to such a valuable channel :D

ahmed.ea.abdalla

I have to put my paws to the 'like' button immediately!

JS-lfsm

Thanks a ton for the link. It'll probably help with my schol dts

bobsmithy

Very clear and to the point!!! Why can't my teacher just talk this way?

summerxia

Concise and truly informative lecture!

I'm just wondering: after we obtain the most important features from the bottleneck of our trained neural network, is it possible to apply the denoising capability of the autoencoder to a live video feed that is highly correlated with the training images?

Would this be better, or even recommended, compared to using OpenCV's traditional denoising filters for real-time video?

I'd love to learn more from your expertise and advice as I explore this topic further. Thank you for the insightful explanation and demo, by the way! Subscribed! :)

ellisiverdavid

I love this channel, thank you! I am setting up a Patreon account asap :)

ServetEdu

1:48 Shouldn't we call it a very dense representation instead of a sparse one? Here's how I think about it: the smaller number of neurons has to compress the data from a large representation into a very dense, small one. Compressing should mean you are making things dense, shouldn't it? And usually we refer to a sparse vector as a really large representation.

offchan

I'm glad to see this kind of like-to-dislike ratio on YouTube; it's well deserved! Keep up the good work! (One of my favorite channels, you pick great topics!)

ndavid

Do you have a link to the video that explains how to build the 'tanks' game shown at 3:24?

thomasblackmore

Any chance you know which of Sentdex's videos shows the tank game?

robosergTV