MIT 6.S191 (2023): Deep Generative Modeling

MIT Introduction to Deep Learning 6.S191: Lecture 4
Deep Generative Modeling
Lecturer: Ava Amini
2023 Edition

Lecture Outline
0:00 - Introduction
5:48 - Why care about generative models?
7:33 - Latent variable models
9:30 - Autoencoders
15:03 - Variational autoencoders
21:45 - Priors on the latent distribution
28:16 - Reparameterization trick
31:05 - Latent perturbation and disentanglement
36:37 - Debiasing with VAEs
38:55 - Generative adversarial networks
41:25 - Intuitions behind GANs
44:25 - Training GANs
50:07 - GANs: Recent advances
50:55 - Conditioning GANs on a specific label
53:02 - CycleGAN for unpaired translation
56:39 - Summary of VAEs and GANs
57:17 - Diffusion model sneak peek

Subscribe to stay up to date with new deep learning lectures at MIT, or follow us @MITDeepLearning on Twitter and Instagram to stay fully-connected!!
Comments

What's great about this instructor is that they are very careful and particular about what they say and how they phrase it. There's no fluff, nothing that could cause confusion. Straight to the point and very intentional.

MrJ

Highly recommended series for AI enthusiasts. This MIT series is by far the most intuitive set of videos covering all aspects of deep learning. Well done on that.

vikrambhutani

Honestly, you two are the best speakers for this subject and beyond. I am so thrilled these lectures are open-source and exist for data science communities outside of MIT!

sarahamiri

Much appreciation from my side to the team who built such an excellent course on Deep Learning.

arfakarim

This series came out right when I wanted to learn more about the theory! Thanks for this 🙏

maazkattangere

I don't know why, but I could hardly breathe listening to this lecture. She's so clear, with no redundancy, no "hmm" or "uh"... how does she do it? She is so amazing. I would have to practice 1000 times to be able to lecture like this.

thankyouthankyou

Plato's allegory of the cave as a latent variable example was not intuitive for me (sorry), so I asked ChatGPT for a similar but simpler example. It gave me this:

Imagine that you have a box filled with different types of candies, but you cannot see what's inside. Instead, you can only touch the box and feel the shape and texture of the candies inside. Based on how they feel, you might be able to guess what type of candy is inside the box. For example, if a candy feels round and has a hole in the middle, you might guess that it's a donut-shaped candy. In this example, the shape and texture of the candies are the observed variables, while the type of candy inside the box is the latent variable that we are trying to learn from the observed data. By observing and feeling the candies inside the box, we can learn the different types of candies that are hidden inside, even though we cannot see them directly.
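The candy analogy maps neatly onto a toy simulation. The sketch below is my own illustration (not code from the lecture, and the feature names are invented): a hidden "candy type" generates the features we can feel, and we then try to recover that latent variable from the observations alone, here with a minimal two-means clustering.

```python
import numpy as np

rng = np.random.default_rng(0)

# Latent variable: candy type (0 = small smooth drop, 1 = large ridged donut),
# hidden from us -- we never look inside the box.
true_type = rng.integers(0, 2, size=200)

# Observed variables: (size, roughness) measurements generated FROM the type.
means = np.array([[1.0, 0.2],    # type 0: small and smooth
                  [3.0, 1.8]])   # type 1: large and ridged
observed = means[true_type] + 0.2 * rng.standard_normal((200, 2))

# Infer the hidden type from observations alone with a tiny 2-means clustering,
# initializing the centers from the two extreme candies along the size axis.
centers = observed[[observed[:, 0].argmin(), observed[:, 0].argmax()]]
for _ in range(20):
    dists = np.linalg.norm(observed[:, None] - centers[None], axis=2)
    assign = dists.argmin(axis=1)
    centers = np.array([observed[assign == k].mean(axis=0) for k in range(2)])

# Cluster labels are arbitrary, so score against the latent type up to a swap.
acc = max((assign == true_type).mean(), (assign != true_type).mean())
print(f"recovered the latent candy type with accuracy {acc:.2f}")
```

The point is the direction of the arrows: the latent type causes the observations, and inference runs the other way, from what we can feel back to what is hidden, which is exactly what a VAE's encoder learns to do.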

You guys are awesome :) Thank you for sharing these lectures. 🙏

VijayasarathyMuthu

This is excellent, so grateful to learn a lot from this channel. Kudos to our presenters for laying a solid foundation in deep learning.

jamesgambrah

Thank you all so very much! Many greetings from Germany.

ersbay

The lectures are top notch. But in this one, I lost track when she explained GANs with mathematical notation. I had to put in some extra effort to work through that again.

shovonpal

Wow, such clarity of thought and ideas. I guess that's the MIT advantage! Well done :)

aefieefnvhas

Thanks a lot for all the wonderful content on deep learning. These are very helpful to me.

codingWorld

Very well presented, with intuition behind deep generative modeling, its architectures, and how they are trained. Well done!

EGlobalKnowledge

The knowledge, the passion and clarity of presentation are out of this world! God bless you guys!

Savedbygrace

Thank you for doing this! We are all grateful ❤

MaksimsMatulenko

I opened this intending to watch just 2 minutes of the video, and didn't notice until the lecture was over 😅. Freaking awesome 😎

giyaseddinbayrak

Wonderful. A very dense, hugely interesting, and informative lecture; MIT-style! Sixty minutes as a latent-space kind of compression of a hugely complex and multidimensional topic that under real-life conditions takes weeks to understand and "digest". I am really looking forward to the diffusion model lecture! Hope it will be online soon!

jensk

Wow! Can't wait for the coming lectures!

AndyLee-xqwq

Thank you for such a valuable lecture. 🙌

rrtt

Greatly appreciate the knowledge sharing.

yousufmamsa