MIT 6.S191 (2022): Deep Generative Modeling

MIT Introduction to Deep Learning 6.S191: Lecture 4
Deep Generative Modeling
Lecturer: Ava Soleimany
January 2022

Lecture Outline - coming soon!

Subscribe to stay up to date with new deep learning lectures at MIT, or follow us @MITDeepLearning on Twitter and Instagram to stay fully-connected!!
Comments

What an intuitive and straightforward example of an abstract concept like "latent variable" - what a "wow"! Thanks Ava Soleimany

ajaytaneja

I cannot thank you enough! Never have I ever learned a concept this well!! THANK YOU.

neginmsh

Wow, this is just an awesome video. This lecture encouraged me to get into Deep Generative Modeling alongside my NLP work.

mamunurrahman

Exciting and hopeful lecture; it has some updated content compared to last year's.

voquangtuong

I feel so intellectually enriched after listening to this, wish I could hit the upvote button more than once.

shamimahossain

Nice explanation of such advanced topics in deep learning. Thanks for such a beautiful lecture; hope the @mitdeeplearning team continues to make more videos.

shubham-ppcw

Question, 10:50: how does the autoencoder not have training data?

donfeto
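
On the question above: an autoencoder does have training data, it just has no labels. The reconstruction target is the input itself, which is why it counts as unsupervised. A minimal NumPy sketch of this idea (the toy data and the linear encoder/decoder are illustrative, not from the lecture):

```python
import numpy as np

# Toy unlabeled data: 200 points in 5-D. Note: no labels anywhere.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))

# Linear autoencoder: compress to a 2-D latent, then reconstruct.
W_enc = rng.normal(scale=0.1, size=(5, 2))
W_dec = rng.normal(scale=0.1, size=(2, 5))

lr = 0.01
for step in range(500):
    Z = X @ W_enc                 # latent codes
    X_hat = Z @ W_dec             # reconstruction
    err = X_hat - X               # the training "target" is X itself
    # Gradient descent on the mean squared reconstruction error.
    grad_dec = (Z.T @ err) / len(X)
    grad_enc = (X.T @ (err @ W_dec.T)) / len(X)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc
```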

I'm wondering what the limitations are of a normal distribution. Suppose one variable of many that describe a whole body is height. Height, for a broad middle range, I imagine, can be independent of other parameters. But what about the upper end and the lower end, which are often associated with genetic irregularities and distinctive facial and body shapes? And what if weight does not scale as the cube of height, which I've read it does not? Suppose I am correct that this is a valid counterexample to the accuracy of a Gaussian distribution at the tails. What general conclusions should I draw from this and other similar examples I might think of? Or, on the other hand, will a deep neural network tend to automatically give the height-like variable a nonlinear relation to height at the tails of the distribution of the height-like parameter?

RichardTasgal
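
One partial answer to the tails question: in a VAE the Gaussian assumption lives in the latent space, while the decoder is a learned nonlinear map, so the modeled marginal of a height-like variable need not be Gaussian at the tails. A NumPy illustration with a made-up cubic "decoder" (not a trained network):

```python
import numpy as np

rng = np.random.default_rng(0)
z = rng.normal(size=100_000)        # Gaussian latent variable

# Made-up nonlinear "decoder": bends the tails of the output.
height = 170 + 10 * z + 1.5 * z**3

def excess_kurtosis(x):
    s = (x - x.mean()) / x.std()
    return (s**4).mean() - 3.0

print(excess_kurtosis(z))        # ~0: the latent really is Gaussian
print(excess_kurtosis(height))   # >0: heavy, non-Gaussian tails
```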

I have decent knowledge of deep learning, but this was quite concise and to the point. Just a request: show the math behind these ideas. Again, thanks for the lecture series, Alexander and Ava.

akshaypansari

Is the objective function of the GAN correct?

longpan
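
For reference, the standard GAN minimax objective (Goodfellow et al., 2014) is:

```latex
\min_G \max_D \; V(D, G) =
  \mathbb{E}_{x \sim p_{\text{data}}(x)}\left[\log D(x)\right] +
  \mathbb{E}_{z \sim p_z(z)}\left[\log\left(1 - D(G(z))\right)\right]
```

The discriminator D is trained to maximize this value while the generator G is trained to minimize it; in practice G is often trained to maximize log D(G(z)) instead, which gives stronger gradients early in training.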

Can I get the links to the last two lectures? The links on the website are not working.

sarmadf.ismael

Hi Ava Soleimany/Alexander Amini, in GANs, the generator through training is able to make the "fake" data indistinguishable from the "real" data over a series of training iterations, am I right? If that is the case, how do we say we have "new" images? Is it not right that the generator is able to "reproduce" the actual image from the noise? Is the image generated from the "noise" not an exact replica of a real image?

ajaytaneja
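
On the replica question above: the generator is trained to match the training distribution, not to memorize individual images, and every fresh draw of the noise z decodes to a different sample. A toy sketch where the "trained" generator is a hypothetical stand-in function, not a real model:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 4))   # frozen weights of a stand-in "trained" generator

def generator(z):
    # Stand-in for a trained generator network G(z); deterministic given z.
    return np.tanh(z @ W)

# Different noise draws map to different samples: new data, not replicas.
z1, z2 = rng.normal(size=(2, 8))
print(generator(z1))
print(generator(z2))
```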

Hi, good class. Is there any practice material to work through after the class?

macknightxu

Latent perturbation is fucking awesome. Must've changed the game for generating samples with the desired qualities. Like designer babies, but designer samples.

jayp
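
A rough sketch of what latent perturbation looks like in code: fix a latent code, pick one latent dimension (or a learned attribute direction), and sweep along it before decoding. The decoder and the chosen direction below are stand-ins, not from the lecture:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(16, 64))       # stand-in weights for a trained decoder

def decode(z):
    # Stand-in for a trained decoder mapping a latent code to a sample.
    return np.tanh(z @ W)

z = rng.normal(size=16)             # base latent code
direction = np.eye(16)[3]           # perturb a single latent dimension

# Sweeping one latent dimension changes one factor of variation
# (e.g. head pose) while the rest of the sample stays similar.
samples = [decode(z + alpha * direction) for alpha in np.linspace(-2, 2, 5)]
```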

The fact that I didn't even realize the faces weren't real...

ashioyajotham

The concept is very deep for beginners. If possible, try to explain with examples; the whole point is to explain to beginners.

karthikbobba