MIT 6.S191 (2024): Deep Generative Modeling

MIT Introduction to Deep Learning 6.S191: Lecture 4
Deep Generative Modeling
Lecturer: Ava Amini
2024 Edition

Lecture Outline
0:00 - Introduction
6:10 - Why care about generative models?
8:16 - Latent variable models
10:50 - Autoencoders
17:02 - Variational autoencoders
23:25 - Priors on the latent distribution
32:31 - Reparameterization trick
34:36 - Latent perturbation and disentanglement
37:40 - Debiasing with VAEs
39:37 - Generative adversarial networks
42:09 - Intuitions behind GANs
44:57 - Training GANs
48:28 - GANs: Recent advances
50:57 - CycleGAN for unpaired translation
55:03 - Diffusion model sneak peek

Subscribe to stay up to date with new deep learning lectures at MIT, or follow us @MITDeepLearning on Twitter and Instagram to stay fully-connected!!
Comments

First, thank you Alexander and Ava for sharing the knowledge.
After watching these videos, I realized that learning machine learning is not just a skill; teaching is a much bigger skill.

ML-DS-AI-Projects

Brilliant, Ava. Explained some of the most complex concepts, GANs and CycleGANs, brilliantly.

sammyfrancisco

I would love to see Lecture 6 on Diffusion Models!

lucawahl

Thank you so much for the course. So interesting.

freddybrou

What an amazing lecture it was. Really enjoyed it tbh.

akshatchouhan

Thanks for your course. What I want to ask is whether you can upload the practice course files or related documents to the website etc. It may help all of those who want to follow the course and do some practice. Many thanks!

Radosteven

I got lost from 32:00 onwards about what the different terms phi, q_phi, etc. meant ...

ssrwarrior
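
For anyone else lost at that point: phi denotes the encoder network's weights, and q_phi(z|x) is the approximate posterior those weights parameterize (likewise theta and p_theta(x|z) for the decoder). A minimal sketch, with hypothetical layer sizes, of an encoder producing the mean and log-variance that define q_phi(z|x):

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Maps x to the parameters (mu, log sigma^2) of q_phi(z | x).
    phi is just this module's weights; the sizes here are hypothetical."""

    def __init__(self, x_dim=784, hidden=256, z_dim=2):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(x_dim, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, z_dim)        # mean of q_phi(z|x)
        self.logvar = nn.Linear(hidden, z_dim)    # log-variance of q_phi(z|x)

    def forward(self, x):
        h = self.body(x)
        return self.mu(h), self.logvar(h)

x = torch.randn(8, 784)              # dummy batch of flattened images
mu, logvar = Encoder()(x)            # per-example parameters of q_phi(z|x)
```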

Does anyone know if we can actually expect Lab 3 to be released or if there's a way to access it?

anantsinha

Awesome, many thanks for your initiative!
Keep up the great work!

catalinmanea

Couldn't bear to live without tech and AGI.

miroslavdyer-wdei

This proves Plato's idealism is working.

Ponassening

I am curious: regarding CycleGANs for audio generation, would the model's output be better if the person creating the input audio tried to mimic the person the model was trained on as closely as possible? For example, if an Obama impersonator supplied the input audio, would the output even more closely resemble Obama's true voice? The same question applies to the video content: if the body language more closely mimicked the target, would the model generate an output that more closely resembles the target? My hunch is that it would indeed improve the prediction.

JCasaraconn

32:33 *_"and so with they employ this really clever trick that effectively"_* Did anybody catch what she was saying here? Thanks.

veganath
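
The "clever trick" at that timestamp is the reparameterization trick from the outline (32:31): instead of sampling z ~ N(mu, sigma^2) directly, which would make the latent node non-differentiable, you sample eps ~ N(0, I) and compute z = mu + sigma * eps, so gradients can flow back through mu and sigma. A minimal PyTorch sketch:

```python
import torch

def reparameterize(mu, logvar):
    """z = mu + sigma * eps, eps ~ N(0, I): the randomness is isolated in eps,
    so z remains differentiable with respect to mu and logvar."""
    std = torch.exp(0.5 * logvar)    # sigma recovered from log sigma^2
    eps = torch.randn_like(std)      # the stochastic node; needs no gradient
    return mu + std * eps            # deterministic in (mu, logvar) given eps

mu = torch.zeros(8, 2, requires_grad=True)
logvar = torch.zeros(8, 2, requires_grad=True)
z = reparameterize(mu, logvar)
z.sum().backward()                   # gradients now reach mu and logvar
```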

Thank you for the video... what are deterministic and stochastic nodes?

ssrwarrior
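
On the question above: in the lecture's computation-graph picture, a deterministic node returns the same output for the same inputs (so backpropagation can pass through it), while a stochastic node draws a fresh sample on every call (so gradients cannot pass through it directly). A tiny illustration:

```python
import torch

mu, sigma = torch.tensor([1.0]), torch.tensor([0.5])
eps = torch.randn(1)

def deterministic_node(mu, sigma, eps):
    return mu + sigma * eps                  # same inputs -> same output

def stochastic_node(mu, sigma):
    return torch.normal(mu, sigma)           # fresh sample on every call

print(deterministic_node(mu, sigma, eps) == deterministic_node(mu, sigma, eps))
print(stochastic_node(mu, sigma), stochastic_node(mu, sigma))  # almost surely differ
```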

Thank you for the amazing content. Please add the slides for this lecture to the website; they're still not there. Cheers :)

arpandas

Is there such a thing as a Generative Modelling Agency???

miroslavdyer-wdei

The website hasn't been working for a few days :/

newmood

First, thank you Ava for sharing the knowledge.
I'm not able to understand why the standard autoencoder performs a deterministic operation.

ahmedelsafty
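
Regarding the question above: a standard autoencoder is deterministic because its forward pass is plain function composition, x -> z -> x_hat, with no sampling step anywhere; the same input always yields the same latent code and reconstruction. A minimal sketch with hypothetical sizes:

```python
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    """Plain autoencoder: encoding and decoding are ordinary functions,
    with no sampling anywhere (sizes are hypothetical)."""

    def __init__(self, x_dim=784, z_dim=2):
        super().__init__()
        self.encode = nn.Linear(x_dim, z_dim)
        self.decode = nn.Linear(z_dim, x_dim)

    def forward(self, x):
        z = self.encode(x)           # a fixed code, not a distribution
        return self.decode(z)

ae = Autoencoder()
x = torch.randn(1, 784)
print(torch.equal(ae(x), ae(x)))     # True: identical output on every call
```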

I have a dataset of 120 cell-phone photographs of the skin of dogs sick with 12 types of skin diseases, with 10 images for each dog.
What type of Generative Adversarial Network (GAN) is most suitable for augmenting my dataset with quality so I can train my DL model: DCGAN, ACGAN, StyleGAN3, or CGAN?

geoffreyporto
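
There is no single right answer for a dataset that small, but note that several of the listed options (ACGAN, CGAN) are class-conditional: the common ingredient is feeding the disease label to the generator, for example as a learned embedding concatenated with the noise vector. A minimal sketch of that conditioning, with hypothetical sizes (not a recommendation among the listed architectures):

```python
import torch
import torch.nn as nn

class ConditionalGenerator(nn.Module):
    """Class-conditional generator: noise z plus a learned label embedding
    (layer sizes are hypothetical)."""

    def __init__(self, z_dim=64, n_classes=12, emb_dim=16, out_dim=784):
        super().__init__()
        self.label_emb = nn.Embedding(n_classes, emb_dim)
        self.net = nn.Sequential(
            nn.Linear(z_dim + emb_dim, 256), nn.ReLU(),
            nn.Linear(256, out_dim), nn.Tanh(),
        )

    def forward(self, z, labels):
        cond = self.label_emb(labels)              # one vector per disease class
        return self.net(torch.cat([z, cond], 1))   # condition by concatenation

g = ConditionalGenerator()
z = torch.randn(4, 64)
labels = torch.randint(0, 12, (4,))                # 12 classes, per the comment
fake_images = g(z, labels)                         # shape (4, 784)
```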

Nice teaching, Amini ❤ and your curly hair is nice 😮

gapcreator