MIT 6.S191 (2023): Robust and Trustworthy Deep Learning

MIT Introduction to Deep Learning 6.S191: Lecture 5
Robust and Trustworthy Deep Learning
2023 Edition

Lecture Outline
0:00 - Introduction and Themis AI
3:46 - Background
7:29 - Challenges for Robust Deep Learning
8:24 - What is Algorithmic Bias?
14:13 - Class imbalance
16:25 - Latent feature imbalance
20:30 - Debiasing variational autoencoder (DB-VAE)
23:24 - DB-VAE mathematics
27:40 - Uncertainty in deep learning
29:50 - Types of uncertainty in AI
32:48 - Aleatoric vs epistemic uncertainty
33:29 - Estimating aleatoric uncertainty
37:42 - Estimating epistemic uncertainty
44:11 - Evidential deep learning
46:44 - Recap of challenges
47:14 - How Themis AI is transforming risk-awareness of AI
49:30 - Capsa: Open-source risk-aware AI wrapper
51:51 - Unlocking the future of trustworthy AI

Subscribe to stay up to date with new deep learning lectures at MIT, or follow us @MITDeepLearning on Twitter and Instagram to stay fully-connected!
Comments

Every time I go through one of these lectures, I have the same feeling for you: God bless you!

siak

This lecture series is just incredible. Thank you, Alexander, and all the other instructors for putting this together. I learned so much! You are pushing the boundaries of AI education!

ethanm

Very inspiring lecture!
Before this, it was not easy to know how we could make AI learn better without manually diagnosing the training data.

hilbertcontainer

It's very inspiring what you are doing. Looking forward to using these lessons in future projects. Thanks to the entire team behind this course, and for making it available to everyone around the globe.

melfice

I've already complimented the lectures in another video. This is a comment just for the YT algorithm 🙏. Keep up the great work.

MrPejotah

Thank you for doing this important work!

Isabella-_

Please continue this great work.
Also, more courses on AI, ML, and data science, please.

SantoshKumar-hxig

Very clear lecture.
(But maybe you should explain the "noise" term a bit more.)

salamander

Where can I find or practice the lab sessions?

Edit: I found them. They are on the course website, under the lab sessions.

suyogkhadke

I didn't quite get it from the intro. Was Alexander simply reading from a script, or is he part of Themis AI?

gregwerner

For the corresponding lab, the capsa module can no longer be found. Has it been removed? Where can I play with it? Thanks!

IrfanKhan-nlqc

How do I come up with the variance of a single data point? (See @35:56.) How does the variance of a single data point even make sense?
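[Editor's note: in the heteroscedastic setup the lecture describes, the variance of a single data point is not *computed* from that point; the network *predicts* a variance σ²(x) for each input, and training with the Gaussian negative log-likelihood pushes that prediction toward the local noise level. A minimal NumPy sketch of that loss, as an illustration rather than the lecture's actual code:]

```python
import numpy as np

def gaussian_nll(y, mu, log_var):
    """Per-sample Gaussian negative log-likelihood (constant term dropped).

    A network with two outputs per input x predicts mu(x) and
    log_var(x) = log sigma^2(x); minimizing this loss drives the
    predicted variance toward the noise level at that particular x.
    """
    return 0.5 * (np.exp(-log_var) * (y - mu) ** 2 + log_var)

# For a fixed squared error of 1, the loss is smallest when the predicted
# variance matches that error (sigma^2 = 1, i.e. log_var = 0).
candidates = (-2.0, 0.0, 2.0)
best_lv = min(candidates, key=lambda lv: gaussian_nll(1.0, 0.0, lv))
print(best_lv)  # 0.0
```

So "the variance of a single data point" is a model output evaluated at that point's x, exactly like the predicted mean.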

MarkJackson-zl

49:30 - Capsa: Open-source risk-aware AI wrapper
It's disappointing to learn that Themis AI has converted Capsa from open source to closed source.

manjeetkulhar

Where is the lecture on diffusion models?

arpita

Please also educate me: how many training samples per class are typically needed for a deep learning network such as YOLO, ResNet, Transformers, etc.?

siak

This needs to be taught in every classroom, public and private.

PoliticalFelon

Can anyone explain why high variance means noise in the data? The variance of a data point depends on how far its x value is from the mean of the data, while noise, as I understand it, means the same x value can have different y values. So how do we detect noise by checking whether the variance is high or not? This is about aleatoric uncertainty.
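[Editor's note: one way to see the distinction, as a toy sketch rather than the lecture's code: aleatoric noise shows up as spread in y among points whose x values are close together, regardless of where x sits relative to the mean of the inputs, while epistemic uncertainty shows up as disagreement between models, e.g. a bootstrap ensemble queried far from the training data.]

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x = rng.uniform(-1.0, 1.0, n)

# Aleatoric noise: the SAME region of x yields different y values.
# Points with |x| < 0.3 are drawn with large spread, the rest with small.
noise_scale = np.where(np.abs(x) < 0.3, 1.0, 0.1)
y = rng.normal(0.0, noise_scale)

# The local spread of y within a band of x (not the distance of x from
# the mean of all x) is what reveals the noise level.
noisy_band = y[np.abs(x) < 0.3].std()
clean_band = y[np.abs(x) >= 0.3].std()
print(noisy_band > clean_band)  # True

# Epistemic uncertainty: fit a bootstrap ensemble of linear models and
# compare their disagreement inside vs. outside the training range.
fits = []
for _ in range(20):
    idx = rng.integers(0, n, n)              # resample with replacement
    fits.append(np.polyfit(x[idx], y[idx], deg=1))

spread_in = np.std([np.polyval(f, 0.0) for f in fits])   # inside the data
spread_out = np.std([np.polyval(f, 5.0) for f in fits])  # extrapolation
print(spread_out > spread_in)  # True: models disagree where data is absent
```

High *aleatoric* variance means repeated (or nearby) x's disagree in y, so it is irreducible; high *epistemic* variance means the models disagree because data is missing there, so more data reduces it.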

abdullahmarwan

What would I need to do to become an AI safety engineer? I already have a CS degree.

holthuizenoemoet

How in the world is she just an undergraduate? 😱

convolutionalnn

This lecture could have been explained more simply; it is not as clear as the other ones. Still, great job!

pradyumnanimbkar