MIT 6.S191 (2021): Deep Learning New Frontiers

MIT Introduction to Deep Learning 6.S191: Lecture 6
Deep Learning Limitations and New Frontiers
Lecturer: Ava Soleimany
January 2021

Lecture Outline
0:00 - Introduction
1:11 - Course logistics
3:48 - Upcoming hot topics and guest lectures
6:56 - Deep learning and expressivity of NNs
10:02 - Generalization of deep models
14:03 - Neural network failure modes
18:43 - Uncertainty in deep learning
22:41 - Adversarial attacks
26:35 - Algorithmic bias
27:27 - Limitations summary
28:29 - Structure in DL
29:46 - Learning on graphs
37:50 - Learning on 3D point clouds
39:58 - Automated Machine Learning (AutoML)
48:03 - Conclusion

Subscribe to stay up to date with new deep learning lectures at MIT, or follow us @MITDeepLearning on Twitter and Instagram to stay fully-connected!!
Comments

I like how Ava explains complex topics so easily!

islomjon

Excellent clarity, and well-crafted selection of topics under "New Frontiers". However, I feel that multitask learning deserves mention as a separate topic, since the bulk of AutoML is still pretty task-specific.

nintishia

She explains things very well. Thanks for the lecture.

Fordance

As always, thank you for making this free!

kaizhang

The moment I finally become an expert in deep learning will be the exact moment AutoML takes over and all my knowledge becomes obsolete :D

BoomBaaamBoom

Thanks, Ava, for this amazing lecture! It's really good to know and understand these new frontiers in deep learning and AI in general. I think these new ideas and algorithms, and of course the amount of computational power, are the reason why artificial intelligence is here to stay this time, allowing many types of algorithms and ideas to be applied. Such interesting years are ahead for AI!

reandov

This is a very good overview of some new topics!

piotrarturklos

I can't help noting some similarities with methods used in physics, and wondering what from physics might be helpful and what might be misleading. A deep neural network with an activation function is, in effect, a certain trial function, or ansatz, yes? A trial function might be used in physics to reduce the dimensionality of a problem from infinite (say, a field in one or more dimensions) to a function of only a handful of parameters. If you have physical reasons to think that a limiting case should behave a certain way (and you often do, since limiting cases tend to be simpler and more easily and confidently solved), then you can build that behavior into the trial function. This was clearly done at the end of the problem, when you limited the possible results to a finite set of objects, such as dogs or cats. Some of the convolutions do a similar heuristic (or maybe purely empirical) reduction of the degrees of freedom, I think, yes? Just guessing, but deep neural networks look like they are, in effect, choosing their trial functions pretty haphazardly. Not that I know what to do with these reflections, if they are in fact even relevant.

RichardTasgal

I'm surprised that I haven't heard "Newton-Raphson" mentioned in the course. Granted, that only concerns speeding up the optimization calculation.

RichardTasgal

47:17 - As I understand from the lecture, AutoML needs to do some kind of search for the optimal network structure; should that search also be counted toward the number of parameters? Or does AutoML claim that a structure found on one type of problem is also suitable for other types of problems?

thinkingaloud

Can a non-MIT student get that t-shirt too? :)

harshkumaragarwal

The buzz about AI in the media seems overhyped once you learn how it works. AGI is far away. Thoughts?

ながれる季節

How many local (and not global) minima are there in the parameter space? There ought to be some way of estimating, statistically, how dense the parameter space is with local minima.

RichardTasgal

Yes, YOU CAN GET THE T-SHIRT. EMAIL THEM NOW!!

Kryptikoo

Can a commerce student shift to this field?

sumitverma

Could you make the lectures shorter than 20 minutes, so that viewers stay interested in watching them?

AVyt