MIT 6.S191 (2020): Introduction to Deep Learning

MIT Introduction to Deep Learning 6.S191: Lecture 1
Foundations of Deep Learning
Lecturer: Alexander Amini
January 2020

Lecture Outline
0:00 - Introduction
4:14 - Course information
8:10 - Why deep learning?
11:01 - The perceptron
13:07 - Activation functions
15:32 - Perceptron example
18:54 - From perceptrons to neural networks
25:23 - Applying neural networks
28:16 - Loss functions
31:14 - Training and gradient descent
35:13 - Backpropagation
39:25 - Setting the learning rate
43:43 - Batched gradient descent
46:46 - Regularization: dropout and early stopping
51:58 - Summary

Subscribe to stay up to date with new deep learning lectures at MIT, or follow us @MITDeepLearning on Twitter and Instagram to stay fully-connected!!
Comments

Everyone watching these video lectures knows how high the quality is. It leaves me speechless, and it's crystal clear, I can say. Thanks MIT and Prof. Alexander Amini 👌

wangsherpa

Imagine being in 2020 and having the privilege to watch a fresh, recent MIT course from your sofa, damn

DLSMauu

That's the reason why everyone wants to study at MIT! The teaching quality is outstanding; you can hardly get this experience at any other college!

kitgary

Thank you for your professional lecture!
To be honest, my mom is an AI expert with years of dedicated experience in Machine Learning and Deep Learning. As a software engineer, I recently wanted to gain some knowledge of Deep Learning myself. One day I asked my mom via email, and she instantly recommended that I join your course on YouTube. Now watching your videos has become an important part of my daily life. I'm really happy to learn a lot from your lectures.

proittechlead

This is why every student strives to get into the best institutions. The quality of teaching is just that great.

fresred

I've tried so many courses on deep learning so far but could only pick up bits and pieces, given the way they were delivered and the quality of the content. Even the AI courses offered by the so-called giants of the tech industry were horrible. But this!!! I've only gone through lecture 1 so far and I already feel so comfortable with the pace and the way the concepts are put across. That's why an MIT education is unparalleled, and MIT's OpenCourseWare is just an absolute gem of an initiative. Thank you Prof. Amini for putting this up and giving us a chance to learn as well!

syncgaming

I've only recently taken a real interest in deep learning, and this is by far one of the clearest lectures I have ever seen in any field of similar complexity.

iansanders

The cleanest and tightest deep learning intro lecture I've ever come across. Most others get lost either in the theory and math or in the coding. Skipping the actual coding by using pseudocode, and displaying the math alongside the diagrams, was really helpful.

BishSinhaExcelsior

THANK GOD I FOUND THIS COURSE. Thank you so much! Great lecture, so clear. Can't believe all the new information I just learned in a 50-minute video. EVERY MINUTE was perfected to the best quality. THANK YOU SO MUCH.

patmaloyan

You know you're at a top class teaching and research institution when the guy says "this is a formulation introduced by Claude Shannon, _here_ at MIT..."

arsnakehert

Just a tip: Watch the lecture a second time, you'll pick up stuff that you missed the first time

SanjitKumar-khhj

41:48 To minimize confusion: he misspoke here. SGD is not the 'vanilla gradient descent' algorithm he just demoed; SGD stands for 'stochastic gradient descent'. It is the basis for all the other algorithms in the list shown, and in that regard it is 'vanilla'. However, it is true that it has a fixed learning rate. He explains the difference later, at 44:00.

tamimyousefi
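
To make the distinction in the comment above concrete, here is a minimal NumPy sketch; the toy linear-regression setup and all names are illustrative assumptions, not taken from the lecture. Vanilla (full-batch) gradient descent averages the gradient over the whole dataset per step, while SGD steps on one shuffled example at a time; both use a fixed learning rate, and mini-batches (44:00) sit between the two extremes.

import numpy as np

# Toy data for a linear model; grad() is the mean-squared-error gradient.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, -2.0, 0.5])

def grad(Xb, yb, w):
    return 2.0 * Xb.T @ (Xb @ w - yb) / len(yb)

lr = 0.05                  # fixed learning rate in both variants
w_gd, w_sgd = np.zeros(3), np.zeros(3)

for epoch in range(100):
    # Vanilla (full-batch) gradient descent: one step per pass over ALL data.
    w_gd -= lr * grad(X, y, w_gd)
    # Stochastic gradient descent: one step per single shuffled example.
    for i in rng.permutation(len(y)):
        w_sgd -= lr * grad(X[i:i+1], y[i:i+1], w_sgd)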

Man, these concepts are explained in plain English, but so effectively. Even in a few sentences I feel like I learn so much. Fantastic quality.

mattcoakes

Quality course. I have been learning about deep learning from various courses and books. This course helps piece things together... it's amazing.

rumplerak

I'm an engineering student and was really searching for some good material on deep learning for my final-year project... it feels awesome to have the latest lectures from such a prestigious institution. Thanks to you, Sir.

maryumjaved

30:12 The definition of the cross-entropy loss function should have a negation around the whole right-hand side, i.e., J(W) = -(1/n Σ ...).
That way the resulting value will be positive, just as most Shannon-entropy-related quantities are defined.

ancbi
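
For reference, a minimal NumPy sketch of the binary cross-entropy with the leading negation the comment above describes, assuming 0/1 labels y and predicted probabilities y_hat (the function name and example values are illustrative, not from the lecture):

import numpy as np

def binary_cross_entropy(y, y_hat, eps=1e-12):
    # Clip to avoid log(0); the leading minus sign makes the loss positive.
    y_hat = np.clip(y_hat, eps, 1.0 - eps)
    return -np.mean(y * np.log(y_hat) + (1.0 - y) * np.log(1.0 - y_hat))

print(binary_cross_entropy(np.array([1.0, 0.0, 1.0]),
                           np.array([0.9, 0.2, 0.7])))  # ~0.228, positive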

Gold, diamonds, all the best things are converted into knowledge. It's really my privilege to be able to watch a recent MIT course here in 2020. I'm following the ML field, and you just make me feel so happy and motivated to move forward. All this knowledge is truly golden. Thank you so much.

nguyenbaodung

I would like to say thanks to Mr. Alexander Amini and his team for sharing comprehensive knowledge about Deep Learning.

thaison

MIT's commitment to making knowledge truly "open" is commendable ... I have been seeing this since the OCW days.

gurdeeepsinghs

One of the most amazing things I have found on YouTube. It is free, and we can never thank you enough for this. I am pleased to see that everything I wished for is here, and I wish we could have this kind of education in our country, but this is great. Prof. Alexander Amini, you are great. Thank you so much.

TheJetcross