MIT 6.S191 (2021): Convolutional Neural Networks

MIT Introduction to Deep Learning 6.S191: Lecture 3
Convolutional Neural Networks for Computer Vision
Lecturer: Alexander Amini
January 2021

Lecture Outline
0:00 - Introduction
2:47 - Amazing applications of vision
7:56 - What computers "see"
14:02 - Learning visual features
18:50 - Feature extraction and convolution
22:20 - The convolution operation
27:27 - Convolutional neural networks
34:05 - Non-linearity and pooling
38:59 - End-to-end code example
40:25 - Applications
42:02 - Object detection
50:52 - End-to-end self-driving cars
54:00 - Summary

Subscribe to stay up to date with new deep learning lectures at MIT, or follow us @MITDeepLearning on Twitter and Instagram to stay fully-connected!!
Comments

So happy that this kind of quality knowledge is available free of charge to everyone. ❤️❤️

artificially.conscious

I haven't thought of ReLU as a thresholding operation before, but that's very true. I also liked when Alex said traditional computer vision filters, e.g. edge detection with Sobel, also do feature extraction.

piotrjwolanin
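The observation above, that ReLU is a thresholding operation and that classic filters like Sobel also extract features, can be sketched in a few lines of NumPy. This is an illustrative toy, not code from the lecture: the 5x5 test image is made up, and `conv2d` is a plain valid cross-correlation.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D cross-correlation: slide the kernel over the image,
    taking an elementwise product and sum at each position."""
    h, w = kernel.shape
    out = np.zeros((image.shape[0] - h + 1, image.shape[1] - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + h, j:j + w] * kernel)
    return out

def relu(x):
    """ReLU as a threshold: negative responses are clipped to zero."""
    return np.maximum(x, 0.0)

# Sobel kernel for vertical edges: a hand-designed feature extractor
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

# Toy image: dark left half, bright right half -> one vertical edge
image = np.zeros((5, 5))
image[:, 3:] = 1.0

edges = relu(conv2d(image, sobel_x))
print(edges)  # positive responses only where the dark-to-bright edge sits
```

A learned convolutional layer does exactly this, except the kernel entries are weights found by gradient descent instead of being fixed by hand.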

Hey Alexander Amini, thank you for this class! CNNs are my favorite neural networks from deep learning! I think vision is, in a way, a mystery and carries immense importance for humans, so it's important for AI to be able to see, in the same way that it's important to hear. It's just amazing to learn how to make computers "see"; it shows how far AI has come, and how much closer it is to being "aware" through every little building block we manage to create!

reandov

To all my backprop boys: Give forward pass a try. Feels good.

matthewchunk

You really explain the concepts so well!!!

ommule

Wonderful explanation about the process of convolution, the steps of a CNN, and the most common applications! Thank you very much!

fabiosouza

How is this not going viral? How are there just 67 comments? Isn't data science one of the most important subjects everywhere? How isn't this trending?

Naru

I enjoyed the presentation greatly. There are two comments I would like to make about the next iteration. First, CNNs themselves were not introduced until halfway through the video, so a lot seemed crammed into the second half; perhaps less intuition and more description. Second, and related to the first point: although there was a fair explanation of a single convolutional layer, how to put layers together was not covered (what would the second and later layers look like, how do they relate to the first layer, and why would one choose 1, 2, 3, or more of them?), and in the presentation of the final architecture, the flattening and the dense classification layers were described in an after-the-fact manner without explaining why.

gregorywerner

SGD plays a great role in training the model, much like feedback systems do in control systems.

shubhamsingh-lfzy

Great gesture in releasing these lectures... it would be even better if the laboratory exercises were also made public for aspiring learners around the world...

ckraju

I feel like this very video is the purpose of the whole internet :) Thank you for sharing such a great piece of knowledge

Yodaful

Downsampling the spatial dimensions and then upsampling them. It's cool!

macknightxu
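The downsampling/upsampling the comment above refers to can be sketched with plain NumPy: max pooling halves each spatial dimension, and nearest-neighbor repetition doubles it back. The 4x4 input values are made up for illustration.

```python
import numpy as np

def max_pool2x2(x):
    """Downsample: take the max of each non-overlapping 2x2 window."""
    h, w = x.shape
    return x[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

def upsample2x2(x):
    """Upsample: nearest-neighbor, repeating each value into a 2x2 block."""
    return np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)

x = np.arange(16, dtype=float).reshape(4, 4)
down = max_pool2x2(x)   # 2x2 feature map, spatial size halved
up = upsample2x2(down)  # back to 4x4, but detail below the pool size is lost
print(down)
```

Note that the round trip is lossy: pooling keeps only the strongest response in each window, which is exactly why it buys spatial invariance.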

This is a golden present on YouTube, but people want other things.

pradumnchavan

At 40:15, what is the purpose of the dense layer with the ReLU activation function?
Thank you Alexander Amini!

adrianamejiaalegria
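On the question above about the dense layer with ReLU: after flattening, a dense layer with ReLU lets the classifier learn non-linear combinations of the extracted features; without the ReLU, two stacked dense layers would collapse into a single linear map. A minimal NumPy sketch, with hypothetical layer sizes (128 flattened features, 32 hidden units, 10 classes) and random untrained weights:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def softmax(x):
    e = np.exp(x - x.max())  # subtract max for numerical stability
    return e / e.sum()

# Hypothetical sizes: 128 flattened conv features -> 32 hidden units -> 10 classes
features = rng.normal(size=128)
W1, b1 = rng.normal(size=(32, 128)), np.zeros(32)
W2, b2 = rng.normal(size=(10, 32)), np.zeros(10)

# The ReLU makes the hidden layer a *non-linear* combination of features;
# Dense -> Dense with no activation would reduce to one linear transform.
hidden = relu(W1 @ features + b1)
probs = softmax(W2 @ hidden + b2)
print(probs.sum())  # ~1.0: a probability distribution over the classes
```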

Thank you very much for giving such high-quality and valuable knowledge away freely 👌🤗

sathiraful

This is a very high-level lecture. Thank you for sharing.

Diego

At 22:23, the slide shows a 3x3 filter with all entries equal to 1, so the output of the convolution operation should be -3, not 9. Alexander seems to justify the output by assuming a different filter whose entries match the image patch, so that the elementwise products all come out to 1. The explanation of the convolution operation becomes very unclear this way.

carletonai
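The discrepancy described in the comment above can be checked directly: one convolution step is just elementwise multiplication of the patch and filter, followed by a sum. The patch values below are illustrative (a patch with three 1s and six -1s reproduces the -3 and 9 in question), not copied from the slide.

```python
import numpy as np

# One convolution step: elementwise multiply patch * filter, then sum.
patch = np.array([[ 1, -1, -1],
                  [-1,  1, -1],
                  [-1, -1,  1]])

ones_filter = np.ones((3, 3))
print(np.sum(patch * ones_filter))     # -3: just the sum of the patch entries

matched_filter = patch.copy()          # a filter identical to the patch
print(np.sum(patch * matched_filter))  # 9: every elementwise product is +1
```

So an all-ones filter yields -3 on this patch, while the maximum response of 9 requires a filter that matches the patch exactly, which is presumably what the slide intended to show.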

There's a real gap in entry-level DL explanations that give strong intuition and just a tiny bit of formalism. I tried reading Bengio's Neural Networks but it was too complex for me. This is a great introduction for building the strong intuition you can use as a base for further knowledge.

ryan_chew

There is only one lecture on each topic. Where are the other lectures? This is kind of an overview.

pra

I wonder where those 7-odd dislikes came from?! I mean, this is top-quality content! Anyway, thank you so much, Alexander! 😇

aaryannakhat