Lecture 2: Image Classification

Lecture 2 introduces image classification as a core computer vision problem. We see that the image classification task is made challenging by the semantic gap, but that solutions to this task can be used as a building block in other, more complicated computer vision systems. We introduce machine learning as a data-driven approach to solving hard problems like image classification. We discuss several common classification datasets in computer vision. Finally, we introduce K-Nearest Neighbors (KNN) as our first machine learning algorithm. This leads to a discussion of hyperparameters and cross-validation strategies that will be crucial for all the machine learning algorithms we will later use.
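
To make the KNN and cross-validation ideas concrete, here is a minimal NumPy sketch of a K-Nearest Neighbors classifier with a small 5-fold search over k; the class name, variable names, and toy data are illustrative, not the course's assignment code.

```python
import numpy as np

class KNearestNeighbor:
    """Minimal K-Nearest Neighbors classifier with L2 distance."""

    def train(self, X, y):
        # "Training" just memorizes the data.
        self.X_train = X
        self.y_train = y

    def predict(self, X, k=1):
        # Pairwise squared L2 distances between test and training points.
        dists = (np.sum(X ** 2, axis=1, keepdims=True)
                 - 2.0 * X @ self.X_train.T
                 + np.sum(self.X_train ** 2, axis=1))
        # Majority vote among the k closest training labels for each test point.
        nearest = np.argsort(dists, axis=1)[:, :k]
        return np.array([np.bincount(row).argmax() for row in self.y_train[nearest]])

# Toy 5-fold cross-validation over the hyperparameter k.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 32))      # stand-in for flattened images
y = rng.integers(0, 3, size=100)    # stand-in for class labels
folds_X, folds_y = np.array_split(X, 5), np.array_split(y, 5)

for k in (1, 3, 5):
    accs = []
    for i in range(5):
        X_tr = np.concatenate(folds_X[:i] + folds_X[i + 1:])
        y_tr = np.concatenate(folds_y[:i] + folds_y[i + 1:])
        knn = KNearestNeighbor()
        knn.train(X_tr, y_tr)
        accs.append(np.mean(knn.predict(folds_X[i], k=k) == folds_y[i]))
    print(f"k={k}: mean cross-validation accuracy {np.mean(accs):.3f}")
```

With real image data you would flatten each image into a vector first; the main point is that k is chosen on held-out folds, never on the test set.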

_________________________________________________________________________________________________

Computer Vision has become ubiquitous in our society, with applications in search, image understanding, apps, mapping, medicine, drones, and self-driving cars. Core to many of these applications are visual recognition tasks such as image classification and object detection. Recent developments in neural network approaches have greatly advanced the performance of state-of-the-art visual recognition systems. This course is a deep dive into the details of neural-network-based deep learning methods for computer vision. During this course, students will learn to implement, train, and debug their own neural networks and gain a detailed understanding of cutting-edge research in computer vision. We will cover learning algorithms, neural network architectures, and practical engineering tricks for training and fine-tuning networks for visual recognition tasks.

Comments

Author

If you are reading this, you are among the ten percent (as of the time of writing) who didn't up and leave after the intro. I hope to see you all at lecture 22.

conradwiebe

Great lectures!! Please keep posting the latest series! Thank you!!

terryliu

Very good teaching of computer vision! Thanks Justin Johnson for these very nice lectures.

raphaelmourad

I like how he says, 'This is WRONG... so bad... you should not do this!' It cracks me up for some reason.

huesOfEverything

25:22 He just described a well-known exam technique beloved of students everywhere!

xanderlewis

He taught the essentials in a great way.

zhaobryan

Thank you for the lecture! Greetings from Ukraine)

DariaShcherbak

Hi,
I thought the MNIST dataset had 60k training images. Or am I mistaken?

andrewstang

That 'hot dog and not hot dog' bit was from Silicon Valley. The professor watches the show :)

randomsht-cywe

For the nearest neighbor classifier, isn't training time going to be O(n)? If we are going to store pointers for each training example, we still have to iterate over the training examples, of which there are n.
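
One way to see why the lecture counts training as O(1): if the dataset is already loaded as arrays, the train step can just keep references to those arrays as a whole, with no per-example loop or copy; only an explicit copy (or loading each example individually) would cost O(n). A minimal sketch of that idea, with illustrative names:

```python
import numpy as np

class NearestNeighbor:
    def train(self, X, y):
        # Keep references to the whole arrays: constant time,
        # independent of the number of training examples.
        self.X_train = X
        self.y_train = y

    def predict(self, X):
        # All the real work happens at test time:
        # O(num_test * num_train) distance computations.
        dists = ((X[:, None, :] - self.X_train[None, :, :]) ** 2).sum(axis=2)
        return self.y_train[np.argmin(dists, axis=1)]
```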

veggeata

How can I get the homework? Does anyone know?

mahmoudatiaead

Well, maybe I'm missing something, but I totally disagree with the train/valid/test idea as Justin described it. We train a model on the training data and evaluate it on the validation set to change the model's behavior. That's correct; however, it does not mean we should look at the test set only once at the very end of our research. We should evaluate our model on the test set at least several times, and if the model's performance on the test set is very different from its performance on the validation set, it means something was done very wrong, e.g. the splitting strategy. Of course, using the test set influences our decisions, but by how much? Can you really say that evaluating the finished model on the test set spoils everything? I doubt that.
