Overfitting, Cross Validation, Regularization, and L1 and L2 Norm Regularization in Machine Learning

This is a lecture on overfitting, underfitting, generalization, cross validation, and regularization in machine learning, with a focus on the L1 and L2 norm penalties. A short cross-validation code sketch follows the chapter list below.
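For quick reference, two standard formulas behind the chapters below, written in common notation (the notation is my assumption, following Hastie et al. in the reference list, not necessarily the lecture's own symbols). With observations y = f(x) + ε, Var(ε) = σ², and a learned model \hat{f}, the expected squared error decomposes as

\mathbb{E}\big[(y - \hat{f}(x))^2\big] = \underbrace{\big(f(x) - \mathbb{E}[\hat{f}(x)]\big)^2}_{\text{bias}^2} + \underbrace{\mathbb{E}\big[\big(\hat{f}(x) - \mathbb{E}[\hat{f}(x)]\big)^2\big]}_{\text{variance}} + \underbrace{\sigma^2}_{\text{noise}},

and the L2 (ridge) and L1 (lasso) regularized least-squares objectives are

\hat{\beta}_{\text{ridge}} = \arg\min_{\beta} \|y - X\beta\|_2^2 + \lambda\|\beta\|_2^2, \qquad \hat{\beta}_{\text{lasso}} = \arg\min_{\beta} \|y - X\beta\|_2^2 + \lambda\|\beta\|_1.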

The related papers and books:
1- Trevor Hastie, Robert Tibshirani, Jerome Friedman, "The Elements of Statistical Learning: Data Mining, Inference, and Prediction", Springer, 2009.
2- Trevor Hastie, Robert Tibshirani, Martin Wainwright, "Statistical Learning with Sparsity: The Lasso and Generalizations", Chapman and Hall/CRC, 2015.
3- Ian Goodfellow, Yoshua Bengio, Aaron Courville, "Deep Learning", MIT Press, 2016.
4- Robert Tibshirani, "Regression Shrinkage and Selection via the Lasso", Journal of the Royal Statistical Society: Series B (Methodological) 58, no. 1 (1996): 267-288.
5- Lectures of Prof. Ali Ghodsi at the University of Waterloo, Department of Statistics and Actuarial Science; see his YouTube channel: @DataScienceCoursesUW
6- Lectures of Prof. Hoda Mohammadzade at Sharif University of Technology, Department of Electrical Engineering.

Chapters:
0:00 - Learning model
3:27 - Analogy to Plato's Theory of Ideas
4:36 - True, available, and predicted observations
9:40 - MSE of the learning model
18:48 - Case 1: instance not in the training set
31:44 - Case 2: instance in the training set
45:01 - Complexity of the model
55:08 - Overfitting, underfitting, and generalization
1:04:35 - Cross validation
1:07:05 - K-fold cross validation
1:15:50 - Leave-one-out cross validation
1:18:11 - Cheating 1 in machine learning
1:22:52 - Validation set
1:29:45 - Cheating 2 in machine learning
1:40:51 - Justification of overfitting
1:51:37 - Discussion of cheating in a nutshell
1:53:26 - Regularization
2:06:02 - L2 norm regularization
2:30:51 - L1 norm regularization
2:48:26 - Comparison of L2 and L1 regularization
2:52:25 - Acknowledgment and references
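
Below is a minimal sketch of k-fold cross validation used to choose the regularization strength for ridge (L2) and lasso (L1) regression, as covered in the chapters above. It uses scikit-learn; the dataset, alpha grid, and fold count are illustrative assumptions, not taken from the lecture.

# Minimal sketch: choose the regularization strength by k-fold CV.
# The dataset, alpha grid, and fold count are illustrative assumptions.
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge, Lasso
from sklearn.model_selection import KFold, cross_val_score

X, y = make_regression(n_samples=200, n_features=20, noise=10.0, random_state=0)
kf = KFold(n_splits=5, shuffle=True, random_state=0)  # 5-fold split

for name, Model in [("ridge (L2)", Ridge), ("lasso (L1)", Lasso)]:
    for alpha in [0.01, 0.1, 1.0, 10.0]:  # regularization strength (lambda)
        # scoring returns negated MSE (higher is better), so negate it back
        mse = -cross_val_score(Model(alpha=alpha), X, y, cv=kf,
                               scoring="neg_mean_squared_error").mean()
        print(f"{name}, alpha={alpha}: mean CV MSE = {mse:.2f}")

Setting the number of splits equal to the number of samples turns this into leave-one-out cross validation, also discussed in the lecture.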