Stanford CS229: Machine Learning | Summer 2019 | Lecture 4 - Linear Regression

Anand Avati
Computer Science, PhD

To follow along with the course schedule and syllabus, visit:
Comments

Absolutely mind-blowing. This two-hour lecture comprehensively covers linear regression from multiple perspectives: the probabilistic approach, linear algebra through systems of linear equations, and gradient descent.

sumansan

Excellent explanation, Professor Avati. Your interpretations bring the worlds of linear algebra, probability, and machine learning together in a very meaningful way.

ANANDNITINKRISHANPALSHRESTA

Lecture 4 completed. Learned about the approach to a regression problem, the GD and SGD algorithms, and the probabilistic interpretation of linear regression. On to the next one now.
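
For reference, the two update rules mentioned above, written in the notation the course uses (hypothesis h_theta(x) = theta^T x and least-squares cost J(theta)):

% Batch gradient descent: every update uses all n training examples
\theta_j := \theta_j + \alpha \sum_{i=1}^{n} \left( y^{(i)} - h_\theta(x^{(i)}) \right) x_j^{(i)}

% Stochastic gradient descent: every update uses a single example i
\theta_j := \theta_j + \alpha \left( y^{(i)} - h_\theta(x^{(i)}) \right) x_j^{(i)}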

DevanshChaudhary-duuz

Professor Avati, thanks for your wonderful lesson. I'm a sophomore in China. And Stanford is the strongest university in the world!

JARVIS-CHEN

Thank you for another great lecture, Professor Avati. Very grateful to you and the folks at Stanford for this!

Tomharry

Excellent, Sir. Thank you for giving us the opportunity to access such high-quality lectures.

amitabhachakraborty

Very good foundational concepts explained for the remainder of the course.

tariqkhan

Hello Professor Avati,
Hope you are doing well! I have a small question.
When we set the gradient of J(theta) to zero, we assume that the value of theta at which it vanishes is a minimum. But we haven't established the shape of the function; that value could just as easily be a maximum if the function had an inverted-bowl shape.
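
The shape question has a clean answer for the least-squares cost: it is convex, so a stationary point must be a global minimum. A sketch of the argument, assuming J(theta) is the least-squares cost from the lecture with design matrix X and targets y:

J(\theta) = \tfrac{1}{2} (X\theta - y)^\top (X\theta - y), \qquad
\nabla_\theta J(\theta) = X^\top (X\theta - y), \qquad
\nabla_\theta^2 J(\theta) = X^\top X

% For any vector v, v^\top (X^\top X) v = \| X v \|^2 \ge 0, so the Hessian is
% positive semidefinite. Hence J is convex (a bowl, never an inverted bowl),
% and any theta with \nabla_\theta J(\theta) = 0 is a global minimum.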

anandvamsi

At the 1:41:47 timestamp:
X is an n x d matrix rather than d x n, because X contains n training examples and d features. Please correct me if I am wrong.
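
That matches the convention in the course notes, where the design matrix stacks the training examples as rows, so X is n x d (or n x (d+1) once the intercept column of ones is added). A minimal numpy sketch under that convention, with illustrative variable names:

import numpy as np

n, d = 50, 3                                   # n training examples, d features
rng = np.random.default_rng(0)
X = rng.normal(size=(n, d))                    # one example per row -> shape (n, d)
y = rng.normal(size=n)

X_aug = np.hstack([np.ones((n, 1)), X])        # add intercept column x_0 = 1 -> (n, d + 1)
theta = np.linalg.solve(X_aug.T @ X_aug, X_aug.T @ y)   # normal equations
print(theta.shape)                             # (d + 1,)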

srisaisubramanyamdavanam

theta was a (d+1)-dimensional vector, right?
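
If the convention of the notes is followed (d input features plus an intercept term x_0 = 1), then yes:

h_\theta(x) = \sum_{j=0}^{d} \theta_j x_j = \theta^\top x,
\qquad
\theta = (\theta_0, \theta_1, \dots, \theta_d)^\top \in \mathbb{R}^{d+1}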

Foryou-fymm

1:07:11 Wouldn't mini-batch gradient descent itself be prone to overfitting to that batch, since we actually reduce the training set?
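
One point worth noting here: mini-batch gradient descent does not train to convergence on a single fixed batch; each step samples a fresh batch, so over an epoch every training example still contributes. A minimal sketch for the least-squares case (function and parameter names are illustrative, assuming numpy):

import numpy as np

def minibatch_sgd(X, y, lr=0.01, batch_size=32, epochs=10, seed=0):
    """Mini-batch SGD for linear least squares, h_theta(x) = x @ theta."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    theta = np.zeros(d)
    for _ in range(epochs):
        perm = rng.permutation(n)                    # fresh shuffle every epoch
        for start in range(0, n, batch_size):
            idx = perm[start:start + batch_size]
            Xb, yb = X[idx], y[idx]
            # One gradient step on this batch only; the next step uses a
            # different batch, so no single batch is fitted to convergence.
            grad = Xb.T @ (Xb @ theta - yb) / len(idx)
            theta -= lr * grad
    return theta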

fabib.