Linear Regression in 12 minutes

Linear Regression and Ordinary Least Squares ('OLS') are ancient and yet still useful modeling principles. In this video, I introduce these ideas from the typical machine learning perspective - the loss surface. At the end, I explain how basis expansions push this idea into a flexible and diverse modeling world.
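
For readers who want to try this themselves, here is a minimal NumPy sketch of the two ideas from the video (the toy data and the cubic polynomial basis are illustrative assumptions of mine, not taken from the video): OLS picks the coefficients that minimize the squared-error loss, and a basis expansion keeps the model linear in those coefficients even though the fit is nonlinear in the input.

    import numpy as np

    # Toy data: a noisy nonlinear signal (illustrative, not from the video).
    rng = np.random.default_rng(0)
    x = np.linspace(0.0, 1.0, 50)
    y = np.sin(2.0 * np.pi * x) + 0.2 * rng.standard_normal(x.size)

    # Basis expansion: nonlinear features of x, but the model stays
    # linear in the coefficients beta.
    X = np.column_stack([np.ones_like(x), x, x**2, x**3])

    # OLS: beta minimizes the squared-error loss ||y - X beta||^2.
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)

    print("fitted coefficients:", beta)
    print("residual sum of squares:", float(np.sum((y - X @ beta) ** 2)))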

Sources and Learning More

Over the years, I've learned and re-learned these ideas from many sources, which means there weren't any primary sources I referenced when writing. Nonetheless, I confirmed my definitions against the Wikipedia articles [1][2], and chapter 5 of [3] offers an informative discussion of basis expansions.

[3] Hastie, T., Tibshirani, R., & Friedman, J. H. (2009). The Elements of Statistical Learning: Data Mining, Inference, and Prediction. 2nd ed. New York: Springer.
Comments

Just here to tell you that this video is going to EXPLODE! Maybe not immediately, but eventually almost surely :). Keep at it, awesome visuals. Small note: there is a bit of an echo, which can be fixed by putting some padding/blankets on the walls/ceiling.

mCoding

This video is so underrated. The way you explained it alongside the provided demonstration easily makes it a top 5 among all tutorials on linear regression.

stevenstonie

In case you're wondering if anyone laughed at the 0:19 joke, I definitely laughed. Dunno if it qualified as a rare honest use of the term "LOL", but it was a clearly audible guffaw, so, a CAG I guess.

WilliamDye-willdye

I was not sure how to comment... as I can hardly find the words to express how good this explanation is. Thanks a lot!

j.adrianriosa.

Perfect explanation and visualization! Thank you a lot for making this.

suleymanarifcakr

Bro, the visualizations in the last half of the video were fantastic. Amazing work man. Keep it up!

Capitalust

I hope your channel blows up. This is a clear, concise discussion that keeps things on point without pulling too many punches. Great presentation.

Can I request random effects next? ;-)

enknee

Going back to review your earlier stuff. It's good to see the quality was there from early on.

Murphyalex

I'm a student of economics, and your content will help me a lot. Thank you!

hugosetiawan

AFAIK the LSE was developed by Gauss to estimate the parameters of the orbits of comets. If the errors of observation have the normal distribution (which is sometimes named after Gauss for a good reason: IIRC, he researched the distribution to solve this very problem), the LSE is actually the maximum likelihood estimate as well, and that was the original reasoning behind this method, not computational feasibility per se. A damn good explanation tho, thank you!

daigakunobaku
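
The equivalence mentioned in this comment is quick to verify (a standard derivation, not from the video). Under the model $y_i = x_i^\top \beta + \varepsilon_i$ with $\varepsilon_i \sim \mathcal{N}(0, \sigma^2)$, the log-likelihood is

    \ell(\beta) = -\frac{n}{2}\log(2\pi\sigma^2) - \frac{1}{2\sigma^2}\sum_{i=1}^{n}\left(y_i - x_i^\top \beta\right)^2

and the first term does not involve $\beta$, so maximizing the likelihood over $\beta$ is exactly minimizing the sum of squared errors: the maximum likelihood estimate coincides with the OLS estimate.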

As some of you have already said, the visuals were really great this time! I've never seen least squared errors or basis functions visualized like that :-) Appreciate it a lot!
So even though linear regression is well known, it can still be fun to learn new things about it ;-)

antoinestevan

That was some high-quality explanation you managed to put in there! The math seemed a little fast, but hey, we can always rewind and rewatch. Cheers!

felinetech

Hi! Thank you a lot for your content! It’s a pleasure to watch even for a non-math guy. The real issue during model fitting is indeed understanding the data you want to model. It seems easy in 2D or 3D, but real data are a completely different story… I hope to improve my regression skills!

manueltiburtini

This was an awesome video man 😳👌 keep it up

xy

your vids are cool, thanks for the effort and I love watching these

davusieonus

Do you use the same package as 3B1B? This channel has that feel. Also, you should consider starting a Patreon. Your content is quite good, and I could see it garnering a considerable following. There is such a huge stats/ML community out there that is lacking a 3B1B-level content contribution... this is a huge opportunity. Thanks for publishing!

markmiller

Actually, it is not just for ease of solution that we minimize the squared error. It corresponds to the often reasonable assumption that the noise is Gaussian. And minimizing absolute differences corresponds to Laplace-distributed noise.

siquod
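
A small Python sketch of the correspondence this comment describes (the line data and the outlier are hypothetical, chosen to make the contrast visible): the squared-error fit is the Gaussian MLE and has a closed form, while the absolute-error fit is the Laplace MLE and is noticeably more robust to the outlier.

    import numpy as np
    from scipy.optimize import minimize

    # Hypothetical line data with one outlier to contrast the two losses.
    rng = np.random.default_rng(1)
    x = np.linspace(0.0, 1.0, 30)
    y = 2.0 + 3.0 * x + 0.1 * rng.standard_normal(x.size)
    y[-1] += 5.0  # a single gross outlier

    X = np.column_stack([np.ones_like(x), x])

    # Squared error <-> Gaussian noise: closed-form OLS solution.
    beta_l2, *_ = np.linalg.lstsq(X, y, rcond=None)

    # Absolute error <-> Laplace noise: no closed form, so minimize
    # numerically (Nelder-Mead handles the non-smooth objective).
    beta_l1 = minimize(lambda b: np.abs(y - X @ b).sum(),
                       x0=beta_l2, method="Nelder-Mead").x

    print("L2 fit (pulled toward the outlier):", beta_l2)
    print("L1 fit (more robust):", beta_l1)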

"All my videos are about math. Non of them are cool". The only wrong sentence in this whole video 😁

nikoskonstantinou

So many new insights here. This explanation connected so many dots.
Is this leading to Gaussian Processes?

orjihvy

Wow! I never realized linear regression is only about linearity in beta! Given your expertise in exponential families as well, I'd love to see you make a video about GLMs! I still don't understand why a GLM needs its errors to be distributed according to an exponential family - maybe you'll make it clear!

steffenmuhle