Loss Functions - EXPLAINED!

Many animations used in this video came from Jonathan Barron [1, 2]. Give this researcher a like for his hard work!

SUBSCRIBE FOR MORE CONTENT!

RESOURCES
Comments

'check it out when you want to lower your self-esteem' damn, that sentence caught me off guard

sephchan

Nice voice, tones, speed, and graphics! Perfect for studying anytime. Thanks!

BB-rhbp

You nailed it, thanks a lot for this video, great informative lecture. Keep making this content and keep inspiring us

geeky_explorer

Wow, impressive, more value than dozens of study hours!!! Love it!

oscarsal

This was an incredibly accessible video! You explained complex subjects simply enough to make me curious about going deeper into loss functions. Very awesome, thank you

MarcelloNesca

Finally found a video that explains loss functions clearly. Thanks for your effort!!

kayingtang

Thanks for this video, nicely explained.

vivek

Nice video! How about doing a mathematical video related to eigenvalues/vectors in ML applications?

arkasaha

Great video. Is the code for the graphics available somewhere? I would like to use and adapt it for my own purposes

david-vrty

I love your animations! How did you make them?

gregorygreif

One question: how do we apply/implement the adaptive loss? Do we have to create a list of random values of alpha, like we do in RandomizedSearchCV or grid search, or is there another way? Btw, your explanation was amazing.

hardikvegad
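A rough sketch of the grid-search idea from the comment above: Barron's general robust loss takes a shape parameter alpha, and one simple option is to evaluate a small grid of alpha values, much as RandomizedSearchCV would; Barron's paper also shows how to make alpha learnable by optimizing a negative-log-likelihood version of the loss instead. The residuals and the grid below are made up for illustration.

```python
import numpy as np

def barron_loss(x, alpha, c=1.0):
    """Barron's general robust loss for alpha not in {0, 2}
    (those two values are limits, handled as special cases in the paper)."""
    b = abs(alpha - 2.0)
    return (b / alpha) * (((x / c) ** 2 / b + 1.0) ** (alpha / 2.0) - 1.0)

# hypothetical residuals with one outlier
r = np.array([0.1, -0.2, 0.05, 3.0])

# a small grid of shape values to try (avoiding the singular points 0 and 2)
for a in (-2.0, -1.0, 1.0, 1.5):
    print(a, barron_loss(r, a).mean())

# more negative alpha -> large residuals are penalized less (more robust)
print(barron_loss(np.array([3.0]), -2.0)[0] < barron_loss(np.array([3.0]), 1.0)[0])
```

One caveat: simply picking the alpha with the smallest training loss is degenerate, since lowering alpha always shrinks the loss, so each candidate alpha would be scored on a held-out metric, or alpha fit jointly via the NLL form.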

Welsch loss is what I needed for my problem, thanks

bhargav

Hey, firstly, great explanation! Thanks! Secondly, I have a minor clarification: can we then, in conclusion, say that finding the best loss function for a particular task also depends on the second derivative of the loss function, since the final loss depends on its convexity?

KaranKinariwala
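On the convexity point above, a quick numerical check makes it concrete: squared loss has a constant positive second derivative (convex everywhere), while a redescending loss like Welsch has curvature that turns negative for large residuals (nonconvex). This is a sketch using finite differences, not anything from the video's code.

```python
import numpy as np

def second_derivative(f, x, h=1e-4):
    # central finite-difference estimate of f''(x)
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / h ** 2

squared = lambda r: r ** 2
welsch = lambda r, c=1.0: (c ** 2 / 2.0) * (1.0 - np.exp(-((r / c) ** 2)))

print(second_derivative(squared, 3.0))  # ~2 everywhere: convex
print(second_derivative(welsch, 0.1))   # positive near zero
print(second_derivative(welsch, 2.0))   # negative in the tails: nonconvex
```

The negative curvature in the tails is exactly what makes robust losses insensitive to outliers, and also what can make their optimization landscape harder.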

Great video! But I really need help understanding something. I thought the purpose of calculating MSE, SSE, etc. was to get a sense of how accurate your regression model is. At 0:55 I noticed that when you introduced the outliers you said the model tries to change to also incorporate the outliers. So does that also change the linear regression model too? I'm also confused because I thought linear regression (OLS) already computes the minimum of the sum of squared errors.

ethiopiansickness
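A sketch that may help with the question above: OLS does compute the exact minimizer of the sum of squared errors, but that minimum is a property of the data, so adding an outlier changes which line achieves it. The data here are invented for illustration.

```python
import numpy as np

x = np.arange(10, dtype=float)
y = 2.0 * x + 1.0  # clean, exactly linear data (hypothetical)

# np.polyfit solves the least-squares problem in closed form,
# i.e. it returns the line minimizing the sum of squared errors
slope, intercept = np.polyfit(x, y, 1)

# add a single large outlier and refit: the SSE-minimizing line
# is now a different line, pulled toward the outlier
x2 = np.append(x, 5.0)
y2 = np.append(y, 100.0)
slope2, intercept2 = np.polyfit(x2, y2, 1)

print(slope, intercept)    # recovers 2.0 and 1.0 (up to floating point)
print(slope2, intercept2)  # the intercept shifts up noticeably
```

So both statements are true at once: OLS always finds the SSE minimum, and outliers move that minimum, which is the behavior shown in the video.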

This video was freaking awesome!

templetonpeck

Great as always man!
Make videos about reward modelling and neural ODEs

hariharans.j

What tool did you use for the visualization graphs?

codingwithnipu

Hello, not sure if you will reply on this older video, but at 7:01 you chose the function with the greatest amount of loss and said that it fits the data the best. Was this an error? If not, can you explain why?

emmanuelgoldstein

Absolute loss does not 'ignore outliers altogether'. Outliers still have more effect on the absolute loss function than non-outliers do; it's just not as extreme an effect as with MSE. The right-hand graphic at about 2:35 is wrong: the fitted line would be shifted much more upwards for low values of x.

hillosand
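A toy 1-D sketch of the correction above: for a constant model, the mean minimizes squared loss and the median minimizes absolute loss, so an outlier drags the L2 estimate hard while the L1 estimate barely moves, yet the outlier still contributes to the absolute loss, just linearly rather than quadratically. The data are invented.

```python
import numpy as np

data = np.array([1.0, 1.1, 0.9, 1.05, 50.0])  # one large outlier (made up)

mean_est = data.mean()        # minimizes squared loss: dragged to ~10.8
median_est = np.median(data)  # minimizes absolute loss: stays near ~1.05

# the outlier is not "ignored" by absolute loss: it still adds
# |50 - median| to the objective, it just grows linearly, not quadratically
print(mean_est, median_est, abs(50.0 - median_est))
```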

Hey Ajay, this is Gaurav. I've always loved watching your in-depth videos. Please make in-depth videos on the various optimizers like Adam, Adagrad, FTRL, and all the others

GauravSharma-uiyd