How to evaluate ML models | Evaluation metrics for machine learning

There are many evaluation metrics to choose from when training a machine learning model. Choosing the correct metric for your problem type and what you’re trying to optimize is critical to the success of the model.

In this video, we will learn about the most commonly used evaluation metrics for classification and regression tasks.
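The classification and regression metrics covered in the video can be sketched with scikit-learn; the labels and predictions below are toy values invented purely for illustration:

```python
# Common classification and regression metrics with scikit-learn.
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, mean_absolute_error, mean_squared_error)

# Binary classification: these toy labels give TP=3, TN=3, FP=1, FN=1.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]

acc = accuracy_score(y_true, y_pred)    # (TP + TN) / total
prec = precision_score(y_true, y_pred)  # TP / (TP + FP)
rec = recall_score(y_true, y_pred)      # TP / (TP + FN)
f1 = f1_score(y_true, y_pred)           # harmonic mean of precision and recall

# Regression: errors are averaged directly (MAE) or squared first (MSE).
y_true_r = [3.0, 5.0, 2.0]
y_pred_r = [2.5, 5.0, 3.5]
mae = mean_absolute_error(y_true_r, y_pred_r)
mse = mean_squared_error(y_true_r, y_pred_r)
```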

Get your Free API token for AssemblyAI here 👇
Comments

super clear explanation! I seldom leave comments, but this video totally amazed me!

louisawang

Wow, thank you so much for these videos. I am a software engineer by trade, but increasingly big tech companies have ML system design as one of their interview rounds. Your content was amazingly helpful in preparing for those interviews!

DiwasTimilsina

This video helped me pass the Azure Data Scientist Associate exam.
Thanks for the video.

forprogramming

Interesting how most people jump to the RECALL section. Why? Is it a harder topic?

EasyAIForAll

*INSANELY* helpful. Thank you *so* much!

xxelurraxx

Thank you so much, this video helped me understand the metrics in the clearest way possible.

penelopeharo

Thanks a lot, I needed this clarification for my presentation.

ahmadebrahem

How do you evaluate clustering algorithms like K-Means and Fuzzy C-Means?

Mejhool-gy
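One common answer to the clustering question above is an internal metric such as the silhouette score, which needs no ground-truth labels; a minimal sketch with scikit-learn's KMeans on synthetic data (the blob parameters are assumptions for illustration):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
# Two well-separated synthetic blobs in 2D.
X = np.vstack([rng.normal(0, 0.5, (50, 2)), rng.normal(5, 0.5, (50, 2))])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
score = silhouette_score(X, labels)  # close to 1 for tight, well-separated clusters
```

The same score works for any hard cluster assignment, so a fuzzy method like Fuzzy C-Means can be evaluated after taking each point's highest-membership cluster.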

How about the differences here for a non-linear regression? :)

RollingcoleW

Please, ma'am, can you share the code you used to plot the true positive rate vs. false positive rate graph and the PR curve? It looks so beautiful and I can't reproduce it exactly. Please help. Thanks in advance.

jamesadeke
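The curves asked about above are typically computed with scikit-learn's `roc_curve` and `precision_recall_curve` and then drawn with matplotlib; a minimal sketch with toy scores (this is not necessarily the code used in the video):

```python
from sklearn.metrics import roc_curve, precision_recall_curve, auc

y_true = [0, 0, 1, 1]            # toy ground-truth labels
y_score = [0.1, 0.4, 0.35, 0.8]  # toy predicted probabilities

fpr, tpr, _ = roc_curve(y_true, y_score)          # points of the ROC curve
roc_auc = auc(fpr, tpr)                           # area under the ROC curve
prec, rec, _ = precision_recall_curve(y_true, y_score)  # points of the PR curve
# Plotting (requires matplotlib):
#   plt.plot(fpr, tpr)   draws the ROC curve
#   plt.plot(rec, prec)  draws the PR curve
```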

Thank you for this video, thank you so much. I have questions based on this model evaluation. Is there a way to use the confusion matrix to identify the exact data points in our dataset that the model got wrong? Also, when we deploy the model to a web app using Streamlit, can we apply a confusion matrix to the final predictions in the web app to figure out which exact data points the model predicted wrongly?

sholay
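On the confusion-matrix question above: the matrix itself only counts errors per class, but a boolean mask over the same labels recovers which rows were misclassified; a minimal sketch with made-up labels:

```python
import numpy as np
from sklearn.metrics import confusion_matrix

y_true = np.array([1, 0, 1, 1, 0])  # toy ground truth
y_pred = np.array([1, 1, 1, 0, 0])  # toy predictions

cm = confusion_matrix(y_true, y_pred)      # counts only; it does not name rows
wrong_idx = np.where(y_true != y_pred)[0]  # indices of the misclassified points
# wrong_idx can index back into the original array/DataFrame, and the same
# comparison works on predictions collected from a deployed app.
```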

Awesome content, but an unrelated question: what are your camera settings? I especially like your camera setup. Could you give info on that? What lens, what aperture, and anything else needed to replicate the same light/room setup? Thanks 🙂

dr.dwight

How do you calculate the coefficient of determination (R²)?

aymanekanaoui
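A minimal sketch of the R² question above: compute the coefficient of determination by hand from its definition and check it against scikit-learn's `r2_score` (the numbers are arbitrary):

```python
import numpy as np
from sklearn.metrics import r2_score

y_true = np.array([3.0, -0.5, 2.0, 7.0])  # toy targets
y_pred = np.array([2.5, 0.0, 2.0, 8.0])   # toy predictions

ss_res = np.sum((y_true - y_pred) ** 2)           # residual sum of squares
ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)  # total sum of squares
r2 = 1 - ss_res / ss_tot                          # coefficient of determination
```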

What do you think about repeated random data splitting, e.g. splitting the data 80 percent for training and 20 percent for testing on a random basis that preserves the class structure, vs. k-fold cross-validation? Edit: yep, I now know this is worse.

kristianfella-glanville
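On the splitting question above: a single stratified hold-out split tests each point at most once, while stratified k-fold cross-validation tests every point exactly once; a sketch with scikit-learn (the toy data is an assumption):

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold, train_test_split

X = np.arange(20).reshape(-1, 1)   # 20 toy samples
y = np.array([0] * 10 + [1] * 10)  # balanced binary labels

# One stratified 80/20 split: class proportions preserved, each point tested at most once.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                          stratify=y, random_state=0)

# 5-fold stratified CV: every point lands in a test fold exactly once.
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
test_counts = np.zeros(len(y), dtype=int)
for _, test_idx in skf.split(X, y):
    test_counts[test_idx] += 1
```

This is one way to see why k-fold gives a lower-variance estimate than repeated random splits: every sample contributes to the test score exactly once.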

Can you please upload code snippets for these metrics?
Thank you in advance.

DaniyarRunning

"شكرا جزيلا لك"
This means "thank you so much" in Arabic 🕴

sultanaaa

Idk, I somehow can't seem to follow this pace.

michaltrodler

How do you pronounce "accuracy"? It's so triggering lmao.
That's not American or British English, right?

andresolbach

Well explained, but it might be good if you added something on threshold setting for binary classification.

greenbrothersuk
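On the threshold suggestion above: for binary classifiers that output probabilities, moving the decision cutoff away from the default 0.5 trades precision against recall; a minimal sketch with made-up probabilities:

```python
import numpy as np

# Hypothetical predicted probabilities for the positive class.
proba = np.array([0.2, 0.45, 0.55, 0.9])

preds_default = (proba >= 0.5).astype(int)  # standard 0.5 cutoff
preds_low = (proba >= 0.3).astype(int)      # lower cutoff: more positives,
                                            # higher recall, lower precision
```

In practice the threshold is often chosen by scanning the precision-recall or ROC curve for the operating point that best fits the application's costs.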