Machine Learning Basics: Confusion Matrix & Precision/Recall Simplified | By Dr. Ry @Stemplicity
This tutorial covers the basics of the confusion matrix, which is used to describe the performance of classification models.
The tutorial also covers the difference between True Positives, True Negatives, False Positives, and False Negatives, which can be described as follows (a worked sketch follows the list):
• True positives (TP): cases where the classifier predicted TRUE and the correct class was TRUE (the patient has the disease).
• True negatives (TN): cases where the model predicted FALSE and the correct class was FALSE (the patient does not have the disease).
• False positives (FP) (Type I error): the classifier predicted TRUE, but the correct class was FALSE (the patient does not have the disease).
• False negatives (FN) (Type II error): the classifier predicted FALSE, but the patient actually does have the disease.
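To make these four outcomes concrete, here is a minimal Python sketch (my own illustration, not code from the video; the toy disease-screening labels are made up) that counts TP, TN, FP, and FN from a list of actual labels and predictions:

```python
# Toy disease-screening data: 1 = has disease (TRUE), 0 = no disease (FALSE).
# These labels are hypothetical, chosen only to illustrate the four outcomes.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]  # actual patient status
y_pred = [1, 0, 0, 1, 1, 0, 1, 0]  # classifier predictions

TP = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)  # predicted TRUE, was TRUE
TN = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)  # predicted FALSE, was FALSE
FP = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # Type I error
FN = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # Type II error

print(f"TP={TP}, TN={TN}, FP={FP}, FN={FN}")  # TP=3, TN=3, FP=1, FN=1
```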
The tutorial also covers the difference between classification accuracy, error rate, precision, and recall. These metrics can be summarized as shown below (with a quick numerical check after the list):
• Classification Accuracy = (TP + TN) / (TP + TN + FP + FN)
• Misclassification Rate (Error Rate) = (FP + FN) / (TP + TN + FP + FN)
• Precision = TP / Total TRUE Predictions = TP / (TP + FP) (when the model predicted TRUE, how often was it right?)
• Recall = TP / Actual TRUE = TP / (TP + FN) (when the class was actually TRUE, how often did the classifier get it right?)
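As a quick check on these formulas, here is a short sketch (again my own illustration, assuming the TP=3, TN=3, FP=1, FN=1 counts from the snippet above) that computes all four metrics:

```python
TP, TN, FP, FN = 3, 3, 1, 1  # counts from the sketch above
total = TP + TN + FP + FN

accuracy   = (TP + TN) / total  # 6/8 = 0.75
error_rate = (FP + FN) / total  # 2/8 = 0.25, i.e. 1 - accuracy
precision  = TP / (TP + FP)     # 3/4 = 0.75: of all TRUE predictions, how many were right?
recall     = TP / (TP + FN)     # 3/4 = 0.75: of all actual TRUE cases, how many were caught?

print(f"accuracy={accuracy}, error_rate={error_rate}, "
      f"precision={precision}, recall={recall}")
```

In practice you would typically use library helpers such as scikit-learn's sklearn.metrics.confusion_matrix, precision_score, and recall_score rather than hand-rolled counts, but the arithmetic is exactly the formulas above.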
If you want to learn more, here’s a link to my new machine learning classification course on Udemy:
Here’s a link to my new machine learning regression course on Udemy:
Subscribe to my channel to get the latest updates; we will be releasing new videos on a weekly basis: