K Nearest Neighbor | KNN | sklearn KNeighborsClassifier

In this presentation, we show KNN classification on the Iris dataset to predict the target class for a new instance.

K Nearest Neighbor is a simple way to classify data. K defines the number of nearest neighbors (individual data points) to consult. To classify a new instance, we find its K nearest neighbors in the existing data set; the instance falls into whichever category the majority of those neighbors belong to.
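A minimal sketch of this majority-vote idea with scikit-learn's KNeighborsClassifier (not necessarily the exact code from the video; n_neighbors=5 and the train/test split settings are illustrative assumptions):

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Load the Iris dataset used in the video.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42)

# Each prediction is a majority vote among the 5 nearest training points.
knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X_train, y_train)
print(knn.score(X_test, y_test))  # fraction of test samples classified correctly

Calling knn.predict() on a single new instance returns the class held by the majority of its 5 nearest training points, which is exactly the rule described above.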

Here we have shown:
[00:00:00] - Intro
[00:02:28] - KNN Algorithm
[00:03:00] - Scikit-Learn KNeighborsClassifier
[00:03:36] - Implementation of KNN
[00:06:03] - Confusion Matrix (see the first sketch after this list)
[00:07:04] - Classification Report (see the first sketch after this list)
[00:08:01] - How to find the best value for K (see the second sketch after this list)
[00:09:00] - K-Fold Cross-Validation Technique (see the second sketch after this list)
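
The confusion matrix and classification report chapters can be reproduced with scikit-learn's metrics module. A minimal sketch, assuming the same 5-neighbor model and train/test split as above (these settings are illustrative, not confirmed from the video):

from sklearn.datasets import load_iris
from sklearn.metrics import classification_report, confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42)

y_pred = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train).predict(X_test)

# Rows of the confusion matrix are true classes, columns are predicted classes.
print(confusion_matrix(y_test, y_pred))
# Per-class precision, recall, F1-score, and support.
print(classification_report(y_test, y_pred))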
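
For the last two chapters, a common way to find the best value for K is to score each candidate with K-fold cross-validation and keep the winner. A minimal sketch, where the candidate range 1-30 and the 10-fold setting are assumptions, not necessarily the values used in the video:

import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)

# Average 10-fold cross-validation accuracy for each candidate K.
scores = []
for k in range(1, 31):
    knn = KNeighborsClassifier(n_neighbors=k)
    scores.append(cross_val_score(knn, X, y, cv=10).mean())

best_k = int(np.argmax(scores)) + 1  # +1 because k starts at 1
print(best_k, max(scores))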

#DataScience #MachineLearning #ComputerScience #AI
Comments

Hey man, that's really nice! Keep up the great work.

profsciencia

"The most amount of neighbor the data point is close to, the instance falls into that category"
1:59
"the number of the YELLOW triangular categories is higher, therefore the new instance falls into the RED category" then "In 5 n_neighbor, the number of YELLOW data points is higher than the number of RED.. therefore the classifier will predict the new data point will fall into the YELLOW..". How does the logic work?

granothon