Understanding the mAP (mean Average Precision) Evaluation Metric for Object Detection

In this tutorial, you will learn how to use the mAP (mean Average Precision) metric to evaluate the performance of an object detection model. I will cover in detail what mAP is and how to calculate it, and I will give you an example of how I use it in my YOLOv3 implementation.

The mean average precision (mAP), sometimes simply referred to as AP, is a popular metric used to measure the performance of models on document/information retrieval and object detection tasks. If you read new object detection papers from time to time, you will almost always see the authors compare the mAP of their proposed methods against the most popular ones.
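As a rough sketch of the general recipe (not necessarily the exact code used in this video's YOLOv3 implementation), per-class AP is typically computed by ranking detections by confidence, accumulating true and false positives, and integrating the resulting precision-recall curve; mAP is then the mean of the per-class APs. The function and variable names below (average_precision, per_class_results, and so on) are placeholders for illustration only.

```python
import numpy as np

def average_precision(tp, conf, n_ground_truth):
    """AP for one class, given per-detection TP flags (1/0), confidences,
    and the number of ground-truth boxes for that class."""
    # Rank detections by descending confidence
    order = np.argsort(-np.asarray(conf, dtype=float))
    tp = np.asarray(tp, dtype=float)[order]
    fp = 1.0 - tp

    # Cumulative precision / recall while walking down the ranked list
    tp_cum = np.cumsum(tp)
    fp_cum = np.cumsum(fp)
    recall = tp_cum / max(n_ground_truth, 1)
    precision = tp_cum / np.maximum(tp_cum + fp_cum, 1e-16)

    # All-point interpolation: make precision monotonically decreasing,
    # then integrate the area under the precision-recall curve
    recall = np.concatenate(([0.0], recall, [1.0]))
    precision = np.concatenate(([0.0], precision, [0.0]))
    for i in range(len(precision) - 2, -1, -1):
        precision[i] = max(precision[i], precision[i + 1])
    changed = np.where(recall[1:] != recall[:-1])[0]
    return float(np.sum((recall[changed + 1] - recall[changed]) * precision[changed + 1]))

def mean_average_precision(per_class_results):
    """mAP = mean of per-class APs; per_class_results is a list of
    (tp_flags, confidences, n_ground_truth) tuples, one per class."""
    aps = [average_precision(tp, conf, n_gt) for tp, conf, n_gt in per_class_results]
    return sum(aps) / len(aps)
```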

✅ Support My Channel Through Patreon:

✅ One-Time Contribution Through PayPal:
Comments

There is an array of numbers for precision and recall in result.txt. What do they mean? Which of them are for the particular custom classes the user is looking for?

nasimthander

Hey, how do I visualise ROC and AUC curves?

nasimthander

Hello sir, do we need to do this on the COCO val set? Can't we do it on the same MNIST test sets?
What is the difference, can you explain it to me?

samida

How do I evaluate this for a license plate detection dataset?

barathm

Hello, thank you for your lessons. Do you know how I can plot a recall-precision curve? Is it better to use TensorBoard or maybe sklearn? Somehow I'm a bit confused about what the classifier and the y and x values are. I found an example: "disp = plot_precision_recall_curve(classifier, X_test, y_test)". Sorry for asking, hope you can give me tips :))))

valeriiakulakova
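A minimal sketch of one way to do this (an assumption, not the approach used in the video): if you already have, for one class, a confidence score per detection and a 1/0 flag saying whether it matched a ground-truth box, you can call sklearn's precision_recall_curve directly and plot the result with matplotlib, without a fitted classifier object. The arrays scores and is_true_positive below are made-up example data. Note that precision_recall_curve computes recall relative to the detections you pass in, so ground-truth boxes that were never detected are not counted; for a proper detection PR curve you would divide by the total number of ground-truth boxes instead.

```python
import matplotlib.pyplot as plt
from sklearn.metrics import precision_recall_curve

# Hypothetical example data for a single class:
scores = [0.95, 0.90, 0.80, 0.60, 0.55, 0.30]   # detection confidences
is_true_positive = [1, 1, 0, 1, 0, 0]           # 1 = detection matched a ground-truth box

precision, recall, thresholds = precision_recall_curve(is_true_positive, scores)

plt.plot(recall, precision, marker="o")
plt.xlabel("Recall")
plt.ylabel("Precision")
plt.title("Precision-Recall curve (single class)")
plt.show()
```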

How do we calculate the accuracy of YOLOv4 for a social distancing video without using CUDA, cuDNN, PyTorch, TensorFlow, etc.? Is it possible to calculate mAP? If it is possible, please ping me about how to write the code for it.

KCDRofficialprojects

What mAP values should we aim to achieve in our detection models? What does it depend on?
For example, if I have 2 classes, should I expect higher mAP values than for detection with 20 classes (given the same amount of data for each class)?

cinnamoncider

Why is mAP the preferred metric in object detection over something like the F1 score or recall?

loocyug

I have a question. Say, for example, that at a confidence of 0.5 there's a successful prediction (its IoU between ground truth and prediction is higher than the threshold). In this scenario there's a TRUE POSITIVE.
Now, at 0.6 confidence the IoU is no longer bigger than the threshold, so it becomes an unsuccessful prediction. Do we add a FALSE POSITIVE (because we have a wrong prediction) and a FALSE NEGATIVE (because we have a ground truth without a successful prediction)? Or do we just add a FALSE POSITIVE?

It made more sense to me to add both a false positive and a false negative. The issue is that at a super high confidence, say 0.9, there are almost no TRUE POSITIVES but a lot of FALSE POSITIVES and FALSE NEGATIVES, making both precision and recall tend to 0. Shouldn't the two metrics move in opposite directions as the confidence value changes (when precision is approx. 0, recall should be approx. 1, and vice versa)? I've seen this happen in pretty much every mAP tutorial, but I can't figure out what I am doing wrong and I have nobody to ask for advice.

donduarte
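One common convention (an assumption about typical mAP code, not necessarily what this video does): detections whose confidence falls below the threshold are simply discarded rather than counted as false positives, and a ground-truth box left without a match only shows up as a false negative through the recall denominator. Under that convention, raising the confidence threshold keeps only the most certain detections, so precision tends to rise while recall falls. A toy illustration with made-up numbers:

```python
# Toy example (made-up numbers): detections below the confidence threshold
# are dropped, not counted as FP; unmatched ground-truth boxes only affect
# recall through its denominator.
detections = [  # (confidence, matched_a_ground_truth_box)
    (0.95, True), (0.90, True), (0.75, False), (0.60, True), (0.40, False),
]
n_ground_truth = 4  # total ground-truth boxes across the evaluated images

for conf_threshold in (0.5, 0.7, 0.9):
    kept = [matched for conf, matched in detections if conf >= conf_threshold]
    tp = sum(kept)
    fp = len(kept) - tp
    precision = tp / max(len(kept), 1)
    recall = tp / n_ground_truth
    print(f"thr={conf_threshold}: TP={tp} FP={fp} "
          f"precision={precision:.2f} recall={recall:.2f}")
```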

Why do you say that the higher the threshold value is, the lower the mAP will be (at 7:02 in the video)?

manavmadan

Thank you so much. I fixed all the errors. Thank you so much for sharing your knowledge.
Do you have a plan to do cross-validation on this project?

grlg

Thank you for your video. I am not clear about the concept of a positive.
Does it mean:
1. bounding boxes with an objectness confidence score > the objectness threshold, or
2. bounding boxes with an objectness confidence score > the objectness threshold and a class confidence score > the class threshold?

For YOLOv3, could we simply regard the class with the highest predicted probability as the predicted class, instead of comparing each predicted class probability with the class threshold?

If we can simply regard the class with the highest predicted probability as the predicted class, does that mean we regard the predicted bounding box as a negative if its predicted class is different from the class of its corresponding ground truth?

Takoyaki-hicj

Hello Python Lessons, I love your captcha solver, how can I contact you? I just started with Python and I know basically nothing, so I probably shouldn't start with your captcha solver, but I really need it. I had some problems, so I wanted to contact you; otherwise I can give you the details about my problem in this comment section. Also, sorry for writing my problem under a roughly 2-year-old video here, but I don't really know how else to message you.

ionwango