Calculating Tensorflow Object Detection Metrics with Python | Mean Average Precision (mAP) & Recall

Finished building your object detection model?

Want to see how it stacks up against benchmarks?

Need to calculate precision and recall for your reporting?

I got you!

In this video you'll learn how to calculate both training and evaluation metrics for object detection models built with the TensorFlow Object Detection API in Python, including how to evaluate your model and report mean average precision (mAP) and average recall (AR).

In this video, you'll learn how to:
1. View training results and loss inside of TensorBoard
2. Evaluate the performance of TensorFlow Object Detection models (a command sketch follows this list)
3. Calculate mean average precision (mAP) and average recall (AR) for object detection models
4. View mAP and AR for evaluation data using TensorBoard (a launch sketch follows the chapter list)
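For orientation before diving in: evaluation with the TF2 Object Detection API is driven by the same model_main_tf2.py script used for training, and passing --checkpoint_dir switches it into evaluation mode, printing the COCO mAP/AR numbers and writing them to an eval event file. A minimal sketch, assuming a typical workspace layout (the paths below are placeholders, not necessarily the ones used in the video):

```python
# Minimal sketch: run the TF2 Object Detection API evaluation job.
# PIPELINE_CONFIG and MODEL_DIR are assumed paths - adjust to your workspace.
import subprocess

PIPELINE_CONFIG = "Tensorflow/workspace/models/my_ssd_mobnet/pipeline.config"  # assumed path
MODEL_DIR = "Tensorflow/workspace/models/my_ssd_mobnet"                        # assumed path

# Passing --checkpoint_dir makes model_main_tf2.py evaluate the latest
# checkpoint (COCO mAP / AR) instead of training.
subprocess.run(
    [
        "python", "Tensorflow/models/research/object_detection/model_main_tf2.py",
        f"--model_dir={MODEL_DIR}",
        f"--pipeline_config_path={PIPELINE_CONFIG}",
        f"--checkpoint_dir={MODEL_DIR}",
    ],
    check=True,
)
```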

Chapters:
0:00 - Start
3:19 - Calculating Training Loss and Metrics
6:14 - Coding the Evaluation Script
9:08 - Calculating Mean Average Precision (mAP) and Average Recall (AR)
12:21 - Viewing mAP and AR in TensorBoard
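Both jobs write event files under the model directory (typically train/ and eval/ subfolders), so pointing TensorBoard at that directory shows the training loss curves alongside the COCO mAP/AR panels from evaluation. A minimal launch sketch, reusing the same placeholder path as above:

```python
# Minimal sketch: launch TensorBoard on the model directory so both the
# train/ loss curves and the eval/ mAP / AR values are visible.
from tensorboard import program

MODEL_DIR = "Tensorflow/workspace/models/my_ssd_mobnet"  # assumed path

tb = program.TensorBoard()
tb.configure(argv=[None, "--logdir", MODEL_DIR])
url = tb.launch()  # starts TensorBoard in-process and returns its URL
print(f"TensorBoard is running at {url}")
```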

Oh, and don't forget to connect with me!

Happy coding!
Nick

P.S. Let me know how you go and drop a comment if you need a hand!
Comments

Man, I have been following you this past week and have already built one project, with another in progress.
Thank you so much for these easy-peasy tuts and for working through the errors with us.

Thank you ❤️

manishsharma

You're one person whom I don't mind watching ads for. I watch them till the end and also click them! Hope it helps you!

coded

Super quality work, as always, clear and responsive. Glad such people exist!

from-chimp-to-champ

I have explored a lot of tutorials on YouTube, but this one is the best.

saharshsinha

Your content is awesome, I can't wait to create new projects!

TchFlicks

I don't know if you are still answering questions, but did you find out how to get TP, TN, FP, FN or accuracy from the model? Or per-class accuracy for the different classes? Or a confusion matrix?

alafiatemon
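For anyone with the same question: the COCO evaluator only reports precision/recall-style metrics, so TP/FP/FN counts (and from them a confusion matrix) have to be derived separately, typically by matching detections to ground-truth boxes at an IoU threshold. A rough, hypothetical sketch of that matching for a single image; the box format, thresholds and inputs are assumptions, not something covered in the video:

```python
import numpy as np

def iou(box_a, box_b):
    # Boxes are [ymin, xmin, ymax, xmax] in the same coordinate system.
    ymin, xmin = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ymax, xmax = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ymax - ymin) * max(0.0, xmax - xmin)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def count_tp_fp_fn(gt_boxes, gt_classes, det_boxes, det_classes, det_scores,
                   iou_thresh=0.5, score_thresh=0.5):
    """Greedily match detections to ground truth for one image and count."""
    tp, fp, matched = 0, 0, set()
    for i in np.argsort(-np.asarray(det_scores)):   # highest score first
        if det_scores[i] < score_thresh:
            continue
        best_j, best_iou = -1, 0.0
        for j, (gt_box, gt_cls) in enumerate(zip(gt_boxes, gt_classes)):
            if j in matched or gt_cls != det_classes[i]:
                continue
            overlap = iou(det_boxes[i], gt_box)
            if overlap > best_iou:
                best_j, best_iou = j, overlap
        if best_iou >= iou_thresh:
            tp += 1
            matched.add(best_j)
        else:
            fp += 1
    fn = len(gt_boxes) - len(matched)   # unmatched ground-truth boxes
    return tp, fp, fn
```

True negatives aren't well defined for detection (there is no fixed set of "negative boxes"), which is why accuracy is rarely reported for these models.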

Thank you for the very useful tutorial! It has answered many questions I previously had about OD performance metrics. Could you prepare a tutorial on how to extract performance metrics (AP, AR, mAP, etc.) at the best saved checkpoint (the checkpoint with the lowest total loss) instead of at the last checkpoint, as in your tutorial?

frankkilima

How can I calculate a confusion matrix from these average precision and average recall values?

unamattina

Been waiting for this!!! Thank you for saving us in our research project.

arellanosophia

Great content as always! Thank you Nicholas!

gustavojuantorena

Thank you so much for these tutorials, I've learned a lot in just one week!
Can you please let me know how we can run this evaluation after every 1000 steps to get the average precision at every 1000 steps?

sohailali
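For anyone wondering the same: checkpoints are what get evaluated, and model_main_tf2.py saves one every --checkpoint_every_n steps (1000 by default), so one approach is to keep a second evaluation job running alongside training so that new checkpoints are scored as they appear. A rough sketch of the two commands, with placeholder paths:

```python
# Hypothetical paths - adjust to your own workspace layout.
SCRIPT = "Tensorflow/models/research/object_detection/model_main_tf2.py"
MODEL_DIR = "Tensorflow/workspace/models/my_ssd_mobnet"
PIPELINE_CONFIG = f"{MODEL_DIR}/pipeline.config"

# Training job: writes a checkpoint every 1000 steps.
train_cmd = (f"python {SCRIPT} --model_dir={MODEL_DIR} "
             f"--pipeline_config_path={PIPELINE_CONFIG} --checkpoint_every_n=1000")

# Evaluation job (run in a second terminal): watches MODEL_DIR and evaluates
# new checkpoints as they appear, logging mAP/AR per step to the eval folder.
eval_cmd = (f"python {SCRIPT} --model_dir={MODEL_DIR} "
            f"--pipeline_config_path={PIPELINE_CONFIG} --checkpoint_dir={MODEL_DIR}")

print(train_cmd)
print(eval_cmd)
```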

Dear Nicholas, thank you once again for the tutorial. After training an object detector, I get an events (log) file for the training job, which I can visualize in TensorBoard for various metrics. However, the total loss value at the last step of training is very different from the one I get when I run the evaluation job (I guess this is because the training job's total losses are computed on the training dataset while the evaluation job's are computed on the evaluation dataset). Worse still, when running TensorBoard on the evaluation job's events file, I get only a single point for the total loss at the last step, not a graph like the one I get from the training events file, which shows how the total loss changes from the start to the end of the training job. I would like such a graph for the evaluation job's events file too. My question is: will I be able to get a graph of the evaluation total loss (similar to the training one, showing change over time in steps/epochs) if I manage to run an evaluation job in parallel with the training job? Second, what is the meaning of 'smoothed' in relation to the total loss value shown in TensorBoard? Sorry for the long question!

frankgkilima
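For anyone hitting the same thing: the single point appears because the evaluation job runs once against the latest checkpoint, so running evaluation alongside training (or re-running it as each checkpoint is written) is what produces a curve, and 'smoothed' is simply TensorBoard's exponential moving average of the raw values. If you want the raw numbers out of an events file yourself, a minimal sketch using TensorBoard's EventAccumulator (the eval path is a placeholder, and newer pipelines log metrics as tensor summaries rather than plain scalars, hence the two loops):

```python
# Minimal sketch: read metric values back out of an evaluation events file.
import tensorflow as tf
from tensorboard.backend.event_processing import event_accumulator

EVAL_DIR = "Tensorflow/workspace/models/my_ssd_mobnet/eval"  # assumed path

acc = event_accumulator.EventAccumulator(EVAL_DIR)
acc.Reload()

# Metrics written as tensor summaries (typical for TF2 pipelines).
for tag in acc.Tags().get("tensors", []):
    for event in acc.Tensors(tag):
        value = tf.make_ndarray(event.tensor_proto)
        if value.size == 1 and value.dtype.kind == "f":   # keep scalar floats only
            print(tag, "step", event.step, "value", float(value))

# Metrics written as plain scalar summaries (older pipelines).
for tag in acc.Tags().get("scalars", []):
    for event in acc.Scalars(tag):
        print(tag, "step", event.step, "value", event.value)
```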

Thank you so much for these courses.
May I ask, is there any way I can print a confusion matrix using TensorBoard?
I built my first object detection project and I need to see the confusion matrix, but somehow I ran into a lot of errors trying to do that.

amirael-shekhi

Dear Nicholas, greetings. Out of curiosity, I decided to use the PASCAL VOC evaluation metrics in an object detection task implemented with the TensorFlow 2 OD API. I found that the PASCAL VOC metrics include an implementation of average precision (AP) for individual classes in the dataset (something most of us have been asking for in this forum). However, a problem arises: the first class in label_map.pbtxt produces abnormally poor performance compared to the other classes. When I switch the order of classes in label_map.pbtxt, whichever class is placed first (with id 1) produces abnormally poor average precision (AP). Do you have any idea about this problem and a possible solution?

frankkilima
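For anyone who wants to try the same switch: eval_config in pipeline.config has a metrics_set field, and pointing it at the PASCAL VOC evaluator makes the evaluation job report AP per category. A minimal sketch using the API's config_util helpers (paths are placeholders; this only enables the metric and doesn't address the label_map ordering issue described above):

```python
# Minimal sketch: switch the evaluation metrics to PASCAL VOC (per-class AP).
from object_detection.utils import config_util

PIPELINE_CONFIG = "Tensorflow/workspace/models/my_ssd_mobnet/pipeline.config"  # assumed path
MODEL_DIR = "Tensorflow/workspace/models/my_ssd_mobnet"                        # assumed path

configs = config_util.get_configs_from_pipeline_file(PIPELINE_CONFIG)

# Replace the default COCO metrics with the PASCAL VOC evaluator.
del configs["eval_config"].metrics_set[:]
configs["eval_config"].metrics_set.append("pascal_voc_detection_metrics")

pipeline_proto = config_util.create_pipeline_proto_from_configs(configs)
config_util.save_pipeline_config(pipeline_proto, MODEL_DIR)  # overwrites pipeline.config
```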

Appreciate your efforts!! Can you please tell me how to calculate accuracy for each word (hello, yes, no, thankyou, iloveyou) separately, in the sign language tutorial, instead of calculating average accuracy?

gkhartheesvar

Really helpful video. Is there a way to also view the metrics while training the model?

courtney_ann

Hello, based on the tutorial video we only need to divide the dataset into two parts, training and testing. Is there no need for a validation set?

xela

Can you tell us different ways we can increase the mAP of an object detection model? What should we consider during training and dataset preparation to get a high mAP? Please!

jiteshaneja

Very useful!! Thank you so much for this superb tutorial <3

tharu.zash.

Thank you, it was very comprehensive. Just wondering: when evaluating, you used a different dataset (not the training one), right? I looked at the config file, and eval_input points to the evaluation dataset rather than the training set. If we point the 'eval_input' property in the config file at the training dataset folder, we should get these metrics for the training dataset, right? Also, is there any way we can plot these metrics for all checkpoints (a line graph of how these metrics change across checkpoints) rather than just the latest checkpoint? What I am trying to do is diagnose overfitting by plotting these metrics for both the training and evaluation datasets over all checkpoints.

nhikieu
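For anyone attempting the same overfitting diagnosis: the evaluation job normally scores only the latest checkpoint, so one workaround is to rewrite the bookkeeping 'checkpoint' file to name each saved checkpoint in turn and re-run evaluation, letting TensorBoard accumulate one mAP/AR point per checkpoint. A rough, hypothetical sketch (paths, checkpoint naming and the eval_timeout value are assumptions; back up the original 'checkpoint' file first):

```python
# Rough sketch: evaluate every saved checkpoint, not just the latest one.
import glob
import os
import subprocess

MODEL_DIR = "Tensorflow/workspace/models/my_ssd_mobnet"                        # assumed path
PIPELINE_CONFIG = os.path.join(MODEL_DIR, "pipeline.config")
EVAL_SCRIPT = "Tensorflow/models/research/object_detection/model_main_tf2.py"  # assumed path

# Checkpoints are saved as ckpt-N.index / ckpt-N.data-*; collect each prefix.
prefixes = sorted(
    [p[:-len(".index")] for p in glob.glob(os.path.join(MODEL_DIR, "ckpt-*.index"))],
    key=lambda p: int(p.rsplit("-", 1)[-1]),
)

for prefix in prefixes:
    # Point the 'checkpoint' bookkeeping file at this checkpoint so the eval
    # job treats it as the latest one.
    with open(os.path.join(MODEL_DIR, "checkpoint"), "w") as f:
        f.write(f'model_checkpoint_path: "{os.path.basename(prefix)}"\n')
    subprocess.run(
        [
            "python", EVAL_SCRIPT,
            f"--model_dir={MODEL_DIR}",
            f"--pipeline_config_path={PIPELINE_CONFIG}",
            f"--checkpoint_dir={MODEL_DIR}",
            "--eval_timeout=0",  # assumed: exit instead of waiting for new checkpoints
        ],
        check=True,
    )
```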