Object Tracking Using YOLOv4, Deep SORT, and TensorFlow

Learn how to build an object tracker using YOLOv4, Deep SORT, and TensorFlow! Run the real-time object tracker on both webcam and video. This video shows you how to get the necessary code, set up the required dependencies, and run the tracker. It can be run with the YOLOv4, YOLOv4-tiny, or YOLOv3 object detection models.

#objecttracking #yolov4 #deepsort

YOLOv4 is a state-of-the-art algorithm that uses deep convolutional neural networks to perform object detection. We can feed YOLOv4's detections into Deep SORT (Simple Online and Realtime Tracking with a Deep Association Metric) to create a highly accurate object tracker. Running Deep SORT with the YOLOv4-tiny model yields even higher speed and FPS, perfect for mobile apps or edge devices such as the Raspberry Pi or Jetson Nano.
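At the heart of that detector-to-tracker handoff is an association step: each frame's detections must be matched to existing tracks. As a self-contained illustration of the idea (this is a simplified sketch, not the repo's code — Deep SORT additionally uses a Kalman motion model and deep appearance features, and all names below are hypothetical):

```python
# Simplified SORT-style association: match this frame's detections to
# existing tracks by IoU overlap; unmatched detections start new track IDs.

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def update_tracks(tracks, detections, next_id, iou_threshold=0.3):
    """Greedily assign each detection to the best-overlapping free track."""
    assigned, used = {}, set()
    for det in detections:
        best_id, best_iou = None, iou_threshold
        for tid, box in tracks.items():
            if tid in used:
                continue
            overlap = iou(det, box)
            if overlap > best_iou:
                best_id, best_iou = tid, overlap
        if best_id is None:           # no track overlaps enough: new object
            best_id = next_id
            next_id += 1
        used.add(best_id)
        assigned[best_id] = det
    return assigned, next_id

tracks = {1: (10, 10, 50, 50)}                    # track 1 from the last frame
dets = [(12, 11, 52, 49), (200, 200, 240, 240)]   # one overlap, one new object
tracks, next_id = update_tracks(tracks, dets, next_id=2)
print(sorted(tracks))  # → [1, 2]
```

Deep SORT improves on this by also comparing appearance embeddings, which is what lets it re-identify an object after an occlusion instead of issuing a fresh ID.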

In this video I cover:
1. Cloning the code and installing dependencies.
2. Converting the pre-trained YOLOv4 model into a TensorFlow model.
3. Running the object tracker on video.
4. Filtering which classes are allowed to be tracked.
5. Running the object tracker with YOLOv4-tiny for high FPS.
6. Adding the info flag to see detailed information on tracks.
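The steps above map to a handful of commands. A sketch following the repo's README (flag names and paths are as I recall them from theAIGuysCode/yolov4-deepsort — check the README in your checkout before running):

```shell
# 1. Clone the code and install dependencies (GPU users: requirements-gpu.txt)
git clone https://github.com/theAIGuysCode/yolov4-deepsort.git
cd yolov4-deepsort
pip install -r requirements.txt

# 2. Convert the pre-trained Darknet weights into a TensorFlow model
python save_model.py --model yolov4

# 3. Run the object tracker on a video (use --video 0 for a webcam)
python object_tracker.py --weights ./checkpoints/yolov4-416 --model yolov4 \
    --video ./data/video/test.mp4 --output ./outputs/demo.avi

# 5./6. YOLOv4-tiny for higher FPS; --info prints per-track details
python object_tracker.py --weights ./checkpoints/yolov4-tiny-416 --model yolov4 \
    --tiny --video ./data/video/test.mp4 --info
```

Class filtering (step 4) is done by editing the `allowed_classes` list inside `object_tracker.py` rather than via a flag.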

-------------------------------Resources-------------------------------

If you enjoyed the video, toss it a like! 👍

Thanks so much for watching!
- The AI Guy
Comments

Let me know what additions you would like added to the code and if you run into any issues. Cheers :)

TheAIGuy

While looking for a YOLOv4 repository I found yours, fully documented and ready to go. What a goldmine! Works so well, many thanks!

plasminds

You have no idea how much this video saved me; I cannot thank you enough.

nojoodothmanal-ghamdi

I came across your channel a few days ago and have been following all your videos since. I had the exact same project that you execute in this video, and I really appreciate your work. Keep making these videos; they're an enormous source of practical learning.

deepanshuvishwakarma

To anyone for whom no bounding boxes appear in the video: create a new environment, as he says at the beginning of the video, and use that, rather than one you might have created earlier for some other purpose. This worked for me.

rajathav

Great video, well-documented code, and a sturdy implementation. 10/10.

omniopen

Can you please do a video on custom object training for Deep SORT, i.e. how to train Deep SORT on your own dataset?

GauravAsati

Nice video! I have a question regarding Deep SORT: its feature extractor is trained only on the person class, not on other object classes. Did you make any changes to the model, or do you use Deep SORT's person-trained feature extractor on all object classes? It would be great if you could make a video on multi-class tracking with Deep SORT.

ragavendra

When I try to convert the model, I get:

conv_weights = conv_weights.reshape(conv_shape).transpose([2, 3, 1, 0])
ValueError: cannot reshape array of size 4554552 into shape (1024, 512, 3, 3)

What should I do?

guilhermedias
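That ValueError is a pure element-count mismatch: a reshape can only succeed when the flat buffer holds exactly as many values as the target shape needs. One common cause (an assumption on my part, but a frequent one with this converter) is pairing the wrong .weights file with the chosen flags, e.g. yolov4-tiny weights converted with `--model yolov4`. A minimal sketch of the arithmetic:

```python
# The converter expected a (1024, 512, 3, 3) conv kernel but the weights file
# supplied only 4,554,552 values, so NumPy's reshape raises ValueError.

def can_reshape(num_elements, shape):
    """True only if the flat buffer has exactly as many values as shape needs."""
    needed = 1
    for dim in shape:
        needed *= dim
    return num_elements == needed

expected = (1024, 512, 3, 3)
print(1024 * 512 * 3 * 3)              # → 4718592 values needed
print(can_reshape(4554552, expected))  # → False: hence the ValueError
print(can_reshape(4718592, expected))  # → True
```

If the counts don't line up like this, double-check that the weights file on disk matches the `--model`/`--tiny` combination you passed to the conversion script.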

I'm trying to run the object_tracker script, but it gets stuck on frame #1 and doesn't move. It seems like a problem with the model conversion to TF.

Can someone point me in the right direction?

nbourre

Question: when I run your notebook it runs well, but no boxes are added to the output video! The input and output are the same (no boxes or labels :/).

vahabmspour

I followed your instructions, but the output video has no bounding boxes. Can you tell me why?

quangtuyennguyen

I hope to see an additional feature: counting the number of cars in the video that successfully pass through the lane.

yosmy

Great video! But can someone please explain how Deep SORT works? How is it trained on the 80 classes to track them, or can it track any type of object? Thanks in advance.

saifeddinemahmoudi
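To the question above: Deep SORT is not trained per detection class. The detector (YOLOv4 here) handles the 80 classes; Deep SORT's own learned component is a small CNN, trained on a person re-identification dataset, that maps each detection crop to an appearance embedding, and tracks are matched partly by cosine distance between embeddings. It can therefore be run on any class the detector emits, though the embeddings are most discriminative for people. A toy sketch of that distance (the vectors below are made up for illustration; in the real tracker they come from the CNN):

```python
import math

# Deep SORT's appearance term: a detection is associated with the track whose
# stored embeddings are nearest in cosine distance.

def cosine_distance(a, b):
    """1 - cosine similarity; near 0 means the same appearance direction."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (na * nb)

track_embedding = [0.6, 0.8, 0.0]     # appearance stored for an existing track
same_object     = [0.58, 0.81, 0.02]  # crop of the same object, next frame
other_object    = [0.0, 0.1, 0.99]    # crop of a different object

print(cosine_distance(track_embedding, same_object))   # small
print(cosine_distance(track_embedding, other_object))  # large
```

In the full tracker this appearance cost is combined with a Kalman-filter motion gate, so a detection must be plausible in both position and appearance to extend a track.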

@The AI Guy Thank you for your work. I followed your instructions and everything runs like in your video, but I get neither detections nor tracking. The output AVI equals the input AVI, and when I add --info on the command line I see only FPS, no detection and tracking info. 🤔

Andrey_Zakharoff

Thank you for an awesome, clear explanation of object tracking. If I convert the tiny YOLOv4 model to TensorFlow Lite, can I use a Coral USB Accelerator to speed up the inference? I just need a way to produce a post-training-quantized Edge TPU .tflite model.

peglegsqueeks

@The AI Guy Your videos are very useful, thanks. One question: instead of using a webcam, is it possible to access security cameras via the RTSP protocol?

MiguelJimenez-cuft

Hi AI Guy, I have a quick question. I would like to train on my own dataset. I have labeled all the pictures, and there are pictures in which no objects to be trained are visible. Do I also have to create a completely empty .txt file for these pictures? Or do YOLO and TensorFlow treat an image with no text file as having nothing to recognize?

martinschmit

You did a very good job here. I downloaded the code, followed your instructions, and it worked perfectly on my Windows 10 machine (with GPU). There is just one strange thing: I tracked a ball that is in plain sight the whole time, with no interruptions or obstacles. Still, it finds the same ball with three different IDs (1, 2, and 4). I guess it loses the ball for a few milliseconds (one or more frames), finds it again in the next frame, and starts a new counter. I will figure it out :-)
Thank you very much for the excellent video!

gerhardheinzerling

Can this be used with Blue Iris for monitoring security cameras?

shannonbreaux