Keypoint Detection on Custom Dataset Using YOLOv7 | YOLOv7-Pose on Custom Dataset

In this tutorial, I will show you how to use YOLOv7-Pose for custom keypoint detection.

Keypoint detection is a computer vision task that involves identifying and localizing specific points of interest, or keypoints, in an image.
Models like Detectron2, YOLOv7, and YOLOv8 are designed to detect a fixed number of keypoints (e.g., 17 keypoints for the COCO person class).
If your custom dataset has a different set of keypoints or a different number of keypoints, the model configuration needs to be modified accordingly so it can recognize and predict those keypoints.
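
As a rough illustration of what that modification involves (an editorial sketch, not code from the video): YOLOv7-Pose expects one label row per object, holding the class id, the normalized bounding box, and then x, y, visibility for every keypoint, so each row has 5 + nkpt * 3 values. The helper name make_label_row below is made up.

# Sketch (assumption): one YOLOv7-Pose label row is
#   class_id  cx  cy  w  h  x1 y1 v1  x2 y2 v2 ... x_nkpt y_nkpt v_nkpt
# with every coordinate normalized to [0, 1], i.e. 5 + nkpt * 3 values per row.

def make_label_row(class_id, box, keypoints):
    """box = (cx, cy, w, h), normalized; keypoints = [(x, y, visibility), ...]."""
    values = [class_id, *box]
    for x, y, v in keypoints:
        values.extend([x, y, v])
    return " ".join(f"{val:g}" for val in values)

# Example: one object of class 0 with 4 keypoints -> 5 + 4 * 3 = 17 values.
print(make_label_row(0, (0.50, 0.40, 0.30, 0.25),
                     [(0.45, 0.30, 2), (0.55, 0.30, 2), (0.45, 0.50, 2), (0.55, 0.50, 2)]))
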
Comments

The only video on the entire YouTube platform that talks about custom keypoints for YOLO; everyone else only talks about hand or pose keypoints. Thank you so much for making this video!

bombcrypto

Your video really helps me a lot! I've managed to run keypoint detection on my own X-ray image datasets. Thank you so much! Hope to see more tutorials 🥰

EricH-vn

Hi @codeWithAarohi, your video is really helpful. Just a quick question: let's say you have 2 classes and each class has 1 keypoint. What should nkpt be, 1 or 2?

clementkangombe

I have created a custom dataset of hexagonal nuts, bolts and washers.

Nuts have 6 keypoints.
Bolts have anywhere from 50 to 130 keypoints.
Washers have 8 keypoints.

What value should I pass for the number of keypoints in the .yaml file?

unknown_
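
For heterogeneous keypoint counts like these, one common workaround (an editorial sketch, not something shown in the video) is to set the keypoint count to the maximum across classes and pad the missing keypoints with zeros and visibility 0, much as a later commenter describes. MAX_KPTS and pad_keypoints below are made-up names.

# Sketch (assumption): pad every object's keypoints to one maximum count so that
# all label rows have the same length of 5 + MAX_KPTS * 3 values.
MAX_KPTS = 130  # e.g. the largest keypoint count across nuts, bolts and washers

def pad_keypoints(keypoints, max_kpts=MAX_KPTS):
    """keypoints = [(x, y, visibility), ...]; pads with (0, 0, 0) = 'not labeled'."""
    padded = list(keypoints)[:max_kpts]
    padded += [(0.0, 0.0, 0)] * (max_kpts - len(padded))
    return padded

# A washer with 8 real keypoints still yields MAX_KPTS triplets per label row.
washer_kpts = [(0.1 * i, 0.2, 2) for i in range(8)]
print(len(pad_keypoints(washer_kpts)))  # -> 130
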

Do you know how to solve this error?
AttributeError: Can't get attribute 'SPPF' on <module 'models.common' from

김지우-rs
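
This AttributeError usually means the checkpoint being loaded references an SPPF layer that the local models/common.py does not define, typically because the repository version and the weights do not match (for example, YOLOv5 weights loaded with YOLOv7 code). A hedged workaround is to check out the matching repo/weights pair, or to paste an SPPF class into models/common.py; the sketch below follows the well-known YOLOv5 definition and assumes the Conv block already defined in that file.

# Hedged workaround sketch: add this class to models/common.py if your checkpoint
# expects it. It follows the standard YOLOv5 SPPF definition and reuses the Conv
# block that models/common.py already defines.
import torch
import torch.nn as nn

class SPPF(nn.Module):
    # Spatial Pyramid Pooling - Fast: equivalent to SPP with k=(5, 9, 13), but cheaper.
    def __init__(self, c1, c2, k=5):
        super().__init__()
        c_ = c1 // 2                      # hidden channels
        self.cv1 = Conv(c1, c_, 1, 1)     # Conv is the existing block in models/common.py
        self.cv2 = Conv(c_ * 4, c2, 1, 1)
        self.m = nn.MaxPool2d(kernel_size=k, stride=1, padding=k // 2)

    def forward(self, x):
        x = self.cv1(x)
        y1 = self.m(x)
        y2 = self.m(y1)
        return self.cv2(torch.cat([x, y1, y2, self.m(y2)], 1))
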

If my dataset has 2 classes, one with 4 keypoints and the other with 5 keypoints, what would nkpt be?

rajbhanushali

Great video! Would it be possible to train the model considering that there would be more than one object (instance) of the same class per image, for example, in this case, several cups per image?

jordaocassiano

Hello, I'm not sure how to export this model to ONNX. I have a custom model working in PyTorch, but I want to convert it to ONNX and am having trouble post-processing the results.

LunchAtRbees
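
On the ONNX question: a minimal export sketch using torch.onnx.export might look like the following (an assumption, not the workflow from the video; the repository's own export script, where your branch has one, handles more options). 'best.pt', 'best.onnx', and the 640x640 input size are placeholders.

# Minimal ONNX export sketch (assumption). Run from the yolov7 repository root so
# that models.experimental is importable; replace the placeholders with your own
# checkpoint path and input size.
import torch
from models.experimental import attempt_load

model = attempt_load('best.pt', map_location='cpu')  # trained pose checkpoint (placeholder)
model.eval()

dummy = torch.zeros(1, 3, 640, 640)  # batch, channels, height, width
torch.onnx.export(
    model, dummy, 'best.onnx',
    opset_version=12,
    input_names=['images'],
    output_names=['output'],
)
# Post-processing (NMS, splitting box values from keypoint values) still has to be
# applied to the raw 'output' tensor after running the ONNX model.
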

Why didn't you make any changes in the Python files?

sa-dhu

Can you teach us how to implement this on Android?

sanjoetv

Where are the coordinates of the detected keypoints saved at the end?

B.F.
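
On where the keypoint coordinates end up: they only exist in the model's output tensor after NMS and are not written to disk unless you save them yourself. A hedged sketch, assuming the pose branch's non_max_suppression_kpt and output_to_keypoint helpers and a model/image prepared the same way as in the demo code:

# Hedged sketch (assumes 'model' and 'image' are already loaded/preprocessed as in
# the pose demo). Coordinates come out in the resized network-input pixel space.
import numpy as np
import torch
from utils.general import non_max_suppression_kpt
from utils.plots import output_to_keypoint

with torch.no_grad():
    pred = model(image)[0]

pred = non_max_suppression_kpt(pred, 0.25, 0.65,
                               nc=model.yaml['nc'],
                               nkpt=model.yaml['nkpt'],
                               kpt_label=True)
dets = output_to_keypoint(pred)  # rows: batch, class, cx, cy, w, h, conf, then x, y, conf per keypoint
np.savetxt('keypoints.txt', dets[:, 7:], fmt='%.2f')  # save just the keypoint triplets
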

I have data with two object classes, and every object has multiple keypoints (23 and 11). I was getting an error for having a heterogeneous number of keypoints, so I made them uniform by zero-padding the end of the second object's keypoints.
Now each label row has 74 values: 1 class id + 4 bounding-box values + (23*3) = 69 keypoint values, and I have two rows per image.

My yaml looks like this:

# number of classes
nc: 2
names:
  0: A
  1: B
nkpt: 46
kpt_shape: [46, 3]

But the model says it is looking for 46*3 = 138 plus 5, 143 values in total, so every image is being flagged as corrupt.
Please help me resolve this.

muhammadhammadbashir
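
On the 143-value complaint above: with nkpt: 46 the loader expects every label row to hold 5 + 46*3 = 143 values, so 74-value rows are rejected as corrupt. If each row actually carries 23 keypoint triplets, the keypoint count in the .yaml most likely has to match the per-object count (23), not the sum over objects. A small sanity-check sketch, assuming space-separated label files under a placeholder labels/ folder:

# Sanity-check sketch (assumption about the folder layout): every row in a
# YOLOv7-Pose label file must hold exactly 5 + NKPT * 3 space-separated values,
# where NKPT is the per-object keypoint count from the .yaml.
from pathlib import Path

NKPT = 23                 # per-object keypoint count after padding
EXPECTED = 5 + NKPT * 3   # class id + 4 box values + 3 values per keypoint

for label_file in Path('labels').glob('*.txt'):   # 'labels/' is a placeholder path
    for i, line in enumerate(label_file.read_text().splitlines(), 1):
        n = len(line.split())
        if n != EXPECTED:
            print(f'{label_file} line {i}: {n} values, expected {EXPECTED}')
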

Hi miss, I successfully trained the model, but when I run detect.py I receive the error: TypeError: unsupported operand type(s) for *: 'NoneType' and 'int'. Can you please help me solve it? 😢

ngoyang

How do you draw these lines between the points?

АннаЛагуткина-ид
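
On drawing the lines: the pose demo connects keypoints using a hard-coded list of COCO skeleton pairs (the plot_skeleton_kpts helper in utils/plots.py); for custom keypoints you can define your own pairs and draw them with OpenCV. The pair list and example keypoints below are made up.

# Sketch (assumption, not the video's code): connect keypoints by listing which
# indices belong together, then draw one line per pair with OpenCV.
import cv2
import numpy as np

SKELETON = [(0, 1), (1, 2), (2, 3), (3, 0)]  # keypoint index pairs to connect (made up)
CONF_THRESHOLD = 0.5

def draw_skeleton(image, keypoints):
    """keypoints: array of shape (num_kpts, 3) holding pixel x, y and confidence."""
    for a, b in SKELETON:
        if keypoints[a, 2] > CONF_THRESHOLD and keypoints[b, 2] > CONF_THRESHOLD:
            pt1 = (int(keypoints[a, 0]), int(keypoints[a, 1]))
            pt2 = (int(keypoints[b, 0]), int(keypoints[b, 1]))
            cv2.line(image, pt1, pt2, (0, 255, 0), 2)
    for x, y, conf in keypoints:          # draw the keypoints themselves
        if conf > CONF_THRESHOLD:
            cv2.circle(image, (int(x), int(y)), 3, (0, 0, 255), -1)
    return image

demo = np.zeros((200, 200, 3), dtype=np.uint8)
kpts = np.array([[50, 50, 0.9], [150, 50, 0.9], [150, 150, 0.9], [50, 150, 0.9]])
cv2.imwrite('skeleton_demo.jpg', draw_skeleton(demo, kpts))
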

How do you convert the dataset to YOLO format? Nobody has the answer yet!

iyad
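
On converting annotations to YOLO format: the target layout is the label row sketched after the description above (class id, normalized box, then x, y, v per keypoint). Below is a rough COCO-keypoints-to-YOLO conversion sketch, assuming a COCO-style JSON with pixel-space bboxes and flat keypoint lists; 'annotations.json' and 'labels/' are placeholder paths.

# Rough COCO-keypoints -> YOLO-pose conversion sketch (assumption, not the video's
# converter). COCO category ids may need remapping so that classes start at 0.
import json
from collections import defaultdict
from pathlib import Path

coco = json.loads(Path('annotations.json').read_text())
images = {img['id']: img for img in coco['images']}
rows = defaultdict(list)

for ann in coco['annotations']:
    img = images[ann['image_id']]
    iw, ih = img['width'], img['height']
    x, y, w, h = ann['bbox']                      # COCO: top-left x, y, width, height in pixels
    cx, cy = (x + w / 2) / iw, (y + h / 2) / ih   # YOLO: normalized box center
    values = [ann['category_id'], cx, cy, w / iw, h / ih]
    kpts = ann['keypoints']                       # flat list [x1, y1, v1, x2, y2, v2, ...]
    for kx, ky, kv in zip(kpts[0::3], kpts[1::3], kpts[2::3]):
        values += [kx / iw, ky / ih, kv]
    rows[img['file_name']].append(' '.join(f'{v:g}' for v in values))

out_dir = Path('labels')
out_dir.mkdir(exist_ok=True)
for file_name, lines in rows.items():
    (out_dir / (Path(file_name).stem + '.txt')).write_text('\n'.join(lines) + '\n')
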

ERROR: Could not find a version that satisfies the requirement onnxruntime==1.10.0 (from versions: 1.12.0, 1.12.1, 1.13.1, 1.14.0, 1.14.1, 1.15.0, 1.15.1)
ERROR: No matching distribution found for onnxruntime==1.10.0

joelbhaskarnadar

Hello, thank you very much for your teaching video. I encountered some problems while using YOLOv7-Pose and have sent you an email hoping to receive your help. Thank you very much!

方白东

Hello ma'am, can you provide me with help on my project?

indrakumari

AttributeError: Can't get attribute 'SPPF' on <module 'models.common' from
How do I fix it?

sazzadhossain

I came across tons of errors and issues with many parts of this, including accurately converting the COCO data to YOLO data with more than 4 keypoints. My mAP values seem extremely low compared to training on the same dataset with plain YOLOv5 or YOLOv7: it barely manages to break 20% at 500 epochs, whereas plain YOLOv7 detection training would be way higher. Does anyone know why my precision is so cooked?

neutralaim