Simple YOLOv8 Class for Object Detection with Webcam in Real-time


You will also get access to all the technical courses inside the program, including the ones I plan to make in the future! Check out the technical courses below 👇

_____________________________________________________________

In this video 📝 we are going to create a simple YOLOv8 class for an object detection model in Python. We will see how to deploy a trained YOLOv8 model and run live inference on a webcam; this can actually run in real time. You can also check out my YOLOv7 course, where we cover everything from generating a dataset and setting up the model to training and deployment.
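
As a rough sketch of the kind of class built in the video (load model, predict, extract info and visualize, main loop), assuming the ultralytics and opencv-python packages; the class name, weight file, and drawing details here are illustrative and not necessarily identical to the video's code:

import cv2
from ultralytics import YOLO


class ObjectDetection:
    # Illustrative wrapper; the video builds a similar class step by step.
    def __init__(self, capture_index=0, model_path="yolov8n.pt"):
        self.capture_index = capture_index
        self.model = self.load_model(model_path)

    def load_model(self, model_path):
        # Load a pretrained or custom-trained YOLOv8 model.
        model = YOLO(model_path)
        model.fuse()
        return model

    def predict(self, frame):
        # Run inference on a single frame.
        return self.model(frame, verbose=False)

    def plot_bboxes(self, results, frame):
        # Extract boxes, classes and confidences, then draw them.
        for box in results[0].boxes:
            x1, y1, x2, y2 = map(int, box.xyxy[0])
            class_name = results[0].names[int(box.cls[0])]
            conf = float(box.conf[0])
            cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
            cv2.putText(frame, f"{class_name} {conf:.2f}", (x1, y1 - 5),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
        return frame

    def __call__(self):
        # Main loop: read from the webcam, predict, visualize.
        cap = cv2.VideoCapture(self.capture_index)
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            results = self.predict(frame)
            frame = self.plot_bboxes(results, frame)
            cv2.imshow("YOLOv8 Detection", frame)
            if cv2.waitKey(1) & 0xFF == ord("q"):
                break
        cap.release()
        cv2.destroyAllWindows()


if __name__ == "__main__":
    ObjectDetection(capture_index=0)()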

If you enjoyed this video, be sure to press the 👍 button so that I know what content you guys like to see.

_____________________________________________________________


📞 Connect with Me:

_____________________________________________________________

🎮 My Gear (Affiliate links):
🖥️ Desktop PC:

_____________________________________________________________

Timestamps:
0:00 Intro
0:42 YOLOv8 Class
4:00 Load Model
5:53 Predict
6:29 Extract Info & Visualize
11:56 Main Loop
14:37 Results

Tags:
#yolov8 #yolo #objectdetection #ultralytics
Comments

Join My AI Career Program
Enroll in My School and Technical Courses

NicolaiAI

Thank you so much, this is really helpful and well explained.

seyedamirhh

Thank you, you helped me a lot, but I still have a problem. I trained a model to recognize license plates. It identifies them, but I also want it to print the number of the plate it detected. Can you explain how to do that?

al-musbahitechnology
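
On reading the plate text: detection alone only gives the box, so one common approach (a sketch, not covered in the video; easyocr and the weight file name are assumptions) is to crop each detected plate and pass the crop to an OCR library:

import easyocr
from ultralytics import YOLO

model = YOLO("license_plate_best.pt")  # assumed path to your trained weights
reader = easyocr.Reader(["en"])

def read_plates(frame):
    results = model(frame, verbose=False)
    texts = []
    for box in results[0].boxes:
        x1, y1, x2, y2 = map(int, box.xyxy[0])
        plate_crop = frame[y1:y2, x1:x2]          # crop the detected plate
        for _, text, conf in reader.readtext(plate_crop):
            texts.append(text)
            print(f"Plate text: {text} (OCR confidence {conf:.2f})")
    return texts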

Congratulations on the video. Could you make a video like this, but using the monitor screen instead of a webcam?

maiquelkappel
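
To detect on the monitor instead of a webcam, one option is to replace the webcam capture with a screen grab (a sketch, assuming the mss package; not covered in the video):

import cv2
import numpy as np
import mss
from ultralytics import YOLO

model = YOLO("yolov8n.pt")

with mss.mss() as sct:
    monitor = sct.monitors[1]  # the primary monitor
    while True:
        # Grab the screen and convert BGRA -> BGR for OpenCV/YOLO.
        frame = cv2.cvtColor(np.array(sct.grab(monitor)), cv2.COLOR_BGRA2BGR)
        results = model(frame, verbose=False)
        cv2.imshow("Screen detection", results[0].plot())
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break

cv2.destroyAllWindows()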

I want to send the names of the detected objects from YOLOv8's real-time analysis to a mobile application in real time via WebSocket. How do I do that?

qmixcnk
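
One possible setup (a sketch, not from the video; assumes the websockets package and a mobile client that connects to ws://<host>:8765) is to run the detection loop inside a WebSocket handler and send the class names as JSON:

import asyncio
import json

import cv2
import websockets
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
cap = cv2.VideoCapture(0)

async def stream_detections(websocket):
    # Note: older websockets versions pass (websocket, path) instead.
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        results = model(frame, verbose=False)
        names = [results[0].names[int(c)] for c in results[0].boxes.cls]
        await websocket.send(json.dumps(names))   # e.g. ["person", "car"]
        await asyncio.sleep(0.03)                 # roughly 30 messages per second

async def main():
    async with websockets.serve(stream_detections, "0.0.0.0", 8765):
        await asyncio.Future()  # run forever

asyncio.run(main())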

Thank you for the video. How can I set a filter to detect and draw bounding boxes ONLY for "Person"? Is it possible to set up an area of the video frame so the detector only detects objects inside that area?

a.a.ardebili
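
Both should be possible with the standard Ultralytics predict arguments plus a crop (a sketch; the ROI coordinates are just an example): classes=[0] keeps only the COCO "person" class, and cropping the frame before inference restricts detection to that area:

import cv2
from ultralytics import YOLO

model = YOLO("yolov8n.pt")

def detect_person_in_roi(frame, roi=(100, 100, 500, 400)):
    # roi = (x1, y1, x2, y2): only this part of the frame is searched.
    x1, y1, x2, y2 = roi
    crop = frame[y1:y2, x1:x2]
    # classes=[0] keeps only the COCO "person" class.
    results = model(crop, classes=[0], verbose=False)
    frame[y1:y2, x1:x2] = results[0].plot()   # paste the annotated region back
    cv2.rectangle(frame, (x1, y1), (x2, y2), (255, 0, 0), 2)  # show the ROI
    return frame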

Can this be implemented on a Raspberry Pi? Will it make the FPS drop?

cashyc

How can we print the objects detected?

vijaypaulraj
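
One way to print what was detected with the standard Ultralytics results object (a sketch; yolov8n.pt is just a placeholder for whatever weights you use):

import cv2
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
cap = cv2.VideoCapture(0)
ok, frame = cap.read()

results = model(frame, verbose=False)
for box in results[0].boxes:
    class_name = results[0].names[int(box.cls[0])]
    print(f"Detected: {class_name} (confidence {float(box.conf[0]):.2f})")

cap.release()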

Great video. However, I wonder how to get a very smooth output. I have tried multithreading, but the output is a bit choppy, not totally smooth. Do you have any solution for that?

athanasioschatzipanagiotou
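
One common cause of choppy output is that frames pile up in the capture buffer while inference runs. A pattern worth trying (a sketch, not from the video) is a capture thread that always keeps only the latest frame, so the model never works on stale frames:

import threading

import cv2
from ultralytics import YOLO

class LatestFrameGrabber:
    # Reads frames in a background thread; consumers always get the newest one.
    def __init__(self, index=0):
        self.cap = cv2.VideoCapture(index)
        self.frame = None
        self.lock = threading.Lock()
        self.running = True
        threading.Thread(target=self._loop, daemon=True).start()

    def _loop(self):
        while self.running:
            ok, frame = self.cap.read()
            if ok:
                with self.lock:
                    self.frame = frame

    def read(self):
        with self.lock:
            return None if self.frame is None else self.frame.copy()

model = YOLO("yolov8n.pt")
grabber = LatestFrameGrabber(0)

while True:
    frame = grabber.read()
    if frame is None:
        continue
    results = model(frame, verbose=False)
    cv2.imshow("YOLOv8", results[0].plot())
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

grabber.running = False
grabber.cap.release()
cv2.destroyAllWindows()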

Where is the code? Can you share the Git link to your repo, please?

nnpncpd

Hi, I would like to know how to count the number of detected objects per class. Is this possible? I could only get the total number of detected objects.

mystuff
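
Per-class counts can be read straight off the results object (a sketch, assuming the standard Ultralytics API):

from collections import Counter

import cv2
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
cap = cv2.VideoCapture(0)
ok, frame = cap.read()

results = model(frame, verbose=False)
counts = Counter(results[0].names[int(c)] for c in results[0].boxes.cls)
print(counts)  # e.g. Counter({'person': 2, 'chair': 1})

cap.release()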

The following error comes up:
ValueError: too many values to unpack (expected 4)

It is in the following lines:
self.labels = [f"{confidence:0.2f}"
    for _, confidence, class_id, tracker_id
    in detections]

inteliii
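
That unpacking error usually means a newer supervision version than the one used in the video; newer Detections objects yield more than four values when you iterate over them. A more version-robust alternative (a sketch, assuming current supervision and Ultralytics APIs) builds the labels from the attribute arrays instead:

import cv2
import supervision as sv
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
cap = cv2.VideoCapture(0)
ok, frame = cap.read()

results = model(frame, verbose=False)
# from_ultralytics is the constructor in newer supervision releases.
detections = sv.Detections.from_ultralytics(results[0])

labels = [
    f"{results[0].names[int(class_id)]} {confidence:0.2f}"
    for confidence, class_id in zip(detections.confidence, detections.class_id)
]
print(labels)

cap.release()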

Can you build an app which integrates yolov8 for object detection?

thekushalgaikwad

Thanks for the video, it's really helpful.
If possible, can you share the name of your chair, please? 😅

umarmohamed

Great video! But unfortunately I ran into an error: "TypeError: unsupported format string passed to list.__format__". Any idea how to fix this?

mystuff
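
That error means a whole list is being handed to the {:.2f} format spec where a single float is expected, typically because the confidence variable holds all confidences for the frame instead of one value. A minimal illustration and fix (the variable names are just examples):

confidences = [0.91, 0.47]   # example: all confidences for one frame

# This is what triggers the error: a list passed to a float format spec.
# label = f"{confidences:0.2f}"

# Format one float at a time instead:
labels = [f"{float(conf):0.2f}" for conf in confidences]
print(labels)  # ['0.91', '0.47']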

Hello, thanks for your video, but I have a problem:
from supervision.tools.detections import Detections, BoxAnnotator
ModuleNotFoundError: No module named 'supervision.tools'
Can you fix it? Thank you, have a nice day!

nhatpham
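
supervision.tools comes from the early supervision releases used in the video; later releases reorganized the package and export those classes at the top level. Depending on the version you have installed (a sketch):

# Import used in the video (early supervision releases):
# from supervision.tools.detections import Detections, BoxAnnotator

# Newer supervision releases expose the same classes at the package top level:
from supervision import Detections, BoxAnnotator

# Alternatively, install an older supervision release that still contains
# supervision.tools so the video's import works unchanged.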

If I only want to display bounding boxes with confidence > 0.5, where can I do this in the code? Great video!!

fassesweden
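
The easiest place is the predict call itself; the Ultralytics API accepts a conf threshold, so only detections at or above 0.5 are returned and drawn (a sketch):

import cv2
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
cap = cv2.VideoCapture(0)
ok, frame = cap.read()

# conf=0.5 drops every detection below that confidence before drawing.
results = model(frame, conf=0.5, verbose=False)
cv2.imshow("Filtered detections", results[0].plot())
cv2.waitKey(0)

cap.release()
cv2.destroyAllWindows()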

Hi, I get the following error: ColorPalette.__init__() missing 1 required positional argument: 'colors'. Any idea?

currogonzalezfleta
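
That error suggests a supervision version where ColorPalette() no longer has default colors. One option (a sketch; check the API of your installed version, since it has changed across releases) is to pass the colors explicitly:

import supervision as sv

# Pass an explicit list of colors instead of relying on a default argument.
palette = sv.ColorPalette(colors=[
    sv.Color.from_hex("#ff0000"),
    sv.Color.from_hex("#00ff00"),
    sv.Color.from_hex("#0000ff"),
])
box_annotator = sv.BoxAnnotator(color=palette, thickness=3)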

Can YOLOv8 perform instance segmentation in real-time?

JasonKim-yzrv
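
Yes; the same webcam loop works with a segmentation checkpoint such as yolov8n-seg.pt, and whether it stays real-time depends on your hardware (a sketch):

import cv2
from ultralytics import YOLO

model = YOLO("yolov8n-seg.pt")  # segmentation variant of YOLOv8
cap = cv2.VideoCapture(0)

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = model(frame, verbose=False)
    cv2.imshow("YOLOv8 segmentation", results[0].plot())  # draws masks and boxes
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()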

I came here after seeing your comment link.

I'm trying to load my trained best.onnx file. However, I'm getting the following error:
return self._sess.run(output_names, input_feed, run_options)
[ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Got invalid dimensions for input: images for the following indices
index: 2 Got: 640 Expected: 320
index: 3 Got: 640 Expected: 320
Please fix either the inputs or the model.

I tried adding self.img_size = (320, 320) in __init__ and modifying the predict method as follows:
def predict(self, frame):
    frame = cv2.resize(frame, self.img_size)
    results = self.model(frame)
    return results

But I still encounter the same error. Is there any specific reason for this?

soenwzb
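
A likely explanation: Ultralytics letterboxes every frame to the imgsz of the call (default 640) during its own preprocessing, so the manual cv2.resize to 320 is overridden before the ONNX session runs. Passing imgsz=320 to the call, or re-exporting the model at 640, should make the shapes match (a sketch):

from ultralytics import YOLO

model = YOLO("best.onnx")

def predict(frame):
    # Match the size the ONNX model was exported with (320x320 here),
    # instead of resizing the frame manually with cv2.resize.
    return model(frame, imgsz=320, verbose=False)

# Alternative: re-export the PyTorch weights at 640 so the default size matches:
# YOLO("best.pt").export(format="onnx", imgsz=640)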