Oriented Bounding Boxes (YOLOv8-OBB) Object Detection using Ultralytics YOLOv8 | Episode 21

Unlock the full potential of object detection with Ultralytics YOLOv8-OBB! 🚀 In Episode 21, we delve into the game-changing Oriented Bounding Boxes (OBB) feature in YOLOv8, designed to offer more precise object detection by aligning bounding boxes with object orientation. This is especially useful for applications involving aerial imagery, where objects like ships and cars appear at arbitrary orientations.

🔍 Key Highlights:
- Introduction to YOLOv8-OBB and its capabilities
- Detailed walkthrough of the YOLOv8-OBB documentation and code implementation
- Practical applications and use cases, including aerial image analysis
- Overview of available pre-trained models, including Nano, Small, Medium, Large, and Extra Large, trained on the DOTA v1 dataset
- Step-by-step guide on setting up and training your own YOLOv8-OBB model
- Instructions on dataset formatting for OBB tasks
- Demonstration of model inference using Python
- Exporting models to various formats like PyTorch, TorchScript, and ONNX
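For reference, the OBB label format covered in the episode uses one line per object: a class index followed by four corner points normalized to image size (class_index x1 y1 x2 y2 x3 y3 x4 y4). A minimal parsing sketch (the helper name and sample values are illustrative, not from the video):

```python
def parse_obb_label(line: str, img_w: int, img_h: int):
    """Parse one YOLO OBB label line into (class_id, corner points in pixels).

    Format: class_index x1 y1 x2 y2 x3 y3 x4 y4, coordinates normalized to [0, 1].
    """
    parts = line.split()
    class_id = int(parts[0])
    coords = [float(v) for v in parts[1:9]]
    # Pair up (x, y) values and scale back to pixel space.
    points = [(coords[i] * img_w, coords[i + 1] * img_h) for i in range(0, 8, 2)]
    return class_id, points

# Example: a 100x100 image with an axis-aligned square from (10,10) to (60,60)
cid, pts = parse_obb_label("0 0.1 0.1 0.6 0.1 0.6 0.6 0.1 0.6", 100, 100)
print(cid, pts)
```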

Join us in exploring how YOLOv8-OBB can revolutionize your computer vision projects. Whether you're tracking moving vehicles or analyzing drone footage, this episode provides all the insights you need to get started.

Don't forget to like, subscribe, and hit the bell icon to stay updated on the latest advancements in AI and computer vision. Visit our website for more details and join our community on Discord!

#YOLOv8 #Ultralytics #ComputerVision #AI #ObjectDetection #OrientedBoundingBoxes #DeepLearning #AIDevelopment #TechInnovation
Comments

Can OBB-annotated images be mixed with a traditional bounding-box dataset for training? Will it affect the output performance?

sudhikrishnana

How does the YOLOv8's oriented bounding box feature improve detection accuracy in real-world scenarios, like autonomous driving or drone surveillance, compared to traditional bounding boxes? Any trade-offs we should be aware of?

m

Let's assume we crop a big image into smaller regions (like a sliding window) such that sometimes half a bounding box (or at least one corner of a box) sits outside the newly cropped image.

When normalizing between 0 and 1, should we set the coordinates of the four points normally, such that some of the corners may have coordinates bigger than 1 or smaller than 0?

Or should we adapt these values to fall into the cropped image? But then the box is not necessarily rectangular anymore.

What is the best approach to train YOLO OBB under these conditions?

Vahroc
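One common way to handle the cropping question above (a sketch, not an answer from the video): keep the full rotated box and recompute its coordinates relative to the crop, letting corners outside the crop normalize to values below 0 or above 1 so the rectangle shape is preserved, then drop boxes with too little overlap rather than clipping corners. The helper names here are illustrative:

```python
def renormalize_to_crop(corners, crop_x, crop_y, crop_w, crop_h):
    """Re-express OBB corners (pixels, original image) relative to a crop.

    Values may fall outside [0, 1] when a corner lies outside the crop;
    the rectangle shape is preserved instead of being clipped.
    """
    return [((x - crop_x) / crop_w, (y - crop_y) / crop_h) for x, y in corners]

def fraction_inside(norm_corners):
    """Fraction of corners inside the crop -- a crude keep/drop heuristic."""
    inside = sum(1 for x, y in norm_corners if 0.0 <= x <= 1.0 and 0.0 <= y <= 1.0)
    return inside / len(norm_corners)

# A box partially outside a 100x100 crop whose top-left is at (50, 50)
box = [(40.0, 60.0), (90.0, 60.0), (90.0, 110.0), (40.0, 110.0)]
norm = renormalize_to_crop(box, 50, 50, 100, 100)
print(norm)                   # two corners have coordinates outside [0, 1]
print(fraction_inside(norm))  # 0.5
```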

Can I use this for detecting parcels/mail and integrate it with a robotic arm?

ningguangleaks

Is there a way to quantify the number of pixels that define a bounding box (basically quantifying sizes from, say, microscopic images)? Also, is it possible to quantify the intensity of certain colors in a bounding box (just like a colorimeter)?

zubairkhalid

Hey, as a learner I have a question on one point: should we mention the name of the source in the last line, i.e. inside the result row? Please clarify my concern.

TravelwithRasel.

How can I get the coordinates of the results predicted by a custom OBB model? If I use the non-OBB code, the result is None.

xqgvepc

How can a dataset be converted from the YOLOv8 annotated format to the YOLOv8 oriented bounding box format?

diebvci
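On the conversion question above: an axis-aligned YOLO label (class cx cy w h, normalized) is just a rotated box with angle zero, so it can be expanded into the eight-coordinate OBB corner format directly. A minimal sketch (the function name is illustrative):

```python
def xywh_to_obb(label_line: str) -> str:
    """Convert one axis-aligned YOLO label 'class cx cy w h' (normalized)
    into the YOLO OBB format 'class x1 y1 x2 y2 x3 y3 x4 y4'.

    Corners are emitted clockwise starting from the top-left.
    """
    cls, cx, cy, w, h = label_line.split()
    cx, cy, w, h = map(float, (cx, cy, w, h))
    x1, y1 = cx - w / 2, cy - h / 2  # top-left
    x2, y2 = cx + w / 2, cy - h / 2  # top-right
    x3, y3 = cx + w / 2, cy + h / 2  # bottom-right
    x4, y4 = cx - w / 2, cy + h / 2  # bottom-left
    coords = (x1, y1, x2, y2, x3, y3, x4, y4)
    return cls + " " + " ".join(f"{v:.6f}" for v in coords)

print(xywh_to_obb("0 0.5 0.5 0.2 0.4"))
```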

Yo, this vid is fire!!! Is YOLOv8-OBB good for real-time drone footage? How does it keep up when objects are moving fast and erratically? Also, how does it compare with normal bounding boxes in high-motion scenarios? Anyone tried this out in the wild?! 😱🙌

AxelRyder-qb

Does it use the same approach as YOLOv5-OBB, which uses CSL for its OBB function?

frostscarlet

Very nice work! A short question: which software tool do you suggest for annotating our dataset with oriented bounding boxes?

dimitrispolitikos

For the CLI command !yolo task=obb ..., how do I implement the same in Python using model = YOLO('best.pt') and model.predict()? What is the argument equivalent of task=obb?

dalinsixtus

How do I extract the X/Y values of a bounding box when detecting an object in real time?

kennetheladistu

How can I get the coordinates of the bounding box? Thank you!

truonggiang-nguyen
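On getting coordinates from an OBB model in Python: in recent Ultralytics versions the rotated boxes live on results[0].obb rather than results[0].boxes (which is why non-OBB code comes back None); obb.xyxyxyxy holds the four corners and obb.xywhr holds center, size, and rotation. A sketch of the xywhr-to-corners math itself, assuming the angle is in radians:

```python
import math

def xywhr_to_corners(cx, cy, w, h, angle):
    """Convert a rotated box (center, width, height, rotation in radians)
    to its four corner points."""
    c, s = math.cos(angle), math.sin(angle)
    # Half-extent offsets of the four corners in the box's local frame.
    half = [(-w / 2, -h / 2), (w / 2, -h / 2), (w / 2, h / 2), (-w / 2, h / 2)]
    # Rotate each offset and translate by the center.
    return [(cx + dx * c - dy * s, cy + dx * s + dy * c) for dx, dy in half]

corners = xywhr_to_corners(100.0, 100.0, 40.0, 20.0, math.pi / 2)
# After a 90-degree rotation the 40x20 box spans 20 wide and 40 tall.
print([(round(x, 3), round(y, 3)) for x, y in corners])
```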

Hey, I really appreciate the video. I have a question, though I don't know whether this video is the right place to ask. I downloaded a custom dataset which I annotated using Roboflow, in YOLOv8 format, for an object detection project. Unfortunately, many annotations were done with the polygon feature, so I could not train my model for YOLOv8 object detection and had to convert those polygons to bounding boxes. I wrote a Python script to do the conversion (with the help of ChatGPT), but after training, the performance metrics were all zero. After some inspection, the problem seemed to be in my dataset, and I suspect it came from the conversion. So I downloaded the data in YOLOv8 OBB format instead and inspected the label files: many coordinates were negative, but as far as I know these values should be normalized between 0 and 1.

Can you tell me why negative values and values greater than 1 appear in the label files? Also, is there an easy way to convert polygon format to bounding box format other than writing code? Thanks in advance.

sabinasultana
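On the polygon-to-box conversion described above: the axis-aligned YOLO box is just the min/max extent of the polygon's points, recentered and normalized. A minimal sketch (the clamping step is one plausible way to keep stray annotation points from producing out-of-range label values; function name is illustrative):

```python
def polygon_to_yolo_bbox(points):
    """Convert a normalized polygon [(x, y), ...] to a YOLO axis-aligned
    box (cx, cy, w, h). Points are clamped to [0, 1] so vertices annotated
    slightly outside the image cannot produce negative label values."""
    xs = [min(max(x, 0.0), 1.0) for x, _ in points]
    ys = [min(max(y, 0.0), 1.0) for _, y in points]
    x_min, x_max = min(xs), max(xs)
    y_min, y_max = min(ys), max(ys)
    return ((x_min + x_max) / 2, (y_min + y_max) / 2,
            x_max - x_min, y_max - y_min)

# A triangle with one vertex annotated slightly outside the image
box = polygon_to_yolo_bbox([(0.2, 0.3), (0.8, 0.3), (0.5, -0.05)])
print(box)  # approximately (0.5, 0.15, 0.6, 0.3)
```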

Hi, why does the label not appear in my detection?

asyraffauzi

😍 Any visual heatmaps adapted for OBB object detection?

csnioos

Nice, can you share your test video (ships.mp4)?

vokgypj