Boost YOLO Inference Speed and Reduce Memory Footprint Using ONNX Runtime | Part 2

In this stream we continue our project to convert Ultralytics YOLO models from their standard PyTorch (.pt) format to ONNX format, improving their inference speed and reducing their memory footprint by running them through ONNX Runtime. Our goal in this video is to build the pre-processor and the post-processor using only onnxruntime and NumPy, so that both inference engines produce the same predictions. As in the previous part, we work with OBB (Oriented Bounding Box) models.
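As a rough illustration of the NumPy-only pre-processing discussed above, the sketch below letterboxes an image to the model's square input size and converts it to the 1x3xHxW float32 layout YOLO ONNX models expect. The helper names (`letterbox`, `to_tensor`) and the nearest-neighbor resize are assumptions for this sketch, not Ultralytics' actual implementation, which uses bilinear interpolation via OpenCV.

```python
import numpy as np

def letterbox(img, new_size=640, pad_value=114):
    """Resize an HxWx3 uint8 image to new_size x new_size, preserving
    aspect ratio and padding the remainder with pad_value.
    Nearest-neighbor resize in pure NumPy (hypothetical helper;
    Ultralytics uses bilinear resizing via OpenCV)."""
    h, w = img.shape[:2]
    scale = new_size / max(h, w)
    nh, nw = round(h * scale), round(w * scale)
    # Nearest-neighbor index maps back into the source image.
    ys = (np.arange(nh) / scale).astype(int).clip(0, h - 1)
    xs = (np.arange(nw) / scale).astype(int).clip(0, w - 1)
    resized = img[ys[:, None], xs[None, :]]
    # Paste the resized image onto a padded square canvas, centered.
    canvas = np.full((new_size, new_size, 3), pad_value, dtype=img.dtype)
    top, left = (new_size - nh) // 2, (new_size - nw) // 2
    canvas[top:top + nh, left:left + nw] = resized
    return canvas

def to_tensor(img):
    """HWC uint8 -> 1x3xHxW float32 in [0, 1], the input layout
    exported YOLO ONNX models expect."""
    x = img.astype(np.float32) / 255.0
    x = x.transpose(2, 0, 1)[None]
    return np.ascontiguousarray(x)
```

The resulting tensor would then be fed to an `onnxruntime.InferenceSession` via `session.run(None, {input_name: x})`, with the post-processor mapping the raw output back through the same scale and padding offsets.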
Github Repo:
-----