Boost YOLO Inference Speed and reduce Memory Footprint using ONNX-Runtime | Part-2 (Continued)

In this stream we continue our project of converting Ultralytics YOLO models from their standard PyTorch (.pt) format to ONNX format to improve their inference speed and reduce their memory footprint by running them through ONNX Runtime. Our goal in this video is to implement the pre-processor and the post-processor using only ONNX Runtime and NumPy, so that both inference engines produce the same predictions. We continue working on OBB (Oriented Bounding Box) models. The previous stream ended abruptly due to internet issues; continue watching from this one.
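As a rough illustration of the NumPy-only pre-processing step mentioned above, here is a minimal sketch. It assumes the exported model expects a 640x640 NCHW float32 input scaled to [0, 1], which is typical for Ultralytics ONNX exports; the function name and exact input size are assumptions, not taken from the video.

```python
import numpy as np

def preprocess(image: np.ndarray) -> np.ndarray:
    """Convert an HWC uint8 image into the NCHW float32 tensor
    that an ONNX Runtime session typically expects.
    (Sketch only; resizing/letterboxing is omitted.)"""
    x = image.astype(np.float32) / 255.0   # scale pixel values to [0, 1]
    x = x.transpose(2, 0, 1)               # HWC -> CHW
    return x[np.newaxis, ...]              # add batch dimension -> NCHW
```

The resulting array can be fed directly to `onnxruntime.InferenceSession.run` as the model's input tensor.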

Github Repo:
-----
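For the OBB post-processing side, a common sub-step is turning a predicted oriented box (center, width, height, rotation angle) into its four corner points with plain NumPy. The sketch below is an assumed helper for illustration, not code from the stream; the angle is taken in radians.

```python
import numpy as np

def obb_corners(cx: float, cy: float, w: float, h: float, theta: float) -> np.ndarray:
    """Return the 4x2 array of corner points for an oriented bounding box
    given its center (cx, cy), size (w, h), and rotation angle theta (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])        # 2D rotation matrix
    # half-extent offsets of an axis-aligned box centred at the origin
    offsets = np.array([[-w / 2, -h / 2],
                        [ w / 2, -h / 2],
                        [ w / 2,  h / 2],
                        [-w / 2,  h / 2]])
    return offsets @ R.T + np.array([cx, cy])   # rotate, then translate to center
```

With corners in hand, the rest of the post-processor (confidence filtering, rotated NMS) can stay in pure NumPy as well.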