How to use Apache TVM to optimize your ML models

Apache TVM is an open-source machine learning compiler that distills the largest, most powerful deep learning models into lightweight software that can run on the edge. This allows the compiled model to run inference much faster on a variety of target hardware (CPUs, GPUs, FPGAs, and accelerators) and to save significant costs.
In this deep dive, we’ll discuss how Apache TVM works, share the latest and upcoming features and run a live demo of how to optimize a custom machine learning model.

Connect with us:
Comments

Wow, this is a good one for setting the scene and giving context around DL compilers, along with the motivation.

billykotsos

Excellent video. Great content, some depth, yet easy to follow along.

Also, great audio setup. Would like to know more about that too ;)

mkamp

Great talk! You explained everything with clarity.

shoaibasif

This video is a fantastic explanation; I'm going to dive into those publications you mentioned too.

ultramadscientist

Thanks for the presentation. Clear explanations.

satheeshbrcm

Looks like TVM does the same thing that OpenVINO does.

redradist