ONNX and ONNX Runtime

What is the universal inference engine for neural networks?

TensorFlow? PyTorch? Keras? There are many popular frameworks for working with deep learning and ML models, each with its own pros and cons for product development and research. Once you decide what to use and train a model, you need to figure out how to deploy it onto your platform and architecture of choice. Cloud? Windows? Linux? IoT? Performance-sensitive? What about GPU acceleration? With a landscape of 1,000,001 different combinations for deploying a trained model from some chosen framework into a performant production environment for prediction, we can benefit from some standardization.

Comments

Really awesome presentation skills, simply precise and clear.

arslanali

Thanks for the video!
I am interested in adding a new custom operator to onnxruntime in C++, but there isn't an example for it anywhere. Does anyone know of one?

galdavid

The slides are shown for too short a time. I constantly need to back up and pause to digest them.

kengustafson

OMG, please show the slides more than the presenter. I can never finish reading most of the slides because they are shown for only a few seconds. This is very, very, very annoying and disappointing.

briancase

Can I train it in PyTorch in Python and run the model in Java, JavaScript, C++, and more? I'm talking about RNNs, Transformers, Fast R-CNN, and more advanced models.

jonathansum

I can't get the mlmodel-to-ONNX converter to work on Mac.

poolplayer

Good presentation, badly edited; the ratio of presenter footage to slides is awful. Some slides are barely on screen for a second.

sgccarey

Pranav Sharma bhai, please bring toilets to us in India

indahpratiwi