[MLOPS] Model Serving Monitoring and Traceability - The Bigger Picture - The AIIA Summit 2022

The recording of our talk at the AI Infrastructure Alliance (AIIA) micro summit. This talk covers ClearML Serving, including monitoring, and focuses on the importance of being able to trace a deployed model all the way back to the original experiment, code, and data used to train it - one of the major advantages of a single-tool, end-to-end MLOps workflow.
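
The traceability described in the talk can be exercised programmatically. A minimal sketch with the ClearML Python SDK, assuming a placeholder model ID taken from the serving endpoint:

from clearml import Model, Task

# Placeholder: the model ID reported for the deployed serving endpoint.
MODEL_ID = "<deployed-model-id>"

# Fetch the registered model's metadata from the ClearML server.
model = Model(model_id=MODEL_ID)

# Every ClearML model keeps the ID of the task that created it, so the deployed
# artifact can be walked back to the original experiment, its code reference,
# hyperparameters and logged data.
training_task = Task.get_task(task_id=model.task)
print("Created by task:", training_task.id, training_task.name)
print("Hyperparameters:", training_task.get_parameters())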

💻 Get a server:

📄 Documentation on Fundamentals:

✨ Follow us and star us!

Comments

Could you please provide code for tagging the best model?

efeliccalebdansou
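
A minimal sketch of one way to tag the best experiment with the ClearML SDK; the project, task and metric names are placeholders rather than anything shown in the talk:

from clearml import Task

# Placeholder project/task names - adjust to your own experiments.
candidates = Task.get_tasks(project_name="my project", task_name="train model")

def validation_accuracy(task):
    # get_last_scalar_metrics() returns {title: {series: {"last": ..., "min": ..., "max": ...}}}
    metrics = task.get_last_scalar_metrics()
    return metrics.get("validation", {}).get("accuracy", {}).get("last", float("-inf"))

# Pick the experiment with the highest reported validation accuracy and tag it,
# so downstream tooling can look up models coming from tasks carrying this tag.
best_task = max(candidates, key=validation_accuracy)
best_task.add_tags(["best"])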

Could you please provide code for metrics monitoring, or the query used to produce the Prometheus/Grafana graph shown at the beginning?
Also, how do you provide custom metrics for an endpoint and view them in Grafana/Prometheus?

ARPITAGARWAL-fosi
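
Not a definitive answer, but a sketch of registering custom endpoint metrics by calling the clearml-serving CLI from Python; the service ID, endpoint name, variable names and bucket edges are placeholders, and the exact flags may differ between clearml-serving versions:

import subprocess

# Register input feature "x0" and prediction "y" as histogram metrics for the
# endpoint, using the given bucket edges. All names/values are placeholders.
subprocess.run(
    [
        "clearml-serving", "--id", "<serving-service-id>",
        "metrics", "add",
        "--endpoint", "test_model",
        "--variable-scalar", "x0=0,0.1,0.5,1", "y=0,0.25,0.5,0.75,1",
    ],
    check=True,
)
# The logged variables are then scraped by Prometheus and can be plotted in
# Grafana as per-endpoint histograms.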

Where and how do you store models? I've already seen that you have support for Google, Azure, and AWS, but do you offer model storage with the package itself?

abdolrahimtooraanian
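
As a rough sketch of the storage side: models are uploaded to whatever output_uri points at, and output_uri=True uses the file server bundled with the ClearML server, so cloud buckets are optional; the project and task names below are placeholders:

from clearml import Task

task = Task.init(
    project_name="my project",   # placeholder
    task_name="train model",     # placeholder
    # True uploads models/artifacts to the ClearML file server;
    # a bucket URI such as "s3://my-bucket/models" works the same way.
    output_uri=True,
)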