Serving ML Model with Docker, RabbitMQ, FastAPI and Nginx

In this tutorial I explain how to serve an ML model using Docker, RabbitMQ, FastAPI and Nginx. The solution is based on our open-source product Katana ML Skipper (or just Skipper), which runs an ML workflow as a group of microservices. It is not limited to ML: you can run any workload with Skipper and plug in your own services. Feel free to reach out if you have any questions.
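The core pattern in the video is an API that publishes tasks to a message broker and a separate worker that consumes them and runs the model. Below is a minimal stdlib sketch of that flow, using an in-process `queue.Queue` as a stand-in for RabbitMQ and a placeholder function as a stand-in for the model; it is not the actual Skipper code, just the request/queue/worker shape.

```python
import json
import queue
import threading

# The queue stands in for RabbitMQ; in Skipper the FastAPI service
# publishes to the broker and a worker microservice consumes from it.
task_queue: queue.Queue = queue.Queue()
results: dict = {}

def submit_task(task_id: str, features: list) -> None:
    """API side: serialize the request and hand it to the broker."""
    task_queue.put(json.dumps({"task_id": task_id, "features": features}))

def worker() -> None:
    """Worker side: consume tasks, run the 'model', store the result."""
    while True:
        message = task_queue.get()
        task = json.loads(message)
        # Placeholder "model": the sum of features stands in for a prediction.
        results[task["task_id"]] = float(sum(task["features"]))
        task_queue.task_done()

threading.Thread(target=worker, daemon=True).start()
submit_task("req-1", [1.0, 2.0, 3.0])
task_queue.join()
print(results["req-1"])  # 6.0
```

In the real setup the two sides live in separate containers and talk over RabbitMQ, so the API can answer quickly while the worker scales independently.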

0:00 Introduction
1:50 Serving flow
5:25 Solution demo
10:30 Docker compose
15:27 Summary
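The Docker compose chapter wires these services together. A hypothetical minimal layout could look like the sketch below; the service names, build paths, and environment variables are illustrative assumptions, and the real Skipper compose file defines more services and settings.

```yaml
version: "3.8"
services:
  rabbitmq:
    image: rabbitmq:3-management
    ports:
      - "5672:5672"
      - "15672:15672"          # management UI
  api:
    build: ./api               # FastAPI app that publishes tasks to RabbitMQ
    environment:
      - RABBITMQ_HOST=rabbitmq
    depends_on:
      - rabbitmq
  worker:
    build: ./worker            # ML service that consumes tasks and runs the model
    environment:
      - RABBITMQ_HOST=rabbitmq
    depends_on:
      - rabbitmq
  nginx:
    image: nginx:alpine        # reverse proxy in front of the FastAPI app
    ports:
      - "80:80"
    depends_on:
      - api
```

Keeping Nginx as the only service with a public HTTP port lets it terminate client traffic while the API, worker, and broker talk over the internal compose network.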

CONNECT:
- Subscribe to this YouTube channel

#FastAPI #MachineLearning #Python