Solving the Model Serving Component of the MLOps Stack With Chaoyu Yang
On this episode of MLOps Live, we have Chaoyu Yang as our guest. Chaoyu shares his experience and answers your questions about how to solve the model serving component of the MLOps stack.
MLOps Live is a biweekly Q&A show where practitioners doing ML at a reasonable scale answer questions from other ML practitioners. Every episode focuses on one specific subject related to MLOps. Only the juicy bits, the things you won’t find in a company blog post.
00:00 Introduction to MLOps Live
01:30 Chaoyu’s background and shift into MLOps
04:10 How would you summarize MLOps and model serving?
05:40 What’s the significant difference between model deployment in hyper-scale vs early-stage companies?
08:10 What’s an overkill model deployment setup?
11:20 What is the biggest challenge with deploying using open source?
13:00 Are data scientists and ML engineers able to write code like software engineers?
16:30 What is the most efficient way to expand production at a reasonable scale?
25:15 Managing services from the cloud or setting up local cloud servers?
27:10 How to achieve sharing between models without unnecessary data transfers?
30:20 The future of BentoML?
32:13 Factors that affect continuous ML model training?
34:38 Considerations when planning for production requirements?
40:00 When do you think about automation?
44:50 How to secure models against adversarial attacks?
46:30 How crucial is logging production data?
48:58 What do you say to people who claim they don’t need MLOps?
Follow us & stay updated:
► MLOps Community: (#neptune-ai)