How to Deploy ML Solutions with FastAPI, Docker, & AWS


This is the 5th video in a series on Full Stack Data Science. Here, I walk through a simple 3-step approach to deploying machine learning solutions: create an API with FastAPI, package it into a Docker image, and deploy the container on AWS ECS.


Intro - 0:00
ML Deployment - 0:33
3-Step Deployment Approach - 1:52
Example Code: Deploying Semantic Search for YT Videos - 3:21
Creating API with FastAPI - 4:31
Create Docker Image - 11:13
Push Image to Docker Hub - 17:15
Deploy Container on AWS ECS - 19:46
Testing Gradio UI - 25:54
What's Next? - 27:07
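Steps 2 and 3 of the chapters above (Create Docker Image, Push Image to Docker Hub) boil down to a Dockerfile plus two CLI commands. A rough sketch, assuming the API code lives in `app.py` with its dependencies in `requirements.txt` (both names are hypothetical):

```dockerfile
# Slim Python base keeps the image small
FROM python:3.10-slim
WORKDIR /app

# Install dependencies first so this layer is cached between builds
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

# Serve the FastAPI app with uvicorn on the port the container exposes
EXPOSE 8080
CMD ["uvicorn", "app:app", "--host", "0.0.0.0", "--port", "8080"]
```

From there, `docker build -t <dockerhub-user>/yt-search .` followed by `docker push <dockerhub-user>/yt-search` publishes the image, and an ECS task definition can then pull it by that name.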
Comments

One of the best aspects of AWS ECS (Elastic Container Service) is how seamlessly everything comes together, whether you're using FastAPI or Docker. It's all integrated beautifully.

FREAK-stkk

This is such a great video, no nonsense straight to the point!

divyanshtripathi

Hey Shawn, videos on FastAPI and Docker from you would be great.

pawe

This is fantastic stuff as I’m pulling out my hair on this same step.

You have the right idea for the next video, but I think the one after that should be making the chat interface publicly accessible.

brianmorin

Thank you so much, videos like these are really helpful.

dhirajkumarsahu

Bro, create a video on handling POST and GET requests across multiple endpoints with FastAPI, then dockerizing and deploying via ECR and AWS Lambda functions.

thonnatigopi

That was an awesome video! I have a task for one-click ML model deployment on AWS, Azure, and GCP (one click for AWS, another for Azure). Can you please briefly outline the roadmap?

abbasrabbani

I have a DL model that takes about 5 minutes and 3 GB of GPU memory to process a query and return a result. I need to handle 5 queries per minute, and I have an 8 GB GPU on GCP. How can I deploy such a model without memory leaks while using the GPU to its full potential?

rameshbabu