Hybrid Machine Learning. Integrations between local compute and AWS. AGPIAL Audiobook
Hybrid Machine Learning.
Integrations between local compute and the AWS cloud across the machine learning lifecycle.
Abstract
The purpose of this document is to outline known considerations, design patterns, and solutions that customers can leverage today when considering hybrid dimensions of the AWS AI/ML stack across the entire machine learning (ML) lifecycle.
Due to the scalability, flexibility, and pricing models enabled by the cloud, we at AWS continue to believe that the majority of ML workloads are better suited to run in the cloud over the long haul.
However, given that less than 5% of overall IT spend is allocated to the cloud, more than 95% of IT spend remains on-premises.
This tells us that there is a sizeable underserved market.
Change is hard, particularly for enterprises.
The complexity, magnitude, and length of migrations can be perceived as barriers to getting started.
For these customers, we propose hybrid ML patterns as an intermediate step in their cloud and ML journey.
Hybrid ML patterns are those that involve a minimum of two compute environments, typically local compute resources such as personal laptops or corporate data centers, and the cloud.
See the Basics section for a full introduction to the concept of hybrid ML.
We think customers win when they deploy a workload that touches the cloud and gets value from it, and we at AWS are committed to supporting each customer’s success even if only a few percentage points of that workload run in the cloud today.
This document is intended for readers who already have a baseline understanding of machine learning and of Amazon SageMaker.
We will not dive into best practices for Amazon SageMaker per se, nor into best practices for storage or edge services.
Instead, we will focus explicitly on hybrid workloads, and refer readers to resources elsewhere as necessary.
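To make the hybrid pattern concrete, the sketch below shows one common setup: developing on a local machine while launching training on managed cloud infrastructure with the SageMaker Python SDK. It is a minimal sketch under stated assumptions, not the whitepaper's reference implementation; the role ARN, S3 path, entry-point script, and framework version are hypothetical placeholders.

# Minimal sketch: develop locally, train in the cloud (SageMaker Python SDK).
# Assumes the SDK is installed locally and AWS credentials are configured.
# The role ARN, S3 path, entry-point script, and framework version are
# placeholders, not values taken from the source document.
import sagemaker
from sagemaker.sklearn.estimator import SKLearn

session = sagemaker.Session()
role = "arn:aws:iam::111122223333:role/SageMakerExecutionRole"  # hypothetical role

estimator = SKLearn(
    entry_point="train.py",        # local training script, uploaded by the SDK
    framework_version="1.2-1",     # assumed scikit-learn container version
    instance_type="ml.m5.xlarge",  # cloud instance that runs the training job
    instance_count=1,
    role=role,
    sagemaker_session=session,
)

# The job itself runs in the cloud; only the script and a reference to the
# training data in Amazon S3 leave the local environment.
estimator.fit({"train": "s3://example-bucket/hybrid-ml/train/"})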
00:00:00 Welcome
00:00:14 Abstract
00:02:05 Introduction
00:08:04 What is hybrid?
00:09:32 What hybrid is not
00:10:14 Hybrid patterns for development
00:11:13 Develop on personal computers, to train and host in the cloud
00:13:41 Develop on local servers, to train and host in the cloud
00:15:03 Hybrid patterns for training
00:15:22 Training locally, to deploy in the cloud (see the code sketch after this chapter list)
00:17:28 How to monitor your model in the cloud
00:18:44 How to handle retraining / retuning
00:20:20 How to serve thousands of models in the cloud at low cost
00:26:50 Data Wrangler & Snowflake
00:27:31 Train in the cloud, to deploy ML models on-premises
00:32:13 Monitor ML models deployed on-premises with SageMaker Edge Manager
00:33:29 Hybrid patterns for deployment
00:35:06 Serve models in the cloud to applications hosted on-premises
00:36:19 Host ML models with Lambda@Edge for applications on-premises
00:38:35 AWS Local Zones
00:38:59 AWS Wavelength
00:39:26 Training with a third-party SaaS provider to host in the cloud
00:40:23 Control plane patterns for hybrid ML
00:42:06 Orchestrate Hybrid ML Workloads with Kubeflow and EKS Anywhere
00:43:07 Auxiliary services for hybrid ML patterns: AWS Outposts
00:43:56 AWS Inferentia
00:44:56 AWS Direct Connect
00:45:09 Amazon ECS / EKS Anywhere
00:46:00 Hybrid ML Use Cases: Enterprise Migrations
00:46:41 Manufacturing
00:47:35 Gaming
00:48:14 Mobile application development
00:48:48 AI-enhanced media and content creation
00:49:49 Autonomous Vehicles
00:50:54 Conclusion
00:51:51 Contributors
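As a companion to the chapter on training locally and deploying in the cloud (00:15:22), here is a minimal sketch of one way to take a model artifact produced on local hardware, upload it to Amazon S3, and host it on a SageMaker real-time endpoint. The artifact path, inference script, role ARN, and framework version are illustrative assumptions rather than values from the whitepaper.

# Minimal sketch: a locally trained model, packaged as model.tar.gz, is uploaded
# to S3 and served from a managed SageMaker endpoint. Paths, the role ARN, the
# inference script, and the framework version are hypothetical placeholders.
import sagemaker
from sagemaker.sklearn.model import SKLearnModel

session = sagemaker.Session()
role = "arn:aws:iam::111122223333:role/SageMakerExecutionRole"  # hypothetical role

# Upload the locally produced artifact (a gzipped tar containing the model file).
model_data = session.upload_data("model.tar.gz", key_prefix="hybrid-ml/models")

model = SKLearnModel(
    model_data=model_data,
    role=role,
    entry_point="inference.py",  # local script providing model_fn/predict_fn handlers
    framework_version="1.2-1",   # assumed scikit-learn container version
    sagemaker_session=session,
)

# Create a real-time endpoint in the cloud; applications hosted on-premises can
# then call it over the network, which is the serving pattern covered at 00:35:06.
predictor = model.deploy(initial_instance_count=1, instance_type="ml.m5.large")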