Training Transformers with AWS Trainium and the Hugging Face Neuron AMI

In this video, I show you how to accelerate Transformer training with AWS Trainium, a new custom chip designed by AWS, and the brand new Hugging Face Neuron Amazon Machine Image (AMI).

First, I walk you through the setup of an Amazon EC2 trn1.32xlarge instance, equipped with 16 Trainium chips. Then, I run a natural language processing job, training a BERT model to classify the Yelp review dataset on 32 Neuron cores.
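Running on all 32 Neuron cores means launching one data-parallel worker per core, with each worker seeing a distinct slice of the dataset. The sketch below is a minimal, stdlib-only illustration of that rank-based sharding; `shard_indices` is a hypothetical helper for illustration, not part of the AWS Neuron SDK or the Hugging Face libraries.

```python
# Minimal sketch of data-parallel sharding across the 32 Neuron cores of a
# trn1.32xlarge (16 Trainium chips, 2 Neuron cores each). Hypothetical helper
# for illustration only -- in practice the launcher and DataLoader sampler
# handle this for you.

def shard_indices(dataset_size: int, world_size: int, rank: int) -> list:
    """Return the sample indices that the worker with this rank processes."""
    return list(range(rank, dataset_size, world_size))

WORLD_SIZE = 32  # one worker per Neuron core

# The Yelp review full dataset has 650,000 training samples, so each core
# handles roughly 20,000 of them per epoch.
shards = [shard_indices(650_000, WORLD_SIZE, r) for r in range(WORLD_SIZE)]
assert sum(len(s) for s in shards) == 650_000  # every sample covered exactly once
```

In the video, the distributed launch itself is handled by the PyTorch/Neuron tooling preinstalled on the AMI; this sketch only shows why no two cores duplicate work.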

⭐️⭐️⭐️ Don't forget to subscribe to be notified of future videos ⭐️⭐️⭐️

Interested in hardware acceleration? Check out my other videos:
Comments

This is AWESOME! I've been pulling my hair out with AWS custom chips over the last few days. Thank you!!!

OlabodeAdedoyin

This looks really cool! It may even make me want to switch away from runpod for larger jobs.

nathanbanks

Hi Julien, can you give some guidance on how this can fit into an automated workflow with SageMaker Pipelines? Thank you.

loflog