Fast and Scalable Model Training with PyTorch and Ray

Organizations are making substantial investments in GenAI and LLMs, and Anyscale is at the forefront of this innovation. Our Virtual AI Tutorial Series introduces core concepts of modern AI applications, with an emphasis on large-scale computing, cost efficiency, and production ML models.

In this webinar, we focus on distributed model training with PyTorch and Ray. You'll learn how to migrate your code from pure PyTorch to Ray Train and Ray Data, enabling scalable and efficient AI workflows.

Join this session to learn about:

- How to migrate your code from PyTorch ecosystem libraries to Ray Train for large-scale model training or fine-tuning
- Reference implementations for common PyTorch + Ray scenarios
- Common performance and cost-efficiency optimizations for distributed model training on Anyscale