[53a] Intro to PyTorch Tutorial (Sebastian Raschka)
## Upcoming Events
Join our Meetup group for more events!
[53a] Sebastian Raschka: Introduction to PyTorch
[53b] Adrian Wälchli: Scaling Up with LightningLite
[53c] Q&A: Sebastian Raschka & Adrian Wälchli (PyTorch, LightningLite)
## Key Links
## Resources
## Community Announcements
[53a] Video 1: Intro to PyTorch
## Agenda
00:00 Reshama introduces Data Umbrella
08:25 Sebastian begins
09:50 What is PyTorch? (tensor library, automatic differentiation engine, deep learning library)
10:50 TensorFlow vs PyTorch: why PyTorch is so popular
12:55 PyTorch: tensor library (rank-x tensor: scalar, vector, matrix, 3D tensor, 4D tensor)
17:19 PyTorch: automatic differentiation support
25:28 automatic differentiation in PyTorch
26:30 autograd
28:12 PyTorch: deep learning library
28:30 3 Steps in Neural Network Training
29:39 Defining the Model
33:50 Step 1: Define forward method
37:28 Step 2: Defining the training loop (initialize the model and optimizer)
41:10 Iterating over the training examples
42:52 Computing the predictions
45:10 Computing the backward pass (backpropagation)
46:41 Updating the model weights
47:14 Tracking the performance
47:52 no_grad() (we don't care about gradients here, we don't need to construct the computation graph)
48:52 Why do I like PyTorch?
50:38 Live Demo
51:10 Developer conference: Lightning DevCon
52:26 demo in Jupyter Notebook
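The training steps in the agenda above can be sketched as a minimal PyTorch script: define the model and its forward method, initialize the model and optimizer, iterate over the training data, run the backward pass, update the weights, and track performance under no_grad(). The toy dataset and layer sizes here are illustrative placeholders, not taken from the talk.

```python
import torch
import torch.nn.functional as F

# Step 1: define the model and its forward method.
class MLP(torch.nn.Module):
    def __init__(self, num_features, num_classes):
        super().__init__()
        self.layers = torch.nn.Sequential(
            torch.nn.Linear(num_features, 16),
            torch.nn.ReLU(),
            torch.nn.Linear(16, num_classes),
        )

    def forward(self, x):
        return self.layers(x)  # returns the class logits

# Step 2: initialize the model and optimizer, then run the training loop.
torch.manual_seed(0)
X = torch.randn(128, 4)               # toy features (illustrative)
y = (X.sum(dim=1) > 0).long()         # toy binary labels (illustrative)
model = MLP(num_features=4, num_classes=2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

for epoch in range(20):
    logits = model(X)                 # compute the predictions (forward pass)
    loss = F.cross_entropy(logits, y)
    optimizer.zero_grad()             # clear gradients from the previous step
    loss.backward()                   # backward pass: backpropagation via autograd
    optimizer.step()                  # update the model weights

# Step 3: track performance; no_grad() skips building the computation graph,
# since we don't need gradients during evaluation.
with torch.no_grad():
    accuracy = (model(X).argmax(dim=1) == y).float().mean()
```

The `zero_grad` / `backward` / `step` sequence is the core pattern the talk walks through; everything else (model architecture, data, loss) is swappable.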
[53b] Video 2: Scaling Up with LightningLite
[53c] Video 3: Q&A with Sebastian and Adrian
## Event
This talk will introduce attendees to using PyTorch for deep learning. We will start by covering PyTorch from the ground up and learn how it can be both powerful and convenient. Machine learning models can become so large that they can no longer be trained on a notebook. Being able to take advantage of AI-optimized accelerators such as GPUs or TPUs, and to scale training across hundreds of these devices, is essential to researchers and data scientists.
However, adding support for one or several of these accelerators in the source code can be complex, time-consuming, and error-prone. What starts as a fun research project ends up as an engineering problem with hard-to-debug code. This talk will introduce LightningLite, an open-source library that removes this burden completely. You will learn how to accelerate your PyTorch training script in just under ten lines of code to take advantage of multi-GPU, TPU, multi-node, mixed-precision training, and more.
## About the Speaker: Sebastian Raschka
## About the Speaker: Adrian Waelchli
#pytorch #python #deeplearning