Converting from PyTorch to PyTorch Lightning
In this video, William Falcon refactors a PyTorch VAE into PyTorch Lightning. As the video makes clear, this was an honest attempt at refactoring an unfamiliar repository without any prior knowledge of it. Even so, the full conversion took under 45 minutes.
This video is meant to show all the details and issues you might run into while converting a model.
The original VAE is here:
The refactored Lightning VAE is here:
00:00 - Intro
00:55 - Why you need PyTorch Lightning (even though PyTorch is already simple)
01:51 - Advantages of 16-bit precision
02:27 - Tour of the PyTorch Lightning repo
03:28 - Finding the "magic" (i.e., the core training-loop code)
07:47 - training_step
10:34 - train_dataloader
12:09 - configure_optimizers
12:54 - training_step vs forward
14:44 - validation_step
23:55 - dataloaders passed into .fit() vs inside LightningModule
26:38 - how to structure forward
29:26 - validation_epoch_end
30:52 - Using tensorboard (or any other logger)
33:59 - automatic model checkpointing
34:44 - how to add all Trainer args to Argparse automatically
35:56 - single-GPU training
38:22 - multi-GPU training
39:32 - 16-bit precision training
40:41 - summary