PyTorch Lightning - Accelerator

In this video, we give a short introduction to how Lightning distributes computation and syncs gradients across multiple GPUs. The default option is Distributed Data Parallel, or DDP in Lightning.
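As a rough illustration of what the video describes, here is a minimal sketch of selecting the DDP strategy via the Lightning `Trainer`. The model, dataloaders, and device count are assumptions, not from the video; under DDP, Lightning launches one process per device, each process sees a shard of the data, and gradients are all-reduced across processes after each backward pass.

```python
# Minimal sketch (not from the video): configuring DDP in PyTorch Lightning.
# Assumes a LightningModule `MyModel` and a DataModule `my_data` exist.
import pytorch_lightning as pl

def make_ddp_trainer():
    # One process is spawned per device; gradients are averaged across
    # processes after backward(), so each replica stays in sync.
    return pl.Trainer(
        accelerator="gpu",   # assumes CUDA GPUs are available
        devices=4,           # hypothetical: 4 GPUs on one node
        strategy="ddp",      # Distributed Data Parallel (Lightning's default for multi-GPU)
    )

# trainer = make_ddp_trainer()
# trainer.fit(MyModel(), datamodule=my_data)
```

This is a configuration sketch only; on a machine without GPUs, `accelerator="gpu"` would raise at `Trainer` construction.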

Comments

Thanks for the tutorial and for updating it :)

dataphile

For models based on UNets (about 7M parameters), which strategy do you recommend: DP or DDP?

edgarcin