Named Tensors, Model Quantization, and the Latest PyTorch Features - Part 1

PyTorch, the popular open-source ML framework, has continued to evolve rapidly since the introduction of PyTorch 1.0, which brought an accelerated workflow from research to production. We'll take a deep dive into some of the most important new advances, including the ability to name tensor dimensions, support for quantization-aware training and post-training quantization, improved distributed training on GPUs, and streamlined mobile deployment. We'll also cover new developer tools and domain-specific frameworks, including Captum for model interpretability, Detectron2 for computer vision, and speech extensions for Fairseq.
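The ability to name tensor dimensions mentioned above can be sketched as follows. This is a minimal illustration, assuming PyTorch 1.3 or later, where named tensors landed as an experimental feature; the tensor shape and dimension names are made up for the example.

```python
import torch

# Named tensors (experimental since PyTorch 1.3): dimensions carry
# names, so reductions and shape checks can use names instead of
# positional indices.
imgs = torch.randn(2, 3, 32, 32, names=('N', 'C', 'H', 'W'))

# Reduce over the channel dimension by name rather than by position.
mean_per_pixel = imgs.mean('C')

print(mean_per_pixel.names)  # ('N', 'H', 'W')
print(mean_per_pixel.shape)  # torch.Size([2, 32, 32])
```

Referring to dimensions by name rather than index makes this kind of reduction robust to reordering, e.g. switching between NCHW and NHWC layouts.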

Comments

*My takeaways:*
*0. Agenda* 0:44
*1. Community growth* 1:30
- Paper implementations grouped by framework: PyTorch (44%), TensorFlow (23%), others (32%), and JAX+MXNet+Caffe2 (1%) 2:20
*2. Usage at Facebook* 3:22
*3. PyTorch background & motivation* 5:29
- Core principle 1 - developer efficiency 9:52: clean API, TorchScript, TensorBoard
- Core principle 2 - building for scale (i.e. high-performance execution for model training and inference) 13:31: optimizing for hardware backends, PyTorch JIT
*4. PyTorch 1.4 and the latest features* 15:13
- Named Tensors 15:30
- Java bindings 17:25
- Captum 18:19
- PyTorch Mobile 18:48
- Quantization 20:49
- PyTorch Elastic 22:32
- PyTorch RPC 24:08

leixun
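As a companion to the quantization item in the takeaways above, here is a minimal sketch of post-training dynamic quantization, one of the modes PyTorch supports alongside quantization-aware training. The toy `Sequential` model and its layer sizes are invented for illustration.

```python
import torch
import torch.nn as nn

# Toy float model; the layer sizes are arbitrary, for illustration only.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))

# Post-training dynamic quantization: weights of the listed module
# types are converted to int8; activations are quantized on the fly
# at inference time. No retraining or calibration data is required.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

out = quantized(torch.randn(1, 16))
print(out.shape)  # torch.Size([1, 4])
```

Dynamic quantization is the lowest-effort entry point; static post-training quantization and quantization-aware training trade more setup for better accuracy and speed.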

Switched to PyTorch from TensorFlow in 2019, never regretted it ... awesome framework

bibhashmitra

How does this have only 100 views in 1 hour?!

NitishSinghchin