Demystifying Parallel and Distributed Deep Learning: An In-Depth Concurrency Analysis

In this video from the 2018 Swiss HPC Conference, Torsten Hoefler from ETH Zürich presents: Demystifying Parallel and Distributed Deep Learning: An In-Depth Concurrency Analysis.

"Deep Neural Networks (DNNs) are becoming an important tool in modern computing applications. Accelerating their training is a major challenge and techniques range from distributed algorithms to low-level circuit design. In this talk, we describe the problem from a theoretical perspective, followed by approaches for its parallelization.

Specifically, we present trends in DNN architectures and the resulting implications on parallelization strategies. We discuss the different types of concurrency in DNNs; synchronous and asynchronous stochastic gradient descent; distributed system architectures; communication schemes; and performance modeling. Based on these approaches, we extrapolate potential directions for parallelism in deep learning."
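The abstract's contrast between synchronous and asynchronous stochastic gradient descent can be made concrete with a small sketch. The following Python example is an illustration, not code from the talk; the linear model, shard sizes, and learning rate are assumptions. It emulates data-parallel training: synchronous SGD averages the workers' gradients each step, as an allreduce would, while asynchronous SGD lets each worker apply its gradient immediately against possibly stale parameters.

# Minimal sketch (illustrative assumptions throughout): synchronous vs.
# asynchronous data-parallel SGD on a toy linear-regression problem.
import numpy as np

rng = np.random.default_rng(0)
n_workers, dim, lr, steps = 4, 8, 0.1, 100

# Synthetic task: one data shard per worker.
w_true = rng.normal(size=dim)
shards = []
for _ in range(n_workers):
    X = rng.normal(size=(64, dim))
    shards.append((X, X @ w_true))

def grad(w, shard):
    X, y = shard
    # Gradient of the mean squared error on this worker's shard.
    return 2.0 / len(y) * X.T @ (X @ w - y)

# Synchronous SGD: average all workers' gradients before every update,
# emulating an allreduce across workers.
w_sync = np.zeros(dim)
for _ in range(steps):
    g = np.mean([grad(w_sync, s) for s in shards], axis=0)
    w_sync -= lr * g

# Asynchronous SGD: workers update a shared parameter vector one at a
# time, so each gradient may be computed against slightly stale weights.
w_async = np.zeros(dim)
for _ in range(steps):
    for k in rng.permutation(n_workers):  # nondeterministic arrival order
        w_async -= lr * grad(w_async, shards[k])

print("sync  error:", np.linalg.norm(w_sync - w_true))
print("async error:", np.linalg.norm(w_async - w_true))

In this toy setting both variants converge; the point of the sketch is the structural difference, which is what drives the consistency and communication trade-offs the talk analyzes.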

Comments
Author

This is the coolest lecture in the world!! Exactly what I needed

Keepedia
Author

Starting at 13:35: I think it is not 4x because the sizes of the output and the weights are different?

zhengchunliu
Author

Very comprehensive and informative talk!

rzhang
Author

Is this paper published in a journal (not just the arXiv version)?

jo-of-joey