'Scaling Deep Learning Applications Theoretical and Practical Limits', Janis Keuper, Fraunhofer IT

Our recent research [1] showed that scaling the training of Deep Neural Networks across distributed systems is a very hard problem that still lacks feasible solutions. In this talk, we give an introduction to the main theoretical limits prohibiting efficient scalability and to current approaches towards applicable solutions. We also discuss many additional practical problems that arise in real-world deployments of Deep Learning algorithms on HPC platforms.
Finally, we discuss the consequences and implications of these limitations in the context of Oil and Gas applications, with a focus on algorithms for seismic imaging.
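The core scalability limit discussed in the talk can be illustrated with a simple, Amdahl-style cost model for synchronous data-parallel SGD: compute per step shrinks with the worker count, but every step still pays a gradient-synchronization cost. The function name and the timing constants below are hypothetical, chosen only to make the saturation effect visible; real values depend on the model, interconnect, and batch size.

```python
def data_parallel_speedup(n, t_compute=1.0, t_comm=0.05):
    # One synchronous SGD step: the forward/backward compute is split
    # across n workers, but gradients must be synchronized every step,
    # which here costs a fixed t_comm independent of n (hypothetical
    # constants, not measurements from the talk).
    return t_compute / (t_compute / n + t_comm)

for n in (1, 8, 64, 512):
    print(n, round(data_parallel_speedup(n), 1))
```

No matter how many workers are added, the speedup in this model can never exceed t_compute / t_comm (here 20x), which is one way to see why communication cost, not raw compute, dominates at scale.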

[1] Janis Keuper and Franz-Josef Pfreundt. 2016. Distributed training of deep neural networks: theoretical and practical limits of parallel scalability. In Proceedings of the Workshop on Machine Learning in High Performance Computing Environments (MLHPC '16) at Supercomputing 16.
Comments

Fantastic lecture! Would be nice to have a link to the slides, ppt if possible. Thanks for posting!

akompsupport