Winning the Hardware Lottery by Accelerating Sparse Networks with Numenta: SigOpt Summit 2021
Sparse networks hold incredible promise for the future of AI. How might we build more efficient networks by leveraging sparsity?
Most deep learning networks today rely on dense representations. This stands in stark contrast to our brains, which are extremely sparse — both in connectivity and in activations.
Implemented correctly, the potential performance benefits of sparsity in weights and activations are massive. Unfortunately, the benefits observed to date have been extremely limited. It is challenging to optimize training to achieve highly sparse yet accurate networks: hyperparameters and best practices that work for dense networks do not apply to sparse ones. In addition, it is difficult to implement sparse networks on hardware platforms designed for dense computations.
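As a rough illustration of the two kinds of sparsity mentioned above, here is a minimal numpy sketch (not Numenta's implementation; the mask density and the value of k are arbitrary choices for the example): weights are sparsified with a fixed binary mask, and activations with a k-winners step that keeps only the strongest units.

```python
import numpy as np

# A minimal sketch of a layer that is sparse in both weights and activations:
#   - weight sparsity: a fixed binary mask zeroes most connections
#   - activation sparsity: a k-winners step keeps only the top-k units
# Hypothetical example only; not Numenta's actual implementation.

rng = np.random.default_rng(0)

def sparse_layer(x, weights, mask, k):
    """Apply a masked linear transform, then keep only the k largest outputs."""
    y = x @ (weights * mask)          # masked weights: most entries are zero
    out = np.zeros_like(y)
    top_k = np.argsort(y)[-k:]        # indices of the k strongest activations
    out[top_k] = y[top_k]             # all other activations are silenced
    return out

n_in, n_out = 128, 64
weights = rng.standard_normal((n_in, n_out))
mask = (rng.random((n_in, n_out)) < 0.1).astype(float)  # ~90% weight sparsity
x = rng.standard_normal(n_in)

y = sparse_layer(x, weights, mask, k=8)  # ~87% activation sparsity
print(f"nonzero activations: {np.count_nonzero(y)} of {y.size}")
```

In a sketch like this the zeros are still stored and multiplied, so there is no speedup; realizing the performance gains requires hardware and kernels that can skip the zero entries, which is exactly the difficulty the talk addresses.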
In this talk, Numenta's Subutai Ahmad presents novel sparse networks that achieve high accuracy and leverage sparsity to run 100X faster than their dense counterparts. He discusses the hyperparameter optimization strategies used to achieve high accuracy, as well as the hardware techniques developed to achieve this speedup. Numenta's results show that a careful evaluation of the training process combined with an optimized architecture can dramatically scale deep learning networks in the future.
This talk was presented as part of the 2021 SigOpt Summit.