HC29-S7: Neural Net II
Session 7, Hot Chips 29 (2017), Tuesday, August 22, 2017.
DNN ENGINE: A 16nm Sub-µJ Deep Neural Network Inference Accelerator for the Embedded Masses
Paul Whatmough, Harvard University/ARM Research
DNPU: An Energy-Efficient Deep Neural Network Processor with On-Chip Stereo Matching
Dongjoo Shin and Hoi-Jun Yoo, KAIST
Evaluation of the Tensor Processing Unit: A Deep Neural Network Accelerator for the Datacenter
Cliff Young, Google