CMU Neural Nets for NLP 2021 (4): Efficiency Tricks for Neural Nets
This lecture (by Graham Neubig) for CMU CS 11-747, Neural Networks for NLP (Spring 2021) covers:
* Tips for Training on GPUs
* Parallel Training
* Softmax Approximations: Negative Sampling, Hierarchical Softmax
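Of the topics above, negative sampling is the easiest to illustrate in a few lines: instead of normalizing over the full vocabulary with a softmax, the model scores the true (target, context) pair against a handful of randomly sampled "noise" contexts. The following is a minimal sketch of that objective in the style of word2vec, not the lecture's own code; the vector dimensions, the number of negatives `k`, and the helper names are illustrative assumptions.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def negative_sampling_loss(target, context, negatives):
    # Positive term: the true (target, context) pair should score high.
    pos = math.log(sigmoid(dot(target, context)))
    # Negative terms: each of the k sampled noise contexts should score low,
    # replacing the full-vocabulary softmax normalization.
    neg = sum(math.log(sigmoid(-dot(target, n))) for n in negatives)
    return -(pos + neg)

# Toy setup: random 8-dimensional embeddings and k = 5 negatives
# (both numbers are arbitrary choices for this sketch).
random.seed(0)
d, k = 8, 5
vec = lambda: [random.gauss(0, 1) for _ in range(d)]
target, context = vec(), vec()
negatives = [vec() for _ in range(k)]
loss = negative_sampling_loss(target, context, negatives)
```

Each gradient step then touches only `k + 1` output vectors rather than the entire vocabulary, which is the efficiency win the lecture's "Softmax Approximations" segment is about.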