EfficientBERT: Trading off Model Size and Performance - Meghana Ravikumar, SigOpt
"With the publication of BERT, transfer learning was suddenly accessible for NLP, unlocking a plethora of model zoos and boosting performances for domain specific problems. Although BERT has accelerated many modeling efforts, its size is limiting. In this talk, we will explore how to reduce the size of BERT while retaining its capacity in the context of English Question Answering tasks. We’ll show how scalable hyperparameter optimization can help you tackle difficult modeling problems, draw insights, and make informed decisions.
Our approach encompasses fine-tuning, distillation, architecture search, and hyperparameter optimization at scale. First, we fine-tune BERT on SQUAD 2.0 (our teacher model) and use distillation to compress fine-tuned BERT to a smaller model (our student model). Then, combining SigOpt and Ray, we use multimetric Bayesian optimization at scale to find the optimal architecture for the student model. Finally, we explore the trade-offs of our hyperparameter decisions to draw insights for our student model’s architecture."
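The distillation step described above trains the smaller student to match the fine-tuned teacher's soft predictions while still learning from the ground-truth labels. Below is a minimal PyTorch sketch of such a distillation loss; the temperature and weighting values are illustrative assumptions, not the settings used in the talk.

```python
# Minimal knowledge-distillation loss sketch (hyperparameter values are assumptions).
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, temperature=2.0, alpha=0.5):
    """Blend a soft-target KL term (teacher -> student) with hard-label cross-entropy."""
    # Soft targets: match the student's temperature-softened distribution to the teacher's.
    soft_loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * (temperature ** 2)
    # Hard targets: standard cross-entropy against the ground-truth labels.
    hard_loss = F.cross_entropy(student_logits, labels)
    return alpha * soft_loss + (1.0 - alpha) * hard_loss
```

In the multimetric setup the talk describes, a candidate student architecture would be scored on (at least) a quality metric and a size metric, and the Bayesian optimizer explores the trade-off frontier between the two rather than a single objective.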
"With the publication of BERT, transfer learning was suddenly accessible for NLP, unlocking a plethora of model zoos and boosting performances for domain specific problems. Although BERT has accelerated many modeling efforts, its size is limiting. In this talk, we will explore how to reduce the size of BERT while retaining its capacity in the context of English Question Answering tasks. We’ll show how scalable hyperparameter optimization can help you tackle difficult modeling problems, draw insights, and make informed decisions.
Our approach encompasses fine-tuning, distillation, architecture search, and hyperparameter optimization at scale. First, we fine-tune BERT on SQUAD 2.0 (our teacher model) and use distillation to compress fine-tuned BERT to a smaller model (our student model). Then, combining SigOpt and Ray, we use multimetric Bayesian optimization at scale to find the optimal architecture for the student model. Finally, we explore the trade-offs of our hyperparameter decisions to draw insights for our student model’s architecture."