Manzil Zaheer | Big Bird: Transformers for Longer Sequences


Thursday 11 February 2021

Abstract: Transformer-based models, such as BERT, have been among the most successful deep learning models for NLP. Unfortunately, one of their core limitations is the quadratic dependency (mainly in terms of memory) on the sequence length due to their full attention mechanism. In this talk, we will look at different techniques developed to remedy this limitation, as well as a new approach named BigBird with theoretical guarantees. BigBird is a sparse attention mechanism that reduces this quadratic dependency on length to linear. We show that BigBird is a universal approximator of sequence functions and is Turing complete, thereby preserving these properties of the quadratic, full attention model. Along the way, our theoretical analysis reveals some of the benefits of having $O(1)$ global tokens (such as CLS) that attend to the entire sequence as part of the sparse attention mechanism. The proposed sparse attention can handle sequences much longer than was previously possible on similar hardware. As a consequence of this capability to handle longer context, BigBird drastically improves performance on various NLP tasks such as question answering and summarization. We also propose novel applications to genomics data.
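To make the connectivity pattern concrete, below is a minimal NumPy sketch of a BigBird-style sparse attention mask combining the three components mentioned in the abstract: global tokens (such as CLS), a sliding window of local neighbours, and a few random keys per query. The function names, window width, and counts of global/random tokens are illustrative assumptions rather than the paper's settings, and the mask is materialized densely for clarity, so it shows the attention pattern rather than the blocked, linear-memory implementation.

```python
# Sketch of a BigBird-style sparse attention pattern (illustrative parameters).
import numpy as np

def bigbird_mask(seq_len, num_global=2, window=3, num_random=2, seed=0):
    """Boolean mask where mask[i, j] = True means query i may attend to key j.

    Combines the three BigBird components:
      * global tokens (e.g. CLS) that attend to, and are attended by, everything,
      * a sliding window of local neighbours around each position,
      * a few random keys per query.
    """
    rng = np.random.default_rng(seed)
    mask = np.zeros((seq_len, seq_len), dtype=bool)

    # Sliding-window attention: each token sees `window` neighbours on each side.
    for i in range(seq_len):
        lo, hi = max(0, i - window), min(seq_len, i + window + 1)
        mask[i, lo:hi] = True

    # Random attention: each token additionally sees a few random keys.
    for i in range(seq_len):
        mask[i, rng.choice(seq_len, size=num_random, replace=False)] = True

    # Global attention: the first `num_global` tokens see, and are seen by, all.
    mask[:num_global, :] = True
    mask[:, :num_global] = True
    return mask

def sparse_attention(q, k, v, mask):
    """Scaled dot-product attention restricted to the allowed (sparse) pairs."""
    scores = q @ k.T / np.sqrt(q.shape[-1])
    scores = np.where(mask, scores, -1e9)              # block disallowed pairs
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

if __name__ == "__main__":
    n, d = 16, 8
    rng = np.random.default_rng(1)
    q, k, v = (rng.standard_normal((n, d)) for _ in range(3))
    m = bigbird_mask(n)
    out = sparse_attention(q, k, v, m)
    # Each row attends to O(window + num_random + num_global) keys instead of n,
    # which is where the linear (rather than quadratic) scaling comes from.
    print(out.shape, int(m.sum()), "of", n * n, "attention pairs kept")
```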

Papers:
Big Bird: Transformers for Longer Sequences (Zaheer et al., NeurIPS 2020)

Bio: Manzil Zaheer is currently a research scientist at Google. He received his PhD in Machine Learning from the School of Computer Science at Carnegie Mellon University. His research interest is in developing intelligent systems that can utilize vast amounts of information efficiently and faithfully. His work has been at the interplay of statistical models and data structures: designing novel structure-aware compact representations of data, as well as organizing them for efficient access.

Moderated by: Sinead Williamson