Simple Recurrence Improves Masked Language Models
@article{Lei2022SimpleRI,
  title={Simple Recurrence Improves Masked Language Models},
  author={Tao Lei and Ran Tian and Jasmijn Bastings and Ankur P. Parikh},
  journal={ArXiv},
  year={2022},
  volume={abs/2205.11588}
}
In this work, we explore whether modeling recurrence into the Transformer architecture can be both beneficial and efficient, by building an extremely simple recurrent module into the Transformer. We compare our model to baselines following the training and evaluation recipe of BERT. Our results confirm that recurrence can indeed improve Transformer models by a consistent margin, without requiring low-level performance optimizations, and while keeping the number of parameters constant. For…
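The abstract does not specify the recurrent module or where it sits inside the Transformer, so the following is only a hypothetical PyTorch sketch: the GRU stand-in for the recurrence, its placement before self-attention, and the layer sizes are assumptions for illustration, and (unlike the paper's parameter-matched setup) the added module increases the parameter count.

```python
import torch
import torch.nn as nn

class RecurrentTransformerLayer(nn.Module):
    """Hypothetical sketch: a Transformer encoder layer with a simple
    recurrent module added before self-attention. The recurrence type and
    its placement are assumptions, not the paper's design."""

    def __init__(self, d_model=768, n_heads=12, d_ff=3072):
        super().__init__()
        self.recurrence = nn.GRU(d_model, d_model, batch_first=True)  # stand-in recurrence
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ff = nn.Sequential(
            nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model)
        )
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)
        self.norm3 = nn.LayerNorm(d_model)

    def forward(self, x):
        # x: (batch, seq_len, d_model)
        r, _ = self.recurrence(x)
        x = self.norm1(x + r)                          # residual around the recurrent module
        a, _ = self.attn(x, x, x, need_weights=False)
        x = self.norm2(x + a)                          # residual around self-attention
        return self.norm3(x + self.ff(x))              # residual around the feed-forward block
```

A call like `RecurrentTransformerLayer()(torch.randn(2, 16, 768))` returns a tensor of the same shape, so such a layer can be stacked like an ordinary Transformer encoder layer.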
One Citation
Cramming: Training a Language Model on a Single GPU in One Day
- Computer Science, ArXiv
- 2022
This work investigates the downstream performance achievable with a transformer-based language model trained completely from scratch with masked language modeling for a single day on a single consumer GPU, examining why scaling down is hard and which modifications actually improve performance in this scenario.
27 References
Modeling Recurrence for Transformer
- Computer Science, NAACL
- 2019
This work proposes to directly model recurrence for Transformer with an additional recurrence encoder, and introduces a novel attentive recurrent network to leverage the strengths of both attention models and recurrent networks.
Simple Recurrent Units for Highly Parallelizable Recurrence
- Computer Science, EMNLP
- 2018
The Simple Recurrent Unit (SRU) is proposed, a light recurrent unit that balances model capacity and scalability: it is designed to provide expressive recurrence, enables a highly parallelized implementation, and comes with careful initialization to facilitate training of deep models.
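As a rough illustration of why this design parallelizes well, here is a minimal NumPy sketch of a simplified SRU layer (the function and parameter names are ours, and initialization and dropout are omitted): every matrix multiplication is batched over the whole sequence up front, and only cheap element-wise operations remain inside the sequential loop.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sru_layer(x, W, Wf, Wr, vf, vr, bf, br):
    """Simplified SRU sketch. x: (T, d); W, Wf, Wr: (d, d); vf, vr, bf, br: (d,)."""
    T, d = x.shape
    # The heavy matrix multiplications do not depend on the recurrent state,
    # so they are computed for all timesteps at once.
    x_tilde = x @ W           # candidate values
    fx = x @ Wf + bf          # forget-gate pre-activations
    rx = x @ Wr + br          # reset-gate pre-activations

    c = np.zeros(d)
    h = np.zeros((T, d))
    for t in range(T):
        # Only element-wise operations remain in the sequential part.
        f = sigmoid(fx[t] + vf * c)
        c = f * c + (1.0 - f) * x_tilde[t]
        r = sigmoid(rx[t] + vr * c)
        h[t] = r * c + (1.0 - r) * x[t]   # highway connection to the input
    return h
```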
When Attention Meets Fast Recurrence: Training Language Models with Reduced Compute
- Computer Science, EMNLP
- 2021
This work presents SRU++, a highly efficient architecture that combines fast recurrence and attention for sequence modeling and exhibits strong modeling capacity and training efficiency, suggesting that jointly leveraging fast recurrence with little attention is a promising direction for accelerating model training and inference.
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
- Computer Science, NAACL
- 2019
A new language representation model, BERT, designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers, which can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks.
RoBERTa: A Robustly Optimized BERT Pretraining Approach
- Computer Science, ArXiv
- 2019
It is found that BERT was significantly undertrained and can match or exceed the performance of every model published after it; the best model achieves state-of-the-art results on GLUE, RACE and SQuAD.
How to Train BERT with an Academic Budget
- Computer Science, EMNLP
- 2021
It is demonstrated that through a combination of software optimizations, design choices, and hyperparameter tuning, it is possible to produce models that are competitive with BERT-base on GLUE tasks at a fraction of the original pretraining cost.
Attention is All you Need
- Computer Science, NIPS
- 2017
A new simple network architecture, the Transformer, based solely on attention mechanisms and dispensing with recurrence and convolutions entirely, is proposed; it generalizes well to other tasks, as demonstrated by applying it successfully to English constituency parsing with both large and limited training data.
TRANS-BLSTM: Transformer with Bidirectional LSTM for Language Understanding
- Computer Science, ArXiv
- 2020
TRANS-BLSTM is proposed as a joint modeling framework for the Transformer and BLSTM, and it is shown that TRANS-BLSTM models consistently lead to accuracy improvements over BERT baselines in GLUE and SQuAD 1.1 experiments.
SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems
- Computer Science, NeurIPS
- 2019
A new benchmark styled after GLUE is presented, comprising a set of more difficult language understanding tasks, a software toolkit, and a public leaderboard.
Quasi-Recurrent Neural Networks
- Computer Science, ICLR
- 2017
Quasi-recurrent neural networks (QRNNs) are introduced, an approach to neural sequence modeling that alternates convolutional layers, which apply in parallel across timesteps, with a minimalist recurrent pooling function that applies in parallel across channels.
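As a sketch of this idea (illustrative names and the fo-pooling variant, not the reference implementation), the gate and candidate sequences below come from causal, convolution-style mixing applied in parallel over timesteps, while the pooling recurrence is purely element-wise across channels:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def qrnn_layer(x, Wz, Wf, Wo, k=2):
    """Simplified QRNN sketch with fo-pooling.
    x: (T, d_in); Wz, Wf, Wo: (k, d_in, d_out) filters over the current
    and previous k-1 timesteps."""
    T, d_in = x.shape
    d_out = Wz.shape[-1]
    # Left-pad so the convolution only looks at past timesteps (causal).
    xp = np.concatenate([np.zeros((k - 1, d_in)), x], axis=0)
    conv = lambda Wg: np.stack(
        [sum(xp[t + j] @ Wg[j] for j in range(k)) for t in range(T)]
    )
    z = np.tanh(conv(Wz))     # candidate values, computed in parallel over timesteps
    f = sigmoid(conv(Wf))     # forget gates
    o = sigmoid(conv(Wo))     # output gates

    # fo-pooling: the only sequential step is element-wise across channels.
    c = np.zeros(d_out)
    h = np.zeros((T, d_out))
    for t in range(T):
        c = f[t] * c + (1.0 - f[t]) * z[t]
        h[t] = o[t] * c
    return h
```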