Revisiting Activation Regularization for Language RNNs

@article{Merity2017RevisitingAR,
  title={Revisiting Activation Regularization for Language RNNs},
  author={Stephen Merity and Bryan McCann and Richard Socher},
  journal={CoRR},
  year={2017},
  volume={abs/1708.01009}
}
Recurrent neural networks (RNNs) serve as a fundamental building block for many sequence tasks across natural language processing. Recent research has focused on recurrent dropout techniques or custom RNN cells in order to improve performance. Both of these can require substantial modifications to the machine learning model or to the underlying RNN configurations. We revisit traditional regularization techniques, specifically L2 regularization on RNN activations and slowness regularization over…
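
The two penalties named in the abstract are simple to state: activation regularization (AR) is an L2 penalty on the RNN's output activations, and the slowness penalty (temporal activation regularization, TAR) is an L2 penalty on the difference between successive hidden states. The sketch below is a minimal illustration in PyTorch, not the authors' released code; the coefficient values and tensor shapes are illustrative assumptions.

```python
import torch

def activation_regularization(hidden, alpha=2.0):
    # AR: L2 penalty on the RNN output activations.
    # hidden: (seq_len, batch, hidden_size) tensor of RNN outputs.
    return alpha * hidden.pow(2).mean()

def temporal_activation_regularization(hidden, beta=1.0):
    # TAR ("slowness"): L2 penalty on the difference between successive
    # hidden states, discouraging large changes from one timestep to the next.
    diff = hidden[1:] - hidden[:-1]
    return beta * diff.pow(2).mean()

# Usage sketch: both terms are simply added to the standard training loss,
# e.g. loss = cross_entropy
#            + activation_regularization(rnn_output)
#            + temporal_activation_regularization(rnn_output)
```

Because both terms act only on the RNN's outputs, they can be applied without modifying the RNN cell itself.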