Long-Distance Dependencies Don't Have to Be Long: Simplifying through Provably (Approximately) Optimal Permutations

@inproceedings{Bommasani2019LongDistanceDD,
  title={Long-Distance Dependencies Don't Have to Be Long: Simplifying through Provably (Approximately) Optimal Permutations},
  author={Rishi Bommasani},
  booktitle={ACL},
  year={2019}
}
Neural models at the sentence level often operate on the constituent words/tokens in an order that encodes the inductive bias of processing the input much as humans do. However, there is no guarantee that the standard left-to-right ordering of words is computationally efficient or optimal. To help mitigate this, we consider a dependency parse as a proxy for the inter-word dependencies in a sentence and simplify the sentence with respect to combinatorial objectives imposed on the…
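The abstract is truncated, but it points at combinatorial objectives defined over a dependency parse. One natural instance of such an objective (assumed here purely for illustration; the excerpt does not confirm it is the paper's exact objective) is total dependency length, i.e. the minimum linear arrangement of the parse tree. The minimal Python sketch below scores a word order by this objective and finds the best permutation by exhaustive search; the toy parse edges are hypothetical, and the brute-force search is only viable for very short sentences, whereas the paper's title suggests approximately optimal permutations computed more efficiently.

```python
from itertools import permutations

def total_dependency_length(order, edges):
    """Sum of |pos(head) - pos(dependent)| over dependency edges.

    `order` is a tuple of token indices giving the word order;
    `edges` is a list of (head, dependent) pairs from a dependency parse.
    """
    pos = {tok: i for i, tok in enumerate(order)}
    return sum(abs(pos[h] - pos[d]) for h, d in edges)

def brute_force_optimal_order(n, edges):
    """Exhaustively find a permutation minimizing total dependency length.

    Factorial in n, so this is for illustration on short sentences only.
    """
    return min(permutations(range(n)),
               key=lambda o: total_dependency_length(o, edges))

# Hypothetical toy parse over a 5-token sentence (head -> dependent pairs).
edges = [(4, 0), (0, 1), (1, 3), (3, 2)]
identity = tuple(range(5))
best = brute_force_optimal_order(5, edges)
print(total_dependency_length(identity, edges))   # 8: length under the original order
print(total_dependency_length(best, edges), best)  # 4: length under the optimal order
```

Under this objective, permuting the words shortens the longest dependencies, which is the sense in which long-distance dependencies "don't have to be long."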


Key Quantitative Results

  • Our fine-tuned baselines set a new state of the art on the SUBJ dataset, and the permutations we introduce lead to further improvements: a 2.0% absolute increase in classification accuracy and a 45% relative reduction in classification error over the previous state of the art (see the arithmetic check after this list).
  • As an initial case study, we consider the task of sentence-level subjectivity classification on the SUBJ dataset (Pang and Lee, 2004). We first introduce baselines that achieve a state-of-the-art 95.8% accuracy, then improve on these baselines with our permutations to reach a new state of the art of 97.5% accuracy.
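As a quick consistency check on how the two bullets fit together (the 95.5% figure is implied by the reported numbers, not stated in the excerpt): a final accuracy of 97.5% with a 2.0% absolute gain implies a previous state of the art of 95.5%, and the relative error reduction is then

\[
\frac{(100 - 95.5) - (100 - 97.5)}{100 - 95.5} = \frac{4.5 - 2.5}{4.5} \approx 0.444 \approx 45\%.
\]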
