Just Add Functions: A Neural-Symbolic Language Model

@article{Demeter2020JustAF,
  title={Just Add Functions: A Neural-Symbolic Language Model},
  author={David Demeter and Doug Downey},
  journal={ArXiv},
  year={2020},
  volume={abs/1912.05421}
}
Neural network language models (NNLMs) have achieved ever-improving accuracy due to more sophisticated architectures and increasing amounts of training data. However, the inductive bias of these models (formed by the distributional hypothesis of language), while ideally suited to modeling most running text, results in key limitations for today's models. In particular, the models often struggle to learn certain spatial, temporal, or quantitative relationships, which are commonplace in text and…
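
The title suggests that the approach augments a neural language model with symbolic functions at the output layer, though the truncated abstract above does not spell out the mechanism. As a rough, hedged sketch only: below is a minimal PyTorch module in which a standard softmax head is mixed with a hand-written function over a designated set of tokens (e.g., numbers) through a learned gate. The class name, argument names, and the gating scheme are illustrative assumptions, not the paper's exact architecture.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SymbolicMixtureHead(nn.Module):
    """Hypothetical output head mixing a neural softmax with a symbolic function."""

    def __init__(self, hidden_size, vocab_size, numeric_token_ids):
        super().__init__()
        self.proj = nn.Linear(hidden_size, vocab_size)  # standard NNLM output projection
        self.gate = nn.Linear(hidden_size, 1)           # learned weight on the symbolic component
        # Mask selecting the vocabulary entries the symbolic function is allowed to score.
        mask = torch.zeros(vocab_size)
        mask[numeric_token_ids] = 1.0
        self.register_buffer("numeric_mask", mask)

    def forward(self, hidden, symbolic_scores):
        # hidden: (batch, hidden_size); symbolic_scores: (batch, vocab_size),
        # produced by a deterministic rule over the context (e.g., favoring years
        # close to one already mentioned).
        neural_probs = F.softmax(self.proj(hidden), dim=-1)
        # Restrict the symbolic distribution to the tokens it knows about and renormalize.
        sym = symbolic_scores * self.numeric_mask
        symbolic_probs = sym / sym.sum(dim=-1, keepdim=True).clamp_min(1e-9)
        g = torch.sigmoid(self.gate(hidden))            # mixing weight in [0, 1]
        return (1 - g) * neural_probs + g * symbolic_probs

Under this sketch, the symbolic function carries the spatial, temporal, or quantitative structure that the purely distributional model struggles to learn, while the gate lets the network fall back to the ordinary softmax for running text.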