Corpus ID: 31816657

Simulating Action Dynamics with Neural Process Networks

@article{Bosselut2018SimulatingAD,
  title={Simulating Action Dynamics with Neural Process Networks},
  author={Antoine Bosselut and Omer Levy and Ari Holtzman and Corin Ennis and Dieter Fox and Yejin Choi},
  journal={ArXiv},
  year={2018},
  volume={abs/1711.05313}
}
  • Published in ICLR 2018
  • Understanding procedural language requires anticipating the causal effects of actions, even when they are not explicitly stated. [...] The model updates the states of the entities by executing learned action operators. Empirical results demonstrate that our proposed model can reason about the unstated causal effects of actions, allowing it to provide more accurate contextual information for understanding and generating procedural text, all while offering more interpretable internal representations.
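As a rough illustration of the key mechanism described above (entity states updated by executing learned action operators), the sketch below softly selects among a set of operator matrices via attention over an action embedding and applies the blend to an entity state vector. All names, dimensions, and the attention form are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

HIDDEN = 8   # entity state dimensionality (assumed)
N_OPS = 4    # number of learned action operators (assumed)

# Each "action operator" is modeled here as a learned linear transform
# of an entity state; in practice these would be trained parameters.
operators = rng.normal(scale=0.1, size=(N_OPS, HIDDEN, HIDDEN))
op_keys = rng.normal(size=(N_OPS, HIDDEN))  # keys for operator selection

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def apply_action(entity_state, action_embedding):
    """Softly select operators by attending over the action embedding,
    then update the entity state by executing the blended operator."""
    weights = softmax(op_keys @ action_embedding)        # (N_OPS,)
    blended = np.tensordot(weights, operators, axes=1)   # (HIDDEN, HIDDEN)
    return np.tanh(blended @ entity_state)               # new entity state

# Toy usage: one entity updated by one action, with random embeddings.
entity = rng.normal(size=HIDDEN)
action = rng.normal(size=HIDDEN)
updated = apply_action(entity, action)
```

The soft selection keeps the update differentiable, so the operator parameters and the selection keys can be learned end-to-end from text.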


    Citations

    Publications citing this paper (41 citations in total; selected below).

    Effective Use of Transformer Networks for Entity Tracking

    CITES BACKGROUND & METHODS
    HIGHLY INFLUENCED

    Generating Personalized Recipes from Historical User Preferences

    CITES BACKGROUND & METHODS
    HIGHLY INFLUENCED

    References

    Publications referenced by this paper (30 references in total; selected below).

    Adam: A Method for Stochastic Optimization

    HIGHLY INFLUENTIAL

    Sequence to Sequence Learning with Neural Networks

    HIGHLY INFLUENTIAL

    "Both encoders and the decoder are single-layer. The learning rate is 0.0003 initially and is halved every 5 epochs. The model is trained with the Adam optimizer."

    • Kim, 2016 (quoted in Appendix B, "Training Details of Baselines", B.1 Tracking Baselines)
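The training note above describes a step-decay schedule: an initial learning rate of 0.0003, halved every 5 epochs, used with the Adam optimizer. A minimal sketch of that schedule (function name and zero-based epoch indexing are assumptions):

```python
def learning_rate(epoch, base_lr=3e-4, halve_every=5):
    """Step decay: halve the learning rate every `halve_every` epochs."""
    return base_lr * 0.5 ** (epoch // halve_every)

# Epochs 0-4 use 0.0003, epochs 5-9 use 0.00015, and so on;
# the resulting value would be fed to an Adam optimizer each epoch.
```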
