Robust Navigation with Language Pretraining and Stochastic Sampling

@inproceedings{Li2019RobustNW,
  title={Robust Navigation with Language Pretraining and Stochastic Sampling},
  author={Xiujun Li and Chunyuan Li and Qiaolin Xia and Yonatan Bisk and Asli Çelikyilmaz and Jianfeng Gao and Noah A. Smith and Yejin Choi},
  booktitle={EMNLP-IJCNLP},
  year={2019}
}
Core to the vision-and-language navigation (VLN) challenge is building robust instruction representations and action decoding schemes that generalize well to previously unseen instructions and environments. In this paper, we report two simple but highly effective methods that address these challenges and achieve new state-of-the-art performance. First, we adapt large-scale pretrained language models to learn text representations that generalize better to previously unseen instructions…