Uncovering Temporal Context for Video Question and Answering

@article{Zhu2015UncoveringTC,
  title={Uncovering Temporal Context for Video Question and Answering},
  author={Linchao Zhu and Zhongwen Xu and Yi Yang and Alexander G. Hauptmann},
  journal={CoRR},
  year={2015},
  volume={abs/1511.04670}
}
In this work, we introduce Video Question Answering in the temporal domain to infer the past, describe the present, and predict the future. We present an encoder-decoder approach using Recurrent Neural Networks to learn the temporal structures of videos and introduce a dual-channel ranking loss to answer multiple-choice questions. We explore approaches for a finer understanding of video content using "fill-in-the-blank" questions, and collected 109,895 video clips with duration over 1,000…
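The abstract names two technical ingredients: an RNN encoder-decoder over video sequences and a dual-channel ranking loss for multiple-choice answering. The sketch below is a minimal, hypothetical illustration of how such a setup could be wired in PyTorch; the GRU encoders, the cosine-similarity scoring, the margin value, and the reading of "dual-channel" as a video-vs-answer channel plus a question-vs-answer channel are assumptions for illustration, not the authors' released implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SequenceEncoder(nn.Module):
    # Encodes a sequence (CNN frame features or word embeddings) with a GRU
    # and returns the final hidden state as a fixed-length embedding.
    def __init__(self, input_dim, hidden_dim):
        super().__init__()
        self.gru = nn.GRU(input_dim, hidden_dim, batch_first=True)

    def forward(self, x):            # x: (batch, steps, input_dim)
        _, h = self.gru(x)           # h: (1, batch, hidden_dim)
        return h.squeeze(0)          # (batch, hidden_dim)

def dual_channel_ranking_loss(video_emb, question_emb, answer_embs,
                              correct_idx, margin=0.2):
    # Hinge ranking loss: the correct answer must score at least `margin`
    # higher than every distractor in both channels (video vs. answer and
    # question vs. answer). This is an assumed formulation, not the paper's.
    sim_v = F.cosine_similarity(video_emb.unsqueeze(1), answer_embs, dim=-1)    # (batch, n_choices)
    sim_q = F.cosine_similarity(question_emb.unsqueeze(1), answer_embs, dim=-1)
    batch = torch.arange(answer_embs.size(0))
    pos_v = sim_v[batch, correct_idx].unsqueeze(1)
    pos_q = sim_q[batch, correct_idx].unsqueeze(1)
    mask = F.one_hot(correct_idx, answer_embs.size(1)).bool()  # skip the correct slot
    loss_v = F.relu(margin + sim_v - pos_v).masked_fill(mask, 0.0)
    loss_q = F.relu(margin + sim_q - pos_q).masked_fill(mask, 0.0)
    return (loss_v.sum(dim=1) + loss_q.sum(dim=1)).mean()

# Toy usage with random tensors: 2 clips of 8 frames (2048-d CNN features),
# one question and 4 candidate answers per clip (10 words, 300-d embeddings).
video_enc = SequenceEncoder(2048, 512)
text_enc = SequenceEncoder(300, 512)
v = video_enc(torch.randn(2, 8, 2048))                         # (2, 512)
q = text_enc(torch.randn(2, 10, 300))                          # (2, 512)
a = text_enc(torch.randn(2 * 4, 10, 300)).reshape(2, 4, 512)   # (2, 4, 512)
loss = dual_channel_ranking_loss(v, q, a, torch.tensor([1, 3]))

A margin-based hinge term of this shape is a standard way to write ranking losses for multiple-choice selection; the paper's exact scoring function, margin, and channel combination may differ.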