Learning realistic human actions from movies


The aim of this paper is to address recognition of natural human actions in diverse and realistic video settings. This challenging but important subject has mostly been ignored in the past due to several problems, one of which is the lack of realistic and annotated video datasets. Our first contribution is to address this limitation and to investigate the use of movie scripts for automatic annotation of human actions in videos. We evaluate alternative methods for action retrieval from scripts and show the benefits of a text-based classifier. Using the retrieved action samples for visual learning, we next turn to the problem of action classification in video. We present a new method for video classification that builds upon and extends several recent ideas, including local space-time features, space-time pyramids, and multi-channel non-linear SVMs. The method is shown to improve on state-of-the-art results on the standard KTH action dataset, achieving 91.8% accuracy. Given the inherent problem of noisy labels in automatic annotation, we investigate and demonstrate the high tolerance of our method to annotation errors in the training set. We finally apply the method to learning and classifying challenging action classes in movies and show promising results.
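The multi-channel non-linear SVM mentioned in the abstract can be illustrated with a small sketch. The idea, common in this line of work, is to represent each video as one bag-of-features histogram per channel (e.g., per descriptor type and space-time grid cell), compute a chi-square distance within each channel, and combine channels inside an exponential kernel. The toy data, feature dimensions, and the per-channel normalization by the mean distance below are illustrative assumptions, not details taken from the paper:

```python
import numpy as np
from sklearn.svm import SVC

def chi2_distance(X, Y):
    """Pairwise chi-square distance between rows of histogram matrices X and Y."""
    d = np.zeros((X.shape[0], Y.shape[0]))
    for i, x in enumerate(X):
        num = (x - Y) ** 2
        den = x + Y
        safe = np.where(den > 0, den, 1.0)          # avoid division by zero
        d[i] = 0.5 * np.sum(np.where(den > 0, num / safe, 0.0), axis=1)
    return d

def multichannel_kernel(channels_a, channels_b):
    """K = exp(-sum_c D_c / A_c): chi-square distances combined across channels,
    each normalized by its mean distance A_c (an assumed, common choice)."""
    total = np.zeros((channels_a[0].shape[0], channels_b[0].shape[0]))
    for Xa, Xb in zip(channels_a, channels_b):
        D = chi2_distance(Xa, Xb)
        A = D.mean() if D.mean() > 0 else 1.0
        total += D / A
    return np.exp(-total)

# Toy data: two hypothetical feature channels, classes peaked on different bins.
rng = np.random.default_rng(0)
n = 40
labels = np.array([0] * (n // 2) + [1] * (n // 2))
ch1 = rng.random((n, 16)) * 0.1
ch1[labels == 0, :8] += 1.0
ch1[labels == 1, 8:] += 1.0
ch2 = rng.random((n, 24)) * 0.1
ch2[labels == 0, :12] += 1.0
ch2[labels == 1, 12:] += 1.0
ch1 /= ch1.sum(axis=1, keepdims=True)               # L1-normalize to histograms
ch2 /= ch2.sum(axis=1, keepdims=True)

K = multichannel_kernel([ch1, ch2], [ch1, ch2])
clf = SVC(kernel="precomputed").fit(K, labels)
train_acc = clf.score(K, labels)
```

Using a precomputed kernel keeps the channel-combination logic outside the SVM itself, so channels (descriptor types, pyramid cells) can be added or reweighted without touching the classifier.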

DOI: 10.1109/CVPR.2008.4587756
