Video-based Human Action Classification with Ambiguous Correspondences

Abstract

This paper describes a combined tracking-classification framework for the unsupervised classification of human actions. While most existing approaches assume that feature-wise correspondences on people are either fully available or not available at all, this method explicitly formalizes how correspondence probabilities can be used in computation when the correspondences are ambiguous. It can also exploit, in a probabilistic manner, any preprocessed foreground-background segmentation, even when the segmentation is of low confidence. A principled analysis of the problem leads to a novel probabilistic action representation, the correspondence-ambiguous feature histogram array (CAFHA), which is robust to variations across similar actions. Our results show that the new framework outperforms the recent Zelnik-Manor and Irani method [19] for unsupervised event classification. Additionally, the framework is extended to quasi-real-time action inference, achieving good recognition accuracy despite changes in person identity and variations in the actions.
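To make the core idea of the representation concrete, the sketch below shows one way a feature histogram can be accumulated with soft weights when correspondences are ambiguous: each feature contributes fractionally, weighted by the probability that its correspondence is correct and by a soft foreground-segmentation score. This is a minimal illustration of the general weighting principle only, not the paper's actual CAFHA formulation; the function name, bin layout, and multiplicative weighting scheme are assumptions made for the example.

```python
import numpy as np

def soft_feature_histogram(features, corr_probs, fg_probs, bin_edges):
    """Accumulate a 1-D histogram of scalar features with soft weights.

    features   : (N,) scalar feature values (e.g. motion magnitudes) -- illustrative
    corr_probs : (N,) probability that each feature's correspondence is correct
    fg_probs   : (N,) probability that each feature lies on the foreground
    bin_edges  : (B+1,) histogram bin edges
    """
    # Each feature's vote is down-weighted by its correspondence and
    # segmentation uncertainty rather than being hard-assigned or discarded.
    weights = corr_probs * fg_probs
    hist, _ = np.histogram(features, bins=bin_edges, weights=weights)
    total = hist.sum()
    return hist / total if total > 0 else hist  # normalize to a distribution

# Toy usage: ambiguous correspondences and a low-confidence segmentation
# still yield a usable soft histogram.
rng = np.random.default_rng(0)
feats = rng.uniform(0.0, 1.0, size=50)
corr = rng.uniform(0.3, 1.0, size=50)   # ambiguous correspondences
fg = rng.uniform(0.5, 1.0, size=50)     # low-confidence segmentation
print(soft_feature_histogram(feats, corr, fg, np.linspace(0.0, 1.0, 9)))
```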

DOI: 10.1109/CVPR.2005.549

Cite this paper

@article{Feng2005VideobasedHA,
  title={Video-based Human Action Classification with Ambiguous Correspondences},
  author={Zhou Feng and Tat-Jen Cham},
  journal={2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05) - Workshops},
  year={2005},
  pages={82-82}
}