An augmented representation of activity in video using semantic-context information

Abstract

Learning and recognizing activities in videos is an important yet challenging task in computer vision. In this paper, we propose a new method that combines local and global context information to extract a bag-of-words-like representation of a single space-time point. Each space-time point is described by a bag of visual words that…
DOI: 10.1109/ICIP.2014.7025847
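The abstract describes representing each space-time interest point by a bag of visual words drawn from its context. A minimal sketch of that general idea, assuming a pre-learned codebook and Euclidean nearest-word quantization (the function and variable names here are hypothetical, not the authors' implementation):

```python
def nearest_codeword(descriptor, codebook):
    """Index of the codeword closest to `descriptor` (squared Euclidean)."""
    best, best_dist = 0, float("inf")
    for i, word in enumerate(codebook):
        d = sum((a - b) ** 2 for a, b in zip(descriptor, word))
        if d < best_dist:
            best, best_dist = i, d
    return best

def bag_of_words(neighbour_descriptors, codebook):
    """Histogram over codewords for the descriptors in a point's
    spatio-temporal neighbourhood (its local context)."""
    hist = [0] * len(codebook)
    for desc in neighbour_descriptors:
        hist[nearest_codeword(desc, codebook)] += 1
    return hist

# Toy usage: a 2-word codebook and three neighbouring descriptors.
codebook = [[0.0, 0.0], [1.0, 1.0]]
neighbours = [[0.1, 0.2], [0.9, 1.1], [1.0, 1.0]]
print(bag_of_words(neighbours, codebook))  # → [1, 2]
```

The paper's actual method additionally folds in global context information; this sketch only illustrates the local quantize-and-count step common to bag-of-words pipelines.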

4 Figures and Tables
