Spatial Motion Patterns: Action Models from Semi-Dense Trajectories

  • T. P. Nguyen, A. Manzanera, M. Garrigues, N. Vu
  • Int. J. Pattern Recognit. Artif. Intell.
A new action model is proposed by revisiting local binary patterns (LBP) for dynamic texture modelling, applied to beams of trajectories computed on the video. Using a semi-dense trajectory field dramatically reduces the computation support to essential motion information, while retaining enough data to ensure the robustness of statistical bag-of-features action models. A new binary pattern, called the Spatial Motion Pattern (SMP), is proposed, which captures self-similarity of…
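The SMP descriptor itself is not detailed in this excerpt, but the LBP family it revisits encodes each point by thresholding its neighbours against the centre value and packing the results into a binary code; a bag-of-features model then histograms these codes over a support region. A minimal sketch of the classical 8-neighbour LBP (not the authors' exact SMP, whose neighbourhoods follow trajectory beams) could look like this:

```python
import numpy as np

def lbp_codes(img):
    """Classical 8-neighbour LBP: each interior pixel is encoded by
    thresholding its 8 neighbours against the centre value
    (bit = 1 if neighbour >= centre), packed clockwise into a byte."""
    img = np.asarray(img, dtype=np.float64)
    # neighbour offsets in clockwise order, starting at the top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = img.shape
    center = img[1:h-1, 1:w-1]
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1+dy:h-1+dy, 1+dx:w-1+dx]
        codes |= (neigh >= center).astype(np.uint8) << bit
    return codes

# A bag-of-features model then histograms the codes over the support
# region (here, a single 3x3 patch yields one code).
patch = np.array([[5, 9, 1],
                  [4, 6, 7],
                  [2, 8, 3]])
hist = np.bincount(lbp_codes(patch).ravel(), minlength=256)
```

In the paper's setting, the key change is the support: instead of encoding every pixel, patterns are computed only around tracked semi-dense trajectory points, which is what shrinks the computation while keeping enough samples for robust statistics.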


Directional dense-trajectory-based patterns for dynamic texture recognition
An efficient approach to DT description is proposed through several novel concepts, and a new framework, called directional dense trajectory patterns, is presented; it takes advantage of directional beams of dense trajectories, along with spatio-temporal features of their motion points, to construct dense-trajectory-based descriptors with greater robustness.
Momental directional patterns for dynamic texture recognition
Directional Beams of Dense Trajectories for Dynamic Texture Recognition
An effective framework for dynamic texture recognition is introduced by exploiting local features and chaotic motions along beams of dense trajectories, in which their motion points are encoded by…
Completed local structure patterns on three orthogonal planes for dynamic texture recognition
  • T. Nguyen, T. Nguyen, F. Bouchara
  • Computer Science
    2017 Seventh International Conference on Image Processing Theory, Tools and Applications (IPTA)
  • 2017
This paper proposes a new dynamic texture operator by applying local structure patterns (LSP) and completed local binary patterns (CLBP), originally designed for static images, on three orthogonal planes to capture spatio-temporal texture structures.
Dynamic Texture Representation Based on Hierarchical Local Patterns
A novel, effective operator, named HIerarchical LOcal Pattern (HILOP), is proposed to efficiently exploit relationships between local neighbors in a pair of adjacent hierarchical regions which are located…
Completed statistical adaptive patterns on three orthogonal planes for recognition of dynamic textures and scenes
An efficient framework, called completed and statistical adaptive patterns on three orthogonal planes (CSAP-TOP), is proposed for the representation of dynamic textures and scenes; it significantly outperforms recent state-of-the-art results.
Prominent Local Representation for Dynamic Textures Based on High-Order Gaussian-Gradients
This work proposes an efficient shallow framework for DT representation introducing several novel concepts, including, for the first time in DT analysis, the use of 2D/3D Gaussian-gradient filtering as a pre-processing step to extract components that are robust against environmental influences.
A Comprehensive Taxonomy of Dynamic Texture Representation
A comprehensive taxonomy of DT representation is presented in order to give a thorough overview of existing methods, together with overall evaluations of their performance, and to point out several potential applications and the remaining challenges to be addressed in future work.


Action recognition by dense trajectories
This work introduces a novel descriptor based on motion boundary histograms, which is robust to camera motion and consistently outperforms other state-of-the-art descriptors, in particular in uncontrolled realistic videos.
Actions as Space-Time Shapes
The method is fast, does not require video alignment, and is applicable in many scenarios where the background is known, and the robustness of the method is demonstrated to partial occlusions, nonrigid deformations, significant changes in scale and viewpoint, high irregularities in the performance of an action, and low-quality video.
Texture Based Description of Movements for Activity Analysis
This paper proposes a novel approach to activity analysis that describes human activities with texture features: it extracts spatially enhanced local binary pattern (LBP) histograms from temporal templates and models their temporal behavior with hidden Markov models.
Discriminative Topics Modelling for Action Feature Selection and Recognition
A novel framework for recognising realistic human actions in unconstrained environments is presented, based on computing a rich set of descriptors from key point trajectories; an adaptive feature fusion method is also developed to combine different local motion descriptors, improving model robustness against feature noise and background clutter.
Revisiting LBP-Based Texture Models for Human Action Recognition
A new method is proposed by revisiting LBP-based dynamic texture operators; it captures the similarity of motion around keypoints tracked by a real-time semi-dense point tracking method, and uses a self-similarity operator to highlight the geometric shape of rigid parts of the foreground object in a video sequence.
On Space-Time Interest Points
  • I. Laptev
  • Mathematics
    International Journal of Computer Vision
  • 2005
This paper builds on the idea of the Harris and Förstner interest point operators to detect local structures in space-time where the image values have significant local variations in both space and time, and illustrates how a video representation in terms of local space-time features allows detection of walking people in scenes with occlusions and dynamic cluttered backgrounds.
A selective spatio-temporal interest point detector for human action recognition in complex scenes
This paper presents a new approach to STIP detection that applies surround suppression combined with local and temporal constraints, and introduces a novel vocabulary-building strategy combining spatial pyramid and vocabulary compression techniques, resulting in improved performance and efficiency.
Recognizing action at a distance
A novel motion descriptor based on optical flow measurements in a spatiotemporal volume around each stabilized human figure is introduced, together with an associated similarity measure for use in a nearest-neighbor framework.
Action recognition based on sparse motion trajectories
This work presents a method that extracts effective features in videos for human action recognition by analyzing the 3D volumes along the sparse motion trajectories of a set of interest points from the video scene and generates a Bag-of-Features (BoF) model based on extracted features.
An Efficient Dense and Scale-Invariant Spatio-Temporal Interest Point Detector
This paper presents, for the first time, spatio-temporal interest points that are simultaneously scale-invariant (both spatially and temporally), densely cover the video content, and can be computed efficiently.