Iveel Jargalsaikhan

We present a method that extracts effective features in videos for human action recognition. The proposed method analyses the 3D volumes along the sparse motion trajectories of a set of interest points from the video scene. To represent human actions, we generate a Bag-of-Features (BoF) model based on the extracted features, and finally a support vector machine …
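A minimal sketch of this kind of bag-of-features pipeline, assuming local descriptors have already been extracted for each video and using scikit-learn's KMeans codebook and SVC classifier; variable names are illustrative and the trajectory-based feature extraction itself is not shown:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def build_codebook(descriptor_sets, n_words=200, seed=0):
    """Cluster all local descriptors into a visual vocabulary."""
    stacked = np.vstack(descriptor_sets)
    return KMeans(n_clusters=n_words, random_state=seed).fit(stacked)

def bof_histogram(codebook, descriptors):
    """Quantise one video's descriptors into a normalised BoF histogram."""
    words = codebook.predict(descriptors)
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)

def train_action_classifier(descriptor_sets, labels, n_words=200):
    """descriptor_sets: list of (n_i, d) arrays, one per training video."""
    codebook = build_codebook(descriptor_sets, n_words)
    X = np.array([bof_histogram(codebook, d) for d in descriptor_sets])
    clf = SVC(kernel="rbf", C=10.0).fit(X, labels)
    return codebook, clf
```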
This paper examines the impact that the choice of local descriptor has on human action classifier performance in the presence of static occlusion. This question is important when applying human action classification to surveillance video that is noisy, crowded, complex and incomplete. In real-world scenarios, it is natural that a human can be occluded by an …
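One simple way to emulate the static occlusion studied here is to mask a fixed region of every frame before descriptors are computed; a hedged sketch using OpenCV, with an illustrative rectangle rather than any region used in the paper:

```python
import cv2

def read_video(path):
    """Load all frames of a video into memory (fine for short clips)."""
    cap = cv2.VideoCapture(path)
    frames = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frames.append(frame)
    cap.release()
    return frames

def occlude_frames(frames, x, y, w, h):
    """Blank out a fixed rectangle in every frame to simulate a static occluder."""
    occluded = []
    for frame in frames:
        f = frame.copy()
        f[y:y + h, x:x + w] = 0  # fill the occluded region with black
        occluded.append(f)
    return occluded
```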
This paper presents work on integrating multiple computer vision-based approaches to surveillance video analysis to support user retrieval of video segments showing human activities. Applied computer vision using real-world surveillance video data is an extremely challenging research problem, independently of any information retrieval (IR) issues. Here we …
This demonstration shows the integration of video analysis and search tools to facilitate the interactive retrieval of video segments depicting specific activities from surveillance footage. The implementation was developed by members of the SAVASA project for participation in the interactive surveillance event detection (SED) task of TRECVid 2012. This …
In this paper we describe our participation in the interactive surveillance event detection task at TRECVid 2012. The system we developed comprised individual classifiers brought together behind a simple video search interface that enabled users to select relevant segments based on downsampled animated GIFs. Two types of user, ‘experts’ and ‘end …
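A hedged sketch of how such downsampled animated-GIF previews could be produced with ffmpeg; the path names, frame rate and width below are illustrative, not the project's actual settings:

```python
import subprocess

def make_gif_preview(video_path, gif_path, start, duration, fps=5, width=160):
    """Cut a segment from a video and render it as a small animated GIF."""
    cmd = [
        "ffmpeg", "-y",
        "-ss", str(start), "-t", str(duration),
        "-i", video_path,
        "-vf", f"fps={fps},scale={width}:-1",
        gif_path,
    ]
    subprocess.run(cmd, check=True)

# Example: a 4-second preview starting 30 s into the clip.
# make_gif_preview("segment.mp4", "segment.gif", start=30, duration=4)
```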
The process of transcoding videos, apart from being computationally intensive, can also be a rather complex procedure. The complexity refers to the choice of appropriate parameters for the transcoding engine, with the aim of decreasing video sizes, transcoding times and network bandwidth without degrading video quality beyond some threshold that event …
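As an illustration of the parameter trade-off described here, a hedged sketch that transcodes one clip at several H.264 CRF values and reports the resulting file sizes; quality measurement (e.g. PSNR or SSIM) would be added separately, and all names and values are illustrative:

```python
import os
import subprocess

def transcode(src, dst, crf, preset="medium"):
    """Re-encode a video with x264 at the given constant rate factor."""
    subprocess.run(
        ["ffmpeg", "-y", "-i", src,
         "-c:v", "libx264", "-crf", str(crf), "-preset", preset,
         "-c:a", "copy", dst],
        check=True,
    )

def sweep_crf(src, crf_values=(18, 23, 28)):
    """Try several CRF settings and report output size in megabytes."""
    for crf in crf_values:
        dst = f"out_crf{crf}.mp4"
        transcode(src, dst, crf)
        print(crf, round(os.path.getsize(dst) / 1e6, 1), "MB")
```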
We propose a video-graph-based human action recognition framework. Given an input video sequence, we extract spatio-temporal local features and construct a video graph that incorporates appearance and motion constraints to reflect the spatio-temporal dependencies among them. In particular, we extend the popular DBSCAN density-based clustering algorithm …
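A minimal sketch of density-based grouping of spatio-temporal interest points using scikit-learn's standard DBSCAN (not the paper's extended variant); features are assumed to be given as (x, y, t) locations plus a local descriptor:

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler

def cluster_features(points, descriptors, eps=0.8, min_samples=5):
    """
    Group spatio-temporal features by density.
    points:      (n, 3) array of (x, y, t) locations
    descriptors: (n, d) array of local appearance/motion descriptors
    """
    # Combine position and appearance, scaled so neither dominates the distance.
    X = StandardScaler().fit_transform(np.hstack([points, descriptors]))
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(X)
    return labels  # -1 marks noise points, other values are cluster ids
```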