We present a comprehensive evaluation of shot-based visual and audio features for the MediaEval 2013 Violent Scenes Detection Affect Task. To obtain visual features, we use global features, local SIFT features, and motion features. For audio features, the popular MFCC is employed. Besides these, we also evaluate the performance of mid-level features, which is …
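As a rough illustration of the shot-level audio features mentioned above, the sketch below computes an MFCC descriptor for one shot. It assumes librosa and numpy are available and that shot boundaries are already known; the function name and the mean/std pooling are illustrative assumptions, not the authors' implementation.

    # Minimal sketch of a shot-level MFCC descriptor (illustrative only)
    import numpy as np
    import librosa

    def shot_mfcc_descriptor(audio_path, start_sec, end_sec, n_mfcc=13):
        # Load only the audio belonging to this shot
        y, sr = librosa.load(audio_path, sr=None,
                             offset=start_sec, duration=end_sec - start_sec)
        # Frame-level MFCCs: shape (n_mfcc, n_frames)
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
        # Pool frames into one fixed-length shot descriptor (mean + std)
        return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

    # Hypothetical usage: descriptor for a shot spanning 12.0s to 15.5s
    # desc = shot_mfcc_descriptor("movie_audio.wav", 12.0, 15.5)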
We present a comprehensive evaluation of the performance of shot-based visual feature representations for the MediaEval 2012 Violent Scenes Detection Affect Task. Unlike the keyframe-based approach used last year, we apply shot-based features using the global features (color moments, color histogram, edge orientation histogram, and local binary patterns) for …
We present a comprehensive evaluation of the performance of visual feature representations for the MediaEval 2011 Violent Scenes Detection Task. As for global features, color moments, color histogram, edge orientation histogram, and local binary patterns are used. As for local features, keypoint detectors such as Harris Laplace, Hessian Laplace, Harris Affine, …
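The global features named in these abstracts (color moments, color histogram, edge orientation histogram, and local binary patterns) can be sketched roughly as follows. This is an illustrative approximation using OpenCV and scikit-image, not the authors' code; the bin counts, channel choices, and LBP parameters are assumptions.

    # Rough sketch of per-frame global features (illustrative only)
    import cv2
    import numpy as np
    from skimage.feature import local_binary_pattern

    def global_descriptor(frame_bgr, bins=32):
        hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
        # Color moments: per-channel mean and standard deviation
        moments = np.concatenate([hsv.reshape(-1, 3).mean(axis=0),
                                  hsv.reshape(-1, 3).std(axis=0)])
        # Color histogram over the hue channel
        hist = cv2.calcHist([hsv], [0], None, [bins], [0, 180]).flatten()
        hist = hist / (hist.sum() + 1e-8)
        # Edge orientation histogram from Sobel gradients, weighted by magnitude
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
        gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
        ang = np.arctan2(gy, gx)
        eoh, _ = np.histogram(ang, bins=bins, range=(-np.pi, np.pi),
                              weights=np.hypot(gx, gy))
        eoh = eoh / (eoh.sum() + 1e-8)
        # Local binary pattern histogram (uniform patterns, P=8, R=1)
        lbp = local_binary_pattern(gray, P=8, R=1, method="uniform")
        lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
        return np.concatenate([moments, hist, eoh, lbp_hist])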
Violent scene detection (VSD) is a challenging problem because of the heterogeneous content, large variations in video quality, and the semantic meaning of the concepts. The Violent Scenes Detection Task of MediaEval [1] provides a common dataset and evaluation protocol, thus enabling a fair comparison of methods. In this paper, we describe our VSD system used in …
Multimedia event detection has become a popular research topic due to the explosive growth of video data. The motion features in a video are often used to detect events because an event may contain specific actions or moving patterns. Raw motion features are first extracted from the entire video and then aggregated to form the final video …
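A minimal sketch of the extract-then-aggregate pipeline described above: per-frame motion features (here, magnitude-weighted orientation histograms of dense optical flow) are pooled into one video-level descriptor by averaging. The flow algorithm, its parameters, and the pooling choice are assumptions made for illustration, not the paper's method.

    # Sketch: raw motion features per frame pair, averaged over the video
    import cv2
    import numpy as np

    def video_motion_descriptor(video_path, bins=8):
        cap = cv2.VideoCapture(video_path)
        ok, prev = cap.read()
        prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY) if ok else None
        per_frame = []
        while ok:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            # Dense optical flow between consecutive frames
            flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                                0.5, 3, 15, 3, 5, 1.2, 0)
            mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
            # Orientation histogram weighted by flow magnitude
            hist, _ = np.histogram(ang, bins=bins, range=(0, 2 * np.pi),
                                   weights=mag)
            per_frame.append(hist / (hist.sum() + 1e-8))
            prev_gray = gray
        cap.release()
        # Aggregate frame-level motion features into one video-level vector
        return np.mean(per_frame, axis=0) if per_frame else np.zeros(bins)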
Violent scenes detection (VSD) is a challenging problem because of the heterogeneous content, large variations in video quality, and complex semantic meanings of the concepts involved. In the last few years, combining multiple features from different modalities has proven to be an effective strategy for general multimedia event detection (MED), but the specific …
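One common way to combine features from multiple modalities is late (score-level) fusion, sketched below under the assumption of one classifier per modality and a weighted average of their scores. The classifiers, weights, and helper names are illustrative assumptions, not the method described in the paper.

    # Sketch of late (score-level) fusion across modalities (illustrative only)
    import numpy as np
    from sklearn.svm import SVC

    def train_modalities(features_by_modality, labels):
        # One SVM per modality (e.g., "visual", "audio", "motion")
        return {name: SVC(probability=True).fit(X, labels)
                for name, X in features_by_modality.items()}

    def late_fusion_scores(models, features_by_modality, weights=None):
        # Weighted average of per-modality violence probabilities
        names = list(models)
        if weights is None:
            weights = {n: 1.0 / len(names) for n in names}
        probs = [weights[n] *
                 models[n].predict_proba(features_by_modality[n])[:, 1]
                 for n in names]
        return np.sum(probs, axis=0)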
The Affective Impact of Movies task aims to detect violent videos and the affective impact of those videos on viewers [9]. This is a challenging task not only because of the diversity of video content but also due to the subjectivity of human emotion. In this paper, we present a unified framework that can be applied to both subtasks: (i) induced affect detection, …
The MediaEval 2016 Predicting Media Interestingness (PMI) Task requires participants to retrieve the images and video segments that are considered the most interesting for a common viewer. This is a challenging problem not only because of the large complexity of the data but also due to the semantic meaning of interestingness. This paper provides an overview …