User-generated video collections have been expanding rapidly in recent years, and systems for the automatic analysis of these collections are in high demand. While extensive research efforts have been devoted to recognizing semantics like “birthday party” and “skiing”, few attempts have been made to understand the emotions carried by the videos, e.g., “joy” and …
Despite growing research interest, emotion understanding for user-generated videos remains a challenging problem. Major obstacles include the diversity and complexity of video content, as well as the sparsity of expressed emotions. For the first time, we systematically study large-scale video emotion recognition by transferring deep feature encodings. In …
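As a rough illustration of this kind of feature-encoding transfer (not the specific pipeline described in the paper), an image-pretrained CNN can serve as a fixed encoder for sampled frames, with the frame encodings pooled into a video-level vector and a lightweight emotion classifier trained on top; the ResNet-50 backbone, frame sampling, and logistic-regression head below are assumptions of this sketch.

```python
# Hedged sketch: transfer deep feature encodings from an image-pretrained CNN
# to video emotion recognition. ResNet-50, uniform frame sampling, and the
# logistic-regression head are illustrative assumptions, not the paper's setup.
import torch
import torchvision.models as models
import torchvision.transforms as T
from sklearn.linear_model import LogisticRegression

# Pretrained backbone used as a fixed encoder (classification head removed).
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def encode_video(frames):
    """Average-pool frame-level encodings into one video-level feature vector.

    `frames` is a list of PIL images sampled from the video.
    """
    with torch.no_grad():
        feats = torch.stack([
            backbone(preprocess(f).unsqueeze(0)).squeeze(0) for f in frames
        ])
    return feats.mean(dim=0).numpy()

# Transfer step: fit a lightweight emotion classifier on the encodings, e.g.
#   X = [encode_video(fs) for fs in train_frame_lists]; y = emotion_labels
#   clf = LogisticRegression(max_iter=1000).fit(X, y)
```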
Emotional content is a key element in user-generated videos. However, it is difficult to understand emotions conveyed in such videos due to the complex and unstructured nature of user-generated content and the sparsity of video frames that express emotion. In this paper, for the first time, we study the problem of transferring knowledge from heterogeneous …
Despite growing research interest, the tasks of predicting the interestingness of images and videos remain an open challenge. The main obstacles come from both the diversity and complexity of video content and the highly subjective, varying judgements of interestingness across different people. In the MediaEval 2016 Predicting Media Interestingness Task, our …
This paper introduces a novel approach for fast summarization of user-generated videos (UGV). Unlike other types of videos, whose semantic content may vary greatly over time, most UGVs contain only a single shot with relatively consistent high-level semantics and emotional content. Therefore, a few representative segments are generally …
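A minimal sketch of the representative-segment idea (the concrete method in the paper may differ): cluster per-segment features and keep the segment closest to each cluster centre. The segment length, feature extractor, and k-means selection are assumptions of this sketch.

```python
# Hedged sketch of representative-segment selection for a single-shot UGV:
# cluster per-segment features and keep the segment nearest each centroid.
# Feature choice and k-means are assumptions, not the paper's algorithm.
import numpy as np
from sklearn.cluster import KMeans

def pick_representative_segments(segment_features, num_picks=3):
    """Return sorted indices of representative segments.

    `segment_features` is an (n_segments, dim) array, e.g. pooled frame
    descriptors per fixed-length segment (the extractor is assumed).
    """
    km = KMeans(n_clusters=num_picks, n_init=10, random_state=0).fit(segment_features)
    picks = []
    for c in range(num_picks):
        members = np.where(km.labels_ == c)[0]
        dists = np.linalg.norm(
            segment_features[members] - km.cluster_centers_[c], axis=1)
        picks.append(int(members[np.argmin(dists)]))
    return sorted(picks)
```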