Corpus ID: 229924349

Investigating Memorability of Dynamic Media

@article{LKhc2020InvestigatingMO,
  title={Investigating Memorability of Dynamic Media},
  author={Phúc H. Lê Khắc and Ayush Rai and Graham Healy and Alan F. Smeaton and Noel E. O’Connor},
  journal={ArXiv},
  year={2020},
  volume={abs/2012.15641}
}
The Predicting Media Memorability task in MediaEval’20 has some challenging aspects compared to previous years. In this paper we identify the highly dynamic content of the videos and the limited size of the dataset as the core challenges for the task. We propose directions to overcome some of these challenges and present our initial results in these directions.
