RUC at MediaEval 2016 Emotional Impact of Movies Task: Fusion of Multimodal Features

@inproceedings{Chen2016RUCAM,
  title={RUC at MediaEval 2016 Emotional Impact of Movies Task: Fusion of Multimodal Features},
  author={Shizhe Chen and Qin Jin},
  booktitle={MediaEval},
  year={2016}
}
In this paper, we present our approaches for the MediaEval Emotional Impact of Movies Task. We extract features from multiple modalities, including audio, image, and motion. SVR and Random Forest are used as our regression models, and late fusion is applied to combine the different modalities. Experimental results show that multimodal late fusion is beneficial for predicting global affect and continuous arousal, and that using CNN features can further boost performance. But for continuous…
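
The pipeline outlined in the abstract (a separate regressor per modality, with predictions combined by late fusion) can be sketched roughly as below. The feature dimensions, modality names, SVR hyperparameters, and equal fusion weights are illustrative assumptions, not the authors' actual configuration.

```python
# Rough sketch of per-modality regression with late fusion, as described in the
# abstract. Feature shapes, modality names, and equal fusion weights are
# illustrative assumptions, not the paper's actual setup.
import numpy as np
from sklearn.svm import SVR
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Hypothetical per-modality features for the same set of movie clips.
n_train, n_test = 200, 50
modalities = {
    "audio":  (rng.normal(size=(n_train, 64)),  rng.normal(size=(n_test, 64))),
    "image":  (rng.normal(size=(n_train, 128)), rng.normal(size=(n_test, 128))),
    "motion": (rng.normal(size=(n_train, 32)),  rng.normal(size=(n_test, 32))),
}
y_train = rng.normal(size=n_train)  # e.g. arousal annotations

# Train one regressor per modality (SVR here; Random Forest is analogous).
preds = []
for name, (X_train, X_test) in modalities.items():
    model = SVR(kernel="rbf", C=1.0)
    # model = RandomForestRegressor(n_estimators=100, random_state=0)  # alternative
    model.fit(X_train, y_train)
    preds.append(model.predict(X_test))

# Late fusion: combine per-modality predictions with (here, equal) weights.
weights = np.ones(len(preds)) / len(preds)
fused_prediction = np.average(np.vstack(preds), axis=0, weights=weights)
print(fused_prediction[:5])
```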
