Gaze movement-driven random forests for query clustering in automatic video annotation

Stefanos Vrochidis, I. Patras and Yiannis Kompatsiaris. Multimedia Tools and Applications.
In recent years, the rapid increase in the volume of multimedia content has led to the development of several automatic annotation approaches. In parallel, the wide availability of large amounts of user interaction data has revealed the need for automatic annotation techniques that exploit implicit user feedback during interactive multimedia retrieval tasks. In this context, this paper proposes a method for automatic video annotation by exploiting implicit user feedback during… 
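As a rough illustration of the technique the title names (not the paper's actual implementation), a random forest can be trained on aggregated gaze features to predict whether a viewed shot is relevant to a query. The feature set below (fixation duration, fixation count, saccade length, pupil dilation) and the synthetic data are assumptions for the sketch:

```python
# Minimal sketch, assuming hypothetical gaze features and synthetic data:
# a random forest that predicts shot relevance from aggregated gaze behaviour.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical features per viewed shot: mean fixation duration (ms),
# fixation count, mean saccade length (px), relative pupil dilation.
n = 200
relevant = rng.normal([420, 9, 80, 1.08], [60, 2, 20, 0.04], size=(n, 4))
irrelevant = rng.normal([250, 4, 140, 1.00], [60, 2, 30, 0.04], size=(n, 4))
X = np.vstack([relevant, irrelevant])
y = np.array([1] * n + [0] * n)

forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Score an unseen shot with long fixations and dilated pupils.
prob_relevant = forest.predict_proba([[430, 10, 75, 1.07]])[0, 1]
```

Shots scored as relevant by many users' gaze patterns could then feed the annotation step; the clustering of queries described in the paper is not reproduced here.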

Inter-Brain EEG Feature Extraction and Analysis for Continuous Implicit Emotion Tagging During Video Watching

An EEG-based real-time emotion tagging approach that extracts inter-brain features from a group of participants watching the same emotional video clips, evaluated with a three-round behavioral rating paradigm.

An eye-tracking-based approach to facilitate interactive video search

The evaluation shows that important information can be extracted from aggregated gaze movements during video retrieval tasks, while incorporating pupil dilation data further improves system performance and facilitates interactive video search.

Exploiting gaze movements for automatic video annotation

A method for automatic video annotation that exploits gaze movements during interactive video retrieval; the evaluation shows that aggregated gaze data can be used effectively for annotation purposes.

Utilizing Implicit User Feedback to Improve Interactive Video Retrieval

A framework where the video is first indexed according to temporal, textual, and visual features, and implicit user feedback analysis is then realized using a graph-based methodology that encodes the semantic relations between video segments based on past user interaction and is subsequently used to generate recommendations.
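The graph-based idea can be sketched minimally: segments become nodes, edge weights count how often past sessions touched both segments, and recommendations are a segment's strongest neighbours. The session logs and weighting scheme below are assumptions for illustration, not the paper's actual methodology:

```python
# Minimal sketch, assuming hypothetical session logs: a graph whose nodes are
# video segments and whose edge weights count co-occurrence in past sessions.
from collections import defaultdict
from itertools import combinations

sessions = [  # hypothetical interaction logs: segments visited per session
    ["s1", "s2", "s3"],
    ["s2", "s3"],
    ["s1", "s3", "s4"],
]

graph = defaultdict(lambda: defaultdict(int))
for visited in sessions:
    for a, b in combinations(visited, 2):
        graph[a][b] += 1
        graph[b][a] += 1

def recommend(segment, k=2):
    """Return up to k segments most strongly linked to `segment`."""
    neighbours = graph[segment]
    return sorted(neighbours, key=neighbours.get, reverse=True)[:k]
```

A real system would weight edges by richer interaction signals (dwell time, query context) rather than plain co-occurrence counts.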

GaZIR: gaze-based zooming interface for image retrieval

GaZIR is a gaze-based interface for browsing and searching images that computes online predictions of image relevance from implicit feedback; when the user zooms in, the images predicted to be most relevant are brought forward.

Gaze-Based Relevance Feedback for Realizing Region-Based Image Retrieval

The novelties of this work are the introduction of a new set of gaze features for predicting the user's relevance assessment at region level, and the design of a time-efficient and effective object-based relevance feedback (RF) framework for image retrieval.

Evaluating the implicit feedback models for adaptive video retrieval

This paper explores the effectiveness of a number of interfaces and feedback mechanisms, compares their relative performance using a simulated evaluation methodology, and shows that a search interface combining explicit and implicit features performs comparatively better.

Information Retrieval by Inferring Implicit Queries from Eye Movements

A new search strategy in which the information retrieval (IR) query is inferred from eye movements measured while the user reads text during an IR task; relevance predictions for a large set of unseen documents are ranked significantly better than by random guessing.

Can relevance of images be inferred from eye movements?

It is shown that, at least in reasonably controlled setups, even fairly simple features and classifiers can detect relevance from eye movements alone, without using any explicit feedback.

A Novel Image Retrieval System with Real-Time Eye Tracking

The implicit feedback in IRSET is implemented online and in real time, which clearly distinguishes IRSET from other systems relying on implicit feedback.

Eye movement as an interaction mechanism for relevance feedback in a content-based image retrieval system

The primary focus of this paper is to evaluate the possibility of inferring the relevance of images from eye movement data; a decision tree is proposed to predict the user's input during image search tasks.
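A decision tree of the kind this summary describes can be sketched as follows; the features (viewing time, number of revisits) and the synthetic training data are assumptions for illustration, not the paper's actual experiment:

```python
# Minimal sketch, assuming hypothetical features and synthetic data: a decision
# tree that maps eye-movement measurements for an image to relevant/irrelevant.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)

# Hypothetical per-image features: total viewing time (s), number of revisits.
X = np.vstack([
    rng.normal([2.5, 3.0], [0.5, 1.0], size=(100, 2)),  # relevant images
    rng.normal([0.8, 0.5], [0.3, 0.5], size=(100, 2)),  # irrelevant images
])
y = np.array([1] * 100 + [0] * 100)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# An image viewed for a long time and revisited often is classified relevant.
label = tree.predict([[2.4, 3]])[0]
```

A shallow tree like this is easy to inspect, which matters when the goal is understanding which gaze behaviours signal relevance rather than maximizing accuracy.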