Corpus ID: 3092113

ITI-CERTH participation to TRECVID 2015

@inproceedings{Moumtzidou2015ITICERTHPT,
  title={ITI-CERTH participation to TRECVID 2015},
  author={Anastasia Moumtzidou and Anastasios Dimou and Nikolaos Gkalelis and Stefanos Vrochidis and Vasileios Mezaris and Yiannis Kompatsiaris},
  booktitle={TRECVID},
  year={2015}
}
This paper provides an overview of the tasks submitted to TRECVID 2011 by ITI-CERTH. ITI-CERTH participated in the Known-Item Search (KIS), Semantic Indexing (SIN), and Event Detection in Internet Multimedia (MED) tasks. In the SIN task, techniques that combine motion information with existing well-performing descriptors such as SURF, Random Forests and Bag-of-Words for shot representation are developed. In the MED task, the trained concept detectors of the SIN task are…
Citations

ITI-CERTH participation in TRECVID 2018
An overview of the runs submitted to TRECVID 2018 by ITI-CERTH is provided, which includes a novel activity detection algorithm that is based on human detection in video frames, goal descriptors, dense trajectories, Fisher vectors and a discriminative action segmentation scheme.
Hybrid Space Learning for Language-based Video Retrieval
This paper proposes a dual deep encoding network that encodes videos and queries into powerful dense representations of their own and introduces hybrid space learning which combines the high performance of the latent space and the good interpretability of the concept space.
A Comparative Study on the Use of Multi-label Classification Techniques for Concept-Based Video Indexing and Annotation
An improved way of employing stacked models is proposed, using multi-label classification methods in the last level of the stack, which improves the effectiveness of the proposed framework compared to existing works.
Multimodal Fusion: Combining Visual and Textual Cues for Concept Detection in Video
Fusion and text analysis techniques for harnessing automatic speech recognition (ASR) transcripts or subtitles to improve the results of visual concept detection are introduced.
Query and Keyframe Representations for Ad-hoc Video Search
A set of NLP steps that cleverly analyse different parts of the query in order to convert it to related semantic concepts is presented, a new method for transforming concept-based keyframe and query representations into a common semantic embedding space is proposed, and it is shown that the proposed combination of concept-based representations with their corresponding semantic embeddings results in improved video search accuracy.
Machine learning architectures for video annotation and retrieval
This thesis designs machine learning methodologies for solving the problem of video annotation and retrieval using either pre-defined semantic concepts or ad-hoc queries, and proposes an approach to learn concept-specific representations that are sparse, linear combinations of representations of latent concepts.
Finding Semantically Related Videos in Closed Collections
This chapter presents efforts to detect semantic concepts in video shots, to help annotation and organization of content collections, and implements a system based on deep learning, featuring a number of advances and adaptations of existing algorithms to increase performance for the task.
Local Features and a Two-Layer Stacking Architecture for Semantic Concept Detection in Video
This paper proposes an improved way of employing stacked models, which capture concept correlations, using multilabel classification algorithms in the last layer of the stack, and examines and compares the effectiveness of the above algorithms in both semantic video indexing within a large video collection and in the somewhat different problem of individual video annotation with semantic concepts.
Dual Encoding for Zero-Example Video Retrieval
This paper takes a concept-free approach, proposing a dual deep encoding network that encodes videos and queries into powerful dense representations of their own and establishes a new state-of-the-art for zero-example video retrieval.
Video event recounting using mixture subclass discriminant analysis
A new feature selection method is used, in combination with a semantic model vector video representation, to enumerate the key pieces of semantic evidence of an event in a video signal and to decide which concepts provide the strongest evidence in support of the provided video-event link.

References

Showing 1–10 of 124 references
ITI-CERTH participation to TRECVID 2009 HLFE and Search
An overview of the tasks submitted to TRECVID 2009 by ITI-CERTH is provided, with interesting conclusions regarding the comparison of the involved retrieval functionalities as well as the strategies in interactive video search.
ITI-CERTH participation in TRECVID 2018
An overview of the runs submitted to TRECVID 2018 by ITI-CERTH is provided, which includes a novel activity detection algorithm that is based on human detection in video frames, goal descriptors, dense trajectories, Fisher vectors and a discriminative action segmentation scheme.
The COST292 experimental framework for TRECVID 2007
An overview of the four tasks submitted to TRECVID 2007 by COST292 is given; in the shot boundary (SB) detection task, four SB detectors were developed and their results merged using two merging algorithms.
The MediaMill TRECVID 2006 Semantic Video Search Engine
The 2008 edition of the TRECVID benchmark has been the most successful MediaMill participation to date, resulting in the top ranking for both concept detection and interactive search, and a runner-up ranking for automatic retrieval.
COST292 experimental framework for TRECVID2008
An overview of the four tasks submitted to TRECVID 2008 by COST292 is given, including the submission to the copy detection task and an interactive retrieval application that combines retrieval functionalities in various modalities with a user interface supporting automatic and interactive search over all submitted queries.
On the Use of Visual Soft Semantics for Video Temporal Decomposition to Scenes
The results show that the use of such semantic information, which the authors term "visual soft semantics", contributes to improved video decomposition to scenes.
Evaluation campaigns and TRECVid
An introduction to information retrieval (IR) evaluation from both a user and a system perspective is given, highlighting that system evaluation is by far the most prevalent type of evaluation carried out.
K-Space at TRECvid 2006
K-Space participated in two tasks, high-level feature extraction and search, in TRECVid 2006, making use of tools and techniques from each partner; the K-Space team consisted of eight partner institutions from the EU-funded K-Space Network.
TRECVID 2015 - An Overview of the Goals, Tasks, Data, Evaluation Mechanisms and Metrics
The TREC Video Retrieval Evaluation (TRECVID) 2011 was a TREC-style video analysis and retrieval evaluation, the goal of which remains to promote progress in content-based exploitation of digital…
Re-ranking by local re-scoring for video indexing and retrieval
A re-ranking method that improves the performance of semantic video indexing and retrieval by re-evaluating the scores of the shots according to the homogeneity and the nature of the video they belong to is proposed.