Corpus ID: 3092113

ITI-CERTH participation to TRECVID 2015

@inproceedings{Moumtzidou2015ITICERTHPT,
  title={ITI-CERTH participation to TRECVID 2015},
  author={Anastasia Moumtzidou and Anastasios Dimou and Nikolaos Gkalelis and Stefanos Vrochidis and Vasileios Mezaris and Yiannis Kompatsiaris},
  booktitle={TREC Video Retrieval Evaluation},
  year={2015}
}
This paper provides an overview of the tasks submitted to TRECVID 2011 by ITI-CERTH. ITI-CERTH participated in the Known-item search (KIS) as well as in the Semantic Indexing (SIN) and the Event Detection in Internet Multimedia (MED) tasks. In the SIN task, techniques are developed that combine motion information with existing well-performing descriptors such as SURF, Random Forests and Bag-of-Words for shot representation. In the MED task, the trained concept detectors of the SIN task are… 
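
The SIN pipeline named in the abstract (local descriptors, Bag-of-Words quantization, Random Forest classification) can be illustrated roughly as follows. This is a minimal sketch, not the authors' implementation: ORB stands in for the patented SURF descriptor (SURF requires the non-free opencv-contrib build), and the keyframe files and labels are hypothetical.

```python
# Sketch: Bag-of-Words shot representation + Random Forest concept detector.
# ORB stands in for SURF; `keyframes` and `labels` are hypothetical.
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

def local_descriptors(image_path, detector):
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if img is None:
        return np.empty((0, 32))
    _, desc = detector.detectAndCompute(img, None)
    return desc if desc is not None else np.empty((0, 32))

def bow_histogram(desc, codebook):
    # Assign each descriptor to its nearest visual word, count occurrences.
    words = (codebook.predict(desc.astype(np.float32))
             if len(desc) else np.zeros(0, dtype=int))
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return hist / (hist.sum() or 1.0)  # L1-normalize per shot

detector = cv2.ORB_create(nfeatures=500)
keyframes = ["shot001.jpg", "shot002.jpg"]     # hypothetical shot keyframes
labels = [1, 0]                                # hypothetical concept labels

all_desc = np.vstack([local_descriptors(p, detector) for p in keyframes])
codebook = KMeans(n_clusters=64, n_init=10).fit(all_desc.astype(np.float32))

X = np.array([bow_histogram(local_descriptors(p, detector), codebook)
              for p in keyframes])
clf = RandomForestClassifier(n_estimators=100).fit(X, labels)
scores = clf.predict_proba(X)[:, 1]            # per-shot concept scores
```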

Citations

ITI-CERTH participation in TRECVID 2018

An overview of the runs submitted to TRECVID 2020 by ITI-CERTH is provided, which includes participation in the Ad-hoc Video Search, Disaster Scene Description and Indexing and Activities in Extended Video tasks.

ITI-CERTH participation in ActEV and AVS Tracks of TRECVID 2021

The ITI-CERTH team improves its framework, in terms of more accurate performance, by addressing the classification problem in the ActEV task as multi-label rather than single-label.

Hybrid Space Learning for Language-based Video Retrieval

This paper proposes a dual deep encoding network that encodes videos and queries into powerful dense representations of their own and introduces hybrid space learning which combines the high performance of the latent space and the good interpretability of the concept space.

A Comparative Study on the Use of Multi-label Classification Techniques for Concept-Based Video Indexing and Annotation

An improved way of employing stacked models is proposed, using multi-label classification methods in the last level of the stack to improve the effectiveness of the framework compared to existing works.
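
As a rough illustration of this stacking idea (not the paper's exact setup): first-layer detectors score each concept independently, and a second-layer multi-label model is fit on the resulting score vectors so that concept correlations can be exploited. The data below is synthetic.

```python
# Sketch: two-layer stacking. Layer 1 scores each concept independently;
# layer 2 is a multi-label classifier chain fit on the score vectors, so
# correlations between concepts can be exploited. Data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.multioutput import ClassifierChain

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))                  # synthetic shot features
Y = (rng.random((200, 4)) < 0.3).astype(int)    # 4 binary concept labels

# Layer 1: one independent detector per concept -> score vector per shot.
layer1 = [LogisticRegression(max_iter=1000).fit(X, Y[:, c]) for c in range(4)]
scores = np.column_stack([m.predict_proba(X)[:, 1] for m in layer1])

# Layer 2: a classifier chain predicts all labels jointly, each link seeing
# the previous links' predictions, which captures concept correlations.
layer2 = ClassifierChain(LogisticRegression(max_iter=1000)).fit(scores, Y)
refined = layer2.predict(scores)                # refined multi-label output
```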

Multimodal Fusion: Combining Visual and Textual Cues for Concept Detection in Video

Fusion and text analysis techniques for harnessing automatic speech recognition (ASR) transcripts or subtitles to improve the results of visual concept detection are introduced.
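
A minimal sketch of such late fusion, assuming TF-IDF similarity between ASR transcripts and textual concept descriptions as the textual cue; the transcripts, concept lexicon, detector scores, and fusion weight are all illustrative, not taken from the paper.

```python
# Sketch: late fusion of visual concept scores with text scores derived from
# ASR transcripts. Transcripts, lexicon, scores, and weight are illustrative.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

transcripts = ["a man talks about his new car engine",
               "crowd cheering at a football stadium"]      # hypothetical ASR
concepts = {"car": "car vehicle engine driving road",
            "sports": "football stadium match crowd game"}  # concept lexicon

vec = TfidfVectorizer().fit(transcripts + list(concepts.values()))
text_scores = cosine_similarity(vec.transform(transcripts),
                                vec.transform(list(concepts.values())))

visual_scores = np.array([[0.7, 0.1],    # hypothetical detector output,
                          [0.2, 0.8]])   # shots x concepts

alpha = 0.6                              # fusion weight, tuned on a dev set
fused = alpha * visual_scores + (1 - alpha) * text_scores
```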

Query and Keyframe Representations for Ad-hoc Video Search

A set of NLP steps that cleverly analyse different parts of the query in order to convert it to related semantic concepts are presented, a new method for transforming concept-based keyframe and query representations into a common semantic embedding space is proposed, and it is shown that the proposed combination of concept-based representations with their corresponding semantic embeddings results in improved video search accuracy.
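
The common-space idea can be sketched as follows, with a toy random-vector lookup standing in for a pretrained word-embedding model such as word2vec; the vocabulary, dimensions, and detected-concept list are hypothetical.

```python
# Sketch: map a query and a keyframe's detected concepts into one semantic
# space by averaging word embeddings. The random `vocab` stands in for a
# pretrained embedding model; all names here are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
vocab = {w: rng.normal(size=50) for w in
         ["dog", "puppy", "running", "park", "grass", "outdoor"]}

def embed(words):
    # Average the embeddings of in-vocabulary words, then L2-normalize.
    vecs = [vocab[w] for w in words if w in vocab]
    v = np.mean(vecs, axis=0) if vecs else np.zeros(50)
    return v / (np.linalg.norm(v) or 1.0)

query_vec = embed("a dog running in a park".split())
keyframe_vec = embed(["puppy", "grass", "outdoor"])  # detected concepts

similarity = float(query_vec @ keyframe_vec)  # rank keyframes by this score
```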

Machine learning architectures for video annotation and retrieval

This thesis designs machine learning methodologies for solving the problem of video annotation and retrieval using either pre-defined semantic concepts or ad-hoc queries, and proposes an approach to learn concept-specific representations that are sparse, linear combinations of representations of latent concepts.
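
The sparse-combination idea can be illustrated with an L1-penalized fit: given a dictionary of latent concept representations, a concept's representation is approximated by a sparse mixture over them. This is a toy sketch on synthetic vectors, not the thesis's algorithm.

```python
# Sketch: approximate a concept's representation as a sparse linear
# combination of latent concept representations via an L1-penalized fit.
# The latent dictionary and target vector are synthetic.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
latent = rng.normal(size=(20, 100))        # 20 latent concept representations
true_coef = np.zeros(20)
true_coef[[2, 7, 11]] = [0.9, -0.4, 0.6]   # only three latents really used
target = true_coef @ latent                # the concept's representation

# The L1 penalty drives most mixing weights to exactly zero.
model = Lasso(alpha=0.01, fit_intercept=False).fit(latent.T, target)
support = np.flatnonzero(np.abs(model.coef_) > 1e-6)  # recovered sparse support
```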

Finding Semantically Related Videos in Closed Collections

This chapter presents efforts to detect semantic concepts in video shots, to help annotation and organization of content collections, and implements a system based on deep learning, featuring a number of advances and adaptations of existing algorithms to increase performance for the task.

Local Features and a Two-Layer Stacking Architecture for Semantic Concept Detection in Video

This paper proposes an improved way of employing stacked models, which capture concept correlations, using multi-label classification algorithms in the last layer of the stack, and examines and compares the effectiveness of the above algorithms both in semantic video indexing within a large video collection and in the somewhat different problem of individual video annotation with semantic concepts.

Dual Encoding for Zero-Example Video Retrieval

This paper takes a concept-free approach, proposing a dual deep encoding network that encodes videos and queries into powerful dense representations of their own and establishes a new state-of-the-art for zero-example video retrieval.
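
A compact sketch of the dual-encoding idea: two independent encoders project frame features and word embeddings into a shared space, and retrieval is nearest-neighbour search there. Depth, dimensions, and pooling are simplified relative to the paper, and the inputs are synthetic.

```python
# Sketch of dual encoding: independent video and text encoders map each
# modality into a shared space. Simplified vs. the actual paper.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, in_dim, hid, out_dim):
        super().__init__()
        self.gru = nn.GRU(in_dim, hid, batch_first=True, bidirectional=True)
        self.proj = nn.Linear(in_dim + 2 * hid, out_dim)

    def forward(self, x):                      # x: (batch, seq, in_dim)
        mean_pool = x.mean(dim=1)              # global average (level 1)
        rnn_out, _ = self.gru(x)               # temporal context (level 2)
        joint = torch.cat([mean_pool, rnn_out.mean(dim=1)], dim=-1)
        return nn.functional.normalize(self.proj(joint), dim=-1)

video_enc = Encoder(in_dim=2048, hid=512, out_dim=256)   # CNN frame features
text_enc = Encoder(in_dim=300, hid=512, out_dim=256)     # word embeddings

videos = torch.randn(8, 30, 2048)    # 8 clips x 30 frames (synthetic)
queries = torch.randn(8, 12, 300)    # 8 sentences x 12 tokens (synthetic)

v, q = video_enc(videos), text_enc(queries)
sim = q @ v.t()                      # cosine similarities, queries x videos
# Train with a triplet/contrastive ranking loss so matching pairs score high.
```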

References

ITI-CERTH participation to TRECVID 2009 HLFE and Search

An overview of the tasks submitted to TRECVID 2009 by ITI-CERTH is provided, drawing interesting conclusions regarding the comparison of the involved retrieval functionalities as well as the strategies in interactive video search.

The MediaMill TRECVID 2006 Semantic Video Search Engine

The 2008 edition of the TRECVID benchmark has been the most successful MediaMill participation to date, resulting in the top ranking for both concept detection and interactive search, and a runner-up ranking for automatic retrieval.

COST292 experimental framework for TRECVID2008

An overview of the four tasks submitted to TRECVID 2008 by COST292 is given, including the submission to the copy detection task and an interactive retrieval application that combines retrieval functionalities in various modalities with a user interface supporting automatic and interactive search over all submitted queries.

Evaluation campaigns and TRECVid

An introduction to information retrieval (IR) evaluation from both a user and a system perspective is given, highlighting that system evaluation is by far the most prevalent type of evaluation carried out.

On the Use of Visual Soft Semantics for Video Temporal Decomposition to Scenes

The results show that the use of such semantic information, which the authors term "visual soft semantics", contributes to improved video decomposition to scenes.
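
One way to picture "visual soft semantics": each shot is represented by its vector of concept-detector scores, and a scene boundary is placed where consecutive vectors diverge. This is a toy sketch with an illustrative threshold, not the paper's method.

```python
# Sketch: concept-score vectors per shot ("soft semantics"); adjacent shots
# with similar vectors are grouped into one scene. Scores and the threshold
# are illustrative.
import numpy as np

shot_scores = np.array([[0.9, 0.1, 0.0],   # shot 0
                        [0.8, 0.2, 0.1],   # shot 1: similar -> same scene
                        [0.1, 0.1, 0.9],   # shot 2: new scene
                        [0.2, 0.0, 0.8]])  # shot 3: similar to shot 2

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

boundaries = [0]
for i in range(1, len(shot_scores)):
    if cos(shot_scores[i - 1], shot_scores[i]) < 0.7:   # semantic change
        boundaries.append(i)
scenes = np.split(np.arange(len(shot_scores)), boundaries[1:])
# -> [array([0, 1]), array([2, 3])]: two scenes of two shots each
```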

K-Space at TRECvid 2006

K-Space participated in two tasks of TRECVid 2006, high-level feature extraction and search, making use of tools and techniques from each partner; the K-Space team consisted of eight partner institutions from the EU-funded K-Space Network.

Re-ranking by local re-scoring for video indexing and retrieval

A re-ranking method is proposed that improves the performance of semantic video indexing and retrieval by re-evaluating shot scores according to the homogeneity and the nature of the video they belong to.
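
The local re-scoring idea can be sketched as blending each shot's score with the scores of the other shots of its video, on the assumption that videos are thematically homogeneous; the blend weight and scores below are illustrative.

```python
# Sketch: local re-scoring. Each shot's concept score is blended with the
# mean score of the other shots of the same video. Weight is illustrative.
import numpy as np

video_ids = np.array([0, 0, 0, 1, 1, 2])            # video of each shot
scores = np.array([0.9, 0.2, 0.8, 0.1, 0.15, 0.7])  # initial shot scores

reranked = scores.copy()
for v in np.unique(video_ids):
    idx = np.flatnonzero(video_ids == v)
    for i in idx:
        others = scores[idx[idx != i]]
        context = others.mean() if len(others) else scores[i]
        reranked[i] = 0.7 * scores[i] + 0.3 * context   # local re-scoring
```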

The challenge problem for automated detection of 101 semantic concepts in multimedia

We introduce the challenge problem for generic video indexing to gain insight in intermediate steps that affect performance of multimedia analysis methods, while at the same time fostering repeatability of experiments.

...