TRECVID 2015 - An Overview of the Goals, Tasks, Data, Evaluation Mechanisms and Metrics

@inproceedings{Over2015TRECVID,
  title={TRECVID 2015 - An Overview of the Goals, Tasks, Data, Evaluation Mechanisms and Metrics},
  author={P. Over and G. Awad and J. Fiscus and Brian Antonishek and M. Michel and Wessel Kraaij and A. Smeaton and G. Qu{\'e}not},
  booktitle={TRECVID},
  year={2015}
}
The TREC Video Retrieval Evaluation (TRECVID) 2015 was a TREC-style video analysis and retrieval evaluation, the goal of which remains to promote progress in content-based exploitation of digital video via open, metrics-based evaluation. Over the last ten years this effort has yielded a better understanding of how systems can effectively accomplish such processing and how one can reliably benchmark their performance. TRECVID is funded by the National Institute of Standards and Technology (NIST)…

Citations

IRISA at TrecVid 2015: Leveraging Multimodal LDA for Video Hyperlinking
This paper presents the runs submitted in the context of the TRECVid 2015 Video Hyperlinking task and discusses the performance obtained by the respective runs, as well as some of the limitations of the evaluation process.
Nagoya University at TRECVID 2014: the Instance Search Task
This paper takes the asymmetric-dissimilarity-based system, which performed best in the INS2013 task, as the baseline and re-ranks it with an improved spatial verification method, showing that the re-ranking algorithm further improves the baseline system at a rather fast speed.
Improving semantic video indexing: Efforts in Waseda TRECVID 2015 SIN system
  • K. Ueki, Tetsunori Kobayashi
  • Computer Science
  • 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
  • 2016
This paper proposes a method for improving the performance of semantic video indexing by extracting features from multiple convolutional neural networks, creating multiple classifiers, and integrating them; four measures are employed to accomplish this.
CMU-SMU@TRECVID 2015: Video Hyperlinking
CMU-SMU's participation in the Video Hyperlinking task of TRECVID 2015 is described; the results show that context does not generally improve performance, that search performance relies mainly on textual features, and that combining audio and visual features yields no improvement.
UEC at TRECVID 2014 SIN task
Four runs were submitted for the SIN task of TRECVID 2014, including one run submitted the previous year as a progress run on the 2014 dataset; the best run achieved a mean infAP of 0.1537.
MediaMill at TRECVID 2014: Searching Concepts, Objects, Instances and Events in Video
The 2014 edition of the TRECVID benchmark has again been a fruitful participation for the MediaMill team, resulting in the best result for concept detection and object localization.
ITI-CERTH participation to TRECVID 2015
This paper provides an overview of the tasks submitted to TRECVID 2015 by ITI-CERTH, which participated in the Known-item search (KIS) as well as the Semantic Indexing (SIN) and Event…
Semantic Video Trailers
This paper proposes an unsupervised label propagation approach for query-based video summarization that effectively captures the multimodal semantics of queries and videos using state-of-the-art deep neural networks and creates a summary that is both semantically coherent and visually attractive.
Uploader models for video concept detection
  • B. Mérialdo, U. Niaz
  • Computer Science
  • 2014 12th International Workshop on Content-Based Multimedia Indexing (CBMI)
  • 2014
It is observed that the improvement is generally lower for the best runs than for the weaker runs, and that tuning the models for each concept independently produces a much more significant improvement.
ORAND at TRECVID 2015: Instance Search and Video Hyperlinking Tasks
The participation of the ORAND team in the Instance Search (INS) and Video Hyperlinking (LNK) tasks of TRECVID 2015 is described; several score propagation algorithms are tested, of which those based on low-level features achieve the best performance.

References

TRECVID 2010 – An Introduction to the Goals, Tasks, Data, Evaluation Mechanisms, and Metrics
The TREC Video Retrieval Evaluation (TRECVID) 2010 was a TREC-style video analysis and retrieval evaluation, the goal of which remains to promote progress in content-based exploitation of digital…
TRECVID 2006 Overview
The TREC Video Retrieval Evaluation (TRECVID) 2006 represents the sixth running of a TREC-style video retrieval evaluation, the goal of which remains to promote progress in content-based retrieval…
The TRECVid 2008 Event Detection evaluation
The event detection evaluation was organized to address detection of a set of specific events that would be of potential interest to an operator in the surveillance domain.
VISOR: Towards On-the-Fly Large-Scale Object Category Retrieval
This paper compares state-of-the-art encoding methods and introduces a novel cascade retrieval architecture, showing that new visual concepts can be learnt on-the-fly, given a text description, so that images of that category can be retrieved from the dataset in real time.
Video Corpus Annotation Using Active Learning
This paper describes the collaborative annotation system used to annotate the High Level Features (HLF) in the development set of TRECVID 2007 and shows that Active Learning simultaneously extracts the most useful information from the partial annotation and significantly reduces the annotation effort per participant relative to previous collaborative annotations.
Creating HAVIC: Heterogeneous Audio Visual Internet Collection
The HAVIC (Heterogeneous Audio Visual Internet Collection) Corpus will ultimately consist of several thousand hours of unconstrained user-generated multimedia content, designed with an eye toward providing increased challenges for both acoustic and video processing technologies.
One-sided measures for evaluating ranked retrieval effectiveness with spontaneous conversational speech
This work proposes a new class of measures for speech retrieval based on manual annotation of the points at which a user with specific topical interests would wish replay to begin, replacing measures based on known topic boundaries, which are no longer well matched to the nature of the materials.
New Metrics for Meaningful Evaluation of Informally Structured Speech Retrieval
Two new metrics for evaluating search effectiveness on informally structured speech data are introduced: mean average segment precision (MASP), which measures retrieval performance in terms of both content segmentation and ranking with respect to relevance; and mean average segment distance-weighted precision (MASDWP), which takes into account the distance between the start of the relevant segment and the retrieved segment.
INEX 2007 Evaluation Measures
The official measures of retrieval effectiveness employed for the Ad Hoc Track at INEX 2007 are described; whereas in earlier years all, and only, XML elements could be retrieved, the result format has been liberalized to arbitrary passages.
The scholarly impact of TRECVid (2003-2009)
An investigation into the scholarly impact of the TRECVid (Text Retrieval and Evaluation Conference, Video Retrieval Evaluation) benchmarking conferences between 2003 and 2009 finds a strong relationship between ‘success’ at TRECVid and ‘success’ in citations, both for high-scoring and low-scoring teams.