In this paper we describe our TRECVID 2008 video retrieval experiments. The MediaMill team participated in three tasks: concept detection, automatic search, and interactive search. Rather than continuing to increase the number of concept detectors available for retrieval, our TRECVID 2008 experiments focus on increasing the robustness of a small set of…
This paper describes a novel method for browsing a large collection of news video by linking various forms of related video fragments together as threads. Each thread contains a sequence of shots with high feature-based similarity. Two interfaces are designed which use threads as the basis for browsing. One interface shows a minimal set of threads, and the…
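As a rough illustration of the thread idea above, the sketch below chains shots into a thread by repeatedly following the most similar not-yet-used shot. The feature vectors, the cosine similarity measure, and the `build_thread` helper are assumptions made for this example, not the paper's exact method.

```python
import numpy as np

def build_thread(features, start, length=5):
    """Greedily chain shots into a thread by following, at each step,
    the most similar shot (cosine similarity) that is not yet in the thread."""
    # Normalize so dot products equal cosine similarities.
    normed = features / np.linalg.norm(features, axis=1, keepdims=True)
    thread, used, current = [start], {start}, start
    for _ in range(length - 1):
        sims = normed @ normed[current]
        sims[list(used)] = -np.inf      # never revisit a shot
        current = int(np.argmax(sims))
        thread.append(current)
        used.add(current)
    return thread

# Toy example: 100 shots with random 64-d features, thread starting at shot 0.
rng = np.random.default_rng(0)
shots = rng.random((100, 64))
print(build_thread(shots, start=0))
```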
In this paper we describe our TRECVID 2010 video retrieval experiments. The MediaMill team participated in three tasks: semantic indexing, known-item search, and instance search. The starting point for the MediaMill concept detection approach is our top-performing bag-of-words system of last year, which uses multiple color SIFT descriptors, sparse…
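The bag-of-words step mentioned above can be pictured roughly as follows: local descriptors are quantized against a visual codebook and pooled into a histogram per frame. The random stand-in descriptors, the codebook size, and the `bow_histogram` helper below are illustrative assumptions; the actual system uses multiple color SIFT descriptors.

```python
import numpy as np
from sklearn.cluster import MiniBatchKMeans

def bow_histogram(descriptors, codebook):
    """Quantize local descriptors against a visual codebook and return an
    L1-normalized bag-of-words histogram."""
    words = codebook.predict(descriptors)
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)

# Toy stand-in for (color) SIFT: 128-d local descriptors.
rng = np.random.default_rng(0)
train_descriptors = rng.random((5000, 128))
codebook = MiniBatchKMeans(n_clusters=256, random_state=0).fit(train_descriptors)

frame_descriptors = rng.random((300, 128))
print(bow_histogram(frame_descriptors, codebook).shape)  # (256,)
```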
Category search can be supported by methods that allow intelligent selection of potentially relevant images. This paper explores the use of a nearest neighbor network in the selection process. We created a prototype that visualizes the network of images. Because the nearest neighbor network connects each image to similar images, we assume that if an image…
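A minimal sketch of how such a nearest neighbor network could support selection: images already marked relevant are expanded with their neighbors in a k-nearest-neighbor graph. The `expand_selection` helper, the value of k, and the synthetic features are hypothetical; the prototype's visualization and interaction are not modeled here.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def expand_selection(features, relevant_ids, k=5):
    """Suggest candidates by taking the k nearest neighbors of every image
    the user has marked relevant, excluding the marked images themselves."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(features)
    _, neighbors = nn.kneighbors(features[relevant_ids])
    suggested = set(neighbors.ravel()) - set(relevant_ids)
    return sorted(suggested)

rng = np.random.default_rng(0)
images = rng.random((200, 32))          # 200 images, 32-d feature vectors
print(expand_selection(images, relevant_ids=[3, 17]))
```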
In this technical demonstration we showcase the MediaMill system, a search engine that facilitates access to news video archives at a semantic level. The core of the system is an unprecedented lexicon of 100 automatically detected semantic concepts. Based on this lexicon we demonstrate how users can obtain highly relevant retrieval results using…
Interactive prototypes are often the best way to convince an audience of a new multimedia technology's possible impact. Because of its dynamic audiovisual nature, a multimedia application demonstration communicates applied science more effectively than a static description in a journal publication would. Ideally, a multimedia demonstrator grasps the…
In this paper we describe our TRECVID 2007 experiments. The MediaMill team participated in two tasks: concept detection and search. For concept detection we extract region-based image features, on grid, keypoint, and segmentation level, which we combine with various supervised learners. In addition, we explore the utility of temporal image features. A late…
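To make the "features combined with supervised learners" step concrete, the sketch below trains separate classifiers on two feature types and averages their per-concept scores, a simple form of late fusion. The `fuse_scores` helper, the SVM choice, and the averaging rule are illustrative assumptions rather than the paper's exact configuration.

```python
import numpy as np
from sklearn.svm import SVC

def fuse_scores(scores_per_feature):
    """Average the concept scores produced by classifiers trained on
    different feature types."""
    return np.mean(scores_per_feature, axis=0)

rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=200)                 # concept present / absent
grid_feats, key_feats = rng.random((200, 40)), rng.random((200, 60))

clf_grid = SVC(probability=True).fit(grid_feats, y)   # grid-level features
clf_key = SVC(probability=True).fit(key_feats, y)     # keypoint-level features

test_grid, test_key = rng.random((10, 40)), rng.random((10, 60))
fused = fuse_scores([clf_grid.predict_proba(test_grid)[:, 1],
                     clf_key.predict_proba(test_key)[:, 1]])
print(fused)
```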