In this paper we describe our TRECVID 2008 video retrieval experiments. The MediaMill team participated in three tasks: concept detection, automatic search, and interactive search. Rather than continuing to increase the number of concept detectors available for retrieval, our TRECVID 2008 experiments focus on increasing the robustness of a small set of…
This paper describes a novel method for browsing a large collection of news video by linking various forms of related video fragments together as threads. Each thread contains a sequence of shots with high feature-based similarity. Two interfaces are designed which use threads as the basis for browsing. One interface shows a minimal set of threads, and the…
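The abstract above describes threads as sequences of shots linked by high feature-based similarity. The paper does not specify the linking algorithm; as a minimal sketch, assuming shots are represented by feature vectors and chained greedily by cosine similarity (both assumptions for illustration, not the authors' actual method), one might write:

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def build_thread(shots, start, threshold=0.8):
    """Greedily extend a thread from a starting shot: repeatedly
    append the most similar unused shot, stopping when no candidate
    exceeds the similarity threshold."""
    thread = [start]
    used = {start}
    current = start
    while True:
        best, best_sim = None, threshold
        for idx, feat in enumerate(shots):
            if idx in used:
                continue
            sim = cosine(shots[current], feat)
            if sim > best_sim:
                best, best_sim = idx, sim
        if best is None:
            break
        thread.append(best)
        used.add(best)
        current = best
    return thread
```

For example, with three toy shots `[[1, 0], [0.9, 0.1], [0, 1]]` and threshold 0.5, a thread started at shot 0 picks up the visually similar shot 1 and stops before the dissimilar shot 2.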
In this paper we describe our TRECVID 2010 video retrieval experiments. The MediaMill team participated in three tasks: semantic indexing, known-item search, and instance search. The starting point for the MediaMill concept detection approach is our top-performing bag-of-words system of last year, which uses multiple color SIFT descriptors, sparse…
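The bag-of-words system mentioned above quantizes local descriptors (here, color SIFT) against a visual codebook and represents each frame as a word-frequency histogram. The actual MediaMill pipeline is far richer; as a toy sketch of just the hard-assignment histogram step, with a hypothetical codebook and descriptor set:

```python
import math

def nearest_word(descriptor, codebook):
    """Index of the closest codebook word (Euclidean distance)."""
    return min(range(len(codebook)),
               key=lambda i: math.dist(descriptor, codebook[i]))

def bag_of_words(descriptors, codebook):
    """Hard-assign each local descriptor to its nearest visual word
    and return an L1-normalized word-frequency histogram."""
    hist = [0.0] * len(codebook)
    for d in descriptors:
        hist[nearest_word(d, codebook)] += 1.0
    total = sum(hist)
    return [h / total for h in hist] if total else hist
```

The resulting fixed-length histogram can then be fed to a supervised learner such as an SVM, which is the general shape of bag-of-words concept detection.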
In this technical demonstration we showcase the MediaMill ForkBrowser for video retrieval. It embeds multiple query methods into a single browsing environment. We show that users can switch query methods on demand without the need to adapt to a different interface. This allows for fast and effective search through large video collections.
In this demonstration we present xTAS, an open source web service developed at the University of Amsterdam which allows processing the multilingual textual content of documents in a timely manner. We showcase the architecture of xTAS, together with several demonstrators built on top of it.
In this technical demonstration we showcase the MediaMill system, a search engine that facilitates access to news video archives at a semantic level. The core of the system is an unprecedented lexicon of 100 automatically detected semantic concepts. Based on this lexicon we demonstrate how users can obtain highly relevant retrieval results using…
Interactive prototypes are often the best way to convince an audience of a new multimedia technology's possible impact. Because of its dynamic audiovisual nature, a multimedia application demonstration communicates applied science more effectively than a static description in a journal publication would. Ideally, a multimedia demonstrator grasps the…
In this paper we describe our TRECVID 2007 experiments. The MediaMill team participated in two tasks: concept detection and search. For concept detection we extract region-based image features, on grid, keypoint, and segmentation level, which we combine with various supervised learners. In addition, we explore the utility of temporal image features. A late…
In this paper we present the methods underlying the MediaMill semantic video search engine. The basis for the engine is a semantic indexing process which is currently based on a lexicon of 491 concept detectors. To support the user in navigating the collection, the system defines a visual similarity space, a semantic similarity space, a semantic thread…