Pernilla Qvarfordt

We describe a new approach to information retrieval: algorithmic mediation for intentional, synchronous collaborative exploratory search. Using our system, two or more users with a common information need search together, simultaneously. The collaborative system provides tools, user interfaces and, most importantly, algorithmically-mediated retrieval to …
We submitted six interactive search runs for TRECVID2007, including two single-user and four collaborative runs. In one single-user run (FXPAL MMA) the searchers had access to all resources, including text search, text similarity, image similarity, and concept similarity search. In the other single-user submission (FXPAL MMV) the transcripts were not available …
We describe Cerchiamo, a collaborative exploratory search system that allows teams of searchers to explore document collections synchronously. Working with Cerchiamo, team members use independent interfaces to run queries, browse results, and make relevance judgments. The system mediates the team members' search activity by passing and reordering search …
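To illustrate the kind of mediation described above, the following is a minimal sketch in which one searcher's results are reordered using terms drawn from documents a teammate has already judged relevant. The term-boosting heuristic, the document format, and all function names are assumptions made for illustration, not Cerchiamo's actual algorithm.

```python
from collections import Counter


def relevant_term_profile(judged_docs):
    """Build a term-frequency profile from documents teammates judged relevant."""
    profile = Counter()
    for doc in judged_docs:
        profile.update(doc["text"].lower().split())
    return profile


def mediate_ranking(results, judged_docs, weight=0.5):
    """Reorder one searcher's results, boosting documents whose terms
    overlap with the team's relevant-term profile."""
    profile = relevant_term_profile(judged_docs)

    def boosted_score(doc):
        terms = doc["text"].lower().split()
        overlap = sum(profile[t] for t in terms) / (len(terms) or 1)
        return doc["score"] + weight * overlap

    return sorted(results, key=boosted_score, reverse=True)


if __name__ == "__main__":
    judged = [{"text": "collaborative exploratory search interfaces"}]
    results = [
        {"id": 1, "score": 0.9, "text": "image retrieval benchmark"},
        {"id": 2, "score": 0.7, "text": "collaborative search interfaces study"},
    ]
    for doc in mediate_ranking(results, judged):
        print(doc["id"], doc["text"])
```

Under this toy heuristic, a document that matches the team's emerging notion of relevance can be promoted past higher-scoring but off-topic results before it is shown to the second searcher.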
The modern workplace is inherently collaborative, and this collaboration relies on effective communication among co-workers. Many communication tools -- email, blogs, wikis, Twitter, etc. -- have become increasingly available and accepted in workplace communications. In this paper, we report on a study of communication technologies used over a one-year …
In this paper we present an architecture for multi-modal dialogue systems. We illustrate it with our development of a multi-modal information system for local bus timetable information. The system is based on a natural language interface for typed interaction that has been enhanced to also handle multi-modal interaction. The multi-modal user interface was …
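One way such an enhancement can work is sketched below: a deictic reference in a typed utterance ("from here") is resolved against a recent pointing gesture on a map before the query reaches the language back end. The message formats, the time-window rule, and the stop names are illustrative assumptions, not the system's actual design.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class PointingEvent:
    stop_name: str      # map object the user pointed at
    timestamp: float


@dataclass
class TypedUtterance:
    text: str
    timestamp: float


def resolve_query(utterance: TypedUtterance, gesture: Optional[PointingEvent],
                  max_gap: float = 2.0) -> str:
    """Replace a deictic word with the pointed-at stop if a gesture occurred
    within max_gap seconds of the typed utterance."""
    if gesture and abs(gesture.timestamp - utterance.timestamp) <= max_gap:
        for deictic in ("here", "there", "this stop"):
            if deictic in utterance.text:
                return utterance.text.replace(deictic, gesture.stop_name)
    return utterance.text


if __name__ == "__main__":
    utt = TypedUtterance("when does the next bus leave from here", timestamp=10.2)
    tap = PointingEvent(stop_name="Central Station", timestamp=9.8)
    print(resolve_query(utt, tap))
```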
In certain applications such as radiology and imagery analysis, it is important to minimize errors. In this paper we evaluate a structured inspection method that uses eye tracking information as a feedback mechanism to the image inspector. Our two-phase method starts with a free viewing phase during which gaze data is collected. During the next phase, we …
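A minimal sketch of how gaze data from the free viewing phase could feed the second phase: fixations are binned over a grid, and cells that received little or no gaze are flagged for re-inspection. The grid size, coverage threshold, and function names are assumptions for illustration, not values or methods from the paper.

```python
import numpy as np


def gaze_coverage(fixations, image_size, grid=(8, 8)):
    """Count fixations falling into each cell of a grid laid over the image."""
    width, height = image_size
    rows, cols = grid
    counts = np.zeros(grid, dtype=int)
    for x, y in fixations:
        r = min(int(y / height * rows), rows - 1)
        c = min(int(x / width * cols), cols - 1)
        counts[r, c] += 1
    return counts


def uninspected_cells(fixations, image_size, grid=(8, 8), min_fixations=1):
    """Return grid cells that received fewer fixations than the threshold."""
    counts = gaze_coverage(fixations, image_size, grid)
    return [tuple(idx) for idx in np.argwhere(counts < min_fixations)]


if __name__ == "__main__":
    fixations = [(120, 80), (130, 90), (600, 400)]
    print(uninspected_cells(fixations, image_size=(800, 600), grid=(4, 4)))
```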
Advances in technology are making it possible for users to interact with computers by various modalities, often through speech and gesture. Such multimodal interaction is attractive because it mimics the patterns and skills in natural human-human communication. To date, research in this area has primarily focused on giving commands to computers. The focus …