Juha-Pekka Rajaniemi

We present a multimodal media center interface based on a novel combination of new modalities. The application is based on a combination of a large high-definition display and a mobile phone. Users can interact with the system using speech input (speech recognition), physical touch (touching physical icons with the mobile phone), and gestures. We present …
We present a multimodal media center interface designed for blind and partially sighted people. It features a zooming focus-plus-context graphical user interface coupled with speech output and haptic feedback. A multimodal combination of gestures, key input, and speech input is utilized to interact with the interface. The interface has been developed and …
We present a multimodal media center interface based on speech input, gestures, and haptic feedback. For special user groups, including visually and physically impaired users, the application features a zoomable context + focus GUI in tight combination with speech output and full speech-based control. These features have been developed in cooperation with …
We present a multimodal media center interface based on speech input, gestures, and haptic feedback (hapticons). In addition, the application includes a zoomable context + focus GUI in tight combination with speech output. The resulting interface is designed for and evaluated with different user groups, including visually and physically impaired users.
In this paper, we present results from a long-term user pilot study of a speech-controlled media center. The pilot users in this case were physically disabled, and the system was installed in their apartment for six weeks. We designed a multimodal media center interface based on speech. Full speech control is provided with a hands-free speech recognition input …
Awareness of shared work activities is important for fostering professional relationships. This paper introduces visualization techniques for a meeting support system based on automatic speech recognition and multisensory meeting information capture, including voice levels and participant comments. The system has been deployed in live meetings over the past …
A thorough understanding of subjective and objective measurements of speech-based interaction, especially of its user experience, is vital for practical application development. We present findings from two case studies in which multimodal applications containing speech input were evaluated using a subjective evaluation methodology. Responses were investigated …