Esa-Pekka Salonen

Speech can be an efficient and natural means of communication between humans and computers. The development of speech applications requires techniques, methodology, and development tools capable of flexible and adaptive interaction, taking into account the needs of different users and different environments. In this paper, we discuss how the needs of …
In spoken dialogue applications, dialogue management has conventionally been realized with a single monolithic dialogue manager implementing a comprehensive dialogue control model. We present a highly distributed system structure that enables the integration of different dialogue control approaches to handle spoken dialogues. With this structure it is …
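One way to picture such a distributed structure (the class and method names below are illustrative assumptions, not the system's actual API) is a manager that holds several dialogue control agents, asks each how well it could handle the current dialogue state, and delegates the turn to the best scorer:

    # Illustrative sketch: each dialogue control agent reports how well it can
    # handle the current state, and the manager delegates the turn to the
    # highest-scoring one. Names and heuristics are assumptions for illustration.

    class FormFillingAgent:
        def suitability(self, state):
            return 0.9 if state.get("open_slots") else 0.0
        def act(self, state):
            return f"Please give a value for {state['open_slots'][0]}."

    class FAQAgent:
        def suitability(self, state):
            return 0.7 if state.get("last_utterance", "").endswith("?") else 0.1
        def act(self, state):
            return "Let me answer that question."

    class DialogueManager:
        def __init__(self, agents):
            self.agents = agents
        def next_turn(self, state):
            agent = max(self.agents, key=lambda a: a.suitability(state))
            return agent.act(state)

    dm = DialogueManager([FormFillingAgent(), FAQAgent()])
    print(dm.next_turn({"open_slots": ["departure city"],
                        "last_utterance": "I want a ticket"}))

The point of the sketch is only that control models coexist as peers and are selected per turn, rather than being hard-wired into one monolithic manager.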
There is demand for subjective metrics in spoken dialogue system evaluation. SERVQUAL is a service quality evaluation method developed by marketing researchers. It produces a subjective measure of the gap between expectations and perceptions in five service quality dimensions common to all services. We present how the method was applied to spoken dialogue …
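As a rough illustration of the gap-score arithmetic SERVQUAL is built on (the dimension names follow the standard SERVQUAL literature; the item sets and rating data are invented, and this is not the adaptation described in the paper):

    # Minimal sketch of SERVQUAL-style gap scoring: perception minus expectation
    # per dimension, averaged over respondents' 1-7 Likert ratings.

    DIMENSIONS = ["tangibles", "reliability", "responsiveness", "assurance", "empathy"]

    def gap_scores(expectations, perceptions):
        """Both arguments map dimension -> list of 1-7 ratings."""
        scores = {}
        for dim in DIMENSIONS:
            e, p = expectations[dim], perceptions[dim]
            # negative gap = perceived quality falls short of what was expected
            scores[dim] = sum(p) / len(p) - sum(e) / len(e)
        return scores

    exp = {d: [6, 7, 6] for d in DIMENSIONS}   # what users expect of a good system
    per = {d: [5, 6, 4] for d in DIMENSIONS}   # what they perceived after use
    print(gap_scores(exp, per))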
We present how robustness and adaptivity can be supported by the spoken dialogue system architecture. AthosMail is a multilingual spoken dialogue system for the e-mail domain, developed in the EU-funded DUMAS project. It has a flexible system architecture supporting multiple components for input interpretation, dialogue management and output …
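A toy sketch of what "multiple components per task" can mean in practice: several interchangeable interpreters, dialogue managers and generators are registered, and one is picked at runtime, here by language and user experience. The component names and selection keys are assumptions for illustration, not the actual AthosMail components.

    # Hypothetical registry of alternative components per task; a pipeline is
    # assembled at runtime so the same architecture adapts to different users
    # and languages.

    COMPONENTS = {
        "input":    {"en": "EnglishParser", "fi": "FinnishParser", "sv": "SwedishParser"},
        "dialogue": {"novice": "SystemInitiativeDM", "expert": "MixedInitiativeDM"},
        "output":   {"en": "EnglishGenerator", "fi": "FinnishGenerator", "sv": "SwedishGenerator"},
    }

    def select_pipeline(language, user_experience):
        return (COMPONENTS["input"][language],
                COMPONENTS["dialogue"][user_experience],
                COMPONENTS["output"][language])

    print(select_pipeline("fi", "novice"))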
In this paper, we introduce the concept of integrated tutoring in speech applications. An integrated tutoring system teaches the use of a system to a user while he or she is using the system in a typical manner. Furthermore, we introduce the general principles of how to implement applications with integrated tutoring agents and present an example …
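A minimal sketch of the idea of a tutoring agent running alongside the application: it observes each exchange and interjects guidance only when a new user seems to be struggling. The trigger heuristic and names are assumptions for illustration, not the principles presented in the paper.

    # Toy "integrated tutor": observes user turns and offers a hint after
    # repeated recognition failures, instead of requiring a separate manual.

    class TutorAgent:
        def __init__(self):
            self.consecutive_failures = 0

        def observe(self, recognized_ok):
            self.consecutive_failures = 0 if recognized_ok else self.consecutive_failures + 1

        def hint(self):
            if self.consecutive_failures >= 2:
                return "Tip: you can say 'list commands' to hear what the system understands."
            return None

    tutor = TutorAgent()
    for utterance, ok in [("read mail", True), ("umm, the thing", False), ("do it", False)]:
        tutor.observe(ok)
        tip = tutor.hint()
        if tip:
            print(tip)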
Speech-based applications commonly come with web-based or printed manuals. Alternatively, the dialogue can be designed so that users should be able to start using the application on their own. We studied an alternative approach, an integrated tutor. The tutor participates in the interaction when new users learn to use a speech-based system. It teaches the …
Mobile devices, such as smartphones and personal digital assistants, can be used to implement efficient speech-based and multimodal interfaces. Most of the systems are server-based, and there is a need to distribute the dialogue management tasks between the terminal devices and the server. Since the technologies are not mature and the platforms are …
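One simple way to split dialogue-management work between terminal and server, sketched below under assumed names and routing rules (not a description of any particular system): the terminal resolves lightweight turns such as confirmations locally and forwards everything else to the server.

    # Illustrative terminal/server split: short control utterances are handled
    # on-device without a network round trip; the rest go to the dialogue server.

    LOCAL_COMMANDS = {"yes", "no", "repeat", "cancel"}

    def handle_on_terminal(utterance):
        return f"terminal handled '{utterance}' locally"

    def handle_on_server(utterance):
        # In a real system this would be a request to the remote dialogue server.
        return f"server parsed and managed '{utterance}'"

    def route(utterance):
        word = utterance.strip().lower()
        return handle_on_terminal(word) if word in LOCAL_COMMANDS else handle_on_server(word)

    print(route("yes"))
    print(route("read the message from Anna"))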
Graphical elements have been found very useful when spoken dialogue systems are developed and demonstrated. However, most spoken dialogue systems are designed for speech-only interaction and are very hard to extend to include graphical elements. We introduce a general model to visualize speech interfaces. Based on the model, we present an implemented …
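A toy sketch of one way to visualize a speech-only dialogue: elements of the spoken interaction (system prompt, recognition hypotheses, filled slots) are mapped onto simple graphical widgets. The widget vocabulary and state fields here are assumptions for illustration, not the model proposed in the paper.

    # Map a dialogue state onto graphical widgets so the speech interaction
    # can be shown on screen alongside, or instead of, audio alone.

    def visualize(dialogue_state):
        widgets = [("prompt_bar", dialogue_state["system_prompt"])]
        for hyp, conf in dialogue_state["asr_hypotheses"]:
            widgets.append(("hypothesis_item", f"{hyp} ({conf:.0%})"))
        for slot, value in dialogue_state["slots"].items():
            widgets.append(("slot_field", f"{slot}: {value or '?'}"))
        return widgets

    state = {
        "system_prompt": "Which message would you like to hear?",
        "asr_hypotheses": [("the first one", 0.82), ("the fourth one", 0.11)],
        "slots": {"message": None, "folder": "inbox"},
    }
    for widget in visualize(state):
        print(widget)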
The aim of this exploratory study was to examine how six matched triads of pre-service teachers share and construct metacognition in mathematical problem solving in the WorkMates learning environment, with or without a stimulated recall group interview. More specifically, we examined socially shared metacognition and performed a qualitative content …