Miguel S. Dias

  • Vítor Duarte Teixeira, Carlos Galinho Pires, Fernando Miguel Pinto, João Freitas, Miguel Sales Dias, Eduarda Mendes Rodrigues
  • 2011
This paper presents a multimodal prototype application that aims to promote the social integration of the elderly. The application enables communication with their social network through conferencing and social media services, using natural interaction modalities such as speech, touch, and gestures. We begin by discussing the requirements and design guidelines …
A key issue in video object tracking is the representation of the objects and how effectively it discriminates between different objects. Several techniques have been proposed, but no method is generally accepted. While analyses and comparisons of these individual methods have been presented in the literature, their evaluation as part of a global …
  • C. Pires, F. Pinto, V. Teixeira, J. Freitas, M. Sales Dias, +4 others
  • 2012
Living Home Center: a personal assistant with multimodal interaction for elderly and mobility-impaired e-inclusion. This paper presents an application that allows mobility-impaired and elderly users to interact with Internet-based audiovisual communication services, using multimodal natural user interaction. This platform, …
the journey of this study, he supported me in every aspect. He was the one who introduced me to this research area, inspired me with his enthusiasm for research, and showed me which paths to take in doing science. I learned a great deal from him, and this thesis would not have been possible without him. I owe special gratitude to my mother and father for their continuous, …
Nasality is a very important characteristic of several languages, European Portuguese being one of them. This paper addresses the challenge of nasality detection in surface electromyography (EMG) based speech interfaces. We explore the existence of useful information about the velum movement and also assess whether muscles deeper in the face and neck region …
Silent Speech Interfaces use data from the speech production process, such as visual information about face movements. However, using a single modality limits the amount of available information. In this study we start to explore the use of multiple data input modalities in order to acquire a more complete representation of the speech production model. We have …
  • João Freitas, António Teixeira, Samuel Silva, Catarina Oliveira, Miguel Sales Dias
  • 2012
Nasality is a very important characteristic of several languages, especially European Portuguese. This paper addresses the challenge of nasality detection in EMG-based speech interfaces. By combining EMG data with real-time imaging information, we explore the existence of useful information in the EMG data about velum movement. Results indicate that it is …
Visual speech animation, or lip synchronization, is the process of matching speech with the lip movements of a virtual character. It is a challenging task because all facial poses must be controlled and synchronized with the audio signal. Existing language-independent systems usually require fine-tuning by an artist to avoid artefacts appearing in the …
In this paper, we describe the theoretical foundations and engineering approach of an infrared-optical tracking system specially designed for large-scale immersive virtual environment (VE) or augmented reality (AR) settings. The system described is capable of tracking independent retro-reflective markers arranged in a 3D structure (artefact) in real time …