S. Jothilakshmi

This paper addresses the issues in segmenting continuous speech into sub-word units using formants and support vector machines (SVMs). Many studies have been conducted to identify and discriminate vowels and consonants using acoustic/articulatory differences. In this study, the continuous speech is segmented into smaller speech units and each …
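A minimal sketch of the idea behind this abstract, assuming formant-style per-frame features and broad vowel/consonant labels; the feature front-end, class inventory, and segment-merging step are illustrative placeholders, not the paper's actual pipeline.

```python
# Hypothetical frame-wise SVM classification followed by merging runs of
# identical labels into segments (a toy stand-in for sub-word segmentation).
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Toy training data: rows are per-frame feature vectors (e.g. F1-F3 estimates);
# labels mark broad classes such as 0 = vowel-like, 1 = consonant-like.
X_train = np.random.rand(200, 3)
y_train = np.random.randint(0, 2, size=200)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_train, y_train)

def segment(frame_features, hop_s=0.01):
    """Label each frame, then merge runs of identical labels into segments."""
    labels = clf.predict(frame_features)
    segments, start = [], 0
    for i in range(1, len(labels) + 1):
        if i == len(labels) or labels[i] != labels[start]:
            segments.append((start * hop_s, i * hop_s, int(labels[start])))
            start = i
    return segments

print(segment(np.random.rand(50, 3)))
```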
In this paper we propose an unsupervised approach to speaker segmentation using an autoassociative neural network (AANN). Speaker segmentation aims at finding speaker change points in a speech signal, which is an important preprocessing step for audio indexing, spoken document retrieval, and multi-speaker diarization. The method extracts the speaker-specific …
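A hedged sketch of the AANN idea, assuming the network is trained to reconstruct feature frames from one analysis window and that a jump in reconstruction error on the next window signals a speaker change; the window sizes, threshold, and network shape are assumptions, not the paper's configuration.

```python
# Unsupervised speaker-change detection with an autoassociative
# (autoencoder-style) network, sketched with scikit-learn's MLPRegressor.
import numpy as np
from sklearn.neural_network import MLPRegressor

def change_points(features, win=100, threshold=2.0):
    """Flag frame indices where the reconstruction error of the right window
    exceeds `threshold` times that of the left (training) window."""
    points = []
    for t in range(win, len(features) - win, win):
        left, right = features[t - win:t], features[t:t + win]
        aann = MLPRegressor(hidden_layer_sizes=(8,), max_iter=300)
        aann.fit(left, left)  # train an identity mapping on the left window
        err_left = np.mean((aann.predict(left) - left) ** 2)
        err_right = np.mean((aann.predict(right) - right) ** 2)
        if err_right > threshold * err_left:
            points.append(t)
    return points

# Toy run on random "MFCC-like" frames.
print(change_points(np.random.rand(500, 13)))
```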
In this paper a Speech-to-Speech Translation (SST) system, focused mainly on translation from English to Dravidian languages (Tamil and Malayalam), is proposed. The three major techniques involved in the SST system are automatic continuous speech recognition, machine translation, and text-to-speech synthesis. In this paper, automatic …
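A minimal pipeline sketch of the three SST stages named in the abstract; the component functions are hypothetical stand-ins, not the recognizer, translator, or synthesizer the paper describes.

```python
# Chaining the three SST stages: ASR -> machine translation -> TTS.
from dataclasses import dataclass

@dataclass
class SSTPipeline:
    asr: callable          # English speech audio -> English text
    translate: callable    # English text -> Tamil/Malayalam text
    tts: callable          # target-language text -> synthesized audio

    def run(self, english_audio):
        text = self.asr(english_audio)
        target_text = self.translate(text)
        return self.tts(target_text)

# Toy stand-ins so the sketch runs end to end.
pipeline = SSTPipeline(
    asr=lambda audio: "hello world",
    translate=lambda text: f"<tamil translation of: {text}>",
    tts=lambda text: b"synthesized-waveform-bytes",
)
print(pipeline.run(b"raw-english-audio"))
```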
The visual equivalent of the human face exhibiting the articulatory expression of a unit of sound in spoken language is termed a viseme. Visemes can be used to teach hearing-impaired students visually and effectively. In this paper a method to construct three-dimensional human faces from a single 2D image is proposed. By this method, a generic 3D mesh of human …
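An illustrative sketch of the phoneme-to-viseme idea underlying this abstract: map each phoneme of an utterance to a viseme class that a 3D face model could then render. The mapping below is a small hypothetical subset, not the paper's inventory.

```python
# Hypothetical phoneme -> viseme lookup and sequence construction.
PHONEME_TO_VISEME = {
    "p": "bilabial", "b": "bilabial", "m": "bilabial",
    "f": "labiodental", "v": "labiodental",
    "a": "open_vowel", "i": "spread_vowel", "u": "rounded_vowel",
}

def viseme_sequence(phonemes):
    """Collapse consecutive identical visemes so a face model animates smoothly."""
    sequence = []
    for ph in phonemes:
        vis = PHONEME_TO_VISEME.get(ph, "neutral")
        if not sequence or sequence[-1] != vis:
            sequence.append(vis)
    return sequence

print(viseme_sequence(["m", "a", "m", "a"]))  # ['bilabial', 'open_vowel', ...]
```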
Machine translation is an essential approach for localization, and is especially appropriate in a linguistically diverse nation like India. Automatic translation between languages that are morphologically rich and syntactically different is generally regarded as a complex task. A number of machine translation systems have been proposed in the literature. But …