Yasunari Yoshitomi

A new integration method is presented to recognize the emotional expressions of humans. We attempt to use both voices and facial expressions. For voices, we use prosodic parameters such as pitch signals, energy, and their derivatives, which are trained with a Hidden Markov Model (HMM) for recognition. For facial expressions, we use feature parameters from…
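The prosodic front end described above (pitch, energy, and their derivatives per frame) can be sketched as follows. This is a minimal illustration, not the authors' configuration: the frame sizes and the autocorrelation pitch tracker are assumptions, and the resulting feature vectors would then be fed to an HMM trainer.

```python
import numpy as np

def prosodic_features(signal, sr=16000, frame_len=400, hop=160):
    """Frame-level pitch and log-energy with delta (derivative) features.

    Illustrative sketch of a prosodic front end; the framing parameters
    and the crude autocorrelation pitch estimate are assumptions.
    """
    frames = [signal[i:i + frame_len]
              for i in range(0, len(signal) - frame_len, hop)]
    energy, pitch = [], []
    for f in frames:
        energy.append(float(np.log(np.sum(f ** 2) + 1e-10)))
        # autocorrelation pitch estimate, searched in the 80-400 Hz band
        ac = np.correlate(f, f, mode="full")[frame_len - 1:]
        lo, hi = sr // 400, sr // 80
        lag = lo + int(np.argmax(ac[lo:hi]))
        pitch.append(sr / lag)
    energy, pitch = np.array(energy), np.array(pitch)
    # derivative features, completing the (pitch, energy, deltas) vector
    return np.stack([pitch, energy,
                     np.gradient(pitch), np.gradient(energy)], axis=1)

# example: a pure 200 Hz tone should give pitch estimates near 200 Hz
sr = 16000
t = np.arange(sr) / sr
feats = prosodic_features(np.sin(2 * np.pi * 200 * t), sr)
```

Each row is one frame's four-dimensional observation; an HMM per emotion class would be trained on sequences of such rows.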
We have proposed a method for facial expression recognition for a speaker using thermal image processing and a speech recognition system. In this study, using the speech recognition system, we have improved our system to save thermal images at three timing positions: just before speaking, and while speaking the phonemes of the first and last vowels. In this…
We investigated a method for facial expression recognition for a human speaker using thermal image processing and a speech recognition system. In this study, we improved our speech recognition system to save thermal images at three timing positions: just before speaking, and while speaking the phonemes of the first and last vowels. With this…
We have previously developed a method for the recognition of the facial expression of a speaker. For facial expression recognition, we previously selected three images: (i) just before speaking, (ii) speaking the first vowel, and (iii) speaking the last vowel in an utterance. Using the speech recognition system Julius, thermal static images are…
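The three-timing-position selection recurring in the entries above can be sketched as follows, assuming the recognizer (such as Julius) returns phoneme segments with timestamps. The segment format, frame rate, and vowel set are illustrative assumptions, not the published system's interface.

```python
# Sketch: pick video frame indices for (i) just before speaking,
# (ii) the first vowel, (iii) the last vowel of an utterance.
# Assumes (phoneme, start_s, end_s) segments from a recognizer.

VOWELS = {"a", "i", "u", "e", "o"}

def select_frames(segments, fps=30.0):
    """Return frame indices for the three timing positions."""
    vowel_segs = [s for s in segments if s[0] in VOWELS]
    if not vowel_segs:
        raise ValueError("utterance contains no vowel")
    speech_start = segments[0][1]
    before = max(0, int(speech_start * fps) - 1)  # just before speech onset
    first = int(vowel_segs[0][1] * fps)           # onset of first vowel
    last = int(vowel_segs[-1][1] * fps)           # onset of last vowel
    return before, first, last

# hypothetical alignment for an utterance starting at 0.5 s
segments = [("k", 0.50, 0.58), ("o", 0.58, 0.70),
            ("n", 0.70, 0.76), ("i", 0.76, 0.90)]
frames = select_frames(segments)
```

The returned indices would address frames in the synchronized thermal video, from which the three static images are saved.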
A method using a synthesis of facial expressions to create thermal facial images has been proposed for detecting transitions of emotional states. Thermal facial changes are caused by facial muscle movement, transitions of emotional states, and physiological change. Facial muscle movement may be intentional. For this reason, in order to…
Many real problems with uncertainties can be formulated as stochastic programming problems. In this study, the genetic algorithm (GA), which has recently been used for solving mathematical programming problems, is extended for use in uncertain environments. The modified GA is referred to as the GA in uncertain environments (GAUCE). In the method, the objective…
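The core idea of evaluating a GA's objective under uncertainty can be sketched as a toy GA whose fitness averages several noisy evaluations. This is an illustrative stand-in, not the paper's GAUCE: the operators, parameters, and averaging scheme are all assumptions.

```python
import random

def noisy_objective_ga(noisy_f, n_bits=10, pop=20, gens=40,
                       samples=5, seed=0):
    """Toy GA maximizing a noisy objective by averaging repeated
    evaluations; a minimal sketch of optimization under uncertainty."""
    rng = random.Random(seed)

    def decode(bits):  # map the bit string to a point in [0, 1]
        return int("".join(map(str, bits)), 2) / (2 ** n_bits - 1)

    def fitness(bits):  # average several noisy evaluations
        x = decode(bits)
        return sum(noisy_f(x, rng) for _ in range(samples)) / samples

    popn = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop)]
    for _ in range(gens):
        scored = sorted(popn, key=fitness, reverse=True)
        popn = scored[:2]                      # elitism
        while len(popn) < pop:
            a, b = rng.sample(scored[:10], 2)  # truncation selection
            cut = rng.randrange(1, n_bits)
            child = a[:cut] + b[cut:]          # one-point crossover
            if rng.random() < 0.1:             # bit-flip mutation
                child[rng.randrange(n_bits)] ^= 1
            popn.append(child)
    return decode(max(popn, key=fitness))

# hypothetical noisy objective with its maximum at x = 0.7
def noisy_f(x, rng):
    return -(x - 0.7) ** 2 + rng.gauss(0, 0.01)

best = noisy_objective_ga(noisy_f)
```

Averaging the noisy objective over several samples is one simple way to keep selection meaningful when single evaluations are unreliable.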
For facial expression recognition, we selected three images: (i) just before speaking, (ii) speaking the first vowel, and (iii) speaking the last vowel in an utterance. In this study, as a pre-processing module, we added a judgment function to distinguish a front-view face for facial expression recognition. A frame of the front-view face in a dynamic image…
For facial expression recognition, we previously selected three images: (1) just before speaking, and while speaking (2) the first vowel and (3) the last vowel of an utterance. A frame of the front-view face in a dynamic image was selected by estimating the face direction. Based on our method, we have been developing an on-line system for recognizing the facial…
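A front-view judgment like the one described above can be sketched by exploiting left-right symmetry: a frontal face image roughly mirrors about its vertical midline. The symmetry score and threshold below are illustrative assumptions, not the published face-direction estimator.

```python
import numpy as np

def is_front_view(face, tol=0.15):
    """Judge a front-view face by left-right intensity symmetry.

    Illustrative sketch: compares the left half with the mirrored right
    half; the normalized-difference threshold `tol` is an assumption.
    """
    h, w = face.shape
    left = face[:, : w // 2].astype(float)
    right = np.fliplr(face[:, w - w // 2 :]).astype(float)
    diff = np.mean(np.abs(left - right))
    return bool(diff / (np.mean(face) + 1e-9) <= tol)

# a uniform image is trivially symmetric; a horizontal ramp is not
symmetric = np.full((10, 10), 100.0)
ramp = np.tile(np.arange(10, dtype=float), (10, 1))
```

A frame passing this judgment would then be handed to the expression recognizer; non-frontal frames are discarded.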
We propose a new approach to sign language animation based on skin region detection in an infrared image. To generate several kinds of animations expressing personality and/or emotion appropriately, conventional systems require many manual operations. However, a promising way to reduce the workload is to manually refine an animation made automatically…
We previously developed a method for recognizing the facial expression of a speaker. For facial expression recognition, we selected three static images at the timing positions of just before speaking and while speaking the phonemes of the first and last vowels. Then, only the static image of the front-view face was used for facial expression recognition.…