Junghyun Ahn

The aims of this paper are threefold: it explores methods for detecting affective states in text, it presents the use of such affective cues in a conversational system, and it evaluates their effectiveness in a virtual reality setting. Valence and arousal values, used for generating the facial expressions of users' avatars, are also incorporated into the …
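The valence/arousal pairing mentioned above can be illustrated with a minimal sketch: a hypothetical function that maps a valence/arousal point to a coarse expression label and a smile blend-shape weight. The function name, the quadrant labels, and the weighting rule are illustrative assumptions, not the paper's actual model.

```python
# Hypothetical sketch: map valence/arousal in [-1, 1] to a coarse
# expression label (one per quadrant) plus a smile blend-shape weight.
# Labels and weighting are illustrative, not the paper's model.

def expression_from_va(valence: float, arousal: float) -> tuple[str, float]:
    """Return (label, smile_weight) for a valence/arousal pair."""
    if valence >= 0:
        label = "joy" if arousal >= 0 else "content"
    else:
        label = "anger" if arousal >= 0 else "sadness"
    # Smile intensity scales with positive valence only.
    smile_weight = max(0.0, valence)
    return label, smile_weight

print(expression_from_va(0.8, 0.5))  # positive valence, high arousal
```

In a real pipeline the weight would drive one of the avatar's facial blend shapes rather than a single scalar.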
This paper presents a novel concept: a graphical representation of human emotion extracted from text sentences. The major contributions of this paper are the following. First, we present a pipeline that extracts, processes, and renders the emotion of a 3D virtual human (VH). The extraction of emotion is based on data-mining statistics from large cyberspace …
Sentiment analysis programs are now sometimes used to detect patterns of sentiment use over time in online communication and to help automated systems interact better with users. Nevertheless, it seems that no previous published study has assessed whether the position of individual texts within ongoing communication can be exploited to help detect their …
Simulating a huge number of articulated figures in a real-time application is one of the challenging research topics in character animation. Several researchers have tried to improve the performance of animation using image-based techniques such as the 'impostor'. This method improved the speed of the animation; however, the accuracy, memory, and interactivity …
The communication between avatar and agent has already been treated from different but specialized perspectives. In contrast, this paper gives a balanced view of every key architectural aspect: from text analysis to computer graphics, the chatting system and the emotional model. Non-verbal communication, such as facial expression, gaze, or head orientation …
Achieving effective facial emotional expressivity within a real-time rendering constraint requires leveraging all possible sources of inspiration, especially observations of real individuals. One of them is the frequent asymmetry of facial expressions of emotions, which allows the expression of complex emotional feelings such as suspicion, smirking, and …
In this paper, we first present our crowd simulation method, Trajectory Variant Shift (TVS), based on the re-use of real pedestrian trajectories. We detail how to re-use and shift these trajectories to avoid collisions while retaining the liveliness of the captured data. Second, we conducted a user study in a four-screen CAVE to compare our approach with three others …
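The core idea of re-using and shifting trajectories can be sketched as follows: translate a recorded 2D trajectory and accept the shift only if it keeps a minimum clearance from trajectories already placed in the scene. All function names and the clearance value are hypothetical; the paper's actual TVS method is more involved.

```python
# Illustrative sketch of trajectory re-use with shifting: translate a
# recorded 2D trajectory and reject shifts that bring it within a minimum
# clearance of an existing one. Names and thresholds are hypothetical.

import math

def shift(traj, dx, dy):
    """Translate a trajectory (list of (x, y) points) by (dx, dy)."""
    return [(x + dx, y + dy) for x, y in traj]

def min_distance(a, b):
    """Smallest pairwise distance between the points of two trajectories."""
    return min(math.dist(p, q) for p in a for q in b)

def collision_free(candidate, others, clearance=0.5):
    """Accept a shifted trajectory only if it keeps its clearance."""
    return all(min_distance(candidate, o) >= clearance for o in others)

recorded = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
existing = [recorded]
candidate = shift(recorded, 0.0, 2.0)
print(collision_free(candidate, existing))  # True: closest gap is 2.0 >= 0.5
```

A pointwise minimum-distance check is quadratic in trajectory length; a real-time system would instead use spatial hashing or per-timestep proximity tests, but the acceptance criterion is the same.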