In this paper, we introduce Block Jam, a Tangible User Interface that controls a dynamic polyrhythmic sequencer using 26 physical artifacts. These physical artifacts, which we call blocks, are a new type of input device for manipulating an interactive music system. The blocks' functional and topological statuses are tightly coupled to an ad hoc sequencer, …
Behavioral coding focuses on deriving higher-level behavioral annotations from observational data of human interactions. Automatically identifying salient events in the observed signal data could lead to a deeper understanding of how specific events in an interaction correspond to the perceived high-level behaviors of the subjects. In this paper, we …
Therapist language plays a critical role in influencing the overall quality of psychotherapy. Notably, it is a major contributor to the perceived level of empathy expressed by therapists, a primary measure for judging their efficacy. We explore psycholinguistics-inspired features for predicting therapist empathy. These features model language which …
We present Barista, an open-source framework for concurrent speech processing based on the Kaldi speech recognition toolkit and the libcppa actor library. With Barista, we aim to provide an easy-to-use, extensible framework for constructing highly customizable concurrent (and/or distributed) networks for a variety of speech processing tasks. Each Barista …
Signal-derived measures can provide effective ways of quantifying human behavior. Verbal Response Latencies (VRLs) of children with Autism Spectrum Disorders (ASD) during conversational interactions can convey valuable information about their cognitive and social skills. Motivated by the inherent gap between the external behavior and inner …
Embodied conversational agents (ECA) offer platforms for the collection of structured interaction and communication data. This paper discusses the data collected from the Rachel system, an ECA developed at the University of Southern California, for interactions with children with autism. Two dyads, each composed of a child with autism and his parent, …
Speech and spoken language cues offer a valuable means to measure and model human behavior. Computational models of speech behavior have the potential to support health care through assistive technologies, informed intervention, and efficient long-term monitoring. The Interspeech 2013 Autism Sub-Challenge addresses two developmental disorders that manifest …
We present a method for characterizing salient behavioral events from audiovisual data of dyadic human interactions. This behavioral signal processing work is aimed at supporting observational analysis by domain experts such as psychologists and clinicians. We extract prosodic and spectral speech features as well as visual motion vector features on …