Our goal is to develop a coplayer music robot capable of producing musical expression together with humans. Although many instrument-playing robots exist, they can have difficulty performing with human players because they lack a synchronization function. The robot has to follow differences in the human's performance, such as temporal fluctuations, to …
This paper presents a novel synchronization method for a human-robot ensemble using coupled oscillators. We define an ensemble as a synchronized performance produced through interactions between independent players. To attain a better synchronized performance, the robot should predict the human's behavior to reduce the difference between the human's and the robot's …
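The abstract's full oscillator model is behind the truncation; as a minimal sketch of the coupled-oscillator idea, the following toy simulation (the gain k and the human tempo profile are made up) lets a robot phase variable track a fluctuating human beat phase through a Kuramoto-style coupling term:

```python
import numpy as np

def simulate_ensemble(steps=2000, dt=0.01, k=2.0):
    """Toy ensemble: the robot's phase is pulled toward the human's
    phase by a Kuramoto-style coupling term, so the robot's effective
    tempo tracks the human's fluctuating tempo."""
    phi_h, phi_r = 0.0, 0.5                 # beat phases (radians)
    omega_r = 2 * np.pi * 2.0               # robot base tempo: 2 beats/s (120 BPM)
    err = []
    for t in range(steps):
        # human tempo fluctuates slowly around 2 beats/s
        omega_h = 2 * np.pi * (2.0 + 0.1 * np.sin(0.5 * t * dt))
        phi_h += omega_h * dt
        # k*sin(...) speeds the robot up when it lags, slows it when ahead
        phi_r += (omega_r + k * np.sin(phi_h - phi_r)) * dt
        err.append(np.angle(np.exp(1j * (phi_h - phi_r))))
    return np.array(err)

print(f"final |phase error| = {abs(simulate_ensemble()[-1]):.3f} rad")
```

As long as the coupling gain exceeds the tempo mismatch, the phase error stays bounded and small, which is the mechanism the abstract appeals to.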
Sound source localization and separation from a mixture of sounds are essential functions for computational auditory scene analysis. The main challenges are designing a unified framework for joint optimization and estimating the sound sources under auditory uncertainties such as reverberation or an unknown number of sounds. Since sound source localization and …
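The paper's unified framework is not shown in the truncated abstract. As a baseline illustration of the localization half of the problem, here is the standard GCC-PHAT estimator of the time difference of arrival (TDOA) between two microphones, a common building block for sound source localization (the sampling rate and delay in the toy check are made up):

```python
import numpy as np

def gcc_phat(x, y, fs, max_tau=0.001):
    """Estimate the time difference of arrival of x relative to y
    using the GCC-PHAT weighting, a standard SSL building block."""
    n = len(x) + len(y)
    X, Y = np.fft.rfft(x, n=n), np.fft.rfft(y, n=n)
    R = X * np.conj(Y)
    R /= np.abs(R) + 1e-12                    # PHAT: keep only phase information
    cc = np.fft.irfft(R, n=n)
    max_shift = min(int(fs * max_tau), n // 2)
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    return (np.argmax(np.abs(cc)) - max_shift) / fs

# Toy check: broadband noise delayed by 10 samples at 16 kHz -> 0.625 ms
fs = 16000
src = np.random.default_rng(0).standard_normal(fs)
print(f"{gcc_phat(np.roll(src, 10), src, fs) * 1e3:.3f} ms")
```

Such pairwise estimates degrade under reverberation and do not by themselves reveal how many sources are present, which is exactly the uncertainty the abstract targets.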
A method has been developed for improving sound source localization (SSL) using a microphone array mounted on an unmanned aerial vehicle with multiple rotors, a “multirotor UAV”. One of the main problems in SSL from a multirotor UAV is that the ego noise of the rotors interferes with the audio observation and degrades the SSL performance. …
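This line of work on rotor ego noise is associated with generalized-eigenvalue (GEVD) MUSIC, which whitens the observation with a pre-measured noise correlation matrix before subspace localization. The sketch below implements that idea for a single frequency bin; the array geometry, frequency, and noise statistics in the toy check are all made up:

```python
import numpy as np

def gevd_music(R_obs, R_noise, steer, n_src=1):
    """MUSIC pseudo-spectrum after whitening the observation with a
    pre-measured ego-noise correlation matrix (the GEVD-MUSIC idea).
    R_obs, R_noise: (M, M) spatial correlation matrices at one frequency.
    steer: (D, M) candidate steering vectors for D directions."""
    K = np.linalg.cholesky(R_noise)             # R_noise = K K^H
    Kinv = np.linalg.inv(K)
    Rw = Kinv @ R_obs @ Kinv.conj().T           # noise-whitened observation
    _, V = np.linalg.eigh(Rw)                   # eigenvalues ascending
    En = V[:, :-n_src]                          # noise subspace
    p = np.empty(steer.shape[0])
    for d, a in enumerate(steer):
        aw = Kinv @ a                           # whiten the steering vector too
        proj = En.conj().T @ aw
        p[d] = (aw.conj() @ aw).real / (proj.conj() @ proj).real
    return p                                    # peaks = likely source directions

# Toy check: 4-mic linear array, one source at 30 deg, correlated "rotor" noise
M, c, f, spacing = 4, 343.0, 1000.0, 0.05
angles = np.deg2rad(np.arange(-90, 91, 2))
mics = np.arange(M) * spacing
steer = np.exp(-2j * np.pi * f * np.outer(np.sin(angles), mics) / c)
rng = np.random.default_rng(1)
A = rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))
R_noise = A @ A.conj().T / M + np.eye(M)        # stand-in for measured ego noise
a_src = steer[np.argmin(np.abs(angles - np.deg2rad(30)))]
R_obs = 5.0 * np.outer(a_src, a_src.conj()) + R_noise
print(f"peak at {np.rad2deg(angles[np.argmax(gevd_music(R_obs, R_noise, steer))]):.0f} deg")
```

The whitening step is what makes the localizer robust: directions dominated by the known rotor noise are flattened before the subspace search.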
Multichannel signal processing using a microphone array provides fundamental functions for coping with multisource situations, such as sound source localization and separation, which are needed to extract the auditory information for each source. Auditory uncertainties about the degree of reverberation and the number of sources are known to degrade …
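For the separation function mentioned here, the simplest multichannel baseline is a delay-and-sum beamformer, sketched below (not the paper's method; the geometry convention of a linear array with known source direction is assumed):

```python
import numpy as np

def delay_and_sum(frames, angle, mics, fs, c=343.0):
    """Frequency-domain delay-and-sum beamformer: phase-align every
    channel toward `angle` and average, which boosts the source from
    that direction and attenuates the others.
    frames: (M, N) multichannel signal; mics: (M,) linear positions in m."""
    M, N = frames.shape
    F = np.fft.rfft(frames, axis=1)
    freqs = np.fft.rfftfreq(N, 1.0 / fs)
    delays = mics * np.sin(angle) / c                     # per-mic delay (s)
    comp = np.exp(2j * np.pi * np.outer(delays, freqs))   # compensating phases
    return np.fft.irfft((F * comp).mean(axis=0), n=N)
```

A fixed beamformer like this assumes the source direction is known and ignores reverberation, which is precisely why the uncertainties named in the abstract call for more sophisticated joint methods.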
Musicians often face the following problem: they have a music score that requires two or more players, but no one to practice with. Score-playing music robots exist, but they lack the adaptive ability to synchronize with fellow players' tempo variations. In other words, if the human speeds up their playing, the robot should also increase its …
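As a minimal sketch of this kind of tempo adaptation (not the paper's algorithm; the smoothing gain alpha is made up), the robot can update its beat-interval estimate from the human's detected note onsets by exponential smoothing:

```python
def follow_tempo(onset_times, alpha=0.3, init_bpm=120.0):
    """Update the robot's beat-interval estimate from consecutive human
    note onsets with exponential smoothing: when the human's inter-onset
    intervals shrink, the robot's tempo estimate rises."""
    interval = 60.0 / init_bpm
    bpms = []
    for prev, cur in zip(onset_times, onset_times[1:]):
        interval = (1 - alpha) * interval + alpha * (cur - prev)
        bpms.append(60.0 / interval)
    return bpms

# Human accelerates: each inter-onset interval shrinks by 1%
onsets, t, ioi = [0.0], 0.0, 0.5
for _ in range(16):
    ioi *= 0.99
    t += ioi
    onsets.append(t)
print([round(b) for b in follow_tempo(onsets)][-3:])   # rising BPM estimates
```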
This paper reports theoretical and experimental studies on the spatio-temporal dynamics in choruses of male Japanese tree frogs. First, we theoretically model their calling times and positions as a system of coupled mobile oscillators. Numerical simulation of the model, as well as calculation of the order parameters, shows that the spatio-temporal dynamics …
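The order parameters mentioned here are the standard Kuramoto quantities; the sketch below computes them and simulates two repulsively coupled call oscillators, which settle into the anti-phase calling reported for neighbouring male frogs (coupling strength and initial phases are made up):

```python
import numpy as np

def order_parameters(theta):
    """Kuramoto order parameters: R1 measures in-phase synchrony, R2
    measures two-cluster (anti-phase) synchrony."""
    R1 = np.abs(np.mean(np.exp(1j * theta)))
    R2 = np.abs(np.mean(np.exp(2j * theta)))
    return R1, R2

# Two repulsively coupled call oscillators drift into anti-phase calling
theta, K, dt = np.array([0.2, 0.3]), 1.5, 0.01
for _ in range(5000):
    theta += dt * (2 * np.pi - K * np.sin(theta[::-1] - theta))
R1, R2 = order_parameters(theta)
print(f"R1 = {R1:.2f}, R2 = {R2:.2f}")   # expect R1 ~ 0, R2 ~ 1
```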
We present a novel method for imaging acoustic communication between nocturnal animals. Investigating the spatio-temporal calling behavior of nocturnal animals, e.g., frogs and crickets, has been difficult because many animals' calls must be distinguished in noisy environments without being able to see the animals. Our method visualizes the spatial and …
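The paper's actual imaging system is not reproducible from the truncated abstract; as a sketch of the underlying idea, each spatially distributed sensor can detect call onsets by energy thresholding, after which plotting onset time against sensor position gives a spatio-temporal raster of the chorus (the threshold and window length below are made up):

```python
import numpy as np

def call_onsets(signal, fs, thresh=0.1, win=0.05):
    """Per-sensor call detection by short-time energy thresholding;
    field recordings would need per-site calibration of `thresh`.
    Returns onset times in seconds."""
    hop = int(win * fs)
    energy = np.array([np.mean(signal[i:i + hop] ** 2)
                       for i in range(0, len(signal) - hop, hop)])
    active = energy > thresh
    return (np.flatnonzero(active[1:] & ~active[:-1]) + 1) * hop / fs

# One synthetic sensor: background noise plus a single 0.2 s call at t = 0.5 s
fs = 8000
sig = 0.02 * np.random.default_rng(3).standard_normal(2 * fs)
sig[int(0.5 * fs):int(0.7 * fs)] += np.sin(2 * np.pi * 900 * np.arange(int(0.2 * fs)) / fs)
print(call_onsets(sig, fs))   # ~ [0.5]
```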
This paper presents the design and implementation of selectable sound separation functions on the telepresence system “Texai” using the robot audition software “HARK.” An operator of Texai can “walk” around a faraway office to attend a meeting or talk with people through video-conference instead of meeting in …
We aim to develop a singer robot capable of listening to music with its own “ears” and interacting with a human's musical performance. Such a singer robot requires at least three functions: listening to the music, understanding what position in the music is currently being performed, and generating a singing voice. In this paper, we focus on the …
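Of the three functions, position understanding is the most algorithmic; as a minimal sketch (not the paper's method; the chroma features, window size, and cosine matching are assumptions), an online score follower can step forward by matching each incoming audio frame against nearby score frames:

```python
import numpy as np

def follow_score(score_chroma, live_frame, pos, max_jump=3):
    """One step of a minimal online score follower: from the current
    position, pick the score frame within `max_jump` frames ahead whose
    chroma best matches the live frame (cosine similarity).
    score_chroma: (T, 12) chroma features of the score."""
    window = score_chroma[pos:pos + max_jump + 1]
    sims = window @ live_frame / (
        np.linalg.norm(window, axis=1) * np.linalg.norm(live_frame) + 1e-9)
    return pos + int(np.argmax(sims))

# Toy check: the "human" plays the score straight through
score = np.random.default_rng(4).random((20, 12))
pos = 0
for live in score[1:8]:
    pos = follow_score(score, live, pos)
print(pos)   # should have advanced to frame 7
```

Restricting the search to a few frames ahead keeps the follower robust to noisy frames while still letting it track tempo changes, which is the property a singing robot needs to stay in step with the performer.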