François Grondin

ManyEars is an open framework for microphone array-based audio processing. It consists of a sound source localization, tracking and separation system that can provide an enhanced speaker signal for improved speech and sound recognition in real-world settings. The ManyEars software framework is composed of a portable and modular C library, along with a graphical …
This paper presents WISS, a speaker identification system for mobile robots integrated with ManyEars, a sound source localization, tracking and separation system. Speaker identification consists of recognizing an individual among a group of known speakers. For mobile robots, performing speaker identification in the presence of noise that changes over time is one …
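As background on closed-set speaker identification in general, and not a description of WISS itself, the sketch below shows one minimal way to recognize an individual among a group of known speakers: enroll each speaker as an averaged log-spectral template, then assign an unknown utterance to the nearest template by cosine similarity. All function and parameter names are hypothetical.

```python
# Minimal closed-set speaker identification sketch (illustrative only; this is
# not the WISS algorithm). Each known speaker is enrolled as the average
# log-magnitude spectrum of their training audio; an unknown utterance is
# assigned to the closest template by cosine similarity.
import numpy as np

def log_spectrum(signal, frame=512, hop=256):
    """Average log-magnitude spectrum over all frames of a mono signal."""
    frames = [signal[i:i + frame] * np.hanning(frame)
              for i in range(0, len(signal) - frame, hop)]
    mags = np.abs(np.fft.rfft(np.array(frames), axis=1))
    return np.log(mags + 1e-8).mean(axis=0)

def enroll(training_data):
    """training_data: dict mapping speaker name -> mono waveform (np.ndarray)."""
    return {name: log_spectrum(x) for name, x in training_data.items()}

def identify(templates, utterance):
    """Return the enrolled speaker whose template is most similar (cosine)."""
    feat = log_spectrum(utterance)
    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(templates, key=lambda name: cosine(templates[name], feat))
```

Practical systems rely on noise-robust features and statistical speaker models; the sketch only conveys the enroll-then-match structure.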
Localization of sound sources in adverse environments is an important challenge in robot audition. The target sound source is often corrupted by coherent broadband noise, which introduces localization ambiguities as the noise is often mistaken for the target source. To discriminate the time difference of arrival (TDOA) parameters of the target source and noise, …
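For readers unfamiliar with TDOA estimation, a standard baseline is generalized cross-correlation with phase transform (GCC-PHAT), sketched below between a single pair of microphones. This is not presented as the discrimination method of the paper; the function name and signature are illustrative.

```python
# GCC-PHAT sketch: estimate the time difference of arrival (TDOA), in samples,
# between two microphone signals. This is a textbook estimator, not necessarily
# the method described in the paper.
import numpy as np

def gcc_phat_tdoa(x1, x2, max_delay):
    """Delay of x1 relative to x2, in samples (positive: x1 arrives later)."""
    n = len(x1) + len(x2)
    X1 = np.fft.rfft(x1, n)
    X2 = np.fft.rfft(x2, n)
    cross = X1 * np.conj(X2)
    cross /= np.abs(cross) + 1e-12                   # PHAT weighting: keep phase only
    cc = np.fft.irfft(cross, n)
    cc = np.concatenate((cc[-max_delay:], cc[:max_delay + 1]))  # lags -max..+max
    return int(np.argmax(np.abs(cc))) - max_delay
```

The phase-transform weighting whitens the spectrum so the correlation peak depends on phase alignment rather than signal power, which helps in reverberant rooms.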
BACKGROUND: The Canadian Cardiovascular Society recommends that patients be seen within 2 weeks after an emergency department (ED) visit for heart failure (HF). We sought to investigate whether patients who had an ED visit for HF subsequently consulted a physician within the currently established benchmark, and to explore factors related to physician …
Keywords: Open source – Sound source localization – Sound source separation – Mobile robotics – USB sound card – Open hardware – Microphone array
Abstract: Autonomous robots must be able to perceive sounds from their environment in order to interact naturally with humans. ManyEars is an open framework for microphone array-based audio processing, which allows …
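As a minimal illustration of how microphone array processing can enhance a target speaker (and not the algorithms actually implemented in ManyEars), a delay-and-sum beamformer time-aligns the channels toward the target direction and averages them; the names and parameters below are assumptions made for the sketch.

```python
# Minimal delay-and-sum beamformer: given per-microphone delays (in samples)
# toward a target direction, time-align and average the channels to enhance
# the target while attenuating uncorrelated noise. Illustrative only; ManyEars
# implements more elaborate localization, tracking and separation.
import numpy as np

def delay_and_sum(channels, delays):
    """channels: (n_mics, n_samples) array; delays: per-mic integer sample delays."""
    n_mics, n_samples = channels.shape
    out = np.zeros(n_samples)
    for mic in range(n_mics):
        out += np.roll(channels[mic], -delays[mic])  # advance each channel to align it
        # np.roll wraps around at the edges; acceptable for a sketch, pad in practice
    return out / n_mics
```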
To be used on a mobile robot, speech/non-speech discrimination must be robust to environmental noise and to the position of the interlocutor, without necessarily having to satisfy low-latency requirements. To address these conditions, this paper presents a speech/non-speech discrimination approach based on pitch estimation. Pitch features are robust to …
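A toy illustration of how pitch can serve as a speech/non-speech cue (not the estimator proposed in the paper): compute the normalized autocorrelation of each frame and flag frames with a strong peak in the typical voice range. All names, ranges and thresholds below are illustrative assumptions.

```python
# Toy pitch-based speech/non-speech cue: estimate pitch per frame by
# autocorrelation and flag frames whose autocorrelation peak is strong within
# a typical voice range (~80-400 Hz). Illustrative only.
import numpy as np

def frame_pitch(frame, fs, fmin=80.0, fmax=400.0):
    """Return (pitch_hz, peak_strength) for one frame, or (0.0, 0.0) if silent."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    if ac[0] <= 0:
        return 0.0, 0.0
    ac /= ac[0]                                # normalize so lag 0 equals 1
    lo, hi = int(fs / fmax), int(fs / fmin)    # candidate lag range for voice pitch
    lag = lo + int(np.argmax(ac[lo:hi]))
    return fs / lag, float(ac[lag])

def is_voiced(frame, fs, threshold=0.4):
    """Simple speech cue: a clear periodic peak suggests voiced speech."""
    _, strength = frame_pitch(frame, fs)
    return strength > threshold
```

The frame must be longer than fs/fmin samples (for example, a 25 ms frame at 16 kHz) so that the lowest candidate pitch period fits inside it.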