Publications
Evaluation of Input Devices for Musical Expression: Borrowing Tools from HCI
TLDR
A set of musical tasks is suggested to support the evaluation of existing controllers and to inform the design and evaluation of new interfaces for musical expression.
Score Following: State of the Art and New Developments
TLDR
The score follower that was developed at Ircam is based on a Hidden Markov Model and on the modeling of the expected signal received from the performer, and is now being used in production.
Ftm - Complex Data Structures for Max
TLDR
FTM is the basis of several sets of modules for Max/MSP specialized in score following, sound analysis/re-synthesis, statistical modeling, and data bank access, designed for particular applications in automatic accompaniment, advanced sound processing, and gestural analysis.
MuBu and Friends - Assembling Tools for Content Based Real-Time Interactive Audio Processing in Max/MSP
TLDR
A set of components is presented that supports a variety of interactive real-time audio processing approaches such as beat shuffling, sound morphing, and audio musaicing.
Continuous Realtime Gesture Following and Recognition
TLDR
An HMM-based system for real-time gesture analysis that relies on detailed modeling of multidimensional temporal curves, allowing physical gestures to be synchronized with sound files by time-stretching/compressing audio buffers or videos.
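As a rough illustration of the general technique this summary names, the sketch below implements a minimal left-to-right HMM follower in Python: each sample of a recorded template gesture is treated as one state, a forward-algorithm update is run for every incoming frame, and the expected position within the template (which could drive audio time-stretching) is returned. The class name, transition probabilities, and isotropic Gaussian observation model are illustrative assumptions, not the system described in the paper.

```python
import numpy as np

class GestureFollower:
    """Minimal left-to-right HMM follower: each sample of a recorded
    template gesture is one state; the forward algorithm estimates the
    current position within the template from a live gesture stream."""

    def __init__(self, template, sigma=0.1, p_stay=0.3, p_next=0.6, p_skip=0.1):
        self.template = np.asarray(template, dtype=float)  # shape (T, D)
        self.sigma = sigma
        self.trans = (p_stay, p_next, p_skip)
        self.alpha = None  # forward probabilities over states

    def reset(self):
        self.alpha = np.zeros(len(self.template))
        self.alpha[0] = 1.0  # start at the beginning of the template

    def step(self, frame):
        """Update with one observation frame; return the estimated
        normalized position in [0, 1] within the template gesture."""
        if self.alpha is None:
            self.reset()
        T = len(self.template)
        p_stay, p_next, p_skip = self.trans
        # transition: stay on the current state, advance by one, or skip one
        pred = p_stay * self.alpha
        pred[1:] += p_next * self.alpha[:-1]
        pred[2:] += p_skip * self.alpha[:-2]
        # isotropic Gaussian observation likelihood for each state
        d2 = np.sum((self.template - np.asarray(frame)) ** 2, axis=1)
        like = np.exp(-0.5 * d2 / self.sigma ** 2)
        self.alpha = pred * like
        self.alpha /= self.alpha.sum() + 1e-12
        # expected position, usable as a playback index for time-stretching
        return float(np.dot(self.alpha, np.arange(T)) / max(T - 1, 1))
```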
MnM: a Max/MSP mapping toolbox
In this report, we describe the development of MnM, a Max/MSP toolbox dedicated to mapping between gesture and sound and, more generally, to statistical and machine learning methods. This library is ...
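As a hedged illustration of the simplest kind of statistical gesture-to-sound mapping such a toolbox deals with, the snippet below fits a many-to-many linear mapping by least squares on synthetic data. All names, dimensions, and the data itself are illustrative; this does not reproduce MnM's actual API.

```python
import numpy as np

# Illustrative many-to-many linear mapping: learn a matrix that maps
# gesture feature frames to synthesis parameters by least squares.
rng = np.random.default_rng(0)
G = rng.normal(size=(200, 6))                        # 200 gesture frames, 6 features
W_true = rng.normal(size=(6, 3))
S = G @ W_true + 0.01 * rng.normal(size=(200, 3))    # matching sound parameters

W, *_ = np.linalg.lstsq(G, S, rcond=None)            # fit the mapping matrix offline
new_frame = rng.normal(size=6)
sound_params = new_frame @ W                         # apply the mapping per frame
```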
ESCHER-modeling and performing composed instruments in real-time
This article presents ESCHER, a sound synthesis environment based on Ircam's real-time audio environment jMax. ESCHER is a modular system providing synthesis-independent prototyping of ...
Probabilistic Models for Designing Motion and Sound Relationships
TLDR
A mapping-by-demonstration approach in which the relationships between motion and sound are defined by a machine learning model trained on a set of user examples; four probabilistic models with complementary characteristics in terms of multimodality and temporality are described.
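One way to make the mapping-by-demonstration idea concrete is Gaussian Mixture Regression: fit a joint Gaussian mixture on concatenated [motion, sound] example frames, then condition on new motion features to predict sound parameters. The sketch below is a minimal Python implementation of that general technique; class and parameter names are illustrative, and it does not reproduce the specific models or code from the paper.

```python
import numpy as np
from scipy.stats import multivariate_normal
from sklearn.mixture import GaussianMixture

class GMRMapper:
    """Gaussian Mixture Regression sketch for mapping-by-demonstration:
    fit a joint GMM on concatenated [motion, sound] examples, then
    condition on new motion features to predict sound parameters."""

    def __init__(self, n_components=4, motion_dim=3):
        self.gmm = GaussianMixture(n_components=n_components,
                                   covariance_type="full", random_state=0)
        self.dg = motion_dim  # first dg dimensions are motion, the rest are sound

    def fit(self, motion, sound):
        # learn the joint distribution of demonstrated motion and sound frames
        self.gmm.fit(np.hstack([motion, sound]))
        return self

    def predict(self, g):
        """Return the expected sound parameters given a motion frame g."""
        g = np.atleast_1d(g)
        dg = self.dg
        weights, conds = [], []
        for k in range(self.gmm.n_components):
            mu, cov = self.gmm.means_[k], self.gmm.covariances_[k]
            mu_g, mu_s = mu[:dg], mu[dg:]
            S_gg, S_sg = cov[:dg, :dg], cov[dg:, :dg]
            # responsibility of component k for the observed motion
            weights.append(self.gmm.weights_[k]
                           * multivariate_normal.pdf(g, mu_g, S_gg))
            # conditional mean of the sound part given the motion part
            conds.append(mu_s + S_sg @ np.linalg.solve(S_gg, g - mu_g))
        weights = np.asarray(weights)
        weights /= weights.sum() + 1e-12
        return np.sum(weights[:, None] * np.asarray(conds), axis=0)
```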
Towards a Gesture-Sound Cross-Modal Analysis
TLDR
The article shows how the method supports pertinent reasoning about the relationship between gesture and sound by analysing data sets recorded from individual subjects as well as across multiple subjects.
A multimodal probabilistic model for gesture-based control of sound synthesis
TLDR
This paper proposes to use a multimodal HMM to conjointly model the gesture and sound parameters in interactive music systems and describes an implementation of this method for the control of physical modeling sound synthesis.
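The sketch below illustrates the runtime side of a joint gesture/sound Gaussian HMM in the spirit of this summary: the state distribution is filtered using the gesture part of the per-state means, and sound parameters are emitted as the posterior-weighted sound part. The transition matrix, means, and isotropic noise model are assumed to be given (e.g. pre-trained), and all names are illustrative rather than the paper's implementation.

```python
import numpy as np

class MultimodalHMM:
    """Runtime sketch of a joint gesture/sound Gaussian HMM: filter the
    state distribution from incoming gesture frames, then output the
    posterior-weighted sound parameters associated with the states."""

    def __init__(self, trans, means, gesture_dim, sigma=0.2):
        self.A = np.asarray(trans)        # (K, K) state transition matrix
        self.means = np.asarray(means)    # (K, gesture_dim + sound_dim) state means
        self.dg = gesture_dim
        self.sigma = sigma                # isotropic observation noise (assumption)
        self.alpha = np.full(len(self.A), 1.0 / len(self.A))

    def step(self, gesture_frame):
        g = np.asarray(gesture_frame)
        # predict: propagate the state distribution through the transitions
        pred = self.alpha @ self.A
        # update: likelihood of the gesture part of each state's mean
        d2 = np.sum((self.means[:, :self.dg] - g) ** 2, axis=1)
        like = np.exp(-0.5 * d2 / self.sigma ** 2)
        self.alpha = pred * like
        self.alpha /= self.alpha.sum() + 1e-12
        # generate sound parameters as the posterior-weighted state means
        return self.alpha @ self.means[:, self.dg:]
```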
...