Evaluation of Input Devices for Musical Expression: Borrowing Tools from HCI
A set of musical tasks is proposed for evaluating existing controllers, supporting the design and evaluation of new interfaces for musical expression.
Score Following: State of the Art and New Developments
The score follower developed at Ircam is based on a Hidden Markov Model and on modeling of the expected signal received from the performer, and is now used in production.
FTM - Complex Data Structures for Max
- N. Schnell, R. Borghesi, Diemo Schwarz, Frédéric Bevilacqua, Rémy Müller
- Computer Science, ICMC
- 1 September 2005
FTM is the basis of several sets of Max/MSP modules specialized in score following, sound analysis/re-synthesis, statistical modeling, and data bank access, designed for particular applications in automatic accompaniment, advanced sound processing, and gestural analysis.
MuBu and Friends - Assembling Tools for Content Based Real-Time Interactive Audio Processing in Max/MSP
A set of components is presented that supports a variety of interactive real-time audio processing approaches, such as beat shuffling, sound morphing, and audio musaicing.
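Audio musaicing of this kind is often built on descriptor-based unit selection: each corpus segment is annotated with audio descriptors, and target frames are matched to their nearest corpus units. The following is a minimal greedy sketch of that idea, not MuBu's actual API; the corpus entries, descriptor choices, and `musaic` function are all hypothetical.

```python
import math

def musaic(target_descriptors, corpus):
    """For each target frame descriptor, pick the corpus unit whose
    descriptor is closest in Euclidean distance (greedy unit selection)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    sequence = []
    for frame in target_descriptors:
        best = min(corpus, key=lambda unit: dist(unit["descriptor"], frame))
        sequence.append(best["id"])
    return sequence

# Hypothetical corpus: units annotated with (centroid, loudness) descriptors.
corpus = [
    {"id": "kick",  "descriptor": (0.1, 0.9)},
    {"id": "snare", "descriptor": (0.6, 0.8)},
    {"id": "hat",   "descriptor": (0.9, 0.3)},
]
print(musaic([(0.15, 0.85), (0.95, 0.25)], corpus))  # ['kick', 'hat']
```

A production system would add continuity costs between consecutive units rather than selecting each frame independently.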
Continuous Realtime Gesture Following and Recognition
- Frédéric Bevilacqua, Bruno Zamborlin, A. Sypniewski, N. Schnell, Fabrice Guédy, N. Rasamimanana
- Computer Science, Gesture Workshop
- 25 February 2009
An HMM-based system for real-time gesture analysis is presented that relies on detailed modeling of multidimensional temporal curves, allowing physical gestures to be synchronized to sound files by time-stretching or compressing audio buffers or videos.
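The core of such a follower can be sketched as a left-to-right forward pass over a template gesture, with one state per template sample. This is a minimal 1-D illustration, not the authors' implementation: the Gaussian observation model, the stay/advance transition scheme, and the `follow` function are all simplifying assumptions.

```python
import math

def follow(template, stream, sigma=0.05):
    """Follow a performed 1-D gesture against a template using a
    left-to-right forward pass (one HMM state per template sample).
    Returns, for each incoming frame, the estimated position in the
    template as a fraction of its length."""
    n = len(template)

    def emit(state, x):
        # Gaussian observation likelihood around the template sample.
        return math.exp(-((x - template[state]) ** 2) / (2 * sigma ** 2))

    alpha = [emit(0, stream[0])] + [0.0] * (n - 1)
    positions = [0.0]
    for x in stream[1:]:
        # Each state is reached by staying put or advancing one step,
        # which is what lets the follower absorb tempo variations.
        new = [0.5 * (alpha[s] + (alpha[s - 1] if s > 0 else 0.0)) * emit(s, x)
               for s in range(n)]
        total = sum(new) or 1.0
        alpha = [a / total for a in new]
        positions.append(max(range(n), key=lambda s: alpha[s]) / (n - 1))
    return positions

template = [i / 10 for i in range(11)]    # a ramp gesture from 0.0 to 1.0
performance = list(template)              # same gesture performed again
print(follow(template, performance)[-1])  # → 1.0 (template end reached)
```

The estimated position is what drives the synchronization: it can index into an audio buffer or video, effectively time-stretching or compressing the media to match the performer.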
MnM: a Max/MSP mapping toolbox
In this report, we describe our development of the Max/MSP toolbox MnM, dedicated to mapping between gesture and sound and, more generally, to statistical and machine learning methods. This library is…
ESCHER-modeling and performing composed instruments in real-time
- M. Wanderley, N. Schnell, J. Rovan
- Computer Science, SMC'98 Conference Proceedings. IEEE…
- 11 October 1998
This article presents ESCHER, a sound synthesis environment based on Ircam's real-time audio environment jMax. ESCHER is a modular system providing synthesis-independent prototyping of…
Probabilistic Models for Designing Motion and Sound Relationships
A mapping-by-demonstration approach is presented in which the relationships between motion and sound are learned by a machine learning model from a set of user examples; four probabilistic models with complementary characteristics in terms of multimodality and temporality are described.
Towards a Gesture-Sound Cross-Modal Analysis
The article shows how the method supports pertinent reasoning about the relationship between gesture and sound by analyzing data sets recorded from individual and multiple subjects.
A multimodal probabilistic model for gesture-based control of sound synthesis
This paper proposes using a multimodal HMM to jointly model the gesture and sound parameters in interactive music systems, and describes an implementation of this method for the control of physical modeling sound synthesis.
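The regression step behind such a joint model can be illustrated with a single bivariate Gaussian: fit joint statistics over paired (gesture, sound) examples, then predict the sound parameter as the conditional mean given the gesture. This is a deliberately reduced sketch, not the paper's multimodal HMM, where each state would carry such a joint model; the `fit`/`predict` functions and the demonstration pairs are hypothetical.

```python
def fit(pairs):
    """Fit the joint (gesture, sound) statistics needed for the
    conditional mean of a bivariate Gaussian."""
    n = len(pairs)
    mg = sum(g for g, _ in pairs) / n
    ms = sum(s for _, s in pairs) / n
    sxx = sum((g - mg) ** 2 for g, _ in pairs)
    sxy = sum((g - mg) * (s - ms) for g, s in pairs)
    return mg, ms, sxy / sxx  # mean gesture, mean sound, regression slope

def predict(model, g):
    """Conditional mean: E[sound | gesture = g] = ms + slope * (g - mg)."""
    mg, ms, slope = model
    return ms + slope * (g - mg)

# Hypothetical demonstration pairs: hand height (0..1) vs. filter cutoff (Hz).
demo = [(0.0, 200.0), (0.5, 1100.0), (1.0, 2000.0)]
model = fit(demo)
print(predict(model, 0.25))  # → 650.0
```

In the multimodal-HMM setting, the temporal model selects which state's joint Gaussian is active, so the gesture-to-sound mapping can change over the course of the gesture.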