In this paper we address the expressive control of singing voice synthesis. Singing voice synthesizers (SVS) traditionally require two types of input: a musical score and lyrics. The musical expression is then typically either generated automatically by applying a model of a certain type of expression to a high-level musical score, or achieved by manually …
A common problem of many current singing voice synthesizers is that obtaining a natural-sounding and expressive performance requires a lot of manual user input, making it a time-consuming and difficult task. In this paper we introduce a unit selection-based approach for the generation of expression parameters that control the synthesizer. Given the …
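The excerpt does not give the paper's actual features or cost functions, but unit selection is conventionally framed as a dynamic-programming search that balances a target cost (how well a database unit matches the score note) against a concatenation cost (how smoothly consecutive units join). A minimal sketch, with hypothetical pitch/duration features and arbitrary weights:

```python
# Minimal unit-selection sketch. The features (pitch, duration) and
# weights here are illustrative assumptions, not the paper's design.

def target_cost(unit, note):
    # How well a database unit matches the score note (weight is arbitrary).
    return 2 * (abs(unit["pitch"] - note["pitch"]) + abs(unit["dur"] - note["dur"]))

def concat_cost(prev_unit, unit):
    # Penalize discontinuities between consecutive selected units.
    return abs(prev_unit["end_pitch"] - unit["start_pitch"])

def select_units(score, database):
    """Viterbi-style search for the cheapest unit sequence over the score."""
    best = [(target_cost(u, score[0]), [u]) for u in database]
    for note in score[1:]:
        new_best = []
        for u in database:
            cost, path = min(
                ((c + concat_cost(p[-1], u), p) for c, p in best),
                key=lambda t: t[0],
            )
            new_best.append((cost + target_cost(u, note), path + [u]))
        best = new_best
    return min(best, key=lambda t: t[0])[1]
```

The search is exhaustive over all unit/note pairs, which is fine for a sketch; real systems prune candidates per note before the Viterbi pass.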
The objective of this research is to model the relationship between actions performed by a violinist and the sound which these actions produce. Violinist actions and audio are captured during real performances by means of a newly developed sensing system from which bowing and audio descriptors are computed. A database is built with this data and used to …
This paper presents a method for the acquisition of violin instrumental gesture parameters by using a commercial two-sensor 3D tracking system based on electromagnetic field (EMF) sensing. The methodology described here is suitable for acquiring instrumental gesture parameters of any bowed-string instrument, and has been devised by paying attention to …
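The excerpt does not detail how gesture parameters are derived from the tracker output, but two common bowing descriptors, bow velocity and bow-bridge distance, can be computed directly from successive 3D position samples. A sketch under that assumption (sensor placement and coordinate frames are not the paper's exact setup):

```python
# Deriving simple bowing descriptors from 3D tracker samples.
# Sensor placement and coordinate frames are assumptions for illustration.
import math

def distance(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def bow_velocity(bow_positions, dt):
    """Finite-difference bow speed from successive 3D samples taken dt apart."""
    return [
        distance(p1, p0) / dt
        for p0, p1 in zip(bow_positions, bow_positions[1:])
    ]

def bow_bridge_distance(bow_positions, bridge_position):
    """Distance from the bow sensor to the bridge at each sample."""
    return [distance(p, bridge_position) for p in bow_positions]
```

In practice the raw trajectories would be low-pass filtered before differentiation, since finite differences amplify sensor noise.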
In the last few years, many high-quality, realistic violin synthesizers have appeared. But the quality of the resulting sound is sometimes poorer than promised due to the lack of musical information provided to the system. It is very difficult for composers to deduce and include information such as "bow speed", "pressing force" or "fingering" in a MIDI …
  • Enric Guaus i Termens, Xavier Serra, +14 authors Josep Maria Comajuncosas
  • 2010
It is also important to acknowledge the unconditional support from the people at the ESMUC, especially Enric Giné, Ferran Conangla, Josep Maria Comajuncosas, Emilia Gómez (again), Perfecto Herrera (again) and Roser Galí. I would also like to mention the people who introduced me to research at the Universitat Ramon Llull: Josep Martí and Robert Barti. In …
Excitation-continuous music-instrument control patterns are often not explicitly represented in current sound synthesis techniques when applied to automatic performance. Both physical-model-based and sample-based synthesis paradigms would benefit from a flexible and accurate instrument control model, enabling improved naturalness and realism. We …
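The control model itself is not specified in this excerpt; as a purely illustrative sketch of what an explicit excitation-continuous control representation could look like, one might parameterize a note-level control signal (e.g. bow velocity) as a piecewise-linear rise-sustain-fall contour:

```python
# Illustrative note-level control contour (piecewise-linear envelope).
# The actual model described in the paper is not given in this excerpt.

def control_contour(duration, peak, attack_frac=0.2, release_frac=0.2, n=100):
    """Sample a rise-sustain-fall contour for one note's control signal."""
    contour = []
    for i in range(n):
        t = i / (n - 1) * duration
        if t < attack_frac * duration:
            # Linear rise from zero to the peak value.
            v = peak * t / (attack_frac * duration)
        elif t > (1 - release_frac) * duration:
            # Linear fall back to zero at the note end.
            v = peak * (duration - t) / (release_frac * duration)
        else:
            # Sustain at the peak value.
            v = peak
        contour.append(v)
    return contour
```

Such a parameterization makes the control pattern explicit and editable, which is the kind of flexibility the abstract argues both synthesis paradigms would benefit from.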
This paper describes recent improvements to our singing voice synthesizer, which is based on concatenation and transformation of audio samples using spectral models. Improvements include, firstly, robust automation of the previous singer-database creation process, a lengthy and tedious task that involved recording-script generation, studio sessions, audio editing, …