• Corpus ID: 12255371

Factors in human recognition of timbre lexicons generated by data clustering

Gerard Roma, Anna Xambó, Perfecto Herrera, and Robin C. Laney
Since the development of sound recording technologies, the palette of timbres available for music creation has been extended far beyond traditional musical instruments. The organization and categorization of timbre has been a common endeavor. The availability of large databases of sound clips provides an opportunity for obtaining data-driven timbre categorizations via content-based clustering. In this article we describe an experiment aimed at understanding what factors influence the process of… 


Search Result Clustering in Collaborative Sound Collections
This work proposes a graph-based approach using audio features for clustering diverse sound collections obtained when querying large online databases, and shows that using a confidence measure for discarding inconsistent clusters improves the quality of the partitions.
Algorithms and representations for supporting online music creation with large-scale audio databases
The rapid adoption of the Internet and of web technologies has created an opportunity for making music collaboratively through the exchange of information online. However, the applications…


Acoustic lexemes for organizing internet audio
In this article, a method is proposed for automatic fine-scale audio description that draws inspiration from ontological sound description methods such as Schaeffer's Objets Sonores and Smalley's spectromorphology.
Content-based organization and visualization of music archives
Islands of Music, a system which facilitates exploration of music libraries without requiring manual genre classification, is presented. Given pieces of music in raw audio format, their perceived sound similarities are estimated based on psychoacoustic models and organized on a 2-dimensional map.
This paper presents a hierarchical user interface for efficient exploration and retrieval based on a computational model of similarity and self-organizing maps for automatically structuring and visualizing large sample libraries through audio signal analysis.
Features for audio and music classification
Four audio feature sets are evaluated in their ability to classify five general audio classes and seven popular music genres and show that the temporal behavior of features is important for both music and audio classification.
Spectromorphology: explaining sound-shapes
Electroacoustic music opens access to all sounds, a bewildering sonic array ranging from the real to the surreal and beyond, where the familiar articulations of instruments and vocal utterance are gone, and the stability of note and interval is gone.
This paper describes how audio information retrieval can be utilized to create novel user interfaces for browsing of audio collections and reports on recent work on two system prototypes: the Sonic Browser and Marsyas and current work on merging the two systems in a common flexible system.
Visualization in Audio-Based Music Information Retrieval
The magical number seven plus or minus two: some limits on our capacity for processing information.
The theory provides a yardstick for calibrating stimulus materials and for measuring subjects' performance, and its concepts and measures provide a quantitative way of getting at some of these questions.
Interaction Analysis: Foundations and Practice
Video technology has been vital in establishing Interaction Analysis, which depends on the technology of audiovisual recording for its primary records and on playback capability for their analysis.
History and future of auditory filter models
A continuous-time analogue CMOS implementation of the One-Zero Gammatone Filter (OZGF) is presented, together with an automatic gain control that models its level-dependent nonlinear behaviour.