Corpus ID: 202115687

A METHOD FOR AUTOMATIC WHOOSH SOUND DESCRIPTION

@inproceedings{Cherny2017AMF,
  title={A METHOD FOR AUTOMATIC WHOOSH SOUND DESCRIPTION},
  author={Eugene Cherny},
  year={2017}
}
Usually, a sound designer achieves artistic goals by editing and processing pre-recorded sound samples. To assist navigation through the vast number of sounds, sound metadata is used: it provides short free-form textual descriptions of the sound file content. One can search the keywords or phrases in the metadata to find a group of sounds suitable for a task. Unfortunately, the relativity of sound design terms complicates the search, making the search process tedious…


References

Showing 1–10 of 28 references
Sound Indexing Using Morphological Description
TLDR: This paper considers three morphological descriptions: dynamic profiles (ascending, descending, ascending/descending, stable, impulsive), melodic profiles (up, down, stable, up/down, down/up), and complex-iterative sound description (non-iterative, iterative, grain, repetition).
Sound retrieval with intuitive verbal expressions
TLDR: The sound retrieval method described in this paper enables users to easily obtain their desired sound; it adopts three keyword types: onomatopoeia, sound source, and adjective.
Automatic morphological description of sounds
TLDR: Three morphological descriptions are considered; the most appropriate audio features and mapping algorithms used to automatically estimate the profiles are presented, and the use of these descriptions for automatic indexing with decision trees is demonstrated.
Spectromorphological analysis of sound objects: an adaptation of Pierre Schaeffer's typomorphology
TLDR: This paper develops Schaeffer's approach into a practical tool for conceptualising and notating sound quality, and introduces a set of graphic symbols apt for transcribing electroacoustic music in a concise score.
Constructing high-level perceptual audio descriptors for textural sounds
TLDR: The construction of computable audio descriptors capable of modeling relevant high-level perceptual qualities of textural sounds is described, and the effects of tuning with respect to individual accuracy or mutual independence are demonstrated.
Sound Ontology for Computational Auditory Scene Analysis
This paper proposes that sound ontology should be used both as a common vocabulary for sound representation and as a common terminology for integrating various sound stream segregation systems…
Assessment of Timbre Using Verbal Attributes
As part of the perceptual processing to form an abstract picture of a sound, the question arises as to how various timbral judgments are made. An early step in these ‘processing stages’ suggests that…
Vocal Imitations of Non-Vocal Sounds
TLDR: Investigating the semantic representations evoked by vocal imitations of sounds, by experimentally quantifying how well listeners could match sounds to category labels, offers perspectives for understanding how human listeners store and access long-term sound representations, and sets the stage for the development of human-computer interfaces based on vocalizations.
An Approach for Structuring Sound Sample Libraries Using Ontology
TLDR: This paper addresses problems in knowledge elicitation and sound design ontology engineering, focusing on metadata issues that make the search process complex, such as ambiguity, synonymy and relativity.
Audio Set: An ontology and human-labeled dataset for audio events
TLDR: The creation of Audio Set is described: a large-scale dataset of manually annotated audio events that endeavors to bridge the gap in data availability between image and audio research and substantially stimulate the development of high-performance audio event recognizers.