This thesis examines machine learning through the lens of human-computer interaction in order to address fundamental questions surrounding the application of machine learning to real-life problems, including: Can we make machine learning algorithms more usable? Can we better understand the real-world consequences of algorithm choices and user interface…
Model evaluation plays a special role in interactive machine learning (IML) systems, in which users rely on their assessment of a model's performance in order to determine how to improve it. A better understanding of what model criteria are important to users can therefore inform the design of user interfaces for model evaluation as well as the choice and…
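The user-facing evaluation this abstract describes can go beyond a single accuracy number, for instance by surfacing a confusion matrix so the user can see which classes the model mixes up. A minimal sketch using scikit-learn; the synthetic dataset stands in for user-provided examples and is not from the paper:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, confusion_matrix
from sklearn.model_selection import train_test_split

# Illustrative synthetic data standing in for user-labeled examples.
X, y = make_classification(n_samples=300, n_features=10, n_informative=5,
                           n_classes=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
y_pred = model.predict(X_test)

# A single score hides which classes the model confuses; the confusion
# matrix makes per-class behaviour visible to the user.
acc = accuracy_score(y_test, y_pred)
cm = confusion_matrix(y_test, y_pred)
```

Presenting `cm` alongside `acc` is one concrete way an IML interface can support the richer evaluation criteria the paper studies.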
This paper presents ACE (Autonomous Classification Engine), a framework for using and optimizing classifiers. Given a set of feature vectors, ACE experiments with a variety of classifiers, classifier parameters, classifier ensembles, and dimensionality reduction techniques in order to arrive at a good configuration for the problem at hand. In addition to…
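ACE itself is a standalone framework, but the search it automates, trying classifier and dimensionality-reduction settings to find a good configuration, can be sketched in Python with scikit-learn's grid search. The data, pipeline stages, and parameter grid below are illustrative assumptions, not ACE's actual internals:

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import Pipeline

# Stand-in feature vectors; in ACE these come from the user's task.
X, y = make_classification(n_samples=200, n_features=20, random_state=0)

# Jointly search a dimensionality-reduction setting and a classifier
# hyperparameter, in the spirit of ACE's automated experimentation.
pipe = Pipeline([("pca", PCA()), ("clf", KNeighborsClassifier())])
grid = {"pca__n_components": [5, 10, 15],
        "clf__n_neighbors": [1, 3, 5]}
search = GridSearchCV(pipe, grid, cv=3).fit(X, y)
best_config = search.best_params_
```

The winning configuration in `best_config` plays the role of ACE's recommended setup for the problem at hand.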
In this paper, we discuss our recent additions of audio analysis and machine learning infrastructure to the ChucK music programming language, wherein we provide a complementary system prototyping framework for MIR researchers and lower the barriers to applying many MIR algorithms in live music performance. The new language capabilities preserve ChucK's…
Supervised learning methods have long been used to allow musical interface designers to generate new mappings by example. We propose a method for harnessing machine learning algorithms within a radically interactive paradigm, in which the designer may repeatedly generate examples, train a learner, evaluate outcomes, and modify parameters in real time within…
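The generate-train-evaluate-modify loop described here can be sketched as a regressor that is retrained whenever the designer adds demonstration examples. The class and method names below are hypothetical, and a k-nearest-neighbors regressor is used only as a simple stand-in for whatever learner the designer chooses:

```python
from sklearn.neighbors import KNeighborsRegressor

class MappingTrainer:
    """Sketch of an example-driven mapping from sensor input to synth
    parameters; names are illustrative, not from the paper."""

    def __init__(self):
        self.inputs, self.outputs = [], []
        self.model = None

    def add_example(self, sensor_values, synth_params):
        # Designer demonstrates: "this gesture should produce this sound".
        self.inputs.append(sensor_values)
        self.outputs.append(synth_params)

    def train(self):
        # Retraining from scratch is fast for the small example sets
        # typical of this interactive workflow.
        self.model = KNeighborsRegressor(n_neighbors=min(3, len(self.inputs)))
        self.model.fit(self.inputs, self.outputs)

    def map(self, sensor_values):
        return self.model.predict([sensor_values])[0]

trainer = MappingTrainer()
trainer.add_example([0.0, 0.0], [220.0])  # gesture -> oscillator freq (Hz)
trainer.add_example([1.0, 1.0], [880.0])
trainer.train()
freq = trainer.map([0.5, 0.5])
```

Because training is cheap, the designer can iterate by adding or deleting examples and calling `train()` again, which is the core of the interactive paradigm the paper proposes.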
While several researchers have grappled with the problem of comparing musical devices across performance, installation, and related contexts, no methodology yet exists for producing holistic, informative visualizations for these devices. Drawing on existing research in performance interaction, human-computer interaction, and design space analysis, the…
This paper describes the use of the Autonomous Classification Engine (ACE) to classify beatboxing (vocal percussion) sounds. A set of unvoiced percussion sounds belonging to five classes (bass drum, open hihat, closed hihat, and two types of snare drum) was recorded and manually segmented. ACE was used to compare various classification techniques, both with…
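The compare-several-classifiers step on the five percussion classes can be sketched as cross-validated accuracy over a set of candidate learners. The 12-dimensional feature vectors below are synthetic stand-ins for features extracted from the segmented recordings, and the three classifiers are illustrative choices, not necessarily the ones ACE tried:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Synthetic feature vectors standing in for the five beatbox classes
# (bass drum, open hihat, closed hihat, two snares): 40 examples each.
X = np.vstack([rng.normal(loc=i, scale=1.0, size=(40, 12)) for i in range(5)])
y = np.repeat(np.arange(5), 40)

# Compare candidate classifiers by cross-validated accuracy.
classifiers = {"naive Bayes": GaussianNB(),
               "SVM": SVC(),
               "decision tree": DecisionTreeClassifier(random_state=0)}
results = {name: cross_val_score(clf, X, y, cv=5).mean()
           for name, clf in classifiers.items()}
best = max(results, key=results.get)
```

Reporting `results` and `best` mirrors the kind of comparison the paper ran across classification techniques.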
A sonification is a rendering of audio in response to data, used where visual representations of data are impossible, difficult, or unwanted. Designing sonifications often requires knowledge in multiple areas as well as an understanding of how end users will use the system. This makes it an ideal candidate for end-user development…
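A parameter-mapping sonification in its simplest form maps each data value to a pitch. The sketch below, in plain NumPy, is a deliberately minimal illustration of the idea; the function name, frequency range, and mapping choices are assumptions, not from the paper:

```python
import numpy as np

def sonify(data, sr=44100, note_dur=0.25, f_lo=220.0, f_hi=880.0):
    """Map each data value to a sine tone whose pitch lies between
    f_lo and f_hi; higher values sound higher. Illustrative only."""
    data = np.asarray(data, dtype=float)
    lo, hi = data.min(), data.max()
    span = hi - lo if hi > lo else 1.0
    t = np.linspace(0.0, note_dur, int(sr * note_dur), endpoint=False)
    tones = []
    for v in data:
        freq = f_lo + (v - lo) / span * (f_hi - f_lo)
        tones.append(0.5 * np.sin(2 * np.pi * freq * t))
    return np.concatenate(tones)  # write to a WAV file or audio device

signal = sonify([1, 4, 2, 8, 5])
```

Even this toy version shows why sonification design involves multiple kinds of expertise: the frequency range, note duration, and scaling are all perceptual and task-dependent choices that an end-user development tool would expose.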