Ubicoustics: Plug-and-Play Acoustic Activity Recognition

@inproceedings{Laput2018Ubicoustics,
  title={Ubicoustics: Plug-and-Play Acoustic Activity Recognition},
  author={Gierad Laput and Karan Ahuja and Mayank Goel and Chris Harrison},
  booktitle={Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology},
  year={2018}
}
Citations

ProtoSound: A Personalized and Scalable Sound Recognition System for Deaf and Hard-of-Hearing Users

ProtoSound, an interactive system that lets users customize sound recognition models by recording a few examples, is introduced, enabling personalized and fine-grained sound categories; open challenges in personalizable sound recognition are also discussed, including the need for better recording interfaces and algorithmic improvements.
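
A minimal sketch of the few-shot, prototype-style classification idea behind this kind of personalization (not ProtoSound's released code; the embeddings below are random stand-ins for a pretrained audio encoder):

```python
import numpy as np

def nearest_prototype_classify(support, query):
    """Few-shot classification by nearest class prototype.

    support: dict mapping class name -> (n_examples, dim) embeddings
    query:   (dim,) embedding of the sound to classify
    """
    # Each class prototype is the mean of its few support embeddings.
    prototypes = {c: e.mean(axis=0) for c, e in support.items()}
    # Assign the query to the class with the closest prototype.
    return min(prototypes, key=lambda c: np.linalg.norm(query - prototypes[c]))

# Stand-in embeddings; a real system would embed the user's recordings
# with a pretrained audio encoder.
rng = np.random.default_rng(0)
support = {
    "doorbell": rng.normal(0.0, 1.0, (5, 128)),
    "microwave": rng.normal(3.0, 1.0, (5, 128)),
}
query = rng.normal(3.0, 1.0, 128)
print(nearest_prototype_classify(support, query))  # likely "microwave"
```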

FaceBit

FaceBit empowers the mobile computing community to jumpstart research in smart face mask sensing and inference, and provides a sustainable, convenient form factor for health management, applicable to COVID-19 frontline workers and beyond.

Capacitivo

Capacitivo, a contact-based object recognition technique for interactive fabrics, is presented; it uses capacitive sensing to recognize non-metallic objects such as food, different types of fruit, liquids, and other objects often found around a home or workplace.

IoT Stickers: Enabling Lightweight Modification of Everyday Objects

IoT Stickers demonstrates a way to associate IoT services with a dramatically wider set of objects and tasks, letting computational services be tailored to everyday activities by setting parameters that are passed to a sticker's actions and by composing stickers together.

Let It Rip! Using Velcro for Acoustic Labeling

An early-stage prototype of an acoustic labeling system using Velcro, a two-sided household fastening product, is presented, along with an automatic audio classification pipeline that detects and classifies small sets of labels.

Characterizing the Effect of Audio Degradation on Privacy Perception And Inference Performance in Audio-Based Human Activity Recognition

This paper investigates how intentional degradation of audio frames affects recognition of the target classes while maintaining effective privacy mitigation; results indicate that degrading audio frames has minimal effect on audio recognition when frame-level features are used.
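
One way to see why frame-level features can tolerate such degradation (a plausible scheme for illustration only; the paper evaluates its own set of degradations): shuffling frame order destroys speech intelligibility, yet any feature computed per frame and then pooled is unchanged.

```python
import numpy as np

def shuffle_frames(frames, rng):
    """Temporally shuffle short audio frames, destroying the ordering
    that intelligible speech needs."""
    return frames[rng.permutation(len(frames))]

rng = np.random.default_rng(0)
# Stand-in audio: 100 frames of 400 samples each (25 ms at 16 kHz).
frames = rng.normal(size=(100, 400))

original = frames.mean(axis=0)                      # pooled frame-level feature
degraded = shuffle_frames(frames, rng).mean(axis=0)
print(np.allclose(original, degraded))              # True: pooling is order-invariant
```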

Augmenting Conversational Agents with Ambient Acoustic Contexts

This work proposes a solution that intelligently redesigns the input segment for ambient context recognition via a two-step inference pipeline: first, non-speech segments are separated from the acoustic signal; then a neural network infers diverse ambient contexts.
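
A rough sketch of the first step of such a pipeline, using the off-the-shelf webrtcvad package as a stand-in for whatever speech/non-speech separator the authors actually use; the second step (the ambient-context network) is only indicated.

```python
import numpy as np
import webrtcvad  # pip install webrtcvad

SAMPLE_RATE = 16000
FRAME_LEN = SAMPLE_RATE * 30 // 1000  # 30 ms frames, as webrtcvad expects

def nonspeech_frames(pcm16, vad):
    """Step 1: keep only the frames the VAD marks as non-speech."""
    kept = []
    for i in range(0, len(pcm16) - FRAME_LEN + 1, FRAME_LEN):
        frame = pcm16[i:i + FRAME_LEN]
        if not vad.is_speech(frame.tobytes(), SAMPLE_RATE):
            kept.append(frame)
    return kept

# Stand-in signal: one second of noise (a real system reads the mic buffer).
rng = np.random.default_rng(0)
pcm16 = rng.normal(0, 1000, SAMPLE_RATE).astype(np.int16)

vad = webrtcvad.Vad(2)  # aggressiveness 0 (permissive) to 3 (strict)
ambient = nonspeech_frames(pcm16, vad)
# Step 2 (not shown): feed `ambient` to an ambient-context classifier.
print(f"{len(ambient)} non-speech frames retained")
```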

Personal laughter archives: reflection through visualization and interaction

We present our ongoing effort to capture, represent, and interact with the sounds of our loved ones' laughter in order to offer unique opportunities for us to celebrate the positive affect in our lives.

Hello There! Is Now a Good Time to Talk?

The key determinants of opportune moments are closely related both to personal contextual factors, such as busyness, mood, and resource conflicts from dual-tasking, and to contextual factors associated with everyday routines at home, including user mobility and social presence.

SoundWatch: Exploring Smartwatch-based Deep Learning Approaches to Support Sound Awareness for Deaf and Hard of Hearing Users

A performance evaluation of four low-resource deep learning sound classification models (MobileNet, Inception, ResNet-lite, and VGG-lite) across four device architectures (watch-only, watch+phone, watch+phone+cloud, and watch+cloud) finds that the watch+phone architecture provided the best balance between CPU, memory, network usage, and classification latency.
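
A minimal sketch of how one might measure the latency side of such a comparison (hypothetical harness, not the paper's benchmark; the stand-in callable would be replaced by, e.g., a TFLite interpreter running one of the four models):

```python
import statistics
import time

def median_latency_ms(classify, clip, runs=50, warmup=5):
    """Median end-to-end classification latency over repeated runs."""
    for _ in range(warmup):          # warm caches before timing
        classify(clip)
    samples = []
    for _ in range(runs):
        t0 = time.perf_counter()
        classify(clip)
        samples.append((time.perf_counter() - t0) * 1000.0)
    return statistics.median(samples)

dummy_model = lambda clip: sum(clip)  # stand-in for a real classifier
print(f"{median_latency_ms(dummy_model, list(range(16000))):.2f} ms median")
```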

References



Synthetic Sensors: Towards General-Purpose Sensing

This work explores the notion of general-purpose sensing, wherein a single enhanced sensor can indirectly monitor a large context without direct instrumentation of objects, through what the authors call Synthetic Sensors.
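
A toy illustration of the underlying recipe (my reading, not the paper's code): summarize each channel of a multi-sensor window into statistical features, then train per-event detectors over those vectors so one board "synthesizes" many virtual sensors.

```python
import numpy as np

def featurize_window(window):
    """Summarize one window of multi-channel sensor data.

    window: (samples, channels) array from a single sensor board,
    e.g. microphone, vibration, EMI, and temperature channels.
    Returns a flat vector; a classifier over such vectors can act as
    a "synthetic sensor" for a high-level event like a kettle boiling.
    """
    return np.concatenate([window.mean(axis=0), window.std(axis=0),
                           window.min(axis=0), window.max(axis=0)])

rng = np.random.default_rng(0)
window = rng.normal(size=(256, 8))      # stand-in: 256 samples x 8 channels
print(featurize_window(window).shape)   # (32,): 4 statistics per channel
```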

CNN architectures for large-scale audio classification

This work uses various CNN architectures to classify the soundtracks of a dataset of 70M training videos with 30,871 video-level labels, and investigates varying the size of both the training set and the label vocabulary, finding that analogs of the CNNs used in image classification do well on this audio classification task and that larger training and label sets help up to a point.
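
A toy analog of applying an image-style CNN to log-mel spectrograms (hypothetical layer sizes, vastly smaller than the models trained on the 70M-video dataset):

```python
import torch
import torch.nn as nn

class AudioCNN(nn.Module):
    """Tiny image-classification-style CNN over log-mel spectrograms."""
    def __init__(self, n_classes=50):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(32 * 16 * 25, n_classes)

    def forward(self, x):            # x: (batch, 1, mel_bins, frames)
        return self.head(self.features(x).flatten(1))

logmel = torch.randn(4, 1, 64, 100)  # batch of 64-bin, 100-frame inputs
print(AudioCNN()(logmel).shape)      # torch.Size([4, 50])
```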

Freesound technical demo

This demo introduces Freesound to the multimedia community and shows its potential as a research resource.

ESC: Dataset for Environmental Sound Classification

A new annotated collection of 2,000 short clips comprising 50 classes of common sound events is presented, along with an abundant unified compilation of 250,000 unlabeled auditory excerpts extracted from recordings available through the Freesound project.

Robust Sound Event Classification Using Deep Neural Networks

A sound event classification framework is outlined that compares auditory-image front-end features with spectrogram-image front-end features, using support vector machine and deep neural network classifiers, and is shown to compare very well with current state-of-the-art classification techniques.

DeepEar: robust smartphone audio sensing in unconstrained acoustic environments using deep learning

This paper presents DeepEar, the first mobile audio sensing framework built from coupled Deep Neural Networks (DNNs) that simultaneously perform common audio sensing tasks, and shows DeepEar is feasible for smartphones by building a cloud-free, DSP-based prototype that runs continuously using only 6% of the smartphone's battery daily.
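
In the same spirit, a sketch of coupled networks that share lower layers across audio tasks, each with a task-specific output head (the layer sizes and class counts here are hypothetical, not DeepEar's):

```python
import torch
import torch.nn as nn

class CoupledAudioNet(nn.Module):
    """Shared trunk with one output head per audio sensing task."""
    def __init__(self, in_dim=128, hidden=256, tasks=None):
        super().__init__()
        tasks = tasks or {"ambient_scene": 19, "speaker_id": 23,
                          "emotion": 4, "stress": 2}
        self.shared = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                    nn.Linear(hidden, hidden), nn.ReLU())
        self.heads = nn.ModuleDict(
            {name: nn.Linear(hidden, n) for name, n in tasks.items()})

    def forward(self, x):
        h = self.shared(x)  # one pass through the shared layers
        return {name: head(h) for name, head in self.heads.items()}

x = torch.randn(8, 128)  # batch of audio feature vectors
print({k: tuple(v.shape) for k, v in CoupledAudioNet()(x).items()})
```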

IDSense: A Human Object Interaction Detection System Based on Passive UHF RFID

This work proposes a minimalistic approach to instrumenting everyday objects with passive UHF RFID tags: by measuring changes in the physical layer of the communication channel between RFID tag and reader, its real-time classification engine can simultaneously track 20 objects and identify four movement classes with 93% accuracy.
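
A sketch of plausible physical-layer features one could extract per tag (illustrative only; the exact feature set is the paper's):

```python
import numpy as np

def rfid_features(rssi, phase, reads_per_sec):
    """Summarize tag-reader physical-layer measurements for a movement
    classifier: touching or moving a tagged object perturbs all three."""
    return np.array([
        rssi.mean(), rssi.std(),     # signal-strength level and variation
        np.unwrap(phase).std(),      # phase jitter induced by motion
        reads_per_sec.mean(),        # read rate drops when a hand detunes the tag
    ])

rng = np.random.default_rng(0)
feats = rfid_features(rssi=rng.normal(-60, 2, 100),
                      phase=rng.uniform(0, 2 * np.pi, 100),
                      reads_per_sec=rng.poisson(30, 10).astype(float))
print(feats)  # one feature vector per tag per window
```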

Zensors: Adaptive, Rapidly Deployable, Human-Intelligent Sensor Feeds

This work proposes Zensors, a new sensing approach that fuses real-time human intelligence from online crowd workers with automatic approaches to provide robust, adaptive, and readily deployable intelligent sensors.
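
A minimal sketch of the crowd-to-classifier handoff at the heart of this approach (stand-in classes; Zensors' real system answers natural-language questions about camera frames):

```python
import random

class TinyModel:
    """Stand-in classifier whose confidence grows as labels accrue."""
    def __init__(self):
        self.examples = []
    def predict(self, frame):
        confidence = min(0.99, 0.1 + 0.05 * len(self.examples))
        return random.choice(["yes", "no"]), confidence
    def add_example(self, frame, label):
        self.examples.append((frame, label))

class FakeCrowd:
    """Stand-in for online crowd workers."""
    def ask(self, frame):
        return "yes"

def answer_query(frame, crowd, model, threshold=0.9):
    """Ask the crowd until the automatic model is confident enough."""
    label, conf = model.predict(frame)
    if conf >= threshold:
        return label                     # cheap automatic answer
    label = crowd.ask(frame)             # fall back to human intelligence
    model.add_example(frame, label)      # bootstrap the model
    return label

model, crowd = TinyModel(), FakeCrowd()
for t in range(20):
    answer_query(f"frame-{t}", crowd, model)
print(f"model trained on {len(model.examples)} crowd-labeled frames")
```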