User-specific Audio Rendering and Steerable Sound for Distributed Virtual Environments

Abstract

We present a method for user-specific audio rendering of a virtual environment that is shared by multiple participants. The technique differs from conventional spatialization methods such as amplitude differencing, HRTF filtering, and wave field synthesis. Instead we model virtual microphones within the 3-D scene, each of which captures audio to be rendered to a loudspeaker…
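
The abstract is truncated above, so the paper's actual rendering pipeline is not reproduced here. As a rough illustration of the virtual-microphone idea it describes, the following Python sketch mixes scene sound sources into one output channel per virtual microphone, where each microphone feeds one loudspeaker. The inverse-distance attenuation, the cardioid directivity pattern, and all names (mic_gain, render_to_speakers) are illustrative assumptions, not the authors' published method.

    import numpy as np

    # Hypothetical sketch: each virtual microphone placed in the 3-D scene
    # captures a weighted mix of the scene's sound sources, and its output
    # is routed to one loudspeaker. The attenuation and directivity models
    # below are assumptions for illustration only.

    def mic_gain(mic_pos, mic_dir, src_pos, ref_dist=1.0):
        """Gain of one source as heard by one virtual microphone.

        mic_dir is assumed to be a unit vector (the mic's look direction).
        """
        to_src = src_pos - mic_pos
        dist = np.linalg.norm(to_src)
        if dist < 1e-9:
            return 1.0
        # Inverse-distance (1/r) attenuation, clamped at a reference distance.
        atten = ref_dist / max(dist, ref_dist)
        # Cardioid directivity: full sensitivity on-axis, zero directly behind.
        cos_theta = float(np.dot(mic_dir, to_src / dist))
        directivity = 0.5 * (1.0 + cos_theta)
        return atten * directivity

    def render_to_speakers(mics, sources):
        """Mix every source into one channel per virtual microphone."""
        n = len(next(iter(sources.values()))["signal"])
        out = np.zeros((len(mics), n))
        for ch, (mic_pos, mic_dir) in enumerate(mics):
            for src in sources.values():
                out[ch] += mic_gain(mic_pos, mic_dir, src["pos"]) * src["signal"]
        return out

    # Example: two back-to-back virtual microphones, one mono source.
    mics = [(np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])),
            (np.array([2.0, 0.0, 0.0]), np.array([-1.0, 0.0, 0.0]))]
    sources = {"voice": {"pos": np.array([1.0, 1.0, 0.0]),
                         "signal": np.random.randn(48000)}}
    speaker_feeds = render_to_speakers(mics, sources)  # shape (2, 48000)

Because each participant can be given their own set of virtual microphones, each steered independently within the shared scene, this per-channel capture is one plausible way to obtain the user-specific loudspeaker feeds the abstract refers to.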

Cite this paper

@inproceedings{Wozniewski2007UserspecificAR,
  title  = {User-specific Audio Rendering and Steerable Sound for Distributed Virtual Environments},
  author = {Mike Wozniewski and Zack Settel and Jeremy R. Cooperstock},
  year   = {2007}
}