A Pervasive Multi-user Augmented Space for Mobile Immersive Interaction with Sound and Music

This project is a collaboration with artist Zack Settel and researchers Mike Wozniewski, Nicolas Bouillot, Romain Pellerin, and Tatiana Pietkiewicz. Funding is provided by the NSERC/Canada Council for the Arts New Media Initiative. A project Wiki is available here.

Overview

We seek to provide a compelling experience of immersive 3D audio for each individual in a group of users located in a common physical space of arbitrary scale. Unlike other projects involving 3D audio, our work supports navigation within a continuous, modeled audio environment through the real-world motion of users. This lets us move beyond the confines of a purely virtual world and instead support interaction with other users within the same physical environment.

On a technical level, the project calls for the integration of position and orientation sensors, audio acquisition, spatial sound rendering software, and a hybrid computing architecture that couples powerful centralized processing with mobile devices. We focus on these technologies because they are well suited to scalable, untethered immersive audio applications and artistic works.
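
As a rough illustration of this hybrid architecture, the sketch below has a mobile client stream its position and head orientation to a central spatial-audio renderer. The transport, OSC addresses, host, port, and the read_pose() helper are all assumptions made for the example (using the python-osc package), not a description of the actual Audioscape protocol.

    # Hypothetical mobile client: stream pose updates to a central renderer.
    import time
    from pythonosc.udp_client import SimpleUDPClient   # pip install python-osc

    RENDERER_HOST = "192.168.1.10"   # assumed address of the central renderer
    RENDERER_PORT = 9000             # assumed OSC port

    client = SimpleUDPClient(RENDERER_HOST, RENDERER_PORT)

    def read_pose():
        """Placeholder for the device's position/orientation sensors.
        Returns (x, y, z) in meters and yaw in degrees."""
        return 0.0, 0.0, 1.7, 90.0

    while True:
        x, y, z, yaw = read_pose()
        # One message per update: listener position and head orientation.
        client.send_message("/listener/1/pose", [x, y, z, yaw])
        time.sleep(0.05)             # roughly 20 updates per second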

This work builds on our earlier non-mobile Audioscape (formerly "Soundscape"), a rich, immersive, real-time audiovisual framework for artistic creation. The Soundscape project included the development of interactive musical instruments based on the concept of a spatial tablature for performance. The expanded framework is intended to support multiple simultaneous participants while providing a perceptual richness that was previously available only to a single user. Prior to the Mobile Audioscape project, the style of interaction was mostly screen-based: users were positioned in front of screens and navigated the virtual scene via controllers such as joysticks or gamepads. The user's audio is spatially rendered according to their current viewpoint using multiple loudspeakers or headphones. In the latter case, the user can turn their head and the spatialization system adjusts, rendering the appropriate audio for the new head orientation.
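
To make the head-orientation behavior concrete, here is a toy stand-in for a spatial renderer: a constant-power stereo pan whose output depends on the listener's head yaw. A real system would use HRTF-based binaural filtering or multi-loudspeaker panning; the function below is only a minimal sketch of the geometry involved.

    import math

    def stereo_gains(source_xy, listener_xy, listener_yaw_deg):
        """Return (left_gain, right_gain) for a source, given the listener's
        position and head yaw in degrees (0 = facing +y, positive = turning right)."""
        dx = source_xy[0] - listener_xy[0]
        dy = source_xy[1] - listener_xy[1]
        # Azimuth of the source relative to where the head is pointing.
        bearing = math.degrees(math.atan2(dx, dy))
        azimuth = math.radians(bearing - listener_yaw_deg)
        # Constant-power pan: a source straight ahead gets equal gains.
        # (Sources behind the head are simply mirrored to the front here.)
        left = abs(math.cos((azimuth + math.pi / 2) / 2))
        right = abs(math.sin((azimuth + math.pi / 2) / 2))
        return left, right

    # Turning the head 45 degrees to the right shifts the same source toward the left ear.
    print(stereo_gains((0.0, 5.0), (0.0, 0.0), 0.0))    # ~ (0.71, 0.71)
    print(stereo_gains((0.0, 5.0), (0.0, 0.0), 45.0))   # ~ (0.92, 0.38)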

Our next goal is to add location/position awareness to the system, allowing users to physically move around the virtual scene instead of using input devices such as joysticks or mice. We have already experimented with small-scale motion tracking using computer vision algorithms from the OpenCV library. For example, the 4Dmix3 installation tracks up to six users in an 800 sq. ft. gallery space. The motion of each user controls the position of a virtual avatar, which travels among a number of virtual sound generators. A buffering mechanism allows each avatar to record the sound it encounters, resulting in a spatial remixing application.
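
The following OpenCV sketch gives a flavor of this kind of small-scale tracking: background subtraction followed by blob centroids, one per user. It is an illustrative stand-in rather than the actual 4Dmix3 pipeline; the camera index, area threshold, and the OpenCV 4 findContours signature are assumptions.

    import cv2

    cap = cv2.VideoCapture(0)                  # assumed overhead camera
    bg = cv2.createBackgroundSubtractorMOG2()

    MIN_AREA = 2000      # ignore blobs smaller than this (tuned per installation)
    MAX_USERS = 6

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = bg.apply(frame)
        mask = cv2.medianBlur(mask, 5)         # suppress speckle noise
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        blobs = [c for c in contours if cv2.contourArea(c) > MIN_AREA]
        blobs.sort(key=cv2.contourArea, reverse=True)
        positions = []
        for c in blobs[:MAX_USERS]:
            m = cv2.moments(c)
            positions.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
        # 'positions' (pixel coordinates) would then drive the avatars in the scene.
        if cv2.waitKey(1) == 27:               # Esc to quit
            break

    cap.release()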

By incorporating larger-scale position tracking, such as GPS, we foresee a potentially rich platform for artistic creation. The great appeal of locative technologies lies in the solutions they bring to multi-user immersive audio and music applications, and in the implicit leap of scale, which translates directly into new musical experiences and thus new forms. By freeing users from the confines of computer terminals and gallery spaces, new artistic works can be created that are almost unbounded in scale. Participants can navigate a real physical space of arbitrary size with a corresponding (correlated) virtual audio space overlaid. Within this augmented space we intend to realize works that explore active listening, sonic play, collective mixing, and navigable music across a vast range of spatial scales.
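
One practical step in overlaying a virtual audio space on a large physical site is converting GPS fixes into local scene coordinates. The sketch below uses a simple equirectangular approximation, which is adequate over distances of a few kilometers; the origin and sample coordinates are arbitrary examples, not values from the project.

    import math

    EARTH_RADIUS_M = 6371000.0

    def gps_to_local(lat_deg, lon_deg, origin_lat_deg, origin_lon_deg):
        """Convert a GPS fix to (east, north) meters relative to a chosen origin."""
        lat, lon = math.radians(lat_deg), math.radians(lon_deg)
        lat0, lon0 = math.radians(origin_lat_deg), math.radians(origin_lon_deg)
        east = (lon - lon0) * math.cos(lat0) * EARTH_RADIUS_M
        north = (lat - lat0) * EARTH_RADIUS_M
        return east, north

    # A fix a short walk north-east of the origin maps to roughly (62, 78) meters.
    print(gps_to_local(45.5088, -73.5610, 45.5081, -73.5618))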

Publications

Videos



Last update: 30 July 2008