Shared Reality

networking, telepresence, audio, video, haptics, VR/AR/XR, multimodal, visualization, HCI
The infrastructure for this project was funded by the Canada Foundation for Innovation, the Natural Sciences and Engineering Research Council of Canada, Formation de Chercheurs et l’Aide à la Recherche, and the Laboratoires universitaires Bell.

Evolution of the System (2000-2007)

Overview

The Shared Reality Environment explores the challenging research problems associated with distributed computer-mediated human-human interaction. Our intent is to develop technology and investigate new communication metaphors that support highly interactive, complex group activity in a distributed setting. Toward this end, we are investigating active computer processing of input sources, low-latency exchange of high-fidelity audio and video streams between multiple users in different locations, and context-sensitive synthesis of the output streams.

Motivation

Anyone who has used videoconferencing tools, ranging from simple desktop applications to high-end professional systems, quickly realizes that videoconferencing is not the same as physical presence, nor even as natural as a telephone call. While the conversants can see video images of each other, these are often of limited quality. Worse, the latency in the audio signal results in an unnatural “turn-taking” style of conversation that diminishes the quality of interaction and exaggerates the sense of distance.

In the camera-and-monitor mediated world of videoconferencing, the limitations of communications bandwidth and equipment capability severely handicap the senses of sight and sound and eliminate the sense of touch altogether. As a result, even in state-of-the-art videoconference rooms using the highest quality equipment, the sense of co-presence enjoyed by individuals in the same room is never fully achieved. Gaze awareness, recognition of facial gestures, social cues conveyed through peripheral or background awareness, and sound spatialization through binaural audio are all important characteristics of multi-party interaction, yet all are often lost in a videoconference. While many of these issues can be addressed in part by improved display technology and increased bandwidth, we believe that the result will still be inadequate.

Approach

To overcome these limitations, we believe that the computer must play a more active role as an intermediary in the communication. Furthermore, it is necessary to move from the restricted videoconference environment of television monitors and stereo speakers to immersive spaces in which video fills the participant’s visual field and is reinforced by spatialized audio and vibrosensory haptic cues. Our efforts in this direction have culminated in the development of a low-latency network transport protocol and a prototype Shared Reality Environment.
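
As an illustration of the principle underlying such low-latency transport, the sketch below implements a “latest frame wins” policy over UDP: every frame is sent as a single unreliable datagram carrying a sequence number, nothing is ever retransmitted, and the receiver discards anything older than what it has already rendered. The header format and function names here are our own illustrative choices, not the project’s actual protocol.

    import socket
    import struct

    HEADER = struct.Struct("!I")   # 4-byte sequence number, network byte order
    MAX_DGRAM = 1400               # keep datagrams under a typical Ethernet MTU

    def send_frame(sock, addr, seq, payload):
        """Ship one media frame as a single unreliable datagram; never retransmit."""
        assert len(payload) <= MAX_DGRAM - HEADER.size   # one frame per datagram
        sock.sendto(HEADER.pack(seq) + payload, addr)

    def receive_latest(sock, last_seq):
        """Drain the socket and keep only the newest frame, dropping stale ones."""
        sock.setblocking(False)
        newest = None
        while True:
            try:
                data, _ = sock.recvfrom(MAX_DGRAM)
            except BlockingIOError:
                break                          # queue drained; nothing newer waiting
            (seq,) = HEADER.unpack_from(data)
            if seq > last_seq:                 # late or duplicate frames are discarded
                last_seq, newest = seq, data[HEADER.size:]
        return last_seq, newest

Retransmission and deep receive buffers would reintroduce exactly the delay that produces the turn-taking effect described above, so a conversational system prefers occasional loss to added latency.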

The present environment consists of two rooms, each equipped with multiple cameras, projectors, microphones, and speakers. We have also recently introduced a high-fidelity vibrosensory system capable of sensing and/or actuating a platform.
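
To give a concrete sense of the audio processing involved, the toy panner below spatializes a mono source using the two strongest localization cues: the interaural time difference (a small per-ear delay) and the interaural level difference (a per-ear gain). It relies on a simplified spherical-head model for illustration only; a production system would use measured head-related transfer functions rather than this approximation.

    import numpy as np

    SPEED_OF_SOUND = 343.0   # m/s at room temperature
    HEAD_RADIUS = 0.0875     # m, approximate human head radius

    def spatialize(mono, fs, azimuth_deg):
        """Render a mono signal at an azimuth between -90 (left) and +90 (right) degrees."""
        az = np.radians(abs(azimuth_deg))
        # Woodworth's spherical-head formula for the interaural time difference
        itd = (HEAD_RADIUS / SPEED_OF_SOUND) * (az + np.sin(az))
        pad = np.zeros(int(round(itd * fs)))            # ITD rounded to whole samples
        far_gain = 10 ** (-6 * np.sin(az) / 20)         # crude ILD: up to ~6 dB of head shadow
        near = np.concatenate([mono, pad])              # ear facing the source
        far = np.concatenate([pad, mono]) * far_gain    # shadowed ear: later and quieter
        left, right = (near, far) if azimuth_deg < 0 else (far, near)
        return np.stack([left, right], axis=1)          # (n_samples, 2) stereo buffer

For example, spatialize(signal, 48000, 60) renders the source well to the listener’s right by delaying and attenuating the left channel.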

Our ongoing research investigates which classes of expression, such as musical collaboration, can be transferred through such a medium unaltered, and which cannot be supported without physical co-presence. For the latter, the computer may serve as a sensory surrogate or prosthesis, augmenting the natural forms of communication being employed.

Applications of this technology extend beyond single-user immersive visualizations to multi-party tasks including distance education, telemedicine, telecommunications, virtual tourism, and entertainment.