A person's face is mapped with video projection that tracks the face even as the person moves. Through a tablet drawing interface, others can virtually "paint" the face, with the projected results appearing in real time. The system has applications in virtual makeup and augmented reality.
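The core idea above, keeping painted strokes anchored to a moving face, can be illustrated with a minimal sketch. This is not the project's actual pipeline; all names (`FaceBox`, `stroke_to_projector`) and the simple bounding-box model are hypothetical. Strokes are stored in face-relative coordinates and re-projected each frame as the tracker reports a new face position:

```python
from dataclasses import dataclass

@dataclass
class FaceBox:
    """Tracked face bounding box in projector pixel coordinates (hypothetical model)."""
    x: float  # left edge
    y: float  # top edge
    w: float  # width
    h: float  # height

def stroke_to_projector(stroke, box):
    """Map a painted stroke from normalized face coordinates
    ([0,1] x [0,1], fixed to the face) into projector pixels,
    so the stroke follows the face as the box moves."""
    return [(box.x + u * box.w, box.y + v * box.h) for (u, v) in stroke]

# A stroke painted on the tablet, stored in face-relative coordinates.
stroke = [(0.25, 0.5), (0.5, 0.5), (0.75, 0.5)]

# As the tracker reports new face positions, the same stroke is
# re-projected each frame and stays anchored to the face.
frame1 = stroke_to_projector(stroke, FaceBox(100, 80, 200, 260))
frame2 = stroke_to_projector(stroke, FaceBox(140, 90, 200, 260))
```

A real system would replace the bounding box with a full facial mesh and a calibrated camera-projector transform, but the per-frame re-projection step is the same in spirit.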
This involves research into the integrated and interchangeable use of the haptic and auditory modalities in floor interfaces, and into the synergy of perception and action in capturing and guiding human walking.
rtER offers access to high-quality "live" data that may be visualized effectively both by responders in situ and by remote operators in dedicated control rooms. Its components will include multimodal data registration, interactive visualization capabilities, and live streaming of the integrated contents.
Although other systems (e.g., Humanware's Trekker and standard GPS tools) emphasize navigation from one specific location to another, typically accomplished by explicit turn-by-turn instructions, our goal is to use ambient audio to reveal the kind of information that visual cues such as neon signs provide to sighted users. Once users notice a point of interest, additional details are available on demand.
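The notify-then-detail interaction described above can be sketched as follows. This is an assumption-laden illustration, not the system's implementation: the POI table, the flat 2D coordinate model (real positioning would use GPS coordinates), and the function names are all hypothetical.

```python
import math

# Hypothetical points of interest: name -> ((x, y) position in metres, details).
POIS = {
    "cafe":     ((10.0, 5.0), "Cafe: open until 22:00, entrance on the north side."),
    "pharmacy": ((80.0, 40.0), "Pharmacy: open 24 hours."),
}

AMBIENT_RADIUS = 30.0  # metres within which an ambient audio cue would play

def ambient_cues(user_pos):
    """Return the POIs close enough to trigger an ambient audio cue --
    the analogue of a sighted user noticing a neon sign."""
    ux, uy = user_pos
    return sorted(
        name for name, ((px, py), _) in POIS.items()
        if math.hypot(px - ux, py - uy) <= AMBIENT_RADIUS
    )

def details_on_demand(name):
    """Additional information is spoken only when explicitly requested."""
    return POIS[name][1]

cues = ambient_cues((0.0, 0.0))
```

The key design point the sketch captures is the split between unsolicited ambient cues, which reveal that something is nearby, and explicit queries, which deliver detail only on demand; this avoids the constant turn-by-turn narration of conventional navigation tools.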
The software has been used for a range of demanding applications including live concert streaming, remote mixing, collaborative performance, distance master classes, remote video interpreting of sign language, and rendering of a multi-screen uncompressed high-definition video "shared space".