Project Team

Real-Time Emergency Response

A project for the Mozilla Ignite Challenge

Fri, Nov 2, 2012
We have a great initial conversation with the folks at VACCINE and look forward to a follow-up!
Thu, Nov 1, 2012
We've just released a video tour of our immersive CAVE setup running the prototype rtER Mozilla Ignite application. Expect an updated version with narration in the next few days.

rtER - Mozilla Ignite from Shared Reality Lab on Vimeo.

Tue, Oct 30, 2012
During today's Ignite call, we speak with Antonio Guglielmo, whose team developed an enabling technology that could be highly relevant to the project. Also, Will introduces us to the team that put together this compelling video.
Thu, Oct 25, 2012
Current status: live camera feeds can be inset over the street view, the street view can jump to a camera's location, bike and car accident data is shown at geographically correct locations with information bubbles, and roads are augmented with colors to show traffic (with real data once the traffic API is complete).
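The traffic augmentation boils down to mapping a measured road speed onto a display color. A minimal sketch of that mapping, with purely illustrative thresholds (the real values would come from the traffic API, not from this code):

```java
public class TrafficColor {
    /**
     * Map an average road speed in km/h to a display color for the augmented
     * street view. The 15 and 40 km/h thresholds are placeholders chosen for
     * illustration only.
     */
    public static String colorFor(double speedKmh) {
        if (speedKmh < 15.0) return "red";     // congested
        if (speedKmh < 40.0) return "yellow";  // slow
        return "green";                        // free-flowing
    }

    public static void main(String[] args) {
        double[] samples = {8.0, 25.0, 55.0};
        for (double s : samples)
            System.out.println(s + " km/h -> " + colorFor(s));
    }
}
```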
Wed, Oct 24, 2012
Today, we're releasing the code we have so far as two separate projects. The applications designed for this prototyping round have been released on GitHub as the rtER project. In the course of our prototyping, however, we have also begun work on a new open-source library that provides access to Google Street View data in Processing. This library, called Panoia, is also available on GitHub.
Tue, Oct 23, 2012
Coding reaches a fever pitch! Sparky has a smoothed renderer for street view and is integrating mash-up data with help from Stephane, who imported some datasets with basic visualization code; Stephane's prior participation in HackTaVille came in very handy. Nehil comes through with video streaming for Android, taking advantage of the open-source ipcamera library, Google libjingle, and NanoHTTPD, along with location services from the Android SDK. Very exciting!
Sun, Oct 21, 2012
Since the effort to register and fuse live image data with pre-existing models is a longer-term challenge, we are beginning with smaller-scale proof-of-concept efforts that move toward our objectives. The first of these is a prototype API for rendering relevant mashup data to augment the display of the environment, which might take a street-view, bird's-eye, or other perspective depending on the needs of the viewer. To focus on an obvious use case where bandwidth is an issue, we hope to submit an interactively updating street-view display, controlled by user body motion within a disaster- or emergency-response control room and augmented with mashup data drawn graphically at the appropriate locations. Although the datasets we are likely to poll are largely non-real-time, they are representative of the type of content relevant to emergency responders, e.g., the locations of bicycle accidents in the city. The intent is for such a capability to help coordinators monitor and allocate resources to areas in need and plan effective navigation paths for emergency responders. As a parallel effort that merges directly with this visualization environment, we hope to integrate a simple real-time video-streaming smartphone app whose content could be overlaid on the display of pre-existing models, much as in the before-and-after examples from the Haiti earthquake.
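To make the mashup-placement idea concrete, here is a rough sketch (in Java; all names and coordinates are illustrative, not the project's actual API) of how a geo-tagged data point such as a bicycle accident could be positioned within a street-view display: compute the compass bearing from the camera to the point, then the signed offset from the camera's current heading.

```java
public class MashupOverlay {
    /** Great-circle initial bearing from viewer to point of interest, in degrees [0, 360). */
    public static double bearingDeg(double lat1, double lon1, double lat2, double lon2) {
        double p1 = Math.toRadians(lat1), p2 = Math.toRadians(lat2);
        double dl = Math.toRadians(lon2 - lon1);
        double y = Math.sin(dl) * Math.cos(p2);
        double x = Math.cos(p1) * Math.sin(p2) - Math.sin(p1) * Math.cos(p2) * Math.cos(dl);
        return (Math.toDegrees(Math.atan2(y, x)) + 360.0) % 360.0;
    }

    /** Signed heading offset in [-180, 180): positive means the point is right of view centre. */
    public static double headingOffset(double cameraHeadingDeg, double poiBearingDeg) {
        return (poiBearingDeg - cameraHeadingDeg + 540.0) % 360.0 - 180.0;
    }

    public static void main(String[] args) {
        // Hypothetical example: camera in downtown Montreal, facing due north,
        // with a bicycle-accident marker to the north-east.
        double off = headingOffset(0.0,
                bearingDeg(45.5017, -73.5673, 45.5050, -73.5600));
        System.out.printf("draw bubble at %.1f deg right of view centre%n", off);
    }
}
```

The same offset, paired with an elevation angle, would determine where the information bubble is drawn in the panorama.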
Tue, Oct 16, 2012
George Adams points us to the Visual Analytics for Command, Control, and Interoperability Environments (VACCINE) center of excellence, whose website links to a wealth of related research.
Mon, Oct 15, 2012
Efforts are underway on smooth transitions for greater immersion in our Google Street View display rendering.
Sun, Oct 14, 2012
Sparky shares this article on a low-cost alternative to thermal imaging cameras.
Sun, Oct 14, 2012
In an effort to build toward the long-term objectives identified in our Ignite idea, we submitted an application in response to Ericsson's call for proposals related to "strengthening Canada's Information and Communications Technology (ICT) ecosystem". The proposal ties in to "Ericsson Response", the company's volunteer activities that leverage mobile communications to provide support for disaster relief and humanitarian aid. Our specific objectives include:
  1. Automatic registration of a live video stream from a mobile (smartphone) camera with pre-existing 2D and 3D models of the scene, based on GPS and compass data, to provide approximate coordinates or referencing within geospatial databases such as Google Street View and the 3D city models available from
  2. Refinement of the registration using similarity metrics between live video and potentially disparate reference data sources, e.g., multispectral imagery or oblique angle satellite views, and/or of scene content captured under significantly different conditions, e.g., daytime vs. nighttime or destruction resulting from fire or earthquake damage. For the purposes of this project, we would likely need to limit ourselves to considering only one of these cases to ensure that the challenge remains realistic.
  3. Augmented-reality visualization support, beginning on the smartphone itself, in which the live video is superimposed appropriately on the reference data, giving the user control over the blending between "current" and model data.
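The similarity metrics in objective 2 could start from something as standard as zero-mean normalized cross-correlation, which is invariant to global brightness and contrast changes and is therefore a plausible first step toward comparing, say, a daytime reference image against a nighttime live frame. A minimal sketch, not the project's actual code, operating on flattened grayscale patches:

```java
public class PatchSimilarity {
    /**
     * Zero-mean normalized cross-correlation between two equal-length
     * grayscale patches, in [-1, 1]. Subtracting each patch's mean and
     * dividing by its standard deviation makes the score invariant to
     * global brightness and contrast shifts.
     */
    public static double zncc(double[] a, double[] b) {
        if (a.length != b.length || a.length == 0)
            throw new IllegalArgumentException("patches must be the same non-zero size");
        double ma = 0, mb = 0;
        for (int i = 0; i < a.length; i++) { ma += a[i]; mb += b[i]; }
        ma /= a.length; mb /= b.length;
        double num = 0, va = 0, vb = 0;
        for (int i = 0; i < a.length; i++) {
            double da = a[i] - ma, db = b[i] - mb;
            num += da * db; va += da * da; vb += db * db;
        }
        return num / Math.sqrt(va * vb);
    }

    public static void main(String[] args) {
        double[] day   = {10, 40, 30, 80, 60, 20};
        // Same scene structure, globally darker and lower contrast:
        double[] night = {5, 20, 15, 40, 30, 10};
        System.out.println(zncc(day, night)); // identical structure -> 1.0
    }
}
```

A real pipeline would compute this over many candidate alignments and keep the best-scoring one; handling genuinely disparate modalities (e.g., multispectral imagery) would need more robust metrics such as mutual information.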
Tue, Oct 9, 2012
I note that Google and Microsoft have done loads of work on integrating densely sampled photo sets within their respective "street view" tools, using registration techniques to match live video with static models (see Aguera y Arcas' TED talk, starting around the 4:00 mark, for the potential). However, I suspect this is unlikely to work on highly heterogeneous data sources without considerably more research, and it is probably not suitable for sparse models. Similarly, there has been considerable work on light-field and dense-camera-array view reconstruction using image-based rendering techniques; we've done some work in this area as well, for live reconstruction of arbitrary perspectives for remote stereo viewing of medical surgery procedures (using our HSVO camera array) and for dynamic video mosaicing from limited camera coverage, although the latter is far from real-time. What's out there in terms of other research efforts or tools that would be suitable for registration of imagery, video, and video-like (e.g., multispectral) content with as few constraints as possible?
Thu, Oct 4, 2012
Will Barkis is kind enough to let us post on the Ignite blog to put out a call for interested collaborators, especially emergency and disaster responders.
Mon, Oct 1, 2012
We get in touch with Scott Reuter, a "virtual operations support team" member who maintains a valuable blog on disaster planning. I'm hoping we get to pick his brain!