Occlusion Detection in Front Projection Environments based on
Camera-Projector Calibration
Overview
Front projection technology is increasingly being used to create large displays for data visualization, immersive environments and augmented reality. Recently, there has also been growing interest in the development of novel camera-projector systems to create intelligent and interactive ubiquitous displays.
Front projection displays, however, suffer from occlusions,
resulting in distracting shadows being cast onto the display and loss
of information in occluded regions. Distracting light is also
projected onto the occluding object (typically the user).
The goal of this research is to develop a camera-projector system for
occlusion detection in front projection environments.
The implemented occlusion detection technique is based on offline
camera-projector geometric and color calibration, which then enables
online, dynamic camera view synthesis of arbitrary projected
scenes. Occluded display regions are detected through pixel-wise
differencing between predicted and captured camera images.
Such a system can be used to enable dynamic shadow removal
through Active Virtual Rear Projection (AVRP), which involves
detecting and compensating for shadows by filling them in with
redundant overlapping projectors.
As well, by determining
which projector is occluded, it is possible to avoid projecting
distracting light on the user. Alternatively, rather than suppressing
light, an occluding object itself could potentially be augmented by
customizing the projected imagery in the corresponding display region.
Calibration-based occlusion detection can also be used to facilitate
automatic user sensing in interactive display applications.
The implemented camera-projector system is demonstrated for one such
application, namely dynamic shadow detection and removal using a
dually overlapped projector display.
The problem of occlusion detection in front projection environments
has been addressed in the context of various applications, including
shadow removal, occluder light suppression, and hand detection and
tracking for gesture recognition.
Current occlusion detection techniques can be divided into two groups,
namely direct and indirect occlusion detection. The former approach
locates the occluding object directly in the scene, while the latter
detects an occlusion indirectly by locating its more easily
discernible shadow.
Current Results
The implemented occlusion detection algorithm consists of offline
camera-projector calibration, followed by online occlusion detection
performed for each camera frame.
Step 1: Offline Camera-Projector Calibration
Offline calibration is performed in two steps, namely geometric and
color calibration, to compute the image warping homography transform
and color transfer function, respectively, between each
camera-projector pair. The use of a planar white Lambertian display surface is assumed.
a) Geometric Calibration
Offline geometric calibration is performed for two reasons. First,
during initial display configuration, corrective projector prewarps
must be computed and applied to align multiple overlapping projectors
and, optionally, to eliminate keystone distortion. Second, during the
online occlusion detection process, camera-projector image warps are
required for camera view synthesis and, in the case of shadow removal,
for mapping occlusion regions detected in camera space to
corresponding projector pixels.
For each camera-projector pair, the 3x3 projector-to-camera image
warping homography Hpc is estimated:
[Figure: Projector-to-camera image warping homography Hpc]
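Given four or more point correspondences between projector and camera images (obtained, for example, by projecting calibration targets onto the display surface), Hpc can be estimated with the direct linear transform (DLT). The following is a minimal numpy sketch, not the thesis implementation; the function name and the omission of Hartley normalization are simplifications:

```python
import numpy as np

def estimate_homography(proj_pts, cam_pts):
    """Estimate the 3x3 homography Hpc mapping projector points to camera
    points via the direct linear transform (DLT).

    proj_pts, cam_pts: (N, 2) arrays of corresponding points, N >= 4.
    """
    rows = []
    for (x, y), (u, v) in zip(proj_pts, cam_pts):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    A = np.asarray(rows, dtype=float)
    # Hpc (stacked as a 9-vector) spans the null space of A: take the
    # singular vector associated with the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]  # fix the homography's scale ambiguity
```

In practice the correspondences are noisy, so many more than four points, coordinate normalization, or a robust estimator would typically be used.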
Projector image warping homographies required for multi-projector
alignment and keystone correction are also derived
(see Daniel Sud's
research for more details on multi-projector display systems).
b) Color Calibration
Offline color calibration is performed to enable projector-to-camera
color correction of the synthesized camera image when predicting the
camera view of a projected display. However, we only recover a rough
estimate of the complex nonlinear color transfer function between each
camera-projector pair. We assume that this simplification suffices
for our occlusion detection tasks.
For each camera-projector pair, a 3x4 projector-to-camera linear
color transfer matrix Mpc is estimated, which also
accounts for the black offset of the projectors:
[Figure: Projector-to-camera color correspondences]
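Given measured projector/camera color correspondences (for example, from capturing projected uniform color patches), Mpc can be fit by linear least squares. A sketch assuming numpy, with illustrative names; the homogeneous fourth column is what absorbs the projector's black offset:

```python
import numpy as np

def estimate_color_transfer(proj_rgb, cam_rgb):
    """Fit the 3x4 matrix Mpc such that cam ≈ Mpc @ [r, g, b, 1]^T.

    proj_rgb, cam_rgb: (N, 3) arrays of corresponding colors.
    The constant column models the projector's black offset.
    """
    n = proj_rgb.shape[0]
    X = np.hstack([proj_rgb, np.ones((n, 1))])         # (N, 4) homogeneous
    M_t, *_ = np.linalg.lstsq(X, cam_rgb, rcond=None)  # solves X @ M_t ≈ cam
    return M_t.T                                       # (3, 4)
```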
Step 2: Online Occlusion Detection
During online occlusion detection, camera view synthesis is performed
to predict how the projected display would appear, unoccluded, from
the perspective of the monitoring camera.
Pixel-wise comparison is then performed between corresponding
predicted and captured camera images to locate significant color
inconsistencies, which correspond to occluded display regions.
Depending on camera-projector placement, these regions may represent
shadow artifacts on the display or the occluding object itself:
[Figure: Online occlusion detection process for a single projector display]
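Per frame, the two offline transforms are chained: the projector framebuffer is warped into camera space with Hpc, color-corrected with Mpc, and differenced against the captured frame. A simplified numpy sketch with nearest-neighbor sampling and a fixed global threshold (both simplifications relative to the thesis; names illustrative):

```python
import numpy as np

def synthesize_camera_view(proj_img, Hpc, Mpc, cam_shape):
    """Predict the unoccluded camera view of a projected image:
    inverse-warp with Hpc (nearest-neighbor), then color-correct with Mpc."""
    h, w = cam_shape
    Hinv = np.linalg.inv(Hpc)
    ys, xs = np.mgrid[0:h, 0:w]
    cam_pts = np.stack([xs, ys, np.ones_like(xs)]).reshape(3, -1).astype(float)
    pp = Hinv @ cam_pts                       # back-project camera pixels
    pp /= pp[2]
    px = np.clip(np.round(pp[0]).astype(int), 0, proj_img.shape[1] - 1)
    py = np.clip(np.round(pp[1]).astype(int), 0, proj_img.shape[0] - 1)
    warped = proj_img[py, px].reshape(h, w, 3)
    return warped @ Mpc[:, :3].T + Mpc[:, 3]  # per-pixel color transfer

def detect_occlusion(predicted, captured, thresh=30.0):
    """Boolean occlusion map: True where colors disagree significantly."""
    diff = np.linalg.norm(predicted.astype(float) - captured.astype(float),
                          axis=-1)
    return diff > thresh
```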
Shadow Removal Application
The performance of the implemented occlusion detection system is demonstrated when integrated with a dual projector AVRP display. Work on the shadow removal system was done in collaboration with Daniel Sud.
An exclusive-OR shadow removal method was adopted: each
display pixel is illuminated by only one projector at any given time. Assuming an unobstructed camera view, shadows are detected and eliminated by filling in the corresponding display regions with the unoccluded projector.
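The per-pixel bookkeeping behind this exclusive-OR scheme is simple: maintain a map of which projector owns each display pixel, and flip ownership wherever the owning projector is detected as occluded. A sketch (numpy; the function name and two-projector restriction are illustrative):

```python
import numpy as np

def reassign_shadowed_pixels(assignment, occlusion_maps):
    """assignment: (H, W) int array of 0/1, naming the projector that
    currently illuminates each display pixel.
    occlusion_maps: pair of boolean maps; occlusion_maps[k] is True where
    projector k's light path is blocked.
    Returns a new assignment with shadowed pixels flipped to the other
    (assumed unoccluded) projector."""
    shadowed = ((assignment == 0) & occlusion_maps[0]) | \
               ((assignment == 1) & occlusion_maps[1])
    out = assignment.copy()
    out[shadowed] = 1 - assignment[shadowed]
    return out
```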
Experimental results are provided in the figure below, which
depicts the details of the shadow removal process for one camera
frame.
As shown, the second projector compensates for the shadow that
resulted from occluding the first one. We note that although the
second projector is operating at full intensity in the corresponding
region, its display is dimmer than that of the first. During
occlusion detection, these intensity differences are accounted for by
per-projector color calibration, allowing for the synthesis of a more
accurate color corrected camera image at frame i+1. In the
future, true color seamlessness between the two projectors can be
achieved by performing inter-projector color calibration.
[Figure: Shadow removal process for camera frame i.]
Shadow detection and removal results over a sequence of frames are
also illustrated below, where the entire display is illuminated
initially only by the first projector. Subsequent occlusions are
detected and the second projector is instructed to fill in shadows
selectively as they occur (Frames i to i+6). From Frame
i+6 onward, it is the second projector that is occluded,
and shadowed display pixels are re-assigned to the first
projector.
[Figure: Shadow removal results for a sequence of captured camera frames.]
Two simple improvements to the occlusion detection algorithm were also
implemented: a variable thresholding technique to improve detection in
darker display regions, and morphological image smoothing (erosion
followed by dilation) to reduce noise in the occlusion map.
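These two refinements can be sketched as follows (numpy; the threshold parameters are illustrative, not the thesis values). The tolerance scales with predicted brightness, so darker regions are compared against a tighter threshold, and an erosion-dilation pass (morphological opening) removes isolated false positives:

```python
import numpy as np

def erode3(mask):
    """3x3 binary erosion; the border is padded True so occlusions touching
    the image edge are not eaten away."""
    p = np.pad(mask, 1, constant_values=True)
    out = np.ones_like(mask)
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            out &= p[dy:dy + mask.shape[0], dx:dx + mask.shape[1]]
    return out

def dilate3(mask):
    """3x3 binary dilation (border padded False)."""
    p = np.pad(mask, 1, constant_values=False)
    out = np.zeros_like(mask)
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            out |= p[dy:dy + mask.shape[0], dx:dx + mask.shape[1]]
    return out

def occlusion_map(predicted, captured, base=10.0, scale=0.1):
    """Variable-threshold differencing followed by morphological opening.
    The tolerance grows with predicted brightness, so dark display
    regions get a tighter threshold."""
    predicted = predicted.astype(float)
    diff = np.abs(predicted - captured.astype(float)).max(axis=-1)
    thresh = base + scale * predicted.mean(axis=-1)
    return dilate3(erode3(diff > thresh))
```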
Future Work
We are currently working on intensity blending at shadow edges between the two
projectors (using a simple ramp function) in order to reduce the
visibility of noise and gaps after shadows have been filled in.
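One possible realization of such a ramp, sketched here as a per-pixel blend weight for the filling projector (numpy; approximating distance-to-edge by repeated erosion, and the parameter names, are assumptions rather than the planned implementation):

```python
import numpy as np

def ramp_weights(mask, width=3):
    """Blend weight in [0, 1] for the filling projector: 1 deep inside the
    shadow mask, falling off linearly to 0 over `width` pixels at its edge.
    Distance-to-edge is approximated by counting surviving 3x3 erosions."""
    dist = np.zeros(mask.shape, dtype=float)
    cur = mask.copy()
    for _ in range(width):
        dist += cur                     # pixel survives one more erosion level
        p = np.pad(cur, 1, constant_values=False)
        nxt = np.ones_like(cur)
        for dy in (0, 1, 2):
            for dx in (0, 1, 2):
                nxt &= p[dy:dy + cur.shape[0], dx:dx + cur.shape[1]]
        cur = nxt
    return dist / width
```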
Publications
Hilario, M.N. (2005) Occlusion Detection in Front Projection
Environments Based on Camera-Projector Calibration, Master's thesis,
Electrical and Computer Engineering Department, McGill University.
Hilario, M.N. and Cooperstock, J.R. (2004) Occlusion Detection for Front-Projected Interactive Displays. Pervasive Computing, Vienna, April 21-23 (appears in Advances in Pervasive Computing, Austrian Computer Society (OCG), ISBN 3-85403-176-9).
Last update: 22 June 2005