Vision and Robotics Seminar
Visual Interfaces Group
Artificial Intelligence Laboratory, MIT
will speak on
Combining face and gait in view-independent recognition
We develop a view-normalization approach to multi-view face and gait recognition. An image-based visual hull (VH) is computed from a set of monocular views and used to render virtual views for tracking and recognition.
We determine canonical viewpoints by examining the 3-D location, appearance (texture), and motion of the moving person. For optimal face recognition, we place virtual cameras to capture frontal face appearance; for gait recognition, we place virtual cameras to capture a side view of the person. Multiple virtual images can be rendered simultaneously, and camera positions are dynamically updated as the person moves through the workspace. Image sequences from each canonical view are passed to an unmodified face or gait recognition algorithm. We investigate statistical techniques for fusing these two modalities and for integrating identity evidence over time. Our experiments show that view normalization yields greater recognition accuracy than the unnormalized input sequences, and that integrated face and gait recognition outperforms either modality alone.
Canonical view estimation, rendering, and recognition have been efficiently implemented and can run at near real-time speeds.
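As a rough illustration of the kind of score-level fusion and temporal integration described in the abstract (a minimal sketch under assumed conventions, not the speakers' actual method), the example below accumulates per-frame face and gait similarity scores for each enrolled identity and combines them with a weighted sum; the identity names, score values, and weights are all hypothetical:

```python
def fuse_scores(face_scores, gait_scores, w_face=0.5, w_gait=0.5):
    """Weighted-sum fusion of two modalities, integrated over time.

    face_scores, gait_scores: dicts mapping identity -> list of per-frame
    similarity scores (higher = better match). Per-frame scores are summed
    to integrate evidence over the sequence, then the two modality totals
    are combined with a weighted sum. Returns the best-matching identity.
    """
    fused = {}
    for identity in face_scores:
        face_total = sum(face_scores[identity])
        gait_total = sum(gait_scores.get(identity, []))
        fused[identity] = w_face * face_total + w_gait * gait_total
    return max(fused, key=fused.get)

# Hypothetical per-frame scores for two enrolled identities
face = {"alice": [0.9, 0.8, 0.85], "bob": [0.4, 0.5, 0.45]}
gait = {"alice": [0.7, 0.75, 0.7], "bob": [0.6, 0.65, 0.6]}
print(fuse_scores(face, gait))  # -> alice
```

The weighted sum is only one of several plausible fusion rules; product (log-likelihood) or rank-based combination would fit the same interface.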
Joint work with Trevor Darrell and Lily Lee.
The lecture will take place in the
Lecture Hall, Room 1, Ziskind Building
on Thursday, February 7, 2002