Abstract
Dynamic analysis of video sequences often relies on segmenting the sequence
into regions of consistent motion. Approaching this problem requires a
definition of which motions are regarded as consistent. Common approaches to
motion segmentation group together points or image regions that have the same
motion between successive frames (where the "same motion" can be 2D, 3D, or
non-rigid). In this paper we define a new type of motion consistency, based on
the temporal consistency of behaviors across multiple frames in the video
sequence. Our definition of consistent "temporal behavior" is expressed in
terms of multi-frame linear subspace constraints. This definition applies to
2D, 3D, and some non-rigid motions without requiring prior model selection.
We further present a multi-frame multi-body segmentation algorithm that
applies the new motion consistency constraint directly to image brightness
measurements, without requiring prior correspondence estimation or feature
tracking.
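To give a feel for how multi-frame linear subspace constraints separate motions, here is a minimal NumPy sketch (an illustration of the classical subspace/factorization idea, not the algorithm of this paper, which works on brightness measurements rather than tracked points). It builds a trajectory matrix for two independently moving objects, verifies its low rank, and uses a shape-interaction-style affinity (in the spirit of Costeira & Kanade) to show that same-object trajectories are coupled while cross-object entries vanish. All point counts, motions, and thresholds are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
F, P = 10, 6  # frames, points per object (illustrative values)

def trajectories(shape, motions):
    """Stack x,y image coordinates of P points over F frames into a 2F x P matrix.

    shape:   P x 2 object points
    motions: list of F (A, t) pairs, one 2D affine motion per frame
    """
    rows = []
    for A, t in motions:
        proj = shape @ A.T + t  # P x 2 point positions in this frame
        rows.append(proj[:, 0])  # x coordinates
        rows.append(proj[:, 1])  # y coordinates
    return np.array(rows)

# Two objects, each with its own random affine motion per frame.
motions1 = [(rng.random((2, 2)), rng.random(2)) for _ in range(F)]
motions2 = [(rng.random((2, 2)), rng.random(2)) for _ in range(F)]
W1 = trajectories(rng.random((P, 2)), motions1)
W2 = trajectories(rng.random((P, 2)), motions2)
W = np.hstack([W1, W2])  # combined 2F x 2P measurement matrix

# Each affinely moving object contributes trajectory columns lying in a
# rank-<=3 subspace of R^{2F} (coefficients X, Y, 1), so W has rank <= 6.
s = np.linalg.svd(W, compute_uv=False)
r = int((s > 1e-8 * s[0]).sum())
print("numerical rank of W:", r)

# Shape-interaction-style affinity: for independent motion subspaces,
# Q = V_r V_r^T is block-diagonal, i.e. entries linking trajectories of
# different objects are (numerically) zero.
_, _, Vt = np.linalg.svd(W)
Q = Vt[:r].T @ Vt[:r]
same = abs(Q[0, 1])   # two points on the same object
cross = abs(Q[0, P])  # points on different objects
print("same-object affinity:", same, " cross-object affinity:", cross)
```

Grouping columns by the block structure of Q recovers the segmentation; the paper's contribution is to obtain an analogous multi-frame consistency constraint directly from image brightness, without first tracking the points.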
Papers
L. Zelnik-Manor, M. Machline and M. Irani, "Multi-Body Factorization With Uncertainty: Revisiting Motion Consistency." To appear in IJCV special issue on Vision and Modeling of Dynamic Scenes.
M. Machline, L. Zelnik-Manor and M. Irani, "Multi-Body Segmentation: Revisiting Motion Consistency." Workshop on Vision and Modeling of Dynamic Scenes (with ECCV'02).
Click here for a preliminary version: MultiBodySeg.pdf (809K)
Example Sequences and Results:
Questions and comments should be addressed to:
lihi@wisdom.weizmann.ac.il