This paper studies the problem of matching two unsynchronized video
sequences of the same dynamic scene, recorded by different stationary
uncalibrated video cameras. The matching is done both in time and
in space, where the spatial matching can be modeled by a 2D
homography or a (3D) fundamental matrix. Our approach is based on
matching space-time trajectories of moving objects, in contrast
to matching interest points (e.g., corners), as is done in standard
feature-based image-to-image matching techniques. The sequences are
matched in space and time by enforcing consistent matching of all points
along corresponding space-time trajectories.
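As a brief illustration (using standard multiple-view-geometry notation that is an assumption here, not taken from the paper), let $p_i \leftrightarrow p'_i$ denote corresponding points, in homogeneous image coordinates, sampled along a pair of matching space-time trajectories in the two sequences. Consistent matching along the trajectories then amounts to requiring, for every such pair,
\[
  p'_i \cong H\,p_i \quad \text{(2D homography $H$, e.g., an effectively planar scene)}
  \qquad\text{or}\qquad
  {p'_i}^{\!\top} F\, p_i = 0 \quad \text{(fundamental matrix $F$, general 3D scene)},
\]
where $\cong$ denotes equality up to scale.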
    By exploiting the dynamic properties of these space-time trajectories,
we obtain sub-frame temporal correspondence (synchronization) between
the two video sequences. Furthermore, using trajectories rather than
feature points significantly reduces the combinatorial complexity of the
spatial point-matching problem when the search space is large. This
benefit allows us to match information across sensors in situations that
are extremely difficult when only image-to-image matching is used,
including: (a) matching under large scale (zoom) differences, (b) very wide
base-line matching, and (c) matching across different sensing
modalities (e.g., IR and visible-light cameras). We show examples
of recovering homographies and fundamental matrices under such
conditions.
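The sub-frame temporal correspondence can likewise be written down explicitly; the following simple model is an illustrative assumption (the abstract does not specify one): for two stationary cameras with equal, fixed frame rates, corresponding frame indices are related by
\[
  t' = t + \Delta t, \qquad \Delta t \in \mathbb{R},
\]
where the temporal offset $\Delta t$ is generally not an integer number of frames; exploiting the dynamics of the space-time trajectories is what permits recovering $\Delta t$ at sub-frame accuracy.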