Feature-Based Sequence-to-Sequence Matching
Yaron Caspi, Denis Simakov and Michal Irani


Our affiliation:
  Computer Vision Group
  Faculty of Mathematics and Computer Science
  Weizmann Institute of Science
See examples:

Multi-sensor alignment (infra-red vs. visible).

  1. The setup: scene and cameras
     Two cameras (one visible-light PAL, the other infra-red NTSC) are placed next to each other, capturing the same distant scene.

  2. Input sequences
     Camera 1 (visible light): video AVI 1.4Mb / MPEG 1.65Mb
     Camera 2 (infra-red): video AVI 600Kb / MPEG 1.15Mb

  3. Detect moving objects, using background subtraction.
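The background-subtraction step can be sketched as follows. This is a hypothetical minimal version (a per-pixel temporal-median background model with a fixed grey-level threshold), not necessarily the detector used in the paper:

```python
import numpy as np

def moving_object_mask(frames, threshold=25.0):
    """Per-frame masks of moving pixels via background subtraction.

    Hypothetical sketch: the background is the per-pixel temporal
    median of the whole clip; pixels deviating from it by more than
    `threshold` grey levels are flagged as moving.
    """
    frames = np.asarray(frames, dtype=np.float64)   # (T, H, W) grey-level clip
    background = np.median(frames, axis=0)          # static-scene estimate
    return np.abs(frames - background) > threshold  # (T, H, W) boolean masks
```

A median background is robust as long as each pixel shows the background in most frames, which holds for small fast-moving objects like the kites in this scene.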

  4. Extract interest points, by taking the centroid of each blob in each frame.
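A minimal sketch of the interest-point step, assuming a single moving object per frame; the page's description takes one centroid per blob, so a fuller version would first split the mask into connected components:

```python
import numpy as np

def blob_centroids(masks):
    """One interest point per frame: the centroid of the moving blob.

    Simplified sketch assuming one moving object per frame; a fuller
    implementation would label connected components and emit one
    centroid per blob.
    """
    points = []
    for frame_mask in masks:            # masks: (T, H, W) boolean
        ys, xs = np.nonzero(frame_mask)
        if xs.size == 0:
            points.append(None)         # object not visible in this frame
        else:
            points.append((xs.mean(), ys.mean()))  # (x, y) centroid
    return points
```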

  5. Construct trajectories from these interest points, separately for camera 1 (visible light) and camera 2 (infra-red).
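The trajectory-construction step is not detailed on this page; one simple possibility is greedy nearest-neighbour linking of points across consecutive frames, sketched below with an assumed `max_jump` gating threshold:

```python
import numpy as np

def link_trajectories(points_per_frame, max_jump=20.0):
    """Link per-frame interest points into trajectories.

    Hypothetical greedy sketch: each point extends the trajectory whose
    most recent point (in the previous frame) is nearest, provided the
    jump is below `max_jump` pixels; otherwise it starts a new track.
    """
    trajectories = []   # each: list of (frame_index, point) pairs
    for t, points in enumerate(points_per_frame):
        for p in points:
            p = np.asarray(p, dtype=float)
            best, best_d = None, max_jump
            for traj in trajectories:
                if traj[-1][0] == t - 1:   # only extend live tracks
                    d = np.linalg.norm(p - traj[-1][1])
                    if d < best_d:
                        best, best_d = traj, d
            if best is not None:
                best.append((t, p))
            else:
                trajectories.append([(t, p)])
    return trajectories
```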

  6. Use the trajectories as features for the matching algorithm.
  7. Recover the homography (spatial alignment) and the time shift (temporal alignment).
  8. Fused images (visible light and infra-red)
     Note that the kite marked with a red circle is better perceived in the infra-red sequence, and the kite marked with a green circle in visible light. In the fused sequence both kites are clearly seen.
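The alignment-recovery step can be illustrated with a standard approach, not necessarily the paper's estimator: a Direct Linear Transform (DLT) fit of the homography from corresponding trajectory points, combined with an exhaustive search over integer time shifts. `best_time_shift` and its search range are assumptions introduced for this sketch:

```python
import numpy as np

def fit_homography(src, dst):
    """Direct Linear Transform: H such that dst ~ H @ (x, y, 1).

    src, dst: (N, 2) arrays of corresponding points, N >= 4.
    """
    A = []
    for (x, y), (u, v) in zip(np.asarray(src, float), np.asarray(dst, float)):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    H = Vt[-1].reshape(3, 3)       # null vector = least-squares H
    return H / H[2, 2]

def best_time_shift(traj1, traj2, shifts=range(-10, 11)):
    """Joint spatio-temporal alignment sketch (assumed helper).

    traj1, traj2 map frame index -> (x, y). For each candidate integer
    shift, fit a homography to the temporally overlapping points and
    keep the shift with the lowest mean reprojection error.
    """
    best_err, best_dt, best_H = np.inf, None, None
    for dt in shifts:
        ts = [t for t in traj1 if t + dt in traj2]
        if len(ts) < 5:   # margin over the 4-point minimum, so that
            continue      # mismatched shifts cannot be fitted exactly
        src = np.array([traj1[t] for t in ts])
        dst = np.array([traj2[t + dt] for t in ts])
        H = fit_homography(src, dst)
        proj = np.hstack([src, np.ones((len(ts), 1))]) @ H.T
        err = np.mean(np.linalg.norm(proj[:, :2] / proj[:, 2:] - dst, axis=1))
        if err < best_err:
            best_err, best_dt, best_H = err, dt, H
    return best_dt, best_H
```

In practice a sub-frame temporal offset and outlier trajectories would require a more careful estimator; this sketch only shows how trajectory correspondences determine both alignments at once.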

Last modified: Oct 06, 2006