Michal Irani
Video data provides a visual window
into the space-time world. It captures the continuous evolution of dynamic scenes
over extended regions in time and space. My research is focused on analyzing
the information contained in visual space-time volumes generated by video sequences.
In particular, I have been able to show that when moving away from image frames
and using all available information in entire space-time volumes, one can perform
tasks that are very difficult, and often impossible, when only "slices"
of this information are used, such as discrete image frames or discrete feature
points. This gives rise to powerful new ways of analyzing and exploiting recorded
visual information from single and multiple video cameras.
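To make the notion of a space-time volume concrete, the sketch below (my own illustrative Python, not code from any of the papers listed here) stacks a sequence of frames into a single 3-D array and computes intensity gradients along the temporal and spatial axes, the kind of whole-volume measurement that individual frames or sparse feature points do not provide. The function names and the use of NumPy are assumptions for illustration only.

```python
import numpy as np

def space_time_volume(frames):
    """Stack equally sized grayscale frames (H x W arrays)
    into a single (T x H x W) space-time volume."""
    return np.stack(frames, axis=0).astype(np.float32)

def space_time_gradients(volume):
    """Return (dt, dy, dx): intensity derivatives along the temporal,
    vertical, and horizontal axes, computed over the whole volume."""
    dt, dy, dx = np.gradient(volume)
    return dt, dy, dx

# Usage with synthetic data: 30 frames of 64x64 noise.
frames = [np.random.rand(64, 64) for _ in range(30)]
volume = space_time_volume(frames)
dt, dy, dx = space_time_gradients(volume)
print(volume.shape, dt.shape)  # (30, 64, 64) (30, 64, 64)
```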
My work is guided by real-world applications of video, with the aim of turning video into a usable data type. In particular, I have been focusing on applications such as action recognition; alignment and integration of information from multiple sensors to obtain enhanced visual sensing capabilities; alternative ways of visualizing recorded data; video synthesis and manipulation; rapid video search; video enhancement; video compression; and various surveillance applications.
Recent Publications
- [with Lihi Zelnik-Manor] Event-Based Analysis of Video. IEEE Conference on Computer Vision and Pattern Recognition (2001) 123-130.
- [with Yaron Caspi] Aligning Non-Overlapping Sequences. International Journal of Computer Vision 48(1) (2002) 39-51.
- [with Eli Shechtman and Yaron Caspi] Increasing Space-Time Resolution in Video. European Conference on Computer Vision (2002).