Across Scales & Across Dimensions:
Temporal Super-Resolution using Deep Internal Learning

Liad Pollak Zuckerman, Eyal Naor, George Pisha, Shai Bagon, Michal Irani

[Paper PDF] [Code] [bibtex]


When a very fast dynamic event is recorded with a low-framerate camera, the resulting video suffers from severe motion blur (due to exposure time) and motion aliasing (due to the low sampling rate in time). True Temporal Super-Resolution (TSR) is more than just Temporal Interpolation (increasing the framerate). It also recovers new high temporal frequencies beyond the temporal Nyquist limit of the input video, thus resolving both motion blur and motion aliasing. In this paper we propose a "Deep Internal Learning" approach to true TSR. We train a video-specific CNN on examples extracted directly from the low-framerate input video. Our method exploits the strong recurrence of small space-time patches inside a single video sequence, both within and across different spatio-temporal scales of the video. We further observe (for the first time) that small space-time patches recur also across dimensions of the video sequence, i.e., when the spatial and temporal dimensions are swapped. In particular, the high spatial resolution of the video frames provides strong examples of how to increase the temporal resolution of that video. Such internal video-specific examples give rise to strong self-supervision, requiring no data but the input video itself. This results in Zero-Shot Temporal-SR of complex videos, which removes both motion blur and motion aliasing, outperforming previous supervised methods trained on external video datasets.
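The across-dimensions observation can be illustrated with a small sketch. This is a hypothetical NumPy toy, not the authors' code: the function names, tensor shapes, and downsampling scheme are assumptions. The point is that transposing a (T, H, W) video so a spatial axis takes the place of the temporal axis turns the frames' high spatial resolution into high "temporal" resolution, from which internal low/high-framerate training pairs can be cut.

```python
import numpy as np

def swap_time_and_x(video):
    """Swap the temporal (t) and horizontal (x) axes of a (T, H, W) video.

    An x-t slice of the transposed video is a t-x slice of the original,
    so fine spatial detail along x now appears as fine detail along the
    new leading ("temporal") axis.
    """
    return np.transpose(video, (2, 1, 0))  # (T, H, W) -> (W, H, T)

def temporal_downsample(video, factor=2):
    """Keep every `factor`-th frame, simulating a low-framerate recording."""
    return video[::factor]

# Toy internal training pair (shapes are illustrative assumptions):
# the transposed video acts as a high-"framerate" target, and its
# temporal downsampling as the corresponding low-framerate input.
video = np.random.rand(16, 32, 64).astype(np.float32)  # (T, H, W)
target = swap_time_and_x(video)             # shape (64, 32, 16)
lr_input = temporal_downsample(target, 2)   # shape (32, 32, 16)
```

A video-specific network trained on many such internal pairs then sees, at test time, the original low-framerate video and increases its framerate.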

Video with Explanation and Results

Visual Comparison With SotA Methods

Visual Results: star, fan, spinning disk, cheetah, billiard, diamonds, fire, hula hoop


Thanks to Ben Feinstein for his invaluable help in getting the GPUs to run smoothly and efficiently.
This project received funding from the European Research Council (ERC) under Horizon 2020, grant No. 788535,
from the Carolito Stiftung, and from a grant from the D. Dan and Betty Kahn Foundation.
Dr. Bagon is a Robin Chemers Neustein AI Fellow.