Separating a photograph into its reflectance and illumination intrinsic images is a fundamentally ambiguous problem, and
state-of-the-art algorithms combine sophisticated reflectance and illumination priors with user annotations to create plausible results.
However, these algorithms cannot be easily extended to videos for two reasons: first, naïvely applying algorithms designed for single images to
videos produces results that are temporally incoherent; second, effectively specifying user annotations for a video requires interactive feedback,
and current approaches are orders of magnitude too slow to support this. We introduce a fast and temporally consistent algorithm to decompose video
sequences into their reflectance and illumination components. Our algorithm uses a hybrid L2-Lp formulation that separates image gradients
into smooth illumination and sparse reflectance gradients using look-up tables. We use a multi-scale parallelized solver to reconstruct the
reflectance and illumination from these gradients while enforcing spatial and temporal reflectance constraints and user annotations. We demonstrate
that our algorithm automatically produces reasonable results that users can interactively refine, at rates two orders of magnitude
faster than existing tools, yielding high-quality decompositions for challenging real-world video sequences. We also show how these
decompositions can be used for a number of video editing applications including recoloring, retexturing, illumination editing, and
lighting-aware compositing.
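
To make the gradient-separation step concrete, one plausible reading of the hybrid L2-Lp formulation is a per-gradient objective of the form min_r |r|^p + lam * (g - r)^2, which splits each gradient g into a sparse reflectance part r and a smooth illumination residual g - r; because the problem is one-dimensional, its minimizer can be precomputed into a look-up table. The sketch below is a minimal illustration under assumed values of p, lam, and table resolution, not the paper's implementation; build_separation_lut and separate_gradients are hypothetical names.

    import numpy as np

    def build_separation_lut(p=0.8, lam=10.0, g_max=2.0, n_bins=1024, n_samples=2048):
        # For each gradient magnitude g, find the reflectance component r
        # minimizing |r|^p + lam * (g - r)^2 by brute-force 1D search; the
        # residual g - r is treated as the smooth illumination component.
        g_vals = np.linspace(0.0, g_max, n_bins)
        r_cand = np.linspace(0.0, g_max, n_samples)
        energy = r_cand[None, :] ** p + lam * (g_vals[:, None] - r_cand[None, :]) ** 2
        return g_vals, r_cand[np.argmin(energy, axis=1)]

    def separate_gradients(grad, g_vals, lut):
        # Split a gradient field by table lookup on magnitude, preserving sign.
        mag = np.abs(grad)
        refl = np.sign(grad) * np.interp(mag, g_vals, lut)
        return refl, grad - refl  # (sparse reflectance, smooth illumination)

Once the table is built, each gradient costs a single interpolated lookup, which is consistent with the interactive rates the abstract targets.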
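The reconstruction step, recovering the reflectance and illumination images from the separated gradients, is in general a Poisson-type problem, and a multi-scale solver suggests a coarse-to-fine scheme. The sketch below is a generic multi-scale Jacobi Poisson solve from a divergence field; it omits the paper's spatial and temporal reflectance constraints and user annotations, and every name and parameter is an assumption made for illustration.

    import numpy as np

    def jacobi_poisson(rhs, init, n_iter=200):
        # Jacobi relaxation for the 5-point Poisson equation lap(R) = rhs,
        # with replicated borders (Neumann-like boundary conditions).
        R = init.copy()
        for _ in range(n_iter):
            Rp = np.pad(R, 1, mode='edge')
            R = 0.25 * (Rp[:-2, 1:-1] + Rp[2:, 1:-1]
                        + Rp[1:-1, :-2] + Rp[1:-1, 2:] - rhs)
        return R

    def solve_multiscale(rhs, n_levels=4, n_iter=100):
        # Coarse-to-fine: restrict the right-hand side, solve the coarsest
        # level, then upsample each solution as the next level's initializer.
        pyr = [rhs]
        for _ in range(n_levels - 1):
            r = pyr[-1]
            h, w = (r.shape[0] // 2) * 2, (r.shape[1] // 2) * 2
            # 2x2 box average times the h^2 = 4 RHS rescaling is a plain sum
            pyr.append(r[:h:2, :w:2] + r[1:h:2, :w:2]
                       + r[:h:2, 1:w:2] + r[1:h:2, 1:w:2])
        R = np.zeros_like(pyr[-1])
        for level in reversed(range(n_levels)):
            R = jacobi_poisson(pyr[level], R, n_iter)
            if level > 0:
                th, tw = pyr[level - 1].shape
                up = np.kron(R, np.ones((2, 2)))  # nearest-neighbor upsample
                R = np.pad(up, ((0, max(0, th - up.shape[0])),
                                (0, max(0, tw - up.shape[1]))),
                           mode='edge')[:th, :tw]
        return R

In this sketch, rhs would be the divergence of the reflectance gradients produced by the separation step, and the illumination would follow as the log-image minus the recovered log-reflectance; both are standard intrinsic-image conventions rather than details taken from the abstract.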