ACM Transactions on Graphics (Proceedings of SIGGRAPH Asia 2014)
 
Interactive Intrinsic Video Editing

Nicolas Bonneel · Kalyan Sunkavalli · James Tompkin · Deqing Sun · Sylvain Paris · Hanspeter Pfister
LIRIS - CNRS · Harvard University SEAS · Adobe

Left: Input video. Middle: Intrinsic decomposition. Right: Editing the reflectance only.
A video is interactively decomposed into temporally consistent reflectance (middle top) and illumination (middle bottom) components. Editing textures in the reflectance layer then leaves the illumination untouched: changes to the brick walls, the roof tiles, and the pathway leading up to the building all preserve the complex illumination of the light through the trees. We encourage readers to zoom into this figure, and refer them to the accompanying video to see the temporal consistency of our decomposition.


Abstract
Separating a photograph into its reflectance and illumination intrinsic images is a fundamentally ambiguous problem, and state-of-the-art algorithms combine sophisticated reflectance and illumination priors with user annotations to create plausible results. However, these algorithms cannot be easily extended to videos for two reasons: first, naïvely applying algorithms designed for single images to videos produces temporally incoherent results; second, effectively specifying user annotations for a video requires interactive feedback, and current approaches are orders of magnitude too slow to support this. We introduce a fast and temporally consistent algorithm to decompose video sequences into their reflectance and illumination components. Our algorithm uses a hybrid L2-Lp formulation that separates image gradients into smooth illumination and sparse reflectance gradients using look-up tables. We use a multi-scale parallelized solver to reconstruct the reflectance and illumination from these gradients while enforcing spatial and temporal reflectance constraints and user annotations. We demonstrate that our algorithm automatically produces reasonable results that can be interactively refined by users, at rates two orders of magnitude faster than existing tools, to produce high-quality decompositions of challenging real-world video sequences. We also show how these decompositions can be used for a number of video editing applications, including recoloring, retexturing, illumination editing, and lighting-aware compositing.
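To make the gradient-splitting step concrete, here is a minimal NumPy sketch (not the paper's implementation) of a per-gradient hybrid L2-Lp split: each log-image gradient g is divided into a reflectance part r and a shading part s = g - r by minimizing lam * s^2 + |r|^p, with the 1-D minimizer precomputed in a look-up table, as the abstract describes. The function names, the parameter values (lam = 10, p = 0.5, table sizes), and the brute-force table construction are all illustrative assumptions.

import numpy as np

def build_splitting_lut(lam=10.0, p=0.5, g_max=2.0, n_bins=1024, n_search=2048):
    """Precompute r*(g) = argmin_r  lam * (g - r)^2 + |r|^p  for g in [0, g_max].
    By symmetry, the same table serves negative gradients."""
    g_vals = np.linspace(0.0, g_max, n_bins)
    r_cand = np.linspace(0.0, g_max, n_search)   # brute-force 1-D search grid
    # cost[i, j]: cost of explaining gradient g_vals[i] with reflectance r_cand[j]
    cost = lam * (g_vals[:, None] - r_cand[None, :]) ** 2 + r_cand[None, :] ** p
    return g_vals, r_cand[np.argmin(cost, axis=1)]

def split_gradients(g, lut):
    """Split log-image gradients g into sparse reflectance and smooth shading parts."""
    g_vals, r_star = lut
    idx = np.clip(np.searchsorted(g_vals, np.abs(g)), 0, len(g_vals) - 1)
    grad_reflectance = np.sign(g) * r_star[idx]  # large jumps -> reflectance
    grad_shading = g - grad_reflectance          # small residual -> shading
    return grad_reflectance, grad_shading

# Toy usage: horizontal gradients of a synthetic log-luminance frame.
log_frame = np.log(np.random.rand(64, 64) + 1e-3)
gx = np.diff(log_frame, axis=1)
gx_refl, gx_shade = split_gradients(gx, build_splitting_lut())

A full system would then reconstruct the reflectance and illumination layers from these split gradients, e.g. with a multi-scale Poisson-style solve that also enforces the spatial and temporal reflectance constraints and user annotations described above.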

 
@article{BSTSPP14,
  author  = {Nicolas Bonneel and Kalyan Sunkavalli and James Tompkin
             and Deqing Sun and Sylvain Paris and Hanspeter Pfister},
  title   = {Interactive Intrinsic Video Editing},
  journal = {ACM Transactions on Graphics (Proceedings of SIGGRAPH Asia 2014)},
  volume  = {33},
  number  = {6},
  year    = {2014},
}

   
Downloads
Paper: PDF (12 MB)
Paper (low res.): PDF (1 MB)
Presentation: PPTX (92 MB)
Supplemental Material: ZIP (120 MB)
Data (Input + Results): ZIP (276 MB)
Supplemental Video: MP4 (120 MB) | YouTube


Acknowledgements
We thank the anonymous SIGGRAPH reviewers for their feedback.
We thank the authors of the video footage: McEnearney (Fig. 1), R. Cadieux (Figs. 3, 5, and 11), G. M. Lea Llaguno (Fig. 7), Ye et al. [2014] (Fig. 8), M. Assegaf (Fig. 9), B. Yoon (Fig. 12), and The Scene Lab via Dissolve Inc. (Fig. 13 (a)). Used with permission or under CC BY-NC licence.
Copyright by the authors, 2014. This is the author's version of the work. It is posted here for your personal use. Not for redistribution. The definitive Version of Record was published in ACM Transactions on Graphics: http://dx.doi.org/10.1145/2661229.2661253
Zip icon adapted from tastic mimetypes by Untergunter, CC BY-NC-SA 3.0 licence.
This work was partially supported by NSF grants CGV-1111415, IIS-1110955, and OIA-1125087, and LIMA - Région Rhône-Alpes.