Computer Graphics Forum (Eurographics State of the Art Reports 2017)
 
Intrinsic Decompositions for Image Editing

Nicolas Bonneel 1 Balazs Kovacs 2 Sylvain Paris 3 Kavita Bala 2

1CNRS / Univ. Lyon 1 2Cornell University 3Adobe

We evaluate state-of-the-art intrinsic image decomposition algorithms based on their ability to produce seamless, artifact-free results for image edits. To compare methods fairly, we automate the image editing process. Left to right, via Poisson-based inpainting of the reflectance layer: we remove a logo on a shirt using the method of Barron et al. [2015], add a picnic blanket over a shadow with the method of Grosse et al. [2009], and add a painting over colored shadows with the method of Bousseau et al. [2009].


Abstract
Intrinsic images are a mid-level representation that decomposes an image into reflectance and illumination layers. The reflectance layer captures the color and texture of surfaces in the scene, while the illumination layer captures shading effects caused by interactions between scene illumination and surface geometry. Intrinsic images have a long history in computer vision and, more recently, in computer graphics, and have been shown to be a useful representation for tasks ranging from scene understanding and reconstruction to image editing. In this report, we review and evaluate past work on this problem. Specifically, we discuss each work in terms of the priors it imposes on the intrinsic image problem. We introduce a new synthetic ground-truth dataset that we use to evaluate the validity of these priors and the performance of the methods. Finally, we evaluate the performance of the different methods in the context of image-editing applications.
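The multiplicative model behind this decomposition can be illustrated with a minimal sketch (synthetic data, not any specific method from the report): each pixel of the observed image is the product of a reflectance value and a shading value, so editing the reflectance layer and re-multiplying by the shading re-renders the edit under the original illumination.

```python
import numpy as np

# Minimal illustration of the intrinsic image model: I = R * S
# (element-wise), where R is reflectance (surface color/albedo)
# and S is shading (illumination). All values here are synthetic.
rng = np.random.default_rng(0)
h, w = 4, 4
reflectance = rng.uniform(0.2, 1.0, size=(h, w))  # surface albedo
shading = rng.uniform(0.1, 1.0, size=(h, w))      # illumination
image = reflectance * shading                     # observed image

# With the true shading known, reflectance is recovered by division.
# Real decomposition methods must estimate both layers from the image
# alone, which is ill-posed and requires the priors surveyed here.
recovered = image / shading
assert np.allclose(recovered, reflectance)

# A hypothetical edit on the reflectance layer (e.g. recoloring a
# region) composes seamlessly with the untouched shading layer:
edited_reflectance = reflectance.copy()
edited_reflectance[:2, :2] = 0.5
edited_image = edited_reflectance * shading
```

This division-based recovery only works because the shading is known exactly; the report evaluates how well methods approximate the two layers from a single image, and how decomposition errors show up as artifacts in edits like the ones above.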

 
@article{BKPB17,
  author  = {Nicolas Bonneel and Balazs Kovacs and Sylvain Paris and Kavita Bala},
  title   = {Intrinsic Decompositions for Image Editing},
  journal = {Computer Graphics Forum (Eurographics State of the Art Reports 2017)},
  volume  = {36},
  number  = {2},
  year    = {2017},
}

 
  Paper
PDF (80 MB)
  Paper (low res.)
PDF (12 MB)
  Supplemental Material (Image editing results)
Website
  Supplemental Material (Ground truth results)
Website
  Supplemental Material (Code)
  Extended Ground-Truth Dataset
Website


Acknowledgements
We thank the authors of the intrinsic decomposition methods for sharing their implementations with us, and Sean Bell for sharing his evaluation framework. We also thank Julie Digne for initiating the idea of this report, and Adobe for software donations. This work was funded in part by NSF grant IIS-1617861 and a Google Faculty Research Award. We acknowledge the use and adaptation of LuxRender scenes from Andrew Price (Kitchen scene), Peter Sandbacka (Hotel Lobby), and Simon Wendsche (School Corridor); PBRT scenes from Jay Hardy (White Room), Guillermo M. Leal Llaguno (San Miguel), Florent Boyer (Villa), Marko Dabrovic and Mihovil Odak (Sponza), and BlendSwap user Wig42 (Modern living room); and Mitsuba scenes from Johnathan Good (Arabic, Babylonian, and Italian Cities). Some of these PBRT resources were compiled by Benedikt Bitterli and are available in the supplemental material.