Consistent Video Filtering for Camera Arrays
Computer Graphics Forum (Proceedings of Eurographics 2017)

Nicolas Bonneel*1, James Tompkin*2, Deqing Sun3, Oliver Wang4, Kalyan Sunkavalli4, Sylvain Paris4, Hanspeter Pfister5

 * Equal contribution

 1LIRIS - CNRS 2Brown University 3NVIDIA 4Adobe 5Harvard University SEAS

Abstract

Visual formats have advanced beyond single-view images and videos: 3D movies are commonplace, researchers have developed multi-view navigation systems, and VR is helping to push light field cameras to the mass market. However, editing tools for these media are still nascent, and even simple filtering operations like color correction or stylization are problematic: naively applying image filters per frame or per view rarely produces satisfying results due to temporal and spatial inconsistencies. Our method preserves and stabilizes filter effects while remaining agnostic to the inner workings of the filter. It captures filter effects in the gradient domain, then uses \emph{input} frame gradients as a reference to impose temporal and spatial consistency. Our least-squares formulation adds minimal overhead compared to naive data processing. Further, when the filter cost is high, we introduce a filter transfer strategy that reduces the number of per-frame filtering computations by an order of magnitude, with only a small reduction in visual quality. We demonstrate our algorithm on several camera array formats including stereo videos, light fields, and wide baselines.
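The core idea above — keep the filtered frame's gradients while pulling pixel values toward a temporally consistent reference — can be illustrated with a small least-squares sketch. This is a simplified 1D toy, not the paper's actual multi-view system: `filtered` stands in for a per-frame filtered signal, `temporal_ref` for a warped previous output, and the weight `w` (a made-up parameter name) balances gradient fidelity against temporal consistency.

```python
import numpy as np

def consistent_filter_1d(filtered, temporal_ref, w=1.0):
    """Illustrative least-squares blend (assumed formulation, not the
    paper's exact system): find o minimizing
        ||D o - D p||^2 + w * ||o - t||^2,
    where D is the forward-difference operator, p = `filtered` is the
    per-frame filtered signal, and t = `temporal_ref` is a temporally
    consistent reference (e.g. the warped previous output)."""
    n = len(filtered)
    # Forward-difference operator D, shape (n-1, n).
    D = np.zeros((n - 1, n))
    idx = np.arange(n - 1)
    D[idx, idx] = -1.0
    D[idx, idx + 1] = 1.0
    # Normal equations: (D^T D + w I) o = D^T D p + w t.
    A = D.T @ D + w * np.eye(n)
    b = D.T @ (D @ filtered) + w * temporal_ref
    return np.linalg.solve(A, b)
```

As a sanity check, if the filter only shifts the signal by a constant, the gradients of `filtered` and `temporal_ref` agree, so the output recovers the temporal reference exactly: the gradient term fixes everything orthogonal to constants, and the temporal term fixes the mean.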

@article{BTSSPP15,
  author  = {Nicolas Bonneel and James Tompkin and Deqing Sun and Oliver Wang and Kalyan Sunkavalli and Sylvain Paris and Hanspeter Pfister},
  title   = {Consistent Video Filtering for Camera Arrays},
  journal = {Computer Graphics Forum (Proceedings of Eurographics 2017)},
  volume  = {36},
  number  = {2},
  year    = {2017},
}

Paper: PDF (45 MB) | Paper (low res.): PDF (4 MB) | Supplemental Material: Website

Supplemental Video: MP4 (232 MB) | YouTube

Acknowledgements

We thank Kovacs et al. [2015], Ballan et al. [2010], the (New) Stanford Light Field Archive, the Nagoya University Multi-view Sequence Download List, Al Caudullo Productions, and G. Pouillot for their videos. We thank Szu-Po Wang and Wojciech Matusik for the use of their automultiscopic display, and Serena Booth for her narration. For this work, James Tompkin and Hanspeter Pfister were sponsored by the Air Force Research Laboratory and the DARPA Memex program. Nicolas Bonneel thanks Adobe for software donations. Zip icon adapted from tastic mimetypes by Untergunter, CC BY-NC-SA 3.0 licence.