Code Replicability in Computer Graphics

Abstract

Being able to duplicate published research results is an important part of the research process, whether to build upon those findings or to compare against them. This process is called “replicability” when the original authors’ artifacts (e.g., code) are used, and “reproducibility” otherwise (e.g., when re-implementing algorithms). Reproducibility and replicability of research results have recently gained a lot of interest, with assessment studies conducted in various fields, and they are often seen as a trigger for better result diffusion and transparency. In this work, we assess replicability in Computer Graphics by evaluating whether published code is available and whether it works properly. As a proxy for the field, we compiled, ran, and analyzed 151 codes from 374 papers published at the 2014, 2016, and 2018 SIGGRAPH conferences. This analysis shows a clear increase in the number of papers with available and operational research code, with notable differences across subfields, and indicates a correlation between code replicability and citation count. We further provide an interactive tool to explore our results and evaluation data.

Publication
ACM Transactions on Graphics (Proceedings of SIGGRAPH)

Caption: We ran 151 codes provided by papers published at SIGGRAPH 2014, 2016 and 2018. We analyzed whether these codes could still be run as of 2020 to provide a replicability score, and performed statistical analysis on code sharing.
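To make the kind of analysis described above concrete, here is a minimal sketch (not the authors’ actual pipeline) of scoring papers for replicability and testing the rank correlation between that score and citation counts. The record fields, the toy 0–2 scoring rule, and the example data are all hypothetical; only the general approach (a Spearman rank correlation) reflects the statistical analysis mentioned in the abstract.

```python
# Hypothetical sketch of a replicability-vs-citations analysis.
from scipy.stats import spearmanr

# Hypothetical per-paper records: did the authors share code, and did
# that code still build/run when tested in 2020?
papers = [
    {"code_available": True,  "runs_in_2020": True,  "citations": 120},
    {"code_available": True,  "runs_in_2020": False, "citations": 35},
    {"code_available": False, "runs_in_2020": False, "citations": 12},
]

def replicability_score(paper):
    """Toy 0-2 score: 1 point for sharing code, 1 more if it still runs."""
    return int(paper["code_available"]) + int(paper["runs_in_2020"])

scores = [replicability_score(p) for p in papers]
citations = [p["citations"] for p in papers]

# Spearman's rho is a natural choice here: scores are ordinal and
# citation counts are heavy-tailed, so rank correlation is robust.
rho, pval = spearmanr(scores, citations)
print(f"Spearman rho = {rho:.2f} (p = {pval:.3f})")
```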

@article{replicability,
      author = {Nicolas Bonneel and David Coeurjolly and Julie Digne and Nicolas Mellado},
      journal = {ACM Transactions on Graphics (Proceedings of SIGGRAPH)},
      month = {July},
      number = {4},
      title = {Code Replicability in Computer Graphics},
      volume = {39},
      year = {2020}
}