Wasserstein Dictionary Learning: Optimal Transport-based unsupervised non-linear dictionary learning

Abstract

This article introduces a new non-linear dictionary learning method for histograms in the probability simplex. The method leverages optimal transport theory: our aim is to reconstruct histograms using so-called displacement interpolations (a.k.a. Wasserstein barycenters) between dictionary atoms, which are themselves synthetic histograms in the probability simplex. Our method simultaneously estimates these atoms and, for each data point, the vector of weights that optimally reconstructs it as an optimal transport barycenter of the atoms. The method is computationally tractable thanks to the addition of an entropic regularization to the usual optimal transport problem, leading to an approximation scheme that is efficient, parallel, and simple to differentiate. Both atoms and weights are learned using a gradient-based descent method, with gradients obtained by automatic differentiation of the generalized Sinkhorn iterations that yield barycenters with entropic smoothing. Because its formulation relies on Wasserstein barycenters instead of the usual matrix product between dictionary and codes, our method allows for non-linear relationships between the atoms and the reconstructions of input data. We illustrate its application in several image processing settings.
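The generalized Sinkhorn iterations mentioned above can be sketched in a few lines. The following is a minimal NumPy version of the iterative scaling updates for an entropy-regularized Wasserstein barycenter; the function name, grid, and parameter values are illustrative assumptions, not the paper's code.

```python
import numpy as np

def sinkhorn_barycenter(P, weights, C, eps=0.02, n_iter=200):
    """Entropy-regularized Wasserstein barycenter of the histogram
    columns of P (shape n x K), with barycentric weights summing to 1.
    C is the n x n ground-cost matrix, eps the entropic regularization."""
    K = np.exp(-C / eps)        # Gibbs kernel of the ground cost
    u = np.ones_like(P)         # one scaling vector per input histogram
    for _ in range(n_iter):
        v = P / (K.T @ u)       # scale to match each data marginal
        Kv = K @ v
        # geometric mean of the current marginals = barycenter estimate
        b = np.exp((weights * np.log(Kv)).sum(axis=1))
        u = b[:, None] / Kv     # scale to match the shared barycenter marginal
    return b

# toy usage: midpoint between two discretized Gaussians on a 1-D grid
x = np.linspace(0, 1, 50)
C = (x[:, None] - x[None, :]) ** 2
g = lambda m: np.exp(-(x - m) ** 2 / (2 * 0.05 ** 2))
P = np.stack([g(0.25) / g(0.25).sum(), g(0.75) / g(0.75).sum()], axis=1)
b = sinkhorn_barycenter(P, np.array([0.5, 0.5]), C)
```

The entropic term is what makes every update a matrix-vector product against the fixed kernel `K`, which is why the scheme is parallel and straightforward to differentiate through.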

Publication
SIAM Journal on Imaging Sciences

Caption: Top row: data points. Bottom three rows: on the far sides, in purple, are the two atoms learned by PCA, NMF and our method (WDL), respectively; between the two atoms are the reconstructions of the five data points for each method. The latter two methods were relaunched several times with randomized initializations, and the best local minimum was kept. As discussed in section 2, the addition of an entropy penalty to the usual OT program blurs the reconstructions. When the entropy parameter is high, our method yields atoms that are sharper than the dataset it was trained on, as observed here: the atoms are Diracs even though the dataset consists of discretized Gaussians. See subsection 4.1 for a method to reach arbitrarily low values of the entropy parameter and counteract the blurring effect.
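The per-datapoint weight fitting that produces such reconstructions can be illustrated with a tiny loop. The paper obtains gradients by automatic differentiation of the Sinkhorn iterations; the sketch below substitutes central finite differences for autodiff and projects back onto the simplex by clipping and renormalizing. All names, step sizes, and the toy setup are illustrative assumptions.

```python
import numpy as np

def barycenter(P, w, K, n_iter=100):
    # generalized Sinkhorn barycenter of the histogram columns of P
    u = np.ones_like(P)
    for _ in range(n_iter):
        v = P / (K.T @ u)
        Kv = K @ v
        b = np.exp((w * np.log(Kv)).sum(axis=1))
        u = b[:, None] / Kv
    return b

def fit_weights(P, target, K, steps=60, lr=0.5, h=1e-5):
    """Projected gradient descent on the squared reconstruction error.
    Finite differences stand in for the paper's automatic differentiation."""
    m = P.shape[1]
    w = np.full(m, 1.0 / m)
    loss = lambda w: np.sum((barycenter(P, w, K) - target) ** 2)
    for _ in range(steps):
        grad = np.array([(loss(w + h * e) - loss(w - h * e)) / (2 * h)
                         for e in np.eye(m)])
        w = np.clip(w - lr * grad, 1e-9, None)
        w /= w.sum()            # crude projection back onto the simplex
    return w

# toy demo: recover the weights that generated a synthetic histogram
x = np.linspace(0, 1, 30)
C = (x[:, None] - x[None, :]) ** 2
K = np.exp(-C / 0.02)
g = lambda m: np.exp(-(x - m) ** 2 / (2 * 0.06 ** 2))
P = np.stack([g(0.3) / g(0.3).sum(), g(0.7) / g(0.7).sum()], axis=1)
target = barycenter(P, np.array([0.7, 0.3]), K)
w_hat = fit_weights(P, target, K)   # should end up close to (0.7, 0.3)
```

In the actual method both the weights and the atoms themselves are updated this way; autodiff of the Sinkhorn loop replaces the finite-difference gradients used here for simplicity.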

@article{schmitz18siam,
      author = {Schmitz, M.~A. and Heitz, M. and Bonneel, N. and Ngolè Mboula, F.~M. and Coeurjolly, D. and Cuturi, M. and Peyré, G. and Starck, J.-L.},
      doi = {10.1137/17M1140431},
      journal = {SIAM Journal on Imaging Sciences},
      keywords = {Statistics - Machine Learning, Computer Science - Graphics, Mathematics - Optimization and Control},
      number = {1},
      title = {Wasserstein Dictionary Learning: Optimal Transport-based unsupervised non-linear dictionary learning},
      volume = {11},
      year = {2018}
}