[Kaj76] $$\mathcal{I}(\vx) = L_e(\vx) + \int_\mathcal{H} \rho(\vp, \vx) \mathcal{I}(\vp) d\vp$$
Numerical approximation
$$\int_D f(x)\,dx \approx \frac{1}{n} \sum_{i=1}^{n} f(x_i)$$
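A minimal sketch of this estimator on $D=[0,1]$ (plain Python, names are mine):

```python
import random

def mc_estimate(f, n: int) -> float:
    """Plain Monte Carlo: average n uniform samples of f over [0, 1]."""
    return sum(f(random.random()) for _ in range(n)) / n

# e.g. estimate int_0^1 x^2 dx = 1/3
print(mc_estimate(lambda x: x * x, 100_000))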
Many strategies
We need fast samplers
$\mathcal{I} = \int_\Omega fdx$
$\mathcal{I}_n = \frac{1}{n}\sum f(x_i), \quad \{x_i\}\subset \Omega$
$$ \Delta := | \mathcal{I} -\mathcal{I}_n | $$
$$ \langle \Delta\rangle := \mathcal{I} - \langle \mathcal{I}_n \rangle $$
$$ \text{Var}(\mathcal{I}_n) := \langle \mathcal{I}_n^2 \rangle - \langle \mathcal{I}_n \rangle^2 $$
$$ Var(I_n) = O\left(\frac{\sigma_f^2}{{n}}\right) $$
Higher quality for low sample counts but... $$ Var(I_n) = O\left(\frac{1}{{n}}\right) $$ {demo}
E.g.: $x(n)=(g_{b_1}(n),\dots,g_{b_s}(n))$ in dimension $s$,
with $b_1,\ldots,b_s$ pairwise coprime integers and $g_{b}(n)=\sum_{k=0}^{L-1}d_{k}(n)\,b^{-k-1}$ (the base-$b$ digits of the positive integer $n$ mirrored into the fractional part)
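A minimal sketch of the radical inverse $g_b(n)$ and a Halton-type point (function names are mine):

```python
def radical_inverse(n: int, b: int) -> float:
    """g_b(n): mirror the base-b digits of n about the radix point."""
    inv, scale = 0.0, 1.0 / b
    while n > 0:
        n, d = divmod(n, b)       # peel off the least significant digit
        inv += d * scale
        scale /= b
    return inv

def halton(n: int, bases=(2, 3)) -> tuple:
    """n-th point of a Halton-type sequence, one coprime base per dimension."""
    return tuple(radical_inverse(n, b) for b in bases)

# radical_inverse(11, 2) == 0.8125, matching the worked example further down
```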
$$ \Delta = O\left(\frac{ \log(n)^{d-1}}{n}\right) $$ {demo}
$$ Var(I_n) = O\left(\frac{\sigma_f^2}{n}\right) $$
$$ \Delta = O\left(\frac{\log(n)^{d-1}}{n}\right) $$
$$ Var(I_n) = O\left(\frac{1}{n\sqrt[d]{n}}\right) $$
Sampling process:
Image rendering context:
Evaluate the uniformity of a point set $P_n$
Star discrepancy | Extreme discrepancy | Isotropic discrepancy |
$f$ is a function of bounded variation in the sense of Hardy & Krause.
A sampler is low discrepancy if its discrepancy is in $O\left(\frac{\log(n)^{d-1}}{n}\right)$.
$$ \Delta = O\left(\frac{ \log(n)^{d-1}}{n}\right) $$ (for an integrand of finite Hardy & Krause variation, compared to $O(1/\sqrt{n})$ for a Poisson sampler)
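In 1D the star discrepancy has a closed form over the sorted points; a minimal sketch (name is mine):

```python
def star_discrepancy_1d(points) -> float:
    """Exact star discrepancy of a 1D point set in [0, 1)."""
    xs = sorted(points)
    n = len(xs)
    # classical closed form: worst over/under-coverage at each sorted point
    return max(max((i + 1) / n - x, x - i / n) for i, x in enumerate(xs))
```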
E.g. $i=11 = (1011)_2$, $\phi(i)=(.1101)_2 = \frac{1}{2} + \frac{1}{4} + \frac{1}{16} = 0.8125 \in[0,1)$
arithmetic on $\mathbb{Z}/p\mathbb{Z}$...
$P_n$ | Power spectrum | Radial Power Spectrum |
$\sin(x^2 + y^2)$ | | |
$$S(x) = \frac{1}{n}\sum_{k=1}^n \alpha_k \delta(x-x_k)\quad \text{and its $m$-th Fourier series coefficient}\quad \mathbf{S}_m = \frac{1}{n}\sum_{k=1}^n \alpha_k e^{-i2\pi m x_k}\,,$$ we have $$\mathcal{I}_n=\int_0^1 f(x)S(x)dx = \int_\mathbb{R} \mathcal{F}_f(v)\mathcal{F}_S(v)dv =\sum_{m=-\infty}^{\infty} \mathbf{f}_m^* \mathbf{S}_m$$
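A one-line check of $\mathbf{S}_m$ for unit weights $\alpha_k=1$ (numpy, naming is mine):

```python
import numpy as np

def spectrum_coeff(xs, m: int) -> complex:
    """m-th Fourier coefficient S_m of the sampling measure (unit weights)."""
    xs = np.asarray(xs)
    return np.mean(np.exp(-2j * np.pi * m * xs))

# S_0 is always 1 for unit weights, so only the m != 0 coefficients carry error
```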
If $S(x)$ is a stochastic point process:
$$\langle \Delta \rangle = \mathbf{f}_{0}^*(1 - \langle \mathbf{S}_0\rangle) - \sum_{m\in\mathbb{Z}, m\neq 0} \mathbf{f}_m^*\langle \mathbf{S}_m\rangle$$
$$Var({I}_n) = \mathbf{f}_0^*\mathbf{f}_0 Var(\mathbf{S}_0) + \sum_{m\in\mathbb{Z}, m\neq 0}\mathbf{f}_m^*\mathbf{f}_m\langle \mathbf{S}_m^*\mathbf{S}_m\rangle + \sum_{m \in \mathbb{Z}}\sum_{l \in \mathbb{Z}, l\neq m} \mathbf{f}_m^*\mathbf{f}_l\langle \mathbf{S}_m^*\mathbf{S}_l\rangle$$
$\langle \Delta \rangle = \mathbf{f}_{0}^*(1 - \langle \mathbf{S}_0\rangle) - \sum_{m\in\mathbb{Z}, m\neq 0} \mathbf{f}_m^*\langle \mathbf{S}_m\rangle$
$\langle \Delta \rangle = 0$ for a Poisson process (or any unbiased sampler)
$Var({I}_n) =\sum_{m\in\mathbb{Z}}\mathbf{f}_m^*\mathbf{f}_m\langle \mathbf{S}_m^*\mathbf{S}_m\rangle$
High variance reduction = low energy in the low-frequency region of the sampler's power spectrum
Distance between a discrete measure (point pattern = set of Diracs) and a target measure (e.g. the uniform measure).
$\Rightarrow$ cf SOT
Pair Correlation Function, Differential Analysis...
$$ Var(I_n) = O\left(\frac{\sigma_f^2}{n}\right) $$
$$ \Delta^2 = O\left(\frac{\log(n)^{2(d-1)}}{n^2}\right) $$
$$ Var(I_n) = O\left(\frac{1}{n\sqrt[d]{n}}\right) $$
For rendering:
We compute permutations of size $m$ from a set of size $t^2$, then store them in a LUT to generate point sets of size $n$.
BNOT [GBO+12] | Sobol [Sob67] |
Owen's Scrambling [Owe95] | LDBN ($m=16$, $t=128$) |
BNOT [GBO+12] | LDBN ($m=16$, $t=128$) |
Step [HSD13] | LDBN ($m=16$, $t=128$) |
BNOT [GBO+12] | Sobol [Sob67] | Owen's Scrambling [Owe95] | LDBN ($m=16$, $t=128$) |
We generate $K^d=16$ points from Sobol. This set is denoted $\mathcal{S}_0$.
We then scramble them with Owen scrambling (sketched after these steps) to give them a blue-noise spectrum. The new set is denoted $\mathcal{P}_0$.
We generate $K^{2d}=256$ points from Sobol. This set is denoted $\mathcal{S}_1$.
We first apply a global scrambling to this set to reconstruct $\mathcal{P}_0$. The new set is denoted $\mathcal{Q}_1$.
Then, we scramble the points within each tile $\mathcal{T}_1^{i}$ of $\mathcal{Q}_1$ to obtain a globally blue-noise spectrum.
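Since Owen scrambling is the key operation here, a minimal one-dimensional, base-2 sketch (the hash-based node randomness is my assumption, not necessarily the implementation used here):

```python
import hashlib

def _flip(seed: int, depth: int, prefix: int) -> int:
    """One pseudo-random bit per node of the scrambling tree."""
    h = hashlib.sha256(f"{seed}:{depth}:{prefix}".encode()).digest()
    return h[0] & 1

def owen_scramble(x: float, seed: int, nbits: int = 32) -> float:
    """Base-2 Owen (nested uniform) scrambling of one coordinate in [0, 1)."""
    n = int(x * (1 << nbits))                 # fixed-point bits of x
    out, prefix = 0, 0
    for d in range(nbits):
        bit = (n >> (nbits - 1 - d)) & 1      # d-th bit, most significant first
        out = (out << 1) | (bit ^ _flip(seed, d, prefix))
        prefix = (prefix << 1) | bit          # tree node = prefix of original bits
    return out / (1 << nbits)
```

All points scrambled with the same seed traverse the same random tree; this shared structure is what lets the scrambling preserve the net property (and hence the low discrepancy) claimed below.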
If the initial set is dyadic in base $K$ (and hence low discrepancy), our set of local permutations preserves the dyadicity of the set (and hence its low discrepancy).
If the initial set is dyadic in base $K$ (and hence low discrepancy), our global permutation preserves the dyadicity of the set (and hence its low discrepancy).
n-D point sets obtained by independent scrambling of 2D projections.
BNOT [GBO+12] | Owen's Scrambling [Owe95] |
LDBN | MultiProj |
Depending on $K$, a certain number of dimensions can be optimized.
Stratified (MSE:0.0022) | Sobol [Sob67] (MSE:0.0018) | Owen's Scrambling [Owe95] (MSE:0.0017) | MultiProj K=8 (MSE:0.0018) |
Stratified (MSE:0.0017) | Sobol [Sob67] (MSE:0.0015) | Owen's Scrambling [Owe95] (MSE:0.0016) | MultiProj K=8 (MSE:0.0013) |
Objective: sample a 2D joint density function $p(x,y)$
Compute marginal density: $$p(x) = \int p(x,y)dy$$ Conditional density function: $$p(y|x)=\frac{p(x,y)}{p(x)}$$ Idea: sample $x$ according to the 1D marginal pdf, then sample $y$ according to the conditional density $p(y|x)$
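A minimal tabulated sketch of this two-step sampling (the grid resolution `res` and the vectorized callable `p(x, y)` are my assumptions):

```python
import numpy as np

def sample_joint(p, res: int = 256, rng=None):
    """Sample (x, y) ~ p(x, y): tabulate the marginal in x, then the conditional in y."""
    rng = rng or np.random.default_rng()
    xs = (np.arange(res) + 0.5) / res                 # cell centers
    grid = p(xs[:, None], xs[None, :])                # p evaluated on a res x res grid
    marginal = grid.sum(axis=1)                       # unnormalized p(x)
    i = rng.choice(res, p=marginal / marginal.sum())  # sample x from the marginal
    cond = grid[i]                                    # unnormalized p(y | x_i)
    j = rng.choice(res, p=cond / cond.sum())          # sample y from the conditional
    # jitter inside the selected cells for continuous output
    return (i + rng.random()) / res, (j + rng.random()) / res
```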
Options:
$$ s = (1-\sqrt{u_1}) A + u_2\sqrt{u_1} B + \sqrt{u_1}(1-u_2) C$$
with $u_1$, $u_2$ uniform samples in $[0,1)$
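A direct transcription of this mapping (helper name is mine; vertices as coordinate tuples):

```python
import math, random

def sample_triangle(A, B, C):
    """Uniform sample in triangle ABC: s = (1-sqrt(u1)) A + u2 sqrt(u1) B + sqrt(u1)(1-u2) C."""
    u1, u2 = random.random(), random.random()
    r = math.sqrt(u1)
    w = (1 - r, u2 * r, r * (1 - u2))          # barycentric weights, sum to 1
    return tuple(w[0]*a + w[1]*b + w[2]*c for a, b, c in zip(A, B, C))
```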
Change of variables + unit determinant of the Jacobian $\Rightarrow$ area-preserving mapping
$$x=\cos(2\pi u_2)\sqrt{1-u_1^2}\\ y=\sin(2\pi u_2)\sqrt{1-u_1^2}\\ z=u_1$$
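A direct transcription of these formulas (name is mine); note that with $z=u_1\in[0,1)$ the mapping covers the upper hemisphere:

```python
import math, random

def sample_hemisphere():
    """Map two uniform numbers to a direction via the formulas above (z = u1 >= 0)."""
    u1, u2 = random.random(), random.random()
    s = math.sqrt(max(0.0, 1.0 - u1 * u1))     # radius of the z = u1 circle
    return (math.cos(2 * math.pi * u2) * s,
            math.sin(2 * math.pi * u2) * s,
            u1)
```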
But also
Sample $p(x)$ by just probing the function (no marginals, no integration)
Sketch of the algorithm:
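A minimal sketch, assuming the probe-only scheme meant here is classical rejection sampling with a known bound $M \ge \max_x p(x)$ (the bound and names are my assumptions):

```python
import random

def rejection_sample(p, M: float) -> float:
    """Draw x ~ p on [0, 1) by probing p only: accept x with probability p(x)/M."""
    while True:
        x = random.random()                    # propose uniformly
        if random.random() * M < p(x):         # accept with probability p(x)/M
            return x
```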
Key ingredients
Bayesian approaches, Markov Chain Monte Carlo, density estimation...
Estimate $\int f(x)dx$ using $$ I_n= \frac{1}{n}\sum \frac{f(s_i)}{p(s_i)}$$ with a distribution $p(x)$ as close as possible to $f(x)$.
(weighting of the sample contributions w.r.t. estimate distribution $p$)
Ideal example: $p(x) := c\,f(x)$ with $c = 1/\int_\Omega f(x)dx$, then $Var(I_n)=0$
Importance sampling may also increase the variance, but in many situations there are practical ways to estimate $p(x)$ that improve convergence
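A minimal sketch of the estimator above (the callables `p_pdf` and `p_sample` for the density and its sampler are my naming):

```python
def is_estimate(f, p_pdf, p_sample, n: int) -> float:
    """Importance sampling: average f(s)/p(s) over n samples s ~ p."""
    total = 0.0
    for _ in range(n):
        s = p_sample()                 # s ~ p
        total += f(s) / p_pdf(s)       # weight the contribution by 1/p(s)
    return total / n
```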
Estimate product of functions $\int f(x)g(x)dx$.
Questions:
$$ I_n= \frac{1}{n_f}\sum_{n_f} \frac{f(s_i)g(s_i)w_f(s_i)}{p_f(s_i)} + \frac{1}{n_g}\sum_{n_g} \frac{f(s'_i)g(s'_i)w_g(s'_i)}{p_g(s'_i)}$$
For instance using the balance heuristic: $$ w_\bullet(x) := \frac{n_\bullet p_\bullet(x)}{\sum_i n_ip_i(x)} $$
or the power heuristic: $$ w_\bullet(x) := \frac{(n_\bullet p_\bullet(x))^\beta}{\sum_i (n_i p_i(x))^\beta} $$
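A sketch of the two-strategy estimator with the balance heuristic (all callables are assumptions; `pf`/`pg` must be the pdfs of `sample_f`/`sample_g`):

```python
def mis_estimate(f, g, pf, pg, sample_f, sample_g, nf: int, ng: int) -> float:
    """Two-strategy MIS estimate of the integral of f*g, balance heuristic weights."""
    def w(pa, pb, na, nb):                    # balance heuristic weight
        return na * pa / (na * pa + nb * pb)
    total = 0.0
    for _ in range(nf):
        s = sample_f()                        # s ~ pf
        total += f(s) * g(s) * w(pf(s), pg(s), nf, ng) / (nf * pf(s))
    for _ in range(ng):
        s = sample_g()                        # s ~ pg
        total += f(s) * g(s) * w(pg(s), pf(s), ng, nf) / (ng * pg(s))
    return total
```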
Instead of estimating $I=\int_\Omega f(x)dx$ directly with Monte Carlo, we estimate
$$ I= \int_\Omega \left( f(x) -\alpha h(x)\right) dx + \alpha H$$
where $h(x)$ is the control variate, $H=\int_\Omega h(x)dx$, and $\alpha$ is the strength of the control.
Then, $\langle I^*_n\rangle=\langle I_n - \alpha H_n\rangle + \alpha H$ and, choosing the optimal coefficient $\alpha=Cov(I_n, H_n)/Var(H_n)$, we have
$$Var( I^*_n) = Var( I_n) \left(1 - Corr(I_n, H_n)^2\right)$$
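A minimal sketch on $\Omega=[0,1]$ with the empirical optimal $\alpha$ (assuming $H=\int_0^1 h$ is known in closed form; names are mine):

```python
import random

def cv_estimate(f, h, H: float, n: int) -> float:
    """Control-variate estimate of the integral of f over [0,1], H = integral of h."""
    xs = [random.random() for _ in range(n)]
    fs = [f(x) for x in xs]
    hs = [h(x) for x in xs]
    fbar, hbar = sum(fs) / n, sum(hs) / n
    cov = sum((a - fbar) * (b - hbar) for a, b in zip(fs, hs)) / n
    var = sum((b - hbar) ** 2 for b in hs) / n
    alpha = cov / var if var > 0 else 0.0     # empirical optimal alpha
    return fbar - alpha * (hbar - H)          # = mean(f - alpha*h) + alpha*H
```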
Example:
$n=1500$, variance reduced from 0.01947 to 0.0006