An image from the paper showing common importance sampling strategies for estimating the direct lighting at a surface point. The boundaries between the yellow and black regions mark the discontinuities in each sampler's integration domain.

A new study led by Gurprit Singh of Dartmouth’s Visual Computing Lab presents an in-depth exploration of sampling strategies commonly used to create computer-generated images. The research highlights previously unknown strengths and weaknesses of those strategies, and proposes a few simple tricks to tame the weaknesses.

A full report can be found here. The work was done in collaboration with researchers from the Max Planck Institute for Informatics, the University of Edinburgh, and the Université de Lyon. It will be published in the journal Computer Graphics Forum in March.

Most rendering algorithms employ some form of Monte Carlo integration, which estimates the integral of a function (the integrand) by evaluating it at many randomly chosen sample points and averaging the results. As the number of samples increases, the variance (the expected squared deviation from the true result) decreases.
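
To make this concrete, here is a minimal sketch (ours, not code from the paper) of plain Monte Carlo integration in Python, estimating the integral of f(x) = x² over [0, 1], whose true value is 1/3:

```python
import random

def f(x):
    return x * x

def mc_estimate(n_samples):
    # Average the integrand at uniformly random points in [0, 1];
    # this is an unbiased estimate of the integral (true value: 1/3).
    total = sum(f(random.random()) for _ in range(n_samples))
    return total / n_samples

for n in (10, 100, 1000, 10000):
    print(n, mc_estimate(n))  # the error shrinks roughly as 1/sqrt(n)
```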

While all unbiased sampling strategies converge to the correct result in the limit, we often care a great deal about decreasing the variance as quickly as possible, since rendering tends to be a don’t-hold-your-breath activity.

There are many sampling tricks used in practice to achieve a better convergence rate than that of purely random sampling. One is to place samples according to a distribution that is roughly proportional to the integrand, called importance sampling. Another is to spread samples more evenly over the integrand’s domain, called stratified sampling.
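
As a rough illustration (again ours, not the paper's), both tricks can be applied to the same toy integral as above; the density p(x) = 2x used for importance sampling is an arbitrary choice that loosely matches the integrand's shape:

```python
import math
import random

def f(x):
    return x * x

def importance_mc(n):
    # Draw x with density p(x) = 2x (inversion method: x = sqrt(u)),
    # then weight each sample by f(x) / p(x).
    total = 0.0
    for _ in range(n):
        u = 1.0 - random.random()  # in (0, 1], keeps p(x) > 0
        x = math.sqrt(u)
        total += f(x) / (2.0 * x)
    return total / n

def stratified_mc(n):
    # One uniformly jittered sample in each of n equal-width strata.
    return sum(f((i + random.random()) / n) for i in range(n)) / n
```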

A third strategy, multiple importance sampling, pools the samples of two or more strategies and weights the results so that good samples from one strategy mitigate the variance-polluting samples of another.
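
A common way to do this weighting is the so-called balance heuristic. The sketch below (an illustrative construction, not taken from the paper) combines a uniform sampler with the p(x) = 2x sampler from above:

```python
import math
import random

def f(x):
    return x * x

def p1(x):
    return 1.0        # density of the uniform strategy on [0, 1]

def p2(x):
    return 2.0 * x    # density of the second strategy

def mis_estimate(n):
    total = 0.0
    for _ in range(n):
        # One sample from each strategy. The balance heuristic weight
        # w_i(x) = p_i(x) / (p1(x) + p2(x)) down-weights a sample
        # wherever the other strategy would have sampled it more densely.
        x1 = random.random()
        total += (p1(x1) / (p1(x1) + p2(x1))) * f(x1) / p1(x1)
        x2 = math.sqrt(1.0 - random.random())  # in (0, 1], keeps p2 > 0
        total += (p2(x2) / (p1(x2) + p2(x2))) * f(x2) / p2(x2)
    return total / n
```

Because the weights of all strategies sum to one at every point, the combined estimator remains unbiased.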

This new report is the first to rigorously show why such sampling strategies offer a more favorable convergence rate, and where their limitations lie. It also demonstrates that the convergence rate of multiple importance sampling matches the worst convergence rate among its constituent strategies.

Lastly, the paper shows that discontinuities, or sharp jumps in the integrand’s value, adversely affect the convergence rate of Monte Carlo sampling. These discontinuities can either be inherent to the scene or introduced by the sampling strategy itself.
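
The effect is easy to observe empirically. The sketch below (our own toy experiment, not from the paper) stratified-samples a smooth integrand and a step function and compares how their errors shrink:

```python
import random

def smooth(x):
    return x * (1.0 - x)            # integral over [0, 1] is 1/6

def step(x):
    return 1.0 if x < 0.3 else 0.0  # integral over [0, 1] is 0.3

def stratified(fn, n):
    # One uniformly jittered sample in each of n equal-width strata.
    return sum(fn((i + random.random()) / n) for i in range(n)) / n

def rms_error(fn, true_value, n, trials=2000):
    sq = sum((stratified(fn, n) - true_value) ** 2 for _ in range(trials))
    return (sq / trials) ** 0.5

for n in (16, 64, 256):
    # The smooth integrand's error falls much faster than the step's,
    # whose discontinuity caps the convergence rate.
    print(n, rms_error(smooth, 1.0 / 6.0, n), rms_error(step, 0.3, n))
```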

To counteract these adverse effects, the authors of the study propose a few simple tricks to smooth out the discontinuities. These involve either literally blurring the edges of the integrand (at the cost of a little bias) or mirroring the integrand so that its values match at the boundary of the domain.
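
As an illustration of the mirroring idea (our own sketch; the paper's construction may differ in its details), reflecting the integrand about the midpoint of its domain preserves the integral while making the function's values agree at the two ends of the domain, so its periodic extension no longer jumps:

```python
import random

def f(x):
    # Toy integrand whose periodic extension jumps at the boundary:
    # f(0) = 0 but f(1) = 1.
    return x

def g(t):
    # Mirrored copy: traverse f forward on [0, 0.5] and backward on
    # [0.5, 1]. The integral over [0, 1] is unchanged, and now
    # g(0) = g(1) = f(0), so there is no jump when the domain wraps.
    return f(2.0 * t) if t < 0.5 else f(2.0 - 2.0 * t)

def stratified(fn, n):
    return sum(fn((i + random.random()) / n) for i in range(n)) / n

print(stratified(f, 64), stratified(g, 64))  # both estimate 1/2
```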

This report continues a steady line of work examining the ‘how’ and ‘why’ behind sampling techniques for rendering, much of which has been led by co-authors Gurprit Singh and Wojciech Jarosz of Dartmouth’s Visual Computing Lab. It could help movie studios achieve more realistic computer-generated images in a shorter amount of time.

Other contributors to the work include Kartic Subr (University of Edinburgh), David Coeurjolly (Université de Lyon), and Victor Ostromoukhov (Université de Lyon).