[Bf-committers] Super-sampling and anti-aliasing
tevhadij at gmail.com
Wed Sep 22 01:13:40 CEST 2010
After fixing my mistaken Blender settings, here is what I got:
Catmull-Rom (16spp Blender) http://www.pasteall.org/pic/5919
This is of course much better than before. :)
Here are alternative implementations of Catmull-Rom filtering on a
checkerboard, all with 16 samples per pixel:
Catmull-Rom (16spp stratified) http://www.pasteall.org/pic/5920
Catmull-Rom (16spp stratified sample-jitter) http://www.pasteall.org/pic/5921
Catmull-Rom (16spp blue-noise) http://www.pasteall.org/pic/5922
Catmull-Rom (16spp blue-noise sample-jitter) http://www.pasteall.org/pic/5923
Finally, here is a rendering with 121 samples per pixel, in the hope
that there will be interest in increasing the maximum number of
samples in Blender. This is equivalent to an 11x11 grid, which I
believe is not unrealistic for production-quality settings:
Catmull-Rom (121spp blue-noise sample-jitter) http://www.pasteall.org/pic/5924
Notice that the aliasing still visible in this image is mostly due to
the fact that neither the Catmull-Rom filter nor the reconstruction
filter of our monitors is ideal. The former issue is where the
alternative kernels I am working on can offer some help.
A few differences are worth discussing.
1) Is Blender jittering the samples?
It seems not, given the presence of aliasing. I understand there are
reasons for that, since rendering and combining independent uniform
grids of samples can be made very efficient (for example, using the
graphics hardware). But perhaps there are efficient ways of overcoming
this issue if there is interest.
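For reference, here is a minimal Python sketch (not Blender's actual
code) of the stratified sample-jitter scheme I used above: one random
offset inside each cell of an n x n grid, instead of always sampling
at the cell centers.

```python
import random

def stratified_jitter(n, rng=random.Random(0)):
    """Return n*n sample positions in [0,1)^2, one per grid cell."""
    samples = []
    for i in range(n):
        for j in range(n):
            # A uniform grid would place each sample at the cell center;
            # jittering replaces it with a random point inside the cell,
            # which trades structured aliasing for noise.
            x = (i + rng.random()) / n
            y = (j + rng.random()) / n
            samples.append((x, y))
    return samples

pts = stratified_jitter(4)  # 16 samples per pixel, as in the images above
```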
2) Is Blender supersampling in gamma space?
It seems so. The darker-than-50% shades of gray at the top of the
checkerboard are one symptom of this. Another symptom is that the
antialiased edges between the larger black and white tiles look a
little harsh. In general, filtering does not "commute" with gamma
correction: the result of filtering a gamma-corrected image is not
the same as converting to a linear space, filtering there, and then
applying gamma correction to the filtered result. The latter produces
nicer results, but many professional software suites choose to ignore
this for reasons of performance, simplicity, or simple oversight. I
believe we finally have enough FLOPS. :)
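A small Python sketch of the non-commutativity, assuming a simple
power-law gamma of 2.2: averaging a black and a white pixel directly
in gamma space yields 0.5, which displays darker than the mid-gray
obtained by averaging in linear space and re-encoding.

```python
GAMMA = 2.2  # assumed power-law transfer function, for illustration

def to_linear(v):
    return v ** GAMMA

def to_gamma(v):
    return v ** (1.0 / GAMMA)

black, white = 0.0, 1.0

# Wrong: apply the filter (here a 2-tap box) to gamma-encoded values.
gamma_space = (black + white) / 2  # 0.5, displays too dark

# Right: decode, filter in linear space, re-encode.
linear_space = to_gamma((to_linear(black) + to_linear(white)) / 2)  # ~0.73
```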
3) Other differences
3a) Looking at the source code, it looks as though Blender interprets
the Catmull-Rom kernel as a radially symmetric kernel.
I.e., if we define cr1D(t) to be the 1D Catmull-Rom kernel, then
cr2D(x,y) = cr1D(sqrt(x*x + y*y)).
I believe the kernel should instead be used as a separable
(tensor-product) kernel:
cr2D(x,y) = cr1D(x)*cr1D(y).
We can try to compare the performance of these two approaches, but
intuitively, the separable kernel behaves differently with diagonal
frequencies (it extends further to the diagonals of the support than
the radially symmetric version).
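To make the comparison concrete, here is a Python sketch of the two
interpretations (cr1d is the standard 1D Catmull-Rom kernel with
support [-2,2]; this is illustrative code, not Blender's):

```python
import math

def cr1d(t):
    """Standard 1D Catmull-Rom kernel, support [-2, 2]."""
    t = abs(t)
    if t < 1.0:
        return 1.5 * t**3 - 2.5 * t**2 + 1.0
    if t < 2.0:
        return -0.5 * t**3 + 2.5 * t**2 - 4.0 * t + 2.0
    return 0.0

def cr2d_radial(x, y):
    # Radially symmetric interpretation (what Blender appears to do).
    return cr1d(math.hypot(x, y))

def cr2d_separable(x, y):
    # Separable (tensor-product) interpretation.
    return cr1d(x) * cr1d(y)
```

At the diagonal point (1.5, 1.5), for instance, the radial kernel is
already zero (the radius is about 2.12 > 2), while the separable
kernel is not, which illustrates how the two behave differently with
diagonal frequencies.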
3b) The other issue is that Blender seems to only integrate the
samples that lie within [-1.5,1.5] of the kernel center (is this
true?), whereas the support is [-2,2]. Again, we can try to compare
the effect of this optimization, but it seems as though the result
would be to allow more aliasing through. This effect is much more
pronounced with larger kernels, or if the user stretches the kernel to
blur more. Is there a reason for this aggressive optimization?
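To estimate what the truncation discards, here is a Python sketch that
numerically integrates the 1D Catmull-Rom kernel. The tails in
[1.5, 2] are negative, so clipping them leaves a kernel whose total
mass is about 1.026 instead of 1 (and, more importantly, removes part
of the sharpening lobes).

```python
def cr1d(t):
    """Standard 1D Catmull-Rom kernel, support [-2, 2]."""
    t = abs(t)
    if t < 1.0:
        return 1.5 * t**3 - 2.5 * t**2 + 1.0
    if t < 2.0:
        return -0.5 * t**3 + 2.5 * t**2 - 4.0 * t + 2.0
    return 0.0

def mass(a, b, n=40000):
    """Midpoint-rule numerical integral of cr1d over [a, b]."""
    h = (b - a) / n
    return sum(cr1d(a + (k + 0.5) * h) for k in range(n)) * h

full = mass(-2.0, 2.0)   # ~1.0: the kernel is normalized
trunc = mass(-1.5, 1.5)  # ~1.026: the clipped tails are negative
```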
3c) Incidentally, the user-interface parameter labeled "filter_size",
which is allowed to vary between 0.5 and 1.5, seems to be used
inconsistently in the source code. I believe it was supposed to
multiply the support of the kernel relative to the standard size (1
for box, 2 for tent, 3 for quadratic, 4 for cubic, etc.). This is
indeed what happens for the tent filter. However, the box filter
ignores it, and the behavior is reversed for the remaining kernels.
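For what it's worth, here is a Python sketch of what I presume
"filter_size" was intended to do: stretch the kernel's support while
keeping it normalized. The names here are illustrative, not Blender's,
and the tent kernel is just an example.

```python
def tent(t):
    """1D tent (triangle) kernel, support [-1, 1], integral 1."""
    return max(0.0, 1.0 - abs(t))

def scaled_kernel(kernel_1d, t, filter_size):
    # Dividing the argument by filter_size widens the support by that
    # factor; dividing the value keeps the kernel's integral equal to 1.
    return kernel_1d(t / filter_size) / filter_size
```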