[Bf-committers] Bilateral Blur

Josh Wedlake josh.wedlake at gmail.com
Thu Dec 27 12:10:18 CET 2012


Hello,
A couple of you have run into some of my giant Python-generated node
trees on the bug tracker and have asked what they are in aid of.  I thought
the response might be of interest to a wider crowd, especially as a working
bilateral blur could well be a hot topic now that we have grainier renders
than ever before!  This comes with the disclaimer that I am simply that
Python hack who does those videos of spiders and mushrooms....

I built the node trees to work around a limitation of Blender's bilateral
blur: the node only accepts one "Determinator" input.  The workaround
described in the 2.46 release notes (
http://www.blender.org/development/release-logs/blender-246/compositing-nodes/)
is to add the two channels you want to use as determinators (say Z and
Normal) before feeding them into the Determinator input.  This works
sometimes, but in more complex cases, say where you want to limit the
bilateral blur by Diffuse Color and Normal, it creates as many artefacts as
it solves, blurring pixels where the normal is different and the color is
different, yet by bad luck the normal+color sum is the same!  It also makes
a nonsense of the sigma factor.  Using color has become important because
with Cycles we now have a largely grain-free Color pass (assuming you
aren't doing too many silly things with OSL), and it helps protect textures
somewhat from the blur.
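
To make the failure mode concrete, here is a toy example (the numbers are
made up, and color and normal are each collapsed to a single value for
brevity):

centre    = {"color": 0.2, "normal": 0.8}   # summed determinator: 1.0
neighbour = {"color": 0.7, "normal": 0.3}   # summed determinator: 1.0

sigma = 0.1
summed_delta = abs((centre["color"] + centre["normal"])
                   - (neighbour["color"] + neighbour["normal"]))
print(summed_delta < sigma)   # True: the neighbour gets blurred in,
                              # even though color and normal both differ

per_channel_ok = (abs(centre["color"] - neighbour["color"]) < sigma
                  and abs(centre["normal"] - neighbour["normal"]) < sigma)
print(per_channel_ok)         # False: a per-determinator test rejects it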

Currently in COM_BilateralBlurOperation.cpp, the method (in pseudocode) is:

blurColor = 0
blurDivider = 0
for y in range(miny, maxy):
    for x in range(minx, maxx):
        # deltaColor() compares the determinator at (x, y) with the centre pixel
        if deltaColor(Z+Normal) < sigma:
            blurColor += current_pixel()
            blurDivider += 1
output = blurColor / blurDivider


Arguably what should happen instead is.... (Option A):

blurColor = 0
blurDivider = 0
for y in range(miny, maxy):
    for x in range(minx, maxx):
        if (deltaColor(Z) < sigmaZ and
                deltaColor(Normal) < sigmaNormal and
                deltaColor(...) < sigma...):
            blurColor += current_pixel()
            blurDivider += 1
output = blurColor / blurDivider


Or maybe even better, we could multiply the per-determinator blur amounts
together:

blurColor = 0
blurDivider = 0
for y in range(miny, maxy):
    for x in range(minx, maxx):
        blurAmount = (
            (1 - clamp(deltaColor(Z) / sigmaZ)) *
            (1 - clamp(deltaColor(Normal) / sigmaNormal)) *
            ...)  # and so on for other arbitrary inputs
        blurColor += current_pixel() * blurAmount
        blurDivider += blurAmount
output = blurColor / blurDivider
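
For what it's worth, here is a rough, unoptimised numpy sketch of that
multiplicative scheme (Option B).  The function name, the clamp-based
weight and the list-of-passes interface are all my own invention for
illustration, not anything that exists in Blender:

import numpy as np

def bilateral_blur_multi(image, determinators, sigmas, radius):
    # image: (H, W, C) pass to blur; determinators: list of (H, W, Cd)
    # arrays (e.g. Z, Normal, Diffuse Color); sigmas: one threshold per
    # determinator.  Brute force, O(H * W * radius^2), purely illustrative.
    h, w, _ = image.shape
    out = np.zeros_like(image, dtype=float)
    for cy in range(h):
        for cx in range(w):
            blur_color = np.zeros(image.shape[2])
            blur_divider = 0.0
            for y in range(max(cy - radius, 0), min(cy + radius + 1, h)):
                for x in range(max(cx - radius, 0), min(cx + radius + 1, w)):
                    weight = 1.0
                    for det, sigma in zip(determinators, sigmas):
                        delta = np.abs(det[cy, cx] - det[y, x]).sum()
                        weight *= 1.0 - np.clip(delta / sigma, 0.0, 1.0)
                    blur_color += image[y, x] * weight
                    blur_divider += weight
            out[cy, cx] = blur_color / max(blur_divider, 1e-8)
    return out

The delta here is still just the summed per-channel absolute difference,
exactly as Blender does now; a determinator-type-aware metric (see further
down) would slot in at the "delta =" line.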

I think it's important that the bilateral blur accepts a variable number
of inputs, because the best results come from blurring passes separately:
in general it's OK to blur more aggressively on, say, an indirect light
pass than on a direct light pass.  On a direct light pass I might wish to
use Z, Normal and Diffuse Color as determinators, but on an indirect pass I
would only use Z and Normal.  I might have special-case shaders I want to
protect by throwing a Material Index pass in as well.

Blender's current bilateral blur uses a square filter (the "for x, for y"
loop).  In extreme cases this can result in a disco-ball effect: square
bokehs!  I know the Adobe method is to mix in a Gaussian filter (
http://people.csail.mit.edu/sparis/bf_course/slides08/03_definition_bf.pdf),
but although this solves the extreme cases, much of the time it simply
limits the effect of the blur, so I don't think this is the right way to
go.  And the Adobe blur has a permanent soft-glow look, which is fine if
you're working on an 80s music video, but not otherwise.
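
For reference, the textbook weight from that course is roughly a spatial
Gaussian over the pixel offset multiplied by a range Gaussian over the
determinator difference, something like the sketch below.  The spatial
term is what rounds off the square bokeh, but it also down-weights distant
pixels even when the determinators match, which is the "limits the effect
of the blur" problem:

import numpy as np

def gaussian_bilateral_weight(dx, dy, delta, sigma_spatial, sigma_range):
    # dx, dy: offset from the centre pixel; delta: determinator difference
    spatial = np.exp(-(dx * dx + dy * dy) / (2.0 * sigma_spatial ** 2))
    range_w = np.exp(-(delta * delta) / (2.0 * sigma_range ** 2))
    return spatial * range_w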

I think arguably the best method might be to use a flood-fill (bucket
fill / magic wand) type algorithm, spreading out from the centre pixel to
select neighbouring pixels rather than looping over the whole bokeh.  This
would stop the blur bokehs from jumping across gaps, which tends to
fragment the image slightly.  It would also allow a large maximum blur
radius to be set while keeping the blur quite small for most pixels
(faster?).
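
My reading of that idea, sketched with a single determinator image "det"
(the function and its interface are just an assumption for illustration;
none of this exists in Blender):

from collections import deque
import numpy as np

def flood_select(det, cx, cy, sigma, max_radius):
    # Grow outwards from the centre pixel, only stepping onto 4-connected
    # neighbours whose determinator is within sigma of the centre.  Pixels
    # that match but are separated from the centre by a gap of
    # non-matching pixels are never reached, so the bokeh cannot jump
    # across edges.
    h, w = det.shape[:2]
    centre_val = det[cy, cx]
    visited = {(cx, cy)}
    queue = deque([(cx, cy)])
    selected = []
    while queue:
        x, y = queue.popleft()
        selected.append((x, y))
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (nx, ny) in visited:
                continue
            if not (0 <= nx < w and 0 <= ny < h):
                continue
            if abs(nx - cx) > max_radius or abs(ny - cy) > max_radius:
                continue
            if np.abs(det[ny, nx] - centre_val).sum() < sigma:
                visited.add((nx, ny))
                queue.append((nx, ny))
    return selected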

Even after this there is still all the fun of working with a
non-anti-aliased Z buffer!  My hack around it is to blow the render up to
double size, render the Z pass at double size separately, process the
bilateral blur at double size, use a despeckle to get rid of the
single-pixel artefacts, and then size back down.  I'm not sure what the
seamless workflow would be here.
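
In script form the workaround looks roughly like the following (using
scipy in place of compositor nodes; the sigma, radius and filter sizes are
arbitrary, "bilateral" stands for whatever bilateral blur you are using,
e.g. the Option B sketch above, and the nearest-neighbour upscale of Z is
only a stand-in for actually re-rendering the Z pass at double size):

import numpy as np
from scipy.ndimage import median_filter, zoom

def blur_at_double_size(rgb, z, bilateral):
    # rgb: (H, W, 3) render; z: (H, W) depth pass
    rgb_2x = zoom(rgb, (2, 2, 1), order=1)            # upscale the render
    z_2x = zoom(z, (2, 2), order=0)                   # stand-in for a double-size Z pass
    blurred = bilateral(rgb_2x, [z_2x[..., None]], [0.1], radius=8)
    blurred = median_filter(blurred, size=(3, 3, 1))  # despeckle single-pixel artefacts
    return zoom(blurred, (0.5, 0.5, 1), order=1)      # size back down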

I also noticed that the way Blender calculates deltaColor is simply the
sum of the per-channel differences.  For a really high-quality blur we
would be determinator-type aware, taking dot products between normals and
so on.....
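
Something along these lines, say (the per-type rules are just my own
suggestions):

import numpy as np

def delta_typed(kind, a, b):
    # Determinator-type-aware difference between the centre value "a" and
    # the neighbour value "b".
    if kind == "normal":
        # angular difference: 0 when the normals agree, 2 when opposed
        return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)
    if kind == "depth":
        # relative depth difference, so distant geometry isn't over-penalised
        return abs(a - b) / (max(abs(a), abs(b)) + 1e-8)
    # default: Blender's current behaviour, summed per-channel difference
    return np.abs(np.asarray(a) - np.asarray(b)).sum()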

So is this a feature request?
Well, I've already hacked my way around the problem for my own personal
use, but I thought the results of a lot of experimenting might be of
interest in case anyone is already working in this area, or my ideas can
be improved further.  The change which is (probably) simplest to make, and
the most effective at improving the bilateral blur node, is to support
multiple inputs (Option A above).  I'd be very interested to hear thoughts
on this topic,

cheers, and thanks for the continued development and fixes!

