[Bf-committers] flame density sampling problem

Lukas Tönne lukas.toenne at googlemail.com
Sat Jan 16 14:34:04 CET 2010

Hello again,

I made some significant progress with the fire simulation system I
announced here earlier, thanks to the help of jahka, who convinced me
to integrate this into the particle system. Particles now form the
basis of the fire's flames, so we can use them to further simulate
fire spread and other effects at a later time. See these videos for a
small impression of how this simulation works:

Now, in order to turn this physical simulation into actual images of
fire, I started work on the second big part of this fire system:
rendering. Since fire is essentially a fluid, a volumetric rendering
approach is a natural choice. So my first attempt at rendering the
flames was to use a point density texture, which derives the flame
density, emission and color from the underlying filament data (those
curves you see in the videos) and an associated density and turbulence
function. However, after a little test implementation, and after
wrapping my head around how the creators of the original method
actually did this, I figured that this is not a good approach, not
least because rendering times get _way_ too long (here's the original
paper again:

So now I need some help to figure out how this could be done with
Blender's current rendering methods. I need to get a little deeper
into the math here:

The problem is that directly evaluating the flame
density/illumination/etc. at a point in space is difficult, because of
two turbulence functions applied to the raw density function:

1. Let x be a point relative to a flame filament.
2. The density/color/etc. at this _untransformed_ point x is given by
a function d(x).
3. The point x is then transformed by a turbulence function T to a
point x' = T(x). This simulates turbulence of the surrounding air as
well as fluctuation of the combustion process, and is essential for
realistic fire!
4. In order to determine the actual density at an arbitrary sample
point s in space (which is what we need to do for volumetric
rendering), we'd have to calculate d( T_inv( s ) ), where T_inv is the
inverse of T, which afaik is not easily computable ...
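To make the problem concrete, here is a tiny 1-D sketch of the setup above (the noise function and all names are my own toy stand-ins, not actual Blender code):

```python
import math

def noise(x):
    # cheap smooth pseudo-noise; a real implementation would use
    # Perlin-style noise in 3-D
    return math.sin(12.9898 * x) * 0.5 + math.sin(4.1414 * x + 1.3) * 0.5

def T(x, amplitude=0.3):
    # turbulence transform: perturbs the untransformed point x
    return x + amplitude * noise(x)

def d(x):
    # raw flame density around a filament centered at x = 0
    # (Gaussian falloff as a toy density function)
    return math.exp(-x * x / 0.1)

# The forward direction is easy: the density that ends up at s = T(x)
# is d(x).
x = 0.2
s = T(x)
density_there = d(x)

# But a volume renderer hands us an arbitrary sample point s and asks
# for the density AT s, i.e. d(T_inv(s)) -- and T, being a noise-based
# perturbation, has no closed-form inverse to plug in here.
```

The point of the sketch is only to show where the inversion is needed: the renderer queries in world space, but d() is defined in the untransformed filament space.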

It would be nice if I could simply create a density texture to use
with a volumetric material to render the flames, but the inability to
invert the turbulence transformation, and thus to calculate the
density at the untransformed point, is a real problem :(
The solution chosen by the authors of said paper is to generate a
complete set of point samples for the rendered flame volume in advance
and then transform all of these points together. This produces a new
set of sample points, which is still sufficiently dense due to the
coherence of the turbulence function. This approach is more
complicated, though, since I'd have to influence the way samples for
the volumetric material are generated instead of just creating a new
texture method. I also thought about simply assuming that, due to the
random nature of noise, the inverse transformation looks essentially
similar to the transformation itself, so one could just perturb the
samples before evaluating the density, but I guess this would still
look odd.

I am really just starting to get into the raytracing matter, so I
might simply have overlooked an easy solution to this problem. Maybe
there actually _is_ an inverse to the turbulence transformation and I
just didn't see it?
Any help is appreciated :)
