[Bf-cycles] 2.66 plans
Brecht Van Lommel
brechtvanlommel at pandora.be
Thu Nov 29 09:36:42 CET 2012
I think it would be cool to have a way to render all possible volume
global illumination effects. However, what I am mainly interested in
is efficient ways to render common effects reasonably fast, even if
that means we discard some terms. I thought the "Decoupled
Ray-marching of Heterogeneous Participating Media" paper was great. As
I understand it, that is indeed based on a strongly anisotropic phase
function and builds on methods typically used in other renderers.
I agree dynamic loading, on-the-fly subdivision, etc. would be great
to have. It's a very complex task though. Personally I prefer to focus
on other approaches first, and to get the main effects like hair,
volumetrics and SSS implemented. We can also still do a lot of memory
usage optimization; Arnold shows that it's possible to render very
complex scenes without too many dynamic loading tricks. And images and
volume datasets can use dynamic loading with path tracing, if we use
mipmaps carefully.
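As a sketch of the mipmap side of that (the function and its footprint
parameter are my own illustration, not existing Cycles code): wide,
blurry texture lookups only need coarse mip levels, so a loader can
keep the fine tiles of rarely touched images out of memory.

    #include <algorithm>
    #include <cmath>

    // Pick a mip level from the ray footprint, measured in texels of
    // the finest level: a footprint covering 2^k texels only needs
    // level k, so fine levels of distant textures never get loaded.
    float mip_level_for_footprint(float footprint_texels)
    {
        return std::max(0.0f, std::log2(footprint_texels));
    }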
If you'd like to investigate dynamic loading, that's great; I would
love to have that eventually. I think the "Two-level ray tracing with
reordering for highly complex scenes" and "Ray Differentials and
Multiresolution Geometry Caching" papers are the most interesting
here. Making the random sampling itself coherent sounds like a neat
trick; it probably only holds up for a few dimensions, but it might
still be worth trying.
On Thu, Nov 29, 2012 at 6:32 AM, storm <kartochka22 at yandex.ru> wrote:
> Hi Brecht!
> About the volume patch: I know that you already know these formulas,
> I am writing them here more for other lurkers/developers. The
> volumetric patch is essentially three lines of code:
>
> free_fly_distance = -log(1 - random) / sigma
> pdf of this sampling = sigma * exp(-distance * sigma)
> attenuation_eval = exp(-distance * sigma)
>
> where choosing "sigma" is the critical part.
> Obviously, it is brute force: accurate, and it can render any
> possible variable density and variable particle colors, but it is far
> too slow to be usable on an average Blender user's box, especially
> when the number of bounces is > 0.
> There is a special case, though, and the most interesting one for
> Blender users, such as human tissue shading, where we can assume
> homogeneous density, constant color, and a strongly anisotropic phase
> function. My patch is very bad at that case, as it needs an insane
> number of bounces to converge the flux in the main direction (where
> the anisotropic lobe has its maximum). And my tests show that the
> number must be really big, like hundreds, and we exceed floating
> point precision after ~50 bounces, making it unrealistic not only in
> computation time but in the result itself.
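> For context, that lobe is the Henyey-Greenstein phase function; a
> hedged sketch of evaluating and importance-sampling it (the function
> names are mine):
>
>     #include <cmath>
>
>     // Henyey-Greenstein phase function for scattering angle cosine
>     // cos_theta and asymmetry factor g in (-1, 1); g near 1 gives the
>     // strong forward lobe discussed above.
>     float hg_phase(float cos_theta, float g)
>     {
>         const float inv_4pi = 0.0795774715f;  // 1 / (4 * pi)
>         float denom = 1.0f + g * g - 2.0f * g * cos_theta;
>         return inv_4pi * (1.0f - g * g) / (denom * std::sqrt(denom));
>     }
>
>     // Importance-sample cos_theta from HG with a uniform xi in [0, 1).
>     float hg_sample_cos_theta(float g, float xi)
>     {
>         if (std::fabs(g) < 1e-3f)
>             return 1.0f - 2.0f * xi;  // nearly isotropic: sample uniformly
>         float sq = (1.0f - g * g) / (1.0f - g + 2.0f * g * xi);
>         return (1.0f + g * g - sq * sq) / (2.0f * g);
>     }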
> There is another approach, more common, that uses separate absorption
> and scattering sigmas, and it is used by almost all other renderers.
> My idea was to solve the integral to automatically calculate the
> scattering sigma from the known HG asymmetry factor "g" and use that,
> but I cannot solve it and have postponed it until better times.
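> For what it's worth, a commonly used first-order answer to exactly
> that question is the similarity relation from diffusion theory, which
> swaps the anisotropic medium for an isotropic one with a reduced
> scattering coefficient; a sketch of the relation, not what the patch
> does:
>
>     // Similarity relation: an HG medium with scattering coefficient
>     // sigma_s and asymmetry factor g scatters roughly like an
>     // isotropic medium with sigma_s' = (1 - g) * sigma_s.
>     float reduced_sigma_s(float sigma_s, float g)
>     {
>         return (1.0f - g) * sigma_s;
>     }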
> About other things: if you ask me, motion blur + dynamic memory (and
> obviously dynamic on-the-fly trickery, like on-demand subdivision) is
> the most useful area and needs more manpower. Bidirectional,
> volumetrics, even SSS are toys; the real thing is robust rendering of
> any possible mesh, and the memory limit will hit again and again.
> I had a plan to play with BVH algorithms; there are plenty of papers
> with cool original ideas, but to make it fun it must run on the GPU,
> and you know what I want to say. It is not worth manually selecting
> CPU/GPU instructions to beat all those "commercial renderers" if the
> foundation of the algorithms is not mature and can change frequently.
> I think coarse caching can be done with Monte Carlo. One idea is MCMC
> (Metropolis or ERPT), as those samplers tend to dance near the
> previous sample; the other is even easier: ordering the lower bits of
> the random generator will make sampling more cache friendly. It's not
> guaranteed, but cache misses can be lowered a lot, at least for 1
> bounce. This needs modeling to prove it, of course.
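> One way to read that low-bits idea, under my own interpretation (all
> identifiers below are hypothetical): trace bounce rays in the order
> of their random sample bits instead of pixel order, so rays with
> nearby sample values, and thus nearby directions, traverse
> overlapping BVH nodes.
>
>     #include <algorithm>
>     #include <cstdint>
>     #include <vector>
>
>     struct BounceRay {
>         uint32_t rng_bits;  // random bits the bounce direction is derived from
>         // origin, direction, throughput, ... omitted
>     };
>
>     // Sort rays by their sample bits before traversal: neighbours in
>     // the sorted order sample nearby directions, so consecutive
>     // traversals are more likely to touch BVH nodes already in cache.
>     void sort_for_coherence(std::vector<BounceRay> &rays)
>     {
>         std::sort(rays.begin(), rays.end(),
>                   [](const BounceRay &a, const BounceRay &b) {
>                       return a.rng_bits < b.rng_bits;
>                   });
>     }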
> The problem is that the code of a dynamic BVH will become very
> complex: lots of text, many workarounds/heuristics. I played with a
> simple BVH builder, and even that well-known structure has a lot of
> subtle things. At the same time, we really need a general
> memory/resources model that can spread dynamically between
> nodes/devices, network transparent and almost real time. 3-4 years
> and clouds will be everywhere, and cheap as water; we need to be
> ready.