[Bf-committers] How about adding boost uBLAS library to blender/extern?

Yves Poissant ypoissant2 at videotron.ca
Wed Jan 28 03:54:54 CET 2009


From: "joe" <joeedh at gmail.com>
Sent: Tuesday, January 27, 2009 8:38 PM

> I'm not arguing that ray tracing shadows, reflections, anti-aliasing,
> etc can't be fast enough.  It's the GI and more modern material design
> that I'm wondering about.

GI in general is largely based on some form of ray tracing, supplemented by 
caches of various kinds depending on the algorithm used.
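
Just to illustrate the "ray tracing plus a cache" pattern that schemes like 
irradiance caching or photon mapping share, here is a rough sketch in C++. 
It is not Blender code and all the names are made up; the only point is the 
lookup-or-trace structure: on a cache miss the renderer traces hemisphere 
rays at the shading point and stores the result, so that nearby shading 
points become cheap lookups.

    #include <cmath>
    #include <vector>

    struct Vec3 { float x, y, z; };

    // One cached record: an irradiance estimate valid within some radius.
    struct CacheEntry {
        Vec3  position;
        Vec3  irradiance;
        float radius;
    };

    struct IrradianceCache {
        std::vector<CacheEntry> entries;

        // Return a nearby cached entry, or nullptr for "trace rays here".
        const CacheEntry *lookup(const Vec3 &p) const {
            for (const CacheEntry &e : entries) {
                float dx = p.x - e.position.x;
                float dy = p.y - e.position.y;
                float dz = p.z - e.position.z;
                if (std::sqrt(dx * dx + dy * dy + dz * dz) < e.radius)
                    return &e;
            }
            return nullptr;
        }

        void store(const CacheEntry &e) { entries.push_back(e); }
    };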

As for modern material properties, they are actually much simpler to define 
and compute. The number of properties is greatly reduced and, being based on 
a physical description instead of a large set of visual descriptions, they 
are easier to understand, less prone to idiosyncratic implementation quirks 
and immune to user misunderstanding, because the resulting shading is always 
physically plausible. Their processing is unified into a couple of simple 
procedures, which makes them very quick to evaluate, in contrast with the 
decision-ridden special cases and compatibility hacks that are common in 
legacy ad-hoc renderers. The same representation and model can be used for 
"legacy" shading as well as for more physically plausible rendering. In the 
end, a physically plausible material description makes everything faster to 
set up, tweak and render.
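
As a rough illustration only (again not Blender code, and the Lambertian 
plus normalized Blinn-Phong mix is just a stand-in for whatever BRDF model 
one would actually pick), here is what such a reduced, physical description 
and its single evaluation procedure could look like, reusing the Vec3 type 
from the sketch above:

    #include <algorithm>
    #include <cmath>

    static inline float dot(const Vec3 &a, const Vec3 &b) {
        return a.x * b.x + a.y * b.y + a.z * b.z;
    }

    // A handful of physical properties instead of dozens of visual knobs.
    struct Material {
        Vec3  albedo;     // diffuse reflectance, each channel in [0, 1]
        float roughness;  // microfacet roughness in [0, 1]
        float specular;   // fraction of the energy sent to the specular lobe
    };

    // One evaluation procedure for every material: the BRDF value for light
    // direction wi and view direction wo, with n the shading normal.
    static Vec3 eval_brdf(const Material &m, const Vec3 &n,
                          const Vec3 &wi, const Vec3 &wo) {
        const float inv_pi = 0.3183099f;

        // Diffuse lobe: Lambertian, weighted so the lobes never sum above 1.
        float kd = 1.0f - m.specular;
        Vec3 f = { m.albedo.x * inv_pi * kd,
                   m.albedo.y * inv_pi * kd,
                   m.albedo.z * inv_pi * kd };

        // Specular lobe: normalized Blinn-Phong as a simple microfacet
        // stand-in, with roughness remapped to a shininess exponent.
        Vec3 h = { wi.x + wo.x, wi.y + wo.y, wi.z + wo.z };
        float len = std::sqrt(dot(h, h));
        if (len > 0.0f) { h.x /= len; h.y /= len; h.z /= len; }
        float shininess =
            std::max(2.0f / (m.roughness * m.roughness + 1e-4f) - 2.0f, 0.0f);
        float spec = m.specular * (shininess + 2.0f) * (0.5f * inv_pi) *
                     std::pow(std::max(dot(n, h), 0.0f), shininess);

        f.x += spec; f.y += spec; f.z += spec;
        return f;
    }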

> What do you mean by resource contention?  I guess zbuffering would
> push any ray tracing data out of the cpu cache?

Basically, yes.

>> Don't wait further, Just look for "PhotoRealistic RenderMan" or "PRMan" 
>> on
>> Google.
> That name means nothing.  Have you read any of pixar's papers? Or seen
> any of their talks?  They make it perfectly clear they don't
> especially care about physically-correct algorithms so much as
> flexible algorithms artists can work with, and algorithms that are
> fast in production.  Their papers are full of "this isn't quite
> correct, but it looks plausible and/or gives the artists more
> control."

I may have used the term "physically correct" in a few places during this 
discussion without noticing, but I generally try to use the term "physically 
plausible". Yes, I have read several papers from Pixar's web site. I find 
them very interesting and full of nice implementation ideas. What is clear 
to me, even though Pixar are not "purists", is that they are departing more 
and more from legacy CG and moving more and more into physically plausible 
CG. That is a matter of personal perception, though, so it is not useful to 
discuss Pixar's strategies any further.

> Ah, can this more modern way of doing materials and lighting be fast?
> I mean maxwell/indigo/etc are all very slow.

There are two issues here that can be separated: 1) the light and material 
description and 2) the calculation of the shading. The so-called "unbiased" 
renderers are shooting for the arch-viz market. That market doesn't care 
about render time: they want to impress clients and they rarely do 
animations. The unbiased renderers use physically accurate descriptions of 
lights and materials, but they throw at them a slow, accurate, full light 
simulation. It is perfectly possible to keep a physically plausible light 
and material description and throw production-optimized rendering 
algorithms at it instead. You could even throw a scanliner at physically 
plausible lights and materials. It would certainly not produce physically 
accurate, or even physically plausible, renders, but you could do it. You 
could also throw a single-sample ray tracer at those descriptions and get 
ray-traced-looking renders. Once you have a physically plausible 
infrastructure in place, you can throw all kinds of rendering algorithms at 
it, even an unbiased renderer if you like. Whereas if you keep using legacy 
material properties, you will always have a very hard time (read: it will 
be impossible) improving the realism of the renderer's output.
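
To make that separation concrete, here is a small hypothetical sketch that 
reuses the Material and eval_brdf from the earlier sketch: the description 
stays fixed, and only the integration strategy wrapped around it changes.

    // A shading point as seen by whatever algorithm happens to be rendering.
    struct ShadePoint {
        Vec3     position;
        Vec3     normal;
        Material material;
    };

    // Scanline-style direct lighting: one light, no bounces, same description.
    static Vec3 shade_direct(const ShadePoint &sp, const Vec3 &wo,
                             const Vec3 &light_dir, const Vec3 &light_radiance) {
        Vec3  f     = eval_brdf(sp.material, sp.normal, light_dir, wo);
        float cos_i = std::max(dot(sp.normal, light_dir), 0.0f);
        return { f.x * light_radiance.x * cos_i,
                 f.y * light_radiance.y * cos_i,
                 f.z * light_radiance.z * cos_i };
    }

    // A path tracer, an irradiance-caching GI pass or a brute-force
    // "unbiased" renderer would call the very same eval_brdf(); only the
    // code deciding where rays go and how many samples to take differs.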

> Anyway, it sounds very interesting. It'd be a mistake to entirely
> abandon all scanline support (deep shadow maps, for example, are very
> useful for hair), but I think I agree having a pipeline optimized for
> ray tracing would be a good idea.  Perhaps have code that renders a
> tile entirely with ray tracing if in ray tracing mode. . . I don't
> know.  You think the shading code would be a problem?

I cannot see how the renderer can be significantly optimized further while 
keeping the same shading code. I mean, it can certainly be optimized, but 
very far from its full potential. The legacy code is riddled with branches 
of all sorts, going everywhere to try to take the different material 
properties into account and make them produce usable shading in the face of 
conflicting specifications.
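
Purely as a caricature of what I mean (hypothetical code, with flag names 
only loosely inspired by legacy material options, and again reusing the Vec3 
helpers from above): every toggle below is a per-sample branch, and the 
combinations interact in ad-hoc, non-energy-conserving ways.

    struct LegacyMaterial {
        bool  shadeless, only_shadow, use_diffuse, use_spec;
        Vec3  color;
        float spec_intensity, hardness;
    };

    static Vec3 shade_legacy(const LegacyMaterial &m, const Vec3 &n,
                             const Vec3 &l, const Vec3 &v, float shadow) {
        if (m.shadeless)
            return m.color;                     // bypass lighting entirely
        if (m.only_shadow)
            return { shadow, shadow, shadow };  // special-cased shadow pass

        Vec3 out = { 0.0f, 0.0f, 0.0f };
        if (m.use_diffuse) {
            float d = std::max(dot(n, l), 0.0f) * shadow;
            out.x += m.color.x * d;
            out.y += m.color.y * d;
            out.z += m.color.z * d;
        }
        if (m.use_spec) {
            Vec3 h = { l.x + v.x, l.y + v.y, l.z + v.z };
            float len = std::sqrt(dot(h, h));
            if (len > 0.0f) { h.x /= len; h.y /= len; h.z /= len; }
            float s = m.spec_intensity * shadow *
                      std::pow(std::max(dot(n, h), 0.0f), m.hardness);
            out.x += s; out.y += s; out.z += s; // can exceed incoming energy
        }
        return out;
    }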

> I've read a
> little about cache-friendly ray tracing pipelines, but I don't know
> what complex shading systems will do to the cpu cache in that
> situation.

I would compare the mindset needed for cache-friendly programming to the one 
needed for writing multithreaded applications, in the sense that it really 
is a state of mind. The programmer needs a clear understanding of the 
technical implications, and experience helps immensely too. Given modern CPU 
architectures, one is as necessary as the other. You think cache-friendly 
programming is a chore? Wait until you have 8 or 16 cores contending for the 
same resources at the same time.
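
To give one concrete flavor of what "cache friendly" can mean for a ray 
tracer, here is a small generic sketch (not tied to Blender) of the usual 
array-of-structs versus struct-of-arrays trade-off for a batch of rays:

    #include <cstddef>
    #include <vector>

    // Array-of-structs: a traversal loop drags every field of every ray
    // through the cache, even the fields it never reads.
    struct RayAoS {
        float ox, oy, oz;   // origin
        float dx, dy, dz;   // direction
        float tmax;
        int   pixel, depth;
    };

    // Struct-of-arrays: each field is stored contiguously, so a loop that
    // only needs the directions streams three tightly packed arrays.
    struct RayBatchSoA {
        std::vector<float> ox, oy, oz;
        std::vector<float> dx, dy, dz;
        std::vector<float> tmax;
        std::vector<int>   pixel, depth;
    };

    // Example: count the rays heading toward a plane with normal (nx, ny, nz).
    static int count_facing(const RayBatchSoA &b, float nx, float ny, float nz) {
        int count = 0;
        for (std::size_t i = 0; i < b.dx.size(); ++i)
            if (b.dx[i] * nx + b.dy[i] * ny + b.dz[i] * nz < 0.0f)
                ++count;
        return count;
    }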

Yves 


