[Bf-committers] How about adding boost uBLAS library to blender/extern?

joe joeedh at gmail.com
Wed Jan 28 18:36:58 CET 2009


Interesting.  Any specific papers on the topic you'd recommend?

On Tue, Jan 27, 2009 at 7:54 PM, Yves Poissant <ypoissant2 at videotron.ca> wrote:
> As for modern material properties, they are actually much simpler to define
> and compute. The number of properties is immensely reduced, and because they
> are based on a physical description instead of a large set of visual
> descriptions, they are easier to understand, less prone to idiosyncratic
> implementation quirks, and immune to user misunderstanding, because the
> resulting shading is always physically plausible. Their processing is
> unified into a couple of simple procedures, which makes them very quick to
> process, in contrast with the decision-ridden special cases and
> compatibility hacks that are common in legacy ad-hoc renderers. The same
> representation and model can be used for "legacy" shading as well as for
> more physically plausible rendering. In the end, the physically plausible
> material description makes everything faster to set up, tweak, and render.
>
>> What do you mean by resource contention?  I guess zbuffering would
>> push any ray tracing data out of the cpu cache?
>
> Basically, yes.
>

>> Ah, can this more modern way of doing materials and lighting be fast?
>> I mean maxwell/indigo/etc are all very slow.
>
> There are two issues here that can be separated: 1) the light and material
> description, and 2) the calculation of shading. The so-called "unbiased"
> renderers are shooting for the arch-vis market. That market doesn't care
> about render time: they want to impress clients, and they rarely do
> animations. The unbiased renderers use physically accurate descriptions of
> lights and materials, but they throw at them a slow, accurate, full light
> simulation. It is perfectly possible to keep the physically plausible light
> and material description but throw production-optimized rendering
> algorithms at it. You can even throw a scanliner at physically plausible
> lights and materials. It would certainly not produce physically accurate,
> or even physically plausible, renders, but you could do it. You could also
> throw a single-sample ray tracer at those descriptions and get
> ray-traced-looking renders. Once you have a physically plausible
> infrastructure in place, you can throw all kinds of rendering algorithms at
> it, even an unbiased renderer if you like. Whereas if you keep using legacy
> material properties, you will always have a very hard time (read:
> impossible) improving the realism of the renderer's output.
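To make the "same description, many algorithms" idea concrete (again my own sketch, not code from any existing renderer): the material evaluation, here just an energy-conserving Lambertian BRDF, can be shared, and only the integration strategy around it changes:

```c
/* Sketch: one physically plausible material evaluation shared by any
 * rendering algorithm; only the visibility/integration code differs. */

/* Energy-conserving Lambertian BRDF: albedo / pi. */
float brdf_lambert(float albedo)
{
    return albedo / 3.14159265358979f;
}

/* Direct lighting at a surface point.  A scanliner, a single-sample
 * ray tracer, and a full path tracer could all call this same routine;
 * they differ only in how they find the light and the surface point. */
float shade_direct(float albedo, float cos_theta, float light_intensity)
{
    if (cos_theta <= 0.0f)
        return 0.0f; /* light below the horizon contributes nothing */
    return brdf_lambert(albedo) * cos_theta * light_intensity;
}
```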

OK, I was confused about that; it makes sense now.  So you don't
mean something like a full light simulation.  That makes a lot more
sense.

> I would compare the state of mind needed for cache-friendly programming to
> that needed for programming multithreaded applications, in the sense that
> it really is a state of mind. The programmer needs a clear understanding of
> the technical implications, and experience helps immensely too. Given
> modern CPU architectures, one is as necessary as the other. You think
> cache-friendly programming is a chore? Wait until you have 8 or 16 cores
> contending for the same resources at the same time.

Yeah, I've looked into this a little.  Now I tend to see data
structures in a different light; some algorithms just don't look that
great anymore, such as balanced binary trees (B-trees are more
cache-friendly by design), linked lists, or anything else that jumps
randomly around memory or has a lot of per-node overhead.
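The difference is easy to see in code (a toy example of my own): summing a contiguous array lets the hardware prefetcher stream memory through the cache, while walking a linked list chases pointers that can land anywhere:

```c
#include <stddef.h>

/* Pointer-chasing node: every hop can be a cache miss. */
typedef struct Node {
    int value;
    struct Node *next;
} Node;

/* Contiguous array: sequential access the prefetcher can predict. */
int sum_array(const int *v, int n)
{
    int s = 0;
    for (int i = 0; i < n; i++)
        s += v[i];
    return s;
}

/* Linked list: same result, but each node may live anywhere in
 * memory, so traversal speed depends on allocation layout. */
int sum_list(const Node *head)
{
    int s = 0;
    for (; head != NULL; head = head->next)
        s += head->value;
    return s;
}
```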

Anyway, lots of great information.  It's very fascinating.  And since
people are always bugging me to make stuff faster, I'm sure I'll
eventually be making use of some of this.

Joe


More information about the Bf-committers mailing list