[Bf-committers] How about adding boost uBLAS library to blender/extern?

joe joeedh at gmail.com
Tue Jan 27 15:58:37 CET 2009

Real-time ray tracing has been experimented with for years; the big
question is how well an offline renderer designed around those
principles will work.  To be competitive with scanline techniques, a
GI-based renderer would need to be at least as fast, and so far I've
not heard of anyone writing a practical offline GI renderer that
fast.  Speed is the really big issue here.

The draw of scanline techniques is that they tend to scale linearly
with data complexity, allowing for much greater scene detail.  Getting
ray tracing to compete seems to involve optimizing the algorithms as
much as humanly possible, to the point of tuning cache behaviour; you
don't need to do that for scanline techniques.

And even with something like rayforce (a fairly impressive real-time
ray tracer), is there any guarantee it'd be fast enough?  Would it
render a scene with 2 million polygons and 2 million strands of hair
in only a couple of minutes?  Or would people still use the older
faked techniques just because they render faster?

Like I said, it's all about speed.  Plenty of people are pretty good
at working in the familiar, less physically accurate world of
traditional renderers.  I'm not convinced you can get the same speed
from ray tracing, no matter what you do; take the example of deep
shadow maps.

A big draw of deep shadow maps is that they are preshaded and
anti-aliased, so there's no need to do any shading on lookup.  This
can cut down the amount of work quite a lot.  Ray tracing, on the
other hand, typically needs quite a lot of sampling to work as well,
at least for situations like hair.  Even if the ray tracer had a much
lower per-ray cost, it still might not be as fast (this is why ray
tracing as a replacement for deep shadow maps didn't catch on, by the
way: it simply had too much work to do with hair).
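To make the comparison concrete, here's a minimal sketch (illustrative Python, not Blender code; all names are mine) of why a deep shadow map lookup is so cheap: the per-pixel visibility function is built and prefiltered once when the map is generated, so a shadow query at render time is just an interpolation, with no shading and no ray casting.

```python
import bisect

# A deep shadow map stores, per pixel, a piecewise-linear visibility
# function: transmittance as a function of depth along the light ray.
# The control points are prefiltered and anti-aliased when the map is
# generated, so a lookup is a binary search plus one interpolation.

def lookup_transmittance(vis_fn, depth):
    """Interpolate the stored visibility function at `depth`.

    vis_fn: list of (depth, transmittance) control points, sorted by depth.
    """
    depths = [d for d, _ in vis_fn]
    i = bisect.bisect_right(depths, depth)
    if i == 0:
        return vis_fn[0][1]      # in front of all occluders: fully lit
    if i == len(vis_fn):
        return vis_fn[-1][1]     # behind all occluders
    (d0, t0), (d1, t1) = vis_fn[i - 1], vis_fn[i]
    w = (depth - d0) / (d1 - d0)
    return t0 + w * (t1 - t0)

# Example pixel: a clump of hair between depth 2.0 and 4.0 that
# gradually attenuates the light from 1.0 down to 0.1.
hair_pixel = [(0.0, 1.0), (2.0, 1.0), (4.0, 0.1), (10.0, 0.1)]
```

A ray tracer answering the same query for hair would instead have to trace and shade many samples through the clump every time, which is the extra per-lookup work the paragraph above is talking about.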

P.S.: Pixar is hardly switching to a more physically-based approach.
They're all about using approximations to make it easier for the
artist to produce art.  Their ray tracing implementation is
essentially tacked on like ours is; though of course it's far more
complex because they have to deal with caching multiple levels of
micropolygons and all that.

I highly doubt they're going to experience a paradigm shift or are
going to change the RenderMan specification to be more
physically-based.  They don't care, in the slightest.

On Mon, Jan 26, 2009 at 6:46 PM, Yves Poissant <ypoissant2 at videotron.ca> wrote:
> From: "joe" <joeedh at gmail.com>
> Sent: Monday, January 26, 2009 11:14 AM
>> Blender's core math stuff basically stores vectors and matrices as
>> float arrays.  Converting the renderer to use specific datatypes would
>> likely be a huge pain, and would need a lot of forethought.
> I agree. Although there is nothing in the examples you provided that can
> only be optimized in C and not in C++.
>> What sort of profound refactoring are you talking about, btw?
> Well, I tend to write too much, and if I start elaborating on that it can
> get really lengthy. But here I go anyway.
> <preamble>
> The "profound" or "fundamental" refactoring I'm talking about has to do
> with the representation of materials, lights and illumination calculations.
> Right now, the render engine is a collection of CG tricks that were
> developed through the years by researchers and that were implemented in
> Blender in an ad-hoc way. This way of representing material and light
> properties, and of calculating a shading value on object surfaces, is what
> I call the "legacy renderer" or "first generation renderer" way. Those
> tricks were developed by researchers who had little insight into the real
> physics of materials and lights.
> Legacy renderers are more or less the extension of the 70's state of
> ad-hoc rendering technology, with additional tricks that fitted that old
> model. Scanline converters were developed in 1967 by Wylie et al. Phong
> shading was developed in 1975 by Phong, and the Phong surface properties
> are still the basis of legacy renderers today. Z-buffers were demonstrated
> around 1975 by Catmull, Myers and Watkins. The ray-tracing algorithm was
> demonstrated in 1980 by Whitted. By 1985, all the bases of the legacy
> rendering tricks had been invented and formalized in text books such as
> "Fundamentals of Interactive Computer Graphics" by Foley and Van Dam in
> 1982, "Procedural Elements for Computer Graphics" by Rogers in 1985, and
> "An Introduction to Ray-Tracing" by Glassner in 1989. In 1983, Roy Hall
> published "A Testbed for Realistic Image Synthesis", which essentially
> provides all the material properties that are still in use in today's
> legacy renderers. The basic material properties that we tweak in Blender
> are all defined in this paper.
> Then in 1982, Clark proposed the "Geometry Engine", and this marked the
> crystallization of this rendering approach for all the years to come. The
> Geometry Engine evolved into Silicon Graphics and their hardware graphics
> accelerators, along with IrisGL, the library that allowed this hardware to
> be used and which eventually produced OpenGL. Then there was this race to
> produce ever more powerful OpenGL accelerators, which continues today.
> Direct3D is just another proprietary API for the same OpenGL type of
> shading.
> On the other hand, Cook & Torrance had already published a BRDF-related
> paper in 1981, but their shading equations were eventually absorbed by the
> legacy renderers, and the true result of their research stayed unnoticed
> for years. In 1986, Kajiya published "The Rendering Equation" and
> demonstrated the first GI algorithm. But the hardware required to do that
> was out of reach even for some research teams. 1984 to 1986 were the years
> when radiosity algorithms were developed. Then a long period of stagnation
> followed until 1992, and then 1995 to 1997, when efficient GI algorithms
> were finally developed, such as Photon Mapping by Jensen in 1996,
> Metropolis Light Transport by Veach in 1997 and Instant Radiosity by
> Keller in 1997. But by that time, OpenGL was the new standard and could
> not be dethroned.
> In 2009, things are changing. Single-core computers are on the way out and
> multicore computers are in. It is now possible to do realtime or very near
> realtime GI of very good quality.
> </preamble>
> Legacy renderers are outdated and need to be replaced by more modern
> renderers. It is impossible to get good surface renders with legacy
> material properties and single-sample illumination. It takes considerable
> time to tweak legacy material properties to get a realistic render, and it
> takes even more time to do that for animation, because legacy material
> property tweaks are view-dependent. The modern way to describe materials
> is based on the BRDF concept. The shading algorithm needs to be totally
> GI-integrated. GI meaning Global Illumination: the whole rendering
> equation should be integrated in one renderer instead of adding up legacy
> CG trick results. New renderers such as LuxRender, Indigo, YafAray,
> Maxwell, FryRender, etc., and to some extent VRay and such, are all based
> on these new and physically plausible (if not physically accurate)
> descriptions of materials and light. What is more annoying is that it is
> impossible to map legacy material properties to physically plausible
> material properties.
> There you have it. My profound refactoring in a "nutshell" =)
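For what it's worth, the gap between legacy material knobs and a physically plausible BRDF can be sketched in a few lines (illustrative Python, not Blender code; the function names are mine). The normalization factor is exactly what the legacy Phong specular lacks, which is why its parameters have no fixed physical meaning and can't simply be mapped over:

```python
import math

# The same Phong lobe, written two ways: as a legacy shading trick and
# as a normalized, physically plausible BRDF.

def legacy_phong_specular(ks, shininess, cos_rv):
    # Legacy renderers: ks is a free artistic knob.  Nothing ties the
    # reflected energy to the incoming energy, so values that look right
    # in one view or lighting setup can be wrong in another.
    return ks * (cos_rv ** shininess)

def normalized_phong_brdf(ks, shininess, cos_rv):
    # Physically plausible version: the (n + 2) / (2 pi) factor
    # normalizes the lobe so that, for 0 <= ks <= 1, it never reflects
    # more energy than it receives, for any shininess value.
    return ks * (shininess + 2.0) / (2.0 * math.pi) * (cos_rv ** shininess)
```

Because the normalized version scales with shininess while the legacy version doesn't, there is no single conversion of ks that makes the two agree across materials, which is the mapping problem you describe.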
>> Though I'm not sure how a cache-friendly
>> shader/ray tracing pipeline would work (are there any papers that
>> address a cache-friendly shader and ray tracing pipeline, as opposed
>> to just a ray tracing pipeline?).
> That is a vast subject, and it is difficult to point to one paper or even
> a small set of papers that covers this topic. I would say that the modern
> rendering papers are all concerned with figuring out tricks for improving
> hardware utilisation. That includes the memory cache. But if there is one
> publication that I think is truly illuminating in this regard, IMO it
> would have to be the "Heuristic Ray Shooting Algorithms" PhD thesis by
> Vlastimil Havran.
> Yves
> _______________________________________________
> Bf-committers mailing list
> Bf-committers at blender.org
> http://lists.blender.org/mailman/listinfo/bf-committers
