[Bf-committers] How about adding boost uBLAS library to blender/extern?
Yves Poissant
ypoissant2 at videotron.ca
Tue Jan 27 02:46:28 CET 2009
From: "joe" <joeedh at gmail.com>
Sent: Monday, January 26, 2009 11:14 AM
> Blender's core math stuff basically stores vectors and matrices as
> float arrays. Converting the renderer to use specific datatypes would
> likely be a huge pain, and would need a lot of forethought.
I agree, although there is nothing in the examples you provided that can
only be optimized in C and not in C++.
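To give a trivial illustration of what I mean (a hypothetical sketch, not
code from Blender's sources): a thin C++ wrapper keeps exactly the same
memory layout as a float[3], so it can still be handed to the existing C
routines, and once the operators are inlined the compiler should emit
essentially the same instructions as the hand-written C:

  // Hypothetical sketch: same layout as float[3], no vtable, no padding.
  struct vec3 {
      float v[3];
      float &operator[](int i) { return v[i]; }
      float operator[](int i) const { return v[i]; }
  };

  inline vec3 add(const vec3 &a, const vec3 &b) {
      vec3 r;
      for (int i = 0; i < 3; i++)
          r.v[i] = a.v[i] + b.v[i];  // inlines to the same loop as C
      return r;
  }

  inline float dot(const vec3 &a, const vec3 &b) {
      return a.v[0] * b.v[0] + a.v[1] * b.v[1] + a.v[2] * b.v[2];
  }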
> What sort of profound refactoring are you talking about, btw?
Well, I tend to write too much, and if I start elaborating on that it can
get really lengthy. But here I go anyway.
<preamble>
The "profound" or "fundamental" refactoring I talk about is to do with
representation of materials, lights and illumination calculations. Right
now, the render engine is a collection of CG tricks that were developped
through the years by researchers and that were implemented in Blender in an
add-hoc way. This way of representing material properties and light
properties and of calculating a shading value on object surfaces I call that
the "legacy renderer" or "first generation renderer" way. Those tricks were
developped by researchers that had little insights into real physics of
material and lights.
Legacy renderers are more or less the extension of the 70's state of
ad-hoc rendering technology, with additional tricks that fitted that old
model. Scanline converters were developed in 1967 by Wylie et al. Phong
shading was developed in 1975 by Phong, and the Phong surface properties
are still the basis of legacy renderers today. Z-buffers were demonstrated
around 1975 by Catmull, Myers and Watkins. The ray-tracing algorithm was
demonstrated in 1980 by Whitted. By 1985, all the bases of the legacy
rendering tricks were invented and formalized in text books such as
"Fundamentals of Interactive Computer Graphics" by Foley and Van Dam in
1982, "Procedural Elements for Computer Graphics" by Rogers in 1985, and
"An Introduction to Ray-Tracing" by Glassner in 1989. In 1983, Roy Hall
published "A Testbed for Realistic Image Synthesis", which essentially
provides all the material properties that are still in use in today's
legacy renderers. The basic material properties that we tweak in Blender
are all defined in this paper.
Then in 1982, Clark proposed the "Geometry Engine", and this marked the
crystallization of this rendering approach for all the years to come. The
Geometry Engine evolved into Silicon Graphics and their hardware graphics
accelerators, together with IrisGL, the library that gave access to this
hardware and that eventually produced OpenGL. Then there was a race to
produce ever more powerful OpenGL accelerators, which continues today.
Direct3D is just another proprietary API for the same OpenGL type of
shading.
On the other hand, Cook & Torrance had already published their BRDF paper
in 1981, but their shading equations were eventually absorbed by the
legacy renderers and the true result of their research went unnoticed for
years. In 1986, Kajiya published "The Rendering Equation" and demonstrated
the first GI algorithm. But the hardware required to do that was out of
reach even for some research teams. 1984 to 1986 were the years when
radiosity algorithms were developed. Then came a long period of
stagnation, until 1992, and then 1995 to 1997, when efficient GI
algorithms were finally developed, such as Photon Mapping by Jensen in
1996, Metropolis Light Transport by Veach in 1997 and Instant Radiosity by
Keller in 1997. But by that time, OpenGL was the new standard and could
not be dethroned.
In 2009, things are changing. Single core computers are on the way out and
multicore computers are in. It is now possible to do realtime or very near
realtime GI of very good quality.
</preamble>
Legacy renderers are outdated and need to be replaced by more modern
renderers. It is impossible to get good surface renders with legacy
material properties and single-sample illumination. It takes considerable
time to tweak legacy material properties to get a realistic render, and it
takes even more time to do that for animation, because legacy material
property tweaks are view dependent. The modern way to describe materials
is based on the BRDF concept. The shading algorithm needs to be totally GI
integrated. GI means Global Illumination: the whole rendering equation
should be integrated in one renderer instead of adding up the results of
legacy CG tricks. New renderers such as LuxRender, Indigo, YafAray,
Maxwell, FryRender, etc., and to some extent VRay and such, are all based
on this new and physically plausible (if not physically accurate)
description of materials and light. What is more annoying is that it is
impossible to map legacy material properties to physically plausible
material properties.
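To make that concrete, here is the equation from Kajiya's paper, in its
usual hemispherical form (LaTeX notation), where f_r is the BRDF I
mentioned above:

  L_o(x, \omega_o) = L_e(x, \omega_o)
      + \int_{\Omega} f_r(x, \omega_i, \omega_o) \, L_i(x, \omega_i) \,
        (\omega_i \cdot n) \, d\omega_i

A GI renderer estimates that integral over the whole hemisphere, usually
by Monte Carlo sampling. A legacy renderer replaces it with a fixed sum of
ambient, diffuse and specular terms, which is why the two sets of material
properties do not map onto each other.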
There you have it. My profound refactoring in a "nutshell" =)
> Though I'm not sure how a cache-friendly
> shader/ray tracing pipeline would work (are there any papers that
> address a cache friendly shader and ray tracing pipeline, as opposed
> to just a ray tracing pipeline?).
That is a vast subject, and it is difficult to point to one paper, or even
a small set of papers, that covers this topic. I would say that the modern
rendering papers are all concerned with figuring out tricks for improving
hardware utilisation. That includes the memory cache. But if there is one
publication that I think is truly illuminating in this regard, IMO, it
would have to be the "Heuristic Ray Shooting Algorithms" PhD thesis by
Vlastimil Havran.
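Just to illustrate the kind of trick I mean (my own sketch, not code from
the thesis): a classic one is packing a kd-tree node into 8 bytes, so that
a 64-byte cache line holds eight nodes and traversal touches far less
memory when the nodes are laid out in traversal order:

  // Hypothetical sketch of a cache-friendly kd-tree node (8 bytes).
  struct KDNode {
      // Bits 0..1 of flags: split axis (0, 1, 2), or 3 for a leaf.
      // Remaining bits: index of the children (inner node) or of the
      // primitive list (leaf node).
      unsigned int flags;
      union {
          float split;           // inner node: split plane position
          unsigned int nprims;   // leaf node: primitive count
      };
  };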
Yves