[Bf-committers] How about adding boost uBLAS library to blender/extern?

Yves Poissant ypoissant2 at videotron.ca
Wed Jan 28 01:16:41 CET 2009


From: "joe" <joeedh at gmail.com>
Sent: Tuesday, January 27, 2009 9:58 AM


> Real-time ray tracing has been experimented with for years; the big
> question is how well an offline renderer designed around those
> principles will work.  To be competitive with scanline techniques, a
> GI-based renderer would need to be at least as fast, and so far I've
> not heard of anyone writing a practical offline GI renderer that
> fast.  Speed is the really big issue here.

No argument alone can demonstrate that this is possible, or that it is 
not. Look up "Arauna" for an engine that can ray trace quite large and 
complex scenes at less than a second per 640x480 frame on a Pentium IV. 
It flies on a quad-core. That includes shadows, reflections and 
antialiasing too. And this renderer does not even use the GPU, just 
multithreading and SSE. Pay a visit to ompf.org and look for a CUDA 
implementation of a real-time GI renderer (you will find Arauna there 
too). With current hardware, this is more than hypothetical. Intel even 
demonstrated a fully ray-traced Quake a few months ago on their 
Larrabee processor.
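
To make the "multithreading and SSE" part concrete, here is a minimal, 
hypothetical sketch of the packet idea: testing four rays against one 
bounding box in a single SSE pass. The layout and the hit4 name are 
made up for this example, not Arauna's actual code, and it assumes the 
reciprocal ray directions are precomputed.

#include <xmmintrin.h>  // SSE intrinsics

// Four rays in struct-of-arrays form: ox/oy/oz are origins,
// idx/idy/idz are precomputed reciprocal directions (1/d).
// blo/bhi are the box's min and max corners.
int hit4(__m128 ox, __m128 oy, __m128 oz,
         __m128 idx, __m128 idy, __m128 idz,
         const float blo[3], const float bhi[3])
{
    __m128 o[3]  = { ox, oy, oz };
    __m128 id[3] = { idx, idy, idz };
    __m128 t0 = _mm_set1_ps(0.0f);
    __m128 t1 = _mm_set1_ps(1e30f);
    for (int a = 0; a < 3; ++a) {
        // Slab test on one axis, for all four rays at once.
        __m128 tA = _mm_mul_ps(_mm_sub_ps(_mm_set1_ps(blo[a]), o[a]), id[a]);
        __m128 tB = _mm_mul_ps(_mm_sub_ps(_mm_set1_ps(bhi[a]), o[a]), id[a]);
        t0 = _mm_max_ps(t0, _mm_min_ps(tA, tB));
        t1 = _mm_min_ps(t1, _mm_max_ps(tA, tB));
    }
    // One bit per ray: bit i is set if ray i hits the box.
    return _mm_movemask_ps(_mm_cmple_ps(t0, t1));
}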

At work, I implemented a production ray tracer that can render complex 
scenes, including indirect illumination, in less than 10 seconds per 
800x450 5x-AA frame on a single core, without even using SSE, the GPU 
or Boost (or Eigen).

At my previous job, seeing how the scanline renderer was competing 
with the ray tracer for machine resources, I had the idea that a pure 
ray-tracing engine could be much faster if the rendering was 
streamlined for ray tracing and did not have to deal with resource 
contention with the scanliner. I could not test that idea there, but I 
could test it at my current job. Bingo! So today, no argument will 
convince me that it can't be done, because I did it and I have the 
proof that it can be done in front of me every day.

> The draw with scanline techniques is that they tend to scale linearly
> with data complexity, allowing for much greater scene detail.

While a scanline renderer's complexity is O(n), where n is the amount 
of scene detail, a ray tracer's complexity is roughly O(log n) per ray, 
because the acceleration structure lets every missed bounding volume 
prune an entire subtree of the scene. That is less than linear. What's 
more, since rays are independent of each other, ray tracing scales 
superbly well on multi-cores and stream processors.
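
To see where the O(log n) comes from, here is a minimal sketch of BVH 
traversal. The names and layout are hypothetical, not any particular 
renderer's code, and the logarithmic bound assumes a reasonably 
balanced tree.

#include <algorithm>
#include <memory>
#include <utility>
#include <vector>

struct AABB {
    float lo[3], hi[3];
    // Standard slab test: does the ray (org, 1/dir) hit this box?
    bool hit(const float org[3], const float invDir[3]) const {
        float t0 = 0.0f, t1 = 1e30f;
        for (int a = 0; a < 3; ++a) {
            float tNear = (lo[a] - org[a]) * invDir[a];
            float tFar  = (hi[a] - org[a]) * invDir[a];
            if (tNear > tFar) std::swap(tNear, tFar);
            t0 = std::max(t0, tNear);
            t1 = std::min(t1, tFar);
        }
        return t0 <= t1;
    }
};

struct BVHNode {
    AABB bounds;
    std::unique_ptr<BVHNode> left, right;  // both null for a leaf
    std::vector<int> triangles;            // triangle indices (leaves only)
};

// Every bounding box the ray misses prunes an entire subtree, so a
// balanced tree is traversed in roughly log2(n) steps instead of
// testing all n triangles.
void traverse(const BVHNode* node, const float org[3],
              const float invDir[3], std::vector<int>& candidates)
{
    if (!node || !node->bounds.hit(org, invDir))
        return;                            // miss: skip the whole subtree
    if (!node->left && !node->right) {     // leaf: collect its triangles
        candidates.insert(candidates.end(),
                          node->triangles.begin(), node->triangles.end());
        return;
    }
    traverse(node->left.get(), org, invDir, candidates);
    traverse(node->right.get(), org, invDir, candidates);
}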

...

> I highly doubt they're going to experience a paradigm shift or are
> going to change the RenderMan specification to be more
> physically-based.  They don't care, in the slightest.

Don't wait any further; just look up "PhotoRealistic RenderMan" or 
"PRMan" on Google.

-----------

All that said, I am not advocating that the "legacy" way of specifying 
material surfaces and lights disappear from Blender and be replaced by 
new, physically plausible material and light specifications. My main 
argument is that the renderer needs serious work. And since it needs 
work, that work might as well be done with the more modern ways of 
specifying materials and lights, and the more integrated ways of 
computing the different aspects of shading, in mind. Both ways could 
live together; the sketch below shows one way. But refactoring the 
renderer today while completely overlooking physical plausibility and 
photorealism would be a bazooka shot in Blender's foot. And the 
renderer architecture might as well be optimized for modern hardware 
using the most efficient known techniques. Doing that is a profound 
refactoring, IMO.
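
As a rough illustration of how both could live together, here is a 
hypothetical sketch of one shading interface with a legacy material 
and a physically plausible one behind it. The names are made up for 
this example and are not Blender's actual API.

#include <algorithm>

struct Vec3 {
    float x, y, z;
    Vec3 operator*(float s) const { return {x * s, y * s, z * s}; }
};

static float dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// One interface the renderer shades through, regardless of how the
// material was specified.
struct Shader {
    virtual ~Shader() = default;
    // Radiance reflected toward the viewer for light arriving from wi.
    virtual Vec3 shade(const Vec3& wi, const Vec3& n,
                       const Vec3& light) const = 0;
};

// Legacy-style material: an ad hoc diffuse factor, with no guarantee
// of energy conservation.
struct LegacyShader : Shader {
    float diffuse = 0.8f;
    Vec3 shade(const Vec3& wi, const Vec3& n,
               const Vec3& light) const override {
        return light * (diffuse * std::max(0.0f, dot(n, wi)));
    }
};

// Physically plausible material: a Lambertian BRDF normalized by pi,
// so it can never reflect more energy than it receives.
struct LambertShader : Shader {
    float albedo = 0.8f;
    Vec3 shade(const Vec3& wi, const Vec3& n,
               const Vec3& light) const override {
        const float kPi = 3.14159265f;
        return light * ((albedo / kPi) * std::max(0.0f, dot(n, wi)));
    }
};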

Yves 


