[Bf-committers] How about adding boost uBLAS library to blender/extern?

joe joeedh at gmail.com
Wed Jan 28 02:38:36 CET 2009


On Tue, Jan 27, 2009 at 5:16 PM, Yves Poissant <ypoissant2 at videotron.ca> wrote:
> From: "joe" <joeedh at gmail.com>
> Sent: Tuesday, January 27, 2009 9:58 AM
>
>
>> Real-time ray tracing has been experimented with for years; the big
>> question is how well an offline renderer designed around those
>> principles will work.  To be competitive with scanline techniques, a
>> GI-based renderer would need to be at least as fast, and so far I've
>> not heard of anyone writing a practical offline GI renderer that
>> fast.  Speed is the really big issue here.
>
> Arguments alone can demonstrate neither that this is possible nor that
> it is not. Look up "Arauna" for an engine that can raytrace quite large
> and complex scenes at less than a second per 640x480 frame on a
> Pentium IV. It flies on a quad-core. That includes shadows, reflections
> and antialiasing too. And this renderer is not even using the GPU, just
> multithreading and SSE. Go pay a visit to ompf.org and look for a CUDA
> implementation of a realtime GI renderer (you will find Arauna there
> too). With current hardware, it is more than hypothetical. Intel even
> demonstrated a fully ray-traced Quake a few months ago using their
> Larrabee processor.

I'm not arguing that ray tracing shadows, reflections, anti-aliasing,
etc. can't be fast enough.  It's the GI and more modern material design
that I'm wondering about.
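
As an aside, the "multithreading and SSE" part is fairly well
understood by now: packet tracers like Arauna trace several rays at
once, so one SSE box test serves four rays.  Roughly like this (my own
sketch, not Arauna's code; the struct and function names are made up):

    #include <xmmintrin.h>  /* SSE intrinsics */

    /* Four rays traced together, stored structure-of-arrays so each
     * member maps onto one SSE register. */
    typedef struct {
        __m128 o[3];      /* origins, one register per axis */
        __m128 inv_d[3];  /* 1 / direction, per axis */
        __m128 tmax;      /* current nearest-hit distances */
    } RayPacket4;

    /* Slab test of all four rays against one AABB; returns a mask
     * with a lane set for every ray that hits the box. */
    static __m128 intersect_aabb4(const RayPacket4 *r,
                                  const float bmin[3],
                                  const float bmax[3])
    {
        __m128 tnear = _mm_setzero_ps();
        __m128 tfar = r->tmax;
        int axis;

        for (axis = 0; axis < 3; axis++) {
            __m128 t0 = _mm_mul_ps(
                _mm_sub_ps(_mm_set1_ps(bmin[axis]), r->o[axis]),
                r->inv_d[axis]);
            __m128 t1 = _mm_mul_ps(
                _mm_sub_ps(_mm_set1_ps(bmax[axis]), r->o[axis]),
                r->inv_d[axis]);
            tnear = _mm_max_ps(tnear, _mm_min_ps(t0, t1));
            tfar  = _mm_min_ps(tfar,  _mm_max_ps(t0, t1));
        }
        return _mm_cmple_ps(tnear, tfar);  /* lane set => ray hits box */
    }

When the returned mask is all-zero, the whole subtree under that box
can be skipped for all four rays at once, which is where the speed
comes from.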

>
> At work, I implemented a production ray-tracer that can render complex
> scenes, including indirect illumination, in less than 10 seconds per
> 800x450 frame with 5x AA, on a single core, without using SSE, the GPU
> or even Boost (or Eigen).
>
> At my previous job, seeing how the scanline renderer was competing with
> the ray-tracer for machine resources, I had the idea that a pure
> ray-tracing engine could be much faster if the pipeline was streamlined
> for ray-tracing and did not have to deal with resource contention with
> the scanliner. I could not test that idea there, but I could test it at
> my current job. Bingo! So today, no argument will convince me that it
> can't be done, because I did it and I have the proof that it can be
> done in front of me every day.

What do you mean by resource contention?  I guess zbuffering would
push any ray-tracing data out of the CPU cache?

>> The draw of scanline techniques is that they tend to scale linearly
>> with data complexity, allowing for much greater scene detail.
>
> While a scanliner's complexity is O(n), where n is the scene detail, a
> raytracer's complexity is O(log n) because of the acceleration
> structure: less than linear. What's more, ray-tracing scales superbly
> well on multi-cores and stream processors.

The O(n) complexity of scanliners doesn't cause an issue in practice,
though; zbuffering even large amounts of geometry is very fast.  Ray
tracers traditionally have not been as fast in a practical sense,
though as you say this has changed lately.
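
To make the log(n) concrete: with a bounding volume hierarchy each ray
walks down a tree and prunes every subtree whose bounding box it
misses, so for a reasonably built tree the per-ray cost grows with the
tree depth rather than with the triangle count.  A bare-bones
single-ray traversal looks something like this (my sketch; Ray, Hit,
ray_aabb_test and ray_tri_test are stand-in helpers):

    typedef struct BVHNode {
        float bmin[3], bmax[3];   /* bounds of this subtree */
        int left, right;          /* child indices, or -1 for a leaf */
        int tri_start, tri_count; /* leaf only: range of triangles */
    } BVHNode;

    /* Iterative traversal with an explicit stack; subtrees whose
     * boxes the ray misses are skipped wholesale, which is where the
     * roughly logarithmic per-ray cost comes from.
     * Ray, Hit, ray_aabb_test and ray_tri_test are assumed helpers. */
    void bvh_intersect(const BVHNode *nodes, const Ray *ray, Hit *hit)
    {
        int stack[64], top = 0, i;
        stack[top++] = 0;  /* start at the root */

        while (top > 0) {
            const BVHNode *n = &nodes[stack[--top]];

            if (!ray_aabb_test(ray, n->bmin, n->bmax, hit->t))
                continue;  /* prune this whole subtree */

            if (n->left < 0) {  /* leaf: test its triangles */
                for (i = 0; i < n->tri_count; i++)
                    ray_tri_test(ray, n->tri_start + i, hit);
            } else {
                stack[top++] = n->left;
                stack[top++] = n->right;
            }
        }
    }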

>
>> I highly doubt they're going to experience a paradigm shift or change
>> the RenderMan specification to be more physically based.  They don't
>> care in the slightest.
>
> Don't wait any longer; just look for "PhotoRealistic RenderMan" or
> "PRMan" on Google.

That name by itself means nothing.  Have you read any of Pixar's
papers, or seen any of their talks?  They make it perfectly clear that
they don't especially care about physically correct algorithms so much
as about flexible algorithms artists can work with, and algorithms
that are fast in production.  Their papers are full of "this isn't
quite correct, but it looks plausible and/or gives the artists more
control."

>
> -----------
>
> All that said, I am not advocating that the "legacy" way of specifying
> material surfaces and lights disappear from Blender, replaced by new
> physically plausible material and light specifications. My main
> argument is rather that the renderer needs serious work. And since it
> needs work, that work might as well be done with the more modern ways
> of specifying materials and lights, and more integrated ways of
> calculating the different aspects of shading, in mind. Both approaches
> could live together. But refactoring the renderer today while
> completely overlooking physical plausibility and photorealism would be
> a bazooka shot in Blender's foot. And the renderer architecture might
> as well be optimized for modern hardware using the most efficient
> known techniques. Doing that is a profound refactoring, IMO.
>
> Yves

Ah, but can this more modern way of doing materials and lighting be
fast?  I mean, Maxwell/Indigo/etc. are all very slow.

Anyway, it sounds very interesting.  It'd be a mistake to entirely
abandon scanline support (deep shadow maps, for example, are very
useful for hair), but I think I agree that having a pipeline optimized
for ray tracing would be a good idea.  Perhaps have code that renders
a tile entirely with ray tracing when in ray-tracing mode... I don't
know.  Do you think the shading code would be a problem?  I've read a
little about cache-friendly ray-tracing pipelines, but I don't know
what complex shading systems will do to the CPU cache in that
situation.
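
To make "renders a tile entirely with ray tracing" a bit more
concrete, here is the sort of shape I have in mind, very roughly (all
names invented, and whether phase 2 stays cache-friendly is exactly
the open question):

    /* One possible per-tile pipeline: trace the whole tile first,
     * then shade the whole tile, so BVH traversal and shading each
     * get the cache to themselves instead of evicting one another.
     * Scene, Tile, Ray, Hit, camera_ray, bvh_intersect and shade are
     * assumed helpers; TILE_SIZE is a small power of two. */
    void render_tile(const Scene *scene, Tile *tile)
    {
        Ray rays[TILE_SIZE * TILE_SIZE];
        Hit hits[TILE_SIZE * TILE_SIZE];
        int x, y, i = 0;

        /* Phase 1: generate and trace the tile's primary rays. */
        for (y = 0; y < TILE_SIZE; y++)
            for (x = 0; x < TILE_SIZE; x++)
                camera_ray(scene, tile->x + x, tile->y + y, &rays[i++]);

        for (i = 0; i < TILE_SIZE * TILE_SIZE; i++)
            bvh_intersect(scene->bvh, &rays[i], &hits[i]);

        /* Phase 2: shade every hit in one pass, so material and
         * texture data streams through the cache once instead of
         * being interleaved with traversal. */
        for (i = 0; i < TILE_SIZE * TILE_SIZE; i++)
            tile->pixels[i] = shade(scene, &rays[i], &hits[i]);
    }

Whether that second loop thrashes the cache anyway once the shading
networks get complicated is what I'd want measured before committing
to a design.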

Joe
