[Bf-funboard] GPU accelerated rendering

Bill Baxter baxter at cs.unc.edu
Fri Oct 22 05:13:18 CEST 2004


Since Larry Gritz just came and gave a talk on Gelato here two days ago, 
I'll just mention some of what he said.  If I recall correctly he said 
it took an average of about five or six people working full time on 
Gelato for two years to get it all working, and the result is something 
that gets about 2x performance increase "on average" over "leading film 
renderers".   He said that "on average" means some frames are slower 
with Gelato, others are up to 6 or 7x faster.   So on the workload they 
think is "typical" it averages out to 2x improvement.  But that was with 
the very first Gelato release targeting the NV40 core.  Actually he made 
it sound like they didn't really expect much of any speed boost at all 
with the 1.0 release, and were pleasantly surprised by the 2x factor.  
The strategy is more to get it working correctly and shipping now, and 
then take advantage of the 6-9 month product cycle in GPUs so that in 3 
or 4 generations, they'll have rendering speeds no one can touch.

Somebody mentioned GPU accelerated enhanced preview modes.  Gelato is 
not a preview renderer.  It is specifically for doing the final 
renders.  It supports a shading language that's a superset of Renderman 
in functionality (at least according to Larry), it puts correctness of 
output as the top priority, and though faster than other final renderers 
it's still FAR from real time.  It is definitely not an interactive 
preview renderer.  In fact, they're thinking more along the lines of a 
farm of headless servers equipped with NVIDIA GPUs.  You don't need a 
display to run Gelato.

Gelato took 5 or 6 really excellent engineers 2 years solid to get 
working with one brand of graphics card, and that was while working inside 
that manufacturer's walls.  To get something similar in Blender that is 
vendor neutral, without direct assistance on things like internal 
architecture details from either vendor, will likely take much longer 
than 2 years with an open source development model.  I'm not saying it 
can't or shouldn't be done, just that it's going to be very difficult to 
pull off.   And here's what I think the clincher is: the biggest 
challenge of the problem is probably just the issue of how to 
efficiently use GPUs as generic processors, and on that front there are 
a number of current research projects in the works:  Brook and Sh are 
the two main ones.  Generic programming with GPUs is a problem people 
are working on solving, and once they do, I think it will make the task 
of programming something like Gelato a lot easier, and make it more or 
less vendor neutral right from the get-go.  So to me, putting a lot of 
effort into something like a Blender version of Gelato is probably 
premature at this point.  Wait till reasonable generic programming tools 
for the GPU are in place.  Otherwise the project will 
probably spend its first year reinventing Brook badly, only to dump 
whatever they have done in the end in favor of Brook or something like it.
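
For anyone who hasn't looked at Brook, the gist of it is that you write 
little "kernel" functions that get applied to every element of a data 
stream, and the Brook runtime worries about mapping that onto the GPU's 
fragment pipeline, whichever vendor's card it happens to be.  Very roughly 
(this is a sketch from memory, so don't hold me to the exact syntax) it 
looks something like this:

    // kernel: runs once per stream element, on the GPU
    kernel void saxpy(float alpha, float x<>, float y<>, out float result<>) {
        result = alpha * x + y;
    }

    // host side: ordinary C plus stream declarations
    float X[1024], Y[1024], R[1024];
    float x<1024>, y<1024>, result<1024>;

    streamRead(x, X);            // copy data from main memory into the stream
    streamRead(y, Y);
    saxpy(2.0f, x, y, result);   // run the kernel over all 1024 elements
    streamWrite(result, R);      // copy the result back out

The point being that nowhere in there do you touch NV40 registers or write 
fragment programs by hand; all the vendor-specific ugliness lives down in 
the Brook runtime, which is exactly the layer a Blender Gelato-alike would 
otherwise have to build and maintain itself.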

But I do agree with the guy who said support Blenderman.  That certainly 
can't hurt.  There's so much Renderman code out there that pretty much 
any new renderer that wants to take the top spot with the big studio 
folks is going to have to support it in order to really catch on.  Or if 
it doesn't, someone will write a Renderman importer for it, as was 
the case with Gelato. 

--bb

Robert Wenzlaff wrote:

>On Wednesday 20 October 2004 16:27, Konrad Haenel wrote:
>  
>
>>And lastly I feel the urge to quote:
>>
>>"I stand by what I said.  If you want Gelato (et al.), the best way to get
>>it is to support the Blenderman project.  If we had a fully useable
>>Renderman interface today, we'd have GPU rendering (plus about 20 other
>>software renderers) _TODAY_ .  Some work on Blenderman has already been
>>done."
>>
>>Gelato isn't free.  We might have nice watermarked images with its
>>evaluation copy now, if we own an nVidia Quadro FX board (a
>>requirement).  That's one downside I see.
>>    
>>
>
>I was only using Gelato as an example as it was one of the links posted.  Any 
>time we delve into using proprietary hardware (as all gfx cards are) we have 
>to deal with proprietary code.  The drivers are supplied free, as the cards 
>aren't worth squat without them, but it's proprietary code.  I'll bet that 
>since nVidia _is_ selling Gelato, they won't be too forthcoming on the 
>technical details of their cards that would allow us to do this any other 
>way.   They only supply their Linux driver in a binary form just to keep 
>those technical details secret.  
>
>There is nothing in C, C++, Python, or any other language that says how to 
>manipulate the registers in any particular GFX card to achieve any 
>particular effect.  There are literally thousands of registers in some of 
>these GPUs.  And the registers in Brand A aren't even remotely like the ones 
>in Brand B.   If you want to do GPU assisted rendering, you either buy their 
>API, or spend nearly as many man-hours reverse engineering their card as they 
>spent engineering it.  Oh, and when they come out with a new card, do it all 
>again.
>
>When OpenGL accesses the features of a GPU, it does so through a library 
>provided by the GPU manufacturer (or based on data provided by the GPU 
>manufacturer in the rare cases they actually publish that data).   As 
>applications programmers, we are 2-3 layers removed from that 
>complexity/uniqueness.  Just because they look the same at this level, don't 
>assume they are the same all the way down.
>
>From your description of how you foresee these being used, it sounds like you 
>just want enhanced preview modes.  The preview modes are already done in 
>OpenGL and already use as much GPU assist as your particular driver lib 
>offers.  OpenGL offers methods to do more exact texture mapping and more 
>accurate lighting.  If you want to code a Shift-Ctrl-Z mode that uses these, 
>please do.   But that's not really what GPU assisted rendering is all about.
>
>If you want real GPU assisted rendering, you are going to have to deal with 
>nVidia's Gelato (or ATI's equivalent) at some point.  They simply are not going 
>to tell you how to program their cards.  And since you are asking their cards 
>for the assist, you have to speak their language.  The open-source nVidia 
>driver developers have been butting their heads against this for several 
>years.  Their drivers are nowhere near as good as the nVidia-provided 
>binaries.
>
>  
>


