[Bf-committers] Render API follow-up

Yves Poissant ypoissant2 at videotron.ca
Thu Apr 2 03:52:40 CEST 2009


From: <echelon at infomail.upr.edu.cu>
Sent: Tuesday, March 31, 2009 1:17 PM

>   .... About the CUDA topic in the render API: while GPGPUs and stream
> computing are a must in every graphics pipeline, the field is currently
> not well standardized. NVIDIA has CUDA, which thanks to smart moves from
> that company has gained a big user base, but ATI also has its own option
> with ATI Stream, based on Brook+, and if Intel enters with Larrabee they
> will probably make their own move... so that field is currently like a
> swamp, each GPGPU company trying to get the biggest piece of the cake.
>
>   The best option seems to be OpenCL, which is currently only paperware....
>
>   So there are two options:
>   1- several devs take care of the NVIDIA side (CUDA), the ATI side
> (Stream), and so on
>   2- wait for a hardware-independent implementation like OpenCL

Or 3- Don't wait for any of those but start designing for parallelism.

I have the feeling there is a rush to choose a GPU programming paradigm 
before even considering the design of the API. We are already reading 
discussions of CUDA, Brook, OpenCL and Larrabee, but I have yet to read any 
discussion about what a render API should look like, or what the needs and 
requirements are.

Discussing any particular GPU programming paradigm is premature and IMO 
always will be, because those paradigms will evolve. We are just beginning 
to see them emerge. The hardware industry is only starting to truly explore 
the possibilities of parallel and streaming architectures, and we are very 
far from the end of that exploration. Even if we were to choose the best 
paradigm for today, there is a huge probability that it will be outdated in 
a few years, if not a few months.

Designing software for parallelism is very different from designing software 
for serial processing. One cannot expect to use the full potential of a 
parallel architecture by just recompiling the source with a new vectorizing 
compiler. That is the old way of thinking: that we would get better 
performance by just swapping in a faster CPU. No more of that. The design 
itself must be approached with parallelism in mind, abstracting away any 
particular architecture or paradigm.
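
To make that concrete, here is a minimal sketch of what I mean by designing 
for parallelism while abstracting the platform. Everything in it (the 
TaskScheduler and ThreadScheduler names, the C++11 threads used only for 
brevity) is hypothetical and meant purely to illustrate the idea: the render 
code talks to an abstract scheduler, and a CUDA, OpenCL or Larrabee backend 
could later be slotted in behind the same interface.

// Purely illustrative: an abstract "run these independent work items"
// interface, with no GPU vendor assumed anywhere in the design.
#include <cstddef>
#include <functional>
#include <thread>
#include <vector>

class TaskScheduler {
public:
  virtual ~TaskScheduler() {}
  // Execute fn(i) for i in [0, count); the items must be independent.
  virtual void parallel_for(std::size_t count,
                            const std::function<void(std::size_t)> &fn) = 0;
};

// One possible backend: plain CPU threads. A CUDA/OpenCL/Larrabee backend
// would implement the same interface without the callers ever changing.
class ThreadScheduler : public TaskScheduler {
public:
  void parallel_for(std::size_t count,
                    const std::function<void(std::size_t)> &fn) {
    std::size_t nthreads = std::thread::hardware_concurrency();
    if (nthreads == 0) nthreads = 1;
    std::vector<std::thread> workers;
    for (std::size_t t = 0; t < nthreads; ++t) {
      // Each worker handles every nthreads-th item, starting at t.
      workers.push_back(std::thread([&, t]() {
        for (std::size_t i = t; i < count; i += nthreads)
          fn(i);
      }));
    }
    for (std::size_t t = 0; t < workers.size(); ++t)
      workers[t].join();
  }
};

The point is not this particular scheduler; it is only that the render code 
expresses its work as independent chunks and never mentions a vendor API.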

Why don't we just get back to the real discussion? (A rough sketch of what 
such an API could look like follows the list below.) That is:
- What is a render API?
- What would be a good render API for Blender?
- What would be a good render API for Lux and others?
- What do we know about the current interface/exporters/API to external 
renderers that we would like to improve?
- What is currently lacking?
- What kind of communication between Blender and the renderer do we need?
- How tight or how loose does this communication need to be?
- Can we identify and categorize which kinds of information can be loosely 
coupled and which need to be tightly coupled?
- Can the information be decoupled and exchanged in parallel chunks?
- How can this chunking be done?
- What sort of information, structures, interaction and UI must be 
exchanged/shared from Blender to the renderer and from the renderer to 
Blender?
- Do we need the renderer to be able to expose UI widgets in Blender (for 
example to describe materials and lights)?
- How is data and property persistence handled?
- Do we need Blender to store, save and load, to/from blend files, the 
renderer's particular properties?
- Do we need Blender to store, save and load, to/from blend files, the 
renderer's particular requirements for material and light properties?
- What should be the formalism for that?

etc. etc. etc.
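
Just to make a few of those questions concrete, here is one possible shape 
such an API could take. Every name below (RenderAPI, SceneSnapshot, 
TileResult, PropertyDesc) is invented purely for illustration and is not an 
existing Blender interface:

#include <string>
#include <vector>

struct TileResult {              // a finished chunk of image, pushed back to Blender
  int x, y, width, height;
  std::vector<float> rgba;       // width * height * 4 floats
};

struct PropertyDesc {            // a renderer-specific property Blender could expose
  std::string name;              // e.g. "max_path_depth"
  std::string type;              // "int", "float", "enum", ...
  std::string default_value;
};

class SceneSnapshot;             // opaque handle to exported geometry/materials/lights

class RenderAPI {
public:
  virtual ~RenderAPI() {}
  // Loosely coupled data: a full scene export at render start.
  virtual void begin_render(const SceneSnapshot &scene) = 0;
  // Tightly coupled data: incremental results streamed back per tile.
  virtual bool next_tile(TileResult &out) = 0;
  virtual void cancel() = 0;
  // Renderer tells Blender which extra properties to show in the UI and
  // persist in the .blend file (material/light/render settings).
  virtual std::vector<PropertyDesc> exposed_properties() const = 0;
};

Even a toy sketch like this immediately surfaces the loose-versus-tight 
coupling question: the scene export is a one-shot, loosely coupled transfer, 
while tile results and exposed properties need an ongoing, tighter dialogue 
with Blender.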

None of those questions even requires that a particular parallelism or 
streaming model be selected. If there is one thing that is certain, it is 
that we will have to think in terms of parallelism for years to come, no 
matter which marketing gizmos each company comes up with. The design phase 
should be approached with parallelism in mind, while abstracting away the 
platform this parallelism will eventually be implemented on.

Yves

PS: For those interested in Larrabee, Dr. Dobb's just published a very 
interesting and insightful article about this upcoming CPU/GPU at 
http://www.ddj.com/hpc-high-performance-computing/216402188
And Intel has released a Larrabee C++ Prototype Library at 
http://software.intel.com/en-us/articles/prototype-primitives-guide/

While those are interesting reads, I still strongly believe it would be a 
big mistake to base a design on any specific architecture. 


