[Bf-committers] Render API: Framework and Scene Break-Down

Bobby Parker sh0rtwave at gmail.com
Thu Jun 7 15:37:28 CEST 2007


> I used the following passes:
>   - Ambient Occlusion rendering of the static model (just once)
>   - Shadow map generation (just once)
>   - Diffuse pass (each frame)
>   - Specular pass (each frame)
> It's not 100% clear how customized passes like that (which use
> user-specified subsets of the scene!) could be built into the exporters
> that have been described so far, but it seems to me that it would be
> much easier to do by treating the Blender scene as a database than by
> having a push model where the exporter has to take everything in one go
> and *then* decide what to do with it.

Well, for RenderMan that's actually pretty easy to do; I've already
achieved that end with Neqsus/BtoR. RenderMan supplies so much power
through additional Display statements that there's really no limit to
how many passes can be generated. Aqsis, for instance, will
automatically render your shadow maps for you (although I also support
scenarios where the shadow maps are rendered in a separate pass per
light) and cobble together everything you need for AO as well.
Supporting all of those various renderer-specific attributes, options,
and language constructs is cake when you get down to it. The need for
that level of control is what drove me to devise my system in the
first place.
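For illustration, here's a minimal, hypothetical sketch (not actual
BtoR code) of how an exporter might emit one Display statement per
requested pass. In RIB, a leading "+" on the display name asks the
renderer to add a secondary output rather than replace the primary one:

```python
# Hypothetical helper: build one RIB Display statement per pass.
# The pass names and modes here are illustrative only.
def display_statements(passes):
    """passes: list of (name, mode) tuples; returns RIB Display lines."""
    lines = []
    for i, (name, mode) in enumerate(passes):
        # Secondary outputs get a "+" prefix so they stack on the primary.
        prefix = "" if i == 0 else "+"
        lines.append('Display "%s%s.tif" "file" "%s"' % (prefix, name, mode))
    return lines

for line in display_statements([("beauty", "rgba"),
                                ("diffuse", "rgb"),
                                ("specular", "rgb")]):
    print(line)
```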



> Imagine for a second a GUI interface for an exporter (maybe written in
> wxWindows or whatever) that is launched when the export is initiated.
> The GUI could then present the user with a view onto the Blender scene,
> and allow them to mold specific rendering passes as they see fit.
> Personally, I wouldn't have much use for a "generic" RenderMan exporter,
> but I do have use for something that allows me more control.  (Hence my
> Python suggestion.)

That's really what I envision as far as Neqsus/BtoR goes. That would
be ideal. Truth be told, I would love to be able to make my Python
exporter system appear in its own window. Given RenderMan's
requirements, it makes a degree of sense to do something like that and
allow an export plugin to have control beyond just handling the render
process (as Neqsus/BtoR does now, where you can select a given object
and assign material shaders, set object attributes, control lights,
generate shadow maps for selected lights, and so on).

And BTW, in my own opinion, a "generic RenderMan exporter" is rather
pointless and has really limited use cases.

> Yes, I see this as a shortcoming of the current Python API rather than
> as a fundamental limitation.  I think Campbell Barton's recent push
> toward BPy may help here.

Totally in agreement. I haven't seen the stuff for the latest push, so
I'll check that out.

> I was wondering about this though.  I don't have many contacts out in
> the "real world" of RenderMan users.  Do people *really* use more than
> two transformations per RiMotion block?  Is it common in production?  I
> just remember building a particle system many years ago where I went to
> great pains to allow an arbitrary number of transformations per motion
> block (if Nicholas Yue is reading this: yes, partsys died a long time
> ago!), and I was terribly disappointed that people couldn't see the
> difference in any of my animations. :-/

It's not so much about the number of transformations per motion block
as about accessing the data at the correct time location to generate
an accurate motion blur effect. Blender's animation system only
supports integer-based frame access, but I think it's important to
support float-based or sub-frame access (i.e. frame 2.4, 2.8), since
for speedy actions that's necessary to get the right effect. Of
course, this only really applies if you're looking for realism... but
we know that the human eye is a finicky and judgmental beast.
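As a rough illustration of what sub-frame access could look like (the
names here are hypothetical, not Blender API calls), one can linearly
interpolate samples taken at the two surrounding integer frames:

```python
import math

# Toy sketch: approximate a fractional-frame sample by interpolating
# between the nearest integer frames. sample_at_frame is any callable
# that returns a tuple of floats for an integer frame number.
def subframe_sample(sample_at_frame, frame):
    f0 = int(math.floor(frame))
    t = frame - f0          # fractional position between f0 and f0 + 1
    a = sample_at_frame(f0)
    b = sample_at_frame(f0 + 1)
    return tuple(x + t * (y - x) for x, y in zip(a, b))

# e.g. a location moving one unit per frame along X:
loc = lambda f: (float(f), 0.0, 0.0)
print(subframe_sample(loc, 2.4))  # -> approximately (2.4, 0.0, 0.0)
```

This is only linear interpolation; a real exporter would want to
evaluate the actual animation curves at the sub-frame time.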

My current workaround for this involves creating what's effectively a
very slow-motion animation for frames where I need motion blur, then
implementing a frame-skip method so that only every third frame, say,
is rendered, and the in-between frames are used for motion transforms.
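The frame-skip bookkeeping amounts to something like this (a toy
sketch of the idea, not my actual exporter code):

```python
# Toy sketch of the frame-skip trick: author the animation at N-times
# slow motion, render only every Nth frame, and use the skipped
# in-between frames as integer-accessible motion samples for the blur.
def motion_sample_frames(render_frame, skip=3):
    """Return the slow-motion frames that make up one rendered frame."""
    base = render_frame * skip
    return list(range(base, base + skip))

print(motion_sample_frames(2, skip=3))  # -> [6, 7, 8]
```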

In any event, I do think it's a good start to Aaron's process.

Bobby Parker
ShortWave
