[Bf-committers] Proposal: Blender OpenCL compositor

Jeroen Bakker j.bakker at atmind.nl
Thu Jan 20 18:49:25 CET 2011


Hi Lukas,

Spaghetti vs Expressions :) : I agree with your conclusion. I really see 
this as a sign that there are too few control parameters on a node and 
that node implementations are limited. Also, we currently have different 
granularities of nodes: very functional nodes, very mathematical nodes, 
and data (combine, split) nodes. I would make the mathematical nodes 
part of the functional nodes. The data nodes are perhaps not needed 
anymore once you have a single data type and color modes in the node 
itself. Currently the defocus node is 2D, but it is only useful in 3D. 
Therefore compositors will create complex systems with z-clips and 
render layers to first split the image into layers, defocus every layer 
on its own, and combine these layers again.

You see the same effect with vector blur and two objects moving in 
opposite directions. As depth is not used during the calculation, you 
need to split, calculate, and combine.

Some generic way to reference scene data: Yes, I will redo that part of 
the proposal, but at the moment I don't have a solution for every case. 
In the compositor, "the need for the data" should be part of the kernel 
that will use the data. But as the compositor only has limited scene 
references, I don't know the ideal solution for this yet.
Currently it will support camera data, render data (current frame), and 
compositor settings (default color mode?).
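To make the idea concrete: the scene data a kernel needs could be gathered once, at the start of execution, into a plain struct that can be handed to OpenCL as constants. This is only a hypothetical sketch; the struct name, fields, and placeholder values are illustrative and not part of any existing Blender API.

```c
#include <stdio.h>

/* Hypothetical snapshot of the scene references the compositor
 * currently needs: camera data, render data (current frame), and
 * compositor settings. Taken once before execution so kernels see
 * constant values. */
typedef struct CompositorSceneData {
    /* camera data */
    float camera_lens;        /* focal length in mm */
    float camera_clip_start;
    float camera_clip_end;
    /* render data */
    int current_frame;
    int resolution_x, resolution_y;
    /* compositor settings */
    int color_mode;           /* e.g. 0 = default RGBA float */
} CompositorSceneData;

static CompositorSceneData scene_data_snapshot(int frame)
{
    /* A real implementation would read these from the Scene;
     * placeholder values are used here for illustration. */
    CompositorSceneData d;
    d.camera_lens = 35.0f;
    d.camera_clip_start = 0.1f;
    d.camera_clip_end = 100.0f;
    d.current_frame = frame;
    d.resolution_x = 1920;
    d.resolution_y = 1080;
    d.color_mode = 0;
    return d;
}
```

Because the snapshot is taken before execution, the same values can be reused by every kernel in the tree without further scene access.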

More abstract memory manager: I agree! I wouldn't implement this tied 
to the compositor situation. I was thinking of something like:
  - alloc(deviceId, len(Struct), width, height) for 2D images
  - alloc(deviceId, len(Struct), size) for 1D arrays
The compositor also uses the array allocation for un/n-ary based kernel 
groups.
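A minimal sketch of what that allocator interface could look like, assuming a host-side buffer descriptor; the names (MemBuffer, mem_alloc_2d, mem_alloc_1d) are invented for illustration, and a real version would allocate on the given device rather than with calloc:

```c
#include <stdlib.h>

/* Hypothetical device-agnostic buffer: 2D for images, 1D for arrays
 * (height == 1). deviceId selects the CPU or an OpenCL device. */
typedef struct MemBuffer {
    int device_id;
    size_t elem_size;   /* len(Struct): bytes per element */
    int width, height;
    void *data;
} MemBuffer;

/* alloc(deviceId, len(Struct), width, height) for 2D images */
static MemBuffer *mem_alloc_2d(int device_id, size_t elem_size,
                               int width, int height)
{
    MemBuffer *buf = calloc(1, sizeof(MemBuffer));
    if (!buf) return NULL;
    buf->device_id = device_id;
    buf->elem_size = elem_size;
    buf->width = width;
    buf->height = height;
    buf->data = calloc((size_t)width * (size_t)height, elem_size);
    if (!buf->data) { free(buf); return NULL; }
    return buf;
}

/* alloc(deviceId, len(Struct), size) for 1D arrays */
static MemBuffer *mem_alloc_1d(int device_id, size_t elem_size, int size)
{
    return mem_alloc_2d(device_id, elem_size, size, 1);
}

static void mem_free(MemBuffer *buf)
{
    if (buf) { free(buf->data); free(buf); }
}
```

Keeping only element size and dimensions in the interface is what makes it reusable for non-image data such as particle buffers.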

Jeroen.

On 01/20/2011 09:48 AM, Lukas Tönne wrote:
> There are a couple of things I'd like to note, especially those not
> directly related to OpenCL vs. CPU code (most arguments have been
> voiced already):
>
> * On the question whether horizontal layout (Blender nodes,
> Softimage), vertical layout (Houdini, Aviary) or completely customized
> layout (Nuke) is preferable: I'd like to point out that it would
> probably be difficult to use socket names and default input values for
> sockets with anything other than horizontal nodes. Most software that
> use a different layout approach seem to have just one single type of
> socket data, depending on the type of tree. For compositing systems
> this is simply the image buffer you want to manipulate, for more
> complex systems (such as Houdini) a socket connection can mean a
> parent-child object relation or vertex or particle data, etc.,
> depending on the type of tree.
>
> * While the restriction to one single data type in a tree allows very
> clean layout and easily understandable data flow in trees, it also
> means that there needs to be a different way of controlling node
> parameters, which usually means scripted expressions. Currently many
> nodes in Blender have sockets that simply allow you to use variable
> parameters, calculated from input data with math nodes or other nodes'
> results. AFAIK the equivalent to expressions in Blender would be the
> driver system, but making this into a feature that is generic enough
> to replace node-based inputs is probably a lot more work than "only" a
> compositor recode (correct me if I'm wrong).
>
> * Having a general system for referencing scene data could be
> extremely useful, especially for the types of trees in the domain I am
> working in: particle sims (and mesh modifiers lately). In compositor
> nodes the only real data that must occasionally be referenced is the
> camera (maybe later on curves can be useful for masking? just a rough
> idea). For simulation nodes having access to objects, textures, lamps,
> etc. is even more crucial.
>
> We discussed already that such references/pointers would have to be
> constants, which means that their concrete value is already defined
> during tree construction and not only when executing. This makes it
> possible to read the data at the beginning of execution and convert it
> to OpenCL-readable format. It will also allow keeping track of data
> dependencies (not much of an issue in compositor, but again very
> important for simulations). Note that there are already some places
> where data is linked in a tree (e.g. material and texture nodes), but
> these are not implemented as sockets and so don't allow efficient
> reuse of their input values by linking.
>
> * I would love to see the memory manager you are planning for tiled
> compositing be abstracted just a little more, so that it can be used
> for data other than image buffers too. In simulations of millions of
> particles the buffers could easily reach sizes comparable to those in
> compositing, so it would be a good idea to split them into parts and
> process these individually where possible.
>
> In images the pixels all have fixed locations and you can easily
> define neighboring tiles to do convolutions. This kind of calculation
> is usually not present in "arbitrary" or unconnected data, such as
> particles or mesh vertices, so an element/tile/part will either depend
> on just one of the input parts or all of them. But still having a
> generic manager for loading parts into memory could avoid some double
> work.
>
> Cheers,
> Lukas
> _______________________________________________
> Bf-committers mailing list
> Bf-committers at blender.org
> http://lists.blender.org/mailman/listinfo/bf-committers
>
