[Bf-committers] Proposal for unifying nodes
mierle at gmail.com
Mon Jun 15 18:04:29 CEST 2009
On Sun, Jun 14, 2009 at 11:07 AM, Robin Allen <roblovski at gmail.com> wrote:
> Hi all, I hope this is the right list.
> After hearing Ton say that nodes might see a recode, and knowing that
> users are sometimes frustrated by Blender's strict separation of tree
> types, I thought about ways to change how nodes are evaluated to let
> users use any nodes in any tree. I've put my ideas up at
> http://wiki.blender.org/index.php/User:Frr/NodeThoughts . I'd be
> willing to take this project on if people feel the design is up to
> scratch, perhaps developing in a branch akin to bmesh.
> Main points:
> * Expand nodes' data types from (float, vector, color) to include
> functions and other types
> * Define a shader to be a function of a ShaderCallData
> * Define a texture to be a function of a TexCallData
> * Allow the user to specify any nodetree outputting a shader to be
> used as a material tree; any tree outputting a texture to be used as a
> texture tree; etc.
If I recall correctly, the reason for the separation of node tree
types was that material nodes need to be high performance. Making nodes
fully general would impose a significant performance penalty on shader
nodes, whose trees are evaluated millions of times per render.
However, there is an alternative: glue LLVM into Blender (a ~4 MB RAM
penalty), which gets us a full optimizing compiler middle- and back-end,
then compile the node trees to optimized machine code via the LLVM JIT.
This is already done for GLSL on the Mac when software fallback is required
on lower-end graphics cards. Adobe also uses LLVM in their PixelBender
product to compile processing pipelines similar to Blender's compositing:
http://llvm.org/devmtg/2008-08/Rose_AdobePixelBender_Hi.m4v (if you look
closely, I'm in the audience :)
What's interesting about their work is that they are able to split the
compilation according to what can be done on the GPU. They have a custom GPU
backend for LLVM; as much of the processing pipeline as possible runs there,
and the remainder is compiled for the CPU. In both cases, the intermediate
representation is LLVM IR.
I've used LLVM for a simple functional language; it's easy to work with.
There are even C bindings, in case using the C++ bindings is not desired.
Another point in favor of LLVM: Python is moving towards running on LLVM, so
in the future Blender may ship LLVM anyway when bundling Python. There is
already a JIT'ing version of Python running on LLVM, though it's not
finished. If you're curious, check out the Unladen Swallow project.
One problem with switching to LLVM alone is that parts of the
compositing pipeline would have to be written directly in LLVM IR. A way
around this is to use Clang, the C/C++/Objective-C frontend that is part of
the LLVM project. With Clang, we could compile the current node processing
code (in C) into LLVM IR, then at runtime generate IR to hook together the
parts of the node tree. That IR would then be JIT'd to optimized native
code, so each node tree would become a single optimized function.
Having LLVM + Clang would also open doors for things like compiling GLSL
shaders for final rendering (rather than just preview). However, it would be
a departure from what Blender is doing now.
> * Define implicit conversions allowing nodes (e.g. Invert) to be
> defined once to work on colors, and then be automatically converted to
> work on textures and shaders (since both are defined as functions
> returning colors).
> * Results in an extensible node system: instead of defining a new tree
> type, just define a new data type and some nodes that work on it.
> * No more duplication of code with tiny changes (math, image...)
> I'd like to hear any comments or criticisms you might have.