[Bf-committers] Proposal for unifying nodes

Robin Allen roblovski at gmail.com
Mon Jun 15 18:35:29 CEST 2009


2009/6/15 Brecht Van Lommel <brecht at blender.org>:
> This is also possible, if you apply these operations to texture
> coordinates and input them into image or procedural textures.

> A texture node tree could have a special node that gives you the default
> texture coordinate. Furthermore, the coordinate input of image or
> procedural textures would use the default texture coordinate if it
> is not linked to anything.

You see, now you're splitting up the tree types again. A texture node
tree would have this special node and a 'default coordinate', whereas
other tree types wouldn't. I'm not saying that wouldn't work, I'm
saying it doesn't get us anywhere: we still end up with split trees,
which are demonstrably underpowered.


> For compositing nodes, I can see the advantage of passing along
> functions, then they naturally fit in a single node tree. For shader
> nodes I don't see a problem.
>
> But, even though it unifies one thing, the effect is also that it is
> inconsistent in another way. Now you need two nodes for e.g. rotation,
> one working on texture functions, and another on vectors (modifier
> nodes). And further, these nodes need to be placed in the tree in a
> different order: one after, and the other before, the thing you want
> to rotate.

Hmm, I think there may be a misunderstanding here. Nothing would have
to be placed before the thing it modifies. The rotate texture node
would take in a Texture, an Axis and an Angle, and output a Texture.
The fact that the texture it outputs would be a function calling its
input texture with modified coordinates wouldn't even be apparent to
the user.

Likewise, vector rotate would take in a vector, axis and angle and
output a vector.
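To make this concrete, here is a minimal sketch (Python, with made-up
names; none of this is actual Blender code) in which a texture is
modelled as a plain function from a coordinate to a value. The Rotate
node is then Texture in, Texture out, and the coordinate remapping
happens invisibly inside:

```python
import math

def checker(coord):
    """A toy procedural texture: checkerboard of 0.0 and 1.0."""
    x, y = coord
    return float((math.floor(x) + math.floor(y)) % 2)

def rotate_texture(texture, angle):
    """The Rotate node: takes a texture, returns a texture.

    The output is a function that rotates the incoming coordinate
    before sampling the input texture -- but to the user it is
    simply Texture in, Texture out.
    """
    c, s = math.cos(angle), math.sin(angle)
    def rotated(coord):
        x, y = coord
        return texture((c * x - s * y, s * x + c * y))
    return rotated

quarter_turn = rotate_texture(checker, math.pi / 2)
```

(The Axis input is dropped here to keep the sketch 2-D; a 3-D version
would build a rotation matrix from axis and angle instead.)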

In fact, thinking about it now, even the fact that rotating textures
and rotating vectors would still be different nodes in the new system
is solvable. Just as anything applicable to a color is applicable to a
texture, anything applicable to a vector is applicable to a texture
too, simply by modifying its coordinates. So alongside the implicit
conversion I proposed, which converts operations on colors into
operations on textures, one could trivially implement a similar
conversion allowing any operation on vectors to be used on a texture.
Instead of modifying the texture function's output (a color), we
modify its input (a vector).

More generally, we extend my proposed implicit conversion rule

(A -> B) -> ((Q -> A) -> (Q -> B))

to also perform the conversion

(A -> B) -> ((B -> Q) -> (A -> Q))

The specific case in question being

(vector -> vector) -> ((vector -> color) -> (vector -> color))

As you can see, since (vector -> color) is the type of a texture, this
means any operation taking and returning vectors can also work on
textures.
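Both conversion rules are just function composition, one on the output
side and one on the input side. A hedged sketch, again modelling a
texture as a plain function (all names are illustrative, not Blender
API):

```python
def lift_output(op):
    """(A -> B) -> ((Q -> A) -> (Q -> B)):
    a color operation becomes a texture operation by post-composing
    it with the texture, i.e. modifying the texture's output."""
    return lambda tex: lambda coord: op(tex(coord))

def lift_input(op):
    """(A -> B) -> ((B -> Q) -> (A -> Q)):
    a vector operation becomes a texture operation by pre-composing
    it with the texture, i.e. modifying the texture's input."""
    return lambda tex: lambda coord: tex(op(coord))

def gradient(coord):       # vector -> color (a grayscale ramp in x)
    x, y = coord
    return x

def invert(color):         # color -> color
    return 1.0 - color

def swap(coord):           # vector -> vector
    x, y = coord
    return (y, x)

inverted_gradient = lift_output(invert)(gradient)   # modify output
swapped_gradient = lift_input(swap)(gradient)       # modify input
```

In the rotation case from before, op would be the coordinate rotation
(vector -> vector) and tex the texture (vector -> color), giving back
a texture (vector -> color).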

Now, even if this weren't true and the implicit conversion couldn't be
implemented (which it could be), the new system would still unify all
the different versions of the common nodes like Math and Invert, each
of which currently has three different implementations, one for each
tree type. Good luck modifying the Math node once we have five tree
types.

> Also, function pointers are not available in GLSL (or OpenCL), because
> hardware does not support it. If they are only used in a restricted
> manner some workaround may be possible, but this is something to keep in
> mind I think.

This is a good point, and one I haven't thought much about. I do know
a little GLSL (or should I say a little *about* GLSL), and I believe
the generality of the system would mean the GLSL code would be
generated as the node tree is translated, rather than binding
functions dynamically at runtime. I'm going to need to research this
bit some more, though.

-Rob

