[Bf-committers] Proposal for unifying nodes

Brecht Van Lommel brecht at blender.org
Mon Jun 15 21:40:03 CEST 2009


Hi Robin,

I think this is largely a matter of preference, but I just want to
point out a few of the more subtle consequences of passing texture
functions.

First there is the case where the node tree is not an actual tree but a
graph. If a node output is used by two or more nodes, it would be good
not to execute that node twice. As I understand it, the texture nodes
implementation currently executes it twice, but I may be wrong here. It
is of course fixable (though with functions the result of that first
node is not necessarily the same on each call, so the result can only be
cached when that is actually safe).
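
To illustrate (a rough Python sketch with made-up names, not actual
Blender code): a node producing a plain value can be evaluated once and
shared, but a node producing a function of the coordinate can only
share the function object itself, never a single cached result:

_cache = {}

def eval_shared(node_id, execute):
    # Safe only if execute() returns the same value every time.
    if node_id not in _cache:
        _cache[node_id] = execute()
    return _cache[node_id]

# A texture node outputs a function of the coordinate; two downstream
# users evaluating it at different coordinates must not share one
# cached color, only the function itself:
def checker(co):
    return 1.0 if (int(co[0]) + int(co[1])) % 2 == 0 else 0.0

tex = eval_shared("checker", lambda: checker)  # sharing the function: fine
print(tex((0.2, 1.7)), tex((1.2, 1.7)))        # results differ per call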

On Mon, 2009-06-15 at 17:35 +0100, Robin Allen wrote:
> 2009/6/15 Brecht Van Lommel <brecht at blender.org>:
> > This is also possible, if you apply these operations to texture
> > coordinates and input them into image or procedural textures.
> 
> > A texture node tree could have a special node that gives you the default
> > texture coordinate. Furthermore, the coordinate input of image or
> > procedural textures would use the default texture coordinate if it
> > is not linked to anything.
> 
> You see, now you're splitting up the tree types again. A texture node
> tree would have this special node and a 'default
> coordinate', whereas other tree types wouldn't. I'm not saying that
> wouldn't work, I'm saying it doesn't get us anywhere, we still end up
> with split trees which are demonstrably underpowered.

Tree types are something you can't avoid, in my opinion. To me the
purpose of unification would be to share nodes between tree types, not
to allow all nodes in all tree types.

A shader Geometry or Lighting node makes no sense without a shading
context, nor can you place a modifier node in a shader or particle node
tree. Restrictions like these have to be dealt with in any design.

> > For compositing nodes, I can see the advantage of passing along
> > functions; then they naturally fit in a single node tree. For shader
> > nodes I don't see a problem.
> >
> > But, even though it unifies one thing, the effect is also that it is
> > inconsistent in another way. Now you need two nodes for e.g. rotation,
> > one working on texture functions, and another on vectors (modifier
> nodes). And further, these nodes need to be placed in the tree in a
> > different order, one after, and another before the thing you want to
> > rotate.
> 
> Hmm, I think there may be a misunderstanding here. Nothing would have
> to be placed before the thing it modifies. The rotate texture node
> would take in a Texture, an Axis and an Angle, and output a Texture.
> The fact that the texture it outputs would be a function calling its
> input texture with modified coordinates wouldn't even be apparent to
> the user.

Being placed before the thing it modifies is, I guess, a matter of
terminology, but let me be clearer about what I mean by differences in
ordering. As I understand the texture nodes code, the texture
manipulation currently runs in reverse order compared to shading nodes
(while color manipulation runs in the same order). Here is an example
that at first sight seems to give the same result, but is actually
different:

shading nodes: geom orco -> rotate X -> rotate Y -> texture voronoi
evaluated as: voronoi(rY(rX(orco)))

texture nodes: voronoi -> rotate X -> rotate Y (mapped to orco)
evaluated as: voronoi(rX(rY(orco)))
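
A small Python sketch of the function-passing scheme (all names
invented here for illustration) makes the reversal visible:

def voronoi(co):  # stand-in for a procedural texture
    return (co[0] % 1.0) * (co[1] % 1.0) * (co[2] % 1.0)

def rX(co):  # stand-in for "rotate around X"
    return (co[0], -co[2], co[1])

def rY(co):  # stand-in for "rotate around Y"
    return (co[2], co[1], -co[0])

def lift(vec_op):
    # A rotate *texture* node: wrap the input texture in a new
    # function that transforms the coordinate before calling it.
    return lambda tex: (lambda co: tex(vec_op(co)))

# texture nodes: voronoi -> rotate X -> rotate Y
tex_chain = lift(rY)(lift(rX)(voronoi))   # voronoi(rX(rY(co)))

# shading nodes: orco -> rotate X -> rotate Y -> voronoi
def shading_chain(co):
    return voronoi(rY(rX(co)))            # voronoi(rY(rX(co)))

co = (1.3, 2.6, 3.9)
print(tex_chain(co), shading_chain(co))   # generally different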

The shading nodes example may not be a great one because you want to add
ShaderCallData there too, but consider for example two rotate nodes
manipulating a vector which is then input into a modifier node.

The order of texture nodes can, however, be reversed if you go over the
nodes in two passes.

> Likewise, vector rotate would take in a vector, axis and angle and
> output a vector.

> In fact, thinking about it now, the fact that rotating textures and
> vectors would still be different nodes in the new system is solvable.
> In the same way that anything applicable to a color is applicable to a
> texture, anything applicable to a vector is applicable to a texture,
> simply by modifying its coordinates. So using the same implicit
> conversion I proposed which converts operations on colors to
> operations on textures, one could trivially implement a similar
> conversion allowing any operation on a vector to be used on a texture.
> So, instead of modifying the texture function's output (a color) we
> modify its input (a vector).
> 
> More generally, we extend my proposed implicit conversion rule
> 
> (A -> B) -> ((Q -> A) -> (Q -> B))
> 
> to also perform the conversion
> 
> (A -> B) -> ((B -> Q) -> (A -> Q))
> 
> The specific case in question being
> 
> (vector -> vector) -> ((vector -> color) -> (vector -> color))
> 
> As you can see, since (vector -> color) is the type of a texture, this
> means any operation taking and returning vectors can also work on
> textures.

OK, implicit conversions go a long way to unifying such nodes, I agree.
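
For concreteness, both conversion rules are plain function composition.
A small Python sketch (names invented here):

# (A -> B) -> ((Q -> A) -> (Q -> B)): post-compose, i.e. lift an
# operation on colors to textures by modifying the texture's output:
def lift_output(color_op):
    return lambda tex: (lambda co: color_op(tex(co)))

# (A -> B) -> ((B -> Q) -> (A -> Q)): pre-compose, i.e. lift an
# operation on vectors to textures by modifying the texture's input:
def lift_input(vec_op):
    return lambda tex: (lambda co: tex(vec_op(co)))

def tex(co):  # some texture: vector -> color
    return (co[0] % 1.0, co[1] % 1.0, co[2] % 1.0)

inverted = lift_output(lambda col: tuple(1.0 - c for c in col))(tex)
scaled = lift_input(lambda co: tuple(2.0 * c for c in co))(tex)
print(inverted((0.2, 0.4, 0.6)), scaled((0.2, 0.4, 0.6)))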

How would you drive, for example, specularity with a texture in the
shading nodes (or velocity in particle nodes)? Can you link those two up
directly, perhaps using orco texture coordinates by default? Or do you
add a node in between which takes that texture function + coordinate to
do the conversion?
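
(If it is the latter, such a conversion node might look roughly like
this; a sketch only, with hypothetical names:)

def evaluate_texture(tex, co):
    # hypothetical conversion node: texture function + coordinate -> value
    return tex(co)

def checker(co):  # stand-in texture
    return 1.0 if (int(co[0]) + int(co[1])) % 2 == 0 else 0.0

orco = (0.25, 0.75, 0.0)  # stand-in for the default coordinate
specularity = evaluate_texture(checker, orco)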

Another thing that is not clear to me is ShaderCallData. With texture
nodes you're passing a function which takes a coordinate as input; what
would the shader function take as input? It doesn't make much sense to
rotate a shader result, I guess: what would that rotate, the point, the
normal, the tangent? So you wouldn't pass along shader functions, and it
stays basically the same?

> Now, even if this wasn't true and the implicit conversion couldn't be
> implemented (which it could), the new system would still unify all the
> different versions of the common nodes like Math and Invert, each of
> which currently has three different implementations, one for
> each tree. Good luck modifying the Math node once we have five tree
> types.

I'm not proposing to keep the nodes separated per tree type; only a
subset of nodes would be tied to tree types, and Math nodes would not be
among them.


Anyway, I can see that it would be cool to mix texture nodes with
modifier nodes, for displacement for example, and that this is only
possible when passing functions. I'm just not sure I like all the
consequences.

Brecht.


