[Bf-committers] Proposal for unifying nodes

Nathan Vegdahl cessen at cessen.com
Tue Jun 16 21:26:53 CEST 2009


> I think perhaps you're confusing textures with the nodes that work on them.

   I confused them when I started working with texture nodes precisely
because that design is inconsistent with how the other two node
systems work.
   What I'm saying is that I don't think it should work that way in
2.5.  The "nodes as functions" paradigm makes a lot more sense, and
seems a lot simpler, than the "nodes as function-object processors"
paradigm.

> Using entire trees as nodes in other trees (not of differing types because
> there should be no types) would be immensely powerful and is one of the
> goals of the design.

    Right.  But what I was trying to communicate was that the
"functions-as-data" paradigm (or "function-objects as data" if you
prefer) isn't necessary to achieve that functionality.

   It seems a lot simpler to me, for example, to just treat a texture
tree as *being* the texture function rather than as something that
*generates* a texture function.  The former is how a purely
data-oriented system would treat it, and as I understand it the latter
is how your system would treat it.
   Your proposal feels like a kind of meta-programming to me: writing
a program to write a program, rather than just writing the program.

   I'd much rather construct the function directly with nodes than
construct a system of nodes that generates the function.
   And the result is ultimately the same: a function (or
function-object if you prefer) that takes input and produces output,
and which can be used anywhere.
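
   To make that concrete, here's a rough Haskell sketch of the
"tree-as-function" view I mean (the names and types are mine, purely
illustrative):

    type Vector  = (Float, Float, Float)
    type Color   = (Float, Float, Float)
    type Texture = Vector -> Color

    -- a leaf node: the tree *is* this function, not a recipe for one
    checker :: Texture
    checker (x, y, _) =
      if even (floor x + floor y :: Int) then (1, 1, 1) else (0, 0, 0)

    -- an ordinary data node
    invert :: Color -> Color
    invert (r, g, b) = (1 - r, 1 - g, 1 - b)

    -- wiring the nodes together just composes the functions
    tree :: Texture
    tree = invert . checker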

   I hope that makes sense.

>>  If anything I see the shift towards "functions as data" as forcing
>> even more separation between tree types, rather than unifying them.
>
> Why? I see it as eliminating the distinction entirely.

   Because you're creating *more* distinct types in the form of these
function-objects.  Texture objects, shader objects, compositor objects
(and down the line: constraint objects, modifier objects, particle
system objects, etc.).  You're adding more complexity to the system,
not reducing it.
   At the very least it seems like you're just substituting these
object-types for tree-types.

--Nathan V


On Tue, Jun 16, 2009 at 11:18 AM, Robin Allen <roblovski at gmail.com> wrote:
> Hi Nathan
>
> 2009/6/16 Nathan Vegdahl <cessen at cessen.com>
>
>> > Tree types are something you can't avoid, in my opinion. To me the purpose
>> > of unification would be to share nodes between tree types, not to allow
>> > all nodes in all tree types.
>>
>>    Maybe.  What the tree types really do, IMO, is simply tell Blender
>> how to use the tree.
>>
>>   Fundamentally all data can be decomposed into scalars of some sort
>> or another.  Which means that, for example, if we had a "constraint
>> nodes tree" we could still use compositing nodes in them as long as we
>> could, for example, convert a vector into a 1x3 gray-scale bitmap and
>> back.  Similar things could be done for any node type in any tree: as
>> long as you can decompose it into scalars, conversion is possible.  We
>> just need sufficient data-conversion nodes, and then all nodes can be
>> used everywhere.
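>>
>>   (As a trivial sketch of such a conversion pair -- hypothetical
>> helper nodes, not anything that exists:)
>>
>>     type Vector = (Float, Float, Float)
>>
>>     -- decompose structured data into raw scalars...
>>     vecToScalars :: Vector -> [Float]
>>     vecToScalars (x, y, z) = [x, y, z]
>>
>>     -- ...and recompose it on the other side (partial; sketch only)
>>     scalarsToVec :: [Float] -> Vector
>>     scalarsToVec [x, y, z] = (x, y, z)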
>
>
> Absolutely. With you so far.
>
>
>>   So ultimately the "type" of the tree just tells Blender what
>> context to use it in.  The context does place some restrictions, but
>> they're restrictions on the contextual data available (texture
>> coordinates, for example), and the data types of the outputs of the
>> tree.  None of the restrictions have to do with the "normal"
>> functional nodes that just take input and produce output.
>>   So unification of the node trees in that sense makes a lot of sense
>> to me, and I could see it being really powerful.  Potentially it also
>> means that (as in the proposal, but with some restrictions) entire
>> trees could be used as nodes in other trees (even of differing types),
>> because they simply take input and produce output, just like nodes.
>
>
> Using entire trees as nodes in other trees (not of differing types because
> there should be no types) would be immensely powerful and is one of the
> goals of the design. Also, there is no contextual data. If texture
> coordinates were contextual data, constant within each evaluation of the
> tree, there could be no translate or rotate nodes.
>
>>  What I *don't* see is how passing functions through the node
>> networks--instead of passing data--generalizes or helps anything.
>
>
> If you prefer, don't think of them as functions. Think of them as "Texture
> objects". I mean, look at the compositor. You *could* think of bitmaps as
> functions, where you pass in coordinates and get out a color. But at the end
> of the day it's all still data. True, textures call each other whereas
> bitmaps don't, but this is an implementation detail the user never sees,
> so it can't confuse anyone.
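>
> (In Haskell terms, with illustrative types, the "bitmap as function"
> reading is just partial application of a lookup:)
>
>     type Color  = (Float, Float, Float)
>     type Bitmap = [[Color]]          -- made-up representation
>
>     -- a bitmap is plain data, but a bitmap with a lookup applied
>     -- *is* a function from coordinates to color, i.e. a texture
>     asTexture :: Bitmap -> (Int, Int) -> Color
>     asTexture bmp (x, y) = bmp !! y !! x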
>
>>  If anything I see the shift towards "functions as data" as forcing
>> even more separation between tree types, rather than unifying them.
>
>
> Why? I see it as eliminating the distinction entirely.
>
>
>> If everything is data, then everything can be much more easily unified
>> and shared between trees.
>>   Passing functions instead of data also makes a lot of things
>> implicit in the trees instead of explicit, which IMO is an extremely
>> bad thing.
>
>
> I can see why you say that: texture nodes in what you call a "data-based"
> system would have to have a coordinate input, whereas in this system they
> wouldn't have that input and would output the whole texture (as a function
> or otherwise; to the user it's just a texture). Nodes outputting textures
> can do everything "data-based" nodes can do, and they can do more because
> you can have rotate, translate and scale nodes.
>
>
>> For example, in the current texture nodes, texture
>> coordinates are both implicit as inputs to the whole tree and
>> implicitly passed downstream to later nodes.
>
>
> Nope, nodes don't pass coordinates to each other. Texture objects
> (tex_delegates) do, and it's an important distinction. Nodes only pass
> textures to each other. You're right about the texture coordinates being
> implicit to the whole tree, though. This is Blender's current architecture,
> and the texture nodes have to use it.
>
>> And that's ultimately
>> because those coordinates are treated as inputs of the function object
>> being passed along.  The user never sees the coordinates, even though
>> they are a core part of how textures work, and would be powerful to be
>> able to manipulate directly as data (for example, passing them through
>> a compositing node for kicks).
>
>
> You can access the coordinates using the "coordinates" node, which outputs
> the implicit coordinates of the current evaluation, making them explicit.
> When 2.49a comes out, you'll be able to use the "At" node to apply those
> coordinates again after messing with them. (The omission of "At" in 2.49
> was a gross oversight on my part. Sorry!)
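>
> (Roughly, in Haskell -- illustrative types, not the actual
> implementation:)
>
>     type Vector  = (Float, Float, Float)
>     type Color   = (Float, Float, Float)
>     type Texture = Vector -> Color
>
>     -- "coordinates": exposes the implicit evaluation coordinate as data
>     coordinates :: Vector -> Vector
>     coordinates = id
>
>     -- "At": evaluates a texture at explicitly supplied coordinates
>     at :: Texture -> Vector -> Color
>     at tex v = tex v
>
> So wiring coordinates -> (mess with them) -> At amounts to tex (f v):
> the implicit coordinates made explicit, modified, and fed back in.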
>
>>  Lastly, it's confusing as hell to have nodes--which are very much
>> functions themselves--processing and passing other functions.  I can
>> imagine myself tearing a lot of hair out trying to troubleshoot issues
>> resulting from that.
>
>
> I believe that's how you expect you would feel, but I also believe that,
> were this implemented, it would cause a whole lot less hair-pulling than
> the restrictions of the current system do. Textures will not be functions
> to the user; they will be textures.
>
>
>>   All I see in the "functions-as-data" paradigm is limitations,
>> increased categorization, and user confusion.  Not freedom,
>> unification, and user clarity.
>>   IMO nodes should remain data-based.
>
>
> I sincerely believe that this is only because of misconceptions about both
> the current and proposed systems. I hope I've managed to put some of them
> right.
>
> -Rob
>
>
>>
>>
>> --Nathan V
>>
>>
>> On Mon, Jun 15, 2009 at 12:40 PM, Brecht Van Lommel <brecht at blender.org>
>> wrote:
>> > Hi Robin,
>> >
>> > I think this is in large part a matter of preference, but I just want
>> > to point out a few more subtle consequences of passing texture
>> > functions.
>> >
>> > First is the case where the node tree is not an actual tree but a graph.
>> > If a node output is used by two or more nodes, it would be good to not
>> > execute that node twice. As I understand it, the texture nodes
>> > implementation currently executes it twice, but I may be wrong here. It
>> > is of course fixable (though with functions the result of that first
>> > node is not necessarily the same each time, so the result can be reused
>> > only when it is safe to do so).
>> >
>> > On Mon, 2009-06-15 at 17:35 +0100, Robin Allen wrote:
>> >> 2009/6/15 Brecht Van Lommel <brecht at blender.org>:
>> >> > This is also possible, if you apply these operations to texture
>> >> > coordinates and input them into image or procedural textures.
>> >>
>> >> > A texture node tree could have a special node that gives you the
>> >> > default texture coordinate. Furthermore, the coordinate inputs of
>> >> > image or procedural textures would use the default texture
>> >> > coordinate if not linked to anything.
>> >>
>> >> You see, now you're splitting up the tree types again. A texture node
>> >> tree would have this special node and a 'default coordinate', whereas
>> >> other tree types wouldn't. I'm not saying that wouldn't work; I'm
>> >> saying it doesn't get us anywhere, because we still end up with split
>> >> trees which are demonstrably underpowered.
>> >
>> > Tree types are something you can't avoid, in my opinion. To me the purpose
>> > of unification would be to share nodes between tree types, not to allow
>> > all nodes in all tree types.
>> >
>> > A shader Geometry or Lighting node makes no sense without a shading
>> > context, nor can you place a modifier node in a shader or particle node
>> > tree. You have to deal with this kind of restriction in any design.
>> >
>> >> > For compositing nodes, I can see the advantage of passing along
>> >> > functions; then they naturally fit in a single node tree. For shader
>> >> > nodes I don't see a problem.
>> >> >
>> >> > But, even though it unifies one thing, the effect is also that it is
>> >> > inconsistent in another way. Now you need two nodes for e.g. rotation,
>> >> > one working on texture functions, and another on vectors (modifier
>> >> > nodes). And further, these nodes need to be placed in the tree in a
>> >> > different order, one after, and another before the thing you want to
>> >> > rotate.
>> >>
>> >> Hmm, I think there may be a misunderstanding here. Nothing would have
>> >> to be placed before the thing it modifies. The rotate texture node
>> >> would take in a Texture, an Axis and an Angle, and output a Texture.
>> >> The fact that the texture it outputs would be a function calling its
>> >> input texture with modified coordinates wouldn't even be apparent to
>> >> the user.
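>> >>
>> >> (Roughly, in Haskell -- a 2D sketch with made-up names, not the
>> >> actual code:)
>> >>
>> >>     type Vector  = (Float, Float)        -- 2D for brevity
>> >>     type Color   = (Float, Float, Float)
>> >>     type Texture = Vector -> Color
>> >>
>> >>     -- rotate a 2D vector by an angle (radians)
>> >>     rotateVec :: Float -> Vector -> Vector
>> >>     rotateVec a (x, y) = (x * cos a - y * sin a,
>> >>                           x * sin a + y * cos a)
>> >>
>> >>     -- the output texture samples its input at inversely rotated
>> >>     -- coordinates; the user just sees Texture in, Texture out
>> >>     rotateTex :: Float -> Texture -> Texture
>> >>     rotateTex angle tex = \v -> tex (rotateVec (-angle) v)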
>> >
>> > Whether it is placed before the thing it modifies is, I guess, a matter
>> > of terminology, but let me be clearer about what I mean by differences in
>> > ordering. As I understand the texture nodes code, currently the texture
>> > manipulation runs in reverse order compared to shading nodes (while
>> > color manipulation runs in the same order). Here is an example that at
>> > first sight seems to give the same result, but is actually different:
>> >
>> > shading nodes: geom orco -> rotate X -> rotate Y -> texture voronoi
>> > evaluated as: voronoi(rY(rX(orco)))
>> >
>> > texture nodes: voronoi -> rotate X -> rotate Y (mapped to orco)
>> > evaluated as: voronoi(rX(rY(orco)))
>> >
>> > The shading nodes example may not be a great one because you want to add
>> > ShaderCallData there too, but consider for example two rotate nodes
>> > manipulating a vector which is then input into a modifier node.
>> >
>> > The order of texture nodes can however be reversed, if you go over the
>> > nodes in two passes.
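>> >
>> > (A minimal Haskell sketch of that ordering difference, with
>> > placeholder definitions just to make it concrete:)
>> >
>> >     rX, rY, voronoi :: Float -> Float    -- stand-ins
>> >     rX = id; rY = id; voronoi = id
>> >
>> >     -- shading nodes transform the coordinate *before* the lookup:
>> >     shading = voronoi . rY . rX          -- voronoi(rY(rX(orco)))
>> >
>> >     -- texture nodes wrap the texture they receive, so the coordinate
>> >     -- transforms end up nested in reverse order:
>> >     withCoords f tex = tex . f
>> >     texture = withCoords rY (withCoords rX voronoi)
>> >                                          -- voronoi(rX(rY(orco)))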
>> >
>> >> Likewise, vector rotate would take in a vector, axis and angle and
>> >> output a vector.
>> >
>> >> In fact, thinking about it now, the problem that rotating textures and
>> >> vectors would still be different nodes in the new system is solvable.
>> >> In the same way that anything applicable to a color is applicable to a
>> >> texture, anything applicable to a vector is applicable to a texture
>> >> simply by modifying its coordinates. So using the same implicit
>> >> conversion I proposed which converts operations on colors to
>> >> operations on textures, one could trivially implement a similar
>> >> conversion allowing any operation on a vector to be used on a texture.
>> >> So, instead of modifying the texture function's output (a color) we
>> >> modify its input (a vector).
>> >>
>> >> More generally, we extend my proposed implicit conversion rule
>> >>
>> >> (A -> B) -> ((Q -> A) -> (Q -> B))
>> >>
>> >> to also perform the conversion
>> >>
>> >> (A -> B) -> ((A -> Q) -> (B -> Q))
>> >>
>> >> The specific case in question being
>> >>
>> >> (vector -> vector) -> ((vector -> color) -> (vector -> color))
>> >>
>> >> As you can see, since (vector -> color) is the type of a texture, this
>> >> means any operation taking and returning vectors can also work on
>> >> textures.
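>> >>
>> >> (In Haskell, the two conversions are just post- and pre-composition;
>> >> the second rule is realizable as plain pre-composition when A = B,
>> >> as in the vector -> vector case here:)
>> >>
>> >>     -- (A -> B) -> ((Q -> A) -> (Q -> B)): lift an operation on a
>> >>     -- texture's *output* (e.g. a color op) to an operation on textures
>> >>     liftOut :: (a -> b) -> (q -> a) -> (q -> b)
>> >>     liftOut f tex = f . tex
>> >>
>> >>     -- lift an operation on a texture's *input* (e.g. a vector op)
>> >>     -- to an operation on textures, by modifying the coordinates
>> >>     liftIn :: (q -> q') -> (q' -> a) -> (q -> a)
>> >>     liftIn g tex = tex . g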
>> >
>> > OK, implicit conversions go a long way to unifying such nodes, I agree.
>> >
>> > How would you drive, for example, specularity with a texture in the
>> > shading nodes (or velocity in particle nodes)? Can you link up those two
>> > directly, using perhaps orco texture coordinates by default? Or do you
>> > add a node in between which takes that texture function + coordinate to
>> > do the conversion?
>> >
>> > Another thing that is not clear to me is ShaderCallData. With texture
>> > nodes you're passing a function which takes a coordinate as input; what
>> > does the shader function take as input? It doesn't make much sense to
>> > rotate a shader result, I guess: what would that rotate, the point,
>> > normal, tangent? So you don't pass along shader functions, and it stays
>> > basically the same?
>> >
>> >> Now, even if this wasn't true and the implicit conversion couldn't be
>> >> implemented (which it could), the new system would still unify all the
>> >> different versions of the common nodes like Math and Invert, each of
>> >> which currently has three different implementations, one for
>> >> each tree. Good luck modifying the Math node once we have five tree
>> >> types.
>> >
>> > I'm not proposing to keep the nodes separated per tree type; only a
>> > subset of nodes would be tied to tree types, and Math nodes would not
>> > be among them.
>> >
>> >
>> > Anyways, I can see that it would be cool to mix texture nodes with
>> > modifier nodes for displacement, for example, and that this is only
>> > possible when passing functions. I'm just not sure I like all the
>> > consequences.
>> >
>> > Brecht.
>> >