[Bf-committers] Proposal for unifying nodes

Aurel W. aurel.w at gmail.com
Tue Jun 16 15:30:08 CEST 2009


Hi,

OK, a little late, but I feel I have to set things straight about passing
functions, so to sum it up:

Nodes always pass a signal. That signal may be discrete, as in pixel
images, or continuous, which is equivalent to passing a function.

If a node defines some procedural function and your total output in the end
should be a discrete signal, you are better off evaluating an entire chunk
of data and passing it on to the next node, instead of handing over
functions. Such discrete signals would be images, sampled f-curves, audio
and so on. For a texture, however, you want a continuous signal and
therefore just pass functions on.
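
To make the distinction concrete, here is a minimal sketch (TypeScript,
with type names of my own invention, not anything in Blender):

  type Color = { r: number; g: number; b: number };

  // Continuous: evaluate on demand at any coordinate -- "passing a function".
  type ContinuousSignal = (x: number, y: number) => Color;

  // Discrete: a fixed grid of pre-evaluated samples -- "passing a chunk of data".
  interface DiscreteSignal {
    width: number;
    height: number;
    pixels: Color[]; // row-major, length = width * height
  }

A texture node would hand a ContinuousSignal to the next node; an image or
compositing node would hand over a DiscreteSignal.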

Matt:
> You can generate
> images from textures (by sampling the texture at every pixel in the
> image buffer), but you can't go the other way around, as part of a
> shading pipeline at least.

Also, you can always do discrete->continuous and continuous->discrete
conversion; this is equivalent to sampling a function into a picture and
converting a picture into a function. If you unify nodes, it would be good
to support both kinds of signal, and such conversions, in one setup. The
basic sampling theorem states that if you sample at twice the rate of the
highest frequency, you can convert back without any loss. Therefore it is a
mathematically proven fact that if you sample a texture well enough into a
discrete image and then evaluate it in your shading pipeline, you get
exactly the same results as if you had skipped the continuous->discrete
conversion and just evaluated an (R, R) -> Color function for each
coordinate. For textures, or continuous signals, with very high or even
infinite frequencies (complete random noise, for example), sampling would
of course drop those components, which also results in a drop of quality,
though an insignificant one in most cases.
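
To illustrate the two conversions (reusing the types from the sketch
above; nearest-neighbour lookup for brevity, where a real implementation
would filter and interpolate):

  // Continuous -> discrete: sample the function on a regular grid.
  function sample(sig: ContinuousSignal, width: number, height: number): DiscreteSignal {
    const pixels: Color[] = [];
    for (let y = 0; y < height; y++)
      for (let x = 0; x < width; x++)
        pixels.push(sig((x + 0.5) / width, (y + 0.5) / height));
    return { width, height, pixels };
  }

  // Discrete -> continuous: wrap the buffer in a lookup function.
  function toFunction(img: DiscreteSignal): ContinuousSignal {
    return (x, y) => {
      const px = Math.min(img.width - 1, Math.max(0, Math.floor(x * img.width)));
      const py = Math.min(img.height - 1, Math.max(0, Math.floor(y * img.height)));
      return img.pixels[py * img.width + px];
    };
  }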

OK, I mentioned convolution before, which may not be an easy example on a
continuous signal or texture. Let me put it like this: there are operations
which modify the input signal and which depend on more than just one value
at a specific coordinate. The problem is, if your input signal is
continuous and therefore a function, you have to evaluate it for all the
coordinates needed. This can be done by traversing the tree from the output
down in a recursive manner, as Nathan mentioned before:

> The way texture nodes work right now, it feels like
> sometimes they "reach backwards" to grab data from up-stream nodes,
> which totally breaks my mental model of how Blender nodes work.

Well, at the top level the concept can still be seen as just passing a
signal along from node to node; it's just implemented differently, some
might even say elegantly. The concept works completely if you only think of
nodes using single input values, or a pipeline (or tree) processing colour
values. The mess starts when one node, to determine a colour value at a
specific coordinate, needs X > 1 values from its input (represented by a
function). To evaluate this, it has to evaluate the entire subtree X times.
Because of the recursive way this works, the same happens again at the next
nodes. I hope it is clear that all this results in exponential complexity.
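
A small sketch of that blow-up, using a 1D signal and a blur-like node
that needs X = 3 samples of its input (the numbers are purely
illustrative):

  type Signal1D = (x: number) => number;

  let leafEvaluations = 0;
  const leaf: Signal1D = (x) => {
    leafEvaluations++;
    return Math.sin(x * 20);
  };

  // A node that averages 3 samples of its input. When the input is itself
  // a function, each sample recurses into the whole upstream subtree.
  function blurNode(input: Signal1D): Signal1D {
    return (x) => (input(x - 0.01) + input(x) + input(x + 0.01)) / 3;
  }

  // Chain six such nodes, then evaluate the output once.
  let sig: Signal1D = leaf;
  for (let d = 0; d < 6; d++) sig = blurNode(sig);
  sig(0.5);
  console.log(leafEvaluations); // 3^6 = 729 leaf evaluations for ONE output value

With a discrete buffer in between, each node would instead be evaluated
once per pixel, linear in the number of nodes.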

Rob,
> There is a phrase in multithreaded programming that a problem is
> 'embarrassingly parallel', i.e. it fits the description of a
> multithreaded task so well that you'd be a fool not to use threads. It
> seems to me that manipulating textures and shaders is embarrassingly
> functional.

at this point it no longer matters that the evaluation can be parallelized
well. It might work in theory, but many node setups will simply be
impossible to compute because of the time needed. In the number of nodes,
we are talking about exponential complexity for continuous signals and
linear complexity for discrete ones: a chain of six nodes that each need
nine input samples already costs 9^6, more than half a million, leaf
evaluations per output value. It's very important to keep that in mind!

to be continued,...

Aurel


2009/6/15 Brecht Van Lommel <brecht at blender.org>:
> Hi Robin,
>
> On Mon, 2009-06-15 at 14:59 +0100, Robin Allen wrote:
>> > Anyway, I'm not sure there are that many cases where you actually
>> > benefit from having such a function callback available? What kind of
>> > nodes would you implement with them, if blur and sharpen could already
>> > be done (in a bit limited form)?
>>
>> Function callback?
>
> I mean the system of passing functions rather than vectors or color
> values. What kind of node does this make possible that is not possible
> without passing functions?
>
> My point is that I can't really think of one with practical use.
> Simpler things like blurring or sharpening can be done with texture
> derivatives, more advanced things seem to be inefficient or difficult to
> the point of not being useful in practice.
>
> Brecht.
>
>
> _______________________________________________
> Bf-committers mailing list
> Bf-committers at blender.org
> http://lists.blender.org/mailman/listinfo/bf-committers
>



2009/6/16 Nathan Vegdahl <cessen at cessen.com>

> > back.  Similar things could be done for any node type in any tree: as
> > long as you can decompose it into scalars, conversion is possible.  We
>
>    Ack, I meant: "as long as you can decompose data into scalars...".
> Data, not the node itself.
>
>   (I shouldn't write emails at odd hours of the morning...)
>
> --Nathan V
>
> On Tue, Jun 16, 2009 at 2:26 AM, Nathan Vegdahl<cessen at cessen.com> wrote:
> >> Tree types are something you can't avoid, in my opinion. To me the purpose
> >> of unification would be to share nodes between tree types, not to allow
> >> all nodes in all tree types.
> >
> >   Maybe.  What the tree types really do, IMO, is simply tell Blender
> > how to use the tree.
> >
> >   Fundamentally all data can be decomposed into scalars of some sort
> > or another.  Which means that, for example, if we had a "constraint
> > nodes tree" we could still use compositing nodes in them as long as we
> > could, for example, convert a vector into a 1x3 gray-scale bitmap and
> > back.  Similar things could be done for any node type in any tree: as
> > long as you can decompose it into scalars, conversion is possible.  We
> > just need sufficient data-conversion nodes, and then all nodes can be
> > used everywhere.
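
[A trivial sketch of the round-trip Nathan describes, with hypothetical
helpers; a vector is just three scalars, so it maps losslessly onto a 1x3
grayscale buffer and back:

  type Vec3 = { x: number; y: number; z: number };

  const vecToPixels = (v: Vec3): number[] => [v.x, v.y, v.z]; // 1x3 "bitmap"
  const pixelsToVec = (p: number[]): Vec3 => ({ x: p[0], y: p[1], z: p[2] });
]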
> >
> >   So ultimately the "type" of the tree just tells Blender what
> > context to use it in.  The context does place some restrictions, but
> > they're restrictions on the contextual data available (texture
> > coordinates, for example), and the data types of the outputs of the
> > tree.  None of the restrictions have to do with the "normal"
> > functional nodes that just take input and produce output.
> >   So unification of the node trees in that sense makes a lot of sense
> > to me, and I could see it being really powerful.  Potentially it also
> > means that (as in the proposal, but with some restrictions) entire
> > trees could be used as nodes in other trees (even of differing types),
> > because they simply take input and produce output, just like nodes.
> >
> >   What I *don't* see is how passing functions through the node
> > networks--instead of passing data--generalizes or helps anything.
> >   If anything I see the shift towards "functions as data" as forcing
> > even more separation between tree types, rather than unifying them.
> > If everything is data, then everything can be much more easily unified
> > and shared between trees.
> >   Passing functions instead of data also makes a lot of things
> > implicit in the trees instead of explicit, which IMO is an extremely
> > bad thing.  For example, in the current texture nodes, texture
> > coordinates are both implicit as inputs to the whole tree and
> > implicitly passed downstream to later nodes.  And that's ultimately
> > because those coordinates are treated as inputs of the function object
> > being passed along.  The user never sees the coordinates, even though
> > they are a core part of how textures work, and would be powerful to be
> > able to manipulate directly as data (for example, passing them through
> > a compositing node for kicks).
> >   Lastly, it's confusing as hell to have nodes--which are very much
> > functions themselves--processing and passing other functions.  I can
> > imagine myself tearing a lot of hair out trying to troubleshoot issues
> > resulting from that.
> >
> >   All I see in the "functions-as-data" paradigm is limitations,
> > increased categorization, and user confusion.  Not freedom,
> > unification, and user clarity.
> >   IMO nodes should remain data-based.
> >
> > --Nathan V
> >
> >
> > On Mon, Jun 15, 2009 at 12:40 PM, Brecht Van Lommel<brecht at blender.org>
> wrote:
> >> Hi Robin,
> >>
> >> I think this is a matter of preference for a large part, but just want
> >> to point out a few more subtle consequences of passing texture
> >> functions.
> >>
> >> First is the case where the node tree is not an actual tree but a graph.
> >> If a node output is used by two or more nodes, it would be good to not
> >> execute that node twice. As I understand it, the texture nodes
> >> implementation currently executes it twice, but I may be wrong here. It
> >> is of course fixable (though with functions the result of that first
> >> node is not necessarily the same each time, so it can be done only when
> >> possible).
> >>
> >> On Mon, 2009-06-15 at 17:35 +0100, Robin Allen wrote:
> >>> 2009/6/15 Brecht Van Lommel <brecht at blender.org>:
> >>> > This is also possible, if you apply these operations to texture
> >>> > coordinates and input them into image or procedural textures.
> >>>
> >>> > A texture node tree could have a special node that gives you the
> >>> > default texture coordinate. Furthermore, the texture coordinate
> >>> > inputs of image or procedural textures would use the default
> >>> > texture coordinate if they are not linked to anything.
> >>>
> >>> You see, now you're splitting up the tree types again. A texture node
> >>> tree would have this special node, it would have a 'default
> >>> coordinate', whereas other tree types wouldn't. I'm not saying that
> >>> wouldn't work, I'm saying it doesn't get us anywhere, we still end up
> >>> with split trees which are demonstrably underpowered.
> >>
> >> Tree types are something you can't avoid, in my opinion. To me the purpose
> >> of unification would be to share nodes between tree types, not to allow
> >> all nodes in all tree types.
> >>
> >> A shader Geometry or Lighting node makes no sense without a shading
> >> context, nor can you place a modifier node in a shader or particle node
> >> tree. This kind of restriction you have to deal with in any design.
> >>
> >>> > For compositing nodes, I can see the advantage of passing along
> >>> > functions, then they naturally fit in a single node tree. For shader
> >>> > nodes I don't see a problem.
> >>> >
> >>> > But, even though it unifies one thing, the effect is also that it is
> >>> > inconsistent in another way. Now you need two nodes for e.g.
> >>> > rotation, one working on texture functions, and another on vectors
> >>> > (modifier nodes). And further, these nodes need to be placed in the
> >>> > tree in a different order, one after, and another before the thing
> >>> > you want to rotate.
> >>>
> >>> Hmm, I think there may be a misunderstanding here. Nothing would have
> >>> to be placed before the thing it modifies. The rotate texture node
> >>> would take in a Texture, an Axis and an Angle, and output a Texture.
> >>> The fact that the texture it outputs would be a function calling its
> >>> input texture with modified coordinates wouldn't even be apparent to
> >>> the user.
> >>
> >> Being placed before the thing it modifies I guess is a matter of
> >> terminology, but let me be more clear on what I mean by differences in
> >> ordering. As I understand the texture nodes code, currently the texture
> >> manipulation runs in reverse order compared to shading nodes (while
> >> color manipulation runs in the same order). Example that at first sight
> >> seems to give the same result, but is actually different:
> >>
> >> shading nodes: geom orco -> rotate X -> rotate Y -> texture voronoi
> >> evaluated as: voronoi(rY(rX(orco)))
> >>
> >> texture nodes: voronoi -> rotate X -> rotate Y (mapped to orco)
> >> evaluated as: voronoi(rX(rY(orco)))
> >>
> >> The shading nodes example may not be a great one because you want to add
> >> ShaderCallData there too, but consider for example two rotate nodes
> >> manipulating a vector which is then input in a modifier node.
> >>
> >> The order of texture nodes can however be reversed, if you go over the
> >> nodes in two passes.
> >>
> >>> Likewise, vector rotate would take in a vector, axis and angle and
> >>> output a vector.
> >>
> >>> In fact, thinking about it now, the fact that rotating textures and
> >>> vectors would still be different nodes in the new system is solvable.
> >>> In the same way that anything applicable to a color is applicable to a
> >>> texture: anything applicable to a vector is applicable to a texture
> >>> simply by modifying its coordinates. So using the same implicit
> >>> conversion I proposed which converts operations on colors to
> >>> operations on textures, one could trivially implement a similar
> >>> conversion allowing any operation on a vector to be used on a texture.
> >>> So, instead of modifying the texture function's output (a color) we
> >>> modify its input (a vector).
> >>>
> >>> More generally, we extend my proposed implicit conversion rule
> >>>
> >>> (A -> B) -> ((Q -> A) -> (Q -> B))
> >>>
> >>> to also perform the conversion
> >>>
> >>> (A -> B) -> ((A -> Q) -> (B -> Q))
> >>>
> >>> The specific case in question being
> >>>
> >>> (vector -> vector) -> ((vector -> color) -> (vector -> color))
> >>>
> >>> As you can see, since (vector -> color) is the type of a texture, this
> >>> means any operation taking and returning vectors can also work on
> >>> textures.
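
[Robin's two lifts, expressed as code with hypothetical names; note that
the input-side lift is strictly of type (A -> B) -> ((B -> Q) -> (A -> Q)),
i.e. pre-composition, though with A = B = vector the distinction vanishes:

  type Vec3 = { x: number; y: number; z: number };
  type Tex<T> = (co: Vec3) => T; // a texture maps a coordinate to a value

  // (A -> B) -> ((Q -> A) -> (Q -> B)): lift a color op onto a texture's output.
  const liftOutput = <A, B>(f: (a: A) => B) => (tex: Tex<A>): Tex<B> =>
    (co) => f(tex(co));

  // Pre-composition: lift a vector op (e.g. rotate) onto a texture's input.
  const liftInput = <T>(g: (co: Vec3) => Vec3) => (tex: Tex<T>): Tex<T> =>
    (co) => tex(g(co));
]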
> >>
> >> OK, implicit conversions go a long way to unifying such nodes, I agree.
> >>
> >> How would you drive, for example, specularity with a texture in the
> >> shading nodes (or velocity in particle nodes)? Can you link those two up
> >> directly, using perhaps orco texture coordinates by default? Or do you
> >> add a node in between which takes that texture function + coordinate to
> >> do the conversion?
> >>
> >> Another thing that is not clear to me is ShaderCallData. With texture
> >> nodes you're passing a function which takes a coordinate as input; what
> >> does the shader function take as input? It doesn't make much sense to
> >> rotate a shader result I guess, what does that rotate, the point,
> >> normal, tangent? So you don't pass along shader functions, and it stays
> >> basically the same?
> >>
> >>> Now, even if this wasn't true and the implicit conversion couldn't be
> >>> implemented (which it could), the new system would still unify all the
> >>> different versions of the common nodes like Math and Invert, each of
> >>> which currently has three different implementations, one for each
> >>> tree. Good luck modifying the Math node once we have five tree
> >>> types.
> >>
> >> I'm not proposing to keep the nodes separated per tree type, only a
> >> subset of nodes would be tied to tree types, Math nodes would not be one
> >> of those.
> >>
> >>
> >> Anyways, I can see that it would be cool to mix texture nodes with
> >> modifier nodes for displacement for example, and that this is only
> >> possible when passing functions. I'm just not sure I like all the
> >> consequences.
> >>
> >> Brecht.
> >>
> >> _______________________________________________
> >> Bf-committers mailing list
> >> Bf-committers at blender.org
> >> http://lists.blender.org/mailman/listinfo/bf-committers
> >>
> >
> _______________________________________________
> Bf-committers mailing list
> Bf-committers at blender.org
> http://lists.blender.org/mailman/listinfo/bf-committers
>

