[Bf-committers] Proposal for unifying nodes

joe joeedh at gmail.com
Wed Jun 17 03:38:42 CEST 2009


I'm not sure why you think this would all work well... while having a system
to convert between any arbitrary kind of data is certainly possible, you'd
end up with a pretty big mess when you're done.  You'd literally have to
find meaningful conversions between everything from shader functions, to
image buffers, to *geometry*, to who knows what else.  In practice, I
suspect there would be a ton of edge cases where things would not *quite*
work as people expect, and we'd be swamped in bug reports. ;)
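
To put a rough number on it: with K socket types you'd need on the order
of K*(K-1) directed conversions to cover every pair. A sketch (names
invented for illustration, not actual Blender code):

  typedef enum {
      SOCK_SHADER, SOCK_IMAGE, SOCK_TEXTURE,
      SOCK_GEOMETRY, SOCK_VALUE
  } SocketType;

  /* Five types already means up to 5*4 = 20 conversion routines,
   * most with no single obviously-correct meaning (what *is* the
   * image form of a mesh?), and each new type adds 2*K more. */
  int convert_socket_data(const void *src, SocketType from,
                          void *dst, SocketType to);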

Besides, this sort of mixing and matching isn't powerful, it's mashing
different systems and different concepts together and hoping something
useful will come out of it.  It mostly relies on people coming up with
intelligent, useful conversions, which may be more difficult than it seems,
especially when you want to drive unrelated concepts, like mixing shading
nodes with modifier nodes (there may be some cases where it could be
useful, but in general it'd mostly be clunky, buggy, and unstable).

And don't forget that some nodes require contextual data that won't always
be available.  Modifier nodes, for example, would probably need a scene (at
the very least), and shader nodes already get a bunch of stuff behind the
scenes.
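
To sketch what I mean (struct names invented for illustration -- this is
not the actual API):

  /* Each tree type hands its nodes a different kind of context. */
  typedef struct ShadeContext {
      float co[3], normal[3];   /* shading point, set up per sample */
      /* ...plus everything else the renderer provides */
  } ShadeContext;

  typedef struct ModifierContext {
      struct Scene *scene;      /* frame, unit settings, other objects */
      struct Object *object;
  } ModifierContext;

  /* Which struct hides behind the void* depends entirely on the
   * type of tree being evaluated. */
  void node_exec(struct Node *node, void *context);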

Joe

On Tue, Jun 16, 2009 at 7:01 PM, Robin Allen <roblovski at gmail.com> wrote:

> Hi Matt
>
> 2009/6/17 Matt Ebb <matt at mke3.net>
>
> > I think you misunderstand - I think what Brecht is saying here is that
> > the idea of putting a shader object inside, for example, a modifier
> > node stack doesn't make any sense and is not a practical use case. In
> > this context, it makes sense for the shader node to be attached to a
> > material (as in, a material is a container of shading nodes).
> >
>
> Ah, okay. Yes, he probably did mean that. This is an issue that's been
> raised, but my answer is that if tree types are unified, then there won't
> really be any such thing as a modifier node tree, there will just be a
> node tree. If the node tree outputs a modifier, you can use it as one.
> Why should the user have to decide what the tree will be used for before
> he starts building it?
>
> In any case, I disagree that putting a shader object in the same tree
> as modifier nodes makes no sense. You could get some interesting effects
> driving shader parameters from the mesh data.
>
> I think if you have a fixed pipeline then it's fair enough to dictate
> what users will and won't want to do, but the node system has the
> potential to be so much more powerful than that.
>
> Take Python, for example. Who in their right mind would want to import
> curses and opengl in the same program? No-one, right? So it should be an
> error?
>
> > Anyway, I've been trying to understand this proposal, but I still
> > don't see how it's an improvement. Here are a few objections I have:
> >
> > I think you're too quick to equate shading and textures to image
> > buffers (as in the compositor) - concrete things that can be
> > manipulated. They're not
>
>
> They *are*, though. Look at the current texture nodes -- they manipulate
> textures.
>
> > they're functions that take inputs and return outputs. They can be 2D
> > or 3D, they can be procedural (like the point density texture in the
> > sim_physics branch), and they respond to different inputs.
>
>
> They're functions in the mathematical sense, yes (which certainly
> doesn't preclude them from being acted upon), but to us, to users,
> they're images. If you can't rotate a texture, if you can't blur a
> texture, users will be asking why.
>
> > One major difference between images in the compositor and
> > shading/textures is what Nathan mentioned - in the compositor, there
> > is implicit information available, the pixel that's being processed.
> > That's always going to be there, and it's always constant.
>
>
> I'm pretty sure the compositor processes image buffers, not single
> pixels. I'm not sure what you mean by "current pixel" in the context of
> compositing.
>
>
> > In the context of textures, this doesn't exist - the texture
> > coordinates (and derivatives) are inputs to the texture, along with
> > all the other texture's inputs (e.g. noise size, or point density
> > lookup radius). Making this specific coordinate input backwards and
> > implicit somehow, but not the other inputs, is very confusing and
> > arbitrary, and breaks the mental model of the flow of information
> > through the tree.
>
>
> No, this is where you're making a logical error. Noise size et al. are
> inputs to the texture *generator*, not the texture. Coordinates are
> inputs to the texture. You can sample a texture *at* a coordinate. You
> can *create* a texture *with* a noise size.
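>
> To put that in code terms (a C-ish sketch of the concept, not how the
> nodes are actually implemented):
>
>   /* A texture is a mapping from a coordinate to a colour. */
>   typedef struct Texture {
>       void (*sample)(struct Texture *self, const float co[3],
>                      float col[4]);
>   } Texture;
>
>   /* A texture *generator* takes parameters like noise size and
>    * returns a texture. The coordinate is nowhere among its
>    * inputs; it only appears later, when the finished texture
>    * is sampled. */
>   typedef struct NoiseTexture {
>       Texture base;
>       float noise_size;   /* baked in at creation time */
>   } NoiseTexture;
>
>   Texture *make_noise_texture(float noise_size);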
>
> I think maybe you're thinking of texture nodes as if they were
> implemented according to Nathan's design, where a texture node "is" a
> texture, and you feed the coordinate to the node. This isn't how
> texture nodes work, and that design has its own problems which I've
> explained in my reply to Nathan.
>
> In fact, this is how the original texture nodes were implemented, but I
> recoded them because I realised I'd been conflating textures with texture
> generators.
>
> > One of the main points of a node system is that it clearly visualises
> > the chain of inputs and outputs. You have data, feed that into a node
> > to do something with it, and get an output, which you can then feed
> > into something else.
>
>
> Exactly! In this case the data is textures.
>
>
> > In your proposal, rather than information constantly flowing from
> > inputs to outputs, you have information flowing both backwards
> > (texture coordinates) and forwards (everything else). This is very
> > confusing, and it's probably why it's taken so long for me to grasp
> > this proposed concept. It's also inconsistent in that it seems to only
> > work this way for 'textures', whatever they are. All other nodes would
> > have a clear flow of inputs and outputs, but here you've got a
> > disparity between how different nodes process information, one that's
> > not even communicated at all, which I consider worse.
>
>
> No, this is wrong. All you have to do is think of the textures
> themselves as the data which is flowing. Take this example:
>
> http://img5.imageshack.us/img5/7158/spinn.png
>
> Textures flow through rotate nodes and come out rotated. Textures flow
> through mix nodes and mix together. Textures flow through the Spin group
> and come out with the effect applied. Nothing is flowing backwards.
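>
> In terms of the Texture sketch above, a rotate node is just a function
> from textures to textures (again illustrative, not real code; cosf and
> sinf are from <math.h>):
>
>   typedef struct RotatedTexture {
>       Texture base;       /* the texture flowing out */
>       Texture *input;     /* the texture flowing in */
>       float angle;
>   } RotatedTexture;
>
>   static void rotated_sample(Texture *self, const float co[3],
>                              float col[4])
>   {
>       RotatedTexture *rt = (RotatedTexture *)self;
>       float c = cosf(rt->angle), s = sinf(rt->angle);
>       float rco[3] = {c*co[0] - s*co[1],
>                       s*co[0] + c*co[1], co[2]};
>       /* sample the incoming texture at the remapped coordinate */
>       rt->input->sample(rt->input, rco, col);
>   }
>
> The coordinate remapping is an implementation detail *inside* the
> node; nothing in the graph itself flows backwards.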
>
> Now imagine the setup required to do this using, as you suggest,
> coordinate inputs. I don't think it's even possible, and if it is, it
> certainly won't be anywhere near as easy to follow.
>
>
> > The other thing that concerns me too is that by taking away direct
> > access to texture coordinates, you'd be drastically removing
> > functionality rather than adding more. Your examples of
> > rotation/translation/scale are already trivial with texture
> > coordinates, and in your proposal it seems you'd have to rely on
> > specific nodes designed for manipulating this hidden implicit data.
> > Currently I can do all kinds of funky things by manipulating the
> > coordinates directly - they're just inputs. For example this:
> >
> > http://mke3.net/blender/etc/texco_warp.png
>
>
> Nice. Could you warp a compositor buffer like that?
>
>
> > Anyway, I still don't understand why this is even something that
> > needs to be done.
> >
>
> I don't understand why having separate tree types is something that
> needs to be done. Okay, it already *has* been done, but that's not a
> good reason.
>
> > If the goal is unifying node systems, cleaning up the back-end code,
> > and making it easier to add different node types, I don't see how the
> > current data-driven model (perhaps with some modifications) precludes
> > that.
>
>
> It precludes that because of its reliance on hidden contextual data,
> and because of the different ways each tree type is evaluated. Shader
> nodes are per-pixel. Compositor nodes are per-frame. Put a compositor
> node in a texture tree and it'll ask "what frame are we on? Where's the
> framebuffer?" Contextual data is passed implicitly to each node through
> its data pointer, and the data pointer is set when tree evaluation
> starts, depending on what tree type it is.
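>
> To make that concrete, the entry points differ per tree type --
> something like this (signatures invented for illustration, these are
> not the actual functions):
>
>   /* shader trees: evaluated once per shaded sample */
>   void ntree_exec_shader(NodeTree *tree, ShadeInput *in,
>                          ShadeResult *out);
>
>   /* compositor trees: evaluated once per frame, on whole buffers */
>   void ntree_exec_composite(NodeTree *tree, RenderData *rd);
>
> A node written against one entry point has nothing sensible to do
> when called from the other.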
>
> Several users have wanted to know why they can't use texture nodes in
> the shader tree. Texture nodes are mappings; they're useful in just
> about any context.
>
> Users should be able to create trees with any inputs and outputs they
> wish. There is no reason why they shouldn't be able to.
>
> -Rob