[Bf-committers] Proposal for unifying nodes

Matt Ebb matt at mke3.net
Wed Jun 17 01:52:38 CEST 2009


On 17/06/2009, at 6:18 AM, Robin Allen wrote:

 2009/6/16 Brecht Van Lommel <brecht at blender.org>
>
> OK, I think I understand the concept of a Shader object now.
>
>> Still I prefer to just use materials for that. The fact that you can
>> create a shader object in the middle of any node tree, instead of
>> having to make a material, that's neat, but it's really such a corner
>> use case that I don't think it's worth it.
>>
>
> I disagree, I think this would move shader nodes from being just a "mixer"
> to something you could actually create unique effects with. I don't think
> that being "worth it" is a phrase I would use here, since that implies that
> we're giving something up to gain something.
>

I think you misunderstand - what Brecht is saying here is that the idea
of putting a shader object inside, for example, a modifier node stack
doesn't make any sense and isn't a practical use case. In this context,
it makes sense for the shader node to be attached to a material (as in,
a material is a container of shading nodes).

> The user doesn't have to think of textures as functions at all. The user
> will most likely think of them as textures. To think of textures as
> textures, and texture nodes as working on textures is surely just as
> natural as to think of texture nodes as textures (as well as being, for
> aforementioned reasons, hugely more useful).
>
...

> A user will see nodes that work on textures, nodes that work on shaders,
> nodes that work on colors, and so on.
>

Anyway, I've been trying to understand this proposal, but I still don't
see how it's an improvement. Here are a few objections I have:

I think you're too quick to equate shading and textures with image
buffers (as in the compositor) - concrete things that can be
manipulated. They're not; they're functions that take inputs and return
outputs. They can be 2D or 3D, they can be procedural (like the point
density texture in the sim_physics branch), and they respond to
different inputs.
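
To make that distinction concrete, here's a rough sketch (Python
pseudocode with made-up names, not Blender code): an image buffer is
stored data you index into, whereas a procedural texture is a function
you evaluate with whatever inputs it needs.

    # Illustrative sketch only - made-up names, not Blender code.
    from math import sin

    # A compositor-style image: concrete pixel data you can index.
    image = [[(0.0, 0.0, 0.0) for x in range(640)] for y in range(480)]

    # A procedural texture: a function of its inputs, evaluated on
    # demand. Nothing is stored; it works for any 2D or 3D coordinate.
    def noise_texture(coord, noise_size=0.25):
        x, y, z = (c / noise_size for c in coord)
        # stand-in for a real noise basis
        return 0.5 + 0.5 * sin(x * 12.9898 + y * 78.233 + z * 37.719)

    value = noise_texture((0.3, 0.7, 0.0), noise_size=0.5)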

One major difference between images in the compositor and
shading/textures is what Nathan mentioned - in the compositor there is
implicit information available: the pixel that's being processed.
That's always going to be there, and it's always constant. In the
context of textures, this doesn't exist - the texture coordinates (and
derivatives) are inputs to the texture, along with all of the texture's
other inputs (e.g. noise size, or point density lookup radius). Making
this one coordinate input implicit and flowing backwards, but not the
other inputs, is very confusing and arbitrary, and it breaks the mental
model of the flow of information through the tree.
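
A sketch of that difference, with hypothetical function names (not
Blender's API):

    # Hypothetical sketch - not Blender code.

    # Compositor node: the value for the pixel currently being
    # processed arrives implicitly; the node never needs to ask
    # which pixel it is.
    def brightness_node(pixel_color, factor=1.2):
        return tuple(min(1.0, c * factor) for c in pixel_color)

    # Texture node: the coordinate is an explicit input, on equal
    # footing with noise size or point density lookup radius.
    def point_density_node(coord, lookup_radius=0.1, points=()):
        return sum(1.0 for p in points
                   if sum((a - b) ** 2 for a, b in zip(coord, p)) ** 0.5
                      <= lookup_radius)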

One of the whole points of a node system is that it clearly visualises
the chain of inputs and outputs. You have data, you feed it into a node
that does something with it, and you get an output, which you can then
feed into something else. In your proposal, rather than information
consistently flowing from inputs to outputs, you have information
flowing both backwards (texture coordinates) and forwards (everything
else). This is very confusing, and it's probably why it's taken me so
long to grasp the proposed concept. It's also inconsistent in that it
seems to only work this way for 'textures', whatever they are. All
other nodes would have a clear flow of inputs and outputs, so you end
up with a disparity between how different nodes process information -
and one that isn't even communicated to the user, which I consider
worse.
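
For comparison, a purely data-driven tree can be evaluated with a
single forward pass: every node's outputs are computed from its
already-evaluated inputs. (A toy sketch with a made-up structure, not
the actual node kernel.)

    # Toy sketch of strictly forward evaluation - made-up structure.
    class Node:
        def __init__(self, func, *inputs):
            self.func = func
            self.inputs = inputs

    def evaluate(node, cache=None):
        cache = {} if cache is None else cache
        if node not in cache:
            # Upstream nodes are evaluated first; information only
            # ever flows from inputs towards outputs.
            upstream = [evaluate(n, cache) for n in node.inputs]
            cache[node] = node.func(*upstream)
        return cache[node]

    coords = Node(lambda: (0.3, 0.7, 0.0))
    tex    = Node(lambda co: (co[0] + co[1]) % 1.0, coords)  # stand-in texture
    bright = Node(lambda v: min(1.0, v * 1.5), tex)
    print(evaluate(bright))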

The other thing that concerns me is that by taking away direct access
to texture coordinates, you'd be drastically removing functionality
rather than adding more. Your examples of rotation/translation/scale
are already trivial with texture coordinates, and in your proposal it
seems you'd have to rely on specific nodes designed for manipulating
this hidden implicit data. Currently I can do all kinds of funky things
by manipulating the coordinates directly - they're just inputs. For
example this:

http://mke3.net/blender/etc/texco_warp.png
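
In terms of the sketches above, that kind of warp is just arithmetic on
the coordinate before the lookup - roughly like this (hypothetical, not
Blender code):

    # Hypothetical sketch: warping texture coordinates directly,
    # because they're just another input to the texture function.
    from math import sin

    def warp(coord, amount=0.2, frequency=8.0):
        x, y, z = coord
        # displace the coordinate with a sine wave before the lookup
        return (x + amount * sin(frequency * y), y, z)

    def checker_texture(coord, scale=4.0):
        return (int(coord[0] * scale) + int(coord[1] * scale)) % 2

    value = checker_texture(warp((0.3, 0.7, 0.0)))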

Anyway, I still don't understand why this even needs to be done. If the
goal is to unify the node systems, clean up the back-end code, and make
it easier to add different node types, I don't see how the current
data-driven model (perhaps with some modifications) precludes that.

cheers,

Matt
