[Bf-committers] Proposal for unifying nodes

Robin Allen roblovski at gmail.com
Wed Jun 17 16:26:03 CEST 2009

2009/6/17 Matt Ebb <matt at mke3.net>

> On Wed, Jun 17, 2009 at 12:59 PM, Robin Allen <roblovski at gmail.com> wrote:
> :) Inventing more imaginary abstractions isn't going to fix that. How would
> such modifier objects cope with this situation?
> http://mke3.net/blender/etc/mesh_shader_modifiers.png
> There are two things out of context here - the shader node outputting a
> specular colour, and a shader node taking a vertex group input to its
> reflectivity value.

Hi Matt, you're absolutely right. It doesn't make any sense to connect
shader nodes to modifier nodes like that. That's not what I'm suggesting.
Obviously, anyone can see it makes no sense!

I might have mentioned using shaders and modifiers in the same tree, but
that's not what I meant. It's easier to think of a use case for using
textures and modifiers together, so here's an example of that:


Here you've got a custom texture tree that defines a sort of ripple texture,
and you're using it to drive a modifier. A texture's "context" is its
coordinates, and we can get those from the mesh.

So, in current Blender, the "context" for a node (coordinates, in the case of
textures) is something special that only Blender can provide when evaluating
the tree. In my proposal, a node can use whatever values you plug in as the
"context".
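To make that concrete, here's a toy Python sketch (not real Blender code; all the names here are made up for illustration) of the idea: a texture is just a function of its coordinate "context", so anything that can supply coordinates, including a modifier walking over mesh vertices, can evaluate it.

```python
import math

def ripple_texture(x, y):
    """A simple radial ripple: intensity varies with distance from the origin."""
    r = math.sqrt(x * x + y * y)
    return 0.5 + 0.5 * math.sin(6.0 * r)

def displace_vertices(vertices, texture, strength=1.0):
    """A toy 'modifier' driven by a texture: displace each vertex along Z
    by the texture value sampled at its (x, y) position."""
    return [
        (x, y, z + strength * texture(x, y))
        for (x, y, z) in vertices
    ]

# A flat 3x3 grid of vertices, displaced by the ripple texture.
flat_grid = [(x * 0.5, y * 0.5, 0.0) for x in range(3) for y in range(3)]
rippled = displace_vertices(flat_grid, ripple_texture, strength=0.2)
```

The point is only that the texture needs no "special" context from Blender; the modifier supplies the coordinates itself.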

> It does, because your design hides all of this. Your proposal assumes a lot
> of magic going on behind the scenes, converting between different abstract
> types, creating temporary buffers, etc. From experience, I would not like
> to use this. I can imagine all sorts of strife like memory usage spiralling
> out of control due to a side effect of some hidden conversion somewhere, or
> slowdowns under certain conditions that you can't figure out. In practical
> usage this sort of thing is what keeps you back at work late at night and
> unhappy.

> For example, in this (http://mke3.net/blender/etc/texco_radblur.png) it's
> obvious that the more texture nodes you drop in, the slower it will get. In
> your example, it's very hidden, different things are going to happen
> depending on whether those rotation nodes are in parallel, or in series.
> This is a simple example for textures, but it gets much trickier when you
> start thinking about integrating all sorts of other node systems for
> modifiers, particles, etc.

I don't know that it is that obvious. Is it really more obviously slow than
looking at the internals of that Spin node I did?

If the rotation nodes are in parallel, you're creating three new textures;
if they're in series, you're creating one. Again, it seems fairly easy to
figure out which will be slower.
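As a functional sketch of why the two wirings cost differently (again, toy Python, not Blender code): a "rotate" node just wraps a texture in a coordinate transform, so chained rotations still evaluate the base texture once per sample, while parallel branches each evaluate their own rotated copy.

```python
import math

calls = {"base": 0}

def base_texture(x, y):
    calls["base"] += 1
    return x * y

def rotated(texture, angle):
    """Wrap a texture so it is sampled in rotated coordinates."""
    c, s = math.cos(angle), math.sin(angle)
    return lambda x, y: texture(c * x - s * y, s * x + c * y)

# Series: three rotations chained -- one sample costs one base evaluation.
series = rotated(rotated(rotated(base_texture, 0.1), 0.2), 0.3)
series(1.0, 2.0)
series_cost = calls["base"]      # 1 base evaluation

# Parallel: three independent rotated branches whose results are averaged
# -- one sample costs three base evaluations.
calls["base"] = 0
branches = [rotated(base_texture, a) for a in (0.1, 0.2, 0.3)]
average = sum(t(1.0, 2.0) for t in branches) / 3
parallel_cost = calls["base"]    # 3 base evaluations
```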

Now, you're right that performance is a concern, but I don't agree that I'm
"hiding" bottlenecks. I would say I'm "encapsulating" them. That blur node
could get real slow real fast. Sure. But that now becomes a property of
blurring: blurring is slow. You drop in a blur node, things slow down. The
same blur effect will be equally slow in the new and old systems. But in
mine it's traceable to a single node.

Also, a little "quality" slider on the blur node, controlling how many
samples are taken, could go a long way toward a) mitigating the slowdown and
b) educating users that blurring is an intensive operation.
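A hypothetical blur node along those lines might look like this (illustrative Python only; the node and its "quality" parameter are my invention for the example). The expensive part, sampling, is encapsulated in one place, and the cost grows visibly and linearly with the quality setting.

```python
import math

def blur(texture, radius, quality=8):
    """Blur a texture by averaging 'quality' samples taken on a circle
    of the given radius. 'quality' is the user-facing slider: more
    samples means a smoother blur and a slower evaluation."""
    def blurred(x, y):
        total = 0.0
        for i in range(quality):
            a = 2.0 * math.pi * i / quality
            total += texture(x + radius * math.cos(a),
                             y + radius * math.sin(a))
        return total / quality
    return blurred

sharp = lambda x, y: 1.0 if x > 0 else 0.0   # a hard vertical edge
soft = blur(sharp, radius=0.5, quality=16)
value_on_edge = soft(0.0, 0.0)   # samples straddle the edge, so this sits near 0.5
```

Whatever the sampling strategy, the slowdown stays traceable to this one node, which is the encapsulation argument above.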

> The good thing about most node systems is that they let you dive into the
> structure of the system, see how it's working, modify the data connections
> and internal processing directly. It should illuminate, not cover up and
> obscure.

I get where you're coming from: you want things to be as low-level as
possible. My system lets you work directly with low-level data like
coordinates, but it also provides useful abstractions like textures-as-data
*and* generic trees with custom inputs. Yours offers only the low-level
access.

