[Bf-committers] Proposal for unifying nodes

Matt Ebb matt at mke3.net
Wed Jun 17 06:20:43 CEST 2009


On Wed, Jun 17, 2009 at 12:59 PM, Robin Allen <roblovski at gmail.com> wrote:

> > In the abstract sense, sure, driving shader parameters from mesh data
> > is good functionality. However in the context of nodes that act on
> > meshes it doesn't make sense. What would a mesh modifier node be? It
> > would be a node that gets executed for each vertex in a mesh, responds
> > to inputs, and outputs a new mesh. It doesn't make any sense to have a
> > shader node in there, because it's completely out of context. You don't
> > input data to a shader for each vertex as the system is processing mesh
> > updates in the dependency graph - there is no such thing as a shader in
> > that context. And vice versa, the renderer doesn't iterate over
> > vertices when it shades a pixel.
>
> You're going to hate me for saying this, but if modifier nodes returned
> "modifier objects" which were then called to modify meshes, then they could
> coexist with any other nodes with no evaluation context in sight.
>
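If I understand it, that means something like the following (a rough
Python sketch; all the names are invented for illustration). The node
itself just constructs a value, and applying it to a mesh happens
somewhere else entirely:

    class SubsurfModifier:
        """A hypothetical first-class 'modifier object' a node returns."""
        def __init__(self, levels):
            self.levels = levels

        def apply(self, mesh):
            # Stand-in for real subdivision: just record the operation.
            return mesh + [("subdivide", self.levels)]

    def subsurf_node(levels):
        # The node has no mesh and no evaluation context; it only
        # builds a value.
        return SubsurfModifier(levels)

    # Only later, wherever a mesh actually exists, is it applied:
    mesh = [("base_cube", None)]
    mesh = subsurf_node(2).apply(mesh)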

:) Inventing more imaginary abstractions isn't going to fix that. How would
such modifier objects cope with this situation?

http://mke3.net/blender/etc/mesh_shader_modifiers.png

There are two things out of context here - the shader node outputting a
specular colour, and a shader node taking a vertex group input to its
reflectivity value.

The first one (specular colour) makes no sense here. Saying that you can
wrap it in some ShadeResult abstraction doesn't solve the problem of how
you get that data in the first place, which depends entirely on the scene's
render data, the shader's BRDF, the positions of lamps, the viewing angle,
and shadows. This information is clearly not available for each vertex
within a modifier node tree.

The second one is equally bad. The renderer only has access to the final
mesh that's prepared for render. The vertex group that's generated in that
node tree isn't even part of the final mesh; the topology is different.

In both these cases, it's not even a matter of having more abstract types -
a specular colour (a vector of 3 floats) conceptually plugs easily into a
displacement vector (also a vector of 3 floats), and a vertex group (a
scalar float per vertex) plugs easily into reflectivity (a scalar float
value). The issue is getting that data in the first place. There are
limitations inherent in how Blender works, in what contextual data can be
available; it can't always just be 'passed through'.
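To put that concretely, here's a rough Python sketch (illustrative names
only, not Blender's actual internals) of why the two evaluation contexts
don't overlap - there is simply nothing meaningful to hand a shader node
while the dependency graph is updating a mesh:

    class ModifierContext:
        """What's available when the dependency graph updates a mesh."""
        def __init__(self, mesh):
            self.mesh = mesh  # vertices, faces, vertex groups - that's it

    class ShadeContext:
        """What's available when the renderer shades one sample."""
        def __init__(self, point, normal, view_vector, lamps):
            self.point = point
            self.normal = normal
            self.view_vector = view_vector  # needed for specular
            self.lamps = lamps              # so are lamps and shadows

    def specular_node(ctx):
        # Specular colour is a function of lamps, view angle, the BRDF -
        # none of which exists in a ModifierContext.
        if not isinstance(ctx, ShadeContext):
            raise TypeError("no shading data here - out of context")
        ...  # compute the specular colour from ctx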

> This is a very valid point. Textures could indeed become very slow if
> they sampled their inputs multiple times, and then those inputs did the
> same thing. *But*, this has nothing to do with my design.


It does, because your design hides all of this. Your proposal assumes a lot
of magic going on behind the scenes: converting between different abstract
types, creating temporary buffers, and so on. From experience, I would not
like to use this. I can imagine all sorts of strife, like memory usage
spiralling out of control as a side effect of some hidden conversion
somewhere, or slowdowns under certain conditions that you can't figure out.
In practical usage this sort of thing is what keeps you at work late at
night, unhappy.

For example, in this (http://mke3.net/blender/etc/texco_radblur.png) it's
obvious that the more texture nodes you drop in, the slower it will get. In
your example it's very hidden: different things are going to happen
depending on whether those rotation nodes are in parallel or in series.
This is a simple example for textures, but it gets much trickier when you
start thinking about integrating all sorts of other node systems for
modifiers, particles, and so on.
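To put rough numbers on it (a toy calculation, not measured from
Blender): if each blur-style node takes S samples of its input, chaining
D of them in series costs S**D evaluations of whatever is upstream,
while wiring D of them in parallel costs only D*S:

    def series_cost(samples_per_node, depth):
        # Each node re-samples its upstream input, so costs multiply.
        return samples_per_node ** depth

    def parallel_cost(samples_per_node, count):
        # Each node samples the shared input independently and the
        # results are mixed once, so costs merely add.
        return samples_per_node * count

    print(series_cost(16, 3))    # 4096 upstream evaluations per sample
    print(parallel_cost(16, 3))  # 48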

The good thing about most node systems is that they let you dive into the
structure of the system, see how it's working, and modify the data
connections and internal processing directly. A node system should
illuminate, not cover up and obscure.

cheers,

Matt

