[Bf-committers] Proposal for unifying nodes

Robin Allen roblovski at gmail.com
Wed Jun 17 04:59:19 CEST 2009


> In the abstract sense, sure, driving shader parameters from mesh data is
> good functionality. However in the context of nodes that act on meshes it
> doesn't make sense. What would a mesh modifier node be? It would be a node
> that gets executed for each vertex in a mesh, responds to inputs, and outputs a
> new mesh. It doesn't make any sense to have a shader node in there, because
> it's completely out of context. You don't input data to a shader for each
> vertex as the system is processing mesh updates in the dependency graph -
> there is no such thing as a shader in that context. And vice versa, the
> renderer doesn't iterate over vertices when it shades a pixel.


You're going to hate me for saying this, but if modifier nodes returned
"modifier objects" which were then called to modify meshes, then they could
coexist with any other nodes with no evaluation context in sight.
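
Roughly, and with entirely made-up names rather than anything that exists in
Blender, I mean something like this:

    /* Sketch only: a modifier node's output is not a modified mesh but a
       "modifier object" -- captured settings plus a function to call later.
       The node tree never needs a mesh in context; the modifier stack
       applies the object once it actually has one. */

    typedef struct Mesh Mesh;   /* opaque for this sketch */

    typedef struct ModifierObject {
        void *settings;                            /* captured node parameters */
        void (*apply)(void *settings, Mesh *mesh); /* called at mesh-eval time */
    } ModifierObject;

    typedef struct DisplaceSettings { float strength; } DisplaceSettings;

    static void displace_apply(void *settings, Mesh *mesh)
    {
        DisplaceSettings *ds = settings;
        /* ... move the mesh's verts along their normals by ds->strength ... */
        (void)ds; (void)mesh;
    }

    /* What the node does when the tree is evaluated: no mesh in sight,
       it just hands back its settings paired with something callable. */
    static ModifierObject displace_node(DisplaceSettings *ds)
    {
        ModifierObject mo;
        mo.settings = ds;
        mo.apply = displace_apply;
        return mo;
    }

A real version would have to manage memory for the settings, but the point is
only that the node returns something callable instead of doing the work in
place.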

> The correct way to do things like this is to have mesh modifier nodes that can
> output their data as vertex colour or UV coordinate layers for example, and
> use that data in your shading tree.
>

That's *a* way to do things, sure.


> No, they're not images, and trying to make this assumption/abstraction
> doesn't mean it's actually true. Textures are 3 dimensional and completely
> dynamic, they respond to input. You can (or should be able to) modify a
> cloud texture's noise size based on the angle between the view vector and
> the face normal, while offsetting its lookup coordinate based on a vertex
> colour.


http://img268.imageshack.us/img268/734/stuffy.png


> I understand the abstraction that you're trying to make, but I don't think
> it's a good one. It misrepresents what actually happens in a shading
> pipeline, which can lead to all sorts of nasty things - one example in your
> radial blur is that it's completely hidden where and when the texture is
> actually sampled, making it possible for people to unknowingly create very
> slow shaders. I would hate to have to debug such a thing for performance.
> At least in the data-driven version of your radial blur, it's very explicit
> what's going on and when the texture is being accessed.


This is a very valid point. Textures could indeed become very slow if they
sampled their inputs multiple times, and then those inputs did the same
thing. *But*, this has nothing to do with my design. If you want to do five
radial blurs on a cloud texture it's going to be slow however you implement
it. However, only a few nodes will need to sample their inputs multiple
times, and those are the nodes which will slow things down. Aurel and I were
discussing ways around this on IRC earlier; it's possible that sampling to a
buffer and processing that is the best way to go for intensive operations.
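
To make the buffering idea concrete (hypothetical code, nothing we've actually
written): an expensive node would sample its input once into a small buffer and
take all its taps from that, instead of re-evaluating the upstream tree for
every tap.

    /* Hypothetical sketch of "sample to a buffer" for a many-tap node.
       sample_input stands in for evaluating the upstream texture tree. */

    #define BUF_RES 32

    static void radial_blur_buffered(float (*sample_input)(float u, float v),
                                     float out[BUF_RES][BUF_RES])
    {
        float buf[BUF_RES][BUF_RES];
        int x, y;

        /* One pass over the upstream texture... */
        for (y = 0; y < BUF_RES; y++)
            for (x = 0; x < BUF_RES; x++)
                buf[y][x] = sample_input((x + 0.5f) / BUF_RES, (y + 0.5f) / BUF_RES);

        /* ...then the many taps read the buffer, not the tree.  A couple of
           taps toward the centre stand in for a real radial blur here. */
        for (y = 0; y < BUF_RES; y++) {
            for (x = 0; x < BUF_RES; x++) {
                int mx = (x + BUF_RES / 2) / 2;
                int my = (y + BUF_RES / 2) / 2;
                out[y][x] = 0.5f * (buf[y][x] + buf[my][mx]);
            }
        }
    }

What resolution the buffer should be, and whether it lives per node or per
tree, is exactly the kind of detail still up for discussion.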


>
> http://mke3.net/blender/etc/texco_radblur.png


Can't swap that out for something else though, can you? What if you want to
blur a checkerboard in the same way? What if the operation was more complex
than blur, using its input lots of times? These are things that users will
want to do.

 Also, you're depending on there being a shader context, using that mapping
node there. Textures are useful outside of shaders.


> It's not a logical error, it's semantics. It is the same thing in the actual
> texture code:
>
> static int marble(Tex *tex, float *texvec, TexResult *texres)
>
> tex and texvec contain inputs (parameters and coordinate, which are all
> variable), texres is the output. There's no difference. You can plug a
> location vector into the size input or an arbitrary float value into the
> input coordinate.


You're right: thinking about it, the two are mathematically equivalent.
(size, coord) -> color is equivalent to (size) -> (coord -> color).
Blender's C codebase, lacking the functional abstraction, implements it as
the former. The texture nodes implement it as the latter.
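
Spelling that out in illustrative C (not Blender's actual types): form A takes
everything at once, form B bakes the parameter in and hands back something you
can evaluate at any coordinate later, which is what a texture node tree amounts
to.

    #include <math.h>

    typedef struct { float r, g, b; } Color;

    /* Form A: (size, coord) -> color, like the marble() call above. */
    static Color tex_eval(float size, const float co[3])
    {
        Color c;
        c.r = c.g = c.b = 0.5f + 0.5f * sinf(size * (co[0] + co[1] + co[2]));
        return c;
    }

    /* Form B: (size) -> (coord -> color).  The "texture" is now a value. */
    typedef struct {
        float size;                                   /* captured parameter */
        Color (*eval)(float size, const float co[3]);
    } TextureObject;

    static TextureObject make_texture(float size)
    {
        TextureObject t;
        t.size = size;
        t.eval = tex_eval;
        return t;   /* whoever holds t calls t.eval(t.size, co) whenever */
    }

Form B carries exactly the same information as form A; it just stores the
parameter instead of asking for it at every call.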

> Again, textures are not data, textures are dynamic functionality.
>

They're equivalent. You can see them as either. Personally I think users
would find "textures as things" to be a nicer abstraction.

> But that contextual data is entirely necessary. You can't just gloss over
> how (the modifier system, or the particle system, or the renderer) actually
> works, what data is available in context, what information is safe or
> restricted at certain points in execution and assume that an abstracted node
> system will make it all go away. You can't pass mid-modifier-execution mesh
> data through a data pointer to a shader node. You can't pass
> mid-shader-execution colour data to a particle node. Perhaps in theory it
> could be possible, but not by any feasible means in Blender. To achieve this
> is not just changing the conceptual framework of the node editor, it's
> completely changing Blender's architecture.


I'm not suggesting doing any of these things. Blender's core architecture
doesn't need to change. This is only about nodes. Of course the contextual
data needs to be passed to the nodes. I'm just thinking of ways to express
it that don't require separation of tree types.

> I'd argue that there is no need for there to be texture nodes at all, and
> that everything should be done in the shader tree. And there's no reason it
> can't be done, by passing coordinates as inputs.


Why the shader tree? What if you want to use textures in compositing or
other trees? Textures are just coordinates in, colors out. They're
infinitely useful.
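
Concretely (made-up types again), the whole contract is a function from
coordinates to colors, and anything that can produce a coordinate can use one:

    typedef struct { float r, g, b; } Color;
    typedef Color (*TextureFn)(const float co[3]);  /* coordinates in, colors out */

    /* A shader evaluates it at a shading point, the compositor at a pixel,
       a particle system at a particle position -- same function either way. */
    static Color shade_sample(TextureFn tex, const float shade_co[3])
    {
        return tex(shade_co);
    }

    static Color composite_sample(TextureFn tex, int x, int y, int w, int h)
    {
        float co[3];
        co[0] = x / (float)w;
        co[1] = y / (float)h;
        co[2] = 0.0f;
        return tex(co);
    }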

-Rob

