[Bf-committers] Proposal for unifying nodes

Matt Ebb matt at mke3.net
Wed Jun 17 04:00:16 CEST 2009


On Wed, Jun 17, 2009 at 11:01 AM, Robin Allen <roblovski at gmail.com> wrote:

> In any case, I disagree that putting a shader object in the same tree as
> modifier nodes makes no sense. You could get some interesting effects
> driving shader parameters from the mesh data.


In the abstract sense, sure, driving shader parameters from mesh data is
good functionality. However, in the context of nodes that act on meshes it
doesn't make sense. What would a mesh modifier node be? It would be a node
that gets executed for each vertex in a mesh, responds to inputs, and outputs
a new mesh. It doesn't make any sense to have a shader node in there, because
it's completely out of context. You don't input data to a shader for each
vertex as the system is processing mesh updates in the dependency graph -
there is no such thing as a shader in that context. And vice versa, the
renderer doesn't iterate over vertices when it shades a pixel.

The correct way to do things like this is to have mesh modifier nodes that
can output their data as vertex colour or UV coordinate layers, for example,
and use that data in your shading tree.
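
To make the context mismatch concrete, here's a rough sketch of the two
evaluation contexts. The signatures and names are hypothetical (made up for
illustration, not actual Blender API):

/* A mesh modifier node runs inside the dependency graph update, over mesh
 * data; a shader node runs inside the renderer, once per shaded sample.
 * Neither context has access to the other's data. */

typedef struct MeshData MeshData;       /* vertices, faces, custom data layers */
typedef struct ShadeInput ShadeInput;   /* view vector, normal, texco, ... */
typedef struct ShadeResult ShadeResult; /* the shaded colour */

/* Runs during mesh evaluation: input mesh in, new mesh out. There is no
 * pixel, no view vector, no framebuffer here. */
MeshData *mesh_node_exec(MeshData *input_mesh);

/* Runs during rendering, per sample: shading inputs in, colour out. There
 * is no mesh being rebuilt here, only the layers the renderer carried
 * along (UVs, vertex colours, ...). */
void shading_node_exec(const ShadeInput *shi, ShadeResult *shr);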


> > I think you're too quick to equate shading and textures to image
> > buffers (as in the compositor) - concrete things that can be
> > manipulated. They're not
>
> They *are*, though. Look at the current texture nodes -- they manipulate
> textures.
>
> > they're functions that take inputs and return outputs. They can be 2D
> > or 3D, they can be procedural (like the point density texture in the
> > sim_physics branch), and they respond to different input.
>
> They're functions in the mathematical sense, yes (which certainly doesn't
> preclude them from being acted upon) but to us, to users, they're images.
> If you can't rotate a texture, if you can't blur a texture, users will be
> asking why.


No, they're not images, and making that assumption/abstraction doesn't make
it true. Textures are three-dimensional and completely dynamic; they respond
to input. You can (or should be able to) modify a cloud texture's noise size
based on the angle between the view vector and the face normal, while
offsetting its lookup coordinate based on a vertex colour. They aren't static
like compositing buffers are.

I understand the abstraction that you're trying to make, but I don't think
it's a good one. It misrepresents what actually happens in a shading
pipeline, which can lead to all sorts of nasty things - one example, in your
radial blur, is that it's completely hidden where and when the texture is
actually sampled, making it possible for people to unknowingly create very
slow shaders. I would hate to have to debug such a thing for performance. At
least in the data-driven version of your radial blur, it's very explicit
what's going on and when the texture is being accessed:

http://mke3.net/blender/etc/texco_radblur.png
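
To illustrate the hidden-sampling problem, here's a rough sketch of what a
"texture as a value" radial blur implies under the hood - hypothetical code
for illustration only, not your actual proposal or anything in Blender:

/* If a texture is an opaque value that a blur node can transform, every
 * blur step has to re-evaluate the whole upstream texture, and the user
 * never sees where that cost comes from. */

typedef float (*TextureFn)(const float coord[3]); /* "texture as a value" */

float radial_blur_sample(TextureFn tex, const float coord[3],
                         const float center[3], int steps)
{
    float accum = 0.0f;
    int i;

    for (i = 0; i < steps; i++) {
        float t = (float)i / (float)steps;
        float sample_co[3];

        /* move the lookup coordinate towards the blur centre */
        sample_co[0] = coord[0] + (center[0] - coord[0]) * t;
        sample_co[1] = coord[1] + (center[1] - coord[1]) * t;
        sample_co[2] = coord[2] + (center[2] - coord[2]) * t;

        /* hidden cost: one full texture-tree evaluation per step,
         * per shaded sample */
        accum += tex(sample_co);
    }
    return accum / (float)steps;
}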

> > Making this specific coordinate input backwards and implicit somehow,
> > but not the other inputs, is very confusing and arbitrary, and breaks
> > the mental model of the flow of information through the tree.
>
> No, this is where you're making a logical error. Noise size et al. are
> inputs to the texture *generator*, not the texture. Coordinates are inputs
> to the texture. You can sample a texture *at* a coordinate. You can
> *create* a texture *with* a noise size.


It's not a logical error, it's semantics. It is the same thing in the actual
texture code:

static int marble(Tex *tex, float *texvec, TexResult *texres)

tex and texvec contain inputs (the parameters and the coordinate, which are
all variable); texres is the output. There's no difference. You can plug a
location vector into the size input or an arbitrary float value into the
input coordinate.

Again, textures are not data, textures are dynamic functionality.
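
For illustration, here's a stripped-down sketch of the shape of that code -
a stand-in, not the real marble() implementation:

#include <math.h>

/* The point: the "parameters" in Tex and the lookup coordinate in texvec
 * are both just variable inputs, and the result is computed fresh for
 * every sample rather than read from stored image data. */

typedef struct Tex { float noisesize; /* plus many more parameters */ } Tex;
typedef struct TexResult { float tin; /* intensity output */ } TexResult;

static int marble_sketch(Tex *tex, float *texvec, TexResult *texres)
{
    /* either input could be driven by a node: a location vector plugged
     * into the size, or an arbitrary float into the coordinate */
    float x = texvec[0] / tex->noisesize;
    float y = texvec[1] / tex->noisesize;
    float z = texvec[2] / tex->noisesize;

    texres->tin = 0.5f + 0.5f * sinf(x + y + z); /* stand-in for the noise */
    return 0;
}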

> > If the goal is of unifying node systems, cleaning up the back-end code,
> > making it easier to add different node types, I don't see how the
> > current data-driven model (perhaps with some modifications) precludes
> > that.
>
> It precludes that because of its reliance on hidden contextual data, and
> the different ways each tree type is evaluated. Shader nodes are per-pixel.
> Compositor nodes are per-frame. Put a compositor node in a texture tree and
> it'll ask "what frame are we on? Where's the framebuffer?" Contextual data
> is passed implicitly to each node through its data pointer, and the data
> pointer is set when the tree starts evaluation, depending on what tree
> type it is.


But that contextual data is entirely necessary. You can't just gloss over
how the modifier system, the particle system, or the renderer actually works
- what data is available in context, what information is safe or restricted
at certain points in execution - and assume that an abstracted node system
will make it all go away. You can't pass mid-modifier-execution mesh data
through a data pointer to a shader node. You can't pass mid-shader-execution
colour data to a particle node. Perhaps in theory it could be possible, but
not by any feasible means in Blender. Achieving it would mean not just
changing the conceptual framework of the node editor, but completely changing
Blender's architecture.
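
As a rough sketch of why, based loosely on the shape of the current node
exec callbacks (simplified; the real code differs):

typedef struct bNode bNode;
typedef struct bNodeStack bNodeStack;

/* Every node exec function gets an opaque context pointer... */
typedef void (*NodeExecFunc)(void *data, bNode *node,
                             bNodeStack **in, bNodeStack **out);

void exec_shader_node(void *data, bNode *node, bNodeStack **in, bNodeStack **out)
{
    /* here 'data' is per-sample shading state set up by the renderer;
     * a compositor or modifier node dropped into this tree has no
     * meaningful way to interpret it */
}

void exec_composite_node(void *data, bNode *node, bNodeStack **in, bNodeStack **out)
{
    /* here 'data' is per-frame compositing state (render result, frame
     * number, ...); mid-modifier mesh data or mid-shader colour data
     * simply doesn't exist at this point in execution */
}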

> Several users have wanted to know why they can't use texture nodes in the
> shader tree. Texture nodes are mappings, they're useful in just about any
> context.


I'd argue that there's no need for texture nodes at all, and that everything
should be done in the shader tree. And there's no reason it can't be done,
by passing coordinates as inputs.
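
As a rough sketch of what that could look like - hypothetical, none of this
is existing API, and the texture evaluator here is a simplified stand-in:

typedef struct Tex { float noisesize; /* trimmed down */ } Tex;
typedef struct TexResult { float tr, tg, tb; } TexResult;

/* stand-in for the existing texture evaluator (the real signature differs) */
int evaluate_texture(Tex *tex, const float texvec[3], TexResult *texres);

/* A texture evaluated as an ordinary shader-tree node, with the lookup
 * coordinate and the noise size as explicit input sockets rather than
 * implicit hidden state. */
void exec_texture_shader_node(Tex *tex, const float coord_in[3],
                              float size_in, float color_out[3])
{
    TexResult texres;

    tex->noisesize = size_in;                 /* parameter is just another input */
    evaluate_texture(tex, coord_in, &texres); /* sampled at an explicit coordinate */

    color_out[0] = texres.tr;
    color_out[1] = texres.tg;
    color_out[2] = texres.tb;
}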

cheers,

Matt

