[Bf-committers] Shading System Proposals

Brecht Van Lommel brecht at blender.org
Thu Dec 17 13:07:21 CET 2009


Hi Yves,

On Thu, Dec 17, 2009 at 4:37 AM, Yves Poissant <ypoissant2 at videotron.ca> wrote:
> Typical legacy shaders that combine diffuse and specular actually are
> emulating a double-layer material except that everything is mixed up. Then
> we talk about separating passes from those shaders. This legacy shader model
> made us think about this issue in reverse. We should instead think of
> combining layers rather than separating them. I think it makes the model
> more easily understandable when those "passes" are explicitly separated into
> different layers of BSDFs. And it is nearer to the real thing anyway.

I agree we should encourage users to build shaders with layering.
However, I am not sure what you are suggesting. That instead of a node
system we should restrict the user to a layer system?

> This situation is not only for indirect lighting. Say we are looking at an
> object reflected in a mirror and we see both the object and its reflection
> in the shot. How will this separation be kept through the mirror? Is the
> reflected image considered totally specular even if the reflected object is
> partially specular and partially diffuse? What parts of the reflected image
> will go into the specular pass and in the diffuse pass? If those passes are
> then tweaked in the compositor, how will this affect the reflected image?

Specular reflections and refractions are indeed problematic for
compositing, and I don't think we should try to do anything clever to
improve that; the separation between specular and diffuse will be lost
through a mirror.

>> ... But the point is that we are not restricting ourselves to
>> physically motivated use cases. We get a node setup created by a user,
>> and have to deal with that somehow, even if it is a "legacy" material
>> or doing something that makes no sense physically, a mix node with
>> different blending nodes is just a basic feature of any node shading
>> system.
>
> Yes. I understand. I'm not trying to say that simple output blending should
> not be available. There are a ton of situations where we wouldn't need the
> power of blending BSDF properties. This is fine for animating transitions
> for instance. But when the intention is to produce a different BSDF from
> mixing two BSDFs, I believe that a better result would be achieved by mixing
> the BSDF properties.
>
> So blending BSDFs, I agree. I see three ways to do that: 1) blending the
> output colors, 2) blending the Monte Carlo integrations, and 3) blending the
> BSDF properties.

Blending output colors and properties are both possible with a node
setup, and if the latter is better we can try to figure out a way to
make that easier than doing it manually.
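For what it's worth, the difference between options 1 and 3 can be shown with a toy numeric example (this is not Blender code; the Phong-style lobe, the exponents, and the function names are made-up stand-ins):

```python
import math

# Hypothetical normalized Phong-style glossy lobe; cos_r is the cosine
# of the angle to the mirror direction, "exponent" controls sharpness.
def glossy(cos_r, exponent):
    return (exponent + 2.0) / (2.0 * math.pi) * cos_r ** exponent

# 1) blending the output colors of two fixed lobes
def blend_outputs(cos_r, e1, e2, t):
    return (1.0 - t) * glossy(cos_r, e1) + t * glossy(cos_r, e2)

# 3) blending the BSDF properties first, then evaluating a single lobe
def blend_parameters(cos_r, e1, e2, t):
    return glossy(cos_r, (1.0 - t) * e1 + t * e2)

# Away from the peak the two strategies disagree strongly: the output
# blend keeps the broad tail of the soft lobe, while the parameter
# blend gives one lobe of intermediate sharpness with almost no tail.
sharp, soft, t = 200.0, 10.0, 0.5
print(blend_outputs(0.9, sharp, soft, t))     # dominated by the soft lobe's tail
print(blend_parameters(0.9, sharp, soft, t))  # far smaller at this angle
```

Neither result is "wrong", but they are visibly different materials, which is the point of preferring property blending when the intent is a single new BSDF.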

>> Letting the node setup create a BXDF and using that for rendering
>> (which is similar to pbrt materials), or evaluating the actual nodes
>> while rendering would really be equivalent when it comes to physically
>> based rendering algorithms.
>
> I'm not sure I understand this one. What do you mean by "Letting the node
> setup create a BXDF"? The user can create a BSDF with a set of nodes? Or a
> set of nodes is designed to produce a BSDF object (or data type) in the form
> of another set of nodes? Or a set of nodes is designed to tweak one of the
> preset BSDF nodes?

I was referring to implementation. The node would allocate a BXDF in
memory, fill it in and pass that through to the next node. Using the
BXDF in the integrator then would not refer back to the original nodes
but instead use this constructed BXDF. So you could say it creates
another set of nodes in a way. This is like Material::GetBSDF() in pbrt.
I'm not sure I'm a proponent of this though, I prefer to just keep
using the original nodes.
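A rough sketch of that implementation idea, in Python pseudocode rather than Blender's actual API (every class and method name here is invented for illustration):

```python
import math

# Hypothetical BXDF record the node would allocate and fill in.
class LambertBxdf:
    def __init__(self, albedo):
        self.albedo = albedo

    def evaluate(self, cos_theta):
        # simple Lambert: albedo / pi, weighted by the incoming cosine
        return tuple(a / math.pi * cos_theta for a in self.albedo)

# Hypothetical shader node: instead of being re-evaluated by the
# integrator, it constructs a BXDF once and passes that downstream,
# analogous to Material::GetBSDF() in pbrt.
class LambertNode:
    def __init__(self, color_input):
        self.color_input = color_input

    def execute(self, shading_point):
        return LambertBxdf(self.color_input(shading_point))

node = LambertNode(lambda p: (0.8, 0.2, 0.2))
bxdf = node.execute(shading_point=None)
# the integrator now only sees "bxdf", never the node graph
print(bxdf.evaluate(1.0))
```

The alternative Brecht prefers, evaluating the original nodes during integration, would skip the intermediate object and call back into the graph each time.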

>> Where it becomes useful is when you want to do non-physical things,
>> like this node setup:
>> http://wiki.blender.org/uploads/1/1d/Shading_nodes_C.png
>>
>> In that graph it is using the BXDF implicitly in the Light node, so
>> not necessarily passing it along between nodes. ...
>
> I don't understand what this node setup is doing. Can you explain? You say
> that the BXDF is used implicitly in the Light node. From that I deduce that
> the color output from the Light node is the result of processing the light
> through the BSDF. Is that correct?

Yes.

> What do you mean by "BSDF don't have access to colors that are already lit?"
>
> Otherwise, if I interpret verbatim what I see in those nodes, The light
> color at a hit point is multiplied by the AO factor at the same hit point
> and this is written in the combined pass. This has the effect of coloring
> the AO with the light color. Then the Lambert BSDF does its shading
> calculation for the same hit point and writes the resulting color in the
> bxdf pass. Later, through compositing, I can combine them.

For that pixel, plugging something into the combined output would
override the automatic use of the BSDF for shading the first hit. So it
would compute lighting with the BSDF in the Light node, and then
multiply that with AO and fill it into the pixel. So I would say the
effect is darkening the light with AO.
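As a small sketch of that data flow (the numbers and function names are invented, only the structure, light filtered through the BSDF and then multiplied by AO, follows the description above):

```python
import math

# Hypothetical Light node output: light filtered through a Lambert BSDF.
def lambert_shade(light_color, albedo, cos_theta):
    return tuple(l * a / math.pi * cos_theta
                 for l, a in zip(light_color, albedo))

# Plugging into the combined output overrides automatic first-hit
# shading: the pixel becomes (lit color) * AO.
def combined_pass(light_color, albedo, cos_theta, ao):
    lit = lambert_shade(light_color, albedo, cos_theta)
    return tuple(c * ao for c in lit)

full = combined_pass((1.0, 1.0, 1.0), (0.7, 0.7, 0.7), 0.8, ao=1.0)
occluded = combined_pass((1.0, 1.0, 1.0), (0.7, 0.7, 0.7), 0.8, ao=0.4)
# AO only scales the lit result down; the hue is unchanged,
# hence "darkening the light with AO" rather than coloring the AO.
```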

>> ... If we were passing
>> along a BXDF type it would allow you a bit more flexible setups, in
>> that you could have multiple Light nodes driven by multiple BXDF's
>> setup and mix the results of those. ...
>
> To me, a BSDF interacts with light. That is the fundamental purpose of a BSDF.
> It receives light and outputs reflected light. When I read "multiple Light
> nodes driven by multiple BXDF's", I'm lost. It seems like the reverse of the
> logical thing to do. Lights are what drives the illumination in a scene.
> Lights are not driven. So there seems to be some fundamental assumptions
> behind such a sentence that I am missing and I'd like you to elaborate on
> that.

It is indeed not the right thing to do physically speaking. The
purpose of that would be to allow some more flexibility to do
non-physical things at the first hit, like you can in e.g. RenderMan.
What goes into the pass outputs like combined is not a BSDF, but a
color, so it is not restricted to being driven by lights, instead the
material can drive the lights and combine things in different ways.
Evidently that doesn't work for all rendering algorithms.
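To illustrate the kind of flexibility meant here (a toy sketch with invented names and values, not Blender code): two Light nodes, each driven by its own BXDF, with the material mixing the already-lit results rather than the BSDFs themselves.

```python
import math

# Hypothetical Light node: evaluates one light through one BXDF.
def light_node(light_color, albedo, cos_theta):
    return tuple(l * a / math.pi * cos_theta
                 for l, a in zip(light_color, albedo))

def material_combined(cos_theta, mix):
    red_lit = light_node((1.0, 1.0, 1.0), (0.8, 0.1, 0.1), cos_theta)
    grey_lit = light_node((1.0, 1.0, 1.0), (0.9, 0.9, 0.9), cos_theta)
    # The material drives and combines the two lit colors directly.
    # No single physical BSDF corresponds to this pixel value, which
    # is why such setups break for some rendering algorithms.
    return tuple((1.0 - mix) * r + mix * g
                 for r, g in zip(red_lit, grey_lit))
```

At mix = 0 the result is just the first Light node's output; in between, the material is "driving the lights" in the sense Brecht describes.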

Brecht.

