[Bf-committers] Shading System Proposals

Yves Poissant ypoissant2 at videotron.ca
Thu Dec 17 04:37:00 CET 2009


Brecht,

> It's just terminology, I don't mind calling it BSDF, the intention was
> never to present the user with separate BRDF and BTDF node trees. The
> reason I called it BXDF is because depending on the pass being
> rendered, we may actually be dealing with a BSDF/BRDF/BTDF. For the
> combined pass of course there would be only a BSDF, ...

Yes. I understand. Conceptually, when coding the different passes, it is 
useful to separate them. It helps divide and conquer the problem, so to 
speak.

> ... but we still want to support diffuse/specular/.. passes.

I agree with that. The way I see it, this should be intrinsically 
possible with a layered BSDF model. In real materials, the specular and 
diffuse components come from different layers of the material too. I seem 
to recall that both you and Matt mention this at some point in your wikis. 
So the coating layer would drive the specular pass and the underlying 
layer would drive the diffuse pass. Given that those are separate BSDFs, 
the user would still be free to connect the coating BSDF output to the 
diffuse pass if they wished, though.
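
To make the idea concrete, here is a minimal sketch of what I have in 
mind, in Python. Everything here is my own assumption, not anything from 
the proposal: the class names, the Schlick Fresnel split between layers, 
the toy glossy lobe, and the pass dictionary are all hypothetical.

    import math

    def schlick_fresnel(f0, cos_theta):
        # Schlick's approximation to the Fresnel reflectance of the coating.
        return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5

    class Lambert:
        def __init__(self, albedo):
            self.albedo = albedo
        def evaluate(self, wi, wo):
            return self.albedo / math.pi

    class Glossy:
        def __init__(self, exponent):
            self.exponent = exponent
        def evaluate(self, wi, wo):
            # Toy normalized-Phong lobe around the half vector, for
            # illustration only.
            h = [a + b for a, b in zip(wi, wo)]
            n = math.sqrt(sum(c * c for c in h))
            cos_h = h[2] / n if n > 0.0 else 0.0
            return (self.exponent + 2.0) / (2.0 * math.pi) \
                * max(0.0, cos_h) ** self.exponent

    class LayeredBSDF:
        # Hypothetical layered material: glossy coating over diffuse substrate.
        def __init__(self, coating, substrate, f0=0.04):
            self.coating = coating
            self.substrate = substrate
            self.f0 = f0
        def evaluate(self, wi, wo, passes):
            # The Fresnel term decides how much light the coating reflects;
            # the remainder reaches the substrate below.
            f = schlick_fresnel(self.f0, max(0.0, wi[2]))  # local frame, z = normal
            spec = f * self.coating.evaluate(wi, wo)
            diff = (1.0 - f) * self.substrate.evaluate(wi, wo)
            # Each layer naturally drives its own render pass...
            passes['specular'] = passes.get('specular', 0.0) + spec
            passes['diffuse'] = passes.get('diffuse', 0.0) + diff
            # ...but nothing would stop a user from routing them differently.
            return spec + diff

    passes = {}
    bsdf = LayeredBSDF(Glossy(64.0), Lambert(0.8))
    bsdf.evaluate((0.0, 0.0, 1.0), (0.0, 0.0, 1.0), passes)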

Typical legacy shaders that combine diffuse and specular are actually 
emulating a double-layer material, except that everything is mixed up. 
Then we talk about separating passes out of those shaders. This legacy 
shader model has made us think about the issue in reverse: we should 
think of combining layers rather than separating them. I think the model 
is easier to understand when those "passes" are explicitly separated into 
different BSDF layers. And it is closer to the real thing anyway.

I'm having difficulty seeing how that separated BSDF output would work 
for any path beyond the camera-to-material path, though. As soon as we 
start accumulating secondary path contributions, it seems to me that this 
separation gets fuzzy, since the specular component (the coating BSDF) at 
the first hit receives contributions from everything: direct light and 
indirect light coming from specular and diffuse reflections 
indiscriminately. I think you acknowledge this in your wiki too.

This situation is not limited to indirect lighting. Say we are looking at 
an object reflected in a mirror, and we see both the object and its 
reflection in the shot. How will this separation be kept through the 
mirror? Is the reflected image considered totally specular even if the 
reflected object is partially specular and partially diffuse? Which parts 
of the reflected image go into the specular pass and which into the 
diffuse pass? If those passes are then tweaked in the compositor, how 
will that affect the reflected image?

My intention is not to prove that this approach should be abandoned. 
Having done production work, I fully understand the need to render in 
passes and tweak them in the compositor. I mainly want to expose the 
possible limitations so that solutions can be examined.

> There's plenty of use cases for mixing them, when you want to blend
> from one material to another, or varying the material using a texture,
> etc. ...

Yes. I see the point. I wonder if there is another way of achieving this 
goal, though. Blending the outputs of BSDFs will monotonically blend one 
appearance into another. This type of blending has the effect of 
temporarily layering two BSDFs, somewhat like the in-between frames of a 
morph blend where we can see both images superposed.

An alternative would be to vary the BSDF properties. For instance, 
blending one material into another could also be done by blending the 
values of a stack of BSDF properties from one set into another. So one 
could vary the roughness of the coating BSDF independently from the 
roughness of the underlying BSDF, and independently again from the 
absorption of the coating BSDF. Each property could follow a different 
IPO curve, for instance. This would be more powerful than blending the 
outputs. At the least, this method should be possible. Doing it this way 
would not produce the superposition effect.
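
A sketch of the difference, under the same assumptions as the earlier 
sketch (hypothetical names, with a per-property curve standing in for the 
IPO curves):

    import math

    def blend_outputs(bsdf_a, bsdf_b, t, wi, wo):
        # Output blending: superposes two appearances, like a cross-dissolve.
        return (1.0 - t) * bsdf_a.evaluate(wi, wo) + t * bsdf_b.evaluate(wi, wo)

    def blend_properties(props_a, props_b, curves, time):
        # Property blending: interpolate the parameter sets, each property
        # along its own curve, then build and evaluate a single BSDF from
        # the result, so every in-between state is itself a plausible material.
        blended = {}
        for name in props_a:
            t = curves[name](time)
            blended[name] = (1.0 - t) * props_a[name] + t * props_b[name]
        return blended

    curves = {
        'coating_exponent': lambda t: t * t,          # roughness eases in
        'coating_absorption': lambda t: t,            # absorption blends linearly
        'substrate_albedo': lambda t: math.sqrt(t),   # albedo eases out
    }
    blended = blend_properties(
        {'coating_exponent': 64.0, 'coating_absorption': 0.1, 'substrate_albedo': 0.8},
        {'coating_exponent': 8.0, 'coating_absorption': 0.6, 'substrate_albedo': 0.2},
        curves, 0.5)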

There is a relatively recent paper about mixing BSDFs to produce a new 
BSDF. They compiled several BSDFs, and the user can vary the appearance 
by manipulating sliders to get a more metallic appearance, a more plastic 
appearance, etc. But they do that by mixing the BSDF properties, not by 
mixing the BSDF outputs.

> ... But the point is that we are not restricting ourselves to
> physically motivated use cases. We get a node setup created by a user,
> and have to deal with that somehow, even if it is a "legacy" material
> or doing something that makes no sense physically, a mix node with
> different blending nodes is just a basic feature of any node shading
> system.

Yes. I understand. I'm not trying to say that simple output blending 
should not be available. There are a ton of situations where we wouldn't 
need the power of blending BSDF properties; it is fine for animating 
transitions, for instance. But when the intention is to produce a 
different BSDF by mixing two BSDFs, I believe a better result would be 
achieved by mixing the BSDF properties.

So blending BSDFs, I agree. I see three ways to do it: 1) blending the 
output colors, 2) blending the Monte Carlo integrations, and 3) blending 
the BSDF properties.

> Letting the node setup create a BXDF and using that for rendering
> (which is similar to pbrt materials), or evaluating the actual nodes
> while rendering would really be equivalent when it comes to physically
> based rendering algorithms.

I'm not sure I understand this one. What do you mean by "letting the node 
setup create a BXDF"? That the user can create a BSDF with a set of 
nodes? Or that a set of nodes is designed to produce a BSDF object (or 
data type) in the form of another set of nodes? Or that a set of nodes is 
designed to tweak one of the preset BSDF nodes?

> Where it becomes useful is when you want to do non-physical things,
> like this node setup:
> http://wiki.blender.org/uploads/1/1d/Shading_nodes_C.png
>
> In that graph it is using the BXDF implicitly in the Light node, so
> not necessarily passing it along between nodes. ...

I don't understand what this node setup is doing. Can you explain? You 
say that the BXDF is used implicitly in the Light node. From that I 
deduce that the color output of the Light node is the result of 
processing the light through the BSDF. Is that correct?

What do you mean by "BSDF don't have access to colors that are already lit?"

Otherwise, if I interpret verbatim what I see in those nodes: the light 
color at a hit point is multiplied by the AO factor at the same hit 
point, and this is written to the combined pass. This has the effect of 
coloring the AO with the light color. Then the Lambert BSDF does its 
shading calculation for the same hit point and writes the resulting color 
to the bxdf pass. Later, in compositing, I can combine them.

I guess this node setup is being evaluated for every pixel?
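
If my reading is right and this runs per pixel (or per shading point), 
the graph would amount to something like the sketch below. Again, every 
name here is my own guess at what the nodes do, not the actual design:

    def shade_point(light_color, ao_factor, lambert, wi, wo, passes):
        # Light node output times the AO factor goes to the combined pass,
        # which colors the AO with the light color.
        passes['combined'] = light_color * ao_factor
        # The Lambert BSDF shades the same hit point into the bxdf pass.
        passes['bxdf'] = lambert.evaluate(wi, wo)
        # The two passes would then be recombined in the compositor.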

Sorry for so many questions, but I feel my misunderstanding of these 
issues is what gets in the way of my getting behind the shading system 
proposal.

> ... If we were passing
> along a BXDF type it would allow you a bit more flexible setups, in
> that you could have multiple Light nodes driven by multiple BXDF's
> setup and mix the results of those. ...

To me, a BSDF interacts with light; that is the fundamental purpose of a 
BSDF. It receives light and outputs reflected light. When I read 
"multiple Light nodes driven by multiple BXDF's", I'm lost. It seems like 
the reverse of the logical thing to do. Lights are what drive the 
illumination in a scene; lights are not driven. So there seem to be some 
fundamental assumptions behind such a sentence that I am missing, and I'd 
like you to elaborate on them.
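
To show what I mean by the logical direction, here is how I picture the 
contract, as a sketch with hypothetical names (the light.sample interface 
is my own invention): the lights drive the scene, and the BSDF turns 
incoming light into reflected light.

    def reflected_radiance(bsdf, lights, point, normal, wo):
        # L_o = sum over lights of f(w_i, w_o) * L_i * cos(theta_i):
        # the BSDF receives light and outputs reflected light.
        total = 0.0
        for light in lights:
            wi, li = light.sample(point)  # direction to light, incoming radiance
            cos_theta = max(0.0, sum(a * b for a, b in zip(wi, normal)))
            total += bsdf.evaluate(wi, wo) * li * cos_theta
        return total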

> I don't think they are only adding color values, how else would they
> sample diffuse + highly specular materials with any efficiency? ...

From their document "Understanding" (Understanding Mantra rendering), it 
is obvious that efficiency is not their main concern. Read the section 
"== sampling ==", their description of the sampling methods of the 
micropolygon renderer and the raytrace renderer. Their sampling methods 
are not intelligent at all. Search for "10 million" in that document and 
read their example about the 10-million-polygon object seen from a far 
distance. That is why I believe they only mix the outputs, after each 
BSDF has already done its deterministic sampling.

> ... But what they do doesn't matter that much to me.

To me neither. I mean, I don't want to keep on speculating on what they 
might do or not. I'd like to get the input from a real Mantra/VEX user 
though.

> Passing along a vector, and having mix/layer nodes pick randomly from
> inputs is simple and pretty much what pbrt does. So I know individual
> BXDFs + mixing/layering in a node setup will work.

Yup. This will work.
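
As I understand it (from memory of pbrt, so details may differ), sampling 
picks one component with probability equal to its blend weight, while the 
value and pdf are still combined over all components, so the estimate 
stays consistent with the blended BSDF. A sketch, assuming a hypothetical 
BSDF interface with sample/evaluate/pdf methods and weights that sum to 
one:

    import random

    def sample_mix(components, weights, wo):
        # Pick one lobe, with probability equal to its blend weight,
        # to generate the incoming direction.
        r = random.random()
        acc = 0.0
        chosen = components[-1]
        for bsdf, w in zip(components, weights):
            acc += w
            if r < acc:
                chosen = bsdf
                break
        wi, _ = chosen.sample(wo)
        # Evaluate value and pdf over *all* lobes, weighted, so the
        # Monte Carlo estimator matches the blended BSDF.
        value = sum(w * b.evaluate(wi, wo) for b, w in zip(components, weights))
        pdf = sum(w * b.pdf(wi, wo) for b, w in zip(components, weights))
        return wi, value, pdf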

Yves 


