[Bf-committers] Shading System Proposals

Brecht Van Lommel brecht at blender.org
Mon Dec 21 10:30:15 CET 2009


Hi Yves,

On Sat, Dec 19, 2009 at 3:48 AM, Yves Poissant <ypoissant2 at videotron.ca> wrote:
> No. I'm not suggesting that we restrict anybody to any way of working.
> But the way things are represented in an application helps the user build
> a mental image of the concepts. So if the provided preset BSDFs, legacy or
> new, are designed and represented like layered BSDFs, then the user will
> get used to this representation and it will guide his eventual attempts at
> building new BSDFs. On the other hand, if the supplied BSDFs look like the
> current material nodes, then users will try to replicate that model.
>
> Just provide well-designed examples to start. Right now, the shaders are
> separated as diffuse and specular (and other) and in the UI, the diffuse
> shader comes first and then the specular. This does not help convey the
> idea of layered BSDFs. It conveys the idea that a typical shader should
> have diffuse and specular components, and this is not true. Diffuse and
> specular are really just two BSDFs. And they don't need to both be present
> on a material. For example, a colored transparent lacquer over a metal
> would be better represented with two specular BSDFs. Right now, the only
> way to do that is with material nodes, and because each material node
> comes with all the CG shader hacks together, most of them need to be
> turned off.
>
> Also, layering BSDFs and doing that in a physically plausible way by
> manually linking node setups can become a nightmare. It would be nice to
> have a Layering node that would take care of a lot of that sort of
> bookkeeping, like keeping the energy balanced between the individual
> layers even when taking the Fresnel or the Kubelka-Munk factors into
> account. This could be turned off, of course. Again, this sort of
> automatic handling of things would not be a panacea and could be
> overridden or done manually. But having that as an example could show how
> to approach BSDF building in a more plausible way.

The idea would be to have presets that control some node setup, so
we'll have to figure out good presets and nodes needed to build them.
I don't have an overall picture of which ones would be needed yet. What
I'm interested in for the design is: what kind of information needs to
flow between nodes for such energy conservation to work, for example?
Currently my thinking is that we need:

* eval: given in/out vectors, return color
* sample: given in vector, return out vector + pdf
* intensity estimate: given in vector (or not?), return the integral
over outgoing directions

But that might not be enough for all possible setups?
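Just to make the three-part interface above concrete, here is a rough
Python sketch for a Lambert BSDF. All names are hypothetical (this is
not Blender code); vectors are (x, y, z) tuples with the surface normal
fixed at (0, 0, 1) so the example is self-contained:

```python
import math
import random

class LambertBSDF:
    """Sketch of the eval / sample / intensity-estimate interface."""

    def __init__(self, albedo):
        self.albedo = albedo  # reflectance in [0, 1]

    def eval(self, wi, wo):
        # eval: given in/out vectors, return the BSDF value ("color").
        # Lambert is constant over the hemisphere: albedo / pi.
        if wi[2] <= 0.0 or wo[2] <= 0.0:
            return 0.0  # below the surface
        return self.albedo / math.pi

    def sample(self, wi, rng=random):
        # sample: given the in vector, return an out vector + its pdf.
        # Cosine-weighted hemisphere sampling; pdf = cos(theta) / pi.
        r1, r2 = rng.random(), rng.random()
        phi = 2.0 * math.pi * r1
        cos_theta = math.sqrt(1.0 - r2)
        sin_theta = math.sqrt(r2)
        wo = (sin_theta * math.cos(phi),
              sin_theta * math.sin(phi),
              cos_theta)
        return wo, cos_theta / math.pi

    def intensity(self, wi=None):
        # intensity estimate: integral of eval * cos over outgoing
        # directions. For Lambert this is exactly the albedo, and it
        # does not depend on the in vector.
        return self.albedo
```

For Lambert the intensity estimate is exact and direction-independent;
for glossy or Fresnel-weighted lobes it would presumably depend on the
in vector, which is why the "(or not)?" question matters.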

> Blending in the Monte Carlo integration process could also be handled
> with nodes if we had a switching node that would switch between two or
> more BSDFs based on some static or dynamic probabilities for each BSDF,
> and do russian roulette when it comes time to integrate samples. We don't
> want to sample both/all BSDFs and weight-combine the resulting color. We
> would want to sample/integrate each BSDF according to its probability of
> contributing.

Right, I think this is what the mix node should do already in its
sample function. If it has the intensity estimate from incoming nodes,
and has the blending factor, it should be able to do this.
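A rough sketch of what that mix-node sample function could look like,
picking one input stochastically instead of sampling both. Each layer is
represented here as an (intensity estimate, sample function) pair, and
the names are hypothetical, not actual Blender API:

```python
import random

def mix_sample(layer_a, layer_b, factor, rng=random):
    """Russian-roulette style mix: pick one input with probability
    proportional to its blend weight times its intensity estimate,
    so stronger layers get sampled more often."""
    int_a, sample_a = layer_a
    int_b, sample_b = layer_b
    weight_a = (1.0 - factor) * int_a
    weight_b = factor * int_b
    total = weight_a + weight_b
    if total == 0.0:
        return None, 0.0  # nothing contributes, nothing to sample
    prob_a = weight_a / total
    if rng.random() < prob_a:
        direction, pdf = sample_a()
        # Scale the pdf by the selection probability so the final
        # estimator stays unbiased.
        return direction, pdf * prob_a
    direction, pdf = sample_b()
    return direction, pdf * (1.0 - prob_a)
```

The key point is the pdf scaling at the end: because only one layer is
sampled per call, the combined pdf has to account for the probability of
having chosen that layer.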

>> For the pixel, plugging something in the combined node would override
>> automatically using the BSDF for shading the first hit. So it would
>> compute lighting with the BSDF in the Light node, and then multiply
>> that with AO and fill it into the pixel. So I would say the effect is
>> darkening the light with AO.
>
> I'm trying to understand.
>
> 1) The BXDF input in the output node is there to pass in the Lambert BXDF
> back to the Light node. So now, the light node can use this BSDF to compute
> the color of the hit. This shaded color is then multiplied by the AO.
> Somehow, I doubt this is the correct interpretation because this would
> mean the BSDF needs to travel backward through the multiply node to the
> Light.
>
> or 2) The light and the AO at the hit are multiplied together and that
> result is passed to the output node, where a Lambert BSDF is waiting to
> pick that information up for shading the hit. But then a BSDF needs a
> light vector, and multiplying the light, which is light-vector dependent,
> with the AO, which is not, would result in wrong shading. So I doubt this
> is the correct interpretation too.
>
> or 3) ... I don't know.

Right, this is quite confusing, but 1) is correct. What would really
be happening is this:
http://users.telenet.be/blendix/shader25_plugging_in_bxdf.png

We could require these to be plugged in explicitly, or perhaps make
separate output nodes to make this clearer. It would mostly be
convenient not to have to connect these things, but maybe it's more
important to be clear.

> BTW, I assume that the Light node represents any light in the scene. Or is
> one Light node associated to only one light in the scene?

That would be configurable; the default would be all lights, but I
guess you would be able to set a light group.

> One thing that I seem to observe is that the hit-to-camera connection
> needs to be pretty much hard-wired, and the light-to-hit one too. Is
> that right?

It depends a bit, but if you want the rendering algorithm to support
all possible node setups, then yes that needs to be hard-wired. But if
it doesn't make sense for some algorithm to support e.g. a Light node,
then it can simply not be supported there.

> What would be a minimum node setup for just traditional Legacy CG render of
> a scene?

Disregarding textures, there would be a diffuse and a specular node,
joined by an add node which then goes into the bxdf output. I'm not sure
how to do emission yet; I guess there needs to be a separate output for
that as well, into which a color would be plugged (or should that be
an EDF?).
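That minimal legacy setup could be sketched like this, with closures
standing in for nodes. Everything here is hypothetical illustration:
the diffuse node is Lambert, the specular node is a Phong-style lobe,
and the normal is fixed at (0, 0, 1):

```python
import math

def diffuse_node(albedo):
    # Lambert stand-in for the diffuse node: constant albedo / pi.
    return lambda wi, wo: albedo / math.pi

def specular_node(strength, hardness):
    # Phong-style stand-in for the specular node: reflect wi about the
    # normal (0, 0, 1) and raise its alignment with wo to 'hardness'.
    def f(wi, wo):
        refl = (-wi[0], -wi[1], wi[2])
        cos_a = max(0.0, refl[0] * wo[0] + refl[1] * wo[1] + refl[2] * wo[2])
        return strength * cos_a ** hardness
    return f

def add_node(a, b):
    # Add node: the sum of the two plugged-in BSDFs.
    return lambda wi, wo: a(wi, wo) + b(wi, wo)

# The combined result that would be plugged into the bxdf output.
material = add_node(diffuse_node(0.8), specular_node(0.5, 20))
```

Of course a real add node would also have to combine sample functions
and intensity estimates, not just eval, which is where the earlier
interface questions come back in.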

> How are multiple-reflection bounces represented in a nodes setup? Implicitly
> or explicitly?

Implicitly, bounces would use the BSDF.

> What would a BSDFs node setup look like if designed to be used in a photon
> map or a path tracer?

The same hopefully, that is the intention anyway :). Both would be
using the BSDF. However, if the user wanted to, they would be able to
explicitly specify what part is diffuse and what is specular, though
by default that would be automatically determined.

Brecht.

