[Bf-committers] Shading System Proposals

Yves Poissant ypoissant2 at videotron.ca
Sat Dec 19 03:48:55 CET 2009


Brecht,


>> Typical legacy shaders that combine diffuse and specular are actually
>> emulating a double-layer material, except that everything is mixed up.
>> Then we talk about separating passes from those shaders. This legacy
>> shader model made us think about this issue in reverse. We should
>> instead think of combining the layers rather than separating them. I
>> think it makes the model more easily understandable when those "passes"
>> are explicitly separated into different layers of BSDFs. And it is
>> nearer to the real thing anyway.
>
> I agree we should encourage users to build shaders with layering.
> However, I am not sure what you are suggesting. That instead of a node
> system we should restrict the user to a layer system?

No. I'm not suggesting that we restrict anybody to any way of working.
But the way things are represented in an application helps the user build a
mental image of the concepts. So if the provided preset BSDFs, legacy or new,
are designed and represented as layered BSDFs, then the user will get
used to this representation and it will guide his eventual attempts at
building new BSDFs. On the other hand, if the supplied BSDFs look like the
current material nodes, then users will try to replicate that model.

Just provide well-designed examples to start. Right now, the shaders are
separated into diffuse and specular (and others), and in the UI the diffuse
shader comes first and then the specular. This does not help convey the idea
of layered BSDFs. It conveys the idea that a typical shader should have a
diffuse and a specular component, and this is not true. Diffuse and specular
are really just two BSDFs, and they don't both need to be present on a
material. For example, a colored transparent lacquer over a metal would be
better represented with two specular BSDFs. Right now, the only way to do
that is with material nodes, and because each material node comes with all
the CG shader hacks together, most of them need to be turned off.
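
To make this concrete, here is a small Python sketch of what I mean; the
function names and the blend weight are made up for illustration, and none
of this is existing Blender code:

import math

# Two Phong-style specular lobes standing in for "lacquer over metal".
# There is no diffuse term anywhere in this material.

def phong_lobe(cos_r, exponent):
    """Unnormalized Phong lobe; cos_r is the cosine of the angle between
    the mirror reflection direction and the outgoing direction."""
    return max(cos_r, 0.0) ** exponent

def lacquer_over_metal(cos_r, coat_weight=0.3):
    """Blend a sharp, tinted coat lobe over a broader metallic lobe."""
    coat = phong_lobe(cos_r, exponent=200.0)   # thin glossy lacquer
    base = phong_lobe(cos_r, exponent=20.0)    # rougher metal underneath
    return coat_weight * coat + (1.0 - coat_weight) * base

print(lacquer_over_metal(math.cos(math.radians(5.0))))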

Also, layering BSDFs and doing it in a physically plausible way by
manually linking node setups can become a nightmare. It would be nice to
have a Layering node that would take care of a lot of that sort of
bookkeeping, like keeping the energy balanced between the individual layers
even when taking the Fresnel or Kubelka-Munk factors into account. This
could be turned OFF of course. Again, this sort of automatic handling
would not be a panacea and could be overridden or done manually. But
having it as an example could show how to approach BSDF building in a more
plausible way.
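
A rough sketch of the kind of bookkeeping such a Layering node could
automate, assuming a simple Schlick Fresnel split between a coat and its
base layer (the function and layer names are only illustrative, not an
existing Blender API):

# The coat reflects a Fresnel fraction F of the energy and passes (1 - F)
# down to the base layer, so the two layers can never sum to more than 1.

def schlick_fresnel(cos_theta, f0):
    """Schlick approximation of the Fresnel reflectance."""
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5

def layered_weights(cos_theta, f0_coat=0.04):
    """Energy split between a coat layer and the base layer under it."""
    f = schlick_fresnel(cos_theta, f0_coat)
    return {"coat": f, "base": 1.0 - f}   # sums to 1 by construction

weights = layered_weights(cos_theta=0.7)
assert abs(weights["coat"] + weights["base"] - 1.0) < 1e-9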

It would be nice, BTW, to add an "Energy conservation" checkbox for the
Phong, Blinn, etc. legacy shaders. For most of those shaders, we know how to
do that. Oh yes... And allow the Spec property to be increased beyond 2.0.
The current limit makes it impossible to manually energy-balance the
specular shaders, even for someone who knows how to do it and would like to
do it.
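
For Phong, for instance, one common way to do it is to scale the lobe by
(n + 2) / (2 * pi). A hedged Python sketch of what such a checkbox could
amount to (this is the textbook normalization, not Blender's current code):

import math

def phong_specular(cos_r, exponent, energy_conservation=True):
    """Phong lobe with an optional normalization factor so the lobe
    integrates to at most 1 over the hemisphere."""
    lobe = max(cos_r, 0.0) ** exponent
    if energy_conservation:
        lobe *= (exponent + 2.0) / (2.0 * math.pi)
    return lobe

print(phong_specular(0.9, exponent=50.0))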

Of course, if someone wants to experiment and proceed in any other way, why
not. But these are not the sort of issues I'm interested in discussing. I'd
rather focus my discussion on looking for possible issues and solutions for
eventually integrating the proposed shading system and its node-based BSDF
representation into a physically based rendering pipeline, and on making
sure the proposed shading system will not prevent building plausible BSDFs
and using them in a plausible way if needed.

If a photon mapper or a path tracer is integrated into Blender, will we be
able to design node-based BSDFs that work in those rendering contexts
too? Will we be able to design BSDFs that work in both rendering
contexts? Will a BSDF designed in a GI rendering context be directly usable
in a legacy CG rendering context for quick renders? Will a BSDF designed
around a diffuse shader and a specular shader give usable results in a GI
rendering context? Those are the questions that interest me.

>> So blending BSDFs, I agree. I see three ways to do that: 1) blending the
>> output colors, 2) blending the Monte Carlo integrations, and 3) blending
>> the BSDF properties.
>
> Blending output colors and properties are both possible with a node
> setup, and if the latter is better we can try to figure out a way to
> do that easier than doing it manually.

Blending the Monte Carlo integration process could also be handled with
nodes if we had a switching node that would switch between two or more BSDFs
based on some static or dynamic probability for each BSDF, and a Russian
roulette when the time comes to integrate samples. We don't want to sample
both/all BSDFs and weight-combine the resulting colors. We want to
sample/integrate each BSDF according to its probability of contributing.
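
A hedged sketch of such a switching node, with stand-in lobes and made-up
names, just to show that picking one BSDF per sample with a probability
equal to its blend weight still converges to the weighted blend:

import random

def sample_blended_bsdf(bsdfs, weights, wi, wo):
    """Pick one lobe per sample with probability equal to its blend weight.
    The expected value is sum(w_i * f_i), i.e. the weighted blend, without
    ever evaluating all the lobes for one sample."""
    r = random.random()
    accum = 0.0
    for bsdf, weight in zip(bsdfs, weights):
        accum += weight
        if r <= accum:
            return bsdf(wi, wo)
    return bsdfs[-1](wi, wo)   # guard against floating-point round-off

# Stand-in lobes, for illustration only.
diffuse = lambda wi, wo: 0.5
specular = lambda wi, wo: 2.0
n = 10000
estimate = sum(sample_blended_bsdf([diffuse, specular], [0.7, 0.3], None, None)
               for _ in range(n)) / n
print(estimate)   # converges to 0.7 * 0.5 + 0.3 * 2.0 = 0.95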

> I was referring to implementation. The node would allocate a BXDF in
> memory, fill it in and pass that through to the next node. Using the
> BXDF in the integrator then would not refer back to the original nodes
> but instead use this constructed BXDF. So you could say it creates
> another set of nodes in a way. This is like Material GetBSDF in pbrt.
> I'm not sure I'm a proponent of this though, I prefer to just keep
> using the original nodes.

I will have to go and re-read this pbrt section. It's been 4 years already.
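
If I understand the idea correctly, it is roughly the following. This is
only an illustrative Python sketch, loosely modeled on pbrt's
Material::GetBSDF and not actual Blender code:

class BSDF:
    """Per-hit container that the integrator uses instead of the nodes."""
    def __init__(self):
        self.lobes = []

    def add(self, lobe):
        self.lobes.append(lobe)

    def evaluate(self, wi, wo):
        return sum(lobe(wi, wo) for lobe in self.lobes)

class LayeredMaterialNode:
    """A node that contributes its lobes to the per-hit BSDF."""
    def __init__(self, lobes):
        self.lobes = lobes

    def get_bsdf(self, hit):
        bsdf = BSDF()
        for lobe in self.lobes:
            bsdf.add(lobe)
        return bsdf

# The integrator only sees the constructed BSDF, never the node graph.
node = LayeredMaterialNode([lambda wi, wo: 0.3, lambda wi, wo: 0.1])
bsdf = node.get_bsdf(hit=None)
print(bsdf.evaluate(None, None))   # 0.4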

>> Otherwise, if I interpret verbatim what I see in those nodes, the light
>> color at a hit point is multiplied by the AO factor at the same hit point
>> and this is written in the combined pass. This has the effect of coloring
>> the AO with the light color. Then the Lambert BSDF does its shading
>> calculation for the same hit point and writes the resulting color in the
>> bxdf pass. Later, through compositing, I can combine them.
>
> For the pixel, plugging something in the combined node would override
> automatically using the BSDF for shading the first hit. So it would
> compute lighting with the BSDF in the Light node, and then multiply
> that with AO and fill it into the pixel. So I would say the effect is
> darkening the light with AO.

I'm trying to understand.

1) The BXDF input on the output node is there to pass the Lambert BXDF
back to the Light node. So now, the Light node can use this BSDF to compute
the color of the hit. This shaded color is then multiplied by the AO.
Somehow, I doubt this is the correct interpretation, because it would mean
the BSDF needs to travel backward through the multiply node to the Light
node.

or 2) The light and the AO at the hit are multiplied together and that
result is passed to the output node, where a Lambert BSDF is waiting to pick
up that information for shading the hit. But a BSDF needs a light vector,
and multiplying the light, which is light-vector dependent, with the AO,
which is not, would result in wrong shading. So I doubt this is the correct
interpretation too.

or 3) ... I don't know.

BTW, I assume that the Light node represents any light in the scene. Or is
one Light node associated with only one light in the scene?

> It is indeed not the right thing to do physically speaking. The
> purpose of that would be to allow some more flexibility to do
> non-physical things at the first hit, like you can in e.g. renderman.
> What goes into the pass outputs like combined is not a BSDF, but a
> color, so it is not restricted to being driven by lights, instead the
> material can drive the lights and combine things in different ways.
> Evidently that doesn't work for all rendering algorithms.

I think I would like to concentrate on plausible node flows. What I feel,
here, is that the philosophy is to give all the flexibility that can
possibly be given: allow any node to connect to any node as long as the
input and output types are compatible or there are casting operators/nodes
for them. The number of possible combinations of nodes is staggering. So in
order to have a productive discussion, I'd like to stay with node setups
that make sense physically speaking.

One thing that I seem to observe is that the hit-to-camera connection needs
to be pretty much hard-wired, and the light-to-hit connection too. Is that
right? A couple of questions:

What would be a minimum node setup for just a traditional legacy CG render
of a scene?
How are multiple reflection bounces represented in a node setup? Implicitly
or explicitly?
What would a BSDF node setup look like if designed to be used with a photon
map or a path tracer?

Yves
