[Bf-committers] Shading System Proposals

Yves Poissant ypoissant2 at videotron.ca
Wed Dec 16 04:45:19 CET 2009


I think I would prefer that we use the term BSDF instead of BxDF. BSDF means 
the unification of the BRDF and the BTDF, while BxDF means that we don't know 
which one of those two we are dealing with. I can see that separating the two 
may be desirable for flexibility purposes. But I think that seeing them as 
separate, and implementing them separately, can only lead to 
difficult-to-solve issues eventually, while seeing them as unified allows 
someone to use only the transmissive part, only the reflective part, or both 
in combination.

From: "Matt Ebb" <matt at mke3.net>

> I've done some research on this, and I think I have a fair idea of
> what's going on in there. I actually suspect it's quite similar to
> what I was toying around with before [1]. Rather than containing
> merely values, the brdf type probably contains some function pointers
> to pre-made functions (i.e. lambert distribution, phong distribution,
> etc), since that's really the only way to represent a Scattering
> Distribution _Function_.

You and I are about on the same page there. I agree with your BxDF type 
model. But I don't think the Houdini BSDF (pbr or not) is anywhere near that 
sophisticated.
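
To make the idea concrete, here is a minimal sketch of what such a type 
could look like. All names here are hypothetical; this is not Houdini's or 
anyone's actual API, just the "values plus function pointers" idea in C++:

  struct ShadeGeom { float N[3]; float I[3]; };  // normal, incoming dir

  struct BxDF {
      // callbacks into a library of pre-made distributions
      // (lambert, phong, ...)
      void  (*sample)(const BxDF *self, const ShadeGeom *g,
                      float u, float v, float out_dir[3]);
      float (*eval)(const BxDF *self, const ShadeGeom *g,
                    const float out_dir[3]);
      int   flags;      // what it can represent: diffuse, glossy, ...
      float params[4];  // e.g. a glossiness value
  };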

> It also contains some other flags to say what
> kind of scattering it can represent (diffuse, glossy, specular,
> emission) etc, ...

That is an interesting idea. I had never thought of that. I mean, I had 
never thought of having a sort of library of scattering functions that could 
be used by a BSDF. It is not yet clear how this could be used, though. It is 
common to use an approximation of the true distribution function to generate 
the sample directions and to scale by the probability of the approximated 
distribution, but this is usually done in a controllable way, to make sure 
there are no large discrepancies between the true distribution and the 
approximated one that could cause severe sampling artifacts. Usually, the 
approximated distribution is carefully selected to fully cover the whole 
range of the true distribution over the whole range of the BSDF parameters.
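
For clarity, this is the estimator I mean, as a sketch. Every name is a 
hypothetical stand-in (approx_sample, approx_pdf, true_bsdf are not anyone's 
API):

  struct Vec3  { float x, y, z; };
  struct Color { float r, g, b; };

  Vec3  approx_sample(float u, float v);  // warp (u,v) to a direction
  float approx_pdf(const Vec3 &wi);       // density of that warp
  Color true_bsdf(const Vec3 &wi);        // exact scattering value

  Color estimate_one_sample(float u, float v)
  {
      Vec3  wi  = approx_sample(u, v);
      float pdf = approx_pdf(wi);
      // If the approximation fails to cover the true distribution,
      // pdf gets tiny where true_bsdf is large and the ratio below
      // explodes: those are the severe sampling artifacts.
      if (pdf <= 0.0f)
          return Color{0.0f, 0.0f, 0.0f};
      Color f = true_bsdf(wi);
      return Color{f.r / pdf, f.g / pdf, f.b / pdf};
  }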

Why do we need an "emission" scattering type?

> ... and probably some custom data to represent bsdf
> parameters (like a glossiness slider).

And/or an input so the glossiness can be varied spatially or temporally.

> I'm guessing this via a few VEX functions:
> * sample_bsdf() [2] that takes a bsdf, 2d random samples, and shading
> geometry as input, and returns an outgoing vector - eg. for lambert a
> cosine weighted vector in hemisphere, and

You mean a cosine-weighted distribution? For a Lambert, the probability 
distribution is constant, so it would be uniformly distributed over the 
hemisphere.

It is indeed a good approach to pass in a uv coordinate and have the BSDF 
compute a direction from it. This way, we can avoid sample correlation 
artifacts, which can produce interesting moiré and Lissajous patterns but 
are undesirable in real rendering situations. In a path tracing approach, 
every bounce should use sample points taken two dimensions further into a 
multi-dimensional sampler (Sobol, for example).
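
Both warps under discussion are easy to write down. A sketch, with 
hypothetical names, of mapping a 2D sample to a direction in the local 
frame around the normal (0,0,1):

  #include <algorithm>
  #include <cmath>

  struct Vec3 { float x, y, z; };

  const float PI = 3.14159265f;

  // Uniform over the hemisphere; pdf = 1 / (2*pi).
  Vec3 uniform_hemisphere(float u, float v)
  {
      float z   = u;  // cos(theta)
      float r   = std::sqrt(std::max(0.0f, 1.0f - z * z));
      float phi = 2.0f * PI * v;
      return Vec3{r * std::cos(phi), r * std::sin(phi), z};
  }

  // Cosine-weighted alternative; pdf = cos(theta) / pi.
  Vec3 cosine_hemisphere(float u, float v)
  {
      float r   = std::sqrt(u);
      float phi = 2.0f * PI * v;
      return Vec3{r * std::cos(phi), r * std::sin(phi),
                  std::sqrt(std::max(0.0f, 1.0f - u))};
  }

Per bounce, (u, v) would then come from two dimensions further into the 
Sobol sequence, as described above.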

> * eval_bsdf() [3] which takes a bsdf and shading geometry input and
> returns a colour (proportion of light that's reflected) - eg for
> lambert, L.N .

In this case, the cosine weighting that comes from the dot product is 
correct because we are calculating the irradiance. In the case of 
sample_bsdf(), on the other hand, the cosine weighting is implied by the 
fact that the surface is bombarded by samples, and the sample densities on 
the surface are intrinsically cosine weighted. That is why the cosine 
weight is not needed when we sample.
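
As a sketch of the eval side (hypothetical names again; the BRDF itself is 
just albedo/pi for a Lambert, and the N.L factor is the irradiance term 
being discussed):

  struct Vec3 { float x, y, z; };

  float dot(const Vec3 &a, const Vec3 &b)
  {
      return a.x * b.x + a.y * b.y + a.z * b.z;
  }

  // Value (here a scalar) returned for a light direction L and a
  // surface normal N; zero for light arriving from behind.
  float eval_lambert(const Vec3 &N, const Vec3 &L, float albedo)
  {
      const float PI = 3.14159265f;
      float nl = dot(N, L);
      return nl > 0.0f ? (albedo / PI) * nl : 0.0f;
  }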

> These above functions would basically just execute the bsdf callbacks
> with the input information.

Yup.

> As for how it mixes bsdfs, some ideas are either it somehow
> dynamically generates new distribution function callbacks that
> represent multiple combined distributions (which i seriously doubt)

I doubt it too. Although it is feasible to mix both the reflectance and the 
distribution, I can't see any way to use the resulting distribution 
efficiently. I mean, how could we mix the importance sampling strategies?

> or it uses some kind of internal layering/stacking system where it keeps
> track of what BxDFs make up the entire BSDF, and with what weights.

There is a use case I can relate to. To me, layering BSDFs makes sense. 
Multiple-layer materials are pretty common, and it makes perfect sense to 
model them through a form of layering or stacking of multiple BSDFs.

On the other hand, I don't see what type of problem the "mixing BSDFs" model 
is trying to solve, and I would like to read use cases for it.

>> I can think of a few ways to do this using a node tree, doing the
>> computation F by evaluating nodes with some entirely or partially
>> excluded, and doing sample distribution by picking randomly from
>> distributions provided by nodes that contain a BXDF.
>
> You don't want to do it randomly, it should be weighted based on
> layering - a probability per BxDF component. So for example if you're
> shading a glass shader with perfect specular BRDF and specular BTDF,
> blended via fresnel, then if the incoming ray is at glancing angles,
> there's much higher probability (fresnel function) of picking an
> outgoing vector based on the BRDF, not the BTDF.

Yup. I agree with that.
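
That fresnel-weighted pick is easy to sketch too (hypothetical names; this 
is just the standard trick of dividing by the probability of the component 
you picked, so the estimate stays unbiased):

  enum Component { REFLECT, REFRACT };

  // fr is the Fresnel reflectance in [0,1], u a uniform random
  // number in [0,1). At glancing angles fr -> 1, so REFLECT is
  // picked almost always, exactly as described above.
  Component pick_component(float fr, float u, float *weight)
  {
      if (u < fr) {
          *weight = 1.0f / fr;  // compensate for the pick probability
          return REFLECT;
      }
      *weight = 1.0f / (1.0f - fr);
      return REFRACT;
  }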

>> Another
>> possibility would be to pass a long a BXDF type through nodes, then
>> for example a mix node could create a new BXDF that can do more clever
>> importance sampling. Neither seems particularly elegant or transparent
>> to the user to me, but I can't think of good alternatives at the
>> moment.
>
> That's basically what Mantra PBR does, explicitly with the bsdf data
> type/node connections. The output node has a bsdf input, so for
> situations where you just need to find an outgoing scattering
> direction, you a) get the final bsdf arriving at the output via the
> node tree, and b) pass that final bsdf to the sample_bsdf() function.

Mantra's bsdf data type is very basic, and its functionality is constrained 
to their model of shading. When they do micropolygon pbr, they shoot a 
user-defined number of secondary rays from the micropolygon vertices, and 
when they do raytraced pbr, they shoot a user-specified number of secondary 
rays from a user-specified number of pixel samples. Their sampling strategy 
is defined globally, not on a per-BSDF basis. What the BSDF can do, in 
addition to returning a color, is return a sample direction from a UV 
coordinate. That is already more than just returning a color, but it is 
insufficient for implementing bidirectional path tracing, for instance. 
Their pbr shading strategy is only a path tracing from the camera into the 
scene. They supply a set of BSDFs such as Phong, Ashikhmin, etc., with very 
easy-to-implement direction-warping functions, so their sample 
distributions exactly match their pdfs and their BSDFs don't have to 
provide a probability along with the sampling direction.
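
For instance, a Phong-style lobe has exactly that kind of warping function. 
This is a hypothetical sketch, not Mantra's code; the point is that when 
the warp is the pdf, the weight f/pdf is a constant and no probability ever 
needs to be returned:

  #include <algorithm>
  #include <cmath>

  struct Vec3 { float x, y, z; };

  // Warp (u,v) to a direction around the lobe axis (0,0,1) with
  // pdf(theta) proportional to cos(theta)^n.
  Vec3 phong_lobe(float u, float v, float n)
  {
      const float PI = 3.14159265f;
      float cos_t = std::pow(u, 1.0f / (n + 1.0f));
      float sin_t = std::sqrt(std::max(0.0f, 1.0f - cos_t * cos_t));
      float phi   = 2.0f * PI * v;
      return Vec3{sin_t * std::cos(phi), sin_t * std::sin(phi), cos_t};
  }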

Bottom line, I'm not impressed by Mantra, pbr or not, and I'm not ready to 
glorify their system.

> The more I understand about this, the more I really think it's the
> right way to go. They've really done an excellent job coming up with a
> modern system, without the 90s trappings, and with the flexibility of
> programmable/editable shading. I'd really like to see Blender head
> down that path too as a design that will serve us for the future.

I'm far from convinced that a BSDF data type is feasible. I can easily see 
a BSDF node, though. I can see its inputs and outputs and its 
functionality. But I can't see that as a data type that can be passed 
around through nodes. I can't see how such a data type could be used at 
some point in a noodle soup. I can't see why that would be useful either. 
What type of problem is passing a BSDF data type around supposed to solve? 
Why would you need to pass a BSDF around? Why is it more useful to use it 
"there" instead of "here"? If I need to transform data before feeding it to 
a BSDF, then I can do that through nodes. I don't need to pass the BSDF 
through the nodes. Lots and lots of questions.

I have not been able to figure out with certainty how a BSDF data type is 
used in VEX either. I think I'm on the same page as Brecht on this issue. 
In the video where we see that noodle where multiple BSDFs are being added 
together, it seems to me highly probable that they are only adding color 
values. The typical legacy CG shaders and their noodles fit exactly this 
model too: they feed a BSDF for the diffuse part, add a BSDF for the glossy 
part, and then add a BSDF for the specular (mirror) part. This model rings 
so many bells (you know, I = kd*D() + ks*S() + kr*R()) that it is hard to 
imagine it is doing anything fancier than just adding colors. So the 
different BRDFs can use multiple sampling to derive their respective 
shading, but I believe that their outputs are nothing more than colors.
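
Spelled out, the legacy model I mean is nothing more than this (names 
hypothetical; each term is already a plain colour by the time it is added):

  struct Color { float r, g, b; };

  Color add(Color a, Color b)
  {
      return Color{a.r + b.r, a.g + b.g, a.b + b.b};
  }

  Color scale(Color c, float k)
  {
      return Color{c.r * k, c.g * k, c.b * k};
  }

  // I = kd*D() + ks*S() + kr*R(), with D, S and R evaluated to
  // colours by the diffuse, specular and mirror parts respectively.
  Color shade(Color D, Color S, Color R, float kd, float ks, float kr)
  {
      return add(add(scale(D, kd), scale(S, ks)), scale(R, kr));
  }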

Yves 


