[Bf-committers] From Farsthary another announcement

joe joeedh at gmail.com
Fri Feb 6 09:29:14 CET 2009


Out of curiosity, does it really look terrible in practice if the
legacy<->BRDF conversions aren't always perfect?  I mean, if it looks
"good enough" (as Pixar's researchers put it :) ) then it'd probably
be fine.  I'm also wondering how a shading language would fit into
such a system.  You couldn't really restrict people to writing pure
BRDFs, so it wouldn't always be correct, necessarily.  It's kind of
like how in DSM, node materials can't always be handled correctly,
because they can output anything.

Joe

On Thu, Feb 5, 2009 at 7:15 AM, Yves Poissant <ypoissant2 at videotron.ca> wrote:
> From: "Brecht Van Lommel" <brecht at blender.org>
> Sent: Thursday, February 05, 2009 2:03 AM
>
>> Mainly my point here was that if you are implementing photon mapping or
>> something similar in Blender with the existing shaders, it is easy to make
>> a black box out of them for path tracing.
>
> Technically, I agree that it is relatively easy to wrap those components
> into a black box. Would it be a good idea to use that for path tracing? If
> it is only for playing or experimenting, then yes. But if it is to
> calibrate the path tracer so it can serve as a reference for other GI
> algorithms later, then I would say this is a very risky idea.
>
>> If you don't bother with things like importance sampling, to get the full
>> BRDF value it is really just a matter of adding the results from the
>> individual ones together, as far as I understand.
>
> Typically, you approach BRDF sampling in a probabilistic way using the
> technique of Russian roulette. When the BRDF is composed of separate
> components, you first select which one of the components is going to be
> sampled with the roulette; then that component does its distribution thing
> and returns the result. In some way, it can be viewed as adding the
> components' results together, but it serves the comprehension of the model
> better to view each sample as coming from a different, independent event,
> not added together. It is the global result of all this that is important.
> And the global result is also probabilistic in the sense that we evaluate
> the density of those individual irradiance events in the scene.
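>
> To make that concrete, here is a rough sketch in Python (the two-component
> setup and the sample_diffuse/sample_specular helpers are illustrative, not
> Blender code):
>
>     import random
>
>     def sample_brdf(kd, ks, sample_diffuse, sample_specular):
>         # Assumes kd + ks == 1 (see the normalization point below).
>         # Select one component with probability equal to its coefficient.
>         if random.random() < kd:
>             direction, value = sample_diffuse()
>             pdf = kd
>         else:
>             direction, value = sample_specular()
>             pdf = ks
>         # Divide by the selection probability so the estimator stays
>         # unbiased; the "sum" of components only appears in expectation,
>         # over many such independent events.
>         return direction, value / pdf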
>
>> I'm not saying though that in a physically based renderer this will give
>> great results, but it should be the same as when you use those functions
>> in another algorithm if that algorithm is implemented correctly, so it's a
>> useful sanity check.
>
> I have a lot to say about this, but I need to go to work, and if I start I
> will write yet another lecture ;-). So it will have to wait until I come
> back from work.
>
> But just for a start, since I mentioned using Russian roulette for
> selecting which component gets sampled: the Russian roulette assumes, for a
> separable BRDF, that the coefficients (kd, ks, kr, etc.) represent the
> probabilities of contribution and thus the probabilities of being selected.
> So it is assumed that kd+ks+kr+kt == 1. Typically, that is almost never the
> case for a legacy material representation, and the total is usually much
> higher than 1. So before using the roulette, the coefficients are
> normalized so the total is one, in a way forcing some form of physical
> correctness. The render result will then be different in the two renderers.
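>
> In code, that normalization step could look something like this (again just
> a sketch, with illustrative coefficient names):
>
>     def normalize_coefficients(kd, ks, kr, kt):
>         # Legacy materials routinely have kd + ks + kr + kt > 1, so
>         # rescale the coefficients before using them as selection
>         # probabilities in the roulette.
>         total = kd + ks + kr + kt
>         if total <= 0.0:
>             return 0.0, 0.0, 0.0, 0.0
>         return kd / total, ks / total, kr / total, kt / total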
>
> This is just the tip of the iceberg though. There are more issues, coming
> mainly from the specular component and its normalization (or lack thereof),
> but also from the other components. More to come.
>
>> Heh, the thing I thought may be difficult is turning the physical ones
>> into legacy ones, so they would still give reasonable results when using
>> algorithms that rely on splitting things up (for example irradiance
>> caching, SSS, baking GI for games, .. ). I have never tried this though so
>> I wouldn't know.
>>
>> The other way around seems quite doable to me _if_ you don't expect those
>> things to give physically correct results, which I think is reasonable.
>
> That is another topic where I have a lot to say.
>
> From a technical POV, indeed, you are right. Packing the legacy components
> into a black box is easy to do, and we used to call that "shaders", although
> "shaders" nowadays can mean a lot more than that. And still, from a technical
> POV, doing the reverse is hard. I agree.
>
> When I say the reverse, I'm not referring to the technical challenge of
> implementing those "conversions". I'm referring to the impossibility of
> coming up with an infallible algorithm that will automatically do the
> correct legacy-to-physically-plausible conversion, preserving the material
> as the user originally designed it. This, in my experience, is very hard to
> do (meaning impossible right now) and is related to the "tip of the iceberg"
> I referred to earlier. The reverse conversion may require numerical
> integration of the BRDF to figure out equivalent legacy coefficients, but
> it is doable as long as the BRDF is not one of those weirdly anisotropic
> ones. It is even possible to get an OpenGL 1.1 render to look almost
> photorealistic using such techniques.
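>
> For what it's worth, the numerical integration I have in mind is something
> like estimating the directional-hemispherical reflectance by Monte Carlo
> (a sketch; the brdf callable and the wo argument are placeholders, not an
> actual Blender API):
>
>     import math, random
>
>     def sample_cosine_hemisphere():
>         # Cosine-weighted direction on the unit hemisphere around +Z.
>         u1, u2 = random.random(), random.random()
>         r, phi = math.sqrt(u1), 2.0 * math.pi * u2
>         return (r * math.cos(phi), r * math.sin(phi), math.sqrt(1.0 - u1))
>
>     def estimate_albedo(brdf, wo, n_samples=10000):
>         # Estimate rho(wo) = integral of brdf(wi, wo) * cos(theta_i) dwi
>         # with cosine-weighted sampling, where pdf = cos(theta_i) / pi,
>         # so each sample contributes brdf * pi.
>         total = 0.0
>         for _ in range(n_samples):
>             wi = sample_cosine_hemisphere()
>             total += brdf(wi, wo) * math.pi
>         # The estimate can serve as an equivalent legacy kd-style value.
>         return total / n_samples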
>
> Got to leave for work now.
>
> Yves
>
> _______________________________________________
> Bf-committers mailing list
> Bf-committers at blender.org
> http://lists.blender.org/mailman/listinfo/bf-committers
>
