[Bf-committers] Realistic Materials (Farsthary)
Yves Poissant
ypoissant2 at videotron.ca
Fri Feb 27 14:11:49 CET 2009
From: <echelon at infomail.upr.edu.cu>
Sent: Thursday, February 26, 2009 4:13 PM
> All the papers and implementations I have seen seem to divide the
> BxDF into its characteristic parts (specular, reflection, and so on) and
> treat them separately, sampling according to each part.
>
> for example some implementations say:
>
> r = random()
> if (r < kd)
>     sample_diffuse_direction()
> else if (r < kd + ks)
>     sample_specular_direction()
>
> and so on; basically it divides the implementation into as many
> contribution parts as can be done and samples them according to some
> random criterion. But that way it could be difficult to add an
> arbitrarily shaped BxDF and to sample other effects like refraction.
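The per-component roulette described in the quote can be sketched in Python. This is an illustrative sketch only (the component names and the `sample_bxdf_component` function are my own, not from any existing renderer); the key fix over the pseudocode is drawing a single random number rather than two:

```python
import random

def sample_bxdf_component(kd, ks):
    """Pick one BxDF component with probability proportional to its weight.

    kd and ks are the diffuse and specular weights (kd + ks <= 1); the
    remainder 1 - kd - ks is treated here as absorption. A single random
    draw is used so the two tests partition [0, 1) correctly.
    """
    r = random.random()
    if r < kd:
        return "diffuse"       # would call sample_diffuse_direction()
    elif r < kd + ks:
        return "specular"      # would call sample_specular_direction()
    else:
        return "absorbed"      # path terminated
```

Over many draws, the component frequencies approach kd and ks, which is exactly what makes this a valid (if crude) importance sampling scheme.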
Yup. You will find that strategy in almost every empirical BxDF that was
developed from the legacy CG models. I call those BxDFs "transitive BRDFs"
because they were developed to be incorporated into legacy renderers
without requiring everything else to be broken apart, and also because at
the time, the legacy shaders were well known while BRDFs themselves were
just being experimented with, so the researchers were on firmer ground if
they started from a model they already knew. But IMO, those empirical
models are truly transitive in the sense that they will eventually be
replaced by better models that can emulate true physically measured BxDFs.
> But then again that strategy is BxDF dependent... so I need to research
> a generalized way...
I'm thinking out loud here.
I believe it will always be more or less BxDF dependent. What I mean is
that the BxDF will always need to supply a sort of "sampling" service. In
fact, if you look at it, we can say that the BxDF *is* the sampling service.
The BxDF is a scattering function. That is its definition.
There is not one single representation for that scattering function.
Empirical BxDFs use a technique based on Russian roulette to select which
"component" will be sampled. A little more abstract than that, you find
BxDFs which are based on fitting multiple basis functions to measured data.
While it may turn out, in some of those models, that one basis could be
labeled "the diffuse component" and another basis labeled "the specular
component", this labeling is completely arbitrary: it may correspond only
roughly to those components, or not at all. For some models with many
(4 and up) bases, it becomes very difficult to separate the diffuse from
the specular part.
It is obvious to me that the most efficient representation of an arbitrary
BxDF is not available right now and is still in development. Personally, I
suspect it will be based on multi-dimensional (at least 4D) wavelets. The
advantage of wavelets is that the function inversion necessary to produce a
sampling distribution function from a probability distribution function is
already possible and known (see "Wavelet importance sampling: efficiently
evaluating products of complex functions").
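In one dimension, that inversion is just the classic inverse-CDF method: integrate the distribution, then map a uniform random number back through the cumulative table. A small Python sketch over a tabulated PDF (the bin weights are made up for illustration; the wavelet papers do the same thing hierarchically in higher dimensions):

```python
import bisect

def build_cdf(pdf):
    """Cumulative table for a discrete PDF (unnormalized weights allowed)."""
    cdf, total = [], 0.0
    for w in pdf:
        total += w
        cdf.append(total)
    return [c / total for c in cdf]

def sample_index(cdf, u):
    """Invert the CDF: map a uniform u in [0,1) to a bin index."""
    return bisect.bisect_right(cdf, u)

# Hypothetical tabulated lobe weights for a 4-bin distribution.
pdf = [0.1, 0.4, 0.3, 0.2]
cdf = build_cdf(pdf)
```

Bins with larger weight capture a proportionally larger slice of [0,1), so uniform inputs come out distributed according to the PDF.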
> I want a
> flexible system that could handle potentially arbitrary BxDFs, so I'm
> asking experienced renderer writers: Is that possible? If it is not, due
> to some fundamental mathematical principle, then I should design a path
> tracer tailored to several BxDF implementations and release that, and
> then if someone needs to support a new BxDF model, important parts of
> the sampling methods will need to be adjusted.
It is not so much about fundamental mathematical principles as about
separation of concepts. I guess one could design a sort of intermediary
BxDF representation that fits some sampling strategy, translate every BxDF
into that representation, and go that way. But first, that intermediate and
efficient representation is not known yet. And second, why go that route? A
BxDF is already a distribution function. All you really need to do is
delegate to the BxDF the responsibility of generating the samples. You will
probably end up with some BxDF models that are very weak at importance
sampling, because it is intrinsically difficult to do for those models, and
other BxDF models for which importance sampling will be very efficient. But
that is life. Survival of the fittest in the BxDF domain.
Basically, every BxDF implementation should provide sampling services. At
least, if you start with that approach, you will gain better experience in
determining how to abstract all of that eventually.
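As a sketch of what "every BxDF provides sampling services" could look like, here is a hypothetical Python interface with a Lambertian BRDF that importance-samples its own cosine lobe. The class names and method signatures are my own invention for illustration, not from Blender or any existing renderer:

```python
import math
import random
from abc import ABC, abstractmethod

class BxDF(ABC):
    """Each BxDF owns its own sampling service."""

    @abstractmethod
    def sample(self, wo):
        """Return (wi, pdf, value) for an outgoing direction wo."""

    @abstractmethod
    def pdf(self, wo, wi):
        """Probability density of having sampled direction wi."""

class LambertianBRDF(BxDF):
    def __init__(self, albedo):
        self.albedo = albedo

    def sample(self, wo):
        # Cosine-weighted hemisphere sampling: pdf = cos(theta) / pi,
        # which importance-samples the Lambertian lobe exactly.
        u1, u2 = random.random(), random.random()
        r, phi = math.sqrt(u1), 2.0 * math.pi * u2
        wi = (r * math.cos(phi), r * math.sin(phi), math.sqrt(1.0 - u1))
        return wi, self.pdf(wo, wi), self.albedo / math.pi

    def pdf(self, wo, wi):
        return max(wi[2], 0.0) / math.pi
```

The path tracer only ever calls `sample` and `pdf`; how well a given model importance-samples itself is then entirely that model's business, which is exactly the "survival of the fittest" situation described above.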
> If there's no evidence of a limiting principle and such a method does in
> fact exist, then I will take my time researching it :) because the
> sampling strategy is the very foundation of the path tracer.
Another, more abstract, approach could be to have each BxDF provide a set
of services through which the sampler can interrogate the BxDF about its
own sampling-space characteristics. Outside the BxDF, the "sampler" could
combine this knowledge with its knowledge of the environment and decide on
the optimal importance sampling pattern. If you are doing a PhD, that might
be a good subject.
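One hypothetical shape for that interrogation interface, in Python. Everything here is invented for illustration (the `LobeInfo` fields, the threshold, and the strategy names): the point is only that the decision combines BxDF self-knowledge with scene knowledge:

```python
class LobeInfo:
    """What a BxDF might report about its own sampling space (hypothetical)."""
    def __init__(self, kind, sharpness):
        self.kind = kind            # "diffuse", "glossy", "specular", ...
        self.sharpness = sharpness  # 0 = uniform lobe, large = near-mirror

def choose_strategy(lobes, light_is_small):
    """Toy sampler decision combining lobe info with environment knowledge.

    A very sharp lobe favours sampling the BxDF; a small light seen
    through a broad lobe favours sampling the light; otherwise combine
    both (as multiple importance sampling would). Purely illustrative.
    """
    sharpest = max(lobe.sharpness for lobe in lobes)
    if sharpest > 100.0:
        return "sample_bxdf"
    if light_is_small:
        return "sample_light"
    return "sample_both_mis"
```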
Yves