[Bf-cycles] Base 'Render' samples (simple request)
jrdnmlr at gmail.com
Mon Nov 14 04:56:32 CET 2011
Great! If we can establish a bunch of test .blend files, I could gather
more data on those renders.
To answer your question, though: it's BOTH average and maximum. Since
we will be computing the rate of change of all pixel values via a
histogram, there should be some derivative threshold (user-settable,
of course) that works for most test cases, and we can distinguish the
two situations by looking at the histogram plot.
That would let me determine whether this is truly a general short-term
solution or not.
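As an editorial aside, here is a minimal Python/NumPy sketch of the kind of histogram-of-change measurement described above. The function names, the 0-255 intensity assumption, and the default threshold are all illustrative assumptions, not anything from the Cycles code base.

```python
import numpy as np

def change_histogram(frame_a, frame_b, bins=256):
    """Histogram of absolute per-pixel intensity change between two
    progressively refined renders of the same frame (0-255 values)."""
    diff = np.abs(frame_a.astype(np.int16) - frame_b.astype(np.int16))
    hist, edges = np.histogram(diff, bins=bins, range=(0, 256))
    return hist, edges

def fraction_still_changing(frame_a, frame_b, threshold=2):
    """Fraction of pixels whose intensity changed by more than
    `threshold` (user-settable) between the two snapshots."""
    diff = np.abs(frame_a.astype(np.int16) - frame_b.astype(np.int16))
    return float((diff > threshold).mean())
```

As the renders converge, the mass of this histogram should collapse toward the zero bin, which is the "reproducible pattern" the thread is discussing.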
I could make the .blend files myself, but I think it's more accurate
to sample .blend files from the community.
Are there test .blend files that have been used during development?
Otherwise I could put a bunch of Suzannes and bunnies with all the
default materials in one scene to get generic, pan-material data...
On Nov 13, 2011, at 10:14 PM, Brecht Van Lommel
<brechtvanlommel at pandora.be> wrote:
> Could you clarify what you would take as the stopping condition?
> Average pixel difference, maximum pixel difference, ...? Average won't
> work if only a small part of the image is noisy, while maximum makes it
> very sensitive to individual pixels.
> On a more general note, I'm OK with short-term solutions if I know
> how they can evolve into the proper solution in the long term. Adding
> a short-term solution that you then have to change means breaking
> backwards compatibility, or keeping around old code that complicates
> adding new features.
> The other thing is that I should be able to support it up to a certain
> level. If it's there, people will expect to be able to set this value
> and send their animations to a renderfarm, and get some predictability
> in render times, which I don't think you get with a simple
> On Sun, Nov 13, 2011 at 3:31 PM, Jordan Miller <jrdnmlr at gmail.com> wrote:
>> There is a complicated long-term solution, I agree.
>> But don't get caught up in the individual pixels in the difference image...
>> What I'm proposing is a simple approach to rapidly get there in the
>> short term. If you look at all the difference images, you can see they
>> follow a very reproducible pattern over time, which I have quantified
>> by looking at the histogram. All we have to do is compute this
>> histogram along the way, after 2, 4, 8, 16, 32, ... render samples.
>> Any single pixel "holding things back" would still allow the render
>> to finish, because to hold things back indefinitely a pixel would
>> have to keep changing by more than 250 intensity values on every
>> render pass. That is not possible, even with fireflies; but fireflies
>> could still be measured.
>> If we assume that, given infinite render time (an infinite number of
>> samples), the pixels will no longer change, then all we have to do is
>> assess how many pixels are still changing, and by how much, to pick a
>> reproducible place to stop for a given quality.
>> It's a very simple first approximation that I think will work quite generally.
>> If you have more test scenes, I'd be happy to try them out too...
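As an editorial aside, the checkpoint scheme described in the quoted text (re-check convergence after a doubling number of samples, stop when almost no pixels are still changing) could be sketched roughly like this. `render_pass`, the thresholds, and the stopping fraction are all hypothetical stand-ins, not Cycles API.

```python
import numpy as np

def render_until_converged(render_pass, max_samples=1024,
                           threshold=2, stop_fraction=0.001):
    """Re-check convergence at 2, 4, 8, ... samples and stop once fewer
    than `stop_fraction` of pixels changed by more than `threshold`
    intensity values since the previous checkpoint."""
    samples = 2
    prev = render_pass(samples)      # snapshot at the first checkpoint
    cur = prev
    while samples < max_samples:
        samples *= 2
        cur = render_pass(samples)   # snapshot after doubling samples
        diff = np.abs(cur.astype(np.int16) - prev.astype(np.int16))
        if (diff > threshold).mean() < stop_fraction:
            break                    # almost no pixels still changing
        prev = cur
    return cur, samples
```

Doubling the sample count between checkpoints keeps the overhead of the convergence check bounded relative to the render itself.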
>> On Nov 13, 2011, at 8:49 AM, Brecht Van Lommel
>> <brechtvanlommel at pandora.be> wrote:
>>> What you're proposing here are basically the first steps toward an
>>> adaptive sampling algorithm. I think that would be great to add, but
>>> implementing this well is not so easy. Which is not to say that it
>>> shouldn't be done, but it's going to take more than a few days of
>>> work to get this working really well.
>>> I'm not sure if you're proposing this for per pixel adaptive sampling,
>>> or deciding to stop for the entire frame. In the latter case this is a
>>> bit problematic, since using the mean value may still mean there is a
>>> lot of noise in part of the image, and the min/max may lead to one bad
>>> pixel holding back the entire image. So this should really be done on
>>> a per pixel level.
>>> The difference image you show looks like it shows the noise in the
>>> image, but this is deceiving: it only shows some of the pixels that
>>> actually have noise. If you're going to decide that a given pixel
>>> needs no more samples, you have to make that decision with great
>>> confidence. If you're wrong even 1% or 5% of the time, that's still
>>> many pixels that will be wrong.
>>> There are a few ways to improve this, among them using information
>>> from neighbouring pixels (blurring the difference image) and using
>>> more statistical measures. Tweaking and testing all this takes time :)
>>> On Sun, Nov 13, 2011 at 2:02 AM, Jordan Miller <jrdnmlr at gmail.com> wrote:
>>>> I've added more details here:
>>> Bf-cycles mailing list
>>> Bf-cycles at blender.org