[Bf-viewport] 16 Bit Float

Troy Sobotka troy.sobotka at gmail.com
Thu Sep 29 16:25:03 CEST 2016


On Thu, Sep 29, 2016, 6:40 AM Mike Erwin <significant.bit at gmail.com> wrote:

> On Wed, Sep 28, 2016 at 12:54 AM, Troy Sobotka <troy.sobotka at gmail.com>
> wrote:
>>
>> If an image has an imager flip its colourspace via the selection box,
>> regenerate the 16 bit float buffer with the new transform input.
>>
>
> Is it enough for each image or buffer to *know* what color space its
> values are using, without changing those values numerically? Transform to
> reference when compositing into the display buffer (or output file).
>

Think about how many operations are being done every single time the image
is required.

It doesn't make sense from a performance standpoint, and it certainly
doesn't make sense from an algorithm / function standpoint; it is much
easier if the developer *knows* that the buffer is always in reference, and
can design algorithms that are colour space agnostic and only aware of the
scene referred to display referred transform. I would also expect it to be
much, much more performant from a paint perspective. Painting onto an image
buffer would then be something like:

* Brush / stroke colour transformed to reference.
* Paint on reference.
* View transform to output context.

Compare against:
* Brush / stroke colour transformed to reference.
* Transform every pixel to reference every update, and calculate brush on
it.
* View transform to output context.

If we consider that those images might be 8K and up... Yikes.
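
To make that concrete, a rough sketch of the two approaches (illustrative
Python, not Blender code; the sRGB decode stands in for whatever OCIO
transform takes the stored space to reference):

    def srgb_to_linear(v):
        # Standard sRGB decode; stand-in for a "to reference" transform.
        return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

    def blend(dst, src, alpha):
        return [d * (1.0 - alpha) + s * alpha for d, s in zip(dst, src)]

    def paint_on_reference(buffer_ref, brush_srgb, strokes):
        # Buffer already in reference: transform the brush colour once,
        # then composite directly.
        brush = [srgb_to_linear(c) for c in brush_srgb]
        for x, y, alpha in strokes:
            buffer_ref[y][x] = blend(buffer_ref[y][x], brush, alpha)

    def paint_on_stored_space(buffer_srgb, brush_srgb, strokes):
        # Buffer kept in its stored space: every update pays a full-buffer
        # transform to reference before the same composite is correct.
        # At 8K that is roughly 33 million per-pixel transforms per update.
        buffer_ref = [[[srgb_to_linear(c) for c in px] for px in row]
                      for row in buffer_srgb]
        brush = [srgb_to_linear(c) for c in brush_srgb]
        for x, y, alpha in strokes:
            buffer_ref[y][x] = blend(buffer_ref[y][x], brush, alpha)
        return buffer_ref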


>
>
>> On an additional note, I have had more than a few folks ask me if the
>> newer PBR system will be fully colour managed by default. My guess is that
>> it will, but I am curious as to whether or not this was discussed in the
>> sprint?
>>
>
> Not discussed in depth, but yes the idea is to have color managed IO, with
> rendering / lighting / materials in linear HDR.
>

This is excellent.

HDR, in this context, is a bit of a misnomer. There are two distinct models
at work: the scene referred domain and the display referred. While there
*appears* to be an overlap of numerical values, that should not be
extrapolated into any semantic congruency or overlap. They are radically
different models.

There must always be a transform from the scene referred domain to the
display referred. http://cinematiccolor.com has a very useful PDF, endorsed
by the Visual Effects Society, that touches on the differences between the
models. The TL;DR is that 1.0 carries no magical meaning in the scene
referred domain, and that black and white do not exist there.
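
For illustration only, a minimal sketch of such a transform (a toy exposure
and tone curve followed by the sRGB OETF; a real view transform would come
from OCIO, e.g. Filmic, and the constants here are purely illustrative):

    def scene_to_display(rgb_scene, exposure=1.0):
        # Scene referred values are open-domain ratios of light; the display
        # referred result must land in [0, 1] for the target transfer function.
        out = []
        for v in rgb_scene:
            v *= exposure
            v = v / (1.0 + v)  # toy tone curve: compresses [0, inf) into [0, 1)
            # sRGB OETF: encode for an sRGB display.
            v = 12.92 * v if v <= 0.0031308 else 1.055 * v ** (1.0 / 2.4) - 0.055
            out.append(v)
        return out

    # 1.0 is nothing special in the scene referred domain; 4.7 or 18.3 are
    # equally valid inputs, and there is no "white" to clamp against.
    print(scene_to_display([0.18, 1.0, 4.7, 18.3]))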


>
>> Finally, given that there will be the 16 bit float buffer, I am curious
>> as to the reasoning to keep the anachronistic 8 bit format around? It is
>> *hugely* problematic in a managed system, not even beginning to discuss the
>> horrible nightmare of alpha that we currently have as a result. Is it
>> viable to simply promote all 8 bit assets to 16 bit float and avoid the
>> ensuing nightmare?
>>
>
> 8 bit color components are not for assets, but for UI elements & widgets
> only. Anything to do with color picking or material preview we'll pay
> closer attention to. The idea is that not every task uses real lighting &
> colors (pre-material modeling for example).
>

As briefly chatted about in IRC, it is well past time to consider those UI
elements, given that an extremely mainstream computer fruit company has
left sRGB primaries behind.
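
For example, keeping UI colours correct on such a display means a gamut
conversion rather than passing sRGB values through untouched. A rough
sketch, using the commonly published (rounded) linear sRGB to Display P3
matrix; treat the exact numbers as illustrative:

    # Rounded linear sRGB -> Display P3 matrix (both D65); illustrative values.
    SRGB_TO_P3 = [
        [0.8225, 0.1774, 0.0000],
        [0.0332, 0.9669, 0.0000],
        [0.0171, 0.0724, 0.9108],
    ]

    def srgb_widget_to_p3_linear(rgb_srgb_linear):
        # Without this step an "sRGB red" widget is shown as the far more
        # saturated P3 red.
        return [sum(m * c for m, c in zip(row, rgb_srgb_linear))
                for row in SRGB_TO_P3]

    print(srgb_widget_to_p3_linear([1.0, 0.0, 0.0]))  # -> ~[0.82, 0.03, 0.02]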


> Kudos to whoever tacked on the 30 bit display buffer as well. You deserve
>> a cookie.
>>
>
> Thank you, it's delicious! High-end displays are one of my interests. True
> HDR display output is too new for me to know much about it, but getting a
> quality SDR signal out post tone mapping is something we can do fairly
> early in 2.8 development.
>

10-10-10 is of tremendous value for sRGB, wide gamut, and HDR displays
alike. It also sets things up to deal with newer HDR displays with some
degree of grace. Big props for putting it on the map.

Bear in mind that even when going to an HDR display, it is still a display
referred transform from the scene referred data. There is no real
differentiation between HDR and SDR at that point beyond the numerical
transform itself, as HDR / SDR are purely hardware output terms. The
transform happens at the output transform stage, no different to what it
does now nor how it would for any other alternate medium or output context.
See the ACES output transforms by HPD for examples of varying transfer
curves that accommodate HDR output at 48, 1000, 2000, 4000, and 10000 nits.
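
A loose sketch of that point (not the ACES transforms themselves; the tone
curve is a toy stand-in and the peaks are just parameters): the only things
that change for an HDR target are the curve's peak luminance and the
encoding, not where the transform sits in the pipeline.

    def tone_map(scene_value, peak_nits, mid_grey_nits=10.0):
        # Toy curve mapping open-domain scene ratios onto [0, peak_nits).
        # Real output transforms use carefully designed curves per peak
        # luminance (48, 1000, 2000, 4000, 10000 nits, and so on).
        v = scene_value * (mid_grey_nits / 0.18)
        return peak_nits * v / (peak_nits + v)

    def pq_encode(nits):
        # SMPTE ST 2084 (PQ) inverse EOTF, normalised to 10000 nits.
        m1, m2 = 2610.0 / 16384, 2523.0 / 4096 * 128
        c1, c2, c3 = 3424.0 / 4096, 2413.0 / 4096 * 32, 2392.0 / 4096 * 32
        y = max(nits, 0.0) / 10000.0
        return ((c1 + c2 * y ** m1) / (1 + c3 * y ** m1)) ** m2

    def srgb_encode(nits, peak_nits=100.0):
        # Clamp to the SDR peak and apply the sRGB OETF.
        v = min(max(nits / peak_nits, 0.0), 1.0)
        return 12.92 * v if v <= 0.0031308 else 1.055 * v ** (1.0 / 2.4) - 0.055

    scene_value = 4.7                               # open-domain scene referred
    print(srgb_encode(tone_map(scene_value, 100)))  # SDR output context
    print(pq_encode(tone_map(scene_value, 1000)))   # HDR (1000 nit) output context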

Thanks for starting the discussion on this front.

With respect,
TJS