[Bf-funboard] New compositing and VSE monitor tools

Troy Sobotka troy.sobotka at gmail.com
Sat Feb 20 05:37:37 CET 2016


On Fri, Feb 19, 2016, 1:16 PM Knapp <magick.crow at gmail.com> wrote:

> Thank you for that example!
> So this happens because the Red to no red curve is not the same as the Cyan
> to no cyan curve?
>

Not quite sure what you mean here, but the long and short of it is that
all display referred colour encodings use a single transfer curve. So no,
there is no difference between the red, green, or blue nonlinear curves of
sRGB in this case.

> This happens because we are using fast math and not correct math?
>

No. Most mathematics regarding pixels, with the exception of some of the
canonized Adobe PDF specification blend modes, attempts to model light
phenomena. As a result, in most instances, the math is more or less correct.

> This will happen after the film is rendered out also or only in the viewer?
>

Always.

> Do we have the same problem in the compositor?
>

With particular attention to this facet, no, it does not happen in the
compositor.

> Thanks. I hope I am not wasting your time. Rather have you programming
> than educating me. :-)
>

I don't spend much time programming, so no worry there. :)

The reason the above happens is a collision between the math, which in most
cases emulates a model of mixing light, and the nature of the encoded RGB
values.

When you take a typical JPEG, TIFF, or that godforsaken horrible format
known as PNG, the values are baked into a “ready to dump to display”
format. That is, they are bent away from the actual levels of light per
pixel into a display-bent encoding.

When we perform math on these bent values, because we aren't communicating
energy levels as they are modeled in a physical environment, all of our
results are fundamentally incorrect. Blurs, mixes, traversals, you name it:
all of them come out wrong.
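
A toy sketch of the damage, using a plain 2.4 power as a stand-in for the
exact sRGB transfer function (the names and the simplification are mine):

    # Mix a white and a black pixel 50/50, two ways.

    def decode(v):
        # Display encoded value -> approximate linear light.
        return v ** 2.4

    def encode(v):
        # Approximate linear light -> display encoded value.
        return v ** (1.0 / 2.4)

    white, black = 1.0, 0.0

    # Averaging the encoded values directly:
    naive = (white + black) / 2.0                            # 0.5

    # Averaging the light they represent, then re-encoding:
    correct = encode((decode(white) + decode(black)) / 2.0)  # ~0.749

    print(naive, correct)  # the naive mix is visibly too dark

The 0.5 from the naive mix represents only about 19% of white's light once
decoded, where a physically plausible mix should land at 50%.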

If we think of a typical manipulation process, we can frame it in terms of
Input, Reference, and Output. The Input is the image's encoded state. The
Reference is the cauldron we do all of our manipulations in. The Output
might be the display or another output context.
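
In code form, the shape of that pipeline might look something like this (a
sketch only; every name is a placeholder of mine, not Blender API):

    def input_to_reference(pixels):
        # Undo the input encoding so the reference holds linear light.
        ...

    def reference_to_output(pixels):
        # Encode the reference values for the display or output context.
        ...

    def process(pixels, manipulate):
        ref = input_to_reference(pixels)   # Input -> Reference
        ref = manipulate(ref)              # blurs, mixes, grades, etc.
        return reference_to_output(ref)    # Reference -> Output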

To alleviate the nonlinearity issue above, we would need what is known as a
linearized reference space. That is, we want the data to represent roughly
radiometric ratios as opposed to display-bent, nonlinear ratios.

To do this, we would need to undo the encoding and return the data to a
linear state. With sRGB, we can invert the display-bent transform, and our
display referred 0.0 to 1.0 data would then be roughly linearized to the
best of our ability; the reference could be said to be display linear[1].
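
For sRGB specifically, that inversion is the piecewise function from IEC
61966-2-1, applied independently to each of R, G, and B (function names are
mine):

    def srgb_decode(v):
        # Display referred sRGB [0, 1] -> display linear [0, 1].
        if v <= 0.04045:
            return v / 12.92
        return ((v + 0.055) / 1.055) ** 2.4

    def srgb_encode(v):
        # Display linear [0, 1] -> display referred sRGB [0, 1].
        if v <= 0.0031308:
            return v * 12.92
        return 1.055 * v ** (1.0 / 2.4) - 0.055

With those in hand, the earlier 50/50 mix becomes
srgb_encode((srgb_decode(a) + srgb_decode(b)) / 2.0), which lands close to
what we would expect from actually mixing light.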

Now, when we perform manipulations on the reference space, the results are
closer to “correct”, where “correct” is roughly what we would expect to see
in our physical environment when light mixes.

As you may have inferred, the VSE has been hacked to have a non-linear
reference space while the compositor uses a strictly linearized one.

Why not use a linearized reference space in the VSE then? Why not use
linearized reference spaces everywhere? Why aren't we fixing this? Why?!
Why?! Why!!??

Those questions, and many more, much like the nuances of colour, are
extremely complex once you begin to peel apart design needs. :)

With respect,
TJS

[1] This is a critical distinction between display linear and scene linear,
which are two diametrically opposed concepts. Scene linear is typically how
ratios of light are stored in a scene referred reference space such as the
rendering output from Cycles, where the values range from zero to infinity.
Display linear is the downgraded display referred range of values. The two
are very different models, with very different impacts.
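
To make that distinction concrete, a small sketch with illustrative values:

    # Scene linear: ratios of light in the scene, zero to infinity.
    # A rendered highlight can legitimately sit far above 1.0.
    scene_linear_highlight = 47.3   # plausible Cycles output

    # Display linear: the same linearity, but confined to the display's
    # emission range, hence 0.0 to 1.0.
    display_linear_highlight = min(scene_linear_highlight, 1.0)  # naive clip

The clip is deliberately crude; the point is only that the two domains are
not interchangeable.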

