[Bf-vfx] Formats and Color

Andrew Hunter andrew at aehunter.net
Sun Apr 8 18:32:02 CEST 2012


Hey Peter,

On Sun, Apr 8, 2012 at 9:45 AM, Peter Schlaile <peter at schlaile.de> wrote:
> Hi Andrew,
>
> I have problems understanding some of your text:

Hopefully I can make myself clearer :)

>> Perhaps the greatest challenge to supporting ACES in Blender is the need to
>> excise most of the image loading and color related code.
>
> I see three points, where we have to add color management stuff:
>
> a) at image/movie load time
>   for the 8 bit pipeline it should integrate with swscaler/ffmpeg
>   and should be changeable by the user.

Call me crazy, but I feel that Blender should only accept image
sequences rather than movies. That's ideological, but it simplifies
many things.

> b) at display time
>   preview displays should be able to switch between output profiles.
>
> c) at render output time
>   again: configurable for the user.

Furthermore, the color management system needs to be able to chain
arbitrary transforms at any stage of the process.
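
Something like this untested sketch against the OCIO 1.x Python
bindings is what I have in mind (the colour space names and the LUT
file are made up; any real config would define its own):

import PyOpenColorIO as OCIO

config = OCIO.GetCurrentConfig()   # picked up from the $OCIO env var

# Chain several transforms into one group: an input conversion
# followed by an arbitrary user-supplied LUT.
group = OCIO.GroupTransform()

cs = OCIO.ColorSpaceTransform()
cs.setSrc('lin_srgb')              # made-up colour space names
cs.setDst('srgb8')
group.push_back(cs)

lut = OCIO.FileTransform()
lut.setSrc('grade.cube')           # hypothetical user LUT
lut.setInterpolation(OCIO.Constants.INTERP_LINEAR)
group.push_back(lut)

# One processor evaluates the whole chain in a single pass.
pixels = [0.18, 0.18, 0.18]        # example mid-grey pixel
processor = config.getProcessor(group)
pixels = processor.applyRGB(pixels)  # flat [R,G,B, R,G,B, ...] floats

The point is that the chain is data the user can edit, not code baked
into a loader.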

>
> It doesn't look like a really big change to me. Since I'm currently in
> the need of adding technicolor cinestyle support for my Canon EOS 5D,
> does anyone mind, if I start coding, starting with xat's branch, which
> seems to use OCIO?

What about the internal working space of Blender? Modern motion
picture cameras have gamuts that greatly exceed the sRGB primaries
enforced by the image loading code.
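
To make the gamut point concrete, here is a toy numpy sketch using
only the standard XYZ-to-sRGB matrix. The XYZ values are made up for
illustration, but they describe a perfectly real colour that sRGB
simply cannot hold:

import numpy as np

# Linear XYZ -> sRGB (D65) matrix, per IEC 61966-2-1.
XYZ_TO_SRGB = np.array([
    [ 3.2406, -1.5372, -0.4986],
    [-0.9689,  1.8758,  0.0415],
    [ 0.0557, -0.2040,  1.0570],
])

# A saturated cyan: made-up XYZ values lying outside the sRGB gamut.
xyz = np.array([0.20, 0.35, 0.50])

rgb = XYZ_TO_SRGB.dot(xyz)
print(rgb)  # red channel comes out negative: not representable in sRGB

Forcing everything through sRGB primaries at load time throws that
colour away before the artist ever sees it.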

>> Looking at the
>> comments on the mango blog about how the author of the DPX code didn't know
>> that there was more than one log curve that could be used shows the folly
>> of embedding that transform at that level. If something isn't correct, there
>> is no way for the user to adjust.
>
> I don't think bashing developers helps a lot. What you should keep in
> mind: color space transformations can be done very fast on input data,
> but are pretty slow on float, so the best place is on file load using
> look up tables.

I wasn't trying to bash the developer, but rather to illustrate the
mistake in that approach. Once that transform is baked into the data
going into Blender, there is little a user can do to correct for it.

What I am suggesting is that the image loading code should just serve
the data. Nothing more :)

This is admittedly a biased view, as it is the approach that
OpenImageIO takes. The artist is far more knowledgeable about the
context of image production than the developer; one should enable the
other.
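
A rough sketch of that philosophy with the OpenImageIO Python
bindings (the file name is hypothetical): the loader reports what the
file claims about itself and hands the pixels over untouched:

import OpenImageIO as oiio

inp = oiio.ImageInput.open('frame_0001.exr')  # hypothetical file
spec = inp.spec()

# Metadata is served alongside the data, never acted on silently.
print(spec.width, spec.height, spec.nchannels)
print(spec.get_string_attribute('oiio:ColorSpace'))

pixels = inp.read_image('float')  # pixel data, no hidden transforms
inp.close()

Any colour decision then happens downstream, where the user can see
it and change it.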

> At least, for a CPU implementation. For GPU, it doesn't really matter.

OpenColorIO works on both the CPU and the GPU.
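
One processor drives both paths. A sketch against the 1.x Python
bindings, hedged as before (the colour space names, shader language
and function name are arbitrary choices on my part):

import PyOpenColorIO as OCIO

config = OCIO.GetCurrentConfig()
processor = config.getProcessor('linear', 'sRGB')  # assumed names

# CPU path: apply directly to pixel data.
pixels = processor.applyRGB([0.5, 0.5, 0.5])

# GPU path: emit a GLSL shader for the viewport to use.
desc = OCIO.GpuShaderDesc()
desc.setLanguage(OCIO.Constants.GPU_LANGUAGE_GLSL_1_3)
desc.setFunctionName('OCIODisplay')
desc.setLut3DEdgeLen(32)
shader_src = processor.getGpuShaderText(desc)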

> That said: sure, the user should be able to control, which profile is in
> use.

I'm glad we agree on that.

>> With the current incomplete colour management system, Blender's dichotomy
>> of sRGB versus sRGB primaries with linear gamma would need to be removed.
>> Also, ACES explicitly requires support for HDR values (the +/- 16-bit
>> floating point range) internally and only clamps to a 0-1 range using the
>> RRT.
>
> The float pipeline already has that feature. Correct me if I'm wrong.

Loading EXR files clamps the range to 0-1, which, if the file is
scene-linear, throws out much of the data. Furthermore, many of the
image nodes clamp to the same 0-1 range.
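
A toy numpy example of what is at stake: a scene-linear highlight
eight stops above mid-grey clamps flat, and no later grade can bring
it back:

import numpy as np

# Scene-linear pixels: mid-grey (0.18) and a highlight 8 stops up.
scene_linear = np.array([0.18, 0.18 * 2**8])  # [0.18, 46.08]

clamped = np.clip(scene_linear, 0.0, 1.0)  # what a 0-1 clamp does
print(clamped)         # [0.18, 1.0] -- the highlight detail is gone

# Pulling exposure down afterwards can't recover it:
print(clamped * 0.05)  # both values just get darker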

>> As for reading F65 Raw, only applications that use the Sony SDK (like
>> Davinci Resolve) will support the raw files natively.
>
> <troll>
> Hmm, well, sure. Only applications that have support for a certain
> format can read it.
> </troll>

Ton asked if OCIO somehow worked on raw data; I was answering. On a
related note, there has been a proposal on the OIIO mailing list for
a GSoC project to implement raw support (via libraw).

>> Raw is inappropriate for heavy VFX work; I will elaborate on the
>> reasons why later in the mail. It is, however, useful for colour grading.
>
> Reading the rest of your post, I still haven't understood, why "Raw is
> inappropriate for heavy VFX work".

It has to do with what a "raw" image intrinsically is.

> I do know, that most studios don't use RAW data for that, but that has
> to do with decoding speed to my knowledge. (It's simply faster to do all
> the grading work on HD proxies and use the RAW data in the conforming
> step at the end.)

It also has to do with the fact that raw isn't a complete image. Say
you want to composite an alpha over: you need to do all the raw
processing steps beforehand to get data that makes sense in that
context (RGB values). I would argue that data storage is significantly
cheaper than the equivalent processing time, so it makes sense to bake
in those decisions, like white balance and denoising, and then do your
compositing.
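
For illustration, the standard premultiplied "over" (Porter-Duff)
assumes the foreground already consists of meaningful RGB values,
which undebayered sensor counts are not:

import numpy as np

def over(fg_rgb, fg_a, bg_rgb):
    # Porter-Duff 'over' with a premultiplied foreground.
    return fg_rgb + (1.0 - fg_a)[..., None] * bg_rgb

# Toy 1x1 images: premultiplied foreground over a background.
fg = np.array([[[0.4, 0.2, 0.1]]])
a  = np.array([[0.5]])
bg = np.array([[[0.8, 0.8, 0.8]]])
print(over(fg, a, bg))  # [[[0.8, 0.6, 0.5]]]

Every term in that formula presumes developed, white-balanced RGB;
there is no sensible "over" on mosaiced sensor counts.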

All of my experience as a client in colour grading sessions has had
us working at full resolution. Watching Baselight render complex
transforms as well as tracked secondary corrections at 4k in real time
is a marvellous thing.

> In fact, regarding the data size we are talking about, I think direct
> support for F65 RAW files is a pretty clever thing.
>
> Especially, since you can do certain color transformations on RAW files,
> which aren't that easy/possible after the debayering step. (White
> balance comes to mind.)

This is why colour grading applications are able to ingest raw
footage but do not manipulate the "raw" data directly: they work on
the cached output of the developed data.

>> > What software is currently using opencolorIO with f65 support?
>> > Did you try to read in the data we posted and process it?
>>
>> Nuke and the rest of the foundry apps. Beta support for After Effects and
>> Blender :)
>
> Beta support on Blender?

Xat has a branch of Blender with OCIO integrated. I linked to it in an
earlier message.

>> Raw is raw :)
>
> Ouch. Read my other post on that one.

You misunderstand me. That was in the context of Ton's comment about
noise.

>> That all the secret sauce that camera manufacturers used to
>> put in hardware to hide as much of that as possible is gone. You're seeing
>> what the sensor sees, and sometimes that means lots of noise. It would be
>> worth looking into what processing you might have to do prior to
>> exporting the OpenEXR files.
>
> What you see is sensor data (which differs between cameras) piped
> through a compression scheme (which isn't lossless, at least for RED
> cameras), cooked up by debayering, which adds some / a lot of softness
> if it doesn't do its job correctly / has been configured the wrong way.

Could you elaborate on what you mean here? Are you alluding to a
connection between perceptual sharpness and noise?

>> On top of that, many camera manufacturers apply some form of compression to
>> the raw frame to further reduce the data rate. Red, for example, uses a
>> form of wavelet compression similar to JPEG 2000. I am not certain if Sony
>> employs their own compression scheme.
>
> They do :)
>
>> As to why I think raw isn't suitable for heavy VFX, the advantages derived
>> from the above features, namely flexibility and parameters that aren't
>> baked, can be a hindrance.
>
> Why?

See the above comment about disk space and computing time.

[...]

Cheers,

Andrew

