[Bf-vfx] Formats and Color

Peter Schlaile peter at schlaile.de
Sun Apr 8 20:39:57 CEST 2012


Hi Andrew,

> Call me crazy but I feel that Blender should only accept image
> sequences rather than movies. That's ideological, but it simplifies
> many things.

Yes, I actually *do* call you crazy at this point :), since I personally
like Blender for the fact that you can throw any video file at it and
it will just work.

If you prefer to work with image sequences, that still works, too.

And sure, I know why most people at some point start to use image
sequences in other software packages - but that actually says more about
the inability of those software packages to handle input files properly...

(frame-exact seeking comes to mind...)

> Furthermore, the color management system needs to be able to chain
> arbitrary transforms at any stage of the process.

Sure, that should be easy to add.

> What about the internal working space of Blender? Modern motion
> picture cameras have gamuts that greatly exceed the sRGB primaries
> enforced by the image loading code.

But shouldn't Blender actually just work in float space, clamping the
values only at the final rendering step, and everything should be fine
and dandy? (Still scratching my head on this one.)
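A quick sketch of why unclamped float also covers the gamut question
(the XYZ-to-sRGB matrix below is the standard one, the sample color is
made up by me): colors outside the sRGB primaries simply turn into
negative components, which survive just fine as long as nobody clamps:

    import numpy as np

    # standard XYZ (D65) -> linear sRGB matrix
    XYZ_TO_SRGB = np.array([
        [ 3.2406, -1.5372, -0.4986],
        [-0.9689,  1.8758,  0.0415],
        [ 0.0557, -0.2040,  1.0570],
    ])

    # a saturated green that lies outside the sRGB gamut (made-up sample)
    xyz = np.array([0.2, 0.6, 0.1])
    rgb = XYZ_TO_SRGB @ xyz
    print(rgb)  # ~[-0.32  0.94 -0.01]: negative = out of gamut, but not lost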

> What I am suggesting is that the image loading code should just serve
> the data. Nothing more :)

Half agreed :) I read a bit more of the OCIO documentation. It really
looks pretty nice. We could probably replace all the swscaler mumbo-jumbo
code with OCIO.

Still, I think that the Blender core should only have to deal with either
RGBA byte or RGBA float and *not* with the full detailed list of pixel
formats that ffmpeg (or any other image input library you can name)
supports...
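Just to illustrate what I mean (this is not Blender code, only my
reading of the plain OCIO 1.x Python bindings; the colorspace names
depend entirely on the config file):

    import PyOpenColorIO as OCIO

    config = OCIO.Config.CreateFromFile("config.ocio")

    # one processor per conversion; chaining arbitrary transforms just
    # means applying processors one after the other
    proc = config.getProcessor("linear", "sRGB")

    # the Blender core would only ever hand over flat RGB float data
    pixels = [0.18, 0.18, 0.18, 4.2, 1.0, 0.5]
    pixels = proc.applyRGB(pixels)

That way ffmpeg's pixel format zoo stays behind the loader, and
everything past it is float.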

> Loading EXR files clamps the range to 0-1, which, if the file is
> scene-linear, throws out much of the data. Furthermore, many of the
> image nodes clamp to the same 0-1 range.

The former can't be changed, correct? And the latter sounds more like a
bug within those nodes, doesn't it?
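To make the EXR point concrete (values made up): once scene-linear data
above 1.0 has been clamped, no later exposure adjustment can bring the
highlight detail back:

    import numpy as np

    def to_display(img):
        # clamp *only* here, at the final conversion to 8 bit
        return (np.clip(img, 0.0, 1.0) * 255.0 + 0.5).astype(np.uint8)

    # scene-linear pixels can exceed 1.0 (specular highlight at 4.2)
    img = np.array([0.18, 1.0, 4.2], dtype=np.float32)

    print(to_display(img / 8.0))                 # [  6  32 134] highlight kept
    print(to_display(np.clip(img, 0, 1) / 8.0))  # [  6  32  32] highlight lost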

> It also has to do with raw isn't a complete image. Say you want to
> composite an alpha over. You need to do all the raw processing steps
> beforehand to get data that makes sense in that context (RGB values).
> I would state that data storage is significantly cheaper than
> equivalent processing time. So it makes sense to bake in those
> decisions, like white balance and denoising, and then do your
> compositing.

Hmm. Reading a normal image means:

* load file into memory
* decode file format (uncompress)
* do color space transformation (one matrix mul)
* do scaling (can be folded into the matrix)

RAW image processing usually only adds:

* do bayer matrix interpolation

which can be done on the GPU, and only adds a few cycles if done on the
CPU. (There are high-quality debayer filters that can do that, e.g.
http://graphics.cs.williams.edu/papers/BayerJGT09/)

White balance is also just a simple per-channel rescaling of the Bayer
matrix input values.

All of that usually isn't really more expensive than, say, reading a
JPEG file (see the sketch below).
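To put the whole argument into one toy sketch (all names and kernels are
mine, assuming a simple RGGB mosaic): white balance on the mosaic, a
bilinear debayer, and one matrix multiplication with the scaling folded
in:

    import numpy as np
    from scipy.ndimage import convolve

    def develop(mosaic, gains, matrix):
        # toy RAW development for an RGGB mosaic (made-up example)
        h, w = mosaic.shape
        r = np.zeros((h, w)); g = np.zeros((h, w)); b = np.zeros((h, w))
        r[0::2, 0::2] = mosaic[0::2, 0::2] * gains[0]  # white balance is
        g[0::2, 1::2] = mosaic[0::2, 1::2] * gains[1]  # just a per-site
        g[1::2, 0::2] = mosaic[1::2, 0::2] * gains[1]  # rescale
        b[1::2, 1::2] = mosaic[1::2, 1::2] * gains[2]

        # bilinear interpolation of the missing samples
        k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0
        k_g  = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0
        rgb = np.dstack([convolve(r, k_rb),
                         convolve(g, k_g),
                         convolve(b, k_rb)])

        return rgb @ matrix.T  # one matrix mul for color *and* scaling

One convolution pass more than a JPEG decode - that's the whole extra
cost, and it vectorizes (or ports to the GPU) trivially.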

If you are thinking of RED RAW: the step that makes it slow is actually
the JPEG2000 decoding. Everything else can be done pretty fast.

And please remember that if you are using full-resolution files on
network storage, I/O bandwidth *will* certainly start to be a problem.

Denoising. Hmm, interesting point. I have never used denoising on RED
RAW data and still got pretty good results out of it.

And I think that should be a separate node filter: if you are really
desperate enough to need a denoiser, you probably don't want it to work
on the whole picture, but rather use some masks.

Nevertheless, no one says you can't generate proxy files as a first
step to work on. But I don't want to *force* people to generate proxies.

As I said: usually you first build a proxy, do all your work on that,
and use the original file for the final conforming step.

But: if you can get away without that, why bother?

(Only to make the OpenImageIO folks proud? Naaah...)

> This is why colour grading applications are able to ingest raw footage
> but they do not do the data manipulation directly on the "raw" data.
> They work on the cached output of the developed data.

To my knowledge, you can't do proper white balance after the debayering
step, since you lose necessary information on the way (it's not exactly
easy to reconstruct the original Bayer matrix...).
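A tiny numerical illustration of one part of that information loss (gain
and gamma values made up by me): the same white balance gain applied to
the linear sensor sample vs. to the developed (gamma-encoded, clipped)
output gives two different answers:

    import numpy as np

    gain_r = 1.8                # made-up red channel gain
    sample = 0.4                # linear red sample from the mosaic

    on_sensor = (sample * gain_r) ** (1 / 2.2)                 # ~0.86
    too_late  = np.clip((sample ** (1 / 2.2)) * gain_r, 0, 1)  # 1.0, clipped

Once the development has baked in the nonlinearity (and the debayer has
blended clipped samples into their neighbours), the sensor-domain values
needed for a proper white balance are simply gone.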

> Xat has a branch of blender with OCIO integrated. I linked to it in an
> earlier message.

That's the one which isn't exactly beta yet. I haven't tried it, but
according to his own documentation, it seems to be more in a
work-in-progress state.

>> What you see is sensor data (which differs between cameras) piped
>> through a compression scheme (which isn't lossless, at least for RED
>> cameras), cooked up by debayering, which adds some / a lot of softness
>> if it doesn't do its job correctly / has been configured the wrong way.
>
> Could you elaborate on what you mean here? Are you alluding to a
> connection between perceptual sharpness and noise?

I was talking about the fact that Ton never saw the original data, but
rather the data that the Sony camera compressed and F65viewer converted
for him.

Since neither step is necessarily lossless, and there is a lot of
software (including professional tools for RED cameras...) that is
actually pretty bad at debayering, I only wanted to point out that there
isn't just one source of trouble in the system :)

I'm still wondering where he got all that noise from. (ISO 800 isn't
exactly a very high value...)

>>> As to why I think raw isn't suitable for heavy vfx, the advantages
>>> derived from the above features, namely flexibility and parameters
>>> that aren't baked, can be a hindrance.
>>
>> Why?
>
> See the above comment about disk space and computing time.

That can, as pointed out above, just as well turn out the other way. I
think this decision should be made by the user, not by the software
package.

Cheers,
Peter


-- 
Peter Schlaile




