[Bf-committers] S-DNA and Changes

Peter Schlaile peter at schlaile.de
Thu Nov 25 19:45:27 CET 2010


Hi Leo,

>  1. Write a "VSE 2" and create all-new structures?

this will break compatibility with older versions of Blender. It should only 
be done as a last resort, and only if you *really* know what you are doing.

>  2. Create some kind of "compatibility loader" or data import filter
> that converts the old data to the new, as far as possible? That is, we
> read old and new formats, but only write new format.

that is *always* necessary; otherwise you can't open old files, or make 
sure that, on load of an old file, your new structure elements are 
initialized properly. This is done in do_versions() in readfile.c.
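
For illustration, such a version patch typically looks roughly like the 
following minimal sketch. Only the pattern matters; the member 'new_param' 
and the version number are made up:

    /* in do_versions(): give new members a sane default when
       loading files written by an older Blender */
    if (main->versionfile < 256) {
        Scene *sce;
        for (sce = main->scene.first; sce; sce = sce->id.next) {
            Editing *ed = sce->ed;
            Sequence *seq;
            if (ed == NULL) continue;
            for (seq = ed->seqbase.first; seq; seq = seq->next) {
                if (seq->new_param == 0.0f)   /* hypothetical new field */
                    seq->new_param = 1.0f;
            }
        }
    }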

>  3. Convert the old structs to the new in-memory? That is, we read and
> write the old format, maybe with some back-compatible changes, but use a
> new format internally?

Nope. After each editing operation, the DNA data has to be in sync, since DNA 
load/save is also used for undo operations(!).
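
(As a reminder of why that is: the structs you edit in memory are exactly 
the structs that get serialized, and undo simply writes and re-reads them. 
A purely hypothetical DNA struct, just to show the shape of the constraint:

    /* illustrative only; follows the usual SDNA rules:
       fixed-size members, explicit padding, nothing that
       can't be restored after a load or an undo step */
    typedef struct NewStrip {
        struct NewStrip *next, *prev;
        char name[64];
        int flag, pad;
        float fps_factor, pad2;
    } NewStrip;

So a separate internal format would fall apart at the first undo.)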

I'd also suggest that you first make sure that you really *have* to 
change something, and why. Since, you guessed it, you will most likely 
make some people unhappy who want to open their new .blend file with an 
old version and then see things broken all over the place.

So I'm wondering a bit: what do you want to change?

I tried to understand the blog post you linked some days ago.

To quote your blog: (disadvantages of the current system)

> 1.1. Disadvantages

> The disadvantages come when we wish to perform any kind of processing 
> that requires access to more than just the current frame. The frames in the 
> sequencer aren't just random jumbles of image data, but sequences of 
> images that have a lot of inter-frame information: Object motion, for 
> example, is something that is impossible to compute given a single frame, 
> but possible with two or more frames.

> Another use case that is very difficult with frame-wise rendering is 
> adjusting the frame rate of a clip. When mixing video from different 
> sources one can't always assume that they were filmed at the same frame 
> rate. If we wish to re-sample a video clip along the time axis, we need 
> access to more than one frame in order to handle the output frames that 
> fall in-between input frames - which usually is most of them.

To be honest: you can't calculate the necessary optical flow data on the 
fly, and most likely people want to have some control over the generation 
process anyway. (Maybe they just want to use externally generated OFLOW 
files from Icarus?)

To make a long story short: we should really just add a separate 
background rendering job that adds optical-flow tracks to video tracks, just 
like we did with proxies, only running in the background with the new job 
system, and everything should be fine. (For scene tracks or OpenEXR sequences 
with a vector pass, optical flow information is even already available 
for free(!))
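
Such a builder job would follow the usual 2.5 wmJob pattern. A rough sketch 
from memory (the OFlowBuildJob type and the oflow-specific work are 
hypothetical, and the WM_jobs signatures may differ in detail):

    /* assumes the usual DNA/BKE/WM includes */
    typedef struct OFlowBuildJob {   /* hypothetical job data */
        Sequence *seq;
    } OFlowBuildJob;

    static void oflow_startjob(void *customdata, short *stop, short *do_update, float *progress)
    {
        OFlowBuildJob *obj = customdata;
        /* walk over the strip, compute or import the optical flow,
           store it next to the proxy files, update *progress */
    }

    void sequencer_build_oflow(bContext *C, Sequence *seq)
    {
        wmJob *wm_job = WM_jobs_get(CTX_wm_manager(C), CTX_wm_window(C),
                                    seq, "Building optical flow...",
                                    WM_JOB_PROGRESS);
        OFlowBuildJob *obj = MEM_callocN(sizeof(OFlowBuildJob), "oflow build job");

        obj->seq = seq;
        WM_jobs_customdata(wm_job, obj, MEM_freeN);
        WM_jobs_timer(wm_job, 0.1, NC_SCENE|ND_SEQUENCER, NC_SCENE|ND_SEQUENCER);
        WM_jobs_callbacks(wm_job, oflow_startjob, NULL, NULL, NULL);
        WM_jobs_start(CTX_wm_manager(C), wm_job);
    }

That is the whole infrastructure cost; everything else is the actual 
optical flow code.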

In-between frames should be handled with float cfras (the code is already 
adapted for that in most places) and the new additional mblur parameters.

That has the additional advantage that you can actually *calculate* real 
in-between frames for scene tracks.
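
For a plain video strip, the simplest reading of a fractional cfra is a 
blend of the two neighbouring source frames. A minimal sketch (the fetch 
and blend helpers are made-up names, not actual sequencer calls; with 
optical flow data available, this is where a motion-compensated 
interpolation would go instead):

    #include <math.h>

    static ImBuf *fetch_inbetween(Sequence *seq, float cfra)
    {
        int lower = (int)floorf(cfra);
        float fac = cfra - (float)lower;    /* 0.0 .. 1.0 */

        ImBuf *a = fetch_source_frame(seq, lower);      /* hypothetical */
        ImBuf *b = fetch_source_frame(seq, lower + 1);  /* hypothetical */

        /* (refcounting/freeing of a and b omitted in this sketch) */
        return blend_imbufs(a, b, fac);                 /* hypothetical crossfade */
    }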

For other tasks, like image stabilisation, you should just add similar 
builder jobs (which most likely don't have to write out full frames, but 
just generate the necessary animation F-Curves).

The implications of your track rendering idea are scary: either you end up 
with a non-realtime system (since you have to calculate optical flow 
information on the fly in some way, which is, to my knowledge, not possible 
with current hardware), or you have to render everything to disk, always.

I, as a user, want to have control over my disk space (which is very 
valuable, since my timelines are 3 hours long, and rendering every 
intermediate result to disk is *impossible*!).

Or, to put it another way: please show me a case that *doesn't* work 
with a simple "background builder job" system, where you can add arbitrary 
intermediate data to video, meta or effect tracks. Having to access 
multiple frames at once during playback *and* doing heavy 
calculations on them doesn't sound realtime to me by definition, and that 
is what Ton told me the sequencer should be: realtime. For everything 
else, the Compositor should be used.

You could still use the RenderMode (CHUNKED, SEQUENTIAL and FULL) to make 
that background render job run in the most efficient way, but it is still 
a background render job, separated from the rest of the pipeline.
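
If I read your proposal right, the render mode would then simply tell the 
builder job how to pull its input. My interpretation, just to pin down 
terms (the mode names are taken from your post, the rest is guesswork):

    typedef enum BuilderRenderMode {
        BUILDER_CHUNKED,     /* pull input in fixed-size chunks of frames */
        BUILDER_SEQUENTIAL,  /* walk the strip frame by frame, keeping state */
        BUILDER_FULL         /* needs the whole strip before producing output */
    } BuilderRenderMode;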

As always, feel free to prove me wrong. If I understood it correctly, your 
interface idea looks like a good starting point for a background builder 
system interface. 
So, if you convince everyone that this is the best thing to do for 
playback too, we might end up promoting your builder interface to the 
preview renderer, who knows?

BTW:
I'm currently rewriting the sequencer render pipeline using a generic 
ImBuf render pipeline system, which will move some things around; in 
particular, all those little prefiltering structures will find their way 
into a generic filter stack. But when I do that, I will make sure that 
there is really a good reason for it, since, as stated above, it will 
certainly break things for some people, and that should come with a real 
benefit, not only aesthetics...
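
(To give a rough idea of the direction, not the final code: the per-strip 
prefilter settings would become entries in a generic stack, applied in 
order to every ImBuf passing through.) A sketch with placeholder names:

    typedef struct SeqFilter {
        struct SeqFilter *next, *prev;
        /* e.g. deinterlace, premul, flip, color balance, ... */
        ImBuf *(*apply)(struct SeqFilter *self, ImBuf *ibuf);
    } SeqFilter;

    static ImBuf *seq_filter_stack_apply(ListBase *filters, ImBuf *ibuf)
    {
        SeqFilter *f;
        for (f = filters->first; f; f = f->next)
            ibuf = f->apply(f, ibuf);
        return ibuf;
    }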

Cheers,
Peter


