[Bf-committers] S-DNA and Changes

Leo Sutic leo.sutic at gmail.com
Fri Nov 26 11:50:15 CET 2010

Hi Peter,

you raise some very good points, and I'll have to go back to the drawing
board for a moment.

For example: How do you play a movie backwards? Not easy with a
next()-style interface.

It's going to take a little while, though, because the next()-style
interface makes some other things, like image stabilization and object
tracking, much easier.
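
For instance, the iterator can simply hang on to the previously decoded
frame, so anything that needs consecutive frame pairs gets them without an
extra seek or decode. A rough sketch of what I mean (Frame, the producer
callback and tracking_iterator are invented names here, not actual VSE API):

```cpp
#include <cstddef>
#include <cstdlib>
#include <vector>

// Sketch only -- Frame, the producer callback and tracking_iterator are
// invented names, not actual VSE code.
struct Frame {
    std::vector<int> pixels;
};

class tracking_iterator {
public:
    // 'produce' stands in for whatever actually decodes frame n.
    explicit tracking_iterator(Frame (*produce)(int))
        : produce_next(produce), frame_no(0) {}

    // Returns the next frame; afterwards 'prev' holds the frame before it,
    // at no extra decoding cost.
    Frame next() {
        prev = cur;
        cur = produce_next(frame_no++);
        return cur;
    }

    // Toy "motion estimate" for stabilization/tracking: sum of absolute
    // pixel differences between the current and the previous frame.
    int motion_estimate() const {
        int sum = 0;
        for (std::size_t i = 0;
             i < cur.pixels.size() && i < prev.pixels.size(); ++i)
            sum += std::abs(cur.pixels[i] - prev.pixels[i]);
        return sum;
    }

private:
    Frame (*produce_next)(int);
    int frame_no;
    Frame prev, cur;
};
```

With a random-access interface, the tracker would have to fetch frame n-1
itself on every call.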

One question, though:

> map_vse_frame_to_cfra() is *really* a non-trivial function!

How? Isn't it just a fcurve.evaluate (frame)? Or is it the speed control
strips that make it non-trivial?


On 2010-11-26 10:25, Peter Schlaile wrote:
> Hi Leo,
>> Then you can have one strip with an fcurve to map from VSE output frames
>> to scene frames:
>> Strip 1: Scene, frames 1-20, with an fcurve that maps from VSE frames
>> (iterated over using next()) to scene frames. It covers frames 1-219 in
>> the VSE.
>> If I may modify your code a little:
>> class scene_iterator : public iterator {
>> public:
>>     Frame next() {
>>         // fcurves go in map_vse_frame_to_cfra()
>>         float cfra = map_vse_frame_to_cfra(nextFrame);
>>         setup_render(cfra);
>>         ++nextFrame;
>>         return render();
>>     }
>>     int nextFrame;
>> };
> and again, that doesn't work with fcurves and gets really nasty 
> with stacked speed effects.
> map_vse_frame_to_cfra() is *really* a non-trivial function!
>> VSE only sees a sequence of discrete frames
> uhm, why is that exactly the case? In fact, currently it renders 
> internally with floats.
>> - which is precisely what its domain model should look like,
>> because video editing is about time-discrete sequences, not continuous.
> again, why? The point behind making cfra continuous was that the *input* 
> strip can make its best effort to do inter-frame interpolation or do 
> in-between rendering.
> It depends heavily on the *input* strip, how that is done best.
> So: no, I *strongly* disagree with your opinion that the "VSE sees a 
> sequence of discrete frames". In fact, it doesn't!
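
To illustrate what a continuous cfra can buy the *input* strip (a minimal
sketch; interpolate_pixel is an invented name, not VSE code): for a
fractional cfra, a movie strip could blend the two neighbouring source
frames instead of snapping to one of them.

```cpp
#include <cmath>

// Sketch, per pixel: linearly blend frame floor(cfra) and floor(cfra)+1
// according to the fractional part of a continuous cfra.
// 'a' is the pixel from the earlier frame, 'b' from the later one.
unsigned char interpolate_pixel(unsigned char a, unsigned char b, float cfra) {
    float t = cfra - std::floor(cfra);               // fractional part in [0, 1)
    return (unsigned char)(a * (1.0f - t) + b * t + 0.5f);  // round to nearest
}
```

How the in-between frame is best produced (blending, optical flow, true
sub-frame rendering) is exactly the per-strip decision Peter describes.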
>> The Blender scene is a  continuous simulation - having a float cfra 
>> makes sense, because time is continuous there. In the VSE the domain 
>> objects are discrete in time. Having a float cfra makes no sense.
> as stated above, I disagree.
>>> Which brings me to the point: what was the sense in dropping the random
>>> access interface again?
>>> The imbuf reader has also a random access interface, but internally keeps
>>> track of the last fetched frame and only reseeks on demand.
>> It is always possible to wrap a sequential interface in a random-access
>> interface, or vice versa. The purpose of dropping the random access
>> interface was to be able to write code that didn't have to deal with two
>> cases - you'd know that you'll always be expected to produce the next
>> frame, and can code for that. Less to write, less to go wrong.
> uhm, you always will need a fetch() and a seek() function, so where 
> exactly does your idea make things simpler?
>> Clients of the code know that they can iterate over *every* frame in the
>> sequence just by calling next(). With a random access interface -
>> especially one that uses floats for indexing - you'll always worry if
>> you missed a frame, or got a duplicate, thanks to rounding errors.
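
The rounding hazard is easy to demonstrate (a toy model, not VSE code):
floats near 2^20 are spaced 0.125 apart, so repeatedly adding a nominal
step of 0.1 silently advances by 0.125 and frames get skipped; near 2^23
the step is absorbed entirely and the position stops advancing. An integer
frame counter cannot drift like this.

```cpp
// Toy model: advance a float "cfra" by repeatedly adding a float step,
// the way a float-indexed playhead would. Each addition rounds to the
// nearest representable float, so the position drifts away from
// start + n * step -- frames get skipped or repeated.
float advance_float_cfra(float cfra, int steps, float step) {
    for (int i = 0; i < steps; ++i)
        cfra += step;
    return cfra;
}

// The integer next()-style counter is exact by construction.
int advance_int_frame(int frame, int steps) {
    return frame + steps;
}
```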
> uhm, as stated above, next() isn't really defined in your sense.
> You can define a version that does CFRA + 1 if you like. Whether that 
> is really helpful is another question.
> In fact, the next cfra for a given track will be defined by the topmost 
> speed effect fcurve and then calculated down the stack. That won't break 
> your initial idea of changing the order in which frame calculation takes 
> place; it only reflects the fact that next() isn't that easy to calculate 
> if you do retiming in a creative way.
> So: yes, next() won't be easy to calculate in advance for a given track, 
> but yes: that is a fundamental problem if we allow stacked retiming with 
> curves. Even if you do retiming with simple factors, you will run into the 
> problem that, if the user speeds up a track by, say, a factor of 100, you 
> probably don't want to blend *all* input frames into the output frame but 
> limit the sampling to, say, 10 in-between frames. (That's the way Blender 
> 2.49 does it, and Blender 2.5 will do it soon using the new SeqRenderData 
> parameters.)
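
That sampling cap might look roughly like this (a sketch; the names and
numbers are assumptions, not the actual 2.49/2.5 code): when a speed effect
maps one output frame onto a span of many input frames, sample at most
max_samples evenly spaced input frames across the span instead of blending
them all.

```cpp
#include <algorithm>
#include <vector>

// Sketch: which input frames to blend for one output frame of a
// speed-effect strip, capped at max_samples evenly spaced samples.
std::vector<int> input_frames_to_blend(int out_frame, double speed_factor,
                                       int max_samples) {
    double begin = out_frame * speed_factor;          // span of input frames
    double end   = (out_frame + 1) * speed_factor;    // covered by this output
    int span     = std::max(1, (int)(end - begin));
    int samples  = std::min(span, max_samples);       // the cap

    std::vector<int> frames;
    for (int i = 0; i < samples; ++i)
        frames.push_back((int)(begin + (end - begin) * i / samples));
    return frames;
}
```

At speed factor 100 with a cap of 10, output frame 0 would blend input
frames 0, 10, 20, ..., 90 rather than all 100.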
> Cheers,
> Peter
> ----
> Peter Schlaile
> _______________________________________________
> Bf-committers mailing list
> Bf-committers at blender.org
> http://lists.blender.org/mailman/listinfo/bf-committers
