[Bf-committers] VSE Strip-Wise Rendering

Peter Schlaile peter at schlaile.de
Thu Sep 30 00:35:51 CEST 2010


Hi Leo,

> 1. Render A10-A15
> 2. Render B10-B15
> 3. Render Final 10-Final 15
> 4. Render A16-A20
> 5. Render B16-B20
> 6. Render Final 16-Final 20
>
> Of course, we could use a chunk size of 1 and then we'd be back to the
> way things work today, but having larger chunk sizes allows us to
> amortize a fixed chunk cost over several output frames. For example,
> when computing optical flow, you need two frames to produce OF for one
> frame. That's twice the number of frames. But to produce OF for two
> frames, you need just three input frames, not four. For ten OF frames,
> you need eleven input frames. In general, to produce N output frames,
> you need N+1 input frames. The cost of that final extra frame is amortized
> over the other N frames, so we want N, the chunk size, to be as big as
> possible.
>
> I hope that explains what I want to do.

that sounds useful indeed (and is, in part, what I was already planning 
in order to do multi-threaded rendering the right way(tm)). The real 
problem with this idea is the integration of the animation system. Don't 
get me wrong, it has to be solved, but it won't be easy. (The animation 
system currently changes variables all over the place, so you would have 
to make a copy of the complete scene for every thread and give every 
thread its own animation system and CFRA.)
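
Roughly what I have in mind (just a sketch -- RenderContext,
animsys_evaluate and render_frame are made-up names, not the actual
Blender API):

#include <stddef.h>  /* NULL */

/* every thread gets its own scene copy and its own CFRA, because the
 * animation system writes into the scene data wherever it likes */
typedef struct RenderContext {
	struct Scene *scene;  /* private copy of the scene */
	float cfra;           /* this thread's current frame */
	int chunk_size;       /* number of frames in this chunk */
} RenderContext;

static void *render_chunk(void *arg)
{
	RenderContext *ctx = arg;
	int i;

	/* for N output frames you fetch N+1 inputs (optical flow needs
	 * the next frame too), so the extra fetch is amortized over the
	 * whole chunk -- exactly your argument for large chunk sizes */
	for (i = 0; i < ctx->chunk_size; i++) {
		animsys_evaluate(ctx->scene, ctx->cfra + (float)i);
		render_frame(ctx->scene, ctx->cfra + (float)i);
	}
	return NULL;
}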

>> My understanding of your idea is currently: I'd have to render everything
>> from the beginning and that sounds, uhm, sloooow? :)
>
> Yeah, that would suck. But we don't have to do that any more than we do
> now. That is, not at all.

good :)

> So what is the real gain with the kind of processing that I propose?
>
> Your render/bake pass solves one problem - it gives the plugin access to
> more than the current frame. But it also introduces problems:
>
> 1. Every time a parameter is changed, the bake/render must re-run.
> 2. When it runs, it processes, as you say, the entire strip.
> 3. So you:
>    a. don't get any real-time processing.
>    b. can't split the processing over several nodes in the render farm.

The real problem is that you can't calculate optical flow in realtime, 
AFAIK. So my render/bake *does* actually solve a problem: you only have 
to build your optical flow data once and can reuse it later.
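
In code, the pattern is simply "bake once, reuse forever". A sketch
(of_bake_path, of_calculate_and_write, of_read and file_exists are
invented names):

static struct OpticalFlow *of_get(struct Sequence *seq, int cfra)
{
	char path[1024];

	/* per-strip cache location, e.g. //of_cache/strip7_000042.flo */
	of_bake_path(seq, cfra, path, sizeof(path));

	if (!file_exists(path)) {
		/* far too slow for realtime -- runs once, at bake time */
		of_calculate_and_write(seq, cfra, path);
	}
	/* every later use, including interactive scrubbing, is a read */
	return of_read(path);
}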

If you have made it realtime-capable with your engine, I will shut up 
immediately at this point and would love to see a link to that engine :)

> I would then further argue that with these modifications, the
> render/bake process is nothing but a VSE plugin. Which brings me to my
> motivation for wanting to do heavy surgery on the VSE.

Well, a binary plugin system doesn't have to be "heavy surgery". I wrote 
a proposal in 2007 but haven't adapted it to the new animation system or 
RNA. (Which, by the way, solved some of the problems I had at that time 
with the old animation system.)

You can take a look at my old proposal here:
http://wiki.blender.org/index.php/Dev:Ref/Requests/Plugin-System

> Yet Blender isn't very modular. In Blender, the speed control strip, for
> example, doesn't do any speed control - that is done in the sequencer,
> in a special-cased if statement inside the render function. I don't see
> how this can work in the long run, in terms of growing the code and
> feature set.

I think you missed something here. The new speed control is indeed close 
to a noop, but that is intentional.

My idea for the new code base was: make CFRA a float, which means making 
the input strips handle in-between frames. For SCENE strips that means 
they can actually *render* in-between frames (at least once somebody 
fixes the render interface to use floats everywhere). For other input 
strips it means they should use whatever optical flow source is 
available to them.
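
Something along these lines (a sketch; seq_is_scene_strip,
seq_render_scene_strip, seq_fetch_frame and of_interpolate are invented
names, and the real strip types look different):

#include <math.h>  /* floorf */

static struct ImBuf *seq_fetch_float_cfra(struct Sequence *seq, float cfra)
{
	float frac = cfra - floorf(cfra);

	if (seq_is_scene_strip(seq)) {
		/* scene strips can really *render* the in-between frame,
		 * once the render interface takes floats everywhere */
		return seq_render_scene_strip(seq, cfra);
	}
	else {
		/* movie/image strips: warp/blend the two neighbouring
		 * frames along whatever optical flow the strip provides */
		struct ImBuf *a = seq_fetch_frame(seq, (int)floorf(cfra));
		struct ImBuf *b = seq_fetch_frame(seq, (int)floorf(cfra) + 1);
		return of_interpolate(a, b, frac);
	}
}

The speed control strip then shrinks to exactly the noop you saw: all it 
still has to do is remap its input's CFRA.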

> First: Today it is optical flow, and yes, we can solve it the way you
> propose. But what will we face tomorrow? Is it something that we
> absolutely want to do as part of Blender, or would we rather offload it
> on someone else and then just ship the plugin along with Blender?

As you pointed out yourself, optical flow is something like an alpha 
channel. It is part of the input strip, and how it is generated there is 
really a different story that depends heavily on the type of input!
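
Conceptually something like this (a sketch; the field names are
invented, the real ImBuf looks different):

struct ImBufWithFlow {
	int x, y;     /* dimensions */
	float *rect;  /* RGBA pixels -- alpha already lives in here */
	float *flow;  /* two floats per pixel: displacement towards the
	               * next frame; NULL if the input can't provide it */
};

Each input type fills flow in its own way: a SCENE strip gets it from 
the renderer (where it is cheap), a movie strip from baked data, a still 
image not at all.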

> Second: I think Blender's job is primarily to synchronize and provide a
> runtime for video processing plugins locally or in a cluster. Not to
> actually *do* effects. That is better handled via plugins.

I spent a lot of time thinking about a binary plugin interface that 
works well across installations and tries to be very good at API 
versioning in both directions (plugin <-> core).

I still like some of the ideas that went into that system (especially 
the way symbol dependencies are avoided completely!), but it isn't yet 
where it should be.

So if you want to take up where I left off, go ahead!
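
One way to get both properties (a sketch; every name here is invented,
and not necessarily how my proposal does it): the host hands the plugin
a table of function pointers at load time, so the plugin never links
against core symbols, and both sides can check the API version before
they talk to each other.

#include <stddef.h>  /* NULL */

#define SEQ_PLUGIN_API_VERSION 3

/* everything the plugin may call -- passed in, never linked against */
typedef struct HostApi {
	int version;
	struct ImBuf *(*get_frame)(void *handle, float cfra);
	void (*report_error)(const char *msg);
} HostApi;

static struct ImBuf *my_render(const HostApi *host, void *handle,
                               float cfra)
{
	if (host->version < SEQ_PLUGIN_API_VERSION) {
		host->report_error("host too old for this plugin");
		return NULL;
	}
	/* e.g. a one-frame delay effect */
	return host->get_frame(handle, cfra - 1.0f);
}

/* the one and only symbol the host ever looks up in the .so */
typedef struct PluginInfo {
	int version;
	const char *name;
	struct ImBuf *(*render)(const HostApi *host, void *handle,
	                        float cfra);
} PluginInfo;

PluginInfo *plugin_get_info(void)
{
	static PluginInfo info = {
		SEQ_PLUGIN_API_VERSION, "my_effect", my_render
	};
	return &info;
}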

> Blender
> should provide an "optical flow" channel for images, much like it has an
> alpha channel - this is where OF can be stored and where it can be read
> from. But I don't think Blender should generate OF data from, for
> example, videos. It can generate it as part of the 3D render, but that
> is because it is a lot easier there.
>
> Third: Even if we don't support dynamically loadable plugins, we need
> clean internal interfaces that allow access to sequences of images, not
> just single frames.
>
> I realize that Blender is 15 years old, very complex, and that there's
> a lot more to it than just waltzing in and doodling up a plugin system. But
> I think it is necessary to try. Like I said, I will develop this in my
> own branch and unless I succeed, you won't hear about it. But I would
> like to gauge the interest for such a modification, because if I
> succeed, I do want my code merged back into the trunk. Forking, or
> maintaining my own build of Blender, is out of the question.
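
By the way, the internal interface you describe could be quite small
(a sketch; the name and signature are invented):

/* fetch n consecutive frames starting at cfra -- effects that need
 * temporal context (optical flow, retiming, denoising) ask for a
 * window instead of a single frame */
struct ImBuf **seq_fetch_frames(struct Sequence *seq, float cfra, int n);

That alone would already cover the N+1-input case from your chunking 
example.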
>
>> Regarding the tools you have written, do you think that adding
>> per-effect strip render/bake would solve your problems? (It could be
>> done in such a way that the bake function could request arbitrary
>> frames from its input track.)
>
> It would work, but I'd lose the real-time feedback and great UI I was
> hoping for, making Blender a lot less attractive as a way of sharing my
> code in a way that is useful for the target audience (artists).

ok. I'm a bit surprised that you need realtime feedback in optical flow 
*generation*, but I must admit that I haven't worked much with it. So: 
if you have it running in realtime and you need instant feedback, you're 
right.

Cheers,
Peter

--
Peter Schlaile

