[Bf-funboard] VSE, compositing, and 3d integration...

David Jeske davidj at gmail.com
Sun Jul 7 04:20:31 CEST 2013


I think there are two different topics here... "project/UI integration" and
"pipeline integration".

1) 3d / composite / VSE project/UI integration

Blender's current UI suggests I should be able to make three 50-frame
animation scenes (Cube, Sphere, Plane), and then make a fourth, 150-frame
scene ("VSE Sequence") which brings them together in the VSE. In fact, in
the VSE I can "Add -> Scene..." and the strips are brought in.
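Roughly, in bpy terms, the setup I'm describing is something like the
sketch below (scene names and frame ranges are just the example above, and
I may be holding the API wrong):

    import bpy

    # Three 50-frame clip scenes, plus one 150-frame "VSE Sequence" scene
    # that strings them together end-to-end as scene strips.
    edit = bpy.data.scenes.new("VSE Sequence")
    edit.frame_start, edit.frame_end = 1, 150
    edit.sequence_editor_create()

    frame = 1
    for name in ("Cube", "Sphere", "Plane"):
        clip = bpy.data.scenes.get(name) or bpy.data.scenes.new(name)
        clip.frame_start, clip.frame_end = 1, 50
        # Add each scene as a strip on channel 1, arranged end-to-end.
        edit.sequence_editor.sequences.new_scene(
            name=name, scene=clip, channel=1, frame_start=frame)
        frame += 50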

However, this is where things start to break down. The default VSE timeline
is pointed at both the 3d-view editors and the animation editors, which
causes odd linking between whatever scene is loaded in the Default/3d-view
layout and the (different) VSE scene. In fact, for this usage we would
prefer the scene to be a global selection across all layouts, so we don't
run into these cross-matched situations. Even correcting for these effects,
I can't find a setup which allows the VSE to properly "play through" the
three clips arranged end-to-end.

Whether this is me not understanding Blender, or actual bugs, is somewhat
irrelevant. The VSE "Add -> Scene..." suggests that I can add a bunch of
scenes to the VSE and sequence them, but in practical terms it doesn't
really work. This would be really nice to fix...

Once this is fixed, it might be nice to apply an overall composite to the
VSE scene, perhaps just changing the color tone for the entire movie. I
don't see any reason to do this in each individual scene's compositing
setup.
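As far as I can tell, the closest thing available today is an adjustment
strip with a color balance modifier laid over the whole edit; continuing
the sketch above, something like this (untested, and the exact frame_end
convention may be off):

    # Grade the entire 150-frame edit from the "VSE Sequence" scene above.
    seqs = edit.sequence_editor.sequences
    grade = seqs.new_effect(name="Grade", type='ADJUSTMENT',
                            channel=2, frame_start=1, frame_end=150)
    tone = grade.modifiers.new(name="Tone", type='COLOR_BALANCE')
    tone.color_balance.gain = (1.05, 1.0, 0.95)  # warm the whole movie a touch

But that is still the VSE's built-in grading, not the compositor, which is
really the point.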

Does the VSE need to pre-render proxies for the scenes to handle this? Does
it need to downscale the compositing effects? It doesn't really matter to
me. All of the above seems like a really reasonable and simple thing to do.

2) Compositor / VSE Pipeline integration

Blender's compositing + 3d rendering could have better interactive
capabilities via a combination of component pre-rendering and preview
degradation (resolution and quality)... much like Nuke proxies. Currently,
doing anything like this in Blender is very, very manual. As an amateur
user, not a big production studio, I'd also like to pipe solid/textured/GLSL
renders into the compositor, so I could use it more interactively during
live/3d VFX integration.
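To make "very, very manual" concrete: today the closest I can get is to do
a viewport (OpenGL) render of the animation to disk and then pull the
frames back in through an Image node in the compositor. Roughly (the path
and node wiring are just placeholders, and the OpenGL render needs a 3d
viewport available):

    import bpy

    scn = bpy.context.scene
    scn.render.filepath = "/tmp/preview/"     # placeholder output path
    bpy.ops.render.opengl(animation=True)     # solid/textured/GLSL viewport render

    # Feed the pre-rendered frames back in as a compositor input.
    scn.use_nodes = True
    img = scn.node_tree.nodes.new("CompositorNodeImage")
    img.image = bpy.data.images.load("/tmp/preview/0001.png")
    img.image.source = 'SEQUENCE'
    img.frame_duration = scn.frame_end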
The ability to approach real-time interactivity through pre-rendering and
preview degradation is exactly what NLEs do in their compositing/effect
pipelines. There is no "magic" solution to pushing pixels. There are
situations where they can't push pixels in real time, and it requires
low-resolution preview and/or pre-rendering. This seems very much the same
as Nuke proxies.

It's possible I'm just naive, but when I look at the feature sets of the
two areas, I see commonalities bigger than the differences. I also think
that, in general, Blender's capabilities often lag behind the state of the
art by a delay which is dwarfed by Moore's Law. Applied to the above
example, I mean that by the time Blender has the capabilities of Nuke and
Maya, Moore's Law may allow real-time 4k compositing on a wristwatch.
(Okay, that's a joke, but I'm trying to make a point.)

The point is, both pipelines *approach* real-time for some set of input
parameters; they don't achieve it. I can see an argument that a
fixed-function NLE pipeline can outperform a node-based compositing
pipeline for the same set of operations. However, I don't see why a single
pipeline can't function as both an *approaching* real-time preview
3d+compositing pipeline and an *approaching* real-time NLE effect and
compositing pipeline.

-----

To cap off this overly long email...

As an amateur user, not a big production studio, I'd also like to see more
interactive preview capabilities pushed into the 3d+compositing pipeline!
I'd like to feed (solid/textured/GLSL) rendered 3d into VFX compositing,
for much quicker VFX blocking and interactive editing. Combine that with
easy low-resolution preview settings and proxy rendering, and I could get
real-time 3d+compositing playback, at some resolution and quality. This
seems relatively easy to do, and also seems in line with the new direction
to "integrate BGE/GLSL more thoroughly into blender".

If there are real technical reasons why this approach runs into problems,
I'd like to understand what they are. However, "this is not the way things
are done" doesn't really mean anything to me. Blender and open-source 3d
weren't the way things were done 10 years ago.

