[Bf-funboard] VSE , compositing, and 3d integration...
davidj at gmail.com
Fri Jul 5 07:17:53 CEST 2013
I wrote up something that seems semi-coherent in my BA thread to help
redesign the VSE. Thought I would post it here...
"integrate Compositing, 3d and the VSE" -> This is clearly pivotal to a
great VSE. How do we do it?
Node setups already support keyframing attribute values, so they can handle
time variance. What we need is a way to package a Blender scene, with
compositing and/or 3d, into a "reusable effect scene". When we do this, we
should be able to...
a) ... export any animatable compositing attributes as the "external
controls" for the scene (possibly through drivers). For example, a
vignette effect might expose "color", "radius", and "falloff". Once the
scene is packaged, these attributes could be keyframed directly in the VSE.
In the future we could even allow node setups to have "custom" python-coded
UI widgets for their attributes, like an audio VU meter, etc.
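To make the idea concrete, here is a tiny sketch of what an "exported
control" could look like. Everything here is made up for illustration —
EffectControl is not a Blender API, just a model of a named parameter the
VSE could keyframe and evaluate per frame:

```python
# Hypothetical sketch (not Blender API): a packaged effect scene
# publishes named, keyframeable controls that the VSE can drive.

class EffectControl:
    """One exported parameter, e.g. the 'radius' of a vignette."""

    def __init__(self, name, default):
        self.name = name
        self.default = default
        self.keyframes = []  # sorted (frame, value) pairs

    def keyframe(self, frame, value):
        self.keyframes.append((frame, value))
        self.keyframes.sort()

    def evaluate(self, frame):
        """Linearly interpolate the control's value at 'frame'."""
        if not self.keyframes:
            return self.default
        if frame <= self.keyframes[0][0]:
            return self.keyframes[0][1]
        if frame >= self.keyframes[-1][0]:
            return self.keyframes[-1][1]
        for (f0, v0), (f1, v1) in zip(self.keyframes, self.keyframes[1:]):
            if f0 <= frame <= f1:
                t = (frame - f0) / (f1 - f0)
                return v0 + t * (v1 - v0)

# A vignette effect might export a control like this:
radius = EffectControl("radius", default=0.5)
radius.keyframe(1, 0.5)
radius.keyframe(25, 1.0)
print(radius.evaluate(13))  # -> 0.75, halfway between the two keys
```

Under the hood this is exactly what drivers plus F-curves already do; the
new part is only the packaging and the VSE-side UI.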
b) ... configure/export external video inputs to the scene (much like
compositing-node image inputs)... these might be used directly in 2d
compositing, or they might be mapped as textures onto a 3d object. For
example, a page-flip transition could be a scene with a 3d book page flip,
with the two video sources mapped onto the pages.
c) ... configure/export external "3d logic / driver / scripting variables",
such as the text for a titling animation. This could be used either
directly to feed the field that drives 3d text-object creation, or it
could be used by some new type of modifier or python script to do
per-letter object creation and animation.
d) Then we allow the VSE to control the speed/time of the effect clip. By
setting the effect duration, you are slowing down or speeding up time with
respect to the "effect scene" (which has some challenges). An IPO curve
could even give you variable time control of the "effect scene" right from
within the VSE and a graph editor (or a new VSE-customized graph editor).
(Oh, and we need an asset browser.)
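The time-mapping part of (d) is just arithmetic. A rough sketch, with
made-up function names (nothing here is existing Blender code): a uniform
stretch is a frame ratio, and a curve on the same mapping gives the
variable-speed control described above:

```python
# Hypothetical sketch: mapping a VSE strip frame to an effect-scene
# frame.  Uniform stretch is a simple ratio; a curve over normalized
# time gives variable-speed ("time warp") control.

def remap_uniform(strip_frame, strip_len, effect_len):
    """Stretch/squash the effect scene to fill the strip."""
    return strip_frame * (effect_len / strip_len)

def remap_curve(strip_frame, strip_len, effect_len, curve):
    """'curve' maps normalized strip time [0,1] to normalized
    effect time [0,1], so the user can ease in/out of the effect."""
    t = curve(strip_frame / strip_len)
    return t * effect_len

# A 50-frame effect shown in a 100-frame strip plays at half speed:
print(remap_uniform(60, 100, 50))   # -> 30.0

# An ease-in curve spends more strip time on the start of the effect:
ease_in = lambda t: t * t
print(remap_curve(60, 100, 50, ease_in))  # -> 18.0
```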
This heavily leverages what Blender already is, and my hope is this will
be easier to code than you might think. In a sense Blender already has
all of these features independently. What is needed is some encapsulation
and glue UI to wrap scenes up nicely into the VSE. Then we can use all of
Blender's power to create effects for the VSE. At the same time, the user
can "go pro" on an effect scene at any time, directly opening and altering
its details to get whatever effect he wants.
There are some loose ends for sure. Here are some brainstorm thoughts:
1) Time-scaling issues... There will be situations where an effect
shouldn't be stretched, but instead repeated, or just "length controlled"
(i.e., show only the first N seconds of a 50-second effect). These are
pretty easy to handle. There may be more complicated cases; fortunately,
users will always have the fallback of going in and editing the effect
directly.
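The three behaviors in (1) are easy to state precisely. A throwaway
sketch (names invented here, not Blender code) of the frame mapping for
each mode:

```python
# Hypothetical sketch of the three time-scaling modes mentioned
# above: stretch the effect to fill the strip, repeat (loop) it, or
# "length control" (hold: play the first N frames, then freeze).

def map_frame(strip_frame, strip_len, effect_len, mode):
    if mode == "stretch":
        return strip_frame * effect_len / strip_len
    if mode == "repeat":
        return strip_frame % effect_len
    if mode == "hold":
        return min(strip_frame, effect_len - 1)
    raise ValueError(mode)

# A 20-frame effect inside a 50-frame strip, sampled at strip frame 30:
print(map_frame(30, 50, 20, "stretch"))  # -> 12.0
print(map_frame(30, 50, 20, "repeat"))   # -> 10
print(map_frame(30, 50, 20, "hold"))     # -> 19
```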
2) Effect animations may often want to take in "lists" of things. The
simplest list is just the list of characters in a text title. Suppose the
animator wants the letters to swirl around in a cloud before finally
coming to rest together. This can be done in Python, but AFAIK there is
no way to handle it automatically for any text input by an animator.
Ideally we would come up with some kind of "input array" modifier or
logic tool, which would let you take a list of characters, objects,
images, videos, etc., make an animation for one of them, and then define
some way to repeat/shift/randomize/alter the animation for the entire
list.
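The core of such an "input array" modifier might be no more than this.
Again purely a sketch with invented names — the animation itself is a
stand-in string, since the point is only the repeat/shift/randomize
plumbing:

```python
# Hypothetical sketch of an "input array" modifier: define one
# animation, then repeat it across a list of items (letters, objects,
# images...) with a per-item time shift and optional random jitter.

import random

def stagger(items, base_anim, shift, jitter=0.0, seed=None):
    """Return (item, start_frame, anim) triples: each list item
    gets the same animation, offset in time by its index."""
    rng = random.Random(seed)
    out = []
    for i, item in enumerate(items):
        start = i * shift + (rng.uniform(-jitter, jitter) if jitter else 0)
        out.append((item, start, base_anim))
    return out

swirl = "swirl-then-settle"  # stand-in for a real action/animation
for letter, start, anim in stagger(list("Title"), swirl, shift=5):
    print(f"{letter!r} starts {anim} at frame {start}")
```

The same call works for a list of objects or video strips instead of
characters, which is what makes a generic modifier attractive here.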