[Bf-taskforce25] NLA system thoughts

Nathan Vegdahl nathanvegdahl at gmail.com
Fri Mar 6 22:09:53 CET 2009


Some possible use-cases to consider (Ubuntu-style):

- Mr. Blue wants to do a crowd simulation with thousands of
similarly-structured characters.  He creates several actions in one of
the rig files (run cycles, looking around, falling over, etc.), and
wants to be able to apply them independently to all the characters in
the crowd based on their states.

- Crazy Larry runs an animation studio with limited human resources.
To reduce workload and speed up production time, he wants to create a
set of stock actions (run/walk/idle cycles) for each character that
are stored with each character's rig.  When the rigs are linked into a
scene, these stock actions can then easily be used in the NLA editor.
For final tweaking of the shot, all of the animation is baked down to
a single action for easy editing.

- Mrs. Brown wants to use NLA to sequence and mix several rigid-body
simulations of a 50x50 brick wall for a shot in a film.  She runs the
simulations and puts each (baked) simulation into a separate action.
She can then use these actions in the NLA editor.
(The alternative: managing 2500 separate actions per simulation, one
for each of the brick objects.)

- William Reynish is making some funny character animations for a
credits sequence.  He would like each animation to be in a single
separate action, because it makes organizational sense, and makes it
easy to place/adjust them in the timeline using the NLA editor.  One
of the animations is a juggling chinchilla.  The character and
juggling balls are separate objects.  William wants to put the whole
thing (balls and chinchilla) in the same action.

- Andy Goralczyk is working on an epic animated series about a crazy
scientist.  The scientist's giant lab has a complex lighting setup
involving over 40 light sources.  In every episode, when the scientist
turns on the lights, the lights in the lab switch on in a particular
order, with particular brightness and color animations.  This
animation is the same for almost every episode, and Andy does not want
to animate it by hand every time.  He wants to create a single action
which he can then easily reuse for every episode.

- Nathan Vegdahl likes to abuse actions for versioning his animations
(i.e. actions: "anim_v1", "anim_v2", "anim_v3", etc.).  He would like
each action to contain all the animation in the scene for that
version.  Admittedly this is an abuse, and not a primary use-case. ;-)

(Also, I just noticed I forgot to include Bassam in these emails.  I'm
adding him now.)

--Nathan V

On Fri, Mar 6, 2009 at 8:52 AM, Nathan Vegdahl <nathanvegdahl at gmail.com> wrote:
> Hrm.  Based on your responses, it sounds like I seriously
> misunderstood your design for the NLA system.  It sounds like the "IPO
> bag" feature that was so drool-worthy is impossible under this system.
>
>> as soon as you want to directly make one datablock use
>> some keyframes created for a similar one, you get stuck with
>> path-matching issues.
>
> Yeah, this makes sense, which is why having the *possibility* for
> attaching actions to objects makes sense, as outlined in my second
> e-mail.  Having an action for an armature, which you can then also
> apply to another armature, makes sense.  Just like the old system.
> But being limited to that workflow is very... limiting.  Perhaps I
> misunderstood, but I thought one of the major points of redoing the
> NLA system was to escape from this limitation.
>
>> Once you start trying to lump everything from several
>> objects, potentially several characters with hundreds of bones each, from
>> the same scene into a single action, it will quickly become prohibitively
>> difficult for the animator to manage these, as it becomes confusing
>> which character a particular bone or channel belongs to.
>
> It would not be difficult at all, because you can already filter by
> selection.  Moreover, the hierarchy of the channel list (in the
> dopesheet and graph editors) would make it quite obvious which
> object a channel belongs to.  Presumably there would be an
> armature->bone->channel hierarchy for bones, and an object->channel
> hierarchy for normal objects.
>
> Again, what you say makes it sound like the system cannot handle IPO
> bags, which was the big use-case that the previous NLA system could
> not handle.  Aside from the general "everything is animatable" that
> applies to all of animato, I'm struggling to see how the new NLA
> system is any different than the old one, based on what you've said so
> far.  It sounds like it has all the same limitations.
>
> --Nathan
>
>
>
> On Fri, Mar 6, 2009 at 2:26 AM, Joshua Leung <aligorith at gmail.com> wrote:
>> Hi,
>>
>> On Fri, Mar 6, 2009 at 1:49 PM, Nathan Vegdahl <nathanvegdahl at gmail.com>
>> wrote:
>>>
>>> 1) The NLA editor should be, ideally, extremely similar to the
>>> sequence editor.  There really isn't any reason to have the two
>>> interfaces be particularly different, or to have the user interact
>>> with them any differently.  Therefore: why not use the same UI code?
>>> This unifies the user experience greatly, and reduces coding work both
>>> now and in the future.  Only very small changes would need to be made
>>> to accommodate NLA editing.
>>
>> Certainly there can be a great amount of overlap between the two. However,
>> where this idea breaks down is when you start trying to figure out which
>> 'Object' or datablock the strips belong to. Perhaps this is why you proposed
>> the single scene-level action?
>>
>> Or did you mean that the fancy strip-drawing code and the handling of
>> multiple strips in a single row should be reused? If so, then certainly
>> that is what I intend to do. NlaTracks can contain multiple strips.
>>
>>>
>>> 2) IMO, ideally by default there is a single scene-level action that
>>> *all* animation goes into.  Then, if someone is just animating
>>> normally (i.e. no NLA) they don't have to know anything about NLA, and
>>> they don't get confused by action references popping up all over the
>>> place in the Graph Editor and Dopesheet.
>>
>> Hmm... I don't really like this idea too much... at least not yet ;).
>>
>> The reason why I've currently got the system set up to create actions
>> per-ID (i.e. mostly objects by default, but also other things such as
>> materials, lamps, etc.) is that as soon as you want to directly make one
>> datablock use some keyframes created for a similar one, you get stuck
>> with path-matching issues.
>>
>> Also, for efficiency's sake, settings should be animated in an action
>> located as close as possible to the datablock where the setting exists.
>>
>> Another, more worrisome problem is maintaining some of the
>> organisation-related goodies we can easily provide by sticking (for
>> the most part) to attaching actions to objects/datablocks, while keeping
>> the flexibility to do things some other way. Once you start trying to
>> lump everything from several objects, potentially several characters
>> with hundreds of bones each, from the same scene into a single action,
>> it will quickly become prohibitively difficult for the animator to
>> manage these, as it becomes confusing which character a particular bone
>> or channel belongs to.
>>
>> I remember at one stage contemplating auto-generating hierarchies for
>> channel displays based on the RNA-paths, but I abandoned the idea due to
>> the complexities + overhead involved. For each channel, you'd need to
>> dissect the path to obtain the relevant hierarchical divisions, check
>> (based on these) whether the channel fits under the same headings as a
>> previous one did, and create/add levels + the channel as appropriate.
>> Also, this leads to the problem of making sure that any 'channels'
>> (i.e. levels, etc.) created dynamically can be expanded in some way...
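
(As a side note, the basic dissection step described above could be
sketched roughly like this.  This is purely illustrative: plain Python
with made-up channel paths standing in for real RNA-paths, not
Blender's actual API.)

```python
def build_hierarchy(paths):
    """Group animation channels into nested headings by
    dissecting each RNA-style path on its '.' separators."""
    tree = {}
    for path in paths:
        # NOTE: a naive split; real RNA-paths can contain '.'
        # inside quoted keys (e.g. bones["Hand.L"]), which is
        # part of the complexity/overhead mentioned above.
        parts = path.split(".")
        node = tree
        for level in parts[:-1]:
            node = node.setdefault(level, {})
        # The last path element is the animated property itself.
        node.setdefault("__channels__", []).append(parts[-1])
    return tree

# Made-up example paths, loosely modelled on RNA-paths:
tree = build_hierarchy([
    'pose.bones["Hand"].location',
    'pose.bones["Hand"].rotation',
    'pose.bones["Foot"].location',
    "color",
])
# tree["pose"]['bones["Hand"]']["__channels__"] == ["location", "rotation"]
```

(And every dynamically-created level in that tree would indeed still need
its own expand/collapse state, as mentioned.)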
>>
>>>
>>> 3) The user should have to *explicitly* create new actions.  Blender
>>> should *never* *ever* implicitly create actions without explicit user
>>> intervention telling it to do so. (The default scene action is the
>>> only exception.)
>>
>> What? This doesn't make sense... You're implying that actions shouldn't
>> ever be created by either Blender or the user!
>>
>>>
>>> 4) Similarly to #3, the user should typically have to *explicitly*
>>> switch the action they are currently working on.  Blender should
>>> *almost* never implicitly switch actions, except where it really makes
>>> sense to do so. (I can't think of any cases where it would make sense,
>>> but I have a gut feeling there might be some.)
>>
>> This isn't that relevant with the dopesheet/graph editors, but is probably
>> relevant for the action editor.
>>
>>>
>>> 5)  The typical workflow for working with actions should be to
>>> create/switch to the action you want to work on, and start animating
>>> as you normally do.  All keys that you add (regardless whether they
>>> are for objects, materials, modifiers...) go into the currently active
>>> action.  And all displays/editors reflect the current action.
>>
>> Absolute Keying Sets solve this problem for the most part.
>>
>>>
>>> 6) The action that you are currently working on should be obviously
>>> reflected in the UI somewhere.
>>>
>>> More complex features can be layered on top of these 6 points (such as
>>> actions within actions), but unless someone can convince me otherwise,
>>> I think it's important that this be the basic way that NLA works in
>>> Blender.
>>>
>>> I hope my explanations made sense.  If any of you need clarification,
>>> do not hesitate to ask.
>>>
>>> --Nathan
>>
>>
>

