[Bf-taskforce25] NLA system thoughts

Nathan Vegdahl nathanvegdahl at gmail.com
Fri Mar 6 17:52:01 CET 2009


Hrm.  Based on your responses, it seems I seriously misunderstood your
design for the NLA system.  It sounds like the "IPO bag" feature that
was so drool-worthy is impossible under this system.

> as soon as you want to directly make one datablock use
> some keyframes created for a similar one, you get stuck with the problem of
> having path matching issues.

Yeah, this makes sense, which is why keeping the *possibility* of
attaching actions to objects is worthwhile, as outlined in my second
e-mail.  Having an action for an armature, which you can then also
apply to another armature, works fine.  Just like the old system.
But being limited to that workflow is very... limiting.  Perhaps I
misunderstood, but I thought one of the major points of redoing the
NLA system was to escape from this limitation.
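
For example, here's a rough sketch of how I understand the path issue
(the paths, classes, and names below are made up for the sake of
argument, not Blender's actual RNA paths or API):

    # Rough illustration only -- made-up paths and names, not real RNA
    # paths or the Blender API.

    class Rig:
        """Stand-in for an armature datablock: just path -> keyframes."""
        def __init__(self, name):
            self.name = name
            self.channels = {}

    # Keys stored in a per-armature action use paths *relative* to that
    # armature, so the same action re-applies to any rig whose bone
    # names match:
    walk_cycle = {
        'pose.bones["Hip"].location':   [(1, 0.0), (13, 1.2)],
        'pose.bones["Spine"].rotation': [(1, 0.0), (13, 0.3)],
    }

    def apply_action(rig, action):
        for path, keys in action.items():
            rig.channels[path] = list(keys)

    apply_action(Rig("Rig.A"), walk_cycle)
    apply_action(Rig("Rig.B"), walk_cycle)  # re-use is trivial

    # If the same keys lived in one scene-level action, the owning
    # object's name would be baked into every path, and re-targeting
    # would mean rewriting paths:
    scene_level = {
        'objects["Rig.A"].pose.bones["Hip"].location': [(1, 0.0), (13, 1.2)],
    }

So per-object actions stay trivially shareable, which is exactly the
behaviour I'd want to keep.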

> Once you start trying to lump everything from several
> objects, potentially several characters with hundreds of bones each, from
> the same scene into a single action, it will quickly become prohibatively
> difficult to manage these for the animator as it quickly becomes confusing
> which character a particular bone or channel belongs to.

It would not be difficult at all, because you can already filter by
selection.  Moreover, the hierarchy of the channel list (in the
dopesheet and graph editors) would make it quite obvious which object
a channel belongs to.  Presumably there would be an
armature->bone->channel hierarchy for bones, and an object->channel
hierarchy for normal objects.
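
Here's a quick sketch of that grouping idea, with made-up data (this
is not Blender's real channel-list code, just an illustration of how
grouping plus selection filtering keeps ownership obvious):

    # Made-up channel data, purely to illustrate grouping and filtering.
    channels = [
        ("Cube",     None,    "location"),
        ("Cube",     None,    "rotation_euler"),
        ("Armature", "Hip",   "location"),
        ("Armature", "Spine", "rotation_quaternion"),
    ]

    def channel_tree(channels, selected=None):
        """Build object -> bone (or '') -> [properties], optionally
        restricted to the selected objects."""
        tree = {}
        for obj, bone, prop in channels:
            if selected and obj not in selected:
                continue  # "filter by selection"
            tree.setdefault(obj, {}).setdefault(bone or "", []).append(prop)
        return tree

    print(channel_tree(channels))                         # full hierarchy
    print(channel_tree(channels, selected={"Armature"}))  # armature only

Even with hundreds of bones in the scene, the animator only ever sees
the branches for the objects they've selected.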

Again, what you say makes it sound like the system cannot support IPO
bags, which was the big use-case that the previous NLA system could
not handle.  Aside from the general "everything is animatable" that
applies to all of animato, I'm struggling to see how the new NLA
system is any different from the old one, based on what you've said so
far.  It sounds like it has all the same limitations.

--Nathan



On Fri, Mar 6, 2009 at 2:26 AM, Joshua Leung <aligorith at gmail.com> wrote:
> Hi,
>
> On Fri, Mar 6, 2009 at 1:49 PM, Nathan Vegdahl <nathanvegdahl at gmail.com>
> wrote:
>>
>> 1) The NLA editor should be, ideally, extremely similar to the
>> sequence editor.  There really isn't any reason to have the two
>> interfaces be particularly different, or to have the user interact
>> with them any differently.  Therefore: why not use the same UI code?
>> This unifies the user experience greatly, and reduces coding work both
>> now and in the future.  Only very small changes would need to be made
>> to accommodate NLA editing.
>
> Certainly there can be a great amount of overlap between the two. However,
> where this idea breaks down is when you start trying to figure out which
> 'Object' or datablock the strips belong to. Perhaps this is why you proposed
> the single scene-level action?
>
> Or did you mean that the fancy strip-drawing code and the handling of
> multiple strips in a single row should be reused? If that is what you
> meant, then certainly this is what I intend to do. NlaTracks can contain
> multiple strips.
>
>>
>> 2) IMO, ideally by default there is a single scene-level action that
>> *all* animation goes into.  Then, if someone is just animating
>> normally (i.e. no NLA) they don't have to know anything about NLA, and
>> they don't get confused by action references popping up all over the
>> place in the Graph Editor and Dopesheet.
>
> Hmm... I don't really like this idea too much... at least not yet ;).
>
> The reason why I've currently got the system set up to create actions per-ID
> by default (i.e. mostly per-object, but they can also belong to other things
> such as materials, lamps, etc.) is that as soon as you want to directly make
> one datablock use some keyframes created for a similar one, you run into
> path matching issues.
>
> Also, for efficiency's sake, settings should be animated in an action that
> is located as close as possible to the datablock where the setting exists.
>
> Another, more worrisome problem is keeping the organisation-related goodies
> we can provide easily by sticking (for the most part) to attaching actions
> to objects/datablocks, while still having the flexibility to do things some
> other way. Once you start trying to lump everything from several objects,
> potentially several characters with hundreds of bones each, from the same
> scene into a single action, it quickly becomes prohibitively difficult for
> the animator to manage, since it is no longer clear which character a
> particular bone or channel belongs to.
>
> I remember at one stage I contemplated auto-generating hierarchies for
> channel displays based on the RNA-paths, but abandoned the idea due to the
> complexities + overhead involved with that. For each channel, you'd need to
> dissect the path to obtain the relevant hierarchical divisions, check
> whether the channel fits in under the same headings as a previous one did,
> and create/add levels + the channel as appropriate. Also, this leads to the
> problem of making sure that any 'channels' (i.e. levels, etc.) created
> dynamically can be expanded in some way...
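
(For what it's worth, the kind of path dissection described above might
look something like this; the paths are made-up stand-ins, not real RNA
output:)

    import re

    # Sketch only: each path is split into headings, nested levels are
    # created as needed, and the final component becomes the channel
    # under the deepest level.
    def insert_channel(tree, path):
        parts = re.findall(r"\w+", path)
        # 'pose.bones["Hip"].location' -> ['pose', 'bones', 'Hip', 'location']
        node = tree
        for heading in parts[:-1]:
            node = node.setdefault(heading, {})   # create/find the level
        node.setdefault("channels", []).append(parts[-1])

    tree = {}
    for p in ['pose.bones["Hip"].location',
              'pose.bones["Hip"].rotation_quaternion',
              'pose.bones["Spine"].location']:
        insert_channel(tree, p)
    # tree now nests Hip and Spine under pose -> bones, each with its
    # own channel list -- and every dynamically-created level would need
    # its own expand/collapse state, which is the overhead described.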
>
>>
>> 3) The user should have to *explicitly* create new actions.  Blender
>> should *never* *ever* implicitly create actions without explicit user
>> intervention telling it to do so. (The default scene action is the
>> only exception.)
>
> What? This doesn't make sense... You're implying that actions shouldn't
> ever be created by either Blender or the user!
>
>>
>> 4) Similarly to #3, the user should typically have to *explicitly*
>> switch the action they are currently working on.  Blender should
>> *almost* never implicitly switch actions, except where it really makes
>> sense to do so. (I can't think of any cases where it would make sense,
>> but I have a gut feeling there might be some.)
>
> This isn't that relevant for the dopesheet/graph editors, but it is
> probably relevant for the action editor.
>
>>
>> 5)  The typical workflow for working with actions should be to
>> create/switch to the action you want to work on, and start animating
>> as you normally do.  All keys that you add (regardless whether they
>> are for objects, materials, modifiers...) go into the currently active
>> action.  And all displays/editors reflect the current action.
>
> To a certain degree, absolute Keying Sets already solve this problem.
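
(For reference, my understanding of an "absolute" Keying Set, sketched
with made-up names rather than the actual Keying Sets API: a fixed list
of (datablock, path) entries that all get keyed together into whichever
action is currently active.)

    # Sketch only -- not Blender's Keying Set API, just the idea.
    active_action = {}   # (datablock, path) -> [(frame, value), ...]

    keying_set = [
        ("Cube", "location"),
        ("Armature", 'pose.bones["Hip"].location'),
    ]

    def insert_keyframe(frame, current_values):
        """Key every entry of the set into the active action at `frame`."""
        for entry in keying_set:
            active_action.setdefault(entry, []).append(
                (frame, current_values[entry]))

    insert_keyframe(1, {("Cube", "location"): 0.0,
                        ("Armature", 'pose.bones["Hip"].location'): 2.5})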
>
>>
>> 6) The action that you are currently working on should be obviously
>> reflected in the UI somewhere.
>>
>> More complex features can be layered on top of these 6 points (such as
>> actions within actions), but unless someone can convince me otherwise,
>> I think it's important that this be the basic way that NLA works in
>> Blender.
>>
>> I hope my explanations made sense.  If any of you need clarification,
>> do not hesitate to ask.
>>
>> --Nathan
>
>

