[Bf-animsys] Depsgraph Refactor - GSoC2013 System Design Proposal

Nathan Vegdahl nathanvegdahl at gmail.com
Mon Jul 8 00:45:57 CEST 2013


Hi Joshua,

> My considerations for this were:
[...]
> 1) Some of the evaluation steps that need to be performed when
> evaluating things (most notably with armatures, as described in the
> Granularity article) don't quite correspond to tangible pieces of
> data/datablocks that are exposed to users in the UI. I'm particularly
> thinking of things like constructing the temporary IK Trees used in IK
> solving, setting up temporary evaluation objects/state data for
> constraints/constraint targets, and perhaps some other obscure steps
> like that.

Okay, that makes sense.  I guess I'm just wondering why, if that is a
beneficial choice for inner nodes, it isn't also a beneficial choice
for outer nodes.  But I think you touched on that here:

>   * Outer nodes simply hold references to the data that's contained
> within the subgraphs attached to them. They act a bit like
> markers/landmarks for narrowing down the search space for nodes which
> are actually used to perform the evaluation steps required. So,
> instead of perhaps having 50,000 nodes, we "only" have 1000 or so to
> check ;)

This is all a bit over my head, so forgive me if I misunderstand.  But it
sounds, then, like the outer nodes exist mainly for optimization
purposes, whereas the inner nodes are the real "meat" of the new
design.  Am I correct in that?
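
Just to make sure my mental model matches yours, here's a very rough
sketch of how I'm picturing the two kinds of nodes in C.  Every name
and field below is invented on the spot purely for discussion, so
please don't read it as a proposal for the actual structs:

/* Purely a sketch with made-up names, not actual depsgraph code. */

#include "DNA_ID.h"        /* struct ID */
#include "DNA_listBase.h"  /* ListBase */

/* Inner node: one atomic evaluation step, essentially a wrapped
 * function pointer that a scheduler can drop onto a task queue. */
typedef struct DepsOperationNode {
    struct DepsOperationNode *next, *prev;

    /* the evaluation callback itself */
    void (*evaluate)(void *context, void *item);

    /* note about what this step computes, so tags can be matched up */
    const char *description;

    /* tagging/traversal state used by the scheduler */
    short flag;               /* e.g. NEED_UPDATE, SCHEDULED, DONE */
    int num_pending_parents;  /* incoming dependencies not yet done */
} DepsOperationNode;

/* Outer node: just a landmark that points at a datablock and at the
 * cluster of inner nodes evaluating it, so searches only walk ~1000
 * of these rather than all 50,000 operations. */
typedef struct DepsOuterNode {
    struct ID *id;        /* the datablock this node stands for */
    ListBase operations;  /* the DepsOperationNodes "inside" it */
} DepsOuterNode;

If that's roughly the shape of it, then the outer nodes really are
just an index over the inner ones.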

And if that is the case, are there any limitations imposed by outer
nodes not being the same kind of graph as inner nodes?  For example,
what if in the future we want to unify the constraint system for
objects and bones (which would be nice...).  Could we construct an IK
chain of objects, even though that would involve outer nodes?

[Regarding animation/actions:]
> Yep, I was just using this as one of the key
> examples of what you'd want to do with that
> sort of thing.

Awesome.  I don't think I have anything of substance to add, then. ;-)

> As a bit of a brainwave I had when just typing this, we could very
> well just piggyback this off the existing "Group" functionality:
> * In the Object Properties, we have that panel for "Add to Group", and
> a bunch of subpanels describing how each object belongs to each group.
> This basically shows that the object is part of the group.
> * If each group can get an AnimData block associated with it, and
> perhaps with the ability to also assign arbitrary ID-blocks to it too
> (perhaps in a list separate from the objects list for backwards-compat
> reasons), then it becomes possible to have a mechanism/attachment
> point for actions to apply to a whole bunch of ID's/objects at the
> same time. Naturally, we'd also provide RNA access to the group's
> lists of data, which allows us to create paths like:
> "objects[\"GroupedCube\"].location", etc. to animate stuff within that
> group
> * Evaluation of this would simply use the standard mechanisms we're
> working on now...

This sounds absolutely fabulous.  Having a way to construct arbitrary
groupings of ID blocks for animation would be the "holy grail", so to
speak. :-D

In my previous e-mail, I was more just thinking of easy ways to "piggy
back" off the existing design.  But arbitrary groupings would be even
better.
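
Just to illustrate the kind of thing I'm imagining on the data side,
here's a sketch that simply copies the "ID + AnimData" pattern other
animatable ID types already use.  The field names are guesses for the
sake of discussion, not the real DNA:

/* Sketch only: Group gaining an AnimData hook, mirroring the pattern
 * used by other animatable ID types.  Field names are guesses. */

#include "DNA_ID.h"
#include "DNA_listBase.h"

typedef struct Group {
    ID id;
    struct AnimData *adt;  /* new: animation data for the group */

    ListBase gobject;      /* existing: the objects in the group */
    ListBase extra_ids;    /* new: arbitrary extra ID-blocks, kept in a
                            * separate list for backwards compatibility */
} Group;

/* With RNA access to those lists, an F-Curve in the group's action
 * could use a path like
 *     objects["GroupedCube"].location
 * resolved relative to the group, so that one action drives several
 * ID blocks at once. */

Whether the extra IDs end up in a second list or not is a detail; the
attachment point for the AnimData, and RNA paths resolved relative to
the group, are the parts that matter to me.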

--Nathan


On Sat, Jul 6, 2013 at 7:44 AM, Joshua Leung <aligorith at gmail.com> wrote:
> Hi Nathan,
>
> See replies below...
>
> On Sat, Jul 6, 2013 at 9:18 AM, Nathan Vegdahl <nathanvegdahl at gmail.com> wrote:
>> Okay, I've had a chance to go through the proposal more carefully now.
>>  It still looks good, but I have a few questions/comments:
>>
>> First, regarding this:
>>> Inner nodes are used to actually keep track of relationships
>>> on a fine-enough scale that most pseudo-cyclic situations
>>> won't show up as such. To be precise, the set of inner nodes
>>> (i.e. all the nodes on the right hand side of the diagram)
>>> defines the full set of evaluation steps that can be
>>> performed/executed in the scene in response to tagged
>>> changes. Each individual atomic node here is an evaluation
>>> step that doesn't really contain any others.
>>
>> I also read the blog post about this, and the descriptions of inner
>> nodes sound more along the lines of data-processing nodes (i.e. each
>> node represents an operation of some kind) than data nodes (i.e. each
>> node represents an object or piece of data).  Am I correct in this
>> inference?
>
> Practically yes...
>
>> And if so, what is the motivation for the inner nodes
>> being data-processing style nodes rather than data nodes (especially
>> when the outer nodes are data nodes)?
>>
>
> My considerations for this were:
> 1) Some of the evaluation steps that need to be performed when
> evaluating things (most notably with armatures, as described in the
> Granularity article) don't quite correspond to tangible pieces of
> data/datablocks that are exposed to users in the UI. I'm particularly
> thinking of things like constructing the temporary IK Trees used in IK
> solving, setting up temporary evaluation objects/state data for
> constraints/constraint targets, and perhaps some other obscure steps
> like that.
>
> 2) From a purely implementation point of view:
>   * Outer nodes simply hold references to the data that's contained
> within the subgraphs attached to them. They act a bit like
> markers/landmarks for narrowing down the search space for nodes which
> are actually used to perform the evaluation steps required. So,
> instead of perhaps having 50,000 nodes, we "only" have 1000 or so to
> check ;)
>   * Inner nodes on the other hand are effectively a mechanism for
> wrapping up the evaluation step functions into some generic format
> that can be placed on a task queue (and sent to a scheduler to do the
> dirty work of getting them to run). Each one of these is basically a
> function pointer, with some additional data perhaps to make it easier
> to trace what exactly they're being used to calculate (so that tags
> can be applied correctly), as well as the tagging/traversal state
> stuff needed to let the depsgraph do its magic. It may turn out that
> the vast majority will actually be "data" nodes (in the sense that
> they represent the evaluation function for specific pieces of data),
> rather than being "add/multiply/mix" type nodes.
>   * I've been dabbling with the idea of letting ID Group nodes
> enumerate the datablocks they've got in their "headliner" (i.e.
> referenced ID's) section, so that these id's can be used for tagging
> inner nodes, instead of having a whole bunch of pointers that we must
> check/adjust when copying the graph.
>
>> Second:
>>> A second example here is for handling/hosting animation
>>> evaluation for “character + props”, where it'd be nice to
>>> be able to include the props in the same action as the
>>> character animation[...]
>>
>> I just want to make sure we're on the same page here.  My hope for the
>> future of the animation system is that actions can be used to hold any
>> related animation data.  So this isn't just characters+props (although
>> that is a major use-case), but it's also cases like
>> lights+scene-background-color.  I would absolutely love to see a way
>> to group objects for animation purposes (even just using object groups
>> for the purpose, perhaps), as that would cover probably 95% of
>> use-cases.
>
> Yep, I was just using this as one of the key examples of what you'd
> want to do with that sort of thing.
>
> As a bit of a brainwave I had when just typing this, we could very
> well just piggyback this off the existing "Group" functionality:
> * In the Object Properties, we have that panel for "Add to Group", and
> a bunch of subpanels describing how each object belongs to each group.
> This basically shows that the object is part of the group.
> * If each group can get an AnimData block associated with it, and
> perhaps with the ability to also assign arbitrary ID-blocks to it too
> (perhaps in a list separate from the objects list for backwards-compat
> reasons), then it becomes possible to have a mechanism/attachment
> point for actions to apply to a whole bunch of ID's/objects at the
> same time. Naturally, we'd also provide RNA access to the group's
> lists of data, which allows us to create paths like:
> "objects[\"GroupedCube\"].location", etc. to animate stuff within that
> group
> * Evaluation of this would simply use the standard mechanisms we're
> working on now...
>
>> Finally, and this is actually for Brecht:
>>
>>> * Maybe more related to proxies, but I think we should make a clear
>>> distinction between groups/objects that are instanced (evaluated at
>>> their original location and duplicated into a new place without any
>>> evaluation, so fast and memory efficient) or proxied (evaluated in
>>> their new place, possibly interacting with other objects there).
>>
>> If we can do this in some kind of automatic way (e.g. auto-checking if
>> data has any overrides on it, and marking it as a pure instance if it
>> doesn't), that would be great.  But if the user has to understand that
>> distinction and set it up, I think that would impose a bit too much
>> and would clutter what could otherwise be a very elegant and seamless
>> experience. Being able to link/instance things, and just override
>> properties and data on them, would be a marvellous workflow.  (And, by
>> the way, then we could finally ditch the confusing "proxy" term ;-)
>> Maybe call it "local overrides".)
>>
>
> Indeed, the very existence of the "proxy" concept in the UI so far is
> perhaps merely a bit of a kludge for the fact that we don't have any
> way of automagically keeping track of "overrides" added to data. It
> might've been a slightly different story if we'd been looking at
> introducing them post-2.5 (with RNA), but then again, BBB and most of
> the 2.4 series might not have gone quite so smoothly ;)
>
>
> Hope that clarifies things a bit,
> Joshua


