[Bf-committers] Bf-committers Digest, Vol 54, Issue 26

Benoit Bolsee benoit.bolsee at online.be
Mon Jan 19 18:37:15 CET 2009


There is some misunderstanding about what time-dependency actually means.
In your example, I assume you mean that you want to modify a constraint
at frame 20 only (e.g. the position of an IK target). This will
effectively have an impact on the IK chain movement around frame 20,
because the algorithm takes into account the dynamics of the armature,
but it will probably not change much at frame 30 and certainly not at
frame 150 because you have not changed the constraints at these frames:
the IK algorithm is designed to match the constraints and equal
constraints will produce equal results.
In this respect, the new algorithm performs very much like the old one:
all the constraints are matched at each frame (as long as the system is
not over-constrained) and you can simply expect more natural transitions
from one frame to the next.
In particular, the current IK solver can produce two completely
different poses at frames n and n+1 even though the target positions are
very similar, simply because the solver has followed different
convergence paths to resolve the two poses; that's impossible with the
new IK algorithm.
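
To illustrate the convergence-path problem with a toy example (this is
not Blender code, just the standard two-bone IK math): a planar two-bone
chain already has two exact solutions, "elbow down" and "elbow up", and
an iterative solver seeded differently at two consecutive frames can
land on either one:

    import math

    def two_bone_ik(x, y, l1=1.0, l2=1.0):
        """Return both (shoulder, elbow) angle pairs reaching (x, y)."""
        d2 = x * x + y * y
        # Law of cosines gives the elbow angle; clamp for safety.
        cos_e = (d2 - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
        elbow = math.acos(max(-1.0, min(1.0, cos_e)))
        base = math.atan2(y, x)
        k = math.atan2(l2 * math.sin(elbow), l1 + l2 * math.cos(elbow))
        return (base - k, elbow), (base + k, -elbow)

    # Both returned poses reach (1.2, 0.5); an iterative solver may
    # converge to either one depending on its starting pose.
    print(two_bone_ik(1.2, 0.5))
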
Note that the new algorithm matches the constraint velocity too: it's
not enough to know the position of a target at frame n, we also need to
know its velocity (the velocity can be extracted from the Ipo curves).
The dependency on velocity is a new feature; I don't know how it will
affect the work of the animators but I believe it should be easy to
handle.
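
To give an idea, here is a rough sketch of how such a velocity could be
estimated from a sampled Ipo curve with finite differences; the real
solver may well differentiate the curve segments analytically, and the
names here are illustrative only:

    def target_velocity(evaluate, frame, fps=25.0):
        """Estimate d(position)/dt at 'frame' by central differences."""
        dt = 1.0 / fps                 # duration of one frame in seconds
        return (evaluate(frame + 1) - evaluate(frame - 1)) / (2.0 * dt)

    # A target moving at 2.0 units per frame -> 50.0 units/s at 25 fps.
    print(target_velocity(lambda f: 2.0 * f, frame=20))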

Today I had a good meeting with the main programmer of the IK algorithm
at the KUL, and I now have a better idea of how it will be integrated
into Blender.
The current Pose constraint Tab will remain: the new system is not
compatible with the way constraints are evaluated today, so I think
I need a new Tab with a new set of constraints.

The stateful nature of the new algorithm will be handled via a flexible
point cache: Blender will extract from the scene the parts that must be
resolved by the new algorithm and build a robot-tree representing the
system to solve (armatures and constraints). The robot-tree stays in
memory as long as no element is changed in the scene that affects the
robot-tree (adding or removing a constraint, changing the bones, etc.).
A point cache is attached to the robot-tree; it keeps track of all
the variables of the system: state values, input values, and desired
constraint values.
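
In Python-like pseudo-structures (the names are illustrative, not the
actual Blender data types), the layout would be something like:

    from dataclasses import dataclass, field

    @dataclass
    class CacheFrame:
        state: list    # internal solver state (joint values and rates)
        inputs: list   # values read from the scene at this frame
        targets: list = field(default_factory=list)  # desired constraint values

    @dataclass
    class RobotTree:
        bones: list          # armature segments to solve
        constraints: list    # constraints attached to the chain
        cache: dict = field(default_factory=dict)  # frame -> CacheFrame
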
When Blender needs to know the pose of an armature, it fetches the
robot-tree, rebuilds it if it is not present, digs into the cache to find
the current frame, retrieves the input values from the scene and compares
them with the input values in the cache. If they are the same, it assumes
that they were also the same for all the previous frames and simply
extracts the state values, from which it computes the armature pose using
the information in the robot-tree. If the input values are different, it
trashes the cache from the current frame onward and recomputes the state
values, assuming the input and state values stored for all previous
frames are still valid. If the previous frame is missing from the cache,
it does the same thing but takes the most recent frame present in the
cache as the starting point for the calculation.
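
A sketch of that lookup logic, building on the structures above;
solve_step() stands in for one instantaneous IK step and is not a real
Blender call:

    def state_at(tree, frame, read_inputs, solve_step, initial_state):
        """Return the solver state at 'frame', reusing the cache."""
        inputs = read_inputs(frame)
        cached = tree.cache.get(frame)
        if cached is not None and cached.inputs == inputs:
            return cached.state   # inputs unchanged: trust the cache

        # Inputs changed or frame missing: drop this frame and all the
        # later ones, then re-simulate from the most recent cached
        # frame (or from the start if nothing older is cached).
        for f in [f for f in tree.cache if f >= frame]:
            del tree.cache[f]
        start = max(tree.cache, default=0)
        state = tree.cache[start].state if tree.cache else initial_state
        for f in range(start + 1, frame + 1):
            ins = read_inputs(f)
            state = solve_step(state, ins)  # one instantaneous IK step
            tree.cache[f] = CacheFrame(state=state, inputs=ins)
        return state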

This algorithm guarantees a correct animation only if you play the
animation from the first frame. It's the best method I can think of,
because it's impossible to rebuild all the simulation data for all the
frames all the time.

When the animator is happy with the result, they bake an action: the
point cache is transferred into a fully specified action (= an action
channel for each bone and a control point for each frame) which can be
used in the NLA sequencer as before.
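
Schematically, baking just flattens the cache, continuing the sketches
above; bone_pose() is an assumed helper that maps a solver state to
per-bone transforms:

    def bake_action(tree, bone_pose):
        """Flatten the cache into {bone: [(frame, transform), ...]}."""
        action = {bone: [] for bone in tree.bones}
        for frame in sorted(tree.cache):
            pose = bone_pose(tree, tree.cache[frame].state)
            for bone, transform in pose.items():
                action[bone].append((frame, transform))
        return action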
 
/benoit

> ----------------------------------------------------------------------
> 
> Date: Sun, 18 Jan 2009 15:04:48 -0800
> From: "Nathan Vegdahl" <cessen at cessen.com>
> Subject: Re: [Bf-committers] new IK algorithms in Blender
> To: "Herman Bruyninckx" <Herman.Bruyninckx at mech.kuleuven.be>,
> 	"bf-blender developers" <bf-committers at blender.org>
> Message-ID:
> 	<bd1b4c730901181504q55e38dc8g42a3dc76fe5ae69 at mail.gmail.com>
> Content-Type: text/plain; charset=ISO-8859-1
> 
>     IMO, the biggest issue I see with this is the 
> frame-dependence. As an animator, I want frames to be 
> *independent* of each other because that makes things 
> predictable for me: if I change frame 20, I know it's not 
> going to affect frame 30 or frame 150, which I was already happy 
> with.  As an analogy, imagine trying to draw if making marks 
> on one part of the page changed marks that you already made 
> on another part of the page.  It would be a very frustrating 
> experience.
>     In general, I think tools for hand-made animation should 
> be highly local in nature whenever possible, unless there is 
> a truly substantial benefit for having it be otherwise.
> 
>     However, having said that, it sounds like this could be 
> useful outside of hand-made animation applications.  Game 
> engine, crowd simulations, etc.
>     I'd just shy away from using it as a tool for animators.
> 
> --Nathan V
> 
> On Tue, Jan 13, 2009 at 8:15 AM, Herman Bruyninckx 
> <Herman.Bruyninckx at mech.kuleuven.be> wrote:
> > On Tue, 13 Jan 2009, Brecht Van Lommel wrote:
> >
> > [...]
> >> From the animation point of view you quickly get questions like:
> >> * How to interactively edit animation at keyframe 200 without having
> >> to wait and rebake from frame 1 each time? Is this possible at all?
> >
> > It is: the constraints determine the _instantaneous_ motion, so the 
> > baking has all the state information that is necessary to continue 
> > from a given point with modified constraint or armature parameters.
> >
> > In addition to the instantaneous IK solver, robot programmers (and I
> > am quite sure also animators) need:
> > (i) "logic scheduling" of such instantaneous motions supported by
some 
> > form of a finite state machine (for which the Game Engine already
has 
> > a decent code base!); and
> > (ii) pre-programmed (but customizable) "gaits" and "postures", that 
> > is, sets of "IPO curves" that give nominal motions such as walking, 
> > reaching, running, picking up things, etc. (In your words, such gaits
> > are "baked IPOs" that can be resimulated with customized settings.)
> >
> > So, the questions you raise are to the point, but the (possible) 
> > solutions to them are within our medium term vision of the 
> > development.
> >
> > Herman
> >
> >> * How to keep animated data and simulated data separated? By layering
> >> one on top of the other? Like layering actions in the NLA perhaps?
> >> * How to tweak simulated results? How do you make it clear to the 
> >> user that edits to simulated data will be lost? Or can such edits be
> >> layered too?
> >>
> >> However from the robotics point of view maybe all you want to do is
> >> specify parameters, and then run the simulation from scratch each 
> >> time and inspect it? In that case the solution can be simpler but 
> >> perhaps not as suited for animation.
> >>
> >> Brecht.
> >> _______________________________________________
> >> Bf-committers mailing list
> >> Bf-committers at blender.org 
> >> http://lists.blender.org/mailman/listinfo/bf-committers
> >>


