[Bf-committers] new IK algorithms in Blender

Brecht Van Lommel brecht at blender.org
Tue Jan 13 15:44:12 CET 2009


Hi Benoit,

Benoit Bolsee wrote:
> Unlike the current IK algorithm in Blender, the new algorithms are 
> stateful and time dependent: you cannot get the pose for a given frame 
> number if you don't know the pose and the internal state at the previous 
> frame (and recursively back to the beginning of the simulation). The BGE 
> is a natural place to implement the algorithm because it is naturally 
> time dependent, but we also want to have the animation available in 
> Blender for frame-by-frame inspection of the actions.
> One possible approach is baking via the BGE: you prepare the setup, 
> define the task by constraints and run the simulation in the BGE. The 
> joint positions are stored in Ipo curves and retrieved in Blender.
> Another approach is to have baking or caching in Blender, like cloth (I 
> haven't looked at the cloth code yet). Baking the IK solution should be 
> fairly quick, potentially much quicker than the current IK algorithm, 
> which always starts from the rest position at each frame.
>  
> My idea is to implement a flexible caching system that will be available 
> in Blender for animation, and in the BGE for recording the simulation or 
> the actual physics parameters when the BGE is used to control a real 
> robotic setup. I'm interested to hear your opinion on that.
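
To restate the constraint above in code: with a stateful solver the pose 
at frame N is a function of the pose and internal state at frame N-1. A 
toy Python sketch, all names invented here, nothing like the real 
algorithm or any Blender API:

    class StatefulIKSolver:
        def __init__(self, rest_pose):
            self.pose = list(rest_pose)             # joint values from the last step
            self.velocity = [0.0] * len(rest_pose)  # internal solver state

        def solve_frame(self, targets, dt):
            # Toy stand-in for the real iteration: move each joint toward
            # its target, with damping carried over from previous frames.
            for i, target in enumerate(targets):
                self.velocity[i] = 0.5 * self.velocity[i] + (target - self.pose[i]) * dt
                self.pose[i] += self.velocity[i]
            return list(self.pose)

    # Without a cache, inspecting frame 200 means replaying frames 1..200:
    solver = StatefulIKSolver([0.0, 0.0, 0.0])
    for frame in range(1, 201):
        pose = solver.solve_frame([1.0, 0.5, -0.3], dt=1.0 / 25.0)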

I'm not sure about the best solution; both point caching and ipo 
curves provide part of it. Point caching as used by cloth handles 
part of the problem: caching, baking, clearing caches on changes, 
etc., but currently it works with modifiers and not with ipo curves 
or the armature animation system. If the result is baked into ipo 
curves, on the other hand, you already get playback, editing and 
visualization of the curves. But you still need a way there to 
distinguish between the ipo curves animated by the user and fed into 
constraints, and the cached/baked ipo curves resulting from those 
constraints.
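
Roughly the distinction I mean, as a toy sketch (invented names, just 
to make the two kinds of curves concrete, not a proposal for actual 
data structures):

    class ChannelStore:
        def __init__(self):
            self.user_curves = {}   # channel -> user keyframes, editable input
            self.baked_curves = {}  # channel -> per-frame cache, derived output

        def set_user_key(self, channel, frame, value):
            self.user_curves.setdefault(channel, {})[frame] = value
            self.baked_curves.clear()  # editing an input invalidates the bake

        def bake(self, channel, frame, value):
            self.baked_curves.setdefault(channel, {})[frame] = value

        def evaluate(self, channel, frame):
            # Baked results win during playback; fall back to user keys.
            baked = self.baked_curves.get(channel, {})
            if frame in baked:
                return baked[frame]
            return self.user_curves.get(channel, {}).get(frame)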

I have no idea what the workflow would be for robotics, or even how 
such a system works best within an animation system; mixing animation 
and simulation is complex. So I suppose you should look at practical 
use cases and figure out what you need for them.

From the animation point of view you quickly get questions like:
* How do you interactively edit animation at keyframe 200 without 
having to wait and rebake from frame 1 each time? Is this possible at 
all? (A rough sketch of one option follows after this list.)
* How do you keep animated data and simulated data separated? By 
layering one on top of the other? Like layering actions in the NLA 
perhaps?
* How do you tweak simulated results? How do you make it clear to the 
user that edits to simulated data will be lost? Or can such edits be 
layered too?
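
For the first question, one direction could be snapshotting the solver 
state every N frames, so an edit at frame 200 only forces a rebake from 
the nearest snapshot before the edit instead of from frame 1. Continuing 
the toy solver sketch above (again, all names invented; snapshots[0] is 
assumed to hold the initial state):

    CACHE_STEP = 10

    def rebake(solver, snapshots, edit_frame, end_frame, targets_at, dt):
        # Restore pose/velocity from the last snapshot strictly before
        # the edited frame, then re-run from there.
        start = ((edit_frame - 1) // CACHE_STEP) * CACHE_STEP
        solver.pose, solver.velocity = [list(v) for v in snapshots[start]]
        poses = {}
        for frame in range(start + 1, end_frame + 1):
            poses[frame] = solver.solve_frame(targets_at(frame), dt)
            if frame % CACHE_STEP == 0:
                # Snapshots downstream of the edit are stale; overwrite them.
                snapshots[frame] = (list(solver.pose), list(solver.velocity))
        return poses

Of course that only helps if the solver state is small enough to 
snapshot cheaply, and it doesn't answer how the edits themselves are 
layered on top of the simulation.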

However, from the robotics point of view maybe all you want to do is 
specify parameters, then run the simulation from scratch each time 
and inspect the result? In that case the solution can be simpler, but 
perhaps not as suited for animation.

Brecht.

