[Bf-committers] new IK algorithms in Blender
cessen at cessen.com
Mon Jan 19 00:04:48 CET 2009
IMO, the biggest issue I see with this is the frame-dependence.
As an animator, I want frames to be *independent* of each other
because that makes things predictable for me: if I change frame 20, I
know it's not going to affect frame 30 or frame 150, which I was already
happy with. As an analogy, imagine trying to draw if making marks on
one part of the page changed marks that you already made on another
part of the page. It would be a very frustrating experience.
In general, I think tools for hand-made animation should be highly
local in nature whenever possible, unless there is a truly substantial
benefit for having it be otherwise.
However, having said that, it sounds like this could be useful
outside of hand-made animation applications: game engines, crowd
simulation, and the like. I'd just shy away from using it as a tool
for animators.
On Tue, Jan 13, 2009 at 8:15 AM, Herman Bruyninckx
<Herman.Bruyninckx at mech.kuleuven.be> wrote:
> On Tue, 13 Jan 2009, Brecht Van Lommel wrote:
>> From the animation point of view you quickly get questions like:
>> * How to interactively edit animation at keyframe 200 without having
>> to wait and rebake from frame 1 each time? Is this possible at all?
> It is: the constraints determine the _instantaneous_ motion, so the baking
> has all the state information that is necessary to continue from a given
> point with modified constraint or armature parameters.
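To make that point concrete, here is a minimal sketch (all names and the toy
"instantaneous" step are invented for illustration, not Blender code): if the
bake stores the full state at every frame, then editing a constraint at frame
k only requires resimulating frames k onward, not from frame 1.

```python
def step(state, constraint):
    # Toy "instantaneous" solver step: move the state toward the target.
    return state + 0.5 * (constraint - state)

def bake(initial_state, constraints):
    """Bake the whole sequence, keeping the state at every frame."""
    states = [initial_state]
    for c in constraints:
        states.append(step(states[-1], c))
    return states

def rebake_from(states, constraints, k):
    """Re-run only frames k.. after constraints[k:] were edited."""
    states = states[:k + 1]  # frames 0..k are still valid checkpoints
    for c in constraints[k:]:
        states.append(step(states[-1], c))
    return states
```

Because every frame is a checkpoint, `rebake_from` produces exactly the same
result as a full re-bake with the edited constraints, at a fraction of the cost.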
> In addition to the instantaneous IK solver, robot programmers (and I am
> quite sure also animators) need:
> (i) "logic scheduling" of such instantaneous motions supported by some form
> of a finite state machine (for which the Game Engine already has a decent
> code base!); and
> (ii) pre-programmed (but customizable) "gaits" and "postures", that is,
> sets of "IPO curves" that give nominal motions such as walking, reaching,
> running, picking up things, etc. (In your words, such gaits are "baked
> IPOs" that can be resimulated with customized settings.)
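Point (i) above can be sketched as a tiny state machine (a hypothetical toy,
with invented names) that schedules which nominal motion is active each frame
and switches when the current one finishes:

```python
class GaitFSM:
    """Schedules pre-programmed gaits: each state plays for a fixed number
    of frames, then hands off to its successor state."""

    def __init__(self, transitions, start):
        # transitions: {state: (frames_in_state, next_state)}
        self.transitions = transitions
        self.state = start
        self.frame = 0

    def tick(self):
        """Advance one frame; return the gait that was active this frame."""
        active = self.state
        self.frame += 1
        duration, nxt = self.transitions[self.state]
        if self.frame >= duration:
            self.state = nxt
            self.frame = 0
        return active
```

For example, with `{"stand": (2, "walk"), "walk": (3, "stand")}` starting in
`"stand"`, ticking seven times yields two frames of standing, three of walking,
then standing again. In a real system each state would drive a baked IPO/gait
instead of returning a label.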
> So, the questions you raise are to the point, but the (possible) solutions
> to them are within our medium term vision of the development.
>> * How to keep animated data and simulated data separated? By layering
>> one on top of the other? Like layering actions in the NLA perhaps?
>> * How to tweak simulated results? How do you make it clear to the user
>> that edits to simulated data will be lost? Or can such edits be
>> layered too?
>> However from the robotics point of view maybe all you want to do is
>> specify parameters, and then run the simulation from scratch each time
>> and inspect it? In that case the solution can be simpler but perhaps
>> not as suited for animation.
>> Bf-committers mailing list
>> Bf-committers at blender.org
> K.U.Leuven, Mechanical Eng., Mechatronics & Robotics Research Group
> <http://people.mech.kuleuven.be/~bruyninc> Tel: +32 16 322480
> Coordinator of EURON (European Robotics Research Network)
> Open Realtime Control Services <http://www.orocos.org>
> Disclaimer: http://www.kuleuven.be/cwis/email_disclaimer.htm