[Bf-committers] Problems with time in do_nla()
tapplek at gmail.com
Thu Jan 5 19:46:02 CET 2006
Warning! noob opinion
On Thu, Jan 05, 2006 at 12:00:52AM +0100, Ton Roosendaal wrote:
> Blender's time system is a nightmare still! I've never had time to make
> that nice and clean during the animation refactor.
> Nevertheless, a frame is supposed to be a short (or int), it's the
> image being rendered you know! There's api calls in blender to convert
> them to float, like system_time().
> But there's a lot of confusement going on... like for object
> startframes, motion blur, time-mapping, etc.
> The do_all_actions() call could just get a float frame input yes, but
> that's probably better todo when the bigger picture gets solved.
I think that blender should work not with frames, but with time.
Frames are an integral part of film and cartoons, but I believe
that computer animation is different enough that frames should
only be second-class citizens of blender.
Video (and audio) are fundamentally continuous, analog signals,
broadcast throughout the real world. The problem with analog
signals is that it takes an infinite amount of information to
record or process an arbitrary analog signal. Luckily the rules
of signal sampling say that if the signal is continuous and not
too jumpy (bandlimited), then it is possible to *perfectly*
reconstruct that analog signal from a series of samples. The
only condition the sampling must satisfy concerns the rate: as long
as we sample at more than twice the bandlimiting frequency (the
Nyquist criterion), we can perfectly reconstruct the signal from
the samples using smooth interpolation. What this means is that if
the most sudden event we wish to record takes 0.1 seconds (roughly
a 10 Hz bandlimit), we must sample at over 20 fps (say 21 fps) in
order to perfectly reconstruct it.
This is the principle used in all audio and video processing.
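As a small illustration of that sampling principle (not anything from
Blender's code), here is the Whittaker-Shannon sinc interpolation in
Python: a 3 Hz signal sampled at 10 fps, well above twice its bandlimit,
is recovered almost exactly at a time between sample instants.

```python
# Sketch of perfect reconstruction from samples (Whittaker-Shannon).
import numpy as np

def reconstruct(samples, fs, t):
    """Interpolate samples (taken at rate fs) at continuous time t."""
    n = np.arange(len(samples))
    # np.sinc is the normalized sinc: sin(pi*x)/(pi*x)
    return np.sum(samples * np.sinc(t * fs - n))

fs = 10.0                          # sample rate: 10 fps
f = 3.0                            # 3 Hz signal, below fs/2, so recoverable
n = np.arange(1000)
samples = np.sin(2 * np.pi * f * n / fs)

t = 50.123                         # a moment between sample instants
exact = np.sin(2 * np.pi * f * t)
approx = reconstruct(samples, fs, t)
print(abs(exact - approx))         # tiny: reconstruction is nearly perfect
```

With a finite number of samples the reconstruction is only approximate
near the ends of the window, which is why the test point sits mid-window.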
There is an additional nicety for video: The eye will do the
interpolation for us, saving the computer from having to
interpolate between frames all the time. That is why video can
be a series of still images instead of an always changing value
like in audio.
In film, the ideal signal is what passes into the camera and is
then sampled into the computer. In cartoons, there really is no
original signal, just the frames. However, blender (and any
computer animation software) is different: it does not work with
the frame-by-frame samples of the video, it creates them. Blender
models the ideal, continuous video signal, and can create a frame
at any moment in the video by taking a snapshot of that continuous
signal and rendering it.
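A sketch of what "time first, frames derived" could look like (these
names are illustrative only, not Blender's actual API): animation state
is evaluated at a float time in seconds, and integer frame numbers are
computed only where an image must actually be emitted.

```python
# Hypothetical sketch: seconds as the primary clock, frames derived.
FPS = 25.0  # illustrative frame rate

def frame_to_seconds(frame, fps=FPS):
    """Map an integer frame number to its instant on the continuous timeline."""
    return frame / fps

def seconds_to_frame(t, fps=FPS):
    """Map a continuous time to the nearest sample instant (render-time only)."""
    return round(t * fps)

print(frame_to_seconds(50))   # 2.0 seconds
print(seconds_to_frame(2.0))  # frame 50
```

The point of the sketch is that rounding to a frame happens in exactly
one place, at render time, instead of being baked into every time value
the animation system passes around.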
Thus, since we have access to the original video at every moment
in time (not just a finite set of frames), frames should be more
of an afterthought. The only thing to worry about is making the
frame rate high enough so as to get at least two samples of the
fastest event in the video stream. This guarantees that the
continuous video signal will be totally recovered when
interpolated by the eyes.
Just my thought on why blender should work more in the realm of
seconds than of frames.