[Bf-committers] Problems with time in do_nla()
stealthapprentice at yahoo.com
Sat Jan 7 09:13:13 CET 2006
I think time should always be represented as a double.
Ultimately, you have to weigh two sets of requirements:
simulation requirements and output requirements.
In my experience these kinds of tools work best when
the simulation engine runs at its appropriate speed;
for example, an Euler integrator for physics needs to
run at a step rate that guarantees non-divergent
behavior for particular spring/damping ratios.
The output might be film at 24 fps, a game at 60 fps,
or possibly a game at "highly variable fps", the
latter being arguably undesirable depending on genre.
Therefore, at frame-generation time, it is good to
have robust sampling of the various simulations
comprising the frame. For particle systems, I prefer
stochastic sampling of the desired interval; for rigid
bodies, I prefer fixed sampling with interpolation.
Life is easiest, then, if time is represented
internally as a double and presented to the user as
frames, seconds, or SMPTE code depending on the
desired output medium. If everything runs as double
there are no potential round-off issues, and subframes
can always be computed accurately.
Finally, I suggest time needs to be a double because
some compilers up-convert float to double for
arithmetic on the FPU. If all variables in the sample
below are float, the results of 1) and 2) can differ,
but they will be the same if the variables are double.
The reason is the point at which rounding occurs.

1) if (x > (y+z)) foo();
2) x -= z; if (x > y) foo();

This behavior leads to nasty off-by-one-frame errors
if some code uses 1) and some code uses 2).
--- Matthew Fulmer <tapplek at gmail.com> wrote:
> Warning! noob opinion
> On Thu, Jan 05, 2006 at 12:00:52AM +0100, Ton
> Roosendaal wrote:
> > Hi,
> > Blender's time system is a nightmare still! I've
> > never had time to make that nice and clean during
> > the animation refactor.
> > Nevertheless, a frame is supposed to be a short
> > (or int), it's the image being rendered you know!
> > There's api calls in blender to convert them to
> > float, like system_time().
> > But there's a lot of confusion going on... like
> > for object startframes, motion blur, time-mapping,
> > etc.
> > The do_all_actions() call could just get a float
> > frame input, yes, but that's probably better to do
> > when the bigger picture gets solved.
> I think that blender should work not with frames,
> but with time.
> Frames are an integral part of film and cartoons,
> but I believe
> that computer animation is different enough that
> frames should
> only be second-class citizens of blender.
> Video (and audio) are fundamentally continuous,
> analog signals,
> broadcast throughout the real world. The problem
> with analog
> signals is that it takes an infinite amount of
> information to
> record or process an arbitrary analog signal.
> Luckily the rules
> of signal sampling say that if the signal is
> continuous and not
> too jumpy (bandlimited), then it is possible to
> reconstruct that analog signal from a series of
> samples. The
> only condition the sampling must satisfy is frame
> rate. As long
> as we sample at more than twice the bandlimiting
> frequency, we
> can perfectly reconstruct the signal from the
> samples using
> smooth interpolation. What this means is that if the
> most sudden
> event we wish to record takes 0.1 seconds, we must
> sample at
> over 20 fps (say 21 fps) in order to perfectly
> reconstruct the
> original signal.
> This is the principle used in all audio and video
> recording.
> There is an additional nicety for video: The eye
> will do the
> interpolation for us, saving the computer from
> having to
> interpolate between frames all the time. That is why
> video can
> be a series of still images instead of an always
> changing value
> like in audio.
> In film, the ideal signal is what passes into the
> camera and is
> then sampled into the computer. In cartoon, there
> really is no
> original signal, just the frames. However, blender
> (and any
> computer animation software) is different. Blender
> does not work
> with the frame by frame samples of the video; it
> creates them;
> it creates the ideal, continuous video signal, and
> can create a
> frame at any moment in the video by taking a
> snapshot of the
> continuous video signal and rendering it.
> Thus, since we have access to the original video at
> every moment
> in time (not just a finite set of frames), frames
> should be more
> of an afterthought. The only thing to worry about is
> making the
> frame rate high enough so as to get at least two
> samples of the
> fastest event in the video stream. This guarantees
> that the continuous video signal will be totally
> recovered when interpolated by the eyes.
> Just my thought on why blender should work more in
> the realm of
> seconds than of frames.
> Matthew Fulmer