[Bf-committers] The (un)official render daemon discussion

Lars Nilsson chamaeleon at gmail.com
Tue Nov 18 18:19:49 CET 2008


On Tue, Nov 18, 2008 at 12:03 PM, Kent Mein <mein at cs.umn.edu> wrote:
> I think we need to think about this also from a picture vs animation
> framepoint.  When doing an image you obviously want to do the stiching
> but if doing an animation, it probably makes more sense to give each
> node a frame and or a block of frames to work on instead.  The system
> should be able to handle this.  (Or just go the frame route if you
> want to keep things simple.)

tSNet allowed for splitting still images into pieces and used full
frames for animations. This seemed like the simplest way of letting
multiple machines split the amount of work according to the power of
each machine. Whenever a machine finished what it was working on, it
would get another work unit. Over time, faster machines do more image
pieces or frames than slower machines, and there is not a whole lot of
idle time until the very last bits, when a slower machine might be
responsible for finishing the job.
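The pull-based scheme described above can be sketched roughly as
follows. This is just an illustrative simulation, not tSNet's actual
code; the node names, speeds, and frame counts are made up:

```python
import queue
import threading
import time

# Shared queue of work units (here: animation frames). Each render
# node pulls the next unit when it finishes its current one, so a
# faster node naturally ends up doing more units.
work = queue.Queue()
for frame in range(1, 21):          # 20 frames to render
    work.put(frame)

done = {}                            # node name -> frames completed
lock = threading.Lock()

def node(name, speed):
    """Simulated render node; higher 'speed' means faster renders."""
    while True:
        try:
            frame = work.get_nowait()
        except queue.Empty:
            return                   # queue drained, node goes idle
        time.sleep(0.01 / speed)     # stand-in for actual rendering
        with lock:
            done.setdefault(name, []).append(frame)
        work.task_done()

threads = [threading.Thread(target=node, args=("fast", 4)),
           threading.Thread(target=node, args=("slow", 1))]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Every frame gets rendered exactly once; the split between nodes
# reflects their relative speed rather than a fixed up-front division.
print({name: len(frames) for name, frames in done.items()})
```

The point of pulling rather than pre-assigning is that no node sits
idle while another still has a long backlog, except possibly at the
tail end of the job.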

Another side note: as far as trueSpace is concerned, there is a
certain amount of up-front effort it makes before it starts to render,
so making pieces too small could have a detrimental effect on overall
speed if too much time is spent initializing for rendering rather than
doing actual rendering on larger pieces. Whether the same holds true
for Blender, I don't know, but if one imagines having to create some
intermediate file to feed a render engine for each request, based on a
.blend file, then the more times you have to do it, the slower the job
becomes, in direct relation to the time that work takes. Built-in
renderers would presumably have less of an overhead, while external
renderers might be slightly worse in this respect. Hope this comment
makes sense.
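The overhead trade-off is easy to see with back-of-the-envelope
numbers. These figures are invented for illustration, not measured
from trueSpace or Blender:

```python
def total_cost(pieces, setup_per_piece, render_total):
    """Total work across all machines: a fixed setup cost is paid once
    per work unit, while the render work itself is the same no matter
    how it is split."""
    return pieces * setup_per_piece + render_total

# A 100 s render split into 4 pieces, with 2 s of setup per piece:
print(total_cost(4, 2.0, 100.0))    # 108.0 s total work

# The same render split into 100 tiny pieces: setup now dominates.
print(total_cost(100, 2.0, 100.0))  # 300.0 s total work
```

So the sweet spot for piece size depends on how the per-unit setup
time compares to the rendering time of a unit, which is why a renderer
with cheap setup tolerates much finer splitting.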

Lars Nilsson
