[Bf-committers] Sunday meeting minutes, 25th feb 2007

Giuseppe Ghibò ghibo at mandriva.com
Mon Feb 26 20:15:45 CET 2007

Ton Roosendaal wrote:

 > [...]
> Alternative proposal: do a 2.44 in a relative short timeframe, like 2 
> months (end of april).
> That gave a lot of positive endorsement.
> So the proposal is:
> - cvs is open again (bcon1)
> - work on new projects can start, but only those that can be completed 
> in 6 weeks
>   (please discuss/mention projects in public before starting it!)
> - big refactor projects then can start in may.
> - during next 2 months, also papers/design for refactors can be created 
> and approved
> Possible projects for 2.44:
> - make blender 64 bits safe (Ton)
> - image browser (Andrea)
> - adding composite nodes (Bob)
> - vertex array drawing for derivedmesh (Joe)
> ffmpeg cleanup, and integration for release on OS X and Windows
> - baking: object-to-object ray normal baking (baking hires object into 
> lowres)
> - pynode (Nathan)

Sorry for not being at the chat meeting, but we'd like to emphasize a
few aspects of productivity and multicore support which could be
considered for 2.44 or later.

The performance of newer CPUs is "growing vertically" at a slower
rate than in the past. E.g. the AMD Opteron 250 at 2.4GHz was
available at the end of 2004; now in 2007 the highest clock rate for
this class of CPU is 3.0GHz for the A64 (and 2.8GHz for the
Opterons), which means just 25% faster.

Instead, these CPUs are now "growing horizontally" at a higher rate,
becoming multicore. It's no longer uncommon to find quad-core or even
dual quad-core CPUs in consumer desktop machines, but unfortunately
these CPUs would remain mostly unused (or unusable) by Blender. So
how about improving (or adding) the parallelization of certain
tasks/modules in Blender (either through pthreads or OpenMP, which is
also scheduled for the upcoming gcc 4.2)? Candidates might be:

- fluids
- softbody
- game engine (splitting physics and logic from rendering)
- compositor
- sequence playback, both for Imaging and for the Sequencer. This
    could also be done in a distributed way (e.g. preparing frames
    x+1, x+2, ..., x+M [where M is the number of cores] ready for
    playback).

Of course, it's not a given that every parallelization task is easy
(or even possible), but perhaps some attention could be paid to these
aspects in the next round of development.

Together with this, there are also productivity aspects which are not
strictly connected to parallelization but rather to having tasks on
the same blend file run in the background (without having to launch
another instance of Blender or write a script that emulates a slave
task), and which are no less important, e.g.:

- rendering from the same blend file in which you're actually
modeling. For instance, if you use Blender to render animated
textures to be used on mesh A and still want to keep modeling mesh B
in the same scene, currently an option is to save your scene in an
XX.blend file, save a copy as YY.blend and render that one
separately, while continuing to model B within XX.blend.

- recording an image sequence out of compositor nodes while continuing
to model in a 3d window.

- launching an animation playback (Alt+A) in one window, e.g. to see
the move through the camera itself, while tweaking the scene in
another window.

- and so on (in general, all those situations in which you're stuck
waiting for Blender to do or finish something).

With these premises, some questions also arise:

- how much time could the parallelization of each of the modules
   above take (listed from easiest/shortest to longest)?

- can the Google Summer of Code be considered for some of the above?

- which programming model would be the most affordable for doing
   this? pthreads, OpenMP (gcc 4.2/gomp/...), etc.


More information about the Bf-committers mailing list