[Bf-committers] Sunday meeting minutes, 25th feb 2007
tbaldridge at gmail.com
Mon Feb 26 22:48:32 CET 2007
>>So how about improving (or adding)
>>the parallelization of certain tasks|modules (either through pthread or
>>OpenMP, which is scheduled also in next gcc 4.2) in blender, which
I agree with this idea. However, there is an issue here. Simply going
multi-threaded is a great idea, but limited in my view. OpenMP and
pthreads use multiple threads within the same process, so they cannot
scale beyond a single machine.
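To illustrate the shared-memory model I mean (sketched in Python rather than Blender's C, purely as an analogy for pthreads/OpenMP): every thread lives in one process and sees the same memory, so there is no copying and no messaging.

```python
import threading

# Illustrative sketch: threads (as with pthreads/OpenMP) share one
# address space, so they can all mutate the SAME object directly.
counter = {"value": 0}
lock = threading.Lock()

def worker(n):
    # Each thread increments the shared counter; the lock guards
    # against lost updates, just as a pthread mutex would.
    for _ in range(n):
        with lock:
            counter["value"] += 1

threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# 4 threads x 1000 increments, all landing in one shared counter.
```

The convenience is exactly the limitation: that shared counter only exists inside one process on one machine.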
By using an API like PVM or MPI, we would allow Blender to be scalable
not only across multiple cores/CPUs but also across multiple machines,
i.e. a render farm.
Some parts of Blender (such as fluids and radiosity) have rather small
datasets, and it would be possible to stream these across a gigabit
network to other workstations, increasing the overall performance.
I know we're talking about a lot of work, but if a flexible framework
could be developed using PVM, it may be possible not only to let
Blender "catch up" with some of the more heavily threaded packages out
there, but to far surpass them.
The problem with PVM is that all memory is duplicated: each "thread"
is its own process with its own block of memory, independent of the
other threads. So maybe a hybrid of the two would be more in style:
pthreads for local CPUs, and PVM for other machines.
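A rough sketch of that hybrid, again in Python as a stand-in for the real thing: each "machine" is modelled as a separate process (its own address space, as with PVM), while inside each process a couple of threads share memory (as with pthreads/OpenMP). The function names and the summing workload are hypothetical, just to show the shape.

```python
import multiprocessing
import threading

def local_threaded_sum(chunk):
    # Inside ONE process: two threads cooperate on shared state,
    # the pthreads/OpenMP half of the hybrid.
    total = [0]
    lock = threading.Lock()

    def add(part):
        s = sum(part)
        with lock:
            total[0] += s

    half = len(chunk) // 2
    ts = [threading.Thread(target=add, args=(chunk[:half],)),
          threading.Thread(target=add, args=(chunk[half:],))]
    for t in ts:
        t.start()
    for t in ts:
        t.join()
    return total[0]

def distributed_sum(data, n_nodes=2):
    # Across processes (stand-ins for machines): no shared memory,
    # so results come back as messages -- the PVM half of the hybrid.
    chunks = [data[i::n_nodes] for i in range(n_nodes)]
    with multiprocessing.Pool(n_nodes) as pool:
        return sum(pool.map(local_threaded_sum, chunks))
```

The point of the split is that only the small per-node results cross the (slow) process/machine boundary, while the fine-grained work stays in fast shared memory.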
One thing I will mention: unlike other apps, Blender's load/save
functions are extremely fast. This means the whole .blend file
could be created and streamed across the network to the other
clients. Basically, we already have a built-in serializer/pickler!
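The idea above in miniature, using Python's own pickler (the .blend writer would play this role in practice; the scene data here is made up for illustration): pack a scene into bytes, which could then be streamed over a socket to a render node and reconstructed there.

```python
import pickle

# Hypothetical scene data, standing in for a .blend file's contents.
scene = {
    "name": "shot_01",
    "frames": list(range(1, 251)),
    "objects": ["Camera", "Lamp", "Suzanne"],
}

payload = pickle.dumps(scene)     # serialize: scene -> bytes on the wire
restored = pickle.loads(payload)  # deserialize on the receiving node
```

A render node receiving `payload` would end up with an identical copy of the scene, which is all a distributed renderer needs to start working on its assigned frames.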
I've done some work with this in the past, and am in the process of
developing a full PVM-style clustering module in pure Python. If
anyone is willing to work with me on this, I'd be willing to dive into
writing up a few design docs on the concept and possibly contributing
a bit of code. I don't have loads of time, but then again, no one does.
re·cur·sive /rɪˈkɜrsɪv/: Adj. See Recursive.