[Bf-committers] Re: Internal renderer question

K. Richard Rhodes krich at frontiernet.net
Wed Oct 12 20:48:35 CEST 2005


Trevor McDonell wrote:

> Hello,
>
>> Although very useful, I don't recommend such a coding project to
>> someone who's not totally familiar with how Blender works, and
>> especially the rendering system.
>>
>> I've done a lot of work to get Blender rendering in threads (SDL);
>> doing that involved a lot of very painful code rewrites. It now does
>> scanline-based rendering in a thread. Having threads running for
>> entire Parts is, at this moment, a task I won't even try myself.
>
>
> Yes, I started to suspect a few weeks ago that my approach, a fairly
> standard distributed computing technique, really wasn't the best way
> to go about things with Blender, but I bashed away at it hoping for
> some positive results. I guess this confirms my suspicions!
>
>> I have no working knowledge of the implications of "MPI" or
>> "distributed memory". It would really help to first tell us the
>> functionality you require, before having already decided on crucial
>> technical implementation issues. :)
>
>
> Yes, I agree, but hindsight is always 20/20 (=
> The work started as a project course I decided to take at uni this
> semester. The course itself is very open-ended, but I was required to
> define what I planned to do very early on. Still, I've learnt a lot,
> and that doesn't preclude me from now following this new direction.
> The general aim for the project is to be able to make Blender render
> in real-time (given a sufficiently large number of machines), which I
> guess can then be used for some interesting things.
>
>> For example, for a cluster you can just start up individual Blender
>> binaries and signal each running process to render a single part.
>> That's not shared memory, but quite efficient. (This feature is not
>> implemented yet, although it's already possible with a hacky trick.)
>
>
> I had been toying with this idea and was going to try it within the
> next few days, since my current approach was making no headway. I
> guess I can move on to this now without looking back (=
>
> For this, so far I know I can create the Blender processes on the
> remote machines. They could all read the same .blend file from a
> shared mount (say), although broadcasting it over the network would be
> much faster (I'm not sure how to do this offhand; this step is hazy,
> and suggestions are very welcome). After that, I should be able to
> send them the selected render options and control who does which part
> pretty easily. Getting the rect data back should be fine as well. This
> is said off the top of my head with a fair bit of blind optimism, but
> it effectively circumvents the major problem I was having.
>
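A rough sketch of the kind of per-machine dispatch described above
(hostnames and paths are made up, and since rendering a single part
isn't exposed on the command line yet, this hands out whole frames
instead):

    #!/usr/bin/env python
    # Rough sketch: hand out whole frames to remote machines over ssh.
    # Assumes the .blend file is visible on every machine at the same
    # path (e.g. a shared mount); hostnames below are invented.
    import subprocess

    HOSTS = ["node01", "node02", "node03"]   # made-up machine names
    BLEND = "/shared/scene.blend"            # assumed shared-mount path
    OUT = "/shared/frames/frame_####"        # Blender fills in the frame number

    def render_frame(host, frame):
        # 'blender -b' renders without a UI, '-o' sets the output path,
        # '-f' renders the given frame.
        cmd = ["ssh", host, "blender", "-b", BLEND, "-o", OUT, "-f", str(frame)]
        return subprocess.Popen(cmd)

    # Round-robin frames 1..30 over the hosts, then wait for them all.
    jobs = [render_frame(HOSTS[f % len(HOSTS)], f) for f in range(1, 31)]
    for job in jobs:
        job.wait()
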
>> If 'distributed memory' means that you run threads that share memory,
>> I recommend you not even try!
>
>
> The most common example would be a cluster of (independent) machines,
> all networked. MPI is a library you can use to send messages, data,
> and such between the machines. It's pretty much become the de facto
> standard these days for parallel computing.
>
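For illustration only, here is a minimal master/worker message exchange
in the style MPI provides, using the mpi4py Python bindings (nothing
Blender-specific; the "part number" payload is just an assumption about
how work might be handed out):

    # Run with something like:  mpirun -np 4 python parts.py
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()      # this process's id within the job
    size = comm.Get_size()      # total number of processes

    if rank == 0:
        # The master hands a part number to every worker...
        for worker in range(1, size):
            comm.send({"part": worker}, dest=worker, tag=1)
        # ...and collects whatever comes back (rect data, in the real case).
        results = [comm.recv(source=w, tag=2) for w in range(1, size)]
        print("received parts:", sorted(r["part"] for r in results))
    else:
        job = comm.recv(source=0, tag=1)
        # A real worker would render its part here; we just echo it back.
        comm.send({"part": job["part"], "rect": None}, dest=0, tag=2)
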
>
> Thanks again everyone for your help and suggestions. I shall see how 
> this technique plays out.
>
> -Trev
>
> _______________________________________________
> Bf-committers mailing list
> Bf-committers at projects.blender.org
> http://projects.blender.org/mailman/listinfo/bf-committers
>
>
You may wish to look into the Dr. Queue project (http://www.drqueue.org/).
I have used it in the past on a 20-system "render farm" of mine.

K-Rich

