[Bf-committers] [Bf-blender-cvs] [26206] Compiling issue

Brecht Van Lommel brecht at blender.org
Sat Jan 23 15:53:31 CET 2010

Hi Joe,

Right, I replied to the wrong mail; I was talking about the
guardedalloc changes. I understand this is experimental, but I don't
think more experimentation will prove this to be the right thing to
do. It may well lead to a speedup in simple test cases, but a simple
use of pooling can lead to much wasted memory and make problems worse
when running Blender for a while. So it is not clear to me what the
purpose is here: if this is the start of writing an advanced memory
allocator, then I don't think we should try to do that ourselves, and
if not, then I don't think this can be good enough to put in a
release.


On Sat, Jan 23, 2010 at 3:19 PM, joe <joeedh at gmail.com> wrote:
> What are you talking about specifically?  It helps with ghash, because
> each bucket node was being allocated individually, causing a
> significant speed problem.  This particular solution was very
> appropriate; it's why we have the mempool library in the first place.
> Now, the experimental code I committed (#ifdef'd out) to guardedalloc
> is different (and was a
> different commit even).  This particular commit has nothing to do with
> that.  On that topic, OS X has (or had, anyway) a reputation for having
> a system allocator almost as slow as Windows'; Linux is the only OS as
> far as I know (other than the BSDs, I guess) that doesn't suffer from
> this.  So it's hardly just a Windows issue.
> The overhead we get from guardedalloc isn't all that bad, really.  I
> wasn't talking about that in the slightest.  What I was talking about
> was the significant performance loss we get from overusing the system
> allocator, which has caused significant problems for me and others.  I
> committed the code #ifdef'd out, so people who need it can play around
> with it but not cause problems for others.  There's a reason it's
> *experimental*.
> Joe
