[Bf-committers] How about adding boost uBLAS library to blender/extern?

Shaul Kedem shaul.kedem at gmail.com
Mon Jan 26 16:00:43 CET 2009


Hi,

 Thank you for the detailed answers; some feedback:
 - MSVC can deal with template debugging; some of the other platforms can't
 - From your answers I understand that it is possible to do the exact
same thing in C; it will just make the code look worse. I am sure
there are ways to work around that.
 - "it's your call" - I am just giving feedback as you guys, the call
is at the hands of others :)

Regards,
Shaul

On Mon, Jan 26, 2009 at 9:37 AM, Konstantinos Margaritis
<markos at codex.gr> wrote:
> I'll only comment on the points I disagree with.
>
> On Monday 26 January 2009 16:08:19, Shaul Kedem wrote:
>>  - C++ is a superset of C, which means C++ can never be faster than C.
>
> Valid for most cases but not always, see below.
>
>>  - Template-based coding is hard to debug, sometimes impossible
>> (believe me, when you see gdb crash it's not a pretty sight)
>>  - C is simpler, and compilers are better at optimizing C code
>
> Not always true. Unless you hand-tune C code, a C++ compiler has a better
> chance of rescheduling the instructions for a given algorithm.
>
>> It's not a fluke that Linus didn't switch to C++ for Linux - he has
>> good reasons for it. UI is nice in C++, as are some other things more
>> related to data manipulation, but kernel work should remain in C, and
>> if I take Yves' argument and reverse it - there is no reason why we
>> can't learn from Eigen and code the same optimizations in C,
>
> At the start of 2008 I was determined to change the whole math stuff in
> Blender - and I actually have a local tree (based on 2.45) - with every math
> routine replaced with a SIMD-optimised version. In theory it would offer a
> big speedup (my plan was to do the SIMD optimisations first and then port
> them to Cell), but it didn't work out, for the following reasons, which is
> why I gave up - or rather put this work on hold:
>
> 1. Replacing all the math functions while still passing plain float[4]
> structs to every function means that each call has to load from memory into
> the vector registers and save back again, which degrades performance hugely.
> Take, e.g., a complex matrix manipulation that would really benefit from SIMD
> math: unless one wrote custom SIMD code inside the function (and thus ignored
> the math functions completely), the speed gain would be minimal, because the
> overhead of loading the matrix into registers, doing the calculation, saving
> it again, then loading it again (in another function), performing another
> calculation, saving, and so on would counter any gain from the SIMD
> optimisations. I know, I did it already. The same is true for SSE, AltiVec
> and Cell optimisations (my work is exactly related to SIMD architectures,
> from SSE and Cell to ARM NEON). A rough sketch of this pattern follows after
> point 2.
> 2. Eigen uses templated matrix structures. For a complex operation it
> internally keeps the data in vector form and only loads/saves at the end of
> the operation. In practice it is still slower than hand-tuned SIMD code, but
> in most cases you want the changes to the code to remain minimal and as
> readable as possible - you don't want loads of #ifdef SSE inside the renderer
> code, as that would hinder readability. In my experience, using a library
> like Eigen2 would increase readability, remove the need for separate math
> libraries, and still provide extra speed over the current scheme, or over
> anything that does not involve hand-tuned SIMD code inside each routine.
> (See the second sketch below.)
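>
> To make point 1 concrete, here is a rough sketch of the pattern (not actual
> Blender code; the vec4_* and half_sum names are made up for illustration,
> using plain SSE intrinsics):
>
>     #include <xmmintrin.h>  /* SSE intrinsics */
>
>     /* Hypothetical float[4] helpers in the style of Blender's math routines. */
>     static void vec4_add(float r[4], const float a[4], const float b[4])
>     {
>         __m128 va = _mm_loadu_ps(a);          /* load a into a vector register   */
>         __m128 vb = _mm_loadu_ps(b);          /* load b into a vector register   */
>         _mm_storeu_ps(r, _mm_add_ps(va, vb)); /* compute and store straight back */
>     }
>
>     static void vec4_scale(float r[4], const float a[4], float s)
>     {
>         __m128 va = _mm_loadu_ps(a);          /* reloads a, even if the caller  */
>         __m128 vs = _mm_set1_ps(s);           /* just produced it in a register */
>         _mm_storeu_ps(r, _mm_mul_ps(va, vs));
>     }
>
>     /* A chained expression pays the load/store round trip at every step. */
>     void half_sum(float out[4], const float a[4], const float b[4])
>     {
>         float tmp[4];
>         vec4_add(tmp, a, b);        /* load a, load b, store tmp   */
>         vec4_scale(out, tmp, 0.5f); /* reload tmp, store out again */
>     }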
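>
> And here is a minimal sketch of the same chain with Eigen (again only an
> illustration, using Eigen's Vector4f and its operator overloads): the whole
> expression is built as a template expression and evaluated lazily when it is
> assigned, so the intermediate (a + b) stays in registers and never goes back
> to memory.
>
>     #include <Eigen/Core>
>
>     void half_sum(Eigen::Vector4f &out,
>                   const Eigen::Vector4f &a,
>                   const Eigen::Vector4f &b)
>     {
>         out = 0.5f * (a + b);  /* one fused, vectorised loop */
>     }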
>
> Of course it's your call; I just wanted to let you know that in some cases
> C++ *can* actually be faster than C, especially if you don't really want to
> go low-level.
>
> Konstantinos Margaritis
> Codex
> http://www.codex.gr
> _______________________________________________
> Bf-committers mailing list
> Bf-committers at blender.org
> http://lists.blender.org/mailman/listinfo/bf-committers
>

