[Bf-committers] Code optimisation

Anders Nilsson bf-committers@blender.org
23 Apr 2004 15:49:17 +0200


There is no way a compiler can do proper vector optimization unless it
is given hints: it doesn't know how many iterations your for-loops run
or where vectorization really matters. For a compiler that vectorizes,
look at the link below (and look at its benchmarks against other
compilers such as Intel's). Notice that they mention 'hinting' in order
to generate the best code possible.

http://www.codeplay.com/vectorc/feat-vec.html

If Blender code is to be vectorised, it should happen in a separate
module: a set of generic functions that do vector addition, matrix
multiplication and so on. They should not operate on single entities
but rather on lists of them, like adding 100 pairs of vectors. Writing
such generic functions, if possible, would even clean up the calling
code, provided they don't impose too many constraints on the data sent
to them.
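
As a rough sketch of what such a batch function could look like (the
names and data layout here are just illustration, not existing Blender
code), a plain ANSI C version of batched vector addition might be:

    #include <stddef.h>

    /* Add 'count' pairs of 3-component float vectors: dst[i] = a[i] + b[i].
     * Working on whole arrays gives a vectorizing compiler (or a later
     * hand-written SSE path) a loop worth optimizing, instead of a
     * single 3-float add per call. */
    void vec3_add_batch_c(float *dst, const float *a,
                          const float *b, size_t count)
    {
        size_t i;
        for (i = 0; i < count * 3; i++) {
            dst[i] = a[i] + b[i];
        }
    }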

Authors who like and understand these functions can use them, and the
others can ignore them. Factoring these kinds of things out of the code
tree once they are recognized as slow or often used is always a nice
idea. I can see it applying to skinning/deformation and the like, not
to the drawing of small GUI components.

The generic functions should be implemented in standard ANSI C, with
optimized SSE/FPU/MMX versions for some platforms where possible. These
functions must somehow be initialized at start-up: not all x86 CPUs
have SSE/MMX etc., so the proper path must be chosen. This can be done
through function pointers or similar.
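
A minimal sketch of that start-up dispatch, building on the batch-add
above (the SSE variant and cpu_has_sse() are only placeholders; a real
cpu_has_sse() would do a CPUID test or ask the OS):

    #include <stddef.h>

    typedef void (*vec3_add_batch_fn)(float *dst, const float *a,
                                      const float *b, size_t count);

    /* Portable ANSI C version, defined in the sketch above. */
    void vec3_add_batch_c(float *dst, const float *a,
                          const float *b, size_t count);

    /* Hand-written SSE version would go here; for the sketch it just
     * reuses the C path so the example stays self-contained. */
    static void vec3_add_batch_sse(float *dst, const float *a,
                                   const float *b, size_t count)
    {
        vec3_add_batch_c(dst, a, b, count);
    }

    /* Placeholder for a real CPUID-based feature test. */
    static int cpu_has_sse(void)
    {
        return 0;
    }

    /* Callers always go through this pointer; it defaults to the safe
     * C path. */
    vec3_add_batch_fn vec3_add_batch = vec3_add_batch_c;

    /* Run once at start-up to pick the fastest path the CPU supports. */
    void vecmath_init(void)
    {
        if (cpu_has_sse())
            vec3_add_batch = vec3_add_batch_sse;
    }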

The big question is whether it's possible to use Blender's data
structures as they are. Alignment issues (SSE needs 16-byte boundaries
for aligned reads, right?) and other problems might make this
impossible. Designing those general functions and writing good
assembler code where needed seems like a huge task.
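
Just to illustrate the alignment concern (again, illustrative names,
not Blender code): an SSE path that uses aligned loads would have to
verify 16-byte alignment of the pointers it is handed, or fall back to
the plain C version:

    #include <stdint.h>

    /* True if p sits on a 16-byte boundary, as aligned SSE loads
     * (movaps) require. */
    static int is_16byte_aligned(const void *p)
    {
        return ((uintptr_t)p & 15u) == 0;
    }

    /* Usage inside a hypothetical wrapper:
     *
     *     if (is_16byte_aligned(dst) && is_16byte_aligned(a) &&
     *         is_16byte_aligned(b))
     *         vec3_add_batch_sse(dst, a, b, count);
     *     else
     *         vec3_add_batch_c(dst, a, b, count);
     */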

Sorry for the lengthy post.

Anders Nilsson