[Bf-committers] Import, research on speed improvements

Sebastian A. Brachi sebastianbrachi at gmail.com
Sun Jan 5 23:00:34 CET 2014

Hi all,
this is my first time writing to the list; I'd like to start learning a
lot and hopefully help improve Blender in some areas.
My main interest right now is Blender's import/export pipeline.
Currently I'm making an addon to import a lot of different formats from
different game engines and also rewriting/improving some existing ones like
PSK/PSA and MD5.

I'd like to research the best possible ways to import into Blender; my
first concern besides code style is speed with medium and large files, and
I have a couple of questions. I've been reading
http://wiki.blender.org/index.php/User:Ideasman42/ImportExport_TODO , and
the idea of using C/C++ modules is very interesting. Here are some
observations and questions about importing

(expect some misconceptions):

1) Python file reading/parsing.

 * Seems fast enough to me for binary data, even when processing many GBs.
I haven't tested XML/text data though (many users seem to complain about
the OBJ importer, but where its main bottlenecks are is unknown to me).
  Also, following best practices for reading seems to improve speed (see 2)).
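To make the "best practices" point concrete, here is a minimal sketch (my own, not taken from any existing importer) of the read-once-then-slice pattern that usually beats issuing many small f.read() calls on big binary files:

```python
def load_chunks(path, chunk_size):
    # Read the whole file in one pass, then slice it with zero-copy
    # memoryviews instead of issuing thousands of small f.read() calls.
    with open(path, "rb") as f:
        data = f.read()
    view = memoryview(data)
    return [view[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
```

Each returned chunk is a view into the single buffer, so no per-chunk copying happens until a chunk is actually converted (e.g. with bytes() or struct unpacking).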

 Q: C + ctypes doesn't seem very good, since we don't want to rewrite the
reading part in C if it's already done in Python, and ship compiled
DLLs/SOs with the addons, right? But if someone like me did it anyway, is
it the best option, because we could still keep the mesh building and
other data handling in more readable Python?
 Q: Is it worth investigating Cython/PyPy for speed improvements here, and
were they used in any pre-2.5 addons?
    I haven't done any tests so far and I'd like to hear opinions first;
I haven't found more than a couple of threads on the list that mention it,
and a few on BA.

2) Python binary data unpack (struct module).

  * This seems to add a lot of overhead, especially with big files. Best
practices allow for some speedups (like pre-compiling formats with
struct.Struct).
  Besides the main best-practices document in the API docs, which has a few
tips on string handling when importing, I couldn't find much info.
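To illustrate the struct.Struct point, a small sketch of compiling the format once (the names and the 3-float vertex layout are my own assumptions):

```python
import struct

# Compile the format once instead of re-parsing the format string per call.
VERTEX = struct.Struct("<3f")  # three little-endian 32-bit floats

def read_vertices(data, count):
    # unpack_from reads at an offset, avoiding a bytes slice per vertex.
    return [VERTEX.unpack_from(data, i * VERTEX.size) for i in range(count)]
```

For example, `read_vertices(struct.pack("<6f", 0, 1, 2, 3, 4, 5), 2)` yields the two tuples `(0.0, 1.0, 2.0)` and `(3.0, 4.0, 5.0)`.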

  Q: Is it worth starting/modifying a document of best practices, and also
adding benchmarks? Whom could I ask to review it?

  * What if Blender could accept/interpret Python bytes objects to build
geometry, avoiding the unpacking in Python? E.g., reading a face index
buffer all at once and just passing the count, the element type (short in
most cases), and the bytes object to Blender. The vertex buffer case seems
more complicated, since many parameters need to be defined, such as the
stride, the type of each element, whether to ignore vertex normals if
included, etc.
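A partial version of this already works today: array.array's frombytes() reinterprets a raw index buffer in bulk, without a per-element struct call in Python. A sketch, assuming a little-endian file format:

```python
import array
import struct
import sys

# Stand-in for an index buffer read straight from a file: two triangles
# as little-endian unsigned shorts.
raw = struct.pack("<6H", 0, 1, 2, 2, 1, 3)

indices = array.array("H")      # native unsigned short
indices.frombytes(raw)          # bulk copy, no Python-level loop
if sys.byteorder != "little":
    indices.byteswap()          # fix up on big-endian hosts

# indices supports the buffer protocol, so a C-side consumer could read
# the memory directly given just the count and element type.
```

This only sidesteps the unpacking cost; whether Blender's side could consume such a buffer directly is exactly the question below.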

  Q: Would this be a reasonable option to investigate, or even possible to
implement?
  * Another bottleneck is when binary data uses half-floats (used
extensively in game formats for UV data).
    The struct module doesn't have support for them (there is a patch
though: http://bugs.python.org/issue11734), so a python
    function must be used instead. I'm using this one:
http://davidejones.com/blog/1413-python-precision-floating-point/ which is
pretty slow.
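For comparison, the conversion can also be written directly from the IEEE 754 half-precision bit layout; this is my own sketch, not necessarily faster than the linked one:

```python
def half_to_float(h):
    # h is the raw 16-bit pattern: 1 sign bit, 5 exponent bits, 10 mantissa bits.
    sign = -1.0 if (h >> 15) & 1 else 1.0
    exponent = (h >> 10) & 0x1F
    mantissa = h & 0x3FF
    if exponent == 0:
        # Zero or subnormal: no implicit leading 1, fixed 2**-24 scale.
        return sign * mantissa * 2.0 ** -24
    if exponent == 0x1F:
        # All-ones exponent encodes infinity (mantissa 0) or NaN.
        return sign * (float("nan") if mantissa else float("inf"))
    # Normal number: implicit leading 1, exponent bias of 15.
    return sign * (1.0 + mantissa / 1024.0) * 2.0 ** (exponent - 15)
```

For example, `half_to_float(0x3C00)` gives 1.0 and `half_to_float(0xC000)` gives -2.0.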

  Q: Is that Python function optimal? I couldn't find better ones.
  Q: If not, is it feasible to do one of the following?
     a) Apply the patch mentioned above to Blender's bundled Python
     b) Implement the function in C and expose it in the Python API
  Q: If b) is the best option, do you think it's an OK task for me as a
first approach to Blender's C code? (I don't have much experience in C
beyond first-year college courses.)

3) Python data to blender data (converting lists to C arrays with
mesh.vertices.add, mesh.polygons.add, etc):

  * I've been doing a lot of tests but not much digging into the C code,
and I don't seem to understand the process very well.
   Using ctypes arrays or array.array doesn't seem to give any performance
improvement when reading the file (it's actually a little slower), nor
when building the mesh.
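For context on what I'm timing, the pattern in question is the flat-sequence foreach_set() API (the API calls are real; the surrounding variable names are made up):

```python
import bpy

# verts is a flat [x0, y0, z0, x1, y1, z1, ...] sequence, e.g. a list,
# a ctypes array, or an array.array("f", ...).
mesh = bpy.data.meshes.new("imported")
mesh.vertices.add(len(verts) // 3)
mesh.vertices.foreach_set("co", verts)
```

(This only runs inside Blender's bundled interpreter, since it needs bpy.)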

  Q: When using Python data to import geometry, the Python objects need to
be converted to C arrays, and that's the overhead, right?
  Q: When using C-like objects in Python, such as a ctypes array or
array.array, do they still need to be converted to C arrays, and is that
why performance doesn't improve?
  Q: What could be done to avoid the conversion without using C? Something
like passing a pointer to a ctypes array instead?
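On the pointer idea: the zero-copy handle is already obtainable from pure Python; a sketch of what a hypothetical C entry point could receive:

```python
import array
import ctypes

verts = array.array("f", [0.0, 1.0, 2.0, 3.0, 4.0, 5.0])

# buffer_info() yields the address and element count of the underlying
# C array; no copy is made.
addr, count = verts.buffer_info()
ptr = ctypes.cast(addr, ctypes.POINTER(ctypes.c_float))

# A hypothetical C-side importer could now read ptr[0] .. ptr[count - 1]
# directly, given only the address, count, and element type.
```

The open question is whether Blender's RNA layer could be taught to accept such a pointer (or any buffer-protocol object) instead of iterating a Python sequence.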

