[Bf-committers] Import, research on speed improvements

Howard Trickey howard.trickey at gmail.com
Mon Mar 10 10:34:00 CET 2014


I wrote most of a patch to have Blender export .obj files using C code
inside Blender. It saved a lot of time when exporting big files.  But I got
bogged down in getting all the texture outputs, etc., right.  If someone is
interested in trying to finish it, I could send the patch I had so far.

- Howard


On Mon, Mar 10, 2014 at 3:41 AM, Martijn Berger <martijn.berger at gmail.com> wrote:

> Hi all,
>
> Just a side note:
> You should be careful about comparing Blender's obj importer to something
> awesome that you have written yourself. There is one thing that can be done
> to speed it up greatly, but it breaks entities that are split (started in
> one place and finished elsewhere).
> Blender's obj importer currently needs to build a list and make sure it has
> everything before it can do the actual object import. If you convert this
> to a generator, you save a ton of memory and get a dramatic speedup. All
> "normal" obj files will load, but some exotics won't. Besides being written
> in Python, it is little things like this that make the importer slow. And
> it is little things like this that will eat a couple of orders of magnitude
> of your speedup if you implement an obj importer with exact feature parity.
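>
> A minimal sketch of the generator approach (hypothetical parser, not the
> actual importer code):
>
>     def iter_faces(filepath):
>         # Yield each face as soon as its line is parsed, instead of
>         # collecting everything into a list before importing. This
>         # assumes each face is complete on one line, i.e. no split
>         # entities.
>         with open(filepath) as f:
>             for line in f:
>                 if line.startswith('f '):
>                     yield [int(part.split('/')[0])
>                            for part in line.split()[1:]]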
>
>
>
> On Mon, Mar 10, 2014 at 8:05 AM, Fredrik hansson
> <fredrikhansson_12345 at yahoo.com> wrote:
>
> > You can always do as I did and grab some of the Stanford models; they
> > don't have UVs/normals but work anyway.
> > The xyzrgb_dragon one is 574 MB or so and takes 27 seconds to load for
> > me, or 7 seconds if the file is in the file cache.
> >
> > I can't say how long Blender takes to do the same, as it runs out of
> > memory and crashes after a few minutes (32-bit build).
> > It did manage to do the parsing step, however: 82 seconds.
> > Note that these numbers would be smaller if I were at home; I don't have
> > an SSD here at work.
> >
> > As to Dan's question: what I do here is not thread every line, but
> > instead load, say, 1 MB of the file at a time, split that up into
> > 1..numcores sections of lines, and then parse those and stitch them back
> > together in the original order, since I know what order the chunks are in.
> >
> > The main speedup I found, however, is to just load the entire file, or
> > parts of it, into memory and then parse.
> > Loading the entire thing is, if I'm not mistaken, the fastest, but it
> > uses way too much memory, while if I load 1 MB at a time I actually end
> > up using less memory than the file size (338 MB for the xyzrgb dragon).
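> >
> > A rough Python sketch of that chunking scheme (my loader is C; the names
> > here are made up, and CPython needs processes rather than threads for
> > the parsing itself to run in parallel):
> >
> >     import os
> >     from concurrent.futures import ProcessPoolExecutor
> >
> >     CHUNK = 1024 * 1024  # read ~1 MB of the file at a time
> >
> >     def parse_lines(text):
> >         # Stand-in for the real per-line parser.
> >         return [l for l in text.splitlines() if l.startswith('v ')]
> >
> >     def parse_file(filepath, workers=os.cpu_count()):
> >         results, leftover = [], ''
> >         with open(filepath) as f, ProcessPoolExecutor(workers) as pool:
> >             while True:
> >                 data = f.read(CHUNK)
> >                 block = leftover + data
> >                 if not block:
> >                     break
> >                 if data:
> >                     # Carry any incomplete trailing line into the
> >                     # next chunk.
> >                     block, _, leftover = block.rpartition('\n')
> >                 else:
> >                     leftover = ''
> >                 lines = block.splitlines()
> >                 step = max(1, len(lines) // workers)
> >                 sections = ['\n'.join(lines[i:i + step])
> >                             for i in range(0, len(lines), step)]
> >                 # map() yields results in submission order, so the
> >                 # stitching back into the original order is implicit.
> >                 for parsed in pool.map(parse_lines, sections):
> >                     results.extend(parsed)
> >         return results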
> >
> >
> > // Fredrik
> >
> >
> >
> >
> > On Monday, March 10, 2014 5:50 AM, metalliandy
> > <metalliandy666 at googlemail.com> wrote:
> >
> > I'm sure I can get something to you, mate. Be aware that the files are
> > often 900 MB+ though.
> >
> > I will upload something tomorrow.
> >
> > Cheers,
> >
> > -Andy
> >
> > On 10/03/2014 03:12, Sebastian A. Brachi wrote:
> > > Andy, could you provide a link to an example to do some benchmarks?
> > > I'd prefer to work with real use cases rather than a super-subdivided
> > > Suzanne.
> > >
> > >
> > > On Sun, Mar 9, 2014 at 8:42 PM, metalliandy
> > > <metalliandy666 at googlemail.com> wrote:
> > >
> > >> I have been praying for a faster obj loader for years!
> > >> I use massive objs on a regular basis (up to 900 MB), so this would be
> > >> amazing. IMHO all I/O plugins should be done in C/C++, as Python just
> > >> doesn't have the speed.
> > >> FWIW, Blender takes about 30 minutes to export a 31.5M-poly mesh, while
> > >> ZBrush takes around 2 minutes for the same mesh.
> > >>
> > >> Cheers,
> > >>
> > >> -Andy
> > >> On 09/03/2014 19:21, Fredrik hansson wrote:
> > >>> hi,
> > >>> I recently wrote an obj loader myself, just for fun, and I tried to
> > >>> optimize it quite a bit by making it multithreaded, among other
> > >>> things.
> > >>> I have put it up at https://github.com/FredrikHson/fast_obj_loader
> > >>> Granted, this is a much simpler example than doing everything that
> > >>> Blender does, with pushing the data into Blender's internal
> > >>> structures and all that.
> > >>> Anyway, I tried it against Blender's current importer.
> > >>> Some stats:
> > >>> Blender total: 17.8 sec (8.6 sec parse time)
> > >>>
> > >>> Mine:
> > >>> single-threaded: 0.6 sec
> > >>> multithreaded: 0.26 sec
> > >>>
> > >>> This is on a 36 MB obj file on an SSD drive with 8 threads.
> > >>>
> > >>> So yes, it could probably benefit from being done in C/C++; the
> > >>> question is still whether it's worth the trouble. I personally never
> > >>> import anything much heavier than that 36 MB file, due to slow
> > >>> viewport/manipulation speeds after actually getting the file into
> > >>> Blender, and 18 seconds is a bit annoying, but I don't do it often
> > >>> enough for it to really be much of an issue.
> > >>> What is much worse, IMO, is export speed (I often forget "export
> > >>> selected only") when dealing with dense subsurfed meshes.
> > >>>
> > >>> // Fredrik
> > >>>
> > >>>
> > >>>
> > >>>
> > >>>
> > >>> On Saturday, March 8, 2014 3:30 AM, Sebastian A. Brachi
> > >>> <sebastianbrachi at gmail.com> wrote:
> > >>> Hi Paul,
> > >>> I'm pretty satisfied with the performance of my importers right now,
> > >>> especially compared to MaxScript.
> > >>> It turned out the major bottleneck was the UV data, since I wasn't
> > >>> using foreach_set, which is a much more efficient method than adding
> > >>> UV data in a loop. See
> > >>> http://blenderartists.org/forum/showthread.php?321210-Importing-UV-coordinates
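> > >>>
> > >>> A minimal sketch of the difference (the data layout here is made up;
> > >>> foreach_set itself is the actual Blender API):
> > >>>
> > >>>     import bpy
> > >>>
> > >>>     def set_uvs(mesh, flat_uvs):
> > >>>         # flat_uvs is a flat sequence [u0, v0, u1, v1, ...],
> > >>>         # one pair per mesh loop.
> > >>>         uv_layer = mesh.uv_layers.active or mesh.uv_layers.new()
> > >>>         # One bulk copy on the C side:
> > >>>         uv_layer.data.foreach_set("uv", flat_uvs)
> > >>>         # versus the slow per-element Python loop:
> > >>>         # for i, loop_uv in enumerate(uv_layer.data):
> > >>>         #     loop_uv.uv = (flat_uvs[2 * i], flat_uvs[2 * i + 1])
> > >>>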
> > >>> This is for video game data though, which usually isn't very heavy
> > >>> (although I'm also importing whole levels with hundreds of assets,
> > >>> all in ~40 seconds), and I'm also not using an efficient method for
> > >>> unpacking half-floats (used a lot in video games for UVs).
> > >>>
> > >>> As I said before, I've heard users complain about the .obj importer's
> > >>> performance vs. ZBrush's, for example, on >500 MB data (sculpts
> > >>> mainly).
> > >>> There might be room for improvement there, but consider the topics
> > >>> discussed: would it be good to rewrite it? Is C a better option? Can
> > >>> small tweaks improve performance considerably?
> > >>>
> > >>> Regards
> > >>>
> > >>>
> > >>>
> > >>> On Thu, Mar 6, 2014 at 12:26 PM, Paul Melis
> > >>> <paul.melis at surfsara.nl> wrote:
> > >>>> Hi Sebastian,
> > >>>>
> > >>>> I read your interesting thread on blender import improvements. Did
> you
> > >>>> make any progress on this topic? Would this be something that a
> > >>>> summer-of-code student can work on as well?
> > >>>>
> > >>>> Regards,
> > >>>> Paul
> > >>>>
> > >>>>
> > >>>>
> > >>>> On 01/05/14 23:00, Sebastian A. Brachi wrote:
> > >>>>
> > >>>>> Hi all,
> > >>>>> This is my first time writing to the list; I'd like to start
> > >>>>> learning a lot and hopefully help improve Blender in some areas.
> > >>>>> My main interest right now is Blender's import/export pipeline.
> > >>>>> Currently I'm making an addon to import a lot of different formats
> > >>>>> from different game engines, and also rewriting/improving some
> > >>>>> existing ones like PSK/PSA and MD5.
> > >>>>>
> > >>>>> I'd like to research the best possible ways to import into Blender;
> > >>>>> my first concern, besides code style, is speed on large or medium
> > >>>>> files, and I have a couple of questions. I've been reading
> > >>>>> http://wiki.blender.org/index.php/User:Ideasman42/ImportExport_TODO,
> > >>>>> and the idea of using C/C++ modules is very interesting. Here are
> > >>>>> some observations and questions about importing
> > >>>>> (expect some misconceptions):
> > >>>>>
> > >>>>>
> > >>>>> 1) Python file reading/parsing.
> > >>>>>
> > >>>>>    * Seems fast enough to me for binary data, even when processing
> > >>>>> many GBs. I haven't tested XML/text data though (many users seem to
> > >>>>> complain about the obj importer, but where the main bottlenecks are
> > >>>>> is unknown to me).
> > >>>>>    Also, best practices for reading seem to improve the speed
> > >>>>> (see 2)).
> > >>>>>
> > >>>>>    Q: C + ctypes doesn't seem very good, since we don't want to
> > >>>>> rewrite the reading part in C if it's done in Python, and compile
> > >>>>> DLLs or .so's to ship with the addons, right? But if someone like
> > >>>>> me were to do it, does it seem like the best option, because we can
> > >>>>> still keep the mesh building and other data handling in more
> > >>>>> readable Python?
> > >>>>>
> > >>>>>    Q: Is it worth investigating Cython/PyPy for speed improvements
> > >>>>> here, and were they used in past addons, pre-2.5?
> > >>>>>    I haven't done any tests so far and I'd like to hear opinions
> > >>>>> first; I haven't found more than a couple of threads on the list
> > >>>>> that mention it, and a few on BA, e.g.:
> > >>>>> http://blenderartists.org/forum/showthread.php?278213-TEST-Cython-performances-test
> > >>>>>
> > >>>>>
> > >>>>>
> > >>>>> 2) Python binary data unpacking (struct module).
> > >>>>>
> > >>>>>    * This seems to add a lot of overhead, especially on big files.
> > >>>>> Best practices allow for some speedups (like the use of
> > >>>>> struct.Struct or list.extend).
> > >>>>>    Besides the main best-practices document in the API docs, with a
> > >>>>> few tips on string handling when importing, I couldn't find much
> > >>>>> info.
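> > >>>>>
> > >>>>>    For instance, a minimal sketch of those two practices together
> > >>>>> (made-up file layout):
> > >>>>>
> > >>>>>        import struct
> > >>>>>
> > >>>>>        vec3 = struct.Struct('<3f')  # compiled once, reused per call
> > >>>>>
> > >>>>>        def read_positions(f, count):
> > >>>>>            # Read `count` xyz vertices into a flat [x, y, z, ...]
> > >>>>>            # list; extend() avoids one append() call per element.
> > >>>>>            data = f.read(vec3.size * count)
> > >>>>>            flat = []
> > >>>>>            for offset in range(0, len(data), vec3.size):
> > >>>>>                flat.extend(vec3.unpack_from(data, offset))
> > >>>>>            return flat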
> > >>>>>
> > >>>>>    Q: Is it worth starting/modifying a document of best practices,
> > >>>>> and also adding benchmarks? Whom could I ask to review it?
> > >>>>>
> > >>>>>    * What if Blender could accept/interpret Python bytes objects to
> > >>>>> build geometry, avoiding the unpacking in Python? E.g., reading a
> > >>>>> face index buffer all at once and just passing the count, the type
> > >>>>> (short in most cases), and the bytes object to Blender. The vertex
> > >>>>> buffer case seems more complicated, since many parameters would
> > >>>>> need to be defined, such as the stride, the type of each element,
> > >>>>> whether to ignore vertex normals if included, etc.
> > >>>>>
> > >>>>>    Q: Would this be a reasonable option to investigate, or even
> > >>>>> possible to do?
> > >>>>>
> > >>>>>    * Another bottleneck is when binary data uses half-floats (used
> > >>>>> extensively in game formats for UV data).
> > >>>>>    The struct module doesn't have support for them (there is a
> > >>>>> patch, though: http://bugs.python.org/issue11734), so a Python
> > >>>>> function must be used instead. I'm using the one from
> > >>>>> http://davidejones.com/blog/1413-python-precision-floating-point/
> > >>>>> which is pretty slow.
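> > >>>>>
> > >>>>>    For reference, a straightforward pure-Python decoder (a sketch,
> > >>>>> not necessarily the fastest way; NumPy's float16 or C would beat
> > >>>>> it):
> > >>>>>
> > >>>>>        def half_to_float(h):
> > >>>>>            # Decode a 16-bit IEEE 754 half-float, given as an int:
> > >>>>>            # 1 sign bit, 5 exponent bits (bias 15), 10 mantissa
> > >>>>>            # bits.
> > >>>>>            sign = -1.0 if h & 0x8000 else 1.0
> > >>>>>            exponent = (h >> 10) & 0x1F
> > >>>>>            mantissa = h & 0x3FF
> > >>>>>            if exponent == 0:     # zero / subnormal
> > >>>>>                return sign * mantissa * 2.0 ** -24
> > >>>>>            if exponent == 0x1F:  # infinity / NaN
> > >>>>>                return (sign * float('inf')) if mantissa == 0 \
> > >>>>>                    else float('nan')
> > >>>>>            return sign * (1.0 + mantissa / 1024.0) \
> > >>>>>                * 2.0 ** (exponent - 15)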
> > >>>>>
> > >>>>>    Q: Is the Python function optimal? I couldn't find better ones.
> > >>>>>    Q: If not, is it feasible to do one of the following?
> > >>>>>       a) Apply the patch mentioned above to Blender's bundled Python.
> > >>>>>       b) Create the function in C and expose it to the Python API.
> > >>>>>    Q: If b) is the best option, do you think it is an OK task for
> > >>>>> me, as a first approach to Blender's C code? (I have not much
> > >>>>> experience in C beyond first-year college.)
> > >>>>>
> > >>>>>
> > >>>>> 3) Python data to Blender data (converting lists to C arrays with
> > >>>>> mesh.vertices.add, mesh.polygons.add, etc.):
> > >>>>>
> > >>>>>    * I've been doing a lot of tests but not much digging into the C
> > >>>>> code; I don't seem to understand the process very well.
> > >>>>>    Using ctypes arrays or array.array doesn't seem to give any
> > >>>>> performance improvement, either when reading the file (it's
> > >>>>> actually a little slower) or when building the mesh.
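> > >>>>>
> > >>>>>    For context, the bulk path I'm referring to looks roughly like
> > >>>>> this (a sketch with made-up input names; the add/foreach_set calls
> > >>>>> are the actual mesh API as of this writing):
> > >>>>>
> > >>>>>        import bpy
> > >>>>>
> > >>>>>        def build_mesh(name, flat_coords, loop_indices, loop_totals):
> > >>>>>            # flat_coords:  [x0, y0, z0, x1, ...]
> > >>>>>            # loop_indices: one vertex index per face corner
> > >>>>>            # loop_totals:  number of corners per polygon
> > >>>>>            mesh = bpy.data.meshes.new(name)
> > >>>>>            mesh.vertices.add(len(flat_coords) // 3)
> > >>>>>            mesh.vertices.foreach_set("co", flat_coords)
> > >>>>>            mesh.loops.add(len(loop_indices))
> > >>>>>            mesh.loops.foreach_set("vertex_index", loop_indices)
> > >>>>>            loop_starts, start = [], 0
> > >>>>>            for n in loop_totals:
> > >>>>>                loop_starts.append(start)
> > >>>>>                start += n
> > >>>>>            mesh.polygons.add(len(loop_totals))
> > >>>>>            mesh.polygons.foreach_set("loop_start", loop_starts)
> > >>>>>            mesh.polygons.foreach_set("loop_total", loop_totals)
> > >>>>>            mesh.update(calc_edges=True)
> > >>>>>            return mesh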
> > >>>>>
> > >>>>>    Q: When using Python data to import geometry, the Python objects
> > >>>>> need to be converted to C arrays, and that's the overhead, right?
> > >>>>>    Q: When using C-like objects in Python, such as a ctypes array
> > >>>>> or array.array, they still need to be converted to C arrays, so is
> > >>>>> that why performance is not improved?
> > >>>>>    Q: What could be done to avoid the conversion without using C?
> > >>>>> Something like passing a pointer to a ctypes array instead?
> > >>>>>
> > >>>>>
> > >>>>> Thanks!
> > >>>>> Regards

