[Bf-python] bpy doctest proof of concept

Toni Alatalo antont at kyperjokki.fi
Thu Sep 27 20:02:52 CEST 2007


For quite a while I've been thinking of ways to improve regression testing of 
Blender (and test driven development in e.g. API design, but that's a 
different topic). I think I even posted here about doctests a while ago, and 
have been talking about them on IRC too. The double role of being example 
docs and runnable tests seems just great.

Well, now I finally got to actually try making them for Blender, and running 
them in Blender, and it just works! Here is one of the bpy documentation 
examples ported to a working doctest:

--- http://an.org/blender/bpy.doctest ---

This is a doctest for the bpy module. bpy is a new py api for Blender,
see http://members.optusnet.com.au/cjbarton/BPY_API/Bpy-module.html

A port of the bpy data access example, 'make a new object from an
existing mesh and make it active'. This assumes the default blend.

    >>> import bpy
    >>> scn = bpy.data.scenes.active
    >>> me = bpy.data.meshes['Cube']
    >>> me.name
    'Cube'
    >>> ob = scn.objects.new(me) #new object from the mesh
    >>> scn.objects.active = ob
    >>> scn.objects.active.data.name
    'Cube'

--- end ---

Btw, when making that I discovered a bug in the example: the first line 
has 'scn' for the scene reference and later it is 'sce' ;p

The runner to be executed within Blender looks like this (doctest is 
included in the Python 2.5 standard library, so most of you probably have it 
installed already):

--- http://an.org/blender/testrunner.py ---
import doctest

testfp = "/home/antont/bf-blender/trunk/blender/source/blender/python/api2_2x/doc/bpy.doctest"

print "=== Beginning running bpy doctests ==="
# testfile() returns (number of failures, number of examples attempted)
failures, tests = doctest.testfile(testfp, module_relative=False)
print "Got %d failures out of %d tests - see details in the output above." \
    % (failures, tests)
print "=== Done with bpy doctests ==="

--- end ---
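(To run it, the script can be loaded into a Text window in Blender and 
executed with Alt+P; something like 'blender -b -P testrunner.py' should also 
work for unattended runs, but I have not tried that - more on that idea at 
the end.)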

How the output shows up in the console, and how the doctest runner points out 
mistakes, is left as an exercise for the reader :) The doctest mode in Emacs, 
bundled in the python-mode package at least on Ubuntu (I bet Debian too), is 
nice for editing doctest files.

So could bf-python actually start using these? Seems possible and even 
worthwhile to me. Or?

Certainly the unique environment of Blender poses problems and challenges for 
both running and authoring tests this way. One thing is that the console in 
Blender is different and does not currently produce output in the form the 
normal doctest parser expects. But authoring in Emacs and running in Blender 
worked quite well for me this time.

One interesting possibility here is IPython - there is interest there in 
supporting doctests, and perhaps someone will come up with a clever way to 
author doctests in it (some nice way to store execution results to doctest 
files) - perhaps within / in connection with the interactive Python notebook 
project I was involved with earlier. And as IPython is written in Python, and 
has already been integrated with many GUI toolkits like Qt, GTK and wx, it 
might work inside Blender too (like the current console script). One other 
good thing about IPython for Blender is that it supports the 'autocad-like' 
syntax of writing 'rotate x 90' instead of the typical 'rotate(x, 90)'.
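To give an idea of what I mean by storing execution results, here is a very 
rough sketch - nothing like this exists yet, the helper name and behaviour 
are made up - of turning evaluated lines into doctest format:

--- sketch: record_doctest.py ---
# rough sketch only: evaluate expressions in a namespace and write them
# out in doctest format, so an interactive session could be captured as
# a .doctest file (the helper name and behaviour are made up)
def record_doctest(lines, namespace, outpath):
    out = open(outpath, 'w')
    for line in lines:
        out.write('    >>> %s\n' % line)
        result = eval(line, namespace)  # statements would need exec instead
        if result is not None:
            out.write('    %r\n' % (result,))
    out.close()

# e.g. record_doctest(["1 + 1", "2 * 3"], {}, '/tmp/session.doctest')
--- end ---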

Intriguing issues come up with respect to data in Blender. I quickly accepted 
that it is perfectly ok for some of the tests to depend on a certain blend - 
like in that example, the default blend where there is a Cube. Of course they 
can also be written so that they work from scratch, as the API supports 
creating many things (I guess even images nowadays .. I was considering 
porting the first example, which assigns images, but decided not to start 
there).
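For example, a self-contained variant of the example above could create its 
own datablock instead of depending on the default Cube - a sketch, assuming 
bpy.data.meshes.new() takes a name like the other libdata new() calls (I have 
not run this one):

    >>> import bpy
    >>> me = bpy.data.meshes.new('doctestmesh')  # assuming new() takes a name
    >>> me.name
    'doctestmesh'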

But the strange area is when thinking about what doctests are and what blends 
are: in a doctest there is 1) an operation and 2) a reference result. Running 
the test checks whether doing 1) leads to 2) or not. In a normal Python 
command line, and in many kinds of programming, the results are things with 
descriptive text representations. But in Blender they can be e.g. 3D 
geometry. So would it make sense to think of e.g. a Blender mesh datablock as 
the result? Could we construct blendtests so that operations are mapped to 3D 
data, instead of lines of text? Would there be any benefit in that? A test 
blend could have reference data against which test results would be compared. 
Well, that comparison would require analysing e.g. the geometry numerically 
anyway, so perhaps there is no benefit and that line of thought makes no 
sense. Or does it .. it is easier to have a mesh in a blend to compare with 
than to store the long lists of vertices as Python lists in the test set.

One area where I was thinking of using reference data for testing is the 
armature and constraints etc. animation system: a test blend would have the 
complete rig, and IPOs describing how certain points in it move in the 
reference animation. Then a script would move bones / use whatever 
manipulators to simulate an animator, and the resulting positions would be 
compared with the data that was recorded to the test set using a Blender in 
which the rig worked. But I guess that's again another topic and not really 
about doctests.
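For what it's worth, a rough sketch of the kind of numerical comparison 
mentioned above - assuming both meshes expose a verts sequence with .co 
vectors, and using made-up datablock names:

--- sketch: compare_meshes.py ---
# rough sketch only: compare the geometry of a result mesh against a
# reference mesh stored in the test blend, within a tolerance
# (datablock names and the tolerance are made up for illustration)
import bpy

def meshes_match(result_name, reference_name, tolerance=0.0001):
    result = bpy.data.meshes[result_name]
    reference = bpy.data.meshes[reference_name]
    if len(result.verts) != len(reference.verts):
        return False
    for v1, v2 in zip(result.verts, reference.verts):
        for i in range(3):
            if abs(v1.co[i] - v2.co[i]) > tolerance:
                return False
    return True

# in a doctest this could then be just:
#     >>> meshes_match('TestResult', 'TestReference')
#     True
--- end ---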

So, doctests work in Blender, I think they are nice, and the apidocs already 
have a wealth of examples that can be ported nicely. It would be good if 
those examples were guaranteed to work and stay up-to-date, and it would be 
good to have tests for the API (which of course test the Blender core too). 
Of course a complete regression suite is different from a set of meaningful 
examples, but the doctest community already has guidelines and tools for 
dealing with that.
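For example, doctest files can already be collected into a normal unittest 
suite with the standard library - a minimal sketch, with placeholder file 
names:

--- sketch: run_all_doctests.py ---
import doctest
import unittest

# collect several doctest files into one unittest suite
# (the file names here are just placeholders)
suite = doctest.DocFileSuite('bpy.doctest', 'mesh.doctest',
                             module_relative=False)
unittest.TextTestRunner(verbosity=2).run(suite)
--- end ---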
 
I don't know when / if I'll have time to pursue this more, so if anyone 
thinks this is worthwhile, please feel free to take it forward. Of course one 
thing would also be finding a way to actually produce documentation from 
these; I don't know if any doctest / epydoc integration has been done. I like 
the idea of completing a full cycle early on, so I would like to see a 
machine that builds Blender nightly (or after every commit) and reports the 
test results. Also I find this a nice way to learn the new bpy API. So I 
might make time for this, but don't know if I can / will, or when. I am 
actively using test driven development in other projects daily, though, and 
am kind of consulting some colleagues about it, and willing to learn more.
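As a sketch of what such a nightly machine could do - assuming 
'blender -b -P testrunner.py' works headless (a test .blend might need to be 
passed on the command line too), and grepping for the summary line the runner 
above prints:

--- sketch: nightly.py ---
# rough sketch only: run Blender without the UI, execute the doctest
# runner, and fail loudly if the summary line reports failures
# (the exact command line and the check are assumptions, not tested)
import subprocess

proc = subprocess.Popen(['blender', '-b', '-P', 'testrunner.py'],
                        stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
output = proc.communicate()[0]
print output
if 'Got 0 failures' not in output:
    raise SystemExit('bpy doctests failed - see output above')
--- end ---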

cheers,
~Toni



