[Bf-committers] Automated testing

Bastien Montagne montagne29 at wanadoo.fr
Wed Mar 26 20:13:49 CET 2014


Well, writing tests is not the most fun part of the work, but I agree we 
should have more, especially since we saw again with 2.70 that getting a 
large number of users to test even the RCs is not easy.

And if this means writing more Python wrappers around pieces of 
Blender's code, in the end it's also a positive, since it gives 
scripters more tools as a side effect (writing tests in a simple, 
dynamic language like Python is much nicer).
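
For example, a test of our C math code can stay entirely at the Python 
level, since `mathutils` wraps the C functions underneath. A minimal 
sketch (to be run with `blender --background --python <file>`):

    import math
    import unittest

    from mathutils import Matrix, Vector

    class TestMathWrappers(unittest.TestCase):
        def test_rotation(self):
            # rotating +X by 90 degrees around Z should give +Y
            mat = Matrix.Rotation(math.radians(90.0), 3, 'Z')
            vec = mat * Vector((1.0, 0.0, 0.0))
            self.assertAlmostEqual(vec.x, 0.0)
            self.assertAlmostEqual(vec.y, 1.0)
            self.assertAlmostEqual(vec.z, 0.0)

    if __name__ == "__main__":
        # strip Blender's own argv so unittest doesn't try to parse it
        unittest.main(argv=[__file__])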

And I have actually already joined the team, by starting to work on UI 
units handling! ;)

Cheers,
Bastien

On 26/03/2014 20:02, Campbell Barton wrote:
> On Thu, Mar 27, 2014 at 2:31 AM, Reed Law <reedlaw at gmail.com> wrote:
>> I was curious about the state of Blender's source code, so I checked it
>> out, and after looking around I was surprised not to find automated
>> tests for much of the C/C++ code. Then I came across this FAQ item
>> <http://wiki.blender.org/index.php/Dev%3aDoc/FAQ#Why_don.27t_you_run_automated_tests.3F>
>> that spells out the reasons for not running automated tests. None of
>> the "reasons" hold water (and they should be called "excuses").
>>
>>     - Tests that require user interaction are called "integration" or
>>     "end-to-end" tests. If all of the underlying units are well-tested and the
>>     UI is just a thin layer then this type of testing can be kept to a minimum.
>>     - Tests that take too long to run indicate tightly coupled code.
>>     - Tests taking too much effort to maintain is an illusion. I would bet
>>     *not* having tests requires much more effort and leads to the problem
>>     described in The Mythical Man-Month
>>     <https://en.wikipedia.org/wiki/The_Mythical_Man-Month>:
>>
>>        - The fundamental problem with program maintenance is that fixing a
>>        defect has a substantial (20-50 percent) chance of introducing
>>        another. So the whole process is two steps forward and one step back.
>>     - Tests covering areas that hardly ever change is a *good* thing. They
>>     can help a new contributor understand what the code is doing by showing
>>     examples of the interface for those classes.
>>     - Having tests for areas that are immediately noticeable when broken is
>>     also good because it saves that extra step of noticing the breakage, or of
>>     getting a bug report from someone else.
>>     - Tests that break but are not bugs should be updated to reflect the new
>>     features/interface.
>>     - Tests that break but are not a problem for the end user should also be
>>     refactored. What is the use case being tested in that case?
>>     - Although many Blender developers are volunteers, tests can be enforced
>>     by only merging pull requests that have good test coverage and pass the
>>     automated test suite run on a continuous integration server for each commit.
>>
>> I realize that the solution to these problems is found in the call for help
>> at the bottom of the FAQ item. But there is only so much a single
>> contributor can do, especially in light of a long history of untested code.
>> Perhaps setting up a continuous integration server with a service such as
>> Travis CI would be a first step.
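>>
>> For example, a first cut at a Travis configuration could be as small as
>> this (a rough sketch only; it assumes a CMake build with tests wired into
>> CTest, and Blender's real dependency list is much longer and elided here):
>>
>>     # .travis.yml -- minimal sketch, not a working Blender build
>>     language: cpp
>>     compiler: gcc
>>     before_install:
>>       - sudo apt-get update -qq
>>       - sudo apt-get install -qq cmake  # plus Blender's many dependencies
>>     script:
>>       - mkdir build && cd build
>>       - cmake ..
>>       - make
>>       - ctest --output-on-failure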
> Hi Reed,
>
> All valid points, and I'd dearly like to have better automated testing
> in Blender. Feel free to edit the wiki doc (yes, these points are
> 'excuses'); I wrote up that section, as well as most of the tests in
> `source/tests`.
>
> In fact I planned to work on this more, but haven't gotten around to it yet.
>
> ----
>
>
> Way to begin?
> ======
>
> How we could kick things off...
>
> - Define areas we know we can test usefully (math API yes,
> sculpting... maybe not...).
>
> - Set up tests for one of these modules (e.g. `bmesh`, `modifiers`,
> `constraints`, `unit-conversion`) -- see the bmesh sketch below.
>
> - Make sure tests are fairly comprehensive / useful.
>
> - Document the process (in the developer wiki).
>
> - Currently some of the tests break a lot and are hard to maintain;
> until we can get them all working reliably, define a smaller set of
> tests which we CAN maintain, so we get used to tests passing and pay
> attention immediately if they fail (as we do now for compiler
> warnings).
>
> Rinse and repeat.
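>
> For the `bmesh` case, a first test could be as simple as this (a rough
> sketch, run in background mode via `blender --background --python <file>`):
>
>     import unittest
>
>     import bmesh
>
>     class TestBMeshOps(unittest.TestCase):
>         def test_create_cube(self):
>             # a cube primitive should have the expected topology
>             bm = bmesh.new()
>             bmesh.ops.create_cube(bm, size=2.0)
>             self.assertEqual(len(bm.verts), 8)
>             self.assertEqual(len(bm.edges), 12)
>             self.assertEqual(len(bm.faces), 6)
>             bm.free()
>
>     if __name__ == "__main__":
>         # strip Blender's own argv so unittest doesn't try to parse it
>         unittest.main(argv=[__file__])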
>
> ----
>
>
> Background info
> ======
>
> Recently, with the suggested changes to units conversion, I've
> requested that we have tests in place before making larger changes:
> https://developer.blender.org/D340#comment-3
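>
> To give the flavour, such tests would pin the parsing behaviour down
> with checks along these lines (`units_to_value` is a hypothetical
> helper named purely for illustration, not an existing function):
>
>     # hypothetical helper, for illustration only: evaluates a unit
>     # string and returns the result in Blender units (meters)
>     assert units_to_value("1m 50cm", system='METRIC') == 1.5
>     assert units_to_value("1ft", system='IMPERIAL') == 0.3048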
>
> ... with the new `mathutils.kdtree` there are tests in
> `source/tests/bl_pyapi_mathutils.py`, and they helped find a few bugs
> while reviewing the patch.
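>
> The kdtree checks boil down to something like this (a trimmed-down
> sketch, not the actual contents of the test file):
>
>     from mathutils.kdtree import KDTree
>
>     points = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 2.0, 0.0)]
>     kd = KDTree(len(points))
>     for i, co in enumerate(points):
>         kd.insert(co, i)
>     kd.balance()
>
>     # the nearest point to (0.1, 0, 0) should be the first one
>     co, index, dist = kd.find((0.1, 0.0, 0.0))
>     assert index == 0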
>
> Regarding your comment """But there is only so much a single
> contributor can do""",
> ... that's probably how anyone attempting testing on an old code-base
> feels, but I don't think it helps much.
>
> As far as I can see, it's best if we can get tests working in some
> limited way, then increase coverage over time.
>
> Later on when the basics are working we can check on more advanced
> testing techniques.
>
> ----
>
>
> More organization?
> ========
>
> It could help to have a testing team: a group of people who set
> themselves some goals, assign people to fix existing tests, and
> arrange for new areas to be tested.
>
> I'm not totally convinced this would work - developers should be
> writing tests anyway, right? :) But until we have that, it could help
> to put some direction in place, and also ensure that the people
> involved with testing have others to collaborate with, so nobody has
> to attempt to set up testing for a huge codebase alone.
>
> Anyone interested in being a part of this?
>
> ----
>
> Regards
> - Campbell
> _______________________________________________
> Bf-committers mailing list
> Bf-committers at blender.org
> http://lists.blender.org/mailman/listinfo/bf-committers
>


