[Bf-committers] Automated testing

Reed Law reedlaw at gmail.com
Wed Mar 26 16:31:43 CET 2014


I was curious about the state of Blender's source code, so I checked it out
and, after looking around, was surprised not to find automated tests for
much of the C/C++ code. Then I came across this FAQ item
<http://wiki.blender.org/index.php/Dev%3aDoc/FAQ#Why_don.27t_you_run_automated_tests.3F>
that spells out the reasons for not running automated tests. None of the
"reasons" hold water (they would be better called "excuses").

   - Tests that require user interaction are called "integration" or
   "end-to-end" tests. If all of the underlying units are well tested and
   the UI is just a thin layer, then this type of testing can be kept to a
   minimum.
   - Tests that take too long to run indicate tightly coupled code.
   - The claim that tests take too much effort to maintain is an illusion. I
   would bet *not* having tests requires much more effort and leads to the
   problem described in The Mythical Man-Month
   <https://en.wikipedia.org/wiki/The_Mythical_Man-Month>:

      - The fundamental problem with program maintenance is that fixing a
      defect has a substantial (20-50 percent) chance of introducing
      another. So the whole process is two steps forward and one step back.
   - Tests covering areas that hardly ever change are a *good* thing. They
   can help a new contributor understand what the code is doing by showing
   examples of how those classes are meant to be used (see the sketch after
   this list).
   - Having tests for areas that are immediately noticeable when broken is
   also good, because it saves the extra step of noticing the breakage
   yourself or getting a bug report from someone else.
   - Tests that break without indicating a bug should be updated to reflect
   the new feature or interface.
   - Tests that break but are not a problem for the end user should also be
   refactored; otherwise, what use case are they actually testing?
   - Although many Blender developers are volunteers, tests can be enforced
   by merging only pull requests that have good test coverage and that pass
   the automated test suite run on a continuous integration server for each
   commit.

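To make the documentation point above concrete, here is a minimal sketch of
what such a test could look like, assuming a framework like Google Test. The
vec3_length helper, its body, and the test names are invented for
illustration and are not actual Blender code; in a real setup the function
under test would live in the library and the test binary would link against
gtest_main.

    #include <cmath>
    #include "gtest/gtest.h"

    /* Hypothetical helper standing in for a library function under test. */
    static float vec3_length(const float v[3])
    {
      return sqrtf(v[0] * v[0] + v[1] * v[1] + v[2] * v[2]);
    }

    /* Each case doubles as documentation of the expected behaviour. */
    TEST(VectorMath, UnitAxisHasLengthOne)
    {
      const float x_axis[3] = {1.0f, 0.0f, 0.0f};
      EXPECT_FLOAT_EQ(1.0f, vec3_length(x_axis));
    }

    TEST(VectorMath, ZeroVectorHasLengthZero)
    {
      const float zero[3] = {0.0f, 0.0f, 0.0f};
      EXPECT_FLOAT_EQ(0.0f, vec3_length(zero));
    }
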
I realize that the solution to these problems is found in the call for help
at the bottom of the FAQ item. But there is only so much a single
contributor can do, especially in light of a long history of untested code.
Perhaps setting up a continuous integration server with a service such as
Travis CI would be a first step.
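
As a rough sketch only (not a tested configuration), a Travis CI setup for a
CMake-based C++ project amounts to a small .travis.yml along these lines;
the package names and build commands are assumptions that would need to be
adapted to Blender's actual build:

    language: cpp
    compiler:
      - gcc
    before_install:
      - sudo apt-get update -qq
      - sudo apt-get install -qq cmake
    script:
      - mkdir build && cd build
      - cmake ..
      - make
      - ctest --output-on-failure

The ctest step assumes the tests are registered with CMake; any test runner
that can be invoked from the command line would do just as well.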

