[Bf-committers] Automated testing

Campbell Barton ideasman42 at gmail.com
Thu Mar 27 22:22:36 CET 2014


On Fri, Mar 28, 2014 at 3:14 AM, Reed Law <reedlaw at gmail.com> wrote:
> On 26/03/2014, Bastien Montagne wrote:
>> Well, writing tests is not the funniest part of the work, but I agree we
>> should have more, esp. since we saw again with 2.70 that getting a large
>> amount of users testing even RC's is not an easy thing.
>>
>> And if this means writing more py wrappers around Blender's pieces of
>> code, it's also positive in the end, since as a side effect it gives
>> scripters more tools (writing tests in a simple, dynamic language like
>> Python is much better).
>>
>> And I actually already joined the team by starting work over UI units
>> handling! ;)
>>
>> Cheers,
>> Bastien
>
> Test-driven development (TDD) is more about design than verification. Tools
> like KLEE might provide some utility, but one of the chief goals in writing
> tests is to create a tight feedback loop between design and implementation.
> First, when you start by writing a test, you're forced to consider
> the external interface. You will consider questions such as: how will the
> application use this class or call this function? The second benefit is
> that you don't have to rely on a lot of "manual" prototyping. By that I
> mean creating some throwaway code or using a REPL to test your design
> manually. Your failing test will provide an automated way to see if the
> design is working. Once the test is passing you can then freely refactor
> your code without fear of breaking it.
>
> I used TDD to create a ray tracer <https://bitbucket.org/reedlaw/rays> with
> full test coverage. I don't know how I could have done it without writing
> tests upfront. I used the GTest framework to write tests in C++. I'm not
> sure how Blender's Python tests are set up, but it seems to me that
> writing tests first will be hard to do that way. Don't they
> depend on an interface being present in order to run? In TDD, you only want
> to write the bare minimum to get a test to execute before beginning to
> implement production code. That usually means just empty functions.
>
> I think a good strategy would be to begin requiring test coverage for new
> code and gradually increasing test coverage of old code. The main benefit
> of writing tests for untested code is to aid in refactoring.
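The stub-first cycle described above can be sketched with Python's built-in
unittest module (Python being the language Blender already exposes to
scripters). The `clamp` function here is a made-up example, not part of
Blender's API; the point is only the order of operations: the test and an
empty stub come first, and the real body is written afterwards to make the
test pass.

```python
import unittest

# Step 1 (written first, alongside the test): the bare minimum so the
# test can execute -- an empty stub that makes the test fail ("red").
#
# def clamp(value, lo, hi):
#     pass

# Step 2: just enough production code to make the test pass ("green").
# After this, the body can be freely refactored without fear of
# breaking it, because the test guards the behaviour.
def clamp(value, lo, hi):
    """Constrain value to the inclusive range [lo, hi]."""
    return max(lo, min(value, hi))

# Step 0 (written before any implementation): the test pins down the
# external interface -- how callers will actually use the function.
class TestClamp(unittest.TestCase):
    def test_within_range(self):
        self.assertEqual(clamp(5, 0, 10), 5)

    def test_below_range(self):
        self.assertEqual(clamp(-3, 0, 10), 0)

    def test_above_range(self):
        self.assertEqual(clamp(42, 0, 10), 10)

if __name__ == "__main__":
    unittest.main()
```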

This could work in some areas, but I think it relies on us having
testing in place to begin with.

If someone writes new code, they shouldn't need to set up a new testing
framework. This is something I'd rather we do as a project: first find a
good way to test C/C++ code (building, where to put stubs, which testing
system to use, etc.).

-- 
- Campbell

