[Bf-committers] Automated testing

W. Nelson indigitalspace at yahoo.com
Thu Mar 27 21:18:07 CET 2014


Nice to see automated testing gaining traction.  As I mentioned in last Sunday's weekly dev chat, it's helpful to have a checklist.  This is especially true when devs are working from different time zones.  If the test checklist has not been completed and confirmed by reports, the version does not get officially released.

IMHO this can help avoid "a" releases (corrective point releases such as 2.70a), because it keeps official release decisions from being made on opinion alone.

Even simple, quick, easy-to-use written reports and checklists can make the difference, in my opinion and experience, because they give concrete results to review.  Below is a quick outline I clipped from a website on using a checklist-and-report method.

Respectfully submitted for your consideration,
Thanks always to the core devs for their hard work,
JTa

This uses a checklist method:
http://www.projectconnections.com/templates/detail/software-unit-test-plan.html

How to use it
    * Review the Overview of Unit Testing in the template as you prepare for code development and testing.
    * Review your design specifications and source code for the units to be tested.
    * Create a Unit Test Plan and detailed test cases using the guidelines on page 4 of the template. Perform a peer review of the Unit Test Plan.
    * Create any test "stubs" required to provide input to or receive output from the code module (a minimal sketch follows this list).
    * When it's time to test particular units, compile the code in the test environment to check for any missing files required for test plan execution.
    * Execute the tests. Compare the information/values received from the tested software to those expected, as documented in the Unit Test Plan.
    * Record any failures on a Unit Test Report Form (page 6 of the template), including bugs/defects and changes needed to the code, and note that a re-test is needed. Update the Unit Test Plan if needed.
    * Retest the code when an updated version is available. Record the results on the Unit Test Report Form.
    * When the unit is considered to have passed all tests, archive the final Report form(s).
    * Compile the Unit Test Report forms for a given sub-system into a Summary Unit Test Report (page 5 of the template).
    * Provide any necessary changes/feedback to the related software specifications and design/implementation documents.
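
To make the "stubs" step concrete, here is a minimal Python sketch; the module, function, and value names are hypothetical and not taken from the template.  The stub stands in for a real dependency so the unit can be driven with known input and its output compared against the expected values from the test plan.

    # Hypothetical unit under test: format_report() pulls values from a
    # data source and formats them. The StubSource below replaces the real
    # source so the test controls the input exactly.

    def format_report(source):
        # Unit under test: reads key/value pairs and formats them.
        items = sorted(source.read().items())
        return ", ".join("%s=%s" % (key, value) for key, value in items)

    class StubSource:
        # Test stub: supplies fixed input in place of the real data source.
        def __init__(self, values):
            self._values = values

        def read(self):
            return self._values

    def test_format_report():
        stub = StubSource({"tris": 12, "verts": 8})
        result = format_report(stub)
        # Expected value comes straight from the (hypothetical) test plan.
        assert result == "tris=12, verts=8", result

    if __name__ == "__main__":
        test_format_report()
        print("format_report unit test passed")
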
________________________________
 From: Reed Law <reedlaw at gmail.com>
To: bf-committers at blender.org 
Sent: Thursday, March 27, 2014 9:14 AM
Subject: Re: [Bf-committers] Automated testing
 

On 26/03/2014, Bastien Montagne wrote:
> Well, writing tests is not the funniest part of the work, but I agree we
> should have more, esp. since we saw again with 2.70 that getting a large
> number of users testing even RCs is not an easy thing.
>
> And if this means writing more py wrappers to Blender's pieces of code,
> in the end it's also positive, giving as side effect more tools to
> scripters (writing tests in a simple, dynamic language like python is
> much better).
>
> And I actually already joined the team by starting work on UI unit
> handling! ;)
>
> Cheers,
> Bastien

Test-driven development (TDD) is more about design than verification. Tools
like KLEE might provide some utility, but one of the chief goals of writing
tests is to create a tight feedback loop between design and implementation.
First, when you start by writing a test, you're forced to consider the
external interface: how will the application use this class or call this
function? Second, you don't have to rely on a lot of "manual" prototyping,
by which I mean writing throwaway code or poking at a REPL to check your
design by hand. The failing test gives you an automated way to see whether
the design is working. Once the test passes, you can freely refactor your
code without fear of breaking it.
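
As a concrete sketch of that loop, assuming Python's unittest and
hypothetical Sphere/Ray names (illustrative only, not from Blender's test
suite or the ray tracer mentioned below): the test is written first, it
pins down the interface, and it fails until the production code exists.

    import unittest

    # Written before any implementation exists; this import will fail at
    # first, which is the starting point of the TDD loop.
    from rays import Ray, Sphere

    class TestSphereIntersection(unittest.TestCase):
        def test_ray_hits_sphere_at_expected_distance(self):
            # Writing this first forces interface decisions: a Sphere takes
            # a center and radius, a Ray an origin and direction, and
            # intersect() returns the distance to the nearest hit.
            sphere = Sphere(center=(0.0, 0.0, -5.0), radius=1.0)
            ray = Ray(origin=(0.0, 0.0, 0.0), direction=(0.0, 0.0, -1.0))
            self.assertAlmostEqual(sphere.intersect(ray), 4.0)

        def test_ray_that_misses_returns_none(self):
            sphere = Sphere(center=(0.0, 0.0, -5.0), radius=1.0)
            ray = Ray(origin=(0.0, 0.0, 0.0), direction=(0.0, 1.0, 0.0))
            self.assertIsNone(sphere.intersect(ray))

    if __name__ == "__main__":
        unittest.main()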

I used TDD to create a ray tracer <https://bitbucket.org/reedlaw/rays> with
full test coverage. I don't know how I could have done it without writing
tests up front. I used the GTest framework to write tests in C++. I'm not
sure how Blender's Python tests are set up, but it seems to me that writing
tests first would be hard to do that way. Don't they depend on an interface
being present in order to run? In TDD, you only want to write the bare
minimum to get a test to execute before you begin implementing production
code. That usually means just empty functions.
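
A sketch of that bare-minimum step, continuing the hypothetical rays module
from the test above: just enough code for the test to import and run,
deliberately failing rather than passing.

    # rays.py -- first draft, written only so the failing test can execute.
    # TDD says stop here, run the test, watch it fail, and only then start
    # implementing until it passes.

    class Ray:
        def __init__(self, origin, direction):
            self.origin = origin
            self.direction = direction

    class Sphere:
        def __init__(self, center, radius):
            self.center = center
            self.radius = radius

        def intersect(self, ray):
            return None  # deliberately empty: the intersection test cannot pass yet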

I think a good strategy would be to begin requiring test coverage for new
code and gradually increasing test coverage of old code. The main benefit
of writing tests for untested code is to aid in refactoring.