[Bf-committers] [GSoC 2018] Questions regarding tests in Python

Brecht Van Lommel brechtvanlommel at pandora.be
Fri Mar 2 19:06:09 CET 2018


Hi Łukasz,

I've updated the wiki page with more detail since it was quite vague:
https://wiki.blender.org/index.php/Dev:Ref/GoogleSummerOfCode/2018/Ideas#Tests_for_Regressions

On Fri, Mar 2, 2018 at 2:46 PM, Łukasz Hryniuk <lukasz.hryniuk at wp.pl> wrote:

> 1. There is a list of areas to be tested. How will they be chosen?
>

It's up to you to pick some areas for your proposal. Testing multiple
areas from the wiki page seems doable, but you can also propose others.

> 2. What should tests look like?
>


> I don't understand what exactly "we don't get 1:1 match with bmesh" means,
> but this comment is from 2012, so I assume it's no longer true.
>

These tests are a bit outdated; that comment refers to differences
between the Carve and BMesh boolean implementations. We only have BMesh
booleans now, so that part should be updated.


> The goal of the "Tests for Regressions" project is to actually check
> results, so... I've started writing a test for the Array modifier: I
> created an object, then another one (the expected result), and in the test
> I applied the modifier and compared the result with the expected mesh using
> bpy.types.Mesh.unit_test_compare(), which, as far as I've seen, compares
> data like the vertices, edges and so on of two meshes (I haven't found many
> uses of that method in the tests).
>
> Should a test in this project look like this?
>

What you are describing is more of a unit test for the "Tests for Core
Libraries" idea. Both can be useful, but the main idea I had in mind for
regression testing was to do it in a way that tests can be created quickly
and that is easy for developers to use and maintain. It could already be
used to check that master and blender2.8 give the same results, for
example.

See the description on the wiki page.
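
To illustrate, the kind of unit-style test you describe could look roughly
like this (just a sketch: the object names are placeholders, and the
active-object handling and modifier_apply arguments differ a bit between
2.7x and blender2.8):

import bpy

def test_array_modifier():
    # "TestCube" is the input object, "ExpectedCube" holds the expected
    # result; both are assumed to already exist in the test .blend file.
    test_obj = bpy.data.objects["TestCube"]
    expected_obj = bpy.data.objects["ExpectedCube"]

    # Add and apply an Array modifier on the test object (2.7x API;
    # blender2.8 sets the active object through the view layer instead).
    bpy.context.scene.objects.active = test_obj
    mod = test_obj.modifiers.new(name="Array", type='ARRAY')
    mod.count = 3
    bpy.ops.object.modifier_apply(modifier=mod.name)

    # Compare the resulting mesh data against the expected mesh.
    result = test_obj.data.unit_test_compare(mesh=expected_obj.data)
    if result != "Same":
        raise AssertionError("Array modifier result differs: " + result)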


> 3. Where should they be placed: in .blend files or in .py scripts?
>

I think it's best to create a .blend file for the input data, and Python
scripts to test it with multiple modifiers, nodes, tools and settings.
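
For example, a data-driven script along these lines could run in the
background against a shared .blend file (a rough sketch; the object names,
modifier settings and the TESTS table are made up for illustration, and the
to_mesh() call uses the 2.7x API):

import bpy

# Each entry: input object, expected-result object, modifier type, settings.
TESTS = [
    ("CubeArray", "CubeArrayExpected", 'ARRAY', {"count": 3}),
    ("CubeMirror", "CubeMirrorExpected", 'MIRROR', {}),
]

def run_modifier_test(input_name, expected_name, mod_type, settings):
    obj = bpy.data.objects[input_name]
    expected = bpy.data.objects[expected_name]

    mod = obj.modifiers.new(name="test", type=mod_type)
    for key, value in settings.items():
        setattr(mod, key, value)

    # Evaluate the modifiers without touching the original mesh data
    # (2.7x API; blender2.8 evaluates through the depsgraph instead).
    result_mesh = obj.to_mesh(bpy.context.scene, True, 'PREVIEW')
    outcome = result_mesh.unit_test_compare(mesh=expected.data)

    obj.modifiers.remove(mod)
    bpy.data.meshes.remove(result_mesh)
    if outcome != "Same":
        raise AssertionError("%s: %s" % (input_name, outcome))

for test in TESTS:
    run_modifier_test(*test)

Something like this could be run with
"blender --background tests.blend --python test_modifiers.py", so new cases
are added by extending the TESTS list and the .blend file.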


> 4. How will it be evaluated, i.e. how much is a participant supposed to
> achieve by each evaluation?
>
> Will it be set by a number of tests or coverage (I've got no idea how to
> check that)? Is it up to me to set milestones with a mentor, based on my
> intuition about how much time testing each area will take?
>

As part of the proposal you create a plan, and mentors can give feedback
on it after the proposal is submitted, to help improve it. It's not so
much about a specific number of tests; for the evaluations we look at how
you are doing overall.

Regards,
Brecht.

