[Bf-committers] [GSoC 2018] Questions regarding tests in Python

Łukasz Hryniuk lukasz.hryniuk at wp.pl
Fri Mar 2 14:46:44 CET 2018


Hi,

I've been messing around with the Blender sources for a while, getting 
familiar with the code and trying to find an area where I could be most 
effective during GSoC. I'd like to ask about the "Tests for Regressions" idea:


1. There is a list of areas to be tested. How will they be chosen?

Are there any usage statistics? Further development plans? Should I go 
from the most basic areas to the most complicated? Is it up to me to 
decide to test e.g. modifiers first?


2. What should a test look like?

E.g. under /lib/tests/modifier_stack/ there are .blend files that just 
show how a modifier is supposed to affect a given mesh (e.g. 
curve_modifier.blend, which is in fact a subsurf + curve combination), 
and others that prepare a scene, apply a modifier and call validate() 
on the resulting mesh (array_test.blend). There is also the 
/blender/tests/python/bl_mesh_modifiers.py file, with this comment:

# Currently this script only generates images from different modifier
# combinations and does not validate they work correctly,
# this is because we don't get 1:1 match with bmesh.
#
# Later, we may have a way to check the results are valid.

I don't understand what "we don't get 1:1 match with bmesh" exactly 
means, but this comment is from 2012, so I assume it's no longer true.

The goal of the "Tests for Regressions" project is to actually check 
results, so I've started writing a test for the Array modifier: I 
created an object, then another one representing the expected result, 
and in the test I applied the modifier and compared the result with the 
expected mesh using bpy.types.Mesh.unit_test_compare(), which, as far 
as I can see, compares data like the vertices and edges of two meshes 
(I haven't found many uses of that method in the tests).
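
Here is a rough sketch of what I have in mind (only a sketch: it 
assumes a .blend file that already contains objects named "TestObject" 
and a hand-made "ExpectedObject", and uses the current 2.79 API):

import bpy

def main():
    # Assumed to exist in the .blend file: the object under test and a
    # hand-made expected result.
    obj = bpy.data.objects["TestObject"]
    expected = bpy.data.objects["ExpectedObject"]

    # Add an Array modifier with a constant offset and apply it.
    mod = obj.modifiers.new(name="Array", type='ARRAY')
    mod.count = 3
    mod.use_relative_offset = False
    mod.use_constant_offset = True
    mod.constant_offset_displace = (2.0, 0.0, 0.0)

    bpy.context.scene.objects.active = obj
    bpy.ops.object.modifier_apply(apply_as='DATA', modifier=mod.name)

    # unit_test_compare() returns the string "Same" when the meshes match.
    result = expected.data.unit_test_compare(mesh=obj.data)
    if result != "Same":
        raise AssertionError("Array modifier test failed: " + result)

if __name__ == "__main__":
    main()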

Should a test in this project look like this?


3. Where should they be placed: .blend or .py?

For some tests I can create the expected object using Python, by 
specifying vertices or joining primitives. They can also, probably 
faster, be created using the GUI.
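
For the Python route, something like this from_pydata() sketch is what 
I mean (the quad is just a stand-in for whatever mesh the modifier 
under test should produce):

import bpy

# Build the expected mesh directly from Python data: a unit quad here,
# as a stand-in for the real expected result.
verts = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
faces = [(0, 1, 2, 3)]

mesh = bpy.data.meshes.new("ExpectedMesh")
mesh.from_pydata(verts, [], faces)
mesh.validate()

obj = bpy.data.objects.new("ExpectedObject", mesh)
bpy.context.scene.objects.link(obj)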

What's recommended?

Creating a .blend file is much easier and more convenient for reviewing 
what's happening in Blender, but it makes it harder to check the actual 
code/scene details (like object positions or modifier parameters) and 
to search for them (I haven't found any tool to grep text inside a 
.blend file; only blendfile.py, which, as I see, could be used for that 
with not-so-little effort). Moreover, I think it's easier to organize 
tests in a .py file; see the unittest sketch below. In a .blend file, 
one idea is to use layers (?) to separate tests for different 
parameters (e.g. for the Array modifier I'd like to test the merge 
option and the constant offset separately).
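
By organizing tests in a .py file I mean something like this (again 
only a sketch; the helper and the object names are mine):

import bpy
import unittest

class ArrayModifierTest(unittest.TestCase):

    def apply_array(self, obj, **settings):
        # Hypothetical helper: add an Array modifier with the given
        # settings and apply it.
        mod = obj.modifiers.new(name="Array", type='ARRAY')
        for name, value in settings.items():
            setattr(mod, name, value)
        bpy.context.scene.objects.active = obj
        bpy.ops.object.modifier_apply(apply_as='DATA', modifier=mod.name)

    def test_constant_offset(self):
        obj = bpy.data.objects["TestObject"]
        self.apply_array(obj, count=2,
                         use_relative_offset=False,
                         use_constant_offset=True,
                         constant_offset_displace=(2.0, 0.0, 0.0))
        expected = bpy.data.objects["ExpectedConstantOffset"]
        self.assertEqual(
            expected.data.unit_test_compare(mesh=obj.data), "Same")

    def test_merge(self):
        obj = bpy.data.objects["TestObjectMerge"]
        self.apply_array(obj, count=2, use_merge_vertices=True)
        expected = bpy.data.objects["ExpectedMerge"]
        self.assertEqual(
            expected.data.unit_test_compare(mesh=obj.data), "Same")

if __name__ == "__main__":
    unittest.main(argv=[__file__])  # skip Blender's own command-line args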


4. How will the project be evaluated, i.e. how much is a participant 
supposed to achieve by each evaluation?

Will it be measured by the number of tests or by coverage (I've got no 
idea how to check the latter)? Is it up to me to set milestones with a 
mentor, based on my intuition of how much time testing each area will 
take?


Regards,
Łukasz Hryniuk

