[Bf-committers] Proposal to Change Mathutils Vectors

Paul Melis paul.melis at sara.nl
Wed Dec 7 13:45:36 CET 2011


Hi Andrew,

On 12/07/2011 11:33 AM, Andrew Hale wrote:
> Correct me if I'm wrong, but do you propose to have 3D vectors with
> matrix multiplication just work (i.e. making assumptions about the w
> coordinate and renormalising if necessary) but retain 4D vectors which
> compute the product without modification of the w component? This sounds
> quite logical, however, these transformations deal with points so I'm
> not sure I understand what you mean by differentiating between vectors
> and normals.

Regarding the last point: suppose you have a 3D model consisting of a
set of points with associated normals. If you want to transform the
model using some transformation matrix M, then you can simply update each
point by matrix multiplication with M and normalizing the result. I.e. you
would represent each 3D point coordinate as (x, y, z, 1), multiply with
M to get (x', y', z', w') and divide by w' to get the final transformed
coordinates. The normal vectors of the model, however, need to be
transformed by multiplying with (M^-1)^T, i.e. the transpose of the
inverse of M. So you would first compute Mn = transpose(inverse(M)) and
then compute each transformed normal vector using (nx', ny', nz', 0) =
Mn * (nx, ny, nz, 0).
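
In code, the difference boils down to something like the following
minimal sketch, which uses plain nested lists for the 4x4 matrices
(the function names are just illustrative, not part of any existing
API):

def mat_vec(M, v):
    # Multiply a 4x4 matrix M by a 4-component vector v.
    return [sum(M[i][j] * v[j] for j in range(4)) for i in range(4)]

def transform_point(M, p):
    # Represent the point as (x, y, z, 1), multiply, divide by w'.
    x, y, z, w = mat_vec(M, (p[0], p[1], p[2], 1.0))
    return (x / w, y / w, z / w)

def transform_normal(Mn, n):
    # Mn is transpose(inverse(M)); with w = 0 no division is needed.
    x, y, z, _ = mat_vec(Mn, (n[0], n[1], n[2], 0.0))
    return (x, y, z)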

This is the type of situation where a good API can hide lots of details
the user might not be interested in. The key here, I think, is not to
have users apply mathematical operations like multiplication directly,
but to give them methods that handle and hide the above complexities.
For example (obviously pseudocode, not based on the current Blender
Python API :)):

# Load Wavefront OBJ model
model = load("blah.obj")

# Create transformation
S = Matrix.scale(2, 2, 2)
R = Matrix.rotate_x(90)
T = Matrix.translate(1, -4, 3)
# Combined transformation: 1) scale, 2) rotate, 3) translate
# (composed right to left, since points are transformed as M * v)
M = T * R * S

# Transform model
for v in model.vertices:
    new_position = M.transform_point(v.position)
    new_normal = M.transform_normal(v.normal)
    ....

Here, the transform_normal() method could take care of computing
(M^-1)^T and caching it in the M instance, so it only needs to be
computed once. The user only works with 3-tuples for points, vectors and
normals, without having to worry about normalization after multiplication
and such. In fact, the user could be completely unaware that an M * v
multiplication is being done. Furthermore, transform_point() would take
care of normalization after multiplication, while transform_vector()
wouldn't perform that step, as it isn't necessary for vectors.
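
To make the caching idea concrete, here's a rough sketch of what such
a wrapper could look like. All the names are invented for illustration,
and numpy is only used to keep the matrix inverse short:

import numpy as np

class TransformMatrix:
    def __init__(self, m):
        self.m = np.asarray(m, dtype=float)  # the 4x4 matrix M
        self._normal_matrix = None           # cached (M^-1)^T

    def transform_point(self, p):
        # Points: w = 1 on input, divide by w' after multiplying.
        x, y, z, w = np.dot(self.m, (p[0], p[1], p[2], 1.0))
        return (x / w, y / w, z / w)

    def transform_vector(self, v):
        # Vectors: w = 0, so translation drops out and no w-divide is needed.
        x, y, z, _ = np.dot(self.m, (v[0], v[1], v[2], 0.0))
        return (x, y, z)

    def transform_normal(self, n):
        # Normals: multiply by transpose(inverse(M)), computed once and cached.
        if self._normal_matrix is None:
            self._normal_matrix = np.linalg.inv(self.m).T
        x, y, z, _ = np.dot(self._normal_matrix, (n[0], n[1], n[2], 0.0))
        return (x, y, z)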

Interestingly, I now seem to be arguing that for doing general
transformation stuff like the above you never have to work with 4-tuples
or homogeneous coordinates, more or less supporting your original
proposal :) I just looked at the RenderMan specification, as existing
references might provide some insights. The spec defines different data
types for points, vectors and normals, all being simple float[3] arrays.
So the type of a value defines the interpretation of the value.
Interestingly, there's also a "float RtHPoint[4]", which probably
represents a value in homogeneous coordinates, but it's used nowhere in
the API :)
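
The same idea is easy to sketch in Python, where the type of a value
(rather than its length) decides how it gets transformed. The class
names are hypothetical, and transform() could delegate to methods like
those sketched earlier:

class Point(tuple): pass    # transformed with w = 1, then w-divide
class Vector(tuple): pass   # transformed with w = 0
class Normal(tuple): pass   # transformed by transpose(inverse(M))

def transform(M, value):
    # Dispatch on the value's type, so the caller never has to choose
    # a w component or a normal matrix explicitly.
    if isinstance(value, Point):
        return M.transform_point(value)
    elif isinstance(value, Normal):
        return M.transform_normal(value)
    else:
        return M.transform_vector(value)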

I guess the bottom line is whether exposing the 4-tuple Blender vector
internals has any use when writing Python scripts. On the one hand,
having access to the real underlying data provides great power. But in
the current Python API there are no high-level methods available, and
users need to use low-level matrix-vector multiplications, which forces
them to understand exactly what they're doing. Perhaps adding a
higher-level API like the one sketched above is a good way forward,
while leaving the low-level stuff in place....

Regards,
Paul



