[tuhopuu-devel] LSCM sls_solver.cpp line 310

Bjornmose tuhopuu-devel@blender.org
Tue, 6 Apr 2004 00:42:52 +0200


hi brecht, hi all
( Hello Mr. Levy / special guest invited via CC / please correct me if i
am going wrong )
pieces start to fit ( or maybe it is just my head? )
please give feedback on my understanding of GRAPHITE's code

starting off with f(x) := || (A)x - b ||^2 to be minimized, assuming the
L2 norm ||a|| := sqrt(<a,a>),
rewrite f(x) := <(A)x-b,(A)x-b>  ( <a,b> denotes the scalar (AKA "inner")
product )
this expands to <(A)x,(A)x> + <-b,(A)x> + <(A)x,-b> + <-b,-b>
knowing <(A)a,b> = <a,(A*)b>, with (A*) being the Hermitian conjugate of
(A), which is simply the transpose (At) when all members of (A) are real.
differentiating, using the product rule and some other multidimensional
vector/linear calculus, leads to
grad f(x) = 2 ( (A*)(A)x - (A*)b ), which has to be the zero vector at an
extremum (which we hope, for good reason, is a minimum) of f(x)
introducing (G) := (A*)(A) and r = (A*)b, or (G) := (At)(A) and r =
(At)b for a real matrix (A),
the least squares minimization problem ( which becomes quite nasty
for general objective functions f(x) := O(x^n) + O(x^(n-1)) + ... )
reduces to solving a linear equation (G)x = r
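
to make that reduction concrete, here is a tiny dense toy example ( nothing
to do with the actual GRAPHITE code, just the formula above ): it builds
(G) = (At)(A) and r = (At)b for a small overdetermined system and solves
the resulting 2x2 system by hand

/* toy illustration of the normal equations (G)x = r, G = At*A, r = At*b */
#include <cstdio>

#define M 3  /* number of equations (rows of A)    */
#define N 2  /* number of unknowns  (columns of A) */

int main(void)
{
    double A[M][N] = { {1, 1}, {1, 2}, {1, 3} };  /* overdetermined system */
    double b[M]    = { 1, 2, 2 };
    double G[N][N] = {{0}}, r[N] = {0};

    /* G = At * A  and  r = At * b */
    for (int i = 0; i < N; i++) {
        for (int j = 0; j < N; j++)
            for (int k = 0; k < M; k++)
                G[i][j] += A[k][i] * A[k][j];
        for (int k = 0; k < M; k++)
            r[i] += A[k][i] * b[k];
    }

    /* the 2x2 system is small enough to solve by Cramer's rule */
    double det = G[0][0]*G[1][1] - G[0][1]*G[1][0];
    double x0  = (r[0]*G[1][1] - G[0][1]*r[1]) / det;
    double x1  = (G[0][0]*r[1] - G[1][0]*r[0]) / det;
    printf("least squares solution: x = (%g, %g)\n", x0, x1);
    return 0;
}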

so far the maths.
it happens in our case that the dimension of (G) and r is likely to become
some hundreds or more,
so we have to deal with the situation that most of the members of (G)
are zero (or close to it)
this is where "sparse matrices" are invoked.
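
just to sketch what "sparse" means for storage ( my guess at the general
idea, not the actual Graphite data structure ): instead of a full n*n
array, every row only keeps its non-zero coefficients together with their
column index

/* minimal sparse-row storage sketch -- not the Graphite structs,
   just the idea: store only the non-zero coefficients of every row */
#include <vector>
#include <cstddef>

struct SparseCoeff { int col; double val; };

struct SparseMatrix {
    std::vector< std::vector<SparseCoeff> > rows;
    SparseMatrix(int n) : rows(n) {}

    /* accumulate a contribution into G[i][j], creating the slot if needed */
    void add(int i, int j, double value) {
        std::vector<SparseCoeff>& row = rows[i];
        for (std::size_t k = 0; k < row.size(); ++k)
            if (row[k].col == j) { row[k].val += value; return; }
        SparseCoeff c; c.col = j; c.val = value;
        row.push_back(c);
    }

    /* y = G * x, touching only the stored non-zeros */
    void mult(const std::vector<double>& x, std::vector<double>& y) const {
        for (std::size_t i = 0; i < rows.size(); ++i) {
            double sum = 0.0;
            for (std::size_t k = 0; k < rows[i].size(); ++k)
                sum += rows[i][k].val * x[rows[i][k].col];
            y[i] = sum;
        }
    }
};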

so the lines of code we talked about (sls_solver.cpp line 310) should be
read as an optimized way of building up the linear system (G)x = r for a
sparse (G)  ( Mr. Levy: i'd like to have this confirmed )
( i would not want to re-develop all of this )

now, having this nicely packed, we hand it to a powerful linear solver
called SuperLU to do the rest of the work,
need a few extra bits to extract the result, and we are DONE !
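
i will not try to quote the SuperLU calls from memory, but just to show
what "do the rest of the work" means, here is a plain conjugate gradient
loop as a toy stand-in for the direct solver: it solves (G)x = r for a
symmetric positive definite (G), which (At)(A) is ( the mat_vec argument
could e.g. be the sparse mult from the sketch above )

/* toy iterative stand-in for the direct solver: conjugate gradient on
   (G)x = r; G = At*A is symmetric positive (semi)definite, so CG applies */
#include <vector>
#include <cmath>
#include <cstddef>

/* mat_vec(x, y) is expected to compute y = G * x */
template <class MatVec>
void conjugate_gradient(const MatVec& mat_vec, const std::vector<double>& rhs,
                        std::vector<double>& x, int max_iter, double tol)
{
    std::size_t n = rhs.size();
    std::vector<double> res(n), p(n), Gp(n);

    mat_vec(x, Gp);                                 /* res = r - G*x */
    for (std::size_t i = 0; i < n; ++i) res[i] = rhs[i] - Gp[i];
    p = res;

    double rho = 0.0;
    for (std::size_t i = 0; i < n; ++i) rho += res[i] * res[i];

    for (int it = 0; it < max_iter && std::sqrt(rho) > tol; ++it) {
        mat_vec(p, Gp);
        double pGp = 0.0;
        for (std::size_t i = 0; i < n; ++i) pGp += p[i] * Gp[i];
        double alpha = rho / pGp;                   /* step length */
        for (std::size_t i = 0; i < n; ++i) {
            x[i]   += alpha * p[i];                 /* update the solution */
            res[i] -= alpha * Gp[i];                /* and the residual    */
        }
        double rho_new = 0.0;
        for (std::size_t i = 0; i < n; ++i) rho_new += res[i] * res[i];
        for (std::size_t i = 0; i < n; ++i)
            p[i] = res[i] + (rho_new / rho) * p[i]; /* new search direction */
        rho = rho_new;
    }
}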


Bjoern(MoS (physics))e Jens Ole Wund

----- Original Message -----
From: "Bjornmose" <bjornmose@gmx.net>
To: <tuhopuu-devel@blender.org>
Sent: Sunday, April 04, 2004 1:57 AM
Subject: [tuhopuu-devel] LSCM sls_solver.cpp line 310


> hi blendix and all
> as far as i can read it :{
> the "least_sqaure" flag modifies the original matrix A by adding some
> matrix M "on the fly"  consisting of "square" elements a[i]*a[j] so A
> becomes soemthing like A+M.
> And the components of the target vector b become scaled "on the fly".
> b -> b'
> So it seems to be solving (A+M)x= b' which is again a linear system to
> solve say (A'')x =b';
> I am still struggeling to evaluate this /* bruno help us , i know that
> you know! */
>
> so, what i know (ref1):
> LET y(x) := x^2 + b*x + c; y(x) has an extremum; differentiate: dy/dx =
> 2*x + b; so solve dy/dx = 0 to get the x at the minimum/maximum (and it
> is proven to be unique!).
> expanding this to multiple dimensions is not so hard to do.
>
> i think (ref1) this must do the trick:
> Knowing "least squares" in a general sense, our case reduces to solving
> a "true square" (purely quadratic) problem. /* no higher order terms in
> Taylor's series occur */
> So knowing the derivatives, this becomes a linear problem in derivative
> space ( leaving one "unknown" for every dimension, (defined by
> pinning) )
> /* i am thinking "math" and feel so clumsy expressing my intention /
> sorry for that / bruno?? */
> }
>