Eigenvalues, eigenvectors in Ruby?

-------- Original Message --------

Date: Thu, 22 Nov 2007 17:05:09 +0900
From: [email protected]
To: [email protected]
Subject: eigenvector discrepancy was (Re: [Matrix] eigenvalues, eigenvectors in Ruby ???)

Eigenvectors form an orthogonal basis of a vector space. As a basis
is just a maximal set of linearly independent vectors, there
are infinitely many of them … the only condition is that they
span the eigenspace corresponding to the eigenvalue.
So, if the eigenvalue equations matrix * vector = eigenvalue * vector
hold, I’d not be concerned.

Cameron McBride [email protected] wrote:

Do you know right off how its accuracy compares to GSL/LAPACK routines?

That statement refers to inaccuracies in numerical
calculations, introduced by the fact that Float or Double precision
numbers cannot exactly represent certain numbers (this
actually holds for any number that is not a ‘very’ finite
sum of powers of 2 and 1/2 - think of
1/10, sqrt(2), sqrt(3), pi).
It has nothing to do with differences in eigenvectors, even though
an entry in them might well be 1/sqrt(2) rather than the 0.7071
a numerical software might report.
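A quick sketch of the representation issue described above, in plain Ruby:

```ruby
# Floats are finite binary fractions, so 1/10 has no exact
# representation, while Ruby's built-in Rational stays exact.

puts 0.1 + 0.2 == 0.3                                      # false: rounding error
puts Rational(1, 10) + Rational(2, 10) == Rational(3, 10)  # true: exact
puts Math.sqrt(2)**2 == 2                                  # false: sqrt(2) is irrational
```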
Now, as we know thanks to Ed Borasky, in Ruby, you have two choices:

a.) use bindings to GSL or R or LAPACK or …, and thus ultimately
do the calculations in a fast language like C or Fortran, which
uses Double precision and gives a fast, yet not quite exact, result.
That’s what you would do almost exclusively. People who teach numerics
at universities will tell you there is no other way (i.e., that’s
their translation of "Matlab,
Octave etc. can’t do otherwise unless you rewrite almost everything

  • once to be able to use Rationals, and then again to be able
    to use continued fractions - because you learned about them only
    two months later …")

b.) use a pure Ruby, object-oriented implementation of numerical
algorithms, as in the extended matrix.rb suggested by Ed Borasky.
That will give you exact calculations using Rationals right away,
and exact results for expressions like sqrt(2)+sqrt(3) via a
(yet to be written) addition method for a class of continued fractions
(see Arithmetic with Continued Fractions): the expressions
for sqrt(2) and sqrt(3) lead to periodic continued fractions, i.e.,
representations involving only finite information.
The calculations to be done are far more involved than adding two
Floats, and, additionally, Ruby is slower than C or Fortran - but it
can be done, using Ruby, in very finite coding time.

In most situations, sticking with a.) is OK, but if you really
wanted b.), no numerical software will do it for you unless you rewrite
a substantial part of the code … in Ruby, you just define
another class and add some methods to it.
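As a small illustration of the "exact calculations" half of option b.) - this uses only the stdlib Matrix class, not the extended matrix.rb itself, so take it as a sketch of the idea:

```ruby
require 'matrix'

# Stdlib Matrix is generic over its entry type: feed it Rationals
# and determinant/inverse come back as exact fractions, where
# Float entries would only approximate them.
m = Matrix[[Rational(1, 3), Rational(1, 6)],
           [Rational(1, 6), Rational(1, 3)]]

puts m.determinant                         # 1/12, exactly
puts m.inverse * m == Matrix.identity(2)   # true, with no rounding error
```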

Best regards,

Axel

Axel E. [email protected] wrote:

Eigenvectors form an orthogonal basis of a vector space. As a basis
is just a maximal set of linearly independent vectors, there
are infinitely many of them … the only condition is that they
span the eigenspace corresponding to the eigenvalue.
So, if the eigenvalue equations matrix * vector = eigenvalue * vector
hold, I’d not be concerned.

eigenvalues are the same between ExtendedMatrix and GSL.

-------- Original Message --------

Date: Thu, 22 Nov 2007 18:45:02 +0900
From: [email protected]
To: [email protected]
Subject: Re: eigenvector discrepancy was (Re: [Matrix] eigenvalues, eigenvectors in Ruby ???)

eigenvalues are the same between ExtendedMatrix and GSL.
Yes, but that’s not what I meant, as you can have different vectors
mapped to the same vector under a matrix mapping.
You’d have to check whether, for a matrix m, the following holds:

for each v that is an eigenvector of m, and each corresponding
eigenvalue c (vectors and numbers as delivered by the software),

m * v = c * v .

I think one can be pretty sure that this has been thoroughly
tested for rb-gsl. So, just to gain confidence in extendedmatrix.rb,
I’d test that.
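That check can be written in a few lines; here Matrix#eigensystem from the stdlib stands in for whichever solver (rb-gsl, extendedmatrix.rb) is actually under test:

```ruby
require 'matrix'

# For every eigenpair (c, v) the solver reports, the residual
# m*v - c*v should be numerically zero.
def eigenpairs_ok?(m, tol = 1e-10)
  es = m.eigensystem
  es.eigenvalues.zip(es.eigenvectors).all? do |c, v|
    (m * v - v * c).to_a.all? { |entry| entry.abs < tol }
  end
end

puts eigenpairs_ok?(Matrix[[2, 1], [1, 2]])   # true for this symmetric matrix
```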

With respect to your Float rounding post, I think ‘===’ is
a built-in Ruby method in its own right, though not necessarily for the
Float class, and thus redefining it might create problems when
parsing. So either introduce additional brackets or
rename the method to ‘float_round’ or something. Just a guess.

Best regards,

Axel

Une Bévue wrote:

        [ 1, 0, 1, 0 ],

p a.cJacobiV
6.116e-01 -2.536e-01 -9.158e-16 -7.494e-01
2.818e-01 -8.152e-01 4.852e-16 5.059e-01 ]
=end

the eigenvalues are “the same”, however the eigenvectors don’t even
agree in sign, let alone to within a multiplicative constant ???

what did I miss this morning ?
a coffee cup ??

An eigenvector represents a direction in N-dimensional space. Two
different packages will not necessarily report eigenvectors with the
same sign as a result. As someone else posted, eigenvalues and
eigenvectors are the solutions of the equations

Matrix * eigenvector = eigenvalue * eigenvector,

so the way to test an eigensolution is to evaluate the above equations
to see if they are satisfied within, say, 10 - 15 decimal places.

M. Edward (Ed) Borasky [email protected] wrote:

An eigenvector represents a direction in N-dimensional space. Two
different packages will not necessarily report eigenvectors with the
same sign as a result. As someone else posted, eigenvalues and
eigenvectors are the solutions of the equations

Matrix * eigenvector = eigenvalue * eigenvector,

so the way to test an eigensolution is to evaluate the above equations
to see if they are satisfied within, say, 10 - 15 decimal places.

OK, thanks, I’ll check that, since I’m using the eigenvectors only to build a
permutation matrix…

Axel E. [email protected] wrote:

I think one can be pretty sure that this has been thoroughly
tested for rb-gsl. So, just to gain confidence in extendedmatrix.rb,
I’d test that.

OK, that’s what I’m presently doing.
thanks

With respect to your Float rounding post, I think ‘===’ is
a built-in Ruby method in its own right, though not necessarily for the
Float class, and thus redefining it might create problems when
parsing. So either introduce additional brackets or
rename the method to ‘float_round’ or something. Just a guess.

no === method for Float (according to the Pickaxe); yes, I’ll try another
name and see…

again, thanks for your reply

Axel E. [email protected] wrote:

Yes, but that’s not what I meant, as you can have different vectors
mapped to the same vector under a matrix mapping.
You’d have to check whether, for a matrix m, the following holds:

for each v that is an eigenvector of m, and each corresponding eigenvalue
c (vectors and numbers as delivered by the software),

m * v = c * v .

I did verify that; it’s OK for two different matrices (a and b).
However, those two matrices differ by a permutation, and
normally (it works with GSL) I can find the permutation matrix p such that :

p * a * p^(-1) = b

This permutation matrix is computed by comparing an eigenvector of a
with the corresponding one of b (same values, different order), which I
can’t manage in the case of ExtendedMatrix…

Une Bévue wrote:

Float class, and thus redefining it might create problems when
parsing. So either introduce additional brackets or
rename the method to ‘float_round’ or something. Just a guess.

no === method for Float (according to the Pickaxe); yes, I’ll try another
name and see…

again, thanks for your reply

RSpec has an “approximately equal” operator that allows you to specify
how close two floating point numbers “should be”. I forget what it’s
called, but you should be able to lift the code for it and specify how
close numbers need to be. There is something like this built into the
eigensystem solvers, however, so you probably don’t want to use a finer
tolerance than the solver did. :-) Actually, since this capability is
built into RSpec, you should consider using RSpec as a test framework
for number crunching code.
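For reference: in today’s RSpec the matcher is `be_within(delta).of(value)` (early versions called it `be_close`). Outside a spec file, the same idea is a one-line helper:

```ruby
# Approximate Float comparison - the tolerance should be no finer
# than what the eigensystem solver itself guarantees.
def approx_equal?(a, b, tol = 1e-10)
  (a - b).abs <= tol
end

puts Math.sqrt(2)**2 == 2.0               # false under exact comparison
puts approx_equal?(Math.sqrt(2)**2, 2.0)  # true within the tolerance
```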

Now that we’re doing number crunching on RubyTalk, allow me to post a
link to the classic references on it:

  1. Golub and Van Loan, Matrix Computations,
    http://www.amazon.com/Computations-Hopkins-Studies-Mathematical-Sciences/dp/0801854148

This is the reference for the matrix extensions on RubyForge.

  2. Wilkinson, The Algebraic Eigenvalue Problem,
    Amazon.com

-------- Original Message --------

Date: Fri, 23 Nov 2007 02:35:01 +0900
From: [email protected]
To: [email protected]
Subject: Re: eigenvector discrepancy was (Re: [Matrix] eigenvalues, eigenvectors in Ruby ???)

with the corresponding one of b (same values, different order), which I
can’t manage in the case of ExtendedMatrix…


Une Bévue

What is the problem that you are actually trying to solve ?
Why do you compute eigenvectors of permutation matrices ? A permutation
matrix is always unitary and thus won’t change the eigenvectors
(up to ordering, of course). For the construction of eigenvectors, you
can look up the Gram-Schmidt orthogonalization (or orthonormalization)
method, but I think you’re now doing something that requires a lot of
computation (finding eigenvectors requires solving a linear system of
equations), which you could probably do without …

Best regards,

Axel

Axel E. [email protected] wrote:

What is the problem that you are actually trying to solve ?

I’ve two molecules, each described by a matrix :
a = Mol.new( [ [ 0, 1, 1, 0 ],
[ 1, 0, 1, 0 ],
[ 1, 1, 0, 1 ],
[ 0, 0, 1, 0 ] ] )

b = Mol.new( [ [ 0, 1, 1, 1 ],
[ 1, 0, 0, 0 ],
[ 1, 0, 0, 1 ],
[ 1, 0, 1, 0 ] ] )

for example, the first 1 in the first row of a says that the carbon numbered
1 is connected to the carbon numbered 0.

I know by construction that a and b differ by a permutation p such that :

p * a * p^(-1) = b

only the numbering of the carbon atoms differs in this case.

to state that a === b (within a permutation) it is sufficient to have :

eigenvalues of a === eigenvalues of b, without respect to order

however, to find p, I need the eigenvectors, as given in the article :

in the above case:
p =
[ [ 0, 0, 1, 0 ],
[ 0, 0, 0, 1 ],
[ 1, 0, 0, 0 ],
[ 0, 1, 0, 0 ] ]

and computing :

p * a * p^( -1 ) =
[ [ 0, 1, 1, 1 ],
[ 1, 0, 0, 0 ],
[ 1, 0, 0, 1 ],
[ 1, 0, 1, 0 ] ]

gives b

or computing :

p^( -1 ) * b * p =
[ [ 0, 1, 1, 0 ],
[ 1, 0, 1, 0 ],
[ 1, 1, 0, 1 ],
[ 0, 0, 1, 0 ] ]

I do have a first version of a script using GSL, working as expected.
However, with ExtendedMatrix I’m unable to find p from the eigenvectors.
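The worked example above can be verified directly with the stdlib Matrix class (for a permutation matrix the inverse is just the transpose, so the check stays in exact integer arithmetic):

```ruby
require 'matrix'

a = Matrix[[0, 1, 1, 0],
           [1, 0, 1, 0],
           [1, 1, 0, 1],
           [0, 0, 1, 0]]

b = Matrix[[0, 1, 1, 1],
           [1, 0, 0, 0],
           [1, 0, 0, 1],
           [1, 0, 1, 0]]

perm = Matrix[[0, 0, 1, 0],
              [0, 0, 0, 1],
              [1, 0, 0, 0],
              [0, 1, 0, 0]]

# p^(-1) = p^T for a permutation matrix
puts perm * perm.transpose == Matrix.identity(4)   # true
puts perm * a * perm.transpose == b                # true: p * a * p^(-1) = b
```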

using GSL or ExtendedMatrix i get the same eigenvalues :
GSL:
a.eigval = [ 2.170e+00 3.111e-01 -1.000e+00 -1.481e+00 ]
b.eigval = [ 2.170e+00 -1.481e+00 3.111e-01 -1.000e+00 ]
ExtendedMatrix
a.eigval = Vector[-1.0, 2.17008648662603, -1.48119430409202,
0.311107817465982]
b.eigval = Vector[-1.48119430409202, 0.311107817465982, -1.0,
2.17008648662603]

with GSL, if i compare the eigenvectors :
a.eigvec = [
5.227e-01 3.682e-01 -7.071e-01 3.020e-01
5.227e-01 3.682e-01 7.071e-01 3.020e-01
6.116e-01 -2.536e-01 0.0 -7.494e-01
2.818e-01 -8.152e-01 0.0 5.059e-01 ]
b.eigvec = [
6.116e-01 7.494e-01 2.536e-01 0.0
2.818e-01 -5.059e-01 8.152e-01 0.0
5.227e-01 -3.020e-01 -3.682e-01 -7.071e-01
5.227e-01 -3.020e-01 -3.682e-01 7.071e-01 ]

( I’ve rounded to 0.0 the values below 1e-15 )

from column 0 it’s easy to see what has to be permuted
(all the values are the same, but in a different order):
the first 5.227e-01 in a.eigvec column 0 is found in the third row of
b.eigvec column 0, so p[0, 2] = 1;
the second 5.227e-01 in a.eigvec column 0 is found in the last row of
b.eigvec column 0, so p[1, 3] = 1;
the 6.116e-01 (row 2) in a.eigvec column 0 is found in the first row of
b.eigvec column 0, so p[2, 0] = 1;
the 2.818e-01 (row 3) in a.eigvec column 0 is found in the second row of
b.eigvec column 0, so p[3, 1] = 1,

giving :

[ [ 0, 0, 1, 0 ],
[ 0, 0, 0, 1 ],
[ 1, 0, 0, 0 ],
[ 0, 1, 0, 0 ] ]

unfortunately I don’t have any theoretical justification for this
procedure…
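The matching procedure just described can be sketched as follows (the names `vec_a`/`vec_b` and the greedy first-unused-match rule are my own reading of it; as noted above, with repeated entries like the two 5.227e-01 values the assignment is ambiguous, which may be exactly where it breaks down):

```ruby
require 'matrix'

# Build p by pairing each entry of a's eigenvector with the first
# not-yet-used (approximately) equal entry of b's eigenvector.
def permutation_from(vec_a, vec_b, tol = 1e-3)
  used = []
  rows = vec_a.map do |x|
    j = (0...vec_b.size).find { |k| !used.include?(k) && (vec_b[k] - x).abs < tol }
    used << j
    Array.new(vec_a.size, 0).tap { |row| row[j] = 1 }   # assumes a match was found
  end
  Matrix.rows(rows)
end

vec_a = [0.5227, 0.5227, 0.6116, 0.2818]   # a.eigvec column 0, rounded
vec_b = [0.6116, 0.2818, 0.5227, 0.5227]   # b.eigvec column 0, rounded
puts permutation_from(vec_a, vec_b)        # reproduces the p given above
```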

M. Edward (Ed) Borasky [email protected] wrote:

RSpec has an “approximately equal” operator that allows you to specify
how close two floating point numbers “should be”. I forget what it’s
called, but you should be able to lift the code for it and specify how
close numbers need to be. There is something like this built into the
eigensystem solvers, however, so you probably don’t want to use a finer
tolerance than the solver did. :-) Actually, since this capability is
built into RSpec, you should consider using RSpec as a test framework
for number crunching code.

I’ll have a look upon RSpec, thanks !

Une Bévue

----- Original Message -----
From: “Axel E.” [email protected]
To: “ruby-talk ML” [email protected]
Sent: Thursday, November 22, 2007 3:19 PM
Subject: Re: eigenvector discrepancy was (Re: [Matrix] eigenvalues,
eigenvectors in Ruby ???)

-------- Original Message --------

Date: Fri, 23 Nov 2007 02:35:01 +0900
From: [email protected]
To: [email protected]
Subject: Re: eigenvector discrepancy was (Re: [Matrix] eigenvalues,
eigenvectors in Ruby ???)

with the corresponding one of b (same values, different order), which I
can’t manage in the case of ExtendedMatrix…


Une Bévue

What is the problem that you are actually trying to solve ?
Why do you compute eigenvectors of permutation matrices ? A permutation
matrix is always unitary and thus won’t change the eigenvectors
(up to ordering, of course). For the construction of eigenvectors, you
can look up the Gram-Schmidt orthogonalization (or orthonormalization)
method, but I think you’re now doing something that requires a lot of
computation (finding eigenvectors requires solving a linear system of
equations), which you could probably do without …

Best regards,

Axel

-------- Original Message --------

Date: Fri, 23 Nov 2007 04:40:04 +0900
From: [email protected]
To: [email protected]
Subject: Re: eigenvector discrepancy was (Re: [Matrix] eigenvalues, eigenvectors in Ruby ???)

b = Mol.new( [ [ 0, 1, 1, 1 ],
[ 1, 0, 0, 0 ],
[ 1, 0, 0, 1 ],
[ 1, 0, 1, 0 ] ] )

for example, the first 1 in the first row of a says that the carbon numbered
1 is connected to the carbon numbered 0.

or computing :
using GSL or ExtendedMatrix i get the same eigenvalues :

5.227e-01 -3.020e-01 -3.682e-01 7.071e-01 ]
b.eigvec column 0, so p[2, 0] = 1
unfortunately I don’t have any theoretical justification for this
procedure…

Permutation matrices are rotation matrices
(Rotation matrix - Wikipedia) if and only if the
permutation is even (it can be written as an even number of
transpositions). Then they can be decomposed into rotations of
45 degrees (i.e. exchanges of the coordinates x,y in different planes).
See also the article about orthogonal matrices :
Orthogonal matrix - Wikipedia

With respect to the computation of the matrices: both for GSL and
extended_matrix, you’ll get a matrix whose columns are
eigenvectors - so put the eigenvectors from extended_matrix
together and see what happens - these span the vector space associated
with an eigenvalue.
Every vector from that subspace satisfies the eigenvalue equation
for that particular eigenvalue.
However, such a vector would still satisfy that equation if you
multiplied every entry by, say, 2.
So, in order to make it a permutation, one must additionally
require that the vectors at hand form an orthonormal basis of the
eigenspace rather than just an orthogonal one, thus fixing the constant
factor for all entries of the vector in the example above.

There are two procedures to obtain an orthogonal or an orthonormal
basis of a vector space - both bear the name Gram-Schmidt procedure.

The orthonormalization additionally requires that one divides each
vector obtained by its norm.

See the English and German Wikipedia entries for that procedure for
examples.
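A minimal sketch of the Gram-Schmidt orthonormalization with stdlib Vectors (project out the directions already taken, then divide by the norm):

```ruby
require 'matrix'

def gram_schmidt(vectors)
  basis = []
  vectors.each do |v|
    # subtract the projections onto the orthonormal vectors found so far
    w = basis.reduce(v) { |acc, b| acc - b * acc.inner_product(b) }
    basis << w / w.norm unless w.norm < 1e-12   # skip (near-)dependent vectors
  end
  basis
end

basis = gram_schmidt([Vector[1.0, 1.0, 0.0], Vector[1.0, 0.0, 1.0]])
puts basis[0].inner_product(basis[1]).abs < 1e-12   # orthogonal
puts (basis[0].norm - 1.0).abs < 1e-12              # unit length
```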

I don’t know which algorithm is used in extended_matrix.

There is a rather famous paper by the Hungarian mathematician George
Pólya, which deals with the enumeration and symmetries of chemical
compounds, and whose English translation appeared as a book:

Combinatorial enumeration of groups, graphs, and chemical compounds,
Springer 1987.

It explains the relationship between permutations and rotations, as well
as other symmetries …

The original is in German: Kombinatorische Anzahlbestimmungen für Gruppen,
Graphen und chemische Verbindungen, Acta Mathematica, 68 (1937),
145-254,
whichever you prefer :-)

Best regards,

Axel

Axel E. [email protected] wrote:

The permutation matrices are rotation matrices
(Rotation matrix - Wikipedia),
if and only if the permutation is even (it can be
written as an even number of transpositions). Then they can be
decomposed into rotations of 45 degrees
(i.e. exchanges of the coordinates x,y in different planes).
See also the article about orthogonal matrices :
Orthogonal matrix - Wikipedia

[…]

The original is in German: Kombinatorische Anzahlbestimmungen für Gruppen,
Graphen und chemische Verbindungen, Acta Mathematica, 68 (1937), 145-254,
whichever you prefer :-)

fine, thanks a lot for all your references. I think, as the name implies
for eigenvalues (@eigval = @extendmatrix.eigenvaluesJacobi), that it’s a
Jacobi method; I don’t know what GSL uses.

Vielen Dank!

Une Bévue wrote:

Vielen Dank!

Interesting … the Jacobi method was for many years the most commonly
used method for diagonalizing a real symmetric matrix (another term for
finding its eigenvalues and eigenvectors). The modern methods based on
QR operations are more efficient, so they are now the methods of choice.

M. Edward (Ed) Borasky [email protected] wrote:

The modern methods based on
QR operations are more efficient, so they are now the methods of choice.

yes, ExtendedMatrix is roughly two times slower than GSL…

-------- Original Message --------

Date: Fri, 23 Nov 2007 19:30:02 +0900
From: [email protected]
To: [email protected]
Subject: Re: eigenvector discrepancy was (Re: [Matrix] eigenvalues, eigenvectors in Ruby ???)

Orthogonal matrix - Wikipedia
fine, thanks a lot for all your references. I think, as the name implies
for eigenvalues (@eigval = @extendmatrix.eigenvaluesJacobi), that it’s a
Jacobi method; I don’t know what GSL uses.

Vielen Dank!

Glad to help :-) My numerics textbook gives some references on the
amount of computation needed for both QR and Jacobi.
The convergence of the Jacobi method is quadratic (Henrici, SIAM J.
Applied Mathematics 6 (1958) 144-162; Schönhage, Numer. Math. 3 (1961)
374-380); the convergence of the QR method is actually cubic for bigger n
(Gourlay & Watson, Computational Methods for Matrix Eigenproblems,
John Wiley, London, 1973; Wilkinson, Lin. Alg. and Its Appl. 1 (1968),
409-420).
These considerations really matter for big matrices, not for
4x4 ones - maybe if you had to calculate symmetries of DNA or
something ?

Best regards,

Axel

Axel E. [email protected] wrote:

These considerations really matter for big matrices, not for
4x4 ones - maybe if you had to calculate symmetries of DNA or
something ?

I’ll not be computing on DNA; a basic molecule is taxane, involving 20
carbon atoms, so a 20x20 matrix is about average.

so, you mean GSL is using the QR approach ?

-------- Original Message --------

Date: Sat, 24 Nov 2007 06:45:00 +0900
From: [email protected]
To: [email protected]
Subject: Re: eigenvector discrepancy was (Re: [Matrix] eigenvalues, eigenvectors in Ruby ???)

Axel E. [email protected] wrote:

These considerations really matter for big matrices, not for
4x4 ones - maybe if you had to calculate symmetries of DNA or
something ?

I’ll not be computing on DNA; a basic molecule is taxane, involving 20
carbon atoms, so a 20x20 matrix is about average.

so, you mean GSL is using the QR approach ?

I think you can choose whichever you like and let GSL (or rb-gsl)
do the computations using that method.
There are references to the C code GSL is written in, for
eigen_jacobi.c and for linalg_qr.c - there are examples for
using these in rb-gsl’s examples section.
The result will be correct,
though not necessarily identical for either method, as you
can choose different vectors to form a basis.
I haven’t checked for extended_matrix.rb, but I think the same
choice will be offered.
Now, assuming you have the choice between both these methods
and two implementations (rb-gsl/gsl and extended_matrix),
I think you’d go for rb-gsl and QR decomposition for a 20x20
matrix, because:
GSL is implemented in C, which runs much faster than pure Ruby
(here: extended_matrix), and since you have a discrete problem, you can
round off the errors after the computation is done.
You’d choose QR over Jacobi because the cube of 20 is much bigger
than the square. You’ll feel that if you have to do many calculations
like that.
But sometimes you might need really exact knowledge, involving
fractions where every bit counts - then extended_matrix will take
a few additional seconds, but the result will be exact, to
as many decimal places as you could wish for …

Best regards,

Axel

Axel E. [email protected] wrote:

You’d choose QR over Jacobi, because the cube of 20 is much bigger
than the square. You’ll feel that if you have to do many calculations
like that.

a rough estimate is about 1000 computations on matrices from 20x20 to
25x25.

unfortunately, with the eigenvectors given by ExtendedMatrix (Jacobi),
I’m unable, for the time being, to compute the permutation matrix giving
one molecule representation from the other (even for a 4x4 test matrix).
I don’t understand why. So I’ll also explore, with GSL, other methods
of eigenvector computation.