commons-dev mailing list archives

Subject Re: svn commit: r721203 [1/2] - in /commons/proper/math/branches/MATH_2_0: ./ src/java/org/apache/commons/math/linear/ src/site/xdoc/ src/site/xdoc/userguide/ src/test/org/apache/commons/math/linear/
Date Thu, 27 Nov 2008 17:10:17 GMT

This commit is the result of weeks of work. I hope it completes an important feature
for [math]: computation of eigenvalues and eigenvectors for symmetric real matrices.

The implementation is based on algorithms developed over roughly the last 10 years, drawing
partly on two reference papers and partly on LAPACK. LAPACK is distributed under a modified-BSD
license, so this is acceptable for [math]. I have updated the NOTICE file and taken care of
the proper attributions in the Javadoc.

The current status is that we can solve eigenproblems much faster than Jama (see the speed
gains in the commit message below). Furthermore, the eigenvectors are not always computed:
they are computed only if needed, so applications that need only eigenvalues will see an
even larger speed gain. This could be improved further by allowing computation of only some
of the eigenvalues rather than all of them. That feature is available in the higher-level
LAPACK routine, but I didn't include it yet; I'll do it only when required, as this has
already been a very large amount of work.
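To illustrate the "eigenvalues only" saving described above, here is a minimal,
self-contained sketch of a symmetric eigensolver. It uses the classical cyclic Jacobi
algorithm, not the dqd/dqds algorithm of the actual commit, and the class and method
names are hypothetical; accumulating the eigenvectors is optional, which is where the
extra speed comes from when only eigenvalues are needed:

```java
import java.util.Arrays;

// Minimal cyclic-Jacobi eigensolver sketch for symmetric matrices.
// This is an illustration of the eigenvalues-vs-eigenvectors trade-off,
// not the dqd/dqds algorithm used by EigenDecompositionImpl.
public class JacobiEigen {

    // Diagonalizes the symmetric matrix a in place and returns its diagonal
    // (the eigenvalues, unsorted). If v is non-null it is overwritten with
    // the accumulated rotations, so that column j of v is the eigenvector
    // for the returned eigenvalue j. Passing v == null skips that work:
    // the "eigenvalues only" saving.
    static double[] eigenvalues(double[][] a, double[][] v) {
        int n = a.length;
        if (v != null) {
            for (int i = 0; i < n; i++) {
                Arrays.fill(v[i], 0.0);
                v[i][i] = 1.0;
            }
        }
        for (int sweep = 0; sweep < 50; sweep++) {
            // Sum of squared off-diagonal entries: the convergence measure.
            double off = 0.0;
            for (int p = 0; p < n; p++)
                for (int q = p + 1; q < n; q++)
                    off += a[p][q] * a[p][q];
            if (off < 1.0e-20) break;
            for (int p = 0; p < n; p++) {
                for (int q = p + 1; q < n; q++) {
                    if (a[p][q] == 0.0) continue;
                    // Rotation angle chosen to annihilate a[p][q].
                    double theta = (a[q][q] - a[p][p]) / (2.0 * a[p][q]);
                    double t = (theta == 0.0)
                            ? 1.0
                            : Math.signum(theta)
                              / (Math.abs(theta) + Math.sqrt(theta * theta + 1.0));
                    double c = 1.0 / Math.sqrt(t * t + 1.0);
                    double s = t * c;
                    // Apply J^T A J in the (p, q) plane.
                    for (int k = 0; k < n; k++) {
                        if (k == p || k == q) continue;
                        double akp = a[k][p], akq = a[k][q];
                        a[k][p] = c * akp - s * akq; a[p][k] = a[k][p];
                        a[k][q] = s * akp + c * akq; a[q][k] = a[k][q];
                    }
                    double app = a[p][p], aqq = a[q][q], apq = a[p][q];
                    a[p][p] = c * c * app - 2.0 * s * c * apq + s * s * aqq;
                    a[q][q] = s * s * app + 2.0 * s * c * apq + c * c * aqq;
                    a[p][q] = 0.0; a[q][p] = 0.0;
                    // Accumulate V = V * J only when eigenvectors are wanted.
                    if (v != null) {
                        for (int k = 0; k < n; k++) {
                            double vkp = v[k][p], vkq = v[k][q];
                            v[k][p] = c * vkp - s * vkq;
                            v[k][q] = s * vkp + c * vkq;
                        }
                    }
                }
            }
        }
        double[] eig = new double[n];
        for (int i = 0; i < n; i++) eig[i] = a[i][i];
        return eig;
    }

    public static void main(String[] args) {
        double[][] a = {{2.0, 1.0}, {1.0, 2.0}};
        // Eigenvalues only: rotation accumulation is skipped entirely.
        System.out.println(Arrays.toString(eigenvalues(a, null)));
    }
}
```

For the 2x2 example above, the eigenvalues are 1 and 3; the real speed difference
between the two modes only shows up at the larger dimensions quoted in the commit
message.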

If someone could test this new decomposition algorithm further, I would be more than happy.

My next goal is to implement Singular Value Decomposition. I will most probably use a
method based on eigen decomposition, as this now seems to be the preferred way since the
dqd/dqds and MRRR algorithms became available.
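The connection between SVD and eigen decomposition mentioned above is that the singular
values of A are the square roots of the eigenvalues of the symmetric matrix A^T A. A
minimal self-contained sketch of that route for a 2x2 matrix (class and method names are
hypothetical, and this is of course not the dqd/dqds-based approach under consideration):

```java
public class SvdSketch {

    // Singular values of a 2x2 matrix via the eigenvalues of B = A^T A,
    // computed here in closed form from the trace and determinant of B.
    static double[] singularValues2x2(double[][] a) {
        // B = A^T A is symmetric positive semi-definite.
        double b00 = a[0][0] * a[0][0] + a[1][0] * a[1][0];
        double b01 = a[0][0] * a[0][1] + a[1][0] * a[1][1];
        double b11 = a[0][1] * a[0][1] + a[1][1] * a[1][1];
        double tr = b00 + b11;
        double det = b00 * b11 - b01 * b01;
        // Eigenvalues of B are tr/2 +/- sqrt(tr^2/4 - det);
        // their square roots are the singular values of A.
        double disc = Math.sqrt(Math.max(0.0, tr * tr / 4.0 - det));
        return new double[] {
            Math.sqrt(tr / 2.0 + disc),
            Math.sqrt(tr / 2.0 - disc)
        };
    }

    public static void main(String[] args) {
        // Singular values of this matrix are sqrt(45) and sqrt(5).
        double[][] a = {{3.0, 0.0}, {4.0, 5.0}};
        double[] s = singularValues2x2(a);
        System.out.println(s[0] + " " + s[1]);
    }
}
```

Forming A^T A explicitly squares the condition number, which is exactly why better
routes (bidiagonalization followed by dqd/dqds or MRRR) are preferred in practice for
a library implementation.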


----- wrote:

> Author: luc
> Date: Thu Nov 27 07:50:42 2008
> New Revision: 721203
> URL:
> Log:
> completed implementation of EigenDecompositionImpl.
> The implementation is now based on the very fast and accurate dqd/dqds
> algorithm.
> It is faster than Jama for all dimensions, and the speed gain increases
> with dimension.
> The gain is about 30% below dimension 100, about 50% around dimension 250,
> and about 65% for dimensions around 700.
> It is also possible to compute only eigenvalues (and hence save the
> computation of eigenvectors, increasing the speed gain even more).
> JIRA: MATH-220
