Jacobi eigenvalue algorithm
 
;Singular values
:The singular values of a (square) matrix ''A'' are the square roots of the (non-negative) eigenvalues of <math> A^T A </math>. In case of a symmetric matrix ''S'' we have <math> S^T S = S^2 </math>, hence the singular values of ''S'' are the absolute values of the eigenvalues of ''S''.
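As an illustration (a sketch using NumPy, which the article itself does not use), the singular values of a symmetric matrix can be compared directly with the absolute values of its eigenvalues:

```python
import numpy as np

# Sketch: for a symmetric matrix S, the singular values should equal
# the absolute values of the eigenvalues of S.
S = np.array([[2.0, 1.0],
              [1.0, -3.0]])

singular_values = np.linalg.svd(S, compute_uv=False)   # sorted descending
eigenvalues = np.linalg.eigvalsh(S)                    # sorted ascending

# compare |eigenvalues| (re-sorted descending) with the singular values
match = np.allclose(singular_values, np.sort(np.abs(eigenvalues))[::-1])
```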
 
;2-norm and spectral radius
:The 2-norm of a matrix ''A'' is the norm based on the Euclidean vector norm, i.e. the largest value of <math> \| A x\|_2 </math> when ''x'' runs through all vectors with <math> \|x\|_2 = 1 </math>. It is the largest singular value of ''A''. In case of a symmetric matrix it is the largest absolute value of its eigenvalues and thus equal to its spectral radius.
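The coincidence of 2-norm and spectral radius for symmetric matrices can be checked numerically (a NumPy sketch, not part of the article's own methods):

```python
import numpy as np

# Sketch: for a symmetric matrix, the 2-norm (largest singular value)
# equals the spectral radius (largest absolute eigenvalue).
S = np.array([[4.0, 1.0],
              [1.0, 2.0]])

two_norm = np.linalg.norm(S, 2)                          # largest singular value
spectral_radius = np.max(np.abs(np.linalg.eigvalsh(S)))  # largest |eigenvalue|
```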
 
;Condition number
:The condition number of a nonsingular matrix ''A'' is defined as <math> \mbox{cond} (A) = \| A \|_2 \| A^{-1}\|_2 </math>. In case of a symmetric matrix it is the absolute value of the quotient of the largest and smallest eigenvalue. Matrices with large condition numbers can cause numerically unstable results: small perturbations can result in large errors. [[Hilbert matrix|Hilbert matrices]] are the most famous ill-conditioned matrices. For example, the fourth-order Hilbert matrix has a condition number of 15514, while for order 8 it is 2.7&nbsp;&times;&nbsp;10<sup>8</sup>.
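The stated value for the fourth-order Hilbert matrix can be reproduced from the eigenvalue characterization above (a NumPy sketch; the helper `hilbert` is defined here for illustration):

```python
import numpy as np

def hilbert(n):
    # Hilbert matrix H[i, j] = 1 / (i + j + 1), with zero-based indices
    i, j = np.indices((n, n))
    return 1.0 / (i + j + 1)

# For a symmetric matrix, cond(A) is the ratio of the extreme |eigenvalues|.
e = np.linalg.eigvalsh(hilbert(4))
cond = np.max(np.abs(e)) / np.min(np.abs(e))   # roughly 15514
```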
 
;Rank
:A matrix ''A'' has rank ''r'' if it has ''r'' columns that are linearly independent while the remaining columns are linearly dependent on these. Equivalently, ''r'' is the dimension of the range of&nbsp;''A''. Furthermore, it is the number of nonzero singular values.
:In case of a symmetric matrix, ''r'' is the number of nonzero eigenvalues. Unfortunately, because of rounding errors, numerical approximations of zero eigenvalues may not be zero (it may also happen that a numerical approximation is zero while the true value is not). Thus one can only calculate the ''numerical'' rank by making a decision which of the eigenvalues are close enough to zero.
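Such a decision can be sketched as follows (NumPy; the tolerance below is one common choice, similar in spirit to the one used by `numpy.linalg.matrix_rank`, and is an assumption, not prescribed by the article):

```python
import numpy as np

S = np.array([[1.0, 2.0],
              [2.0, 4.0]])   # exact rank 1: the rows are proportional

e = np.linalg.eigvalsh(S)

# declare an eigenvalue "numerically zero" below a size-dependent tolerance
tol = S.shape[0] * np.max(np.abs(e)) * np.finfo(float).eps
numerical_rank = int(np.sum(np.abs(e) > tol))
```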
 
;Pseudo-inverse
:The pseudo-inverse of a matrix ''A'' is the unique matrix <math> X = A^+ </math> for which ''AX'' and ''XA'' are symmetric and for which ''AXA = A, XAX = X'' holds. If ''A'' is nonsingular, then <math> A^+ = A^{-1} </math>.
:When procedure jacobi (S, e, E) is called, then the relation <math> S = E^T \mbox{Diag} (e) E </math> holds where Diag(''e'') denotes the diagonal matrix with vector ''e'' on the diagonal. Let <math> e^+ </math> denote the vector where <math> e_i </math> is replaced by <math> 1/e_i </math> if <math> e_i \ne 0 </math> and by 0 if <math> e_i </math> is (numerically close to) zero. Since matrix ''E'' is orthogonal, it follows that the pseudo-inverse of ''S'' is given by <math> S^+ = E^T \mbox{Diag} (e^+) E </math>.
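This recipe can be sketched with NumPy's eigendecomposition (note that `numpy.linalg.eigh` returns <math> S = V \mbox{Diag}(e) V^T </math> with eigenvectors as columns, so ''V'' plays the role of <math> E^T </math> above; the tolerance is an illustrative assumption):

```python
import numpy as np

S = np.array([[1.0, 2.0],
              [2.0, 4.0]])   # singular symmetric matrix, rank 1

e, V = np.linalg.eigh(S)     # S = V Diag(e) V^T, i.e. V corresponds to E^T

# invert the eigenvalues that are not numerically zero, zero out the rest
tol = S.shape[0] * np.max(np.abs(e)) * np.finfo(float).eps
e_plus = np.zeros_like(e)
nonzero = np.abs(e) > tol
e_plus[nonzero] = 1.0 / e[nonzero]

S_plus = V @ np.diag(e_plus) @ V.T   # the pseudo-inverse of S
```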
;Matrix exponential
:From <math> S = E^T \mbox{Diag} (e) E </math> one finds <math> \exp S = E^T \mbox{Diag} (\exp e) E </math> where exp&nbsp;''e'' is the vector where <math> e_i </math> is replaced by <math> \exp e_i </math>. In the same way, ''f''(''S'') can be calculated for any (analytic) function ''f''.
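A short NumPy sketch of this formula, checked against a truncated Taylor series (for illustration only, not a production method for the matrix exponential):

```python
import math
import numpy as np

S = np.array([[0.0, 1.0],
              [1.0, 0.0]])

e, V = np.linalg.eigh(S)                 # S = V Diag(e) V^T
exp_S = V @ np.diag(np.exp(e)) @ V.T     # exp(S) via the eigendecomposition

# truncated Taylor series sum_k S^k / k! for comparison
taylor = sum(np.linalg.matrix_power(S, k) / math.factorial(k)
             for k in range(20))
```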
 
;Linear differential equations
:The differential equation <math> x' = Ax </math>, ''x''(0) = ''a'' has the solution ''x''(''t'') = exp(''t&nbsp;A'')&nbsp;''a''. For a symmetric matrix ''S'', it follows that <math> x(t) = E^T \mbox{Diag} (\exp t e) E a </math>. If <math> a = \sum_{i = 1}^n a_i E_i </math> is the expansion of ''a'' by the eigenvectors of ''S'', then <math> x(t) = \sum_{i = 1}^n a_i \exp(t e_i) E_i </math>.
:Let <math> W^s </math> be the vector space spanned by the eigenvectors of ''S'' which correspond to a negative eigenvalue and <math> W^u </math> analogously for the positive eigenvalues. If <math> a \in W^s </math> then <math> \lim_{t \to \infty} x(t) = 0 </math>, i.e. the equilibrium point 0 is attractive to ''x''(''t''). If <math> a \in W^u </math> then <math> \lim_{t \to \infty} \| x(t) \| = \infty </math>, i.e. 0 is repulsive to ''x''(''t''). <math> W^s </math> and <math> W^u </math> are called ''stable'' and ''unstable'' manifolds for ''S''. If ''a'' has components in both manifolds, then one component is attracted and one component is repelled. Hence ''x''(''t'') approaches <math> W^u </math> as <math> t \to \infty </math>.
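The stable/unstable behaviour can be sketched with a small NumPy example (the matrix below is a hypothetical choice with one negative and one positive eigenvalue):

```python
import numpy as np

# Sketch: solving x' = S x via the eigendecomposition.  The eigenvalues
# -1 and 2 give one-dimensional stable (W^s) and unstable (W^u) manifolds.
S = np.array([[-1.0, 0.0],
              [ 0.0, 2.0]])

e, V = np.linalg.eigh(S)

def x(t, a):
    # x(t) = V Diag(exp(t * e)) V^T a, the solution of x' = S x, x(0) = a
    return V @ np.diag(np.exp(t * e)) @ V.T @ a

a_stable = np.array([1.0, 0.0])    # initial value in W^s: decays to 0
a_unstable = np.array([0.0, 1.0])  # initial value in W^u: grows without bound
```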