
However, the problem of finding the roots of a polynomial can be very ill-conditioned. Thus eigenvalue algorithms that work by finding the roots of the characteristic polynomial can be ill-conditioned even when the problem is not.

For the problem of solving the linear equation Av = b where A is invertible, the matrix condition number κ(A⁻¹, b) is given by ||A||op ||A⁻¹||op, where || · ||op is the operator norm subordinate to the ordinary Euclidean norm on Cⁿ. Since this number is independent of b and is the same for A and A⁻¹, it is usually just called the condition number κ(A) of the matrix A. This value κ(A) is also the ratio of the largest singular value of A to its smallest.
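As a concrete check of the identities above, the following sketch (assuming NumPy; the matrix A is an arbitrary example, not from the article) computes the condition number both as a product of operator norms and as a ratio of extreme singular values:

```python
import numpy as np

# Illustrative example matrix (an assumption, not from the article).
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# kappa(A) = ||A||op * ||A^-1||op, with the operator (spectral) norm
# subordinate to the Euclidean norm (ord=2 for matrices in NumPy).
kappa_op = np.linalg.norm(A, 2) * np.linalg.norm(np.linalg.inv(A), 2)

# The same value as the ratio of the largest singular value to the smallest.
s = np.linalg.svd(A, compute_uv=False)
kappa_svd = s.max() / s.min()

print(kappa_op, kappa_svd)  # the two computations agree
```

NumPy also exposes this directly as `np.linalg.cond(A)`, which defaults to the same 2-norm condition number.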

- Every generalized eigenvector of a normal matrix is an ordinary eigenvector.
- Any normal matrix is similar to a diagonal matrix, since its Jordan normal form is diagonal.
- Eigenvectors of distinct eigenvalues of a normal matrix are orthogonal.
- The null space and the image (or column space) of a normal matrix are orthogonal to each other.
- For any normal matrix A, Cⁿ has an orthonormal basis consisting of eigenvectors of A. The corresponding matrix of eigenvectors is unitary.
- The eigenvalues of a Hermitian matrix are real, since (λ − λ̄)v = (A* − A)v = (A − A)v = 0 for a non-zero eigenvector v.
- If A is real, there is an orthonormal basis for Rⁿ consisting of eigenvectors of A if and only if A is symmetric.

It is possible for a real or complex matrix to have all real eigenvalues without being Hermitian. For example, a real triangular matrix has its eigenvalues along its diagonal, but in general is not symmetric.

Condition number

Any problem of numeric calculation can be viewed as the evaluation of some function f for some input x. The condition number κ(f, x) of the problem is the ratio of the relative error in the function's output to the relative error in the input, and it varies with both the function and the input. The condition number describes how error grows during the calculation. Its base-10 logarithm tells how many fewer digits of accuracy exist in the result than existed in the input. The condition number is a best-case scenario: it reflects the instability built into the problem, regardless of how it is solved. No algorithm can ever produce more accurate results than indicated by the condition number, except by chance. However, a poorly designed algorithm may produce significantly worse results. For example, as mentioned below, the problem of finding eigenvalues for normal matrices is always well-conditioned.
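The claim that finding eigenvalues of normal matrices is well-conditioned can be checked numerically. The sketch below (assuming NumPy; the matrices and the tolerance are illustrative choices, not from the article) uses Weyl's inequality: perturbing a symmetric, hence normal, matrix by E moves each eigenvalue by at most the spectral norm of E.

```python
import numpy as np

rng = np.random.default_rng(0)

# A real symmetric (hence normal) matrix and a tiny symmetric perturbation.
M = rng.standard_normal((5, 5))
A = (M + M.T) / 2
F = rng.standard_normal((5, 5))
E = 1e-8 * (F + F.T) / 2

# Weyl's inequality: the i-th (sorted) eigenvalue moves by at most ||E||_2,
# so the eigenvalue problem for normal matrices is well-conditioned.
shift = np.abs(np.linalg.eigvalsh(A + E) - np.linalg.eigvalsh(A)).max()
print(shift <= np.linalg.norm(E, 2) + 1e-12)  # True
```

`eigvalsh` returns eigenvalues in ascending order, which is what makes the pairwise comparison above meaningful.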

If λ is an eigenvalue of A with generalized eigenvector v and W is any invertible matrix, then λ is also an eigenvalue of W⁻¹AW, with generalized eigenvector W⁻ᵏv. That is, similar matrices have the same eigenvalues.

Normal, Hermitian, and real-symmetric matrices
Main articles: Adjoint matrix, Normal matrix, and Hermitian matrix

The adjoint M* of a complex matrix M is the transpose of the conjugate of M: M* = M̄ᵀ. A square matrix A is called normal if it commutes with its adjoint: A*A = AA*. It is called Hermitian if it is equal to its adjoint: A* = A. If A has only real elements, then the adjoint is just the transpose, and A is Hermitian if and only if it is symmetric. When applied to column vectors, the adjoint can be used to define the canonical inner product on Cⁿ: w ⋅ v = w*v.

Normal, Hermitian, and real-symmetric matrices have several useful properties, listed above.
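These definitions are easy to check numerically. The sketch below (assuming NumPy; the matrices A and W are hypothetical examples, not from the article) verifies normality and Hermitian symmetry via the adjoint, and confirms that a similarity transform preserves eigenvalues:

```python
import numpy as np

def adjoint(M):
    """Adjoint M*: the transpose of the conjugate of M."""
    return M.conj().T

# Hypothetical example: real symmetric, hence Hermitian and normal.
A = np.array([[1.0, 2.0],
              [2.0, 0.0]])
print(np.allclose(adjoint(A) @ A, A @ adjoint(A)))  # normal: A*A = AA*
print(np.allclose(adjoint(A), A))                   # Hermitian: A* = A

# Similar matrices have the same eigenvalues: compare A with W^-1 A W
# for an arbitrary invertible W.
W = np.array([[1.0, 1.0],
              [0.0, 2.0]])
B = np.linalg.inv(W) @ A @ W
print(np.sort(np.linalg.eigvals(A).real))
print(np.sort(np.linalg.eigvals(B).real))  # same values, up to rounding
```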


Eigenvalues and eigenvectors
Main articles: Eigenvalues and eigenvectors and Generalized eigenvector

Given an n × n square matrix A of real or complex numbers, an eigenvalue λ and its associated generalized eigenvector v are a pair obeying the relation

(A − λI)ᵏ v = 0,

where v is a nonzero n × 1 column vector, I is the n × n identity matrix, k is a positive integer, and both λ and v are allowed to be complex even when A is real.

In a suitably ordered basis of generalized eigenvectors v₁, …, vₙ, the matrix A takes an "almost diagonal" (Jordan) form in which

A v₁ = λ₁ v₁,   A vᵢ₊₁ = λᵢ₊₁ vᵢ₊₁ + βᵢ vᵢ,

where the λᵢ are the eigenvalues, βᵢ = 1 if (A − λᵢ₊₁)vᵢ₊₁ = vᵢ, and βᵢ = 0 otherwise.

More generally, if W is any invertible matrix, and λ is an eigenvalue of A with generalized eigenvector v, then (W⁻¹AW − λI)ᵏ W⁻ᵏv = 0.
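A minimal numerical illustration of the definition (assuming NumPy; the 2 × 2 Jordan block with λ = 3 is a hypothetical example, not from the article): the vector (0, 1) is a generalized eigenvector with k = 2 but not an ordinary eigenvector, and applying (A − λI) to it once produces an ordinary eigenvector.

```python
import numpy as np

lam = 3.0
# A 2x2 Jordan block: a single eigenvalue lam, only one ordinary eigenvector.
A = np.array([[lam, 1.0],
              [0.0, lam]])
N = A - lam * np.eye(2)  # the nilpotent part A - lam*I

v = np.array([0.0, 1.0])  # generalized eigenvector with k = 2

print(np.allclose(N @ v, 0))      # False: v is not an ordinary eigenvector
print(np.allclose(N @ N @ v, 0))  # True: (A - lam*I)^2 v = 0

# (A - lam*I)^(k-1) v is an ordinary eigenvector:
w = N @ v
print(np.allclose(A @ w, lam * w))  # True
```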
