Eigenvalue

In mathematics, a number is called an eigenvalue of a matrix if there exists a non-zero vector such that the matrix times the vector equals the eigenvalue times the same vector. Such a vector is then called an eigenvector associated with the eigenvalue.

The eigenvalues of a matrix or a differential operator often have important physical significance. In classical mechanics the eigenvalues of the governing equations typically correspond to the natural frequencies of vibration (see resonance). In quantum mechanics, the eigenvalues of an operator corresponding to some observable variable are those values of the observable that have non-zero probability of occurring.

The word eigenvalue comes from the German Eigenwert, which means "proper or characteristic value."

Definition

Formally, we define eigenvectors and eigenvalues as follows. Let A be an n-by-n matrix of real or complex numbers (see below for generalizations). We say that λ ∈ C is an eigenvalue of A with eigenvector v ∈ Cⁿ if

Av = λv.

The spectrum of A, denoted σ(A), is the set of all eigenvalues.
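
The defining equation can be checked directly in code. The following is a minimal sketch in Python with NumPy; the matrix and the candidate eigenpair are illustrative choices, not taken from this article:

    import numpy as np
    # An illustrative symmetric matrix; its eigenvalues are 3 and 1.
    A = np.array([[2.0, 1.0],
                  [1.0, 2.0]])
    v = np.array([1.0, 1.0])   # candidate eigenvector
    lam = 3.0                  # candidate eigenvalue
    # Verify the defining equation Av = λv.
    print(np.allclose(A @ v, lam * v))  # True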

Computing eigenvalues

Suppose that we want to compute the eigenvalues of a given matrix. If the matrix is small, we can compute them symbolically using the characteristic polynomial. However, this is often impossible for larger matrices, in which case we must use a numerical method.

Symbolic computations using the characteristic polynomial

The eigenvalues of a matrix are the zeros of its characteristic polynomial. Indeed, if λ is an eigenvalue of A with eigenvector v, then (A - λI)v = 0, where I denotes the identity matrix. Since v is non-zero, this is only possible if the determinant of A - λI vanishes. But the characteristic polynomial is defined to be pA(λ) = det(A - λI).

It follows that we can compute all the eigenvalues of a matrix A by solving the equation pA(λ) = 0. The fundamental theorem of algebra guarantees that this equation has at least one (possibly complex) solution, so every matrix has at least one eigenvalue.
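
For small matrices, this symbolic computation can be carried out with a computer algebra system. Here is a short sketch using SymPy (an assumption of convenience; any CAS would do), applied to an illustrative 2-by-2 matrix:

    import sympy as sp
    lam = sp.symbols('lambda')
    A = sp.Matrix([[2, 1],
                   [1, 2]])
    # Characteristic polynomial p_A(λ) = det(A - λI).
    p = (A - lam * sp.eye(2)).det()
    # The eigenvalues are the zeros of p_A.
    print(sp.solve(p, lam))  # [1, 3]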

Numerical computations

Main article: eigenvalue algorithm.

The Abel-Ruffini theorem implies that there is no general formula in radicals for the zeros of polynomials of degree five or higher, so the characteristic polynomial cannot in general be solved exactly. Therefore, general eigenvalue algorithms are iterative. The simplest method is power iteration: we choose a random vector v and compute the sequence Av, A²v, A³v, ... This sequence almost always converges to an eigenvector corresponding to the dominant eigenvalue (the eigenvalue of largest absolute value). The algorithm is simple, but not very useful by itself; however, popular methods such as the QR algorithm are based on it.
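
A minimal sketch of power iteration in Python with NumPy follows; the step count and random starting vector are arbitrary illustrative choices, and the matrix is the one used in the example below:

    import numpy as np
    def power_iteration(A, num_steps=1000):
        # Start from a random vector and repeatedly apply A,
        # normalising at each step to avoid overflow or underflow.
        rng = np.random.default_rng(0)
        v = rng.standard_normal(A.shape[0])
        for _ in range(num_steps):
            v = A @ v
            v /= np.linalg.norm(v)
        # The Rayleigh quotient v·Av (v has unit norm) estimates
        # the eigenvalue corresponding to v.
        return v @ A @ v, v
    A = np.array([[ 0.0, 1.0, -1.0],
                  [ 1.0, 1.0,  0.0],
                  [-1.0, 0.0,  1.0]])
    lam, v = power_iteration(A)
    print(lam)  # ≈ 2.0, the dominant eigenvalue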

Example

Let us determine the eigenvalues of the matrix

A = \begin{bmatrix} 0 & 1 & -1 \\ 1 & 1 & 0 \\ -1 & 0 & 1 \end{bmatrix}.

We first compute the characteristic polynomial of A:

p(\lambda) = \det( A - \lambda I) = \det \begin{bmatrix} -\lambda & 1 & -1 \\ 1 & 1-\lambda & 0 \\ -1 & 0 & 1-\lambda \end{bmatrix} = -\lambda^3 + 2\lambda^2 + \lambda - 2.

This polynomial factorizes as p(λ) = - (λ - 2)(λ - 1)(λ + 1). Therefore, the eigenvalues of A are 2, 1 and −1.
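
The same result can be confirmed numerically; a quick check with NumPy (an illustrative sketch) is:

    import numpy as np
    A = np.array([[ 0.0, 1.0, -1.0],
                  [ 1.0, 1.0,  0.0],
                  [-1.0, 0.0,  1.0]])
    # The eigenvalues are returned in no particular order.
    print(np.sort(np.linalg.eigvals(A).real))  # [-1.  1.  2.]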

Multiplicity

The (algebraic) multiplicity of an eigenvalue λ of A is the order of λ as a zero of the characteristic polynomial of A; in other words, it is the number of times the corresponding linear factor occurs in the factored characteristic polynomial. An n-by-n matrix has n eigenvalues, counted according to their algebraic multiplicities, because its characteristic polynomial has degree n.

An eigenvalue of algebraic multiplicity 1 is called a simple eigenvalue.

Occasionally, in an article on matrix theory, one may read a statement like

"the eigenvalues of a matrix A are 4,4,3,3,3,2,2,1,"

meaning that the algebraic multiplicity of 4 is two, of 3 is three, of 2 is two and of 1 is one. This style is used because algebraic multiplicity is the key to many mathematical proofs in matrix theory.

The geometric multiplicity of an eigenvalue λ is the dimension of the associated eigenspace, which consists of all the eigenvectors associated with λ together with the zero vector; in other words, it is the nullity of the matrix λI − A. The geometric multiplicity is always less than or equal to the algebraic multiplicity.

Consider for example the matrix

\begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix}.

It has only one eigenvalue, namely λ = 1. The characteristic polynomial is (λ − 1)², so this eigenvalue has algebraic multiplicity 2. However, the associated eigenspace is spanned by (1, 0)ᵀ, so the geometric multiplicity is only 1.
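
Both multiplicities can be read off with a computer algebra system; as a sketch with SymPy (assumed available), eigenvects() returns each eigenvalue together with its algebraic multiplicity and a basis of its eigenspace:

    import sympy as sp
    A = sp.Matrix([[1, 1],
                   [0, 1]])
    for lam, alg_mult, basis in A.eigenvects():
        # len(basis) is the dimension of the eigenspace,
        # i.e. the geometric multiplicity.
        print(lam, alg_mult, len(basis))  # 1 2 1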

Properties

The spectrum is invariant under similarity transformations: the matrices A and P⁻¹AP have the same eigenvalues for any matrix A and any invertible matrix P. The spectrum is also invariant under transposition: the matrices A and Aᵀ have the same eigenvalues.
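
Both invariances are easy to observe numerically; the following NumPy sketch uses a random matrix and a random (almost surely invertible) similarity transform:

    import numpy as np
    rng = np.random.default_rng(1)
    A = rng.standard_normal((4, 4))
    P = rng.standard_normal((4, 4))          # almost surely invertible
    eig = lambda M: np.sort_complex(np.linalg.eigvals(M))
    print(np.allclose(eig(A), eig(np.linalg.inv(P) @ A @ P)))  # True
    print(np.allclose(eig(A), eig(A.T)))                       # True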

A matrix is invertible if and only if zero is not an eigenvalue of the matrix.

A matrix is diagonalizable if and only if the algebraic and geometric multiplicities coincide for all its eigenvalues. In particular, an n-by-n matrix is diagonalizable if it has n distinct eigenvalues.

The location of the spectrum is often restricted if the matrix has a special form: all eigenvalues of a Hermitian (or real symmetric) matrix are real, all eigenvalues of a skew-Hermitian matrix are purely imaginary, all eigenvalues of a unitary matrix have absolute value 1, and the eigenvalues of a triangular matrix are exactly its diagonal entries.

In general, the trace of a matrix equals the sum of its eigenvalues, and its determinant equals the product of its eigenvalues (both counted according to algebraic multiplicity).
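
A quick numerical check of both identities, using the matrix from the example above (a sketch assuming NumPy):

    import numpy as np
    A = np.array([[ 0.0, 1.0, -1.0],
                  [ 1.0, 1.0,  0.0],
                  [-1.0, 0.0,  1.0]])
    eigs = np.linalg.eigvals(A)   # 2, 1, -1 in some order
    print(np.isclose(eigs.sum(),  np.trace(A)))        # True: sum is 2
    print(np.isclose(eigs.prod(), np.linalg.det(A)))   # True: product is -2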

Suppose that A is an m-by-n matrix, with m ≤ n, and that B is an n-by-m matrix. Then BA has the same eigenvalues as AB, plus n − m additional eigenvalues equal to zero.
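
This relationship is easy to see numerically; in the following NumPy sketch, the dimensions m and n are arbitrary illustrative choices:

    import numpy as np
    rng = np.random.default_rng(2)
    m, n = 2, 4                       # m <= n
    A = rng.standard_normal((m, n))   # m-by-n
    B = rng.standard_normal((n, m))   # n-by-m
    print(np.sort_complex(np.linalg.eigvals(A @ B)))  # m eigenvalues
    # Same values plus n - m (numerically near-zero) extra eigenvalues:
    print(np.sort_complex(np.linalg.eigvals(B @ A)))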

Extensions and generalizations

Eigenvalues of an operator

Suppose we have a linear operator A mapping the vector space V to itself. As in the matrix case, we say that λ ∈ C is an eigenvalue of A if there exists a nonzero v ∈ V such that Av = λv.

Suppose now that A is a bounded linear operator on a Banach space V. We say that λ ∈ C is a spectral value of A if the operator A - λI is not invertible, where I denotes the identity operator. Note that by the closed graph theorem, if a bounded operator has an inverse, the inverse is necessarily bounded. The set of all spectral values is the spectrum of A.

If V is finite-dimensional, then the spectrum of A coincides with the set of eigenvalues of A. This follows from the fact that, on a finite-dimensional space, injectivity of a linear operator A is equivalent to surjectivity. However, an operator on an infinite-dimensional space may have no eigenvalues at all, while it always has spectral values; for example, the right shift operator on the sequence space ℓ² has no eigenvalues, although its spectrum is the entire closed unit disk.

Eigenvalues of a matrix with entries from a ring

Suppose that A is a square matrix with entries in a ring R. An element λ ∈ R is called a right eigenvalue of A if there exists a nonzero column vector x such that Ax = xλ, or a left eigenvalue if there exists a nonzero row vector y such that yA = λy.

If R is commutative, the left eigenvalues of A are exactly the right eigenvalues of A and are simply called eigenvalues. If R is not commutative, as with the quaternions, they may differ.

Eigenvalues of a graph

An eigenvalue of a graph is defined as an eigenvalue of the graph's adjacency matrix A, or (increasingly) of the graph's Laplacian matrix I - T^{-1/2} A T^{-1/2}, where T is a diagonal matrix holding the degree of each vertex, and where in T^{-1/2}, 0 is substituted for 0^{-1/2}.
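
As a concrete sketch (assuming NumPy), the normalised Laplacian of the path graph on three vertices can be built directly from this definition:

    import numpy as np
    # Adjacency matrix of the path graph 0 - 1 - 2.
    A = np.array([[0.0, 1.0, 0.0],
                  [1.0, 0.0, 1.0],
                  [0.0, 1.0, 0.0]])
    deg = A.sum(axis=1)                        # vertex degrees (all nonzero here)
    T_inv_sqrt = np.diag(1.0 / np.sqrt(deg))   # T^{-1/2}
    L = np.eye(3) - T_inv_sqrt @ A @ T_inv_sqrt
    print(np.sort(np.linalg.eigvalsh(L)))      # [0. 1. 2.]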
