# Eigenvalues, Eigenvectors and Diagonalization

Last Updated: 4th October, 2023

Eigenvalues, eigenvectors, and diagonalization are important concepts in linear algebra with broad applications across mathematics, science, and engineering. By the end of this lesson, you will have gained a working knowledge of eigenvalues, eigenvectors, and diagonalization.

## Definition and Basic Properties of Eigenvalues and Eigenvectors

Eigenvalues and eigenvectors are important concepts in linear algebra, which are used in a wide range of fields, including physics, engineering, computer science, and economics. They are defined as follows:

Definition:

• Eigenvalues are a fundamental concept in linear algebra that plays an important role in understanding the behavior of linear transformations.
• Eigenvectors are special non-zero vectors that a matrix maps to scalar multiples of themselves; the corresponding eigenvalues are those scaling factors.
• Let A be an n x n matrix. A scalar λ is called an eigenvalue of A if there exists a non-zero vector x such that Ax = λx. Such a vector x is called an eigenvector corresponding to the eigenvalue λ.
• The eigenvalue equation,
Ax = λx


can be rewritten as

(A - λI)x = 0


where I is the identity matrix. This means that the eigenvectors of A correspond to the non-trivial solutions of the homogeneous linear system (A - λI)x = 0. The set of all eigenvalues of A is called the spectrum of A.

Example:

Let

A = [2  1]
    [1  2]


be a 2 x 2 matrix. To find its eigenvalues, we need to solve the characteristic equation

det(A - λI) = 0


where I is the 2 x 2 identity matrix. We have:

det(A - λI) = det([2-λ   1 ])
                 ([ 1   2-λ])
            = (2-λ)(2-λ) - 1
            = λ^2 - 4λ + 3
            = (λ - 1)(λ - 3)


Thus, the eigenvalues of A are

λ1 = 1 and λ2 = 3


To find the corresponding eigenvectors, we need to solve the equations

(A - λ1I)x1 = 0 and (A - λ2I)x2 = 0


We have:

(A - λ1I)x1 = [1  1] [x1(1)] = [0]
              [1  1] [x1(2)]   [0]


The solution to this system is

x1 = [ 1]
     [-1]


Similarly, we have:

(A - λ2I)x2 = [-1   1] [x2(1)] = [0]
              [ 1  -1] [x2(2)]   [0]


The solution to this system is

x2 = [1]
     [1]


Therefore, the eigenvectors of A are

x1 = [ 1]  and x2 = [1]
     [-1]           [1]
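
The worked example above can be checked numerically. Below is a minimal sketch using NumPy; note that `numpy.linalg.eig` returns unit-norm eigenvectors, so they come back as scaled versions of the hand-computed ones:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# eig returns the eigenvalues and a matrix whose columns are eigenvectors
eigenvalues, eigenvectors = np.linalg.eig(A)

# Sort by eigenvalue so the order matches the hand computation (1, then 3)
order = np.argsort(eigenvalues)
eigenvalues = eigenvalues[order]
eigenvectors = eigenvectors[:, order]

# Each column v satisfies A v = λ v
for lam, v in zip(eigenvalues, eigenvectors.T):
    assert np.allclose(A @ v, lam * v)

print(eigenvalues)  # ≈ [1. 3.]
```

Any non-zero scalar multiple of an eigenvector is again an eigenvector, which is why the normalized columns returned here and the vectors found by hand differ only by a scale factor.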


### Geometric Interpretation of Eigenvectors and Eigenvalues

• Eigenvectors and eigenvalues have a geometric interpretation that can be useful in understanding their properties and applications.
• Geometric interpretation:
• An eigenvector x of a matrix A represents a direction in which the transformation represented by A acts by scaling the vector x by a factor of λ, which is the corresponding eigenvalue.
• In other words, if we apply the transformation represented by A to an eigenvector x, the resulting vector will be parallel to x but may be scaled by a factor of λ. The eigenvalue λ represents the scaling factor of the transformation in the direction of the eigenvector x.
• Example:

Let

A = [2  1]
    [1  2]


be a 2 x 2 matrix with eigenvectors

x1 = [ 1] and x2 = [1]
     [-1]          [1]


and eigenvalues

λ1 = 1 and λ2 = 3


The geometric interpretation of these eigenvectors and eigenvalues is as follows:

• The eigenvector
x1 = [ 1]
     [-1]


represents a direction in which the transformation represented by A acts by scaling the vector x1 by a factor of λ1

• The eigenvector
x2 = [1]
     [1]


represents a direction in which the transformation represented by A acts by scaling the vector x2 by a factor of λ2.

• The eigenvalue λ1 = 1 represents a scaling factor of 1 in the direction of the eigenvector x1. This means that if we apply the transformation represented by A to the vector x1, the resulting vector will be parallel to x1 and of the same length, since the scaling factor is 1.
• The eigenvalue λ2 = 3 represents a scaling factor of 3 in the direction of the eigenvector x2. This means that if we apply the transformation represented by A to the vector x2, the resulting vector will be parallel to x2, but three times as long.

### Calculation of Eigenvalues and Eigenvectors

There are several methods for calculating eigenvalues and eigenvectors of a matrix, depending on the size and structure of the matrix.

Method 1:

The characteristic equation method involves solving the characteristic equation

det(A - λI) = 0


where A is the matrix, λ is the eigenvalue, and I is the identity matrix. The roots of this equation are the eigenvalues of A. Once the eigenvalues are known, we can find the eigenvectors by solving the system of linear equations

(A - λI)x = 0


for each eigenvalue λ.

Example: Let

A = [2  1]
    [1  2]


be a 2 x 2 matrix. We can find its eigenvalues and eigenvectors using the characteristic equation method as follows:

det(A - λI) = det([2-λ   1 ])
                 ([ 1   2-λ])
            = (2-λ)(2-λ) - 1
            = λ^2 - 4λ + 3
            = (λ - 1)(λ - 3)


Thus, the eigenvalues of A are

λ1 = 1 and λ2 = 3


To find the eigenvectors corresponding to λ1, we solve the equation

(A - λ1I)x1 = 0


We have:

(A - λ1I)x1 = [1 1] [x1(1)] = [0]
              [1 1] [x1(2)]   [0]


The solution to this system is

x1 = [ 1]
     [-1]

Similarly, to find the eigenvectors corresponding to λ2, we solve the equation

(A - λ2I)x2 = 0


We have:

(A - λ2I)x2 = [-1  1] [x2(1)] = [0]
              [ 1 -1] [x2(2)]   [0]


The solution to this system is

x2 = [1]
     [1]
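
For a 2 x 2 matrix, the characteristic equation is the quadratic λ^2 - tr(A)λ + det(A) = 0, so Method 1 can be carried out directly with the quadratic formula. Here is a small sketch (the function name is illustrative, and the code assumes real roots, which holds for symmetric matrices like the one above):

```python
import math

def eigenvalues_2x2(a, b, c, d):
    """Eigenvalues of [[a, b], [c, d]] via the characteristic equation
    λ^2 - (a + d)λ + (ad - bc) = 0, assuming the roots are real."""
    trace = a + d
    det = a * d - b * c
    disc = math.sqrt(trace * trace - 4 * det)  # real for symmetric matrices
    return (trace - disc) / 2, (trace + disc) / 2

lam1, lam2 = eigenvalues_2x2(2, 1, 1, 2)
print(lam1, lam2)  # 1.0 3.0
```

For larger matrices, the characteristic polynomial has no general closed-form roots beyond degree 4, which is why numerical methods such as the power method below are used in practice.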


Method 2:

The power method is an iterative algorithm for finding the dominant eigenvalue and eigenvector of a matrix. It involves starting with an arbitrary vector x0, and repeatedly multiplying it by the matrix A and normalizing the result, until the resulting vector converges to the dominant eigenvector of A. The dominant eigenvalue can be found by computing the ratio of the norms of successive vectors.

Example: Let

A = [2  1]
    [1  2]


be a 2 x 2 matrix. We can use the power method to find its dominant eigenvalue and eigenvector as follows:

• Choose an initial vector
x0 = [1]
     [1]


and normalize it to obtain

x0 = [1/√2]
     [1/√2]

• Multiply x0 by A to obtain
Ax0 = [3/√2]
      [3/√2]

• Normalize Ax0 to obtain
x1 = [1/√2]
     [1/√2]

• Multiply x1 by A to obtain
Ax1 = [3/√2]
      [3/√2]

• Normalize Ax1 to obtain
x2 = [1/√2]
     [1/√2]

• Since x2 is equal to x1, we have converged to the dominant eigenvector of A.
• To find the dominant eigenvalue, we compute the ratio of the norms of successive vectors:
||Ax1|| / ||x1|| = 3 / 1 = 3


Thus, the dominant eigenvalue of A is 3, which agrees with the eigenvalue λ2 = 3 found earlier.
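
The iteration described above can be sketched in a few lines of NumPy. This is a minimal implementation; the tolerance and iteration cap are illustrative choices, and the method assumes the matrix has a unique dominant eigenvalue:

```python
import numpy as np

def power_method(A, num_iter=100, tol=1e-10):
    """Dominant eigenvalue and eigenvector of A by repeated
    multiplication and normalization."""
    x = np.ones(A.shape[0])
    x /= np.linalg.norm(x)
    lam = 0.0
    for _ in range(num_iter):
        Ax = A @ x
        lam_new = np.linalg.norm(Ax)  # ratio ||Ax|| / ||x|| with ||x|| = 1
        x_new = Ax / lam_new
        if abs(lam_new - lam) < tol:
            return lam_new, x_new
        lam, x = lam_new, x_new
    return lam, x

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
lam, x = power_method(A)
print(lam)  # ≈ 3.0, the dominant eigenvalue
```

Note that the norm ratio only recovers |λ|; for a matrix whose dominant eigenvalue may be negative, the Rayleigh quotient xᵀAx / xᵀx is the usual estimate instead.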

## Diagonalization

Diagonalization is a process of finding a diagonal matrix D that is similar to a given matrix A, i.e.,

D = P^(-1)AP


where P is an invertible matrix. Diagonalization is useful because diagonal matrices are easy to work with, and many properties of A can be easily derived from the diagonal form. It is useful in solving systems of linear equations and analyzing linear transformations.

The Jordan canonical form is a generalization of diagonalization that applies to non-diagonalizable matrices and represents them as a block-diagonal matrix of Jordan blocks. Understanding and applying these concepts is critical to solving problems across a range of fields, from physics to engineering to data analysis.

Theorem: A matrix A is diagonalizable if and only if it has n linearly independent eigenvectors, where n is the size of the matrix.

Example: Let

A = [2  1]
    [1  2]


be a 2 x 2 matrix. We have already calculated its eigenvalues and eigenvectors as

λ1 = 1, λ2 = 3
x1 = [ 1], and x2 = [1]
     [-1]           [1]


To diagonalize A, we need to find a matrix P such that

D = P^(-1)AP


is a diagonal matrix. We can take P to be the matrix whose columns are the eigenvectors of A, i.e.,

P = [x1 x2]


Then we have:

D = P^(-1)AP = [x1 x2]^(-1) [2  1] [x1 x2] = [1  0]
                            [1  2]           [0  3]


Thus, A is diagonalizable, and its diagonal form is

D = [1  0]
    [0  3]
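
The diagonalization above can be verified numerically. A short sketch, taking P's columns to be the eigenvectors found earlier:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Columns of P are the eigenvectors x1 = (1, -1) and x2 = (1, 1)
P = np.array([[ 1.0, 1.0],
              [-1.0, 1.0]])

D = np.linalg.inv(P) @ A @ P
print(np.round(D, 10))  # [[1. 0.], [0. 3.]]

# Conversely, A is recovered as P D P^(-1)
assert np.allclose(P @ D @ np.linalg.inv(P), A)
```

The diagonal entries of D appear in the same order as the eigenvector columns of P, so reordering the columns of P simply permutes the diagonal of D.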


### The Jordan Canonical form and Its Relation to Diagonalization

The Jordan canonical form is a way of writing a square matrix as a block-diagonal matrix of Jordan blocks, where each Jordan block corresponds to a single eigenvalue of the matrix. The Jordan canonical form is similar to diagonalization in that it provides a way to express a matrix as a combination of simpler matrices, but it is more general than diagonalization, as it allows for non-diagonalizable matrices.

The Jordan canonical form is closely related to diagonalization in the case of diagonalizable matrices, i.e., matrices that have a complete set of linearly independent eigenvectors. For a diagonalizable matrix A, its Jordan canonical form J is a diagonal matrix whose entries are the eigenvalues of A, i.e.,

J = diag(λ1, λ2, ..., λn)


where n is the size of A. In this case, the matrix P that diagonalizes A can be chosen to be the matrix whose columns are the eigenvectors of A.

However, for non-diagonalizable matrices, the Jordan canonical form is more complicated than diagonalization, as it involves the use of Jordan blocks, which are matrix blocks that have a specific form determined by the algebraic and geometric multiplicities of the corresponding eigenvalue. A Jordan block of size k corresponding to an eigenvalue λ has the form:

J(λ,k) = [λ  1  0  ...  0]
         [0  λ  1  ...  0]
         [0  0  λ  ...  0]
         [.  .  .  ...  .]
         [0  0  0  ...  λ]


where there are k diagonal entries of λ and k-1 entries of 1 above the diagonal. The number of Jordan blocks corresponding to an eigenvalue λ equals the geometric multiplicity of λ, while the sum of the sizes of those blocks equals the algebraic multiplicity of λ.

To find the Jordan canonical form J of a matrix A, one needs to first find the eigenvalues of A and their algebraic and geometric multiplicities, and then construct the Jordan blocks corresponding to each eigenvalue. The matrix A can then be written as A = PJP^(-1), where P is the matrix whose columns are the generalized eigenvectors of A, i.e., the vectors that span the corresponding Jordan blocks.
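
A Jordan block J(λ, k) is straightforward to construct. The sketch below builds one with NumPy (`np.eye` with offset `k=1` supplies the superdiagonal of ones) and checks that it is not diagonalizable:

```python
import numpy as np

def jordan_block(lam, k):
    """k x k Jordan block: lam on the diagonal, 1 on the superdiagonal."""
    return lam * np.eye(k) + np.eye(k, k=1)

J = jordan_block(2.0, 3)
print(J)
# [[2. 1. 0.]
#  [0. 2. 1.]
#  [0. 0. 2.]]

# J has the single eigenvalue 2 with algebraic multiplicity 3, but
# (J - 2I) has rank 2, so its null space is 1-dimensional: only one
# linearly independent eigenvector exists, and J is not diagonalizable.
assert np.linalg.matrix_rank(J - 2.0 * np.eye(3)) == 2
```

This rank computation illustrates the multiplicity statement above: the geometric multiplicity (here 1) counts the Jordan blocks for λ, while the algebraic multiplicity (here 3) is the total size of those blocks.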

## Applications of Eigenvalues and Eigenvectors

Eigenvalues and eigenvectors have many applications in various fields of science and engineering, including physics, engineering, economics, and computer science. Here are some examples:

• In physics, eigenvalues and eigenvectors are used to study the behavior of quantum mechanical systems. The eigenvectors represent the states of the system, while the eigenvalues represent the energies of these states.
• In engineering, eigenvalues and eigenvectors are used to analyze the stability and control of systems such as aircraft and spacecraft. The eigenvectors represent the modes of vibration or deformation of the system, while the eigenvalues represent the frequencies or damping coefficients of these modes.
• In economics, eigenvalues and eigenvectors are used to analyze the interdependence of variables in large data sets, such as input-output matrices or social network graphs. The eigenvectors represent the relative importance of the variables, while the eigenvalues represent the strength of the interdependence.
• In computer science, eigenvalues and eigenvectors are used in various applications, such as image processing, data compression, and machine learning. For example, the eigenvectors of a covariance matrix can be used as features for pattern recognition algorithms.
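
As an illustration of the last point, here is a minimal PCA-style sketch: the eigenvectors of a data set's covariance matrix give the principal directions, and the eigenvalues give the variance along them. The data below is synthetic, generated purely for the example:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic 2-D data, stretched along the direction (1, 1)
data = rng.normal(size=(500, 2)) @ np.array([[2.0, 1.0],
                                             [1.0, 2.0]])

cov = np.cov(data, rowvar=False)
# eigh is the right routine here: the covariance matrix is symmetric
eigenvalues, eigenvectors = np.linalg.eigh(cov)

# The column with the largest eigenvalue is the principal direction
principal = eigenvectors[:, np.argmax(eigenvalues)]
print(principal)  # roughly ±(1/√2, 1/√2)
```

Projecting the data onto the leading eigenvectors and discarding the rest is the basis of dimensionality reduction by principal component analysis.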

## Conclusion

In conclusion, eigenvalues, eigenvectors, and diagonalization are important concepts in linear algebra with many applications across mathematics, science, and engineering. Eigenvectors represent special directions in which a linear transformation only scales, without rotating or reflecting, while eigenvalues represent the scaling factors by which these eigenvectors are scaled.

## Key Takeaways

1. Eigenvectors are special vectors of a matrix that are only scaled by the matrix and not rotated.
2. Eigenvalues are the scaling factors by which eigenvectors are scaled.
3. Eigenvalues and eigenvectors play a crucial role in many areas of mathematics and science, including linear algebra, physics, engineering, and data analysis.
4. A matrix is diagonalizable if and only if it has a complete set of linearly independent eigenvectors.
5. The diagonalization of a matrix involves finding a diagonal matrix and an invertible matrix that transform the original matrix into its diagonal form.
6. The Jordan canonical form is a generalization of diagonalization that allows for the representation of non-diagonalizable matrices as a block-diagonal matrix of Jordan blocks.
7. The Jordan canonical form is closely related to diagonalization in the case of diagonalizable matrices, but it involves the use of Jordan blocks for non-diagonalizable matrices.
8. The Jordan canonical form and diagonalization are both useful tools for solving systems of linear equations and analyzing linear transformations.

## Quiz

1. What is an eigenvector of a matrix? a. A vector that is orthogonal to the matrix b. A vector that is rotated by the matrix c. A vector that is only scaled by the matrix d. A vector that is reflected by the matrix

Answer: c. A vector that is only scaled by the matrix

2. A matrix is diagonalizable if and only if: a. It has a complete set of linearly independent eigenvectors b. It has a complete set of linearly dependent eigenvectors c. It has no eigenvectors d. It has infinitely many eigenvectors

Answer: a. It has a complete set of linearly independent eigenvectors

3. What is the Jordan canonical form of a matrix? a) A diagonal matrix whose entries are the eigenvalues of the matrix b) A triangular matrix whose entries are the eigenvalues of the matrix c) A block-diagonal matrix whose blocks are Jordan blocks d) A matrix whose columns are the eigenvectors of the matrix

Answer: c) A block-diagonal matrix whose blocks are Jordan blocks

4. What is the relationship between diagonalization and the Jordan canonical form? a) Diagonalization is a special case of the Jordan canonical form b) The Jordan canonical form is a special case of diagonalization c) Diagonalization and the Jordan canonical form are unrelated concepts d) Diagonalization and the Jordan canonical form are equivalent concepts

Answer: a) Diagonalization is a special case of the Jordan canonical form
