6. Eigenvalues and Eigenvectors

Keys:

  1. What are Eigenvalues and Eigenvectors?
  2. How to find Eigenvalues and Eigenvectors?
  3. Applications of Eigenvalues and Eigenvectors:
    • Difference equation \(u_{k+1}=Au_k\)
    • Solution of \(\frac{du}{dt}=Au\)
    • Markov Matrices
    • Projections and Fourier Series
  4. Special Matrices
    • Symmetric Matrices
    • Positive Definite Matrix
    • Similar Matrices
    • Jordan Theorem

6.1 Introduction to Eigenvalues and Eigenvectors

keys:

  1. If AX lies along the same direction as X, so that \(AX = \lambda X\), then \(\lambda\) is an eigenvalue and X is an eigenvector.
  2. If \(AX=\lambda X\) then \(A^2X=\lambda^2 X\) and \(A^{-1}X=\lambda^{-1} X\) and \((A+cI)X=(\lambda + c) X\) : the same eigenvector X.
  3. If \(AX=\lambda X\) then \((A-\lambda I)X=0\), so \(A-\lambda I\) is singular and \(det(A-\lambda I)=0\); solving this equation gives the eigenvalues, and each eigenvalue then gives its eigenvectors.
  4. Check : \(\lambda_1 + \lambda_2 + \cdots + \lambda_n = a_{11} + a_{22} + \cdots + a_{nn}\)
  5. Projection matrix : \(\lambda = 1 \ and \ 0\); reflection matrix : \(\lambda = 1 \ and \ -1\); rotation matrix : \(\lambda = e^{i \theta} \ and \ e^{-i \theta}\)

The Equation for the Eigenvalues and Eigenvectors

  1. Compute the determinant of \(A-\lambda I\).
  2. Find the roots of that polynomial by solving \(det(A-\lambda I) = 0\); the roots are the eigenvalues.
  3. For each eigenvalue \(\lambda\),solve \((A-\lambda I)X = 0\) to find an eigenvector X.

example:

\[A = \left[ \begin{matrix} 0&1 \\ 1&0 \end{matrix} \right] \\ \Downarrow \\ solve \ \ characteristic \ \ equation \\ det (A-\lambda I) = \left | \begin{matrix} -\lambda&1 \\ 1&-\lambda \end{matrix} \right| = \lambda^2 - 1 = 0 \\ \lambda_1 = 1 \ , \ x_1 = \left[ \begin{matrix} 1 \\ 1 \end{matrix} \right] \\ \lambda_2 = -1 \ , \ x_2 = \left[ \begin{matrix} 1 \\ -1 \end{matrix} \right] \\ check: \lambda_1 + \lambda_2 = a_{11} + a_{22} = 0,\ \ \lambda_1 \lambda_2 = detA = -1 \\ \]

\[B = \left[ \begin{matrix} 3&1 \\ 1&3 \end{matrix} \right] \\ \Downarrow \\ solve \ \ characteristic \ \ equation \\ det (B-\lambda I) = \left | \begin{matrix} 3-\lambda&1 \\ 1&3-\lambda \end{matrix} \right| = \lambda^2 - 6\lambda + 8 = 0 \\ \lambda_1 = 4 \ , \ x_1 = \left[ \begin{matrix} 1 \\ 1 \end{matrix} \right] \\ \lambda_2 = 2 \ , \ x_2 = \left[ \begin{matrix} 1 \\ -1 \end{matrix} \right] \\ check: \lambda_1 + \lambda_2 = a_{11} + a_{22} = 6,\ \ \lambda_1 \lambda_2 = detB = 8 \\ \]
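
A quick numeric check of both examples (a minimal sketch; NumPy is an assumption here, since these notes contain no code):

```python
import numpy as np

# np.linalg.eig returns the eigenvalues and unit-length eigenvectors
# (as columns) of a square matrix.
A = np.array([[0.0, 1.0], [1.0, 0.0]])
B = np.array([[3.0, 1.0], [1.0, 3.0]])

for name, M in [("A", A), ("B", B)]:
    vals, vecs = np.linalg.eig(M)
    print(name, "eigenvalues:", vals)                      # A: 1, -1   B: 4, 2
    print("  sum  == trace:", vals.sum(), np.trace(M))     # check 4 in the keys
    print("  prod == det:  ", vals.prod(), np.linalg.det(M))
```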

If \(AX=\lambda X\), then \((A+nI)X = \lambda X + nIX = (\lambda + n)X\). If X is an eigenvector of both A and B, then \((A+B)X=(\lambda_{A} + \lambda_{B})X\).

Diagonalizing a Matrix

Eigenvectors of A for n different \(\lambda's\) are independent, so we can diagonalize A.

The columns of X are those eigenvectors.

So:

\[AX \\ = A \left[ \begin{matrix} x_1&x_2&\cdots&x_n\end{matrix} \right] \\ = \left[ \begin{matrix} \lambda_1x_1&\lambda_2x_2&\cdots&\lambda_nx_n\end{matrix} \right] \\ = \left[ \begin{matrix} x_1&x_2&\cdots&x_n\end{matrix} \right] \left[ \begin{matrix} \lambda_1&& \\ &\ddots&\\ &&\lambda_n \end{matrix} \right] \\ =X\Lambda \\ \Downarrow \\ AX=X\Lambda \\ X^{-1}AX=\Lambda \ or \ A=X\Lambda X^{-1} \\ \Downarrow \\ A^k =(X\Lambda X^{-1})(X\Lambda X^{-1})\cdots (X\Lambda X^{-1}) = X\Lambda^k X^{-1} \]

example:

\[\left[ \begin{matrix} 1&5 \\ 0&6 \end{matrix} \right] = \left[ \begin{matrix} 1&1 \\ 0&1 \end{matrix} \right] \left[ \begin{matrix} 1&0 \\ 0&6 \end{matrix} \right] \left[ \begin{matrix} 1&-1 \\ 0&1 \end{matrix} \right] \\ \left[ \begin{matrix} 1&5 \\ 0&6 \end{matrix} \right]^k = \left[ \begin{matrix} 1&1 \\ 0&1 \end{matrix} \right] \left[ \begin{matrix} 1&0 \\ 0&6 \end{matrix} \right]^k \left[ \begin{matrix} 1&-1 \\ 0&1 \end{matrix} \right] = \left[ \begin{matrix} 1&1 \\ 0&1 \end{matrix} \right] \left[ \begin{matrix} 1^k&0 \\ 0&6^k \end{matrix} \right] \left[ \begin{matrix} 1&-1 \\ 0&1 \end{matrix} \right] \]
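
A minimal NumPy sketch of the same diagonalization, checking both \(A = X\Lambda X^{-1}\) and the power formula:

```python
import numpy as np

A = np.array([[1.0, 5.0], [0.0, 6.0]])
vals, X = np.linalg.eig(A)              # columns of X are eigenvectors
Lam = np.diag(vals)                     # Lambda = diag(1, 6)

assert np.allclose(A, X @ Lam @ np.linalg.inv(X))      # A = X Lam X^-1

k = 5
Ak = X @ np.diag(vals**k) @ np.linalg.inv(X)           # A^k = X Lam^k X^-1
assert np.allclose(Ak, np.linalg.matrix_power(A, k))
```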

When all \(|\lambda_i| < 1\), then \(A^k \rightarrow 0\) as \(k \rightarrow \infty\).

6.2 Applications of Eigenvalues and Eigenvectors

Difference equation \(u_{k+1} = Au_k\)

Matrix Powers \(A^k\) : \(u_{k}=A^ku_0 = (X \Lambda X^{-1})(X \Lambda X^{-1})\cdots(X \Lambda X^{-1})u_0=X \Lambda^k X^{-1}u_0\)

step 1 :

\[u_0 = c_1x_1 + c_2x_2 + \cdots + c_nx_n = \left[ \begin{matrix} x_1&x_2&\cdots&x_n \end{matrix}\right] \left[ \begin{matrix} c_1\\c_2\\\vdots\\c_n \end{matrix}\right] = Xc \\ \Downarrow \\ c = X^{-1}u_0 \]

steps 2~3 :

\[u_{k}=A^ku_0 = X \Lambda^k X^{-1} u_0 = X \Lambda^k c = \left[ \begin{matrix} x_1&x_2&\cdots&x_n \end{matrix}\right] \left[ \begin{matrix} (\lambda_1)^k&& \\ &(\lambda_2)^k \\ &&\ddots \\ &&&(\lambda_n)^k\end{matrix} \right] \left[ \begin{matrix} c_1\\c_2\\\vdots\\c_n \end{matrix}\right] \\ \Downarrow \\ u_k = c_1(\lambda_1)^kx_1 + c_2(\lambda_2)^kx_2 + \cdots + c_n(\lambda_n)^kx_n \]

This solves \(u_{k+1} = Au_k\).

example:

Fibonacci Numbers: 0,1,1,2,3,5,8,13...

\(F_{k+2}=F_{k+1}+F_{k}\)

Let \(u_k = \left[ \begin{matrix} F_{k+1}\\F_k \end{matrix}\right]\)

\[F_{k+2} = F_{k+1} + F_{k} \\ F_{k+1} = F_{k+1} \\ \Downarrow \\ u_{k+1}= \left[ \begin{matrix} 1&1\\1&0 \end{matrix} \right]u_{k} \\ \Downarrow \\ A=\left[ \begin{matrix} 1&1\\1&0 \end{matrix} \right] \\ det(A-\lambda I) = 0 \\ \Downarrow \\ \lambda_1 = \frac{1+\sqrt{5}}{2} =1.618, \ \ x_1=\left[ \begin{matrix} \lambda_1\\1\end{matrix}\right] \\ \lambda_2 = \frac{1-\sqrt{5}}{2} =-0.618, \ \ x_2=\left[ \begin{matrix} \lambda_2\\1\end{matrix}\right] \\ and \\ u_0 = \left[ \begin{matrix} 1\\0 \end{matrix}\right] = c_1x_1 + c_2x_2 \rightarrow c_1 = \frac{1}{\lambda_1 - \lambda_2}, c_2 = \frac{1}{\lambda_2 - \lambda_1} \\ \Downarrow \\ u_k = c_1(\lambda_1)^kx_1 + c_2(\lambda_2)^kx_2\\ u_{100} = \frac{(\lambda_1)^{100}x_1-(\lambda_2)^{100}x_2}{\lambda_1 - \lambda_2} \]
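
The closed form can be checked numerically; a sketch with the values above (\(F_{20} = 6765\)):

```python
import numpy as np

# u_k = c1*lam1^k*x1 + c2*lam2^k*x2, with u_k = [F_{k+1}, F_k].
lam1 = (1 + np.sqrt(5)) / 2
lam2 = (1 - np.sqrt(5)) / 2
x1 = np.array([lam1, 1.0])
x2 = np.array([lam2, 1.0])
c1 = 1 / (lam1 - lam2)                  # from u0 = [1, 0]
c2 = 1 / (lam2 - lam1)

k = 20
u_k = c1 * lam1**k * x1 + c2 * lam2**k * x2
print(round(u_k[1]))                    # F_20 = 6765
```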

Solution of du/dt = Au

key : \(e^{At}\)

Taylor Series : \(e^x = 1 + x + \frac{1}{2}x^2+\cdots+\frac{1}{n!}x^n+\cdots\)

S is the eigenvector matrix of A (the matrix called X earlier).

\[e^{At} = I + At + \frac{1}{2}(At)^2+\cdots+\frac{1}{n!}(At)^n \\ A = S\Lambda S^{-1} \\ I = SS^{-1} \\ \Downarrow \\ e^{At} = SS^{-1} + S\Lambda S^{-1}t + \frac{1}{2}(S\Lambda S^{-1}t)^2+\cdots+\frac{1}{n!}(S\Lambda S^{-1}t)^n \\ =S (I+ \Lambda t + \frac{1}{2}(\Lambda t)^2+\cdots+\frac{1}{n!}(\Lambda t)^n)S^{-1} \\ \Downarrow \\ \Lambda = \left[ \begin{matrix} \lambda_1&& \\ &\lambda_2 \\ &&\ddots \\ &&&\lambda_n\end{matrix} \right] \\ e^{\Lambda t} = \left[ \begin{matrix} e^{\lambda_1t}&& \\ &e^{\lambda_2t} \\ &&\ddots \\ &&&e^{\lambda_nt}\end{matrix} \right] \\ \Downarrow \\ e^{At}=Se^{\Lambda t}S^{-1} \]

Solve Steps:

  1. Find eigenvalues and eigenvectors of A by solving \(det(A-\lambda I)=0\).

  2. Write u(0) as a combination \(c_1x_1 + c_2x_2 + \cdots + c_nx_n\) of the eigenvectors of A.

  3. Multiply each eigenvector \(x_i\) by its growth factor \(e^{\lambda_i t}\).

  4. The solution is the combination of those pure solutions \(e^{\lambda t}x\).

    \[\frac{du}{dt} = Au \\ u(t) = c_1e^{\lambda_1 t}x_1 + c_2e^{\lambda_2 t}x_2 + \cdots + c_ne^{\lambda_n t}x_n \]

example:

\[\frac{du_1}{dt} = -u_1 + 2u_2 \\ \frac{du_2}{dt} = u_1 - 2u_2 \\ \Downarrow step1 \\ u' = Au = \left[ \begin{matrix} -1&2 \\ 1&-2 \end{matrix} \right] u \\ \lambda_1 = 0, x_1 = \left[ \begin{matrix} 2\\1 \end{matrix}\right] \\ \lambda_2 = -3, x_2 = \left[ \begin{matrix} -1\\1 \end{matrix}\right] \\ \Downarrow step2 \\ u(0) = \left[ \begin{matrix} 1\\0 \end{matrix} \right] = c_1x_1 + c_2x_2 \\ c_1 = 1/3, c_2 = -1/3 \\ \Downarrow step3 \\ u(t) = c_1e^{\lambda_1 t}x_1 + c_2e^{\lambda_2 t}x_2 = 1/3 \left[ \begin{matrix} 2 \\ 1 \end{matrix} \right] - 1/3 e^{-3t}\left[ \begin{matrix} -1 \\ 1 \end{matrix} \right] \\ \Downarrow steady \ \ state\\ u(\infty) = 1/3 \left[ \begin{matrix} 2 \\ 1 \end{matrix} \right] \]
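
A sketch of the same example in code, comparing the eigenvector solution with \(e^{At}u(0)\) (scipy.linalg.expm is assumed for the matrix exponential):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[-1.0, 2.0], [1.0, -2.0]])
x1, x2 = np.array([2.0, 1.0]), np.array([-1.0, 1.0])   # lam1 = 0, lam2 = -3
c1, c2 = 1/3, -1/3                                     # from u(0) = [1, 0]

t = 2.0
u_eig = c1 * np.exp(0.0 * t) * x1 + c2 * np.exp(-3.0 * t) * x2
u_exp = expm(A * t) @ np.array([1.0, 0.0])             # u(t) = e^{At} u(0)

assert np.allclose(u_eig, u_exp)
print(u_eig)    # tends to the steady state [2/3, 1/3] as t grows
```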

State:

  1. Stability : \(u(t) \rightarrow 0\) when all \(e^{\lambda t} \rightarrow 0\), i.e. every \(\lambda\) has real part \(< 0\).
  2. Steady state : \(\lambda_1 = 0\) and every other \(\lambda\) has real part \(< 0\).
  3. Blow up if any \(\lambda\) has real part \(> 0\).

Markov Matrices

keys:

  1. All entries \(\ge 0\).
  2. All columns add to 1.
  3. \(\lambda = 1\) is an eigenvalue.
  4. All other \(|\lambda_i|<1\).
  5. \(u_k = A^{k}u_0 = c_1\lambda_1^{k}x_1 + c_2\lambda_2^{k}x_2 + \cdots + c_n\lambda_n^{k}x_n \rightarrow c_1x_1 \ \ (steady \ \ state)\)

example: population movement model

\(u_{k+1} = Au_{k}\), where A is a Markov matrix.

\[\left [ \begin{matrix} u_{col} \\ u_{mass} \end{matrix}\right]_{t=k+1} = \left [ \begin{matrix} 0.9&0.2 \\ 0.1&0.8 \end{matrix}\right] \left [ \begin{matrix} u_{col} \\ u_{mass} \end{matrix}\right]_{t=k} \\ \Downarrow \\ \lambda_1 = 1, x_1=\left [ \begin{matrix} 2 \\ 1 \end{matrix}\right] \\ \lambda_2 = 0.7, x_2=\left [ \begin{matrix} -1 \\ 1 \end{matrix}\right] \\ \]

If \(\left [ \begin{matrix} u_{col} \\ u_{mass} \end{matrix}\right]_{0} = \left [ \begin{matrix} 0 \\ 1000 \end{matrix}\right]\), then \(c_1=1000/3,\ c_2=2000/3\):

\(u_k = c_1\lambda_1^{k}x_1+c_2\lambda_2^{k}x_2 = \frac{1000}{3}1^{k}\left [ \begin{matrix} 2 \\ 1 \end{matrix}\right] + \frac{2000}{3}0.7^{k}\left [ \begin{matrix} -1 \\ 1 \end{matrix}\right] \rightarrow \frac{1000}{3}\left [ \begin{matrix} 2 \\ 1 \end{matrix}\right]\) (steady state)
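
The convergence can be seen by simply iterating \(u_{k+1}=Au_k\); a short sketch:

```python
import numpy as np

A = np.array([[0.9, 0.2], [0.1, 0.8]])
u = np.array([0.0, 1000.0])             # everyone starts in "mass"

for _ in range(50):
    u = A @ u                           # one step of u_{k+1} = A u_k

print(u)    # ~ [666.67, 333.33] = (1000/3)*[2, 1], the steady state
```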

Projections and Fourier Series

Projections with orthonormal basis:

\[Q = \left [ \begin{matrix} q_1&q_2&\cdots&q_n \end{matrix}\right],Q^{T}=Q^{-1}\\ V = x_1q_1 + x_2q_2 + \cdots + x_nq_n = \left [ \begin{matrix} q_1&q_2&\cdots&q_n \end{matrix}\right] \left [ \begin{matrix} x_1\\x_2\\\vdots\\x_n \end{matrix}\right] =QX \\ \Downarrow \\ Q^{-1}V = Q^{-1}QX \\ \Downarrow \\ Q^{T}V = X \]
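
A small sketch of this "no inverse needed" property, building an orthonormal Q from a QR factorization (the random matrix is only for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))   # orthonormal columns

V = rng.standard_normal(4)
X = Q.T @ V                    # coefficients: X = Q^T V, since Q^T = Q^{-1}
assert np.allclose(Q @ X, V)   # V = QX is recovered exactly
```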

Fourier series:

\(f(x) = a_0 + a_1\cos x + b_1\sin x + a_2\cos 2x + b_2\sin 2x + \cdots\)

(\(1,\cos x,\sin x,\cos 2x,\sin 2x,\ldots\)) form an orthogonal basis for such functions f(x).

check: \(f(x) = f(x+ 2\pi)\) (every basis function has period \(2\pi\))

\(f^Tg = \int_{0}^{2\pi}f(x)g(x)\,dx=0\) for any two distinct basis functions \(f \neq g\) taken from \(1,\cos x,\sin x,\cos 2x,\sin 2x,\ldots\)

example:

\(\int_{0}^{2\pi}f(x)\cos x\,dx= \int_{0}^{2\pi}(a_0\cos x + a_1\cos^2 x + b_1\cos x\sin x + \cdots)\,dx= a_1\int_{0}^{2\pi} \cos^2 x \,dx = a_1\pi\)

\(a_1 = \frac{1}{\pi}\int_{0}^{2\pi}f(x)\cos x\,dx\)
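
The same recipe works numerically; a sketch that recovers \(a_1\) for a test signal with known coefficients:

```python
import numpy as np

# f(x) = 3 + 2cos(x) + 5sin(2x), so a0 = 3, a1 = 2, b2 = 5.
n = 10000
x = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
f = 3.0 + 2.0 * np.cos(x) + 5.0 * np.sin(2 * x)

dx = 2 * np.pi / n
a1 = np.sum(f * np.cos(x)) * dx / np.pi    # (1/pi) * integral of f(x) cos x
print(a1)                                  # ~ 2.0
```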

6.3 Special Matrices

6.3.1 Symmetric Matrices

keys:

  1. A symmetric matrix S has n real eigenvalues \(\lambda_i\) and n orthonormal eigenvectors \(q_1,q_2,...,q_n\).
  2. Every real symmetric S can be diagonalized: \(S=Q \Lambda Q^{-1} = Q \Lambda Q^{T} =\left[ \begin{matrix} q_1&q_2&\cdots&q_n \end{matrix}\right] \left[ \begin{matrix} \lambda_1&& \\ &\lambda_2 \\ &&\ddots \\ &&&\lambda_n\end{matrix} \right] \left[ \begin{matrix} q_1^{T}\\q_2^{T}\\\vdots\\q_n^{T} \end{matrix}\right]\).
  3. The number of positive eigenvalues of S equals the number of positive pivots.
  4. Antisymmetric matrices (\(A^T = -A\)) have imaginary \(\lambda's\) and orthonormal (complex) q's.

example:

\[S = \left[ \begin{matrix} 1&2 \\ 2&4 \end{matrix}\right] \\ S-\lambda I = \left[ \begin{matrix} 1-\lambda&2 \\ 2&4-\lambda \end{matrix}\right]\\ \Downarrow\\ \lambda_1 = 0, x_1=\left[ \begin{matrix} 2 \\ -1 \end{matrix}\right] \\ \lambda_2 = 5, x_2=\left[ \begin{matrix} 1 \\ 2 \end{matrix}\right] \\ \Downarrow\\ Q^{-1}SQ = \frac{1}{\sqrt{5}} \left[ \begin{matrix} 2&-1 \\ 1&2 \end{matrix}\right] \left[ \begin{matrix} 1&2 \\ 2&4 \end{matrix}\right] \frac{1}{\sqrt{5}}\left[ \begin{matrix} 2&1 \\ -1&2 \end{matrix}\right] =\left[ \begin{matrix} 0&0 \\ 0&5 \end{matrix}\right] = \Lambda \]
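
For symmetric matrices NumPy provides np.linalg.eigh, which returns real eigenvalues and an orthonormal Q directly; a sketch with the same S:

```python
import numpy as np

S = np.array([[1.0, 2.0], [2.0, 4.0]])
vals, Q = np.linalg.eigh(S)                      # eigenvalues [0, 5]

assert np.allclose(Q.T @ Q, np.eye(2))           # orthonormal columns
assert np.allclose(Q.T @ S @ Q, np.diag(vals))   # Q^T S Q = Lambda
```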

6.3.2 Positive Definite Matrix

keys:

  1. Symmetric S : all eigenvalues > 0 \(\Leftrightarrow\) all pivots > 0 \(\Leftrightarrow\) all upper left determinants > 0

  2. A symmetric S is positive definite when \(x^TSx > 0\) for all vectors \(x\neq0\).

  3. \(A^TA\) is a positive definite matrix (when A has rank n).

    proof: A is m by n

    \[x^T(A^TA)x = (Ax)^T(Ax) = |Ax|^2 \ge 0 \\ if \ \ rank(A)=n : \ Ax \neq 0 \ \ for \ \ x \neq 0 \\ \Downarrow \\ |Ax|^2 > 0 \]

    So \(A^TA\) is a positive definite matrix.

    \(A^TA\) is then invertible, so the least-squares solution \(\widehat{x} = (A^TA)^{-1}A^Tb\) works fine.

example:

\[S = \left [ \begin{matrix} 2&-1&0 \\ -1&2&-1 \\ 0&-1&2 \end{matrix}\right] \\ pivots : 2,3/2,4/3 >0 \\ left \ \ upper \ \ det : 2,3,4 >0 \\ eigenvalues : 2-\sqrt{2},2,2+\sqrt{2} \\ f = x^TSx = 2x_1^2 + 2x_2^2 + 2x_3^2-2x_1x_2-2x_2x_3 = (x_1-x_2)^2 + (x_2-x_3)^2 + x_1^2 + x_3^2 > 0 \]

so S is a positive definite matrix.
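
All three tests from key 1 can be run numerically; a sketch (a successful Cholesky factorization is itself a positive-definiteness test):

```python
import numpy as np

S = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  2.0]])

print(np.linalg.eigvalsh(S))     # all > 0 : 2-sqrt(2), 2, 2+sqrt(2)
print([np.linalg.det(S[:k, :k]) for k in (1, 2, 3)])   # 2, 3, 4 > 0
np.linalg.cholesky(S)            # raises LinAlgError unless S is PD
```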

Minimum :

First derivatives : \(\frac{\partial f}{\partial x_1} = \frac{\partial f}{\partial x_2} = \frac{\partial f}{\partial x_3} =0\)

Second derivatives : the matrix of second derivatives \(\left[ \frac{\partial^2 f}{\partial x_i \partial x_j} \right]\) is positive definite

Maximum :

First derivatives : \(\frac{\partial f}{\partial x_1} = \frac{\partial f}{\partial x_2} = \frac{\partial f}{\partial x_3} =0\)

Second derivatives : the matrix of second derivatives \(\left[ \frac{\partial^2 f}{\partial x_i \partial x_j} \right]\) is negative definite

when \(f = x^TSx = 2x_1^2 + 2x_2^2 + 2x_3^2-2x_1x_2-2x_2x_3 = (x_1-x_2)^2 + (x_2-x_3)^2 + x_1^2 + x_3^2 = 1\)

\(x^TSx=1\) describes an ellipsoid in 3D. With \(S=Q\Lambda Q^{T}\), the columns of Q give the directions of the principal axes, and the eigenvalues in \(\Lambda\) set the lengths of those axes (axis length \(1/\sqrt{\lambda_i}\)).

6.3.3 Similar Matrices

If \(B = M^{-1}AM\) for some invertible matrix M, then A and B are similar.

example: \(A = \left [ \begin{matrix} 2&1 \\ 1&2 \end{matrix}\right]\)

  1. Special example: A is similar to \(\Lambda\) : \(S^{-1}A S = \Lambda \ or \ A=S\Lambda S^{-1} \Rightarrow \Lambda = \left [ \begin{matrix} 3&0 \\ 0&1 \end{matrix}\right]\);

  2. Another choice of M :

    \[B = M^{-1}AM =\left [ \begin{matrix} 1&-4 \\ 0&1 \end{matrix}\right] \left [ \begin{matrix} 2&1 \\ 1&2 \end{matrix}\right] \left [ \begin{matrix} 1&4 \\ 0&1 \end{matrix}\right] = \left [ \begin{matrix} -2&-15 \\ 1&6 \end{matrix}\right] \]

    \(A,\Lambda,B\) have the same \(\lambda's\).

    • A and \(\Lambda\) have the same eigenvalues.
    • A and B have the same eigenvalues and the same number of independent eigenvectors, but different eigenvectors (\(X_B=M^{-1}X_A\)); see the numeric sketch below.
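
A numeric sketch of the invariance of eigenvalues under similarity, using A and M from above:

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 2.0]])
M = np.array([[1.0, 4.0], [0.0, 1.0]])
B = np.linalg.inv(M) @ A @ M               # B = M^{-1} A M

print(np.sort(np.linalg.eigvals(A)))       # [1, 3]
print(np.sort(np.linalg.eigvals(B)))       # [1, 3] : same eigenvalues
```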

6.3.4 Jordan Theorem

Every square A is similar to a Jordan matrix:

The number of Jordan blocks equals the number of independent eigenvectors.

\[J = \left [ \begin{matrix} J_1&&&\\&J_2&&\\&&\ddots&\\&&&J_d\end{matrix}\right] \]

Best case : \(J=\Lambda\) (d = n, A is diagonalizable).
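
SymPy can compute the Jordan form exactly; a sketch (SymPy's Matrix.jordan_form is assumed here) with a matrix that has a repeated eigenvalue but only one eigenvector:

```python
import sympy as sp

A = sp.Matrix([[5, 1], [0, 5]])    # eigenvalue 5 twice, one eigenvector
P, J = A.jordan_form()             # A = P * J * P**-1
sp.pprint(J)                       # one 2x2 Jordan block [[5, 1], [0, 5]]
```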
