diff --git a/lectures/eigen_II.md b/lectures/eigen_II.md
index 77ee1165..d399ba84 100644
--- a/lectures/eigen_II.md
+++ b/lectures/eigen_II.md
@@ -11,7 +11,7 @@ kernelspec:
   name: python3
 ---

-+++ {"user_expressions": []}
+
 # Spectral Theory

@@ -27,19 +27,12 @@ In addition to what's in Anaconda, this lecture will need the following libraries:

 ```{code-cell} ipython3
 :tags: [hide-output]
-!pip install graphviz quantecon
-```
-
-```{admonition} graphviz
-:class: warning
-If you are running this lecture locally it requires [graphviz](https://www.graphviz.org)
-to be installed on your computer. Installation instructions for graphviz can be found
-[here](https://www.graphviz.org/download/)
+!pip install quantecon
 ```

 In this lecture we will begin with the foundational concepts in spectral theory.

-Then we will explore the Perron-Frobenius Theorem and the Neumann Series Lemma, and connect them to applications in Markov chains and networks. 
+Then we will explore the Perron-Frobenius Theorem and the Neumann Series Lemma, and connect them to applications in Markov chains and networks.

 We will use the following imports:

@@ -48,7 +41,6 @@ import matplotlib.pyplot as plt
 import numpy as np
 from numpy.linalg import eig
 import scipy as sp
-import graphviz as gv
 import quantecon as qe
 ```

@@ -119,7 +111,7 @@ In other words, if $w$ is a left eigenvector of matrix A, then $A^T w = \lambda w$

 This hints at how to compute left eigenvectors.

 ```{code-cell} ipython3
-A = np.array([[3, 2], 
+A = np.array([[3, 2],
               [1, 4]])

 # Compute right eigenvectors and eigenvalues
@@ -174,7 +166,7 @@ $A$ is a nonnegative square matrix.

 If a matrix $A \geq 0$ then,

-1. the dominant eigenvalue of $A$, $r(A)$, is real-valued and nonnegative. 
+1. the dominant eigenvalue of $A$, $r(A)$, is real-valued and nonnegative.
 2. for any other eigenvalue (possibly complex) $\lambda$ of $A$, $|\lambda| \leq r(A)$.
 3. we can find a nonnegative and nonzero eigenvector $v$ such that $Av = r(A)v$.

@@ -204,8 +196,8 @@ Now let's consider examples for each case.

 Consider the following irreducible matrix A:

 ```{code-cell} ipython3
-A = np.array([[0, 1, 0], 
-              [.5, 0, .5], 
+A = np.array([[0, 1, 0],
+              [.5, 0, .5],
               [0, 1, 0]])
 ```

@@ -228,8 +220,8 @@ Now we can go through our checklist to verify the claims of the Perron-Frobenius Theorem.

 Consider the following primitive matrix B:

 ```{code-cell} ipython3
-B = np.array([[0, 1, 1], 
-              [1, 0, 1], 
+B = np.array([[0, 1, 1],
+              [1, 0, 1],
               [1, 1, 0]])

 np.linalg.matrix_power(B, 2)
@@ -253,7 +245,7 @@ np.round(dominant_eigenvalue, 2)
 eig(B)
 ```

-+++ {"user_expressions": []}
+
 Now let's verify the claims of the Perron-Frobenius Theorem for the primitive matrix B:

@@ -298,7 +290,7 @@ def check_convergence(M):
     n_list = [1, 10, 100, 1000, 10000]

     for n in n_list:
-        
+
         # Compute (A/r)^n
         M_n = np.linalg.matrix_power(M/r, n)

@@ -313,8 +305,8 @@ def check_convergence(M):
 A1 = np.array([[1, 2],
               [1, 4]])

-A2 = np.array([[0, 1, 1], 
-              [1, 0, 1], 
+A2 = np.array([[0, 1, 1],
+              [1, 0, 1],
               [1, 1, 0]])

 A3 = np.array([[0.971, 0.029, 0.1, 1],
@@ -336,8 +328,8 @@ The convergence is not observed in cases of non-primitive matrices.

 Let's go through an example.

 ```{code-cell} ipython3
-B = np.array([[0, 1, 1], 
-              [1, 0, 0], 
+B = np.array([[0, 1, 1],
+              [1, 0, 0],
               [1, 0, 0]])

 # This shows that the matrix is not primitive
@@ -358,7 +350,7 @@ In fact we have already seen the theorem in action before in {ref}`the markov ch

 (spec_markov)=
 #### Example 3: Connection to Markov chains

-We are now prepared to bridge the languages spoken in the two lectures. 
+We are now prepared to bridge the languages spoken in the two lectures.

 A primitive matrix is both irreducible (or strongly connected in the language of graph theory) and aperiodic.
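+
+To see this characterization in action, here is a minimal sketch: we row-normalize
+the non-primitive matrix $B$ from the previous example into an illustrative
+stochastic matrix `P_B` (our own construction, not part of the lecture's code) and
+query both properties with the `MarkovChain` class from `quantecon`.
+
+```{code-cell} ipython3
+# Row-normalized version of the non-primitive B defined earlier
+P_B = np.array([[0, 0.5, 0.5],
+                [1, 0, 0],
+                [1, 0, 0]])
+
+mc = qe.MarkovChain(P_B)
+
+# Irreducible but periodic (period 2), hence not primitive
+mc.is_irreducible, mc.is_aperiodic
+```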
@@ -410,7 +402,7 @@ $$

 This is proven in {cite}`sargent2023economic` and a nice discussion can be found [here](https://math.stackexchange.com/questions/2433997/can-all-matrices-be-decomposed-as-product-of-right-and-left-eigenvector).

-In the formula $\lambda_i$ is an eigenvalue of $P$ and $v_i$ and $w_i$ are the right and left eigenvectors corresponding to $\lambda_i$. 
+In the formula, $\lambda_i$ is an eigenvalue of $P$, and $v_i$ and $w_i$ are the right and left eigenvectors corresponding to $\lambda_i$.

 Premultiplying $P^t$ by arbitrary $\psi \in \mathscr{D}(S)$ and rearranging now gives

@@ -418,14 +410,14 @@ $$
 \psi P^t-\psi^*=\sum_{i=1}^{n-1} \lambda_i^t \psi v_i w_i^{\top}
 $$

-Recall that eigenvalues are ordered from smallest to largest from $i = 1 ... n$. 
+Recall that eigenvalues are ordered from smallest to largest for $i = 1, \ldots, n$.

 As we have seen, the largest eigenvalue for a primitive stochastic matrix is one.

-This can be proven using [Gershgorin Circle Theorem](https://en.wikipedia.org/wiki/Gershgorin_circle_theorem), 
+This can be proven using the [Gershgorin Circle Theorem](https://en.wikipedia.org/wiki/Gershgorin_circle_theorem),
 but it is outside the scope of this lecture.

-So by the statement (6) of Perron-Frobenius Theorem, $\lambda_i<1$ for all $i<n$. 
+So by statement (6) of the Perron-Frobenius Theorem, $\lambda_i < 1$ for all $i < n$.
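+
+To make the convergence claim concrete, here is a minimal numerical sketch, in
+which the stochastic matrix `P` and the initial distribution `psi` are
+illustrative choices: the left eigenvector of `P` for eigenvalue one, normalized
+to sum to one, gives $\psi^*$, and $\psi P^t$ approaches it at a geometric rate
+governed by the second-largest eigenvalue.
+
+```{code-cell} ipython3
+P = np.array([[0.9, 0.1],
+              [0.4, 0.6]])
+
+# psi* is a left eigenvector of P for eigenvalue 1, i.e. a right
+# eigenvector of P.T, normalized to sum to one
+eigvals, eigvecs = eig(P.T)
+i = np.argmin(np.abs(eigvals - 1))
+psi_star = np.real(eigvecs[:, i])
+psi_star = psi_star / psi_star.sum()
+
+psi = np.array([1.0, 0.0])  # an arbitrary initial distribution
+for t in [1, 5, 10, 50]:
+    dist = np.linalg.norm(psi @ np.linalg.matrix_power(P, t) - psi_star)
+    print(f"t = {t:2d}, distance to psi*: {dist:.2e}")
+```
+
+Here the second-largest eigenvalue of `P` is $0.5$, so the distance shrinks by
+roughly half at each step, in line with the $\lambda_i^t$ terms above.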