lectures/eigen_II.md (+12 −9)
@@ -50,7 +50,7 @@ Often, in economics, the matrix that we are dealing with is nonnegative.
 Nonnegative matrices have several special and useful properties.
 
-In this section we discuss some of them --- in particular, the connection
+In this section we will discuss some of them --- in particular, the connection
 between nonnegativity and eigenvalues.
 
 Let $a^{k}_{ij}$ be element $(i,j)$ of $A^k$.
@@ -63,7 +63,7 @@ We denote this as $A \geq 0$.
 (irreducible)=
 ### Irreducible matrices
 
-We have (informally) introduced irreducible matrices in the Markov chain lecture (TODO: link to Markov chain lecture).
+We have (informally) introduced irreducible matrices in the [Markov chain lecture](markov_chains_II.md).
 
 Here we will introduce this concept formally.
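As a quick aside on the hunk above: irreducibility can be checked numerically using the standard characterization that a nonnegative $n \times n$ matrix $A$ is irreducible if and only if every entry of $(I + A)^{n-1}$ is strictly positive. A minimal sketch (the matrices `A` and `B` below are illustrative, not from the lecture):

```python
import numpy as np

def is_irreducible(A):
    """Check irreducibility of a nonnegative square matrix A.

    Uses the characterization: A is irreducible iff every entry of
    (I + A)^(n-1) is strictly positive.
    """
    n = A.shape[0]
    return bool(np.all(np.linalg.matrix_power(np.eye(n) + A, n - 1) > 0))

# Irreducible: both states communicate
A = np.array([[0.5, 0.5],
              [0.3, 0.7]])

# Reducible: state 0 cannot be reached from state 1... wait, here
# state 1 can reach state 0, but state 0 never leaves itself
B = np.array([[1.0, 0.0],
              [0.5, 0.5]])
```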
@@ -157,9 +157,8 @@ This is a more common expression and where the name left eigenvectors originates
 For a nonnegative matrix $A$ the behavior of $A^k$ as $k \to \infty$ is controlled by the eigenvalue with the largest
 absolute value, often called the **dominant eigenvalue**.
 
-For a matrix $A$, the Perron-Frobenius Theorem characterizes certain
-properties of the dominant eigenvalue and its corresponding eigenvector when
-$A$ is a nonnegative square matrix.
+For a nonnegative square matrix $A$, the Perron-Frobenius Theorem characterizes certain
+properties of the dominant eigenvalue and its corresponding eigenvector.
 
 ```{prf:Theorem} Perron-Frobenius Theorem
 :label: perron-frobenius
@@ -179,7 +178,9 @@ If $A$ is primitive then,
 6. the inequality $|\lambda| \leq r(A)$ is **strict** for all eigenvalues $\lambda$ of $A$ distinct from $r(A)$, and
 7. with $v$ and $w$ normalized so that the inner product of $w$ and $v = 1$, we have
-   $r(A)^{-m} A^m$ converges to $v w^{\top}$ when $m \rightarrow \infty$. $v w^{\top}$ is called the **Perron projection** of $A$.
+   $r(A)^{-m} A^m$ converges to $v w^{\top}$ when $m \rightarrow \infty$.
+   \
+   The matrix $v w^{\top}$ is called the **Perron projection** of $A$.
 ```
 
 (This is a relatively simple version of the theorem --- for more details see
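Item 7 of the reworded theorem is easy to sanity-check numerically. A sketch, with an illustrative primitive matrix `A` that is not from the lecture:

```python
import numpy as np

# A primitive nonnegative matrix (illustrative values)
A = np.array([[0.5, 0.4],
              [0.3, 0.7]])

# Dominant (Perron) eigenvalue r(A) with right eigenvector v
eigvals, V = np.linalg.eig(A)
i = np.argmax(np.abs(eigvals))
r = eigvals[i].real
v = V[:, i].real

# A left eigenvector w of A is a right eigenvector of A^T
eigvals_T, W = np.linalg.eig(A.T)
j = np.argmax(np.abs(eigvals_T))
w = W[:, j].real

# Normalize so that <w, v> = 1, then form the Perron projection v w^T
w = w / (w @ v)
perron_proj = np.outer(v, w)

# Item 7: r(A)^{-m} A^m converges to the Perron projection as m grows
m = 50
approx = np.linalg.matrix_power(A, m) / r**m
assert np.allclose(approx, perron_proj, atol=1e-8)
```

The normalization step makes the outer product invariant to the arbitrary signs that `np.linalg.eig` may attach to the eigenvectors.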
@@ -299,7 +300,7 @@ def check_convergence(M):
     # Calculate the norm of the difference matrix
     diff_norm = np.linalg.norm(diff, 'fro')
-    print(f"n = {n}, norm of the difference: {diff_norm:.10f}")
+    print(f"n = {n}, error = {diff_norm:.10f}")
 
 A1 = np.array([[1, 2],
@@ -394,6 +395,8 @@ In the {ref}`exercise<mc1_ex_1>`, we stated that the convergence rate is determi
 This can be proven using what we have learned here.
 
+Please note that we use $\mathbb{1}$ for a vector of ones in this lecture.
+
 With Markov model $M$ with state space $S$ and transition matrix $P$, we can write $P^t$ as
 
 $$
@@ -402,7 +405,7 @@ $$
 This is proven in {cite}`sargent2023economic` and a nice discussion can be found [here](https://math.stackexchange.com/questions/2433997/can-all-matrices-be-decomposed-as-product-of-right-and-left-eigenvector).
 
-In the formula $\lambda_i$ is an eigenvalue of $P$ and $v_i$ and $w_i$ are the right and left eigenvectors corresponding to $\lambda_i$.
+In this formula $\lambda_i$ is an eigenvalue of $P$ with corresponding right and left eigenvectors $v_i$ and $w_i$.
 
 Premultiplying $P^t$ by arbitrary $\psi \in \mathscr{D}(S)$ and rearranging now gives
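The decomposition $P^t = \sum_i \lambda_i^t v_i w_i^{\top}$ discussed in this hunk can be verified directly for a small diagonalizable transition matrix. A sketch (the matrix `P` is illustrative, not from the lecture):

```python
import numpy as np

# A small transition matrix, diagonalizable with eigenvalues 1 and 0.5
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])

# Right eigenvectors v_i are the columns of V; the matching left
# eigenvectors w_i are the rows of inv(V), so that w_i @ v_j = delta_ij
eigvals, V = np.linalg.eig(P)
W = np.linalg.inv(V)

# Reconstruct P^t from the spectral decomposition  P^t = sum_i λ_i^t v_i w_i^T
t = 5
Pt = sum(eigvals[i]**t * np.outer(V[:, i], W[i]) for i in range(len(eigvals)))
assert np.allclose(Pt, np.linalg.matrix_power(P, t))
```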
@@ -485,7 +488,7 @@ The following is a fundamental result in functional analysis that generalizes
 Let $A$ be a square matrix and let $A^k$ be the $k$-th power of $A$.
 
-Let $r(A)$ be the dominant eigenvector or as it is commonly called the *spectral radius*, defined as $\max_i |\lambda_i|$, where
+Let $r(A)$ be the **spectral radius** of $A$, defined as $\max_i |\lambda_i|$, where
 
 * $\{\lambda_i\}_i$ is the set of eigenvalues of $A$ and
 * $|\lambda_i|$ is the modulus of the complex number $\lambda_i$
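The corrected definition translates directly into code. A one-liner sketch (the matrix `A` is an illustrative example, not from the lecture):

```python
import numpy as np

def spectral_radius(A):
    """r(A) = max_i |λ_i|, the spectral radius of a square matrix A."""
    return np.max(np.abs(np.linalg.eigvals(A)))

# Eigenvalues of this matrix are 2 and -1, so r(A) = 2
A = np.array([[0, 2],
              [1, 1]])
```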
lectures/markov_chains_I.md (+11 −11)
@@ -98,7 +98,7 @@ In other words,
 If $P$ is a stochastic matrix, then so is the $k$-th power $P^k$ for all $k \in \mathbb N$.
 
-Checking this is {ref}`one of the exercises <mc1_ex_3>` below.
+Checking this is {ref}`the first exercise <mc1_ex_3>` below.
 
 ### Markov chains
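The claim in this hunk, that powers of a stochastic matrix stay stochastic, is easy to confirm numerically. A sketch with an illustrative `P` (not the lecture's matrix):

```python
import numpy as np

# A stochastic matrix: nonnegative entries, each row summing to one
P = np.array([[0.7, 0.3],
              [0.2, 0.8]])

# Every power P^k is again stochastic
for k in range(1, 6):
    Pk = np.linalg.matrix_power(P, k)
    assert np.all(Pk >= 0)
    assert np.allclose(Pk.sum(axis=1), 1.0)
```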
@@ -255,11 +255,11 @@ We'll cover some of these applications below.
 (mc_eg3)=
 #### Example 3
 
-Imam and Temple {cite}`imampolitical` categorize political institutions into three types: democracy (D), autocracy (A), and an intermediate state called anocracy (N).
+Imam and Temple {cite}`imampolitical` categorize political institutions into three types: democracy $\text{(D)}$, autocracy $\text{(A)}$, and an intermediate state called anocracy $\text{(N)}$.
 
-Each institution can have two potential development regimes: collapse (C) and growth (G). This results in six possible states: DG, DC, NG, NC, AG, and AC.
+Each institution can have two potential development regimes: collapse $\text{(C)}$ and growth $\text{(G)}$. This results in six possible states: $\text{DG, DC, NG, NC, AG}$, and $\text{AC}$.
 
-The lower probability of transitioning from NC to itself indicates that collapses in anocracies quickly evolve into changes in the political institution.
+The lower probability of transitioning from $\text{NC}$ to itself indicates that collapses in anocracies quickly evolve into changes in the political institution.
 
 Democracies tend to have longer-lasting growth regimes compared to autocracies as indicated by the lower probability of transitioning from growth to growth in autocracies.
@@ -393,7 +393,7 @@ In these exercises, we'll take the state space to be $S = 0,\ldots, n-1$.
 To simulate a Markov chain, we need
 
 1. a stochastic matrix $P$ and
-1. a probability mass function $\psi_0$ of length $n$ from which to draw a initial realization of $X_0$.
+1. a probability mass function $\psi_0$ of length $n$ from which to draw an initial realization of $X_0$.
 
 The Markov chain is then constructed as follows:
@@ -405,7 +405,7 @@ The Markov chain is then constructed as follows:
 To implement this simulation procedure, we need a method for generating draws
 from a discrete distribution.
 
-For this task, we'll use `random.draw` from [QuantEcon](http://quantecon.org/quantecon-py).
+For this task, we'll use `random.draw` from [QuantEcon.py](http://quantecon.org/quantecon-py).
 
 To use `random.draw`, we first need to convert the probability mass function
 to a cumulative distribution
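The pmf-to-cdf conversion described here can be sketched in plain NumPy: converting each row of $P$ to a cumulative distribution and inverting a uniform draw is the same idea that `random.draw` implements. The function name `mc_sample_path` and the matrix `P` below are illustrative assumptions, not the lecture's own code:

```python
import numpy as np

def mc_sample_path(P, psi_0, ts_length=1_000, seed=0):
    """Simulate a Markov chain path of length ts_length.

    Each draw converts a probability mass function to a cumulative
    distribution and inverts a uniform random number.
    """
    rng = np.random.default_rng(seed)
    P_cdf = np.cumsum(P, axis=1)          # row-wise cumulative distributions
    X = np.empty(ts_length, dtype=int)
    X[0] = np.searchsorted(np.cumsum(psi_0), rng.uniform())
    for t in range(ts_length - 1):
        X[t + 1] = np.searchsorted(P_cdf[X[t]], rng.uniform())
    return X

P = np.array([[0.9, 0.1],
              [0.5, 0.5]])
X = mc_sample_path(P, psi_0=(0.5, 0.5), ts_length=10_000)
```

For this `P` the stationary probability of state 0 is $5/6$, so the long-run fraction of time the simulated path spends in state 0 should be close to that value.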
@@ -491,7 +491,7 @@ always close to 0.25 (for the `P` matrix above).
 ### Using QuantEcon's routines
 
-[QuantEcon.py](http://quantecon.org/quantecon-py) has routines for handling Markov chains, including simulation.
+QuantEcon.py has routines for handling Markov chains, including simulation.
 
 Here's an illustration using the same $P$ as the preceding example
@@ -585,15 +585,15 @@ $$
 There are $n$ such equations, one for each $y \in S$.
 
-If we think of $\psi_{t+1}$ and $\psi_t$ as *row vectors*, these $n$ equations are summarized by the matrix expression
+If we think of $\psi_{t+1}$ and $\psi_t$ as row vectors, these $n$ equations are summarized by the matrix expression
 
 ```{math}
 :label: fin_mc_fr
 
 \psi_{t+1} = \psi_t P
 ```
 
-Thus, to move a distribution forward one unit of time, we postmultiply by $P$.
+Thus, we postmultiply by $P$ to move a distribution forward one unit of time.
 
 By postmultiplying $m$ times, we move a distribution forward $m$ steps into the future.
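The push-forward rule $\psi_{t+1} = \psi_t P$ edited in this hunk is a one-liner in code. A sketch with an illustrative transition matrix (not the lecture's example):

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

psi_0 = np.array([1.0, 0.0])  # all probability mass starts on state 0

# One step forward: postmultiply the row vector by P
psi_1 = psi_0 @ P

# m steps forward: postmultiply by P^m
m = 3
psi_m = psi_0 @ np.linalg.matrix_power(P, m)
```

Each resulting vector is again a distribution: its entries are nonnegative and sum to one.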
@@ -671,7 +671,7 @@
 The distributions we have been studying can be viewed either
 
 1. as probabilities or
-1. as cross-sectional frequencies that a Law of Large Numbers leads us to anticipate for large samples.
+1. as cross-sectional frequencies that the Law of Large Numbers leads us to anticipate for large samples.
 
 To illustrate, recall our model of employment/unemployment dynamics for a given worker {ref}`discussed above <mc_eg1>`.
@@ -788,7 +788,7 @@ Not surprisingly it tends to zero as $\beta \to 0$, and to one as $\alpha \to 0$
 ### Calculating stationary distributions
 
-A stable algorithm for computing stationary distributions is implemented in [QuantEcon.py](http://quantecon.org/quantecon-py).
+A stable algorithm for computing stationary distributions is implemented in QuantEcon.py.
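For readers without the QuantEcon.py routine at hand, the underlying computation can be sketched directly: a stationary distribution solves $\psi^* = \psi^* P$ together with $\sum_i \psi^*_i = 1$. This is a least-squares sketch of the idea, not the stable production algorithm the lecture refers to, and the matrix `P` below is illustrative:

```python
import numpy as np

def stationary_dist(P):
    """Compute a stationary distribution ψ* satisfying ψ* = ψ* P.

    Stacks the linear system (P^T - I) ψ = 0 with the normalization
    Σ_i ψ_i = 1 and solves by least squares.
    """
    n = P.shape[0]
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    psi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return psi

P = np.array([[0.9, 0.1],
              [0.5, 0.5]])
psi_star = stationary_dist(P)
```

For this `P`, balancing the flows between the two states gives $\psi^* = (5/6,\ 1/6)$, which the function recovers.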