
Commit 10d0d9d: changes necessary for pdf build
1 parent: 55a45cd

File tree: 6 files changed (+18, -19 lines)

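The common thread in the hunks below: `align*` and `eqnarray` are top-level display environments that a LaTeX PDF build rejects inside `$$ ... $$`, while `aligned` is an inner math-mode environment accepted by both MathJax and LaTeX. Presumably that is the motivation for this commit; a minimal illustration of the pattern:

```latex
% Breaks a LaTeX PDF build: align* starts its own display,
% so it cannot be nested inside $$ ... $$
% $$ \begin{align*} a &= b \\ c &= d \end{align*} $$

% Works in both MathJax and a LaTeX PDF build:
$$
\begin{aligned}
a &= b \\
c &= d
\end{aligned}
$$
```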

lectures/applications/heterogeneity.md

Lines changed: 4 additions & 5 deletions
@@ -101,12 +101,12 @@ When treatment is randomly assigned, we can estimate average treatment
 effects because
 
 $$
-\begin{align*}
+\begin{aligned}
 E[y_i(1) - y_i(0) ] = & E[y_i(1)] - E[y_i(0)] \\
 & \text{random assignment } \\
 = & E[y_i(1) | d_i = 1] - E[y_i(0) | d_i = 0] \\
 = & E[y_i | d_i = 1] - E[y_i | d_i = 0 ]
-\end{align*}
+\end{aligned}
 $$
 
 ### Average Treatment Effects
@@ -164,12 +164,12 @@ logic that lets us estimate unconditional average treatment effects
 also suggests that we can estimate conditional average treatment effects.
 
 $$
-\begin{align*}
+\begin{aligned}
 E[y_i(1) - y_i(0) |X_i=x] = & E[y_i(1)|X_i = x] - E[y_i(0)|X_i=x] \\
 & \text{random assignment } \\
 = & E[y_i(1) | d_i = 1, X_i=x] - E[y_i(0) | d_i = 0, X_i=x] \\
 = & E[y_i | d_i = 1, X_i = x] - E[y_i | d_i = 0, X_i=x ]
-\end{align*}
+\end{aligned}
 $$
 
 Conditional average treatment effects tell us whether there are
@@ -209,7 +209,6 @@ $S(x)$ approximates $s_0(x)$ is to look at the best linear
 projection of $s_0(x)$ on $S(x)$.
 
 $$
-\DeclareMathOperator*{\argmin}{arg\,min}
 \beta_0, \beta_1 = \argmin_{b_0,b_1} E[(s_0(x) -
 b_0 - b_1 (S(x)-E[S(x)]))^2]
 $$
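The identification argument in the first hunk (random assignment makes the difference in observed group means equal the average treatment effect) can be checked with a small simulation. A sketch, not from the lecture; all names and parameter values here are invented:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Potential outcomes y_i(0) and y_i(1), with a true ATE of 2.0
y0 = rng.normal(loc=1.0, scale=1.0, size=n)
y1 = y0 + 2.0 + rng.normal(scale=0.5, size=n)

# Random assignment: d_i is independent of the potential outcomes
d = rng.integers(0, 2, size=n)
y = np.where(d == 1, y1, y0)

# E[y | d=1] - E[y | d=0] recovers E[y(1) - y(0)], as in the derivation
ate_hat = y[d == 1].mean() - y[d == 0].mean()
print(ate_hat)
```

With assignment independent of `(y0, y1)`, the difference in means converges to the true ATE of 2.0 as `n` grows.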

lectures/applications/regression.md

Lines changed: 6 additions & 6 deletions
@@ -123,7 +123,7 @@ only the livable square footage of the home.
 The linear regression model for this situation is
 
 $$
-\log(\text{price}) = \beta_0 + \beta_1 \text{sqft_living} + \epsilon
+\log(\text{price}) = \beta_0 + \beta_1 \text{sqft\_living} + \epsilon
 $$
 
 $\beta_0$ and $\beta_1$ are called parameters (also coefficients or
@@ -132,14 +132,14 @@ that best fit the data.
 
 $\epsilon$ is the error term. It would be unusual for the observed
 $\log(\text{price})$ to be an exact linear function of
-$\text{sqft_living}$. The error term captures the deviation of
-$\log(\text{price})$ from a linear function of $\text{sqft_living}$.
+$\text{sqft\_living}$. The error term captures the deviation of
+$\log(\text{price})$ from a linear function of $\text{sqft\_living}$.
 
 The linear regression algorithm will choose the parameters that minimize the
 *mean squared error* (MSE) function, which for our example is written.
 
 $$
-\frac{1}{N} \sum_{i=1}^N \left(\log(\text{price}_i) - (\beta_0 + \beta_1 \text{sqft_living}_i) \right)^2
+\frac{1}{N} \sum_{i=1}^N \left(\log(\text{price}_i) - (\beta_0 + \beta_1 \text{sqft\_living}_i) \right)^2
 $$
 
 The output of this algorithm is the straight line (hence linear) that passes as
@@ -218,7 +218,7 @@ Suppose that in addition to `sqft_living`, we also wanted to use the `bathrooms`
 In this case, the linear regression model is
 
 $$
-\log(\text{price}) = \beta_0 + \beta_1 \text{sqft_living} +
+\log(\text{price}) = \beta_0 + \beta_1 \text{sqft\_living} +
 \beta_2 \text{bathrooms} + \epsilon
 $$
 
@@ -227,7 +227,7 @@ We could keep adding one variable at a time, along with a new $\beta_{j}$ coeffi
 Let's write this equation in vector/matrix form as
 
 $$
-\underbrace{\begin{bmatrix} \log(\text{price}_1) \\ \log(\text{price}_2) \\ \vdots \\ \log(\text{price}_N)\end{bmatrix}}_Y = \underbrace{\begin{bmatrix} 1 & \text{sqft_living}_1 & \text{bathrooms}_1 \\ 1 & \text{sqft_living}_2 & \text{bathrooms}_2 \\ \vdots & \vdots & \vdots \\ 1 & \text{sqft_living}_N & \text{bathrooms}_N \end{bmatrix}}_{X} \underbrace{\begin{bmatrix} \beta_0 \\ \beta_1 \\ \beta_2 \end{bmatrix}}_{\beta} + \epsilon
+\underbrace{\begin{bmatrix} \log(\text{price}_1) \\ \log(\text{price}_2) \\ \vdots \\ \log(\text{price}_N)\end{bmatrix}}_Y = \underbrace{\begin{bmatrix} 1 & \text{sqft\_living}_1 & \text{bathrooms}_1 \\ 1 & \text{sqft\_living}_2 & \text{bathrooms}_2 \\ \vdots & \vdots & \vdots \\ 1 & \text{sqft\_living}_N & \text{bathrooms}_N \end{bmatrix}}_{X} \underbrace{\begin{bmatrix} \beta_0 \\ \beta_1 \\ \beta_2 \end{bmatrix}}_{\beta} + \epsilon
 $$
 
 Notice that we can add as many columns to $X$ as we'd like and the linear
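The vector/matrix form in the last hunk maps directly onto a least-squares solver. A sketch on synthetic data (the variable names mirror the math; the values are invented, not the lecture's housing dataset):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000

# Synthetic stand-ins for sqft_living and bathrooms
sqft = rng.uniform(500, 4000, size=n)
baths = rng.integers(1, 5, size=n).astype(float)

# True parameters (beta_0, beta_1, beta_2), chosen arbitrarily
beta_true = np.array([10.0, 4e-4, 0.05])

# Build X with a leading column of ones, exactly as in the matrix equation
X = np.column_stack([np.ones(n), sqft, baths])
log_price = X @ beta_true + rng.normal(scale=0.1, size=n)

# Minimize ||Y - X beta||^2, i.e. the MSE up to a factor of 1/N
beta_hat, *_ = np.linalg.lstsq(X, log_price, rcond=None)
print(beta_hat)
```

Adding another regressor is just another column in `X` and another entry in `beta`, which is the point the surrounding text makes.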

lectures/problem_sets/problem_set_3.md

Lines changed: 2 additions & 2 deletions
@@ -197,10 +197,10 @@ face value $M$, yield to maturity $i$, and periods to maturity
 $N$ is
 
 $$
-\begin{align*}
+\begin{aligned}
 P &= \left(\sum_{n=1}^N \frac{C}{(i+1)^n}\right) + \frac{M}{(1+i)^N} \\
 &= C \left(\frac{1 - (1+i)^{-N}}{i} \right) + M(1+i)^{-N}
-\end{align*}
+\end{aligned}
 $$
 
 In the code cell below, we have defined variables for `i`, `M` and `C`.
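The two lines of the bond-price derivation in this hunk can be checked against each other numerically. A sketch with invented values for `i`, `M`, `C`, and `N` (the lecture's actual code cell defines its own):

```python
# Hypothetical parameter values for illustration only
i, M, C, N = 0.05, 100.0, 4.0, 10

# First line: discount each coupon payment, then add the discounted face value
P_sum = sum(C / (1 + i) ** n for n in range(1, N + 1)) + M / (1 + i) ** N

# Second line: closed form via the geometric-series identity
P_closed = C * (1 - (1 + i) ** -N) / i + M * (1 + i) ** -N

print(P_sum, P_closed)
```

The two expressions agree to floating-point precision, confirming the algebra of the second line.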

lectures/python_fundamentals/functions.md

Lines changed: 2 additions & 2 deletions
@@ -633,10 +633,10 @@ that can be interchanged.
 That is, the following are identical.
 
 $$
-\begin{eqnarray}
+\begin{aligned}
 f(K, L) &= z\, K^{\alpha} L^{1-\alpha}\\
 f(K_2, L_2) &= z\, K_2^{\alpha} L_2^{1-\alpha}
-\end{eqnarray}
+\end{aligned}
 $$
 
 The same concept applies to Python functions, where the arguments are just
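The point of the hunk's two equations, sketched as a Python function (the names `f`, `K`, `L` match the math; the values of `z` and `alpha` are invented):

```python
z, alpha = 1.0, 0.33

def f(K, L):
    """Cobb-Douglas production function: z * K**alpha * L**(1 - alpha)."""
    return z * K ** alpha * L ** (1 - alpha)

# K and L are placeholders: passing the same values under different
# outside names (K2, L2) gives an identical result
out1 = f(2.0, 3.0)
K2, L2 = 2.0, 3.0
out2 = f(K2, L2)
print(out1 == out2)
```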

lectures/scientific/applied_linalg.md

Lines changed: 2 additions & 2 deletions
@@ -343,11 +343,11 @@ $\begin{bmatrix} 1 & 2 \\ 3 & 1 \end{bmatrix}$ then we can multiply both sides b
 to get
 
 $$
-\begin{align*}
+\begin{aligned}
 \begin{bmatrix} 1 & 2 \\ 3 & 1 \end{bmatrix}^{-1}\begin{bmatrix} 1 & 2 \\ 3 & 1 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} &= \begin{bmatrix} 1 & 2 \\ 3 & 1 \end{bmatrix}^{-1}\begin{bmatrix} 3 \\ 4 \end{bmatrix} \\
 I \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} &= \begin{bmatrix} 1 & 2 \\ 3 & 1 \end{bmatrix}^{-1} \begin{bmatrix} 3 \\ 4 \end{bmatrix} \\
 \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} &= \begin{bmatrix} 1 & 2 \\ 3 & 1 \end{bmatrix}^{-1} \begin{bmatrix} 3 \\ 4 \end{bmatrix}
-\end{align*}
+\end{aligned}
 $$
 
 Computing the inverse requires that a matrix be square and satisfy some other conditions
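The derivation in this hunk can be verified numerically with the same matrix and right-hand side. A sketch; note that `np.linalg.solve` is generally preferred over explicitly forming the inverse:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 1.0]])
b = np.array([3.0, 4.0])

# Multiply both sides by A^{-1}, exactly as in the derivation
x_via_inverse = np.linalg.inv(A) @ b

# Equivalent result, but more numerically stable in general
x = np.linalg.solve(A, b)
print(x)
```

Both approaches return the solution of `A @ x = b`; for this system that is `x_1 = x_2 = 1`.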

lectures/scientific/numpy_arrays.md

Lines changed: 2 additions & 2 deletions
@@ -521,10 +521,10 @@ face value $M$, yield to maturity $i$, and periods to maturity
 $N$ is
 
 $$
-\begin{align*}
+\begin{aligned}
 P &= \left(\sum_{n=1}^N \frac{C}{(i+1)^n}\right) + \frac{M}{(1+i)^N} \\
 &= C \left(\frac{1 - (1+i)^{-N}}{i} \right) + M(1+i)^{-N}
-\end{align*}
+\end{aligned}
 $$
 
 In the code cell below, we have defined variables for `i`, `M` and `C`.
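Since this lecture is about NumPy arrays, the same bond-price identity can be sketched in vectorized form, with one discounted-coupon term per period. Parameter values are invented; the lecture's own cell defines its `i`, `M`, and `C`:

```python
import numpy as np

# Hypothetical parameter values for illustration only
i, M, C, N = 0.05, 100.0, 4.0, 10

# Vectorized first line: an array of discounted coupons, summed at once
periods = np.arange(1, N + 1)
P_sum = (C / (1 + i) ** periods).sum() + M / (1 + i) ** N

# Closed form from the second line of the derivation
P_closed = C * (1 - (1 + i) ** -N) / i + M * (1 + i) ** -N

print(P_sum, P_closed)
```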
