Warming up with the RBC Model

Alisdair McKay

Consider an RBC model in which preferences are given by
\[\mathbb E_0 \sum_{t=0}^\infty \beta^t \frac{C_t^{1-\gamma}}{1-\gamma},\]
production follows
\[Y_t = Z_t K_{t-1}^\alpha \bar L^{1-\alpha},\]
capital evolves according to
\[K_t = (1-\delta)K_{t-1}+Y_t - C_t,\]
and productivity evolves according to
\[\log Z_t = \rho \log Z_{t-1} + \varepsilon_{t},\]

where \(\varepsilon\) is an i.i.d. mean-zero innovation. I have adopted the convention of dating the capital stock selected in period \(t\) and used in production in \(t+1\) as \(K_t\), so that a variable dated \(t\) is measurable with respect to date-\(t\) information.

The model can be summarized by the following expectational difference equations
\[ \begin{align}\begin{aligned}C_t^{-\gamma} &= \beta \mathbb E_t \left[ R_{t+1} C_{t+1}^{-\gamma} \right]\\R_t &= \alpha Z_t \left( K_{t-1} / \bar L \right)^{\alpha-1} + 1 - \delta\\K_t &= (1-\delta)K_{t-1}+Y_t - C_t\\Y_t &= Z_t K_{t-1}^\alpha \bar L^{1-\alpha}\\\log Z_t &= \rho \log Z_{t-1} + \varepsilon_{t}.\end{aligned}\end{align} \]

The unknown is a stochastic process for \(X_t \equiv \left(C_t,R_t,K_t,Y_t,Z_t\right)\). We will linearize the model around the deterministic steady state and then solve for the dynamics of the economy that are locally accurate in a neighborhood of the steady state. We will fix \(\bar L = 1\). The steady state is then

\[ \begin{align}\begin{aligned}Z &= 1\\R &= 1/ \beta\\K &= \left( \frac{R-1+\delta}{\alpha} \right)^{1/(\alpha - 1)}\\Y &= K^\alpha\\C &= Y - \delta K.\end{aligned}\end{align} \]
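To make these expressions concrete, here is a minimal Python sketch that evaluates the steady state. The parameter values are purely illustrative assumptions (the text above does not calibrate the model), and the vector ordering follows \(X_t = (C_t,R_t,K_t,Y_t,Z_t)\).

```python
import numpy as np

# Illustrative parameter values -- assumptions, not a calibration from the text
alpha, beta, gamma, delta, rho = 0.36, 0.99, 2.0, 0.025, 0.95

# Steady state with Z = 1 and L_bar = 1
Z = 1.0
R = 1.0 / beta
K = ((R - 1 + delta) / alpha) ** (1.0 / (alpha - 1.0))
Y = K ** alpha
C = Y - delta * K

Xss = np.array([C, R, K, Y, Z])  # ordering (C, R, K, Y, Z)
print(Xss)
```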
We then linearize the model equations to the form
\[A \mathbb E_t X_{t+1} + B X_t + C X_{t-1} + \mathcal E \epsilon_t = 0,\]

where \(A\), \(B\), \(C\), and \(\mathcal E\) are matrices of coefficients and \(\epsilon_t\) is a vector of shocks at date \(t\).

Solving the linearized model

There are a variety of methods for solving linear rational expectations models. An intuitive solution method is provided by Rendahl (2017) [1], which uses a guess-and-verify approach based on a solution of the form \(X_t = P X_{t-1} + Q \epsilon_t\), where \(P\) and \(Q\) are coefficient matrices to be found. Plugging in this functional form gives

\[ \begin{align}\begin{aligned}A \mathbb E_t \left[ P X_t + Q \epsilon_{t+1} \right] + B X_t + C X_{t-1} + \mathcal E \epsilon_t &= 0\\A \mathbb E_t \left[ P \left( P X_{t-1} + Q \epsilon_t \right) + Q \epsilon_{t+1} \right] + B \left( P X_{t-1} + Q \epsilon_t \right) + C X_{t-1} + \mathcal E \epsilon_t &= 0\\\left[A P^2 + B P + C \right] X_{t-1} + \left[ A P Q + B Q + \mathcal E \right] \epsilon_t &= 0.\end{aligned}\end{align} \]

\(\epsilon_{t+1}\) drops out of the linearized model because \(A Q \mathbb E_t \epsilon_{t+1} = 0\). So the distribution of risks facing the economy does not influence the solution, a property we call “certainty equivalence.” We seek values of \(P\) and \(Q\) that solve the above equation for all values of \(X_{t-1}\) and \(\epsilon_t\), which means the two expressions in square brackets must each be zero. Rendahl’s method finds a solution to \(A P^2 + B P + C = 0\) iteratively, starting from an initial guess \(P^{(0)}\). The method then supposes that the current guess \(P^{(n)}\) will apply in period \(t+1\) and solves for the updated guess \(P^{(n+1)}\) from

\[ \begin{align}\begin{aligned}A P^{(n)} P^{(n+1)} + B P^{(n+1)} + C &= 0\\P^{(n+1)} &= - \left[A P^{(n)} + B \right]^{-1} C.\end{aligned}\end{align} \]

One repeats this iteration until the solution for \(P^{(n+1)}\) solves \(A P^2 + B P + C = 0\) to within some tolerance. With the solution for \(P\) in hand, one solves for \(Q\) from

\[ \begin{align}\begin{aligned}A P Q + B Q + \mathcal E &= 0\\Q &= -\left[ A P + B \right]^{-1} \mathcal E.\end{aligned}\end{align} \]
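The iteration itself is only a few lines of code. Here is a minimal sketch, assuming the coefficient matrices \(A\), \(B\), \(C\), and \(\mathcal E\) are already available as NumPy arrays; the function name, the zero initial guess, and the tolerance are illustrative choices rather than part of Rendahl's exposition.

```python
import numpy as np

def linear_time_iteration(A, B, C, E, P0=None, tol=1e-10, max_iter=10_000):
    """Solve A P^2 + B P + C = 0 and A P Q + B Q + E = 0 by linear time iteration."""
    n = B.shape[0]
    P = np.zeros((n, n)) if P0 is None else P0  # assumed starting guess P^(0) = 0
    for _ in range(max_iter):
        # P^(n+1) = -[A P^(n) + B]^{-1} C
        P = -np.linalg.solve(A @ P + B, C)
        # stop once P (approximately) solves the quadratic matrix equation
        if np.max(np.abs(A @ P @ P + B @ P + C)) < tol:
            break
    else:
        raise RuntimeError("Linear time iteration did not converge.")
    # Q = -[A P + B]^{-1} E
    Q = -np.linalg.solve(A @ P + B, E)
    return P, Q
```

Once the matrices of the linearized system are in hand (see the next section), a call like `P, Q = linear_time_iteration(A, B, C, E)` returns the law of motion \(X_t = P X_{t-1} + Q \epsilon_t\).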

With the solution for \(P\) and \(Q\) in hand, one can plot impulse response functions, simulate business cycles, and calculate second-moment properties of the economy.
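For example, an impulse response can be built up recursively from \(X_t = P X_{t-1} + Q \epsilon_t\) with a single nonzero innovation at date zero. The sketch below assumes the \(P\) and \(Q\) returned by the solver above; the horizon and the size of the shock are arbitrary illustrative choices.

```python
import numpy as np

def impulse_response(P, Q, shock, T=40):
    """Path of X_t = P X_{t-1} + Q eps_t after a one-time innovation at t = 0."""
    irf = np.zeros((T, P.shape[0]))
    irf[0] = Q @ shock
    for t in range(1, T):
        irf[t] = P @ irf[t - 1]
    return irf

# e.g. a one-percent productivity innovation (the only shock in this model):
# irf = impulse_response(P, Q, np.array([0.01]))
```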

Linearizing the model

In this example we could easily linearize the model by hand, but in the applications later that is a lot more work because there are many more equations in the system. Instead of doing it by hand, we will have the computer calculate the derivatives for us using a technique called automatic differentiation. The broad idea is that the computer knows how to differentiate basic mathematical operations and strings them together using the chain rule. That means automatic differentiation works for computer code that can be expressed as a sequence of basic mathematical operations, but not, for example, for an algorithm that iterates towards a solution. Packages for automatic differentiation are available for both Matlab and Python.

In practice we will write a function of the form \(F(X_{t-1},X_t, X_{t+1},\epsilon_t) = 0\) that gives the residuals of the five model equations. Then we use automatic differentiation to differentiate this function at the steady state.
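As an illustration, the sketch below writes out such a residual function and differentiates it with JAX, one of several automatic differentiation packages one could use; the choice of library, the parameter values, and the variable names are assumptions for this example, not part of the text. The coefficient matrices of the linearized system are the Jacobians of \(F\) with respect to \(X_{t+1}\), \(X_t\), \(X_{t-1}\), and \(\epsilon_t\), evaluated at the steady state.

```python
import jax.numpy as jnp
from jax import jacfwd

alpha, beta, gamma, delta, rho = 0.36, 0.99, 2.0, 0.025, 0.95  # illustrative values

def F(X_lag, X, X_next, eps):
    """Residuals of the five model equations; each X is ordered (C, R, K, Y, Z)."""
    C_lag, R_lag, K_lag, Y_lag, Z_lag = X_lag
    C, R, K, Y, Z = X
    C_next, R_next, K_next, Y_next, Z_next = X_next
    return jnp.array([
        C**(-gamma) - beta * R_next * C_next**(-gamma),    # Euler equation
        R - (alpha * Z * K_lag**(alpha - 1) + 1 - delta),  # return on capital
        K - ((1 - delta) * K_lag + Y - C),                 # capital accumulation
        Y - Z * K_lag**alpha,                              # production with L_bar = 1
        jnp.log(Z) - rho * jnp.log(Z_lag) - eps[0],        # productivity process
    ])

# Jacobians at the steady state (Xss as computed above, zero shock):
# A = jacfwd(F, argnums=2)(Xss, Xss, Xss, jnp.zeros(1))
# B = jacfwd(F, argnums=1)(Xss, Xss, Xss, jnp.zeros(1))
# C = jacfwd(F, argnums=0)(Xss, Xss, Xss, jnp.zeros(1))
# E = jacfwd(F, argnums=3)(Xss, Xss, Xss, jnp.zeros(1))
```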

Footnotes

[1] Rendahl, Pontus. “Linear Time Iteration.” (2017).
