
I refer to this post on the problem with two control variables.

This problem may go by another name, but, as stated by @tdm, it is as follows:

$$ \max_{c_t, \ell_t} \sum_{t = 0}^\infty \beta^t[\ln(c_t) + \gamma \ln(\ell_t)] \text{ s.t. } \begin{cases} y_t = k_t^\alpha(1- \ell_t)^{1- \alpha}\\ k_{t+1} = y_t - c_t\end{cases} $$

Here $\ell$ is leisure, $c$ is consumption, and $k$ is the capital stock (the resource). We are looking for the dynamics of these three variables. Since $y$ is defined as a function of $k$ and $\ell$, it can easily be substituted out.

I have looked through several sources, e.g. this, but have not found a strategy for solving this problem numerically with dynamic-programming tools. (Maybe there is a log-linearisation solution, but at the moment I am interested in the dynamic-programming approach.)

Can anyone suggest a strategy for approaching this problem (function iteration, LQ, whatever...)?


1 Answer


For a problem like the one you present, which is not too large and does not have particular values of the state at which the policy functions change rapidly, straightforward value function iteration (VFI) works well.

Let us first consider how VFI arises mathematically. The problem can be translated into recursive form by identifying the current capital stock, $k_t$, as the state variable. This allows us to translate the problem into the following recursive form,

$$V(k_t)=\max_{k_{t+1},\ell_t}\{\ln[k_t^{\alpha}[1-\ell_t]^{1-\alpha}-k_{t+1}]+\gamma \ln [\ell_t]+\beta V(k_{t+1})\}$$

VFI is fundamentally an algorithm for solving fixed-point problems. To see why the above problem is a fixed-point problem, consider the following. By the Inada conditions and concavity of the production function, there exists some $\bar{k}>0$ which is the maximal attainable capital stock, so that we can restrict attention to $k_t\in [0,\bar{k}]$. Define $\mathcal{B}([0,\bar{k}])\owns f$ as the space of functions $f:[0,\bar{k}]\to \mathbb{R}$. Define an operator $T:\mathcal{B}([0,\bar{k}])\to \mathcal{B}([0,\bar{k}])$ as $$[T(f)](k)=\max_{k',\ell}\{\ln[k^{\alpha}[1-\ell]^{1-\alpha}-k']+\gamma \ln [\ell]+\beta f(k')\}$$
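As an aside on why such a $\bar{k}$ exists in this particular model (a quick bounding argument, not spelled out in the original): since $[1-\ell_t]^{1-\alpha}\leq 1$ and $c_t\geq 0$, the law of motion implies $$k_{t+1}=k_t^{\alpha}[1-\ell_t]^{1-\alpha}-c_t\leq k_t^{\alpha},$$ and with $\alpha\in(0,1)$ the map $k\mapsto k^{\alpha}$ has the fixed point $k=1$, so capital can never grow above $\max\{k_0,1\}$. Taking $\bar{k}=\max\{k_0,1\}$ therefore bounds the state space.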

Now, the value function $V$ is the function $V\in \mathcal{B}([0,\bar{k}])$ such that $T(V)=V$, i.e., it is the fixed point of the operator $T$.

Now, we don't yet know whether $T$ actually has a fixed point, nor do we know of a quick way to calculate it. However, one particular class of operators for which a unique fixed point is guaranteed, and for which this fixed point is easy to calculate, is the class of contraction mappings. Equipping $\mathcal{B}([0,\bar{k}])$ with the supremum norm, $T$ is a contraction mapping if $$\Vert T(f)-T(g)\Vert_{\infty} \leq \lambda\Vert f-g\Vert_{\infty}, \qquad \forall f,g\in \mathcal{B}([0,\bar{k}])$$ for some modulus $\lambda\in [0,1)$. Rather than verifying this condition directly, it is much easier to check Blackwell's sufficient conditions, which immediately give that $T$ is such a contraction mapping.
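For completeness, checking Blackwell's conditions for this particular $T$ is a short argument (sketched here; it is not in the original answer). Monotonicity: if $f\leq g$ pointwise, then for every feasible $(k',\ell)$, $$\ln[k^{\alpha}[1-\ell]^{1-\alpha}-k']+\gamma \ln [\ell]+\beta f(k')\leq \ln[k^{\alpha}[1-\ell]^{1-\alpha}-k']+\gamma \ln [\ell]+\beta g(k'),$$ and taking maxima preserves the inequality, so $T(f)\leq T(g)$. Discounting: for any constant $a\geq 0$, the constant passes through the maximum, $$[T(f+a)](k)=[T(f)](k)+\beta a,$$ so $T$ is a contraction with modulus $\beta\in(0,1)$.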

Since $T$ is a contraction mapping, starting from any guess for the value function $V_0$ and iteratively applying $T$ produces a sequence of value functions $\{V_j\}$ that is guaranteed to converge to the fixed point of $T$, i.e., the true value function. Once we have our estimate of the value function, standard computational optimisation techniques provide us with the policy functions $k'(k)$ and $\ell(k)$, from which we can back out all the other policy functions we want from the problem.

I provide some pseudo-code for VFI below,

  1. Initialise capital grid $\mathcal{K}\xleftarrow{}\{0,k_1,\dots ,k_M,\bar{k}\}$.
  2. Initialise $\texttt{maxiter}$, $\texttt{tol}>0$, and $\texttt{Error}>\texttt{tol}$.
  3. Initialise $V_0(k)$ and policy functions $k'_0(k),\ell_0(k)$ for $k\in \mathcal{K}$. Set counter $j\xleftarrow{} 0$.
  4. $\textbf{While }$ $j<\texttt{maxiter}$ and $\texttt{tol}<\texttt{Error}$: for each $k\in \mathcal{K}$, $$\begin{align*}k'_{j+1}(k),\ell_{j+1}(k)&\xleftarrow{}\arg \max_{k',\ell}\{\ln[k^{\alpha}[1-\ell]^{1-\alpha}-k']+\gamma \ln [\ell]+\beta V_j(k')\} \\ V_{j+1}(k) &\xleftarrow{}\ln[k^{\alpha}[1-\ell_{j+1}(k)]^{1-\alpha}-k'_{j+1}(k)]+\gamma \ln [\ell_{j+1}(k)]+\beta V_j(k'_{j+1}(k)) \end{align*}$$ then set $$\begin{align*}\texttt{Error}&\xleftarrow{} \max_{k\in \mathcal{K}}|V_{j+1}(k)-V_{j}(k)| \\ j&\xleftarrow{}j+1 \end{align*}$$
  5. Extract policy functions $k'_j(k),\ell_j(k)$ and value function $V_j(k)$.
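To make the pseudo-code concrete, here is a minimal NumPy sketch of plain VFI with both choices restricted to grids. The parameter values ($\alpha=0.3$, $\beta=0.95$, $\gamma=1$), the grid sizes, and the choice $\bar{k}=1$ are my own illustrative assumptions; the grids also avoid the exact endpoints, where the log terms diverge.

```python
import numpy as np

# Illustrative parameters (assumptions, not from the post)
alpha, beta, gamma = 0.3, 0.95, 1.0

# Grids avoid the endpoints where log(c) or log(l) diverges
kgrid = np.linspace(1e-3, 1.0, 201)       # capital grid, with kbar = 1
lgrid = np.linspace(1e-3, 1 - 1e-3, 51)   # leisure grid
nk, nl = len(kgrid), len(lgrid)

# One-period payoff u(k, l, k'); -inf marks infeasible (c <= 0) choices
y = kgrid[:, None] ** alpha * (1 - lgrid[None, :]) ** (1 - alpha)  # (nk, nl)
c = y[:, :, None] - kgrid[None, None, :]                           # (nk, nl, nk)
with np.errstate(invalid="ignore", divide="ignore"):
    u = np.log(c) + gamma * np.log(lgrid)[None, :, None]
u[c <= 0] = -np.inf

V = np.zeros(nk)            # initial guess V_0 = 0
tol, maxiter = 1e-6, 1000
for j in range(maxiter):
    val = u + beta * V[None, None, :]   # RHS of the Bellman operator
    V_new = val.max(axis=(1, 2))        # [T(V_j)](k) on the grid
    error = np.abs(V_new - V).max()
    V = V_new
    if error < tol:
        break

# Recover the policy functions from the argmax at the converged V
val = u + beta * V[None, None, :]
flat = val.reshape(nk, -1).argmax(axis=1)
l_idx, kp_idx = np.unravel_index(flat, (nl, nk))
l_pol, kp_pol = lgrid[l_idx], kgrid[kp_idx]
```

On these grids the loop typically converges in a few hundred iterations, consistent with the error shrinking by a factor $\beta$ each pass; linear interpolation of `l_pol` and `kp_pol` (e.g. with `np.interp`) then extends the policies to all of $[0,\bar{k}]$.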

If your problem is with the optimisation occurring over both $k'$ and $\ell$ in step (4), this really is not much of a problem: you can restrict $\ell$ to lie on a grid $\mathcal{L}=\{0,\ell_1,\dots, \ell_N,1\}$, similarly to what we do for capital. Since the problem does not have large jumps in the policy functions, linear interpolation on this grid will give a good approximation of the true policy functions.

Additionally, you can speed up VFI using something like Howard's policy improvement algorithm. In essence, this improvement makes the observation that the estimation of policy functions in step 4 above is computationally costly, so we would like to do it as little as possible. Therefore, with fixed policy functions, we can iterate on the value function until convergence and only once convergence of the value function is achieved for these fixed policy functions do we re-optimise the policy functions. This improvement, along with VFI itself, is described on pages 106-107 of Ljungqvist and Sargent, "Recursive Macroeconomic Theory", 3rd edition.
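A minimal sketch of that idea, under the same illustrative parameters and grids as the VFI example above (my own assumptions, not code from the book): the expensive argmax is performed only occasionally, while the value function is updated many times with the policy held fixed.

```python
import numpy as np

# Illustrative parameters and grids (assumptions, not from the post)
alpha, beta, gamma = 0.3, 0.95, 1.0
kgrid = np.linspace(1e-3, 1.0, 201)
lgrid = np.linspace(1e-3, 1 - 1e-3, 51)
nk, nl = len(kgrid), len(lgrid)

# Precompute the one-period payoff u(k, l, k') as in plain VFI
y = kgrid[:, None] ** alpha * (1 - lgrid[None, :]) ** (1 - alpha)
c = y[:, :, None] - kgrid[None, None, :]
with np.errstate(invalid="ignore", divide="ignore"):
    u = np.log(c) + gamma * np.log(lgrid)[None, :, None]
u[c <= 0] = -np.inf

V, prev = np.zeros(nk), None
for it in range(200):
    # Improvement step (expensive): re-optimise the policy given V
    flat = (u + beta * V[None, None, :]).reshape(nk, -1).argmax(axis=1)
    if prev is not None and np.array_equal(flat, prev):
        break                          # policy stable -> converged
    prev = flat
    l_idx, kp_idx = np.unravel_index(flat, (nl, nk))
    u_pol = u[np.arange(nk), l_idx, kp_idx]
    # Evaluation step (cheap): iterate on V with the policy held fixed
    for _ in range(200):
        V = u_pol + beta * V[kp_idx]
```

The evaluation sweeps cost only $O(M)$ per pass instead of the $O(M^2 N)$ of the full argmax, which is where the speed-up comes from; in practice the outer loop stabilises after a handful of improvement steps.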

  • @JosephBasfror, thank you, you are right; now I see that my problem is with the step (4) optimisation. Do we loop over grids or use some interpolation (for which function), or both? – Commented Jan 18 at 7:27
  • @garej You can either optimise $k'(k),\ell(k)$ over a grid (which is what I had in mind), or you can use an optimisation method like $\texttt{scipy.optimize}$ in Python. Either way, you'll have to interpolate afterwards to get the full policy functions over $k\in[0,\bar{k}]$, as VFI discretises the state space. Linear interpolation will work well for this problem. The main drawback of VFI is that the policy-function estimation is computationally costly; more advanced methods circumvent the issue, with the endogenous grid-point method being the primary example. – Commented Jan 18 at 7:48
