
Consider the model $$y_i = x_{i}'\beta + e_i.$$

In the discussion of the instrumental variables estimator it is shown that the OLS estimator $b$ is a biased and inconsistent estimator of $\beta$. Nonetheless, $b$ does estimate something: $\operatorname{plim}_{n\to\infty}b = \beta + Q^{-1}\gamma = \theta$, with $Q = \mathbb{E}(x_ix_i')$ and $\mathbb{E}(x_ie_i) = \gamma \neq 0.$
Assume the data are i.i.d. and that $\mathbb{E}(x_ix_i')$ has full rank.
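To make the claim $\operatorname{plim} b = \beta + Q^{-1}\gamma = \theta$ concrete, here is a small simulation sketch (the specific endogenous design and all numbers are my own illustration, not part of the question): the second regressor and the error share a common shock, so $\gamma \neq 0$ and $b$ piles up at $\theta$, not at $\beta$.

```python
import numpy as np

rng = np.random.default_rng(0)

beta = np.array([1.0, 2.0])
n = 1_000_000

# Hypothetical endogenous design: e is correlated with the second
# regressor through a shared shock u, so gamma = E(x_i e_i) != 0.
u = rng.normal(size=n)
x2 = rng.normal(size=n) + u          # x2 contains the common shock u
e = u + rng.normal(size=n)           # so Cov(x2, e) = Var(u) = 1
X = np.column_stack([np.ones(n), x2])
y = X @ beta + e

b = np.linalg.solve(X.T @ X, X.T @ y)    # OLS

# Population quantities for this design:
# Q = E(x_i x_i') = [[1, 0], [0, 2]]  (x2 has mean 0, variance 2)
# gamma = E(x_i e_i) = (0, 1)'
Q = np.array([[1.0, 0.0], [0.0, 2.0]])
gamma = np.array([0.0, 1.0])
theta = beta + np.linalg.solve(Q, gamma)  # plim of b = (1, 2.5)

print(b)       # close to theta, not to beta
print(theta)
```

With $n = 10^6$ the OLS estimate sits essentially on top of $\theta = (1, 2.5)'$, half a unit away from the true slope $\beta_2 = 2$.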

  1. Show that $b$ is asymptotically normally distributed.
  2. Derive the asymptotic covariance matrix of $b$.

My attempt. Write $b - \beta = \left(\frac{1}{n}\sum_ix_ix_i'\right)^{-1}\left(\frac{1}{n}\sum_ix_ie_i\right)$. Since $\mathbb{E}(x_ie_i) = \gamma \neq 0$, the CLT cannot be applied to the RHS directly; the sample mean must first be centered. But observe that $$\sqrt{n}\left(\frac{1}{n}\sum_ix_ie_i - \mathbb{E}(x_ie_i)\right) \xrightarrow{d} N(0, V),$$ where $V = \operatorname{Var}(x_ie_i)$. Now transform the expression for $b - \beta$:

\begin{equation} \begin{aligned} b - \beta - \left(\frac{1}{n}\sum_ix_ix_i'\right)^{-1}\gamma &= \left(\frac{1}{n}\sum_ix_ix_i'\right)^{-1}\left(\frac{1}{n}\sum_ix_ie_i\right) - \left(\frac{1}{n}\sum_ix_ix_i'\right)^{-1}\gamma \\ &= \left(\frac{1}{n}\sum_ix_ix_i'\right)^{-1}\left(\frac{1}{n}\sum_ix_ie_i - \gamma\right) \end{aligned} \end{equation}

Multiply both sides by $\sqrt{n}$ to get $$\sqrt{n}\left[b - \beta - \left(\frac{1}{n}\sum_ix_ix_i'\right)^{-1}\gamma\right] = \left(\frac{1}{n}\sum_ix_ix_i'\right)^{-1}\sqrt{n}\left(\frac{1}{n}\sum_ix_ie_i - \gamma\right)$$

By the law of large numbers, $\left(\frac{1}{n}\sum_ix_ix_i'\right)^{-1} \xrightarrow{p} Q^{-1}$ with $Q = \mathbb{E}(x_ix_i')$, and $\sqrt{n}\left(\frac{1}{n}\sum_ix_ie_i - \gamma\right) \xrightarrow{d} N(0, V)$. Then, by Slutsky's theorem, $$\sqrt{n}\left[b - \beta - \left(\frac{1}{n}\sum_ix_ix_i'\right)^{-1}\gamma\right] \xrightarrow{d} N(0, Q^{-1}VQ^{-1})$$
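The limiting distribution can be checked by Monte Carlo. The sketch below (my own illustrative design, with the shared-shock endogeneity assumed above rather than taken from the question) simulates the centered, scaled statistic many times and compares its sample mean and covariance with $0$ and $Q^{-1}VQ^{-1}$. Note that $V$ is estimated here using the true errors, which is feasible only inside a simulation.

```python
import numpy as np

rng = np.random.default_rng(1)

beta = np.array([1.0, 2.0])
n, reps = 2_000, 2_000

def draw(n):
    # Hypothetical endogenous design: a shared shock u makes
    # gamma = E(x_i e_i) = (0, 1)' nonzero.
    u = rng.normal(size=n)
    x2 = rng.normal(size=n) + u
    e = u + rng.normal(size=n)
    X = np.column_stack([np.ones(n), x2])
    return X, X @ beta + e, e

gamma = np.array([0.0, 1.0])
Q = np.array([[1.0, 0.0], [0.0, 2.0]])     # E(x_i x_i') for this design

# Estimate V = Var(x_i e_i) from one large draw with the *true* errors
# (possible only in a simulation, never with real data).
Xb, _, eb = draw(1_000_000)
V = np.cov((Xb * eb[:, None]).T)

stats = np.empty((reps, 2))
for r in range(reps):
    X, y, _ = draw(n)
    b = np.linalg.solve(X.T @ X, X.T @ y)
    Qhat_inv = np.linalg.inv(X.T @ X / n)
    # The centered, scaled statistic from the derivation above:
    stats[r] = np.sqrt(n) * (b - beta - Qhat_inv @ gamma)

print(stats.mean(axis=0))                      # close to zero
print(np.cov(stats.T))                         # close to Q^{-1} V Q^{-1}
print(np.linalg.inv(Q) @ V @ np.linalg.inv(Q))
```

The empirical covariance of the simulated statistics should match $Q^{-1}VQ^{-1}$ up to Monte Carlo error, consistent with the derived limit.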


I am not quite sure that the result is correct and makes sense. Any hints or suggestions are welcome.


1 Answer


The result is OK. What you have done is to find a specific function of $b$ (the LHS of your last expression) that does converge in distribution to a zero-mean normal with the stated variance.

Note also that both the LHS and the RHS contain non-estimable parameters: the mean $\gamma$ and the variance $V$ of $x_ie_i$ (although for the variance there are usually assumptions that let you work a bit further). This is why the consistency property is so crucial: it is what allows us to estimate these quantities and use the asymptotic results.

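A quick numerical illustration of why $\gamma$ is non-estimable (a sketch with a made-up endogenous design, not taken from the answer): the sample moment $\frac{1}{n}X'e$ computed with the true errors is clearly nonzero, while the same moment computed with the OLS residuals is zero by construction, so $\gamma$ cannot be recovered from the residuals.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

# Hypothetical design with an endogenous second regressor.
u = rng.normal(size=n)
x2 = rng.normal(size=n) + u
e = u + rng.normal(size=n)            # true error, correlated with x2
X = np.column_stack([np.ones(n), x2])
y = X @ np.array([1.0, 2.0]) + e

b = np.linalg.solve(X.T @ X, X.T @ y)
resid = y - X @ b                     # OLS residuals

print(X.T @ e / n)      # sample analogue of gamma: clearly nonzero
print(X.T @ resid / n)  # ≈ 0 by construction: gamma is not recoverable
```

The normal equations force $X'\hat{e} = 0$ exactly, whatever the true $\gamma$ is, which is the orthogonality/identification point made above.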
  • I know this is years later, but I wanted to ask: are you saying that, for the non-estimable parameters, we can never estimate $\gamma$ because we don't know what $\gamma$ is, but if we have consistency, we know it is 0 (or some particular number)? Commented Jun 23, 2021 at 16:55
  • 1
    $\begingroup$ @Steve Indeed. Note that $\gamma = \mathbb{E}(x_ie_i)$. But you cannot estimate this expected value because the only information you have on the true error term $e$ is the OLS residual, which is by construction orthogonal to the regressors (even if the true error term is not). Analogous comments apply for the variance matrix $V$. So the orthogonality condition that leads to consistency is essentially an identification condition. $\endgroup$ Commented Jun 24, 2021 at 17:40
