
I read in Asymptotic Statistics (van der Vaart, 1998), Theorem 5.7, that for the M-estimator $\hat{\theta}_n$ to converge in probability to $\theta_0$, two conditions need to be satisfied, one of which is that for every $\epsilon>0$, $$\sup_{\theta:d(\theta,\theta_0)\geq\epsilon}M(\theta)<M(\theta_0).$$

The inequality looks complicated, involving the metric $d$, which puzzles me. Why can't we drop the metric and express it as $$M(\theta)<M(\theta_0)\quad\text{for all }\theta\neq\theta_0?$$ Aren't the two conditions equivalent?


1 Answer


The two conditions are not equivalent. Under your condition there can still be a sequence $\theta^*_m$ with $d(\theta^*_m,\theta_0)>\delta$ for all $m$ and some fixed $\delta>0$, along which $M(\theta^*_m)\nearrow M(\theta_0)$ as $m\to\infty$ without ever reaching it. Such a sequence is ruled out by the original condition, since it would force $\sup_{\theta:d(\theta,\theta_0)\geq\delta}M(\theta)=M(\theta_0)$, contradicting the strict inequality.
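To make this concrete (an example of my own, not from the theorem): take $\Theta=\mathbb{R}$, $\theta_0=0$, and $$M(\theta)=-\frac{\theta^2}{1+\theta^4}.$$ Then $M(\theta)<M(0)=0$ for every $\theta\neq 0$, so your pointwise condition holds. But $M(\theta)\to 0$ as $|\theta|\to\infty$, so for any $\epsilon>0$ $$\sup_{|\theta|\geq\epsilon}M(\theta)=0=M(\theta_0),$$ and the uniform condition fails: the supremum is attained "in the limit" along, say, $\theta^*_m=m$.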

It may well be that under your weaker condition one can construct examples in which $\hat\theta_n$, or at least a subsequence of it, is at distance $\delta$ or more from $\theta_0$ for arbitrarily large $n$ with probability bounded away from zero (the estimator "chasing" values $\theta^*_m$ for which $M(\theta^*_m)$ is very close to $M(\theta_0)$). That would contradict consistency. Such examples may not be easy to construct, but the uniform condition certainly allows for a more straightforward argument in the proof, since it rules this possibility out directly.
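For a quick numerical sanity check of the example above (a sketch of mine, not part of the original answer; `M` below is the hypothetical criterion function from that example):

```python
import numpy as np

# Hypothetical criterion from the example: M(theta) = -theta^2 / (1 + theta^4),
# maximized uniquely at theta_0 = 0 with M(0) = 0.
def M(theta):
    return -theta**2 / (1 + theta**4)

eps = 1.0  # exclude a ball of radius eps around theta_0 = 0
for upper in (10, 100, 1000, 10000):
    grid = np.linspace(eps, upper, 200_000)
    print(f"sup of M on [{eps}, {upper}]: {M(grid).max():.8f}")

# The printed supremum creeps toward M(0) = 0 as the grid extends:
# sup over {|theta| >= eps} is not bounded away from M(theta_0),
# so the uniform condition of Theorem 5.7 fails even though
# M(theta) < M(0) pointwise for every theta != 0.
```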

