$\begingroup$

There are a few questions/answers out there about subscripts involving a random variable ($E_X[...]$) or a density ($E_{f(X)}[...]$) in expectations. I like this one.

But today I ran into a subscript involving a parameter of a density, and there is a point that confuses me.

The book by A. A. Tsiatis defines $m$-estimators $\hat{\theta}_n$ as the solution of:

$\sum_{i=1}^nm(Z_i, \hat{\theta}_n)=0$

where $Z_1,...,Z_n$ is an iid sample from $p_Z(z,\theta)$, $\theta$ is $p$-dimensional and, by definition, $E_{\theta}[m(Z, \theta)]=0^{p \times 1}$. When I first saw that zero-expectation condition I thought:

$\int m(Z, \theta)p_\theta(\theta)d\theta$

which is not weird in a Bayesian context, where parameters have distributions. But then I remembered that the likelihood in MLE is $L(\theta;z)\equiv p_Z(z,\theta)$, so I figured the $E_{\theta}[\quad ]$ was shorthand for:

$\int m(Z, \theta)p_Z(z,\theta)d\theta$

However, at the bottom of page 32 he says that this zero-expectation condition is equivalent to:

$\int m(Z, \theta)p_Z(z,\theta)d\nu(z)=0 \quad \text{for all } \theta$

where $\nu(z)$ is the dominating measure.

I would like to understand (1) what that subscript $\theta$ means in the expectation and (2) why the expectation is equivalent to this last integral + the "for all $\theta$" condition.

$\endgroup$
  • $\begingroup$ The expectation is parameterized by $\theta$; it is taken with respect to the distribution given the parameter value $=\theta$. This is important because you want to make sure that the $\theta$ in $m(Z,\theta)$ has the same value as the parameter that's in $p_Z(z;\theta)$, and it's that latter $\theta$ that the subscript refers to. $\endgroup$ Commented Dec 5, 2019 at 15:29
  • $\begingroup$ I think I understand what you're saying and it seems to also answer my second question. If you post it as an answer I will accept it. $\endgroup$ Commented Dec 5, 2019 at 15:40

1 Answer

$\begingroup$

In this case, the subscript means "the expectation is parameterized by $\theta$"; it is taken with respect to the distribution $p$ given that $p$'s parameter value equals $\theta$. This matters here because you want the reader to understand that the $\theta$ in $m(Z,\theta)$ has the same value as the parameter in $p_Z(z;\theta)$, and it is that latter $\theta$ that the subscript refers to.
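Spelling this out for your second question: with the subscript read as "the density's parameter equals $\theta$", $E_{\theta}$ is, by definition, the integral against $p_Z(\cdot;\theta)$ with respect to the dominating measure,

$E_{\theta}[m(Z, \theta)]=\int m(z, \theta)p_Z(z;\theta)d\nu(z)$

so imposing $E_{\theta}[m(Z, \theta)]=0^{p \times 1}$ at every parameter value is exactly the displayed condition "$\int m(z, \theta)p_Z(z;\theta)d\nu(z)=0$ for all $\theta$". Note the integration variable is $z$, not $\theta$: the two $\theta$'s (inside $m$ and indexing the density) are held equal and fixed while $z$ is integrated out.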

It is a little confusing, since the subscript on the expectation operator often denotes the random variable or density the expectation is taken with respect to, as in $E_X[\cdot]$, rather than the parameter value of the relevant probability distribution.
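As a concrete sanity check (my own illustration, not an example from the book), take $m(z, \theta) = z - \theta$ with $Z \sim N(\theta, 1)$. This is the estimating function of the sample mean, and $E_{\theta}[m(Z,\theta)] = E_{\theta}[Z] - \theta = 0$ for every $\theta$, which a quick Monte Carlo check confirms:

```python
# Illustrative check (not from Tsiatis): for m(z, theta) = z - theta and
# Z ~ N(theta, 1), the expectation E_theta[m(Z, theta)] is zero for every
# parameter value theta, because E_theta[Z] = theta.
import random

random.seed(0)

def mc_expectation(theta, n=200_000):
    """Monte Carlo estimate of E_theta[m(Z, theta)] with Z ~ N(theta, 1)."""
    total = sum(random.gauss(theta, 1.0) - theta for _ in range(n))
    return total / n

for theta in (-2.0, 0.0, 3.5):
    print(f"theta={theta:+.1f}  E_theta[m(Z,theta)] approx {mc_expectation(theta):+.4f}")
```

Each printed estimate is close to zero regardless of $\theta$, which is the "for all $\theta$" part of the condition: the same $\theta$ is plugged into both the estimating function and the sampling distribution.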

$\endgroup$
