$\begingroup$

Suppose $x_i$ are drawn i.i.d. from a $p$-variate Gaussian, $\mathcal{N}\left(\mu,\Sigma\right)$. Suppose one observes $x_1,x_2,\ldots,x_n$. One also observes $s_{n+1},s_{n+2},\ldots,s_{n+m},$ where $s_i = \mbox{sign}\left(x_i\right)$ is a $p$-vector consisting of -1 and +1's. (Well, in principle, it could contain some zeroes...)

I would like to estimate $\mu$ from this information. One should be able to do at least as well as the estimate that ignores the $s_i$ (namely $\hat{\mu} = (1/n) \sum_{1\le i \le n} x_i$). But is there a method that does better? How good is the method? Is this a well-known problem?

$\endgroup$

1 Answer

$\begingroup$

You can treat the last $m$ observations as censored: each sign vector tells you only which orthant the corresponding $x_i$ fell into, so its contribution to the likelihood is the probability of that orthant under $\mathcal{N}(\mu,\Sigma)$.

For example, let $p=3$. If you observe the sign vector $(1,-1,-1)$, the corresponding likelihood contribution is $P\left[X_1>0,\, X_2<0,\, X_3<0\right]$.

Adding the extra information can only help. However, the resulting likelihood cannot be maximised analytically, so solving this problem requires numerical optimisation.
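A minimal sketch of this censored-likelihood approach in Python, assuming for simplicity that $\Sigma$ is known (here the identity): the log-likelihood sums the Gaussian log-density over the fully observed $x_i$ and the log orthant probability over the sign vectors, and $\mu$ is found by numerical optimisation. The orthant probability $P[s \odot X > 0]$ is computed as the multivariate normal CDF at zero of $-DX \sim \mathcal{N}(-D\mu,\, D\Sigma D)$ with $D = \mathrm{diag}(s)$. The data here are simulated purely for illustration.

```python
import numpy as np
from scipy.stats import multivariate_normal
from scipy.optimize import minimize

rng = np.random.default_rng(0)
p = 3
mu_true = np.array([0.5, -0.3, 0.2])
Sigma = np.eye(p)  # assumed known in this sketch

# simulated data: n full observations, m sign-only observations
n, m = 50, 40
x = rng.multivariate_normal(mu_true, Sigma, size=n)
s = np.sign(rng.multivariate_normal(mu_true, Sigma, size=m))

def neg_log_lik(mu):
    # fully observed part: ordinary Gaussian log-density
    ll = multivariate_normal.logpdf(x, mean=mu, cov=Sigma).sum()
    # censored part: log orthant probability P[sign(X) = s_i]
    for si in s:
        D = np.diag(si)
        # P[s ⊙ X > 0] = P[-s ⊙ X < 0] = Phi_{N(-D mu, D Sigma D)}(0)
        prob = multivariate_normal.cdf(np.zeros(p),
                                       mean=-D @ mu,
                                       cov=D @ Sigma @ D)
        ll += np.log(max(prob, 1e-300))  # guard against log(0)
    return -ll

# start from the sample mean of the fully observed x_i
res = minimize(neg_log_lik, x0=x.mean(axis=0), method="Nelder-Mead")
mu_hat = res.x
```

Because the optimiser starts from the sample mean of the $x_i$, the combined estimate can only improve (or match) that baseline in likelihood; how much the sign vectors sharpen the estimate of $\mu$ depends on $m$ and on how far $\mu$ is from the origin.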

$\endgroup$
  • $\begingroup$ I interpret this answer as: compute the MLE of $\mu$ based on the $x_i$ in the usual way, and incorporate the sign information through a sign-censored Gaussian likelihood. And the whole thing will have to be done numerically. Is that right? $\endgroup$ Commented Mar 27, 2012 at 17:49
  • 1
    $\begingroup$ You are right. Use the censoring information in the ML estimation. $\endgroup$ Commented Mar 27, 2012 at 18:03
