
Questions tagged [expectation-maximization]

An iterative optimization algorithm for maximum-likelihood estimation in the presence of missing data or latent variables.

1 vote
1 answer
52 views

I have data from a survey. This data includes a key metric of interest (Y), which has an interesting distribution: a clear peak at 0, and a generally right-skewed distribution. I have modeled the ...
hainabaraka
2 votes
1 answer
72 views

A common example that I have found for explaining expectation-maximization is the example of two biased coins. The problem statement is: You have two biased coins, which you select with equal ...
Finncent Price
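The two-biased-coins setup described above can be sketched in a few lines of NumPy. This is a hedged illustration, not the asker's code: the head counts and initial guesses below follow the well-known Do & Batzoglou worked example (5 trials of 10 flips each), and all variable names are my own.

```python
import numpy as np

# Each trial: pick coin A or B with equal probability, flip it n times,
# observe only the number of heads (not which coin was used).
heads = np.array([5, 9, 8, 4, 7])  # illustrative data (Do & Batzoglou example)
n = 10
theta_A, theta_B = 0.6, 0.5        # initial guesses for the two biases

for _ in range(50):
    # E-step: posterior probability that each trial used coin A
    # (binomial likelihoods; the n-choose-k factor cancels in the ratio)
    like_A = theta_A**heads * (1 - theta_A)**(n - heads)
    like_B = theta_B**heads * (1 - theta_B)**(n - heads)
    w_A = like_A / (like_A + like_B)
    w_B = 1.0 - w_A
    # M-step: weighted maximum-likelihood update of each coin's bias
    theta_A = (w_A * heads).sum() / (w_A * n).sum()
    theta_B = (w_B * heads).sum() / (w_B * n).sum()
```

On this data the estimates converge to roughly 0.80 and 0.52, matching the published worked example.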
2 votes
0 answers
96 views

I am quite new to ML methods such as GMMs but I have a problem at hand which requires me to estimate the covariance matrices of Gaussians such that the datapoints are drawn from a weighted sum of ...
Abhinav Verma
0 votes
1 answer
52 views

Learning about EM algorithms and finite mixture models and I've run into a particularly unintuitive problem. I'm trying to fit a finite mixture regression model on simulated data, where the true ...
dancing_monkeys
1 vote
0 answers
108 views

Inlined at the bottom is the code of a MATLAB simulation I wrote. This code very simply runs expectation maximization for three Gaussians and, as set down, is supposed to illustrate the degeneracy ...
Chris
0 votes
0 answers
44 views

I have a dataset of (noisy) test results. You can think of this as being accuracy that a chess player achieves in various games or number of points a basketballer scores in a game. I think a good ...
Stuart
0 votes
0 answers
50 views

I'm reading a paper on applying a SLOPE model to a Bayesian spike-and-slab framework [link]. The SLOPE penalty is given by $$ \text{pen}(\lambda) = \sigma \sum_{j=1}^{p} \lambda_{r(\beta,j)} |\beta_j|,...
user19904
0 votes
1 answer
65 views

Let $\sigma>0$. Suppose we observe $N$ samples of sizes $T_1,\dots,T_n$ that are each generated by the following data generating process: $\theta_n$ is drawn from the distribution $\mathrm{N}(0,\...
cfp
1 vote
0 answers
98 views

I am trying to use DLM package in R to estimate a state space model, where the observation and state equation are as follows: $y_t=\beta_1a_t + B_t\beta_2(\frac{u_t-v_t}{u_t}) + C_t\beta_3(\frac{u_t-...
sorbus
1 vote
1 answer
91 views

I am reading the paper Some contributions to maximum likelihood factor analysis. Consider the factor analysis model $$y=\Lambda x+z, $$ where $y$ is a vector containing $p$ features and $x$ is the ...
4 votes
1 answer
128 views

I'm reading about how the estimation of the survival function is "self-consistent" because as Efron showed, we can estimate the survival function in the presence of right censoring as $$ \...
abadfr
0 votes
0 answers
73 views

These questions arose when I was reading Online Appendix D for the paper Missing Events in Event Studies: Identifying the Effects of Partially-Measured News Surprises by R.S. Gurkaynak, B. Kisacikoglu ...
zyy
1 vote
0 answers
40 views

I have a general idea of Gaussian Mixture Models. My understanding: GMM is a way of clustering data points which, unlike K-means clustering, soft-assigns them under different distributions by ...
DeadAsDuck
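The soft-assignment idea in the excerpt above can be sketched as a one-dimensional EM loop for a two-component GMM. This is an illustrative sketch only; the data, initializations, and variable names are my own choices, not from the question.

```python
import numpy as np

rng = np.random.default_rng(0)
# Illustrative data: two well-separated 1-D Gaussians
x = np.concatenate([rng.normal(-2.0, 1.0, 200), rng.normal(3.0, 1.0, 200)])

mu = np.array([-1.0, 1.0])    # initial component means
sigma = np.array([1.0, 1.0])  # initial component std devs
w = np.array([0.5, 0.5])      # mixing weights

for _ in range(100):
    # E-step: "soft assignment" — responsibility of each component for each point,
    # unlike K-means, which would assign each point to exactly one cluster
    dens = w * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) \
             / (sigma * np.sqrt(2.0 * np.pi))
    r = dens / dens.sum(axis=1, keepdims=True)
    # M-step: responsibility-weighted updates of weights, means, and variances
    Nk = r.sum(axis=0)
    w = Nk / len(x)
    mu = (r * x[:, None]).sum(axis=0) / Nk
    sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / Nk)
```

With separated components like these, the recovered means land close to the true values of -2 and 3.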
3 votes
1 answer
125 views

From cs 229 page 6: Intuitively, the EM algorithm alternatively updates Q and θ by a) setting Q(z) = p(z|x; θ) following Equation (8) so that ELBO(x; Q, θ) = log p(x; θ) for x and the current θ, and ...
figs_and_nuts
2 votes
2 answers
191 views

ELBO is a lower bound, and only matches the true likelihood when the q-distribution/encoder we choose equals the true posterior distribution. Are there any guarantees that maximizing ELBO indeed ...
Daniel Mendoza
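The standard identity behind this question: for any distribution $q$ over the latent variable $z$, the log-likelihood decomposes as

```latex
\log p_\theta(x)
  = \underbrace{\mathbb{E}_{q(z)}\!\left[\log \frac{p_\theta(x,z)}{q(z)}\right]}_{\mathrm{ELBO}(x;\,q,\,\theta)}
  \;+\; \mathrm{KL}\!\left(q(z)\,\|\,p_\theta(z\mid x)\right)
```

Since the KL term is nonnegative, the ELBO is a lower bound, and the gap is exactly the KL divergence between $q$ and the true posterior; maximizing the ELBO over $q$ (as in the E-step, or with a perfectly flexible encoder) closes the gap, at which point maximizing over $\theta$ increases the true log-likelihood.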
