
During a project I got to know the reparameterization trick, where you restructure a mathematical expression so that parameters which were untrainable before the restructuring become trainable.

Now I am going through some formal ML training courses, and when GLMs come up, where you "link" the canonical parameter to the mean, it instantly brings me back to the reparameterization trick. Am I right that they are the same notion referred to in different contexts, or did I miss or misunderstand something?


1 Answer

  • The link in a GLM is a translation between parameterisations, from the mean parameter to the natural parameter. This applies (typically, perhaps exclusively) to exponential-family distributions.

  • The "reparameterisation trick" is for computing an expectation over a Gaussian distribution $N(\mu,\Sigma)$ by sampling (MC estimation). Rather than sampling from $N(\mu,\Sigma)$ directly, you sample from a standard Gaussian $N(0,I)$ (white noise) and then shift and scale the samples. By doing so, the parameters $\mu$ and $\Sigma$ become part of a deterministic function and can be back-propagated through. This is important in, say, a VAE, where the trick is used. (Apparently it also reduces the variance of the MC estimate.)
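A minimal NumPy sketch contrasting the two notions (Poisson is chosen here purely as an illustrative exponential-family case; the parameter values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

# --- GLM link: translate between the mean and the natural parameter ---
# For a Poisson GLM the canonical link is the log: eta = log(mu), mu = exp(eta).
mu_poisson = 3.0
eta = np.log(mu_poisson)   # natural parameter from the mean
# exp(eta) maps back to the mean, so the link is just a change of coordinates.

# --- Reparameterisation trick: z = mu + sigma * eps with eps ~ N(0, 1) ---
mu, sigma = 2.0, 0.5
eps = rng.standard_normal(100_000)  # sample the fixed, parameter-free noise
z = mu + sigma * eps                # samples from N(mu, sigma^2)
# z is now a deterministic function of (mu, sigma), so gradients of an
# MC-estimated expectation E[f(z)] can flow through mu and sigma.
print(z.mean(), z.std())
```

The link function involves no sampling at all, whereas the trick exists precisely to move the randomness out of the parameterised distribution and into a fixed noise source.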

I can't see a connection between these two, but hopefully the explanation helps clarify.

