During a project I came across the reparameterization trick, where you restructure a mathematical expression so that parameters which were untrainable before the restructuring become trainable (e.g., you can backpropagate through them).
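To make my question concrete, here is a minimal sketch of what I mean by the reparameterization trick (a toy Gaussian example; the numbers and names are my own, not from any specific library):

```python
import numpy as np

rng = np.random.default_rng(0)

# Goal: estimate gradients of E_{z ~ N(mu, sigma^2)}[z^2] w.r.t. mu and sigma.
# Sampling z directly gives no gradient path back to mu/sigma; rewriting
# z = mu + sigma * eps with eps ~ N(0, 1) makes the dependence explicit,
# so mu and sigma become trainable.
mu, sigma = 1.5, 0.7
eps = rng.standard_normal(100_000)
z = mu + sigma * eps                 # reparameterized sample

# For f(z) = z^2: df/dmu = 2z * dz/dmu = 2z, and df/dsigma = 2z * eps
grad_mu = np.mean(2 * z)             # analytic value: 2*mu = 3.0
grad_sigma = np.mean(2 * z * eps)    # analytic value: 2*sigma = 1.4
print(grad_mu, grad_sigma)
```

The Monte Carlo estimates land close to the analytic gradients, which is only possible because the randomness was pushed into `eps`.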
Now I am working through some formal ML courses, and when we got to GLMs, where you "link" the canonical parameter to the mean, it instantly reminded me of reparameterization. Am I right that these are the same notion referred to in different contexts, or did I miss or misunderstand something?
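For reference, this is the GLM linking I have in mind, using the Bernoulli case where the canonical (logit) link connects the canonical parameter and the mean (again just my own toy illustration):

```python
import numpy as np

# Bernoulli GLM: the canonical parameter theta and the mean mu satisfy
#   theta = log(mu / (1 - mu))   (canonical/logit link)
#   mu    = 1 / (1 + exp(-theta)) (inverse link, the sigmoid)
theta = 0.8
mu = 1 / (1 + np.exp(-theta))        # mean from canonical parameter
theta_back = np.log(mu / (1 - mu))   # canonical parameter recovered from mean
print(mu, theta_back)
```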