
Two random variables are independent iff $p(x,y) = p(x)p(y)$. This is the definition. Now, if we throw in the parameter, then $X,Y$ have some distribution that depends on the parameter $\theta$. According to the Bayesian approach, our model now is $X,Y,\theta \sim p(x,y,\theta)$ (tell me if you need clarification on this notation). To show independence we'd want to show $$ p(x,y) = \int p(x,y,\theta)\,d\theta = \int p(x)\, p(y, \theta \mid x)\, d\theta = p(x)\, p(y \mid x) = p(x)\,p(y), $$ where the last equality might not necessarily hold (we're Bayesians now)! This is why marginal independence does not hold. However, if you condition on $\theta$ (pretend you know it exactly), then the two tosses are just independent tosses of a biased coin (or whatever experiment you perform) under fixed, known conditions. Hence, they are independent.
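To see how the factorization can fail, here is a concrete instance (the uniform prior is my choice, for illustration): one toss each for $X$ and $Y$, with $\theta \sim U(0,1)$ and $X, Y \mid \theta$ independent Bernoulli($\theta$). Then
$$ p(x=1, y=1) = \int_0^1 \theta^2 \, d\theta = \frac{1}{3}, \qquad p(x=1)\,p(y=1) = \left(\int_0^1 \theta \, d\theta\right)^2 = \frac{1}{4}, $$
so $p(x,y) \neq p(x)p(y)$ marginally, even though the tosses are independent given $\theta$. The gap $\frac{1}{3} - \frac{1}{4}$ is exactly $\operatorname{Var}(\theta)$, which vanishes precisely when $\theta$ is a point mass.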

Let's summarize: if the parameter $\theta$ is a random variable, then $X,Y$ depend on it, and so knowing $X$ affects the distribution of $\theta$ and, consequently, the distribution of $Y$. If $\theta$ is not a random variable, then its "distribution" is fixed: it is a point mass on its true value. Hence nothing you observe about $X$ will change the distribution of $Y$.
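The same point can be checked with a quick simulation sketch (the uniform prior and the sample size are my choices, not part of the question): draw $\theta$, toss the coin twice given that $\theta$, and compare $p(x,y)$ against $p(x)p(y)$.

```python
import random

random.seed(0)

# Sketch, assuming a uniform prior on theta: two tosses X, Y that are
# independent *given* theta, but marginally dependent because they
# share the same unknown bias.
N = 200_000
draws = []
for _ in range(N):
    theta = random.random()          # sample the parameter from its prior
    x = random.random() < theta      # first toss, given theta
    y = random.random() < theta      # second toss, given theta
    draws.append((x, y))

p_x = sum(x for x, _ in draws) / N            # approx E[theta] = 1/2
p_y = sum(y for _, y in draws) / N
p_xy = sum(x and y for x, y in draws) / N     # approx E[theta^2] = 1/3

print(p_xy, p_x * p_y)   # roughly 1/3 vs 1/4: not equal, so dependent
```

Conditioning on a fixed `theta` instead (delete the `theta = random.random()` line inside the loop and fix a value outside it) makes `p_xy` match `p_x * p_y` up to sampling noise, which is the conditional independence in the answer.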

Source Link
Yair Daon
