
Let $X$ be in $\mathfrak{L}^1(\Omega,\mathfrak{F},P)$ and let $\mathfrak{G}\subset \mathfrak{F}$ be a sub-$\sigma$-algebra.

Prove that if $X$ and $E(X|\mathfrak{G})$ have the same distribution, then they are equal almost surely.

I know what I have to show, namely that $X$ is $\mathfrak{G}$-measurable, but I don't know how...


3 Answers


Let's denote $Y = E(X|\mathfrak{G})$.

$\color{blue}{\text{Step 1:}}$ Suppose first that $X \in L^2$. Since $X$ and $Y$ have the same distribution, $EX^2 = EY^2$, so \begin{align} E\left(X - Y\right)^2 &= E(X^2) - 2E(XY) + E(Y^2) \\ &= EX^2 + EY^2 - 2E\left(E(XY|\mathfrak{G})\right) \\ &= EX^2 + EY^2 - 2E\left(YE(X|\mathfrak{G})\right) \\ &= EX^2 + EY^2 - 2EY^2 = 0 \end{align}

so $X=Y$ almost surely.
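As a purely numerical sanity check of the tower-property identity $E(XY)=E(Y^2)$ used above (not part of the proof; the concrete choice $X\sim N(0,1)$, $\mathfrak{G}=\sigma(\operatorname{sign} X)$, with $Y=E(X|\mathfrak{G})=\operatorname{sign}(X)\sqrt{2/\pi}$, is an assumed example):

```python
import math
import random

random.seed(0)
n = 200_000
c = math.sqrt(2 / math.pi)  # E|X| for X ~ N(0,1)

# Hypothetical example: G = sigma(sign(X)), so E(X|G) = sign(X) * E|X|.
xs = [random.gauss(0.0, 1.0) for _ in range(n)]
ys = [math.copysign(c, x) for x in xs]

e_xy = sum(x * y for x, y in zip(xs, ys)) / n  # Monte Carlo estimate of E(XY)
e_y2 = sum(y * y for y in ys) / n              # Monte Carlo estimate of E(Y^2)

# Tower property: E(XY) = E(E(XY|G)) = E(Y E(X|G)) = E(Y^2); both equal 2/pi here.
print(e_xy, e_y2)
```

Both estimates agree up to Monte Carlo error, as the tower property predicts.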

$\color{blue}{\text{Step 2:}}$ Now we remove the assumption $X \in L^2$. Consider $X' = (X\wedge a) \vee b$ and $Y' = (Y\wedge a) \vee b$ with $b \leq a$, i.e. the truncated versions of $X$ and $Y$. We will show $X' = Y'$ almost surely; then, letting $a \to +\infty$ and $b\to -\infty$, we conclude $X = Y$ almost surely.

To prove $X'= Y'$ almost surely, we will prove $E(X'|\mathfrak{G}) = Y'$; then, since $X'$ and $Y'$ still have the same distribution (they are the same measurable function applied to $X$ and $Y$), $\color{blue}{\text{Step 1}}$ gives $X' = Y'$ almost surely.
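The claim that the truncations still share a distribution is just the fact that a fixed measurable map preserves equality in law; a small illustrative check (with the assumed example $Y=-X$ for symmetric $X$, so $X$ and $Y$ trivially have the same law):

```python
import random

random.seed(1)
n = 200_000
a, b = 1.0, -0.5  # arbitrary truncation levels with b <= a

def trunc(t):
    # (t ∧ a) ∨ b = max(min(t, a), b)
    return max(min(t, a), b)

xs = [random.gauss(0.0, 1.0) for _ in range(n)]
tx = [trunc(x) for x in xs]   # X'
ty = [trunc(-x) for x in xs]  # Y' where Y = -X has the same law as X

mean_tx = sum(tx) / n
mean_ty = sum(ty) / n
print(mean_tx, mean_ty)  # should agree up to sampling noise
```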

So all we need to do now is to prove $$E(X'|\mathfrak{G}) = Y'$$

First of all, $Y'$ is $\mathfrak{G}$-measurable.

Then, by Jensen's inequality for conditional expectations applied to the concave function $x \mapsto x\wedge a$, we have $$E(X\wedge a|\mathfrak{G}) \leq E(X|\mathfrak{G})\wedge a = Y\wedge a$$

However, since $X$ and $Y$ have the same distribution, $E(X\wedge a) = E(Y\wedge a)$; hence the above inequality cannot be strict on a set of positive probability, and we get

$$E(X\wedge a|\mathfrak{G}) = Y\wedge a$$

By a similar argument, applied to the convex function $x \mapsto x \vee b$, we get

$$E((X\wedge a) \vee b|\mathfrak{G}) = (Y\wedge a)\vee b$$
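A numerical sanity check of the conditional Jensen step $E(X\wedge a|\mathfrak{G}) \leq E(X|\mathfrak{G})\wedge a$ (illustrative only; the choice $X\sim N(0,1)$, $\mathfrak{G}=\sigma(\operatorname{sign} X)$ is an assumption of mine, and in this example $X$ and $E(X|\mathfrak{G})$ do not share a law, so only the inequality, not the equality, is expected):

```python
import random

random.seed(2)
n = 200_000
a = 0.5  # arbitrary cap level

xs = [random.gauss(0.0, 1.0) for _ in range(n)]
pos = [x for x in xs if x > 0]   # atom {sign(X) = +1} of G
neg = [x for x in xs if x <= 0]  # atom {sign(X) = -1} of G

results = []
for atom in (pos, neg):
    lhs = sum(min(x, a) for x in atom) / len(atom)  # E(X ∧ a | atom)
    rhs = min(sum(atom) / len(atom), a)             # E(X | atom) ∧ a
    results.append((lhs, rhs))
    print(lhs, "<=", rhs)
```

On the atom $\{X>0\}$ the inequality is strict, which is exactly the slack that the equal-distribution hypothesis rules out in the proof.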

  • Nice! Actually, our proofs are not so different (I guess there are not a thousand ways to solve the problem): your part with $X\wedge a$ corresponds to the integration on $A$/$B$ in my answer, and $X\wedge a\vee b$ to that with $A'$, $B'$. Commented Nov 11, 2014 at 17:38
  • @DavideGiraudo Yeah, we both tried to recover the lost $L^2$ integrability by truncation. Commented Nov 11, 2014 at 17:43
  • Hello, I have a question: why does $\mathbb E(X-Y)^2=0$ imply $X=Y$ almost surely? Commented Jan 7, 2015 at 2:28
  • @Duke Since $\{X \neq Y\} = \bigcup_{n=1}^\infty\{|X-Y| \ge \frac{1}{n}\}$, we have $P(X \neq Y) \leq \sum_{n=1}^\infty P(|X-Y| \ge \frac{1}{n})$. So if $P(X \neq Y) > 0$, we can find $N$ such that $P(|X-Y| \ge \frac{1}{N}) > 0$; then $E(X-Y)^2 \ge \frac{1}{N^2}P(|X-Y| \ge \frac{1}{N}) > 0$, a contradiction. Commented Jan 7, 2015 at 8:33
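The bound in the last comment is Markov's inequality applied to $(X-Y)^2$; a quick illustrative check (the stand-in $Z = X-Y \sim \mathrm{Unif}(-1,1)$ is an arbitrary assumption):

```python
import random

random.seed(3)
n = 200_000
t = 0.5  # plays the role of 1/N

zs = [random.uniform(-1.0, 1.0) for _ in range(n)]  # stand-in for X - Y

e_z2 = sum(z * z for z in zs) / n                   # E(X-Y)^2 (exactly 1/3 in theory)
p_tail = sum(1 for z in zs if abs(z) >= t) / n      # P(|X-Y| >= t)

# Markov: P(|Z| >= t) <= E(Z^2)/t^2; so E(Z^2) = 0 would force P(|Z| >= t) = 0.
print(p_tail, "<=", e_z2 / t**2)
```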

Here, the main difficulty is that we do not assume finiteness of the expectation of $X^2$.

Fix a real number $x$ and define $A:=\{X\leqslant x\}$ and $B:=\{\mathbb E[X\mid\mathcal G]\leqslant x\}$. Using the assumption, we have $$\mathbb E[X\chi(A)]=\mathbb E[X\chi(B)].$$

Indeed, since $B$ belongs to $\mathcal G$, we have $$\mathbb E[X\chi(B)]=\mathbb E[\mathbb E[X\mid\mathcal G]\chi(B)] =\mathbb E[\mathbb E[X\mid\mathcal G]\chi\{\mathbb E[X\mid\mathcal G]\leqslant x\}],$$ and the random variables $X\chi\{X\leqslant x\}$ and $\mathbb E[X\mid\mathcal G]\chi\{\mathbb E[X\mid\mathcal G]\leqslant x\}$ have the same distribution.

Define $C_1:=A\setminus B$ and $C_2:=B\setminus A$. Since $\mathbb P(A)=\mathbb P(B)$, we have $$\mathbb E\left[(X-x)\chi(C_1)\right]=\mathbb E[(X-x)\chi(C_2)].$$ As $(X-x)\chi(C_1)\leqslant 0\leqslant (X-x)\chi(C_2)$, we get that $\mathbb P(A\Delta B)=0$.

Now define $A':=\{X\geqslant -x\}$ and $B':=\{\mathbb E[X\mid\mathcal G]\geqslant -x\}$. By the same argument applied with $-X$ in place of $X$, we get $\mathbb P(A'\Delta B')=0$. Defining $A'':=A\cap A'$ and $B'':=B\cap B'$, we have $\mathbb P(A''\Delta B'')=0$, hence, for $x\geqslant 0$ (so that $A''=\{|X|\leqslant x\}$), $$\mathbb E[\left(\mathbb E[X\mid\mathcal G]\right)^2\chi(A'')]=\mathbb E[\left(\mathbb E[X\mid\mathcal G]\right)^2\chi(B'')]=\mathbb E[X^2\chi(|X|\leqslant x)]$$ and $$\mathbb E\left[X\mathbb E[X\mid\mathcal G]\chi(A'')\right]=\mathbb E[\left(\mathbb E[X\mid\mathcal G]\right)^2\chi(B'')],$$ hence $$\mathbb E\left[\left(X-\mathbb E[X\mid\mathcal G]\right)^2\chi\{|X|\leqslant x\}\right]=0.$$ As $x$ is arbitrary, the conclusion follows.
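For completeness (this expansion is not spelled out above): the final display follows by expanding the square and using the two identities just established, together with $\chi(A'')=\chi(B'')$ almost surely and $A''=\{|X|\leqslant x\}$ for $x\geqslant 0$:

$$\begin{aligned}\mathbb E\left[\left(X-\mathbb E[X\mid\mathcal G]\right)^2\chi(A'')\right]&=\mathbb E[X^2\chi(A'')]-2\,\mathbb E\left[X\,\mathbb E[X\mid\mathcal G]\,\chi(A'')\right]+\mathbb E\left[\left(\mathbb E[X\mid\mathcal G]\right)^2\chi(A'')\right]\\&=\mathbb E[X^2\chi(A'')]-2\,\mathbb E[X^2\chi(A'')]+\mathbb E[X^2\chi(A'')]=0.\end{aligned}$$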

  • I may need more caffeine, but can you explain a little why $E[X \cdot 1_A] = E[X \cdot 1_B]$ please? Commented Nov 6, 2014 at 16:40
  • @copper.hat I've edited. Commented Nov 6, 2014 at 16:46
  • Much appreciated Davide. (For my consumption: $Y$ a version of $E[X|\mathcal G]$, $f(t) = t\cdot 1_{(-\infty,x]}(t)$, $\mu_X C = P X^{-1}(C)$, $\mu_Y C = P Y^{-1}(C)$, $\mu_X = \mu_Y$, $\int_B X = \int_B Y = \int f \circ Y = \int f \, d\mu_Y = \int f \, d\mu_X = \int f \circ X = \int_A X$.) Commented Nov 6, 2014 at 18:12
  • You can try to understand it, and find a simpler solution. If some step is not clear, let me know. Commented Nov 8, 2014 at 17:23
  • "I don't want to try to understand it, if it is wrong" ??? Commented Nov 11, 2014 at 17:13

Here is how I argued:

If we take the usual simple functions $s_n\rightarrow Y^+$ and $s_{n}'\rightarrow \mathbb{E}[Y|\mathcal{G}]^+$, then by the monotone convergence theorem and the equality $\mathbb{P}(m/2^n<Y<(m+1)/2^n)=\mathbb{P}(m/2^n<\mathbb{E}[Y|\mathcal{G}]<(m+1)/2^n)$, we have:

$$\int Y^+d\mathbb{P}=\int E[Y|\mathcal{G}]^+d\mathbb{P}$$

Now, by Jensen's inequality, using that $x\rightarrow x^+$ is convex, we get $\mathbb{E}[Y^+|\mathcal{G}]\geq(\mathbb{E}[Y|\mathcal{G}])^+$. However, the integral identity above shows that in fact $\mathbb{E}[Y^+|\mathcal{G}]=(\mathbb{E}[Y|\mathcal{G}])^+$ almost everywhere. Finally, if $S=\{Y^+>E[Y|\mathcal{G}]^++\varepsilon\}$: $$0=\int_S \mathbb{E}[Y^+|\mathcal{G}]-E[Y|\mathcal{G}]^+d\mathbb{P}=\int_S Y^+-E[Y|\mathcal{G}]^+d\mathbb{P}\geq \varepsilon\mathbb{P}(S)$$

Hence, $\mathbb{P}(Y^+>E[Y|\mathcal{G}]^+)=0$. By the same token, $\mathbb{P}(Y^+<E[Y|\mathcal{G}]^+)=0$, and so $Y^+=E[Y|\mathcal{G}]^+$ almost everywhere. Because $-Y$ and $\mathbb{E}[-Y|\mathcal{G}]$ also have the same distribution, the computations above show that almost everywhere $Y^-=(-Y)^+=E[-Y|\mathcal{G}]^+=E[Y|\mathcal{G}]^-$; thus, the equalities below also hold almost everywhere:

$$Y=Y^+-Y^-=E[Y|\mathcal{G}]^+-E[Y|\mathcal{G}]^-=E[Y|\mathcal{G}]$$
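As a numerical complement (illustrative only; the example is an assumption of mine, not part of the answer): when $Y$ is not $\mathcal{G}$-measurable, conditioning strictly shrinks variance, so $Y$ and $\mathbb{E}[Y|\mathcal{G}]$ cannot have the same distribution. For $Y\sim N(0,1)$ and $\mathcal{G}=\sigma(\operatorname{sign} Y)$, where $\mathbb{E}[Y|\mathcal{G}]=\operatorname{sign}(Y)\sqrt{2/\pi}$:

```python
import math
import random

random.seed(4)
n = 200_000
c = math.sqrt(2 / math.pi)  # E|Y| for Y ~ N(0,1)

ys = [random.gauss(0.0, 1.0) for _ in range(n)]
cond = [math.copysign(c, y) for y in ys]  # E[Y | sign(Y)] = sign(Y) * sqrt(2/pi)

def var(sample):
    m = sum(sample) / len(sample)
    return sum((s - m) ** 2 for s in sample) / len(sample)

var_y, var_cond = var(ys), var(cond)
# Var(E[Y|G]) = 2/pi < 1 = Var(Y): the two laws differ, consistent with the theorem.
print(var_y, var_cond)
```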
