
I have an input signal $x$ (assumed to be i.i.d. Gaussian with mean $\mu=0$ and variance $\sigma^2$) which is fed into two linear systems:

  • $y_1 = h_1 * x$
  • $y_2 = h_2 * x$

Now I would like to calculate $\mathbb{E}[y_1 y_2]$. What is the proper way to do this - preferably in the frequency domain?

Background: I want to calculate something like $\operatorname{var}(y_1 + y_2)$. This expands to $\operatorname{var}(y_1)+\operatorname{var}(y_2)+2\mathbb{E}[y_1 y_2]$ (the outputs are zero-mean, so the covariance term is just $\mathbb{E}[y_1 y_2]$). Now I am very familiar with the first two components, the variances: the variance of a zero-mean stochastic signal is given by the autocorrelation sequence at lag 0. With the Wiener-Khinchin theorem this translates to:

$$ \operatorname{var}(y_1) = r_{y_1 y_1}(0) = \int_{-\infty}^{\infty} \Phi_{y_1 y_1}(f) \,df $$

and, since $\Phi_{y_1 y_1}(f) = \sigma^2 |H_1(f)|^2$ for white input:

$$ \operatorname{var}(y_1) = \sigma^2 \int_{-\infty}^{\infty} |H_1(f)|^2 \,df $$

Now for my problem: when I expand $\mathbb{E}[y_1 y_2]$ I again arrive at an $\mathbb{E}[x^2 \cdots]$ term (giving a variance), but the filter term is not a squared magnitude, since two different filters are involved.
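(For reference, a quick discrete-time sanity check of the variance identity above; the 4-tap FIR filter is an arbitrary example, and the PSD integral is replaced by its Parseval sum $\sigma^2 \sum_n h_1[n]^2$:)

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 1.5
h1 = np.array([0.5, -0.2, 0.8, 0.1])  # arbitrary example FIR impulse response

# iid zero-mean Gaussian input, filtered through h1
x = rng.normal(0.0, sigma, size=1_000_000)
y1 = np.convolve(x, h1, mode="full")

# var(y1) should equal sigma^2 * sum(h1^2), the Parseval form of the PSD integral
print(np.var(y1))                # Monte Carlo estimate
print(sigma**2 * np.sum(h1**2))  # closed form, both approx. 2.115
```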


2 Answers


Assuming that the two linear systems are BIBO-stable, the random processes $\{Y_1(t)\}$ and $\{Y_2(t)\}$ are zero-mean WSS Gaussian processes with autocorrelation functions and power spectral densities given by
\begin{align}
R_{Y_1} &= \sigma^2 (h_1 \star \tilde{h}_1), & S_{Y_1} &= \sigma^2 |H_1|^2,\\
R_{Y_2} &= \sigma^2 (h_2 \star \tilde{h}_2), & S_{Y_2} &= \sigma^2 |H_2|^2,
\end{align}
where $\tilde{h}(t) = h(-t)$ denotes the time-reversed impulse response. In fact, the processes are also jointly Gaussian and jointly WSS, with cross-correlation function
$$R_{Y_1, Y_2}(\tau) = E[Y_1(t) Y_2(t+\tau)] = \sigma^2 (h_1 \star \tilde{h}_2)(\tau)$$
and cross-power spectral density
$$S_{Y_1,Y_2}(f) = \sigma^2 H_1(f)H_2^*(f).$$

The OP wants to find $E[Y_1(t)Y_2(t)]$, which is given by
\begin{align}
E[Y_1(t)Y_2(t)] &= R_{Y_1,Y_2}(0)\\
&= \sigma^2 (h_1 \star \tilde{h}_2)\big|_{\tau=0}\tag{1}\\
&= \sigma^2\int_{-\infty}^{\infty} H_1(f)H_2^*(f) \,\mathrm df\tag{2}
\end{align}
since the OP prefers the frequency-domain calculation. Personally, given only $h_1$ and $h_2$ (and not $H_1$ and $H_2$), I would say that it is easier/cheaper to just grind out $(1)$ rather than use the frequency-domain calculation $(2)$, but the OP might have specific reasons for opting for the frequency domain. In particular, $\operatorname{var}(Y_1+Y_2)$, which is what the OP seems to really want to find, is easy to get directly: $Y_1+Y_2$ is simply the response of the single filter $h_1+h_2$ to $X$, so
$$\operatorname{var}(Y_1+Y_2) = \sigma^2\begin{cases}\displaystyle\int_{-\infty}^\infty |h_1(t)+h_2(t)|^2 \,\mathrm dt & \text{(continuous time)},\\[1ex] \displaystyle\sum_{n=-\infty}^\infty |h_1[n]+h_2[n]|^2 & \text{(discrete time)},\end{cases}$$
which seems easier than the frequency-domain version of the same calculation, but YMMV.
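Here is a minimal numerical sketch of $(1)$ and $(2)$ in discrete time (the filter taps below are arbitrary examples, not from the question): the lag-0 cross-correlation reduces to $\sigma^2\sum_n h_1[n]h_2[n]$, and the frequency-domain version follows from the discrete Parseval relation.

```python
import numpy as np

rng = np.random.default_rng(1)
sigma = 2.0
h1 = np.array([0.5, -0.2, 0.8, 0.1])  # arbitrary example filters
h2 = np.array([0.3, 0.7, -0.4])

# (1) time domain: sigma^2 * (h1 * h2~)(0) = sigma^2 * sum_n h1[n] h2[n]
L = max(len(h1), len(h2))
e_time = sigma**2 * np.dot(np.pad(h1, (0, L - len(h1))),
                           np.pad(h2, (0, L - len(h2))))

# (2) frequency domain: sigma^2 * (1/N) sum_k H1[k] conj(H2[k])  (discrete Parseval)
N = 256
H1, H2 = np.fft.fft(h1, N), np.fft.fft(h2, N)
e_freq = sigma**2 * np.mean(H1 * np.conj(H2)).real

# Monte Carlo estimate of E[y1 y2] on a common time axis
x = rng.normal(0.0, sigma, size=1_000_000)
y1 = np.convolve(x, h1)[:len(x)]
y2 = np.convolve(x, h2)[:len(x)]
e_mc = np.mean(y1 * y2)

print(e_time, e_freq, e_mc)  # all three should agree (about -1.24 here)
```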


You can rewrite the convolution operator as a matrix operation by building a Toeplitz matrix from the impulse response. Your equations can be rewritten as $Y_1 = H_1 X$ and $Y_2 = H_2 X$, where $Y_1 = [y^1_1,\ldots,y^n_1]^T$, $Y_2 = [y^1_2,\ldots,y^n_2]^T$, $X = [x_1,\ldots,x_m]^T$, $m$ is the length of the input vector, and $n$ is the length of the convolution product, which depends on the length of the input vector and the length of the impulse response.

Now, $Y_1 \sim \mathcal{N}(0, \sigma^2 H_1 H_1^T)$, $Y_2 \sim \mathcal{N}(0, \sigma^2 H_2 H_2^T)$, and $X \sim \mathcal{N}(0, \sigma^2 I)$.

Therefore,

\begin{align} \mathbb{E}(Y_1 Y_2^T) &= \mathbb{E}(H_1 X X^T H_2^T) \\ &= H_1 \mathbb{E}(X X^T) H_2^T \\ &= \sigma^2 H_1 H_2^T \end{align}

The scalar $\mathbb{E}[y_1 y_2]$ from the question is a (steady-state) diagonal entry of this matrix.
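As a concrete sketch of this construction (the filter taps and sizes below are arbitrary placeholders), one can build the "full"-convolution Toeplitz matrices by hand and check $\sigma^2 H_1 H_2^T$ against a sample estimate:

```python
import numpy as np

def conv_matrix(h, m):
    """Toeplitz matrix H of shape (m+len(h)-1, m) with H @ x == np.convolve(h, x)."""
    H = np.zeros((m + len(h) - 1, m))
    for j in range(m):
        H[j:j + len(h), j] = h
    return H

rng = np.random.default_rng(2)
sigma, m = 1.0, 8                 # assumed example sizes
h1 = np.array([0.5, -0.2, 0.8])
h2 = np.array([0.3, 0.7])

H1, H2 = conv_matrix(h1, m), conv_matrix(h2, m)

# Closed form: E[Y1 Y2^T] = sigma^2 H1 H2^T
C = sigma**2 * H1 @ H2.T

# Monte Carlo check with many iid Gaussian input vectors
X = rng.normal(0.0, sigma, size=(m, 200_000))
Y1, Y2 = H1 @ X, H2 @ X
C_mc = (Y1 @ Y2.T) / X.shape[1]

print(np.max(np.abs(C - C_mc)))  # should be small
```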

  • I was thinking about an approach such as that but couldn't figure out how to get $H$. Now I know. Thanks. (Commented Dec 19, 2018 at 4:56)
