In the context of Signal Processing, White Noise is usually defined by a single intuitive property: it is a random process whose power spectrum has constant magnitude at every frequency:

If $ w \left( t \right) $ is white noise, then its power spectrum is given by $ {S}_{w} \left( f \right) = {\sigma}^{2} $. In communications it is common to use $ {S}_{w} \left( f \right) = \frac{ {N}_{0} }{2} $.

Using the Wiener Khinchin Theorem, one would conclude that the autocorrelation of the process, which is assumed to be stationary in the wide sense, is given by $ {R}_{ww} \left( \tau \right) = {\sigma}^{2} \delta \left( \tau \right) $.
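
As a quick sanity check, here is a minimal sketch in Python (NumPy assumed, names illustrative) of the discrete time analogue: the periodogram of a white sequence hovers around $ {\sigma}^{2} $ and its sample autocorrelation concentrates at lag zero (a discrete delta).

```python
import numpy as np

# Discrete time white noise: i.i.d. samples with variance sigma^2
rng        = np.random.default_rng(0)
sigma      = 1.5
numSamples = 100_000
w = sigma * rng.standard_normal(numSamples)

# Periodogram estimate of the power spectrum: should hover around sigma^2
W   = np.fft.rfft(w)
psd = (np.abs(W) ** 2) / numSamples
print(np.mean(psd), sigma ** 2)  # Close to each other

# Sample autocorrelation: ~sigma^2 at lag 0, ~0 at other lags
for k in [0, 1, 5]:
    r = np.mean(w[:numSamples - k] * w[k:]) if k else np.mean(w * w)
    print(k, r)
```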

In the context of Signal Processing we only need the model of continuous White Noise in order to analyze the output of a Linear System with finite frequency support. For this purpose, the above model is enough.
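
To illustrate this use case, here is a sketch (Python with NumPy / SciPy assumed; the filter choice is illustrative) comparing the empirical output variance of white noise passed through a band limited FIR filter with the theoretical value $ {\sigma}^{2} \sum_{n} {h}^{2} \left[ n \right] $, the discrete counterpart of $ \int {S}_{w} \left( f \right) {\left| H \left( f \right) \right|}^{2} df $.

```python
import numpy as np
from scipy import signal

rng   = np.random.default_rng(1)
sigma = 2.0
w     = sigma * rng.standard_normal(500_000)

# Low pass FIR filter with (approximately) finite frequency support
h = signal.firwin(numtaps=101, cutoff=0.1)  # Normalized cutoff (Nyquist = 1)
y = signal.lfilter(h, 1.0, w)

# Output variance of filtered white noise: sigma^2 * sum(h^2)
print(np.var(y[len(h):]))           # Empirical (skipping the filter transient)
print(sigma ** 2 * np.sum(h ** 2))  # Theoretical
```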

Yet, if one wants to be mathematically rigorous, the above model is not enough.
One contradiction is that one must have some level of correlation in order to have a process for which the Wiener Khinchin Theorem holds. So the move from the power spectral density to the autocorrelation, as done above, doesn't actually hold (see, for instance, "Roy M. Howard - On Defining White Noise"). Yet, again, in the context of Signal Processing, this is enough.

A more rigorous derivation can be done by defining White Noise as the derivative of the Wiener Process.
The Wiener Process has some properties which make it interesting:

  1. The difference of 2 samples (the increment) is Gaussian.
  2. The realization path is continuous (in the almost surely sense).

Then we can define Gaussian White Noise as the (formal) derivative of the Wiener Process. This yields a more coherent definition (for instance, it indeed defines a probability measure).
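
A small sketch of this construction (Python with NumPy assumed; the step size and names are illustrative): a Wiener Process can be simulated as the cumulative sum of independent $ \mathcal{N} \left( 0, \Delta t \right) $ increments, and its difference quotient $ \frac{ W \left( t + \Delta t \right) - W \left( t \right) }{ \Delta t } $ behaves like an uncorrelated (white) sequence whose variance grows like $ \frac{1}{\Delta t} $, hinting at why the derivative exists only in a generalized sense.

```python
import numpy as np

rng      = np.random.default_rng(2)
dt       = 1e-3
numSteps = 100_000

# Wiener process: cumulative sum of independent N(0, dt) increments
dW = np.sqrt(dt) * rng.standard_normal(numSteps)
W  = np.cumsum(dW)

# "Derivative" (difference quotient): a white like sequence
wNoise = dW / dt

# Samples are uncorrelated, but the variance blows up as 1 / dt
print(np.var(wNoise), 1.0 / dt)                     # ~1000 for dt = 1e-3
print(np.corrcoef(wNoise[:-1], wNoise[1:])[0, 1])   # ~0
```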
