In most textbooks on signal processing, the sampling of an analog signal $x_a(t)$ is mathematically represented by \begin{equation} x_s(t) = x_a(t) \sum_{n} \delta(t - nT_s) \tag{1} \end{equation} where $\sum_{n} \delta(t - nT_s)$ is a periodic impulse train with period $T_s$ seconds. It may be noted that the value of $\delta(t)$ is undefined at $t = 0$, and that \begin{equation} g(t)\,\delta(t - \tau) \neq g(\tau). \tag{2} \end{equation} However, \begin{equation} \int_{-\infty}^{\infty} \delta(t)\, dt = 1 \quad \text{and} \quad \int_{-\infty}^{\infty} g(t)\, \delta(t - \tau)\, dt = g(\tau). \tag{3} \end{equation} That is, the integral must be used with $\delta(t-\tau)$ to extract the value of $g(t)$ at $t = \tau$. Based on the above reasoning, should the mathematical equation for the sampling process not be \begin{align} x_s(t) &= \int_{-\infty}^{\infty} x_a(t) \sum_{n} \delta(t - nT_s)\, dt \\ &= \sum_{n} \int_{-\infty}^{\infty} x_a(t)\, \delta(t - nT_s)\, dt \\ &= \{ \dots, x_a(-2T_s), x_a(-T_s), x_a(0), x_a(T_s), x_a(2T_s), \dots \}? \tag{4} \end{align}
4 Answers
Your last equation doesn't make much sense because
$$\sum_n\int_t x_a(t)\delta(t-nT_s)dt=\sum_nx_a(nT_s)\tag{1}$$
which is simply the sum over all samples, hence a constant (if the sum converges).
The mathematical model involving multiplication with a Dirac comb is just convenient for certain manipulations. It makes sure that the resulting signal is still a continuous-time signal, in the sense that it can be integrated, or convolved with another continuous-time function.
If we use your definition of the sampled signal $x_s(t)$, note that
$$(x_s*h)(t)=\sum_nx(nT_s)h(t-nT_s)\tag{2}$$
where $*$ denotes convolution, and $h$ is the impulse response of some LTI system. This is when the model becomes useful.
If you're just interested in the sample values, you're free to consider the sequence $x(nT_s)$, and there's no need for any mathematical model.
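To make the contrast concrete, here is a minimal numpy sketch (the signal, the impulse response, and all the numbers are made-up placeholders): integrating over all $t$ as in your (4) collapses the signal into one number, while the convolution model (2) keeps a function of $t$.

```python
import numpy as np

# Toy setup; the signal, impulse response, and all numbers are placeholders.
Ts = 0.1                                 # sampling period (seconds)
n = np.arange(-50, 51)                   # sample indices
x = lambda t: np.exp(-t**2)              # example analog signal x_a(t)

# The question's (4): integrating over all t just sums the samples,
# collapsing the signal into a single constant.
total = np.sum(x(n * Ts))
print("sum over all samples:", total)    # one number, not a signal

# This answer's (2): convolving the impulse-train model with an impulse
# response h keeps a continuous-time function of t.
h = lambda t: np.where(t >= 0, np.exp(-5 * t), 0.0)   # example causal response
t = np.linspace(-2.0, 2.0, 1000)
y = sum(x(k * Ts) * h(t - k * Ts) for k in n)          # (x_s * h)(t)
print("y(t) evaluated on a grid of", y.shape[0], "points")
```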
- The integral operator sums up the values of $x(n)$, as you pointed out. Thanks. However, with no meaningful value defined for $\delta(t)$, this presents yet another mathematical limitation in representing the values of the sampled signal. The mathematical model for sampling is required for Fourier analysis of sampled signals. Finally, this question is the same as the one you pointed to. — lakshminarayanan raghavendran, Sep 18, 2024 at 17:26
This is the Fundamental Sampling Theorem according to a sampling fundamentalist. We start with $x(t)$, a continuous-time, real signal that is bandlimited to one-sided bandwidth $B$. That is, the spectrum of $x(t)$ (which we call $X(f)$) is zero for all $|f| \ge B$.
We can uniformly sample $x(t)$ if the sample rate $f_\text{s} \triangleq \tfrac{1}{T}$ is sufficiently high; specifically, $f_\text{s} > 2B$. Then
$$\begin{align} x_\text{s}(t) &= x(t) \cdot T \, \operatorname{\text{Ш}}_T(t) \\ &= x(t) \cdot T \sum\limits_{n=-\infty}^{\infty} \delta(t - nT) \\ &= T \sum\limits_{n=-\infty}^{\infty} x(t) \, \delta(t - nT) \\ &= T \sum\limits_{n=-\infty}^{\infty} x(nT) \, \delta(t - nT) \\ &= T \sum\limits_{n=-\infty}^{\infty} x[n] \, \delta(t - nT) \\ \end{align}$$
It is also true that the sampling function is periodic and has a Fourier series.
$$\begin{align} T \, \operatorname{\text{Ш}}_T(t) &\triangleq T \sum\limits_{n=-\infty}^{\infty} \delta(t - nT) \\ &= \sum\limits_{k=-\infty}^{\infty} e^{j 2 \pi k f_\text{s} t} \\ \end{align}$$
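For the record, the coefficient computation is short: over one period the scaled comb contains the single impulse $T\,\delta(t)$, so the $k$-th Fourier series coefficient is

$$ c_k = \frac{1}{T} \int_{-T/2}^{T/2} T \, \delta(t) \, e^{-j 2 \pi k f_\text{s} t} \, dt = e^{-j 2 \pi k f_\text{s} \cdot 0} = 1 \qquad \text{for all } k \in \mathbb{Z} $$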
Indeed, all of the Fourier series coefficients are $1$. This means that the uniformly sampled function is
$$\begin{align} x_\text{s}(t) &= x(t) \cdot T \, \operatorname{\text{Ш}}_T(t) \\ &= x(t) \cdot T \sum\limits_{n=-\infty}^{\infty} \delta(t - nT) \\ &= x(t) \sum\limits_{k=-\infty}^{\infty} e^{j 2 \pi k f_\text{s} t} \\ &= \sum\limits_{k=-\infty}^{\infty} x(t) \, e^{j 2 \pi k f_\text{s} t} \\ \end{align}$$
Accordingly, taking the continuous Fourier Transform, the spectrum of the sampled signal is
$$\begin{align} X_\text{s}(f) & \triangleq \mathscr{F} \Big\{ x_\text{s}(t) \Big\} \\ &= \mathscr{F} \left\{ \sum\limits_{k=-\infty}^{\infty} x(t) \, e^{j 2 \pi k f_\text{s} t} \right\} \\ &= \sum\limits_{k=-\infty}^{\infty} \mathscr{F} \Big\{ x(t) \, e^{j 2 \pi k f_\text{s} t} \Big\} \\ &= \sum\limits_{k=-\infty}^{\infty} X(f - k f_\text{s}) \\ \end{align}$$
And we know, as long as $B < \tfrac12 f_\text{s}$, that there is no overlap in the adjacent shifted spectra of $X(f)$ because $ k f_\text{s} +B < (k+1)f_\text{s}-B $ for all $k \in \mathbb{Z}$.
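For a concrete instance (numbers chosen purely for illustration): with $f_\text{s} = 48$ kHz and $B = 20$ kHz, the $k$-th image occupies $(48k - 20,\ 48k + 20)$ kHz, so consecutive images are separated by a guard band of $f_\text{s} - 2B = 8$ kHz and cannot overlap.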
The original $X(f)$ can be recovered from $X_\text{s}(f)$ as the $k=0$ term of the summation.
$$\begin{align} X(f) &= H(f) \, X_\text{s}(f) \\ \\ &= H(f) \, \sum\limits_{k=-\infty}^{\infty} X(f - k f_\text{s}) \\ &= \sum\limits_{k=-\infty}^{\infty} H(f) \, X(f - k f_\text{s}) \end{align}$$
where
$$ H(f) = \Pi\left( \tfrac{f}{f_\text{s}} \right) $$
and $\Pi(u)$ (sometimes "$\operatorname{rect}(u)$") is the rectangular function

$$\operatorname{rect}(u) = \Pi(u) \triangleq \begin{cases} 1 \qquad & \text{ if } |u| < \tfrac12 \\ \tfrac12 \qquad & \text{ if } |u| = \tfrac12 \\ 0 \qquad & \text{ if } |u| > \tfrac12 \\ \end{cases}$$

In the summation above, every term with $k \ne 0$ satisfies $H(f) \, X(f - k f_\text{s}) = 0$, because $H(f) = 0$ at every $f$ where $X(f - k f_\text{s}) \ne 0$; for the single term with $k = 0$, $H(f) \, X(f - 0 f_\text{s}) = X(f)$, because $H(f) = 1$ at every $f$ where $X(f) \ne 0$.
And we know that the inverse Fourier transform is
$$ \mathscr{F}^{-1} \Big\{ H(f) \Big\} \triangleq h(t) = f_\text{s} \, \operatorname{sinc}(f_\text{s} t) $$
where the sinc function is
$$\operatorname{sinc}(u) \triangleq \begin{cases} \frac{\sin(\pi u)}{\pi u} \qquad & \text{ if } |u| \ne 0 \\ 1 \qquad & \text{ if } |u| = 0 \\ \end{cases}$$
Then, remembering that $f_\text{s}=\tfrac1T $, we know that the output of the brickwall reconstruction filter is
$$\begin{align} X(f) &= H(f) \, X_\text{s}(f) \\ & \iff \\ x(t) &= h(t) \ \circledast \ x_\text{s}(t) \\ &= f_\text{s} \, \operatorname{sinc}(f_\text{s} t) \ \circledast \ x_\text{s}(t) \\ &= f_\text{s} \, \operatorname{sinc}(f_\text{s} t) \ \circledast \ T \sum\limits_{n=-\infty}^{\infty} x(nT) \, \delta(t - nT) \\ &= f_\text{s} \, T \sum\limits_{n=-\infty}^{\infty} x(nT) \, \big( \operatorname{sinc}(f_\text{s} t) \ \circledast \ \delta(t - nT) \big) \\ &= \sum\limits_{n=-\infty}^{\infty} x(nT) \, \operatorname{sinc}\big( f_\text{s} (t - nT) \big) \\ &= \sum\limits_{n=-\infty}^{\infty} x(nT) \, \operatorname{sinc}\big( f_\text{s} t - n \big) \\ &= \sum\limits_{n=-\infty}^{\infty} x[n] \, \operatorname{sinc}\big( f_\text{s} t - n \big) \\ \end{align}$$
That's how we reconstruct our original $x(t)$ out of the samples $x[n]$. So much for the sampling and reconstruction theorem.
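As a numerical sanity check of that last line, here is a short sketch (my own; the test signal, rates, and index truncation are arbitrary choices, and truncating the ideal infinite sum leaves a small residual error):

```python
import numpy as np

# Sketch of the last line above: x(t) ~ sum_n x[n] * sinc(fs*t - n).
fs = 8.0                          # sample rate (Hz), so T = 1/fs
T = 1.0 / fs

# Example signal: two tones at 1 Hz and 2.5 Hz, both below fs/2 = 4 Hz.
x = lambda t: np.cos(2 * np.pi * 1.0 * t) + 0.5 * np.sin(2 * np.pi * 2.5 * t)

n = np.arange(-200, 201)          # truncated index range (ideal sum is infinite)
samples = x(n * T)                # x[n] = x(nT)

t = np.linspace(-1.0, 1.0, 500)
# np.sinc uses the normalized convention sinc(u) = sin(pi*u)/(pi*u).
x_hat = np.array([np.sum(samples * np.sinc(fs * ti - n)) for ti in t])

print("max reconstruction error:", np.max(np.abs(x_hat - x(t))))
# The residual comes from truncating the sum; it shrinks as the range grows.
```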
This answer should be taken as auxiliary to the other ones. It's my supposition about the intent of the theorem (I really should delve into the history of this theorem).
The theorem really boils down to a mathematical convenience: your (1) takes a continuous-time signal $x_a$ and converts it into a sampled-time signal $x_s$ in a rigorously defined way. If you keep reading, you'll find that the author will show that taking the Fourier transform of $x_s$ coughs up a result that is just the discrete-time Fourier transform of some signal $x_d(n)$ times some fluff$^1$, as long as $x_d$ is defined something like $$x_d(n) = x_a(n T_s) \tag a$$
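A sketch of that computation (my paraphrase, not necessarily any particular author's exact steps), using the sifting property from your (3):

$$ X_s(f) = \int_{-\infty}^{\infty} x_a(t) \sum_n \delta(t - n T_s) \, e^{-j 2 \pi f t} \, dt = \sum_n x_a(n T_s) \, e^{-j 2 \pi f n T_s} = \sum_n x_d(n) \, e^{-j \omega n} \Bigg|_{\omega = 2 \pi f T_s} $$

which is the discrete-time Fourier transform of $x_d$ evaluated at $\omega = 2 \pi f T_s$; the "fluff" is the frequency rescaling (equivalently, the $\tfrac{1}{T_s}$-scaled periodic images of $X_a$).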
This is an exceedingly useful result. It gives you a structured way to design software-defined radios, audio systems that use DSP, and pretty much anything else that uses DSP. You can do it all with nice solid$^2$ math, instead of hand-waving and going by guess and by gosh.
Note that if you're doing sampled-time controls, when you take the Laplace transform of $x_s$, you end up with the $\mathcal Z$-transform of $x_d$, with some suspiciously similar mathematical fluff attached. So the fundamental sampling theorem isn't just good for listening to Def Leppard.
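The parallel computation (again a sketch, assuming $x_a(t) = 0$ for $t < 0$ so that the one-sided transforms apply):

$$ \mathcal{L}\{x_s\}(s) = \int_{0^-}^{\infty} x_a(t) \sum_n \delta(t - n T_s) \, e^{-s t} \, dt = \sum_{n=0}^{\infty} x_d(n) \, e^{-s n T_s} = X_d(z) \Big|_{z = e^{s T_s}} $$

where $X_d(z) = \sum_{n=0}^{\infty} x_d(n) \, z^{-n}$ is the $\mathcal Z$-transform of $x_d$.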
If you try to take your (4) and use it in the way that (1) is used in DSP, then you won't get anywhere. (1) isn't so much about the physics of any signals, as it is a means of unifying the math between the sampled-time and the continuous-time realms, in a way that makes sense all around.
$^1$ Unlike RBJ, I'm a sampling anti-purist.
$^2$ I wrote "hard" instead of "solid" on the first revision. Yes, it is difficult -- but it's also 100% dependable, and that's the important thing.
Late to the party, but just to clarify an issue that wasn't mentioned before: I think that if you come at this with an analysis/calculus mindset, the "sampling" achieved by multiplying with delta functions should be interpreted this way:
\begin{equation} x_s(t_0) = \int_{t_0-\epsilon}^{t_0+\epsilon} x_a(t) \sum_{n} \delta(t - nT_s) dt \end{equation}
where $\epsilon$ is less than half the sampling interval.
This makes the "$=$" sign work the way you would expect if you accept the existence of a "function" such as $\delta(t)$.
Omitting this and working directly with a definition such as $x_s(t) = x_a(t) \sum_{n} \delta(t - nT_s) $ is what the theory of distributions does.
Essentially, it boils down to what we mean by the symbol "$=$".
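A quick numerical illustration of this reading (my own sketch: $\delta(t)$ is replaced by a narrow unit-area Gaussian nascent delta, and every constant is arbitrary):

```python
import numpy as np

# Replace delta(t) with a narrow Gaussian "nascent delta" and evaluate the
# windowed integral above numerically; all constants are illustrative only.
Ts = 1.0                                     # sampling interval
eps = 0.4 * Ts                               # window half-width, less than Ts/2
sigma = 1e-3                                 # impulse width; shrink toward 0

x_a = lambda t: np.sin(2 * np.pi * 0.2 * t) + 2.0   # example analog signal

def nascent_delta(t, sigma):
    # Unit-area Gaussian; tends to delta(t) as sigma -> 0.
    return np.exp(-t**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

t0 = 3 * Ts                                  # probe the sample instant n = 3
t = np.linspace(t0 - eps, t0 + eps, 200_001)
dt = t[1] - t[0]
comb = sum(nascent_delta(t - n * Ts, sigma) for n in range(-10, 11))
x_s_t0 = np.sum(x_a(t) * comb) * dt          # integral over (t0 - eps, t0 + eps)

print(x_s_t0, "vs", x_a(t0))                 # the two values agree as sigma -> 0
```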