
I'm not very experienced in using Fourier transforms to solve PDEs; in fact, I'm trying to learn right now. Here is my issue. I'm trying to understand an example found in some lecture notes. Suppose we have the wave equation in one spatial dimension and one time dimension: \begin{align} &\partial_t^2 u(t, x) - v^2 \partial_x^2 u(t, x) = 0 && x \in \mathbb{R}\\ &\text{some initial conditions + boundary conditions} \end{align} and suppose we want to solve it using a Fourier transform in two dimensions: \begin{align} &u(t, x) = \frac{1}{2 \pi} \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} dk_t\, dk_x\, \hat{u}(k_t, k_x)\, e^{-i(k_t t + k_x x)} \\[7pt] &\partial_t^2 u(t, x) = \frac{1}{2 \pi} \int_{-\infty}^\infty \int_{-\infty}^\infty dk_t\, dk_x\, \hat{u}(k_t, k_x)\, \partial_t^2 e^{-i(k_t t+ k_x x)} = -\frac{1}{2 \pi} \int_{-\infty}^\infty \int_{-\infty}^\infty dk_t\, dk_x\, \hat{u}(k_t, k_x)\, k_t^2\, e^{-i(k_t t + k_x x)} \\[7pt] &\partial_x^2 u(t, x) = \frac{1}{2 \pi} \int_{-\infty}^\infty \int_{-\infty}^\infty dk_t\, dk_x\, \hat{u}(k_t, k_x)\, \partial_x^2 e^{-i(k_t t+ k_x x)} = -\frac{1}{2 \pi} \int_{-\infty}^\infty \int_{-\infty}^\infty dk_t\, dk_x\, \hat{u}(k_t, k_x)\, k_x^2\, e^{-i(k_t t + k_x x)} \end{align} Hence, the PDE becomes: \begin{align} \frac{1}{2 \pi} \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} dk_t\, dk_x \left[ -k_t^2 + v^2 k_x^2 \right] \hat{u}(k_t, k_x)\, e^{-i(k_t t + k_x x)} = 0 \Longleftrightarrow \left[ -k_t^2 + v^2 k_x^2 \right] \hat{u}(k_t, k_x) =^\mathbf{*} 0 \end{align}

Now, we have to find a function $\hat{u}(k_t, k_x)$ which gives zero when multiplied by the factor $[-k_t^2 + v^2 k_x^2]$: \begin{align} \hat{u}(k_t, k_x) = \frac{0}{-k_t^2 + v^2 k_x^2} = 0 && \text{iff} && -k_t^2 + v^2 k_x^2 \neq 0 \Longleftrightarrow k_t \neq \pm vk_x \end{align} The solution I found in the notes is the following: $$ \hat{u}(k_t, k_x) = A(k_x) \delta (vk_x + k_t) + B(k_x) \delta (vk_x - k_t) $$ where $A(k_x)$ and $B(k_x)$ are some arbitrary functions, I suppose. This raises three main questions:

  1. I can see that if the Dirac delta is interpreted as being zero everywhere except at $\mp vk_x$, where it is infinite, the solution intuitively makes some sense. But mathematically the delta, being a distribution, is not pointwise defined. Even if we allow the abuse of interpretation in which the delta is zero everywhere except at one point, where it is infinite, I cannot see how the infinite value helps in this case. Therefore: how can such a solution be derived more rigorously?
  2. The solution consists of a linear combination of deltas whose coefficients are functions of $k_x$. Since we are working with a two-dimensional Fourier transform, there is no asymmetry between the variables: $k_x$ and $k_t$ should be treated equally. (With a one-dimensional Fourier transform, on the other hand, the transformed variable would play a different role from the untransformed one.) Hence the second question: why are the coefficients of the linear combination both functions of $k_x$? Is it possible to write an equivalent solution using coefficients which are functions of $k_t$?
  3. Seeing that the solution $\hat{u}(k_t, k_x)$ is a member of some distributional space (e.g. the dual of the space of rapidly decreasing functions, or the dual of the space of compactly supported functions), how should the equal sign marked with the symbol "*" be interpreted? Is it an "almost everywhere" equality, an equality in the "distributional sense", or something else entirely? And why?

P.S. Please mind that the notes I'm referring to are handwritten, so they might contain some typos (even if I don't see any). Anyway, thank you in advance for your help!

1 Answer

Let's start with some basics. Formally, a distribution $D$ (on, say, $\mathbb{R}^n$) is a linear functional, defined by how it acts on (a suitable class of) test functions $\phi$ via the pairing $$\left<D,\phi\right> \equiv \int_{\mathbb{R}^n} D(x)\phi(x)\, dx$$ Since $D$ is not a function, it does not need to produce a real or complex number when evaluated at any given point; only its action when integrated against a test function must be well-defined. For example, the defining relation for the Dirac delta is simply $\left<\delta,\phi\right> = \phi(0)$, which is just a different way of writing the good old $\int \phi(x)\delta(x)\, dx = \phi(0)$, but in this language there is no $\delta(0) = \infty$. The $\delta$ is simply a functional that picks out the value $\phi(0)$ from any test function it acts on.
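As a sanity check, one can approximate the delta by a narrow Gaussian and verify numerically that the pairing $\left<\delta_\varepsilon,\phi\right>$ tends to $\phi(0)$. This is only an illustrative sketch; the helper names and parameter choices below are mine, not from any library:

```python
import numpy as np

def delta_eps(x, eps):
    """Narrow Gaussian of width eps: a standard approximation of the Dirac delta."""
    return np.exp(-x**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))

def pairing(D, phi, x):
    """Approximate <D, phi> = integral of D(x) phi(x) dx by a Riemann sum."""
    dx = x[1] - x[0]
    return np.sum(D(x) * phi(x)) * dx

x = np.linspace(-10, 10, 200001)
phi = np.cos                      # a smooth test function with phi(0) = 1
val = pairing(lambda s: delta_eps(s, 0.01), phi, x)
# val is approximately phi(0) = 1; it converges as eps -> 0
```

The point is that only the integrated pairing is ever computed; at no step do we need a pointwise value "$\delta(0) = \infty$".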

In many practical situations we can treat distributions as if they were ordinary functions; we just need to be a bit more careful. We cannot insist on them having a value at any given point; they just have to make sense when integrated. Distributional equations like $xf(x) = 1$ will have some new solutions, due to the fact that we can now have distributions, like the Dirac $\delta$, that have all their weight at a single point. We must also be careful about multiplying general distributions together: that operation is not always well-defined.

Now that we know this, we can solve the equation $xf(x) = 0$. Such an equation in the space of distributions means that we want a distribution $f$ such that, when multiplied by $x$ and integrated against any test function $\phi$, it gives zero; in symbols, $\int x f(x) \phi(x)\, dx = 0$ for any (suitable, e.g. smooth) $\phi$. We need one fact: the only distributions that have all their weight at one point are the Dirac delta and its derivatives. For $x \neq 0$ it's easy to see that $f(x) \equiv 0$: otherwise we could choose a test function $\phi$ that is non-zero only in a tiny interval around a given point and get a non-zero result for the integral, a contradiction. Secondly, we need to know that $x \delta(x) \equiv 0$ (which follows from the defining relation of $\delta$), so the solution here is simply $f(x) = A\delta(x)$. It's easy to check that $xf(x)$ is indeed zero: $\int xf(x) \phi(x)\, dx = A\,(x\phi(x))|_{x=0} = 0$.
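The identity $x\delta(x) = 0$ can also be seen numerically with the same Gaussian approximation of the delta: the pairing $\left<x\,\delta_\varepsilon, \phi\right>$ shrinks like $\varepsilon^2$ even for a test function with $\phi(0) \neq 0$. A sketch (my own helper names and parameters):

```python
import numpy as np

def delta_eps(x, eps):
    """Narrow Gaussian approximating the Dirac delta."""
    return np.exp(-x**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))

x = np.linspace(-10, 10, 200001)
dx = x[1] - x[0]
phi = np.exp(x)                   # test function with phi(0) = 1 != 0

for eps in [0.1, 0.01]:
    # <x * delta_eps, phi> ~ eps**2: the factor x suppresses the delta's
    # weight at the origin, so the pairing vanishes in the limit
    val = np.sum(x * delta_eps(x, eps) * phi) * dx
```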

Applying this result to $(k_t^2-v^2k_x^2)\hat{u}(k_t,k_x) = 0$, which can be written $(k_t-vk_x)(k_t+vk_x)\hat{u}(k_t,k_x) = 0$, we see that $\hat{u}$ must be a sum of two delta functions: $$\hat{u}(k_t,k_x) = A(k_t,k_x)\delta(k_t - vk_x) + B(k_t,k_x)\delta(k_t + vk_x)$$ When multiplied by the two factors $(k_t-vk_x)$ and $(k_t+vk_x)$, each factor "kills" (via $x\delta(x) = 0$) one of the delta functions, giving $0$ as the result. Since the delta functions above are supported only where their argument vanishes, the coefficients $A,B$ are effectively functions of just $k_t$ (or $k_x$ if you prefer). Thus there is no asymmetry, and you are free to write them as functions of either coordinate (or both). We will see this in practice below.

Now let's compute $u$ from this solution: $$u(x,t) = \iint dk_t\, dk_x\, [A(k_t,k_x)\delta(k_t - vk_x) + B(k_t,k_x)\delta(k_t + vk_x)]\, e^{-i(k_t t + k_x x)}$$ Each of the $\delta$-functions "kills" the $k_t$ integral, and we get $$u(x,t) = \int dk_x\, [a(k_x)e^{-ik_x(x + vt)} + b(k_x)e^{-ik_x(x-v t)}]$$ where $a(k_x) = A(vk_x,k_x)$ and $b(k_x) = B(-vk_x,k_x)$ (and, as promised above, we see that $A$ and $B$ are indeed functions of just one variable). We can simplify further by defining $f(x) = \int dk_x\, a(k_x)e^{-ik_x x}$ and $g(x) = \int dk_x\, b(k_x)e^{-ik_x x}$, and then the solution becomes $$u(x,t) = f(x + vt) + g(x-vt)$$ which is that of two wave packets, one traveling to the left and one to the right (as expected from d'Alembert's formula). The wave equation does indeed describe waves.
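One can check the d'Alembert form directly: any $u(x,t) = f(x+vt) + g(x-vt)$ with smooth $f, g$ satisfies the wave equation, which a quick finite-difference test confirms numerically (the packets, evaluation point, and step size below are arbitrary choices of mine):

```python
import numpy as np

v = 2.0
f = lambda s: np.exp(-s**2)          # left-moving packet (arbitrary choice)
g = lambda s: 1.0 / (1.0 + s**2)     # right-moving packet (arbitrary choice)
u = lambda x, t: f(x + v * t) + g(x - v * t)

# second derivatives by central differences at an arbitrary point (x0, t0)
h, x0, t0 = 1e-4, 0.3, 0.7
u_tt = (u(x0, t0 + h) - 2 * u(x0, t0) + u(x0, t0 - h)) / h**2
u_xx = (u(x0 + h, t0) - 2 * u(x0, t0) + u(x0 - h, t0)) / h**2
residual = u_tt - v**2 * u_xx        # vanishes up to discretization error
```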

For solving the PDE in practice it is often (depending on the domain and the given IC/BC) much better not to take the full 2D FT, but instead to take the FT with respect to just the $x$-coordinate. This gives us $$\partial_t^2\hat{u}(k_x,t) + k_x^2v^2\hat{u}(k_x,t) = 0$$ which has the solution $$\hat{u}(k_x,t) = A\cos(k_x v t) + B\frac{\sin(k_x v t)}{k_x v}$$ where $A = \hat{u}(k_x,0)$ is just the FT of the initial condition $u(x,0)$ and $B = \hat{u}_t(k_x,0)$ is the FT of the initial velocity $u_t(x,0)$. Thus $A$ and $B$ follow directly from the initial conditions; once they are computed, we take the inverse transform to find the solution for all times $t$. In this case we also avoid the $\delta$-functions (though they can always pop up via the IC, e.g. if we had $u(x,0) = 1$).
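This one-dimensional-FT route is also how one would solve the problem numerically with an FFT. Here is a minimal sketch on a periodic domain, with a Gaussian initial displacement and zero initial velocity (all parameter choices are mine), compared against d'Alembert's formula:

```python
import numpy as np

v, L, N, t = 1.0, 40.0, 1024, 3.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)

u0 = np.exp(-x**2)                  # initial displacement u(x, 0)
v0 = np.zeros_like(x)               # initial velocity u_t(x, 0)

# evolve each mode: u_hat(k, t) = u0_hat cos(kvt) + v0_hat sin(kvt)/(kv),
# taking the k -> 0 limit t for the sin(kvt)/(kv) factor
sinc = np.where(k == 0, t, np.sin(v * k * t) / np.where(k == 0, 1.0, v * k))
u_hat = np.fft.fft(u0) * np.cos(v * k * t) + np.fft.fft(v0) * sinc
u = np.real(np.fft.ifft(u_hat))

# with zero initial velocity, d'Alembert gives (u0(x+vt) + u0(x-vt)) / 2
exact = 0.5 * (np.exp(-(x + v * t)**2) + np.exp(-(x - v * t)**2))
err = np.max(np.abs(u - exact))     # spectrally small on this domain
```

The agreement is to near machine precision as long as the wave packets stay well inside the periodic box.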

  • Thank you very much! Your answer is really well written and complete! Just one more question that still bothers me: do we solve $\hat{u}(k_t, k_x)(k_t^2 - v^2 k_x^2) = 0$ in the space of distributions because it is the only way to find a non-trivial solution for $\hat{u}(k_t, k_x)$? Commented Sep 4 at 16:38
  • @Luke__ We solve it in the space of distributions because that is the natural space for the FT. If we insist on the FT being a normal function, then we are no longer able to take the FT of many common functions, like constant functions, periodic functions, etc.: stuff that doesn't decay fast enough at infinity. This class simply doesn't have a FT outside distributions (as you can easily see in that the FT of a constant function, $\int \exp(ikx)\,dx$, which gives the Dirac delta, is not a convergent Riemann integral). It's hard to solve a wave equation while excluding periodic functions. Commented Sep 4 at 16:53
  • If you try to solve this PDE in the class of functions, you only get $u=0$, which just tells you that any non-zero solution to the wave equation cannot have a FT (as a convergent Riemann integral), as that was the assumption you implicitly made when you took the FT to begin with. Commented Sep 4 at 17:18
  • Yes, I was thinking something like that too. Thank you very much again! Commented Sep 4 at 20:16
