
Essentially, I'm writing a paper in which I want a figure that shows the effect of convolving an arbitrary curve with a Gaussian. Then, I want to show that you can deconvolve it by taking the FFT of the convolved form and dividing it by the FFT of the Gaussian (inverse filter it). However, doing so blows up, and I don't know why.

Before anyone points it out: I know that inverse filtering is a poor method and very sensitive to noise. That's exactly the point of this figure: to show that it works, but that it blows up once you leave ideal conditions.

To start, I set up the basic imports and define the functions and the domain they're on.

    import matplotlib.pyplot as plt
    import numpy as np
    import scipy as sp
    from scipy.fft import *

    xf = 5
    n = 100
    xrange = np.linspace(0, xf, num=n)
    a = 10
    xc = 2
    w = 0.1

    def func(x):
        bpoint = 2.25
        if x <= bpoint:
            return a/(np.pi*w*(1+((x-xc)**2/w)))
        else:
            return a/(np.pi*w*(1+((x-xc)**2/w))) + 9*np.sin(1.5*(x-bpoint)) + 5*(x-bpoint)

    def gauss(x, mx, o, I):
        return I*np.exp(-1/2*(((x-mx)/(o/6))**2))

    def conved(x, mx, o, I):
        return gauss(x, mx, o, I)*func(x)

Then, I plot the function f(x) alongside the Gaussian g(x) that it is to be convolved with.

    f = np.zeros((len(xrange),))
    for i in range(len(xrange)):
        f[i] = func(xrange[i])

    o = 2.5
    g = gauss(xrange, 2.5, o, 5)
    normal = np.sqrt(2*np.pi)*o/6*5

    plt.figure(0)
    plt.plot(xrange, f)
    plt.plot(xrange, g)
    plt.legend(['f(x)', 'g(x)'])

From there, I calculate the convolution using the integral formula (with normalization) and plot them alongside each other.

    fconvg = np.zeros((len(xrange),))
    for i in range(len(xrange)):
        mx = xrange[i]
        avg = sp.integrate.quad(conved, mx-o/2, mx+o/2, args=(mx, o, 5))
        fconvg[i] = avg[0]/normal

    plt.figure(1)
    plt.plot(xrange, f)
    plt.plot(xrange, fconvg)
    plt.legend(['f(x)', 'f(x)*g(x)'])

Now, I know that multiplying the FFTs of the two functions convolves them for me. However, the way the FFT handles the boundary conditions is inaccurate, as seen in the next plot.

    F = fft(f)
    G = fft(g)
    fconvg_fft = fftshift(ifft(F*G))*xf/(n-1)/normal

    plt.figure(2)
    plt.plot(xrange, f)
    plt.plot(xrange, fconvg)
    plt.plot(xrange, fconvg_fft)
    plt.legend(['f(x)', 'Integral', 'FFT'])

Since the FFT applies a periodic boundary condition, the boundaries of the convolution diverge from the true convolution calculated using the integral.
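The wrap-around can be seen in isolation with a small self-contained sketch (the signal and kernel here are made up, not the ones above): multiplying length-n FFTs yields the circular convolution, while zero-padding both inputs to at least 2n-1 samples before transforming reproduces the linear convolution.

    import numpy as np
    from scipy.fft import fft, ifft

    rng = np.random.default_rng(0)
    n = 100
    f = rng.standard_normal(n)                                # arbitrary test signal
    g = np.exp(-0.5 * ((np.arange(n) - n / 2) / 5.0) ** 2)    # Gaussian kernel

    # Length-n FFT product: *circular* convolution, with periodic wrap-around
    circ = np.real(ifft(fft(f) * fft(g)))

    # Zero-padding to >= 2n-1 samples removes the wrap-around entirely
    N = 2 * n - 1
    lin_fft = np.real(ifft(fft(f, N) * fft(g, N)))

    print(np.allclose(lin_fft, np.convolve(f, g)))        # True: matches linear convolution
    print(np.allclose(circ, np.convolve(f, g)[:n]))       # False: the edges wrapped around

(`scipy.signal.fftconvolve` does this padding automatically if you just want the linear convolution.)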

This part is where I'm having the issue. Basically, I take the FFT of the two different convolved forms and then divide each by the FFT of the Gaussian. However, in both cases, the result blows up to infinity when I take the IFFT, and I don't know what's happening. In my original code, this worked perfectly for the case where the convolution was calculated using the FFT. In this example, though, neither case works.

    H_integral = fft(fconvg)
    F_integral = H_integral/G
    f_integral = ifft(F_integral)

    H_fft = fft(fconvg_fft)
    F_fft = H_fft/G
    f_fft = ifft(F_fft)

    plt.figure(3)
    plt.plot(xrange, f)
    plt.plot(xrange, f_integral)
    plt.legend(['f(x)', 'Integral'])

    plt.figure(4)
    plt.plot(xrange, f)
    plt.plot(xrange, f_fft)
    plt.legend(['f(x)', 'FFT'])

    plt.show()

Deconvolution with Integral Calculation

Deconvolution with FFT Calculation

I've tried scaling by the step size and played with fftshift/ifftshift, but I can't get anything to work. I'm assuming I just don't understand how the FFT works, and it's getting really frustrating. I'd really appreciate it if someone could figure out what I'm doing wrong.

Here's the code in one big block, sans the plot commands:

    import matplotlib.pyplot as plt
    import numpy as np
    import scipy as sp
    from scipy.fft import *

    xf = 5
    n = 100
    xrange = np.linspace(0, xf, num=n)
    a = 10
    xc = 2
    w = 0.1

    def func(x):
        bpoint = 2.25
        if x <= bpoint:
            return a/(np.pi*w*(1+((x-xc)**2/w)))
        else:
            return a/(np.pi*w*(1+((x-xc)**2/w))) + 9*np.sin(1.5*(x-bpoint)) + 5*(x-bpoint)

    def gauss(x, mx, o, I):
        return I*np.exp(-1/2*(((x-mx)/(o/6))**2))

    def conved(x, mx, o, I):
        return gauss(x, mx, o, I)*func(x)

    f = np.zeros((len(xrange),))
    for i in range(len(xrange)):
        f[i] = func(xrange[i])

    o = 2.5
    g = gauss(xrange, 2.5, o, 5)
    normal = np.sqrt(2*np.pi)*o/6*5

    fconvg = np.zeros((len(xrange),))
    for i in range(len(xrange)):
        mx = xrange[i]
        avg = sp.integrate.quad(conved, mx-o/2, mx+o/2, args=(mx, o, 5))
        fconvg[i] = avg[0]/normal

    F = fft(f)
    G = fft(g)
    fconvg_fft = fftshift(ifft(F*G))*xf/(n-1)/normal

    H_integral = fft(fconvg)
    F_integral = H_integral/G
    f_integral = ifft(F_integral)

    H_fft = fft(fconvg_fft)
    F_fft = H_fft/G
    f_fft = ifft(F_fft)

Edit: Thank you, Jdip. That worked for fixing the issue for the signal calculated using the FFT.

However, the signal calculated using the integral still blows up.

Below is the edited code:

    F = fft(f)
    G = fft(g) + 10**(-12)
    fconvg_fft = fftshift(ifft(F*G))*(xf/(n-1))/normal

    H_integral = fft(fconvg)
    F_integral = H_integral/G
    f_integral = ifftshift(ifft(F_integral))*normal*(n-1)/xf

    H_fft = fft(fconvg_fft)
    F_fft = H_fft/G
    f_fft = ifftshift(ifft(F_fft))*normal*(n-1)/xf
  • Even better than adding small constants to the denominator in the Fourier domain is to use some kind of regularization. Commented May 7, 2024 at 8:19
  • Agreed, that's what I'm leading into with this plot. "Possible to do this way but should do another regularized method like Wiener deconvolution" Commented May 7, 2024 at 15:46

2 Answers

  1. "However, in both cases, the result blows up to infinity when I take the IFFT."

     The result blows up before that, when you divide by 0: add a small constant to G.

  2. You also need to scale back and ifftshift your F_fft:

         f_fft = ifftshift(ifft(F_fft)) * normal * (n-1)/xf
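To see why the small constant matters, here is a standalone sketch (signal, kernel, and noise level are all invented for illustration): a wide Gaussian is a strong low-pass filter, so many bins of its spectrum sit at the machine-precision floor, and any perturbation of the data gets divided by those near-zeros. Note that the offset has to be comparable to the data's error floor; 10**(-12) is enough to guard against pure round-off, but noisier data needs a larger constant or genuine regularization.

    import numpy as np
    from scipy.fft import fft, ifft

    rng = np.random.default_rng(0)
    n = 100
    x = rng.standard_normal(n)                               # stand-in signal
    g = np.exp(-0.5 * ((np.arange(n) - n / 2) / 8.0) ** 2)   # wide Gaussian kernel
    G = fft(g)

    # Many bins of G are essentially zero relative to the peak
    print(np.abs(G).min() / np.abs(G).max())                 # tiny (machine-precision floor)

    y = np.real(ifft(fft(x) * G))                            # circular convolution
    y_noisy = y + 1e-6 * rng.standard_normal(n)              # tiny perturbation

    naive = np.real(ifft(fft(y_noisy) / G))                  # near-zero bins explode
    guarded = np.real(ifft(fft(y_noisy) / (G + 1e-6)))       # division stays bounded

    print(np.abs(naive).max() > 1e6)                         # True: blew up
    print(np.abs(guarded).max() < 1e3)                       # True: bounded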
  • Thank you so much! That worked for fixing the deconvolution of the signal generated from FFT. However, the signal from manual integration still blows up. I've edited the post to reflect this. Commented May 6, 2024 at 19:06
  • I don't have time to go into it right now, but hopefully I can soon ;) Commented May 6, 2024 at 19:33
  • No worries, take your time. Any help you can give is greatly appreciated. Commented May 6, 2024 at 20:02

A better approach would be to apply some method for solving problems of the form

$$\min_v \|Mv - d\|$$

or possibly

$$\min_v \|M(v+d) - d\|$$

where $M$ is the Gaussian convolution operator and $d$ is the known Gaussian-convolved signal. In the first case, $v$ is the signal you seek before convolution. In the second case, $v$ is the pointwise difference between the signal you seek and the convolved signal $d$.

This way we can integrate regularization terms $+\|R v\|$ or $+\|R(v+d)\|$ into our framework.

In the language of linear algebra, the original problem is numerically unstable because the linear system arising from this optimization problem risks being underdetermined. Regularization lets us supply information about what kind of behavior should be ruled out in the solution.

For example, overly large function values, derivatives, or local averages.
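A minimal sketch of the first formulation with Tikhonov-style regularization (the test signal, the noise level, the first-difference choice of $R$, and the value of $\lambda$ are all invented for illustration): build $M$ as an explicit circulant matrix and solve $\min_v \|Mv-d\|^2 + \lambda\|Rv\|^2$ as one stacked least-squares problem.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100
    x = rng.standard_normal(n).cumsum()                       # made-up "true" signal
    g = np.exp(-0.5 * ((np.arange(n) - n // 2) / 8.0) ** 2)
    g /= g.sum()                                              # normalized Gaussian kernel

    # M: circulant matrix realizing circular convolution with g
    M = np.array([[g[(i - j) % n] for j in range(n)] for i in range(n)])
    d = M @ x + 1e-3 * rng.standard_normal(n)                 # noisy observed convolution

    # min_v ||M v - d||^2 + lam * ||R v||^2, with R a circular first difference
    # penalizing rough solutions; solved as one stacked least-squares problem
    R = np.eye(n) - np.roll(np.eye(n), 1, axis=1)
    lam = 1e-3
    A = np.vstack([M, np.sqrt(lam) * R])
    b = np.concatenate([d, np.zeros(n)])
    v, *_ = np.linalg.lstsq(A, b, rcond=None)

    naive = np.linalg.solve(M, d)                             # unregularized inverse: noise-amplified
    print(np.abs(v - x).max() < np.abs(naive - x).max())      # True: regularized estimate is closer

This is essentially what Wiener deconvolution does, expressed in the Fourier domain instead of with explicit matrices.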

