\$\begingroup\$

I'm implementing a numerical solver for the 1D diffusion (heat) equation using an iterative method. The initial temperature distribution is a Gaussian profile, which should theoretically smooth out over time as the diffusion process evolves.

However, when the correction or update between iterations becomes large, I observe small oscillations or noise near the bottom of the temperature profile. Instead of relaxing smoothly, the temperature shows local fluctuations that shouldn't be there physically.

Here’s a summary of my setup:

Equation (including one nonlinear term \$T^{2.5}\$): $$\dfrac{\partial T}{\partial t}=\dfrac{\partial}{\partial x}\left(T^{2.5}\,\dfrac{\partial T}{\partial x}\right)$$

Discretization: finite difference method (implicit)

Initial condition: Gaussian temperature distribution

Boundary conditions: zero flux

Observation: oscillations appear at the lowest point of the profile when the correction step is large

My questions are:

What causes these local oscillations or “numerical noise” in a diffusion solver that should theoretically smooth out?

How can I prevent or mitigate this issue? I have already tried adjusting the time step and grid spacing.

Any insight into the numerical stability or discretization errors that could cause this would be appreciated.

import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import spsolve
from scipy.integrate import quad

# ---------------------------
# Spitzer-Härm flux
# ---------------------------
import numpy as np
import matplotlib.pyplot as plt
from math import sqrt, log, pi, exp

# Constants
c0 = 299792458               # the vacuum light velocity in unit of m/s
me = 9.10938356e-31          # the electron mass in kg
epsilon0 = 8.8541878128e-12  # the dielectric constant in vacuum (C^2⋅kg^−1⋅m^−3⋅s^2)
qe = 1.60217662e-19          # the electron charge in unit of coulomb (C)
K = 1.783e-33*c0*c0          # in unit of kg*(m/s)^2/(keV)
kB = 1.380649e-23            # in the unit of J/K
mu00 = 1.25663706127e-6      # vacuum permeability in the unit of (N⋅(C/s)^−2)
# OR
mu0 = 1/(c0**2*epsilon0)     # the relation connected by the light velocity

Te_central = 1000            # the temperature in the central wall, in the unit of eV
Te_per = 4000
Ne = 1.5e20                  # the central plasma density 1/m^3
omegap = sqrt(Ne*qe**2/(epsilon0*me))   # plasma frequency (1/s)
kp = omegap/c0                          # plasma wavenumber
p = Ne*kB*Te_central*11604.51812        # the thermal pressure (J/m^3), kept constant; 1 eV = 11604.51812 K
TeRatio = 2                             # the temperature ratio between hot and cold
Teh = 2*Te_central/(1+TeRatio)*TeRatio  # the temperature in the hot wall, in the unit of eV
Tel = 2*Te_central/(1+TeRatio)          # the temperature in the cold wall, in the unit of eV
vthMean_SI = sqrt(Te_central/1000*1*K/me)  # central thermal velocity, in SI units (m/s)
vthh_SI = sqrt(Teh/1000*1*K/me)            # thermal velocity in the hot wall, in SI units (m/s)
vthl_SI = sqrt(Tel/1000*1*K/me)            # thermal velocity in the cold wall, in SI units (m/s)

Z = 1  # ion charge state
LambdaCoulomb1 = 23 - log((Ne*1.e-6)**(1/2)*Z*(Te_central)**(-3/2))  # Ti*me/mi < Te < 10Z^2 eV
LambdaCoulomb2 = 24 - log((Ne*1.e-6)**(1/2)*(Te_central)**(-1))      # Ti*me/mi < 10Z^2 eV < Te
if Te_central <= 10*Z**2:
    CoulombLogarithm = LambdaCoulomb1
else:
    CoulombLogarithm = LambdaCoulomb2

nu_ei0 = Z*Ne*qe**4*CoulombLogarithm/(4*pi*epsilon0**2*me**2)/(vthMean_SI**3)
mfp0 = vthMean_SI/nu_ei0  # central mean free path in m
LL = 160*mfp0             # simulation box length
Xmin = -LL/2
Xmax = LL/2
ll = 8.44*mfp0

# Create spatial grid
num_points = 800
central = int(num_points / 2)  # Ensures an integer value
x = np.linspace(Xmin, Xmax, num_points)
nx = len(x)
dx = x[1] - x[0]

temperature_profile = Te_central + Te_per*np.exp(-x**2/ll**2)  # temperature profile in eV
thermal_velocity_profile = np.sqrt(temperature_profile / 1000 * 1 * K / me)  # thermal velocity profile in m/s
#density_profile = Te_central*Ne/temperature_profile  # electron density 1/m^3
density_profile = np.full(len(x), Ne)  # electron density 1/m^3

Z_factor = (Z + 0.24) / (Z + 4.2)
cont_nu_ei = Z*density_profile*qe**4*CoulombLogarithm/(4*pi*epsilon0**2*me**2)
nu_ei_local = cont_nu_ei/(thermal_velocity_profile**3)
mfp_profile = thermal_velocity_profile/nu_ei_local       # mean free path in m (SI units)
mfp_sherlock = mfp_profile*np.sqrt(Z)*np.sqrt(Z_factor)  # mean free path in m (SI units)
lambda_lmv = 32 * np.sqrt(Z + 1) * mfp_profile

dT_dx = np.gradient(temperature_profile, x)
const_factor = 128 / (3 * np.pi) * Z_factor * (3 * np.pi) / np.sqrt(2 * np.pi)
kappa = const_factor * 4*pi*epsilon0**2*me**2 / (Z*qe**4*CoulombLogarithm) * np.sqrt(temperature_profile/1000*1*K/me)**5
ka = const_factor * 4*pi*epsilon0**2*me**2 / (Z*qe**4*CoulombLogarithm) * np.sqrt(1/1000*1*K/me)**5
q_sh = -kappa * dT_dx
q_free = density_profile * temperature_profile * thermal_velocity_profile


def SH_solver(Te):
    Te_center = (Te[:-1] + Te[1:]) / 2
    kappa_face = ka * Te_center**2.5  # Cao
    Q_nl = -kappa_face * (Te[1:] - Te[:-1]) / dx * 0.01
    divQ_nl = np.zeros_like(Te)
    divQ_nl[1:-1] = (Q_nl[1:] - Q_nl[:-1]) / dx
    divQ_nl[0] = divQ_nl[1]
    divQ_nl[-1] = divQ_nl[-2]
    return Q_nl, divQ_nl


import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import spsolve

# ---------------------------------------------------------------
# Simulation setup
# ---------------------------------------------------------------
dt = 0.1/nu_ei0
total_time = 30/nu_ei0
num_steps = int(total_time / dt)

n_e = density_profile         # initial condition
Te_int = temperature_profile  # initial temperature profile

# iteration control
max_iter = 1000
alpha = 0.01

Te_history = np.zeros((num_steps+1, nx))
Qe_history_current = np.zeros((num_steps, nx-1))
Qe_history_previous = np.zeros((num_steps, nx-1))
Te_history[0, :] = Te_int.copy()

for n in range(num_steps):
    print(f"\n================ TIME STEP {n+1}/{num_steps} ================")

    if n == 0:
        Te_n = Te_int.copy()
    else:
        Te_n = Te.copy()

    Te_k = Te_n.copy()
    divQ_SH_prev = np.zeros_like(Te_n)
    S_nl_corr_prev = np.zeros_like(Te_n)

    # -----------------------------------------------------------
    # Inner nonlinear iteration loop
    # -----------------------------------------------------------
    for k in range(1, max_iter + 1):
        #print(f" Iteration {k} ...")

        # STEP 1: compute Te_k
        Te_face = (Te_k[:-1] + Te_k[1:])/2
        kappa_i = ka * Te_face**(2.5)

        k_right = np.zeros(nx)
        k_left = np.zeros(nx)
        k_right[:-1] = kappa_i
        k_left[1:] = k_right[:-1]

        n_interior = nx - 2
        rhs = np.zeros(n_interior)
        a = np.zeros(nx)
        b = np.zeros(nx)
        c = np.zeros(nx)
        for i in range(1, nx-1):
            a[i] = -k_left[i] / dx**2
            b[i] = (k_left[i] + k_right[i]) / dx**2 + (1.5 * n_e[i] / dt)
            c[i] = -k_right[i] / dx**2

        # reflective BC corrections ---
        b[1] += a[1]; a[1] = 0.0
        b[nx-2] += c[nx-2]; c[nx-2] = 0.0

        # diagonals for sparse matrix
        lower_diag = np.zeros(n_interior-1)
        main_diag = np.zeros(n_interior)
        upper_diag = np.zeros(n_interior-1)
        for j, i in enumerate(range(1, nx-1)):
            if i > 1:
                lower_diag[j-1] = a[i]
            main_diag[j] = b[i]
            if i < nx-2:
                upper_diag[j] = c[i]
            rhs[j] = (1.5 * n_e[i] * Te_n[i] / dt) + S_nl_corr_prev[i]

        # build sparse matrix directly
        A_sparse = diags([lower_diag, main_diag, upper_diag], [-1, 0, 1], format="csc")
        # print(A_sparse)
        Te_new_bulk = spsolve(A_sparse, rhs)

        # reflective boundary for temperature ---
        Te_new = np.zeros(nx)
        Te_new[1:-1] = Te_new_bulk
        Te_new[0] = Te_new[1]
        Te_new[-1] = Te_new[-2]

        #Te_newkk = 2 * Te_new[:-1] * Te_new[1:] / (Te_new[:-1] + Te_new[1:])
        Te_new_face = (Te_new[:-1] + Te_new[1:])/2

        # STEP 2: compute ∇·Q_SH
        kappa_face = ka * Te_new_face**2.5  # Cao
        flux_face = - kappa_face * (Te_new[1:] - Te_new[:-1]) / dx
        #plt.plot(kappa_face1)
        #plt.show()
        divQ_SH_now = np.zeros_like(Te_new)
        divQ_SH_now[1:-1] = (flux_face[1:] - flux_face[:-1]) / dx
        divQ_SH_now[0] = divQ_SH_now[1]
        divQ_SH_now[-1] = divQ_SH_now[-2]

        # STEP 3: convergence check (skip the k=1 iteration)
        if k > 1:
            flux_diff = np.abs(divQ_SH_now - divQ_SH_prev)
            tol_array = alpha * (1.5 * n_e * Te_new / dt)
            if np.all(flux_diff <= tol_array):
                print(f" ✅ Converged at iteration {k}")
                break

        # STEP 4 & 5: compute nonlocal heat flux and its divergence
        Q_nl, divQ_nl = SH_solver(Te_new)

        # STEP 7: update nonlocal correction
        S_nl_corr = divQ_SH_now - divQ_nl

        # update for next iteration
        divQ_SH_prev = divQ_SH_now.copy()
        S_nl_corr_prev = S_nl_corr.copy()
        #Te_k = 0.5 * Te_k + 0.5 * Te_new
        Te_k = Te_new.copy()

    # update temperature for next time step
    Te_history[n+1, :] = Te_new.copy()
    Te = Te_new.copy()
    Qe_history_previous[n, :] = Q_nl.copy()
    Q_nlt, divQ_nlt = SH_solver(Te)
    Qe_history_current[n, :] = Q_nlt.copy()
    #plt.plot(Te)

    # optional: diagnostic
    #total_energy = np.trapz(1.5 * n_e * Te, x)
    #print(f" Total energy = {total_energy:.3e} J/m²")
    print(f" Max Te = {Te.max():.3f} eV, Min Te = {Te.min():.3f} eV")

print("\nSimulation complete.")
\$\endgroup\$
  • \$\begingroup\$ Hey, this is an issue with the numerical method, not your code. Implicit finite-difference methods (and even the Crank-Nicolson method!) do have issues with producing spurious oscillations, particularly around boundaries or a discrete boundary condition. There are particular step/grid-size constraints that help. You could also try a higher-order method; there are several available for diffusion-type equations. The spurious oscillations are covered in many books, e.g. section 3.3 here: ma.imperial.ac.uk/~ajacquie/IC_Num_Methods/IC_Num_Methods_Docs/… \$\endgroup\$ Commented Oct 25 at 16:02

2 Answers

\$\begingroup\$

I don't have much to say about numeric stability of code I can't run, as there's usually a lot of experimentation needed.

broadcast

 for i in range(1, nx-1):
     a[i] = -k_left[i] / dx**2
     b[i] = ...
     c[i] = -k_right[i] / dx**2

I can say that the interpreted loop looks slow, and an NDArray broadcast would be more appropriate.

And even if you don't do that, at least hoist that dx ** 2 squaring operation into a local variable computed before the loop begins.

Similarly you would prefer to broadcast the subsequent loop of
for j, i in enumerate(range(1, nx-1)):
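A minimal sketch of that broadcast, assuming the same names (k_left, k_right, n_e, Te_n, S_nl_corr_prev, dx, dt, nx) are in scope as in your code:

 import numpy as np

 dx2 = dx**2                  # hoisted: squared once, outside any loop
 interior = slice(1, nx - 1)  # grid points 1 .. nx-2

 a = np.zeros(nx)
 b = np.zeros(nx)
 c = np.zeros(nx)
 a[interior] = -k_left[interior] / dx2
 b[interior] = (k_left[interior] + k_right[interior]) / dx2 + 1.5 * n_e[interior] / dt
 c[interior] = -k_right[interior] / dx2

 # the second loop collapses to slices as well
 rhs = 1.5 * n_e[interior] * Te_n[interior] / dt + S_nl_corr_prev[interior]
 lower_diag = a[2:nx-1]   # a[i] for i = 2 .. nx-2
 main_diag = b[1:nx-1]    # b[i] for i = 1 .. nx-2
 upper_diag = c[1:nx-2]   # c[i] for i = 1 .. nx-3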

helpers

The "step N" comments are helpful. But they suggest extracting a few helper functions. Minimally, "step 1" should be a function call.

roundoff

observe small oscillations

Presumably you observe it through the print(A_sparse) statement. It's unclear what the magnitude of the errors is. You pass matrices into several scipy solvers. It would be useful to find their condition number, to better understand if we're solving a "stiff" system.
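A quick diagnostic, under the assumption that A_sparse is small enough to densify just for this check (for large systems, scipy.sparse.linalg.onenormest gives a cheaper norm estimate):

 import numpy as np

 # one-off diagnostic: condition number of the implicit-step matrix
 cond = np.linalg.cond(A_sparse.toarray())
 print(f"cond(A) ~ {cond:.3e}")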

The only numeric analysis detail I noticed which leapt off the page was computing a simple average over a window.

 Te_face = (Te_k[:-1] + Te_k[1:]) / 2 

Roughly the per-element expression is (first + second) / 2.

Standard advice in numeric analysis texts is to prefer this expression:
first + (second - first) / 2

Similarly for the various flux face calculations, where we divide by dx rather than by 2.
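Applied to your arrays it would read something like this (same names as your code, purely illustrative), and likewise for Te_new_face and Te_center:

 Te_face = Te_k[:-1] + (Te_k[1:] - Te_k[:-1]) / 2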

With positive integers this avoids the spectre of overflowing uint32_t or similar. With floats, this lets us perform the division on a numerator of smaller magnitude. And then we're performing a "big + little" addition, which gives us a better chance at properly rounding the result.

This only gets you about one more ULP, one more bit of precision per iteration. I can't imagine it's significant, compared to the effects of the various sophisticated solvers you're calling into.

\$\endgroup\$
  • \$\begingroup\$ Thank you so much. I made these changes; however, the noise still exists. \$\endgroup\$ Commented Oct 21 at 11:18
  • \$\begingroup\$ I have now updated the question with my full code, which can be run directly. \$\endgroup\$ Commented Oct 21 at 11:27
\$\begingroup\$

Here are some general coding style suggestions.

Portability

I'm not a big fan of fancy Unicode characters in source code, like the green checkmark in the print line. Sometimes they don't render well in editors, and other times they don't render well in output generated by the code. In my case, it is both.

Imports

There are multiple repeated import lines, like:

 import numpy as np
 import numpy as np

Also, it is customary to place all import lines at the top of the code. These are the only ones that are needed:

 import numpy as np
 from scipy.sparse import diags
 from scipy.sparse.linalg import spsolve
 from math import sqrt, log, pi

ruff is a handy tool to identify unused imports.

For some of the constants, such as:

c0 = 299792458 # the vacuum light velocity in unit of m/s 

you could take advantage of scipy constants:

 from scipy import constants

 c0 = constants.speed_of_light

Comments

Remove commented-out code to reduce clutter. For example:

 #density_profile = Te_central*Ne/temperature_profile # electron density 1/m^3
 #print(f" Iteration {k} ...")

If you want to execute diagnostic code conditionally, add a command-line option instead of commenting lines in and out.
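For example, a hypothetical flag using argparse from the standard library (the flag name is just a suggestion):

 import argparse

 parser = argparse.ArgumentParser()
 parser.add_argument("--verbose", action="store_true",
                     help="print per-iteration diagnostics")
 args = parser.parse_args()

 # then, inside the inner loop, instead of a commented-out print:
 #     if args.verbose:
 #         print(f" Iteration {k} ...")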

The comments for the constants at the top of the code are helpful.

Comments like this lead to very long lines which can make the code hard to understand:

thermal_velocity_profile = np.sqrt(temperature_profile / 1000 * 1 * K / me) # thermal velocity profile in m/s 

In this case, since the comment mostly restates the variable name, you could get rid of the unnecessary whitespace and redundant information:

thermal_velocity_profile = np.sqrt(temperature_profile / 1000 * 1 * K / me) # in m/s 

Layout

For improved readability, use one statement per line. Change:

 b[1] += a[1]; a[1] = 0.0
 b[nx-2] += c[nx-2]; c[nx-2] = 0.0

to:

 b[1] += a[1]
 a[1] = 0.0
 b[nx-2] += c[nx-2]
 c[nx-2] = 0.0

DRY

This expression is repeated several times:

1000*1*K 

You could set it to a named constant. It can also be simplified as:

1000*K 
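For example (the constant name is just a suggestion; K is your keV-based constant, so dividing by 1000 gives the per-eV value):

 K_PER_EV = K / 1000  # hypothetical name

 vthMean_SI = sqrt(Te_central * K_PER_EV / me)
 thermal_velocity_profile = np.sqrt(temperature_profile * K_PER_EV / me)  # in m/s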

Documentation

The PEP 8 style guide recommends adding docstrings for the main code and the function. For example:

 def SH_solver(Te):
     """
     Explain what SH means.
     Describe the type of the input.
     Describe the type of the returned value.
     """
\$\endgroup\$
