I have a complex finite difference model written in Python using the same general structure as the example code below. It has two for loops: one over the iterations, and then within each iteration a loop over each position along the x array. Currently the code takes too long to run (probably due to the for loops). Is there a simple technique using NumPy to remove the second for loop?
Below is a simple example of the general structure I have used.
```python
import numpy as np

def f(x, dt, i):
    xn = (x[i-1] - x[i+1]) / dt  # a simple finite difference function
    return xn

x = np.linspace(1, 10, 10)  # create initial conditions with x[0] and x[-1] boundaries
dt = 10                     # time step
iterations = 100            # number of iterations

for j in range(iterations):
    for i in range(1, 9):   # length of x minus the boundaries
        x[i] = f(x, dt, i)  # return new value for x[i]
```

Does anyone have any ideas or comments on how I could make this more efficient?
Thanks,
Robin
The inner for loop has some nasty dependencies introduced by the structure of the function f: by the time x[i] is computed, x[i-1] has already been overwritten in the same sweep. As written, it is not parallelizable.

If the new values were instead computed from the old array, all the interior updates could be done at once with the central difference quotient (x[2:]-x[:-2])/(2*dt). You might want to use the forward and backward differentiation formulas of error order 2, (-3*x[0]+4*x[1]-x[2])/(2*dt) and (3*x[-1]-4*x[-2]+x[-3])/(2*dt), for the first and last derivative approximations.

What differential equation are you actually solving? Is it x'(t)=x(t)? Then the difference formula using the central difference quotient as derivative approximation is x[i+1]=x[i-1]+2*dt*x[i].
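As an illustration, here is a minimal sketch of those order-2 formulas vectorized with NumPy slicing. The function name `derivative` is my own, and note that it computes every new value from the old array at once, which differs from the original loop's sequential in-place updates:

```python
import numpy as np

def derivative(x, dt):
    """Order-2 derivative approximations for the whole array, no inner loop."""
    xp = np.empty_like(x)
    xp[1:-1] = (x[2:] - x[:-2]) / (2 * dt)              # central difference, interior points
    xp[0] = (-3*x[0] + 4*x[1] - x[2]) / (2 * dt)        # forward formula, left boundary
    xp[-1] = (3*x[-1] - 4*x[-2] + x[-3]) / (2 * dt)     # backward formula, right boundary
    return xp
```

For a linear array (constant slope), all three formulas are exact, so `derivative(np.arange(10.0), 1.0)` returns an array of ones. The slicing expressions replace the inner Python loop entirely; only the outer time-stepping loop remains.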