
I wish to construct the derivative matrix corresponding to an irregular set of points. For example, I have the following grid consisting of 16 nodes in the 2D plane:

ClearAll["Global`*"];
grid = {{-2.30259, 2.40259}, {-2.30259, 2.70259}, {-2.30259, 3.00259}, {-2.30259, 3.30259},
   {-0.310155, 0.410155}, {-0.310155, 0.710155}, {-0.310155, 1.01015}, {-0.310155, 1.31015},
   {0.312375, -0.212375}, {0.312375, 0.0876253}, {0.312375, 0.387625}, {0.312375, 0.687625},
   {0.693147, -0.593147}, {0.693147, -0.293147}, {0.693147, 0.00685282}, {0.693147, 0.306853}};
ListPlot[grid, PlotRange -> All, Frame -> True, Axes -> False, ImageSize -> 300]

I know about the powerful built-in function NDSolve`FiniteDifferenceDerivative in Mathematica. So, in order to construct my derivative matrix, I do the following, which is probably not correct:

xgrid2 = Sort@Map[First, grid]
ygrid2 = Sort@Map[Last, grid]
NDSolve`FiniteDifferenceDerivative[{1, 1}, {xgrid2, ygrid2}]["DifferentiationMatrix"]

But because the domain is irregular, xgrid2 contains duplicated values and this attempt fails. If I use DeleteDuplicates to remove the duplicate values, then the size of the final differentiation matrix is not $16\times 16$: the tensor product of the 4 distinct x values with the 16 distinct y values has 64 points, so the matrix comes out $64\times 64$.
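For instance, this is roughly what I mean (a quick sketch; "DifferenceOrder" is lowered to 2 here because only four distinct x values remain after removing duplicates):

xg = DeleteDuplicates@Sort[grid[[All, 1]]];  (* 4 distinct x values *)
yg = DeleteDuplicates@Sort[grid[[All, 2]]];  (* 16 distinct y values *)
Dimensions@NDSolve`FiniteDifferenceDerivative[{1, 1}, {xg, yg},
    "DifferenceOrder" -> 2]["DifferentiationMatrix"]
(* expected: {64, 64}, i.e. the tensor-product grid, not {16, 16} *)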

I would be grateful if someone could show me a way to construct the differentiation matrix quickly for irregular domains in Mathematica.

  • This FEM tutorial shows how to get the system matrices. Commented May 21, 2018 at 21:54
  • To compare the results of the two different approaches, it would be good to test them on solving a differential equation. $nx$ could be changed to a better (non-random) distribution of the nodes and then compared in practice. Commented May 21, 2018 at 22:11

4 Answers


Recently I've learned a bit about the radial basis function-generated finite differences (RBF-FD) method, a relatively new method for solving PDEs, which can also be used for this problem. The book I'm referring to is A Primer on Radial Basis Functions with Applications to the Geosciences by Bengt Fornberg (frequently mentioned in the documentation of the method of lines!) and Natasha Flyer, which seems to be the only book about the RBF-FD method at the moment. The following is a general-purpose function that calculates the derivative matrix for an irregular domain based on the RBF-FD method:

ϕfunc[ϵ2_] := Function[{pc, p}, With[{r2 = Total[(pc - p)^2]}, Sqrt[1 + ϵ2 r2]]];

matRBFFD[L_, vars_, plst_, n_, ϵ2_] :=
 Module[{p},
  With[{ϕ = ϕfunc[ϵ2], len = Length@plst, e = ConstantArray[{1}, n]},
   With[{Lϕ = Function[{pc, #}, Module[#2, #2 = pc; #3]] &[p, vars,
       With[{cg = Compile`GetElement},
        L@ϕ[vars, cg[p, #] & /@ Range@Length@vars]]], L1 = L@1},
    With[{range = Range@len},
     SparseArray@Table[
       Module[{A, LB, nearindex, near},
        nearindex = Nearest[plst -> range, coord, n];
        near = plst[[nearindex]];
        A = ArrayFlatten[{{Table[ϕ[xi, xj], {xi, near}, {xj, near}], e},
            {e\[Transpose], 0}}] // N;
        LB = Append[Lϕ[coord, #] & /@ near, L1];
        SparseArray[nearindex -> Most@Quiet@LinearSolve[A, LB], len]],
       {coord, plst}]]]]]
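For reference, each row built by the Table above solves the standard RBF-FD local system (a sketch in the notation of the code, with $\phi(x,x_j)=\sqrt{1+\epsilon^2\lVert x-x_j\rVert^2}$ the multiquadric and $x_1,\dots,x_n$ the n nearest neighbors of the current node $x_c$):

$$\begin{pmatrix} A & e\\ e^{\mathsf T} & 0\end{pmatrix}\begin{pmatrix} w\\ \gamma\end{pmatrix}=\begin{pmatrix} L\,\phi(x,x_j)\big|_{x=x_c}\\ L\,1\end{pmatrix},\qquad A_{jk}=\phi(x_j,x_k),$$

where $e$ is a column of ones and $\gamma$ a Lagrange multiplier enforcing exactness for constants. The weights $w$ (the Most@LinearSolve[A, LB] part) form the sparse row of the differentiation matrix for that node.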

There is still room for optimization, but given that the function isn't too slow and my understanding of the method is still elementary, I'd rather not put too much effort into performance tuning (at least for now).

The usage of the function is straightforward:

(* the grid in the question is too coarse, so a denser grid is generated here for better illustration *)
Needs["NDSolve`FEM`"]
mesh = ToElementMesh[Rectangle[], "MeshElementType" -> "TriangleElement", "MeshOrder" -> 1];
plst = mesh["Coordinates"];
L = D[#, x, y] &; vars = {x, y};
ϵ2 = 10^-0; n = 30;
coeflst = matRBFFD[L, vars, plst, n, ϵ2]; // AbsoluteTiming
(* {1.14005, Null} *)

coeflst is the derivative matrix. Let's check its accuracy:

func = {x, y} |-> Sin[2 Pi (x - Pi/2)] Cos[Pi y];
funcvalues = func @@ (plst\[Transpose]);
dfunc = Function[{x, y}, Evaluate@L@func[x, y]];
ref = dfunc @@@ plst;
drbf = coeflst . funcvalues;
ListPlot[ref - drbf, PlotRange -> All]

[Plot of the pointwise error ref - drbf]

drbffunc = ElementMeshInterpolation[{mesh}, drbf];
Manipulate[
 Plot[{drbffunc[x, y], dfunc[x, y]}, {x, 0, 1}, PlotRange -> 20,
  PlotStyle -> {Automatic, Dashed}], {y, 0, 1}]

[Manipulate output comparing drbffunc and dfunc along lines of constant y]

Remaining Issue

As mentioned above, the RBF-FD method is relatively new (compared with FDM, FEM, etc.). Perhaps I've missed something, but there doesn't seem to be a general strategy for determining $\epsilon^2$ (ϵ2) and the stencil size (n). (The parameters above were determined by trial and error.)
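For lack of a general rule, the trial and error can at least be automated, e.g. by scanning a few $(\epsilon^2, n)$ pairs against a test function with a known derivative. A rough sketch (reusing L, vars and plst from above; testfunc is just an arbitrary choice, and the scan takes a while since matRBFFD is called repeatedly):

testfunc = Function[{x, y}, Sin[2 Pi (x - Pi/2)] Cos[Pi y]];
testvals = testfunc @@ (plst\[Transpose]);
testref = Function[{x, y}, Evaluate@L@testfunc[x, y]] @@@ plst;
Table[{eps2, nn, Max@Abs[matRBFFD[L, vars, plst, nn, eps2] . testvals - testref]},
   {eps2, {0.1, 1., 10.}}, {nn, {15, 30, 45}}] // Flatten[#, 1] & // TableForm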

  • Thanks, looks very promising, have to elaborate. Commented Dec 1, 2024 at 11:16

If we assume that the given function u and its derivative can be approximated with the same element mesh shape functions, it is possible to derive an explicit expression for the differentiation matrix M(FEM):

[Image: derivation of the differentiation matrix M(FEM)]
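The relation in the image can essentially be reconstructed from the code below: expanding both $u \approx \sum_j u_j \phi_j$ and its derivative $u_x \approx \sum_j a_j \phi_j$ in the same shape functions and testing with $\phi_i$ gives

$$\sum_j \Big(\int \phi_i\,\phi_j\Big)\, a_j \;=\; \sum_j \Big(\int \phi_i\,\partial_x\phi_j\Big)\, u_j \quad\Longrightarrow\quad a \;=\; M_{\phi\phi}^{-1}\, M_{\phi\phi_x}\, u \;=\; M(\text{FEM})\, u,$$

which is exactly the Inverse[mat\[Phi]\[Phi]] . mat\[Phi]\[Phi]x used below; the second-order and mixed-derivative matrices are built the same way from the corresponding element matrices (up to the boundary terms discussed in the conclusions).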

Needs["NDSolve`FEM`"] 

one-dimensional case (M[1] & M[2])

mesh = ToElementMesh[Line[{{0}, {Pi}}], "MeshOrder" -> 2, MaxCellMeasure -> Pi/25]
pts = mesh["Coordinates"] // Flatten;
nr = ToNumericalRegion[mesh];
vd = NDSolve`VariableData[{"DependentVariables" -> {u}, "Space" -> {x}}];
sd = NDSolve`SolutionData[{"Space" -> nr}];

calculate the element matrices (assuming NeumannValue == 0):

initCoeffs = InitializePDECoefficients[vd, sd, "ReactionCoefficients" -> {{1}}];
methodData = InitializePDEMethodData[vd, sd];
discretePDE = DiscretizePDE[initCoeffs, methodData, sd];
mat\[Phi]\[Phi] = discretePDE["StiffnessMatrix"] // Normal;

initCoeffs = InitializePDECoefficients[vd, sd, "ConvectionCoefficients" -> {{{1}}}]; // Quiet
methodData = InitializePDEMethodData[vd, sd];
discretePDE = DiscretizePDE[initCoeffs, methodData, sd];
mat\[Phi]\[Phi]x = discretePDE["StiffnessMatrix"] // Normal;

initCoeffs = InitializePDECoefficients[vd, sd, "DiffusionCoefficients" -> {{{1}}}];
methodData = InitializePDEMethodData[vd, sd];
discretePDE = DiscretizePDE[initCoeffs, methodData, sd];
mat\[Phi]\[Phi]xx = discretePDE["StiffnessMatrix"] // Normal;

example function f1[x]:

f1 = Function[x, Exp[-x] Sin[x/2]]
f1i = Map[f1, pts];
ax = Inverse[mat\[Phi]\[Phi]] . mat\[Phi]\[Phi]x . f1i;
Plot[{f1'[x], ElementMeshInterpolation[mesh, ax][x]}, Element[x, mesh],
 PlotLabel -> "differentiation matrix M[1]", AxesLabel -> {"x", "f1'[x]"},
 PlotStyle -> {Red, {Dashed, Black}}]

[Plot: f1'[x] (red) vs. the M[1] approximation (dashed)]

axx = Inverse[mat\[Phi]\[Phi]] . mat\[Phi]\[Phi]xx . f1i;
Plot[{f1''[x], ElementMeshInterpolation[mesh, axx][x]}, Element[x, mesh],
 PlotLabel -> "differentiation matrix M[2]", AxesLabel -> {"x", "f1''[x]"},
 PlotStyle -> {Red, {Dashed, Black}}]

[Plot: f1''[x] (red) vs. the M[2] approximation (dashed)]

conclusion: M[1] fits quite well; M[2] gives a bad approximation near the boundaries.
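A plausible reason (my reading of the construction, not something stated explicitly above): the diffusion term is discretized via integration by parts,

$$\int_\Omega \phi_i\, u''\,dx \;=\; \big[\phi_i\, u'\big]_{\partial\Omega} \;-\; \int_\Omega \phi_i'\, u'\,dx,$$

and the boundary flux $[\phi_i u']_{\partial\Omega}$ is dropped under the NeumannValue == 0 assumption. In the interior this is harmless, but in the elements touching the boundary the missing flux shows up as the error seen above, which is also what the final conclusion and the workaround in the comments address.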

two-dimensional case M[1,1]

mesh2D = ToElementMesh[Rectangle[{0, 0}, {1, 1}], "MeshOrder" -> 2,
  "MeshElementType" -> "TriangleElement", MaxCellMeasure -> 1/250]
ng = ToNumericalRegion[mesh2D]

calculate the element matrices (assuming NeumannValue == 0):

nr = ToNumericalRegion[mesh2D];
vd = NDSolve`VariableData[{"DependentVariables" -> {u}, "Space" -> {x, y}}];
sd = NDSolve`SolutionData[{"Space" -> nr}];

initCoeffs = InitializePDECoefficients[vd, sd, "ReactionCoefficients" -> {{1}}];
methodData = InitializePDEMethodData[vd, sd];
discretePDE = DiscretizePDE[initCoeffs, methodData, sd];
mat\[Phi]\[Phi] = discretePDE["StiffnessMatrix"] // Normal;

initCoeffs = InitializePDECoefficients[vd, sd, "DiffusionCoefficients" -> {{{{0, 1/2}, {1/2, 0}}}}];
methodData = InitializePDEMethodData[vd, sd];
discretePDE = DiscretizePDE[initCoeffs, methodData, sd];
mat\[Phi]\[Phi]xy = discretePDE["StiffnessMatrix"] // Normal;
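Why the off-diagonal coefficient produces the mixed derivative (my reading, up to the sign convention of the diffusion term): with $c = \bigl(\begin{smallmatrix}0 & 1/2\\ 1/2 & 0\end{smallmatrix}\bigr)$,

$$\nabla\cdot(c\,\nabla u) \;=\; \partial_x\!\left(\tfrac12 u_y\right) + \partial_y\!\left(\tfrac12 u_x\right) \;=\; u_{xy},$$

so after discretization mat\[Phi]\[Phi]xy plays the role of $\int \phi_i\,\partial_x\partial_y\phi_j$, again with the boundary terms from integration by parts neglected (NeumannValue == 0).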

example function f2[x,y]:

f2 = Function[{x, y}, x y]
punkte = mesh2D["Coordinates"];
axy = Inverse[mat\[Phi]\[Phi]] . mat\[Phi]\[Phi]xy . Map[Apply[f2, #] &, punkte];
GraphicsRow[{
  Plot3D[Derivative[1, 1][f2][x, y], Element[{x, y}, mesh2D],
   PlotRange -> {All, {-1, 2}}[[1]], PlotLabel -> "exact"],
  Plot3D[ElementMeshInterpolation[mesh2D, axy][x, y], Element[{x, y}, mesh2D],
   PlotRange -> {All, {-1, 2}}[[1]], PlotLabel -> "differentiation matrix M[1,1]"]}]

[3D plots: exact mixed derivative vs. the M[1,1] approximation]

conclusion 2D:

The result looks quite good, but the approximation isn't acceptable near the boundaries.

final conclusion:

Perhaps the quality of the approximation could be improved if the boundary integration from Green's identity (NeumannValue != 0) were included?

  • A possible workaround just came to my mind: axx = Inverse[mat\[Phi]\[Phi]] . (mat\[Phi]\[Phi]x) . Inverse[mat\[Phi]\[Phi]] . (mat\[Phi]\[Phi]x) . f1i :) Commented Dec 16, 2024 at 10:06
  • @xzczd Thanks for your clever workaround. Works fine! How do you justify this workaround? Commented Dec 16, 2024 at 10:18
  • I was first inspired by this observation of mine. Then, by checking the Partial Differential Equations and Boundary Conditions section of the tutorial Solving Partial Differential Equations with Finite Elements, I noticed the discretization of the convection term doesn't seem to involve integration by parts, i.e. the zero Neumann value isn't imposed when a pure first-order derivative (e.g. $\frac{\partial}{\partial x}$) is discretized. Commented Dec 16, 2024 at 10:32
  • @xzczd Very clever. I'm just trying the 2D workaround, actually without success. Commented Dec 16, 2024 at 10:35
  • $\frac{\partial^2}{\partial x \partial y}$ can be calculated like this: {mat\[Phi]\[Phi]x, mat\[Phi]\[Phi]y} = (initCoeffs = InitializePDECoefficients[vd, sd, "ConvectionCoefficients" -> {{{#}}}]; methodData = InitializePDEMethodData[vd, sd]; DiscretizePDE[initCoeffs, methodData, sd]["StiffnessMatrix"]) & /@ {{1, 0}, {0, 1}}; f2 = Function[{x, y}, Sin[2 Pi (x - Pi/2)] Cos[Pi y]]; func = LinearSolve[mat\[Phi]\[Phi]]; axy = func[mat\[Phi]\[Phi]y . func[mat\[Phi]\[Phi]x . Map[Apply[f2, #] &, punkte]]]; but still, the precision isn't good enough… (The simple $f(x,y)=x y$ case is good, though.) Commented Dec 17, 2024 at 11:53

modified

Here I'll present an approach that approximates the derivatives of an unknown surface f[x,y] on an irregular {x,y}-mesh (and of course also on a regular mesh)!

Assumptions:

irregular mesh (2D) is available

the unknown surface can be approximated by a polynomial O[3,3] in the neighborhood of every mesh point

the surface parameters are estimated (least squares) from some neighboring mesh points

startup:

mesh = DiscretizeRegion[Disk[]];      (* mesh *)
pts = MeshCoordinates[mesh];          (* mesh points *)
nf = Nearest[pts -> "Index"];         (* neighbor lookup *)

neighbors of every meshpoint:

neighbors = Table[nf[pts[[ii]], 8 + 1], {ii, 1, Length[pts]}];  (* assuming 8 neighbors for every mesh point *)

calculation of differentiation matrices:

The unknown surface is approximated by a Taylor expansion up to O[3]. The coefficients are the surface parameters fx, fy, fxx, fxy, fyy, fxxx, fxxy, fxyy, fyyy. All these surface parameters are evaluated in one step!
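Written out, for every neighbor $j$ of the current mesh point $i$ the row assembled in the code below encodes (with $\Delta x = x_j-x_i$, $\Delta y = y_j-y_i$)

$$f_j - f_i \;\approx\; f_x\Delta x + f_y\Delta y + \tfrac12 f_{xx}\Delta x^2 + f_{xy}\Delta x\Delta y + \tfrac12 f_{yy}\Delta y^2 + \tfrac16 f_{xxx}\Delta x^3 + \tfrac12 f_{xxy}\Delta x^2\Delta y + \tfrac12 f_{xyy}\Delta x\Delta y^2 + \tfrac16 f_{yyy}\Delta y^3,$$

and PseudoInverse returns the (minimum-norm) least-squares solution of this small system for the nine surface parameters; with more neighbors than parameters it becomes an ordinary least-squares fit.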

diffmat = Table[
   index = neighbors[[i]];
   {xi, yi} = pts[[index[[1]]]];  (* current point *)
   nj = Length[index] - 1;        (* current number of neighbors *)
   sparse = SparseArray[
     Flatten[Table[{{j, index[[j + 1]]} -> 1, {j, index[[1]]} -> -1}, {j, 1, nj}], 1],
     {nj, Length[pts]}];
   pinv = PseudoInverse[
     Table[{xj, yj} = pts[[index[[j + 1]]]];
      {xj - xi, yj - yi, 1/2 (xj - xi)^2, (xj - xi) (yj - yi), 1/2 (yj - yi)^2,
       1/6 (xj - xi)^3, 1/2 (xj - xi)^2 (yj - yi), 1/2 (xj - xi) (yj - yi)^2,
       1/6 (yj - yi)^3}, {j, 1, nj}]];
   pinv . sparse,
   {i, 1, Length[pts]}];

diffmat[[All, i, All]] gives the i-th surface parameter {fx, fy, fxx, fxy, fyy, fxxx, fxxy, fxyy, fyyy}[[i]]!
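For instance (a small usage sketch; fi, the vector of function values at pts, is built in the example below):

dxMat = diffmat[[All, 1, All]];  (* matrix approximating ∂/∂x *)
dyMat = diffmat[[All, 2, All]];  (* matrix approximating ∂/∂y *)
(* dxMat . fi and dyMat . fi then approximate the derivatives at every mesh point *)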

example:

f = Function[{x, y}, Sin[x y]];
fi = Map[Apply[f, #] &, pts];
xyf = MapThread[Join[#1, {#2}] &, {pts, fi}];
ListPlot3D[xyf, Mesh -> All]

[3D plot of the sampled function values over the mesh]

Calculation of the surface parameters:

{fx, fy, fxx, fxy, fyy, fxxx, fxxy, fxyy, fyyy} = Map[diffmat[[All, #, All]] &, Range[1, 9]];

check result:

Show[{
   Plot3D[Derivative[1, 1][f][x, y], Element[{x, y}, mesh], Mesh -> All,
    PlotRange -> {All, {-1, 1}}[[1]],
    AxesLabel -> {x, y, "Derivative[1,1][f][x,y]"}, PlotStyle -> Opacity[.5]],
   Graphics3D[Point[MapThread[Join[#1, {#2}] &, {pts, fxy . fi}]]]},
  PlotLabel -> "diffmat"]

[3D plot: analytical mixed derivative surface with the diffmat point approximations overlaid]

The result agrees very well with the analytical derivative, especially at the inner mesh points!

This approach can easily be modified by changing the order of the local polynomial approximation and/or by adapting the number of neighbors to be considered.

User @xzczd (thanks!) gave me the hint that this approach is possibly known as GFDM (generalized finite difference method).

First addendum

To check the approach for the example used in @xzczd's answer, we only have to change two lines in my previous code:

mesh = DiscretizeRegion[Rectangle[]];  (* mesh *)
f = Function[{x, y}, Sin[2 Pi (x - Pi/2)] Cos[Pi y]];

[Plot: comparison for @xzczd's test function]

Second addendum

Qp's question finds an answer too. Changing pts = ... in my previous code to

pts = {{-2.30259, 2.40259}, {-2.30259, 2.70259}, {-2.30259, 3.00259}, {-2.30259, 3.30259},
   {-0.310155, 0.410155}, {-0.310155, 0.710155}, {-0.310155, 1.01015}, {-0.310155, 1.31015},
   {0.312375, -0.212375}, {0.312375, 0.0876253}, {0.312375, 0.387625}, {0.312375, 0.687625},
   {0.693147, -0.593147}, {0.693147, -0.293147}, {0.693147, 0.00685282}, {0.693147, 0.306853}};

we get the differentiation matrix for Derivative[1,1] (size $16\times 16$):

diffmat[[All, 4, All]] (*{{-0.120679, 7.26295, -11.693, 4.76711, -0.940285, 6.46903, -11.841, 5.76336, 0., 0., 0., 0.332489, 0., 0., 0., 0.}, {-6.51901, 25.0985, -28.2169, 9.85328, -2.39166, 11.8775, -18.2575, 8.22388, 0., 0., 0., 0.331834, 0., 0., 0., 0.}, {-12.5212, 41.6491, -43.3899, 14.4732, -4.41721, 19.0184, -26.4006, 11.2637, 0., 0., 0., 0.324519, 0., 0., 0., 0.}, {-14.2909, 45.4676, -45.8112, 14.8384, -10.706, 38.9511, -47.3309, 18.5684, 0., 0., 0., 0.313474, 0., 0., 0., 0.}, {0., 0., 0., 0., 7.79339, -10.8215, 3.44813, -0.329057, 2.12253, -8.47835, 5.42944, 0.706927, 0., 0., 0., 0.128473}, {0., 0., 0., 0., -0.353074, 9.5816, -12.1636, 3.25332, 0.482822, 1.26365, -9.82602, 7.31184, 0., 0., 0., 0.449441}, {0., 0., 0., 0., -9.35514, 32.1033, -29.7607, 7.46485, -0.635949, 9.55741, -23.4592, 13.4467, 0., 0., 0., 0.638743}, {0., 0., 0., 0., -11.1111, 41.7677, -41.5247, 10.8681, 0., 2.67716, -19.463, 16.7858, 0., 0., 5.43178, -5.43178}, {0., 0., 0., 0., -0.272051, 0., 0., 0., 13.9969, -20.8049, 8.6933, -1.20906, 0.837216, -13.091, 15.3607, -3.51116}, {0., 0., 0., 0., -0.1769, 0., 0., 0., 3.89094, 0.999506, -4.81686, 0.366166, 0.0993203, -2.32354, -4.0086, 5.96998}, {0., 0., 0., 0., 2.03213, -2.03213, 0., 0., -2.03213, 10.8756, -12.2551, 3.41168, 0., 2.26766, -9.96709, 7.69943}, {0., 0., 0., 0., -11.1112, 30.6567, -25.9472, 6.40178, 0., 13.7884, -24.1771, 10.3887, 0., 0., -5.43178, 5.43178}, {0., 0., 0., 0., 0.275168, 0., 0., 0., 25.8154, -45.3621, 21.0069, -2.22901, 2.61984, -29.0903, 41.7112, -14.747}, {0., 0., 0., 0., 0.250612, 0., 0., 0., 14.5269, -20.0665, 4.34738, 0.491935, 3.03887, -21.7865, 25.8588, -6.66145}, {0., 0., 0., 0., 0.178312, 0., 0., 0., 6.44284, -4.34281, -2.6579, 0.0596134, 0.328696, -5.16065, 0.779191, 4.37271}, {0., 0., 0., 0., 0.0513305, 0., 0., 0., -2.73798, 14.7178, -12.9065, 0.783275, -1.19754, 7.83736, -20.5616, 14.0139}}*) 

Hope it helps!

  • @xzczd See my new answer, perhaps it shows an easy access to the GFDM method you mentioned today. Commented Jan 17 at 16:35
  • Interesting. Perhaps I should revisit those papers. Two quick suggestions: 1. LeastSquares may be a better choice compared with PseudoInverse: mathematica.stackexchange.com/questions/117979/… 2. The AxesLabel can be modified to e.g. HoldForm@D[f, x, y] // TraditionalForm. Commented Jan 18 at 3:43
  • @xzczd Thanks for your interesting reply. I checked LeastSquares and PseudoInverse for different numerical examples, all with a symbolic right-hand side. Both commands gave the same result (not surprising), but in all examples PseudoInverse evaluated significantly faster (surprising for me). Commented Jan 18 at 8:37
  • @xzczd If of interest, here is my test code: mat = RandomReal[{-1, 1}, {RandomInteger[{4, 20}], 8}]; rS = Table[u[i], {i, 1, Dimensions[mat][[1]]}]; erg1 = LeastSquares[mat, rS] // AbsoluteTiming; erg2 = PseudoInverse[mat] . rS // AbsoluteTiming; {Norm[erg1[[-1]] - erg2[[-1]]], erg1[[1]]/erg2[[1]]} Commented Jan 18 at 8:59
  • This is surprising. I left a comment under that post; hopefully J.M. will finally have a look at this. (He seems to be busy these days…) Commented Jan 18 at 11:34

Motivated by this answer from Ulrich, I revisited the generalized finite difference method (GFDM) a bit and managed to write a general-purpose function. The underlying idea is just the same as that of Ulrich's code, with the following improvements:

  1. A taylorterms function is defined based on this answer to allow Taylor expansion up to arbitrary order.

  2. The function applies to arbitrary dimensions (in principle).

  3. The code is slightly simplified and the efficiency is slightly improved.

Here comes the code:

BeginPackage["GFDM`"];
taylorterms;
matGFDM;

Begin["`private`"];

taylorterms[dim_, order_] :=
 Block[{t, f, vars, difforder, x, xlst}, xlst = x /@ Range@dim;
  With[{expr = Normal@Series[f @@ (t xlst), {t, 0, order}] /. t -> 1},
   {vars, difforder} =
    Cases[expr, a : Derivative[o__][f][__] :> {a, {o}}, Infinity]\[Transpose];
   {difforder, Compile @@ {{#, _Real, 1} & /@ xlst,
      (CoefficientArrays[expr, vars] // Last // Normal)}}]]

w = Function[{d, dm}, #] &[
   With[{x = d/dm}, If[d <= dm, 1 - 6 x^2 + 8 x^3 - 3 x^4, 0]] //
    PiecewiseExpand // Simplify`PWToUnitStep];

dmfunc[sf_, pcenter_, pinf_] := sf Sqrt[# . # &@(pcenter - pinf)];

matGFDM[dim_, order_, plst_, neighborsize_, sf_ : False] := (
  meshsize = Length[plst];
  nf = Nearest[plst -> "Index"];
  neighbors = nf[plst, neighborsize + 1];
  {difforderlst, taylortermfunc} = taylorterms[dim, order];
  With[{lst = difforderlst}, take = First@FirstPosition[lst, #] &];
  diffmat = Function[indexlst,
     pcenter = plst[[indexlst[[1]]]];
     If[NumericQ@sf,
      dm = dmfunc[sf, pcenter, plst[[indexlst[[-1]]]]];
      wlst = w[Sqrt[((pcenter - Transpose@plst[[indexlst // Rest]])^2 // Total)], dm],
      wlst = 1.];
     sparse = Module[{spa =
         SparseArray[{{Range@neighborsize, indexlst // Rest}\[Transpose] ->
            ConstantArray[1., neighborsize]}, {neighborsize, meshsize}]},
       spa[[All, First@indexlst]] = -1.; wlst spa];
     pinv = PseudoInverse[
       wlst Transpose[taylortermfunc @@ (plst[[indexlst // Rest]]\[Transpose] - pcenter)]];
     pinv . sparse] /@ neighbors;
  With[{mat = diffmat\[Transpose], take = take}, mat[[take@#]] &])

End[];
EndPackage[];

There is still room for optimization, but again, given that the function isn't slow and my understanding of the method is still elementary, I'd rather not put too much effort into performance tuning (at least for now).

The usage of the function is straightforward:

<< NDSolve`FEM`
mesh = ToElementMesh[Rectangle[], "MeshElementType" -> "TriangleElement", "MeshOrder" -> 1];
plst = mesh["Coordinates"];
dim = 2; order = 8; size = 29;
getmat = matGFDM[dim, order, plst, size]; // AbsoluteTiming
(* {0.194138, Null} *)

getmat is a pure function whose argument is a list of derivative orders. To extract the differentiation matrix of $\frac{\partial^2}{\partial x \partial y}$, we can:

dxymat = getmat@{1, 1}; 

Finally, let's check its accuracy:

f = Function[{x, y}, Sin[2 Pi (x - Pi/2)] Cos[Pi y]];
funcvalues = f @@ (plst\[Transpose]);
dfunc = Function[{x, y}, D[#, x, y] &@f[x, y] // Evaluate];
ref = dfunc @@@ plst;
dxylst = dxymat . funcvalues;
(* ref - dxylst // Abs // Max *)
ListPlot[ref - dxylst, PlotRange -> All]

[Plot of the pointwise error ref - dxylst]

dgfdmfunc = ElementMeshInterpolation[{mesh}, dxylst];
Manipulate[
 Plot[{dgfdmfunc[x, y], dfunc[x, y]}, {x, 0, 1}, PlotRange -> 20,
  PlotStyle -> {Automatic, Dashed}], {y, 0, 1}]

[Manipulate output comparing dgfdmfunc and dfunc along lines of constant y]

As we can see, compared with my implementation of RBF-FD, this implementation of GFDM is faster, but the accuracy is worse. (Notice I've chosen the same mesh and neighbor size.)

Remaining Issues

  1. Papers like this one add a weight function to improve the accuracy, but according to my tests, the weight function only seems to be effective when the order and neighborsize are small. I'm not sure if I've missed something. The weight function can be enabled by passing a fifth argument, the scale factor sf, to matGFDM. Example:

     dim = 2; order = 2; size = 29; sf = 1;
     getmat = matGFDM[dim, order, plst, size, sf]; // AbsoluteTiming
     dxymat = getmat@{1, 1};
     dxylst = dxymat . funcvalues;
     ref - dxylst // Abs // Max
     ListPlot[ref - dxylst, PlotRange -> All]
     dgfdmfunc = ElementMeshInterpolation[{mesh}, dxylst];
     Manipulate[
      Plot[{dgfdmfunc[x, y], dfunc[x, y]}, {x, 0, 1}, PlotRange -> 20,
       PlotStyle -> {Automatic, Dashed}], {y, 0, 1}]
  2. According to the papers, an order-2 approximation should already give good enough accuracy, but I fail to achieve that. Again, I'm not sure if I've missed something.

  • Interesting improvement of my answer! Still I don't understand why your weights (actually commented out) only take place in the spa matrix and not in a generalized PseudoInverse. Commented Jan 22 at 9:50
  • @UlrichNeumann It's there, see PseudoInverse[(*wlst*)? :) Commented Jan 22 at 9:52
  • @UlrichNeumann Code updated, now the weight function can be easily tested by adding a fifth argument to matGFDM. Commented Jan 22 at 10:18
  • I tried a weighted approach too, with weights depending on the inverse distance 1/Sqrt[(xi-xj)^2+(yi-yj)^2], but the results are discouraging. Equal results if the number of neighbors is smaller than the number of taylorterms; worse results if the number of neighbors is greater. Commented Jan 22 at 16:54
  • @UlrichNeumann The paper sciencedirect.com/science/article/pii/0045794980901492 uses $\frac{1}{d^3}$ where $d$ is the distance, but according to my test, it doesn't seem to be great, either… Once again, I'm not sure if I've missed something. Commented Jan 23 at 8:12
