An Overview of Meshfree Collocation Methods


An Overview of Meshfree Collocation Methods

Supplementary Material

Tomas Halada1,4, Serhii Yaskovets1,2,3, Abhinav Singh1,2,3, Ludek Benes4,
Pratik Suchde5,6,†, and Ivo F. Sbalzarini1,2,3,†,*


1Dresden University of Technology, Faculty of Computer Science, 01187 Dresden, Germany
2Max Planck Institute of Molecular Cell Biology and Genetics, 01307 Dresden, Germany
3Center for Systems Biology Dresden, 01307 Dresden, Germany
4Czech Technical University, Faculty of Mechanical Engineering, Department of Technical Mathematics,
160 00 Prague, Czech Republic
5University of Luxembourg, Faculty of Science, Technology and Medicine, 4365 Esch-sur-Alzette, Luxembourg
6Fraunhofer Institute for Industrial Mathematics, 67663 Kaiserslautern, Germany
†Joint senior authors
*Corresponding author: sbalzarini@mpi-cbg.de

1 Encyclopedia of meshfree collocation methods

In this Supplement, we provide an encyclopedia of all reviewed meshfree collocation methods for convenient reference. In doing so, we also present the specific features and approaches of each method and point to suitable applications where appropriate. This Supplement is intended for information look-up, and not for continuous reading.

1.1 Local Anisotropic Basis Function Method (LABFM)

The Local Anisotropic Basis Function Method (LABFM) introduced by King in [1], [2] is one of the newest additions to the family of finite-difference methods on scattered nodes. The discrete approximation of the differential operator in LABFM originates from the direct approximation of moments described in section 2, with the 0th moment excluded, and is given as follows. \[L^{\boldsymbol{\alpha}}_{i} u \coloneq \displaystyle\sum_{j\in\mathcal{N}_i} u_{ji}\boldsymbol{W}_{ji}\cdot \left(\mathbf{M}_{i}^{-1} \widetilde{\boldsymbol{C}^{\boldsymbol{\alpha}}} \right), \quad \mathbf{M}_{i} \coloneq \displaystyle\sum_{j\in\mathcal{N}_i} \widetilde{\boldsymbol{X}}_{ji}\boldsymbol{W}_{ji}^{\top}. \label{eq:labfm95discrete95diff95operator}\tag{1}\] Originally, in [1], the method was formulated without scaling \(\boldsymbol{X}_{ji}\) and \(\boldsymbol{C}^{\boldsymbol{\alpha}}\) by the stencil size \(h\); this scaling was added two years later in [2].

Compared to similar methods, the introduction of LABFM emphasizes the choice of the anisotropic basis functions \(\boldsymbol{W}_{ji}\). Since the elements of \(\boldsymbol{X}_{ji}\) scale with \(h_i, h_i^2, h_i^3,...\), the corresponding ABFs must scale with \(h_i^{-1}, h_i^{-2}, h_i^{-3},...\) accordingly. Originally, in [1], the basis functions \(\boldsymbol{W}_{ji}\) are taken as partial derivatives of some suitably scaled RBF \(\varphi_{ji} \coloneq \varphi\left( | \boldsymbol{x}_{ji} |, h \right)\) with respect to \(\boldsymbol{x}_j\) \[\boldsymbol{W}_{ji} = \left.\boldsymbol{D}\varphi_{ji}\right\rvert_{\boldsymbol{x}_{j}}. \label{eq:labfm95w95abf95from95rbf}\tag{2}\] However, two years later, the authors of the original work published, in [2], improvements over the original choice (2). Although the original approach works well in 2D, it is unable to ensure linear independence of the rows of the moment matrix \(\mathbf{M}_i\) in general. According to [2], this eventually breaks down in three dimensions at order \(m = 4\), where, regardless of the node distribution, \(\mathbf{M}_i\) never has full rank. In [2], the authors propose to construct the ABFs from orthogonal polynomials (Hermite or Legendre polynomials) multiplied by a suitable RBF \(\varphi_{ji}\). Thus, the ABFs can be written as \[\boldsymbol{W}_{ji} = \boldsymbol{P}_{ji} \varphi_{ji} \label{eq:labfm95w95abf95from95base}\tag{3}\] where \(\boldsymbol{P}_{ji}\) are suitable basis functions (preferably orthogonal). Several choices of \(\boldsymbol{P}_{ji}\) and \(\varphi_{ji}\) were investigated in [1], [2].
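To make the construction concrete, the following minimal Python sketch assembles the LABFM operator 1 for the 2D Laplacian with a second-order basis, taking the ABFs as partial derivatives of a Gaussian RBF as in the original choice 2 . The function names, the Gaussian window, and the test setup are our own illustrative assumptions, not part of the published method.

```python
import numpy as np
from math import factorial
from numpy.polynomial.hermite import hermval

def hermite(n, x):
    """Physicists' Hermite H_n: d^n/dx^n e^{-x^2} = (-1)^n H_n(x) e^{-x^2}."""
    c = np.zeros(n + 1); c[n] = 1.0
    return hermval(x, c)

def labfm_laplacian(xi, ui, xs, us, h):
    """Minimal LABFM sketch (Eq. (1)) for the 2D Laplacian at node xi.
    ABFs taken as partial derivatives of a Gaussian RBF, i.e. the original
    choice (2); basis order m = 2, 0th moment excluded."""
    idx = [(1, 0), (0, 1), (2, 0), (1, 1), (0, 2)]   # multi-indices, m = 2
    M = np.zeros((5, 5))
    Ws = []
    for xj in xs:
        dx, dy = (xj - xi) / h                       # stencil-size scaling
        X = np.array([dx**kx * dy**ky / (factorial(kx) * factorial(ky))
                      for kx, ky in idx])            # scaled Taylor monomials
        g = np.exp(-(dx**2 + dy**2))
        W = np.array([(-1)**(kx + ky) * hermite(kx, dx) * hermite(ky, dy) * g
                      for kx, ky in idx])            # anisotropic basis funcs
        M += np.outer(X, W)                          # moment matrix M_i
        Ws.append(W)
    C = np.array([0, 0, 1, 0, 1]) / h**2             # selects u_xx + u_yy
    psi = np.linalg.solve(M, C)                      # weight coefficients
    return sum((uj - ui) * (W @ psi) for uj, W in zip(us, Ws))

# sanity check: u = x^2 + y^2 has Laplacian 4 everywhere
rng = np.random.default_rng(0)
xi = np.zeros(2); xs = rng.uniform(-0.5, 0.5, size=(30, 2))
print(labfm_laplacian(xi, 0.0, xs, np.sum(xs**2, axis=1), h=0.5))  # ~4.0
```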

1.2 High-order Consistent SPH (HOCSPH)

Around the same time as LABFM, High-order Consistent SPH (HOCSPH) was introduced by Nasar et al. [3]; it is very similar to LABFM, even sharing some of its authors. Having roots in SPH, the method, in contrast to LABFM, assumes a volume defined for each particle, which corresponds to direct nodal integration and makes HOCSPH effectively a collocation scheme. Apart from that, the strategy is the same: HOCSPH originates from the direct approximation of moments described in section 2, with the 0th moment excluded. In the notation introduced above, the ABFs in HOCSPH are obtained as derivatives of the SPH weight function, which corresponds to (2).

Considering the particle volume \(v_i\) associated with particle \(i\), in our notation the general discrete operator [eq:ba:ddo] of HOCSPH takes the form \[L^{\boldsymbol{\alpha}}_{i}u \coloneq \displaystyle \sum_{j\in\mathcal{N}_i} u_{ji}w^{\boldsymbol{\alpha}}_{ji}v_j \label{eq:appendix:variations:hochs32ddo}\tag{4}\] and – following the approximation of moments in section 2 – the resulting approximation is \[L^{\boldsymbol{\alpha}}_{i} u \coloneq \displaystyle\sum_{j\in\mathcal{N}_i} u_{ji}\boldsymbol{W}_{ji}\cdot \left(\mathbf{M}_{i}^{-1} \boldsymbol{C}^{\boldsymbol{\alpha}} \right)v_j, \quad \mathbf{M}_{i} \coloneq \displaystyle\sum_{j\in\mathcal{N}_i} \boldsymbol{X}_{ji}\boldsymbol{W}_{ji}^{\top}v_j. \label{eq:appendix:variations:hochs32ddo32approx}\tag{5}\] As we show in section 1.3 describing the DC-PSE method, under direct nodal integration the volume can be absorbed into the weight functions.
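To make the volume-absorption argument explicit (a one-line check, using only the definitions above): rescaling the ABFs by the particle volumes, \[\boldsymbol{W}'_{ji} \coloneq \boldsymbol{W}_{ji} v_j \quad \Longrightarrow \quad L^{\boldsymbol{\alpha}}_{i} u = \displaystyle\sum_{j\in\mathcal{N}_i} u_{ji}\boldsymbol{W}'_{ji}\cdot \left(\mathbf{M}_{i}^{-1} \boldsymbol{C}^{\boldsymbol{\alpha}} \right), \quad \mathbf{M}_{i} = \displaystyle\sum_{j\in\mathcal{N}_i} \boldsymbol{X}_{ji}\boldsymbol{W}'^{\top}_{ji},\] turns 5 into exactly the volume-free form 1 .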

1.3 Discretization-Corrected Particle Strength Exchange Method (DC-PSE)

The Discretization-Corrected Particle Strength Exchange method (DC-PSE) is a generalization of the integral particle strength exchange (PSE) operator approach to approximating spatial derivatives of any degree [4], [5]. It originates from the direct approximation of moments described in section 2. The method was introduced by Schrader et al. [6] and was later presented in [7]–[10]. In its original formulation, the discrete approximation of the differential operator is introduced as \[L^{\boldsymbol{\alpha}}_{i} u \coloneq \frac{1}{h^{|\boldsymbol{\alpha}|}} \displaystyle\sum_{j\in\mathcal{N}_i} \left( u_{j}\pm u_{i}\right)w^{\boldsymbol{\alpha}}_{ji}v_j \approx \boldsymbol{C}^{\boldsymbol{\alpha}} \cdot \left.\boldsymbol{D} u\right\rvert_{i} \label{eq:dcpse95discrete95diff95operator95with95volume}\tag{6}\] with weights \(\widetilde{w}^{\boldsymbol{\alpha}}_{ji} \coloneq w_{ji}^{\boldsymbol{\alpha}}\left( \left( \mathbf{x}_{j} - \mathbf{x}_{i} \right)/ h \right)\) and \(v_j\) representing the volume of particle \(j\) from the quadrature of PSE. The sign is positive for odd derivatives (in case \(|\boldsymbol{\alpha}|\) is odd) and negative for even derivatives (in case \(|\boldsymbol{\alpha}|\) is even); however, as mentioned in [7], the form exactly corresponding to [eq:general_discrete_diff_operator_zm] can also be taken as the default. Later, the volume \(v_j\) was omitted, since it was a relic of PSE: both the volume and the normalization factor, which is usually hidden in the weight functions, can be absorbed into the weight coefficients. As is clear from the derivation in section 2, the volume is not part of the formulas. The common form to present DC-PSE is thus \[L^{\boldsymbol{\alpha}}_{i} u = \frac{1}{h_{i}^{|\boldsymbol{\alpha}|}} \displaystyle\sum_{j\in\mathcal{N}_i} \left( u_{j}\pm u_{i}\right)w^{\boldsymbol{\alpha}}_{ji} \approx \boldsymbol{C}^{\boldsymbol{\alpha}} \cdot \left.\boldsymbol{D} u\right\rvert_{i} \label{eq:dcpse95discrete95diff95operator}\tag{7}\] and it is clear that the use of the factor \(1/h^{|\boldsymbol{\alpha}|}\) and of normalized weight-function arguments is equivalent to the stencil-size normalization introduced at the end of section 2.

The feature highlighted in DC-PSE compared to other methods of this kind is the treatment of the zeroth moment. Using the Taylor expansion with normalized Taylor monomials \(\widetilde{\boldsymbol{X}}_{ji} \coloneq \boldsymbol{X}(\mathbf{x}_{ji}/h) = \mathbf{H}_{i}\boldsymbol{X}_{ji}\) \[L^{\boldsymbol{\alpha}}_{i} u = \frac{1}{h_{i}^{|\boldsymbol{\alpha}|}} \displaystyle\sum_{j\in\mathcal{N}_i} \left( \left.\widetilde{\boldsymbol{X}}_{ji}\cdot\boldsymbol{D} u\right\rvert_{i} \pm u_{i} \right) w^{\boldsymbol{\alpha}}_{ji}, \label{eq:dcpse95error95short}\tag{8}\] we can see that the zeroth moment automatically cancels out for \(|\boldsymbol{\alpha}|\) even (the \(u_i\) contributions subtract away in \(u_j - u_i\)), while it stays present for \(|\boldsymbol{\alpha}|\) odd and must be forced to vanish by the moment conditions. A slightly different approach is taken in [11], where the DC-PSE approximation is assumed in the form \[L^{\boldsymbol{\alpha}}_{i} u = \displaystyle\sum_{j\in\mathcal{N}_i} u_{j}w^{\boldsymbol{\alpha}}_{ji} \approx \boldsymbol{C}^{\boldsymbol{\alpha}} \cdot \left.\boldsymbol{D} u\right\rvert_{i}. \label{eq:dcpse95discrete95diff95operator95no95substr95form}\tag{9}\] This, however, omits the nuances associated with the zeroth moment and is identical to the derivation carried out in section 2 presenting the basic approach.

Following [6], [7], the weight functions are composed of the vector of monomials divided by the stencil sizes, \(\widetilde{\boldsymbol{X}}_{ji}\), multiplied by a suitable radial basis function \(\varphi_{ji}\) (for example, in [6], [7], an exponential window function is used) and by the unknown weight coefficients \(\boldsymbol{\Psi^{\boldsymbol{\alpha}}}_{i}\) \[L^{\boldsymbol{\alpha}}_{i} u = \frac{1}{h_{i}^{|\boldsymbol{\alpha}|}} \displaystyle\sum_{j\in\mathcal{N}_i} \left( u_{j}\pm u_{i}\right) \widetilde{\boldsymbol{X}}_{ji} \cdot \boldsymbol{\Psi^{\boldsymbol{\alpha}}}_{i} \varphi_{ji} = \displaystyle\sum_{j\in\mathcal{N}_i} \left( u_{j}\pm u_{i}\right) \boldsymbol{W}_{ji} \cdot \boldsymbol{\Psi^{\boldsymbol{\alpha}}}_{i}. \label{eq:dcpse95discrete95diff95operator95hMod}\tag{10}\] Taking into account the notation of the general form of MFD introduced above, both [6]–[10] and [11] assume the ABFs as the vector of monomials \(\widetilde{\boldsymbol{X}}_{ji}\) multiplied by a radial basis function \(\varphi_{ji}\) \[\boldsymbol{W}_{ji} = \widetilde{\boldsymbol{X}}_{ji} \varphi_{ji}^{\text{Gauss}} \label{eq:dcpse95w95taylor95monomials95with95rbf}\tag{11}\] In the cited works, the right-hand side of the system for the coefficients in the DC-PSE method is usually written as the differential operator \(\mathcal{D}^{\boldsymbol{\alpha}}\) acting on the vector of monomials \(\boldsymbol{X}(0)\), which, however, is nothing else than a different expression of our mapping vector \(\boldsymbol{C}^{\boldsymbol{\alpha}}\). Moreover, the scaling factor \(1/h_{i}^{|\boldsymbol{\alpha}|}\) can be absorbed into the mapping vector \(\boldsymbol{C}^{\boldsymbol{\alpha}}\), which is equivalent to scaling \(\boldsymbol{C}^{\boldsymbol{\alpha}}\) by \(\mathbf{H}_i\). Since \(\boldsymbol{C}^{\boldsymbol{\alpha}}\) selects the approximated derivative, the normalization coefficient \(1/h_{i}^{|\boldsymbol{\alpha}|}\) is automatically part of the mapping vector. Finally, DC-PSE can be written as \[\widetilde{\boldsymbol{C}^{\boldsymbol{\alpha}}} \coloneq (-1)^{\lvert\boldsymbol{\alpha}\rvert}\mathbf{H}_{i}(\partial^{\boldsymbol{\alpha}}\boldsymbol{X}_{ji})|_0 \, . \label{eq:dcpse95linSystem95rhs}\tag{12}\]

\[L^{\boldsymbol{\alpha}}_{i} u \coloneq \displaystyle\sum_{j\in\mathcal{N}_i} \left( u_{j}\pm u_{i}\right) \boldsymbol{W}_{ji} \cdot \left(\mathbf{M}_{i}^{-1}\widetilde{\boldsymbol{C}^{\boldsymbol{\alpha}}}\right), \quad \mathbf{M}_{i} \coloneq \displaystyle\sum_{j\in\mathcal{N}_i}\widetilde{\boldsymbol{X}}_{ji}\boldsymbol{W}_{ji}^{\top}. \label{eq:dcpse95discrete95diff95operator95hMod95recap}\tag{13}\]
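As an illustration (our own sketch, not taken from the cited works), the following Python fragment implements 13 in 2D with scaled Taylor monomials and a Gaussian window. The basis includes the constant monomial with a zero entry in the mapping vector, which enforces the vanishing zeroth moment required in the odd case; all names, the window, and the test are illustrative assumptions.

```python
import numpy as np
from math import factorial

def dcpse(alpha, xi, ui, xs, us, h, r=2):
    """Minimal DC-PSE sketch in 2D (cf. Eq. (13)): approximates d^alpha u
    at xi from scattered neighbours xs with values us. r is the desired
    convergence order; Gaussian window assumed."""
    ax, ay = alpha
    m = ax + ay + r - 1                            # max monomial order
    idx = [(kx, ky) for kx in range(m + 1) for ky in range(m + 1)
           if kx + ky <= m]                        # includes (0,0)
    sign = +1.0 if (ax + ay) % 2 == 1 else -1.0    # + for odd, - for even
    M = np.zeros((len(idx),) * 2)
    Ws = []
    for xj in xs:
        dx, dy = (xj - xi) / h                     # stencil-size scaling
        X = np.array([dx**kx * dy**ky / (factorial(kx) * factorial(ky))
                      for kx, ky in idx])
        W = X * np.exp(-(dx**2 + dy**2))           # monomials * Gaussian
        M += np.outer(X, W)                        # moment matrix M_i
        Ws.append(W)
    # mapping vector: selects d^alpha and absorbs 1/h^{|alpha|}; its (0,0)
    # entry is zero, which forces the zeroth moment to vanish (odd case)
    C = np.array([float(k == (ax, ay)) for k in idx]) / h**(ax + ay)
    psi = np.linalg.solve(M, C)
    return sum((uj + sign * ui) * (W @ psi) for uj, W in zip(us, Ws))

# sanity check on u = sin(x)*y: du/dx at (0.3, 0.4) is cos(0.3)*0.4 = 0.382
rng = np.random.default_rng(1)
xi = np.array([0.3, 0.4])
xs = xi + rng.uniform(-0.25, 0.25, size=(40, 2))
print(dcpse((1, 0), xi, np.sin(xi[0]) * xi[1], xs,
            np.sin(xs[:, 0]) * xs[:, 1], h=0.25))   # ~0.382
```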

Notation remark

To connect our notation with the notation usually used by the DC-PSE community, we also present a small dictionary to make the relation clear. The cited papers use \(\boldsymbol{Q}^\alpha u (\boldsymbol{x}_i)\) instead of our \(L^{\boldsymbol{\alpha}}_{i} u\) to express the differential operator, and \(\eta^{\boldsymbol{\alpha}}( \boldsymbol{x}_i - \boldsymbol{x}_j )\) is used instead of our \(w_{ji}^{\boldsymbol{\alpha}}\) to express the weight functions (called kernel functions in DC-PSE). DC-PSE uses the kernel or stencil size \(\epsilon(\boldsymbol{x}_i)\), which we replace with our \(h_i\) and, in the scaling procedure, with the vector of characteristic stencil sizes \(\mathbf{H}_i\). In fact, all the arguments \(\boldsymbol{x}_i - \boldsymbol{x}_j\) are divided by \(\epsilon(\boldsymbol{x}_i)\), which is equivalent to the scaling with \(\mathbf{H}_i\). The sought coefficients \(\boldsymbol{a}(\boldsymbol{x}_i)\) correspond to our \(\boldsymbol{\Psi}_i^{\boldsymbol{\alpha}}\), and the basis (although only Taylor monomials are mentioned) \(\boldsymbol{p}(\mathbf{x}_j - \mathbf{x}_i)\) can be viewed as our \(\boldsymbol{P}_{ji}\) (or \(\boldsymbol{X}_{ji}\) in this context). DC-PSE is often presented in matrix form as well.

1.4 Reproducing Kernel Collocation Method (RKCM)

The Reproducing Kernel Collocation Method (RKCM), introduced by Aluru [12], is the next member of the family of meshfree finite difference methods. RKCM is inspired by the Reproducing Kernel Particle Method (RKPM) [13], [14]. The motivation was to get rid of the numerical integration (or quadrature) required in RKPM and to fully utilize the collocation approach. RKCM thus follows the so-called reproducing conditions, which are the same as in RKPM, but formulates them directly in discrete form. RKCM (RKPM), like the other methods, relies on the canceling of unwanted moments. However, in contrast to the methods presented so far, RKCM (RKPM) first approximates the function itself (which is equivalent to equation (2) for \(\boldsymbol{\alpha}=\boldsymbol{0}\), resulting in a search for weight functions \(w_{ji}^{\boldsymbol{0}}\)). The resulting weight functions \(w_{ji}^{\boldsymbol{\alpha}}\) for a given differential operator \(\boldsymbol{\alpha}\) are obtained as derivatives of the weight functions that approximate the function itself, \(w_{ji}^{\boldsymbol{0}}\), i.e. \(\partial^{\boldsymbol{\alpha}}\left( w_{ji}^{\boldsymbol{0}} \right)\). This approach is usually referred to as direct derivatives (see for example [15]).

The idea behind the reproducing kernel approximation is to obtain the weight functions \(w^{\boldsymbol{\alpha}}_{ji}\) in such a form that we correct the RBF weight function \(\varphi_{ji}\) (i.e. the kernel approximation) to enforce consistency up to the desired order. The correction is assumed in the form \(\boldsymbol{X}_{ji}\cdot\boldsymbol{\Psi}_{i}^{\boldsymbol{\alpha}}\), where \(\boldsymbol{\Psi}_{i}^{\boldsymbol{\alpha}}\) are the unknown coefficients, which we find by canceling selected moments. The way the resulting function approximation is obtained corresponds to the principle of direct approximation of moments described in section 2, which enforces the reproducing conditions for \(\boldsymbol{\alpha}=\boldsymbol{0}\).

Following the procedure outlined above, to obtain the RKCM approximation of derivatives, we start with the approximation of the function itself \[L^{\boldsymbol{0}}_{i} u \coloneq \displaystyle\sum_{j\in\mathcal{N}_i} u_{j}w^{\boldsymbol{0}}_{ji} \approx \boldsymbol{C}^{\boldsymbol{0}} \cdot \left.\boldsymbol{D} u\right\rvert_{i} \label{eq:rkcm95function95interpolation}\tag{14}\] and, following the procedure described step by step in section 2, we end up with the relation \[\boldsymbol{W}_{ji} \coloneq \boldsymbol{X}_{ji} \varphi_{ji}\] \[L^{\boldsymbol{0}}_{i} u = \displaystyle\sum_{j\in\mathcal{N}_i} u_{j} \boldsymbol{W}_{ji} \cdot (\mathbf{M}_i^{-1} \boldsymbol{C}^{\boldsymbol{0}}), \quad \mathbf{M}_{i} \coloneq \displaystyle\sum_{j\in\mathcal{N}_i} \boldsymbol{X}_{ji}\boldsymbol{W}_{ji}^{\top} \label{eq:rkcm95function95interpolation95substituted32weights}\tag{15}\] In RKCM, the ABFs are usually assumed as Taylor monomials \(\boldsymbol{X}_{ji}\) multiplied by some radial basis function \(\varphi_{ji}\), and by \(\boldsymbol{C}^{\boldsymbol{0}}\) we mean the mapping vector with a one in the first position and zeros elsewhere. (Note that in the case of the formulation with the \(0\)th moment mentioned in section 2, \(\boldsymbol{C} = \mathbf{0}\).)

Derivatives of the function are then obtained by taking derivatives of the approximation \[L^{\boldsymbol{\alpha}}_{i} u = \partial^{\boldsymbol{\alpha}} L^{\boldsymbol{0}}_{i} u = \partial^{\boldsymbol{\alpha}} \left( \displaystyle\sum_{j\in\mathcal{N}_i} u_{j}w^{\boldsymbol{0}}_{ji} \right) = \displaystyle\sum_{j\in\mathcal{N}_i} u_{j}\partial^{\boldsymbol{\alpha}}w^{\boldsymbol{0}}_{ji} = \displaystyle\sum_{j\in\mathcal{N}_i} u_{j}w^{\boldsymbol{\alpha}}_{ji}. \label{eq:rkcm95discrete95diff95operator95not95substituted}\tag{16}\] By substituting for \(w_{ji}^{\boldsymbol{0}}\) from 15 into 16 , we get \[L^{\boldsymbol{\alpha}}_{i} u = \displaystyle\sum_{j\in\mathcal{N}_i} u_{j} \partial^{\boldsymbol{\alpha}} \left( \boldsymbol{W}_{ji} \cdot (\mathbf{M}_i^{-1} \boldsymbol{C}^{\boldsymbol{0}}) \right)\] and, considering \(\boldsymbol{W}_{ji} = \boldsymbol{X}_{ji} \varphi_{ji}\), this expands to the resulting approximation of the derivative used in RKCM \[\boldsymbol{W}_{ji} \coloneq \boldsymbol{X}_{ji} \varphi_{ji}\] \[L^{\boldsymbol{\alpha}}_{i} u \coloneq \displaystyle\sum_{j\in\mathcal{N}_i} u_{j} \left( \partial^{\boldsymbol{\alpha}} \boldsymbol{X}_{ji} \varphi_{ji} \cdot \mathbf{M}_{i}^{-1} \boldsymbol{C}^{\boldsymbol{0}} + \boldsymbol{X}_{ji} \partial^{\boldsymbol{\alpha}} \varphi_{ji} \cdot \mathbf{M}_{i}^{-1} \boldsymbol{C}^{\boldsymbol{0}} + \boldsymbol{X}_{ji} \varphi_{ji} \cdot \partial^{\boldsymbol{\alpha}} \mathbf{M}_{i}^{-1} \boldsymbol{C}^{\boldsymbol{0}} \right), \quad \mathbf{M}_{i} \coloneq \displaystyle\sum_{j\in\mathcal{N}_i} \boldsymbol{X}_{ji}\boldsymbol{W}_{ji}^{\top}. \label{eq:rkcm95discrete95diff95operator}\tag{17}\] Note also that the reproducing condition in RKCM is formulated using Taylor monomials.
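The last term of 17 involves the derivative of the inverse moment matrix which, for \(|\boldsymbol{\alpha}| = 1\), follows from differentiating the identity \(\mathbf{M}_i \mathbf{M}_i^{-1} = \mathbf{I}\): \[\partial^{\boldsymbol{\alpha}} \left( \mathbf{M}_{i} \mathbf{M}_{i}^{-1} \right) = \mathbf{0} \quad \Longrightarrow \quad \partial^{\boldsymbol{\alpha}} \mathbf{M}_{i}^{-1} = -\mathbf{M}_{i}^{-1} \left( \partial^{\boldsymbol{\alpha}} \mathbf{M}_{i} \right) \mathbf{M}_{i}^{-1}, \quad \partial^{\boldsymbol{\alpha}} \mathbf{M}_{i} = \displaystyle\sum_{j\in\mathcal{N}_i} \partial^{\boldsymbol{\alpha}} \left( \boldsymbol{X}_{ji}\boldsymbol{W}_{ji}^{\top} \right);\] higher-order derivatives are obtained by applying this product rule repeatedly.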

Notation remark

As in the previous case, we would like to link our notation with the notation established in RKCM (which is also common to RKPM). In RKCM/RKPM, the vector of Taylor monomials is usually denoted by \(\boldsymbol{h}(\boldsymbol{x}_j - \boldsymbol{x}_i)\) (or sometimes \(\boldsymbol{H}(\boldsymbol{x}_j - \boldsymbol{x}_i)\)) instead of our \(\boldsymbol{X}_{ji}\). The correction functions are usually denoted by \(C(\boldsymbol{x}_j, \boldsymbol{x}_j -\boldsymbol{x}_i)\) and consist of the Taylor monomials \(\boldsymbol{h}(\boldsymbol{x}_j - \boldsymbol{x}_i)\) and unknown coefficients \(c(\boldsymbol{x}_j, \boldsymbol{x}_i)\). In our notation, this corresponds to the product \(\boldsymbol{X}_{ji}\cdot\boldsymbol{\Psi}_{i}^{\boldsymbol{\alpha}}\).

1.5 Gradient Reproducing Kernel Collocation Method (GRKCM)

The Gradient Reproducing Kernel Collocation Method (GRKCM), introduced by Chi [16], aims to simplify the derivative approximation used in RKCM, introduced in the previous section 1.4. Like RKCM, GRKCM is based on the ideas of RKPM, but it does not use direct derivatives. Instead, the derivatives are approximated directly, in the same way we presented in the introduction (sometimes referred to as synchronous or implicit derivatives, see [15]). The method is derived using the direct approximation of moments, exactly as presented in section 2 above. The 0th moment is included.

We start with the discrete approximation in the form \[L^{\boldsymbol{\alpha}}_{i} u \coloneq \displaystyle\sum_{j\in\mathcal{N}_i} u_{j}w^{\boldsymbol{\alpha}}_{ji} \approx \boldsymbol{C}^{\boldsymbol{\alpha}} \cdot \left.\boldsymbol{D} u\right\rvert_{i}. \label{eq:grkcm95discrete95diff95operator}\tag{18}\] Similarly to the previous section, we aim to construct a correction that ensures the desired order of the approximation, and we assume the correction in the form \(\boldsymbol{X}_{ji}\cdot\boldsymbol{\Psi}_{i}^{\boldsymbol{\alpha}}\). Canceling the unwanted moments (or, in the words of RKPM, enforcing the reproducing conditions), we end up with \[L^{\boldsymbol{\alpha}}_{i} u = \displaystyle\sum_{j\in\mathcal{N}_i} u_{j}\boldsymbol{X}_{ji} \cdot \boldsymbol{\Psi^{\boldsymbol{\alpha}}}_{i} \varphi_{ji} \label{eq:grkcm95discrete95diff95operator95subs95weights}\tag{19}\] where the coefficients \(\boldsymbol{\Psi}_{i}^{\boldsymbol{\alpha}}\) are obtained by solving a linear system with a symmetric matrix \[\mathbf{M}_i\boldsymbol{\Psi}_{i}^{\boldsymbol{\alpha}} = \boldsymbol{C}^{\boldsymbol{\alpha}}. \label{eq:grkcm95linSystem}\tag{20}\] Using the notation introduced above, the GRKCM is expressed as \[\boldsymbol{W}_{ji} \coloneq \boldsymbol{X}_{ji} \varphi_{ji} \label{eq:grkcm95w95taylor95monomials95with95rbf}\tag{21}\]

\[L^{\boldsymbol{\alpha}}_{i} u \coloneq \displaystyle\sum_{j\in\mathcal{N}_i} u_{j} \boldsymbol{W}_{ji} \cdot (\mathbf{M}_i^{-1} \boldsymbol{C}^{\boldsymbol{\alpha}}), \quad \mathbf{M}_{i} \coloneq \displaystyle\sum_{j\in\mathcal{N}_i} \boldsymbol{X}_{ji}\boldsymbol{W}_{ji}^{\top}. \label{eq:grkcm95discrete95diff95operator95recap}\tag{22}\]
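Since \(\mathbf{M}_i\) in 22 is symmetric (and positive definite for suitable RBFs and node sets), the coefficient solve 20 can use a Cholesky factorization. A minimal sketch with Taylor monomials and a Gaussian RBF; the names and parameter choices are our own assumptions:

```python
import numpy as np
from math import factorial
from scipy.linalg import cho_factor, cho_solve

def grkcm_psi(xi, xs, h, alpha, m=2):
    """Sketch of the GRKCM coefficient solve (Eq. (20)) in 2D: returns psi
    such that w_ji^alpha = (X_ji . psi) * phi_ji. The 0th moment (constant
    monomial) is included, as in GRKCM; Gaussian RBF assumed."""
    idx = [(kx, ky) for kx in range(m + 1) for ky in range(m + 1)
           if kx + ky <= m]
    M = np.zeros((len(idx),) * 2)
    for xj in xs:
        dx, dy = (xj - xi) / h
        X = np.array([dx**kx * dy**ky / (factorial(kx) * factorial(ky))
                      for kx, ky in idx])
        M += np.outer(X, X) * np.exp(-(dx**2 + dy**2))   # symmetric M_i
    C = np.array([float(k == alpha) for k in idx]) / h**sum(alpha)
    return cho_solve(cho_factor(M), C)   # exploits symmetry of M_i

# example: coefficients for d/dx at the origin from 20 scattered neighbours
rng = np.random.default_rng(3)
psi = grkcm_psi(np.zeros(2), rng.uniform(-0.3, 0.3, size=(20, 2)), 0.3, (1, 0))
```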

Notation remark

As above, in GRKCM the vector of Taylor monomials is usually denoted by \(\boldsymbol{h}(\boldsymbol{x}_j - \boldsymbol{x}_i)\) (or sometimes \(\boldsymbol{H}(\boldsymbol{x}_j - \boldsymbol{x}_i)\)) instead of our \(\boldsymbol{X}_{ji}\). The mapping vector is usually expressed using the vector of Taylor monomials, \(\boldsymbol{C}^{\boldsymbol{\alpha}} = (-1)^{\lvert \boldsymbol{\alpha} \rvert}\partial^{\boldsymbol{\alpha}}\boldsymbol{H}(\mathbf{0})\). The correction functions are usually denoted by \(C(\boldsymbol{x}_j, \boldsymbol{x}_j - \boldsymbol{x}_i)\) and consist of the Taylor monomials \(\boldsymbol{h}(\boldsymbol{x}_j - \boldsymbol{x}_i)\) and unknown coefficients \(c(\boldsymbol{x}_j, \boldsymbol{x}_i)\). In our notation, this corresponds to the product \(\boldsymbol{X}_{ji}\cdot\boldsymbol{\Psi}_{i}^{\boldsymbol{\alpha}}\).

1.6 Generalized Finite Difference Method (GFDM)

The Generalized Finite Difference Method (GFDM) is a popular version of MFD, introduced by Liszka and Orkisz [17], [18], inspired by the work of Jensen [19] and the paper by Swartz and Wendroff [20]. Since then, various improvements have been made across different areas, and a number of additional techniques have been developed. Here, we focus on the current concept of GFDM as it can be found under this particular name. For this purpose, we refer to the work [21]. The mechanism of GFDM is exactly the same as for the previous methods, but it was built on the principle of \(\ell_2\) minimization of the approximation error, which is described in section 4, and later on the polynomial-method \(\ell_2\) minimization, as we presented in section 5.

In the case of the \(\ell_2\) minimization of the approximation error, we start with the discrete version of the differential operator \[L^{\boldsymbol{\alpha}}_{i} u \coloneq \displaystyle\sum_{j\in\mathcal{N}_i} u_{j}w^{\boldsymbol{\alpha}}_{ji} \label{eq:gfdm95general95discrete95diff95operator95zm}\tag{23}\] and use the Taylor expansion of the field \(u\) in the vicinity of point \(i\) \[u_{j} = \boldsymbol{X}_{ji}\cdot\left.\boldsymbol{D} u\right\rvert_{i} + \boldsymbol{E}_{ji}, \label{eq:gfdm95taylor95expansion}\tag{24}\] with the aim to minimize the square of the error of the approximation, \(\lVert \boldsymbol{E}_{ji} \rVert_{2}^{2}\), weighted by some suitable weight function \(\varphi_{ji}\). This leads to an optimization problem, solved step by step in section 4, which results in the known system of equations for the vector of derivatives \(\left.\boldsymbol{D} u\right\rvert_{i}\) \[\mathbf{M}_{i}\left.\boldsymbol{D} u\right\rvert_{i} = \displaystyle\sum_{j\in\mathcal{N}_i} u_{ji}\boldsymbol{X}_{ji}\varphi_{ji}, \quad \mathbf{M}_{i} \coloneq \displaystyle\sum_{j\in\mathcal{N}_i}\boldsymbol{X}_{ji}\boldsymbol{X}_{ji}^{\top} \varphi_{ji}. \label{eq:gfdm95linSystem}\tag{25}\] Expressing this via the vector of basis functions \(\boldsymbol{W}_{ji}\), which in this case is again composed of the vector of monomials \(\boldsymbol{X}_{ji}\) multiplied by a suitable RBF \(\varphi_{ji}\) \[\boldsymbol{W}_{ji} \coloneq \boldsymbol{X}_{ji} \varphi_{ji} \label{eq:gfdm95w95taylor95monomials95with95rbf}\tag{26}\] we can write 25 as \[\mathbf{M}_{i}\left.\boldsymbol{D} u\right\rvert_{i} = \displaystyle\sum_{j\in\mathcal{N}_i} u_{ji}\boldsymbol{W}_{ji}, \quad \mathbf{M}_{i} = \displaystyle\sum_{j\in\mathcal{N}_i}\boldsymbol{X}_{ji}\boldsymbol{W}_{ji}^{\top}. \label{eq:gfdm95linSystem95with95w}\tag{27}\]

In GFDM, all derivatives up to order \(m\) are computed together at once. To construct a particular differential operator, we include \(\boldsymbol{C}^{\boldsymbol{\alpha}}\) to select the desired derivatives. The result is \[L^{\boldsymbol{\alpha}}_{i} u = \displaystyle\sum_{j\in\mathcal{N}_i} \boldsymbol{C}^{\boldsymbol{\alpha}} \cdot (\mathbf{M}_i^{-1} \boldsymbol{W}_{ji}) u_{ji} \label{eq:gfdm95discrete95diff95operator}\tag{28}\] which we can do due to the symmetry of the matrix \(\mathbf{M}\). Reshuffling to the form [eq:general_discrete_diff_operator_with_coefs_do] gives \[\boldsymbol{W}_{ji} \coloneq \boldsymbol{X}_{ji} \varphi_{ji}. \label{eq:gfdm95w95taylor95monomials95with95rbf95recap}\tag{29}\] \[L^{\boldsymbol{\alpha}}_{i} u \coloneq \displaystyle\sum_{j\in\mathcal{N}_i} u_{ji} \boldsymbol{W}_{ji} \cdot (\mathbf{M}_i^{-1} \boldsymbol{C}^{\boldsymbol{\alpha}}), \quad \mathbf{M}_{i} \coloneq \displaystyle\sum_{j\in\mathcal{N}_i}\boldsymbol{X}_{ji}\boldsymbol{W}_{ji}^{\top}. \label{eq:gfdm95discrete95diff95operator95recap}\tag{30}\]
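A minimal sketch of the GFDM solve 25 in 2D, recovering all Taylor coefficients at once; the Gaussian weight and the test function are our own illustrative assumptions:

```python
import numpy as np
from math import factorial

def gfdm_derivatives(xi, ui, xs, us, h, m=2):
    """Sketch of the GFDM solve (Eq. (25)) in 2D: recovers all Taylor
    coefficients D u|_i up to order m at once. Gaussian weight assumed."""
    idx = [(kx, ky) for kx in range(m + 1) for ky in range(m + 1)
           if 0 < kx + ky <= m]                 # u_i handled via u_ji
    M = np.zeros((len(idx),) * 2)
    rhs = np.zeros(len(idx))
    for xj, uj in zip(xs, us):
        dx, dy = xj - xi                        # unscaled monomials
        X = np.array([dx**kx * dy**ky / (factorial(kx) * factorial(ky))
                      for kx, ky in idx])
        phi = np.exp(-np.dot(xj - xi, xj - xi) / h**2)
        M += np.outer(X, X) * phi               # M_i = sum X X^T phi
        rhs += (uj - ui) * X * phi              # sum u_ji X phi
    return idx, np.linalg.solve(M, rhs)         # Du|_i: all derivatives

# usage: u = x^2 + 3xy at (0.5,-0.2): u_x=0.4, u_y=1.5, u_xx=2, u_xy=3, u_yy=0
rng = np.random.default_rng(2)
xi = np.array([0.5, -0.2])
xs = xi + rng.uniform(-0.2, 0.2, size=(25, 2))
f = lambda p: p[..., 0]**2 + 3 * p[..., 0] * p[..., 1]
idx, Du = gfdm_derivatives(xi, f(xi), xs, f(xs), h=0.2)
print(dict(zip(idx, np.round(Du, 6))))          # exact for quadratic u
```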

In the case of the polynomial-method \(\ell_2\) minimization, we start from the same assumed form of the discrete differential operator 23 , but instead of minimizing the approximation error, we determine the weight functions \(w_{ji}^{\boldsymbol{\alpha}}\) by enforcing consistency of the gradient reproducing conditions up to order \(m\). The minimization problem is given by equation (97), and section 5 presents a step-by-step procedure leading to the same equation 30 . Following the derivation, we obtain the same result (as for the \(\ell_2\) approximation error), namely that the basis functions are given as \(\boldsymbol{W}_{ji} = \boldsymbol{X}_{ji} \varphi_{ji}\).

As we can observe, deriving the method via minimization of the approximation error in fact leads to less general results, since the choice of the basis is less flexible (the vector of ABFs is prescribed) than for the methods introduced so far. Although the polynomial minimization produces the same result, we showed in section 5 that this approach can be generalized such that arbitrary basis functions can be chosen, in the same way as in section 2.

1.7 Moving least squares collocation methods (MLS)

One particular class of meshfree finite difference methods is based directly on the Moving Least Squares (MLS) method introduced by Lancaster [22]. This approach was quickly adopted for the approximation of differential operators, following the derivations using the optimization procedures from sections 4 and 5. Over the years, a number of variants, modifications, and improvements have been introduced to improve the properties of the moment matrix and reduce the number of points needed for the approximation. MLS is also widely used in meshfree methods employed with the weak form; in the following section, however, we present purely collocation methods based on MLS for approximating differential operators in strong form. Namely, we introduce the Finite Point Method (FPM) 1.7.1, the Meshless Local Strong Form Method (LSMFM) 1.7.2, Generalized Moving Least Squares (GMLS) 1.7.3, Compact Moving Least Squares (CMLS) 1.7.4, Modified Moving Least Squares (MMLS) 1.7.5, Interpolating Moving Least Squares (IMLS) 1.7.6 and, sharing the same abbreviation as the previous one, Improved Moving Least Squares (IMLS) 1.7.6.

1.7.1 Finite Point Method (FPM)

The term Finite Point Method was originally used by the authors of [23] to describe a class of point-data interpolation-based procedures that encompasses methods like Least Squares Methods (LSQ), Weighted Least Squares (WLS), Moving Least Squares (MLS), and Reproducing Kernel Particle (RKP) methods. In later works, the authors use MLS as the main method under the title Finite Point Method (FPM) to simulate equations of self-adjoint and non-self-adjoint nature, such as advective–diffusive transport problems, compressible fluid flow problems, and elasticity problems in structural mechanics [24]–[28]. In addition, the authors thoroughly elaborate on stabilization techniques for the non-self-adjoint problems applicable to the meshfree formulation.

Given that MLS is the method of choice for FPM, the derivation is identical to the definition given in section 4 for \(\boldsymbol{D}u = L^{\boldsymbol{0}}u\), i.e., it first builds the approximation of the function \(u_i\), which is then used to compute its derivatives.

Additionally, the authors extended [25] the MLS formalism by introducing the Multiple Fixed Least Squares (MFLS) approximation: the weighting function \(\varphi_k(\mathbf{x}_j)\), centered at an arbitrary point \(\mathbf{x}_k\) (not necessarily one of the nodes \(\mathbf{x}_i\)) where the function approximation \(u_k\) or the derivative approximation \(L_k^{\boldsymbol{\alpha}}u\) is evaluated, is replaced by \(\varphi_j(\mathbf{x}_k)\), i.e., the weighting function \(\varphi_j\) is defined at each node location. This formalism is equivalent to the one used by us (see section 2).

Following the formal minimization presented in section 4, we end up with the expression \[u_{i}^{*} = L_{i}^{\boldsymbol{0}} u = \boldsymbol{P}_i \cdot \displaystyle\sum_{j\in\mathcal{N}_i} \left[ \left( \displaystyle\sum_{k\in\mathcal{N}_i} \boldsymbol{P}_{ki} \boldsymbol{P}_{ki}^{\top} \varphi_{ki} \right)^{-1} \boldsymbol{P}_{ji} \varphi_{ji} u_j \right]. \label{eq:fpm95l095approximation95raw}\tag{31}\]

The authors consider two approaches to compute derivatives of the shape functions: the full derivative (also known as direct derivative) and the diffuse derivative [23], [29] (for further details see [15]). The direct derivative of the expression 31 leads to a complicated and computationally expensive formula, since both \(\boldsymbol{P}_{ji}\) and \(\varphi_{ji}\) depend on the position \(\mathbf{x}\). Instead, with the diffuse-derivative approach, the coefficients multiplying the basis (which involve \(\varphi_{ji}\) and the moment matrix) are assumed to be constant, and their derivatives are therefore zero. The approximation of the corresponding differential operator \(L_{i}^{\boldsymbol{\alpha}} u\) is thus obtained by replacing the basis functions \(\boldsymbol{P}_i\) in equation 31 by their respective derivatives \(\left.\partial^{\boldsymbol{\alpha}}\boldsymbol{P}\right\rvert_{i}\), which gives us \[L_{i}^{\boldsymbol{\alpha}} u = \left.\partial^{\boldsymbol{\alpha}}\boldsymbol{P}\right\rvert_{i} \cdot \displaystyle\sum_{j\in\mathcal{N}_i} \left[ \left( \displaystyle\sum_{k\in\mathcal{N}_i} \boldsymbol{P}_{ki} \boldsymbol{P}_{ki}^{\top} \varphi_{ki} \right)^{-1} \boldsymbol{P}_{ji} \varphi_{ji} u_j \right]. \label{eq:fpm95ld95approximation95raw}\tag{32}\] Taking the ABFs in the form \(\boldsymbol{W}_{ji} = \boldsymbol{P}_{ji} \varphi_{ji}\), we recognize the moment matrix inside the parentheses \[\mathbf{M}_i = \displaystyle\sum_{j\in\mathcal{N}_i} \boldsymbol{P}_{ji}\boldsymbol{W}_{ji}^{\top}. \label{eq:fpm95momentum95matrix}\tag{33}\] Moreover, \(\left.\partial^{\boldsymbol{\alpha}}\boldsymbol{P}\right\rvert_{i}\) can be interpreted as the mapping vector \(\boldsymbol{C}^{\boldsymbol{\alpha}}\). This results in the final form of the discrete differential operator \[\boldsymbol{W}_{ji} = \boldsymbol{P}_{ji} \varphi_{ji}.\] \[L^{\boldsymbol{\alpha}}_{i} u = \displaystyle\sum_{j\in\mathcal{N}_i} u_{j} \boldsymbol{W}_{ji} \cdot (\mathbf{M}_i^{-1} \boldsymbol{C}^{\boldsymbol{\alpha}}), \quad \mathbf{M}_{i} = \displaystyle\sum_{j\in\mathcal{N}_i}\boldsymbol{P}_{ji}\boldsymbol{W}_{ji}^{\top}. \label{eq:fpm95ld95approximation}\tag{34}\] which is in agreement with the results presented above.
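The following sketch illustrates the diffuse-derivative idea in 1D: the function approximation 31 and the derivative approximation 32 differ only in the vector that replaces \(\boldsymbol{P}_i\). The monomial basis, Gaussian weight, and all names are our illustrative assumptions:

```python
import numpy as np

def fpm_weights(xi, xs, h, dP0):
    """Sketch of FPM with diffuse derivatives (Eqs. (31)-(32)) in 1D.
    dP0 is the basis -- or its alpha-derivative -- evaluated at 0; swapping
    it is all that distinguishes L^0 from L^alpha. Order-2 monomial basis
    and Gaussian weight assumed."""
    P = lambda d: np.array([1.0, d, 0.5 * d**2])
    M = np.zeros((3, 3))
    Ws = []
    for xj in xs:
        d = xj - xi
        W = P(d) * np.exp(-(d / h)**2)   # ABFs W_ji = P_ji * phi_ji
        M += np.outer(P(d), W)           # moment matrix (Eq. (33))
        Ws.append(W)
    psi = np.linalg.solve(M, dP0)
    return np.array([W @ psi for W in Ws])   # nodal weights w_ji

xi, h = 0.3, 0.2
xs = xi + np.linspace(-0.2, 0.2, 9)
us = np.sin(3 * xs)
w0 = fpm_weights(xi, xs, h, np.array([1.0, 0.0, 0.0]))  # P(0): value
w1 = fpm_weights(xi, xs, h, np.array([0.0, 1.0, 0.0]))  # P'(0): diffuse d/dx
print(us @ w0, us @ w1)  # ~ sin(0.9) = 0.783, ~ 3cos(0.9) = 1.865 (to O(h^2))
```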

To conclude the FPM section, we would like to point out that, to the best of our knowledge, every available application has used Taylor monomials as the basis for the ABFs, i.e. \(\boldsymbol{P}_{ji} = \boldsymbol{X}_{ji}\) and thus \(\boldsymbol{W}_{ji} = \boldsymbol{X}_{ji} \varphi_{ji}\). FPM is also usually presented in matrix notation.

1.7.1.1 Notation remark

Last, we provide a link between our notation and the notation consistently used in FPM. The papers cited above usually use \(\boldsymbol{p}(\mathbf{x})\) to denote the basis (or the vector of Taylor monomials in particular cases), which corresponds to our \(\boldsymbol{P}_{ji}\), or to \(\boldsymbol{X}_{ji}\) in the case of Taylor monomials. The weight coefficients in FPM are usually denoted by \(\boldsymbol{\alpha}\), while we use \(\boldsymbol{\Psi}^{\boldsymbol{\alpha}}_{i}\). The weight functions are usually represented by the vector \(\boldsymbol{N}_i\), which corresponds to our \(w_{ji}^{\boldsymbol{\alpha}}\), or in matrix notation \(\boldsymbol{w}_{i}^{\boldsymbol{\alpha}}\). The RBF weight function is usually denoted as \(\varphi_i(\mathbf{x}_j)\), which we replace with \(\varphi_{ji}\).

1.7.2 Meshless Local Strong Form Method (LSMFM)

The Meshless Local Strong Form Method (LSMFM), introduced by Park and Youn [30], aims to remove the numerical integration required in meshfree methods using the Galerkin formulation. LSMFM is based on MLS (section 1.7), so, like other methods of this kind, it uses the minimization of the approximation error (section 4) to approximate the function itself, which is then used to compute the desired derivatives. LSMFM is thus very similar to the RKCM introduced in section 1.4, with the difference that it follows a derivation based on MLS instead of the approximation of moments. The zeroth moment is included.

To obtain the approximation, the function itself is first approximated, which corresponds to equation (2) for \(\boldsymbol{\alpha}=\boldsymbol{0}\) \[L^{\boldsymbol{0}}_{i} u \coloneq \displaystyle\sum_{j\in\mathcal{N}_i} u_{j}w^{\boldsymbol{0}}_{ji} \approx \boldsymbol{C}^{\boldsymbol{0}} \cdot \left.\boldsymbol{D} u\right\rvert_{i} \label{eq:app:variations:lsmfm:funcion32approx32general}\tag{35}\] Using ABFs given by Taylor monomials multiplied by an eighth-order RBF, \(\boldsymbol{W}_{ji} = \boldsymbol{X}_{ji} \varphi_{ji}\), and the moment matrix defined as before, the approximation of the function reads \[L^{\boldsymbol{0}}_{i} u = \displaystyle\sum_{j\in\mathcal{N}_i} u_{j} \boldsymbol{W}_{ji} \cdot (\mathbf{M}_i^{-1} \boldsymbol{C}^{\boldsymbol{0}}), \label{eq:app:variations:lsmfm:funcion32approx}\tag{36}\] and taking the derivatives of the function approximation \[L^{\boldsymbol{\alpha}}_{i} u = \displaystyle\sum_{j\in\mathcal{N}_i} u_{j} \partial^{\boldsymbol{\alpha}} \left( \boldsymbol{W}_{ji} \cdot (\mathbf{M}_i^{-1} \boldsymbol{C}^{\boldsymbol{0}}) \right)\] gives the final approximation of the derivatives \[\boldsymbol{W}_{ji} \coloneq \boldsymbol{X}_{ji} \varphi_{ji}\] \[L^{\boldsymbol{\alpha}}_{i} u \coloneq \displaystyle\sum_{j\in\mathcal{N}_i} u_{j} \left( \partial^{\boldsymbol{\alpha}} \boldsymbol{X}_{ji} \varphi_{ji} \cdot \mathbf{M}_{i}^{-1} \boldsymbol{C}^{\boldsymbol{0}} + \boldsymbol{X}_{ji} \partial^{\boldsymbol{\alpha}} \varphi_{ji} \cdot \mathbf{M}_{i}^{-1} \boldsymbol{C}^{\boldsymbol{0}} + \boldsymbol{X}_{ji} \varphi_{ji} \cdot \partial^{\boldsymbol{\alpha}} \mathbf{M}_{i}^{-1} \boldsymbol{C}^{\boldsymbol{0}} \right), \quad \mathbf{M}_{i} \coloneq \displaystyle\sum_{j\in\mathcal{N}_i} \boldsymbol{X}_{ji}\boldsymbol{W}_{ji}^{\top}. \label{eq:app:variations:lsmfm:ddo32approx}\tag{37}\] which corresponds to formula 17 of the RKCM method introduced in section 1.4.

1.7.3 Generalized Moving Least Squares (GMLS)

Among these strong-form methods constructed as variations on MLS, the Generalized Moving Least Squares (GMLS) method proposed in [31]–[34] is a very popular one. The approach used in GMLS is based on the minimization of the approximation error but, in contrast to GFDM and section 4, it focuses on the reconstruction of the target function, which is then used to compute the desired derivatives.

GMLS comes with a more thorough mathematical formalism and rigor. The method is also presented as a more general tool, since it considers arbitrary bounded linear operators. In the following, we present a derivation of the GMLS method, emphasizing the precise classification of the individual terms, in contrast to the derivations and methods presented above. The notation introduced above is used for the derivation, while the connection to the classical notation usually used in GMLS is given at the end of the section.

Assume the function \(u(\boldsymbol{x})\) to be \(m\) times differentiable, \(u \in C^m(\Omega)\), for some \(m \ge 0\). Moreover, assume that \(u\) can be characterized by a collection of scattered, so-called sampling functionals \(\left\{ \lambda_j \right\}_{j=0}^{N} \subset C^m(\Omega)^*\), with data \(\Lambda( u) = \left\{ \lambda_j( u)\right\}_{j=0}^{N}\), where \(C^m(\Omega)^*\) is the dual space of \(C^m(\Omega)\). Usually, these sampling functionals are chosen as simple evaluations of \(u\) at given points, i.e. \(\lambda_i ( u) = \delta_{\boldsymbol{x}_i}[ u ] = u( \boldsymbol{x}_i)\), or simply \(\lambda_i ( u) = u_i\).

For a given evaluation functional \(\lambda \in C^m(\Omega)^*\), the problem of finding an approximation \(\lambda(u)^h\) of the function value \(\lambda(u)\) in the context of GMLS consists of searching for a linear function of the data \(\left\{ \lambda_j( u)\right\}_{j=0}^{N}\):

\[\lambda(u)^h \coloneq \displaystyle\sum_{j\in\mathcal{N}_i} \lambda_j( u)w_{j}(\lambda). \label{eq:general95approx95operator95gmls}\tag{38}\]

Now, we consider a linear operator \(\tau_{\boldsymbol{x}}\) at location \(\boldsymbol{x}\). In our case, we are interested in differential operators, \(\tau_i = \left.\partial^{\boldsymbol{\alpha}} u\right\rvert_{i} = \boldsymbol{C}^{\boldsymbol{\alpha}} \cdot \left.\boldsymbol{D} u\right\rvert_{i}\), but we keep in mind that, in practice, this can be an arbitrary linear operator. We aim for an approximation \(\tau_i^h\) of the operator \(\tau_i\), which we seek in the form \[\tau_i^h u = \displaystyle\sum_{j\in\mathcal{N}_i} \lambda_{j}( u) \widetilde{w}_{j}(\lambda). \label{eq:tau95approx95operator95gmls}\tag{39}\] Alternatively, the solution to the function approximation problem 38 can be expressed in matrix form (see Appendix [sec:appendix-matrixForm]) using a vector of coefficients \(\widetilde{\boldsymbol{\Psi}}(u^*)\) as

\[\lambda(u)^h = \lambda(u^*) = \boldsymbol{P}(\boldsymbol{x})^{\top} \widetilde{\boldsymbol{\Psi}}(u^*)\] where \(\boldsymbol{P}(\mathbf{x})\) denotes the vector of basis functions and \(u^*\) is the solution to the local weighted \(\ell_2\)-optimization problem over a finite-dimensional subspace \(\mathcal{V}_h \subset \mathcal{V}\). For \(u^* \in \mathcal{V}_h\), we need to solve \[u^* = \mathop{\mathrm{argmin}}\limits_{q \in \mathcal{V}_h} \displaystyle\sum_{j\in\mathcal{N}_i} \left( \lambda_j\left( u\right) - \lambda_j\left(q\right) \right)^2 \varphi_{ji}. \label{eq:gmls95l295optim95problem}\tag{40}\] where \(\mathcal{V}_h = \text{span}\{ P_1, ..., P_{\text{dim}(\mathcal{V}_h)} \}\) and \(P_i\) denotes the individual basis functions of the basis \(\boldsymbol{P}\). The dimension \(\dim({\mathcal{V}_h}) = p\) depends on the desired order of approximation \(m\).

To obtain an approximation \(\tau_i^h\) of the operator \(\tau_i\), we compute derivatives of the vector of basis functions \(\boldsymbol{P}\) while keeping the vector of coefficients \(\widetilde{\boldsymbol{\Psi}}(u^*)\) constant. This approach has been connected to the notion of diffuse derivatives (see [33]). For further details on diffuse derivatives and their convergence to exact derivatives, see [15], [29], [35]–[37].

\[\tau_i^h u = \tau_i (\boldsymbol{P})^{\top} \widetilde{\boldsymbol{\Psi}}(u). \label{eq:gmls95diffuse95derivative}\tag{41}\]

A few notes about the formulas above. First, in the minimization problem 40 , one of the well-known radial functions \(\varphi_{ji}\) is used to weight the squared error. Second, in contrast to the optimization problem presented in section 4, here we focus on minimizing the error of the approximation of the target function instead of the error of a particular approximation of the differential operator.

The two operator approximations 39 and 41 are connected by the relation \[\tau_i^h u = \tau_i (\boldsymbol{P})^{\top} \widetilde{\boldsymbol{\Psi}}(u^*) = \displaystyle\sum_{j\in\mathcal{N}_i} \lambda_{j}( u) \widetilde{w}_{j}(\lambda).\] Since we focus on differential operators and \(\tau_i = \left.\partial^{\boldsymbol{\alpha}} u\right\rvert_{i} \approx \boldsymbol{C}^{\boldsymbol{\alpha}} \cdot \left.\boldsymbol{D} u\right\rvert_{i}\), the resulting approximation corresponds to our \(L_{i}^{\boldsymbol{\alpha}} u\), and we would like to express the result in the form \[L^{\boldsymbol{\alpha}}_{i}u \coloneq \displaystyle\sum_{j\in\mathcal{N}_i} u_{j}w^{\boldsymbol{\alpha}}_{ji} \label{eq:gmls95gddo}\tag{42}\] The procedure to obtain the coefficient vector is the same as presented in section 4. Following the formal minimization, we end up with the approximation of the differential operator in the form \[L_{i}^{\boldsymbol{\alpha}}( u) = \partial^{\boldsymbol{\alpha}}\boldsymbol{P}(\mathbf{0}) \cdot \displaystyle\sum_{j\in\mathcal{N}_i} \varphi_{ji} \lambda_j(u) \left( \displaystyle\sum_{k\in\mathcal{N}_i} \lambda_k(\boldsymbol{P}_i) \lambda_k(\boldsymbol{P}_i)^{\top} \varphi_{ki} \right)^{-1} \lambda_j(\boldsymbol{P}_{i}). \label{eq:gmls95ld95approximation95raw}\tag{43}\] Since we assume that the functional \(\lambda_j = \delta_{\boldsymbol{x}_j}\) quantifies the point value of the variable at point \(j\) (one should beware, however, that it may in principle be a more complex functional), following the notation introduced above, we can simply write \(\lambda_j(\boldsymbol{P}_{i}) = \boldsymbol{P}(\boldsymbol{x}_j - \boldsymbol{x}_i) = \boldsymbol{P}_{ji}\) (again, \(\boldsymbol{P}_{ji} \neq \boldsymbol{P}_j - \boldsymbol{P}_i\)) and \(\lambda_j( u) = u_j\). Next, we can see that the first term of the dot product is the mapping vector \(\partial^{\boldsymbol{\alpha}}\boldsymbol{P}(\mathbf{0}) = \boldsymbol{C}^{\boldsymbol{\alpha}}\), where \(\partial^{\boldsymbol{\alpha}}\boldsymbol{P}(\mathbf{0})\) originates from applying the differential operator \(\partial^{\boldsymbol{\alpha}}\) to \(\boldsymbol{P}_i\) evaluated at the point \(\mathbf{x}_i\), which can be written as \(\lambda_i(\boldsymbol{P}_i) = \boldsymbol{P}(\mathbf{0})\). The bracket is nothing else than the moment matrix \[\mathbf{M}_i = \displaystyle\sum_{k\in\mathcal{N}_i} \lambda_k(\boldsymbol{P}_i) \lambda_k(\boldsymbol{P}_i)^{\top} \varphi_{ki}. \label{eq:gmls95momentum95matrix}\tag{44}\] In the context of the vector of ABFs, the basis \(\boldsymbol{P}\) is the building block for the vector \(\boldsymbol{W}_{ji}\). So far, we have mostly assumed Taylor monomials (i.e. \(\boldsymbol{P}_{ji} = \boldsymbol{X}_{ji}\)), and, for example, in the case of LABFM in 1.1, \(\boldsymbol{P}\) was represented by Hermite or Legendre polynomials. As in the previous cases, the vector of ABFs is given as \[\boldsymbol{W}_{ji} = \boldsymbol{P}_{ji} \varphi_{ji}. \label{eq:gmls95w95basis}\tag{45}\] Using 45 , the moment matrix can be rewritten as \[\mathbf{M}_i = \displaystyle\sum_{j\in\mathcal{N}_i}\boldsymbol{P}_{ji} \boldsymbol{W}_{ji}^{\top}. \label{eq:gmls95m95matrix}\tag{46}\]

Using the symmetry of the moment matrix, we can pull the vector \(\boldsymbol{C}^{\boldsymbol{\alpha}}\) inside the summation and swap the matrix multiplication, resulting in the approximation of the differential operator given by \[\boldsymbol{W}_{ji} = \boldsymbol{P}_{ji} \varphi_{ji}.\] \[L^{\boldsymbol{\alpha}}_{i} u = \displaystyle\sum_{j\in\mathcal{N}_i} u_{j} \boldsymbol{W}_{ji} \cdot (\mathbf{M}_i^{-1} \boldsymbol{C}^{\boldsymbol{\alpha}}), \quad \mathbf{M}_{i} = \displaystyle\sum_{j\in\mathcal{N}_i}\boldsymbol{P}_{ji} \boldsymbol{W}_{ji}^{\top}. \label{eq:gmls95ld95approximation}\tag{47}\] which is almost the same result as for the methods above. There are, however, slight nuances. As in the case of GFDM 1.6 and RKCM 1.4, the resulting matrix is always symmetric, since it is formed by the product of the basis \(\boldsymbol{P}_{ji}\) with itself. This is a consequence of the derivation via \(\ell_2\) error minimization. In contrast, DC-PSE 1.3 and LABFM 1.1 allow the matrix to be chosen in other ways. Moreover, in contrast to GFDM, where we start from a Taylor expansion and the ABFs are formed by Taylor monomials, here we have the choice of the \(\boldsymbol{W}_{ji}\) functions, and hence of the basis functions \(\boldsymbol{P}_{ji}\).

For the basis \(\boldsymbol{P}_{ji}\), bivariate polynomials are among the popular choices. In the literature, orthogonal polynomials such as Legendre, Laguerre, or Hermite polynomials are also mentioned. Taylor monomials are an option as well.

When solving for the operator approximation \(\tau_i^h\) in the matrix formulation 41 , the resulting equation 43 can be expressed as \[\tau_i^h u = \tau_i (\boldsymbol{P})^{\top} \left( \Lambda_i(\boldsymbol{P})^{\top} \mathbf{V}_i \Lambda_i(\boldsymbol{P}) \right)^{-1} \Lambda_i(\boldsymbol{P})^{\top} \mathbf{V}_i \Lambda_i( u)\] where \(\Lambda_i( u) = \left\{ \lambda_j( u)\right\}_{j=0}^{N} \in \mathbb{R}^{N}\) stacks all the values from the neighboring points \(\mathcal{N}_i\), similarly to equation [eq:appendix-introduction-of-matrix-formalism], and \(\mathbf{V}_i\) is the diagonal matrix that stores the RBF values for all neighbors, defined by equation [eq:l2e:appendix_matrixform_wmatrix].
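A minimal sketch of this matrix form in 1D, with a monomial basis, a Gaussian \(\mathbf{V}_i\), and the second derivative as the target operator \(\tau_i\); all names and the test setup are our own assumptions:

```python
import numpy as np

def gmls_apply(tauP0, xi, xs, us, h, m=2):
    """Sketch of the GMLS matrix form (end of sec. 1.7.3), 1D, monomials.
    tauP0: target operator applied to the basis at xi (e.g. P''(0)).
    Lambda_i(P) rows are lambda_j(P) = P(x_j - x_i); V holds RBF values."""
    P = lambda d: np.array([d**k for k in range(m + 1)])   # basis, p = m+1
    d = xs - xi
    LP = np.stack([P(dj) for dj in d])                     # Lambda_i(P), N x p
    V = np.exp(-(d / h)**2)                                # diag of V_i
    Mmat = LP.T @ (V[:, None] * LP)                        # Lambda^T V Lambda
    return tauP0 @ np.linalg.solve(Mmat, LP.T @ (V * us))  # tau(P)^T Psi(u*)

xi, h = 0.0, 0.3
xs = np.linspace(-0.3, 0.3, 11)
us = np.exp(xs)                       # target: d2/dx2 exp(x) at 0 is 1
d2P0 = np.array([0.0, 0.0, 2.0])      # second derivative of [1, d, d^2] at 0
print(gmls_apply(d2P0, xi, xs, us, h))  # ~ 1.0
```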

1.7.4 Compact Moving Least Squares (CMLS)

Compact Moving Least Squares (CMLS), introduced by Trask [38], builds on the standard MLS approach and enriches it with procedures for constructing compact finite difference schemes [39], [40]. The strategy is to take the formulation of the problem being solved into account when calculating the coefficients of the weight functions. In practice, this is achieved by augmenting the minimization functional from which we derive the reconstruction with additional conditions that reflect the underlying problem. This aims to reduce the size of the stencils and keep the moment matrices invertible while using a smaller number of neighbors than is required in the standard case.

First of all, assume the standard MLS approach [15], or simply the same method as described in the previous section 1.7.3. Assume, for example, the following linear boundary value problem \[\begin{align} \mathcal{L}_{\Omega} u &= f, \qquad \forall \boldsymbol{x} \in \Omega \\ \mathcal{L}_{\partial\Omega} u &= g, \qquad \forall \boldsymbol{x} \in \partial\Omega \end{align}\] In the case of CMLS, the minimization problem for the optimal polynomial reconstruction 40 is enhanced by additional penalization terms which reflect the problem being solved. For \(u^* \in \mathcal{V}_h\), we need to solve \[u^* = \mathop{\mathrm{argmin}}\limits_{q \in \mathcal{V}_h} \displaystyle\sum_{j\in\mathcal{N}_i} \left[ \left( \lambda_j\left( u\right) - \lambda_j\left(q\right) \right)^2 + \varepsilon_{\Omega} \left( f_j - \mathcal{L}_{\Omega} q \right)^2 + \chi_{j,\partial\Omega} \varepsilon_{\partial\Omega} \left( g_j - \mathcal{L}_{\partial\Omega} q \right)^2 \right] \varphi_{ji}. \label{eq:cmls95l295optim95problem}\tag{48}\] where \(\varepsilon_{\Omega}\) and \(\varepsilon_{\partial\Omega}\) stand for the regularization parameters, and \(\chi_{j,\partial\Omega}\) is the characteristic function, with \(\chi_{j,\partial\Omega} = 1\) if \(\boldsymbol{x}_j \in \partial\Omega\) and \(\chi_{j,\partial\Omega} = 0\) otherwise. The first penalty term is the same as in the previous case (or as in standard MLS) and is responsible for the interpolation quality of the reconstruction. The second penalty term takes into account the correspondence between the reconstruction and the governing equation, and the third penalty term reflects how faithful the reconstruction is to the boundary conditions. The regularization parameters are used to tune the significance of the individual terms. To enforce the boundary conditions, an additional constraint is necessary, which requires that the reconstruction satisfies the boundary conditions exactly \[\mathcal{L}_{\partial\Omega}q_i = g_i, \qquad \forall i:\; \boldsymbol{x}_i \in \partial\Omega.\]

The optimization problem 48 is solved in the same fashion as detailed in sections 4, 5 and [sec:gl2p]. The structure of the moment matrix is more complex in this case, as it includes contributions from the additional penalty terms. \[\boldsymbol{W}_{ji} = \boldsymbol{P}_{ji} \varphi_{ji}\] \[L^{\boldsymbol{\alpha}}_{i} u = \displaystyle\sum_{j\in\mathcal{N}_i} \left( u_{j} \boldsymbol{W}_{ji} + \varepsilon_{\Omega} (\mathcal{L}_{\Omega} \boldsymbol{P}_{ji}) f_j \varphi_{ji} + \chi_{j,\partial\Omega} \varepsilon_{\partial\Omega} (\mathcal{L}_{\partial\Omega} \boldsymbol{P}_{ji}) g_j \varphi_{ji} \right) \cdot (\mathbf{M}_i^{-1} \boldsymbol{C}^{\boldsymbol{\alpha}}), \label{eq:cmls95discrete95diff95operator95no95enforced95bc}\tag{49}\] \[\mathbf{M}_i = \displaystyle\sum_{j\in\mathcal{N}_i} \left( \boldsymbol{P}_{ji} \boldsymbol{W}_{ji}^{\top} + \varepsilon_{\Omega} (\mathcal{L}_{\Omega} \boldsymbol{P}_{ji})(\mathcal{L}_{\Omega} \boldsymbol{P}_{ji}^{\top}) \varphi_{ji} + \chi_{j,\partial\Omega} \varepsilon_{\partial\Omega} (\mathcal{L}_{\partial\Omega} \boldsymbol{P}_{ji})(\mathcal{L}_{\partial\Omega} \boldsymbol{P}_{ji}^{\top}) \varphi_{ji} \right)\]

To enforce the boundary conditions exactly, following [38], we need to include the boundary-condition constraint in the matrix \(\mathbf{M}_i\) and in the right-hand-side vector. Following the notation introduced in section 4, we write equation 49 as \[L_{i}^{\boldsymbol{\alpha}}u = \boldsymbol{C}^{\boldsymbol{\alpha}} \cdot \left.\boldsymbol{D}u\right\rvert_{i}\] where the derivative coefficients \(\left.\boldsymbol{D}u\right\rvert_{i}\) are obtained as the solution of the system of linear equations \(\mathbf{M}_i \left.\boldsymbol{D}u\right\rvert_{i} = \boldsymbol{R}_{i}\), where \[\boldsymbol{R}_{i} \coloneq \displaystyle\sum_{j\in\mathcal{N}_i} \left( u_{j} \boldsymbol{W}_{ji}^{\top} + \varepsilon_{\Omega} (\mathcal{L}_{\Omega} \boldsymbol{P}_{ji}^{\top}) f_j \varphi_{ji} + \chi_{j,\partial\Omega} \varepsilon_{\partial\Omega} (\mathcal{L}_{\partial\Omega} \boldsymbol{P}_{ji}^{\top}) g_j \varphi_{ji} \right).\] Including the boundary-condition constraint, the problem takes the form \[\widetilde{\mathbf{M}_i} \coloneq \begin{bmatrix} \mathbf{M}_i & \mathcal{L}_{\partial\Omega}\boldsymbol{P}_{ij} \\ \mathcal{L}_{\partial\Omega}\boldsymbol{P}_{ij}^{\top} & 0 \end{bmatrix}, \quad \widetilde{\left.\boldsymbol{D}u\right\rvert_{i}} \coloneq \left[ \left.\boldsymbol{D}u\right\rvert_{i}, \lambda \right], \quad \widetilde{\boldsymbol{R}_{i}} \coloneq \left[ \boldsymbol{R}_{i}, g_i \right],\] where \(\lambda\) is the Lagrange multiplier of the constraint. Solving the system of linear equations \(\widetilde{\mathbf{M}_i} \widetilde{\left.\boldsymbol{D}u\right\rvert_{i}} = \widetilde{\boldsymbol{R}_{i}}\), we obtain the approximation \[L_{i}^{\boldsymbol{\alpha}}u = \boldsymbol{C}^{\boldsymbol{\alpha}} \cdot \widetilde{\left.\boldsymbol{D}u\right\rvert_{i}}\] for which \(L_{i}^{\mathcal{L}_{\Omega}}u = f_i\) for all \(i: \; \mathbf{x}_i \in \Omega \cup \partial \Omega\).

Formally, we can write the resulting scheme in a general form as \[\boldsymbol{W}_{ji} = \boldsymbol{P}_{ji} \varphi_{ji}\] \[L^{\boldsymbol{\alpha}}_{i} u = \displaystyle\sum_{j\in\mathcal{N}_i} \left( u_{j} \boldsymbol{W}_{ji} + \boldsymbol{Q}_{ji} \right) \cdot (\mathbf{M}_i^{-1} \boldsymbol{C}^{\boldsymbol{\alpha}}), \quad \mathbf{M}_{i} = \displaystyle\sum_{j\in\mathcal{N}_i}\boldsymbol{P}_{ji}\boldsymbol{W}_{ji}^{\top} + \mathbf{Q}_i \label{eq:cmls95discrete95diff95operator}\tag{50}\] where \(\boldsymbol{Q}_{ji} \in \mathbb{R}^{p}\) and \(\mathbf{Q}_i \in \mathbb{R}^{p \times p}\) represent the additional terms originating from the problem being solved.

In [38], the authors report that this can reduce the number of required neighbors by up to half compared to the standard approach for particles inside the domain.
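To illustrate the structure of 49 – 50 , the following sketch implements a CMLS-style penalized reconstruction for the 1D Poisson problem at an interior node (no boundary row); the basis, the regularization value, and all names are our own illustrative assumptions, not the formulation of [38]:

```python
import numpy as np

def cmls_weights_1d(xi, xs, h, eps=0.1):
    """Sketch of a CMLS-style penalized reconstruction (cf. Eqs. (48)-(50))
    for the 1D Poisson problem u'' = f at an interior node. Returns the
    weights applied to the data pairs (u_j, f_j); order-2 monomial basis
    and Gaussian RBF assumed."""
    P = lambda d: np.array([1.0, d, 0.5 * d**2])   # reconstruction basis
    Lq = np.array([0.0, 0.0, 1.0])                 # q'' in this basis
    M = np.zeros((3, 3))
    bu, bf = [], []                                # per-neighbour rhs blocks
    for xj in xs:
        d = xj - xi
        phi = np.exp(-(d / h)**2)
        M += (np.outer(P(d), P(d)) + eps * np.outer(Lq, Lq)) * phi
        bu.append(P(d) * phi)                      # multiplies u_j
        bf.append(eps * Lq * phi)                  # multiplies f_j
    C = np.array([0.0, 0.0, 1.0])                  # select u'' at xi
    Minv = np.linalg.inv(M)
    wu = np.array([C @ (Minv @ b) for b in bu])
    wf = np.array([C @ (Minv @ b) for b in bf])
    return wu, wf                                  # L u ~ wu.u + wf.f

# consistency check with u = sin(x), f = u'' = -sin(x)
xi = 0.4
xs = xi + np.linspace(-0.2, 0.2, 5)
wu, wf = cmls_weights_1d(xi, xs, h=0.2)
print(wu @ np.sin(xs) + wf @ (-np.sin(xs)))   # ~ -sin(0.4) = -0.389
```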

1.7.5 Modified Moving Least Squares (MMLS)

Modified Moving Least Squares (MMLS), introduced by Joldes [41], [42], like CMLS 1.7.4, brings the idea of applying a Tikhonov–Miller-like regularization to allow the use of higher-order polynomial bases under less restrictive conditions on the stencil size, i.e., the number of neighbors needed for the interpolation and their distribution, compared to the standard MLS method. Additionally, the method introduces constraints for the situation when the classical MLS moment matrix is singular (i.e., there exist infinitely many solutions), ensuring that the solution with zero coefficients for the higher-order monomials in the basis is favoured. This has the practical effect of decreasing the order of the polynomial interpolation, but preserves the well-posedness of the problem.

As in the previous case with CMLS, these properties are achieved by modifying the approximation-error functional. In contrast to the previous case, however, in MMLS this modification does not take the system of governing equations into account; instead, the method adds terms based on the coefficients of the polynomial basis functions.

Assume a modification of the minimization problem 40 to the form \[u^* = \mathop{\mathrm{argmin}}\limits_{q \in \mathcal{V}_h} \displaystyle\sum_{j\in\mathcal{N}_i} \left[ \left( \lambda_j\left( u\right) - \lambda_j\left(q\right) \right)^2 \right] \varphi_{ji} + \boldsymbol{\mu}_s\cdot \left( \hat{\boldsymbol{\Psi}}_{s,i} \right)^2. \label{eq:mmls95l295optim95problem}\tag{51}\] where \(\boldsymbol{\mu}\) is a vector containing positive regularization parameters for the additional constraints and \(\hat{\boldsymbol{\Psi}}_{s,i}\) is the subset of the coefficients \(\boldsymbol{\Psi}_i^{\boldsymbol{\alpha}}\) connected with the higher-order terms. To make this clear, for example using a second-order basis \(m = 2\) in 2D, we have \[u^* = \mathop{\mathrm{argmin}}\limits_{q \in \mathcal{V}_h} \displaystyle\sum_{j\in\mathcal{N}_i} \left[ \left( \lambda_j\left( u\right) - \lambda_j\left(q\right) \right)^2 \right] \varphi_{ji} + \mu_{x^2} \left( \Psi_{i,x^2} \right)^2 + \mu_{xy} \left( \Psi_{i,xy} \right)^2 + \mu_{y^2} \left( \Psi_{i,y^2} \right)^2 . \label{eq:mmls95l295optim95problem95example95with95m2}\tag{52}\]

By applying small positive (\(\ll 1\)) coefficients \(\boldsymbol{\mu}\) to the higher-order terms \(\hat{\boldsymbol{\Psi}}_{s,i}\), the strategy is to obtain, in the case where the matrix \(\mathbf{M}_i\) tends to be singular, a solution with zero coefficients for these higher-order terms. Choosing the coefficients small enough ensures that the solution remains nearly identical to the standard MLS solution when the moment matrix is not singular. Let us add that only a few comments on the specific choice of the regularization parameters \(\boldsymbol{\mu}\) can be found. In [42], the authors use a constant value \(\mu = 10^{-7}\).

The optimization problem 51 is solved in the same way as presented in detail in sections 4, 5 and 6. For MMLS, the moment matrix is obtained in the form \[\widetilde{\mathbf{M}}_i = \displaystyle\sum_{j\in\mathcal{N}_i} \boldsymbol{P}_{ji} \boldsymbol{W}_{ji}^{\top} + \mathbf{H}_{\mu} = \mathbf{M}_i + \mathbf{H}_{\mu} \label{eq:mmls95momentum95matrix}\tag{53}\] where \(\mathbf{H}_{\mu} = \mathrm{diag}( \boldsymbol{0}_{p-s}, \boldsymbol{\mu}_s ) \in \mathbb{R}^{p\times p}\), with \(\boldsymbol{0}_{p-s} \in \mathbb{R}^{p-s}\) representing a zero vector and \(\boldsymbol{\mu}_s \in \mathbb{R}^s\) representing the regularization coefficients. For the ABFs, same as above, arbitrary polynomials \(\boldsymbol{P}_{ji}\) multiplied by a suitable radial weight function \(\varphi_{ji}\) can be used. We end up with the resulting scheme \[\boldsymbol{W}_{ji} = \boldsymbol{P}_{ji} \varphi_{ji}. \label{eq:mmls95general95abfs}\tag{54}\] \[L^{\boldsymbol{\alpha}}_{i} u = \displaystyle\sum_{j\in\mathcal{N}_i} u_{ji} \boldsymbol{W}_{ji} \cdot (\widetilde{\mathbf{M}}_i^{-1} \boldsymbol{C}^{\boldsymbol{\alpha}}), \quad \widetilde{\mathbf{M}}_{i} = \displaystyle\sum_{j\in\mathcal{N}_i}\boldsymbol{P}_{ji}\boldsymbol{W}_{ji}^{\top} + \mathbf{H}_{\mu}. \label{eq:mmls95discrete95diff95operator}\tag{55}\] with the symmetric modified moment matrix \(\widetilde{\mathbf{M}_i}\) built using arbitrary basis functions. The matrix \(\widetilde{\mathbf{M}_i}\) generally has the same rank as the matrix \(\mathbf{M}_i\) from equation 46 , but requires fewer neighboring nodes to reach it.
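The effect of \(\mathbf{H}_{\mu}\) can be seen in a short 1D sketch: with only two distinct nodes, the standard \(m = 2\) moment matrix (\(p = 3\)) is singular, while the MMLS matrix is not, and the solve gracefully falls back to the lower-order weights. All names and values below are our own illustrative assumptions:

```python
import numpy as np

def mmls_dx_weights(xi, xs, h, mu=1e-7, m=2):
    """Sketch of the MMLS solve (Eq. (55)) in 1D for d/dx: the standard
    MLS moment matrix is augmented by the diagonal H_mu acting only on
    the highest-order coefficient. Monomial basis, Gaussian RBF assumed."""
    P = lambda d: np.array([d**k for k in range(m + 1)])
    M = np.zeros((m + 1, m + 1))
    Ws = []
    for xj in xs:
        d = xj - xi
        W = P(d) * np.exp(-(d / h)**2)
        M += np.outer(P(d), W)
        Ws.append(W)
    H = np.zeros(m + 1)
    H[-1] = mu                                   # regularize top order only
    C = np.zeros(m + 1)
    C[1] = 1.0                                   # mapping vector for d/dx
    psi = np.linalg.solve(M + np.diag(H), C)
    return np.array([W @ psi for W in Ws])

# two nodes: plain MLS is singular here, MMLS yields the first-order stencil
w = mmls_dx_weights(0.0, np.array([-0.1, 0.1]), h=0.1)
print(w @ np.sin(np.array([-0.1, 0.1])))   # ~ cos(0) = 1
```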

1.7.6 Other variants based on MLS: Regularized Moving Least Squares (RMLS), Interpolating Moving Least Squares (IMLS), Improved Interpolating Moving Least Squares (IIMLS), Interpolating Modified Moving Least Squares (IMMLS) and more

Beyond the MLS-based methods introduced so far, a large number of modifications and improvements exist, and some of them are also adopted as strong-form collocation schemes. We do not discuss these methods in detail, but briefly mention their differences and refer to more detailed descriptions.

Other regularization techniques were presented as Regularized Moving Least Squares (RMLS) in [43]. These are close to MMLS but are based directly on Tikhonov-Miller regularization. The group of methods named Interpolating Moving Least Squares (IMLS), introduced by Lancaster [22], adds the requirement that the weight functions satisfy \(w_{ji} \approx \delta_{ji}\). The interpolating property was also extended to MMLS, which is named Interpolating Modified Moving Least Squares (IMMLS) [44]. Another group of methods is called Improved Moving Least Squares (IMLS) [45] or Improved Interpolating Moving Least Squares (IIMLS) [46], [47], where the weight functions have the property of delta distributions, but any standard radial functions \(\varphi_{ji}\) can be used. In order to avoid ill-conditioned moment matrices, Adaptive Orthogonal IIMLS (AO-IIMLS) with additional extensions was proposed [48].

1.8 Finite Pointset Method (FPsM)↩︎

The Finite Pointset Method (FPsM) is based on the method of weighted least squares and was first introduced in [49], with further works applying the method to flow problems [50][55].

To construct the approximation of differential operators, FPsM proceeds similarly to GFDM, but it additionally incorporates the governing equation and boundary conditions as constraints. Consider a second-order differential model equation in 3D:

\[Au + \boldsymbol{B}\cdot \nabla u + C \Delta u = f(\boldsymbol{x}) \label{eq:general95second95order95eq}\tag{56}\]

where the coefficients \(A\), \(\boldsymbol{B}\), \(C\), and \(f(\boldsymbol{x})\) are given real-valued functions. To solve this equation with Dirichlet boundary conditions \(u = \phi\) or Neumann boundary conditions

\[\frac{\partial u}{\partial \boldsymbol{n}} = \phi \label{eq:general95second95order95eq95Neumann95BC}\tag{57}\]

we extend the definition of the matrix \(\boldsymbol{X}_{ji}\) of Taylor monomials defined in section [sec::com_definition] to add equations 56 and 57 as constraints in the form

\[Aa_0 + B_1 a_1 + B_2 a_2 + B_3 a_3 + C(a_4 + a_7 + a_9 ) = f\]

\[n_1 a_1 + n_2 a_2 + n_3 a_3 = \phi\]

where \(\boldsymbol{B} = (B_1 , B_2 , B_3 )\), \(\boldsymbol{n} = (n_1 , n_2 , n_3 )\), and the vector \(a\) contains the elements of the vector \(\boldsymbol{D}u\) defined in [eq:derivatives_vect], i.e.

\[\begin{align} a_1 &= \frac{\partial u}{\partial x}\\ a_2 &= \frac{\partial u}{\partial y}\\ &\;\;\vdots\\ a_9 &= \frac{\partial^2 u}{\partial z^2} \end{align}\]

The matrix \(\tilde{\mathbb{X}}_{i}\) in the matrix notation [eq:appendix-introduction-of-matrix-formalism] then becomes (see [sec:appendix-matrixForm] for more on the matrix notation)

\[\tilde{\mathbb{X}}_{i} = \begin{bmatrix} 1 & dx_1 & dy_1 & dz_1 & \frac{1}{2}dx_1^2 & dx_1 dy_1 & dx_1 dz_1 & \frac{1}{2} dy_1^2 & dy_1 dz_1 & \frac{1}{2} dz_1^2 \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\ 1 & dx_{N_i} & dy_{N_i} & dz_{N_i} & \frac{1}{2}dx_{N_i}^2 & dx_{N_i} dy_{N_i} & dx_{N_i} dz_{N_i} & \frac{1}{2} dy_{N_i}^2 & dy_{N_i} dz_{N_i} & \frac{1}{2} dz_{N_i}^2\\ A & B_1 & B_2 & B_3 & C & 0 & 0 & C & 0 & C\\ 0 & n_1 & n_2 & n_3 & 0 & 0 & 0 & 0 & 0 & 0 \end{bmatrix} \label{eq:FPsM95matrix95X}\tag{58}\]

Both the polynomial \(\ell_2\) minimization corresponding to the derivation introduced in section 5, used in [55], and the derivation based on the \(\ell_2\) minimization of the approximation error given in section 4, found in [49], can be used to construct the approximation. An analytical and numerical comparison of the numerical stability of FPsM and GFDM is given in [17].

Taking [52] as the reference work (although only the Laplace operator is considered there, this approach can be used for any particular differential operator \(d\)), we assume the approximate differential operator \[L^{\boldsymbol{\alpha}}_{i} u= \displaystyle\sum_{j\in\mathcal{N}_i} u_{j}w^{\boldsymbol{\alpha}}_{ji} \approx \boldsymbol{C}^{\boldsymbol{\alpha}} \cdot \left.\boldsymbol{D} u\right\rvert_{i}. \label{eq:FPsM95ddo}\tag{59}\] The weights are obtained, similarly to section 4, from the following minimization problem \[\begin{align} \tilde{\mathcal{J}}_{\mathcal{E},i} \coloneq \displaystyle\sum_{j\in\mathcal{N}_i} \tilde{\varphi}_{ji} \left( \tilde{e}_{ji}^{m+1} \right)^2 \quad \text{with respect to} \left.\boldsymbol{D}u\right\rvert_{i}, \addtocounter{equation}{1} \label{eq:FPsM95mc-optimizationformulation-extended2} \end{align}\tag{60}\]

where \[\begin{align} \tilde{e}_{N_i+1, i}^{m+1} &= Au + \boldsymbol{B}\cdot \nabla u + C \Delta u - f(\boldsymbol{x}_i)\\ \tilde{e}_{N_i+2, i}^{m+1} &= \frac{\partial u}{\partial \boldsymbol{n}} - \phi\\ \tilde{\varphi}_{N_i+1, i} &= 1\\ \tilde{\varphi}_{N_i+2, i} &= 1 \end{align}\]

The final scheme reads as:

\[\tilde{\mathbf{M}}_i = \displaystyle\sum_{j\in\mathcal{N}_i}\tilde{\boldsymbol{X}}_{ji}\left(\tilde{\boldsymbol{W}}_{ji}\right)^{\top}, \quad \tilde{\boldsymbol{W}}_{ji} = \tilde{\boldsymbol{X}}_{ji} \tilde{\varphi}_{ji},\] \[L^{\boldsymbol{\alpha}}_{i} u = \displaystyle\sum_{j\in\mathcal{N}_i} u_{ji} \tilde{\boldsymbol{W}}_{ji} \cdot \left( \tilde{\mathbf{M}}_i^{-1} \boldsymbol{C}^{\boldsymbol{\alpha}} \right)\] which is consistent with the methods and procedures presented above.
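As an illustration of how the constraint rows enter the least-squares system, the following minimal sketch (Python with NumPy; all names, the Gaussian weight, and the use of normal equations are our illustrative assumptions) builds the extended matrix 58 with one PDE row and one Neumann row and solves for all derivatives at a boundary node:

```python
import numpy as np

def taylor_row(dx, dy, dz):
    # Second-order Taylor monomials in 3D, one row of eq. (58)
    return np.array([1, dx, dy, dz, 0.5*dx*dx, dx*dy, dx*dz,
                     0.5*dy*dy, dy*dz, 0.5*dz*dz])

def fpsm_derivatives(xi, nbrs, u_nbrs, A, B, C, f, n, phi_bc, h):
    """Sketch: FPsM weighted least squares with PDE and Neumann constraints.

    Returns a = (u, u_x, u_y, u_z, u_xx, u_xy, u_xz, u_yy, u_yz, u_zz) at
    xi; A, B, C, f are the PDE data of eq. (56), n is the boundary normal,
    and phi_bc is the Neumann datum of eq. (57).
    """
    rows, rhs, wts = [], [], []
    for xj, uj in zip(nbrs, u_nbrs):
        d = xj - xi
        rows.append(taylor_row(*d))
        rhs.append(uj)
        wts.append(np.exp(-np.dot(d, d) / h**2))   # a generic weight
    # PDE constraint row: A a0 + B . (a1,a2,a3) + C (a4+a7+a9) = f
    rows.append(np.array([A, B[0], B[1], B[2], C, 0, 0, C, 0, C]))
    rhs.append(f); wts.append(1.0)
    # Neumann constraint row: n . (a1,a2,a3) = phi_bc
    rows.append(np.array([0, n[0], n[1], n[2], 0, 0, 0, 0, 0, 0]))
    rhs.append(phi_bc); wts.append(1.0)
    X = np.array(rows); b = np.array(rhs); Wd = np.diag(wts)
    # weighted normal equations: (X^T W X) a = X^T W b
    return np.linalg.solve(X.T @ Wd @ X, X.T @ Wd @ b)
```

At least eight neighbors are needed for the \(10\times 10\) system to be solvable; in practice, more neighbors are typically used.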

1.9 Least Squares Kinetic Upwind Method (LSKUM)↩︎

Another method that employs finite differences on scattered nodes is the Least Squares Kinetic Upwind Method (LSKUM), introduced by Ghosh and Deshpande [56], [57] and later extended to high-order schemes as q-LSKUM [58], [59] (also denoted q-KFVS, referring to Kinetic Flux Vector Splitting). This method is not just a tool for approximating derivatives: it was originally designed for obtaining numerical solutions of the Euler equations of gas dynamics and of the Navier–Stokes equations of a compressible, viscous, heat-conducting fluid. LSKUM uses a relation between the Boltzmann transport equation and the Euler equations of gas dynamics to solve inviscid compressible flow and works with the so-called kinetic flux vector splitting [60], [61] to compute the fluxes. It comes with a technique for discretizing derivatives at arbitrarily spaced points. Here, we focus only on the discretization of the spatial derivatives, and we refer to [56], [62] for details about how the discrete derivatives are employed.

LSKUM uses the \(\ell_2\) minimization of the approximation error, and the presented technique to obtain the approximation of derivatives is thus equivalent to the approach presented in section 4. The zeroth moment is not included.

Since the method relies on minimizing the approximation error of the Taylor expansion, when introducing LSKUM within the presented framework, the ABFs are given by Taylor monomials multiplied by a suitable weight function, \(\boldsymbol{W}_{ji} = \boldsymbol{X}_{ji} \varphi_{ji}\). Thus, the method for approximating the derivatives can be summarized as follows \[\boldsymbol{W}_{ji} = \boldsymbol{X}_{ji} \varphi_{ji}, \label{eq:lskum95w95taylor95monomials95with95rbf95recap}\tag{61}\] \[L^{\boldsymbol{\alpha}}_{i} u = \displaystyle\sum_{j\in\mathcal{N}_i} u_{ji} \boldsymbol{W}_{ji} \cdot (\mathbf{M}_i^{-1} \boldsymbol{C}^{\boldsymbol{\alpha}}), \quad \mathbf{M}_{i} = \displaystyle\sum_{j\in\mathcal{N}_i}\boldsymbol{X}_{ji} \boldsymbol{W}_{ji}^{\top}. \label{eq:lskum95discrete95diff95operator95recap}\tag{62}\] To recover the expressions for the first and second order from the original paper [56] (where the authors symbolically expressed the solution), equation 62 can be used with \(m = 1\) and \(m = 2\), respectively (see section [sec:com_definition]).

For the needs of LSKUM, not all the particles in \(\mathcal{N}_i\) are used to obtain the derivatives. Since the method builds on the upwind scheme, only half of the neighboring nodes of the considered node \(i\) are used along each of the coordinate axes. We denote this truncated neighborhood as \(\mathcal{N}_i^{\beta\pm}\), where \(\beta\) stands for a direction along the coordinate axes, i.e., \(\beta = x, y, \dots\), and the plus-minus sign denotes which half of the neighborhood is used. This indicates that the propagation of information from the neighboring nodes \(j\) to the receiving node \(i\) depends on the location of \(j\) in relation to \(i\) and on the signs of the imposed velocity \(v_x, v_y, \dots v_z\), thus ensuring that only those neighbors that influence the solution of the advection equation at \(i\) are included in the computation of the derivative operator.
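A minimal sketch of this upwind stencil selection in 1D (Python with NumPy; the function name and the Gaussian weight are illustrative assumptions, and \(m = 1\) is used for brevity):

```python
import numpy as np

def lskum_dudx_upwind(xi, x_nbrs, u_nbrs, ui, vx, h):
    """Sketch: first-order LSKUM derivative using an upwind half-stencil.

    For vx >= 0 only the upstream neighbors x_j < x_i (the set N_i^{x-})
    contribute, and vice versa for vx < 0.
    """
    dx = x_nbrs - xi
    mask = dx < 0 if vx >= 0 else dx > 0        # N_i^{x-} or N_i^{x+}
    dxs, dus = dx[mask], u_nbrs[mask] - ui      # offsets and differences u_ji
    phi = np.exp(-(dxs / h)**2)                 # a generic weight function
    # weighted least squares with m = 1: closed-form solution of eq. (62)
    return np.sum(phi * dxs * dus) / np.sum(phi * dxs * dxs)
```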

1.10 Kinetic Meshless Method (KMM)↩︎

The Kinetic Meshless Method (KMM) is largely based on LSKUM. It utilizes finite differences on scattered nodes in an effort to improve the accuracy and robustness of the original method. KMM was introduced by Praveen and Deshpande [63], [64], and similarly to LSKUM, the method was designed to solve compressible inviscid flows; the discretization of derivatives is only one particular step in the procedure (see the LSKUM section 1.9 and the original papers [63], [64]). The approximation of derivatives is based on the \(\ell_2\) minimization of the approximation error presented in section 4, and the zeroth moment is not included.

The approach to approximate derivatives is the same as we derived in section 4 and described in the LSKUM section 1.9. However, the procedure presented in the KMM paper is called the dual least-squares (DLS) method. The idea is to use a modified least-squares scheme in which mid-point function values \(u_{j/2} := u( ( \boldsymbol{x}_j + \boldsymbol{x}_i) / 2 )\) (i.e., fluxes) are used to introduce upwind convective stabilization. These mid-point values are used instead of the values \(u_j\) to approximate derivatives at the position \(\boldsymbol{x}_i\) by replacing \(u_{ji} = u_{j} - u_{i}\) in the Taylor expansion with \(u_{ji/2} = u_{j/2} - u_{i}\), thus avoiding the flux splitting employed before in LSKUM. Hence, similarly to all methods introduced before, it uses all nodes from the neighborhood \(\mathcal{N}_i\).

Additionally, in comparison to (q-)LSKUM, rotational invariance of the DLS approximation is proven, which simplifies setting up boundary conditions [64].

To introduce the upwind convective stabilization, the mid-point values \(u_{j/2}\) are approximated as follows:

\[u_{j/2} = \begin{cases} u_i, & \quad \boldsymbol{v} \cdot \hat{\boldsymbol{e}}_{ji} \ge 0 \\ u_j, & \quad \boldsymbol{v} \cdot \hat{\boldsymbol{e}}_{ji} < 0\, , \end{cases}\]

where \(\boldsymbol{v}\) is the imposed velocity and \(\hat{\boldsymbol{e}}_{ji}\) is the unit vector directed from \(\boldsymbol{x}_i\) to \(\boldsymbol{x}_j\).
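In code, this upwind selection of the mid-point value is a one-liner (a sketch in Python with NumPy; the function name is an illustrative assumption):

```python
import numpy as np

def kmm_midpoint_value(ui, uj, xi, xj, v):
    """Sketch: upwind mid-point value u_{j/2} of the KMM first-order
    reconstruction, selected by the sign of v . e_ji."""
    e_ji = (xj - xi) / np.linalg.norm(xj - xi)  # unit vector from i to j
    return ui if np.dot(v, e_ji) >= 0 else uj
```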

Due to the minimization of the approximation error, the structure of the ABFs is given by Taylor monomials multiplied by a suitable weight function, \(\boldsymbol{W}_{ji} = \boldsymbol{X}_{ji/2} \varphi_{ji}\), which leads to the approximation formula \[\boldsymbol{W}_{ji} = \boldsymbol{X}_{ji/2} \varphi_{ji}, \label{eq:kmm95w95taylor95monomials95with95rbf95recap}\tag{63}\] \[L^{\boldsymbol{\alpha}}_{i} u = \displaystyle\sum_{j\in\mathcal{N}_i} u_{ji/2} \boldsymbol{W}_{ji} \cdot (\mathbf{M}_i^{-1} \boldsymbol{C}^{\boldsymbol{\alpha}})\, , \quad \mathbf{M}_{i} = \displaystyle\sum_{j\in\mathcal{N}_i}\boldsymbol{X}_{ji/2} \boldsymbol{W}_{ji}^{\top}\, . \label{eq:kmm95discrete95diff95operator95recap}\tag{64}\]

Compared to the general approach taken in section 4, the method modifies the matrix \(\mathbf{M}\) and the vector \(\boldsymbol{X}\) to account for the halved node distance.

The method described so far corresponds in the original work [64] to a kinetic least-squares formulation, as opposed to a least-squares formulation at the macroscopic level. The latter is also derived and shown to have a higher order of approximation, but it is more prone to ill-conditioning.

For the higher-order approximation at the kinetic level, KMM utilizes the schemes from the seminal series Towards the ultimate conservative difference scheme [65][67], which form a one-parameter family known as the k-schemes, after the parameter \(k\). The fully one-sided second-order scheme, generated by \(k = -1\), is used to interpolate variables from the neighboring particles to the mid-points, where the flux values \(u_{j/2}\) are evaluated. This interpolation uses derivatives computed by the standard approximation given by equation 64 .

1.11 Particle Difference Method (PDM)↩︎

The Particle Difference Method (PDM) [68] (sometimes also called the Particle Derivative Approximation, PDA) is derived using the \(\ell_2\) minimization of the approximation error and exactly corresponds to the derivation introduced in section 4. In this case, the zeroth moment is included, and the resulting approximation is obtained by minimizing the functional \(\mathcal{J}_{\mathcal{E},i}\) and solving the optimization problem. Using this approach, all derivatives are obtained simultaneously

\[\left.\boldsymbol{D} u\right\rvert_{i} = \displaystyle\sum_{j\in\mathcal{N}_i} \varphi_{ji} u_{ji} \left( \mathbf{M}_i^{-1} \boldsymbol{X}_{ji} \right) \label{eq:pdm-approximation-all-derivatives}\tag{65}\] \[\mathbf{M}_i = \displaystyle\sum_{j\in\mathcal{N}_i}\boldsymbol{X}_{ji}\boldsymbol{W}_{ji}^{\top}, \quad \boldsymbol{W}_{ji} = \boldsymbol{X}_{ji} \varphi_{ji} .\]

To obtain the approximation of a particular derivative in the form of (2), the mapping vector \(\boldsymbol{C}^{\boldsymbol{\alpha}}\) is employed \[L^{\boldsymbol{\alpha}}_{i} u = \displaystyle\sum_{j\in\mathcal{N}_i} u_{ji} \boldsymbol{W}_{ji} \cdot \left( \mathbf{M}_i^{-1} \boldsymbol{C}^{\boldsymbol{\alpha}} \right).\]
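A minimal sketch of the simultaneous solve for all derivatives (Python with NumPy; the function name, the Gaussian weight, and the 2D quadratic basis are illustrative assumptions):

```python
import numpy as np

def pdm_all_derivatives(xi, nbrs, u_nbrs, ui, h):
    """Sketch: PDM in 2D with a quadratic basis including the zeroth-order
    term; returns the whole vector D u|_i at once, cf. eq. (65)."""
    X, W, du = [], [], []
    for xj, uj in zip(nbrs, u_nbrs):
        dx, dy = xj - xi
        row = np.array([1.0, dx, dy, 0.5*dx*dx, dx*dy, 0.5*dy*dy])
        phi = np.exp(-(dx*dx + dy*dy) / h**2)   # a generic weight function
        X.append(row)
        W.append(phi * row)                     # W_ji = X_ji * phi_ji
        du.append(uj - ui)                      # differences u_ji
    X, W, du = np.array(X), np.array(W), np.array(du)
    M = X.T @ W                                 # moment matrix of eq. (65)
    # leading entry is the zeroth-moment coefficient, followed by
    # (u_x, u_y, u_xx, u_xy, u_yy); a particular derivative is then
    # selected by dotting with the mapping vector C^alpha
    return np.linalg.solve(M, W.T @ du)
```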

An important contribution of [68] is the extension of this approach to model weak and strong discontinuities, the so-called EPDM or EPDA. Note also that the paper considers non-smooth \(\varphi_{ji}\) functions, as opposed to the standard bell-shaped functions.

1.12 Finite Difference Particle Method (FDPM)↩︎

The Finite Difference Particle Method (FDPM) [69] is derived using the \(\ell_2\) minimization of the approximation error and exactly corresponds to the derivation introduced in section 4. The authors essentially present this method as a GFDM-based scheme. To keep our review consistent, the zeroth moment is not included, which means that the resulting scheme is

\[\left.\boldsymbol{D} u\right\rvert_{i} = \displaystyle\sum_{j\in\mathcal{N}_i} \varphi_{ji} u_{ji} \left( \mathbf{M}_i^{-1} \boldsymbol{X}_{ji} \right) \label{eq:fdpm-approximation-all-derivatives}\tag{66}\] \[\mathbf{M}_i = \displaystyle\sum_{j\in\mathcal{N}_i}\boldsymbol{X}_{ji}\boldsymbol{W}_{ji}^{\top}, \quad \boldsymbol{W}_{ji} = \boldsymbol{X}_{ji} \varphi_{ji}.\]

To obtain the approximation of a particular derivative in the form of [eq:general_discrete_diff_operator_zm], the mapping vector \(\boldsymbol{C}^{\boldsymbol{\alpha}}\) is employed \[L^{\boldsymbol{\alpha}}_{i} u = \displaystyle\sum_{j\in\mathcal{N}_i} u_{ji} \boldsymbol{W}_{ji} \cdot \left( \mathbf{M}_i^{-1} \boldsymbol{C}^{\boldsymbol{\alpha}} \right).\]

The work [69] extends the standard scheme by introducing a dynamic method to update the variable smoothing length so that the number of neighboring particles remains relatively constant. In turn, the parameter \(h_i\) of the radial basis function \(\varphi_{ji}\) varies spatially, and the function becomes anisotropic.
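One possible form of such a dynamic update is sketched below (Python; the rescaling rule and the relaxation are our illustrative assumptions, not the specific formula of [69]):

```python
def update_smoothing_length(h_i, n_nbrs, n_target, relax=0.5):
    """Sketch: adapt the smoothing length h_i so the neighbor count stays
    near n_target; the square root assumes 2D (neighbors ~ h^2)."""
    h_new = h_i * (n_target / max(n_nbrs, 1)) ** 0.5
    return (1.0 - relax) * h_i + relax * h_new  # relaxed update for stability
```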

1.13 Meshless Finite Difference Method (MFDM)↩︎

The Meshless Finite Difference Method (MFDM) is not a new variant of meshfree finite difference methods but rather an umbrella term for a class of methods using the same principles, as used by Milewski in [70]. To cover all the various occurrences of the same or similar methods under different names, we decided to include MFDM as well, although it is not presented as a novel method. The article summarizes the discretization of differential operators and their use in methods with strong and weak formulations.

Considering the strong-form method, MFDM is similar to GFDM. The method is derived using the \(\ell_2\) minimization of the approximation error, which we presented in detail in section 4. The zeroth moment is included. Moreover, high-order extensions are presented for MFDM. However, MFDM brings a different approach to the high-order corrections [71], [72]: they are not obtained by simply zeroing out the higher-order terms up to a certain order \(m\), as we presented above.

MFDM is presented in a vector formalism similar to the one we introduced in the opening section 2. Starting from the formulation of the minimization problem, similarly to the summary presented in the GFDM section 1.6, we obtain ABFs given by Taylor monomials \(\boldsymbol{X}_{ji}\) multiplied by an appropriate RBF weight function \(\varphi_{ji}\), which originates in the minimization of the Taylor expansion approximation error. Following the derivation from section 4, we end up with the approximation of all derivatives up to order \(m\): with \[\boldsymbol{W}_{ji} = \boldsymbol{X}_{ji} \varphi_{ji}\, , \label{eq:mfdm95w95taylor95monomials95with95rbf}\tag{67}\] we can write \[\left.\boldsymbol{D} u\right\rvert_{i} = \displaystyle\sum_{j\in\mathcal{N}_i}\mathbf{M}_{i}^{-1}\boldsymbol{W}_{ji} u_{ji}, \quad \mathbf{M}_{i} = \displaystyle\sum_{j\in\mathcal{N}_i}\boldsymbol{X}_{ji}\boldsymbol{W}_{ji}^{\top}. \label{eq:mfdm95linSystem95with95w}\tag{68}\] To express the approximation of a particular differential operator \(d\), we include the mapping vector to select the desired derivatives \[\boldsymbol{C}^{\boldsymbol{\alpha}} \cdot \left.\boldsymbol{D} u\right\rvert_{i} = \displaystyle\sum_{j\in\mathcal{N}_i} \boldsymbol{C}^{\boldsymbol{\alpha}} \cdot \left( \mathbf{M}_{i}^{-1}\boldsymbol{W}_{ji} \right) u_{ji}, \label{eq:mfdm95selec95derivative}\tag{69}\] and, thanks to the symmetry of the moment matrix \(\mathbf{M}_i\), we can summarize the method as \[\boldsymbol{W}_{ji} = \boldsymbol{X}_{ji} \varphi_{ji} \label{eq:mfdm95w95taylor95monomials95with95rbf95recap}\tag{70}\] \[L^{\boldsymbol{\alpha}}_{i} u = \displaystyle\sum_{j\in\mathcal{N}_i} u_{ji} \boldsymbol{W}_{ji} \cdot (\mathbf{M}_i^{-1} \boldsymbol{C}^{\boldsymbol{\alpha}}), \quad \mathbf{M}_{i} = \displaystyle\sum_{j\in\mathcal{N}_i}\boldsymbol{X}_{ji}\boldsymbol{W}_{ji}^{\top}. \label{eq:mfdm95discrete95diff95operator95recap}\tag{71}\]

Although this approach is common to all methods presented so far, in order to obtain a high-order approximation, MFDM takes a different strategy than simply canceling higher-order terms by setting the approximation order \(m\) higher than the order \(L = |\boldsymbol{\alpha}|\) of the approximated derivative. The approach to treat high-order approximations is introduced in [70] and more recently in [71], [72]. The high-order approximation \(L^{\boldsymbol{\alpha},>L}_{i}\), denoted by the superscript \(>L\), is obtained by introducing a correction \(\Delta^{\boldsymbol{\alpha}}\) to the lower-order approximation \(L^{\boldsymbol{\alpha},{\leq L}}_{i}\), denoted by the superscript \({\leq L}\), \[L^{\boldsymbol{\alpha},>L}_{i} = L^{\boldsymbol{\alpha},{\leq L}}_{i} - \Delta_{i}^{\boldsymbol{\alpha}}.\] The correction factor \(\Delta_{i}^{\boldsymbol{\alpha}}\) is obtained as follows. Assuming the Taylor expansion [eq:taylor_series_zm], we can split the terms into lower-order and higher-order parts \[u_{j} = \boldsymbol{X}_{ji}^{{\leq L}}\cdot\left.\boldsymbol{D}^{\leq L} u\right\rvert_{i} + \boldsymbol{X}_{ji}^{>L}\cdot\left.\boldsymbol{D}^{>L} u\right\rvert_{i} + e_{ji}^{m+1}, \label{eq:mfmd95taylor95series95split}\tag{72}\] where \[\boldsymbol{X}_{ji}^{{\leq L}} = \left\{ x_{1,ji}^{\alpha_1} \ldots x_{n,ji}^{\alpha_n} / \boldsymbol{\alpha}! \right\}_{|\boldsymbol{\alpha}|=0}^L, \quad \boldsymbol{X}_{ji}^{>L} = \left\{ x_{1,ji}^{\alpha_1} \ldots x_{n,ji}^{\alpha_n} / \boldsymbol{\alpha}! \right\}_{|\boldsymbol{\alpha}|>L}^m,\] \[\boldsymbol{D}^{\leq L} u = \left\{ \partial^{\boldsymbol{\alpha}} u \right\}_{\boldsymbol{|\alpha|}=0}^L, \quad \boldsymbol{D}^{>L} u = \left\{ \partial^{\boldsymbol{\alpha}} u \right\}_{\boldsymbol{|\alpha|}>L}^m. \label{eq:mfdm95low95high95order95derivatives}\tag{73}\] In [72], the author states that usually \(L = |\boldsymbol{\alpha}|\) and in most cases \(m = 2L\).

Next, the partitioning into lower- and higher-order terms is also introduced in the optimization problem [eq:l2e:emin-optimizationformulation] to minimize the approximation error \[\begin{align} \widetilde{\mathcal{J}}_{\mathcal{E},i} = \displaystyle\sum_{j\in\mathcal{N}_i} \varphi_{ji} \left( e_{ji}^{m+1} \right)^2 = \displaystyle\sum_{j\in\mathcal{N}_i} \varphi_{ji} \left( \boldsymbol{X}_{ji}^{{\leq L}}\cdot\left.\boldsymbol{D}^{\leq L} u\right\rvert_{i} + \boldsymbol{X}_{ji}^{>L}\cdot\left.\boldsymbol{D}^{>L} u\right\rvert_{i} - u_{ji} \right)^2 \\ \mathrm{min} \left( \mathcal{J}_{\mathcal{E},i} \right) \quad \text{with respect to} \left.\boldsymbol{D}^{\leq L} u\right\rvert_{i}. \addtocounter{equation}{1} \label{eq:mfdm95emin-optimizationformulation} \end{align}\tag{74}\] Following the formal minimization \(\partial \mathcal{J}_i / \partial \left.\boldsymbol{D}^{\leq L} u\right\rvert_{i} = 0\) performed in section 4, we obtain the following result \[\left.\boldsymbol{D}^{\leq L} u\right\rvert_{i} = \displaystyle\sum_{j\in\mathcal{N}_i}\mathbf{M}_{i}^{-1}\boldsymbol{W}_{ji} u_{j} - \boldsymbol{\Delta}_{i}\, , \label{eq:}\tag{75}\] where the first term on the right-hand side corresponds to the expression for the derivatives above 68 and \[\boldsymbol{\Delta}_i = \displaystyle\sum_{j\in\mathcal{N}_i}\mathbf{M}_{i}^{-1}\boldsymbol{W}_{ji} \left( \boldsymbol{X}_{ji}^{>L}\cdot\left.\boldsymbol{D}^{>L} u\right\rvert_{i} \right)\, ,\] groups all the additional terms arising from the presence of the unknown higher-order derivatives \(\left.\boldsymbol{D}^{>L} u\right\rvert_{i}\). \(\boldsymbol{\Delta}_{i}\) represents a vector of correction terms; each element corrects the corresponding derivative in \(\left.\boldsymbol{D}^{\leq L} u\right\rvert_{i}\). In case we are interested only in a particular derivative \(d\), we proceed in the same way as in equation 69 using \(\boldsymbol{C}^{\boldsymbol{\alpha}}\). The problem with the correction terms \(\boldsymbol{\Delta}_i\) is that they contain the unknown vector of higher-order derivatives \(\left.\boldsymbol{D}^{>L} u\right\rvert_{i}\). The high-order derivatives \(\left.\boldsymbol{D}^{>L} u\right\rvert_{i}\) can be expressed by an appropriate composition law of low-order derivatives \[\begin{align} \left.\boldsymbol{D}^{>L} u\right\rvert_{i} &= \left( \boldsymbol{D}^{\leq L}\mathbf{P}_{t} \right) \left( \left.\boldsymbol{D}^{\leq L} u\right\rvert_{i}\mathbf{P}_{u} \right) \\ &= \displaystyle\sum_{j\in\mathcal{N}_i} \left( \mathbf{M}_{i}^{-1}\boldsymbol{W}_{ji}\mathbf{P}_{t} \right) \left( \left.\boldsymbol{D}^{\leq L} u\right\rvert_{j}\mathbf{P}_{u} \right) \\ &= \displaystyle\sum_{j\in\mathcal{N}_i} \left( \mathbf{M}_{i}^{-1}\boldsymbol{W}_{ji}\mathbf{P}_{t} \right) \left( \displaystyle\sum_{s}\mathbf{M}_{i}^{-1}\boldsymbol{W}_{sj} u_{s}\mathbf{P}_{u} - \boldsymbol{\Delta}_{j}\mathbf{P}_{u} \right)\, , \end{align}\] where the matrices \(\mathbf{P}_{t}\) and \(\mathbf{P}_{u}\) are non-unique permutation matrices that fulfill the property \(p - t - u = 0\) for every higher-order approximation \(p = | \boldsymbol{\alpha} |\) and the lower-order derivatives \(t = | \boldsymbol{\alpha}^{'} |\) and \(u = | \boldsymbol{\alpha}^{''} |\). The permutation matrices select \(\boldsymbol{\alpha}^{'}\) and \(\boldsymbol{\alpha}^{''}\) from the corresponding rows of the matrix \(\mathbf{M}_{i}\).

However, this resulting formula again depends on the correction terms \(\boldsymbol{\Delta}_{j}\). The problem is solved by employing an iterative procedure \[\left( \left.\boldsymbol{D}^{>L} u\right\rvert_{i} \right)^{ (k + 1) } = \displaystyle\sum_{j\in\mathcal{N}_i} \left( \mathbf{M}_{i}^{-1}\boldsymbol{W}_{ji}\mathbf{P}_{t} \right) \left( \displaystyle\sum_{s}\mathbf{M}_{i}^{-1}\boldsymbol{W}_{sj} u_{s}\mathbf{P}_{u} - \boldsymbol{\Delta}_{j}^{(k)}\mathbf{P}_{u} \right)\, ,\] with the initial solution corresponding to the low-order approximation \(\boldsymbol{\Delta}^{(0)} = \left.\boldsymbol{D}^{\leq L} u\right\rvert_{i}\).
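The essence of this defect-correction iteration can be sketched in 1D (Python with NumPy; for simplicity we take \(L = 1\), \(m = 2\), estimate \(u''\) by composing the low-order operator with itself, and use a global Gaussian-weighted stencil; this illustrates the idea, not the full multi-index machinery of [70], [72]):

```python
import numpy as np

def mfdm_dudx_corrected(x, u, h, iters=3):
    """Sketch: 1D MFDM-style higher-order correction of the first
    derivative; the truncation term (dx^2 / 2) u'' is moved to the
    right-hand side and updated iteratively."""
    N = len(x)

    def d1(v, u2):                      # one weighted-LS sweep for v'
        out = np.empty(N)
        for i in range(N):
            dx = x - x[i]
            phi = np.exp(-(dx / h)**2)
            phi[i] = 0.0                # exclude the node itself
            rhs = (v - v[i]) - 0.5 * dx**2 * u2[i]  # corrected differences
            out[i] = np.sum(phi * dx * rhs) / np.sum(phi * dx * dx)
        return out

    u2 = np.zeros(N)                    # initial correction: none
    for _ in range(iters):
        u1 = d1(u, u2)                  # corrected first derivative
        u2 = d1(u1, np.zeros(N))        # u'' estimated as (u')'
    return d1(u, u2)
```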

For further details, as well as for the treatment of boundary conditions and the handling of high-order corrections for the boundary conditions, we refer to the original works [70], [72]. Lastly, we should mention that MFDM is mostly presented in matrix formalism.

Notation remark↩︎

As already mentioned, the way MFDM is written is close to the notation used throughout this paper. We would only like to point out that the original works use \(\mathbf{P}\) to denote the vector of Taylor monomials, which we denote by \(\boldsymbol{X}_{ji}\). Moreover, the original papers use the matrix \(\mathbf{M}\) to denote the matrix of coefficients of all derivatives instead of the moment matrix (there is no particular symbol for the moment matrix on its own). Thus, \(\mathbf{M}\) from the original papers corresponds to our \(\mathbb{C}_{i}^{\top}\) introduced in section 2.

1.14 Lagrangian Differencing Dynamics (LDD)↩︎

In contrast to all methods presented so far, Lagrangian Differencing Dynamics (originally proposed as the renormalized meshless operator [73], then renamed to the Lagrangian finite difference method [74], and finally renamed to Lagrangian differencing dynamics [75]) does not consider a general formula to approximate a general differential operator; it focuses only on gradient and Laplace operator approximations, providing second-order formulas. The motivation behind LDD is to find a suitable compromise between SPH/MPS-like approximations, which do not provide sufficient accuracy, and accurate collocation schemes, which are computationally more expensive. This is achieved only with the help of a correction matrix, known for example from SPH. In our notation, this is \(\mathbf{M}_i\) for \(m = 1\), where the zeroth terms are not considered. For the purpose of this section, we denote this matrix as \(\mathbf{M}_{i}^{1}\) to highlight that only linear terms are present. Together with the symmetry of the weight function \(\varphi_{ji}\), this allows us to construct the desired compromise approximation.

At this point, a short comment about the weight function \(\varphi\) has to be made. In the original work [73], an absolutely normalized weight function \(\psi_{ji} = \varphi_{ji} / \sum_{j\in\mathcal{N}_i} \varphi_{ji}\) is used. This function acts the same way as the \(\varphi_{ji}\) introduced above. However, in more recent works [74][76], the authors operate with the standard normalized weight function \(\varphi_{ji}\). To keep consistency with the notation introduced above, we adopt the approach from the recent works and use the weight function \(\varphi_{ji}\) throughout the expressions.

For the purpose of deriving LDD, we write the Taylor expansion with split second derivatives \[u_{j} = u_{i} + \boldsymbol{x}_{ji} \cdot \left.\nabla u\right\rvert_{i} + \frac{1}{2} \left.\boldsymbol{D}^{2} u\right\rvert_{i} \cdot \boldsymbol{X}_{ji}^{2} + \left.\boldsymbol{D}^{2,\textrm{mixed}} u\right\rvert_{i} \cdot \boldsymbol{X}_{ji}^{2,\textrm{mixed}} + e_{ji}^{3} \label{eq:ldd-taylorExpansionSplitedDerivatives}\tag{76}\] where \(e_{ji}^{3}\) is the third-order remainder term and where we divided \(\boldsymbol{D} u\) and \(\boldsymbol{X}_{ji}\) into the sub-components \[\boldsymbol{D}^{2} u = \left\{ \frac{\partial^{2} u}{\partial{x}^{2}_{\beta}} \right\}_{\beta=1}^{n} \quad \boldsymbol{D}^{2,\textrm{mixed}} u = \left\{ \frac{\partial^{2} u}{\partial{x}_{\alpha}\partial{x}_{\beta}} \right\}_{\alpha, \beta =1, \alpha\neq\beta}^{n} \label{eq:ldd-selectedDerivativesAndMonomials}\tag{77}\] \[\boldsymbol{X}_{ji}^{2} = \left\{ x_{\beta,ji}^2 \right\}_{\beta=1}^{n} \quad \boldsymbol{X}_{ji}^{2,\textrm{mixed}} = \left\{ x_{\alpha,ji} x_{\beta,ji} \right\}_{\alpha, \beta =1, \alpha\neq\beta}^{n} \label{sutbgicn}\tag{78}\] where \(\boldsymbol{D}^{2} u \in \mathbb{R}^{n}\), \(\boldsymbol{D}^{2,\textrm{mixed}} u \in \mathbb{R}^{\widetilde{p}}\) and \(\boldsymbol{X}_{ji}^{2} \in \mathbb{R}^{n}\), \(\boldsymbol{X}_{ji}^{2,\textrm{mixed}} \in \mathbb{R}^{\widetilde{p}}\) with \(\widetilde{p} = \sum_{k=1}^{n-1} k\).

Considering the identity vector \(\boldsymbol{I} \in \mathbb{R}^{n}\), the Laplace operator can be written as \[\left.\nabla^2 u\right\rvert_{i} = \left.\boldsymbol{D}^{2} u\right\rvert_{i} \cdot \boldsymbol{I}. \label{eq:ldd-taylorExpansionSelectedLaplace}\tag{79}\] Isolating the second derivatives \(\left.\boldsymbol{D}^{2} u\right\rvert_{i}\) from the Taylor expansion 76 \[\left.\boldsymbol{D}^{2} u\right\rvert_{i} \cdot \boldsymbol{X}_{ji}^{2} = 2 \left( u_{ji} - \boldsymbol{x}_{ji} \cdot \left.\nabla u\right\rvert_{i} - \left.\boldsymbol{D}^{2,\textrm{mixed}} u\right\rvert_{i} \cdot \boldsymbol{X}_{ji}^{2,\textrm{mixed}} - e_{ji}^{3} \right) \label{eq:ldd-taylorExpansionExpressedSecondDerivatives}\tag{80}\] there are three ways in which we can construct the approximation.

1.14.1 Laplacian approximation - naive version↩︎

In order to obtain the Laplacian, equation 80 is multiplied by \(\boldsymbol{I} \cdot \boldsymbol{I} / \lVert \boldsymbol{x}_{ji} \rVert^2\), which turns the left-hand side into 79 \[\left.\nabla^2 u\right\rvert_{i} = \frac{2n}{ \lVert \boldsymbol{x}_{ji} \rVert^2 } \left( u_{ji} - \boldsymbol{x}_{ji} \cdot \left.\nabla u\right\rvert_{i} - \left.\boldsymbol{D}^{2,\textrm{mixed}} u\right\rvert_{i} \cdot \boldsymbol{X}_{ji}^{2,\textrm{mixed}} - e_{ji}^{3} \right), \label{eq:ldd-naiveLaplacianPairwise}\tag{81}\] where the dimension factor \(n\) is the product \(n = \boldsymbol{I} \cdot \boldsymbol{I}\). Multiplying equation 81 by the weight function \(\varphi_{ji}\), summing over all neighboring particles (whereby, due to the symmetry of \(\varphi_{ji}\), the term with the mixed derivatives cancels), and dropping the Taylor expansion remainder, we obtain the formula \[L_{i}^{\nabla^2,\textrm{naive}} u = 2n \displaystyle \sum_{j\in\mathcal{N}_i} \frac{\varphi_{ji}}{ \lVert \boldsymbol{x}_{ji} \rVert^2 } \left( u_{ji} - \boldsymbol{x}_{ji} \cdot L_i^{\nabla} u \right). \label{eq:ldd95naive}\tag{82}\] As shown in [73], the error of approximation 82 contains the error of the gradient approximation \(L_i^{\nabla} u\) and the truncated higher-order terms from the Taylor series. Moreover, as the authors highlight, a high number of neighbors with a nearly regular distribution is required to obtain sufficient accuracy.
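A minimal sketch of formula 82 in 2D (Python with NumPy; the function name and the Gaussian weight are illustrative assumptions; the weights are normalized to sum to one, consistent with the remark on \(\psi_{ji}\) above):

```python
import numpy as np

def ldd_naive_laplacian(xi, nbrs, u_nbrs, ui, h):
    """Sketch: LDD gradient via the m = 1 correction matrix M^1, then the
    naive Laplacian of eq. (82); xi is (2,), nbrs is (N, 2)."""
    dx = nbrs - xi                              # offsets x_ji
    du = u_nbrs - ui                            # differences u_ji
    phi = np.exp(-np.sum(dx**2, axis=1) / h**2)
    phi /= phi.sum()                            # normalized weights
    M1 = (phi[:, None] * dx).T @ dx             # 2x2 correction matrix M_i^1
    grad = np.linalg.solve(M1, (phi[:, None] * dx).T @ du)  # L_i^grad u
    r2 = np.sum(dx**2, axis=1)                  # ||x_ji||^2
    n = 2                                       # spatial dimension
    return 2.0 * n * np.sum(phi / r2 * (du - dx @ grad))
```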

1.14.2 Laplacian approximation - sum version↩︎

The naive approximation is strongly affected by the accuracy of the approximation of the gradient \(L_i^{\nabla} u\). Substituting the first-order approximation of the gradient obtained from (2) (which corresponds to the LDD approximation of the gradient) directly into the Taylor expansion 76 , we obtain the following expression \[u_j = u_i + \boldsymbol{x}_{ji} \cdot \left(\mathbf{M}_{i}^{1}\right)^{-1} \displaystyle \sum_{k\in\mathcal{N}_i} \varphi_{ki} \boldsymbol{x}_{ki} \left( u_{ki} - \frac{1}{2} \left.\boldsymbol{D}^{2} u\right\rvert_{i} \cdot \boldsymbol{X}_{ki}^{2} - e_{ki}^{3} \right) + \frac{1}{2} \left.\boldsymbol{D}^{2} u\right\rvert_{i} \cdot \boldsymbol{X}_{ji}^{2} + e_{ji}^{3}.\] Reshuffling the terms in order to group the terms with the second derivatives \(\left.\boldsymbol{D}^{2} u\right\rvert_{i}\) on one side, we obtain \[\begin{align} \left.\boldsymbol{D}^{2} u\right\rvert_{i} \cdot \boldsymbol{X}_{ji}^{2} - \boldsymbol{x}_{ji} \cdot \left(\mathbf{M}_{i}^{1} \right)^{-1} \displaystyle \sum_{k\in\mathcal{N}_i} \varphi_{ki} \boldsymbol{x}_{ki} \left( \left.\boldsymbol{D}^{2} u\right\rvert_{i} \cdot \boldsymbol{X}_{ki}^{2} \right) = \\ 2 \left( u_{ji} - \boldsymbol{x}_{ji} \cdot \left(\mathbf{M}_{i}^{1} \right)^{-1} \displaystyle \sum_{k\in\mathcal{N}_i} \varphi_{ki} \boldsymbol{x}_{ki} \left( u_{ki} - e_{ki}^{3} \right) - e_{ji}^{3} \right). \addtocounter{equation}{1}\label{eq:ldd-sumlaplacian-expressedLaplacianTerms} \end{align}\tag{83}\] In order to obtain the Laplacian, we multiply equation 83 by \(\boldsymbol{I} \cdot \boldsymbol{I}\). Since \(\boldsymbol{X}_{ji}^{2} \cdot \boldsymbol{I} = \lVert \boldsymbol{x}_{ji} \rVert ^2\), omitting the Taylor expansion remainders, we have \[\nabla^2 u_i \left( \lVert \boldsymbol{x}_{ji} \rVert ^2 - \boldsymbol{x}_{ji} \cdot \left(\mathbf{M}_{i}^{1}\right)^{-1} \displaystyle \sum_{k\in\mathcal{N}_i} \varphi_{ki} \boldsymbol{x}_{ki} \lVert \boldsymbol{x}_{ki} \rVert ^2 \right) \approx 2n \left( u_{ji} - \boldsymbol{x}_{ji} \cdot \left(\mathbf{M}_{i}^{1}\right)^{-1} \displaystyle \sum_{k\in\mathcal{N}_i} \varphi_{ki} \boldsymbol{x}_{ki} u_{ki} \right).\] Next, we multiply both sides by the weight function \(\varphi_{ji}\) and sum over the neighbors. Due to the normalization of the weight functions (the weight function summed over all particles gives approximately one), the value of the Laplacian remains unchanged. Reordering the terms once again results in \[\nabla^2 u_i \left( \displaystyle \sum_{j\in\mathcal{N}_i} \varphi_{ji} \lVert \boldsymbol{x}_{ji} \rVert ^2 - \left( \displaystyle \sum_{j\in\mathcal{N}_i} \varphi_{ji} \boldsymbol{x}_{ji} \right) \cdot \left(\mathbf{M}_{i}^{1}\right)^{-1} \displaystyle \sum_{k\in\mathcal{N}_i} \varphi_{ki} \boldsymbol{x}_{ki} \lVert \boldsymbol{x}_{ki} \rVert ^2 \right) \approx 2n \displaystyle \sum_{j\in\mathcal{N}_i} \varphi_{ji} u_{ji} \left( 1 - \boldsymbol{x}_{ji} \cdot \left(\mathbf{M}_{i}^{1}\right)^{-1} \boldsymbol{O}_i \right) \label{eq:ldd-sumLaplacian-multipliedAndSummed}\tag{84}\] where \(\boldsymbol{O}_i\) is the so-called \(\textit{offset}\) vector associated with particle \(i\), \[\boldsymbol{O}_i = \displaystyle \sum_{j\in\mathcal{N}_i} \varphi_{ji} \boldsymbol{x}_{ji},\] which points from the position of the particle \(\boldsymbol{x}_i\) to the weighted center of the neighborhood, where the distribution of the neighboring points dominates, and which is zero for a regular distribution of neighbors.

It can be seen from equation 84 that the resulting form of the Laplace approximation is \[L_{i}^{\nabla^2,\textrm{sum}} u = 2n \frac{ \displaystyle \sum_{j\in\mathcal{N}_i} \varphi_{ji} u_{ji} \left( 1 - \boldsymbol{x}_{ji} \cdot \left(\mathbf{M}_{i}^{1}\right)^{-1}\boldsymbol{O}_i \right) }{ \displaystyle \sum_{j\in\mathcal{N}_i} \varphi_{ji} \lVert \boldsymbol{x}_{ji} \rVert^2 \left( 1 - \boldsymbol{x}_{ji} \cdot \left(\mathbf{M}_{i}^{1}\right)^{-1}\boldsymbol{O}_i \right) }. \label{eq:ldd95sum}\tag{85}\]

Assuming a regular distribution (\(\boldsymbol{O} = \boldsymbol{0}\)) and taking only the closest neighbors into account, approximation 85 reduces to the central difference scheme. The offset vector \(\boldsymbol{O}\) inside the sums acts as a correction, which within the scalar products virtually corrects the relative position of each neighbor to compensate for the irregularity of the distribution [73]. As shown in [73], in contrast to 82 , the sum version 85 does not contain the first-order error but rather the average error collected from the neighboring points, and it does not require precomputing the gradient values.
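The sum version is also straightforward to code; a minimal sketch in 2D (Python with NumPy; names and the Gaussian weight are illustrative assumptions):

```python
import numpy as np

def ldd_sum_laplacian(xi, nbrs, u_nbrs, ui, h):
    """Sketch: the sum version (85) of the LDD Laplacian in 2D, built
    from the offset vector O_i; no precomputed gradient is needed."""
    dx = nbrs - xi                              # offsets x_ji
    du = u_nbrs - ui                            # differences u_ji
    phi = np.exp(-np.sum(dx**2, axis=1) / h**2)
    phi /= phi.sum()                            # normalized weights
    M1 = (phi[:, None] * dx).T @ dx             # correction matrix M_i^1
    O = phi @ dx                                # offset vector O_i
    corr = 1.0 - dx @ np.linalg.solve(M1, O)    # 1 - x_ji . (M1^-1 O_i)
    r2 = np.sum(dx**2, axis=1)
    n = 2                                       # spatial dimension
    return 2.0 * n * np.sum(phi * du * corr) / np.sum(phi * r2 * corr)
```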

1.14.3 Laplacian approximation - LS version↩︎

An alternative way to get the Laplace operator from equation 83 is to use the moving least squares procedure, which, as stated by the authors [73], results in a more accurate approximation. This is closer to the approach used in LDD to approximate the first derivative, and to the approach introduced in the basic approach section 2. From the left side of equation 83 , we can express \(\left.\boldsymbol{D}^{2} u\right\rvert_{i}\) and group the remaining terms of the left side as \[\left.\boldsymbol{D}^{2} u\right\rvert_{i} \cdot \boldsymbol{Q}_{ji} = 2 \left( u_{ji} - \boldsymbol{x}_{ji} \cdot \left(\mathbf{M}_{i}^{1}\right)^{-1} \displaystyle \sum_{k\in\mathcal{N}_i} \varphi_{ki} \boldsymbol{x}_{ki} \left( u_{ki} - e_{ki}^{3} \right) - e_{ji}^{3} \right), \label{eq:ldd-lsLaplace-expressedWithVectorQ}\tag{86}\] where \[\boldsymbol{Q}_{ji} = \boldsymbol{X}_{ji}^{2} - \boldsymbol{x}_{ji}^{\top} \displaystyle \sum_{k\in\mathcal{N}_i} \varphi_{ki} \left(\mathbf{M}_{i}^{1}\right)^{-1}\boldsymbol{x}_{ki} \left( \boldsymbol{X}_{ki}^{2} \right) ^{\top}. \label{eq:ldd-lsLaplace-q}\tag{87}\] Multiplying equation 86 by \(\varphi_{ji} \boldsymbol{Q}_{ji}^{\top}\) and summing over the particles, we end up with \[\displaystyle \sum_{j\in\mathcal{N}_i} \left.\boldsymbol{D}^{2} u\right\rvert_{i} \varphi_{ji} \cdot \boldsymbol{Q}_{ji} \boldsymbol{Q}_{ji}^{\top} = \displaystyle \sum_{j\in\mathcal{N}_i} 2 \varphi_{ji} \left( u_{ji} - \boldsymbol{x}_{ji} \cdot \left(\mathbf{M}_{i}^{1}\right)^{-1} \displaystyle \sum_{k\in\mathcal{N}_i} \varphi_{ki} \boldsymbol{x}_{ki} \left( u_{ki} - e_{ki}^{3} \right) - e_{ji}^{3} \right) \boldsymbol{Q}_{ji}^{\top}. \label{eq:ldd-lsLaplace-expressedWithVectorQMultipliedAndSummed}\tag{88}\] Defining the modified correction matrix \(\widehat{\mathbf{M}}_i\) (called the renormalization matrix in LDD) \[\widehat{\mathbf{M}}_i = \displaystyle \sum_{j\in\mathcal{N}_i} \varphi_{ji} \boldsymbol{Q}_{ji} \boldsymbol{Q}_{ji}^{\top}\] and multiplying both sides of equation 88 with the identity vector \(\boldsymbol{I}\), we obtain the discretization of the Laplacian in the form \[L_{i}^{\nabla^2,\textrm{LS}} u = \boldsymbol{I} \cdot \widehat{\mathbf{M}}_i^{-1} \displaystyle \sum_{j\in\mathcal{N}_i} 2 \varphi_{ji} \boldsymbol{Q}_{ji} \left( u_{ji} - \boldsymbol{x}_{ji} \cdot L_i^{\nabla} u \right), \label{eq:ldd95ls95full}\tag{89}\] where we recognized the approximation of the gradient \[L_i^{\nabla} u = \left(\mathbf{M}_{i}^{1} \right)^{-1} \displaystyle \sum_{j\in\mathcal{N}_i} u_{ji} \boldsymbol{W}_{ji}.\] Equation 89 is referred to as the full inversion version of the Laplace operator. The full inversion version is computationally not very efficient, since several loops over the neighbors need to be performed to compute all the parts of the expression. To provide a computationally more advantageous discretization, the authors in [73] proposed a different approach. Instead of the steps performed above, equation 83 is multiplied by \(\boldsymbol{X}_{ji}^{2}\) and the weight function, and summed over all neighbors

\[\begin{align} \left.\boldsymbol{D}^{2} u\right\rvert_{i} \left[ \displaystyle \sum_{j\in\mathcal{N}_i} \varphi_{ji}\boldsymbol{X}_{ji}^{2}\left(\boldsymbol{X}_{ji}^{2}\right)^{\top} + \displaystyle \sum_{j\in\mathcal{N}_i} \varphi_{ji}\boldsymbol{x}_{ji}\left(\boldsymbol{X}_{ji}^{2}\right)^{\top} \displaystyle \sum_{k\in\mathcal{N}_i} \varphi_{ki}\left( \mathbf{M}_{i}^{1} \right)^{-1}\boldsymbol{x}_{ki}\left(\boldsymbol{X}_{ki}^{2}\right)^{\top} \right] \approx \\ \displaystyle \sum_{j\in\mathcal{N}_i} 2 \varphi_{ji} \boldsymbol{X}_{ji}^{2} \left( u_{ji} - \boldsymbol{x}_{ji} \cdot L_i^{\nabla} u \right) \addtocounter{equation}{1}\label{eq:ldd95ls95efficent95strategy} \end{align}\tag{90}\] where, similarly to 89 , we used the notation for the first derivatives. In the second term on the left side of 90 , the product of two similar tensors is recovered. Following [73] and taking the cubic-order elements as zero (due to the symmetry of the weight function \(\varphi\)), we can define the modified correction matrix \(\widetilde{\mathbf{M}}_i\) simply as \[\widetilde{\mathbf{M}}_i = \displaystyle \sum_{j\in\mathcal{N}_i} \varphi_{ji} \boldsymbol{X}_{ji}^{2}\left(\boldsymbol{X}_{ji}^{2}\right)^{\top}.\] This leads to the approximation of the Laplacian in the form \[L_{i}^{\nabla^2,\textrm{LS}} u = \boldsymbol{I} \cdot \widetilde{\mathbf{M}}_i^{-1} \displaystyle \sum_{j\in\mathcal{N}_i} 2 \varphi_{ji} \boldsymbol{Q}_{ji} \left( u_{ji} - \boldsymbol{x}_{ji} \cdot L_i^{\nabla} u \right), \label{eq:ldd95ls95basic}\tag{91}\] which is denoted as the basic inversion version of the Laplace operator. In contrast to 89 , both \(\widetilde{\mathbf{M}}_i\) and \(\mathbf{M}_{i}^{1}\) can be computed in the same loop. The gradient \(L_i^{\nabla} u\) is, in contrast to the sum version 85 , still required. A potential drawback reported by the authors [73] with regard to formulas 89 and 91 is the conditioning of the matrix in case the elements of the vectors \(\boldsymbol{Q}_{ji}\) or \(\boldsymbol{X}_{ji}^{2}\) are close to zero.

1.15 Generalized reproducing kernel peridynamics (GRKP)↩︎

Generalized reproducing kernel peridynamics (GRKP) [77] was proposed based on the relationship between the meshfree peridynamic method [78], [79], used to solve a new class of problems in solid mechanics, and the classical meshfree methods discussed in this paper. Peridynamics is a mathematical framework that describes the behavior of materials and provides a non-local formulation of continuum mechanics. It describes the material as a set of material points and focuses on the interactions between them, in a similar way to how particle methods approximate function values and derivatives. The gradients in peridynamics are obtained as a weighted summation over surrounding points, originating from a Taylor expansion and strikingly resembling the approximation of derivatives in collocation methods, which motivated the authors to explore the connection. On one hand, GRKP bridges two completely different fields. On the other hand, peridynamics is based on particles, and the need to represent derivatives on scattered data hints at the intuitive link. GRKP, which unifies peridynamic differential operators with RKPM methods using implicit gradients (see the GRKCM section 1.5), is based on the approximation of moments 2. Including the zeroth moment is optional.

The theory of peridynamics assumes that the variation of the function \(u(\mathbf{x}_i)\) at a point \(\mathbf{x}_i\) (or simply \(u_i\)) is influenced by interactions with other points in its neighborhood \(\mathcal{N}_i\). As stated in [80], in the original formulations of peridynamics, the neighborhoods were symmetric and uniform with a constant cut-off distance \(r_c\). In recent variants, the sizes and shapes of the neighborhoods of different points can vary. Similarly to particle methods, the interactions between peridynamic points are local thanks to the weight function \(\varphi_{ji} = \varphi(\lvert x_j - x_i \rvert, h )\), which weights the interactions. For a detailed description of derivatives in the peridynamics context, we refer to [80], and for more information about peridynamics, we refer to the original GRKP paper [77] or to the specialized peridynamics literature [78], [79].

Peridynamics assumes differential operators in the form \[\partial^{\boldsymbol{\alpha}} u_i = \int_{\mathcal{N}_i} u(\mathbf{x}') g_m^{\boldsymbol{\alpha}}(\mathbf{x}' - \mathbf{x}_i) \mathrm{d}\mathbf{x}' \label{eq:peridinamic95derivative}\tag{92}\] where \(\boldsymbol{\alpha}\) is the multi-index and \(g_m^{\boldsymbol{\alpha}}(\mathbf{x}' - \mathbf{x}_i) = g_m^{\alpha_1, \alpha_2,\dots\alpha_n}(\mathbf{x}' - \mathbf{x}_i)\) are orthogonal functions which need to be constructed in order to represent the operator. The \(g_m^{\boldsymbol{\alpha}}\) functions are usually assumed to be the product of Taylor monomials \(\mathbf{X}(\mathbf{x})\) multiplied by weight functions \(\psi(\mathbf{x})\), which we denote \(\mathbf{W}(\mathbf{x}) = \mathbf{X}(\mathbf{x})\psi(\mathbf{x})\), and a vector of unknown coefficients \({\Psi}_{\mathbf{x}}^{\boldsymbol{\alpha}}\) \[g_m^{\boldsymbol{\alpha}}(\mathbf{x}) = \boldsymbol{W}(\mathbf{x})\cdot\boldsymbol{\Psi}_{\mathbf{x}}^{\boldsymbol{\alpha}}.\] This is similar to our weight functions defined by equation [eq:w_abf_zm].

Formulating equation 92 in the discrete collocation sense, or assuming direct nodal integration, i.e., that there is an elementary volume associated with each point, which can be incorporated into the weight functions, we can see that equation 92 becomes equivalent to our definition of the general discrete operator [eq:general_discrete_diff_operator_zm]. Replacing \(\partial^{\boldsymbol{\alpha}} u_i\) with \(L^{\boldsymbol{\alpha}}_{i}u\), we have \[L^{\boldsymbol{\alpha}}_{i}u = \displaystyle\sum_{j\in\mathcal{N}_i}^{N} u_{j}w^{\boldsymbol{\alpha}}_{ji} \approx \boldsymbol{C}^{\boldsymbol{\alpha}} \cdot \left.\boldsymbol{D}u\right\rvert_{i} \label{eq:grkp95differential95operator}\tag{93}\] where we assume that the weights have the form \(w_{ji}^{\boldsymbol{\alpha}} = \boldsymbol{W}_{ji} \cdot \boldsymbol{\Psi}_i^{\boldsymbol{\alpha}}\) of anisotropic basis functions \(\boldsymbol{W}_{ji}\) multiplied by the unknown coefficients \(\boldsymbol{\Psi}_i^{\boldsymbol{\alpha}}\).

As shown in detail in [77], the idea of peridynamic differential operators is similar to the approximation of a differential operator using reproducing conditions. The authors moreover introduce Generalized Reproducing Kernel Peridynamics (GRKP), which combines the peridynamic differential operators and implicit gradient-based RKPM (for RKPM and implicit gradients, see GRKCM 1.5). GRKP directly follows the derivation presented in section 2. The method is presented in both continuous and discrete collocation versions, showing that the resulting formula corresponds to a scheme which is equivalent to GRKCM and which can also be interpreted as a peridynamic differential operator employed in peridynamics theory.

Including the zeroth moment in GRKP is optional. In the reference paper [77], both situations are considered, and this feature is directly built into the GRKP notation. If the moment is used, it is denoted by a square-bracket subscript containing two indices \([l,m]\): \(l\) for the starting order of the basis, i.e., \(l=0\) to include the unit element or \(l=1\) to exclude it, and \(m\), which denotes the order of the basis used and hence of the approximation.

Summarizing the results in our notation, the operator \(L^{\boldsymbol{\alpha}}_{i}\) can thus be expressed as \[\boldsymbol{W}_{ji,[l,m]} = \boldsymbol{X}_{ji,[l,m]} \varphi_{ji,[l,m]} \label{eq:grkp95abfs}\tag{94}\] \[L^{\boldsymbol{\alpha}}_{i, [m]} u = \displaystyle\sum_{j\in\mathcal{N}_i} u_{ji}\boldsymbol{W}_{ji,[l,m]}\cdot \left(\mathbf{M}_{i,[l,m]}^{-1} \boldsymbol{C}^{\boldsymbol{\alpha}}_{[l,m]} \right), \quad \mathbf{M}_{i,[l,m]} = \displaystyle\sum_{j\in\mathcal{N}_i} \boldsymbol{X}_{ji,[l,m]} \boldsymbol{W}_{ji,[l,m]}^{\top} \label{eq:grkp95ld95approximation}\tag{95}\] where the Taylor monomials \(\boldsymbol{X}_{ji,[l,m]}\) are used to construct the anisotropic basis functions.

1.16 Meshless Local Strong Form Method (MLSM)↩︎

The name Meshless Local Strong Form Method (MLSM) is introduced in [81] as a generalization of several strong-form methods presented in the literature. The approach presented in the paper is based on MLS and corresponds exactly to the methods presented in section 1.7. MLSM is thus based on the minimization of the approximation error, but it focuses on the reconstruction of the target function, which is then used to compute its derivatives. As presented in the referenced paper, the zeroth moment is included. The paper also highlights that a wide range of basis functions \(\boldsymbol{P}_{ji}\) can be used, which allows one to write the method as \[\boldsymbol{W}_{ji} = \boldsymbol{P}_{ji} \varphi_{ji},\] \[L^{\boldsymbol{\alpha}}_{i} u = \displaystyle\sum_{j\in\mathcal{N}_i} u_{j} \boldsymbol{W}_{ji} \cdot (\mathbf{M}_i^{-1} \boldsymbol{C}^{\boldsymbol{\alpha}}), \quad \mathbf{M}_{i} = \displaystyle\sum_{j\in\mathcal{N}_i}\boldsymbol{P}_{ji} \boldsymbol{W}_{ji}^{\top}. \label{eq:mlsm95ld95approximation}\tag{96}\]
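Because any polynomial basis can be used, the scheme is easy to sketch in 1D with a Legendre basis (Python with NumPy; the function name, the Gaussian weight, and the choice of Legendre polynomials are illustrative assumptions):

```python
import numpy as np
from numpy.polynomial import legendre

def mlsm_dudx(xi, x_nbrs, u_nbrs, h, m=2):
    """Sketch: strong-form MLS first derivative in 1D with a Legendre
    basis P_ji instead of Taylor monomials, cf. eq. (96)."""
    s = (x_nbrs - xi) / h                   # scaled offsets
    P = legendre.legvander(s, m)            # rows P_ji, shape (N, m+1)
    phi = np.exp(-s**2)                     # a generic weight function
    W = phi[:, None] * P                    # W_ji = P_ji * phi_ji
    M = P.T @ W                             # moment matrix of eq. (96)
    # C^alpha: derivatives of the basis functions at the center node,
    # with the chain-rule factor 1/h from the scaling s = (x - xi)/h
    dP = legendre.legder(np.eye(m + 1), 1)  # derivative coefficient series
    C = legendre.legval(0.0, dP) / h
    return u_nbrs @ (W @ np.linalg.solve(M, C))  # uses nodal values u_j
```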

References↩︎

[1]
J. King, S. Lind, and A. Nasar. High order difference schemes using the local anisotropic basis function method. Journal of Computational Physics, 415:109549, 8 2020.
[2]
J. King and S. Lind. High-order simulations of isothermal flows using the local anisotropic basis function method (labfm). Journal of Computational Physics, 449:110760, 1 2022.
[3]
A. Nasar, G. Fourtakas, S. Lind, J. King, B. Rogers, and P. Stansby. High-order consistent SPH with the pressure projection method in 2-d and 3-d. Journal of Computational Physics, 444:110563, Nov. 2021.
[4]
P. Degond and S. Mas-Gallic. The weighted particle method for convection-diffusion equations. part 1: The case of an isotropic viscosity. Mathematics of Computation, 53(188):485–507, 1989.
[5]
J. D. Eldredge, A. Leonard, and T. Colonius. A general deterministic treatment of derivatives in particle methods. Journal of Computational Physics, 180(2):686–709, 2002.
[6]
B. Schrader, S. Reboux, and I. F. Sbalzarini. Discretization correction of general integral pse operators for particle methods. Journal of Computational Physics, 229(11):4159–4182, 2010.
[7]
G. C. Bourantas, B. L. Cheeseman, R. Ramaswamy, and I. F. Sbalzarini. Using dc pse operator discretization in eulerian meshless collocation methods improves their robustness in complex geometries. Computers & Fluids, 136:285–300, 9 2016.
[8]
A. Singh, A. Foggia, P. Incardona, and I. F. Sbalzarini. A meshfree collocation scheme for surface differential operators on point clouds. Journal of Scientific Computing, 96, 9 2023.
[9]
B. Zwick, G. Bourantas, F. Alkhatib, A. Wittek, and K. Miller. Recovery by discretization corrected particle strength exchange (dc pse) operators. Applied Mathematics and Computation, 448:127923, 7 2023.
[10]
A. Singh, I. F. Sbalzarini, and A. Obeidat. Entropically damped artificial compressibility for the discretization corrected particle strength exchange method in incompressible fluid mechanics. Computers & Fluids, 267:106074, 12 2023.
[11]
T. Jacquemin, S. Tomar, K. Agathos, S. Mohseni-Mofidi, and S. P. A. Bordas. Taylor-series expansion based numerical methods: A primer, performance benchmarking and new approaches for problems with non-smooth solutions. Archives of Computational Methods in Engineering, 27:1465–1513, 11 2020.
[12]
N. R. Aluru. A point collocation method based on reproducing kernel approximations. International Journal for Numerical Methods in Engineering, 47(6):1083–1121, Feb. 2000.
[13]
W. K. Liu, S. Jun, S. Li, J. Adee, and T. Belytschko. Reproducing kernel particle methods for structural dynamics. International Journal for Numerical Methods in Engineering, 38(10):1655–1679, May 1995.
[14]
J.-S. Chen, C. Pan, C.-T. Wu, and W. K. Liu. Reproducing kernel particle methods for large deformation analysis of non-linear structures. Computer Methods in Applied Mechanics and Engineering, 139(1–4):195–227, Dec. 1996.
[15]
J.-S. Chen, M. Hillman, and S.-W. Chi. Meshfree methods: Progress made after 20 years. Journal of Engineering Mechanics, 143(4):04017001, 2017.
[16]
S.-W. Chi, J.-S. Chen, H.-Y. Hu, and J. Yang. A gradient reproducing kernel collocation method for boundary value problems. International Journal for Numerical Methods in Engineering, 93:1381–1402, 03 2013.
[17]
T. Liszka and J. Orkisz. The finite difference method at arbitrary irregular grids and its application in applied mechanics. Computers & Structures, 11(1-2):83–95, 1980.
[18]
T. Liszka. An interpolation method for an irregular net of nodes. International Journal for Numerical Methods in Engineering, 20(9):1599–1612, 1984.
[19]
P. S. Jensen. Finite difference techniques for variable grids. Computers & Structures, 2(1):17–29, 1972.
[20]
B. Swartz and B. Wendroff. Generalized finite-difference schemes. Mathematics of Computation, 23(105):37–49, 1969.
[21]
P. Suchde. Conservation and Accuracy in Meshfree Generalized Finite Difference Methods. PhD thesis, University of Kaiserslautern, Kaiserslautern, Germany, 2018.
[22]
P. Lancaster and K. Salkauskas. Surfaces generated by moving least squares methods. Mathematics of Computation, 37:141–158, 1981.
[23]
E. Onate, S. Idelsohn, O. Zienkiewicz, and R. Taylor. A finite point method in computational mechanics. applications to convective transport and fluid flow. Research Report, N. PI67, 1995.
[24]
E. Onate, S. Idelsohn, O. Zienkiewicz, and R. Taylor. A finite point method in computational mechanics. applications to convective transport and fluid flow. International journal for numerical methods in engineering, 39(22):3839–3866, 1996.
[25]
E. Oñate, S. Idelsohn, O. Zienkiewicz, R. Taylor, and C. Sacco. A stabilized finite point method for analysis of fluid mechanics problems. Computer Methods in Applied Mechanics and Engineering, 139(1–4):315–346, Dec. 1996.
[26]
E. Oñate and S. Idelsohn. A mesh-free finite point method for advective-diffusive transport and fluid flow problems. Computational Mechanics, 21(4–5):283–292, May 1998.
[27]
E. Onate, C. Sacco, and S. Idelsohn. A finite point method for incompressible flow problems. Computing and visualization in science, 3:67–75, 2000.
[28]
E. Oñate, F. Perazzo, and J. Miquel. A finite point method for elasticity problems. Computers & Structures, 79(22–25):2151–2163, Sept. 2001.
[29]
B. Nayroles, G. Touzot, and P. Villon. Generalizing the finite element method: Diffuse approximation and diffuse elements. Computational Mechanics, 10(5):307–318, 1992.
[30]
S. Park and S. Youn. The least‐squares meshfree method. International Journal for Numerical Methods in Engineering, 52(9):997–1012, Aug. 2001.
[31]
D. Levin. The approximation power of moving least-squares. Math. Comput., 67:1517–1531, 1998.
[32]
H. Wendland. Scattered Data Approximation. Cambridge Monographs on Applied and Computational Mathematics. Cambridge University Press, 2004.
[33]
D. Mirzaei, R. Schaback, and M. Dehghan. On generalized moving least squares and diffuse derivatives. IMA Journal of Numerical Analysis, 32(3):983–1000, 09 2012.
[34]
B. Gross, N. Trask, P. Kuberry, and P. Atzberger. Meshfree methods on manifolds for hydrodynamic flows on curved surfaces: A generalized moving least-squares (gmls) approach. Journal of Computational Physics, 409:109340, 2020.
[35]
D. W. Kim and Y. Kim. Point collocation method using the fast moving least-square reproducing kernel approximation. International Journal for Numerical Methods in Engineering, 56:1445 – 1464, 03 2003.
[36]
A. Huerta, T. Belytschko, S. Fernández‐Méndez, and T. Rabczuk. Meshfree methods. 2004.
[37]
Y. Yoon, S. Lee, and T. Belytschko. Enriched meshfree collocation method with diffuse derivatives for elastic fracture. Computers and Mathematics with Applications, 51(8 SPEC. ISS.):1349–1366, Apr. 2006.
[38]
N. Trask, M. Maxey, and X. Hu. Compact moving least squares: An optimization framework for generating high-order compact meshless discretizations. Journal of Computational Physics, 326:596–611, 2016.
[39]
S. K. Lele. Compact finite difference schemes with spectral-like resolution. Journal of Computational Physics, 103(1):16–42, Nov. 1992.
[40]
W. E and J.-G. Liu. Essentially compact schemes for unsteady viscous incompressible flows. Journal of Computational Physics, 126(1):122–138, June 1996.
[41]
G. R. Joldes, H. A. Chowdhury, A. Wittek, B. Doyle, and K. Miller. Modified moving least squares with polynomial bases for scattered data approximation. Applied Mathematics and Computation, 266:893–902, Sept. 2015.
[42]
G. Joldes, G. Bourantas, B. Zwick, H. Chowdhury, A. Wittek, S. Agrawal, K. Mountris, D. Hyde, S. K. Warfield, and K. Miller. Suite of meshless algorithms for accurate computation of soft tissue deformation for surgical simulation. Medical Image Analysis, 56:152–171, Aug. 2019.
[43]
Q. Wang, W. Zhou, Y. Cheng, G. Ma, X. Chang, Y. Miao, and E. Chen. Regularized moving least-square method and regularized improved interpolating moving least-square method with nonsingular moment matrices. Applied Mathematics and Computation, 325:120–145, May 2018.
[44]
S. Lohit, A. K. Gaonkar, and T. P. Gotkhindi. Interpolating modified moving least squares based element free galerkin method for fracture mechanics problems. Theoretical and Applied Fracture Mechanics, 122:103569, Dec. 2022.
[45]
L. Zhang, D. Huang, and K. Liew. An element-free imls-ritz method for numerical solution of three-dimensional wave equations. Computer Methods in Applied Mechanics and Engineering, 297:116–139, Dec. 2015.
[46]
J.-F. Wang, F.-X. Sun, and Y.-M. Cheng. An improved interpolating element-free galerkin method with a nonsingular weight function for two-dimensional potential problems. Chinese Physics B, 21(9):090204, Sept. 2012.
[47]
X. Li. An interpolating boundary element-free method for three-dimensional potential problems. Applied Mathematical Modelling, 39(10–11):3116–3134, June 2015.
[48]
Q. Wang, W. Zhou, Y. Feng, G. Ma, Y. Cheng, and X. Chang. An adaptive orthogonal improved interpolating moving least-square method and a new boundary element-free method. Applied Mathematics and Computation, 353:347–370, July 2019.
[49]
J. Kuhnert and S. Tiwari. Grid free method for solving the poisson equation. Technical Report 25, Fraunhofer (ITWM), 2001.
[50]
J. Kuhnert and S. Tiwari. Finite pointset method based on the projection method for simulations of the incompressible navier-stokes equations. Technical Report 30, Fraunhofer (ITWM), 2001.
[51]
S. Tiwari and J. Kuhnert. A meshfree method for incompressible fluid flows with incorporated surface tension. Revue européenne des éléments finis, 11, 12 2002.
[52]
S. Tiwari and J. Kuhnert. Particle method for simulation of free surface flows. Hyperbolic Problems: Theory, Numerics Applications, 01 2003.
[53]
S. Tiwari and J. Kuhnert. A numerical scheme for solving incompressible and low mach number flows by the finite pointset method. In M. Griebel and M. A. Schweitzer, editors, Meshfree Methods for Partial Differential Equations II, pages 191–206, Berlin, Heidelberg, 2005. Springer Berlin Heidelberg.
[54]
S. Tiwari and J. Kuhnert. Modeling of two-phase flows with surface tension by finite pointset method (FPM). Journal of Computational and Applied Mathematics, 203(2):376–386, June 2007.
[55]
C. Drumm, S. Tiwari, J. Kuhnert, and H.-J. Bart. Finite pointset method for simulation of the liquid–liquid flow field in an extractor. Computers & Chemical Engineering, 32(12):2946–2957, 2008.
[56]
A. Ghosh and S. Deshpande. Least squares kinetic upwind method for inviscid compressible flows. AIAA Paper 95-1735, 1995.
[57]
A. K. Ghosh and S. M. Deshpande. Robust least squares kinetic upwind method for inviscid compressible flows. 1995.
[58]
S. Deshpande, P. Kulkarni, and A. Ghosh. New developments in kinetic schemes. Computers & Mathematics with Applications, 35:75–93, Jan. 1998.
[59]
A. Ghosh, J. S. Mathur, and S. Deshpande. q-KFVS scheme: A new higher order kinetic method for Euler equations, pages 379–384, May 2007.
[60]
S. Deshpande. Kinetic theory based new upwind methods for inviscid compressible flows.
[61]
J. Mandal and S. Deshpande. Kinetic flux vector splitting for Euler equations. Computers & Fluids, 23(2):447–478, 1994.
[62]
S. M. Deshpande, K. Anandhanarayanan, C. Praveen, and V. Ramesh. Theory and application of 3-D LSKUM based on entropy variables. International Journal for Numerical Methods in Fluids, 40(1-2):47–62, 2002.
[63]
P. Chandrashekar and S. M. Deshpande. A new grid-free method for conservation laws. Jan. 2003.
[64]
C. Praveen and S. M. Deshpande. Kinetic meshless method for compressible flows. International Journal for Numerical Methods in Fluids, 55(11):1059–1089, 2007.
[65]
B. van Leer. Towards the ultimate conservative difference scheme. III. Upstream-centered finite-difference schemes for ideal compressible flow. Journal of Computational Physics, 23(3):263–275, 1977.
[66]
B. van Leer. Towards the ultimate conservative difference scheme. V. A second-order sequel to Godunov’s method. Journal of Computational Physics, 32(1):101–136, July 1979.
[67]
B. van Leer. Towards the ultimate conservative difference scheme. IV. A new approach to numerical convection. Journal of Computational Physics, 23(3):276–299, 1977.
[68]
Y.-C. Yoon and J.-H. Song. Extended particle difference method for weak and strong discontinuity problems: part I. Derivation of the extended particle derivative approximation for the representation of weak and strong discontinuities. Computational Mechanics, 53(6):1087–1103, June 2014.
[69]
S. Wang, Y. O. Zhang, and J. P. Wu. Lagrangian meshfree finite difference particle method with variable smoothing length for solving wave equations. Advances in Mechanical Engineering, 10(7):1687814018789248, 2018.
[70]
S. Milewski. Meshless finite difference method with higher order approximation—applications in mechanics. Archives of Computational Methods in Engineering, 19(1):1–49, 2012.
[71]
S. Milewski. Higher order schemes introduced to the meshless FDM in elliptic problems. Engineering Analysis with Boundary Elements, 131:100–117, Oct. 2021.
[72]
S. Milewski. Higher order meshless approximation applied to finite difference and finite element methods in selected thermomechanical problems. Engineering Analysis with Boundary Elements, 140:300–321, July 2022.
[73]
J. Basic, N. Degiuli, and D. Ban. A class of renormalised meshless Laplacians for boundary value problems. Journal of Computational Physics, 354:269–287, 2018.
[74]
J. Bašić, N. Degiuli, Š. Malenica, and D. Ban. Lagrangian finite-difference method for predicting green water loadings. Ocean Engineering, 209:107533, 2020.
[75]
C. Peng, M. Bašić, B. Blagojević, J. Bašić, and W. Wu. A lagrangian differencing dynamics method for granular flow modeling. Computers and Geotechnics, 137:104297, 2021.
[76]
J. Bašić, N. Degiuli, B. Blagojević, and D. Ban. Lagrangian differencing dynamics for incompressible flows. Journal of Computational Physics, 462:111198, Aug. 2022.
[77]
M. Hillman, M. Pasetto, and G. Zhou. Generalized reproducing kernel peridynamics: unification of local and non-local meshfree methods, non-local derivative operations, and an arbitrary-order state-based peridynamic formulation. Computational Particle Mechanics, 7(2):435–469, Mar 2020.
[78]
E. Madenci and E. Oterkus. Peridynamic theory and its applications. Springer, New York, NY, 2014 edition, Oct. 2013.
[79]
E. Oterkus, S. Oterkus, and E. Madenci, editors. Peridynamic modeling, numerical techniques, and applications. Elsevier Series in Mechanics of Advanced Materials. Elsevier Science Publishing, Philadelphia, PA, Apr. 2021.
[80]
E. Madenci, A. Barut, and M. Futch. Peridynamic differential operator and its applications. Computer Methods in Applied Mechanics and Engineering, 304:408–451, June 2016.
[81]
G. Kosec, J. Slak, M. Depolli, R. Trobec, K. Pereira, S. Tomar, T. Jacquemin, S. P. Bordas, and M. Abdel Wahab. Weak and strong form meshless methods for linear elastic problem under fretting contact conditions. Tribology International, 138:392–402, 2019.

  1. Another view of diffuse derivatives, not given in the FPM derivation, is that instead of approximating the primary function itself, we approximate its derivatives directly. For further details, see [15], [29].↩︎

  2. Derivatives of fluxes in the context of the method.↩︎

  3. Molecular velocity in the context of the method.↩︎

  4. To make the procedure clear, assume two dimensions; then the second-order sub-components of \(\boldsymbol{D} u\) and \(\boldsymbol{X}_{ji}\) have the form \[\boldsymbol{D}^{2} u = \begin{bmatrix} \frac{\partial^{2} u}{\partial{x}^{2}},& \frac{\partial^{2} u}{\partial{y}^{2}} \end{bmatrix}^{\top}, \quad \boldsymbol{D}^{2,\textrm{mixed}} u = \begin{bmatrix} \frac{\partial^{2} u}{\partial{x}\partial{y}},& \frac{\partial^{2} u}{\partial{y}\partial{x}} \end{bmatrix}^{\top}\] \[\boldsymbol{X}_{ji}^{2} = \begin{bmatrix} x_{ji}^2,& y_{ji}^2 \end{bmatrix}^{\top}, \quad \boldsymbol{X}_{ji}^{2,\textrm{mixed}} = \begin{bmatrix} x_{ji} y_{ji},& y_{ji} x_{ji} \end{bmatrix}^{\top}.\]↩︎
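For concreteness, the following minimal Python sketch (our own illustration; the function name and data layout are assumptions, not taken from any of the cited works) assembles these second-order sub-components for a single node pair.

```python
import numpy as np

def second_order_components(x_ji, y_ji):
    """Second-order sub-components of X_ji for the 2D offset
    (x_ji, y_ji) = (x_j - x_i, y_j - y_i) of node j from node i,
    laid out as in footnote 4."""
    X2 = np.array([x_ji**2, y_ji**2])                # [x_ji^2, y_ji^2]^T
    X2_mixed = np.array([x_ji * y_ji, y_ji * x_ji])  # [x_ji y_ji, y_ji x_ji]^T
    return X2, X2_mixed

# Example: node j offset (0.1, -0.2) relative to node i.
X2, X2_mixed = second_order_components(0.1, -0.2)
# The two mixed entries coincide (x_ji*y_ji == y_ji*x_ji), mirroring the
# symmetry of the mixed partial derivatives in D^{2,mixed} u.
```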