FlashMP: Fast Discrete Transform-Based Solver for Preconditioning Maxwell’s Equations on GPUs


Abstract

Efficiently solving large-scale linear systems is a critical challenge in electromagnetic simulations, particularly when using the Crank-Nicolson Finite-Difference Time-Domain (CN-FDTD) method. Iterative solvers are commonly employed to handle the resulting sparse systems but suffer from slow convergence due to the ill-conditioned nature of the double-curl operator. Approximate preconditioners, such as SOR and incomplete LU factorization (ILU), provide insufficient convergence, while direct solvers are impractical due to excessive memory requirements. To address this, we propose FlashMP, a novel preconditioning system built around a subdomain exact solver based on discrete transforms. FlashMP provides an efficient GPU implementation that achieves multi-GPU scalability through domain decomposition. Evaluations on AMD MI60 GPU clusters (up to 1000 GPUs) show that FlashMP reduces iteration counts by up to 16\(\times\) and achieves speedups of 2.5\(\times\) to 4.9\(\times\) over baseline implementations in the state-of-the-art library Hypre. Weak scalability tests show parallel efficiencies of up to 84.1%.

GPU, Maxwell’s Equations, Preconditioning, Discrete Transform

1 Introduction↩︎

Numerical simulation of electromagnetic phenomena governed by Maxwell’s equations is vital in science and engineering, with applications in antenna design [1], radar systems [2], photonic crystals [3], and geophysical inversion [4]. These fields demand accurate and efficient methods to model complex behaviors, but increasing needs for precision and scale challenge traditional approaches. As the complexity of electromagnetic systems grows, the demand for high-resolution simulations and detailed modeling of intricate geometries becomes more pronounced, pushing the boundaries of computational resources and numerical techniques.

The Crank-Nicolson Finite-Difference Time-Domain (CN-FDTD) method [1], [5], prized for its unconditional stability and energy conservation, is ideal for long-term simulations. These properties make it particularly suitable for applications requiring extended time horizons. However, large-scale CN-FDTD simulations require solving sparse linear systems from the double-curl operator [2], [6] at each time step, a task demanding efficient solvers for high-resolution or complex geometries. The computational cost of solving these systems can become prohibitive, especially when dealing with fine discretizations or large-scale problems.

Current solutions use iterative methods like BiCGSTAB [1], [7][9] and GMRES [7], [10], [11], as direct solvers are infeasible for large problems. While these iterative methods are widely adopted due to their lower memory footprint and adaptability to large-scale problems, they face significant challenges in achieving practical acceleration on GPU platforms. Specifically, existing preconditioning techniques, such as Jacobi, IC [2], and ILU [7], [11], often fail to provide substantial speedup on GPUs. Fast Transform-based Preconditioners (FTP) [12] are grid-size-independent but limited to a single subdomain, while direct solvers like LU [11], [13] are impractical due to overhead.

Moreover, the parallel efficiency of large-scale simulations is severely constrained by communication overhead and poor adaptation to the spectral properties of Maxwell’s equations [12], [13]. Existing methods often struggle to achieve high parallel efficiency on distributed GPU architectures, especially when scaling to thousands of GPUs. This limitation is further exacerbated by the lack of techniques that simultaneously reduce the number of iterations and the computational cost per iteration. As a result, there is a significant gap in the literature regarding practical preconditioning strategies that can effectively leverage the parallelism of GPUs while maintaining robustness and efficiency.

To address these challenges, we propose FlashMP, a preconditioner for efficient CN-FDTD simulations on GPUs. FlashMP employs discrete transforms derived from an SVD to decouple the double-curl operator into independent \(3 \times 3\) systems, significantly reducing computational complexity. By employing a subdomain exact solver as the preconditioner, FlashMP minimizes the number of convergence steps and the global communication overhead, while efficient mapping of the operators onto GPUs keeps the computational cost per iteration low. This marks the first time practical speedups have been achieved in large-scale CN-FDTD electromagnetic simulations on GPU clusters.

This work offers three key contributions:

  • A novel subdomain exact solver design based on discrete transforms to decouple and solve subdomain problems for the double-curl operator efficiently.

  • An efficient GPU implementation that leverages domain decomposition to achieve multi-GPU scalability.

  • A comprehensive performance analysis, quantifying FlashMP’s convergence, time breakdown, and scalability through experiments with BiCGSTAB and GMRES, demonstrating up to 16\(\times\) reduction in iteration counts, 2.5\(\times\) to 4.9\(\times\) speedups, and 84.1% parallel efficiency.

2 Background↩︎

2.1 Model Problem↩︎

We consider the Maxwell’s curl equations for an isotropic medium, which can be written in the form: \[\nabla \times E = -\frac{\partial B}{\partial t}, \quad \nabla \times H = \frac{\partial D}{\partial t} \label{eqn:pde}\tag{1}\] where \(E\), \(H\) are electric and magnetic fields, \(D = \epsilon E\), \(B = \mu H\), with \(\epsilon\), \(\mu\) as permittivity and permeability.

The Crank-Nicolson (CN) scheme advances the discretized Maxwell’s equations by a full time step in a single marching procedure, averaging the right-hand sides of the discretized equations at time steps \(t+1\) and \(t\): \[\epsilon \frac{E^{t+1} - E^t}{\Delta t} = \frac{1}{2} \left[ \nabla \times H^{t+1} + \nabla \times H^t \right] \label{eqn:E}\tag{2}\] \[\mu \frac{H^{t+1} - H^t}{\Delta t} = -\frac{1}{2} \left[ \nabla \times E^{t+1} + \nabla \times E^t \right] \label{eqn:H}\tag{3}\] Assuming \(\epsilon = \mu = 1\) (normalized units, common in simulations) and substituting 3 into 2 , we obtain: \[E^{t+1} + \frac{\Delta t^2}{4} \nabla \times \nabla \times E^{t+1} = E^t + \Delta t \nabla \times H^t - \frac{\Delta t^2}{4} \nabla \times \nabla \times E^t \label{eqn:E95final}\tag{4}\] The update equations for the electric field components can be written as a linear system with a discrete double-curl operator: \[\mathbf{A} E^{t+1} = R \label{eqn:A}\tag{5}\] where \(\mathbf{A} = \mathbf{I} + \frac{\Delta t^2}{4} \nabla \times \nabla \times\), and \(R\) includes \(E^t\) and \(H^t\). Only \(E^{t+1}\) is unknown, as \(H^{t+1}\) is computed via 3 .
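
Spelled out, the elimination of \(H^{t+1}\) uses one intermediate step: from (3 ) with \(\mu = 1\), \[H^{t+1} = H^t - \frac{\Delta t}{2} \nabla \times \left( E^{t+1} + E^t \right),\] and substituting this into (2 ) with \(\epsilon = 1\) gives \[E^{t+1} - E^t = \Delta t \, \nabla \times H^t - \frac{\Delta t^2}{4} \nabla \times \nabla \times \left( E^{t+1} + E^t \right),\] which rearranges to (4 ).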

Table 1: Summary of preconditioners for solving Maxwell’s equations using CN-FDTD. “GPU?” indicates whether GPU acceleration is used. “P.C.” stands for preconditioner.
Ref. | Iterative Solver | P.C. Type | GPU? | Scale
[1] | BiCGSTAB | ILU | No | 1 CPU
[2] | CG | IC | No | 1 CPU
[9] | BiCGSTAB | NONE | Yes | 1 GPU
[10] | GMRES | SAI-SSOR | No | 1 CPU
[13] | NONE | DD-LU | No | 4 CPUs
Ours | BiCGSTAB / GMRES | FlashMP | Yes | 1000 GPUs

2.2 High Performance Linear Solvers↩︎

Solving sparse linear systems \(\mathbf{A} E = R\) is central to computational electromagnetics, enabling applications like photonic crystals and unexploded ordnance detection [1][4], [6][9], [11]. These systems arise from discretizing Maxwell’s equations on meshes, a necessity since most PDEs lack analytical solutions. The Crank-Nicolson Finite-Difference Time-Domain (CN-FDTD) method, valued for its unconditional stability and energy conservation, avoids the Courant-Friedrichs-Lewy (CFL) constraint and produces sparse matrices that can be stored efficiently.

Solvers are either direct or iterative. Direct methods like LU or QR factorization adapt dense matrix techniques but suffer from fill-in, increasing memory use and limiting scalability. Iterative solvers such as BiCGSTAB [1], [7][9] and GMRES [7], [10], [11] are preferred for large systems but converge slowly for the ill-conditioned double-curl operator [2], [7], necessitating preconditioners.

Preconditioning Iterative Methods. Preconditioning accelerates iterative solvers like BiCGSTAB and GMRES by using a matrix \(\mathbf{M}\) that approximates \(\mathbf{A}\), transforming the system into \(\mathbf{M}^{-1}\mathbf{A}E = \mathbf{M}^{-1} R\) for faster convergence [1][4], [6][9], [11]. \(\mathbf{M}\) must be easier to invert than \(\mathbf{A}\), a challenge for ill-conditioned double-curl operators [2], [7]. Approximate preconditioners like Jacobi and ICCG [2] are cost-effective but converge slowly. Fast Transform-based Preconditioners (FTP) [12] offer grid-size-independent iterations but are limited to a single subdomain. Two-step methods like SAI-SSOR [10] or RCM [13] improve convergence but struggle with severe ill-conditioning. ILU preconditioners [7], [11] balance robustness and cost by controlling fill-in, while direct solvers like LU [11], [13] reduce iterations but incur high overhead, highlighting a trade-off: better approximations of \(\mathbf{A}\) speed convergence at the cost of per-iteration complexity.

Domain Decomposition. To improve scalability, domain decomposition is a widely used technique that splits the matrix into subdomain blocks along the diagonal, each of which is solved independently [13], [14]. The subdomain solves can be performed approximately (e.g., with ILU) or exactly (e.g., with LU), and the resulting vectors are combined to approximate the solution of the entire system. When subdomains overlap, Restricted Additive Schwarz (RAS) [15] is used to combine the subdomain solutions and improve the approximation. We use RAS as the global preconditioner, which is given by: \[\mathbf{M}_{RAS}^{-1} = \sum_{i=1}^p \mathbf{S}_i^{0^T} \mathbf{A}_i^{-1} \mathbf{S}_i^\gamma\] where \(\mathbf{S}_i^\gamma\) is the restriction operator associated with the \(i^{\text{th}}\) subdomain with an overlap of \(\gamma\), \(\mathbf{A}_i = \mathbf{S}_i^\gamma \mathbf{A} \mathbf{S}_i^{\gamma^T}\) is the submatrix of \(\mathbf{A}\) restricted to the \(i^{\text{th}}\) subdomain, and \(\mathbf{S}_i^0\) is the restriction operator associated with the \(i^{\text{th}}\) subdomain without overlap. The discrete transform-based exact subdomain solver proposed later in this paper computes \(\mathbf{A}_i^{-1}\) efficiently, enabling fast and scalable preconditioning.
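
As a reference for how the RAS combination is applied, the following minimal NumPy sketch sums the prolongated subdomain solutions; the index arrays and the per-subdomain solver are hypothetical stand-ins for \(\mathbf{S}_i^\gamma\), \(\mathbf{S}_i^0\), and \(\mathbf{A}_i^{-1}\).

```python
import numpy as np

def apply_ras(r, idx_overlap, idx_interior, solve_sub):
    """Apply M_RAS^{-1} r = sum_i S_i^{0,T} A_i^{-1} S_i^gamma r.

    r                : global residual vector of length N
    idx_overlap[i]   : global indices of subdomain i including overlap (S_i^gamma)
    idx_interior[i]  : global indices of subdomain i without overlap   (S_i^0)
    solve_sub[i](b)  : exact solve with the restricted submatrix A_i
    """
    z = np.zeros_like(r)
    for ol, interior, solve in zip(idx_overlap, idx_interior, solve_sub):
        e_i = solve(r[ol])              # A_i^{-1} (S_i^gamma r)
        keep = np.isin(ol, interior)    # restrict the prolongation (the "restricted" in RAS)
        z[ol[keep]] += e_i[keep]        # S_i^{0,T}: write back only non-overlapping entries
    return z
```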

Figure 1: Steps of subdomain exact solving with discrete transform and low-rank correction.

3 Innovation Implementation↩︎

We introduce a preconditioning system, FlashMP, to efficiently speed up iterative solvers on multiple GPUs. For algorithmic optimization, we design a subdomain exact solver based on discrete transforms (illustrated in Figure 1). The analysis below derives Algorithm 2 systematically.

3.1 Algorithmic Innovations↩︎

We consider a 3D grid with subdomain size \(n_x = n_y = n_z = n\). Our goal is to solve the linear system \(\mathbf{A} E = R\), where \(\mathbf{A} \in \mathbb{R}^{3n^3 \times 3n^3}\) is the coefficient matrix, \(E = [E_x, E_y, E_z]^T \in \mathbb{R}^{3n^3 \times 1}\) is the electric field vector with components in the \(x\), \(y\), and \(z\) directions, and \(R \in \mathbb{R}^{3n^3 \times 1}\) is the right-hand side vector, on a grid with \(n^3\) points (\(3n^3\) variables).

We express the double-curl operator using first-order forward (\(\mathbf{D}_l^f \in \mathbb{R}^{n^3 \times n^3}\)) and backward (\(\mathbf{D}_l^b = -(\mathbf{D}_l^f)^T\)) difference operators [16], where \(l \in \{x, y, z\}\); the curl operator is then represented as:

\[\nabla \times E = \left[ \scriptsize \begin{array}{c} -\partial_z E_y + \partial_y E_z \\ \partial_z E_x - \partial_x E_z \\ -\partial_y E_x + \partial_x E_y \end{array} \right] = \left[ \scriptsize \begin{array}{ccc} 0 & -\mathbf{D}_z & \mathbf{D}_y \\ \mathbf{D}_z & 0 & -\mathbf{D}_x \\ -\mathbf{D}_y & \mathbf{D}_x & 0 \end{array} \right] E\] where \(\mathbf{D}_x \in \{\mathbf{D}_x^f, \mathbf{D}_x^b\}\), \(\mathbf{D}_y \in \{\mathbf{D}_y^f, \mathbf{D}_y^b\}\), and \(\mathbf{D}_z \in \{\mathbf{D}_z^f, \mathbf{D}_z^b\}\) are the first-order difference operators in the \(x\), \(y\), and \(z\) directions, respectively. The double-curl operator in (5 ) becomes:

\[\mathbf{M} = \left[ \renewcommand{\arraystretch}{1.5} \scriptsize \begin{array}{ccc} -\mathbf{D}_z^b \mathbf{D}_z^f - \mathbf{D}_y^b \mathbf{D}_y^f & \mathbf{D}_y^b \mathbf{D}_x^f & \mathbf{D}_z^b \mathbf{D}_x^f \\ \mathbf{D}_x^b \mathbf{D}_y^f & -\mathbf{D}_z^b \mathbf{D}_z^f - \mathbf{D}_x^b \mathbf{D}_x^f & \mathbf{D}_z^b \mathbf{D}_y^f \\ \mathbf{D}_x^b \mathbf{D}_z^f & \mathbf{D}_y^b \mathbf{D}_z^f & -\mathbf{D}_y^b \mathbf{D}_y^f - \mathbf{D}_x^b \mathbf{D}_x^f \end{array} \right]\]

Define \(\alpha = \frac{\Delta t^2}{4}\), so the linear system in (5 ) has the following representation: \[(\mathbf{I} + \alpha \mathbf{M}) E = R \label{eqn:linear95sys}\tag{6}\] where \(\mathbf{A} = \mathbf{I} + \alpha \mathbf{M}\), and \(\mathbf{I} \in \mathbb{R}^{3n^3 \times 3n^3}\) is the identity matrix.
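
As a point of reference, the operator \(\mathbf{A} = \mathbf{I} + \alpha \mathbf{M}\) can be applied matrix-free; the sketch below assumes a zero field outside the subdomain (consistent with the compact \(\mathbf{D}^f\) defined below), stores each field component as an \(n \times n \times n\) array, and uses helper names of our own choosing.

```python
import numpy as np

def dfwd(F, axis):
    """Forward difference D^f along one axis: out_i = F_{i+1} - F_i, zero past the end."""
    out = -F.copy()
    src, dst = [slice(None)] * 3, [slice(None)] * 3
    src[axis], dst[axis] = slice(1, None), slice(0, -1)
    out[tuple(dst)] += F[tuple(src)]
    return out

def dbwd(F, axis):
    """Backward difference D^b = -(D^f)^T: out_i = F_i - F_{i-1}, zero before the start."""
    out = F.copy()
    src, dst = [slice(None)] * 3, [slice(None)] * 3
    src[axis], dst[axis] = slice(0, -1), slice(1, None)
    out[tuple(dst)] -= F[tuple(src)]
    return out

def curl(E, diff):
    """Discrete curl of E = (Ex, Ey, Ez) with the given difference operator."""
    Ex, Ey, Ez = E
    return (diff(Ez, 1) - diff(Ey, 2),   # d_y Ez - d_z Ey
            diff(Ex, 2) - diff(Ez, 0),   # d_z Ex - d_x Ez
            diff(Ey, 0) - diff(Ex, 1))   # d_x Ey - d_y Ex

def apply_A(E, alpha):
    """A E = E + alpha * curl_b(curl_f(E)), i.e. (I + alpha*M) E."""
    cc = curl(curl(E, dfwd), dbwd)
    return tuple(e + alpha * c for e, c in zip(E, cc))
```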

Observation. The linear system (6 ) arises from a non-symmetric double-curl operator \(\mathbf{M}\) with cross-derivative terms, precluding traditional diagonalization methods used for symmetric operators like the Laplacian [17]. Forward and backward difference operators exhibit symmetry (\(\mathbf{D}_l^b = -(\mathbf{D}_l^f)^T\)), suggesting shared spectral properties.

Key Idea. We can employ a discrete transform based on the SVD of the forward difference operator (\(\mathbf{D}^f = \mathbf{U} \mathbf{S} \mathbf{V}^T\)). The symmetry \(\mathbf{D}_l^b = -(\mathbf{D}_l^f)^T\) allows reuse of the SVD (\(\mathbf{D}_l^b = -\mathbf{V} \mathbf{S} \mathbf{U}^T\)). We aim to diagonalize \(\mathbf{A}\) to decouple variables across grid points for efficient solving.

We define the compact difference matrix \(\mathbf{D}^f \in \mathbb{R}^{n \times n}\) as:

\[\mathbf{D}^f = \begin{pmatrix} -1 & 1 & 0 & \cdots & 0 \\ 0 & -1 & 1 & \cdots & 0 \\ 0 & 0 & -1 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & -1 \end{pmatrix}\] Perform singular value decomposition (SVD) on \(\mathbf{D}^f\): \[\mathbf{D}^f = \mathbf{U} \mathbf{S} \mathbf{V}^T \label{eqn:svd}\tag{7}\] where \(\mathbf{S} = \text{diag}(\sigma_1, \dots, \sigma_n) \in \mathbb{R}^{n \times n}\), and \(\mathbf{U}, \mathbf{V} \in \mathbb{R}^{n \times n}\) are orthogonal matrices.
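
A minimal NumPy sketch of this operator and its factorization (the size \(n = 8\) is only illustrative):

```python
import numpy as np

n = 8
Df = -np.eye(n) + np.diag(np.ones(n - 1), k=1)   # -1 on the diagonal, +1 on the superdiagonal
Db = -Df.T                                       # backward operator, D^b = -(D^f)^T

U, s, Vt = np.linalg.svd(Df)                     # D^f = U S V^T
S = np.diag(s)
assert np.allclose(Df, U @ S @ Vt)
assert np.allclose(Db, -Vt.T @ S @ U.T)          # the same factors represent D^b
```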

Let \(F\) denote any component of the electric field \(E\) along the \(x\), \(y\), or \(z\)-direction, i.e., \(F \in \{E_x, E_y, E_z\}\). Define tensor product operators \(\text{TrX}, \text{TrY}, \text{TrZ}\) for a 3D array \(F \in \mathbb{R}^{n^3 \times 1}\), reshaped from a tensor of size \(n \times n \times n\), which perform discrete transforms on the field. \[\begin{align} \text{TrX}(\mathbf{T}, F) &= \sum_{m=1}^n t_{im} f_{mjk} \\ \text{TrY}(\mathbf{T}, F) &= \sum_{m=1}^n t_{jm} f_{imk} \\ \text{TrZ}(\mathbf{T}, F) &= \sum_{m=1}^n t_{km} f_{ijm} \end{align} \label{eqn:tensor}\tag{8}\] where \(\mathbf{T} \in \mathbb{R}^{n \times n}\), and \(f_{ijk}\) represents the \((i,j,k)\)-th element of \(F\) in tensor form. Thus, it is evident that: \[\begin{align} \mathbf{D}_x^f F &= \text{TrX}(\mathbf{D}^f, F) \\ \mathbf{D}_y^f F &= \text{TrY}(\mathbf{D}^f, F) \\ \mathbf{D}_z^f F &= \text{TrZ}(\mathbf{D}^f, F) \end{align} \label{eqn:trxyz}\tag{9}\] Define the transform operator \(\mathbf{G} \in \mathbb{R}^{3n^3 \times 3n^3}\):

\[\begin{align} \mathbf{G} = \text{diag}\Big( &\text{TrZ}(\mathbf{V}^T, \text{TrY}(\mathbf{V}^T, \text{TrX}(\mathbf{U}^T, \cdot))), \\ &\text{TrZ}(\mathbf{V}^T, \text{TrY}(\mathbf{U}^T, \text{TrX}(\mathbf{V}^T, \cdot))), \\ &\text{TrZ}(\mathbf{U}^T, \text{TrY}(\mathbf{V}^T, \text{TrX}(\mathbf{V}^T, \cdot))) \Big) \end{align} \label{eqn:G}\tag{10}\]
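
For concreteness, the tensor-product operators in (8 ) admit a direct NumPy realization; np.einsum here stands in for the optimized GPU kernels of Section 3.3, and \(\mathbf{G}\) is simply a composition of these calls per component.

```python
import numpy as np

def TrX(T, F):  # out[i, j, k] = sum_m T[i, m] * F[m, j, k]
    return np.einsum('im,mjk->ijk', T, F)

def TrY(T, F):  # out[i, j, k] = sum_m T[j, m] * F[i, m, k]
    return np.einsum('jm,imk->ijk', T, F)

def TrZ(T, F):  # out[i, j, k] = sum_m T[k, m] * F[i, j, m]
    return np.einsum('km,ijm->ijk', T, F)

# With Df, U, S, Vt from the SVD sketch above: TrX(Df, F) equals D_x^f F, and
# the first diagonal block of G in (10) acts on E_x as
#   G_x(E_x) = TrZ(Vt, TrY(Vt, TrX(U.T, E_x)))
```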

The operator definition facilitates deriving the diagonalized form of \(\mathbf{A}\), followed by four steps for exact subdomain solve:

Step 1: Component-wise transform. We apply \(\mathbf{G}\) to transform the system (6 ): \(\mathbf{G} (\mathbf{I} + \alpha \mathbf{M}) \mathbf{G}^{-1} (\mathbf{G} E) = \mathbf{G} R\) , yielding: \[(\mathbf{I} + \alpha \mathbf{H}) \hat{E} = \hat{R} \label{eqn:H95linear95sys}\tag{11}\] where \(\hat{E} = \mathbf{G} E\), \(\hat{R} = \mathbf{G} R\), and \(\mathbf{H} = \mathbf{G} \mathbf{M} \mathbf{G}^{-1} \in \mathbb{R}^{3n^3 \times 3n^3}\). By substituting (7 ) and (9 ) into (11 ), we can obtain the form of \(\mathbf{H}\) as follows:

\[\mathbf{H} = \scalebox{0.75}{ \left[ \renewcommand{\arraystretch}{1.6} \scriptsize \begin{array}{ccc} \text{TrZ}(\mathbf{S}^T \mathbf{S}, \cdot) + \text{TrY}(\mathbf{S}^T \mathbf{S}, \cdot) & -\text{TrY}(\mathbf{S}^T, \text{TrX}(\mathbf{S}, \cdot)) & -\text{TrZ}(\mathbf{S}^T, \text{TrX}(\mathbf{S}, \cdot)) \\ -\text{TrX}(\mathbf{S}^T, \text{TrY}(\mathbf{S}, \cdot)) & \text{TrZ}(\mathbf{S}^T \mathbf{S}, \cdot) + \text{TrX}(\mathbf{S}^T \mathbf{S}, \cdot) & -\text{TrZ}(\mathbf{S}^T, \text{TrY}(\mathbf{S}, \cdot)) \\ -\text{TrX}(\mathbf{S}^T, \text{TrZ}(\mathbf{S}, \cdot)) & -\text{TrY}(\mathbf{S}^T, \text{TrZ}(\mathbf{S}, \cdot)) & \text{TrX}(\mathbf{S}^T \mathbf{S}, \cdot) + \text{TrY}(\mathbf{S}^T \mathbf{S}, \cdot) \end{array} \right] }\]

Expanding the \(x\)-direction component of (11 ): \[\begin{align} \hat{R}_x &= \hat{E}_x + \alpha \text{TrZ}(\mathbf{S}^T \mathbf{S}, \hat{E}_x) + \alpha \text{TrY}(\mathbf{S}^T \mathbf{S}, \hat{E}_x) \\ &\quad - \alpha \text{TrY}(\mathbf{S}^T, \text{TrX}(\mathbf{S}, \hat{E}_y)) - \alpha \text{TrZ}(\mathbf{S}^T, \text{TrX}(\mathbf{S}, \hat{E}_z)) \end{align}\]

The scalar form of the above equation is as follows: \[\begin{align} (\hat{r}_x)_{ijk} &= (1 + \alpha \sigma_k^2 + \alpha \sigma_j^2) (\hat{e}_x)_{ijk} - \alpha \sigma_i \sigma_j (\hat{e}_y)_{ijk} - \alpha \sigma_i \sigma_k (\hat{e}_z)_{ijk} \end{align}\] where \(\hat{e}_x, \hat{e}_y, \hat{e}_z\) and \(\hat{r}_x, \hat{r}_y, \hat{r}_z\) are the scalar elements at grid point \((i,j,k)\) of the solution field components \(\hat{E}_x, \hat{E}_y, \hat{E}_z\) and right-hand-side components \(\hat{R}_x, \hat{R}_y, \hat{R}_z\), respectively.

For the \(y\)- and \(z\)-directions we have: \[(\hat{r}_y)_{ijk} = -\alpha \sigma_i \sigma_j (\hat{e}_x)_{ijk} + (1 + \alpha \sigma_i^2 + \alpha \sigma_k^2) (\hat{e}_y)_{ijk} - \alpha \sigma_j \sigma_k (\hat{e}_z)_{ijk}\] \[(\hat{r}_z)_{ijk} = -\alpha \sigma_i \sigma_k (\hat{e}_x)_{ijk} - \alpha \sigma_j \sigma_k (\hat{e}_y)_{ijk} + (1 + \alpha \sigma_i^2 + \alpha \sigma_j^2) (\hat{e}_z)_{ijk}\]

Define the block matrix: \[\mathbf{B}_{ijk} = \mathbf{I} + \alpha \begin{bmatrix} \sigma_j^2 + \sigma_k^2 & -\sigma_i \sigma_j & -\sigma_i \sigma_k \\ -\sigma_i \sigma_j & \sigma_i^2 + \sigma_k^2 & -\sigma_j \sigma_k \\ -\sigma_i \sigma_k & -\sigma_j \sigma_k & \sigma_i^2 + \sigma_j^2 \end{bmatrix} \in \mathbb{R}^{3 \times 3}\] we can get:

\[\mathbf{B}_{ijk} \begin{bmatrix} (\hat{e}_x)_{ijk} \, (\hat{e}_y)_{ijk} \, (\hat{e}_z)_{ijk} \end{bmatrix}^T = \begin{bmatrix} (\hat{r}_x)_{ijk} \, (\hat{r}_y)_{ijk} \, (\hat{r}_z)_{ijk} \end{bmatrix}^T \label{eqn:rbe}\tag{12}\] (12 ) indicates that the linear system of size \(3n^3 \times 3n^3\) in (6 ) can be decoupled into \(n^3\) small linear systems of size \(3 \times 3\).

Step 2: Point-wise field solving. We define the permutation matrix \(\mathbf{P} \in \mathbb{R}^{3n^3 \times 3n^3}\), which essentially reorders the indices \(l, i, j, k\) (where \(l \in \{x, y, z\}\) indicates the three components of the field variable in \(x, y, z\) respectively, and \(i, j, k\) represent the grid indices along the \(x, y, z\) directions). \[\begin{align} \mathbf{P} R &= \mathbf{P} \left\{ \left\{ r_{x, ijk} \right\}, \left\{ r_{y, ijk} \right\}, \left\{ r_{z, ijk} \right\} \right\} \\ &= \left\{ \left\{ r_{x, ijk}, r_{y, ijk}, r_{z, ijk} \mid 1 \leq i,j,k \leq n \right\} \right\} \end{align}\] By reordering both sides of (11 ), an equivalent equation is obtained: \[\text{diag} \left( \mathbf{B}_{ijk} \right) \mathbf{P} \hat{E} = \mathbf{P} \hat{R} \label{qgn:PR}\tag{13}\]

Step 3: Component-wise inverse transform. This yields the exact_solve procedure for solving (6 ): (1) compute \(R' = \mathbf{P} \mathbf{G} R\), (2) solve the \(3 \times 3\) system \(\mathbf{B}_{ijk} (E')_{ijk} = (R')_{ijk}\) at each grid point, (3) recover \(E = \mathbf{G}^{-1} \mathbf{P}^T E'\). Here, \(\mathbf{G}\) and \(\mathbf{G}^{-1}\) perform the transform and inverse transform, respectively.
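
To make Steps 1–3 concrete, the following NumPy sketch performs one interior exact_solve on a single subdomain (the boundary correction of Step 4 is omitted); the routine and variable names are ours, the permutation \(\mathbf{P}\) is implicit in the array layout, and np.einsum / np.linalg.solve stand in for the batched GPU kernels.

```python
import numpy as np

def exact_solve_interior(R, alpha, n):
    """Solve (I + alpha*M) E = R on one subdomain via the discrete transform.
    R is a tuple (Rx, Ry, Rz) of (n, n, n) arrays; returns (Ex, Ey, Ez)."""
    Df = -np.eye(n) + np.diag(np.ones(n - 1), k=1)
    U, s, Vt = np.linalg.svd(Df)
    V = Vt.T
    tr3 = lambda A, B, C, F: np.einsum('ia,jb,kc,abc->ijk', A, B, C, F)

    # Step 1: component-wise forward transform (the operator G)
    Rx, Ry, Rz = R
    hx = tr3(U.T, V.T, V.T, Rx)
    hy = tr3(V.T, U.T, V.T, Ry)
    hz = tr3(V.T, V.T, U.T, Rz)

    # Step 2: point-wise 3x3 solves with B_ijk built from the singular values
    si, sj, sk = np.meshgrid(s, s, s, indexing='ij')
    B = np.empty((n, n, n, 3, 3))
    B[..., 0, 0] = 1 + alpha * (sj**2 + sk**2)
    B[..., 1, 1] = 1 + alpha * (si**2 + sk**2)
    B[..., 2, 2] = 1 + alpha * (si**2 + sj**2)
    B[..., 0, 1] = B[..., 1, 0] = -alpha * si * sj
    B[..., 0, 2] = B[..., 2, 0] = -alpha * si * sk
    B[..., 1, 2] = B[..., 2, 1] = -alpha * sj * sk
    rhs = np.stack([hx, hy, hz], axis=-1)[..., None]   # shape (n, n, n, 3, 1)
    e = np.linalg.solve(B, rhs)[..., 0]                # batched 3x3 solves

    # Step 3: component-wise inverse transform (G^{-1} = G^T, since U, V are orthogonal)
    Ex = tr3(U, V, V, e[..., 0])
    Ey = tr3(V, U, V, e[..., 1])
    Ez = tr3(V, V, U, e[..., 2])
    return Ex, Ey, Ez
```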

Figure 2: Subdomain exact solving with discrete transform and low-rank correction.
Figure 3: Illustration of low-rank boundary correction. (a) Left: why low-rank? Only surfaces matter, not the volume; total non-zeros: \(2n^2 - n\) for the \(x\) component. (a) Right: \(\mathbf{\Lambda}\)’s sparse diagonal with non-zeros (e.g., 2 at edges, 1 at faces) at boundary indices. (b) Low-rank decomposition \(\mathbf{Q} \mathbf{W} \mathbf{Q}^T\), from a tall-skinny \(\mathbf{Q}\) to a small \(\mathbf{W}\), yielding a rank \(\leq 6n^2 - 3n \ll 3n^3\), followed by the application of the Woodbury formula.

Step 4: Boundary error correction. The system in (6 ) assumes Dirichlet boundary conditions \(E|_{\partial \Omega} = 0\). However, the difference operators \(\mathbf{D}_l^f\) and \(\mathbf{D}_l^b\) near the boundary points do not explicitly enforce these constraints in the transformed system, leading to boundary errors that must be corrected to ensure exact subdomain solving. To address this, we introduce a sparse diagonal correction term \(\alpha \mathbf{\Lambda}\), which encodes the boundary effects, resulting in the corrected linear system: \[(\mathbf{I} + \alpha \mathbf{M} + \alpha \mathbf{\Lambda}) E = R, \label{eqn:error95linear95sys}\tag{14}\] where \(\mathbf{\Lambda} = \text{diag}(\delta_{l,ijk} \mid l=x,y,z, 1 \leq i,j,k \leq n) \in \mathbb{R}^{3n^3 \times 3n^3}\), \(E, R \in \mathbb{R}^{3n^3 \times 1}\), and \(\delta_{l,ijk} \in \{0, 1, 2\}\). For direction \(l \in \{x, y, z\}\), define the indices \(p_l\) and \(q_l\) as: \[(p_l, q_l) = \begin{cases} (j, k) & \text{if } l = x \\ (i, k) & \text{if } l = y \\ (i, j) & \text{if } l = z \end{cases}\] Then \(\delta_{l,ijk}\) is given by: \[\delta_{l,ijk} = \begin{cases} 2 & \text{if } p_l = q_l = 1 \\ 1 & \text{if exactly one of } p_l = 1 \text{ or } q_l = 1 \\ 0 & \text{otherwise} \end{cases}\]

The sparsity pattern of \(\mathbf{\Lambda}\) is characterized by non-zero entries confined to boundary points of each field component, reflecting the geometric insight that errors arise from asymmetric differencing at boundaries and propagate along surfaces rather than through the volume. As visualized in Figure 3 (a), the 3D subdomain (left) has \(2n^2 - n\) non-zeros in total for the \(x\) component (two planes of \(n^2\) points each, minus \(n\) overlapping edge points); across the three components, this yields \(6n^2 - 3n\). The matrix \(\mathbf{\Lambda}\) (right) shows non-zeros (e.g., 2 at edges, 1 at faces) only along boundary indices in an otherwise zero-filled diagonal. This surface-like scaling (\(O(n^2)\)), versus volumetric (\(O(n^3)\)), underpins the low-rank nature, projecting errors into a low-dimensional boundary subspace.

To leverage this structure for solving (14 ), we express the error term \(\alpha \mathbf{\Lambda}\) in low-rank matrix form. Using \(g_l\) and \(v_l\) to denote the indices and values of non-zero elements in the vector \(\{\delta_{l, ijk}\}\) for each component \(l\) (with length \(m_l = n \cdot (n + n - 1) = 2n^2 - n\)), we define the selection matrix \(\mathbf{Q}_l \in \mathbb{R}^{n^3 \times m_l}\) as: \[q_{ij} = \begin{cases} 1 & \text{if } g_{l,j} = i, \, 1 \leq i \leq n^3, \, 1 \leq j \leq m_l \\ 0 & \text{otherwise} \end{cases}\] \[\begin{align} g_l &= \{ m \mid \delta_{l,m} \neq 0, \, 1 \leq m \leq n^3 \}, \\ v_l &= \{ \delta_{l,m} \mid \delta_{l,m} \neq 0, \, 1 \leq m \leq n^3 \} \end{align}\] The block-diagonal \(\mathbf{Q} = \text{diag}(\mathbf{Q}_x, \mathbf{Q}_y, \mathbf{Q}_z) \in \mathbb{R}^{3 n^3 \times (6n^2 - 3n)}\) acts as a projection operator onto the boundary subspace, while \(\mathbf{W} = \text{diag}(\alpha v_x, \alpha v_y, \alpha v_z) \in \mathbb{R}^{(6n^2 - 3n) \times (6n^2 - 3n)}\) is a diagonal matrix weighting the boundary corrections (with values \(\alpha \times 1\) or \(\alpha \times 2\)). It follows that \(\alpha \mathbf{\Lambda} = \mathbf{Q} \mathbf{W} \mathbf{Q}^T\), a low-rank outer product (rank at most \(6n^2 - 3n\)), as illustrated in the bottom decomposition flow of Figure 3 (b), where the tall-skinny \(\mathbf{Q}\) (purple) compresses the full system to the small \(\mathbf{W}\) (orange), yielding a compact low-rank result (green).

Motivated by the need to handle this low-rank update efficiently—avoiding the prohibitive cost of reinverting the entire \(3n^3 \times 3n^3\) matrix, which would lead to memory exhaustion—we apply the Woodbury formula [18], which relates the inverse of a matrix after a low-rank perturbation to the inverse of the original matrix, transforming the large matrix inverse into a small matrix inverse plus matrix-vector multiplications: \[\begin{gather} (\mathbf{I} + \alpha \mathbf{M} + \alpha \mathbf{\Lambda})^{-1} = \left( \mathbf{I} + \alpha \mathbf{M} + \mathbf{Q} \mathbf{W} \mathbf{Q}^T \right)^{-1} \\ = (\mathbf{I} + \alpha \mathbf{M})^{-1} - (\mathbf{I} + \alpha \mathbf{M})^{-1} \mathbf{Q} \, \mathbf{C}^{-1} \, \mathbf{Q}^T (\mathbf{I} + \alpha \mathbf{M})^{-1} \end{gather}\] where the correction matrix \[\mathbf{C} = \left[ \mathbf{W}^{-1} + \mathbf{Q}^T (\mathbf{I} + \alpha \mathbf{M})^{-1} \mathbf{Q} \right] \in \mathbb{R}^{(6n^2 - 3n) \times (6n^2 - 3n)}\]
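
A sketch of how this update could be organized around an interior solver: assume a routine solve0 that applies \((\mathbf{I} + \alpha \mathbf{M})^{-1}\) to a flattened vector and a dense 0/1 selection matrix Q (hypothetical stand-ins; in practice \(\mathbf{C}^{-1}\) is precomputed and \(\mathbf{Q}\) is applied implicitly through the boundary index lists \(g_l\)).

```python
import numpy as np

def build_corrected_solver(solve0, Q, w):
    """Return a solver for (I + alpha*M + Q diag(w) Q^T) via the Woodbury formula.

    solve0(r) : applies (I + alpha*M)^{-1} to a vector of length 3*n**3
    Q         : (3*n**3, m) selection matrix, m = 6*n**2 - 3*n
    w         : (m,) boundary weights (alpha*1 or alpha*2)
    """
    m = Q.shape[1]
    # One-time setup: A0^{-1} Q (column by column) and the small correction matrix C
    Ainv_Q = np.column_stack([solve0(Q[:, j]) for j in range(m)])
    C = np.diag(1.0 / w) + Q.T @ Ainv_Q            # C = W^{-1} + Q^T A0^{-1} Q
    C_inv = np.linalg.inv(C)                       # small (m x m) inverse, stored

    def solve(r):
        y = solve0(r)                              # A0^{-1} r
        return y - Ainv_Q @ (C_inv @ (Q.T @ y))    # Woodbury low-rank correction
    return solve
```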

In practice, this manifests in Algorithm 2’s boundary correction steps, avoiding reprocessing the entire system. Without it, direct inversion would lead to prohibitive overheads, memory exhaustion, and scalability issues. Thus, we obtain the complete algorithmic procedure for solving (14 ), presented in Algorithm 2.

3.2 Computational and Space Complexity Analysis↩︎

Compared to traditional direct methods such as LU, QR decomposition, or Gaussian elimination [19], which compute an explicit inverse \(\mathbf{A}^{-1}\), FlashMP reduces both the computational and space complexity from \(O(n^6)\) to \(O(n^4)\), substantially lowering compute and memory overhead. For fairness, we assume the inverse matrix \(\mathbf{A}^{-1}\) is precomputed for direct methods, so their runtime is dominated by a general matrix-vector multiplication (GEMV). Computational cost is measured as the number of floating-point operations in units of \(10^9\) operations (G), and memory usage is quantified in gigabytes (GB).

Table 2: Comparison of FlashMP and direct methods with a subdomain size of \(32^3\).
Metric | FlashMP | Direct Methods
Arithmetic Operations (G) | 0.15 | 19.3
Memory Usage (GB) | 0.24 | 77.3
Runtime (ms) | 1.11 | 120.78

3.2.1 Theoretical Computational Complexity↩︎

3.2.1.1 FlashMP

The computational complexity of FlashMP is dominated by three components:

  • Component-wise transforms with \(\mathbf{G}\) or \(\mathbf{G}^{-1}\): Each tensor product operation costs \(2n^4\) floating-point operations. The \(\mathbf{G}\) operation involves three components with three directional transforms, costing \(9 \times 2n^4 = 18n^4\) operations. The \(\mathbf{G}^{-1}\) operation is identical. Each exact_solve requires \(18n^4 + 18n^4 = 36n^4\) operations, and the two exact_solve calls in Algorithm 2 total \(72n^4\) operations.

  • Point-wise field solving: A single block sparse matrix-vector multiplication (BSpMV) with the precomputed diagonal matrix \(\text{diag} \left( \mathbf{B}^{-1}_{ijk} \right)\), containing \(n^3\) blocks of size \(3 \times 3\), costs \(18n^3\) floating-point operations.

  • Boundary error correction: Using a precomputed matrix \(\mathbf{C}^{-1} \in \mathbb{R}^{(6n^2 - 3n) \times (6n^2 - 3n)}\), matrix-vector multiplication costs \(2 \times (6n^2 - 3n)^2 \approx 72n^4\) operations.

Total complexity: \(72n^4 + 18n^3 + 72n^4 \approx O(n^4)\) operations.

3.2.1.2 Direct Methods

For \(\mathbf{A}^{-1} \in \mathbb{R}^{3n^3 \times 3n^3}\), the runtime GEMV with the precomputed \(\mathbf{A}^{-1}\) costs \(2 \times (3n^3)^2 = 18n^6\) floating-point operations.

Total complexity: \(O(n^6)\) operations.
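
As a quick arithmetic check using the formulas above, the counts for \(n = 32\) reproduce the operation column of Table 2:

```python
n = 32
flashmp_ops = 72 * n**4 + 18 * n**3 + 72 * n**4   # transforms + 3x3 solves + boundary correction
direct_ops = 2 * (3 * n**3) ** 2                  # GEMV with a dense precomputed A^{-1}
print(f"FlashMP: {flashmp_ops / 1e9:.2f} G operations")   # ~0.15 G
print(f"Direct : {direct_ops / 1e9:.2f} G operations")    # ~19.33 G
```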

3.2.2 Theoretical Space Complexity↩︎

3.2.2.1 FlashMP

Memory usage includes:

  • Correction matrix \(\mathbf{C}^{-1} \in \mathbb{R}^{(6n^2 - 3n) \times (6n^2 - 3n)}\): \((6n^2 - 3n)^2 \times 8 \approx 288n^4\) bytes.

  • Vectors \(\mathbf{E}, \mathbf{R} \in \mathbb{R}^{3n^3}\): \(3n^3 \times 8 = 24n^3\) bytes each.

  • Matrices \(\mathbf{U}, \mathbf{V}\) and diagonal \(\mathbf{S} \in \mathbb{R}^{n \times n}\) : \(O(n^2)\) bytes.

Total complexity: \(O(n^4)\) bytes, dominated by \(\mathbf{C}^{-1}\).

3.2.2.2 Direct Methods

Memory usage includes:

  • Precomputed \(\mathbf{A}^{-1} \in \mathbb{R}^{3n^3 \times 3n^3}\): \((3n^3)^2 \times 8 = 72n^6\) bytes.

  • Vectors \(\mathbf{E}, \mathbf{R} \in \mathbb{R}^{3n^3\times 1}\): \(3n^3 \times 8 = 24n^3\) bytes each.

Total complexity: \(O(n^6)\) bytes, dominated by \(\mathbf{A}^{-1}\).

Figure 4: Tensor product operations of a field component \(R_x\) along the \(x\), \(y\), and \(z\) directions based on DGEMM.

3.3 System Innovations↩︎

We have also made the following optimizations for the GPU to ensure efficient mapping of the algorithm to the hardware.

GEMM-Based Discrete Transform. Here the field component \(R_x \in \mathbb{R}^{n^3 \times 1}\) is a 3D tensor in the spatial domain, reshaped from a tensor of size \(n_x \times n_y \times n_z\). The tensor product operations performed by \(\text{TrX}\), \(\text{TrY}\), and \(\text{TrZ}\) in (8 ) respectively correspond to contractions in the \(x\), \(y\), and \(z\) directions. We invoke the double-precision general matrix multiply (GEMM) interface to transpose \(R_x\) into a \(y-z-x\) format and perform the \(x\)-direction contraction with matrix parameters \(M = n_x\), \(N = n_y \times n_z\), \(K = n_x\) (for matrices of size \(M \times K\) and \(K \times N\)). The contractions in the \(y\)- and \(z\)-directions are performed similarly, with appropriate transpositions and GEMM invocations. Figure 4 provides an example of Step 1 in the FlashMP algorithm, showcasing the transformation of field variables on a \(2 \times 2 \times 2\) subdomain. Using a GEMM-based approach enables efficient, GPU-accelerated tensor operations.
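
A NumPy sketch of this reshape-plus-GEMM mapping (the row-major layouts here are illustrative; the GPU implementation performs the transposes with dedicated kernels and calls the vendor DGEMM, e.g., rocBLAS):

```python
import numpy as np

def trx_gemm(T, F):
    """x-direction contraction TrX(T, F) as one GEMM with M = nx, N = ny*nz, K = nx."""
    nx, ny, nz = F.shape
    return (T @ F.reshape(nx, ny * nz)).reshape(nx, ny, nz)

def try_gemm(T, F):
    """y-direction contraction: transpose so y leads, GEMM, transpose back."""
    nx, ny, nz = F.shape
    Fy = np.ascontiguousarray(F.transpose(1, 0, 2)).reshape(ny, nx * nz)
    out = (T @ Fy).reshape(ny, nx, nz)
    return np.ascontiguousarray(out.transpose(1, 0, 2))
```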

Figure 5: Inter-subdomain communication. (a) Data derived from 26 adjacent subdomains are distinguished by different colors. The block in the middle represents the data that the subdomain originally had. (b) asm_comm represents the communication process involved in domain decomposition, including the three steps: Pack, Send & Recv, and Unpack.

Communication Mechanism in ASM. Figure 5 illustrates the communication mechanism of the Additive Schwarz Method (ASM). As shown in Figure 5 (a), each subdomain comprises an original interior domain and an extended halo domain, with the interior block representing local data and the halo capturing overlapping data. Figure 5 (b) depicts the asm_comm workflow, consisting of three steps. In the Pack step, the GPU kernel extracts overlapping elements from the global vector into a send buffer. The Send & Recv employs MPI non-blocking calls [20] to exchange halo data between subdomains. Finally, the Unpack kernel merges local and received data into an extended vector for preconditioning.

This mechanism offers two advantages. The GPU-accelerated packing and unpacking kernels enable efficient data preparation, while consolidating scattered communication into coarse-grained transfers reduces the number of MPI calls.
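
A simplified mpi4py sketch of the asm_comm pattern, reduced to two neighbors along one axis with periodic rank indexing for brevity (the actual implementation exchanges data with all 26 neighbors and runs Pack/Unpack as GPU kernels); the names here are illustrative.

```python
import numpy as np
from mpi4py import MPI

def asm_comm_1d(comm, local, overlap):
    """Extend `local` (shape (nx, ny, nz)) by `overlap` halo layers along x."""
    rank, size = comm.Get_rank(), comm.Get_size()
    left, right = (rank - 1) % size, (rank + 1) % size

    # Pack: copy the overlapping faces into contiguous send buffers
    send_lo = np.ascontiguousarray(local[:overlap])
    send_hi = np.ascontiguousarray(local[-overlap:])
    recv_lo = np.empty_like(send_lo)
    recv_hi = np.empty_like(send_hi)

    # Send & Recv: non-blocking point-to-point exchange with both neighbors
    reqs = [comm.Isend(send_lo, dest=left, tag=0),
            comm.Isend(send_hi, dest=right, tag=1),
            comm.Irecv(recv_hi, source=right, tag=0),
            comm.Irecv(recv_lo, source=left, tag=1)]
    MPI.Request.Waitall(reqs)

    # Unpack: merge the received halos with the interior into the extended vector
    return np.concatenate([recv_lo, local, recv_hi], axis=0)
```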

4 Evaluation↩︎

4.1 Experimental Setup↩︎

Platform. The test platform is an AMD GPU cluster with a Hygon C86 7185 CPU (32-core), four AMD MI60 GPUs (16 GB each), and 128 GB host memory per node. The GPUs and CPU are interconnected via PCIe, and nodes are connected via a 200 Gb/s FatTree network. The system uses ROCm 4.0 [21] and CentOS 7.6. The GPUs have a peak performance of 5.4 TFLOPS for FP64 and a memory bandwidth of 672 GB/s.

System setup. FlashMP, as a preconditioner, can accelerate iterative solvers. To verify its effectiveness, we pair FlashMP with two representative iterative solvers: BiCGSTAB and GMRES. The efficient implementations of BiCGSTAB and GMRES on AMD GPUs are based on Hypre [22], a state-of-the-art library that provides highly optimized solver implementations for AMD GPUs.

Workloads. GMRES is configured with a restart length of \(k = 30\), and BiCGSTAB uses standard parameters. Both solvers use a right-hand side vector \(\mathbf{b}\) computed as \(A\mathbf{x}_0\), where \(\mathbf{x}_0\) is a vector of random values. The stopping criterion requires a 12-order-of-magnitude reduction in the relative residual, i.e., \(\frac{\|\mathbf{b} - A\mathbf{x}_k\|}{\|\mathbf{b}- A\mathbf{x}_0\|} < 10^{-12}\). Experiments use a fixed subdomain size of \(32^3\) per GPU, both without preconditioning and with FlashMP at overlaps 0 to 3.

Metrics. The total time \(T_{\text{total}}\) of the iterative solver is: \[T_{\text{total}} = \#\text{iter} \cdot T_{\text{single}}\] where \(\#\text{iter}\) is the number of iterations and \(T_{\text{single}}\) is the single-iteration time, which decomposes into the preconditioning time \(T_{\text{precond}}\) and the core subspace time \(T_{\text{core}}\). FlashMP minimizes \(\#\text{iter}\) through exact subdomain preconditioning and keeps \(T_{\text{single}}\) low via an efficient implementation; reducing both factors yields a high-performance solver overall.


Figure 6: Convergence curves of BiCGSTAB (a) and GMRES (b) with different preconditioners, where “NOPRE” represents without preconditioner, “OL_i” represents the use of FlashMP with overlap \(i\), and “ILU”, “IC”, “SOR” represent incomplete LU, incomplete Cholesky, and successive over-relaxation, respectively. The Y axis is the relative residual, and the X axis is the iteration number. a — Convergence traces for BiCGSTAB. b — Convergence traces for GMRES.

4.2 Convergence Analysis↩︎

Figure 6 shows convergence curves of iterative solvers with various preconditioners. Unlike approximate methods (ILU, IC, Jacobi, SOR), FlashMP’s subdomain exact solving achieves the fastest convergence with significantly lower single-iteration time than traditional exact direct methods (Table 2). Existing approximate methods on GPUs fail to reduce iterations substantially while increasing single-iteration overhead, rendering them uncompetitive with NOPRE in total time; thus, we focus on comparing FlashMP and NOPRE.

For BiCGSTAB in Figure 6 (a), the NOPRE case converges in 193 iterations, while FlashMP with overlaps 0, 1, 2, and 3 converges in 157, 20, 15, and 12 iterations, respectively. The residual curve of BiCGSTAB shows oscillations, indicating numerical instability in its biconjugate framework. Overlap 0 offers only a small reduction in iterations due to limited subdomain data exchange, but overlap 1 reduces the count sharply to 20, with minor further gains at overlaps 2 and 3. For GMRES in Figure 6 (b), the NOPRE case requires 364 iterations, while FlashMP with overlaps 0, 1, 2, and 3 converges in 301, 33, 24, and 20 iterations, respectively. The GMRES residual curve is smoother, which is attributed to the stability of its orthogonalization. As with BiCGSTAB, overlap 0 provides only minor benefits, highlighting the importance of adequate overlap. BiCGSTAB converges in roughly half as many iterations as GMRES because it applies the preconditioner twice per iteration.


Figure 7: Time breakdown for BiCGSTAB (a) and GMRES (b) into precond time \(T_{\text{precond}}\) (reorder, asm_comm, fast_solve) and core subspace time \(T_{\text{core}}\) (point-to-point communication, matrix-vector operations, reduction). fast_solve represents the time taken by FlashMP to perform exact subdomain solves. Speedup is relative to the NOPRE across various GPU counts with overlap 1, 2, and 3. Times are in milliseconds. a — Breakdown for BiCGSTAB. b — Breakdown for GMRES.

4.3 Performance and Speedup↩︎

Figure 7 quantifies the speedup of FlashMP paired with BiCGSTAB and GMRES, decomposing total time into precond time \(T_{\text{precond}}\) and core subspace time \(T_{\text{core}}\) across GPU counts of 8, 64, 216, 512, and 1000. Overlap 0 is excluded because it does not significantly reduce iteration counts while increasing single-iteration time \(T_{\text{single}}\), making it uncompetitive with the NOPRE case.

For BiCGSTAB in Figure 7 (a) at 1000 GPUs, NOPRE runtime is 363.8 ms, while FlashMP yields 119.4 ms, 89.6 ms, and 79.8 ms for overlaps 1, 2, and 3, achieving speedups of \(3.05\times\), \(4.06\times\), and \(4.56\times\), respectively, due to fewer iterations (20, 15, 12 vs. 193). For GMRES in Figure 7 (b), NOPRE runtime is 440.7 ms, with FlashMP times of 129.9 ms, 103.3 ms, and 89.9 ms for overlaps 1, 2, and 3, yielding speedups of \(3.42\times\), \(4.28\times\), and \(4.90\times\), driven by iteration reductions (33, 24, 20 vs. 364).

Table 3: Work requirements per iteration for BiCGSTAB and GMRES. Here, \(k_{\text{avg}}\), the average number of orthogonal vectors per GMRES iteration, is approximately \(\frac{k + 1}{2}\).
Operation Type | BiCGSTAB | GMRES
Preconditioning | 2 | 1
Matrix-vector multiplication (SpMV) | 2 | 1
Dot product | 4 | \(k_{\text{avg}} + 1\)
Scalar-vector multiplication (AXPY) | 6 | \(k_{\text{avg}}\)

For both solvers, increasing overlap reduces total time, as shown in Figure 7, by strengthening the preconditioner and lowering iteration counts. Although single-iteration time rises with larger subdomains, the substantial iteration reduction outweighs this cost. fast_solve dominates the precond time, while the reduction time, driven by collective communication, grows logarithmically with GPU count and therefore increases its share. FlashMP’s exact preconditioning minimizes iterations, enhancing speedup at higher GPU counts by reducing synchronization overhead, which is especially evident at scale. Table 3 shows that BiCGSTAB requires one additional preconditioning step and one additional SpMV per iteration compared with GMRES, reflecting its bi-orthogonalization approach with dual updates. Thus, BiCGSTAB needs at least twice the residual norm decrease per iteration to compete with GMRES.


Figure 8: Weak scalability for BiCGSTAB (a) and GMRES (b). Solid lines represent performance on the left y-axis, and dashed lines represent parallel efficiency on the right y-axis. a — Weak scalability for BiCGSTAB. b — Weak scalability for GMRES.

4.4 Parallel Scalability↩︎

Parallel efficiency was evaluated by scaling GPUs from 8 to 1000, maintaining a fixed subdomain size of \(32^3\) per GPU. Performance is measured in million degrees of freedom per second (MDoF/s), with parallel efficiency relative to ideal linear scaling from 8 GPUs.

For BiCGSTAB in Figure 8 (a), the NOPRE case achieves 262.96 MDoF/s with 63.4% efficiency at 1000 GPUs. FlashMP with overlaps 1, 2, and 3 reaches 823.12, 1096.73, and 1232.41 MDoF/s, with efficiencies of 77.8%, 81.4%, and 84.1%, respectively. For GMRES in Figure 8 (b), the NOPRE case reaches 221.40 MDoF/s with 52.9% efficiency at 1000 GPUs. FlashMP with overlaps 1, 2, and 3 yields 791.36, 1007.30, and 1167.05 MDoF/s, with efficiencies of 65.4%, 68.3%, and 73.5%, respectively. BiCGSTAB outperforms GMRES in scalability, achieving up to 84.1% efficiency versus GMRES’s 73.5% at 1000 GPUs, driven by its fixed per-iteration communication cost. In large-scale parallel settings, communication overhead is a key factor limiting performance. The orthogonalization in GMRES requires dot products and reductions across all stored orthogonal vectors, demanding more collective communication and global synchronization than BiCGSTAB, which lowers parallel efficiency. The fixed number of operations per BiCGSTAB iteration makes it better suited to large-scale parallelism (see Table 3). The scalability tests demonstrate FlashMP’s effectiveness in improving parallel efficiency: its subdomain exact solves keep iteration counts minimal, which reduces the number of global synchronizations in collective communication, a critical factor limiting parallel efficiency.

5 Conclusions↩︎

This paper introduces FlashMP, a high-performance preconditioner tailored for solving large-scale linear systems in electromagnetic simulations. FlashMP effectively decouples the ill-conditioned double-curl operator by combining domain decomposition and discrete transforms, significantly reducing iteration counts and computational overhead across a range of conditions. Extensive testing on distributed GPU clusters shows that FlashMP decreases iteration counts by up to 16\(\times\) and achieves speedups ranging from 2.5\(\times\) to 4.9\(\times\) over the unpreconditioned (NOPRE) implementations in the state-of-the-art library Hypre. Furthermore, weak scalability tests demonstrate parallel efficiencies of up to 84.1% at 1000 GPUs.

Acknowledgment↩︎

The authors express their gratitude to the anonymous reviewers for their insightful comments and constructive suggestions. This work is supported by the Strategic Priority Research Program of the Chinese Academy of Sciences, Grant No. XDB0500101. Jian Zhang is the corresponding author of this paper.

References↩︎

[1]
R. Qiang and J. Chen. Amg enhanced CN-FDTD method for low frequency electromagnetic applications. In IEEE Antennas and Propagation Society Symposium, volume 2, pages 1503–1506 Vol.2, 2004.
[2]
Z. Peng, H. W. Yang, R. Weng, et al. A research on the CN-ICCG-FDTD algorithm of plasma photonic crystals and the transmission coefficient of electromagnetic wave. Computer Physics Communications, 185(10):2387–2390, 2014.
[3]
H. Sun, S. Liu, and Y. Yang. Three-dimensional forward modeling of transient electromagnetics using the Crank-Nicolson FDTD method. Journal of Geophysics, 64(1):343–354, 2021.
[4]
T. Shubitidze, B. E. Barrowes, E. Chapman, et al. The Crank-Nicolson FDTD method in cylindrical coordinates and its application to underwater UXO detection and classification problems. In 2018 XXIIIrd International Seminar/Workshop on DIPED, pages 1–4. IEEE, 2018.
[5]
H. K. Rouf. Improvement of computational performance of implicit finite difference time domain method. Progress in Electromagnetics Research M, 43:1–8, 2015.
[6]
T. Shubitidze, B. E. Barrowes, E. Chapman, et al. The CN-FDTD method in cylindrical coordinates and its application to underwater UXO detection and classification problems. In DIPED 2018, pages 1–4, 2018.
[7]
H. K. Rouf, F. Costen, S. G. Garcia, et al. On the solution of 3-D frequency dependent Crank-Nicolson FDTD scheme. Journal of Electromagnetic Waves and Applications, 23(16):2163–2175, 2009.
[8]
S. G. Garcia, F. Costen, M. F. Pantoja, L. D. Angulo, et al. Efficient excitation of waveguides in Crank-Nicolson FDTD. Progress In Electromagnetics Research Letters, 17:27–38, 2010.
[9]
K. Xu, Z. Fan, D.-Z. Ding, et al. GPU accelerated unconditionally stable Crank-Nicolson FDTD method for the analysis of three-dimensional microwave circuits. Progress In Electromagnetics Research, 102:381–395, 2010.
[10]
Y. Yang, S. Niu, and R. S. Chen. Application of two-step preconditioning technique to the Crank-Nicolson finite-difference time-domain method for analysis of the 3-D planar circuits. In 2008 Asia-Pacific Microwave Conference, pages 1–4. IEEE, 2008.
[11]
R. Qiang, D. Wu, J. Chen, et al. A CN-FDTD scheme and its application to VLSI substrate modeling. In International Symposium on Electromagnetic Compatibility (IEEE Cat. No. 04CH37559), volume 1, pages 97–101. IEEE, 2004.
[12]
A. Chabory, B. P. de Hon, W. H. A. Schilders, et al. Fast transform based preconditioners for 2D finite-difference frequency-domain - waveguides and periodic structures. Journal of Computational Physics, 227(16):7755–7767, 2008.
[13]
X.K. Wei, W. Shao, X.H. Wang, et al. Domain decomposition CN-FDTD with unsplit-field PML for time-reversed channel analysis. 2018.
[14]
B. F. Smith. Domain decomposition methods for partial differential equations. In Parallel Numerical Algorithms, pages 225–243. Springer, 1997.
[15]
X.-C. Cai and M. Sarkis. A restricted additive Schwarz preconditioner for general sparse linear systems. SIAM Journal on Scientific Computing, 21(2):792–797, 1999.
[16]
R. Wicklin. Difference operators. https://blogs.sas.com/content/iml/2017/07/24/difference-operators-matrices.html, 2017.
[17]
H. Inoue, M. Kamibayashi, K. Kishimoto, et al. Numerical Laplace transformation and inversion using fast Fourier transform. JSME International Journal, 35(3):319–324, 1992.
[18]
W. W. Hager. Updating the inverse of a matrix. SIAM review, 31(2):221–239, 1989.
[19]
Y. Saad. Iterative methods for sparse linear systems. SIAM, 2003.
[20]
W. Gropp, E. Lusk, N. Doss, et al. A high-performance, portable implementation of the MPI message passing interface standard. Parallel Computing, 22(6):789–828, 1996.
[21]
AMD ROCm™ Software. https://www.amd.com/en/products/software/rocm.html, 2025.
[22]
R. D. Falgout and U. M. Yang. hypre: A library of high performance preconditioners. In International Conference on Computational Science, pages 632–641. Springer, 2002.
