Early warning of critical transitions: distinguishing tipping points from Turing destabilizations


Abstract

Current early warning signs for tipping points often fail to distinguish between catastrophic shifts and less dramatic state changes, such as spatial pattern formation. This paper introduces a novel method that addresses this limitation by providing more information about the type of bifurcation being approached from a spatially homogeneous system state. This method relies on estimates of the dispersion relation from noisy spatio-temporal data, which reveal whether the system is approaching a spatially homogeneous (tipping) or spatially heterogeneous (Turing patterning) bifurcation. Using a modified Klausmeier model, we validate this method on synthetic data, exploring its performance under varying conditions including noise properties and distance to bifurcation. We also determine the data requirements for optimal performance. Our results indicate the promise of a new spatial early warning system built on this method to improve predictions of future transitions in many climate subsystems and ecosystems, which is critical for effective conservation and management in a rapidly changing world.

1 Introduction↩︎

Nature is changing by the day; climate change and land-use change are putting many ecosystems and climate subsystems under stress. As a result, some systems may be pushed beyond their tipping point – a critical threshold where even small disturbances can trigger abrupt shifts to alternative states with different functioning. Such tipping points have been identified in many systems [1][6]. Examples include the Amazon rainforest, where tipping might induce transitions to savanna-like ecosystems [7], [8]; ice sheets, where crossing a tipping point implies their eventual disappearance, which might lead to a significant rise in sea level and disrupt coastal ecosystems [9], [10]; and ocean circulation, such as a collapse of the Atlantic meridional overturning circulation (AMOC) [11], [12].

As the presence of such tipping points appears increasingly evident, methods have been developed for detecting and predicting these state transitions. This has led to a variety of so-called early warning signs that detect the approach of a tipping point before it has been crossed [13][15]. Among these are physics-based early warning signs that use system process understanding to find system-specific indicators, such as the minimum of the AMOC-induced freshwater transport at the southern boundary of the Atlantic for tipping of the AMOC [16], or an increase in the sensitivity of net ecosystem productivity to temperature anomalies that precedes Amazonian rainforest tipping [17]. The most prevalent early warning signs, however, are the generic statistical ones, such as those based on critical slowing down as a system approaches a tipping point. Among these are methods that measure increases in variance and autocorrelation to detect the approach of tipping points [14], [18]. Despite their widespread use, these methods can be misinterpreted, as certain transitions occur without any preceding trend of critical slowing down [19].

Besides these issues, it is in general also not clear what kind of transition these early warning signs signal. Often, it is assumed that a bifurcation will induce a significant state transition. However, this need not be the case; for example, a Turing bifurcation might lead to the emergence of spatial patterns with limited effects on system functioning – thereby perhaps even evading tipping altogether [20][22]. Hence, current statistical early warning signs do not distinguish between proper full system change ‘Tipping’ bifurcations and less dramatic transitions, such as Turing bifurcations.

In this paper, we introduce a new early warning system that is capable of making this distinction, by providing more information about the kind of bifurcation that is approached (see Figure 1). The method proposed in this paper evaluates the stability of a spatially homogeneous state via an estimation of the associated dispersion relation. This dispersion relation indicates the stability of the system state against different spatially heterogeneous perturbations, specified using spatial (Fourier) modes \(k\) and associated mode-dependent eigenvalues \(\lambda(k)\). Information on the kind of bifurcation is gained by inferring the critical perturbation that first destabilises the state, i.e. the \(k_c\) such that \(Re\left(\lambda(k_c)\right) = 0\). If \(k_c = 0\), a spatially homogeneous bifurcation occurs (e.g. a saddle-node ‘Tipping’ bifurcation); if \(k_c \neq 0\), a spatially heterogeneous bifurcation occurs (e.g. a pattern forming Turing bifurcation). To obtain these estimates of the dispersion relation, the method fits noisy spatio-temporal data from before a transition to a linear partial differential equation, from which the dispersion relation can be computed analytically.

Here, we investigate the capabilities of this method with synthetic data of a modified Klausmeier model [23], [24], a reaction-diffusion system used to describe the interplay between water and vegetation in drylands. Depending on parameter choices, this model can showcase both full system tipping, organized via a saddle-node bifurcation, and spatial patterning, organized via a Turing bifurcation (see Figure 1). We test how well the dispersion relations can be estimated in various parameter settings, including varying distance to the bifurcation, and for various noise properties. Further, we determine the data needs for this method to work optimally, in terms of requirements on the resolution and time span of the input spatio-temporal data.

The rest of this paper is structured as follows. In Section 2, we first provide an overview of the theory relating to linear stability of homogeneous states of partial differential equations and dispersion relations. Then, we explain the proposed new early warning system in detail. In Section 3, we give the details of the numerical experiments performed. In Section 4, the results of these experiments are given and discussed. Finally, we end with a brief discussion in Section 5.

a
b

Figure 1: Illustration of two bifurcation diagrams with distinct destabilizing bifurcations, which are identified using the method introduced in this paper based on the estimation of dispersion relations. The lines indicate the spatially homogeneous steady states, with solid lines indicating stable and dashed lines unstable states. The left shows a situation in which the orange state destabilizes via a saddle-node ‘Tipping’ bifurcation; the right shows a situation in which it destabilizes via a spatial pattern forming Turing bifurcation. The insets show dispersion relations, relating spatial Fourier eigenmodes, characterized by their wavenumber \(k\), to eigenvalues \(\lambda(k)\), at various distances to the bifurcation. It can be seen that at the bifurcation - and well before it - the dispersion relations are qualitatively different: saddle-node bifurcations are signaled by a peak in the dispersion relation at \(k = 0\), whereas Turing bifurcations have peaks for \(k \neq 0\). The method introduced in this paper makes use of this fact by providing estimates of these dispersion relations, and thus of the most unstable spatial eigenmodes \(k_*\), to determine whether a spatially homogeneous (e.g., left - saddle-node bifurcation) or spatially heterogeneous (e.g., right - Turing bifurcation) destabilization is imminent. Figures are made with the modified Klausmeier model in Eq 5 with parameter values \(m = 0.5\), \(h = 0.1\), \(\delta = 0.5\) (left) or \(\delta = 0.01\) (right) and varying \(p\) along the horizontal axis; the system state is represented by the variable \(v\). a — Destabilization via a ‘Tipping’ bifurcation, b — Destabilization via a Turing bifurcation

2 Theory↩︎

2.1 Stability of uniform steady states in spatial systems↩︎

Central to our approach is knowledge about the stability of homogeneous steady states against homogeneous perturbations (signaling e.g. saddle-node bifurcations) and against heterogeneous perturbations (signaling Turing bifurcations). This has been studied mathematically in a wide variety of reaction-diffusion models [21], [25], [26]. For clarity of presentation in this section we restrict ourselves to the simplest setting of a two-component reaction-diffusion equation, but refer the interested reader to e.g. [26], [27] for extensions to more complicated models.

We consider two-component reaction-diffusion equations with one spatial dimension of the form \[\begin{align} \label{eq:theory95genequation} \begin{aligned} \partial_t u & = \partial_{x}^2 u + f(u,v);\\ \partial_t v & = \delta \partial_x^2 v + g(u,v), \end{aligned} \end{align}\tag{1}\] where \(f\) and \(g\) represent the reaction terms (i.e. local dynamics) of the model. Let \((u_*,v_*)\) denote a homogeneous steady state satisfying \[\begin{align} f(u_*,v_*) &= 0, \\ g(u_*,v_*) &= 0. \end{align}\] We study the stability of this steady state, \((u,v)(x,t) = (u_*,v_*)\), by inspecting the growth rate \(\lambda(k)\) of perturbations with wavenumber \(k\). To do so, we substitute \[\begin{pmatrix} u(x,t) \\ v(x,t) \end{pmatrix} = \begin{pmatrix} u_* \\ v_* \end{pmatrix} + e^{\lambda(k) t} e^{i k x} \begin{pmatrix} \overline{u} \\ \overline{v} \end{pmatrix}\] into 1 and linearise the resulting equation. That yields the linear eigenvalue problem \[\lambda(k) \begin{pmatrix} \overline{u} \\ \overline{v} \end{pmatrix} = \begin{pmatrix} \frac{\partial f}{\partial u}(u_*,v_*) - k^2 & \frac{\partial f}{\partial v}(u_*,v_*) \\ \frac{\partial g}{\partial u}(u_*,v_*) & \frac{\partial g}{\partial v}(u_*,v_*) - \delta k^2 \end{pmatrix} \begin{pmatrix} \overline{u} \\ \overline{v} \end{pmatrix} =: \begin{pmatrix} a - k^2 & b \\ c & d - \delta k^2 \end{pmatrix} \begin{pmatrix} \overline{u} \\ \overline{v} \end{pmatrix} .\label{eq:theory95lineareq}\tag{2}\] Hence, we obtain two growth rates \(\lambda_{1,2}(k)\) as the eigenvalues of the \(k\)-dependent matrix in 2 , i.e., as solutions to the \(k\)-dependent characteristic equation \[\label{eq:Theory95char95eq} 0 = \lambda^2 + \lambda \left( -a - d + (1+\delta)k^2 \right) + (a-k^2)(d-\delta k^2) - bc.\tag{3}\] These so-called dispersion relations \(\lambda_{1,2}(k)\) indicate whether a perturbation with wavenumber \(k\) grows or shrinks, i.e. 
if \(Re\left(\lambda_{1,2}(k)\right) < 0\), then perturbations with wavenumber \(k\) shrink, and hence the steady state \((u_*,v_*)\) is stable against such perturbations. Thus, if \(Re\left( \lambda_{1,2}(k)\right) < 0\) for all wavenumbers \(k\), then the steady state \((u_*,v_*)\) is stable against all perturbations. Similarly to ordinary differential equations, there is a bifurcation when (the real part of) one of the eigenvalues changes sign, i.e. when the dispersion relation indicates growth of perturbations of some wavenumber, i.e. when there is a critical wavenumber \(k_c\) such that \(Re\left( \lambda_1(k_c)\right) = 0\) or \(Re\left( \lambda_2(k_c)\right) = 0\). If \(k_c = 0\) the bifurcation is due to homogeneous perturbations and indicates, e.g., a saddle-node bifurcation; if \(k_c \neq 0\) the bifurcation is due to heterogeneous perturbations and indicates a Turing bifurcation [25].

As long as the steady state \((u_*,v_*)\) is still stable, the dispersion relation can provide insight into the kind of bifurcation that is approached. In this paper we do so by tracking the most dominant perturbation wavenumber \(k_*\), i.e. the wavenumber that corresponds to the slowest decaying regular perturbations. For the system 1 , it can be deduced that either \(k_* = 0\), suggesting approach of a saddle-node bifurcation, or \(k_*^2 = \frac{a\delta + d}{2\delta}-\frac{1+\delta}{2\delta}\lambda(k_*)\), suggesting approach of a Turing bifurcation [26].
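To make the classification concrete, the dispersion relation of a two-component system can be evaluated directly from the characteristic equation 3 . The sketch below (Python; the Jacobian values are illustrative and not taken from the paper) scans a wavenumber grid and reports the dominant mode:

```python
import numpy as np

def lambda_max(k, a, b, c, d, delta):
    """Larger real part of the two roots of the characteristic
    equation (3): lambda^2 - T lambda + Delta = 0, with
    T = a + d - (1 + delta) k^2 and
    Delta = (a - k^2)(d - delta k^2) - b c."""
    T = a + d - (1 + delta) * k**2
    Delta = (a - k**2) * (d - delta * k**2) - b * c
    disc = np.sqrt(complex(T**2 - 4 * Delta))
    return ((T + disc) / 2).real  # (T + disc)/2 has the larger real part

# Illustrative Jacobian values (not from the paper): the state is stable
# against homogeneous perturbations, but Turing-unstable since
# delta*a + d > 0 and the minimal Delta(k) is negative.
a, b, c, d, delta = -1.0, -2.0, 2.0, 0.5, 0.01
ks = np.linspace(0.0, 8.0, 801)
growth = np.array([lambda_max(k, a, b, c, d, delta) for k in ks])
k_star = ks[np.argmax(growth)]  # dominant wavenumber; nonzero here
```

A dominant mode away from \(k = 0\), as found here, is exactly the Turing signature discussed above; for a saddle-node-bound state the same scan peaks at \(k = 0\).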

2.2 Methodology: inferring spatial stability from data↩︎

For the method in this paper we combine linear stability analysis, regression techniques and the study of dispersion relations. An overview is given in the schematic in Figure 2. Below, we explain the steps of the method one by one in detail. The general idea is that we use spatio-temporal data from before a transition to determine the fluctuations around an estimated equilibrium state. These fluctuations are then fitted to a linear partial differential equation, whose dispersion relations are determined. The implementation of this method as used in this paper is available at https://github.com/JustPaul99/Stability_Analysis_RD.

As data input, we assume measurements on \(\alpha\) variables at times \(\{t_1, \ldots, t_{\beta}\}\) and spatial locations \(\{ \underline{x}_1, \ldots, \underline{x}_{\gamma}\}\). We denote the full data set by \(\underline{Y}\), and a measurement on time \(t_i\) and location \(\underline{x}_j\) by \(\underline{Y}(\underline{x}_j,t_i)\).

Figure 2: Schematic for the proposed method. Spatio-temporal data on fluctuations before a transition is fitted to a linear partial differential equation, whose dispersion relation is computed, and then used to track the most unstable spatial eigenmode k_* and associated eigenvalue \lambda_* = \lambda(k_*).

Step 1: Computation of the equilibrium↩︎

We approximate the homogeneous equilibrium states as the spatial average of the data per time step, i.e. the equilibrium at time step \(t_i\) is estimated as \[\begin{align} \underline{Y}^*(t_i) := \text{mean}\left(\underline{Y}(\cdot,t_i)\right). \end{align}\] This should be accurate as long as the solution tracks the equilibrium. Other methods to estimate the equilibrium could be chosen, such as using data from multiple time steps (which might be beneficial, for example, when few spatial points are available).

Step 2: Compute perturbations from equilibrium↩︎

The fluctuations around the estimated equilibrium are calculated per time step, i.e., \[\begin{align} \underline{Z}(\underline{x}_j,t_i) :=\underline{Y}(\underline{x}_j,t_i)-\underline{Y}^*(t_i). \end{align}\]
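Steps 1 and 2 amount to a spatial mean and a subtraction. A minimal sketch, assuming the data is stored as an array \(\underline{Y}\) of shape (variables × space × time):

```python
import numpy as np

def fluctuations(Y):
    """Steps 1-2: estimate the homogeneous equilibrium as the spatial
    mean per time step (Step 1) and subtract it from the data to obtain
    the fluctuations Z (Step 2).  Y has shape (n_vars, n_x, n_t)."""
    Y_star = Y.mean(axis=1, keepdims=True)  # equilibrium estimate Y*(t_i)
    Z = Y - Y_star
    return Z, Y_star
```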

Step 3: Divide into time windows↩︎

To analyse the progression of (the stability of) the system, we split the data into time windows, within which we assume that the dynamics stay the same. These time windows should be chosen not too large (so that the dynamics stay similar within a window), but also not too small (so that enough data is available). Thus, if the dynamics of a system change very slowly, a longer time window could be effective, and vice versa. For notational parsimony, we omit explicit notation for data in a certain time window.

Step 4: Fit to linear reaction-diffusion equation↩︎

In a time window, the data \(\underline{Z}\) on the fluctuations is expected to behave according to a linear partial differential equation of the form \[\underline{z}_t = A \underline{z} + D \underline{z}_{\underline{x}\underline{x}}, \label{eq:linearPDE}\tag{4}\] where \(D := diag(\delta_1,\ldots,\delta_\alpha)\) and \[A := \begin{Bmatrix} a_{1,1} & \cdots & a_{1,\alpha}\\ \vdots&\ddots&\vdots \\ a_{\alpha,1} & \cdots & a_{\alpha,\alpha} \end{Bmatrix}.\] The goal now is to use the data \(\underline{Z}\) to find the best estimates for all these parameters. To do so, we rewrite 4 as \[\underline{z}_t = \Theta \underline{F} := \begin{Bmatrix} A & D \end{Bmatrix} \begin{Bmatrix} \underline{z} \\ \underline{z}_{\underline{x}\underline{x}} \end{Bmatrix}.\] Numerical approximations of the time and spatial derivatives are computed from the data \(\underline{Z}\), for example through finite difference schemes. We denote the thus obtained (estimated) time derivative by \(\underline{Z}_t\) and the (estimated) second spatial derivative by \(\underline{Z}_{\underline{x}\underline{x}}\). Parameter estimation for \(\Theta\) can then be done, for instance, using least-squares fitting, i.e., the optimal \(\Theta_*\) is found as \[\Theta_* := \mathop{\mathrm{argmin}}_{\Theta} \sum_{i,j} \left\| \underline{Z}_t(\underline{x}_j,t_i) - \Theta \begin{Bmatrix} \underline{Z}(\underline{x}_j,t_i) \\ \underline{Z}_{\underline{x}\underline{x}}(\underline{x}_j,t_i)\end{Bmatrix} \right\|^2.\]
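As an illustration of step 4, the sketch below implements the finite-difference derivatives and the least-squares fit with plain NumPy; it is our own minimal version, not the reference implementation from the repository, and it fits each row of \(\Theta\) separately so that \(D\) stays diagonal:

```python
import numpy as np

def fit_linear_rd(Z, dx, dt):
    """Step 4: least-squares fit of fluctuation data to
    z_t = A z + D z_xx (Eq. 4).  Z has shape (n_vars, n_x, n_t);
    returns (A, D) with D diagonal.  Central differences are used for
    the derivatives (one possible choice); boundary points, where the
    stencils are invalid, are dropped."""
    n_vars = Z.shape[0]
    Zt = np.gradient(Z, dt, axis=2)                       # time derivative
    Zxx = (np.roll(Z, -1, axis=1) - 2 * Z + np.roll(Z, 1, axis=1)) / dx**2
    Zt, Zc, Zxx = Zt[:, 1:-1, 1:-1], Z[:, 1:-1, 1:-1], Zxx[:, 1:-1, 1:-1]
    A = np.empty((n_vars, n_vars))
    d = np.empty(n_vars)
    for m in range(n_vars):
        # regress z_t^(m) on all variables but only on z_xx^(m),
        # which enforces the diagonal structure of D
        F = np.vstack([Zc.reshape(n_vars, -1), Zxx[m].ravel()[None, :]])
        theta, *_ = np.linalg.lstsq(F.T, Zt[m].ravel(), rcond=None)
        A[m], d[m] = theta[:n_vars], theta[n_vars]
    return A, np.diag(d)
```

Fitting row by row is one simple way to respect the block structure \(\Theta = [A \; D]\); a single joint regression over all rows would otherwise also couple off-diagonal diffusion terms.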

Step 5: Compute dispersion relations↩︎

The linear model 4 with parameters \(\Theta_*\) is analysed using the theory in section 2.1. In particular, using the Fourier mode analysis, i.e. setting \(\underline{z}(\underline{x},t) = e^{\lambda(|k|)t}e^{i \underline{k} \cdot \underline{x}} \underline{w}\), dispersion relations \(\lambda_m(|\underline{k}|)\) can be computed as the eigenvalues of the matrices \[A - |\underline{k}|^2 D = \begin{Bmatrix} a_{1,1} - \delta_1 |\underline{k}|^2 & a_{1,2} & \cdots & a_{1,\alpha}\\ a_{2,1} & a_{2,2} - \delta_2 |\underline{k}|^2 & \ddots & \vdots \\ \vdots&\ddots&\ddots & a_{\alpha-1,\alpha} \\ a_{\alpha,1} & \cdots & a_{\alpha,\alpha-1} & a_{\alpha,\alpha} - \delta_\alpha |\underline{k}|^2 \end{Bmatrix}.\]

Step 6: Tracking most unstable mode↩︎

From the dispersion relations \(\{ \lambda_m(|\underline{k}|) \}\), we determine the most unstable wavenumber \(k_*\) and its associated (real part of) eigenvalue \(\lambda_*\) as \[\begin{align} k_* &:= \mathop{\mathrm{argmax}}_{|\underline{k}|}\;\max_m\;\operatorname{Re} \left\{ \lambda_m(|\underline{k}|) \right\};\\ \lambda_* & := \max_{|\underline{k}|}\;\max_m\;\operatorname{Re} \left\{ \lambda_m(|\underline{k}|)\right\}\;= \max_m\;\operatorname{Re}\left\{\lambda_m(k_*)\right\}. \end{align}\] The value of \(\lambda_*\) indicates closeness to a bifurcation, as it approaches zero at a bifurcation. The value of \(k_*\) is an indicator for the type of bifurcation: if \(k_* \approx 0\), it signals a homogeneous bifurcation typically associated with tipping (e.g. a fold bifurcation); if \(k_* > 0\), it signals a heterogeneous Turing bifurcation.
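Steps 5 and 6 follow directly from the fitted matrices by scanning a wavenumber grid. A minimal sketch (with illustrative fitted values, not taken from the paper):

```python
import numpy as np

def track_dominant_mode(A, D, ks):
    """Steps 5-6: dispersion relations as the eigenvalues of
    A - |k|^2 D, and the most unstable pair (k_*, lambda_*)."""
    growth = np.array([np.linalg.eigvals(A - k**2 * D).real.max() for k in ks])
    i = int(np.argmax(growth))
    return ks[i], growth[i]

# Illustrative fitted values (not from the paper): a stable state whose
# slowest-decaying perturbation is homogeneous, so k_* = 0.
A_fit = np.array([[-0.2, 0.1], [0.0, -1.0]])
D_fit = np.diag([1.0, 0.5])
ks = np.linspace(0.0, 5.0, 501)
k_star, lam_star = track_dominant_mode(A_fit, D_fit, ks)
```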

3 Setup of numerical experiments↩︎

In this section we introduce the numerical experiments used to test the method from section 2.2. For these, we generated synthetic test data using an extended Klausmeier model for dryland vegetation [23], [24], which we introduce in section 3.1. This model suits our experiments well, as it can showcase destabilizations via saddle-node bifurcations and via Turing bifurcations, depending on parameter combinations. Subsequently, in subsection 3.2, we detail the specific numerical experiments that we performed to test the validity and limitations of the method.

3.1 Test model: an extended Klausmeier model↩︎

We use the (non-dimensionalised) modified extended Klausmeier model introduced in [23]. It models dryland ecosystems through the interplay between vegetation (\(v\)) and water (\(u\)). Here, we use this model on a one-dimensional spatial domain. The model is given by \[\label{eq:Extended95klausmeier952} \begin{align} \mathrm{d}u&=\left(p-u-uv^2+u_{xx}\right)\mathrm{d}t+A\mathrm{d}F^{(1)}_t,\\ \mathrm{d}v&=\left(uv^2(1-hv)-mv+\delta v_{xx}\right)\mathrm{d}t+A\mathrm{d}F^{(2)}_t. \end{align}\tag{5}\] Here, the reaction terms describe the change in water due to rainfall (\(+p\)), evaporation of water (\(-u\)), and uptake by plants (\(-uv^2\)). The change of vegetation is described by mortality (\(-mv\)) and growth with a carrying capacity (\(+uv^2(1-hv)\)). Movement of water is modeled as diffusion (\(u_{xx}\)), and similarly for the dispersal of vegetation (\(\delta v_{xx}\)). Here, \(\delta\) represents the ratio of the diffusion constants of the two processes (typically \(\delta\) is small, as vegetation dispersal is slower than water movement). Finally, additive spatio-temporal noise \(F=(F^{(1)},F^{(2)})^T\) is added, with noise strength \(A\). We take \(F^{(1)}\) and \(F^{(2)}\) as uncorrelated Gaussian processes that are white in time, and either coloured or white in space (depending on the numerical experiment). The noise is coloured by applying a normalized squared exponential filter \(\exp{(-x^2/l_c^2)}\), with correlation length \(l_c\) and position \(x\), to white noise [28].

For an analysis of the stability of the homogeneous steady states of the deterministic part of 5 , we refer the reader to Appendix 6 (and [23]). In short, model 5 admits a homogeneous vegetated state \((u_*,v_*)\) for \(p > 2m \left( h + \sqrt{1+h^2}\right)\), where \[\begin{align} \begin{aligned} u_* & = m \left( \frac{p}{m} - \frac{v_*}{1-hv_*} \right), \\ v_* & = \frac{ \frac{p}{m} + \sqrt{ \left(\frac{p}{m}\right)^2 - 4 \left(1 + \frac{p}{m}h\right)}}{2 \left(1+\frac{p}{m}h\right)}. \end{aligned} \end{align}\] At \(p = p_\textrm{SN} := 2m \left(h + \sqrt{1+h^2}\right)\), there is a saddle-node bifurcation (i.e. a tipping point). For the Turing bifurcation point \(p = p_\textrm{T}\), there is no simple closed-form expression, but it can be computed numerically by combining conditions 6 and 7 . The homogeneous vegetated state is stable for \(p > \max\{p_\textrm{SN},p_\textrm{T}\}\) if \(p_\textrm{T}\) exists, and for \(p>p_\textrm{SN}\) if \(p_\textrm{T}\) does not exist. This means that, as \(p\) decreases, a destabilization occurs either via the aforementioned saddle-node bifurcation or via the Turing bifurcation.
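As a quick numerical check of these expressions (our own sketch, not part of the paper's pipeline), one can verify that \((u_*, v_*)\) indeed annihilates the reaction terms of 5 :

```python
import numpy as np

def p_SN(m, h):
    """Saddle-node threshold p_SN = 2 m (h + sqrt(1 + h^2))."""
    return 2 * m * (h + np.sqrt(1 + h**2))

def vegetated_state(p, m, h):
    """Homogeneous vegetated equilibrium (u_*, v_*) of the deterministic
    part of model (5); exists for p > p_SN(m, h)."""
    q = p / m
    v = (q + np.sqrt(q**2 - 4 * (1 + q * h))) / (2 * (1 + q * h))
    u = m * (q - v / (1 - h * v))
    return u, v

u_eq, v_eq = vegetated_state(p=6.0, m=0.5, h=0.1)
```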

3.2 Numerical experiments↩︎

In this study, we take \(m = 0.5\) and \(h = 0.1\) fixed. To study multiple bifurcation types, we typically take either \(\delta = 0.5\) or \(\delta=0.01\), and let \(p\) decrease (i.e. use it as the bifurcation parameter). When \(\delta=0.5\), destabilization occurs via a saddle-node bifurcation when \(p\) reaches the critical value \(p_{SN}=1.10499\); when \(\delta=0.01\), destabilization occurs via a Turing bifurcation at \(p=p_T=1.63398\), which suggests tipping evasion via pattern formation. Bifurcation diagrams for both situations are given in Figure  1. From here on, we refer to parameter settings with \(\delta = 0.5\) as the saddle-node case and to parameter settings with \(\delta = 0.01\) as the Turing case, to remind the reader of the setting studied.

Numerical simulations to generate data were performed as follows. The model 5 was spatially discretized using a central difference scheme, and the resulting stochastic differential equation was numerically integrated using the Euler–Maruyama method. We considered two forms of noise: white noise and the aforementioned spatially correlated noise. For the discretizations, we chose a time step of \(\text{d}t = 10^{-4}\) and specified the spatial grid size for each of the upcoming experiments in Table 1. We used a finite domain of size \(L = 40\) with no-flux boundary conditions.
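The data-generation step can be sketched as follows; this is a minimal illustration of the discretisation described above, with our own choices for the noise scaling (\(\sqrt{\mathrm{d}t/\mathrm{d}x}\) per grid point) and the filter normalisation, which the reference implementation may handle differently:

```python
import numpy as np

def laplacian_noflux(w, dx):
    # central second difference with no-flux (reflecting) boundaries
    wp = np.pad(w, 1, mode="edge")
    return (wp[2:] - 2 * w + wp[:-2]) / dx**2

def simulate(p=6.0, delta=0.5, A=0.1, T=1.0, dt=1e-4, dx=0.1, L=40.0,
             lc=None, m=0.5, h=0.1, seed=0):
    """Euler-Maruyama integration of model (5) on [0, L] with no-flux
    boundaries, started on the homogeneous vegetated state.  If lc is
    given, the noise is coloured in space with the squared exponential
    kernel exp(-x^2 / lc^2); otherwise it is white in space."""
    rng = np.random.default_rng(seed)
    x = np.arange(0.0, L, dx)
    n = x.size
    q = p / m                                   # start on (u_*, v_*)
    v0 = (q + np.sqrt(q**2 - 4 * (1 + q * h))) / (2 * (1 + q * h))
    u = np.full(n, m * (q - v0 / (1 - h * v0)))
    v = np.full(n, v0)
    if lc is not None:
        r = (np.arange(n) - n // 2) * dx
        kern = np.exp(-r**2 / lc**2)
        kern /= np.linalg.norm(kern)            # one plausible normalisation
    n_steps = int(round(T / dt))
    out = np.empty((2, n, n_steps))
    for i in range(n_steps):
        # space-time white noise increments (assumed scaling sqrt(dt/dx))
        xi = rng.standard_normal((2, n)) * np.sqrt(dt / dx)
        if lc is not None:                      # colour the noise in space
            xi = np.array([np.convolve(z, kern, mode="same") for z in xi])
        du = dt * (p - u - u * v**2 + laplacian_noflux(u, dx)) + A * xi[0]
        dv = dt * (u * v**2 * (1 - h * v) - m * v
                   + delta * laplacian_noflux(v, dx)) + A * xi[1]
        u, v = u + du, v + dv
        out[0, :, i], out[1, :, i] = u, v
    return x, out
```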

For this paper, we have designed several numerical experiments to test the applicability and limitations of the method introduced in Section 2.2. Specific numerical settings are summarized in Table 1. First, in experiment 1, we investigate how robust the method is against different types of noise, by varying the noise strength and the spatial correlation length \(l_c\) of the noise. Second, in experiments 2 and 3, we investigate the data requirements of the method by varying the time length of the data, and by varying both the temporal and spatial sampling, which indicates the kind of resolution required for successful employment of the method. Third, in experiment 4, we investigate the robustness of the method in different parameter settings that vary in closeness to the actual bifurcation (with lower \(p\) being closer to the bifurcation), and by also considering the case \(\delta = 0.1\), in which the Turing and saddle-node bifurcations lie close together (see Figure 13 for a bifurcation diagram of this situation). Finally, in experiment 5, we investigate a case in which \(p\) varies with time, to see how the method deals with non-autonomous forcing. For each experimental setting, we construct an ensemble of 100 datasets from model 5 , each with a distinct noise realization. The same set of 100 white noise seeds is used across all settings, which ensures that the realizations are comparable. We applied the method to each dataset separately, and we report on the statistics of the outcomes.

Table 1: Summary of numerical settings varied between the numerical experiments. The first row corresponds to baseline values, and subsequent rows to the values for the various numerical experiments. Changes from baseline settings are highlighted in blue (\(x\)-axes in figures in the results section below) and red (\(y\)-axes in figures in the results section below). Columns denote different numerical settings: \(p\) and \(\delta\) correspond to parameter values for the model 5 ; noise strength \(A\) and correlation length \(l_c\) refer to the noise properties for the model 5 ; observation time indicates the length of the time series that went into the method; sample size refers to the resolution of the sampling, in either the temporal or spatial component of the data, with 100% indicating all simulated synthetic data was used; \(dx\) refers to the chosen spatial discretization for the numerical simulation (which was varied in experiment 3 only, to better show the effects of spatial sampling). Other settings were kept fixed; specifically, model parameters \(m=0.5\) and \(h=0.1\), numerical time discretization \(dt=10^{-4}\), and spatial domain size \(L=40\).
| Experiments | \(p\) | \(\delta\) | Noise strength \(A\) | Correlation length \(l_c\) | Observation time | Sample size time | Sample size space | \(dx\) |
|---|---|---|---|---|---|---|---|---|
| Baseline | 6 | 0.5 | 1 | 0.1 | 1 | 100% | 100% | 0.1 |
| Experiment 1: Noise | 6 | 0.5 | 0.1, 1, 10 | White, 0.1, 0.2 | 1 | 100% | 100% | 0.1 |
| Experiment 2: Temporal sampling | 6 | 0.5 | 1 | 0.1 | 1, 5 | 100%, 50%, 25% | 100% | 0.1 |
| Experiment 3: Spatial sampling | 6 | 0.5 | 1 | 0.1 | 1 | 100% | 100%, 50%, 25% | varied |
| Experiment 4: Parameters | 6, 20 | 0.01, 0.1, 0.5 | 1 | 0.1 | 1 | 100% | 100% | 0.1 |
| Experiment 5: Decreasing \(p\) | decreasing, \(c=2\) | 0.1, 0.5 | 1 | 0.1 | 1 | 100% | 100% | 0.1 |

4 Results↩︎

In this section, we present the results of the aforementioned numerical experiments. We present these results in the form of dispersion relations. Specifically, for each experiment, we show the mean dispersion relation computed from the ensemble, the corresponding \(5\%\) and \(95\%\) percentiles, and the true dispersion relation (based on the parameters of the model). In addition, the densities of the dominant modes \(k_*\) and the dominant eigenvalues \(\lambda_*\) are estimated using Matlab’s ksdensity function and plotted on their respective axes. The critical pairs \((k_*,\lambda_*)\) are shown in a scatterplot, together with an orange ellipse \(\sigma(k_*,\lambda_*)\) obtained from their covariance, centered at the mean and aligned with the principal axes, with radii given by the corresponding standard deviations. In these figures, we also include text boxes displaying the mean and standard deviation of the differences between the estimated and true dominant modes and eigenvalues. Specifically, we report statistics on \(\Delta k_* := k_* - k_*^{\text{true}}\) and \(\Delta \lambda_* := \lambda_* - \lambda_*^{\text{true}}\), where \(k_*, \lambda_*\) refer to estimated modes and eigenvalues, and \(k_*^{\text{true}}\), \(\lambda_*^{\text{true}}\) to the true dominant modes and eigenvalues. To denote these, we use the short-hand notation mean(standard deviation), in which the standard deviation is given in units of the last digits of the mean; for example, \(-0.06(36)\) means \(-0.06 \pm 0.36\) and \(52(47)\) means \(52 \pm 47\) [29]. Finally, for completeness, in Appendix 7, we report on the statistics of the parameters estimated in step 4 of the method for all of the experiments.

Experiment 1: Effect of noise properties↩︎

Figure 3: Estimated dispersion relations for different noise levels and correlation lengths, with \delta = 0.01 (Turing case). The true dispersion relation is shown in black, the ensemble average of estimated dispersion relations in blue, with its 5th and 95th percentiles in red and green dotted lines. The blue dots show the estimated (k_*, \lambda_*) values from each of the 100 data sets. The pink areas indicate the scaled marginal densities of these pairs. An orange covariance ellipse, centered at the mean, represents the one-standard-deviation, \sigma(k_*,\lambda_*), spread along the principal directions. The true dominant pair (k_*^{\text{true}}, \lambda_*^{\text{true}}) is shown with a star. Blue shaded insets report on statistics of the error of the method in determining this dominant pair (see main text). The data has been generated using the parameter settings defined in Experiment 1 from Table 1.
Figure 4: Estimated dispersion relations for different noise levels and correlation lengths, with \delta=0.5 (saddle-node case). Data is generated using Experiment 1 settings in Table 1. See the caption of Figure 3 for the details of the depicted lines, areas, circles and insets.

Figures 3 and 4 provide an overview of how the method responds to varying noise levels and correlation lengths. For low noise strengths (\(0.1\) and \(1\)) and short correlation lengths (White noise and \(0.1\)), the dominant spatial eigenmodes identified by the method show little variation, and the estimated dispersion relations are in line with the true one. However, the method fails under higher noise strength and larger correlation lengths. For example, in Figure 3 the method fails to recover the correct dispersion relation at a noise strength of \(10\). As shown in Figure 11, high noise levels drive the vegetation below the unstable homogeneous vegetation state. We suspect this indicates that the system is locally being pulled toward alternative attractors. In those cases the method cannot capture all the relevant (nonlinear) dynamics.

Moreover, we find that the spread in the estimated dominant spatial eigenmode grows as the noise’s correlation length increases. We suspect that the higher spatial correlation of the noise makes the method unreliable for some noise realizations. For instance, Figure 10 shows all 100 estimated dispersion relations for the Turing case (\(\delta = 0.01\)) at noise level \(0.1\) and correlation length \(l_c = 0.2\), where some realizations exhibit peaks at different wavenumbers.

Finally, we observe that the saddle-node case (\(\delta = 0.5\)) exhibits greater robustness to varying noise properties than the Turing case (\(\delta = 0.01\)). We suspect this is because retrieval of the dominant homogeneous spatial eigenmode is less sensitive to the spatial structure of the noise. The fitted parameters of the linear model in step 4 of the method do, however, become less accurate for higher noise strengths (see Table [Table:Noise95level95correlation95Tipping]).

Experiment 2 & 3: Data requirements↩︎

Figure 5: Estimated dispersion relations for different observation times and temporal sampling rates for \delta = 0.01 (Turing case). Data is generated using Experiment 2 settings in Table 1. See the caption of Figure 3 for the details of the depicted lines, areas, circles and insets.
Figure 6: Estimated dispersion relations for different observation times and temporal sampling rates for \delta = 0.5 (saddle-node case). Data is generated using Experiment 2 settings in Table 1. See the caption of Figure 3 for the details of the depicted lines, areas, circles and insets.

Figures 5 and 6 show the results of experiment 2, in which we vary the total observation time and the temporal sampling rate for the Turing and saddle-node cases, respectively. They show that increasing the total observation time, and hence using more data, improves the estimates: the spread in dominant spatial eigenmodes \(k_*\) and eigenvalues \(\lambda_*\) decreases with longer observation times, as does the spread in estimated dispersion relations. With respect to temporal sampling, visual inspection shows that the estimated dispersion relations do not change much when fewer time steps are used. However, the errors in \(k_*\) and \(\lambda_*\) do increase somewhat (as do those in the parameters fitted in step 4; see Table [Table:Sampling95Turing]). Overall, the total observation time, rather than the sheer amount of data, leads to the largest improvements.

Figure 7: Estimated dispersion relations for different spatial sampling rates for \delta = 0.01 (Turing case; top row) and \delta = 0.5 (saddle-node case; bottom row). Data is generated using Experiment 3 settings in Table 1. See the caption of Figure 3 for the details of the depicted lines, areas, circles and insets. Note that in the bottom row different scales have been used compared to Figure 4 and 6.

Figure 7 shows the results of Experiment 3, in which the spatial sampling rate is varied for both the Turing and the saddle-node case. Lower spatial sampling rates worsen the estimates – in particular the estimate of the dominant spatial eigenmode \(k_*\). This stems from the fact that coarser spatial sampling reduces the accuracy of the estimated diffusion process (see Table [Table:spatial_sampling_corr_02]). During testing we observed that the correlation length of the noise influences how much spatial coarsening is acceptable: the longer the noise’s correlation length, the coarser the spatial sampling can be while still yielding meaningful estimates. For example, Figure 12 and Table [Table:spatial_sampling_white] illustrate that, under spatially white noise, spatial subsampling is effectively not possible.
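To illustrate why the noise's correlation length matters for spatial sampling, the sketch below generates spatially correlated noise by convolving white noise with a Gaussian kernel of width \(l_c\) (one common construction; the paper's exact noise model may differ) and then subsamples it by a factor of four, as in Experiment 3. All names and values are illustrative.

```python
import numpy as np

def correlated_noise(n_x, dx, l_c, rng):
    """White noise convolved with a Gaussian kernel of width l_c.

    Illustrative construction (an assumption, not the paper's model):
    for l_c = 0 this reduces to spatially uncorrelated white noise.
    """
    white = rng.standard_normal(n_x)
    if l_c == 0:
        return white
    x = (np.arange(n_x) - n_x // 2) * dx
    kernel = np.exp(-x**2 / (2 * l_c**2))
    kernel /= np.sqrt(np.sum(kernel**2))        # keep unit variance
    # circular convolution via FFT (periodic domain)
    return np.real(np.fft.ifft(np.fft.fft(white) * np.fft.fft(np.fft.ifftshift(kernel))))

rng = np.random.default_rng(0)
noise = correlated_noise(512, 0.1, l_c=0.2, rng=rng)
coarse = noise[::4]   # spatial subsampling by a factor of 4
```

With \(l_c = 0.2\) and grid spacing \(0.1\), neighbouring points remain strongly correlated, so a factor-four subsampling still resolves the noise structure; with \(l_c = 0\) every grid point is independent and any subsampling discards information, in line with the observation above.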

Experiment 4: Sensitivity to parameters↩︎

Figure 8: Estimated dispersion relations for different parameters \delta and p. Data is generated using Experiment 4 settings in Table 1. See the caption of Figure 3 for the details of the depicted lines, areas, circles and insets. Note that the scales in this figure are different compared to the previous figures.

Figure 8 shows the results of Experiment 4, in which the model parameters \(p\) and \(\delta\) were varied. Lower values of \(p\) bring the system closer to a bifurcation, with critical values \(p_c\) given by: \(p_c=1.63398\) for \(\delta=0.01\) (Turing), \(p_c=1.11445\) for \(\delta=0.1\) (Turing), and \(p_c=1.10499\) for \(\delta=0.5\) (saddle-node). The figure shows that the method correctly finds the dispersion relations and dominant spatial eigenmodes \(k_*\) and eigenvalues \(\lambda_*\) in many cases. In particular, the closer the system is to a bifurcation (i.e. lower \(p\)), the better the predicted dominant spatial eigenmode \(k_*\) corresponds to the critical spatial eigenmode \(k_c\) at the true bifurcation.

However, the method seems to have difficulties in the specific situation near a shift between a dominant homogeneous spatial eigenmode (\(k_*=0\)) and a dominant heterogeneous spatial eigenmode (\(k_*\neq 0\)), as seen for parameters \(p=20,\delta=0.01\) and \(p=2,\delta=0.1\). In these cases, the dispersion relation is nearly horizontal, which makes the estimate of the dominant eigenmode \(k_*\) more sensitive to noise, resulting in higher errors.

Further, in some panels an eigenvalue with positive real part is erroneously estimated. In these cases, increasing the observation time would improve the estimates (per Experiment 2) and remove this issue. Even so, the current results would still correctly indicate the dominant spatial eigenmode \(k_*\), and thus the type of bifurcation (except in the aforementioned situation where saddle-node and Turing bifurcations occur almost simultaneously).

Experiment 5: Time-dependent forcing↩︎

Figure 9: Estimated dispersion relations for different parameters \delta and time-varying parameter p(t) = 20 - 2t, with measurements between p=20 and p=18 (top row), between p = 8 and p = 6 (middle row) and between p = 4 and p = 2 (bottom row). Data is generated using Experiment 5 settings in Table 1. The black lines indicate the true initial dispersion relations (i.e. at the highest p value of the interval), and the dark grey lines the true final dispersion relations (i.e. at the lowest p value of the interval). In contrast to the previous figures, the blue insets denote the values of k_* and \lambda_*. See the caption of Figure 3 for the details of the other depicted lines, areas, circles and insets.

In the final experiment, we have introduced a time-varying parameter \(p\), which decreases linearly from \(p=20\) to \(p=2\) as \(p(t)=20-2t\). Figure 9 shows the results when the methodology is applied to time windows between \(p = 20\) and \(p = 18\) (top row), between \(p = 8\) and \(p = 6\) (middle row), and between \(p =4\) and \(p=2\) (bottom row). The estimated dispersion relations are similar to those in Figure 8 for constant parameters, indicating that the method is capable of handling time-varying parameters.

In general, we find that the estimated dispersion relation lies roughly between the dispersion relations for the initial and final parameter values of the given parameter window. Similarly to the constant parameter case, the dominant spatial eigenmode is estimated well in most situations, except when the dispersion relation is nearly horizontal (e.g., \(\delta = 0.01\), \(p \in [20,18]\); and \(\delta = 0.1\), \(p \in [4,2]\)). For \(\delta = 0.01\), \(p \in [4,2]\), the dispersion relation changes substantially between the start (\(p = 4\)) and end (\(p = 2\)), which is not fully captured by the method. However, the method correctly distinguishes between a dominant spatial eigenmode with \(k_* = 0\) and \(k_* \neq 0\). As in the constant parameter case, the closer the system is to crossing a bifurcation (i.e., lower \(p\)), the more accurately the estimated \(k_*\) indicates whether a Turing or saddle-node bifurcation is approached — except when the bifurcations are very close together (e.g., \(\delta = 0.1\)).

5 Discussion↩︎

In this paper, we have introduced a new early warning system designed to distinguish between imminent spatially homogeneous ‘Tipping’ destabilizations and imminent spatially heterogeneous ‘Turing’ destabilizations. To this end, spatio-temporal data of the fluctuations before the crossing of a bifurcation are fitted to a linear reaction-diffusion system. Its associated dispersion relation, which relates spatial Fourier modes \(k\) to eigenvalues \(\lambda(k)\), then yields the dominant spatial mode \(k_*\), which is used to distinguish between tipping \((k_c=0)\) and Turing \((k_c \neq 0)\) destabilizations.
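To make this classification step concrete, the following sketch computes the dispersion relation \(\lambda(k)\) of a fitted two-component linearization as the largest real part of the eigenvalues of \(J(k)\), and classifies the approaching destabilization from the position of its maximum. The coefficients are hypothetical, chosen only so that a Turing-type maximum at \(k_* \neq 0\) appears; they are not fitted values from the paper.

```python
import numpy as np

def dispersion_relation(a, b, c, d, delta, ks):
    """Largest real part of the eigenvalues of
    J(k) = [[a - k^2, b], [c, d - delta*k^2]] for each wavenumber k."""
    lams = []
    for k in ks:
        J = np.array([[a - k**2, b], [c, d - delta * k**2]])
        lams.append(np.linalg.eigvals(J).real.max())
    return np.array(lams)

# hypothetical fitted coefficients (not values from the paper)
ks = np.linspace(0.0, 2.0, 201)
lam = dispersion_relation(a=-2.0, b=-1.5, c=1.0, d=0.4, delta=0.1, ks=ks)
k_star = ks[np.argmax(lam)]          # dominant spatial eigenmode
kind = "Turing" if k_star > 0 else "tipping"
```

For these coefficients the maximum sits at \(k_* \approx 1.4\text{–}1.5\) with \(\lambda(k_*) < 0\): the state is still stable but would destabilize heterogeneously, i.e. a Turing bifurcation is the one being approached.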

Using numerical experiments on synthetic data, we have shown the effectiveness of this methodology in estimating dispersion relations and distinguishing between Tipping and Turing bifurcations. In particular, the results indicate that the methodology can handle many different noise dynamics as well as time-varying forcing. When used on appropriate data, the method seems to fail in only two cases: first, when the noise is too dominant (such that, e.g., noise-induced transitions occur); second, when the dispersion relation is very flat (e.g. due to switching between Tipping and Turing destabilisation).

These numerical experiments also highlighted the data requirements for effective use of this methodology. Foremost, the method performs better with longer observation times, whereas the temporal resolution of the data is less important. This indicates that longer, but potentially less frequent, measurements should be used to improve the estimates – especially for the eigenvalues. The spatial resolution of the data, in contrast, is very important, though the specific resolution needed depends on the spatial correlation of the noise. Therefore, when using this method, measurements should have a spatial resolution at least as fine as the expected spatial correlation in the system. So, for systems where disturbances affect large areas – such as forest fires – a coarser spatial resolution might still work, whereas for systems with primarily small-scale disturbances it will not.

Within this paper, we focused on synthetic data from a model that features either a saddle-node ‘Tipping’ bifurcation or a Turing bifurcation. However, the method should also be able to detect destabilizations of spatially homogeneous states organized by other bifurcations, such as pitchfork or transcritical bifurcations. Furthermore, Hopf or Turing-Hopf bifurcations can be detected by explicitly taking the imaginary part of the eigenvalues into account. Extensions to other systems, with potentially more complicated dispersion relations, should also be possible, e.g. by incorporating higher-order spatial derivatives or more components in step 4 of the method. The method might also be useful for spatially heterogeneous states, although the stability of such states is typically not fully described by dispersion relations and additional analysis would be needed [30].

The method developed here can make statements about the linear stability of a system state and how it might destabilize. However, it cannot make statements about the new state after the crossing of a bifurcation. For example, while it has been argued that a Turing bifurcation can lead to small-amplitude spatial patterns and evasion of tipping [20], spatially heterogeneous destabilisations can also lead to large-amplitude patterns or even expedite tipping [31], [32]. Further analysis that incorporates the nonlinear feedbacks of a system is needed to make such statements. For example, to differentiate between supercritical Turing bifurcations (small-amplitude patterns emerge) and subcritical Turing bifurcations (large-amplitude patterns emerge), fits to amplitude equations, which incorporate the nonlinear effects, might be fruitful [26].

Finally, at the heart of the early warning system is the fit to a linear model in step 4. In this study, we have used linear regression and numerical approximations of derivatives, but more refined approaches could further improve the results. Specifically, for stochastic spatial systems, progress has been made in retrieving models directly from data [33], [34]. Additional improvements could be achieved using alternative regression methods [35], [36], bootstrapping [37], weak formulations of the partial differential equations [37], [38], or by explicitly accounting for the time-dependence of the forcing [37], [39].
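To sketch what such a regression looks like in its simplest form, the snippet below generates synthetic data from the linear model itself (explicit Euler on a periodic grid) and recovers the coefficients of the \(\bar u\)-equation by ordinary least squares on finite-difference derivatives. It is a minimal stand-in for step 4, not the paper's implementation; all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n_x, dx, dt, n_t = 128, 0.5, 0.05, 200
a, b, c, d, delta = -2.0, -1.5, 1.0, 0.4, 0.1   # "true" illustrative coefficients

def lap(z):
    # second-order finite-difference Laplacian, periodic in the last axis
    return (np.roll(z, -1, axis=-1) - 2 * z + np.roll(z, 1, axis=-1)) / dx**2

# simulate the linear model with explicit Euler from a random perturbation
u, v = rng.standard_normal(n_x), rng.standard_normal(n_x)
U, V = [u.copy()], [v.copy()]
for _ in range(n_t):
    u, v = (u + dt * (lap(u) + a * u + b * v),
            v + dt * (delta * lap(v) + c * u + d * v))
    U.append(u.copy()); V.append(v.copy())
U, V = np.array(U), np.array(V)

# regress du/dt on [u_xx, u, v]; coefficients approximate [D_u, a, b]
dudt = ((U[1:] - U[:-1]) / dt).ravel()
X = np.column_stack([lap(U[:-1]).ravel(), U[:-1].ravel(), V[:-1].ravel()])
coef, *_ = np.linalg.lstsq(X, dudt, rcond=None)
```

On this idealized data the recovered coefficients match \((D_u, a, b) = (1, -2, -1.5)\) essentially exactly; with real, noisy data one would instead obtain estimates with uncertainties, as reported in Section 7.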

Understanding the timing and nature of ecosystem and climate subsystem destabilization is a pressing challenge in the face of ongoing climate change. The methodology presented in this paper addresses this challenge by estimating dispersion relations from spatio-temporal data collected before a transition. In this way, both the eigenvalues and the dominant spatial eigenmodes are determined, providing information on the timing of destabilization as well as the type of bifurcation. Hence, this methodology might form the basis of a refined statistical early warning system that signals not only when a critical transition will occur, but also what will happen at it.

Data statement↩︎

All code used in this paper is available at https://github.com/JustPaul99/Stability_Analysis_RD.

Declaration of generative AI and AI-assisted technologies in the writing process↩︎

During the preparation of this work the author(s) used ChatGPT, Gemini, and Copilot to improve readability of the text. After using this tool/service, the author(s) reviewed and edited the content as needed and take(s) full responsibility for the content of the published article.

6 Analysis of the extended Klausmeier model 5↩︎

In this section, we perform a stability analysis of the spatially homogeneous states of the (deterministic part of the) model 5 . Following [23], the system has three spatially homogeneous steady states: the no-vegetation solution \((u_0,v_0)=(p,0)\), and two vegetation equilibria \((u_{1,2},v_{1,2})\) given by \[\begin{align} \begin{aligned} u_{1,2}&=\frac{2 hm + p +2h^2p\pm\sqrt{p^2-4hmp-4m^2}}{2+2h^2},\\ v_{1,2}&=\frac{p\mp\sqrt{p^2-4hmp-4m^2}}{2m+2hp}. \end{aligned} \end{align}\] These vegetation states exist for \(p>p_{SN}:=2m(h + \sqrt{1+h^2})\), since the expression under the square root must be positive for both \(u_{1,2}\) and \(v_{1,2}\) to be real.
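These formulas can be checked numerically. The sketch below assumes the kinetics \(f = p - u - uv^2\) and \(g = -mv + uv^2(1-hv)\), which are consistent with the Jacobian used in this appendix; the parameter values are illustrative, not those of Table 1.

```python
import numpy as np

def kinetics(u, v, p, m, h):
    """Reaction terms consistent with the Jacobian A_i(0) in this appendix
    (an assumption about the deterministic part of Eq. 5)."""
    f = p - u - u * v**2
    g = -m * v + u * v**2 * (1 - h * v)
    return f, g

def vegetated_states(p, m, h):
    s = np.sqrt(p**2 - 4*h*m*p - 4*m**2)
    u1 = (2*h*m + p + 2*h**2*p + s) / (2 + 2*h**2)
    u2 = (2*h*m + p + 2*h**2*p - s) / (2 + 2*h**2)
    v1 = (p - s) / (2*m + 2*h*p)
    v2 = (p + s) / (2*m + 2*h*p)
    return (u1, v1), (u2, v2)

p, m, h = 2.0, 0.45, 0.2                  # illustrative values with p > p_SN
p_SN = 2*m*(h + np.sqrt(1 + h**2))
(u1, v1), (u2, v2) = vegetated_states(p, m, h)
```

Substituting \((u_{1,2}, v_{1,2})\) and \((p, 0)\) back into the kinetics returns residuals at machine precision, confirming the steady-state formulas.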

First, we determine their stability against spatially homogeneous perturbations, i.e. by inspecting the Jacobian 2 for \(k=0\), \[\begin{align} A_{i}(0)= \begin{pmatrix} -1-v_i^2 & -2v_iu_i\\ v_i^2(1-hv_i) & - m +2 u_i v_i -3 hu_iv_i^2 \end{pmatrix}. \end{align}\] The no-vegetation state \((u_0,v_0)\) is stable for all \(p\geq0\), as \(A_0(0)\) has eigenvalues \(\lambda_1=-1<0\) and \(\lambda_2=-m<0\). For the vegetation states we check the stability conditions \(\det(A_{1,2}(0))>0\) and \(\require{physics} \Tr(A_{1,2}(0))<0\). After some calculations we find \[\begin{align} \det(A_{1,2}(0))=\frac{D\mp p\sqrt{D}}{2m+hp}, \end{align}\] with \(D=-4 m^2-4 h m p+p^2\). In the domain \(p>p_{SN}\), \(\det(A_1(0))\) is negative and \(\det(A_2(0))\) is positive. So \((u_1,v_1)\) is unstable, and we check the stability of \((u_2,v_2)\) via the trace. After some calculations, we find \[\require{physics} \begin{align} \Tr(A_2(0))=-1-v_2^2+u_2v_2(1-2hv_2)<0. \end{align}\] This is not a trivial condition to uphold, but using Mathematica’s Reduce, the condition \(0<m\leq2+2h^2\) assures that the trace remains negative. So the vegetation state \((u_2,v_2)\) is stable for \(p>p_{SN}\) and \(0<m\leq 2+2 h^2\).
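These stability claims are easy to confirm numerically: the sketch below evaluates \(A(0)\) at both vegetation states for illustrative parameter values (with \(p > p_{SN}\) and \(m < 2 + 2h^2\); not the paper's settings) and inspects the signs of the eigenvalues.

```python
import numpy as np

def jac0(u, v, m, h):
    """Jacobian A(0) of the homogeneous kinetics at (u, v)."""
    return np.array([[-1 - v**2,            -2*u*v],
                     [v**2 * (1 - h*v), -m + 2*u*v - 3*h*u*v**2]])

p, m, h = 2.0, 0.45, 0.2   # illustrative values with p > p_SN, m < 2 + 2h^2
s = np.sqrt(p**2 - 4*h*m*p - 4*m**2)
u1, v1 = (2*h*m + p + 2*h**2*p + s) / (2 + 2*h**2), (p - s) / (2*m + 2*h*p)
u2, v2 = (2*h*m + p + 2*h**2*p - s) / (2 + 2*h**2), (p + s) / (2*m + 2*h*p)

eig1 = np.linalg.eigvals(jac0(u1, v1, m, h)).real   # one positive: saddle
eig2 = np.linalg.eigvals(jac0(u2, v2, m, h)).real   # all negative: stable
```

For these values \((u_1,v_1)\) has a positive eigenvalue (negative determinant, hence a saddle) while both eigenvalues of \((u_2,v_2)\) are negative, in agreement with the analysis above.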

For the simulations introduced in section 3, we use models with \(m<2\), and we let \(p>p_{SN}\) be the environmental condition that we change to induce tipping or Turing patterns. Therefore, the stability condition for the spatially homogeneous vegetation state is satisfied.

Second, we analyze when we should expect tipping behavior or Turing patterns to emerge. We can find the critical eigenmode \(k_c\) by differentiating the \(k\)-dependent characteristic equation 3 with respect to \(k\) and solving for \(k_c\). Here we use \(\frac{\mathrm{d}\lambda}{\mathrm{d}k}|_{k_c,\mu_c}=0\) and \(\lambda|_{k_c,\mu_c}=0\), where \(\mu_c\) represents the critical parameters, i.e. \(p_{T}\) for fixed \(h\), \(m\) and \(\delta\). This gives \(k_c=\sqrt{\frac{{a\delta+d}}{2\delta}}\), where \(a\) and \(d\) are as defined in 2 .

The critical parameter \(p_T\) can then be found by substituting \(k_c\) into the characteristic equation 3 , which yields the nontrivial implicit relation \[\begin{align} (a\delta-d)^2=-4\delta b c, \end{align}\] where \(a,b,c,d\) are as defined in 2 . From this relation, \(p_T\) can be determined once \(h\), \(m\) and \(\delta\) are fixed; the critical wavenumber \(k_c\) then follows from \(p_T\), \(h\), \(m\) and \(\delta\).
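In practice this relation must be solved numerically. The sketch below evaluates \(a,b,c,d\) at the vegetated state \((u_2,v_2)\) along the branch, locates the root of \((a\delta-d)^2 + 4\delta bc\) by bisection, and then computes \(k_c=\sqrt{(a\delta+d)/(2\delta)}\). The parameters \(h=0.2\), \(m=0.45\), \(\delta=0.01\) and the bracket are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def coeffs(p, m=0.45, h=0.2):
    """Linearization coefficients a, b, c, d at the vegetated state (u2, v2).
    h and m are illustrative values, not the paper's Table 1 settings."""
    s = np.sqrt(p**2 - 4*h*m*p - 4*m**2)
    u = (2*h*m + p + 2*h**2*p - s) / (2 + 2*h**2)
    v = (p + s) / (2*m + 2*h*p)
    a = -1 - v**2
    b = -2*u*v
    c = v**2 * (1 - h*v)
    d = -m + 2*u*v - 3*h*u*v**2
    return a, b, c, d

def turing_gap(p, delta):
    # zero at the Turing onset: (a*delta - d)^2 + 4*delta*b*c = 0
    a, b, c, d = coeffs(p)
    return (a*delta - d)**2 + 4*delta*b*c

delta = 0.01
lo, hi = 1.15, 2.0           # bracket found by inspection; gap changes sign here
for _ in range(60):          # plain bisection
    mid = 0.5 * (lo + hi)
    if turing_gap(lo, delta) * turing_gap(mid, delta) <= 0:
        hi = mid
    else:
        lo = mid
p_T = 0.5 * (lo + hi)
a, b, c, d = coeffs(p_T)
k_c = np.sqrt((a*delta + d) / (2*delta))   # critical wavenumber at p_T
```

A library root-finder (e.g. SciPy's `brentq`) could replace the hand-rolled bisection; the plain loop keeps the sketch dependency-free.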

For Turing patterns, we require that \(k_c>0\), which implies that \(a\delta+d>0\). Together with the stability conditions for the homogeneous vegetation state, \(a+d<0\) and \(ad-bc>0\), from the trace and determinant respectively, we find two minimal sets of conditions for Turing patterns within a stable vegetation state: \[\begin{align} a<0, \quad d>0,\quad bc<0, \quad 0<\delta<1 \quad \boldsymbol{OR} \quad a>0, \quad d<0, \quad bc<0, \quad \delta>1. \end{align}\]

For the extended Klausmeier model we have \(a_{KL}=-1-v_2^2<0\), \(b_{KL}=-2v_2u_2<0\), \(c_{KL}=v_2^2(1-hv_2)>0\)1 and \(d_{KL}=u_2v_2(1-2hv_2)\), the sign of which is indeterminate in general. \(d_{KL}\) is therefore the only coefficient we can vary to study different settings for the emergence of saddle-node and Turing bifurcations. For \(d_{KL}<0\) the spatially homogeneous vegetation state is always stable; since we study \(\delta<1\) and \(a_{KL}<0\), Turing patterns or saddle-node bifurcations can only occur when \(d_{KL}>0\). Altogether, the conditions for Turing patterns are \[\begin{align} \label{Eq:Turing_condition_1} d_{KL}|_{p_T}>\delta |a_{KL}||_{p_T} \end{align}\tag{6}\] with \(p_T^2-4hmp_T-4m^2>0\), i.e. \(p_T>p_{SN}\), where \(p_T\) follows from \[\begin{align} \label{Eq:Turing_condition_2} (\delta a_{KL}-d_{KL})^2=-4\delta b_{KL} c_{KL}. \end{align}\tag{7}\]

7 Supplementary Data: fitted model parameters↩︎

In this section, we report additional results for the numerical experiments, including the fitted parameters found in step 4 of the method. In this paper, the data of fluctuations \(\underline{Z} = \begin{Bmatrix} \bar{u} & \bar{v} \end{Bmatrix}^T\) was fitted to the linear model (per equation 4 ) \[\begin{align} \begin{aligned}\label{eq:linearPDE_app} \partial_t \bar{u} & = \partial_{xx} \bar{u} + a \bar{u} + b \bar{v}\\ \partial_t \bar{v} & = \delta \partial_{xx} \bar{v} + c \bar{u} + d \bar{v}. \end{aligned} \end{align}\tag{8}\] To denote the estimated parameters of this linear model, we use – as in the main text – the shorthand notation mean(standard deviation), where the standard deviation is given in units of the last digits of the mean. For example, \(-0.06(36)\) means \(-0.06 \pm 0.36\) and \(52(47)\) means \(52 \pm 47\) [29].

7.1 Additional Material for Numerical Experiment 1↩︎


Figure 10: All individual dispersion relations found for Experiment 1 with noise strength 0.1, correlation length l_c = 0.2 and \delta=0.01 (Turing case). Figure 3 reports the statistics of these. Noteworthy here is that some of the estimated dispersion relations have different forms, with peaks at different spatial eigenmodes k_*, which might explain the spread of the eigenmodes found in the statistics.
Figure 11: Heatmap of a random run from the Klausmeier model (Eq. 5 ) with noise strength 10 and baseline values from Table 1. Blue indicates regions where the vegetation variable v remains above the unstable state threshold (v>0.0846469), while yellow marks regions where it falls below this threshold (v<0.0846469). This demonstrates that strong noise can locally push the vegetation towards different attractors.

7.2 Additional Material for Numerical Experiments 2 & 3↩︎


Figure 12: Estimated dispersion relations for different spatial sampling rates for \delta = 0.01 (Turing case; top row) and \delta = 0.5 (saddle-node case; bottom row). Data is generated using Experiment 3 settings in Table 1, except the added noise is now spatially uncorrelated white noise (l_c=0). See the caption of Figure 3 for the details of the depicted lines, areas, circles and insets.


7.3 Additional Material for Numerical Experiment 4↩︎

Figure 13: Bifurcation diagram including insets with dispersion relations for the case \delta = 0.1. For these parameter settings, the orange line does destabilise via a Turing bifurcation, but this bifurcation is located very close to the saddle-node bifurcation (p_\text{T} \approx 1.11445 and p_\text{SN} \approx 1.10499). Hence the dispersion relation only shows a peak for k_* \neq 0 very close to the Turing bifurcation, and it can appear very flat around k = 0. Note that Figure 1 shows similar figures for cases \delta = 0.01 (Turing case) and \delta = 0.5 (Tipping case).


7.4 Additional Material for Numerical Experiment 5↩︎


References↩︎

[1]
David I Armstrong McKay, Arie Staal, Jesse F Abrams, Ricarda Winkelmann, Boris Sakschewski, Sina Loriani, Ingo Fetzer, Sarah E Cornell, Johan Rockström, and Timothy M Lenton. Exceeding 1.5°C global warming could trigger multiple climate tipping points. Science, 377(6611):eabn7950, 2022. https://doi.org/10.1126/science.abn7950.
[2]
Timothy M Lenton, Hermann Held, Elmar Kriegler, Jim W Hall, Wolfgang Lucht, Stefan Rahmstorf, and Hans Joachim Schellnhuber. Tipping elements in the earth’s climate system. Proceedings of the National Academy of Sciences, 105(6):1786–1793, 2008. https://doi.org/10.1073/pnas.0705414105.
[3]
Marten Scheffer, Steve Carpenter, Jonathan A Foley, Carl Folke, and Brian Walker. Catastrophic shifts in ecosystems. Nature, 413(6856):591–596, 2001. https://doi.org/10.1038/35098000.
[4]
C. S. Holling. Resilience and stability of ecological systems. Annual Review of Ecology and Systematics, 4:1–23, 1973. https://doi.org/10.1146/annurev.es.04.110173.000245.
[5]
Sybren Drijfhout, Sebastian Bathiany, Claudie Beaulieu, Victor Brovkin, Martin Claussen, Chris Huntingford, Marten Scheffer, Giovanni Sgubin, and Didier Swingedouw. . Proceedings of the National Academy of Sciences, 112(43):E5777–E5786, 2015. https://doi.org/10.1073/pnas.1511451112.
[6]
Sjoerd Terpstra, Swinda KJ Falkena, Robbin Bastiaansen, Sebastian Bathiany, Henk A Dijkstra, and Anna S von der Heydt. Assessment of abrupt shifts in CMIP6 models using edge detection. AGU Advances, 6(3):e2025AV001698, 2025. https://doi.org/10.1029/2025AV001698.
[7]
Bernardo Flores, Encarni Montoya, Boris Sakschewski, Nathália Nascimento, Arie Staal, Richard Betts, Carolina Levis, David Lapola, Adriane Esquível-Muelbert, et al. . Nature, 615:70–80, 2024. https://doi.org/10.1038/s41586-023-06970-0.
[8]
Markus Drüke, Boris Sakschewski, Werner von Bloh, Maik Billing, Wolfgang Lucht, and Kirsten Thonicke. . Communications Earth & Environment, 4:911, 2023. https://doi.org/10.1038/s43247-023-00911-5.
[9]
Niklas Boers, Alexander Robinson, Marisa Montoya, Martin Rypdal, and Nils Bochow. . Geophysical Research Letters, 50:e2022GL101827, 2023. https://doi.org/10.1029/2022GL101827.
[10]
Nils Bochow, Anna Poltronieri, Alexander Robinson, Marisa Montoya, Martin Rypdal, and Niklas Boers. . Nature, 615:503–509, 2023. https://doi.org/10.1038/s41586-023-06503-9.
[11]
Peter Ditlevsen and Susanne Ditlevsen. . Nature Communications, 14(1):1–12, 2023. https://doi.org/10.1038/s41467-023-39810-w.
[12]
René M van Westen, Michael Kliphuis, and Henk A Dijkstra. . Geophysical Research Letters, 52(6):e2024GL114532, 2025. https://doi.org/10.1029/2024GL114532.
[13]
Sebastian Bathiany, Robbin Bastiaansen, Ana Bastos, Lana Blaschke, Jelle Lever, Sina Loriani, Wanda De Keersmaecker, Wouter Dorigo, Milutin Milenković, Cornelius Senf, et al. . Surveys in Geophysics, 46(2):265–301, 2025. https://doi.org/10.1007/s10712-024-09833-z.
[14]
Marten Scheffer, Jordi Bascompte, William A Brock, Victor Brovkin, Stephen R Carpenter, Vasilis Dakos, Hermann Held, Egbert H Van Nes, Max Rietkerk, and George Sugihara. Early-warning signals for critical transitions. Nature, 461(7260):53–59, 2009. https://doi.org/10.1038/nature08227.
[15]
Israel Edem Agbehadji, Tafadzwanashe Mabhaudhi, Joel Botai, and Muthoni Masinde. . Climate, 11(9), 2023. https://doi.org/10.3390/cli11090188.
[16]
René M Van Westen, Michael Kliphuis, and Henk A Dijkstra. . Science Advances, 10(6):eadk1189, 2024. https://doi.org/10.1126/sciadv.adk1189.
[17]
Chris A Boulton, Peter Good, and Timothy M Lenton. . Theoretical Ecology, 6(3):373–384, 2013. https://doi.org/10.1007/s12080-013-0191-7.
[18]
Vasilis Dakos, Marten Scheffer, Egbert H. van Nes, Victor Brovkin, Vladimir Petoukhov, and Hermann Held. Slowing down as an early warning signal for abrupt climate change. Proceedings of the National Academy of Sciences, 105(38):14308–14312, 2008. https://doi.org/10.1073/pnas.0802430105.
[19]
Max Rietkerk, Vanessa Skiba, Els Weinans, Raphaël Hébert, and Thomas Laepple. Ambiguity of early warning signals for climate tipping points. Nature Climate Change, pages 1–10, 2025. https://doi.org/10.1038/s41558-025-02328-8.
[20]
Max Rietkerk, Robbin Bastiaansen, Swarnendu Banerjee, Johan van de Koppel, Mara Baudena, and Arjen Doelman. Evasion of tipping in complex systems through spatial pattern formation. Science, 374(6564):eabj0359, 2021. https://doi.org/10.1126/science.abj0359.
[21]
Koen Siteur, Eric Siero, Maarten B. Eppinga, Jens D.M. Rademacher, Arjen Doelman, and Max Rietkerk. . Ecological Complexity, 20:81–96, 2014. https://doi.org/10.1016/j.ecocom.2014.09.002.
[22]
Swarnendu Banerjee, Mara Baudena, Paul Carter, Robbin Bastiaansen, Arjen Doelman, and Max Rietkerk. Rethinking tipping points in spatial ecosystems. arXiv preprint arXiv:2306.13571, 2023. https://doi.org/10.48550/arXiv.2306.13571.
[23]
Robbin Bastiaansen, Paul Carter, and Arjen Doelman. Stable planar vegetation stripe patterns on sloped terrain in dryland ecosystems. Nonlinearity, 32(8):2759–2814, 2019. https://doi.org/10.1088/1361-6544/ab1767.
[24]
Christopher A. Klausmeier. Regular and irregular patterns in semiarid vegetation. Science, 284(5421):1826–1828, 1999. https://doi.org/10.1126/science.284.5421.1826.
[25]
A. M. Turing. . Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 237(641):37–72, 1952. https://doi.org/10.1098/rstb.1952.0012.
[26]
A. Doelman. Pattern formation in reaction–diffusion systems: An explicit approach, 2019. pp. 129–182 in Complexity Science(M. Peletier, R. van Santen, E. Steur, eds.), World Scientific. URL: https://pub.math.leidenuniv.nl/ doelmana/RDSPF-web.pdf.
[27]
Eric Siero. Resolving soil and surface water flux as drivers of pattern formation in turing models of dryland vegetation: A unified approach. Physica D: Nonlinear Phenomena, 414:132695, 2020. https://doi.org/10.1016/j.physd.2020.132695.
[28]
Dave Higdon. . In Clive W. Anderson, Vic Barnett, Philip C. Chatwin, and Abdel H. El-Shaarawi, editors, Quantitative Methods for Current Environmental Issues, pages 37–56, London, 2002. Springer London. https://doi.org/10.1007/978-1-4471-0657-9_2.
[29]
U.S. Environmental Protection Agency. . Technical Report EPA 402-B-04-001C, U.S. Environmental Protection Agency. Manual for the Assessment of Radiation in the Environment (MARLAP), Vol. III. URL: https://www.epa.gov/sites/default/files/2015-05/documents/402-b-04-001c-19-final.pdf.
[30]
Björn Sandstede. . In Bernold Fiedler, editor, Handbook of Dynamical Systems, volume 2 of Handbook of Dynamical Systems, pages 983–1055. Elsevier Science, 2002. https://doi.org/10.1016/S1874-575X(02)80039-X.
[31]
Jelle van der Voort, Mara Baudena, Ehud Meron, Max Rietkerk, and Arjen Doelman. . Environmental Research Letters, 2025. https://doi.org/10.1088/1748-9326/adc3ab.
[32]
David Pinto-Ramos and Ricardo Martinez-Garcia. How spatial patterns can lead to less resilient ecosystems. arXiv preprint arXiv:2505.08671, 2025. https://doi.org/10.48550/arXiv.2505.08671.
[33]
Lorenzo Boninsegna and Cecilia Clementi. Sparse learning of stochastic dynamical equations. Journal of Chemical Physics, 148(24):241723, 2018. https://doi.org/10.1063/1.5018409.
[34]
Jared L. Callaham, Jean-Christophe Loiseau, Georgios Rigas, and Steven L. Brunton. Nonlinear stochastic modelling with langevin regression. Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences, 477(2250):20210092, 2021. https://doi.org/10.1098/rspa.2021.0092.
[35]
Steven L. Brunton, Joshua L. Proctor, and J. Nathan Kutz. Discovering governing equations from data by sparse identification of nonlinear dynamical systems. Proceedings of the National Academy of Sciences, 113(15):3932–3937, 2016. https://doi.org/10.1073/pnas.1517384113.
[36]
Samuel H. Rudy, Steven L. Brunton, Joshua L. Proctor, and J. Nathan Kutz. Data-driven discovery of partial differential equations. Science Advances, 3(4):e1602614, 2017. https://doi.org/10.1126/sciadv.1602614.
[37]
U. Fasel, J. N. Kutz, B. W. Brunton, and S. L. Brunton. Ensemble-SINDy: Robust sparse model discovery in the low-data, high-noise limit, with active learning and control. Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences, 478(2260), 2022. https://doi.org/10.1098/rspa.2021.0904.
[38]
Daniel A. Messenger and David M. Bortz. Weak SINDy for partial differential equations. Journal of Computational Physics, 443:110525, 2021. https://doi.org/10.1016/j.jcp.2021.110525.
[39]
Daniel Messenger, Emiliano Dall’Anese, and David Bortz. . Proceedings of Mathematical and Scientific Machine Learning, 190:241–256, 2022. https://doi.org/10.48550/arXiv.2203.03979.

  1. Follows from substitution of the minimal parameter \(h_{min}=\frac{-2m+p}{p}\) in \((1-hv_1)|_{h=h_{min}}=\frac{2m}{p}>0\).↩︎