February 28, 2025
The critical value of an eikonal equation is the unique value of a parameter for which the equation admits solutions and is deeply related to the effective Hamiltonian of a corresponding homogenization problem. We study approximation strategies for the critical value of eikonal equations posed on networks. They are based on the large time behavior of corresponding time-dependent Hamilton–Jacobi equations. We provide error estimates and some numerical tests, showing the performance and the convergence properties of the proposed algorithms.
35R02, 65M15, 49L25, 35B40.
Hamilton–Jacobi equations, numerical approximation, Mañé critical value, embedded networks, large time behavior, effective Hamiltonian.
Given a Hamiltonian \(H:\mathbb{T}^N\times\mathbb{R}^N\to\mathbb{R}\), where \(\mathbb{T}^N\) is the \(N\)-dimensional torus, the Mañé critical value of the eikonal equation defined by \(H\) is the unique value \(c\) such that \[H(x,\mathrm{D}u)=c,\qquad x\in\mathbb{T}^N,\] admits solutions. The critical value is also deeply connected to the homogenization of Hamilton–Jacobi equations, which, starting from the seminal paper LionsPapanicolaouVaradhan87?, has received considerable interest from both theoretical and applied viewpoints; see, for example, Capuzzo-DolcettaIshii01?, MajdaSouganidis94?, ContrerasIturriagaSiconolfi14?, Evans92?, NamahRoquejoffre00?. It then comes as no surprise that many works, such as GomesOberman04?, AchdouCamilliCapuzzo-Dolcetta08?, CamilliCapuzzoDolcettaGomes07?, CacaceCamilli16?, are devoted to the approximation of the critical value for problems posed on \(\mathbb{T}^N\).
The aim of this paper is to approximate the critical value of eikonal equations posed on networks. By a network we mean a connected finite graph \(\Gamma\) embedded in \(\mathbb{R}^N\) with vertices linked by regular simple curves \(\gamma\), called arcs of the network. A Hamiltonian on \(\Gamma\) is a collection of Hamiltonians \(\{H_\gamma\}\) indexed by the arcs, depending on state and momentum variables, with the crucial feature that Hamiltonians associated to arcs possessing different support are totally unrelated. In recent years there has been increasing interest in the study of this type of problem, leading to many valuable contributions. Among these we cite BarlesChasseigne24?, CamilliSchieborn12?, ImbertMonneau17?, ImbertMonneauZidani12?, LionsSouganidis17?, AchdouTchou15?.
The eikonal problem on \(\Gamma\), as defined in SiconolfiSorrentino18?, is to find a continuous function \(u:\Gamma\to\mathbb{R}\) such that \(U\mathrel{\vcenter{:}}=u\circ\gamma\) is a viscosity solution to \[\label{eq:introeik} H_\gamma(s,\partial U)=c,\tag{1}\] for any arc \(\gamma\), where \(c\in\mathbb{R}\) is a fixed parameter. As in the torus case, the critical value of the eikonal equation is the unique value of the parameter \(c\) for which 1 admits solutions, and it is deeply related to the so-called effective Hamiltonian of a corresponding homogenization problem, see PozzaSiconolfiSorrentino24?.
In general, 1 cannot be solved explicitly. Moreover, designing a numerical scheme for such an equation is a difficult task, since it requires approximating a first order Hamilton–Jacobi problem in which both the solution \(u\) and the critical value \(c\) are unknown. Indeed, there are few papers devoted to the numerical approximation of solutions to Hamilton–Jacobi equations on networks, and most of them focus on time-dependent problems as described in ImbertMonneau17?, Siconolfi22?, which are, from the numerical point of view, simpler than the eikonal one. Let us mention the finite difference based methods outlined in CamilliCarliniMarchi18?, CostesequeLebacqueMonneau14? and the semi-Lagrangian schemes in CarliniFestaForcadel20?, CarliniSiconolfi23?, CamilliFestaSchieborn13?.
Recently, in Pozza25_2?, it has been shown that the large time behavior of solutions to time-dependent Hamilton–Jacobi equations can be exploited to obtain solutions to a corresponding eikonal problem. Accordingly, our idea is to use the large time behavior of such solutions to approximate the critical value.
We provide two numerical schemes for the approximation of the critical value. The first one, Algorithm 1, uses an approach already employed in AchdouCamilliCapuzzo-Dolcetta08? for problems posed on \(\mathbb{T}^N\) and allows an a priori error estimate. For Algorithm 2 we provide a convergence theorem, and our simulations show that it usually achieves the same results as Algorithm 1 with fewer steps.
The paper is organized as follows: Section 2 provides some basic facts about networks and Hamiltonians defined on them, and states our main assumptions. In Section 3 we introduce the time-dependent equation and its large time behavior, which will provide the theoretical basis for our algorithms. Section 4 is devoted to the analysis of our approximation strategies for the critical value. In Section 5 we present the numerical simulations of our algorithms. Finally, Section 6 is devoted to the proof of a theorem which extends the large time behavior analysis performed in Pozza25_2? to our framework.
The first author was partially supported by the Italian Ministry of Instruction, University and Research (MIUR) (PRIN Project 2022238YY5, “Optimal control problems: analysis, approximation”), by the INdAM–GNCS Project (CUP E53C24001950001, “Metodi avanzati per problemi di Mean Field Games ed applicazioni”), by Avvio alla Ricerca (CUP B83C24006550001, “Numerical schemes for Hamilton-Jacobi equations and Mean Field Games problems”) and by the European Union - Next Generation EU, Missione 4, Componente 1, CUP B53C23002010006. The second author is a member of the INdAM research group GNAMPA.
Here we describe our framework, namely networks, Hamiltonians defined on them and some related concepts useful for our study.
We fix a dimension \(N\) and \(\mathbb{R}^N\) as ambient space. An embedded network, or continuous graph, is a subset \(\Gamma\subset\mathbb{R}^N\) of the form \[\Gamma=\bigcup_{\gamma\in\mathcal{E}}\gamma([0,|\gamma|])\subset\mathbb{R}^N,\] where \(\mathcal{E}\) is a finite collection of regular (i.e., \(C^1\) with non-vanishing derivative) simple oriented curves, called arcs of the network, with Euclidean length \(|\gamma|\) and parameterized by arc length in \([0,|\gamma|]\) (i.e., \(|\dot{\gamma}|\equiv1\) for any \(\gamma\in\mathcal{E}\)). Note that we are also assuming the existence of one-sided derivatives at the endpoints 0 and \(|\gamma|\). We stress that a regular change of parameters does not affect our results.
On the support of any arc \(\gamma\), we also consider the inverse parametrization defined as \[\widetilde{\gamma}(s)\mathrel{\vcenter{:}}=\gamma(|\gamma|-s),\qquad \text{for }s\in[0,|\gamma|].\] We call \(\widetilde{\gamma}\) the inverse arc of \(\gamma\). We assume \[\label{eq:nosovrap} \gamma((0,|\gamma|))\cap\gamma'\mleft(\mleft[0,\mleft|\gamma'\mright|\mright]\mright)=\emptyset,\qquad \text{whenever }\gamma'\ne\gamma,\widetilde{\gamma}.\tag{2}\]
We call vertices the initial and terminal points of the arcs, and denote by \(\mathbf{V}\) the set of all such vertices. It follows from 2 that vertices are the only points where arcs with different support intersect and, in particular, \[\gamma((0,|\gamma|))\cap\mathbf{V}=\emptyset,\qquad \text{for any }\gamma\in\mathcal{E}.\]
We set, for each \(x\in\mathbf{V}\), \[\Gamma_x\mathrel{\vcenter{:}}=\{\gamma\in\mathcal{E}:\gamma(0)=x\}.\]
We assume that the network is connected, namely given two vertices there is a finite concatenation of arcs linking them. A loop is an arc with initial and final point coinciding. The unique restriction we require on the geometry of \(\Gamma\) is
\(\mathcal{E}\) does not contain loops.
This condition is due to the fact that the known literature on time-dependent Hamilton–Jacobi equations on networks does not admit loops, see, e.g., ImbertMonneau17?, Siconolfi22?, LionsSouganidis17?. We believe that our approach, the same used in Siconolfi22?, could also accommodate loops, but this would require nontrivial adjustments, such as the addition of a periodicity condition on loops.
The network \(\Gamma\) inherits a geodesic distance, denoted by \(d_\Gamma\), from the Euclidean metric of \(\mathbb{R}^N\). It is clear that given \(x\), \(y\) in \(\Gamma\) there is at least one geodesic linking them. Moreover, the geodesic distance is equivalent to the Euclidean one. We will denote by \(\mathop{\mathrm{diam}}(\Gamma)\) the diameter of \(\Gamma\) with respect to the geodesic distance.
We also consider a differential structure on the network by defining the tangent bundle of \(\Gamma\), \(T\Gamma\) in symbols, as the set made up of the \((x,q)\in\Gamma\times\mathbb{R}^N\) with \(q\) of the form \[q=\lambda\dot{\gamma}(s),\qquad \text{if }x=\gamma(s),\ s\in[0,|\gamma|],\ \text{with }\lambda\in\mathbb{R}.\] Note that \(\dot{\gamma}(s)\) is uniquely determined, up to a sign, if \(x\in\Gamma\setminus\mathbf{V}\), or in other words if \(s\notin\{0,|\gamma|\}\).
For the sake of simplicity, by curve we mean throughout the paper an absolutely continuous curve. We point out that the pair \(\mleft(\xi,\dot{\xi}\mright)\), where \(\xi\) is a curve in \(\Gamma\), is naturally contained in \(T\Gamma\).
A Hamiltonian on \(\Gamma\) is a collection of Hamiltonians \(\mathcal{H}\mathrel{\vcenter{:}}=\{H_\gamma\}_{\gamma\in\mathcal{E}}\), where \[\begin{align} H_\gamma:[0,|\gamma|]\times\mathbb{R}&\:\longrightarrow\:\mathbb{R}\\ (s,\mu)&\:\longmapsto\: H_\gamma(s,\mu), \end{align}\] satisfying \[H_{\widetilde{\gamma}}(s,\mu)=H_\gamma(|\gamma|-s,-\mu),\qquad \text{for any arc }\gamma.\] We emphasize that, apart from the above compatibility condition, the Hamiltonians \(H_\gamma\) are unrelated.
We require each \(H_\gamma\) to be:
continuous in both arguments;
Lipschitz continuous in \(s\) for any \(\mu\in\mathbb{R}\);
superlinearly coercive in the momentum variable, uniformly in \(s\), i.e., \[\lim_{|\mu|\to\infty}\inf_{s\in [0,|\gamma|]}\frac{H_\gamma(s,\mu)}{|\mu|}=\infty,\qquad \text{for any }\gamma\in\mathcal{E};\]
convex in \(\mu\);
strictly quasiconvex in \(\mu\), which means that, for any \(s\in[0,|\gamma|]\), \(\mu,\mu'\in\mathbb{R}\) and \(\rho\in(0,1)\), \[H_\gamma\mleft(s,\rho\mu+(1-\rho)\mu'\mright)<\max\mleft\{H_\gamma(s,\mu),H_\gamma\mleft(s,\mu'\mright)\mright\}.\]
Note that if \(H_\gamma\) is strictly convex in the momentum variable, then it satisfies both of the two previous convexity conditions.
Under [condcont] [condsuplin] [condconv] it is natural to define, for any \(\gamma\in\mathcal{E}\), the Lagrangian corresponding to \(H_\gamma\) as \[L_\gamma(s,\lambda)\mathrel{\vcenter{:}}=\sup_{\mu\in\mathbb{R}}(\lambda\mu-H_\gamma(s,\mu)),\] where the supremum is actually achieved thanks to [condsuplin]. We have, for each \(\lambda\in\mathbb{R}\) and \(s\in[0,|\gamma|]\), \[\label{eq:lagcomp} L_\gamma(s,\lambda)=L_{\widetilde{\gamma}}(|\gamma|-s,-\lambda).\tag{3}\] Moreover, the Lagrangians \(L_\gamma\) are Lipschitz continuous in the first variable, and
convex and superlinear in the second one. We will define later on a Lagrangian defined on the whole network, assuming suitable gluing conditions on the \(L_\gamma\).
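To fix ideas, the Legendre transform defining \(L_\gamma\) can be approximated by maximizing over a bounded grid of momenta, which is legitimate once the grid captures the maximizer (guaranteed here by coercivity). The following sketch is ours, with an illustrative Hamiltonian not taken from the paper:

```python
import numpy as np

def legendre_transform(H, s, lambdas, mu_grid):
    """Numerically evaluate L(s, lam) = sup_mu (lam*mu - H(s, mu)) by brute
    force on a bounded momentum grid (valid if the sup is attained inside)."""
    # rows: values of lam; columns: values of mu
    vals = np.outer(lambdas, mu_grid) - H(s, mu_grid)[None, :]
    return vals.max(axis=1)

# Illustrative choice: H(s, mu) = mu^2/2 - cos(2*pi*s), whose Lagrangian is
# L(s, lam) = lam^2/2 + cos(2*pi*s).
H = lambda s, mu: 0.5 * mu**2 - np.cos(2 * np.pi * s)
lams = np.linspace(-2.0, 2.0, 9)
L_num = legendre_transform(H, 0.25, lams, np.linspace(-10.0, 10.0, 20001))
L_exact = 0.5 * lams**2 + np.cos(2 * np.pi * 0.25)
```

The grid bounds play the role of the coercivity assumption: they must be large enough that the maximizing momentum lies strictly inside them.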
In this section we briefly recall some results about Hamilton–Jacobi equations posed on networks, which will form the theoretical basis of our algorithms.
We focus on the time-dependent problem, thoroughly analyzed in Siconolfi22?, ImbertMonneau17?, of the form \[\label{eq:globteik} \partial_t v(x,t)+\mathcal{H}(x,\mathrm{D}_x v)=0,\qquad \text{on }\Gamma\times(0,\infty).\tag{4}\] This notation synthetically indicates the family (for \(\gamma\) varying in \(\mathcal{E}\)) of Hamilton–Jacobi equations \[\label{eq:teikg} \partial_t U(s,t)+H_\gamma(s,\partial_s U(s,t))=0,\qquad \text{on }(0,|\gamma|)\times(0,\infty).\tag{5}\] It has been established that, to get existence and uniqueness of solutions, the equations 5 must be coupled with an additional condition on the one-dimensional interfaces \[\mleft\{(x,t):t\in\mathbb{R}^+\mright\}\qquad \text{with }x\in\mathbf{V}.\] Following ImbertMonneau17?, we set \[a_\gamma\mathrel{\vcenter{:}}=\max_{s\in[0,|\gamma|]}\min_{\mu\in\mathbb{R}}H_\gamma(s,\mu),\quad \text{for each }\gamma\in\mathcal{E},\] and call flux limiter any function \(x\mapsto c_x\) from \(\mathbf{V}\) to \(\mathbb{R}\) satisfying \[c_x\ge\max_{\gamma\in\Gamma_x}a_\gamma,\qquad \text{for any }x\in\mathbf{V}.\] Hereafter we will only consider the minimal flux limiter \(c_x\), namely \[c_x\mathrel{\vcenter{:}}=\max_{\gamma\in\Gamma_x}a_\gamma,\qquad \text{for any }x\in\mathbf{V}.\] The flux limiter, roughly speaking, is a choice of a constant at each vertex which bounds from above the time derivative of any subsolution there. We refer to Siconolfi22? for the definition of solution to 4 and the role of the flux limiter in it.
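The constants \(a_\gamma\) and the minimal flux limiter are straightforward to approximate on grids in \(s\) and \(\mu\). The sketch below is our own illustration (function names, grids and the sample Hamiltonian are not from the paper):

```python
import numpy as np

def arc_flux_constant(H, length, n_s=201, mu_grid=None):
    """Approximate a_gamma = max over s of min over mu of H(s, mu),
    sampling s on the arc and mu on a bounded grid (illustrative helper)."""
    if mu_grid is None:
        mu_grid = np.linspace(-10.0, 10.0, 2001)
    s_grid = np.linspace(0.0, length, n_s)
    # inner minimization over momentum, outer maximization over the arc
    return max(H(s, mu_grid).min() for s in s_grid)

def minimal_flux_limiter(a_gammas_at_vertex):
    """c_x = max of a_gamma over the arcs incident to the vertex x."""
    return max(a_gammas_at_vertex)

# Illustrative choice: H(s, mu) = mu^2/2 - cos(2*pi*s) on a unit-length arc;
# min_mu H = -cos(2*pi*s), so a_gamma = max_s (-cos(2*pi*s)) = 1.
H = lambda s, mu: 0.5 * mu**2 - np.cos(2 * np.pi * s)
a_gamma = arc_flux_constant(H, 1.0)
```

The momentum grid must be wide enough to contain the minimizer of \(H_\gamma(s,\cdot)\) for every sampled \(s\), which the coercivity assumption makes possible.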
Next we define a global Lagrangian \(L:T\Gamma\to\mathbb{R}\) by gluing together the \(L_\gamma\) and taking into account the flux limiter \(c_x\):
if \(x=\gamma(s)\) for some \(\gamma\in\mathcal{E}\) and \(s\in(0,|\gamma|)\) then \[L(x,q)\mathrel{\vcenter{:}}= L_\gamma\mleft(s,\frac{q\dot{\gamma}(s)}{|\dot{\gamma}(s)|^2}\mright);\]
if \(x\in\mathbf{V}\) and \(q\ne0\) then \[L(x,q)\mathrel{\vcenter{:}}=\min L_\gamma\mleft(0,\frac{q\dot{\gamma}(0)}{|\dot{\gamma}(0)|^2}\mright),\] where the minimum is taken over the \(\gamma\in\Gamma_x\) with \(\dot{\gamma}(0)\) parallel to \(q\);
if \(x\in\mathbf{V}\) and \(q=0\) then \[L(x,q)\mathrel{\vcenter{:}}=-c_x.\]
We notice that, thanks to 3 , \(L\) is a well-defined function on \(T\Gamma\).
PozzaSiconolfi23? Given an initial datum \(\phi\in C(\Gamma)\), the function \[\label{eq:vft} (\mathcal{S}(t)\phi)(x)\mathrel{\vcenter{:}}=\min\mleft\{\phi(\xi(0))+\int_0^t L\mleft(\xi,\dot{\xi}\mright)\mathrm{d}\tau: \text{\(\xi\) is a curve with }\xi(t)=x\mright\}\tag{6}\] is the unique solution to 4 with \(\mathcal{S}(0)\phi=\phi\) and flux limiter \(c_x\).
We stress that there exists an optimal curve for \((\mathcal{S}(t)\phi)(x)\). Furthermore, the family of operators \(\{\mathcal{S}(t)\}_{t\in\mathbb{R}^+}\) forms a semigroup, whose main properties are summarized below.
(Semigroup property) For any \(t,t'\in\mathbb{R}^+\) we have \(\mathcal{S}(t+t')=\mathcal{S}(t)\circ\mathcal{S}(t')\);
(Monotonicity property) for every \(\phi_1,\phi_2\in C(\Gamma)\) such that \(\phi_1\le\phi_2\) in \(\Gamma\) \[\mathcal{S}(t)\phi_1\le\mathcal{S}(t)\phi_2,\qquad \text{for any }t\in\mathbb{R}^+;\]
for any \(\phi\in C(\Gamma)\), \(t\in\mathbb{R}^+\) and \(a\in\mathbb{R}\), \(\mathcal{S}(t)(\phi+a)=\mathcal{S}(t)\phi+a\).
Siconolfi22? If the initial datum \(\phi\) is Lipschitz continuous, then \((x,t)\mapsto(\mathcal{S}(t)\phi)(x)\) is Lipschitz continuous in \(\Gamma\times\mathbb{R}^+\).
We consider the eikonal equation \[\label{eq:globeik} \mathcal{H}(x,\mathrm{D}u)=a,\qquad \text{on }\Gamma,\tag{7}\] namely the family of Hamilton–Jacobi equations \[H_\gamma(s,\partial U)=a,\qquad \text{on }[0,|\gamma|],\] where \(a\in\mathbb{R}\), for \(\gamma\) varying in \(\mathcal{E}\) and coupled with a state-constraint-type boundary condition at the vertices. The critical value, or Mañé critical value, is characterized in SiconolfiSorrentino18?, Pozza25? as the unique value \[\label{eq:critvaldef} c\ge a_0\mathrel{\vcenter{:}}=\max_{\gamma\in\mathcal{E}}a_\gamma\tag{8}\] for which (\(\mathcal{H}\)J\(c\)) (namely the equation 7 with \(a=c\)) admits solutions.
It has been shown in Pozza25_2? that \(\mathcal{S}(t)\phi+ct\) converges to a solution to (\(\mathcal{H}\)J\(c\)) as \(t\to\infty\). We will use this fact to approximate the critical value.
The large time behavior of the solutions to 4 is mainly due to the relation between the optimal curves of 6 and the so-called Aubry set. In more detail, let \(\sigma_c\) be a real map on the tangent bundle \(T\Gamma\) such that:
if \(x=\gamma(s)\) for some \(\gamma\in\mathcal{E}\) and \(s\in(0,|\gamma|)\) then \[\sigma_c(x,q)\mathrel{\vcenter{:}}=\max\{\mu q\dot{\gamma}(s):\mu\in\mathbb{R},\,H_\gamma(s,\mu)=c\};\]
if \(x\in\mathbf{V}\) then \[\sigma_c(x,q)\mathrel{\vcenter{:}}=\min\max\{\mu q\dot{\gamma}(0):\mu\in\mathbb{R},\,H_\gamma(0,\mu)=c\},\] where the minimum is taken over the \(\gamma\in\Gamma_x\) with \(\dot{\gamma}(0)\) parallel to \(q\).
We call Aubry set on \(\Gamma\), the closed set \(\mathcal{A}_\Gamma\) made up of
the \(x\in\Gamma\) incident to a closed curve \(\xi:[0,T]\to\Gamma\) with \(\int_0^T\sigma_c\mleft(\xi,\dot{\xi}\mright)\mathrm{d}\tau=0\) and a.e. non-vanishing derivative;
the \(x=\gamma(s)\) with \(\gamma\in\mathcal{E}\) and \(s\in[0,|\gamma|]\) such that the set \(\{\mu\in\mathbb{R}:\,H_\gamma(s,\mu)=c\}\) is a singleton.
Setting the semidistance \[S_c(y,x)\mathrel{\vcenter{:}}=\min\mleft\{\int_0^T\sigma_c\mleft(\xi,\dot{\xi}\mright)\mathrm{d}\tau: \text{\(\xi:[0,T]\to\Gamma\) is a curve from \(y\) to \(x\)}\mright\}\] we further have
Pozza25? Let \(\Gamma'\) be a closed subset of \(\Gamma\), let \(w:\Gamma'\to\mathbb{R}\) be a continuous function and define \[u(x)\mathrel{\vcenter{:}}=\min_{y\in\Gamma'}(w(y)+S_c(y,x)),\qquad \text{for }x\in\Gamma.\] Then \(u\) is the maximal subsolution to (\(\mathcal{H}\)J\(c\)) not exceeding \(w\) on \(\Gamma'\), and a solution in \(\Gamma\setminus(\Gamma'\setminus\mathcal{A}_\Gamma)\). If \(w\) is a subsolution, \(u\) agrees with it on \(\Gamma'\).
As pointed out in Pozza25_2?, the convergence of \(\mathcal{S}(t)\phi+ct\) is influenced by the flux limiter. We account for this by extending, in a certain sense, the Aubry set.
We call extended Aubry set on \(\Gamma\) the closed set \(\widetilde{\mathcal{A}}_\Gamma\) made up of
the \(x\in\mathcal{A}_\Gamma\);
the \(x\in\mathbf{V}\) such that \(c_x=c\).
The large time behavior of solutions to 4 in our setting is described by the next theorem, whose proof is given in Section 6.
Given \(\phi\in C(\Gamma)\), the function \(\mathcal{S}(t)\phi+ct\) uniformly converges, as \(t\) goes to \(\infty\), to \[\label{eq:ltb461} u(x)\mathrel{\vcenter{:}}=\min_{y\in\widetilde{\mathcal{A}}_\Gamma}\mleft(\min_{z\in\Gamma}(\phi(z)+S_c(z,y))+S_c(y,x)\mright),\qquad \text{for }x\in\Gamma.\tag{9}\]
The limit function 9 is a solution to (\(\mathcal{H}\)J\(c\)) in \(\Gamma\setminus\mleft(\widetilde{\mathcal{A}}_\Gamma\setminus\mathcal{A}_\Gamma\mright)\) agreeing with \[w(x)\mathrel{\vcenter{:}}=\min\limits_{y\in\Gamma}(\phi(y)+S_c(y,x)),\qquad \text{for }x\in\Gamma,\] on \(\widetilde{\mathcal{A}}_\Gamma\), as well as the maximal subsolution in \(\Gamma\) equal to \(w\) on \(\widetilde{\mathcal{A}}_\Gamma\), see [maxsubsol].
Let us now list some consequences of [ltb] which will provide the theoretical basis for the algorithms in Section 4.
Given a Lipschitz continuous function \(\phi\), we have that \[\label{eq:basictheo461} \mleft|\frac{\phi(x)-(\mathcal{S}(t)\phi)(x)}{t}-c\mright|\le2\ell\mathop{\mathrm{diam}}(\Gamma)\frac{1}{t},\qquad \text{for any }(x,t)\in\Gamma\times(0,\infty),\tag{10}\] where \(\ell\) is the Lipschitz constant of \((x,t)\mapsto(\mathcal{S}(t)\phi)(x)\).
Proof. Let \(u\) be a solution to (\(\mathcal{H}\)J\(c\)) in \(\Gamma\setminus\widetilde{\mathcal{A}}_\Gamma\). We know from [weakltb] [Scalprop] that \[(\mathcal{S}(t)\phi)(x)+ct\le u(x)+\max_{y\in\Gamma}(\phi(y)-u(y)),\qquad \text{for any }(x,t)\in\Gamma\times\mathbb{R}^+.\] It follows that there is an \(x_1\in\Gamma\) such that \[\label{eq:basictheo1} (\mathcal{S}(t)\phi)(x_1)+ct\le\phi(x_1),\qquad \text{for any }t\in\mathbb{R}^+,\tag{11}\] thus [liptsol] yields, for any \((x,t)\in\Gamma\times\mathbb{R}^+\), \[\label{eq:basictheo2} \begin{align} \phi(x)-(\mathcal{S}(t)\phi)(x)-ct\ge\;&\phi(x)-(\mathcal{S}(t)\phi)(x)-ct-\phi(x_1)+(\mathcal{S}(t)\phi)(x_1)+ct\\ \ge\;&-2\ell d_\Gamma(x_1,x)\ge-2\ell\mathop{\mathrm{diam}}(\Gamma). \end{align}\tag{12}\] Exploiting once again [weakltb] [Scalprop] we get \[(\mathcal{S}(t)\phi)(x)+ct\ge u(x)+\min_{y\in\Gamma}(\phi(y)-u(y)),\qquad \text{for any }(x,t)\in\Gamma\times\mathbb{R}^+,\] hence there is an \(x_2\in\Gamma\) such that \[\label{eq:basictheo3} (\mathcal{S}(t)\phi)(x_2)+ct\ge\phi(x_2),\qquad \text{for any }t\in\mathbb{R}^+,\tag{13}\] and, as a consequence, \[\phi(x)-(\mathcal{S}(t)\phi)(x)-ct\le2\ell\mathop{\mathrm{diam}}(\Gamma),\qquad \text{for any }(x,t)\in\Gamma\times\mathbb{R}^+.\] This, together with 12 , implies 10 and concludes our proof. ◻
The bounds given below are a simple consequence of 11 and 13 .
Given a Lipschitz continuous function \(\phi\), we have that \[\min_{x\in\Gamma}\frac{\phi(x)-(\mathcal{S}(t)\phi)(x)}{t}\le c\le\max_{x\in\Gamma}\frac{\phi(x)-(\mathcal{S}(t)\phi)(x)}{t},\qquad \text{for any }t\in(0,\infty).\]
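Given any approximation of \(\mathcal{S}(t)\phi\) on a grid, these bounds are immediate to evaluate. A minimal sketch (the function name and the toy data are ours):

```python
import numpy as np

def critical_value_bounds(phi, S_t_phi, t):
    """Lower and upper bounds on the critical value c from one evolution:
    min (phi - S(t)phi)/t <= c <= max (phi - S(t)phi)/t."""
    r = (np.asarray(phi, dtype=float) - np.asarray(S_t_phi, dtype=float)) / t
    return r.min(), r.max()

# Toy check: if S(t)phi = phi - c*t exactly, both bounds equal c (= 0.7 here).
lo, hi = critical_value_bounds([0.0, 1.0, 2.0], [-0.7, 0.3, 1.3], 1.0)
```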
Next we provide a different approach to the approximation of the critical value. The main difference between [basictheo] and the theorem below is that the former provides an estimate for the rate of convergence of \((\phi-\mathcal{S}(t)\phi)/t\) to \(c\) as \(t\to\infty\), while the latter exploits the convergence of \(\mathcal{S}(t)\phi+ct\) to a solution to (\(\mathcal{H}\)J\(c\)) to define two sequences converging monotonically to \(c\).
Given \(\phi\in C(\Gamma)\) and \(T>0\), we recursively set \(\phi_0\mathrel{\vcenter{:}}=\phi\), \(\phi_{k+1}\mathrel{\vcenter{:}}=\mathcal{S}(T)\phi_k\), \[\widetilde{c}_k\mathrel{\vcenter{:}}=\max_{x\in\Gamma}\frac{\phi_{k-1}(x)-\phi_k(x)}{T}\qquad \text{and}\qquad\undertilde{c}_k\mathrel{\vcenter{:}}=\min_{x\in\Gamma}\frac{\phi_{k-1}(x)-\phi_k(x)}{T}.\] The sequences \(\{\widetilde{c}_k\}_{k\in\mathbb{N}}\) and \(\{\undertilde{c}_k\}_{k\in\mathbb{N}}\) are nonincreasing and nondecreasing, respectively, and both converge to \(c\).
Proof. Fix \(k\ge1\) and let \(x_1,y_1\in\Gamma\) and \(\xi_1:[0,T]\to\Gamma\) be such that \[\max_{x\in\Gamma}(\phi_k(x)-\phi_{k+1}(x))=\phi_k(x_1)-\phi_{k+1}(x_1)=(\mathcal{S}(T)\phi_{k-1})(x_1)-(\mathcal{S}(T)\phi_k)(x_1)\] and \[(\mathcal{S}(T)\phi_k)(x_1)=\phi_k(y_1)+\int_0^T L\mleft(\xi_1,\dot{\xi}_1\mright)\mathrm{d}\tau.\] Then \[\label{eq:itertheo1} \begin{align} \widetilde{c}_{k+1}T=\,&\max_{x\in\Gamma}(\phi_k(x)-\phi_{k+1}(x))\\ \le\,&\phi_{k-1}(y_1)+\int_0^T L\mleft(\xi_1,\dot{\xi}_1\mright)\mathrm{d}\tau-\phi_k(y_1)-\int_0^T L\mleft(\xi_1,\dot{\xi}_1\mright)\mathrm{d}\tau\le\widetilde{c}_k T. \end{align}\tag{14}\] Similarly, let \(x_2,y_2\in\Gamma\) and \(\xi_2:[0,T]\to\Gamma\) be such that \[\min_{x\in\Gamma}(\phi_k(x)-\phi_{k+1}(x))=\phi_k(x_2)-\phi_{k+1}(x_2)=(\mathcal{S}(T)\phi_{k-1})(x_2)-(\mathcal{S}(T)\phi_k)(x_2)\] and \[(\mathcal{S}(T)\phi_{k-1})(x_2)=\phi_{k-1}(y_2)+\int_0^T L\mleft(\xi_2,\dot{\xi}_2\mright)\mathrm{d}\tau.\] Hence, \[\label{eq:itertheo2} \begin{align} \undertilde{c}_{k+1}T=\,&\min_{x\in\Gamma}(\phi_k(x)-\phi_{k+1}(x))\\ \ge\,&\phi_{k-1}(y_2)+\int_0^T L\mleft(\xi_2,\dot{\xi}_2\mright)\mathrm{d}\tau-\phi_k(y_2)-\int_0^T L\mleft(\xi_2,\dot{\xi}_2\mright)\mathrm{d}\tau\ge\undertilde{c}_k T. \end{align}\tag{15}\] Inequalities 14 and 15 show that the sequences \(\{\widetilde{c}_k\}\) and \(\{\undertilde{c}_k\}\) are nonincreasing and nondecreasing, respectively. Finally, we exploit [ltb] to obtain \[\begin{align} \lim_{k\to\infty}\widetilde{c}_k=\;&\lim_{k\to\infty}\max_{x\in\Gamma}\frac{\phi_{k-1}(x)-\phi_k(x)}{T}\\ =\;&\lim_{k\to\infty}\max_{x\in\Gamma}\frac{(\mathcal{S}((k-1)T)\phi)(x)+c(k-1)T-(\mathcal{S}(kT)\phi)(x)-ckT}{T}+c=c, \end{align}\] i.e., \(\{\widetilde{c}_k\}\) converges to \(c\). Similarly, we can prove that \(\{\undertilde{c}_k\}\) converges to \(c\). ◻
From [itertheo] we get that \(\undertilde{c}_k\le c\le\widetilde{c}_k\) for any \(k\in\mathbb{N}\), which implies the next result.
Under the same assumptions of [itertheo], we define for every \(k\in\mathbb{N}\) \[\label{eq:iterapprox461} c_k\mathrel{\vcenter{:}}=\frac{\widetilde{c}_k+\undertilde{c}_k}{2}\qquad \text{and}\qquad\varepsilon_k\mathrel{\vcenter{:}}=\frac{\widetilde{c}_k-\undertilde{c}_k}{2}.\tag{16}\] We have that \(|c_k-c|\le\varepsilon_k\) for any \(k\in\mathbb{N}\). In particular \(\{\varepsilon_k\}_{k\in\mathbb{N}}\) is a nonincreasing infinitesimal sequence and \(c_k\) converges to \(c\).
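The resulting iteration can be sketched with the semigroup step abstracted as a callable; keeping running upper and lower bounds is consistent with the monotonicity just proved. All names below are ours, and the exact step used in the toy check is not the paper's scheme:

```python
import numpy as np

def iterate_critical_value(step, phi0, T, tol=1e-10, max_iter=100):
    """Approximate c via phi_{k+1} = S(T) phi_k, returning (c_k, eps_k) as
    in (16). `step` stands for any approximation of the operator S(T); the
    paper uses a semi-Lagrangian scheme for this purpose."""
    phi = np.asarray(phi0, dtype=float)
    c_up, c_low = np.inf, -np.inf
    for _ in range(max_iter):
        phi_next = step(phi)
        diff = (phi - phi_next) / T
        # monotone upper/lower bounds on the critical value
        c_up, c_low = min(c_up, diff.max()), max(c_low, diff.min())
        if (c_up - c_low) / 2 <= tol:
            break
        phi = phi_next
    return (c_up + c_low) / 2, (c_up - c_low) / 2

# Toy check: with the exact step S(T)phi = phi - c*T (c = 0.7, T = 0.5)
# the two bounds collapse after a single iteration.
c_k, eps_k = iterate_critical_value(lambda p: p - 0.7 * 0.5, np.zeros(4), T=0.5)
```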
In this section we present two algorithms for the approximation of the critical value.
Our algorithms require a method to approximate solutions to 4 , since they rely on the large time behavior of said solutions.
We employ the semi-Lagrangian scheme presented in CarliniSiconolfi23? to approximate the solutions to 4 . Rather than directly discretizing equation 4 , the scheme addresses a modified problem that involves modified Hamiltonians \(\widetilde{H}_\gamma\) obtained from the \(H_\gamma\) through the procedure described in CarliniSiconolfi23?. This construction depends on the preliminary choice of a compact interval \(I\) of the momentum variable \(\mu\) within which the modified Hamiltonians coincide with the original ones: \[\widetilde{H}_\gamma(s,\mu)=H_\gamma(s,\mu),\qquad \text{for every }\gamma\in\mathcal{E},\,s\in[0,|\gamma|],\,\mu\in I.\] The key advantage of this approach is the existence of a positive constant \(\beta_0\), depending on \(I\), such that the modified Lagrangians \[\widetilde{L}_\gamma(s,\lambda)\mathrel{\vcenter{:}}=\max_{\mu\in\mathbb{R}}\mleft(\lambda\mu-\widetilde{H}_\gamma(s,\mu)\mright),\qquad \text{for }\gamma\in\mathcal{E},\,s\in [0,|\gamma|],\,\lambda\in\mathbb{R},\] are equal to \(+\infty\) when \(\lambda\) is outside \([-\beta_0,\beta_0]\), for all \(s\in[0,|\gamma|]\), \(\gamma\in\mathcal{E}\), and are Lipschitz continuous in \([-\beta_0,\beta_0]\). Setting \(\widetilde{\mathcal{H}}\mathrel{\vcenter{:}}=\{\widetilde{H}_\gamma\}_{\gamma\in\mathcal{E}}\), the modified problem is \[\label{eq:globteikmod} \partial_t u(x,t)+\widetilde{\mathcal{H}}(x,\mathrm{D}_x u)=0,\qquad \text{on }\Gamma\times(0,\infty).\tag{17}\] This reformulation is justified by the fact that the solution to 17 coincides with the solution to 4 (see CarliniSiconolfi23?).
The scheme is based on a space-time discretization defined by a final time \(T>0\) and a pair \(\Delta\mathrel{\vcenter{:}}=(\Delta x,\Delta t)\) such that \[0<\Delta x<|\gamma|,\quad \text{for every }\gamma\in\mathcal{E},\qquad \text{and}\qquad0<\Delta t<T.\] To define the space-time grid, we fix an orientation \(\mathcal{E}^+\), i.e., a subset of \(\mathcal{E}\) containing exactly one arc in each pair \(\{\gamma,\widetilde{\gamma}\}\), and set \[N^\Delta_\gamma\mathrel{\vcenter{:}}=\mleft\lceil\frac{|\gamma|}{\Delta x}\mright\rceil,\quad \text{for }\gamma\in\mathcal{E}^+,\qquad \text{and}\qquad N^\Delta_T\mathrel{\vcenter{:}}=\mleft\lceil\frac{T}{\Delta t}\mright\rceil,\] where \(\lceil\cdot\rceil\) stands for the ceiling function. We define \[s^{\Delta,\gamma}_i\mathrel{\vcenter{:}}= i\frac{|\gamma|}{N^\Delta_\gamma},\quad \text{for }i\in\mleft\{0,\dotsc,N_\gamma^\Delta\mright\},\,\gamma\in\mathcal{E}^+,\qquad t^\Delta_m\mathrel{\vcenter{:}}= m\frac{T}{N^\Delta_T},\quad \text{for }m\in\mleft\{0,\dotsc,N_T^\Delta\mright\}.\] We introduce the grids associated to any \(\gamma \in \mathcal{E}^+\) and the grid on the interval \([0,T]\) \[\mathscr{S}_{\Delta,\gamma}\mathrel{\vcenter{:}}=\mleft\{s_i^{\Delta,\gamma}:i\in\mleft\{0,\dotsc,N_\gamma^\Delta\mright\}\mright\},\quad \text{for }\gamma\in\mathcal{E}^+,\qquad\mathscr{T}_\Delta^T\mathrel{\vcenter{:}}=\mleft\{t^\Delta_m:m\in\mleft\{0,\dotsc,N^\Delta_T\mright\}\mright\},\] and finally the grid on \(\Gamma\) and the grid on \(\Gamma \times [0,T]\) \[\Gamma^0_\Delta\mathrel{\vcenter{:}}=\bigcup_{\gamma\in\mathcal{E}^+}\gamma(\mathscr{S}_{\Delta,\gamma}),\qquad\Gamma_\Delta^T\mathrel{\vcenter{:}}=\Gamma_\Delta^0\times\mathscr{T}^T_\Delta.\] We further set \(\Delta_\gamma x\mathrel{\vcenter{:}}=\dfrac{|\gamma|}{N_\gamma^\Delta}\) and say that \(\Delta\mathrel{\vcenter{:}}=(\Delta x,\Delta t)\) is an admissible pair whenever \[\label{eq:goodpair} 0<\Delta x<|\gamma|,\quad \text{for every 
}\gamma\in\mathcal{E},\qquad0<\Delta t<T\qquad \text{and}\qquad\Delta t\le\min_{\gamma\in\mathcal{E}^+}\frac{\Delta_\gamma x}{\beta_0}.\tag{18}\] We denote by \(B\mleft(\Gamma^0_\Delta\mright)\) and \(B(\mathscr{S}_{\Delta,\gamma})\), for \(\gamma\in\mathcal{E}^+\), the spaces of bounded functions from \(\Gamma^0_\Delta\) and \(\mathscr{S}_{\Delta,\gamma}\), respectively, to \(\mathbb{R}\). Given an arc \(\gamma\in\mathcal{E}^+\) and \(f\in B(\mathscr{S}_{\Delta,\gamma})\), we define the operator \[S_\gamma[f](s)\mathrel{\vcenter{:}}=\min_{\frac{s-|\gamma|}{\Delta t}\le\lambda\le\frac{s}{\Delta t}}\{I_\gamma[f](s-\Delta t\lambda)+\Delta t\widetilde{L}_\gamma(s,\lambda)\},\qquad \text{for }s\in\mathscr{S}_{\Delta,\gamma},\] where \(I_\gamma\) is the interpolating polynomial of degree 1 defined by \[I_\gamma[f](s)\mathrel{\vcenter{:}}= f(s_i)+\frac{s-s_i}{s_{i+1}-s_i}(f(s_{i+1})-f(s_i)),\] with \(s_i,s_{i+1}\in\mathscr{S}_{\Delta,\gamma}\) and \(s\in[s_i,s_{i+1}]\). To define the scheme, we extend \(S_\gamma\) to \(B\mleft(\Gamma^0_\Delta \mright)\) setting the map \(S:B\mleft(\Gamma^0_\Delta \mright)\to B\mleft(\Gamma^0_\Delta\mright)\) such that
if \(x\in\Gamma^0_\Delta \setminus\mathbf{V}\), let \(\gamma\in\mathcal{E}^+\) and \(s\in\mathscr{S}_{\Delta,\gamma}\) be such that \(x=\gamma(s)\); then \[S[f](x)\mathrel{\vcenter{:}}= S_\gamma[f\circ\gamma](s);\]
if \(x\in\mathbf{V}\) we set \(S\) through a two-step procedure: \[\begin{align} \widetilde{S}[f](x)\mathrel{\vcenter{:}}=\,&\min\mleft\{S_\gamma[f]\mleft(\gamma^{-1}(x)\mright): \gamma\in\mathcal{E}^+\text{ with }\gamma(0)=x\text{ or }\gamma(|\gamma|)=x\mright\},\\ S[f](x)\mathrel{\vcenter{:}}=\,&\min\mleft\{\widetilde{S}[f](x),f(x)-c_x\Delta t\mright\}. \end{align}\]
Given a Lipschitz initial datum \(\phi\), the scheme computes the approximate solution \(v^\Delta: \Gamma^T_\Delta \to \mathbb{R}\) to the time-dependent problem in the following way: \[\label{eq:SLscheme} \mleft\{ \begin{align} v^\Delta(x,0)=\,&\phi (x),&& \text{if }x\in\Gamma^0_\Delta ,\\ v^\Delta(x,t)=\,&S\mleft[v^\Delta(\cdot,t-\Delta t)\mright](x),&& \text{if }t\ne0,\,(x,t)\in\Gamma^T_\Delta. \end{align} \mright.\tag{19}\] The following error estimate holds for this numerical scheme:
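To illustrate the interior update of 19 , here is a sketch of one semi-Lagrangian step on a single arc, with piecewise linear interpolation and a brute-force minimization over a grid of slopes. Vertex coupling and the flux limiter are omitted, and all discretization choices below are our own, not the paper's:

```python
import numpy as np

def sl_step_on_arc(f, s_grid, dt, L, n_lam=81):
    """One semi-Lagrangian time step on a single arc (interior update of (19)):
    S[f](s) = min over admissible slopes lam of I[f](s - dt*lam) + dt*L(s, lam),
    where I is the piecewise linear interpolant of f on s_grid. Illustrative
    sketch only: no vertices, no flux limiter."""
    length = s_grid[-1]
    out = np.empty_like(f)
    for i, s in enumerate(s_grid):
        # admissible slopes keep the foot of the characteristic in [0, length]
        lams = np.linspace((s - length) / dt, s / dt, n_lam)
        feet = s - dt * lams
        vals = np.interp(feet, s_grid, f) + dt * L(s, lams)
        out[i] = vals.min()
    return out

# Example: with L(s, lam) = lam^2/2 and f = 0 the exact evolution is constant 0,
# and the discrete step stays near 0 (up to the slope-grid resolution).
grid = np.linspace(0.0, 1.0, 51)
new_f = sl_step_on_arc(np.zeros_like(grid), grid, dt=0.1,
                       L=lambda s, lam: 0.5 * lam**2)
```

In the paper's scheme the minimization range for \(\lambda\) can be restricted to \([-\beta_0,\beta_0]\), since the modified Lagrangians are \(+\infty\) outside that interval.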
CarliniCoscettiPozza24? Given a Lipschitz continuous initial datum \(\phi\), a final time \(T>0\) and an admissible pair \(\Delta\), let \(v^\Delta\) be the approximation of \(\mathcal{S}(t)\phi\) on \(\Gamma^T_\Delta\) given by the semi-Lagrangian scheme 19 . There exists a constant \(C>0\), independent of \(\Delta\) and \(T\), such that \[\mleft|v^\Delta(x,t)-(\mathcal{S}(t)\phi)(x)\mright|\le CT\mleft(\frac{1}{\sqrt{\Delta t}}\min\mleft\{\Delta x,\frac{\Delta x^2}{\Delta t}\mright\}+\sqrt{\Delta t}\mright),\qquad \text{for any }(x,t)\in\Gamma^T_\Delta.\]
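As a sanity check on this rate, one can couple the discretization parameters; the following computation is our own worked example, not stated in the paper, taking \(\Delta t=\Delta x/\beta_0\), which is of the order allowed by 18 :

```latex
% With \Delta t = \Delta x/\beta_0:
\frac{1}{\sqrt{\Delta t}}\min\mleft\{\Delta x,\frac{\Delta x^2}{\Delta t}\mright\}+\sqrt{\Delta t}
  =\sqrt{\frac{\beta_0}{\Delta x}}\,\Delta x\min\{1,\beta_0\}+\sqrt{\frac{\Delta x}{\beta_0}}
  =\mleft(\sqrt{\beta_0}\min\{1,\beta_0\}+\frac{1}{\sqrt{\beta_0}}\mright)\sqrt{\Delta x},
```

so that, for this choice, the error bound above is of order \(T\sqrt{\Delta x}\).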
We stress that, in principle, our algorithms could employ any other numerical method for the approximation of solutions to 4 . In view of CarliniCoscettiPozza24?, which proves that our definition of solution is equivalent to the one proposed in ImbertMonneau17?, we could even choose the finite difference schemes proposed in CamilliCarliniMarchi18?, CostesequeLebacqueMonneau14? or the semi-Lagrangian method outlined in CarliniFestaForcadel20?.
Using an approach employed in AchdouCamilliCapuzzo-Dolcetta08? for equations posed on the \(N\)-dimensional torus, we obtain an algorithm complete with an error estimate which proves the convergence of the scheme. The main idea behind this algorithm is to exploit the estimates given in [basictheo] [basicbounds].
Algorithm 1 consists of two modules: \(\mathcal{M}_1\) and \(\mathcal{M}_2\). \(\mathcal{M}_1\) is the scheme 19 ; it takes as inputs a Lipschitz continuous initial datum \(\phi\), a final time \(T>0\) and an admissible pair \(\Delta\mathrel{\vcenter{:}}=(\Delta x,\Delta t)\), and gives as output an approximation \(v^\Delta\), at time \(T\), of the exact solution to 4 with initial datum \(\phi\) presented at the start of this section: \[\mathcal{M}_1:(\phi,T,\Delta)\longmapsto v^\Delta(\cdot,T).\] \(\mathcal{M}_2\) is a control module on the outputs of \(\mathcal{M}_1\). For \(f\), \(g\) defined on the grid \(\Gamma_\Delta^0\) and \(A>0\), \[\mathcal{M}_2:(f,g,A,\Delta x)\longmapsto\mleft(\min_{x\in\Gamma_\Delta^0}\frac{f(x)-g(x)}{A},\max_{x\in\Gamma_\Delta^0}\frac{f(x)-g(x)}{A}\mright).\] Given a Lipschitz continuous initial datum \(\phi\), a time \(T>0\) and an admissible pair \(\Delta\), these modules are used to define, at each iteration of the algorithm, \[\overline{c}^\Delta_k\mathrel{\vcenter{:}}=\max_{x\in\Gamma_\Delta^0}\frac{\phi(x)-v^\Delta(x,kT)}{kT}\qquad \text{and}\qquad\underline{c}^\Delta_k\mathrel{\vcenter{:}}=\min_{x\in\Gamma_\Delta^0}\frac{\phi(x)-v^\Delta(x,kT)}{kT},\] where \(k\) is the number of iterations. The algorithm stops when \(\mleft(\overline{c}^\Delta_k-\underline{c}^\Delta_k\mright)/2\) is smaller than a given tolerance and returns \(\mleft(\overline{c}^\Delta_k+\underline{c}^\Delta_k\mright)/2\) as an approximation of the critical value \(c\). We point out that \(\Delta t\) is the time step used internally by \(\mathcal{M}_1\), while \(T\) is the time step used by our method at each iteration.
In view of [basicbounds] and 8 , 1 can be improved by taking \[\overline{c}^\Delta_1\mathrel{\vcenter{:}}=\max_{x\in\Gamma_\Delta^0}\frac{\phi(x)-v^\Delta(x,T)}{T},\qquad\underline{c}^\Delta_1\mathrel{\vcenter{:}}=\max\mleft\{a_0,\min_{x\in\Gamma_\Delta^0}\frac{\phi(x)-v^\Delta(x,T)}{T}\mright\}\] and, for \(k>1\), \[\overline{c}^\Delta_k\mathrel{\vcenter{:}}=\min\mleft\{\overline{c}^\Delta_{k-1},\max_{x\in\Gamma_\Delta^0}\frac{\phi(x)-v^\Delta(x,kT)}{kT}\mright\},\qquad\underline{c}^\Delta_k\mathrel{\vcenter{:}}=\max\mleft\{\underline{c}^\Delta_{k-1},\min_{x\in\Gamma_\Delta^0}\frac{\phi(x)-v^\Delta(x,kT)}{kT}\mright\}.\]
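In code, this improved recursion amounts to a monotone clamp of the raw grid extrema at each iteration; a minimal sketch follows, where the function name and calling convention are ours and `a0` stands for the bound \(a_0\) above.

```python
def improved_bounds(prev, raw_up, raw_low, a0):
    """Monotone update of the upper/lower bounds (sketch).

    raw_up / raw_low are the max / min over the grid of
    (phi(x) - v^Delta(x, kT)) / (kT); `prev` holds the bounds from the
    previous iteration, or None at the first iteration (k = 1).
    """
    if prev is None:                             # k = 1
        return raw_up, max(a0, raw_low)
    up_prev, low_prev = prev
    return min(up_prev, raw_up), max(low_prev, raw_low)
```

The update guarantees that the upper bound never increases and the lower bound never decreases along the iterations.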
Given a Lipschitz continuous initial datum \(\phi\), a time \(T>0\) and an admissible pair \(\Delta\), let \(v^\Delta(\cdot,kT)\mathrel{\vcenter{:}}=\mathcal{M}_1(\phi,kT,\Delta)\) for \(k\in\mathbb{N}\). We set, for \(k\ge1\), \[\overline{c}^\Delta_k\mathrel{\vcenter{:}}=\max_{x\in\Gamma_\Delta^0}\frac{\phi(x)-v^\Delta(x,kT)}{kT},\qquad\underline{c}^\Delta_k\mathrel{\vcenter{:}}=\min_{x\in\Gamma_\Delta^0}\frac{\phi(x)-v^\Delta(x,kT)}{kT}.\] We have that \[\mleft|\frac{\overline{c}^\Delta_k+\underline{c}^\Delta_k}{2}-c\mright|\le C_0\mleft(\frac{1+\Delta x}{kT}+\frac{1}{\sqrt{\Delta t}}\min\mleft\{\Delta x,\frac{\Delta x^2}{\Delta t}\mright\}+\sqrt{\Delta t}\mright),\qquad \text{for any }k\ge1,\] where \(C_0>0\) is a constant independent of \(\Delta\) and \(k\).
[basicerr] proves that the approximation given by 1 converges to the critical value as \(k\) diverges and, provided 18 holds, as \[\min\mleft\{\frac{\Delta x}{\sqrt{\Delta t}},\frac{\Delta x^2}{\Delta t^{\frac{3}{2}}}\mright\}\longrightarrow0.\]
Proof of [basicerr]. Fix \(k\ge1\) and set \[\overline{x}\mathrel{\vcenter{:}}=\mathop{\mathrm{arg\,max}}_{x\in\Gamma}(\phi(x)-(\mathcal{S}(kT)\phi)(x)),\qquad\overline{y}\mathrel{\vcenter{:}}=\mathop{\mathrm{arg\,min}}_{y\in\Gamma_\Delta^0}|y-\overline{x}|.\] Clearly \(|\overline{x}-\overline{y}|\le\dfrac{\Delta x}2\), hence \[\label{eq:basicerr1} \begin{align} 0\le\,&\max_{x\in\Gamma}(\phi(x)-(\mathcal{S}(kT)\phi)(x))-\max_{y\in\Gamma_\Delta^0}(\phi(y)-(\mathcal{S}(kT)\phi)(y))\\ \le\,&\phi(\overline{x})-(\mathcal{S}(kT)\phi)(\overline{x})-\phi(\overline{y})+(\mathcal{S}(kT)\phi)(\overline{y})\le\ell\Delta x, \end{align}\tag{20}\] where \(\ell\) is the Lipschitz constant of \((x,t)\mapsto(\mathcal{S}(t)\phi)(x)\). Then, by [errCS], there is \(C>0\) independent of \(\Delta\) and \(k\) such that \[\begin{align} \mleft|\overline{c}_k^\Delta-\max_{x\in\Gamma}\frac{\phi(x)-(\mathcal{S}(kT)\phi)(x)}{kT}\mright|\le\,&\frac{\ell\Delta x}{kT}+C\mleft(\frac{1}{\sqrt{\Delta t}}\min\mleft\{\Delta x,\frac{\Delta x^2}{\Delta t}\mright\}+\sqrt{\Delta t}\mright),\\ \mleft|\underline{c}_k^\Delta-\min_{x\in\Gamma}\frac{\phi(x)-(\mathcal{S}(kT)\phi)(x)}{kT}\mright|\le\,&\frac{\ell\Delta x}{kT}+C\mleft(\frac{1}{\sqrt{\Delta t}}\min\mleft\{\Delta x,\frac{\Delta x^2}{\Delta t}\mright\}+\sqrt{\Delta t}\mright), \end{align}\] thus [basictheo] yields \[\begin{align} \mleft|\frac{\overline{c}^\Delta_k+\underline{c}^\Delta_k}{2}-c\mright|\le&\mleft|\max_{x\in\Gamma}\frac{\phi(x)-(\mathcal{S}(kT)\phi)(x)}{2kT}+\min_{x\in\Gamma}\frac{\phi(x)-(\mathcal{S}(kT)\phi)(x)}{2kT}-c\mright|\\ &+\frac{\ell\Delta x}{kT}+C\mleft(\frac{1}{\sqrt{\Delta t}}\min\mleft\{\Delta x,\frac{\Delta x^2}{\Delta t}\mright\}+\sqrt{\Delta t}\mright)\\ \le&\,2\ell\mathop{\mathrm{diam}}(\Gamma)\frac{1}{kT}+\frac{\ell\Delta x}{kT}+C\mleft(\frac{1}{\sqrt{\Delta t}}\min\mleft\{\Delta x,\frac{\Delta x^2}{\Delta t}\mright\}+\sqrt{\Delta t}\mright). \end{align}\] Since \(k\) is arbitrary, this concludes the proof. ◻
Finally, we know from [basicbounds] that, for any \(k\ge1\), \[\begin{gather} \mleft|\max_{x\in\Gamma}\frac{\phi(x)-(\mathcal{S}(kT)\phi)(x)}{2kT}+\min_{x\in\Gamma}\frac{\phi(x)-(\mathcal{S}(kT)\phi)(x)}{2kT}-c\mright|\\ \le\mleft|\max_{x\in\Gamma}\frac{\phi(x)-(\mathcal{S}(kT)\phi)(x)}{2kT}-\min_{x\in\Gamma}\frac{\phi(x)-(\mathcal{S}(kT)\phi)(x)}{2kT}\mright|, \end{gather}\] thus, arguing as in the proof of [basicerr], we get the following result, which justifies the stopping criterion \(\overline{c}^\Delta_k-\underline{c}^\Delta_k<2\varepsilon\) at [alg:basicalgo1] in 1.
Under the same assumptions as in [basicerr], we have that \[\mleft|\frac{\overline{c}^\Delta_k+\underline{c}^\Delta_k}{2}-c\mright|\le\frac{\overline{c}_k^\Delta-\underline{c}_k^\Delta}{2}+C_0\mleft(\frac{\Delta x}{kT}+\frac{1}{\sqrt{\Delta t}}\min\mleft\{\Delta x,\frac{\Delta x^2}{\Delta t}\mright\}+\sqrt{\Delta t}\mright),\qquad \text{for any }k\ge1,\] where \(C_0>0\) is a constant independent of \(\Delta\) and \(k\).
Next we describe a numerical scheme which exploits [itertheo], instead of [basictheo], to obtain an approximation of the critical value. While we do not have an a priori error estimate for this new algorithm, the simulations in 5 show that it is considerably faster than 1.
Following [iterapprox], \(c_k\) defined in 16 is a reasonable approximation of \(c\) for \(k\) large enough, thus we propose a scheme, 2, which estimates the critical value using a numerical approximation of \(c_k\). It consists of the same two modules \(\mathcal{M}_1\) and \(\mathcal{M}_2\) used in 1. Given a Lipschitz continuous initial datum \(\phi\), a time \(T>0\) and an admissible pair \(\Delta\), they are used to define, at each iteration, \[\widetilde{c}^\Delta_k\mathrel{\vcenter{:}}=\max_{x\in\Gamma_\Delta^0}\frac{v^\Delta(x,(k-1)T)-v^\Delta(x,kT)}{T}\qquad \text{and}\qquad\undertilde{c}^\Delta_k\mathrel{\vcenter{:}}=\min_{x\in\Gamma_\Delta^0}\frac{v^\Delta(x,(k-1)T)-v^\Delta(x,kT)}{T},\] where \(k\) is the number of iterations. The algorithm stops when \(\mleft(\widetilde{c}^\Delta_k-\undertilde{c}^\Delta_k\mright)/2\) is smaller than a given tolerance and returns \(\mleft(\widetilde{c}^\Delta_k+\undertilde{c}^\Delta_k\mright)/2\) as an approximation of the critical value \(c\). We recall that \(\Delta t\) is the time step used internally by \(\mathcal{M}_1\), while \(T\) is the time step used by our method at each iteration.
In view of [itertheo] and 8 , 2 can be improved by taking \[\widetilde{c}^\Delta_1\mathrel{\vcenter{:}}=\max_{x\in\Gamma_\Delta^0}\frac{\phi(x)-v^\Delta(x,T)}{T},\qquad \undertilde{c}^\Delta_1\mathrel{\vcenter{:}}=\max\mleft\{a_0,\min_{x\in\Gamma_\Delta^0}\frac{\phi(x)-v^\Delta(x,T)}{T}\mright\}\] and, for \(k>1\), \[\begin{align} \widetilde{c}^\Delta_k&\mathrel{\vcenter{:}}=\min\mleft\{\widetilde{c}^\Delta_{k-1},\max_{x\in\Gamma_\Delta^0}\frac{v^\Delta(x,(k-1)T)-v^\Delta(x,kT)}{T}\mright\},\\ \undertilde{c}^\Delta_k&\mathrel{\vcenter{:}}=\max\mleft\{\undertilde{c}^\Delta_{k-1},\min_{x\in\Gamma_\Delta^0}\frac{v^\Delta(x,(k-1)T)-v^\Delta(x,kT)}{T}\mright\}. \end{align}\]
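The structure of this second scheme can be sketched in Python along the same lines as the first one; `m1` again stands for the module \(\mathcal{M}_1\), and all names and calling conventions are illustrative placeholders.

```python
def critical_value_iterative(m1, grid, phi, T, tol, k_max=2000):
    """Sketch of the second algorithm: estimate c from consecutive
    snapshots of the numerical solution, one time window T apart.

    m1(phi, t) stands for the module M1 and returns v^Delta(., t)
    at the grid points; names and conventions are illustrative.
    """
    v_prev = {x: phi[x] for x in grid}           # v^Delta(., 0) = phi
    for k in range(1, k_max + 1):
        v = m1(phi, k * T)                       # v^Delta(., kT)
        diffs = [(v_prev[x] - v[x]) / T for x in grid]
        c_up, c_low = max(diffs), min(diffs)     # the module M2
        if (c_up - c_low) / 2 < tol:             # stopping criterion
            break
        v_prev = v
    return (c_up + c_low) / 2                    # approximation of c
```

The key difference from the first sketch is that the difference quotient is always taken over a single window of length \(T\), so the denominator does not grow with \(k\).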
The next result shows the convergence of 2.
Given a Lipschitz continuous initial datum \(\phi\), a time \(T>0\) and an admissible pair \(\Delta\), let \(v^\Delta(\cdot,kT)\mathrel{\vcenter{:}}=\mathcal{M}_1(\phi,kT,\Delta)\) for \(k\in\mathbb{N}\). Setting for \(k\ge1\) \[\widetilde{c}^\Delta_k\mathrel{\vcenter{:}}=\max_{x\in\Gamma_\Delta^0}\frac{v^\Delta(x,(k-1)T)-v^\Delta(x,kT)}{T},\qquad\undertilde{c}^\Delta_k\mathrel{\vcenter{:}}=\min_{x\in\Gamma_\Delta^0}\frac{v^\Delta(x,(k-1)T)-v^\Delta(x,kT)}{T}\] and \(c_k\) as in [iterapprox], we have that \[\label{eq:iterconv461} \mleft|\frac{\widetilde{c}^\Delta_k+\undertilde{c}^\Delta_k}{2}-c\mright| \to 0\tag{21}\] as \(k\to \infty\) and \(\Delta \to 0\) such that \[\label{eq:iterconvexp} k\min\mleft\{\frac{\Delta x}{\sqrt{\Delta t}},\frac{\Delta x^2}{\Delta t^{\frac{3}{2}}}\mright\}\longrightarrow0.\tag{22}\]
Proof. We start by fixing \(k\ge1\). The same arguments used to prove 20 show that \[0\le\max_{x\in\Gamma}((\mathcal{S}((k-1)T)\phi)(x)-(\mathcal{S}(kT)\phi)(x))-\max_{y\in\Gamma_\Delta^0}((\mathcal{S}((k-1)T)\phi)(y)-(\mathcal{S}(kT)\phi)(y))\le\ell\Delta x,\] where \(\ell\) is the Lipschitz constant of \((x,t)\mapsto(\mathcal{S}(t)\phi)(x)\). Setting \(\widetilde{c}_k\) and \(\undertilde{c}_k\) as in [itertheo], [errCS] yields that there is a \(C>0\) independent of \(\Delta\) and \(k\) such that \[\mleft|\frac{\widetilde{c}^\Delta_k+\undertilde{c}^\Delta_k}{2}-\frac{\widetilde{c}_k+\undertilde{c}_k}{2}\mright|\le\frac{\ell\Delta x}{T}+C(2k+1)\mleft(\frac{1}{\sqrt{\Delta t}}\min\mleft\{\Delta x,\frac{\Delta x^2}{\Delta t}\mright\}+\sqrt{\Delta t}\mright),\] therefore \[\mleft|\frac{\widetilde{c}^\Delta_k+\undertilde{c}^\Delta_k}{2}-c\mright|\le|c_k-c|+\frac{\ell\Delta x}{T}+C(2k+1)\mleft(\frac{1}{\sqrt{\Delta t}}\min\mleft\{\Delta x,\frac{\Delta x^2}{\Delta t}\mright\}+\sqrt{\Delta t}\mright).\] This and [iterapprox] imply 21 whenever 18 and 22 hold. ◻
Adopting the same notation used in the proof of [iterconv], we notice that [iterapprox] yields \[|c_k-c|\le\frac{\widetilde{c}_k-\undertilde{c}_k}{2}\le\frac{\widetilde{c}^\Delta_k-\undertilde{c}^\Delta_k}{2}+\frac{\ell\Delta x}{T}+C(2k+1)\mleft(\frac{1}{\sqrt{\Delta t}}\min\mleft\{\Delta x,\frac{\Delta x^2}{\Delta t}\mright\}+\sqrt{\Delta t}\mright).\] This and [iterconv] prove the following error estimate for 2.
Under the same assumptions as in [iterconv], we have that, for any \(k\ge1\), \[\label{eq:itererr461} \mleft|\frac{\widetilde{c}^\Delta_k+\undertilde{c}^\Delta_k}{2}-c\mright|\le\frac{\widetilde{c}^\Delta_k-\undertilde{c}^\Delta_k}{2}+C_0\mleft(\frac{\Delta x}{T}+(k+1)\mleft(\frac{1}{\sqrt{\Delta t}}\min\mleft\{\Delta x,\frac{\Delta x^2}{\Delta t}\mright\}+\sqrt{\Delta t}\mright)\mright),\tag{23}\] where \(C_0>0\) is a constant independent of \(\Delta\) and \(k\).
[itererr] justifies the stopping criterion \(\widetilde{c}^\Delta_k-\undertilde{c}^\Delta_k<2\varepsilon\) at [alg:iteralgo1] in 2, but, unlike [iterconv], it does not prove the convergence of the algorithm, due to the presence of the terms \(\widetilde{c}^\Delta_k\) and \(\undertilde{c}_k^\Delta\).
We point out that the presence of \(k\) in the numerator of both 22 and 23 affects the estimated convergence rate of our method. It also means that the approximation provided by 2 becomes less reliable with each step, in contrast with 1. For a fixed admissible pair \(\Delta\), one could exploit 23 to select a maximal number of iterations beyond which the approximation should not be trusted, that is, a maximal number of iterations that 2 can perform reliably.
In this section we present numerical simulations performed on two types of networks in \(\mathbb{R}^2\) to show the performance and convergence properties of the proposed algorithms. In the first case we consider a simple network in the form of an equilateral triangle. In the second one we investigate a more complex network structured as a traffic circle. In both settings we discuss one problem with a Hamiltonian dependent on \(s\) and one problem with a Hamiltonian independent of \(s\).
We now focus on the selection of parameters required by 1. Based on the theoretical results established in the previous sections, we examine the expected performance of the algorithm as a function of these chosen parameters. First, we select a sequence of pairs \(\Delta\) such that \[\label{eq:notenoughgoodpair} \min\mleft\{\frac{\Delta x}{\sqrt{\Delta t}},\frac{\Delta x^2}{\Delta t^{\frac{3}{2}}}\mright\}\longrightarrow0 \qquad \text{as }\Delta \to 0.\tag{24}\] If \(\Delta t\) is further chosen to satisfy 18 , then [basicerr] ensures that for \(k\) large enough the error \(\mleft|\frac{\overline{c}^\Delta_k + \underline{c}^\Delta_k}{2} - c\mright|\) is bounded above by a term proportional to \(\min\mleft\{\frac{\Delta x}{\sqrt{\Delta t}}, \frac{\Delta x^2}{\Delta t^{3/2}}\mright\}\) and independent of \(k\). This term arises solely from the error estimate for the semi-Lagrangian scheme employed to approximate the solution to 4 (as shown in [errCS]); it remains constant for fixed \(\Delta\) and vanishes as \(\Delta \to 0\). Equivalently, by selecting a small enough stopping condition \(\varepsilon\) in 1, [basicstop] implies that the error \(\mleft|\frac{\overline{c}^\Delta_k + \underline{c}^\Delta_k}{2} - c\mright|\) is bounded above by a term proportional to \(\min\mleft\{\frac{\Delta x}{\sqrt{\Delta t}}, \frac{\Delta x^2}{\Delta t^{3/2}}\mright\}\) when the algorithm reaches this tolerance. As before, this term remains constant for fixed \(\Delta\) and tends to zero as \(\Delta \to 0\). Consequently, this analysis enables us to conclude that, for a fixed \(\Delta\), the tolerance \(\varepsilon\) can be chosen as a function of \(\Delta\) without any loss of accuracy.
In the next simulations we implement the algorithm with time step \(\Delta t= (\min_{\gamma\in\mathcal{E}^+} \Delta x_\gamma)/\beta_0\), for which both 24 and 18 hold. We also evaluate the algorithm for \(\Delta t= \Delta x/2\) and \(\Delta t= \Delta x^{5/6}\), which satisfy 24 but do not satisfy 18 . These last two choices do not meet the assumptions in [errCS] and, consequently, in [basicerr], but allow for larger time steps. However, the last choice aligns with the convergence properties of the semi-Lagrangian scheme for the time-dependent Hamilton–Jacobi equation 4 established in CarliniSiconolfi23?.
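The quantity driving these choices can be inspected directly. The following small sketch (the helper name is ours, and for the first coupling we assume a uniform grid so that \(\min_\gamma\Delta x_\gamma=\Delta x\), with \(\beta_0=12\) as in the first test below) evaluates \(\min\{\Delta x/\sqrt{\Delta t},\,\Delta x^2/\Delta t^{3/2}\}\) for the three couplings of \(\Delta t\) to \(\Delta x\) used here:

```python
import math

def rate_term(dx, dt):
    """min{dx/sqrt(dt), dx^2/dt^(3/2)}: the spatial part of the error
    estimate, which must vanish along the admissible pairs (cond. 24)."""
    return min(dx / math.sqrt(dt), dx ** 2 / dt ** 1.5)

# The three time-step couplings tested in this section; names are ours.
couplings = {
    "dx/12": lambda dx: dx / 12,          # dt = (min arc step)/beta_0
    "dx/2": lambda dx: dx / 2,
    "dx^(5/6)": lambda dx: dx ** (5 / 6),
}
```

Halving \(\Delta x\) decreases the term for each of the three couplings, which is consistent with condition 24 holding for all of them.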
We additionally perform these tests for 2 to compare its performance with 1. We point out that, while [basicerr] guarantees convergence of 1 as \(\Delta \to 0\) suitably and as \(k\to \infty\) independently of \(\Delta\), the convergence of 2 is proven if 22 holds. Moreover, while [basicstop] ensures that, by choosing a sufficiently small stopping condition, the difference between the approximated and exact critical value is bounded above by a term dependent on \(\Delta\) and independent of \(k\) when 1 reaches the prescribed tolerance, [itererr] does not provide a similar result for 2. However, from a numerical point of view, 2 turns out to be highly advantageous. Indeed, the following simulations show that it attains the same accuracy as 1 while reducing the number of iterations by about 81% on average.
We implement 1 and 2 with the improvements detailed in [basicimprov] and [iterimprov], respectively. With a slight abuse of notation, we denote by \(c^\Delta\) the output returned by both 1 and 2. The accuracy is measured by computing the absolute value of the difference between the approximated critical value \(c^{\Delta}\) and a reference critical value \(\hat{c}\) or the exact critical value \(c\), when it is known. We apply the algorithms in two ways: either by fixing the number of iterations to be executed or by setting a tolerance upon reaching which the algorithm terminates. In the following tables, the column labeled \(k\) reports the number of iterations performed, either to complete the predetermined number of iterations or to satisfy the prescribed tolerance. In the following simulations, we run the algorithms with \(\phi \equiv 0\), \(T=1\) and minimal flux limiters.
We consider as \(\Gamma \subset \mathbb{R}^2\) the triangle with vertices \[z_1 \mathrel{\vcenter{:}}=(0, 0), \qquad z_2\mathrel{\vcenter{:}}=\mleft(\frac{1}{2},\frac{\sqrt{3}}{2}\mright), \qquad z_3\mathrel{\vcenter{:}}=(1,0),\] and arcs \(\gamma_1,\gamma_2,\gamma_3:[0,1]\longrightarrow \mathbb{R}^2\) \[\gamma_1(s) \mathrel{\vcenter{:}}= sz_2, \qquad \gamma_2(s) \mathrel{\vcenter{:}}=(1-s)z_2+sz_3,\qquad \gamma_3 (s)\mathrel{\vcenter{:}}=(1-s)z_3,\] plus the reversed arcs (3).
Let us take the Hamiltonians \(H_{\gamma_i}: [0,1]\times \mathbb{R}\longrightarrow \mathbb{R}\) for \(i=1,2,3\) \[H_{\gamma_1}(s,\mu)\mathrel{\vcenter{:}}=(\mu+2s)^2,\qquad H_{\gamma_2}(s,\mu)\mathrel{\vcenter{:}}=\mu^2+2Bs,\qquad H_{\gamma_3}(s,\mu)\mathrel{\vcenter{:}}=(\mu+A-1+2s)(\mu+2A)+C,\] where \(A\), \(B\), \(C\) are nonnegative constants such that \(C\ge2B\). It is easy to prove that if \[A=\sqrt C-1+\frac{2}{6B}\mleft(C^{\frac{3}{2}}-(C-2B)^{\frac{3}{2}}\mright),\] then \(c=C\) is the exact critical value of the problem. We study the example with \(A=2/3\), \(B=1/2\) and \(C=1\). For this problem we can choose \(\beta_0=12\).
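The relation between \(A\), \(B\) and \(C\) can be checked numerically; a short sketch of the formula above follows (the function name is ours).

```python
import math

def triangle_A(B, C):
    """A for which c = C is the exact critical value in this test
    (Hamiltonians dependent on s); requires B > 0 and C >= 2*B."""
    return math.sqrt(C) - 1 + (2 / (6 * B)) * (C ** 1.5 - (C - 2 * B) ** 1.5)
```

For \(B=1/2\) and \(C=1\) it returns \(2/3\), consistent with the values chosen for this example.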
We begin evaluating the performance of 1 in relation to the theoretical results explored at the beginning of 5. This analysis further supports the choice of the tolerance selected in the following tests. Specifically, we fix \(2000\) iterations to be executed, set the time step \(\Delta t = (\min_{\gamma\in\mathcal{E}^+} \Delta x_\gamma)/12\) and implement the algorithm for several refinements of the space grid. The results are illustrated in 4 and summarized in 1. In 4 (left), the behavior of \((\overline{c}^\Delta_k - \underline{c}^\Delta_k)/2\) is displayed for several values of the spatial step \(\Delta x\). As expected, this quantity decreases as \(k\) increases. Consequently, setting a small tolerance in the stopping condition requires the algorithm to perform a large number of iterations. In 4 (right), the error \(\mleft| \frac{\overline{c}^\Delta_k + \underline{c}^\Delta_k}{2} - c \mright|\) is shown for several values of \(\Delta x\). Consistently with the discussion at the beginning of 5, this error stabilizes and reaches a constant value after a sufficient number of iterations. This result suggests that, for a fixed \(\Delta x\), running all \(2000\) iterations is unnecessary and the algorithm can be stopped once the plateau is reached. Examining the behavior as \(\Delta x\) varies, as once again predicted by the previous theoretical analysis, it can be seen that the value of the aforementioned plateau decreases as \(\Delta x\) decreases. Additionally, the number of iterations required to reach the plateau increases (or equivalently, the tolerance \(\varepsilon\) required to stop the algorithm must be smaller) as \(\Delta x\) decreases. These observations justify the following choice of tolerance proportional to \(\Delta x\).
Figure 4: 1. Test on the triangle with Hamiltonian dependent on \(s\), \(A=2/3\), \(B=1/2\), \(C=1\), \(\Delta t=(\min_{\gamma\in\mathcal{E}^+}\Delta x_\gamma)/12\) and \(2000\) fixed iterations.
\({\Delta x}\) | \(k\) | \(c^{\Delta}\) | \(|c^{\Delta}-c |\) |
---|---|---|---|
\(2.00\cdot 10^{-1}\) | \(2000\) | \(1.15\) | \(1.49\cdot 10^{-1}\) |
\(1.00\cdot 10^{-1}\) | \(2000\) | \(1.06\) | \(6.26\cdot 10^{-2}\) |
\(5.00\cdot 10^{-2}\) | \(2000\) | \(1.02\) | \(2.41\cdot 10^{-2}\) |
\(2.50\cdot 10^{-2}\) | \(2000\) | \(1.01\) | \(7.29\cdot 10^{-3}\) |
We now implement 1 with \(\varepsilon = \Delta x/10\), \(\Delta t = (\min_{\gamma \in \mathcal{E}^+} \Delta x_\gamma) / 12\) and several refinements of the space grid. The results are presented in columns 8–10 of 2. Column 10 shows a decreasing trend in the error as \(\Delta x\) decreases. As previously justified theoretically and observed in 4 (right), this trend is independent of the choice of tolerance scaled by \(\Delta x\). Indeed, comparing the errors in column 4 of 1 with those in column 10 of 2, the approximated critical value is computed with the same order of error. Additionally, we implement 1 using the same tolerance \(\varepsilon = \Delta x/10\) but varying the time step: \(\Delta t = \Delta x^{5/6}\) (columns 2–4 of 2) and \(\Delta t = \Delta x / 2\) (columns 5–7 of 2). 2 reveals that the worst approximation is obtained with \(\Delta t = \Delta x^{5/6}\), whereas choosing \(\Delta t = \Delta x / 2\) produces errors comparable to those achieved with \(\Delta t = (\min_{\gamma \in \mathcal{E}^+} \Delta x_\gamma) / 12\) while preserving almost the same number of iterations. This choice offers the benefit of a larger time step, thereby reducing the computational cost.
\({\Delta x}\) | \(k\) | \(c^{\Delta}\) | \(|c^{\Delta}-c |\) | \(k\) | \(c^{\Delta}\) | \(|c^{\Delta}-c |\) | \(k\) | \(c^{\Delta}\) | \(|c^{\Delta}-c |\) |
---|---|---|---|---|---|---|---|---|---|
\(2.00\cdot 10^{-1}\) | \(40\) | \(1.91\) | \(9.11\cdot 10^{-1}\) | \(19\) | \(1.20\) | \(1.95\cdot 10^{-1}\) | \(18\) | \(1.16\) | \(1.63\cdot 10^{-1}\) |
\(1.00\cdot 10^{-1}\) | \(46\) | \(1.30\) | \(2.99\cdot 10^{-1}\) | \(36\) | \(1.08\) | \(7.86\cdot 10^{-2}\) | \(35\) | \(1.07\) | \(7.07\cdot 10^{-2}\) |
\(5.00\cdot 10^{-2}\) | \(75\) | \(1.10\) | \(1.00\cdot 10^{-1}\) | \(70\) | \(1.03\) | \(3.02\cdot 10^{-2}\) | \(69\) | \(1.03\) | \(2.84\cdot 10^{-2}\) |
\(2.50\cdot 10^{-2}\) | \(141\) | \(1.04\) | \(4.36\cdot 10^{-2}\) | \(138\) | \(1.01\) | \(9.87\cdot 10^{-3}\) | \(138\) | \(1.01\) | \(9.48\cdot 10^{-3}\) |
\(1.25\cdot 10^{-2}\) | \(277\) | \(1.02\) | \(1.71\cdot 10^{-2}\) | \(275\) | \(1.00\) | \(1.53\cdot 10^{-3}\) | \(275\) | \(1.00\) | \(1.47\cdot 10^{-3}\) |
Finally, we evaluate 2 with \(\varepsilon = \Delta x/10\) and report the results in 3. Columns 2–4 correspond to \(\Delta t = \Delta x^{5/6}\), columns 5–7 to \(\Delta t = \Delta x / 2\), and columns 8–10 to \(\Delta t = (\min_{\gamma \in \mathcal{E}^+} \Delta x_\gamma) / 12\). Comparing this table with 2, we find that 2 exhibits a similar approximation to 1 and reduces the number of iterations on average by \(67\%\) for \(\Delta t = \Delta x^{5/6}\), \(80 \%\) for \(\Delta t = \Delta x / 2\) and \(83\%\) for \(\Delta t = (\min_{\gamma \in \mathcal{E}^+} \Delta x_\gamma) / 12\).
\({\Delta x}\) | \(k\) | \(c^{\Delta}\) | \(|c^{\Delta}-c |\) | \(k\) | \(c^{\Delta}\) | \(|c^{\Delta}-c |\) | \(k\) | \(c^{\Delta}\) | \(|c^{\Delta}-c |\) |
---|---|---|---|---|---|---|---|---|---|
\(2.00\cdot 10^{-1}\) | \(4\) | \(1.91\) | \(9.13\cdot 10^{-1}\) | \(7\) | \(1.18\) | \(1.79\cdot 10^{-1}\) | \(5\) | \(1.15\) | \(1.50\cdot 10^{-1}\) |
\(1.00\cdot 10^{-1}\) | \(12\) | \(1.29\) | \(2.92\cdot 10^{-1}\) | \(10\) | \(1.07\) | \(7.09\cdot 10^{-2}\) | \(8\) | \(1.06\) | \(6.32\cdot 10^{-2}\) |
\(5.00\cdot 10^{-2}\) | \(48\) | \(1.10\) | \(9.59\cdot 10^{-2}\) | \(15\) | \(1.03\) | \(2.60\cdot 10^{-2}\) | \(13\) | \(1.02\) | \(2.42\cdot 10^{-2}\) |
\(2.50\cdot 10^{-2}\) | \(56\) | \(1.04\) | \(4.12\cdot 10^{-2}\) | \(16\) | \(1.01\) | \(7.52\cdot 10^{-3}\) | \(15\) | \(1.01\) | \(7.40\cdot 10^{-3}\) |
\(1.25\cdot 10^{-2}\) | \(65\) | \(1.02\) | \(1.59\cdot 10^{-2}\) | \(6\) | \(1.00\) | \(5.98\cdot 10^{-4}\) | \(7\) | \(1.00\) | \(5.06\cdot 10^{-4}\) |
We analyze a simpler problem in which the Hamiltonians are independent of \(s\). Let us take the Hamiltonians \(H_{\gamma_i}: \mathbb{R}\longrightarrow \mathbb{R}\) for \(i=1,2,3\) \[H_{\gamma_1}(\mu)\mathrel{\vcenter{:}}=(\mu+1)^2,\qquad H_{\gamma_2}(\mu)\mathrel{\vcenter{:}}=\mu^2+B,\qquad H_{\gamma_3}(\mu)\mathrel{\vcenter{:}}=(\mu+A)(\mu+2A)+C,\] where \(A\), \(B\), \(C\) are nonnegative constants with \(C\ge B\). If \(A=\sqrt C-1+\sqrt{C-B}\ge0\), then \(c=C\) is the exact critical value. We consider the problem with \(A=1\), \(B=0\) and \(C=1\), for which we can set \(\beta_0=9.1\).
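As in the previous example, the relation defining \(A\) can be evaluated numerically; a minimal sketch follows (the function name is ours).

```python
import math

def triangle_A_s_free(B, C):
    """A for which c = C is the exact critical value in this test
    (Hamiltonians independent of s); requires C >= B and A >= 0."""
    return math.sqrt(C) - 1 + math.sqrt(C - B)
```

For \(B=0\) and \(C=1\) it returns \(1\), matching the values used in this test.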
We begin by performing a preliminary analysis of 1 through two tests, both using \(\Delta t=(\min_{\gamma\in\mathcal{E}^+}\Delta x_\gamma)/9.1\). The first test employs \(\varepsilon=\Delta x/10\) (columns 2–4 of 4), while the second test fixes \(2000\) iterations to be executed (5 and columns 5–7 of 4). As observed in the previous problem, the decreasing behavior of \((\overline{c}^\Delta_k - \underline{c}^\Delta_k)/2\), shown in 5 (left), confirms that a smaller tolerance \(\varepsilon\) in 1 leads the algorithm to perform a larger number of iterations before stopping. However, unlike the previous problem where the Hamiltonians depend on \(s\), the current tests do not approximate critical values with errors of the same order of magnitude. Indeed, as illustrated in 5 (right), the error \(\mleft| \frac{\overline{c}^\Delta_k + \underline{c}^\Delta_k}{2} - c \mright|\) decreases as \(k\) increases, without reaching a plateau. In this case, the decreasing trend of the errors as \(\Delta x\) decreases, presented in column 4 of 4, depends on the tolerance being proportional to \(\Delta x\). This is because, as \(\Delta x\) decreases, the tolerance \(\varepsilon\) also decreases, requiring an increase in the number of iterations (see column 2 of 4). What has just been observed does not contradict the theoretical analysis conducted at the beginning of 5. Indeed, as derived from [basicerr] (or equivalently, from [basicstop]), the error \(\mleft| \frac{\overline{c}^\Delta_k + \underline{c}^\Delta_k}{2} - c \mright|\) is bounded above by a term inversely proportional to \(k\) (or equivalently, by \((\overline{c}_k^\Delta-\underline{c}_k^\Delta)/2\)) and an additional term depending on \(\min \mleft\{ \frac{\Delta x}{\sqrt{\Delta t}}, \frac{\Delta x^2}{\Delta t^{3/2}} \mright\}\), independent of \(k\). The latter term originates exclusively from the approximation error estimate for the semi-Lagrangian scheme applied to 4 .
However, as thoroughly analyzed in CarliniCoscettiPozza24?, when the Hamiltonians are independent of \(s\) and condition 18 holds, the approximation error is primarily due to the approximation of the minimum in the scheme. As a result, this error is much smaller than the estimate in [errCS]. Consequently, in the error estimate for \(\mleft| \frac{\overline{c}^\Delta_k + \underline{c}^\Delta_k}{2} - c \mright|\), the dominant term is the one proportional to \(1/k\) (or equivalently, \((\overline{c}_k^\Delta-\underline{c}_k^\Delta)/2\)), which explains the absence of a plateau in 5 (right).
Figure 5: 1. Test on the triangle with Hamiltonian independent of \(s\), \(A=1\), \(B=0\), \(C=1\), \(\Delta t=(\min_{\gamma\in\mathcal{E}^+}\Delta x_\gamma)/9.1\) and \(2000\) fixed iterations.
\({\Delta x}\) | \(k\) | \(c^{\Delta}\) | \(|c^{\Delta}-c |\) | \(k\) | \(c^{\Delta}\) | \(|c^{\Delta}-c |\) |
---|---|---|---|---|---|---|
\(2.00\cdot 10^{-1}\) | \(25\) | \(1.01\) | \(1.35\cdot 10^{-2}\) | \(2000\) | \(1.00\) | \(1.69\cdot 10^{-4}\) |
\(1.00\cdot 10^{-1}\) | \(51\) | \(1.01\) | \(7.60\cdot 10^{-3}\) | \(2000\) | \(1.00\) | \(1.94\cdot 10^{-4}\) |
\(5.00\cdot 10^{-2}\) | \(100\) | \(1.00\) | \(4.27\cdot 10^{-3}\) | \(2000\) | \(1.00\) | \(2.14\cdot 10^{-4}\) |
\(2.50\cdot 10^{-2}\) | \(200\) | \(1.00\) | \(2.28\cdot 10^{-3}\) | \(2000\) | \(1.00\) | \(2.28\cdot 10^{-3}\) |
\(1.25\cdot 10^{-2}\) | \(400\) | \(1.00\) | \(1.18\cdot 10^{-3}\) | \(2000\) | \(1.00\) | \(2.37\cdot 10^{-4}\) |
Next, we implement 1 with the same time step \(\Delta t = (\min_{\gamma \in \mathcal{E}^+} \Delta x_{\gamma}) / 9.1\), but with a smaller tolerance \(\varepsilon = \Delta x / 100\) (columns 8–10 of 5). This yields smaller errors compared to the previous test shown in column 4 of 4. As just discussed, this reduction in error can be attributed to the choice of a smaller tolerance. Additionally, the error exhibits a decreasing trend as \(\Delta x\) decreases, which is still caused by the tolerance being proportional to the spatial step.
Furthermore, we apply 1 keeping \(\varepsilon = \Delta x/100\) but exploring different time step choices: \(\Delta t = \Delta x^{5/6}\) (columns 2–4 of 5) and \(\Delta t = \Delta x / 2\) (columns 5–7 of 5). It can be seen that the largest error occurs when \(\Delta t = \Delta x^{5/6}\). On the other hand, setting \(\Delta t = \Delta x / 2\) results in errors of a comparable magnitude while preserving an almost identical number of iterations. This latter choice is particularly advantageous, as the larger time step leads to a reduction in computational cost.
\({\Delta x}\) | \(k\) | \(c^{\Delta}\) | \(|c^{\Delta}-c |\) | \(k\) | \(c^{\Delta}\) | \(|c^{\Delta}-c |\) | \(k\) | \(c^{\Delta}\) | \(|c^{\Delta}-c |\) |
---|---|---|---|---|---|---|---|---|---|
\(2.00\cdot 10^{-1}\) | \(263\) | \(0.95\) | \(4.93\cdot 10^{-2}\) | \(250\) | \(1.00\) | \(1.64\cdot 10^{-3}\) | \(251\) | \(1.00\) | \(1.35\cdot 10^{-3}\) |
\(1.00\cdot 10^{-1}\) | \(504\) | \(0.99\) | \(9.70\cdot 10^{-3}\) | \(501\) | \(1.00\) | \(8.78\cdot 10^{-4}\) | \(501\) | \(1.00\) | \(7.74\cdot 10^{-4}\) |
\(5.00\cdot 10^{-2}\) | \(1004\) | \(1.00\) | \(4.51\cdot 10^{-3}\) | \(1000\) | \(1.00\) | \(4.62\cdot 10^{-4}\) | \(1001\) | \(1.00\) | \(4.27\cdot 10^{-4}\) |
\(2.50\cdot 10^{-2}\) | \(2004\) | \(1.00\) | \(1.77\cdot 10^{-3}\) | \(2001\) | \(1.00\) | \(2.39\cdot 10^{-4}\) | \(2001\) | \(1.00\) | \(2.27\cdot 10^{-4}\) |
\(1.25\cdot 10^{-2}\) | \(4003\) | \(1.00\) | \(7.50\cdot 10^{-4}\) | \(4000\) | \(1.00\) | \(1.22\cdot 10^{-4}\) | \(4001\) | \(1.00\) | \(1.18\cdot 10^{-4}\) |
Finally, we implement 2 with \(\varepsilon = \Delta x/100\) and report the results in 6, whose columns 2–4 refer to \(\Delta t = (\Delta x)^{5/6}\), columns 5–7 to \(\Delta t = \Delta x / 2\), and columns 8–10 to \(\Delta t = \min_{\gamma \in \mathcal{E}^+} \Delta x_\gamma / 9.1\). A comparison with 5 reveals that 2 achieves a better approximation with substantially fewer iterations. More specifically, the number of iterations is reduced on average by \(79 \%\) for \(\Delta t = (\Delta x)^{5/6}\), \(93 \%\) for \(\Delta t = \Delta x / 2\) and \(97\%\) for \(\Delta t = \min_{\gamma \in \mathcal{E}^+} \Delta x_\gamma / 9.1\).
\({\Delta x}\) | \(k\) | \(c^{\Delta}\) | \(|c^{\Delta}-c |\) | \(k\) | \(c^{\Delta}\) | \(|c^{\Delta}-c |\) | \(k\) | \(c^{\Delta}\) | \(|c^{\Delta}-c |\) |
---|---|---|---|---|---|---|---|---|---|
\(2.00\cdot 10^{-1}\) | \(49\) | \(0.95\) | \(5.11\cdot 10^{-2}\) | \(18\) | \(1.00\) | \(1.41\cdot 10^{-3}\) | \(9\) | \(1.00\) | \(5.11\cdot 10^{-5}\) |
\(1.00\cdot 10^{-1}\) | \(73\) | \(0.99\) | \(1.07\cdot 10^{-2}\) | \(36\) | \(1.00\) | \(4.77\cdot 10^{-5}\) | \(17\) | \(1.00\) | \(3.48\cdot 10^{-5}\) |
\(5.00\cdot 10^{-2}\) | \(168\) | \(0.99\) | \(5.00\cdot 10^{-3}\) | \(72\) | \(1.00\) | \(1.35\cdot 10^{-5}\) | \(34\) | \(1.00\) | \(1.04\cdot 10^{-5}\) |
\(2.50\cdot 10^{-2}\) | \(442\) | \(1.00\) | \(2.02\cdot 10^{-3}\) | \(143\) | \(1.00\) | \(3.66\cdot 10^{-6}\) | \(68\) | \(1.00\) | \(3.11\cdot 10^{-6}\) |
\(1.25\cdot 10^{-2}\) | \(1358\) | \(1.00\) | \(8.74\cdot 10^{-4}\) | \(286\) | \(1.00\) | \(9.36\cdot 10^{-7}\) | \(136\) | \(1.00\) | \(8.34\cdot 10^{-7}\) |
We consider as \(\Gamma \subset \mathbb{R}^2\) a network formed by 8 vertices and 12 arcs. The vertices are defined as \[\begin{align} z_1\mathrel{\vcenter{:}}=\;&(-2, 0), & z_2\mathrel{\vcenter{:}}=\;&(-1, 0), & z_3\mathrel{\vcenter{:}}=\;&(0,2), & z_4\mathrel{\vcenter{:}}=\;&(0,1),\\ z_5\mathrel{\vcenter{:}}=\;&(2, 0), & z_6\mathrel{\vcenter{:}}=\;&(1, 0), & z_7\mathrel{\vcenter{:}}=\;&(0,-2), & z_8\mathrel{\vcenter{:}}=\;&(0,-1). \end{align}\] The arcs \(\gamma_i:[0,1]\longrightarrow \mathbb{R}^2\) for \(i=1,\dots, 12\) are defined as \[\begin{align} \gamma_1(s)\mathrel{\vcenter{:}}=\;&(1-s)z_1+sz_2, & \gamma_2(s)\mathrel{\vcenter{:}}=\;&(1-s)z_1+sz_3,& \gamma_3(s)\mathrel{\vcenter{:}}=\;&(1-s)z_1+sz_7,\\ \gamma_4(s)\mathrel{\vcenter{:}}=\;&(1-s)z_2+sz_4, & \gamma_5(s)\mathrel{\vcenter{:}}=\;&(1-s)z_2+sz_8,& \gamma_6(s)\mathrel{\vcenter{:}}=\;&(1-s)z_3+sz_4,\\ \gamma_7(s)\mathrel{\vcenter{:}}=\;&(1-s)z_3+sz_5, & \gamma_8(s)\mathrel{\vcenter{:}}=\;&(1-s)z_4+sz_6,& \gamma_9(s)\mathrel{\vcenter{:}}=\;&(1-s)z_5+sz_6,\\ \gamma_{10}(s)\mathrel{\vcenter{:}}=\;&(1-s)z_5+sz_7, & \gamma_{11}(s)\mathrel{\vcenter{:}}=\;&(1-s)z_6+sz_8,& \gamma_{12}(s)\mathrel{\vcenter{:}}=\;&(1-s)z_7+sz_8, \end{align}\] plus the reversed arcs (6).
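For reference, the geometry just described is straightforward to encode; a minimal sketch follows, building the vertices and the straight arcs from their endpoint indices (plain tuples, names ours).

```python
# Vertices z_1, ..., z_8 of the traffic-circle network (from the text).
Z = {1: (-2, 0), 2: (-1, 0), 3: (0, 2), 4: (0, 1),
     5: (2, 0), 6: (1, 0), 7: (0, -2), 8: (0, -1)}

# Endpoint indices (i, j) of the arcs gamma_1, ..., gamma_12.
ARCS = [(1, 2), (1, 3), (1, 7), (2, 4), (2, 8), (3, 4),
        (3, 5), (4, 6), (5, 6), (5, 7), (6, 8), (7, 8)]

def arc(i, j):
    """Parametrization s -> (1 - s) z_i + s z_j of the straight arc."""
    (x0, y0), (x1, y1) = Z[i], Z[j]
    return lambda s: ((1 - s) * x0 + s * x1, (1 - s) * y0 + s * y1)
```

The reversed arcs are obtained by swapping the endpoint indices, i.e., `arc(j, i)`.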
Let us take the Hamiltonians \(H_{\gamma_i}:[0,1]\times \mathbb{R}\longrightarrow \mathbb{R}\) for \(i=1,\dots,12\) \[H_{\gamma_i}(s,\mu)\mathrel{\vcenter{:}}=\mleft\{ \begin{align} &\frac{(\mu-1)^2}{2}- 5, && \text{for } i=2,3,7,10 ,\\ &\frac{(\mu+4s)^2}{4}, && \text{for } i=4,5,8,11 , \\ &\frac{\mu^2}{2}- 5, && \text{for } i=1,6,9,12, \end{align} \mright.\] where \(\beta_0 = 9.5\) can be chosen.
Since the exact critical value for this problem is not explicitly known, we take \(\hat{c} = 2.59 \cdot 10^{-1}\) as the reference critical value, computed by running 2 with parameters \(\Delta x = 1.25 \cdot 10^{-2}\), \(\Delta t = (\min_{\gamma\in\mathcal{E}^+} \Delta x_\gamma)/9.5\) and \(\varepsilon = \Delta x/10\), and rounding to three significant figures.
We carry out the study of the algorithms, starting with 1. The algorithm is tested with \(\Delta t = (\min_{\gamma\in\mathcal{E}^+} \Delta x_\gamma)/9.5\) and with \(2000\) fixed iterations to be executed. The corresponding results are presented in 7 and 7. We compare these results with those in columns 8–10 of 8, obtained by implementing the algorithm with the same time step \(\Delta t = (\min_{\gamma\in\mathcal{E}^+} \Delta x_\gamma)/9.5\) but setting the tolerance \(\varepsilon = \Delta x/10\). The observed errors are approximately the same. As illustrated in 7, the sequences \((\overline{c}^\Delta_k - \underline{c}^\Delta_k)/2\) decrease in \(k\) and the errors \(\mleft| \frac{\overline{c}^\Delta_k + \underline{c}^\Delta_k}{2} - \hat{c} \mright|\) reach a plateau, whose value decreases in \(\Delta x\). This confirms that choosing \(\varepsilon=\Delta x/10\) is not restrictive, as already analyzed in 5.1.1.
Figure 7: 1. Test on the traffic circle with Hamiltonian dependent on \(s\), \(\Delta t=(\min_{\gamma\in\mathcal{E}^+}\Delta_\gamma x)/9.5\) and \(2000\) fixed iterations. The third graph refers to the test with \(\Delta x=2.50\cdot 10^{-2}\).
\({\Delta x}\) | \(k\) | \(c^{\Delta}\) | \(|c^{\Delta}-\hat{c}|\)
---|---|---|---
\(2.00\cdot 10^{-1}\) | \(2000\) | \(0.41\) | \(1.53\cdot 10^{-1}\)
\(1.00\cdot 10^{-1}\) | \(2000\) | \(0.33\) | \(6.67\cdot 10^{-2}\)
\(5.00\cdot 10^{-2}\) | \(2000\) | \(0.29\) | \(2.72\cdot 10^{-2}\)
\(2.50\cdot 10^{-2}\) | \(2000\) | \(0.27\) | \(8.42\cdot 10^{-3}\)
\({\Delta x}\) | \(k\) | \(c^{\Delta}\) | \(|c^{\Delta}-\hat{c} |\) | \(k\) | \(c^{\Delta}\) | \(|c^{\Delta}-\hat{c} |\) | \(k\) | \(c^{\Delta}\) | \(|c^{\Delta}-\hat{c} |\) |
---|---|---|---|---|---|---|---|---|---|
\(2.00\cdot 10^{-1}\) | \(334\) | \(0.50\) | \(2.36\cdot 10^{-1}\) | \(294\) | \(0.41\) | \(1.48\cdot 10^{-1}\) | \(294\) | \(0.41\) | \(1.48\cdot 10^{-1}\) |
\(1.00\cdot 10^{-1}\) | \(613\) | \(0.33\) | \(7.43\cdot 10^{-2}\) | \(584\) | \(0.32\) | \(6.47\cdot 10^{-2}\) | \(584\) | \(0.32\) | \(6.46\cdot 10^{-2}\) |
\(5.00\cdot 10^{-2}\) | \(1192\) | \(0.29\) | \(2.67\cdot 10^{-2}\) | \(1168\) | \(0.29\) | \(2.67\cdot 10^{-2}\) | \(1168\) | \(0.29\) | \(2.66\cdot 10^{-2}\) |
\(2.50\cdot 10^{-2}\) | \(2372\) | \(0.27\) | \(8.91\cdot 10^{-3}\) | \(2341\) | \(0.27\) | \(8.54\cdot 10^{-3}\) | \(2341\) | \(0.27\) | \(8.53\cdot 10^{-3}\) |
\(1.25\cdot 10^{-2}\) | \(4711\) | \(0.26\) | \(1.08\cdot 10^{-3}\) | \(4679\) | \(0.26\) | \(3.85\cdot 10^{-4}\) | \(4679\) | \(0.26\) | \(3.87\cdot 10^{-4}\) |
We point out that, while 7 (right) shows errors decreasing in \(k\) toward a plateau, 7 (center) reports a different behavior. For instance, in the test with \(\Delta x= 2.50\cdot 10^{-2}\), the error attains its minimum at iteration \(165\), then increases and stabilizes. In this test, when \(k=165\), the distance between \(\overline{c}^\Delta_k\) and \(\underline{c}^\Delta_k\) is not smaller than \(2\varepsilon\), as can be seen in 7 (left and right); nevertheless, their average is very close to the reference critical value.
We continue our study setting \(\varepsilon=\Delta x/10\), \(\Delta t = \Delta x^{5/6}\), \(\Delta t = \Delta x / 2\), \(\Delta t = (\min_{\gamma\in\mathcal{E}^+} \Delta_\gamma x)/9.5\) and comparing the results given by 1 and 2, shown in 8 and 9, respectively. What stands out is that 2 requires fewer iterations than 1. In fact, 2 yields an average reduction in the number of iterations as follows: \(67\%\) for \(\Delta t = \Delta x^{5/6}\) (column 2 of 8 and 9), \(84 \%\) for \(\Delta t = \Delta x / 2\) (column 5 of 8 and 9) and \(87 \%\) for \(\Delta t = (\min_{\gamma\in\mathcal{E}^+} \Delta_\gamma x)/9.5\) (column 8 of 8 and 9).
\({\Delta x}\) | \(k\) | \(c^{\Delta}\) | \(|c^{\Delta}-\hat{c} |\) | \(k\) | \(c^{\Delta}\) | \(|c^{\Delta}-\hat{c} |\) | \(k\) | \(c^{\Delta}\) | \(|c^{\Delta}-\hat{c} |\) |
---|---|---|---|---|---|---|---|---|---|
\(2.00\cdot 10^{-1}\) | \(99\) | \(0.50\) | \(2.43\cdot 10^{-1}\) | \(47\) | \(0.41\) | \(1.54\cdot 10^{-1}\) | \(38\) | \(0.41\) | \(1.54\cdot 10^{-1}\) |
\(1.00\cdot 10^{-1}\) | \(190\) | \(0.34\) | \(7.76\cdot 10^{-2}\) | \(93\) | \(0.33\) | \(6.78\cdot 10^{-2}\) | \(79\) | \(0.33\) | \(6.76\cdot 10^{-2}\) |
\(5.00\cdot 10^{-2}\) | \(435\) | \(0.29\) | \(2.85\cdot 10^{-2}\) | \(184\) | \(0.29\) | \(2.80\cdot 10^{-2}\) | \(159\) | \(0.29\) | \(2.80\cdot 10^{-2}\) |
\(2.50\cdot 10^{-2}\) | \(814\) | \(0.27\) | \(1.02\cdot 10^{-2}\) | \(368\) | \(0.27\) | \(9.21\cdot 10^{-3}\) | \(319\) | \(0.27\) | \(9.20\cdot 10^{-3}\) |
\(1.25\cdot 10^{-2}\) | \(1576\) | \(0.26\) | \(1.36\cdot 10^{-3}\) | \(737\) | \(0.26\) | \(5.25\cdot 10^{-5}\) | \(643\) | \(0.26\) | \(6.17\cdot 10^{-5}\) |
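The average reductions quoted above can be checked directly from the iteration counts. A quick sketch, with the counts transcribed from the two tables (one row per \(\Delta x\), one column per choice of \(\Delta t\)); the names are ours:

```python
# Iteration counts k for Algorithm 1 (first table) and Algorithm 2 (second
# table), columns ordered as Dt = Dx^(5/6), Dx/2, (min Dx)/9.5.
k_alg1 = [(334, 294, 294), (613, 584, 584), (1192, 1168, 1168),
          (2372, 2341, 2341), (4711, 4679, 4679)]
k_alg2 = [(99, 47, 38), (190, 93, 79), (435, 184, 159),
          (814, 368, 319), (1576, 737, 643)]

def avg_reduction(col):
    """Average relative reduction 1 - k2/k1 over the grid sizes, in percent."""
    rows = list(zip(k_alg1, k_alg2))
    return 100 * sum(1 - b[col] / a[col] for a, b in rows) / len(rows)

for col, dt in enumerate(("Dx^(5/6)", "Dx/2", "(min Dx)/9.5")):
    print(f"Delta t = {dt}: {avg_reduction(col):.0f}% fewer iterations")
```

Rounding to the nearest percent reproduces the reductions of \(67\%\), \(84\%\) and \(87\%\) reported in the text.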
We now simplify the previous test and change the Hamiltonian on the inner traffic circle so that it becomes independent of \(s\). Let us take the Hamiltonians \(H_{\gamma_i}: \mathbb{R}\longrightarrow \mathbb{R}\) for \(i=1,\dots,12\)
\[H_{\gamma_i}(\mu)\mathrel{\vcenter{:}}=\mleft\{ \begin{align} &\frac{(\mu-1)^2}{2}- 5, && \text{for } i=2,3,7,10 ,\\ &\frac{(\mu+2)^2}{2}- 2, && \text{for } i=4,5,8,11 , \\ &\frac{\mu^2}{2}- 5, && \text{for } i=1,6,9,12. \end{align} \mright.\]
For this problem we can choose \(\beta_0=7.5\) and, as in the previous case, the exact critical value is unknown. We set the reference critical value \(\hat{c} = -1.50\) as the value computed by 2 with \(\Delta x = 1.25 \cdot 10^{-2}\), \(\Delta t=(\min_{\gamma\in\mathcal{E}^+}\Delta_\gamma x)/7.5\), \(\varepsilon = \Delta x/10\) and rounded to the first three significant figures.
We test 1 with \(\Delta t=(\min_{\gamma\in\mathcal{E}^+} \Delta_\gamma x)/7.5\) and a fixed number of \(2000\) iterations (10 and 8). This problem is independent of \(s\) and, as detailed in 5.1.2, the approximation error introduced by the semi-Lagrangian scheme is smaller than the estimate in [errCS]. As a result, the errors \(\mleft| \frac{\overline{c}^\Delta_k + \underline{c}^\Delta_k}{2} - \hat{c} \mright|\) decrease without reaching a plateau, as shown in 8. Since \((\overline{c}^\Delta_k - \underline{c}^\Delta_k)/2\) decreases in \(k\) (see 8), when we run 1 with \(\varepsilon=\Delta x/10\) (columns 8–10 of 11), the decreasing trend of the errors as \(\Delta x\) decreases is a consequence of the chosen tolerance.
Figure 8: 1. Test on the traffic circle with Hamiltonian independent of \(s\), \(\Delta t=(\min_{\gamma\in\mathcal{E}^+}\Delta_\gamma x)/7.5\) and \(2000\) fixed iterations.
\({\Delta x}\) | \(k\) | \(c^{\Delta}\) | \(|c^{\Delta}-\hat{c}|\)
---|---|---|---
\(2.00\cdot 10^{-1}\) | \(2000\) | \(-1.50\) | \(4.20\cdot 10^{-4}\)
\(1.00\cdot 10^{-1}\) | \(2000\) | \(-1.50\) | \(3.44\cdot 10^{-4}\)
\(5.00\cdot 10^{-2}\) | \(2000\) | \(-1.50\) | \(3.10\cdot 10^{-4}\)
\(2.50\cdot 10^{-2}\) | \(2000\) | \(-1.50\) | \(2.77\cdot 10^{-4}\)
We conduct some tests with \(\varepsilon = \Delta x / 10\) and compare the outcomes of 1 and 2, which are presented in 11 and 12, respectively. The tests are performed with three different values of \(\Delta t\), given by \(\Delta t = \Delta x^{5/6}\), \(\Delta t = \Delta x / 2\) and \(\Delta t = (\min_{\gamma \in \mathcal{E}^+} \Delta_\gamma x)/7.5\). Numerical simulations show that, by implementing 2 instead of 1, we achieve the following average reductions in the number of iterations: \(66\%\) for \(\Delta t = \Delta x^{5/6}\) (column 2 of 11 and 12), \(83 \%\) for \(\Delta t = \Delta x / 2\) (column 5 of 11 and 12) and \(88 \%\) for \(\Delta t = (\min_{\gamma \in \mathcal{E}^+} \Delta_\gamma x)/7.5\) (column 8 of 11 and 12).
\({\Delta x}\) | \(k\) | \(c^{\Delta}\) | \(|c^{\Delta}-\hat{c} |\) | \(k\) | \(c^{\Delta}\) | \(|c^{\Delta}-\hat{c} |\) | \(k\) | \(c^{\Delta}\) | \(|c^{\Delta}-\hat{c} |\) |
---|---|---|---|---|---|---|---|---|---|
\(2.00\cdot 10^{-1}\) | \(244\) | \(-1.51\) | \(1.23\cdot 10^{-2}\) | \(232\) | \(-1.50\) | \(2.97\cdot 10^{-3}\) | \(232\) | \(-1.50\) | \(3.62\cdot 10^{-3}\) |
\(1.00\cdot 10^{-1}\) | \(479\) | \(-1.51\) | \(5.73\cdot 10^{-3}\) | \(465\) | \(-1.50\) | \(1.26\cdot 10^{-3}\) | \(465\) | \(-1.50\) | \(1.48\cdot 10^{-3}\) |
\(5.00\cdot 10^{-2}\) | \(952\) | \(-1.50\) | \(2.84\cdot 10^{-3}\) | \(938\) | \(-1.50\) | \(5.94\cdot 10^{-4}\) | \(938\) | \(-1.50\) | \(6.61\cdot 10^{-4}\) |
\(2.50\cdot 10^{-2}\) | \(1894\) | \(-1.50\) | \(9.82\cdot 10^{-4}\) | \(1878\) | \(-1.50\) | \(2.75\cdot 10^{-4}\) | \(1878\) | \(-1.50\) | \(2.95\cdot 10^{-4}\) |
\(1.25\cdot 10^{-2}\) | \(3774\) | \(-1.50\) | \(4.29\cdot 10^{-4}\) | \(3756\) | \(-1.50\) | \(1.30\cdot 10^{-4}\) | \(3756\) | \(-1.50\) | \(1.36\cdot 10^{-4}\) |
\({\Delta x}\) | \(k\) | \(c^{\Delta}\) | \(|c^{\Delta}-\hat{c} |\) | \(k\) | \(c^{\Delta}\) | \(|c^{\Delta}-\hat{c} |\) | \(k\) | \(c^{\Delta}\) | \(|c^{\Delta}-\hat{c} |\) |
---|---|---|---|---|---|---|---|---|---|
\(2.00\cdot 10^{-1}\) | \(76\) | \(-1.51\) | \(9.31\cdot 10^{-3}\) | \(40\) | \(-1.50\) | \(7.33\cdot 10^{-4}\) | \(29\) | \(-1.50\) | \(6.66\cdot 10^{-4}\) |
\(1.00\cdot 10^{-1}\) | \(126\) | \(-1.50\) | \(4.36\cdot 10^{-3}\) | \(77\) | \(-1.50\) | \(2.19\cdot 10^{-4}\) | \(55\) | \(-1.50\) | \(3.20\cdot 10^{-5}\) |
\(5.00\cdot 10^{-2}\) | \(271\) | \(-1.50\) | \(2.06\cdot 10^{-3}\) | \(153\) | \(-1.50\) | \(4.24\cdot 10^{-5}\) | \(109\) | \(-1.50\) | \(8.55\cdot 10^{-6}\) |
\(2.50\cdot 10^{-2}\) | \(717\) | \(-1.50\) | \(6.91\cdot 10^{-4}\) | \(306\) | \(-1.50\) | \(1.21\cdot 10^{-5}\) | \(218\) | \(-1.50\) | \(1.19\cdot 10^{-5}\) |
\(1.25\cdot 10^{-2}\) | \(1656\) | \(-1.50\) | \(2.18\cdot 10^{-4}\) | \(616\) | \(-1.50\) | \(2.94\cdot 10^{-6}\) | \(437\) | \(-1.50\) | \(5.15\cdot 10^{-7}\) |
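The same check as in the previous test applies here. A sketch with the iteration counts transcribed from the two tables above (names are ours):

```python
# Iteration counts k for Algorithm 1 and Algorithm 2 in the s-independent test,
# columns ordered as Dt = Dx^(5/6), Dx/2, (min Dx)/7.5.
k_alg1 = [(244, 232, 232), (479, 465, 465), (952, 938, 938),
          (1894, 1878, 1878), (3774, 3756, 3756)]
k_alg2 = [(76, 40, 29), (126, 77, 55), (271, 153, 109),
          (717, 306, 218), (1656, 616, 437)]

# Average relative reduction 1 - k2/k1 over the grid sizes, in percent.
reductions = [
    100 * sum(1 - b[c] / a[c] for a, b in zip(k_alg1, k_alg2)) / len(k_alg1)
    for c in range(3)
]
print([round(r) for r in reductions])  # reductions of 66%, 83% and 88%
```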
[ltb] has been proved in Pozza25_2? under the additional assumption
This assumption relies on knowing the value of \(c\) in advance, which is clearly too restrictive for our purposes. Under this condition the extended Aubry set \(\widetilde{\mathcal{A}}_\Gamma\) consists of the support of a collection of arcs, while in our case it could even be made up of a countable number of disjoint pieces of arcs or of a single point. However, the proof of Pozza25_2? can be readily extended to our setting with mostly straightforward modifications. Thus, we will just sketch the proof of [ltb], highlighting the main steps.
First, the same arguments used in the proof of Pozza25_2? yield the following result.
Let \(w\) be a subsolution to (\(\mathcal{H}\)J\(c\)); then \((x,t)\mapsto w(x)-ct\) is a subsolution to 4 . If \(w\) is also a solution to (\(\mathcal{H}\)J\(c\)) in \(\Gamma\setminus\mleft(\widetilde{\mathcal{A}}_\Gamma\setminus\mathcal{A}_\Gamma\mright)\), then \(w-ct=\mathcal{S}(t)w\) on \(\Gamma\times\mathbb{R}^+\).
Setting \(u\) as in 9 and \[w(x)\mathrel{\vcenter{:}}=\min\limits_{y\in\Gamma}(\phi(y)+S_c(y,x)),\qquad \text{for }x\in\Gamma,\] we get, according to the comparison principle Siconolfi22?, [weakltb] [Scalprop] [maxsubsol], that \[\label{eq:ltb1} \begin{align} w(x)&\le(\mathcal{S}(t)w)(x)+ct\le u(x),&& \text{for any }(x,t)\in\Gamma\times\mathbb{R}^+,\\ w(x)&=(\mathcal{S}(t)w)(x)+ct=u(x),&& \text{for any }(x,t)\in\widetilde{\mathcal{A}}_\Gamma\times\mathbb{R}^+. \end{align}\tag{25}\]
Let us define the semilimits \[\begin{align} \underline{\phi}(x)\mathrel{\vcenter{:}}=\;&\sup\mleft\{\limsup_{n\to\infty}(\mathcal{S}(t_n)\phi)(x_n)+ct_n\mright\},\tag{26}\\ \overline{\phi}(x)\mathrel{\vcenter{:}}=\;&\inf\mleft\{\liminf_{n\to\infty}(\mathcal{S}(t_n)\phi)(x_n)+ct_n\mright\},\tag{27} \end{align}\] where the supremum and the infimum are taken over the sequences \(\{x_n\}_{n\in\mathbb{N}}\) converging to \(x\) and the positive diverging sequences \(\{t_n\}_{n\in\mathbb{N}}\). Arguing as in Pozza25_2? we get
Given \(\phi\in C(\Gamma)\), let \(\underline{\phi}\) and \(\overline{\phi}\) be as defined in 26 and 27 , respectively. Then \(\underline{\phi}\) and \(\overline{\phi}\) are a subsolution to (\(\mathcal{H}\)J\(c\)) on \(\Gamma\) and a supersolution to (\(\mathcal{H}\)J\(c\)) on \(\Gamma\setminus\widetilde{\mathcal{A}}_\Gamma\), respectively.
Defining the semilimits \(\underline{w}\) and \(\overline{w}\) as in 26 and 27 with \(\phi\) replaced by \(w\), respectively, we get from 25 \[\begin{align} \overline{w}(x)&\le\underline{w}(x)\le u(x),\qquad \text{for any }x\in\Gamma,\\ \overline{w}(x)&=\underline{w}(x)=u(x),\qquad \text{for any }x\in\widetilde{\mathcal{A}}_\Gamma, \end{align}\] then [ovphisupsol] and the comparison principle for the eikonal equation Pozza25? yield that \(\overline{w}=\underline{w}=u\) on \(\Gamma\), which in turn shows \[\label{eq:ltb2} \lim_{t\to\infty}(\mathcal{S}(t)w)(x)+ct=u(x),\qquad \text{for every }x\in\Gamma.\tag{28}\]
Next we have by [Scalprop] that \[(\mathcal{S}(t)w)(x)\le(\mathcal{S}(t)\phi)(x),\qquad \text{for all }(x,t)\in\Gamma\times\mathbb{R}^+,\] therefore, taking into account 28 , \[u(x)\le\overline{\phi}(x)\le\underline{\phi}(x),\qquad \text{for every }x\in\Gamma.\] Arguing as in Pozza25? we get that \(\underline{\phi}=u\) on \(\widetilde{\mathcal{A}}_\Gamma\), thus another application of Pozza25? proves [ltb].