Recurrence of the VRJP and Exponential Decay in the \(H^{2|2}\)-Model on the Hierarchical Lattice for \(d\le 2\)


Abstract

We show that the Vertex-Reinforced Jump Process (VRJP) on a \(d\)-dimensional hierarchical lattice is recurrent for \(d < 2\) and transient for \(d > 2\); at the critical dimension \(d=2\), we show recurrence in the case of strong reinforcement. The proof of recurrence relies on an exponential decay estimate for fractional moments of the Green’s function, which, unlike the classical approach used for \(\mathbb{Z}^d\), requires additional entropy estimates via stability of the model’s distribution under the coarse-graining operation, exploiting a key symmetry inherent to the model’s linear reinforcement structure.

1 Introduction↩︎

In this paper, we show that the vertex-reinforced jump process (VRJP) on the hierarchical lattice and its associated supersymmetric hyperbolic sigma model (the \(H^{2|2}\)-model) exhibit a phase transition. In particular, when \(d\leq 2\), the VRJP is recurrent and the \(H^{2|2}\)-model has local order. By local order we mean exponential decay of correlations, which we establish by proving a bound on the fractional moments of its effective \(u\)-field. Together with the results in Disertori2022?, disertori23:_trans?, disertori24:_fluct? (long-range order of the \(H^{2|2}\)-model), this establishes a phase transition for the VRJP (from recurrence to transience) and for the \(H^{2|2}\)-model (from local order to long-range order), with critical dimension \(2\) on the hierarchical lattice, as illustrated in Figure 1.

Figure 1: Plot of the empirical \(1/4\)-moment of \(e^{u^{(1)}_{i}}\), \(0\leq i\leq n\), with sample size 50 on \(\Lambda_{10}\), i.e. a graph with 1024 vertices, with parameter \(W=1\) and different values of \(\rho\), i.e. spectral dimensions \(d=2 \frac{\log 2}{\log \rho}\). This plot is illustrative and not intended to be quantitative.

An essential aspect of VRJP is how reinforcement affects its long-term behavior. Strong reinforcement tends to make the random walker return to previously visited sites, while weak reinforcement leads to behavior similar to a simple random walk.

This process is interesting in probability theory, one reason being that it is one of the few self-interacting processes that can be analyzed in detail. The VRJP has been studied from various angles Collevecchio2009?, Basdevant2010?, Davis2004?, Pemantle1999?, Davis2002?, Angel2014?. Its relation to the supersymmetric (SUSY) hyperbolic sigma model Disertori2010a?, Disertori2010?, discovered by Sabot and Tarrès Sabot2015?, is of particular significance, because this connection makes powerful analytical tools from field theory available for the study of its behavior.

Using the sigma-model mapping, Sabot and Tarrès showed that there is a phase transition on \(\mathbb{Z}^{d}\) for \(d\geq 3\). In low dimensions (1D and 2D), the VRJP does not undergo a phase transition and is recurrent regardless of the reinforcement strength. The sigma-model connection made it possible to prove recurrence in dimension two sabot19:_polyn_vertex_reinf_jump_proces?, kozma19:_power_vrjp?. Positive recurrence on \(\mathbb{Z}^{2}\) remains an important conjecture.

The study of the VRJP and the \(H^{2|2}\)-model on the hierarchical lattice has recently gained attention. The hierarchical lattice, introduced by Dyson Dyson1969?, Dyson_1971?, is particularly well-suited for renormalization group (RG) analysis due to its recursive, self-similar structure. This nested, scale-invariant design enables systematic coarse-graining, allowing one to track how interactions evolve across scales. As a result, it serves as a powerful tool for studying critical phenomena and long-range effects in statistical mechanics.

In Disertori2022?, disertori23:_trans?, disertori24:_fluct?, the long-range order of the \(H^{2|2}\)-model was established on the hierarchical lattice for spectral dimensions \(d > 2\), with interesting consequences. In this paper, we show that the \(H^{2|2}\)-model exhibits local order, namely exponential decay of correlations, when the spectral dimension \(d < 2\), for any strength of reinforcement, or when \(d=2\) with strong reinforcement. Using our estimates, we then establish that the VRJP undergoes a phase transition on the hierarchical lattice in terms of its long-term behavior (transience or recurrence). Notably, when \(d\ne2\), the reinforcement strength (i.e. the temperature of the sigma model) has little effect, while at \(d=2\) it becomes influential. Nevertheless, we conjecture that the model is recurrent in \(d=2\) even with weak reinforcement.

Our approach is based on fractional moment estimates, which originated in the study of Anderson localization Aizenman1993?. The fractional moment method, as applied on the hierarchical lattice in von_Soosten_2017?, does not yield the optimal bound. The main difficulty lies in the lattice’s full connectivity (which, aside from the edge weights, makes it similar to a complete graph), resulting in substantial entropy that significantly weakens standard techniques. However, in our specific case, we address these obstacles by employing a coarse-graining lemma for the VRJP that takes advantage of its linear reinforcement, and we obtain the optimal exponential decay rate.

The paper is organized as follows. In Section 2 we outline several definitions and present the main results. In Section 3, we prove the recurrence and the transience of the VRJP, using different estimates of the correlations of the \(H^{2|2}\)-model. Section 4 contains the proof of exponential decay of the correlations; the proof uses the fractional moment method with coarse-graining adapted to our setting.

2 Definition of models and results↩︎

The hierarchical lattice \(\mathbb{X}\) is an infinite graph whose vertices are indexed by \(\mathbb{N}^{\star }\), endowed with the hierarchical distance \(d_{\mathbb{X}}\): the distance between two vertices \(i\) and \(j\) is the order of the smallest dyadic hierarchy containing both.

More precisely, an order-\(k\) hierarchy is a set of \(2^k\) vertices of the form \(\{2^k(m-1)+1, \dots, m 2^k\}\) for some \(m\). For \(i,j\in \mathbb{N}^{\star }\), the hierarchical distance between \(i\) and \(j\) is defined by \[d_{\mathbb{X}}(i,j) = \min \left\{ k\geq 0,\;\exists m,\;\text{ s.t. } i,j\in \{2^k(m-1)+1, \dots, m 2^k\} \right\}.\] We can think of a binary tree structure on top of it, i.e. each hierarchy contains two sub-hierarchies. The smallest hierarchy is a single site \(i\), of order 0, see Figure 2.

For example, a first-order hierarchy consists of two consecutive sites of the form \(\{2k-1,2k\}\); recursively, an order-\((n+1)\) hierarchy (thus of cardinality \(2^{n+1}\)) contains two order-\(n\) hierarchies. Therefore, \(d_{\mathbb{X}}(1,1)=0\), \(d_{\mathbb{X}}(1,7)=3\) and \(d_{\mathbb{X}}(i,2^{n}+1)=n +1\) for all \(1\leq i\leq 2^{n}\).
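For concreteness, the hierarchical distance can be computed directly from the dyadic blocks: \(i\) and \(j\) lie in a common order-\(k\) hierarchy exactly when \(\lceil i/2^k\rceil = \lceil j/2^k\rceil\). A minimal numerical sketch (illustrative only, not part of the proofs):

```python
import math

def d_X(i, j):
    """Hierarchical distance on X: the order of the smallest dyadic
    hierarchy {2^k(m-1)+1, ..., m 2^k} containing both i and j."""
    k = 0
    while math.ceil(i / 2**k) != math.ceil(j / 2**k):
        k += 1
    return k
```

With this function the examples above can be checked directly: `d_X(1, 1) == 0`, `d_X(1, 7) == 3`, and `d_X(i, 2**n + 1) == n + 1` for `1 <= i <= 2**n`.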

\((\mathbb{X},d_{\mathbb{X}})\) is a metric space; as a graph, it is similar to a complete graph: there is an edge \(\{i,j\}\) between any two distinct vertices \(i,j\) in \(\mathbb{N}^{\star }\). We denote by \(\mathbb{X}\) this graph and metric space; we write \(i\in \mathbb{X}\) for a vertex of \(\mathbb{X}\) (which is an element of \(\mathbb{N}^{\star }\)), and, for \(i,j\in \mathbb{N}^{\star }\) with \(i\ne j\), we write \(\{i,j\}\in \mathbb{X}\) for an edge of \(\mathbb{X}\).

Fix real parameters \(\rho>1\) and \(\overline{W} >0\). To each edge \(\{i,j\}\in \mathbb{X}\) we associate a positive real number \(W_{i,j}\), called the edge weight, defined by \[\begin{align} \label{eq-def-of-Wij} W_{i,j} = \overline{W} (2\rho)^{-d_{\mathbb{X}}(i,j)} \text{ for }i\ne j \text{ and }W_{i,i}=0 \;\forall i\in \mathbb{X}. \end{align}\tag{1}\] Note that there are exactly \(2^{k-1}\) vertices at hierarchical distance \(k\geq 1\) from any given vertex, so \(\sum_{j}W_{i,j} = \overline{W} \sum_{k=0}^{\infty} 2^{k} (2\rho)^{-(k+1)} =\frac{\overline{W}}{2\rho} \sum_{k\geq 0} \rho^{-k} < \infty\). This also defines the lattice Laplacian \(P_W\) on \(\ell^2(\mathbb{X})\) by \[P_W f(i) = \sum_{j\in \mathbb{X}} W_{i,j} f(j), \quad \forall f\in \ell^2(\mathbb{X}).\] \(P_{W}\) is a self-adjoint operator; its eigenfunctions and eigenvalues are explicit.
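The summability computation can be checked numerically by summing over the \(2^{k-1}\) vertices in the distance-\(k\) shell; the geometric series gives the closed form \(\sum_j W_{i,j} = \frac{\overline{W}}{2(\rho-1)}\). An illustrative sketch (parameter values are arbitrary):

```python
def row_sum_W(Wbar, rho, K):
    """Partial sum of W_{i,j} over all j != i up to hierarchical distance K:
    there are 2^(k-1) vertices at distance exactly k >= 1."""
    return sum(2**(k - 1) * Wbar * (2 * rho)**(-k) for k in range(1, K + 1))

# The full series sums to Wbar / (2 * (rho - 1)).
```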

The dimension of \(\mathbb{X}\) is defined as \(d= 2 \frac{\log 2}{\log \rho}\) (see e.g. Kritchevski_2008?). \(-P_W\) is the generator of the simple random walk on the \(d\)-dimensional hierarchical lattice; it is well known that this simple random walk has a phase transition with critical dimension \(d=2\) (i.e. \(\rho=2\)), see Eq. (3.1) and Theorem 3.1 in Kritchevski_2008?.

When \(d=2\), the edge weight \(W_{i,j}\) is invariant under the coarse-graining operation: if we regroup every two vertices to form new vertices, e.g. \(\{1,2\}=: 1', \;\{3,4\}=:2'\), the coarse-graining rule gives \(W_{1,2}=W_{1',2'}\), see more details in Eq. 11 . In disertori23:_trans?, the authors fine-tune the definition of the edge weight by writing (note that \(\rho=2\)) \[\begin{align} \label{eq-def-Wij-2d} W_{i,j} = \overline{W} 4^{-d_{\mathbb{X}}(i,j)} Q(d_{\mathbb{X}}(i,j)) \end{align}\tag{2}\] where \(Q >0\) grows or decays at most polynomially. The spectral dimension is still 2, but we consider it slightly larger than 2; we can think of it as \(d_{s}= 2^{+}\). Without loss of generality, we can take \(Q(x)= x^{k}\) for some \(k\in \mathbb{Z}\).

On the infinite weighted graph \(\mathbb{X}\) (with parameters \(\rho,\overline{W}\)), let us define the vertex-reinforced jump process \((Y_{t})_{t\geq 0}\). Fix \(i_0\) and let \(Y_{0}=i_0\) a.s.; at time \(t\), \(Y_{t}\) jumps from \(i\) to another vertex \(j\) at rate \[\begin{align} \label{eq-jumprate-vrjp} W_{i,j}\left(1+ \int_{0}^{t} \boldsymbol{1}_{Y_{s}=j}ds\right). \end{align}\tag{3}\]
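Since the local time \(\int_{0}^{t} \boldsymbol{1}_{Y_{s}=j}\,ds\) of a vertex \(j\ne i\) does not change while the walker sits at \(i\), the jump rates in 3 are constant between jumps, so the process can be simulated exactly by competing exponential clocks. The following sketch (on an arbitrary small weight matrix, purely illustrative) is one possible implementation:

```python
import random

def simulate_vrjp(W, i0, t_max, rng):
    """Simulate a VRJP with jump rate W[i][j] * (1 + L[j]) from i to j,
    where L[j] is the local time at j.  While the walker sits at i only
    L[i] grows, so the rates out of i are constant between jumps and
    competing exponential clocks give an exact simulation."""
    n = len(W)
    L = [0.0] * n                      # local times
    visits = [0] * n
    i, t = i0, 0.0
    visits[i0] = 1
    while True:
        rates = [W[i][j] * (1.0 + L[j]) if j != i else 0.0 for j in range(n)]
        total = sum(rates)
        hold = rng.expovariate(total)  # exponential holding time at i
        if t + hold >= t_max:
            L[i] += t_max - t          # truncate at the time horizon
            return visits, L
        L[i] += hold
        t += hold
        candidates = [j for j in range(n) if rates[j] > 0.0]
        r, acc = rng.random() * total, 0.0
        i = candidates[-1]             # fallback guards float rounding
        for j in candidates:
            acc += rates[j]
            if r < acc:
                i = j
                break
        visits[i] += 1
```

For example, `simulate_vrjp([[0.0, 1.0], [1.0, 0.0]], 0, 100.0, random.Random(0))` returns the visit counts and local times on a two-vertex graph; the local times always sum to the time horizon.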

By Theorem 1.iii of Sabot2019?, up to a random time change, \((Y_{t})_{t\geq 0}\) is a mixture of Markov jump processes.

The random environment of \((Y_{t})\) is defined via a positive real random field \((\beta_{i})_{i\in \mathbb{X}}\), defined as follows: for any finite set \(U\subset \mathbb{X}\), the marginal law of \((\beta_{i})_{i\in U}\) is characterized by its Laplace transform \(\mathbb{E}\left(e^{- \sum_{i\in U} \lambda_{i}\beta_{i}}\right)\), which equals \[\begin{align} \label{eq-laplace-transform-beta} e^{-\sum_{i,j\in U} W_{i,j} (\sqrt{(1+\lambda_{i})(1+\lambda_{j})}-1)-\sum_{i\in U,j\notin U}W_{i,j} (\sqrt{1+\lambda_{i}}-1)} \prod_{i\in U} \frac{1}{\sqrt{1+\lambda_{i}}}. \end{align}\tag{4}\] The above expression is well defined because, for finite \(U\), \(\sum_{j\notin U}W_{i,j} <\infty\) for any \(i\in U\). The existence of such a random field is guaranteed by Proposition 1 of Sabot2019?; see page 2 of disertori23:_trans? for a detailed discussion of the VRJP as a random walk in random conductance, even on an infinite, possibly not locally finite, graph. We can write down explicitly the probability density function of \((\beta_{i})_{i\in U}\); it is a Schrödinger-matrix-variate generalization of the inverse Gaussian distribution, as explained in Letac2017?.

Therefore, we have at our disposal a random Schrödinger operator \[H_{\beta} = 2\beta-P_W\] where \(2\beta\) is understood as the operator of multiplication by \(2\beta_{i}\), so \[H_{\beta}f(i) = 2\beta_{i} f(i) -\sum_{j\ne i} W_{i,j} f(j), \;\forall f\in \ell^2(\mathbb{X}).\] Almost surely, \(H_{\beta}\) is a positive self-adjoint operator. For our purpose we only need the restriction of \(H_{\beta}\) to a finite box, i.e. we only use the random Schrödinger matrix \(H_{\beta,\widetilde{\Lambda}_{n}}\) introduced in the next section.

2.1 Finite box approximations↩︎

The discussion of the above mathematical objects can be reduced to the case of a (large) finite box with an appropriate boundary condition.

For \(n\geq 1\), let \(\Lambda_{n} =\{1,2 ,\cdots,2^{n}\}\); we add an additional vertex \(\delta_{n}\) to \(\Lambda_{n}\) and let \(\widetilde{\Lambda}_{n} = \Lambda_{n}\cup \{\delta_{n}\}\). The vertex \(\delta_n\) represents the aggregated influence of all vertices in \(\mathbb{X} \setminus \Lambda_n\). On \(\widetilde{\Lambda}_{n}\), we have additional edges \(\{i,\delta_{n}\}\) for \(i\in \Lambda_{n}\), whose edge weight is defined as \[\begin{align} \label{eq-edgeweight} W_{\delta_{n},i} = \sum_{j\notin \Lambda_{n}} W_{i,j} = \frac{\overline{W}\rho^{-n}}{2(\rho-1)}. \end{align}\tag{5}\] This weight is obtained by summing \(W_{i,j}\), for fixed \(i \in \Lambda_n\), over all \(j \notin \Lambda_n\); the hierarchical structure of the weights allows an explicit evaluation of this sum.
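The evaluation in 5 can be checked numerically: for \(i\in\Lambda_n\), every vertex at hierarchical distance \(k\le n\) from \(i\) lies inside \(\Lambda_n\), while the \(2^{k-1}\) vertices at distance \(k>n\) lie outside, so the tail geometric series gives \(\overline{W}\rho^{-n}/(2(\rho-1))\). An illustrative sketch (parameter values arbitrary):

```python
def boundary_weight(Wbar, rho, n, K):
    """Partial tail sum of W_{i,j} over j outside Lambda_n, i.e. over the
    2^(k-1) vertices at hierarchical distance k, for n < k <= K."""
    return sum(2**(k - 1) * Wbar * (2 * rho)**(-k) for k in range(n + 1, K + 1))

# Closed form of the full tail: Wbar * rho**(-n) / (2 * (rho - 1)).
```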

Let us start by defining a coupling of the random field \((\beta_{i})_{i\in \mathbb{X}}\) with a sequence of random fields defined on \(\widetilde{\Lambda}_{n}\), \(n\geq 0\). By Lemma 2 of Sabot2019?, we can define a random variable \(\beta_{\delta_{n}}\) such that the probability density of \((\beta_{i})_{i\in \widetilde{\Lambda}_{n}}\) is \[\begin{align} \label{eq-beta-density-Lambdatilda-n} \boldsymbol{1}_{H_{\beta, \widetilde{\Lambda}_{n}}>0} e^{-\frac{1}{2}\left(\left<1,H_{\beta, \widetilde{\Lambda}_{n}} 1\right>\right)} \frac{1}{\sqrt{\det H_{\beta, \widetilde{\Lambda}_{n}}}} \prod_{i\in \widetilde{\Lambda}_{n}}d\beta_{i} \end{align}\tag{6}\] where \(H_{\beta, \widetilde{\Lambda}_{n}}\) is an operator on \(\ell^2(\widetilde{\Lambda}_{n})\) defined by \[H_{\beta, \widetilde{\Lambda}_{n}} f(i) = 2 \beta_{i} f(i) - \sum_{j\ne i}W_{i,j}f(j),\;i,j\in \widetilde{\Lambda}_{n}\] and \(\left<1,H_{\beta, \widetilde{\Lambda}_{n}}1\right> = \sum_{i,j\in \widetilde{\Lambda}_{n}} H_{\beta, \widetilde{\Lambda}_{n}}(i,j)\). Note that we again have a random Schrödinger matrix on \(\ell^2(\widetilde{\Lambda}_{n})\).

The discussion in Section 6.2 of Sabot2019? applies to the hierarchical lattice as well. As a consequence, we have the following: let \(\mathbb{P}^{\mathbb{X}}_{1}\) denote the law of the discrete random walk associated to the VRJP on \(\mathbb{X}\) with \(Y_{0}=1\) a.s., and \(\mathbb{P}^{\widetilde{\Lambda}_{n}}_{1}\) the law of the discrete random walk associated to the VRJP on \(\widetilde{\Lambda}_{n}\) with \(Y_{0}=1\); let \(\tau_{1}^{+}\) be the first return time to 1, and \(\tau_{\delta_{n}}\) the first hitting time of \(\delta_{n}\). Then \[\lim_{n\to \infty} \mathbb{P}^{\widetilde{\Lambda}_{n}}_{1}(\tau_{1}^{+} < \tau_{\delta_{n}}) = \mathbb{P}^{\mathbb{X}}_{1}(\tau_{1}^{+} < \infty).\] This relation allows us to reduce the study of recurrence/transience of the VRJP on \(\mathbb{X}\) to that of the VRJP on \(\widetilde{\Lambda}_{n}\).

Sometimes it is more convenient to express the random field \((\beta_{i})_{i\in \widetilde{\Lambda}_{n}}\) in another set of variables, \((u^{(n)}_{i})_{i\in \Lambda_{n}}\) and \(\gamma_{n}\). The definition is as follows: recall the random Schrödinger matrix \(H_{\beta,\widetilde{\Lambda}_{n}}\) with density defined in 6 ; first let \(G_{\beta,\widetilde{\Lambda}_{n}}= (H_{\beta, \widetilde{\Lambda}_{n}})^{-1}\), then define \(\gamma_{n} = \frac{1}{2 G_{\beta,\widetilde{\Lambda}_{n}}(\delta_{n},\delta_{n})}\) and \[\begin{align} \label{eq-def-u-i-n} e^{u^{(n)}_i} = \frac{G_{\beta,\widetilde{\Lambda}_{n}}(\delta_{n},i)}{G_{\beta,\widetilde{\Lambda}_{n}}(\delta_{n},\delta_{n})} \end{align}\tag{7}\] The family of random variables \((u^{(n)}_{i})_{i\in \Lambda_{n}}\) is called the effective \(u\)-field of the \(H^{2|2}\)-model, see Sabot2019?. One can show (e.g. via Eq. 4.2 in Sabot2019?) that, given \((u^{(n)}_i)_{i\in \Lambda_{n}}\) and \(G_{\beta,\widetilde{\Lambda}_{n}}(\delta_{n},\delta_{n})\), the field \((\beta_{i})_{i\in \widetilde{\Lambda}_{n}}\) is recovered by the following formula \[2\beta_{i} = \sum_{j}W_{i,j} e^{u^{(n)}_{j}-u^{(n)}_{i}} + \boldsymbol{1}_{i=\delta_{n}} \frac{1}{G_{\beta,\widetilde{\Lambda}_{n}}(\delta_{n},\delta_{n})},\;\forall i\in \widetilde{\Lambda}_{n} ,\] with the convention \(u^{(n)}_{\delta_{n}}=0\). An important consequence of supersymmetry is \[\begin{align} \label{eq-ward-identity-eu611} \mathbb{E}(e^{u^{(n)}_{i}})=1, \forall i\in \widetilde{\Lambda}_{n} \end{align}\tag{8}\] see e.g. Eq. (5.26) in Disertori2017a?.
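The reconstruction of \(2\beta\) from the \(u\)-field is a purely algebraic consequence of \(H_{\beta,\widetilde{\Lambda}_{n}}G_{\beta,\widetilde{\Lambda}_{n}}=I\): dividing row \(i\) of this identity, applied to the \(\delta_{n}\)-column of \(G\), by \(G(\delta_{n},i)\) gives the reconstruction formula. This can be checked on any small positive definite matrix of the form \(2\beta - W\); a sketch with arbitrary illustrative values, vertex 2 playing the role of \(\delta_{n}\):

```python
import math

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting
    (adequate for small dense systems)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

# Illustrative 3x3 example: H = 2*beta - W, with vertex `delta` as pinning point.
W = [[0.0, 0.1, 0.2], [0.1, 0.0, 0.3], [0.2, 0.3, 0.0]]
beta = [1.5, 2.0, 1.0]           # diagonally dominant => H positive definite
H = [[2 * beta[i] if i == j else -W[i][j] for j in range(3)] for i in range(3)]
delta = 2
g = solve(H, [1.0 if i == delta else 0.0 for i in range(3)])  # column G(., delta)
u = [math.log(g[i] / g[delta]) for i in range(3)]             # u[delta] = 0
for i in range(3):
    rhs = sum(W[i][j] * math.exp(u[j] - u[i]) for j in range(3))
    if i == delta:
        rhs += 1.0 / g[delta]
    assert abs(2 * beta[i] - rhs) < 1e-9                      # reconstruction holds
```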

We reformulate known results, e.g. Lemma 5.2 of Collevecchio2018? and Theorem 3 of Sabot2017?, as the following convenient lemma, which will be invoked frequently in the sequel.

Lemma 1. Let \(\mathcal{G}=(V,E,W)\) be an edge-weighted finite (non-oriented) graph with edge weights \(W_{i,j}\). Consider the random Schrödinger matrix \(H_{\beta}\) such that \(H_{\beta}(i,j)=-W_{i,j}\) if \(i\ne j\) and \(H_{\beta}(i,i)=2\beta_{i}\), where the law of \(H_{\beta}\) is defined via the following p.d.f. \[\boldsymbol{1}_{H_{\beta}>0} e^{-\frac{1}{2} \left<1,H_{\beta}1\right>} \frac{1}{\sqrt{\det H_{\beta}}} \prod_{i\in V} d\beta_{i};\] we call such a random matrix \(H_{\beta}\) the random Schrödinger matrix of the \(H^{2|2}\)-model on \(\mathcal{G}\).

For any \(i\in V\), the law of \(\frac{1}{2 G_{\beta}(i,i)}\) is as follows: for all \(a>0\), \[\begin{align} \label{eq-density-G-i-i} \mathbb{P}\left( (2G_{\beta}(i,i))^{-1} <a\right) = \int_{0}^{a} \frac{1}{\sqrt{\pi t}} e^{-t}dt \end{align}\qquad{(1)}\] i.e., it is a Gamma-distributed random variable with parameter \(\frac{1}{2}\). For any \(s<\frac{1}{2}\), there exists a constant \(c_s \in (0, \infty)\) such that \(\mathbb{E}( G_{\beta}(i,i)^{s}) = c_{s}^{s}\). Moreover, \(G_{\beta}(i,i)\) is independent of \(\{\beta_{j},j\ne i\}\).

For any \(i \in V\), define the effective \(u\)-field pinned at \(i\) associated to \(H_{\beta}\) on \(\mathcal{G}\) by: \[e^{u^{(i)}_{j}}= \frac{G_{\beta}(i,j)}{G_{\beta}(i,i)},\;G_{\beta}=H_{\beta}^{-1}.\] Then \(G_{\beta}(i,i)\) is independent of \(\{u_{j}^{(i)},\;j\ne i\}\).

In particular, we have, for \(s<\frac{1}{2}\), \[\begin{align} \label{eq-euij-euji-are-equal} \mathbb{E}(e^{s u_{j}^{(i)}})= \mathbb{E}(e^{s u_{i}^{(j)}}) . \end{align}\tag{9}\] In fact, \[\begin{align} \mathbb{E}(G_{\beta}(i,i)^{s}) \mathbb{E}(e^{s u_{j}^{(i)}}) = \mathbb{E}(G_{\beta}(i,j)^{s}) = \mathbb{E}(G_{\beta}(j,i)^{s}) = \mathbb{E}(G_{\beta}(j,j)^s) \mathbb{E}(e^{s u_{i}^{(j)}}). \end{align}\]

Note that a.s. \(H_{\beta,\widetilde{\Lambda}_{n}}\) is an M-matrix (see Section 6 of Sabot2017?), so a.s. \(G_{\beta,\widetilde{\Lambda}_{n}}(i,j) >0\) for all \(i,j\). When there is no ambiguity, we simply write \(u_{i}\) instead of \(u^{(n)}_{i}\). Note that this definition works for any pinning point \(j\) instead of \(\delta_{n}\); the resulting field \((u^{(j)}_{i})_{i}\) is called the effective field of the \(H^{2|2}\)-model on \(\mathcal{G}\) pinned at \(j\).

As \(H_{\beta,\widetilde{\Lambda}_{n}}\) is a.s. positive definite, the map \((\beta_{i})_{i\in \widetilde{\Lambda}_{n}} \mapsto (u_i,i\in \Lambda_{n})\cup\{\gamma_{n}\}\) is a diffeomorphism (in particular, a bijection); cf. Section 6.3 of Sabot2017? for detailed computations. The field \((u_i)\) can be thought of as the random environment of the VRJP in Sabot2015?, see also e.g. Sabot2017?.

It turns out that, by Sabot2019?, the discrete time random walk associated to the VRJP is a random conductance model, and the random conductance of the edge \(\{i,j\}\) equals precisely \(W_{i,j}e^{u_i+u_j}\). Intuitively, the behavior of the VRJP is dictated by the fluctuations of the \(u\)-field. If the field is relatively constant (long-range order), the process resembles a simple random walk on a transient graph and is thus transient. Conversely, if the field fluctuates strongly (disorder), the reinforcement creates effective potential wells that localize the walker, leading to recurrence.

2.2 Results and strategies of proofs↩︎

Our first result concerns a phase transition in the long-term behavior of the VRJP on the hierarchical lattice:

Theorem 1. Consider the VRJP \((Y_{t})_{t\geq 0}\) on the \(d\)-dimensional hierarchical lattice \(\mathbb{X}\), with \(W_{i,j}\) defined in 1 for \(d\ne 2\) (when \(d=2\), we allow the more general definition in 2 ), and with any initial law for \(Y_{0}\).

  1. If \(d>2\), then a.s. \((Y_{t})\) visits every vertex only finitely many times, i.e. the VRJP is transient;

  2. If \(d<2\), then a.s. \((Y_{t})\) visits every vertex infinitely often, i.e. the VRJP is recurrent;

  3. When \(d=2\): if \(\lim_{x\to \infty}\frac{Q(x)}{x^\alpha } \geq 1\) for some \(\alpha>1\), then the VRJP is transient; if \(\overline{W}\) is small enough and \(Q(x)=1\) for all \(x\), then the VRJP is recurrent.

Next, we state a fractional moment estimate for the normalized Green’s function of the finite-size random Schrödinger matrix, uniform in the size of the system. We start by stating the estimate for \(G(\delta_{n},i)\):

Theorem 2. The normalized Green’s function of the random Schrödinger matrix \(H_{\beta,\widetilde{\Lambda}_{n}}\) (at energy 0), defined in 7 , satisfies the following fractional moment estimate: for all \(s<\frac{1}{2}\) and all \(n\geq 1\):

  1. If \(d< 2\), then \(\exists C(\overline{W},d,s)>0\), s.t. for all \(i\in \Lambda_{n}\): \[\begin{align} \label{eq-FMM-eu-subcritical} \mathbb{E}\left( e^{{s u^{(n)}_{i}}} \right) \leq C(\overline{W},d,s) \rho^{-{s n}}. \end{align}\qquad{(2)}\]

  2. If \(d=2\), then \(\exists C(\overline{W},s)>0\), s.t. for all \(i\in \Lambda_{n}\): \[\begin{align} \label{eq-FMM-eu-critical} \mathbb{E}\left( e^{{s u^{(n)}_{i}}} \right) \leq C(\overline{W},s) c(\overline{W},s)^{-{s n}}. \end{align}\qquad{(3)}\] where \(c(\overline{W},s) = \frac{2}{\left( 1+ \left(1+ \left(\frac{c_s\overline{W}}{4} \right)^{s}\right) \left( \frac{c_s\overline{W}}{4} \right)^{s}\right)^{1 / s}}\), and \(c_s=\int_{0}^{\infty} \int_{0}^{(2 b^{1/s})^{-1}} \frac{1}{\sqrt{\pi t}} e^{-t}dt db\) is related to the \(-s\) moment of a Gamma r.v., see Eq. ?? . In particular, if \(\overline{W}\to 0\), then \(c(\overline{W},s) \to 2 > 1\), so for \(\overline{W}\) small enough we have exponential decay of the fractional moment.
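The double integral defining \(c_s\) is the layer-cake representation of the negative moment \(\mathbb{E}\big((2\gamma)^{-s}\big)\) for \(\gamma\) a Gamma\((\frac12)\) random variable as in Lemma 1, which has the closed form \(2^{-s}\,\Gamma(\frac12 - s)/\Gamma(\frac12)\). A numerical sketch (illustrative only; the substitution \(t=x^{8}\) smooths the integrable singularity at 0 for \(s\le \frac38\)):

```python
import math

def c_s_quadrature(s, U=2.5, N=100_000):
    """c_s = E((2*gamma)^{-s}) for gamma ~ Gamma(1/2, 1), computed by the
    midpoint rule after substituting t = x^8 in
    integral over t of (2t)^{-s} (pi t)^{-1/2} e^{-t} dt."""
    h = U / N
    pref = 8 * 2**(-s) / math.sqrt(math.pi)
    return h * sum(pref * ((k + 0.5) * h)**(3 - 8 * s)
                   * math.exp(-((k + 0.5) * h)**8) for k in range(N))

def c_s_closed_form(s):
    """Closed form 2^{-s} Gamma(1/2 - s) / Gamma(1/2)."""
    return 2**(-s) * math.gamma(0.5 - s) / math.gamma(0.5)
```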

We also have an estimate for \(G(i,j)\):

Theorem 3. The normalized Green’s function of the random Schrödinger matrix \(H_{\beta,\widetilde{\Lambda}_{n}}\) (at energy 0), defined in 7 , satisfies the following fractional moment estimate: for all \(s<\frac{1}{2}\) and all \(n\geq 1\):

  1. If \(d< 2\), then \(\exists C(\overline{W},d,s)>0\), s.t. for all \(i,j\in \Lambda_{n}\): \[\begin{align} \label{eq-FMM-eu-subcritical-ij} \mathbb{E}\left( e^{{s u^{(j)}_{i}}} \right) \leq C(\overline{W},d,s) (2\rho)^{-{s d_{\mathbb{X}}(i,j)}}. \end{align}\qquad{(4)}\]

  2. If \(d=2\), then \(\exists C(\overline{W},s)>0\), s.t. for all \(i,j\in \Lambda_{n}\): \[\begin{align} \label{eq-FMM-eu-critical-ij} \mathbb{E}\left( e^{{s u^{(j)}_{i}}} \right) \leq C(\overline{W},s) (2c(\overline{W},s))^{-{s d_{\mathbb{X}}(i,j)}}. \end{align}\qquad{(5)}\] where \(c(\overline{W},s)\) is the same as in Theorem 2.

On the Euclidean lattice \(\mathbb{Z}^d\), the volume of a ball of radius \(n\) grows at most polynomially. Exponential decay of the fractional moments of the Green’s function then forces the random conductance model to be positive recurrent. A key point in the usual fractional-moment approach (e.g. Aizenman1993?) is the Feenberg (or loop-erased random walk) expansion of the Green’s function. On \(\mathbb{Z}^d\), any loop-erased path between two vertices must be at least as long as their Euclidean distance, so if each step in the expansion contributes a factor less than 1 (once the disorder parameter is taken large enough), then each path decays exponentially in the distance between the two points, which beats the polynomial growth of the path count.

On the hierarchical lattice, by contrast, any two vertices \(i,j\) are connected through a single edge with weight \(W_{i,j}=\Theta((2\rho)^{-d_{\mathbb{X}}(i,j)})\). Hence, in the Feenberg expansion, there is always a path of length 1. The weight of this edge decays exponentially in the hierarchical distance between the two vertices, leading to a lower bound on the fractional moment of the Green’s function: \(\mathbb{E} G(i,j)^{s} \geq \text{(const.)} (2\rho)^{-{s d_{\mathbb{X}}(i,j)}}\). Thus our upper bound matches the lower bound up to the multiplicative constant in the exponential. In the case \(d<2\), the decay rates obtained in Theorems 2 and 3 are hence optimal.

Note that the number of self-avoiding walks grows exponentially on the hierarchical lattice, so we cannot apply the standard Feenberg argument directly. Instead, we use a special property of the \(H^{2|2}\) field: if a subset of vertices appears identical from the outside, we may merge them into a single vertex; the resulting random field is still an \(H^{2|2}\) field, and we have a uniform estimate on the fractional moments of any diagonal Green’s function of the \(H^{2|2}\) random operator. After each jump in the Feenberg expansion, we coarse-grain (merge) as much as possible. This drastically reduces the combinatorial growth and ensures exponential decay of the fractional moments.

3 Recurrence and transience of the VRJP, proof of Theorem 1↩︎

3.1 Recurrence of VRJP when \(d\leq 2\)↩︎

Consider the VRJP \(Y_{t}\) on \(\mathbb{X}\) starting from \(1\in \mathbb{X}\), and let \(T_n\) be the first time \(Y_{t}\) exits \(\Lambda_{n}\). Clearly, the law of \((Y_{t})_{t\in [0,T_{n}^{-}]}\) equals the law of a VRJP \(\widetilde{Y}_{t}^{(n)}\) on \(\widetilde{\Lambda}_{n}\) started from \(1\), up to its first hitting time \(\tau_{\delta_{n}}\) of \(\delta_{n}\); moreover, \(\widetilde{Y}^{(n)}_{t}\) is, after a random time change, a mixture of Markov jump processes (Theorem 1.iii of Sabot2019?), and its associated discrete-time Markov chain is a random conductance model with conductances \[C^{(n)}_{i,j} = W_{i,j} e^{u^{(1)}_{i}+u^{(1)}_{j}}\] where \((u^{(1)}_{i})_{i\in \widetilde{\Lambda}_{n}}\) is the effective field of the \(H^{2|2}\)-model on \(\widetilde{\Lambda}_{n}\), defined by \[\begin{align} \label{eq-e-u-1i-Lambdatildan} e^{u^{(1)}_{i}} = \frac{G_{\beta,\widetilde{\Lambda}_{n}}(1,i)}{G_{\beta,\widetilde{\Lambda}_{n}}(1,1)},\;\;i\in \widetilde{\Lambda}_{n} \end{align}\tag{10}\] where \(G_{\beta,\widetilde{\Lambda}_{n}}\) is the same as in Eq. 7 .

Therefore, let \(\tau_{1}^{+}\) be the first return time to 1 of \(\widetilde{Y}^{(n)}_{t}\), by Eq. (2.4) of Lyons2015? we have \[\mathbb{P}_{1}^{\widetilde{\Lambda}_{(n)}}(\tau_{\delta_{n}} < \tau_{1}^{+}) = \frac{1}{\pi^{(n)}_{1}} C^{(n)}_{\text{eff}}(1,\delta_{n})\] where \(\pi^{(n)}_{1} = \sum_{j\in \widetilde{\Lambda}_{n}} W_{1,j} e^{u^{(1)}_{j}}\), and \(C^{(n)}_{\text{eff}}(1,\delta_{n} )\) is the effective conductance of the electrical network with conductance \(C^{(n)}_{i,j}\) on the finite graph \(\widetilde{\Lambda}_{n}\).
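In the escape-probability formula above, the effective conductance of a small network can be computed by solving the discrete Dirichlet problem: unit voltage at the source, zero at the sink, harmonic in between, and then reading off the current leaving the source. A self-contained sketch (illustrative only, not used in the proofs):

```python
def effective_conductance(C, a, b):
    """Effective conductance between a and b in a finite network with
    symmetric conductance matrix C (C[i][j] >= 0, C[i][i] == 0): solve
    the Dirichlet problem v[a] = 1, v[b] = 0, v harmonic elsewhere, and
    return the total current flowing out of a."""
    n = len(C)
    interior = [i for i in range(n) if i not in (a, b)]
    m = len(interior)
    A = [[0.0] * m for _ in range(m)]
    rhs = [0.0] * m
    for p, i in enumerate(interior):
        A[p][p] = sum(C[i])                  # total conductance at i
        for q, j in enumerate(interior):
            if j != i:
                A[p][q] = -C[i][j]
        rhs[p] = C[i][a]                     # contribution of v[a] = 1
    # Gaussian elimination with partial pivoting (fine for small networks).
    for c in range(m):
        p = max(range(c, m), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        rhs[c], rhs[p] = rhs[p], rhs[c]
        for r in range(c + 1, m):
            f = A[r][c] / A[c][c]
            for k in range(c, m):
                A[r][k] -= f * A[c][k]
            rhs[r] -= f * rhs[c]
    v = {a: 1.0, b: 0.0}
    for r in range(m - 1, -1, -1):
        v[interior[r]] = (rhs[r] - sum(A[r][k] * v[interior[k]]
                                       for k in range(r + 1, m))) / A[r][r]
    return sum(C[a][j] * (1.0 - v[j]) for j in range(n) if j != a)
```

For a series path with conductances \(2\) and \(3\), this returns \(1/(\frac12+\frac13)=6/5\); identifying vertices, as in Rayleigh-type bounds, reduces a network to parallel edges whose effective conductance is the sum of the individual ones.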

By Rayleigh’s monotonicity principle (Sec. 2.4 of Lyons2015?), for \(i,j\in \Lambda_{n}\), set \(W_{i,j}^{+}= \infty\), and put a superscript \(+\) on all the quantities computed from \(W_{i,j}^{+}\) instead of \(W_{i,j}\); we have the following upper bound: \[C_{\text{eff}}^{(n)}(\delta_{n},1) \leq C_{\text{eff}}^{(n),+}(\delta_{n},1).\] As a.s. \(e^{u_{i}^{(1)}+u_{j}^{(1)}} > 0\) for all \(i,j\in \Lambda_{n}\), we have a.s. \(C_{i,j}^{+} = \infty\); therefore, we can identify all the vertices \(i\in \Lambda_{n}\), which results in a parallel configuration (of conductances), thus \[C_{\text{eff}}^{(n),+}(\delta_{n},1) = \sum_{i\in \Lambda_{n}} C_{\delta_{n},i} = \sum_{i\in\Lambda_n} W_{\delta_{n},i} e^{u^{(1)}_{i}+u^{(1)}_{\delta_{n}}}.\] There are two cases: \(d<2\), and \(d=2\) with \(\overline{W}\) small enough. In the case \(d<2\), we have, by Eq. 5 , \[W_{\delta_{n},i}= \overline{W} \frac{\rho^{-n}}{2(\rho-1)}.\] Therefore, \[\begin{align} C_{\text{eff}}^{(n),+}(\delta_{n},1) &= \sum_{i\in\Lambda_n} W_{\delta_{n},i} e^{u^{(1)}_{i}+u^{(1)}_{\delta_{n}}} = W_{\delta_{n},1} e^{u^{(1)}_{\delta_{n}}}\left( \sum_{i\in\Lambda_n} e^{u^{(1)}_{i}} \right). \end{align}\] By Eq. 16 and the bounds in Theorem 3, \[\begin{align} \mathbb{P}\left( \sum_{i\in\Lambda_n} e^{u^{(1)}_{i}} > \chi_1 \right) \leq \frac{1}{\chi_1^{s}} \mathbb{E}\left( \left( \sum_{i\in\Lambda_n} e^{u^{(1)}_{i}}\right)^{s}\right) \leq \frac{1}{\chi_1^{s}} \sum_{i\in\Lambda_n}\mathbb{E} \left( e^{{s u^{(1)}_{i}}}\right)\leq \frac{C(\overline{W},d,s)}{\chi_1^{s}} \sum_{k=0}^{n} 2^{k}(2\rho)^{-{s k}}, \end{align}\] and, by Eq. 8 , \(\mathbb{P}(e^{u^{(1)}_{\delta_{n}}}> \chi_2) < \frac{1}{\chi_2}\).

Now let us derive a lower bound on \(\pi^{(n)}_{1}\). By Proposition 6 of Sabot2019?, \(e^{u^{(1)}_{2}} > \frac{W_{1,2}}{2 \beta_{2}}\), thus \[\pi_{1}^{(n)} > W_{1,2}e^{u^{(1)}_{2}} > \frac{W_{1,2}^2}{2\beta_{2}}.\] By Proposition 1 of Sabot2019?, \(2\beta_{2}\) is inverse Gaussian distributed with parameters \(IG(\frac{1}{W_{2}},1)\), where \(W_{2}=\sum_{i\in \mathbb{X}} W_{2,i}\), which is in fact independent of \(n\); the law of \(\frac{W_{1,2}^2}{2 \beta_{2}}\) has a density and is a.s. positive, so \(\mathbb{P}(\pi_{1}^{(n)} <\chi_3) \leq \mathbb{P}( \frac{W_{1,2}^2}{2 \beta_{2}} <\chi_3) \xrightarrow[]{\chi_3\to 0}0\).

By choosing \(\chi_{1}= \rho^{\frac{n}{6}}\), \(\chi_{2}= \rho^{\frac{n}{6}}\) and \(\chi_{3} = \rho^{-\frac{n}{6}}\), the union bound gives \[\begin{align} \mathbb{P}\left( \mathbb{P}_{1}^{\widetilde{\Lambda}_{n}}(\tau_{\delta_{n}} < \tau_{1}^{+}) < W_{1,\delta_{n}} \chi_{1}\chi_{2} \frac{1}{\chi_{3}}\right) \geq 1- \mathbb{P}\left( \frac{1}{\pi_{1}^{(n)}} > \frac{1}{\chi_{3}}\right) - \mathbb{P}\left( \sum_{i\in\Lambda_n} e^{u^{(1)}_{i}} > \chi_{1}\right) -\mathbb{P}\left(e^{u^{(1)}_{\delta_{n}}} > \chi_{2}\right). \end{align}\] Since \(W_{1,\delta_{n}}\chi_{1}\chi_{2}/\chi_{3} = \frac{\overline{W}}{2(\rho-1)}\rho^{-n}\rho^{\frac{n}{2}} \leq \rho^{-\frac{n}{2}}\) for \(n\) large enough, there exists a constant \(c_0 >0\) such that \[\mathbb{P}\left( \mathbb{P}_{1}^{\widetilde{\Lambda}_{n}}(\tau_{\delta_{n}} < \tau_{1}^{+}) < \rho^{-\frac{n}{2}}\right) \geq 1- e^{-c_0 \rho^{\frac{n}{6}}}- C(\overline{W},d,s)\,\rho^{-\frac{n s}{6}} \sum_{k=0}^{n} 2^{k}(2\rho)^{-{s k}}- \rho^{-\frac{n}{6}},\] and the right-hand side tends to 1 as \(n\to\infty\) (taking \(s\) close enough to \(\frac{1}{2}\) so that \(2(2\rho)^{-s}<1\), which is possible since \(\rho>2\) when \(d<2\)). For the VRJP on \(\mathbb{X}\), the event "escape \(\Lambda_{n}\) before returning to 1" is decreasing in \(n\), so \(\mathbb{P}_{1}( \tau_{1}^{+} <\infty) = \lim_{n\to\infty} \mathbb{P}_{1}(\tau_{1}^{+} < \tau_{\delta_{n}})\); in particular, \(\mathbb{P}_{1}^{\widetilde{\Lambda}_{m}}(\tau_{\delta_{m}} < \tau_{1}^{+})\) is decreasing in \(m\), so on the event \(\{\mathbb{P}_{1}^{\widetilde{\Lambda}_{n}}(\tau_{\delta_{n}} < \tau_{1}^{+}) < \rho^{-\frac{n}{2}}\}\) we have \(1-\mathbb{P}_{1}(\tau_{1}^{+}<\infty) \leq \rho^{-\frac{n}{2}}\). Letting \(n\to \infty\), we conclude that \(\mathbb{P}\left( \mathbb{P}_{1}(\tau_{1}^{+} < \infty)=1\right)=1\).

When \(d=2\) and \(\overline{W}\) is small enough, it suffices to replace \(\rho\) in the above argument by \(c(\overline{W},s)>1\) from Theorem 2.

3.2 Transience of VRJP when \(d>2\)↩︎

By Theorem 3.2 of disertori23:_trans?, we have, for all \(m\geq 1\), \[\sup_{L}\sup_{i\in \Lambda_{L}} \mathbb{E}_{\Lambda_{L}}(\cosh(u_{i})^{m}) \leq c_{H}(m) <\infty.\] Recall that \(C_{i,j}\) is the random conductance of the discrete-time process obtained from the VRJP. To show that the VRJP is transient, it suffices to look at the effective resistance between \(1\) and \(\delta_{L}\) on \(\Lambda_{L}\) with wired boundary: \[R^{L}(C,1,\delta_{L}) = \inf_{(\theta_{e})} \sum_{e} \frac{\theta_{e}^2}{C_{e}} ,\] where the infimum is over all unit current flows \(\theta_{e}\) from 1 to \(\delta_{L}\); see Section 2.4 of Lyons2015? for the definition of a unit current flow, in particular Exercise 2.13 therein.

Let \(\overline{\theta}\) be the unit current flow that realizes the effective resistance for the simple random walk, \(R^{L}(W,1,\delta_{L})\). Since the simple random walk on a hierarchical lattice with \(d>2\) is transient (see e.g. Imbrie1992?), this resistance \(R^{L}(W,1,\delta_{L})\) remains bounded as \(L\to \infty\). On the other hand, we have \[R^{L}(C,1,\delta_{L}) \leq \sum_{e} \frac{\overline{\theta}_{e}^2}{W_{e}} \frac{W_{e}}{C_{e}}.\] As the expected number of visits to \(1\) before hitting \(\delta_{L}\) is at most \(\mathbb{E}(C_{1} R^{L}(C,1,\delta_{L}))\), it suffices to bound this expectation.

Let us first assume that, for every edge \(e\) of \(\widetilde{\Lambda}_{L}\), \(\sum_{f:1\in f} \mathbb{E}_{\Lambda_{L}}\left( \frac{C_{f}}{C_{e}} \right) \leq \frac{c}{W_{e}}\). It follows that, writing \(e=\{i,j\}\) for an edge of \(\widetilde{\Lambda}_{L}\), \[\begin{align} \mathbb{E}(C_{1} R^{L}(C,1,\delta_{L})) &\leq \mathbb{E}\left( \sum_{k\sim 1}C_{1,k} \sum_{e} \frac{\overline{\theta}_{e}^2}{W_{e}} \frac{W_{e}}{C_{e}}\right)\\ & \leq \sum_{1\sim k} \sum_{i\sim j} \frac{\overline{\theta}_{i,j}^2}{W_{i,j}} \mathbb{E}\left(\frac{C_{1,k}}{C_{i,j}}\right) W_{i,j} \\ & \leq c\sum_{i\sim j} \frac{\overline{\theta}_{i,j}^2}{W_{i,j}}, \end{align}\] and transience follows from the transience of the \(W\)-conductance model.

Finally, our assumption holds because, by Hölder's inequality, \[\begin{align} \mathbb{E}\left( \frac{C_{1,k}}{C_{i,j}} \right) & = \mathbb{E}\left( \frac{W_{1,k} e^{u_{1}+u_{k}}}{W_{i,j} e^{u_{i}+u_{j}}} \right)\\ &\leq \frac{W_{1,k}}{W_{i,j}} \mathbb{E}(e^{4u_{1}})^{\frac{1}{4}} \mathbb{E}(e^{4u_{k}})^{\frac{1}{4}}\mathbb{E}(e^{-4u_{i}})^{\frac{1}{4}} \mathbb{E}(e^{-4u_{j}})^{\frac{1}{4}}, \end{align}\] and the uniform moment bound above shows that these four factors are bounded by a constant; summing over the finitely many edges \(f\ni 1\) yields the assumption.

The case \(d=2\) with \(\alpha>3\) follows from the same argument as in Section 2 of Imbrie1992?.

4 Fractional moment estimates↩︎

Let us start with the proof of part 1) of Theorem 2. Our proof uses the classical idea of fractional moment localization from Aizenman1993?. We mainly rely on two tools. The first is that, for the \(H^{2|2}\) model on a finite graph with zero boundary, i.e. with density defined in Eq. 6 , for any \(i\in \widetilde{\Lambda}_{n}\), the law of \(\frac{1}{2 G_{\beta,\widetilde{\Lambda}_{n}} (i,i) }\) is Gamma with parameter \(\frac{1}{2}\); this is summarized in Lemma 1.

The second tool is the invariance in distribution of the \(H^{2|2}\) model under the coarse-graining operation on the hierarchical lattice, i.e. Corollary 2.3 of Disertori2022?. Before restating it as Lemma 2 and Corollary 1, to keep our discussion self-contained, let us introduce some convenient notation:

4.1 Estimate for \(G(i,\delta)\)↩︎

Definition (Coarse-Grained Graph Partitioning): Let \(\mathcal{G} = (V, E, W)\) be a finite weighted graph with vertex set \(V\), edge set \(E\), and weight function \(W: E \to \mathbb{R}^+\). Partition the vertex set \(V\) into disjoint subsets \(V = B_1 \sqcup B_2 \sqcup \cdots \sqcup B_k\), where each \(B_j\) forms a block of the partition. Construct a new graph \(\mathcal{G}' = (V', E', W')\) by mapping each subset \(B_j\) to a coarse-grained vertex \(B_j \in V'\) (this slightly abusive notation will entail no ambiguity). The weight \(W'_{B_i, B_j}\) of an edge \(\{B_{i},B_{j}\}\) in \(\mathcal{G}'\) is defined as: \[\begin{align} \label{eq-block-W-general} W'_{B_i, B_j} = \sum_{\substack{u \in B_i,\\ v \in B_j}} W_{u,v} \end{align}\tag{11}\] The graph \(\mathcal{G}'\) is called the coarse-grained graph of \(\mathcal{G}\) by the partition \(V=B_1\sqcup \cdots \sqcup B_k\).
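As an illustration of this definition, here is a minimal sketch of the weight summation in Eq. 11 (the function name `coarse_grain` and the encoding of edges as `frozenset` pairs are our own conventions):

```python
import itertools
from collections import defaultdict

def coarse_grain(W, blocks):
    """Coarse-grain a weighted graph.

    W maps frozenset({u, v}) -> weight; blocks is a list of disjoint vertex
    sets.  Returns the weights W' of Eq. (11):
    W'_{Bi,Bj} = sum of W_{u,v} over u in Bi, v in Bj.
    """
    block_of = {v: i for i, B in enumerate(blocks) for v in B}
    Wp = defaultdict(float)
    for e, w in W.items():
        u, v = tuple(e)
        bu, bv = block_of[u], block_of[v]
        if bu != bv:                      # edges inside a block disappear
            Wp[frozenset({bu, bv})] += w
    return dict(Wp)

# Complete graph on {0,1,2,3} with unit weights, blocks {0,1} and {2,3}:
# the 4 cross edges merge into a single edge of weight 4.
W = {frozenset(e): 1.0 for e in itertools.combinations(range(4), 2)}
print(coarse_grain(W, [{0, 1}, {2, 3}]))  # → {frozenset({0, 1}): 4.0}
```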

Let’s apply this definition to \(\widetilde{\Lambda}_{n}\) with some particular choices of partitions. Note that its vertex set is \(\{1, 2, \dots, 2^n, \delta_n\}\). Define \(\widetilde{\Lambda}_{n}^{(0)}\) as the graph obtained by identifying groups of vertices: \(B_0' = \{1\}\), \(B_0 = \{2\}\), \(B_1 = \{3, 4\}\), \(B_2 = \{5, 6, 7, 8\}\), and continuing with \(B_j = \{2^{j} + 1, \dots, 2^{j+1}\}\) for \(1 \leq j \leq n-1\). The resulting vertex set of \(\widetilde{\Lambda}_{n}^{(0)}\) is \(\{B_0', B_0, B_1, \dots, B_{n-1}, \delta_n\}\), and it forms a complete graph.

The edge weight \(W_{B_j, B_k}\) is defined as the sum of the edge weights between all pairs of vertices in \(B_j\) and \(B_k\), given by, for \(j,k<n\), \[\begin{align} \label{eq-block-W-BjBk} W_{B_j, B_k} = 2^{j+k} \overline{W} (2\rho)^{-(j \vee k) - 1}, \;W_{\delta_{n},B_{j}}=2^{j} \frac{\overline{W} \rho^{-n}}{2(\rho-1)}. \end{align}\tag{12}\] Note that when \(d < 2\), successive coarse-graining reduces the effective edge weight, whereas when \(d > 2\), the coarse-grained edge weight increases.
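The block weights in Eq. 12 can be checked numerically, assuming the single-site hierarchical weights \(W_{u,v}=\overline{W}(2\rho)^{-h(u,v)}\), where \(h(u,v)\) is the smallest level at which \(u\) and \(v\) share a dyadic block (this is the form that Eq. 12 presupposes; the boundary weights involving \(\delta_n\) are not checked here):

```python
# Sanity check of the first formula in Eq. (12): summing the single-site
# weights over all pairs in B_j x B_k reproduces 2^(j+k) * Wbar * (2 rho)^(-max(j,k)-1).
Wbar, rho, n = 1.0, 3.0, 6

def level(u, v):
    """Smallest h with u, v (1-based labels) in the same dyadic block of size 2^h."""
    h = 0
    while (u - 1) >> h != (v - 1) >> h:
        h += 1
    return h

def W(u, v):
    return Wbar * (2 * rho) ** (-level(u, v))

blocks = {0: {2}}  # B_0 = {2}; B_j = {2^j + 1, ..., 2^(j+1)} for j >= 1
for j in range(1, n):
    blocks[j] = set(range(2 ** j + 1, 2 ** (j + 1) + 1))

for j in range(n):
    for k in range(j + 1, n):
        pair_sum = sum(W(u, v) for u in blocks[j] for v in blocks[k])
        formula = 2 ** (j + k) * Wbar * (2 * rho) ** (-max(j, k) - 1)
        assert abs(pair_sum - formula) < 1e-12, (j, k)
print("Eq. (12) verified for all block pairs")
```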

For \(k = 1, \dots, n-1\), define the graph \(\widetilde{\Lambda}_n^{(k)}\) with vertex groups \[\begin{align} \label{eq-BkBkprimeetc} &B_{k'} = \{1, \dots, 2^k\}, \\ & B_{k} = \{2^k + 1, \dots, 2^{k+1}\},\\ & B_{k+1}=\{2^{k+1}+1 ,\cdots,2^{k+2}\} \\ & \dots \\ & B_{n-1} = \{2^{n-1} + 1, \dots, 2^n\}. \end{align}\tag{13}\] The vertex set of \(\widetilde{\Lambda}_n^{(k)}\) is \(\{B_{k'}, B_{k}, B_{k+1},\dots, B_{n-1}, \delta_n\}\), forming a complete graph. The edge weight \(W_{B_j, B_m}\) is inherited from \(\widetilde{\Lambda}_n^{(0)}\), or equivalently, obtained by summing the edge weights between all pairs of vertices in \(B_j\) and \(B_m\), for blocks \(B_j, B_m\) of \(\widetilde{\Lambda}_{n}^{(k)}\).

This construction progressively coarsens the graph by increasing the minimum size of the vertex groups from \(2^0 = 1\) in \(\widetilde{\Lambda}_n^{(0)}\) to \(2^k\) in \(\widetilde{\Lambda}_n^{(k)}\).

Figure 2: The box \widetilde{\Lambda}_4, note that \delta_4 is also B_4.

Now let’s state an equivalent version of Corollary 2.3 of Disertori2022?:

Lemma 2. Given a graph \(\mathcal{G}=(V,E,W)\), consider the random Schrödinger matrix \(H_{\beta}\) on \(\mathcal{G}\), assume moreover that there are vertices \(B=\{i_1,\cdots,i_{k}\}\subset V\) such that the following holds: \[W_{i,j}=W_{i,j'},\;\forall i\notin B,\forall j,j'\in B.\] Fix \(\delta\in V \setminus B\), consider \(\mathcal{G}'\), the coarse-grained graph of \(\mathcal{G}\) by the partition \(V=B\sqcup \left(\bigsqcup_{j\notin B} \{j\}\right)\), let \(H'_{\beta}\) be the random Schrödinger matrix on \(\mathcal{G}'\). Define \(e^{u^{(\delta)}_{i}} = \frac{G_{\beta}(\delta,i)}{G_{\beta}(\delta,\delta)}\), let \(e^{u^{(\delta)}_{B}} = \frac{1}{\operatorname{card}B} \sum_{j\in B} e^{u^{(\delta)}_{j}}\), also define \(e^{u^{(\delta)'}_{j}} = \frac{G'_{\beta}(\delta,j)}{G'_{\beta}(\delta,\delta)}\), then we have equality in distribution \[\left\{u_{B}^{(\delta)},\;(u_{i}^{(\delta)})_{i\notin B}\right\} \overset{(\ell)}{=} \left\{u^{(\delta)'}_{B}, (u^{(\delta)'}_{i})_{i\notin B}\right\} .\]

To be self-contained, we provide an alternative proof below. Note that Lemma 2 is only a special case of the results in Disertori2022?.

Proof of Lemma 2. The joint law of \((u^{\delta}_{i})_{i\in V}\) can be realized as the random environment of the VRJP \(Y_{t}\) on \(\mathcal{G}\) with starting point \(\delta\) and initial local time \(1\); see e.g. Theorem 2 in Sabot2015? or Theorem 3 in Sabot2017?. We now examine \(Y_{t}\) while treating \(B\) as a single vertex; that is, when \(Y_{t}\) is located in \(B\), we do not distinguish between the specific vertices within \(B\). This can be interpreted as the VRJP \(Y'\) on \(\mathcal{G}'\), also starting from \(\delta\). When the process \(Y_{t}\) is outside \(B\), say at vertex \(i\), the rate of jumping to \(B\) is identical in both \(\mathcal{G}\) and \(\mathcal{G}'\): the jump rate from \(i\) to \(B\) for \(Y_{t}\) is \[\sum_{j\in B} W_{i,j} e^{u_j-u_i} = |B| W_{i,j} e^{-u_i} \left(\frac{\sum_{j\in B} e^{u_j}}{|B|}\right) = W_{i,B} e^{u_{B}-u_i}.\] Even though the process transitions between different sites within \(B\), the total local time accumulated in \(B\) is the same for both processes; the identity in distribution hence follows.  
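The jump-rate aggregation identity used in the proof can be checked numerically (an illustrative sketch; the common weight \(W\) from \(i\) to every site of the block and the field values below are arbitrary):

```python
import math
import random

# Check: sum_{j in B} W * exp(u_j - u_i) == (|B| * W) * exp(u_B - u_i),
# where exp(u_B) is the average of exp(u_j) over the block B and W is the
# common weight from i to every j in B (the hypothesis of Lemma 2).
random.seed(1)
W, u_i = 0.7, 0.3
u = [random.gauss(0, 1) for _ in range(8)]        # field values on the block B
lhs = sum(W * math.exp(uj - u_i) for uj in u)     # total jump rate into B
u_B = math.log(sum(math.exp(uj) for uj in u) / len(u))
rhs = len(u) * W * math.exp(u_B - u_i)            # coarse-grained jump rate
assert abs(lhs - rhs) < 1e-9
print("aggregated jump rate matches:", lhs)
```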


We can apply this lemma to our coarse-grained graphs \(\widetilde{\Lambda}_{n}^{(k)}\), and we obtain the following convenient corollary.

Corollary 1. Consider the effective \(H^{2|2}\) field \((u^{(n)}_{i})\) on \(\widetilde{\Lambda}_{n}\) with pinning point \(\delta_{n}\), recall that \(\widetilde{\Lambda}_{n}^{(k)}\) is the coarse-grained graph obtained by the partition \(\{B_{k'}, B_{k}, B_{k+1},\dots, B_{n-1}, \delta_n\}\), let \((u^{(n)'}_{i})\) be the effective \(H^{2|2}\) field on \(\widetilde{\Lambda}_{n}^{(k)}\), define \(u^{(n)}_{B_{j}}\) by \(e^{u^{(n)}_{B_{j}}} = \frac{1}{ \operatorname{card}B_{j}} \sum_{k\in B_{j}} e^{u^{(n)}_{k}}\), then we have equality in distribution \[\left( u_{B_{j}}^{(n)}\right)_{j} \overset{(\ell)}{=} \left(u_{B_{j}}^{(n)'}\right)_{j}\]

Our proof strategy is to bound the \(s\)-moment of \(e^{u_1^{(n)}}\), i.e. the \(s\)-moment of \(e^{u_{B_0}^{(n)}}\), by the \(s\)-moments of \(e^{u_{B_k}^{(n)}},\;k\geq 1\) (this is Proposition 1), and then iterate this bound to conclude (this is Proposition 2).

Before diving into the proof, let us first introduce some notation. For \(k=0,1 ,\cdots,n-1\), consider the effective \(H^{2|2}\) field on \(\widetilde{\Lambda}_{n}^{(k)}\), which is a random vector \(\left(u_{i}\right)_{i\in \widetilde{\Lambda}_{n}^{(k)}}\); recall that the vertices of \(\widetilde{\Lambda}_{n}^{(k)}\) are \[B_{k'},B_{k},B_{k+1} ,\cdots,B_{n-1},\delta_{n}.\] We omit the superscript in \(u_{i}^{(\delta_{n})}\), even though the definition \(e^{u_{i}}= \frac{G_{\beta,\widetilde{\Lambda}_{n}^{(k)}}(\delta_{n},i)}{G_{\beta,\widetilde{\Lambda}_{n}^{(k)}}(\delta_{n},\delta_{n})}\) depends on \(\delta_{n}\) and \(k\); this will be clear from the context, as we write \(\mathbb{E}_{\widetilde{\Lambda}_{n}^{(k)}}\left(\cdot\right)\) for the expectation w.r.t. the \(H^{2|2}\) model on the graph \(\widetilde{\Lambda}_{n}^{(k)}\). We also choose a coupling such that there is a r.v. \(\gamma\), independent of all these effective fields, s.t. \(G_{\beta,\widetilde{\Lambda}_{n}^{(k)}} (\delta_{n},\delta_{n}) = \frac{1}{2\gamma}\) for all \(k=0,\cdots,n-1\).

By Corollary 1, we have for all bounded continuous function \(f\), \[\mathbb{E}_{\widetilde{\Lambda}_{n}}(f(u_{1}))= \mathbb{E}_{\widetilde{\Lambda}_{n}^{(0)}}(f(u_1)).\]

The following proposition gives a recursive inequality that bounds the fractional moment at a given scale by a sum over fractional moments at a coarser scale. Iterating this inequality in a suitable way will yield the desired fractional moment estimates.

Proposition 1. For any \(n\) and \(k<n\), any \(s<\frac{1}{2}\), we have \[\begin{align} \mathbb{E}_{\widetilde{\Lambda}_{n}^{(k)}}(e^{{s u_{B_{k'}}}}) \leq \left(1+ c_s^{s} W_{B_{k'},B_{k}}^{s}\right) c_s^{s} \left(\sum_{\ell=k+1}^{n} W_{B_{k},B_{\ell}}^{s} \mathbb{E}_{\widetilde{\Lambda}_{n}^{(\ell)}}(e^{{s u_{B_{\ell}}}})\right). \end{align}\]

Proposition 2. For any \(n\) and \(k<n\), any \(s<\frac{1}{2}\), there is a constant \(C(\overline{W},d,s)\) depending only on \(\overline{W},d,s\) such that \[\mathbb{E}_{\widetilde{\Lambda}_{n}^{(k)}}(e^{{s u_{B_{k}}}}) \leq C(\overline{W},d,s) \rho^{- {s (n-k)}} \text{ when } d<2,\] and \[\mathbb{E}_{\widetilde{\Lambda}_{n}^{(k)}}(e^{{s u_{B_{k}}}}) \leq C(\overline{W},d,s) c(\overline{W},s)^{- {s (n-k)}} \text{ when } d=2,\] where \[c(\overline{W},s) = \frac{2}{\left( 1+ \left(1+ \left(\frac{c_s\overline{W}}{4} \right)^{s}\right) \left( \frac{c_s\overline{W}}{4} \right)^{s}\right)^{\frac{1}{s}}}.\]

Proof of Theorem 2. It suffices to take \(k=0\) in Proposition 2.  


Proof of Proposition 1. Fix \(s< \frac{1}{2}\) and denote \(B_n=\delta_{n}\). For any \(k= 0 ,\cdots,n-1\), by Proposition 6 of Sabot2019?, we have \[\begin{align} \label{eq-A-1} \mathbb{E}_{\widetilde{\Lambda}_{n}^{(k)}}(G(\delta_{n},\delta_{n})^{s}) \mathbb{E}_{\widetilde{\Lambda}_{n}^{(k)}}(e^{{ s u_{B_{k'}}}}) & = \mathbb{E}_{\widetilde{\Lambda}_{n}^{(k)}}(G(B_{k'},\delta_{n})^{s})\\ & = \mathbb{E}_{\widetilde{\Lambda}_{n}^{(k)}}\left(\left(\sum_{\sigma:B_{k'}\to \delta_{n}} \left( \frac{W_{\sigma}}{(2 \beta)_{\sigma}}\right)\right)^{s}\right) \end{align}\tag{14}\] where the sum is over the set \(B_{k'} \to \delta_{n}\) of all paths \(\sigma\) going from \(B_{k'}\) to \(\delta_{n}\) in the graph \(\widetilde{\Lambda}_{n}^{(k)}\); here \(W_{\sigma} = \prod_{j=0}^{\ell-1} W_{\sigma_{j},\sigma_{j+1}}\) for a path \(\sigma\) of length \(\ell\), \((2\beta)_{\sigma} = \prod_{j=0}^{\ell} (2 \beta)_{\sigma_{j}}\), and \(\beta\) is the random potential of the \(H^{2|2}\) model on \(\widetilde{\Lambda}_{n}^{(k)}\).

We cut the sum over paths into two parts: those paths s.t. after the last visit of \(B_{k'}\), the path jumps to \(B_{k}\), we denote by \(B_{k'} \Rightarrow B_{k} \to \delta_{n}\) the set of such paths; and the other paths are denoted by \(B_{k'} \Rightarrow B_{>k}\to \delta_{n}\), this is a partition of \(B_{k'} \to \delta_{n}\). That is we write \[\begin{align} \label{eq-pathsum-2-parts} \sum_{\sigma:B_{k'}\to \delta_{n}} \left( \frac{W_{\sigma}}{(2 \beta)_{\sigma}}\right)= \sum_{\sigma:B_{k'}\Rightarrow B_{k}\to \delta_{n}} \left( \frac{W_{\sigma}}{(2 \beta)_{\sigma}}\right)+ \sum_{\sigma:B_{k'}\Rightarrow B_{>k}\to \delta_{n}} \left( \frac{W_{\sigma}}{(2 \beta)_{\sigma}}\right). \end{align}\tag{15}\] The following property will be used frequently throughout our argument: if \(x_1,\cdots,x_{m} >0\) and \(0<\alpha<1\), then \[\begin{align} \label{eq-FMM-power-bound} (x_1 + \cdots +x_{m})^{\alpha} \leq x_1^{\alpha} + \cdots +x_{m}^{\alpha}, \end{align}\tag{16}\] though we will not mention it each time.
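Inequality (16) follows from the subadditivity of \(x\mapsto x^{\alpha}\) for \(0<\alpha<1\); a quick numerical sanity check:

```python
import random

# Numerical check of the fractional-moment inequality (16):
# (x_1 + ... + x_m)^a <= x_1^a + ... + x_m^a for x_i > 0 and 0 < a < 1.
random.seed(0)
for _ in range(1000):
    m = random.randint(1, 10)
    xs = [random.uniform(1e-6, 100.0) for _ in range(m)]
    a = random.uniform(0.01, 0.99)
    assert sum(xs) ** a <= sum(x ** a for x in xs) + 1e-9
print("inequality (16) holds on all random samples")
```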

To analyze the first sum in Eq. 15 , corresponding to paths that pass through the adjacent block \(B_k\), we perform a standard path decomposition based on the last visits to \(B_{k'}\) and \(B_{k}\).

For the first sum (over the set \(B_{k'} \Rightarrow B_{k} \to \delta_{n}\)) in Eq 15 , we cut the path into three parts: the first goes from \(B_{k'}\) to the last visit of \(B_{k'}\), which yields a factor \(G(B_{k'},B_{k'})\); the next jump must be to \(B_{k}\); then comes the piece from \(B_{k}\) to the last visit of \(B_{k}\) (without returning to \(B_{k'}\), by definition); after the last visit to \(B_{k}\), the path jumps to some \(B_{\ell}\) with \(\ell >k\), and the remaining piece goes from \(B_{\ell}\) to \(\delta_{n}\) (without visiting \(B_{k'}\) or \(B_{k}\)). Formally, we mean \[\begin{align} &\sum_{B_{k'} \Rightarrow B_{k} \to \delta_{n} } \frac{W_{\sigma}}{(2\beta)_{\sigma}}\\ & = G(B_{k'},B_{k'}) W_{B_{k'},B_{k}} \sum_{\sigma':B_{k} \xrightarrow[]{\cancel{B_{k'}}} B_{k}} \frac{W_{\sigma'}}{(2\beta)_{\sigma'}} \sum_{\ell=k+1}^{n} W_{B_{k},B_{\ell}} \sum_{\sigma': B_{\ell} \xrightarrow[]{\cancel{B_{k'},B_{k}}} \delta_{n}} \frac{W_{\sigma'}}{(2\beta)_{\sigma'}} \end{align}\] where \(\sigma':B_{k} \xrightarrow[]{\cancel{B_{k'}}} B_{k}\) denotes the set of paths going from \(B_{k}\) to \(B_{k}\) without using the vertex \(B_{k'}\); similarly, \(B_{\ell} \xrightarrow[]{\cancel{B_{k'},B_{k}}} \delta_{n}\) is the set of paths going from \(B_{\ell}\) to \(\delta_{n}\) which do not visit the set \(\{B_{k'},B_{k}\}\).

Under the law \(\mathbb{E}_{\widetilde{\Lambda}_{n}^{(k)}}\), \(G(B_{k'},B_{k'})\) is independent of the rest in the RHS above, so we have \[\begin{align} &\mathbb{E}_{\widetilde{\Lambda}_{n}^{(k)}}\left( \left(\sum_{B_{k'} \Rightarrow B_{k} \to \delta_{n} } \frac{W_{\sigma}}{(2\beta)_{\sigma}}\right)^{s}\right) \\ & = c_s^{s} \mathbb{E}_{\widetilde{\Lambda}_{n}^{(k)}}\left( \left(W_{B_{k'},B_{k}} \sum_{\sigma':B_{k} \xrightarrow[]{\cancel{B_{k'}}} B_{k}} \frac{W_{\sigma'}}{(2\beta)_{\sigma'}} \sum_{\ell=k+1}^{n} W_{B_{k},B_{\ell}} \sum_{\sigma': B_{\ell} \xrightarrow[]{\cancel{B_{k'},B_{k}}} \delta_{n}} \frac{W_{\sigma'}}{(2\beta)_{\sigma'}}\right)^{s}\right) \end{align}\]

Next \[\begin{align} & \sum_{\sigma':B_{k} \xrightarrow[]{\cancel{B_{k'}}} B_{k}} \frac{W_{\sigma'}}{(2\beta)_{\sigma'}} \sum_{\ell=k+1}^{n} W_{B_{k},B_{\ell}} \sum_{\sigma': B_{\ell} \xrightarrow[]{\cancel{B_{k'},B_{k}}} \delta_{n}} \frac{W_{\sigma'}}{(2\beta)_{\sigma'}} \\ &\leq \sum_{\sigma':B_{k} \xrightarrow[]{} B_{k}} \frac{W_{\sigma'}}{(2\beta)_{\sigma'}} \sum_{\ell=k+1}^{n} W_{B_{k},B_{\ell}} \sum_{\sigma': B_{\ell} \xrightarrow[]{\cancel{B_{k'},B_{k}}} \delta_{n}} \frac{W_{\sigma'}}{(2\beta)_{\sigma'}}\\ & =G(B_{k},B_{k}) \sum_{\ell=k+1}^{n} W_{B_{k},B_{\ell}} \sum_{\sigma': B_{\ell} \xrightarrow[]{\cancel{B_{k'},B_{k}}} \delta_{n}} \frac{W_{\sigma'}}{(2\beta)_{\sigma'}} \end{align}\] and again \(G(B_{k},B_{k})\) is independent of the rest of the RHS above, so we have \[\begin{align} &\mathbb{E}_{\widetilde{\Lambda}_{n}^{(k)}}\left( \left(\sum_{\sigma':B_{k} \xrightarrow[]{\cancel{B_{k'}}} B_{k}} \frac{W_{\sigma'}}{(2\beta)_{\sigma'}} \sum_{\ell=k+1}^{n} W_{B_{k},B_{\ell}} \sum_{\sigma': B_{\ell} \xrightarrow[]{\cancel{B_{k'},B_{k}}} \delta_{n}} \frac{W_{\sigma'}}{(2\beta)_{\sigma'}} \right)^{s} \right) \\ & \leq c_s^{s} \sum_{\ell=k+1}^{n} W_{B_{k},B_{\ell}}^{s} \mathbb{E}_{\widetilde{\Lambda}_{n}^{(k)}}\left( \left( \sum_{\sigma': B_{\ell} \xrightarrow[]{\cancel{B_{k'},B_{k}}} \delta_{n}} \frac{W_{\sigma'}}{(2\beta)_{\sigma'}}\right)^{s} \right)\\ & \leq c_s^{s} \sum_{\ell=k+1}^{n} W_{B_{k},B_{\ell}}^{s} \mathbb{E}_{\widetilde{\Lambda}_{n}^{(k)}}\left( \left( \sum_{\sigma': B_{\ell} \xrightarrow[]{} \delta_{n}} \frac{W_{\sigma'}}{(2\beta)_{\sigma'}}\right)^{s} \right) \end{align}\] The last factor can be rewritten as \[\begin{align} \label{eq-Bl-deltan-Geu} \mathbb{E}_{\widetilde{\Lambda}_{n}^{(k)}}\left( \left(\sum_{\sigma':B_{\ell} \xrightarrow[]{} \delta_{n}} \frac{W_{\sigma'}}{(2 \beta)_{\sigma'}}\right)^{s} \right) & = \mathbb{E}_{\widetilde{\Lambda}_{n}^{(k)}}\left(G(\delta_{n},B_{\ell})^{s}\right)\\ & = \mathbb{E}_{\widetilde{\Lambda}_{n}^{(k)}}(G(\delta_{n},\delta_{n})^{s}) \mathbb{E}_{\widetilde{\Lambda}_{n}^{(k)}}\left(e^{{s u_{B_{\ell}}}}\right) \\ & = \mathbb{E}_{\widetilde{\Lambda}_{n}^{(k)}}(G(\delta_{n},\delta_{n})^{s}) \mathbb{E}_{\widetilde{\Lambda}_{n}^{(\ell)}}\left(e^{{s u_{B_{\ell}}}}\right) \end{align}\tag{17}\] where in the last equality, we applied Corollary 1 several times to coarse-grain \(B_{k'},B_{k}\) into \(B_{(k+1)'}\), etc., until we reach \(B_{\ell'}\). To summarize, our discussion of the first sum gives the following upper bound: \[\begin{align} \label{eq-3A} &\mathbb{E}_{\widetilde{\Lambda}_{n}^{(k)}}\left( \left(\sum_{B_{k'} \Rightarrow B_{k} \to \delta_{n} } \frac{W_{\sigma}}{(2\beta)_{\sigma}}\right)^{s}\right) \\ &\leq c_s^{2s} W_{B_{k'},B_{k}}^{s} \sum_{\ell=k+1}^{n} W_{B_k,B_{\ell}}^{s} \mathbb{E}_{\widetilde{\Lambda}_{n}^{(k)}}(G(\delta_{n},\delta_{n})^{s}) \mathbb{E}_{\widetilde{\Lambda}_{n}^{(\ell)}}\left(e^{{s u_{B_{\ell}}}}\right) \end{align}\tag{18}\]

For the second sum (over \(B_{k'}\Rightarrow B_{>k}\to \delta_{n}\)) in Eq 15 , we factor out the part of the paths from the beginning to the last visit of \(B_{k'}\), which yields a factor \(G(B_{k'},B_{k'})\), i.e. \[\begin{align} \sum_{B_{k'} \Rightarrow B_{>k} \to \delta_{n}} \frac{W_{\sigma}}{(2\beta)_{\sigma}} = G(B_{k'},B_{k'}) \sum_{\ell=k+1}^{n} W_{B_{k'},B_{\ell}} \sum_{\sigma':B_{\ell} \xrightarrow[]{\cancel{B_{k'}}} \delta_{n}} \frac{W_{\sigma'}}{(2 \beta)_{\sigma'}} \end{align}\] Under the law \(\mathbb{E}_{\widetilde{\Lambda}_{n}^{(k)}}\), \(G(B_{k'},B_{k'})\) is independent of \(\sum_{\sigma':B_{\ell} \xrightarrow[]{\cancel{B_{k'}}} \delta_{n}} \frac{W_{\sigma'}}{(2 \beta)_{\sigma'}}\) and is equal in law to \(\frac{1}{2\gamma}\); therefore, \[\begin{align} &\mathbb{E}_{\widetilde{\Lambda}_{n}^{(k)}}\left(\left(\sum_{B_{k'} \Rightarrow B_{>k} \to \delta_{n}} \frac{W_{\sigma}}{(2\beta)_{\sigma}}\right)^{s}\right) \\ & \leq \sum_{\ell=k+1}^{n} W_{B_{k'},B_{\ell}}^{s} \mathbb{E}_{\widetilde{\Lambda}_{n}^{(k)}}\left(G(B_{k'},B_{k'})^{s}\right) \mathbb{E}_{\widetilde{\Lambda}_{n}^{(k)}}\left( \left(\sum_{\sigma':B_{\ell} \xrightarrow[]{\cancel{B_{k'}}} \delta_{n}} \frac{W_{\sigma'}}{(2 \beta)_{\sigma'}}\right)^{s} \right) \\ &\leq \sum_{\ell=k+1}^{n} W_{B_{k'},B_{\ell}}^{s} \mathbb{E}_{\widetilde{\Lambda}_{n}^{(k)}}\left(G(B_{k'},B_{k'})^{s}\right) \mathbb{E}_{\widetilde{\Lambda}_{n}^{(k)}}\left( \left(\sum_{\sigma':B_{\ell} \xrightarrow[]{} \delta_{n}} \frac{W_{\sigma'}}{(2 \beta)_{\sigma'}}\right)^{s} \right)\\ & = \sum_{\ell=k+1}^{n} W_{B_{k'},B_{\ell}}^{s} c_s^{s}\mathbb{E}_{\widetilde{\Lambda}_{n}^{(k)}}\left( \left(\sum_{\sigma':B_{\ell} \xrightarrow[]{} \delta_{n}} \frac{W_{\sigma'}}{(2 \beta)_{\sigma'}}\right)^{s} \right) \end{align}\] The last factor was already discussed in Eq. 17 . 
To summarize, the second sum satisfies the following upper bound: \[\begin{align} \label{eq-4A} \mathbb{E}_{\widetilde{\Lambda}_{n}^{(k)}}\left(\left(\sum_{B_{k'} \Rightarrow B_{>k} \to \delta_{n}} \frac{W_{\sigma}}{(2\beta)_{\sigma}}\right)^{s}\right) \leq \sum_{\ell=k+1}^{n} W_{B_{k'},B_{\ell}}^{s} c_s^{s} \mathbb{E}_{\widetilde{\Lambda}_{n}^{(k)}}(G(\delta_{n},\delta_{n})^{s}) \mathbb{E}_{\widetilde{\Lambda}_{n}^{(\ell)}}\left(e^{{s u_{B_{\ell}}}}\right) \end{align}\tag{19}\]

Combining Eqs 14 , 15 , 18 , 19 , and dividing both sides of Eq. 14 by \(\mathbb{E}_{\widetilde{\Lambda}_{n}^{(k)}}(G(\delta_{n},\delta_{n})^{s})=c_s^{s}\), we have \[\begin{align} \label{eq-recursion-fmm} & \mathbb{E}_{\widetilde{\Lambda}_{n}^{(k)}}(e^{{s u_{B_{k'}}}}) \\ & \leq c_s^{2s} W_{B_{k'},B_{k}}^{s} \sum_{\ell=k+1}^{n} W_{B_k,B_{\ell}}^{s} \mathbb{E}_{\widetilde{\Lambda}_{n}^{(\ell)}}\left(e^{{s u_{B_{\ell}}}}\right) + \sum_{\ell=k+1}^{n} W_{B_{k'},B_{\ell}}^{s} c_s^{s} \mathbb{E}_{\widetilde{\Lambda}_{n}^{(\ell)}}\left(e^{{s u_{B_{\ell}}}}\right) \\ & = \left(1+ c_s^{s} W_{B_{k'},B_{k}}^{s}\right) c_s^{s} \left(\sum_{\ell=k+1}^{n} W_{B_{k},B_{\ell}}^{s} \mathbb{E}_{\widetilde{\Lambda}_{n}^{(\ell)}}(e^{{s u_{B_{\ell}}}})\right), \end{align}\tag{20}\] where we used \(W_{B_{k'},B_{\ell}}=W_{B_{k},B_{\ell}}\) for \(\ell>k\).  


Proof of Proposition 2. Note that \(\mathbb{E}_{\widetilde{\Lambda}_{n}^{(k)}}(e^{{s u_{B_{k}}}})= \mathbb{E}_{\widetilde{\Lambda}_{n}^{(k)}}(e^{{s u_{B_{k'}}}})\), by the symmetry between the blocks \(B_{k}\) and \(B_{k'}\).

We can directly iterate this bound by writing \[\begin{align} &\mathbb{E}_{\widetilde{\Lambda}_{n}^{(k)}}(e^{{s u_{B_{k}}}}) \\ & \leq \left(1+ c_s^{s} W_{B_{k'},B_{k}}^{s}\right) c_s^{s} \left(\sum_{k_1=k+1}^{n} W_{B_{k},B_{k_1}}^{s} \mathbb{E}_{\widetilde{\Lambda}_{n}^{(k_1)}}(e^{{s u_{B_{k_1}}}})\right) \\ & \leq \left(1+ c_s^{s} W_{B_{k'},B_{k}}^{s}\right) c_s^{s} \left(\sum_{k_1=k+1}^{n} W_{B_{k},B_{k_1}}^{s} \left(1+ c_s^{s} W_{B_{k_1'},B_{k_1}}^{s}\right) c_s^{s}\left( \sum_{k_2=k_1+1}^{n} W_{B_{k_1},B_{k_2}}^{s} \mathbb{E}_{\widetilde{\Lambda}_{n}^{(k_2)}} (e^{ {s u_{B_{k_2}}}})\right) \right) \\ & \vdots \\ & \leq \sum_{k=k_0<k_1<k_2 <\cdots < k_{\ell}=n} \prod_{i=0}^{\ell-1} \left[\left(1+ c_s^{s} W_{B_{k_i'},B_{k_i}}^{s}\right)c_s^{s} W_{B_{k_i},B_{k_{i+1}}}^{s}\right] \end{align}\] Now, plugging in the value of \(W_{B_{k_i},B_{k_{i+1}}}\), we continue the computation: \[\begin{align} & = \sum_{k=k_0<k_1<k_2 <\cdots < k_{\ell}=n} \prod_{i=0}^{\ell-1} \left[\left(1+ \left( c_s2^{2 k_i} \overline{W} (2\rho)^{-k_i-1}\right)^{s}\right) c_s^{s} \left(2^{k_i+k_{i+1}} \overline{W} (2\rho)^{-k_{i+1}-1}(\frac{\boldsymbol{1}_{i=\ell-1}}{4(\rho-1)\rho} + \boldsymbol{1}_{i<\ell-1})\right)^{s} \right]\\ & = \sum_{k=k_0<k_1<k_2 <\cdots < k_{\ell}=n} \left( \frac{1}{4(\rho-1)\rho}\right)^{s} \prod_{i=0}^{\ell-1}\left[ \left(1+ \left(\frac{c_s\overline{W}}{2\rho} \left(\frac{2}{\rho}\right)^{k_i} \right)^{s}\right) \left( \frac{c_s\overline{W}}{2\rho} \frac{2^{k_i}}{\rho^{k_{i+1}}}\right)^{s}\right]\\ & = \left( \frac{1}{4(\rho-1)\rho}\right)^{s} \sum_{k=k_0<k_1<k_2 <\cdots < k_{\ell}=n} \prod_{i=0}^{\ell-1} \left[ \left(1+ \left(\frac{c_s\overline{W}}{2\rho} \left(\frac{2}{\rho}\right)^{k_i} \right)^{s}\right) \left( \frac{c_s\overline{W}}{2\rho} \frac{2^{k_i}}{\rho^{k_{i}}}\right)^{s}\right] \prod_{i=0}^{\ell-1} \left(\frac{\rho^{k_i}}{\rho^{k_{i+1}}}\right)^{s} \\ & = \left( \frac{1}{4(\rho-1)\rho}\right)^{s} \rho^{-{s (n-k)}} \sum_{k=k_0<k_1<k_2 <\cdots < k_{\ell}=n} \prod_{i=0}^{\ell-1} \left[ \left(1+ \left(\frac{c_s\overline{W}}{2\rho} \left(\frac{2}{\rho}\right)^{k_i} \right)^{s}\right) \left( \frac{c_s\overline{W}}{2\rho} \frac{2^{k_i}}{\rho^{k_{i}}}\right)^{s}\right] \end{align}\] Since \(\sum_{k=k_0<k_1<k_2 <\cdots < k_{\ell}=n} \prod_{i=0}^{\ell-1} A_{k_i} \leq \prod_{i=k}^{n-1} (1+A_i)\), we have, to summarize for now, \[\begin{align} \label{eq-ELambda-euBk-bound-with-rho} &\mathbb{E}_{\widetilde{\Lambda}_{n}^{(k)}}(e^{{s u_{B_{k}}}}) \\ & \leq \left( \frac{1}{4(\rho-1)\rho}\right)^{s} \rho^{-{s (n-k)}} \prod_{i=k}^{n-1} \left[1+ \left(1+ \left(\frac{c_s\overline{W}}{2\rho} \left(\frac{2}{\rho}\right)^{i} \right)^{s}\right) \left( \frac{c_s\overline{W}}{2\rho} \frac{2^{i}}{\rho^{i}}\right)^{s}\right] \end{align}\tag{21}\] The product in the above expression is bounded uniformly in \(n\): when \(d<2\) (i.e., \(\rho>2\)), for all \(\overline{W}\), it is bounded by \[\begin{align} &\prod_{i=k}^{n-1} \left[1+ \left(1+ \left(\frac{c_s\overline{W}}{2\rho} \left(\frac{2}{\rho}\right)^{i} \right)^{s}\right) \left( \frac{c_s\overline{W}}{2\rho} \frac{2^{i}}{\rho^{i}}\right)^{s}\right] \\ & \leq \exp \sum_{i=0}^{\infty} \left(1+ \left(\frac{c_s\overline{W}}{2\rho} \left(\frac{2}{\rho}\right)^{i} \right)^{s}\right) \left( \frac{c_s\overline{W}}{2\rho} \frac{2^{i}}{\rho^{i}}\right)^{s}\\ & <\infty, \end{align}\] since the ratio \(2/\rho\) is less than \(1\), so the series converges.
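The resummation step, bounding the sum over increasing sequences by a product, can be checked numerically on small examples (the values of \(A_i\) below are arbitrary):

```python
from itertools import combinations

# Check the combinatorial bound used to resum the iterated recursion:
#   sum over k = k_0 < k_1 < ... < k_l = n of prod_{i < l} A_{k_i}
#     <= prod_{i = k}^{n-1} (1 + A_i).
# Each sequence is determined by its set of intermediate points in {k+1,...,n-1}.
def path_sum(A, k, n):
    total = 0.0
    inner = list(range(k + 1, n))
    for r in range(len(inner) + 1):
        for mids in combinations(inner, r):
            seq = (k,) + mids            # the jump *origins* k_0, ..., k_{l-1}
            p = 1.0
            for ki in seq:
                p *= A[ki]
            total += p
    return total

A = {i: 0.3 + 0.1 * i for i in range(6)}
k, n = 1, 6
prod = 1.0
for i in range(k, n):
    prod *= 1 + A[i]
assert path_sum(A, k, n) <= prod
print(path_sum(A, k, n), "<=", prod)
```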

Therefore, we have a constant \(C(\overline{W},d,s)\) s.t. if \(d<2\), \[\begin{align} \label{eq-sum-one-path-1plusAj} \sum_{k=k_0<k_1<k_2 <\cdots < k_{\ell}=n} \prod_{i=0}^{\ell-1} \left[\left(1+ c_s^{s} W_{B_{k_i'},B_{k_i}}^{s}\right)c_s^{s} W_{B_{k_i},B_{k_{i+1}}}^{s}\right] \leq C(\overline{W},d,s) \rho^{- {s (n-k)}} \end{align}\tag{22}\] As a consequence: \[\mathbb{E}_{\widetilde{\Lambda}_{n}^{(k)}}(e^{{s u_{B_{k}}}}) \leq C(\overline{W},d,s) \rho^{- {s (n-k)}}.\]

When \(d=2\), i.e. \(\rho=2\), we have to choose \(\overline{W}\) small enough such that \[\begin{align} \label{eq-choose-upsilon} 1+ \left(1+ \left(\frac{c_s\overline{W}}{2\rho} \left(\frac{2}{\rho}\right)^{i} \right)^{s}\right) \left( \frac{c_s\overline{W}}{2\rho} \frac{2^{i}}{\rho^{i}}\right)^{s} = 1+ \left(1+ \left(\frac{c_s\overline{W}}{4} \right)^{s}\right) \left( \frac{c_s\overline{W}}{4} \right)^{s} < 2^{s}. \end{align}\tag{23}\] Therefore, we have \[\begin{align} \label{eq-sum-one-path-1plusAj-d-is-2} &\sum_{k=k_0<k_1<k_2 <\cdots < k_{\ell}=n} \prod_{i=0}^{\ell-1} \left[\left(1+ c_s^{s} W_{B_{k_i'},B_{k_i}}^{s}\right)c_s^{s} W_{B_{k_i},B_{k_{i+1}}}^{s}\right] \\ &\leq \left( \frac{1}{4(\rho-1)\rho}\right)^{s} \rho^{-{s (n-k)}} \prod_{i=k}^{n-1} \left[1+ \left(1+ \left(\frac{c_s\overline{W}}{2\rho} \left(\frac{2}{\rho}\right)^{i} \right)^{s}\right) \left( \frac{c_s\overline{W}}{2\rho} \frac{2^{i}}{\rho^{i}}\right)^{s}\right] \\ &\leq \left( \frac{1}{8}\right)^{s} 2^{-{s (n-k)}} \left[1+ \left(1+ \left(\frac{c_s\overline{W}}{4} \right)^{s}\right) \left( \frac{c_s\overline{W}}{4} \right)^{s}\right]^{n-k} \end{align}\tag{24}\] That is, it suffices to choose \(c(\overline{W},s)\) as in Theorem 2.  
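The smallness condition (23) can be explored numerically. Since the constant \(c_s\) is not computed explicitly here, we set it to \(1\) purely for illustration; the point is that the left-hand side is increasing in \(\overline{W}\) and tends to \(1\) as \(\overline{W}\to 0\), so the condition holds for \(\overline{W}\) small enough:

```python
# Illustration of condition (23) at d = 2 (rho = 2).  c_s is an assumed
# placeholder value (set to 1 here); in the paper it is a specific constant.
c_s, s = 1.0, 0.25

def lhs(Wbar):
    t = (c_s * Wbar / 4) ** s
    return 1 + (1 + t) * t

threshold = 2 ** s              # condition (23): lhs(Wbar) < 2^s

# Bisect for the largest admissible Wbar on [0, 10]; lhs is increasing.
lo, hi = 0.0, 10.0
for _ in range(60):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if lhs(mid) < threshold else (lo, mid)
print(f"condition (23) holds for Wbar < {lo:.6f} (with c_s = {c_s}, s = {s})")
```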


4.2 Estimate for \(G(i,j)\)↩︎

The proof of the estimate of the fractional moment of \(G(i,j)\) follows the same idea as the proof of that of \(G(\delta,i)\) in Section 4.1, except that the entropy reduction step is more complex due to the increased number of possibilities in the path counting.

To estimate the fractional moments of \(G_{\beta, \widetilde{\Lambda}_{n}}(1,j)\) for \(2\leq j\leq 2^{n}\), we coarse-grain the graph \(\widetilde{\Lambda}_{n}\) as much as possible: e.g. in Figure 3, we coarse-grain all the dyadic blocks that contain neither \(i=1\) nor \(j=9\).

Figure 3: The graph \widetilde{\Lambda}_{5,\{1,9\}}^{(0)} after coarse-grain in the box \widetilde{\Lambda}_5 to compute G_{\widetilde{\Lambda}_5}(1,9), note that \delta_5 is also B_5.

The blocks \(B_{k}\) and \(B_{k'}\) are defined as in Eq. 13 ; the blocks \(C_{k}\) and \(C_{k'}\) are defined analogously to the \(B_{k},B_{k'}\), but inside the subtree rooted at the right child of the youngest common ancestor of \(1\) and \(j\) (the block \(B_{3}\) in Figure 3). Without loss of generality, we can set \(j= 2^{m}+1\); thus we define \[\begin{align} &C_{1'} = \{2^{m}+1,2^{m}+2\}=2^{m}+B_{1'}\\ &C_{1} = \{2^{m}+3,2^{m}+4\}=2^{m}+B_{1}\\ &C_{2} = 2^{m}+B_{2}\\ &\vdots\\ &C_{m-1}=2^{m}+B_{m-1}. \end{align}\] Given \(1,j\), we define a sequence of progressively coarser graphs \(\widetilde{\Lambda}_{n,\{1,j\}}^{(0)} , \widetilde{\Lambda}_{n,\{1,j\}}^{(1)} ,\cdots, \widetilde{\Lambda}_{n,\{1,j\}}^{(m)}\). The first graph \(\widetilde{\Lambda}_{n,\{1,j\}}^{(0)}\) is defined by coarse-graining all the dyadic blocks \(B_1 ,\cdots,B_{m-1}, C_1 ,\cdots,C_{m-1}\) and the blocks \(B_{m+1} ,\cdots, B_{n-1}\), see Figure 3 for a concrete example. Namely, \(\widetilde{\Lambda}_{n,\{1,j\}}^{(0)}\) has vertex set \(\{B_{0'},B_{0},B_1 ,\cdots,B_{m-1},B_{m+1} ,\cdots,B_n,C_0,C_{0'},C_1 ,\cdots,C_{m-1}\}\), where \(C_{0}=\{j\}, C_{0'}=\{j+1\}\). In general, the vertex set of \(\widetilde{\Lambda}_{n,\{1,j\}}^{(k)}\) is \[\begin{align} \label{eq-vertex-set-Lambdan1jk} \{B_{0'},B_{0},B_1 ,\cdots,B_{m-1},B_{m+1} ,\cdots,B_{n}, C_{k'},C_k,C_{k+1} ,\cdots, C_{m-1}\}. \end{align}\tag{25}\]

By Eq. 11 we can compute the effective \(W\) between the \(B\)s and the \(C\)s: given \(0\leq k\leq m-1\), for \(\ell\geq k+1\) and \(i\in \{0 ,\cdots,m-1,m+1 ,\cdots,n\}\), \[\begin{align} \label{eq-block-W-CkClBi} &W_{C_{k},C_{\ell}}= W_{C_{k'},C_{\ell}}=W_{B_{k'},B_{\ell}}= W_{B_{k},B_{\ell}}=2^{k+\ell} \overline{W} (2\rho)^{-(k\vee \ell)-1}\\ & W_{C_{k'},B_{i}}= 2^{k+i} \overline{W}(2\rho)^{-m-1} \boldsymbol{1}_{i<m} + 2^{k+i}\overline{W} (2\rho)^{-(k\vee i)-1} \boldsymbol{1}_{i>m}\\ & W_{C_{k'},C_{k}}= W_{B_{k'},B_{k}}= 2^{2k} \overline{W} (2\rho)^{-k-1}. \end{align}\tag{26}\]

Let us state a proposition that will be proved afterwards.

Proposition 3. On \(\widetilde{\Lambda}_{n}^{(0)}\), for \(0\leq \ell\leq n-1\), we have the following estimates for the fractional moments of the Green’s function:

  1. If \(d<2\), then there is a constant \(C''(\overline{W},d,s) > 0\) such that for all \(0\leq \ell\leq n\), \[\mathbb{E}_{\widetilde{\Lambda}_{n}^{(0)}}(e^{{s u^{(1)}_{B_{\ell}}}}) \leq C''(\overline{W},d,s) \rho^{-{\ell s}}.\]

  2. If \(d=2\), then there is a constant \(C''(\overline{W},s)>0\) s.t. for all \(0\leq \ell\leq n\), \[\mathbb{E}_{\widetilde{\Lambda}_{n}^{(0)}}(e^{{s u^{(1)}_{B_{\ell}}}}) \leq C''(\overline{W},s) c(\overline{W},s)^{-{\ell s}}\] where \(c(\overline{W},s)\) is the same as in Theorem 2.

Proof of Theorem 3. By Lemma 2 we have, recall that \(j=2^{m}+1\), \[\mathbb{E}_{\widetilde{\Lambda}_{n}}(e^{{s u^{(j)}_{1}}}) = \mathbb{E}_{\widetilde{\Lambda}_{n,\{1,j\}}^{(0)}}(e^{{s u^{(j)}_{1}}}).\] Now, for \(k=0 ,\cdots,m-1\), recall the definition of \(C_{k'}=2^{m}+B_{k'}\), we have, recall definition in Eq. 25 , \[\begin{align} \mathbb{E}_{\widetilde{\Lambda}_{n,\{1,j\}}^{(k)}}(G(1,1)^{s})\mathbb{E}_{\widetilde{\Lambda}_{n,\{1,j\}}^{(k)}}(e^{{s u^{(1)}_{C_{k'}}}}) & = \mathbb{E}_{\widetilde{\Lambda}_{n,\{1,j\}}^{(k)}}(G(1,C_{k'})^{s})\\ & = \mathbb{E}_{\widetilde{\Lambda}_{n,\{1,j\}}^{(k)}} \left(\left(\sum_{\sigma:C_{k'}\to 1} \left(\frac{W_{\sigma}}{(2\beta)_{\sigma}}\right)\right)^{s}\right). \end{align}\] We cut the sum over paths into two parts: from the beginning to the last visit of \(C_{k'}\) and the rest, for the second part, we distinguish three cases, either it jumps from \(C_{k'}\) to some \(B\) block, or it jumps to one of the \(\{C_{k+1} ,\cdots,C_{m-1}\}\), or it jumps to \(C_{k}\), \[\begin{align} \sum_{\sigma:C_{k'}\to 1} \frac{W_{\sigma}}{(2\beta)_{\sigma}} & = G(C_{k'},C_{k'}) \sum_{i\in \{0 ,\cdots,m-1,m+1 ,\cdots,n\}}W_{C_{k'},B_{i}} \sum_{\sigma': B_i \xrightarrow[]{\cancel{C_{k'}}}1} \frac{W_{\sigma'}}{(2\beta)_{\sigma'}} \\ &+ G(C_{k'},C_{k'})\sum_{\ell=k+1}^{m-1} W_{C_{k'},C_{\ell}} \sum_{\sigma':C_{\ell} \xrightarrow[]{\cancel{C_{k'}} }1} \frac{W_{\sigma'}}{(2\beta)_{\sigma'}}\\ &+ G(C_{k'},C_{k'}) W_{C_{k'},C_{k}} \sum_{\sigma':C_{k} \xrightarrow[]{\cancel{C_{k'}} }1} \frac{W_{\sigma'}}{(2\beta)_{\sigma'}} \end{align}\] Now \[\begin{align} & \mathbb{E}_{\widetilde{\Lambda}_{n,\{1,j\}}^{(k)}} \left(\left(\sum_{\sigma:C_{k'}\to 1} \left(\frac{W_{\sigma}}{(2\beta)_{\sigma}}\right)\right)^{s}\right) \\ & \leq c_s^{s} \sum_{i\in \{0 ,\cdots,m-1,m+1 ,\cdots,n\}}W_{C_{k'},B_{i}}^{s} \mathbb{E}_{\widetilde{\Lambda}_{n,\{1,j\}}^{(k)}}\left(\left(\sum_{\sigma': B_i \xrightarrow[]{\cancel{C_{k'}}}1} 
\frac{W_{\sigma'}}{(2\beta)_{\sigma'}}\right)^{s}\right) \\ &+ c_s^{s} \sum_{\ell=k+1}^{m-1} W_{C_{k'},C_{\ell}}^{s} \mathbb{E}_{\widetilde{\Lambda}_{n,\{1,j\}}^{(k)}}\left(\left(\sum_{\sigma':C_{\ell} \xrightarrow[]{\cancel{C_{k'}} }1} \frac{W_{\sigma'}}{(2\beta)_{\sigma'}}\right)^{s}\right)\\ &+ c_s^{s} W_{C_{k'},C_{k}}^{s} \mathbb{E}_{\widetilde{\Lambda}_{n,\{1,j\}}^{(k)}}\left(\left( \sum_{\sigma':C_{k} \xrightarrow[]{\cancel{C_{k'}} }1} \frac{W_{\sigma'}}{(2\beta)_{\sigma'}}\right)^{s}\right) \end{align}\] The first term is bounded by Proposition 1. We keep expanding the other terms, \[\begin{align} &\sum_{\sigma':C_{k} \xrightarrow[]{\cancel{C_{k'}} }1} \frac{W_{\sigma'}}{(2\beta)_{\sigma'}} = \left(\sum_{\sigma':C_{k} \xrightarrow[]{\cancel{C_{k'}}} C_{k} } \frac{W_{\sigma'}}{(2\beta)_{\sigma'}}\right) \\ &\cdot \left( \sum_{i\in \{0 ,\cdots,m-1,m+1 ,\cdots,n\}}W_{C_{k'},B_{i}} \sum_{\sigma': B_i \xrightarrow[]{\cancel{C_{k'},C_{k}}}1} \frac{W_{\sigma'}}{(2\beta)_{\sigma'}} + \sum_{\ell=k+1}^{m-1} W_{C_{k'},C_{\ell}} \sum_{\sigma':C_{\ell} \xrightarrow[]{\cancel{C_{k'},C_{k}} }1} \frac{W_{\sigma'}}{(2\beta)_{\sigma'}}\right) \end{align}\] As \(\sum_{\sigma':C_{k} \xrightarrow[]{\cancel{C_{k'}}} C_{k} } \frac{W_{\sigma'}}{(2\beta)_{\sigma'}} \leq G(C_{k},C_{k})\) on \(\widetilde{\Lambda}_{n,\{1,j\}}^{(k)}\), so we have \[\begin{align} &\mathbb{E}_{\widetilde{\Lambda}_{n,\{1,j\}}^{(k)}}\left(\left( \sum_{\sigma':C_{k} \xrightarrow[]{\cancel{C_{k'}} }1} \frac{W_{\sigma'}}{(2\beta)_{\sigma'}}\right)^{s}\right)\\ & \leq c_s^{s} \sum_{i\in \{0 ,\cdots,m-1,m+1 ,\cdots,n\}}W_{C_{k},B_{i}}^{s} \mathbb{E}_{\widetilde{\Lambda}_{n,\{1,j\}}^{(k)}}\left(\left(\sum_{\sigma': B_i \xrightarrow[]{\cancel{C_{k'},C_{k}}}1} \frac{W_{\sigma'}}{(2\beta)_{\sigma'}}\right)^{s}\right) \\ &+ c_s^{s} \sum_{\ell=k+1}^{m-1} W_{C_{k},C_{\ell}}^{s} \mathbb{E}_{\widetilde{\Lambda}_{n,\{1,j\}}^{(k)}}\left(\left(\sum_{\sigma':C_{\ell} \xrightarrow[]{\cancel{C_{k'},C_{k}} }1} 
\frac{W_{\sigma'}}{(2\beta)_{\sigma'}}\right)^{s}\right)\\ \end{align}\] Overall, we thus have \[\begin{align} & \mathbb{E}_{\widetilde{\Lambda}_{n,\{1,j\}}^{(k)}} \left(\left(\sum_{\sigma:C_{k'}\to 1} \left(\frac{W_{\sigma}}{(2\beta)_{\sigma}}\right)\right)^{s}\right) \\ & \leq c_s^{s} (1+ c_s^{s} W_{C_{k'},C_{k}}^{s}) \sum_{i\in \{0 ,\cdots,m-1,m+1 ,\cdots,n\}}W_{C_{k'},B_{i}}^{s} \mathbb{E}_{\widetilde{\Lambda}_{n,\{1,j\}}^{(k)}}\left(\left(\sum_{\sigma': B_i \xrightarrow[]{\cancel{C_{k'}}}1} \frac{W_{\sigma'}}{(2\beta)_{\sigma'}}\right)^{s}\right) \\ &+ c_s^{s} (1+ c_s^{s} W_{C_{k'},C_{k}}^{s}) \sum_{\ell=k+1}^{m-1} W_{C_{k'},C_{\ell}}^{s} \mathbb{E}_{\widetilde{\Lambda}_{n,\{1,j\}}^{(k)}}\left(\left(\sum_{\sigma':C_{\ell} \xrightarrow[]{\cancel{C_{k'}} }1} \frac{W_{\sigma'}}{(2\beta)_{\sigma'}}\right)^{s}\right) \end{align}.\] Now we can bound the first path sums on the RHS above by the Green's functions \(G(B_i,1)\), using \(G(B_i,1)=G(1,1) e^{u^{(1)}_{B_i}}\). By independence of \(G(1,1)\) and \(e^{u^{(1)}_{B_{i}}}\), we can factor out of the expectation the factor \(G(1,1)^{s}\); the expectation of \(e^{u^{(1)}_{B_i}}\) may then be passed to an effective model using Lemma 2, and since the diagonal Green's functions are all identically distributed (as an inverse Gamma), we can reinsert the Green's function on the effective model. More precisely, we have the following: \[\begin{align} \mathbb{E}_{\widetilde{\Lambda}_{n,\{1,j\}}^{(k)}}\left(\left(\sum_{\sigma': B_i \xrightarrow[]{\cancel{C_{k'}}}1} \frac{W_{\sigma'}}{(2\beta)_{\sigma'}}\right)^{s}\right) & \leq \mathbb{E}_{\widetilde{\Lambda}_{n,\{1,j\}}^{(k)}} \left(G(B_i,1)^{s}\right)\\ & = \mathbb{E}_{\widetilde{\Lambda}_{n,\{1,j\}}^{(k)}}(G(1,1)^{s}) \mathbb{E}_{\widetilde{\Lambda}_{n,\{1,j\}}^{(k)}}\left(\left(e^{u^{(1)}_{B_i}}\right)^{s}\right)\\ & = \mathbb{E}_{\widetilde{\Lambda}_{n,\{1,j\}}^{(k)}}(G(1,1)^{s}) \mathbb{E}_{\widetilde{\Lambda}_{n}^{(0)}}(e^{{s u^{(1)}_{B_i}}}) \end{align}.\] Similarly we can treat 
the second path sum, therefore, \[\begin{align} & \mathbb{E}_{\widetilde{\Lambda}_{n,\{1,j\}}^{(k)}} \left(\left(\sum_{\sigma:C_{k'}\to 1} \left(\frac{W_{\sigma}}{(2\beta)_{\sigma}}\right)\right)^{s}\right) \\ & \leq c_s^{s} (1+ c_s^{s} W_{C_{k'},C_{k}}^{s}) \sum_{i\in \{0 ,\cdots,m-1,m+1 ,\cdots,n\}}W_{C_{k'},B_{i}}^{s} \mathbb{E}_{\widetilde{\Lambda}_{n,\{1,j\}}^{(k)}}(G(1,1)^{s}) \mathbb{E}_{\widetilde{\Lambda}_{n}^{(0)}}(e^{{s u^{(1)}_{B_i}}}) \\ &+ c_s^{s} (1+ c_s^{s} W_{C_{k'},C_{k}}^{s}) \sum_{\ell=k+1}^{m-1} W_{C_{k'},C_{\ell}}^{s} \mathbb{E}_{\widetilde{\Lambda}_{n,\{1,j\}}^{(k)}}\left(G(1,1)^{s}\right)\mathbb{E}_{\widetilde{\Lambda}_{n,\{1,j\}}^{(\ell)}}(e^{{s u^{(1)}_{C_{\ell}}}}). \end{align}\] Overall we have the following recursive inequality: \[\begin{align} \label{eq-recursive-inequality-Gij} & \mathbb{E}_{\widetilde{\Lambda}_{n,\{1,j\}}^{(k)}}(e^{{s u^{(1)}_{C_{k'}}}}) \\ & \leq c_s^{s} (1+ c_s^{s} W_{C_{k'},C_{k}}^{s}) \sum_{i\in \{0 ,\cdots,m-1,m+1 ,\cdots,n\}}W_{C_{k'},B_{i}}^{s} \mathbb{E}_{\widetilde{\Lambda}_{n}^{(0)}}(e^{{s u^{(1)}_{B_i}}}) \\ &+ c_s^{s} (1+ c_s^{s} W_{C_{k'},C_{k}}^{s}) \sum_{\ell=k+1}^{m-1} W_{C_{k'},C_{\ell}}^{s} \mathbb{E}_{\widetilde{\Lambda}_{n,\{1,j\}}^{(\ell)}}(e^{{s u^{(1)}_{C_{\ell}}}}). \end{align}\tag{27}\] Now, note that \(W\) in Eq. 26 implies in particular \(W_{C_{k'},C_{k}}=W_{B_{k'},B_{k}}\) and \(W_{C_{k'},C_{\ell}}=W_{B_{k'},B_{\ell}}\), and that the term corresponding to \(\ell=n\) in the sum on the RHS of Eq. 20 carries no expectation, because \(\mathbb{E}_{\widetilde{\Lambda}_{n}^{(n)}}(e^{{s u_{B_n}}})=1\) (as \(B_{n}=\delta_{n}\)); it thus corresponds to the term without an expectation factor on the RHS of Eq. 27 . That is, we are at the same position as in Eq. 
20 , that is, we actually have \[\begin{align} & \mathbb{E}_{\widetilde{\Lambda}_{n,\{1,j\}}^{(k)}}(e^{{s u^{(1)}_{C_{k'}}}}) \\ & \leq c_s^{s} (1+ c_s^{s} W_{B_{k'},B_{k}}^{s}) \sum_{i\in \{0 ,\cdots,m-1,m+1 ,\cdots,n\}}W_{C_{k'},B_{i}}^{s} \mathbb{E}_{\widetilde{\Lambda}_{n}^{(0)}}(e^{{s u^{(1)}_{B_i}}}) \\ &+ c_s^{s} (1+ c_s^{s} W_{B_{k'},B_{k}}^{s}) \sum_{\ell=k+1}^{m-1} W_{B_{k'},B_{\ell}}^{s} \mathbb{E}_{\widetilde{\Lambda}_{n,\{1,j\}}^{(\ell)}}(e^{{s u^{(1)}_{C_{\ell}}}})\\ & = c_s^{s} (1+ c_s^{s} W_{B_{k'},B_{k}}^{s})\left[ \sum_{\ell=k+1}^{m-1} W_{B_{k'},B_{\ell}}^{s} \mathbb{E}_{\widetilde{\Lambda}_{n,\{1,j\}}^{(\ell)}}(e^{{s u^{(1)}_{C_{\ell}}}})+ W_{B_{k},B_{m}}^{s} \mathfrak{C}\right] \end{align}\] where we define \(\mathfrak{C}\) by \[\mathfrak{C} W_{B_{k},B_{m}}^{s} = \sum_{i\in \{0 ,\cdots,m-1,m+1 ,\cdots,n\}}W_{C_{k'},B_{i}}^{s} \mathbb{E}_{\widetilde{\Lambda}_{n}^{(0)}}(e^{{s u^{(1)}_{B_i}}}) .\] \(\mathfrak{C}\) is in fact bounded from above by a constant once we plug in Proposition 1 to bound the expectations on the RHS above.
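Each passage above from the \(s\)-th moment of a path sum to a sum of \(s\)-th moments rests on the elementary subadditivity \((a_1+\cdots+a_n)^{s}\leq a_1^{s}+\cdots+a_n^{s}\) for \(0<s\leq 1\) and \(a_i\geq 0\); a quick randomized sanity check (illustrative only):

```python
import random

# Sanity check of the subadditivity (a1 + ... + an)^s <= a1^s + ... + an^s
# for 0 < s <= 1 and nonnegative a_i, used at every expansion step above.
random.seed(0)
s = 0.25
violations = 0
for _ in range(1000):
    a = [random.uniform(0.0, 10.0) for _ in range(random.randint(1, 8))]
    if sum(a) ** s > sum(x ** s for x in a) + 1e-12:
        violations += 1
assert violations == 0
```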

Thus we have, by the same iteration argument as in Section 4.1 , \[\begin{align} \label{eq-tobeinserted} \mathbb{E}_{\widetilde{\Lambda}_{n,\{1,j\}}^{(k)}}(e^{{s u^{(1)}_{C_{k'}}}}) \leq \mathfrak{C} \sum_{k=k_0 < k_1<k_2 <\cdots <k_{\ell}=m } \prod_{i=0}^{\ell-1} \left[\left(1+ c_s^{s} W_{B_{k_i'},B_{k_i}}^{s}\right)c_s^{s} W_{B_{k_i},B_{k_{i+1}}}^{s}\right] \end{align}\tag{28}\] Assume for the moment \(d<2\). Plugging the bounds of Proposition 1 and the values of \(W\) from Eq. 26 into the definition of \(\mathfrak{C}\), we get after computation \[\begin{align} &(2^{k+m} \overline{W} (2\rho)^{-(k\vee m)-1})^{s}\mathfrak{C}\\ &\leq 2^{{k s}} \left(\frac{\overline{W}}{2\rho}\right)^{s} C''(\overline{W},d,s) \left((2\rho)^{-{m s}} \left(\frac{1}{1-( \frac{2}{\rho})^{s}}\right) + \rho^{-{2sm}} \left(\frac{1}{\rho^{{2 s}}-1}\right)\right)\\ & \leq (2\rho)^{-{m s}} 2^{{k s}} \left(\frac{\overline{W}}{2\rho}\right)^{s} C''(\overline{W},d,s)\left( \frac{1}{1-( \frac{2}{\rho})^{s}} + \frac{1}{\rho^{{2 s}}-1}\right) \end{align}\] This gives the upper bound \(\mathfrak{C} \leq 2^{-{m s}} C''(\overline{W},d,s) \left( \frac{1}{1-( \frac{2}{\rho})^{s}} + \frac{1}{\rho^{{2 s}}-1}\right)\). Inserting this into Eq. 28 and using Eq. 22 , \[\begin{align} &\mathbb{E}_{\widetilde{\Lambda}_{n,\{1,j\}}^{(k)}}(e^{{s u^{(1)}_{C_{k'}}}})\leq \mathfrak{C} C(\overline{W},d,s) \rho^{-{s (m-k)}} \leq C(\overline{W},d,s)(2\rho)^{-{m s}} 2^{{k s}} \end{align}\] and setting \(k=0\) we get Eq. ?? .
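The combinatorial sum in Eq. 28 can be made concrete in a toy computation. Dropping the \((1+c_s^{s}W^{s})\) prefactors for simplicity and taking a per-step weight \(b(j)\) that depends only on the arrival index (a simplified stand-in for \(c_s^{s}W^{s}\), not the actual weights of Eq. 26 ), the sum over all increasing sequences \(k=k_0<k_1<\cdots<k_{\ell}=m\) factorizes as \(b(m)\prod_{j=k+1}^{m-1}(1+b(j))\), which stays bounded when the \(b(j)\) decay geometrically; a minimal sketch:

```python
from itertools import combinations

# Toy version of the combinatorial sum in Eq. 28: a sum over all increasing
# sequences k = k_0 < k_1 < ... < k_l = m of a product of per-step weights.
# Here the step weight b(j) depends only on the arrival index, in which case
# the sum factorizes exactly as b(m) * prod_{j=k+1}^{m-1} (1 + b(j)).

def brute_force(k, m, b):
    total = 0.0
    interior = list(range(k + 1, m))
    for r in range(len(interior) + 1):
        for mid in combinations(interior, r):  # choice of intermediate stops
            w = b(m)
            for j in mid:
                w *= b(j)
            total += w
    return total

def closed_form(k, m, b):
    p = b(m)
    for j in range(k + 1, m):
        p *= 1.0 + b(j)
    return p

b = lambda j: 0.3 * 0.5 ** j  # geometrically decaying toy weight
val_bf = brute_force(0, 10, b)
val_cf = closed_form(0, 10, b)
assert abs(val_bf - val_cf) < 1e-12
```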

When \(d=2\), we plug in Proposition 1 (the case \(d=2\)); after computation we similarly have \[\begin{align} &\left(2^{k+m}\overline{W} (2\rho)^{-k\vee m-1}\right)^{s} \mathfrak{C} \\ & = C''(\overline{W},s) \left( \left(\overline{W} 2^{-2m-2+k}\right)^{s} \frac{(\frac{2}{c(\overline{W},s)})^{{m s}}-1}{(\frac{2}{c(\overline{W},s)})^{s}-1} + \left(2^{k+1}\overline{W}\right)^{s} \frac{ ((2c(\overline{W},s))^{-s})^{n+1}-((2c(\overline{W},s))^{-s})^{m+1}}{((2c(\overline{W},s))^{-s})-1}\right) \end{align}\] We then choose \(\overline{W}\) small enough that \(c(\overline{W},s) > \frac{1}{2}\), so as to have exponential decay.  


Proof of Proposition 1. The case \(\ell=n\) is already done in Theorem 2, because \(B_{\ell}=B_{n}=\delta_{n}\), so we only deal with \(2\leq \ell\leq n-1\). Fix \(0\leq k < j \leq n\) and consider the coarse graph \(\widetilde{\Lambda}_{n}^{(k)}\); we have \[\begin{align} \label{eq-A-1-Gij} \mathbb{E}_{\widetilde{\Lambda}_{n}^{(k)}}(G(B_j,B_j)^{s}) \mathbb{E}_{\widetilde{\Lambda}_{n}^{(k)}}(e^{{s u^{(B_{j})}_{B_{k'}}}}) & = \mathbb{E}_{\widetilde{\Lambda}_{n}^{(k)}}(G(B_{k'},B_j)^{s})\\ & = \mathbb{E}_{\widetilde{\Lambda}_{n}^{(k)}}\left(\left(\sum_{\sigma:B_{k'}\to B_j} \left( \frac{W_{\sigma}}{(2 \beta)_{\sigma}}\right)\right)^{s}\right) \end{align}\tag{29}\] where the sum is over the set \(B_{k'} \to B_j\) of all paths \(\sigma\) going from \(B_{k'}\) to \(B_j\) in the graph \(\widetilde{\Lambda}_{n}^{(k)}\).
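The path-sum representation \(G(i,j)=\sum_{\sigma:i\to j} W_{\sigma}/(2\beta)_{\sigma}\) used in Eq. 29 is the Neumann expansion of \(G=(2\operatorname{diag}(\beta)-W)^{-1}\); it can be checked numerically on a small graph. A minimal sketch, with arbitrary toy weights rather than the hierarchical weights of the paper:

```python
import numpy as np

# Numerical check of the path-sum representation used in Eq. 29:
#   G(i, j) = sum over paths sigma from i to j of W_sigma / (2*beta)_sigma,
# i.e. the Neumann expansion of G = (2*diag(beta) - W)^{-1}.
rng = np.random.default_rng(0)
n = 4
W = rng.uniform(0.1, 0.3, size=(n, n))
W = (W + W.T) / 2
np.fill_diagonal(W, 0.0)
beta = W.sum(axis=1) + 1.0  # makes 2*diag(beta) - W strictly diagonally dominant

D = 2.0 * np.diag(beta)
G = np.linalg.inv(D - W)

# Path expansion: G = sum_{L >= 0} (D^{-1} W)^L D^{-1}; the (i, j) entry of the
# L-th term sums over paths of length L, each edge contributing a factor W_e
# and each visited vertex (endpoints included) a factor 1/(2*beta_v).
Dinv = np.diag(1.0 / np.diag(D))
G_paths = np.zeros_like(G)
M = np.eye(n)
for _ in range(200):
    G_paths += M @ Dinv
    M = M @ (Dinv @ W)

assert np.allclose(G, G_paths, atol=1e-10)
```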

We start by proving a counterpart of Proposition 1 in the present \(G(i,j)\) setting, instead of the previous, simpler \(G(i,\delta)\) setting.

We cut the sum over paths into two parts: the paths such that, after the last visit of \(B_{k'}\), the path jumps to \(B_{k}\), whose set we denote by \(B_{k'} \Rightarrow B_{k} \to B_j\); and the remaining paths, whose set we denote by \(B_{k'} \Rightarrow B_{>k}\to B_j\). This gives a partition of \(B_{k'} \to B_j\); that is, we write \[\begin{align} \label{eq-pathsum-2-parts-Gij} \sum_{\sigma:B_{k'}\to B_j} \left( \frac{W_{\sigma}}{(2 \beta)_{\sigma}}\right)= \sum_{\sigma:B_{k'}\Rightarrow B_{k}\to B_j} \left( \frac{W_{\sigma}}{(2 \beta)_{\sigma}}\right)+ \sum_{\sigma:B_{k'}\Rightarrow B_{>k}\to B_j} \left( \frac{W_{\sigma}}{(2 \beta)_{\sigma}}\right). \end{align}\tag{30}\] For the first sum (over the set \(B_{k'} \Rightarrow B_{k} \to B_j\)) in Eq 30 , as before, we cut each path into three parts; we then have

\[\begin{align} &\sum_{B_{k'} \Rightarrow B_{k} \to B_j } \frac{W_{\sigma}}{(2\beta)_{\sigma}}\\ & = G(B_{k'},B_{k'}) W_{B_{k'},B_{k}} \sum_{\sigma':B_{k} \xrightarrow[]{\cancel{B_{k'}}} B_{k}} \frac{W_{\sigma'}}{(2\beta)_{\sigma'}} \sum_{\ell=k+1}^{n} W_{B_{k},B_{\ell}} \sum_{\sigma': B_{\ell} \xrightarrow[]{\cancel{B_{k'},B_{k}}} B_j} \frac{W_{\sigma'}}{(2\beta)_{\sigma'}} \end{align}\] where \(\sigma':B_{k} \xrightarrow[]{\cancel{B_{k'}}} B_{k}\) denotes the set of paths going from \(B_{k}\) to \(B_{k}\) without using the vertex \(B_{k'}\); similarly, \(B_{\ell} \xrightarrow[]{\cancel{B_{k'},B_{k}}} B_j\) is the set of paths going from \(B_{\ell}\) to \(B_j\) which do not visit the set \(\{B_{k'},B_{k}\}\).

Under the law \(\mathbb{E}_{\widetilde{\Lambda}_{n}^{(k)}}\), \(G(B_{k'},B_{k'})\) is independent of the rest of the RHS above, so we have \[\begin{align} &\mathbb{E}_{\widetilde{\Lambda}_{n}^{(k)}}\left( \left(\sum_{B_{k'} \Rightarrow B_{k} \to B_j } \frac{W_{\sigma}}{(2\beta)_{\sigma}}\right)^{s}\right) \\ & = c_s^{s} \mathbb{E}_{\widetilde{\Lambda}_{n}^{(k)}}\left( \left(W_{B_{k'},B_{k}} \sum_{\sigma':B_{k} \xrightarrow[]{\cancel{B_{k'}}} B_{k}} \frac{W_{\sigma'}}{(2\beta)_{\sigma'}} \sum_{\ell=k+1}^{n} W_{B_{k},B_{\ell}} \sum_{\sigma': B_{\ell} \xrightarrow[]{\cancel{B_{k'},B_{k}}} B_j} \frac{W_{\sigma'}}{(2\beta)_{\sigma'}}\right)^{s}\right) \end{align}\]

Next \[\begin{align} & \sum_{\sigma':B_{k} \xrightarrow[]{\cancel{B_{k'}}} B_{k}} \frac{W_{\sigma'}}{(2\beta)_{\sigma'}} \sum_{\ell=k+1}^{n} W_{B_{k},B_{\ell}} \sum_{\sigma': B_{\ell} \xrightarrow[]{\cancel{B_{k'},B_{k}}} B_j} \frac{W_{\sigma'}}{(2\beta)_{\sigma'}} \\ &\leq \sum_{\sigma':B_{k} \xrightarrow[]{} B_{k}} \frac{W_{\sigma'}}{(2\beta)_{\sigma'}} \sum_{\ell=k+1}^{n} W_{B_{k},B_{\ell}} \sum_{\sigma': B_{\ell} \xrightarrow[]{\cancel{B_{k'},B_{k}}} B_j} \frac{W_{\sigma'}}{(2\beta)_{\sigma'}}\\ & =G(B_{k},B_{k}) \sum_{\ell=k+1}^{n} W_{B_{k},B_{\ell}} \sum_{\sigma': B_{\ell} \xrightarrow[]{\cancel{B_{k'},B_{k}}} B_j} \frac{W_{\sigma'}}{(2\beta)_{\sigma'}} \end{align}\] and again \(G(B_{k},B_{k})\) is independent of the rest of the RHS above, so we have \[\begin{align} &\mathbb{E}_{\widetilde{\Lambda}_{n}^{(k)}}\left( \left(\sum_{\sigma':B_{k} \xrightarrow[]{\cancel{B_{k'}}} B_{k}} \frac{W_{\sigma'}}{(2\beta)_{\sigma'}} \sum_{\ell=k+1}^{n} W_{B_{k},B_{\ell}} \sum_{\sigma': B_{\ell} \xrightarrow[]{\cancel{B_{k'},B_{k}}} B_j} \frac{W_{\sigma'}}{(2\beta)_{\sigma'}} \right)^{s} \right) \\ & \leq c_s^{s} \sum_{\ell=k+1}^{n} W_{B_{k},B_{\ell}}^{s} \mathbb{E}_{\widetilde{\Lambda}_{n}^{(k)}}\left( \left( \sum_{\sigma': B_{\ell} \xrightarrow[]{\cancel{B_{k'},B_{k}}} B_j} \frac{W_{\sigma'}}{(2\beta)_{\sigma'}}\right)^{s} \right)\\ & \leq c_s^{s} \sum_{\ell=k+1}^{n} W_{B_{k},B_{\ell}}^{s} \mathbb{E}_{\widetilde{\Lambda}_{n}^{(k)}}\left( \left( \sum_{\sigma': B_{\ell} \xrightarrow[]{} B_j} \frac{W_{\sigma'}}{(2\beta)_{\sigma'}}\right)^{s} \right) \end{align}\] The last factor can be rewritten as \[\begin{align} \label{eq-Bl-deltan-Geu-Gij} \mathbb{E}_{\widetilde{\Lambda}_{n}^{(k)}}\left( \left(\sum_{\sigma':B_{\ell} \xrightarrow[]{} B_j} \frac{W_{\sigma'}}{(2 \beta)_{\sigma'}}\right)^{s} \right) & = \mathbb{E}_{\widetilde{\Lambda}_{n}^{(k)}}\left(G(B_j,B_{\ell})^{s}\right)\\ & = \mathbb{E}_{\widetilde{\Lambda}_{n}^{(k)}}(G(B_j,B_j)^{s}) 
\mathbb{E}_{\widetilde{\Lambda}_{n}^{(k)}}\left(e^{{s u^{(B_{j})}_{B_{\ell}}}}\right)\\ & = \begin{cases} \mathbb{E}_{\widetilde{\Lambda}_{n}^{(k)}}(G(B_j,B_j)^{s}) \mathbb{E}_{\widetilde{\Lambda}_{n}^{(\ell)}}\left(e^{{s u^{(B_{j})}_{B_{\ell}}}}\right) & \ell\leq j \\ \mathbb{E}_{\widetilde{\Lambda}_{n}^{(k)}}(G(B_j,B_j)^{s}) \mathbb{E}_{\widetilde{\Lambda}_{n}^{(j)}}\left(e^{{s u^{(B_{j})}_{B_{\ell}}}}\right) & \ell >j \end{cases} \end{align}\tag{31}\] by Corollary 1.

To summarize, our discussion of the first sum gives the following upper bound: \[\begin{align} \label{eq-3A-Gij} &\mathbb{E}_{\widetilde{\Lambda}_{n}^{(k)}}\left( \left(\sum_{B_{k'} \Rightarrow B_{k} \to B_j } \frac{W_{\sigma}}{(2\beta)_{\sigma}}\right)^{s}\right) \\ &\leq c_s^{{2 s}} W_{B_{k'},B_{k}}^{s} \sum_{\ell=k+1}^{j} W_{B_k,B_{\ell}}^{s} \mathbb{E}_{\widetilde{\Lambda}_{n}^{(k)}}(G(B_j,B_j)^{s}) \mathbb{E}_{\widetilde{\Lambda}_{n}^{(\ell)}}\left(e^{{s u^{(B_j)}_{B_{\ell}}}}\right) \\ & +c_s^{{2 s}} W_{B_{k'},B_{k}}^{s} \sum_{\ell=j+1}^{n} W_{B_k,B_{\ell}}^{s} \mathbb{E}_{\widetilde{\Lambda}_{n}^{(k)}}(G(B_j,B_j)^{s}) \mathbb{E}_{\widetilde{\Lambda}_{n}^{(j)}}\left(e^{{s u^{(B_j)}_{B_{\ell}}}}\right) \end{align}\tag{32}\]

For the second sum (over \(B_{k'}\Rightarrow B_{>k}\to B_j\)) in Eq 30 , we factorize the part of the paths from the beginning to the last visit of \(B_{k'}\), which results in a factor \(G(B_{k'},B_{k'})\), i.e. \[\begin{align} \sum_{B_{k'} \Rightarrow B_{>k} \to B_j} \frac{W_{\sigma}}{(2\beta)_{\sigma}} = G(B_{k'},B_{k'}) \sum_{\ell=k+1}^{n} W_{B_{k'},B_{\ell}} \sum_{\sigma':B_{\ell} \xrightarrow[]{\cancel{B_{k'}}} B_j} \frac{W_{\sigma'}}{(2 \beta)_{\sigma'}} \end{align}\] Under the law \(\mathbb{E}_{\widetilde{\Lambda}_{n}^{(k)}}\), \(G(B_{k'},B_{k'})\) is independent of \(\sum_{\sigma':B_{\ell} \xrightarrow[]{\cancel{B_{k'}}} B_j} \frac{W_{\sigma'}}{(2 \beta)_{\sigma'}}\) and it is equal in law to \(\frac{1}{2\gamma}\); therefore, \[\begin{align} &\mathbb{E}_{\widetilde{\Lambda}_{n}^{(k)}}\left(\left(\sum_{B_{k'} \Rightarrow B_{>k} \to B_j} \frac{W_{\sigma}}{(2\beta)_{\sigma}}\right)^{s}\right) \\ & \leq \sum_{\ell=k+1}^{n} W_{B_{k'},B_{\ell}}^{s} \mathbb{E}_{\widetilde{\Lambda}_{n}^{(k)}}\left(G(B_{k'},B_{k'})^{s}\right) \mathbb{E}_{\widetilde{\Lambda}_{n}^{(k)}}\left( \left(\sum_{\sigma':B_{\ell} \xrightarrow[]{\cancel{B_{k'}}} B_j} \frac{W_{\sigma'}}{(2 \beta)_{\sigma'}}\right)^{s} \right) \\ &\leq \sum_{\ell=k+1}^{n} W_{B_{k'},B_{\ell}}^{s} \mathbb{E}_{\widetilde{\Lambda}_{n}^{(k)}}\left(G(B_{k'},B_{k'})^{s}\right) \mathbb{E}_{\widetilde{\Lambda}_{n}^{(k)}}\left( \left(\sum_{\sigma':B_{\ell} \xrightarrow[]{} B_j} \frac{W_{\sigma'}}{(2 \beta)_{\sigma'}}\right)^{s} \right)\\ & = \sum_{\ell=k+1}^{n} W_{B_{k'},B_{\ell}}^{s} c_s^{s}\mathbb{E}_{\widetilde{\Lambda}_{n}^{(k)}}\left( \left(\sum_{\sigma':B_{\ell} \xrightarrow[]{} B_j} \frac{W_{\sigma'}}{(2 \beta)_{\sigma'}}\right)^{s} \right) \end{align}\] The factor \(\mathbb{E}_{\widetilde{\Lambda}_{n}^{(k)}}\left( \left(\sum_{\sigma':B_{\ell} \xrightarrow[]{} B_j} \frac{W_{\sigma'}}{(2 \beta)_{\sigma'}}\right)^{s} \right)\) was already discussed in Eq. 31 . 
To summarize, the second sum satisfies the following upper bound: \[\begin{align} \label{eq-4A-Gij} \mathbb{E}_{\widetilde{\Lambda}_{n}^{(k)}}\left(\left(\sum_{B_{k'} \Rightarrow B_{>k} \to B_j} \frac{W_{\sigma}}{(2\beta)_{\sigma}}\right)^{s}\right) & \leq \sum_{\ell=k+1}^{j} W_{B_{k'},B_{\ell}}^{s} c_s^{s} \mathbb{E}_{\widetilde{\Lambda}_{n}^{(k)}}(G(B_j,B_j)^{s}) \mathbb{E}_{\widetilde{\Lambda}_{n}^{(\ell)}}\left(e^{{s u^{(B_j)}_{B_{\ell}}}}\right) \\ & + \sum_{\ell=j+1}^{n} W_{B_{k'},B_{\ell}}^{s} c_s^{s} \mathbb{E}_{\widetilde{\Lambda}_{n}^{(k)}}(G(B_j,B_j)^{s}) \mathbb{E}_{\widetilde{\Lambda}_{n}^{(j)}}\left(e^{{s u^{(B_j)}_{B_{\ell}}}}\right) \end{align}\tag{33}\]
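The constant \(c_s^{s}=\mathbb{E}(G(B_{k'},B_{k'})^{s})\) used above is finite only for small \(s\): writing \(G(B_{k'},B_{k'})\) as \(\frac{1}{2\gamma}\) in law and assuming, as in the standard VRJP setting, \(\gamma\sim\operatorname{Gamma}(1/2,1)\) (this normalization is an assumption of the sketch), one gets the closed form \(\mathbb{E}((2\gamma)^{-s})=2^{-s}\,\Gamma(\tfrac12-s)/\Gamma(\tfrac12)\), finite precisely for \(s<\tfrac12\). A Monte Carlo sanity check:

```python
import math
import random

# Fractional moment of G = 1/(2*gamma) with gamma ~ Gamma(1/2, 1) (this exact
# normalization is an assumption of the sketch, not taken from the paper):
#   E((2*gamma)^(-s)) = 2^(-s) * Gamma(1/2 - s) / Gamma(1/2),
# finite precisely for s < 1/2 -- one reason the proof works with a small
# fractional power s rather than a full first moment.
s = 0.2
exact = 2 ** (-s) * math.gamma(0.5 - s) / math.gamma(0.5)

random.seed(1)
n_samples = 200_000
mc = sum((2.0 * random.gammavariate(0.5, 1.0)) ** (-s)
         for _ in range(n_samples)) / n_samples

assert abs(mc - exact) / exact < 0.05  # Monte Carlo agrees with the closed form
```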

Combining Eqs 29 , 30 , 32 , 33 , we have \[\begin{align} \label{eq-recursion-fmm-Gij} & \mathbb{E}_{\widetilde{\Lambda}_{n}^{(k)}}(e^{{s u^{(B_{j})}_{B_{k'}}}}) \\ & \leq c_s^{{2 s}} W_{B_{k'},B_{k}}^{s} \sum_{\ell=k+1}^{n} W_{B_k,B_{\ell}}^{s} \mathbb{E}_{\widetilde{\Lambda}_{n}^{(k)}}(G(B_j,B_j)^{s}) \mathbb{E}_{\widetilde{\Lambda}_{n}^{(\ell)}}\left(e^{{s u^{(B_j)}_{B_{\ell}}}}\right) \\ & + \sum_{\ell=k+1}^{n} W_{B_{k'},B_{\ell}}^{s} c_s^{s} \mathbb{E}_{\widetilde{\Lambda}_{n}^{(k)}}(G(B_j,B_j)^{s}) \mathbb{E}_{\widetilde{\Lambda}_{n}^{(\ell)}}\left(e^{{s u^{(B_j)}_{B_{\ell}}}}\right) \\ & = \left(1+ c_s^{s} W_{B_{k'},B_{k}}^{s}\right) c_s^{s} \left(\sum_{\ell=k+1}^{j} W_{B_{k},B_{\ell}}^{s} \mathbb{E}_{\widetilde{\Lambda}_{n}^{(\ell)}}(e^{{s u^{(B_j)}_{B_{\ell}}}})+ \sum_{\ell=j+1}^{n} W_{B_{k},B_{\ell}}^{s} \mathbb{E}_{\widetilde{\Lambda}_{n}^{(j)}}(e^{{s u^{(B_j)}_{B_{\ell}}}})\right). \end{align}\tag{34}\] This recursive inequality is the counterpart of Proposition 1 in the case of \(G(i,j)\).

To summarize, consider the integer interval \(\{k',k,k+1,k+2,\ldots,j,\ldots,n\}\). The fractional moment of \(G(B_{k'},B_{j})=G(B_{j},B_{j}) e^{u^{(B_{j})}_{B_{k'}}}\) on the graph \(\widetilde{\Lambda}_{n}^{(k)}\) can be bounded by summing over the fractional moments of \(G(B_{i_1},B_{j})^{s} = G(B_{j},B_{j})^{s} e^{s u^{(B_{j})}_{B_{i_1}}}\), taken on the graph \(\widetilde{\Lambda}_{n}^{(i_1)}\), for \(i_1\) between \(k+1\) and \(j\) (this graph is more coarse-grained than \(\widetilde{\Lambda}_{n}^{(k)}\), so entropy is reduced), and of \(G(B_{j},B_{i_2})^{s} =G(B_{j},B_{j})^{s} e^{s u^{(B_{j})}_{B_{i_2}}}\), taken on the graph \(\widetilde{\Lambda}_{n}^{(j)}\), for \(i_2\) between \(j+1\) and \(n\), as shown in Figure 4.

We can then apply the bound Eq. 34 iteratively to the terms \(G(B_{i_1},B_{j}) = G(B_{j},B_{j}) e^{u^{(B_{j})}_{B_{i_1}}}\) and \(G(B_{j},B_{i_2}) =G(B_{j},B_{j}) e^{u^{(B_{j})}_{B_{i_2}}}\).

To derive the following bound, imagine two particles, \(A\) (starting at \(k'\)) and \(B\) (starting at \(j\)). Particle \(A\) moves to the right first:

  • If \(A\) jumps to a position \(k_1 < j\), we set \(A\)’s new position to \(k_1\), and \(A\) keeps moving.

  • If \(A\) jumps to a position \(k_2 > j\), we set \(A\)’s new position to \(k_2\), and now \(B\) makes the next jump.

  • If \(B\) jumps over \(A\), then we update the position of \(B\) and let \(A\) make the next jump; otherwise \(B\) continues to jump forward.

These two particles continue to jump (in the above alternating manner) until both particles reach the same position \(m\). Summing over all such possible sequences of jumps gives us \[\sum_{\substack{k=k_0<k_1<\cdots <k_{\ell_{1}}=m \\ j=j_0<j_{1}<\cdots < j_{\ell_{2}}=m\\ \{k_{i}\}_{i=0}^{\ell_{1}}\cap \{j_{i}\}_{i=0}^{\ell_{2}}=\{m\}}}\] where \(\{k_0, \dots, k_{\ell_1}\}\) are the successive positions of \(A\), and \(\{j_0, \dots, j_{\ell_2}\}\) are the successive positions of \(B\). The only common position is \(m\), where they finally meet.

Figure 4: Illustration of the inequality 34 , where the fractional moment of G(B_{k'},B_j) is bounded by a sum over G(B_{i_1},B_j) and G(B_j,B_{i_2}).

By successive iterations we arrive at a bound summing over all the possible ways in which \(k_0, k_1 ,\cdots,k_{\ell_{1}}\) are the jumps of \(A\) and \(j_0,j_1 ,\cdots,j_{\ell_{2}}\) are the jumps of \(B\), the jumps ending once \(A\) and \(B\) are at the same position \(m\). As a result we obtain the bound \[\begin{align} &\mathbb{E}_{\widetilde{\Lambda}_{n}^{(k')}}(e^{{s u^{(B_{j})}_{B_{k}}}}) \\ & \leq \left(1+ c_s^{s} W_{B_{k'},B_{k}}^{s}\right) c_s^{s} \left(\sum_{k_1=k+1}^{j} W_{B_{k},B_{k_1}}^{s} \mathbb{E}_{\widetilde{\Lambda}_{n}^{(k_1)}}(e^{{s u^{(B_{j})}_{B_{k_1}}}})+ \sum_{k_1=j+1}^{n} W_{B_{k},B_{k_1}}^{s} \mathbb{E}_{\widetilde{\Lambda}_{n}^{(j)}}(e^{{s u^{(B_{j})}_{B_{k_1}}}})\right) \\ & \vdots \\ & \leq \sum_{m=j}^{n} \sum_{\substack{k=k_0<k_1<\cdots <k_{\ell_{1}}=m \\ j=j_0<j_{1}<\cdots < j_{\ell_{2}}=m\\ \{k_{i}\}_{i=0}^{\ell_{1}}\cap \{j_{i}\}_{i=0}^{\ell_{2}}=\{m\}}} \prod_{i=0}^{\ell_{1}-1} \left[\left(1+ c_s^{s} W_{B_{k_i'},B_{k_i}}^{s}\right)c_s^{s} W_{B_{k_i},B_{k_{i+1}}}^{s}\right]\prod_{i=0}^{\ell_{2}-1} \left[\left(1+ c_s^{s} W_{B_{j_i'},B_{j_i}}^{s}\right)c_s^{s} W_{B_{j_i},B_{j_{i+1}}}^{s}\right] \end{align}\] Then we upper bound this sum by summing over the \(\{k_0 ,\cdots,k_{\ell_{1}}\}\) and \(\{j_0 ,\cdots,j_{\ell_{2}}\}\) without the condition that the only intersection is \(m\): \[\begin{align} & \leq \sum_{m=j}^{n} \sum_{\substack{k=k_0<k_1<\cdots <k_{\ell_{1}}=m \\ j=j_0<j_{1}<\cdots < j_{\ell_{2}}=m}} \prod_{i=0}^{\ell_{1}-1} \left[\left(1+ c_s^{s} W_{B_{k_i'},B_{k_i}}^{s}\right)c_s^{s} W_{B_{k_i},B_{k_{i+1}}}^{s}\right]\prod_{i=0}^{\ell_{2}-1} \left[\left(1+ c_s^{s} W_{B_{j_i'},B_{j_i}}^{s}\right)c_s^{s} W_{B_{j_i},B_{j_{i+1}}}^{s}\right]\\ & \leq \sum_{m=j}^{n} \left(\sum_{{k=k_0<k_1<\cdots <k_{\ell_{1}}=m }} \prod_{i=0}^{\ell_{1}-1} \left[\left(1+ c_s^{s} W_{B_{k_i'},B_{k_i}}^{s}\right)c_s^{s} W_{B_{k_i},B_{k_{i+1}}}^{s}\right]\right) \\ &\times \left(\sum_{ j=j_0<j_{1}<\cdots < j_{\ell_{2}}=m}\prod_{i=0}^{\ell_{2}-1} \left[\left(1+ c_s^{s} 
W_{B_{j_i'},B_{j_i}}^{s}\right)c_s^{s} W_{B_{j_i},B_{j_{i+1}}}^{s}\right]\right). \end{align}\] Now, plugging in Eq. 22 , we have, when \(d<2\), \[\begin{align} \mathbb{E}_{\widetilde{\Lambda}_{n}^{(k')}}(e^{{s u^{(B_{j})}_{B_{k}}}}) &\leq \sum_{m=j}^{n} C(\overline{W},d,s) \rho^{- {s (m-k)}} C(\overline{W},d,s) \rho^{- {s (m-j)}}\\ & = C(\overline{W},d,s)^2 \rho^{{s (k-j)}} \frac{1-\rho^{-{2 s}(n-j+1)}}{1-\rho^{-{2 s}}}. \end{align}\] Choosing \(k=0\), we conclude with \(C''(\overline{W},d,s) = C(\overline{W},d,s)^2 \frac{1}{1-\rho^{-{2 s}}}\).
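The relaxation in the chain above, dropping the constraint that the two jump sequences meet only at \(m\), only adds nonnegative terms, so it can only increase the sum. A toy enumeration (with an illustrative step weight standing in for the \((1+c_s^{s}W^{s})c_s^{s}W^{s}\) factors) checking that the constrained double sum is dominated by the factorized product:

```python
from itertools import combinations

# Toy check of the relaxation above: dropping the constraint that the two
# increasing jump sequences (of particles A and B) intersect only at m can
# only increase the sum, since every term is nonnegative.

def sequences(start, end):
    """All increasing sequences start = x_0 < ... < x_l = end."""
    interior = list(range(start + 1, end))
    for r in range(len(interior) + 1):
        for mid in combinations(interior, r):
            yield (start,) + mid + (end,)

def weight(seq, a):
    p = 1.0
    for x, y in zip(seq, seq[1:]):
        p *= a(x, y)
    return p

a = lambda x, y: 0.5 ** (y - x)  # geometrically decaying toy step weight
k, j, m = 0, 3, 6

# constrained double sum: the two sequences share only the meeting point m
lhs = sum(weight(s1, a) * weight(s2, a)
          for s1 in sequences(k, m) for s2 in sequences(j, m)
          if set(s1) & set(s2) == {m})
# factorized upper bound: the two sums taken independently
rhs = (sum(weight(s1, a) for s1 in sequences(k, m))
       * sum(weight(s2, a) for s2 in sequences(j, m)))

assert 0 < lhs <= rhs
```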

When \(d=2\), we use Eq. 22 and again choose \(\overline{W}\) small enough that Eq. 23 holds; we then conclude as in the proof of Theorem 2, with \(c(\overline{W},s)\) in place of \(\rho\).  


Acknowledgment We thank M. Disertori for helpful discussions on the first draft of this manuscript. We also acknowledge the support of the Institut de recherche en mathématiques, interactions & applications: IRMIA\(++\).