October 02, 2025
Let \(X : \mathbb{Z}^2 \to \mathbb{R}\) be a random field which we interpret as a random height function describing a landscape of mountains. We consider a source of light (a sun) located at infinity in a direction parallel to an axis of \(\mathbb{Z}^2\) and emitting rays which are all parallel and make a slope \(\ell\in \mathbb{R}\) with the horizontal plane. Given the value of \(\ell\in \mathbb{R}\), some mountains of the landscape will be lit by the sun while others will be in the shadow of some higher mountain. Under some assumptions on \(X\), including an independence assumption, we prove that this model may present two distinct phases depending on \(\ell\). When the slope \(\ell>0\) is very small, almost surely there exists an unbounded cluster of points in the shadow. However, if \(\ell\) is large enough, almost surely there exists an unbounded cluster of points lit by the sun. We reformulate this problem in terms of percolation of a field \(\alpha : \mathbb{Z}^2 \to \overline{\mathbb{R}}\) which has a simple definition but lacks many of the nice properties usually found in percolation models, such as the FKG inequality, invariance under rotation or finite range correlations.
Figure 1: Illustration of a configuration \(\alpha\) on the hexagonal lattice when the entries of \(X\) are independent standard Gaussian random variables. In black, \(\alpha>\ell\). In white, \(\alpha<\ell\). In red, the largest component of \(\alpha>\ell\). In blue, the largest component of \(\alpha<\ell\). (a) \(\ell=0.4\); (b) \(\ell=0.5\).
In this paper we introduce a new percolation model. Consider a planar discrete field \(X : \mathbb{Z}^2 \to \mathbb{R}\). One may interpret this field as a height function: at each point \(u\) of the discrete plane \(\mathbb{Z}^2\) there is a mountain of height \(X(u)\). Consider a sun located at infinity in the direction \(e_1=(1,0)\). We suppose that the sun emits rays that are all parallel and make a slope \(\ell\in \mathbb{R}\) with respect to the horizontal plane. Given this source of light, observers located at the top of each mountain of our landscape may see the sun or be in the shadow of some other mountain located between them and the sun.
Formally, we say that an observer located at \(u\in \mathbb{Z}^2\) can see the sun of slope \(\ell\in \mathbb{R}\) if and only if \[\label{a4:eq:cond95light} \forall r\in \mathbb{N}^*,\;X(u)+r\ell \geq X(u+re_1).\tag{1}\]
It appears that this problem is related to percolation. Indeed, let \(\alpha : \mathbb{Z}^2 \to \overline{\mathbb{R}}\) be the field defined as follows: \[\label{a4:eq:def95alpha95general} \forall u\in \mathbb{Z}^2,\;\alpha(u):= \sup_{r\in \mathbb{N}^*}\frac{X(u+re_1)-X(u)}{r} \in \overline{\mathbb{R}}.\tag{2}\] A key observation is that for any \(\ell\in \mathbb{R}\), \[\label{a4:eq:equiv95alpha} \alpha(u)\leq \ell \Leftrightarrow \forall r\in \mathbb{N}^*,\;X(u)+r\ell \geq X(u+re_1).\tag{3}\] Therefore, understanding which observers see the sun at slope \(\ell\) is equivalent to understanding the excursion set of \(\alpha\), that is, \[\label{a4:eq:alpha95excursion} \{\alpha \leq \ell\} := \{u\in \mathbb{Z}^2\;|\;\alpha(u)\leq \ell\}.\tag{4}\] This is a reformulation of our problem in terms of percolation. Indeed, in standard percolation theory one takes a domain \(S\) (for instance \(S=\mathbb{Z}^2\) in our case), samples a field \(f : S \to \mathbb{R}\) according to a certain probability law, and then studies the properties of the random subsets \(\{f\leq \ell\}\) as \(\ell\) ranges over \(\mathbb{R}\). There exists a variety of ways of sampling \(f\), each giving its own percolation model. The most famous model is certainly the Bernoulli percolation model on the edges of \(\mathbb{Z}^2\). This model has been heavily studied (see [1], [2], [3], [4] for a far from complete list of references). In this model each edge \(e\) of \(\mathbb{Z}^2\) is open with some probability \(p\) (and \(f(e)=1\)) and is closed otherwise (and \(f(e)=0\)), all edges being independent of one another. From this model one can obtain several variants by playing in various directions: replace the graph \(\mathbb{Z}^2\) by another graph or lattice, consider site percolation instead of edge percolation, or introduce some correlations between edges (there are again various ways to do that). It is also possible to do percolation in the continuum.
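The equivalence 3 is easy to check numerically. Below is a minimal illustrative sketch (ours, not from the paper): it truncates the supremum in 2 to a finite range \(R\), which only lower-bounds the true \(\alpha\), and verifies that the truncated versions of both sides of 3 agree. All function names are ours.

```python
import random

def alpha_truncated(X, u, R):
    """Truncated version of (2): max over 1 <= r <= R of (X[u+r] - X[u]) / r.
    The true field takes the sup over all r >= 1, so this is only a
    lower bound that improves as R grows."""
    x0 = X[u]
    return max((X[u + r] - x0) / r for r in range(1, R + 1))

def sees_sun(X, u, ell, R):
    """Condition (1), restricted to the first R mountains toward the sun."""
    return all(X[u] + r * ell >= X[u + r] for r in range(1, R + 1))

random.seed(0)
R = 50
X = [random.gauss(0.0, 1.0) for _ in range(R + 1)]  # heights on one row
ell = 0.5
# Equivalence (3), with both sides truncated at the same range R:
assert (alpha_truncated(X, 0, R) <= ell) == sees_sun(X, 0, ell, R)
```

The equivalence holds exactly at any fixed truncation range, since both sides quantify over the same set of \(r\).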
A first natural example is Poisson percolation, or Poisson Voronoï percolation (see for instance [5], [6], [7]). Another example in the continuum is the study of the level sets of Gaussian fields (see for instance [8], [9], [10], [11], [12]). Although most of these models largely differ from one another, there are still some similarities between some of them.
The domains \(S\) on which percolation models are defined present natural symmetries. For instance the lattice \(\mathbb{Z}^2\) has both horizontal and vertical symmetries and is invariant under rotation by \(\pi/2\). The other lattices frequently found in the literature are also rich in symmetries (for instance the hexagonal lattice or the Union Jack lattice). In the continuum the set \(\mathbb{R}^d\) also presents such symmetries.
Most of the models used in percolation require the random field \(f\) to have a law that preserves certain symmetries of the domain. For instance, in Bernoulli percolation it is clear that if one rotates a random configuration by \(\pi/2\), one obtains another configuration which has the same law as the original one.
It is usually the case that the law of the random function \(f\) is required to be stationary. This means that if one looks at the restriction of the function \(f\) to a subset \(S_1 \subset S\) and to a translated version \(S_2\) of \(S_1\), then these two restrictions \(f_{|S_1}\) and \(f_{|S_2}\) must have the same law (of course this implies that the set \(S\) admits a good notion of translation, which is the case for \(\mathbb{Z}^2\), \(\mathbb{R}^d\), etc.).
Some of the usual models in percolation have an independence property, meaning that if two points are far enough from one another then the values of \(f\) at these two points are independent.
In the case where we lack the independence property, some models still enjoy the property that conditioning the random field \(f\) on some of its values is well behaved. This is the case for Bernoulli percolation (since all entries of \(f\) are independent) but also, for instance, when \(f\) is a correlated Gaussian field.
This property is more technical: it states that two increasing events (events that are favored by an increase of the random field \(f\)) must be positively correlated. This FKG property is a crucial tool that can be used in various arguments. Note that it holds for Bernoulli percolation but also for the percolation of Gaussian fields whose correlation matrix has positive entries.
In the following we will consider a field \(X\) which satisfies the following assumption.
Assumption 1. There exists a probability space \((\Omega, \mathcal{A}, \mathbb{P})\) on which is defined a collection \((X(u))_{u\in \mathbb{Z}^2}\) of real random variables which are independent and identically distributed according to \(\mu\), a probability distribution on \(\mathbb{R}\).
In particular, Assumption 1 implies that \(X\) enjoys all the nice properties listed above. However, most of these properties do not transfer to the field \(\alpha\) defined by 2 . In fact, under Assumption 1 the field \(\alpha\) is stationary, but beyond this property not much is left. Indeed, among the symmetries of \(\mathbb{Z}^2\) that preserve the law of \(\alpha\), only the reflection along the horizontal axis remains; the law of the field \(\alpha\) is not invariant under rotation. The field \(\alpha\) has long range correlations and does not behave well under conditioning. Moreover, numerical experiments suggest that the field \(\alpha\) does not have the FKG property.
In this paper, we study this field \(\alpha\) and show that although it lacks many of the nice usual properties used in percolation, it still presents a phase transition when \(\ell\) varies. More precisely, under some assumptions on the law \(\mu\) of Assumption 1, we show that when \(\ell>0\) is small enough, almost surely there exists a unique unbounded connected component of points in the shadow (that is, in \(\{\alpha \geq \ell\}\)). We also show that when \(\ell>0\) is large enough, almost surely there exists a unique unbounded connected component of points lit by the sun (that is, in \(\{\alpha \leq \ell\}\)).
We now present our assumptions on the law \(\mu\).
Assumption 2. The law \(\mu\) admits a density \(\varphi \in L^1(\mathbb{R})\) with respect to the Lebesgue measure on \(\mathbb{R},\) that is \[\forall a\leq b,\;\mu([a,b])=\int_a^b \varphi(x)dx.\]
Assumption 3. There exist constants \(c,C>0\) such that \[\forall x\in \mathbb{R}_+,\;\mu(\mathbb{R}\setminus [-x,x])\leq Ce^{-cx}.\]
Our main result concerns site percolation on \(\mathbb{Z}^2\) for the field \(\alpha\).
Theorem 1. Let \(X : \mathbb{Z}^2 \to \mathbb{R}\) be such that Assumption 1 holds and let \(\alpha\) be defined as in 2 . We denote by \(\mu\) the law of \(X(0)\). In the context of site percolation on \(\mathbb{Z}^2\) the following holds.
If \(\mu\) satisfies Assumption 2, there exists \(\ell_1>0\) such that, if \(\ell<\ell_1\) then,
almost surely, the set \(\{\alpha\leq \ell\}\) does not present an infinite cluster,
almost surely, the set \(\{\alpha \geq\ell\}\) presents an infinite cluster.
If \(\mu\) satisfies Assumption 3, there exists \(\ell_2<\infty\) such that, if \(\ell>\ell_2\) then,
almost surely, the set \(\{\alpha\leq \ell\}\) presents an infinite cluster,
almost surely, the set \(\{\alpha \geq\ell\}\) does not present an infinite cluster.
Theorem 1 is a first result toward understanding the phase transition of this new model. Although Theorem 1 is stated for a field \(X\) defined on \(\mathbb{Z}^2\), the proof would work exactly the same if we replaced \(\mathbb{Z}^2\) by other lattices such as the triangular lattice or the Union Jack lattice (with an appropriate adjustment of the definition 2 of \(\alpha\)). For various usual percolation models (such as the Bernoulli percolation model and Gaussian field percolation, for instance), the phase transition is proven to be sharp, that is, the map associating with the level \(\ell\) the probability that a big rectangle is crossed by \(\{f\leq \ell\}\) converges to a Heaviside function when the size of the rectangle goes to infinity. In particular this implies the existence of a critical parameter \(\ell_c\in \mathbb{R}\) such that if \(\ell<\ell_c\) then \(\{f\leq \ell\}\) has no infinite cluster but \(\{f\geq \ell\}\) has one, and the converse holds when \(\ell>\ell_c\). It is still an open question whether, on the hexagonal lattice for instance, one may take \(\ell_1=\ell_2\) in Theorem 1 (which would prove the existence of such a critical parameter). Such a behaviour is suggested by numerical experiments, see Figure 1. It is also an open question whether or not the phase transition is sharp. Indeed, most strategies to prove sharpness of a phase transition rely either on the FKG inequality (see for instance [13], [6], [14]), or on symmetries and invariances of the law of the field, notably invariance under rotation (see [15], [16]). Unfortunately, the field \(\alpha\) lacks these properties, and the usual arguments cannot be directly applied. Finally, we note that the model also makes sense when the \(X(u)\) are no longer independent but correlated. In future work we plan to address this question.
We split the proof of Theorem 1 into two parts. In Section 2 we study the regime of large \(\ell\), that is, the sun is very high and a large proportion of the mountain peaks should be lit by the sun. In this context a natural idea is to stochastically dominate the set \(\{\alpha\geq \ell\}\) by a collection of independent Bernoulli random variables of small parameter. This is a known strategy (see [17] for results on stochastic domination). However, since our field \(\alpha\) presents long range correlations, one would first need to define a truncated version of \(\alpha\) and study how close it is to the actual field \(\alpha\). We prefer to work directly with the definition of the field \(\alpha\). Our proof relies on understanding what constraints apply to the field \(X\) when we ask a few points to be in the shadow even when \(\ell\) is large. In Section 3 we study the regime of small \(\ell\), that is, when the rays of the sun are almost parallel to the horizontal plane. We use the intuition that if the slope is \(\ell=0\), then in order for \(n\) points aligned in the direction of the sun to be all lit by the sun (forgetting what is before them), those \(n\) mountains must have heights in a very specific order (they must be ordered by descending height, the highest mountain in the back and the smallest mountain in front). We show that this ordering has very small probability, and we then extend this reasoning to \(\ell>0\) small enough. In Appendix 4, we tackle another problem, namely understanding how much information about \(X\) is stored in the field \(\alpha\). We show that, under some assumptions, almost surely the field \(X\) may be reconstructed from the field \(\alpha\) up to an additive constant.
We introduce some notation. We work on the square lattice \(\mathbb{L}=\mathbb{Z}^2\) (this lattice could be replaced by other lattices such as the triangular lattice or the Union Jack lattice, but we choose \(\mathbb{Z}^2\) for simplicity). We say that two points \(u=(u_1,u_2)\), \(v=(v_1,v_2)\) of \(\mathbb{L}\) are orthogonally adjacent (and we write \(u\sim v\)) if \(|u_1-v_1|+|u_2-v_2|=1\). We say that these two points are \(\star\)-connected (and we write \(u\sim_\star v\)) if \(\max(|u_1-v_1|,|u_2-v_2|)=1\). Given two vertices \(u,v\in \mathbb{L}\), an orthogonally connected path (resp. \(\star\)-connected path) between the two vertices \(u,v\) is the data of an integer \(n\geq 1\) together with a sequence \(a_0,a_1,\dots,a_{n-1}\) of vertices of \(\mathbb{L}\) such that \(a_0=u\), \(a_{n-1}=v\), and for all \(1\leq i \leq n-1\), \(a_{i-1}\sim a_{i}\) (resp. \(a_{i-1}\sim_\star a_{i}\)). The integer \(n\) is called the length of the path. Moreover, a path is said to not self-intersect if the map \(i\mapsto a_i\) is injective. Given a subset \(A\subset \mathbb{L}\) we write \(u\overset{A}{\longleftrightarrow} v\) (resp. \(u\overset{A}{\underset{\star}{\longleftrightarrow}} v\)) if there exists at least one orthogonally connected path (resp. \(\star\)-connected path) between \(u\) and \(v\) that only uses vertices in \(A\). The orthogonally connected component (resp. \(\star\)-connected component) in \(A\) of a vertex \(u\) can be written as \(C_A(u):=\{v\in \mathbb{L}\;|\;u\overset{A}{\longleftrightarrow}v\}\) (resp. \(C^\star_A(u) :=\{v\in \mathbb{L}\;|\;u\overset{A}{\underset{\star}{\longleftrightarrow}}v\}\)). We write \(u\overset{A}{\longleftrightarrow}\infty\) if \(C_A(u)\) is unbounded.
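For concreteness, the components \(C_A(u)\) and \(C^\star_A(u)\) of a finite set \(A\) can be computed by breadth-first search. The following sketch (ours, purely illustrative) encodes the two adjacency relations defined above.

```python
from collections import deque

# Neighborhoods for the two adjacency relations: u ~ v and u ~* v.
ORTHO = [(1, 0), (-1, 0), (0, 1), (0, -1)]
STAR = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]

def component(A, u, star=False):
    """Connected component C_A(u) (or C*_A(u) if star=True) of the vertex u
    inside the finite vertex set A, computed by breadth-first search."""
    if u not in A:
        return set()
    moves = STAR if star else ORTHO
    seen, queue = {u}, deque([u])
    while queue:
        x, y = queue.popleft()
        for dx, dy in moves:
            v = (x + dx, y + dy)
            if v in A and v not in seen:
                seen.add(v)
                queue.append(v)
    return seen

# A diagonal pair is *-connected but not orthogonally connected:
A = {(0, 0), (1, 1)}
assert component(A, (0, 0)) == {(0, 0)}
assert component(A, (0, 0), star=True) == {(0, 0), (1, 1)}
```

The distinction between the two relations matters below: circuits blocking an orthogonal cluster only need to be \(\star\)-connected.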
Given a fixed level \(\ell\in \mathbb{R}\) we want to understand the behavior of the sets \[\begin{align} \{\alpha \leq \ell\} &:= \{u\in \mathbb{Z}^2\;|\;\alpha(u)\leq \ell\}, \\ \{\alpha \geq \ell\} &:= \{u\in \mathbb{Z}^2\;|\;\alpha(u)\geq \ell\}. \end{align}\] In particular, for a fixed realization of the field \(\alpha\), we see that the map \(\ell \mapsto \{\alpha \leq \ell\}\) is non decreasing in \(\ell\) for set inclusion, whereas \(\ell\mapsto \{\alpha \geq \ell\}\) is non increasing in \(\ell\). Denote by \(\Theta_{\leq \ell}\) and \(\Theta_{\geq \ell}\) the two following quantities. \[\begin{align} \Theta_{\leq \ell} := \mathbb{P}\left( 0\overset{\{\alpha\leq \ell\}}{\longleftrightarrow}\infty \right), \\ \Theta_{\geq \ell} := \mathbb{P}\left( 0\overset{\{\alpha \geq \ell\}}{\longleftrightarrow}\infty \right). \end{align}\] Then the map \(\ell \mapsto \Theta_{\leq \ell}\) is non decreasing in \(\ell\) whereas \(\ell\mapsto \Theta_{\geq \ell}\) is non increasing in \(\ell\). We define two critical parameters, \[\begin{align} \ell_c^\leq := \sup\{\ell \in \mathbb{R}\;|\;\Theta_{\leq \ell}=0\}\in \overline{\mathbb{R}},\\ \ell_c^\geq := \inf\{\ell \in \mathbb{R}\;| \;\Theta_{\geq \ell}=0\}\in \overline{\mathbb{R}}, \end{align}\] with the usual conventions \(\sup\emptyset=-\infty\) and \(\inf\emptyset =+\infty.\) We remark that Theorem 1 can be reformulated as
Theorem 2. Under Assumptions 1 and 2, we have \(0<\ell_c^\leq\) and \(0<\ell_c^\geq\). Under Assumptions 1 and 3, we have \(\ell_c^\leq<\infty\) and \(\ell_c^\geq<\infty\).
I am grateful to my PhD advisor Damien Gayet, who first presented this model to me and suggested that I study it, and who also reviewed a first version of this manuscript. I would also like to thank Florent Gimbert, who suggested that such models could also be of interest for the understanding of glacier sliding.
The main result of this section is Proposition 3, which addresses the case of large \(\ell\).
Proposition 3. Let \(X : \mathbb{Z}^2 \to \mathbb{R}\) and \(\mu\) be such that Assumption 1 holds. Assume that \(\mu\) satisfies Assumption 3. For any \(\varepsilon>0\) there exists \(0<\ell_0(\varepsilon)<\infty\) such that if \(\ell>\ell_0(\varepsilon)\) the following holds. For any finite subset \(A\subset \mathbb{Z}^2\) we have \[\mathbb{P}\left( \forall u\in A,\;\alpha(u)\geq \ell \right)\leq \varepsilon^{|A|},\] where \(|A|\) denotes the cardinality of \(A\).
Remark 4. In the special case where \(\mu\) is the probability distribution of a standard Gaussian random variable (that is \(\mu(dx)=\frac{1}{\sqrt{2\pi}}e^{-x^2/2}dx\)) then \(\ell\) can be chosen explicitly in terms of \(\varepsilon\). Indeed, in this case it can be checked that for all \(\ell>0\), for any finite subset \(A\subset \mathbb{Z}^2\), \[\mathbb{P}\left( \forall u\in A,\;\alpha(u)\geq \ell \right)\leq \left(\frac{q(2-q)}{1-q}\right)^{|A|},\] where \(q=e^{-\ell^2/4}\).
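A quick Monte Carlo sanity check of the \(|A|=1\) case of this bound is possible. The sketch below is ours and purely illustrative: it truncates the supremum in 2 at range \(R\), which can only miss shadow events, so the empirical frequency it returns underestimates \(\mathbb{P}(\alpha(0)\geq \ell)\); for \(\ell=2\) the bound \(q(2-q)/(1-q)\approx 0.95\) leaves ample slack.

```python
import math
import random

def shadow_prob_estimate(ell, R=200, trials=20000, seed=0):
    """Monte Carlo frequency of {alpha(0) >= ell} for i.i.d. standard
    Gaussian heights, with the sup in (2) truncated at range R.
    Truncation can only miss shadow events, so this underestimates."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        y0 = rng.gauss(0.0, 1.0)
        if any(rng.gauss(0.0, 1.0) - y0 >= r * ell for r in range(1, R + 1)):
            hits += 1
    return hits / trials

ell = 2.0
q = math.exp(-ell * ell / 4)
bound = q * (2 - q) / (1 - q)   # the |A| = 1 case of Remark 4, ~0.95
assert shadow_prob_estimate(ell) <= bound
```

The estimate is far below the bound here (the bound is far from tight for a single point; its strength is that it tensorizes over \(A\)).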
By the classical Peierls argument (see also [18] and [19]), Proposition 3 implies \(\ell_c^\leq<\infty\) and \(\ell_c^\geq<\infty\).
Corollary 1. Let \(X : \mathbb{Z}^2 \to \mathbb{R}\) and \(\mu\) be such that Assumption 1 holds. Assume that \(\mu\) satisfies Assumption 3. Then, \(\ell_c^\leq<\infty\) and \(\ell_c^\geq<\infty\).
The proof is completely classical and we provide it for the convenience of the reader.
Proof of Corollary 1. Let \(\varepsilon>0\) be a small parameter and let \(\ell_0\) be associated to it by Proposition 3. Let \(\ell>\ell_0\) and \(n\in \mathbb{N}^*\).
Let us first show that Proposition 3 implies that \(\ell_c^\geq<\infty\). In \(\mathbb{Z}^2\) there are at most \(4^n\) orthogonally connected paths of length \(n\) which do not self-intersect and start from \((0,0)\). For the event \(\{0\overset{\{\alpha\geq \ell\}}{\longleftrightarrow}\infty\}\) to occur, there must be at least one such path of length \(n\) such that \(\alpha(u)\geq \ell\) for all \(u\) in this path. However, for a fixed path \(\gamma\) of length \(n\), the probability that \(\alpha(u)\geq \ell\) for all \(u\) in \(\gamma\) is bounded from above by \(\varepsilon^n\) (by Proposition 3). A union bound over the at most \(4^n\) such paths gives \[\label{a4:eq:p1} \forall n\geq 1,\;\mathbb{P}\left( 0\overset{\{\alpha\geq \ell\}}{\longleftrightarrow}\infty \right)\leq \left(4\varepsilon\right)^n.\tag{5}\] By choosing \(\varepsilon <\frac{1}{4}\) and letting \(n\) go to infinity in 5 we get \[\forall \ell> \ell_0,\;\mathbb{P}\left( 0\overset{\{\alpha\geq \ell\}}{\longleftrightarrow}\infty \right)=0.\] This shows \(\ell_c^\geq<\infty.\)
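The crude count of \(4^n\) used in the union bound can be checked by brute-force enumeration for small \(n\) (an illustrative sketch, ours; in fact \(4\cdot 3^{n-2}\) already bounds the count for \(n\geq 2\), since after the first step a self-avoiding path never backtracks).

```python
def count_self_avoiding(n):
    """Number of self-avoiding orthogonally connected paths of length n
    (n vertices, hence n - 1 steps) starting at the origin of Z^2."""
    steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]

    def extend(path):
        if len(path) == n:
            return 1
        x, y = path[-1]
        total = 0
        for dx, dy in steps:
            v = (x + dx, y + dy)
            if v not in path:  # self-avoidance constraint
                total += extend(path + [v])
        return total

    return extend([(0, 0)])

# The crude bound 4^n used in the union bound (5):
for n in range(1, 8):
    assert count_self_avoiding(n) <= 4 ** n
```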
Let us now show that Proposition 3 implies that \(\ell_c^\leq<\infty\). In fact, for the event \(\{0\overset{\{\alpha < \ell\}}{\longleftrightarrow}\infty\}\) not to occur, there must exist some \(n\geq 1\) and some non self-intersecting circuit \(\gamma\) of length \(n\) separating \(0\) from infinity such that \(\alpha(u)\geq \ell\) for all \(u\) in \(\gamma\). Note that for such a circuit to disconnect \(0\) from \(\infty\), it does not need to be orthogonally connected, but it must be \(\star\)-connected. Therefore, the number of such circuits of length \(n\) is at most \(n\times8^n\) (indeed, one may choose the first point of the circuit in \(\{0,\dots, n\}\times \{0\}\) and each vertex has at most \(8\) neighbors for \(\sim_\star\)). We thus obtain the following crude upper bound, \[\label{a4:eq:p2} 1-\mathbb{P}\left( 0\overset{\{\alpha < \ell\}}{\longleftrightarrow}\infty \right) \leq \sum_{n\geq 1}n8^n\varepsilon^n.\tag{6}\] A straightforward computation shows that the right hand side of 6 is finite and strictly less than \(1\) if we choose \(\varepsilon>0\) small enough. Therefore, we may find \(\ell_0\in ]0,\infty[\) (associated to this \(\varepsilon\)) such that \[\forall \ell>\ell_0,\;\mathbb{P}\left( 0\overset{\{\alpha < \ell\}}{\longleftrightarrow}\infty \right)>0.\] This implies \(\ell_c^\leq<\infty.\) ◻
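The straightforward computation can be made explicit: for \(0<x<1\) one has \(\sum_{n\geq 1} n x^n = \frac{x}{(1-x)^2}\), so with \(x=8\varepsilon\) (assuming \(\varepsilon<1/8\) so that the series converges), \[\sum_{n\geq 1}n8^n\varepsilon^n = \frac{8\varepsilon}{(1-8\varepsilon)^2},\] which is strictly less than \(1\) as soon as, for instance, \(\varepsilon\leq \frac{1}{32}\) (the right hand side is then at most \(\frac{1/4}{(3/4)^2}=\frac{4}{9}\)).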
The rest of this section is devoted to the proof of Proposition 3. We will restrict ourselves to the case of a subset \(A\subset \mathbb{Z}\times \{0\}\). Indeed, given the definition of \(\alpha\) in 2 , we can see that the restrictions of \(\alpha\) to the lines \(\mathbb{Z}\times \{n\}\), where \(n\) ranges over \(\mathbb{Z}\), are independent and identically distributed.
To ease notation, let us write \(Y_i=X_{(i,0)}\) for \(i\in \mathbb{Z}\). Then \((Y_i)_{i\in \mathbb{Z}}\) is a collection of independent random variables distributed according to \(\mu\). Let us fix \(\ell>0\). We define a relation \(\prec_\ell\) between points of \(\mathbb{Z}\) as follows:
Definition 1. For \(i,j\in \mathbb{Z}\) we write \(i\prec_\ell j\) if \(i<j\) and \(Y_j>Y_i+(j-i)\ell\).
Remark 5. This relation is transitive: if \(i\prec_\ell j\) and \(j\prec_\ell k\) then \(i\prec_\ell k\). In terms of shadow, if the rays of the sun make a slope \(\ell\) with the horizontal, then the mountain \((j,Y_j)\) casts a shadow on the mountain \((i,Y_i)\) if and only if \(i\prec_\ell j\). Moreover, observe that we can rewrite \(\alpha(i)>\ell \Leftrightarrow \exists j>i,\; i\prec_\ell j.\)
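Definition 1 and the transitivity noted above can be illustrated in a few lines of code (ours, illustrative; a brute-force check, not a proof, since transitivity is an immediate algebraic fact).

```python
import itertools
import random

def prec(Y, ell, i, j):
    """The relation i <_ell j of Definition 1: j casts a shadow on i."""
    return i < j and Y[j] > Y[i] + (j - i) * ell

# Brute-force transitivity check on random heights:
random.seed(0)
Y = [random.gauss(0.0, 1.0) for _ in range(12)]
ell = 0.3
for i, j, k in itertools.combinations(range(12), 3):
    if prec(Y, ell, i, j) and prec(Y, ell, j, k):
        assert prec(Y, ell, i, k)
```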
In the following we will make repeated use of the following consequence of Assumptions 1 and 3.
Fact 6. Assume that \(X\) satisfies Assumptions 1 and 3. Then there exist \(c'>0\) and \(x_0>0\) such that \[\forall x\geq x_0,\;\forall u\neq v\in \mathbb{Z}^2,\;\mathbb{P}\left( |X(u)-X(v)|\geq x \right)\leq e^{-c'x}.\]
Proof. By Assumption 1, \(X(u)\) and \(X(v)\) are independent and distributed according to \(\mu.\) By a union bound we have \[\begin{align} \mathbb{P}\left( |X(u)-X(v)|\geq x \right) &\leq \mathbb{P}\left( |X(u)|\geq \frac{x}{2} \right)+\mathbb{P}\left( |X(v)|\geq \frac{x}{2} \right) \\ &\leq 2\mu(\mathbb{R}\setminus [-x/2,x/2]). \end{align}\] By Assumption 3, we may find \(C,c>0\) such that \(\mu(\mathbb{R}\setminus [-x,x])\leq Ce^{-cx}\); therefore, if we set \(c'=c/4>0\), we may find \(x_0>0\) such that for \(x\geq x_0\), \[\begin{align} \mathbb{P}\left( |X(u)-X(v)|\geq x \right) \leq 2Ce^{-cx/2} \leq e^{-c'x}. \end{align}\] ◻
In order to prove Proposition 3, we make a few definitions.
Definition 2. Let \(\ell>0\) and \(i\in \mathbb{Z}\). We define the random set \(M_\ell(i)\) as \[M_\ell(i):= \{Y_j - (j-i)\ell\;|\;j>i\}.\] We define the event \(\mathcal{T}_\ell(i)\) as \[\mathcal{T}_\ell(i):= \{\text{the set }M_\ell(i) \text{ admits a maximum}\}.\] On the event \(\mathcal{T}_\ell(i)\) we define a random integer \(T_\ell(i)\) as \[T_\ell(i) := \min\{j>i\;|\;Y_j-(j-i)\ell = \max M_\ell(i)\}.\]
Remark 7. In the definition of \(T_\ell(i)\) we decided to select the minimal index \(j\) that realizes the maximum; this choice is just for convenience. Actually, if we also assume Assumption 2, then it would not be hard to check that for fixed \(\ell\), almost surely, for all \(i\in \mathbb{Z}\) there cannot be two indices that realize the maximum of \(M_\ell(i)\). But for our purpose this arbitrary choice is not important. What matters is that \(T_\ell(i)\) can be understood as the index of a point that generates the "highest" shadow on \(i\) (at level \(\ell\)).
Fact 8. Assume that Assumptions 1 and 3 hold. For any \(\ell>0\) and \(i\in \mathbb{Z}\), the event \(\mathcal{T}_\ell(i)\) has probability \(1\).
Proof. Let \(i\in \mathbb{Z}\). We have \[\mathbb{P}\left( (\mathcal{T}_\ell(i))^c \right) \leq \mathbb{P}\left( Y_j -(j-i)\ell > Y_{i+1}-\ell \text{ for infinitely many }j>i \right).\] This last probability is zero by the Borel-Cantelli Lemma together with Fact 6. ◻
For fixed \(\ell>0\), we now work on \(\Omega_\ell = \bigcap_{i\in \mathbb{Z}}\mathcal{T}_\ell(i)\) which has full probability.
Fact 9. Let \(i\in \mathbb{Z}\). We have \(\alpha(T_\ell(i))\leq \ell\). Moreover, if \(\alpha(i)>\ell\), then \(i\prec_\ell T_\ell(i)\).
Remark 10. Keeping the intuition of \(T_\ell(i)\) as the mountain that creates the highest shadow on \(i\), the first part of Fact 9 can be interpreted by saying that the mountain creating the highest shadow on us cannot itself be in the shadow of another mountain.
Proof of Fact 9. First, let us show that \(\alpha(T_\ell(i))\leq \ell\). By contradiction, if that were not the case, we would find some integer \(j>T_\ell(i)\) such that \(T_\ell(i)\prec_\ell j\), that is, \[Y_{T_\ell(i)}< Y_j - (j-T_\ell(i))\ell.\] This would imply \[Y_{T_\ell(i)}-(T_\ell(i)-i)\ell < Y_j -(j-i)\ell.\] Such an inequality contradicts the definition of \(T_\ell(i)\).
Now, let us show that if \(\alpha(i)>\ell\) then \(i\prec_\ell T_\ell(i)\). Since \(\alpha(i)>\ell\) there exists \(j_0>i\) such that \(i\prec_\ell j_0\). This rewrites as \(Y_{j_0}-(j_0-i)\ell > Y_i\). By definition of \(T_\ell(i)\), we have \[Y_{T_\ell(i)}-(T_\ell(i)-i)\ell \geq Y_{j_0}-(j_0-i)\ell > Y_i.\] Thus, \[i\prec_\ell T_\ell(i).\] ◻
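On a finite window the maximum defining \(T_\ell(i)\) always exists, and both statements of Fact 9 (with the supremum in 2 truncated to the window, for which the same proofs go through verbatim) can be checked directly. The sketch below is ours and purely illustrative.

```python
import random

def T_ell(Y, ell, i):
    """T_ell(i) of Definition 2, with the sup over j > i restricted to the
    finite sample Y (so the maximum always exists); ties, which have
    probability zero under Assumption 2, go to the smallest index."""
    return max(range(i + 1, len(Y)),
               key=lambda j: (Y[j] - (j - i) * ell, -j))

def alpha_w(Y, i):
    """alpha(i) of (2), truncated to the window (requires i < len(Y) - 1)."""
    return max((Y[j] - Y[i]) / (j - i) for j in range(i + 1, len(Y)))

random.seed(0)
Y = [random.gauss(0.0, 1.0) for _ in range(200)]
ell = 0.5
t = T_ell(Y, ell, 0)
# Fact 9, checked within the window: the mountain casting the highest
# shadow on 0 is itself lit (its truncated alpha is <= ell)...
if t < len(Y) - 1:
    assert alpha_w(Y, t) <= ell
# ...and if 0 is in the shadow, then T_ell(0) indeed shades it:
if alpha_w(Y, 0) > ell:
    assert Y[t] > Y[0] + t * ell
```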
Fact 11. Let \(i<j\) be two points of \(\mathbb{Z}\) such that \(\alpha(i)>\ell\) and \(\alpha(j)>\ell\). If \(i\not\prec_\ell T_\ell(j)\) then \(i<T_\ell(i)<j\).
Remark 12. We may interpret Fact 11 in the following way: if two points \(i<j\) are in the shadow but \(i\) is not in the shadow of the mountain principally responsible for the shadow on \(j\), then the mountain principally responsible for the shadow on \(i\) lies between \(i\) and \(j\).
Proof. First, we always have \(T_\ell(i)>i\) (by definition).
By contradiction, if \(T_\ell(i)>j\), then by definition of \(T_\ell(j)\) we have \[Y_{T_\ell(j)}-(T_\ell(j)-j)\ell \geq Y_{T_\ell(i)}-(T_\ell(i)-j)\ell.\] By subtracting \((j-i)\ell\) from both sides we find \[Y_{T_\ell(j)}-(T_\ell(j)-i)\ell \geq Y_{T_\ell(i)} - (T_\ell(i)-i)\ell.\] However, since \(i\prec_\ell T_\ell(i)\) (by Fact 9) and since \(i\not\prec_\ell T_\ell(j)\) (by assumption), we have \[Y_i \geq Y_{T_\ell(j)}-(T_\ell(j)-i)\ell \geq Y_{T_\ell(i)} - (T_\ell(i)-i)\ell > Y_i,\] which is a contradiction.
It only remains to argue that \(T_\ell(i)\neq j\). In fact, by assumption we have \(\alpha(j)>\ell\) but by Fact 9 we have \(\alpha(T_\ell(i))\leq \ell\) which concludes the proof. ◻
We now provide the proof of Proposition 3.
Proof of Proposition 3. For convenience, we view \(\mathbb{Z}\) as a subset of \(\mathbb{Z}^2\) via the natural inclusion \(n\mapsto (n,0).\) We first do the proof for a finite subset of the form \(A\subset \mathbb{Z}\) (that is, we must understand \(A\) as a subset of \(\mathbb{Z}\times \{0\}\)). If \(A=\emptyset\), there is nothing to prove. In the following we suppose \(|A|\geq 1\), and we write \(A=\{a_1,a_2,\dots,a_n\}\) with \(a_1<a_2<\dots<a_n\) (thus \(n=|A|\)). First, on the event \[\{\forall i\in A,\;\alpha(i)>\ell\},\] we will define an integer \(r\geq 0\) and a decreasing sequence of integers \((k_i)_{0\leq i \leq r}\) such that \(n=k_0>k_1>\dots>k_r\geq 1\). We build this sequence by induction. We first set \(k_0=n\). Let us assume that we are at step \(r'\geq 0\), that \((k_m)_{0\leq m\leq r'}\) has been built, and that the construction is not over. If for every \(l\in \llbracket 1, k_{r'}-1\rrbracket\) we have \(a_l \prec_\ell T_\ell(a_{k_{r'}})\), then we end the construction (and \(r=r'\)). Otherwise, let \(k_{r'+1}\) be the largest integer strictly smaller than \(k_{r'}\) such that \(a_{k_{r'+1}}\not\prec_\ell T_\ell(a_{k_{r'}})\). We add \(k_{r'+1}\) to our sequence, and we then repeat this procedure until it ends.
Since \(A\) is finite, the procedure eventually ends. Moreover, Facts 9 and 11 imply that on the event \(\{\forall i \in A,\;\alpha(i)>\ell\}\) the sequence \((k_i)_{0\leq i \leq r}\) has the following properties:
\(\forall l\in \llbracket 1, k_r\rrbracket,\;a_l \prec_\ell T_\ell(a_{k_r})\)
\(\forall m\in \llbracket 1,r\rrbracket,\;\forall l\in \llbracket k_m+1, k_{m-1}\rrbracket,\;a_l \prec_\ell T_\ell(a_{k_{m-1}}).\)
\(\forall m\in \llbracket 1, r\rrbracket,\; a_{k_m} \not\prec_\ell T_\ell(a_{k_{m-1}})\)
\(a_{k_r}<T_\ell(a_{k_r})<a_{k_{r-1}}<T_\ell(a_{k_{r-1}}) <\dots < a_{k_1}<T_\ell(a_{k_1}) < a_{k_0}=a_n < T_\ell(a_{k_0})=T_\ell(a_n)\)
Note that in the case \(r=0\), we have \(k_r=k_0=n\) and the second and third properties are empty. The first property comes from an application of Fact 9 for \(l=k_r\) and from our ending condition for the algorithmic procedure. The second property also comes from Fact 9 for \(l=k_{m-1}\), together with the definition of the induction. The third property is a consequence of the definition of the sequence \((k_i)_{0\leq i \leq r}\). The last property follows from Fact 11 and the third property.
Now that we have built this sequence \((k_i)_{0\leq i \leq r}\), we introduce a few more notations. For \(0\leq m\leq r\), we denote by \(j_m\) the index \[j_m:= T_\ell(a_{k_m}).\] We then define the set \(J\) as \[J=\{j_r, \dots, j_0\}.\] One may think of \(J\) as a minimal set of points which is enough to cast a shadow on all points of \(A\). We observe that \(J\cap A=\emptyset\). Indeed, the elements \(a\in A\) satisfy \(\alpha(a)>\ell\) whereas, by Fact 9, the elements \(j\in J\) satisfy \(\alpha(j)\leq \ell\). Finally, we observe that \(|J|\) can vary between \(|J|=1\) (when a single point is enough to cast a shadow on all points of \(A\)) and \(|J|=n=|A|\) (when all the \(T_\ell(a_i)\) are different). Therefore, we write \[\mathbb{P}(\{\forall i\in A, \alpha(i)>\ell\}) = \sum_{r=0}^{n-1} \mathbb{P}(\{\forall i\in A, \alpha(i)>\ell\}\cap \{|J|=r+1\}).\] For \(0\leq r\leq n-1\), we denote by \(E_r\) the event \[E_r:= \{\forall i\in A,\;\alpha(i)>\ell\}\cap \{|J|=r+1\}.\] Consider \(c',x_0\) given by Fact 6. We work with \(\ell> x_0\), we set \(\rho := e^{-c'\ell}\in ]0,1[\), and we observe that if \(i<j\) then \[\mathbb{P}\left( i\prec_\ell j \right)\leq \rho^{j-i}.\] For \(r=0\) we find \[\begin{align} \mathbb{P}\left( E_0 \right)& \leq \sum_{j_0=a_n+1}^\infty\mathbb{P}(a_1\prec_\ell j_0) \\ &\leq \sum_{j_0=a_n+1}^\infty \rho^{j_0-a_1} \\ &\leq \frac{\rho^{a_n+1-a_1}}{1-\rho} \\ &\leq \frac{\rho^n}{1-\rho}. \end{align}\] In the first line, we use the fact that \(J=\{j_0\}\) for some \(j_0>a_n\) and we do a union bound on the possible values of \(j_0\) (we also only keep the constraint \(a_1\prec_\ell j_0\)). From the first line to the second, we apply Fact 6 and use the independence of \(Y_{a_1}\) and \(Y_{j_0}\). In the last two lines, we use the fact that \(\rho=e^{-c'\ell}\) lies between \(0\) and \(1\) and the fact that \(a_n-a_1\geq n-1.\)
Here is a similar (although more technical) computation for \(r\geq 1\). \[\begin{align} \mathbb{P}(E_r) &\leq \sum_{j_0=a_n+1}^\infty \sum_{(k_i)_{0\leq i \leq r}} \sum_{\substack{j_m, 1\leq m\leq r\\ a_{k_m}<j_m <a_{k_{m-1}}\\ j_m \not\in A}} \mathbb{P}\left( \begin{array}{c} a_1 \prec_\ell j_r \text{ and } \\ a_{k_m+1}\prec_\ell j_{m-1}\;\forall m\in \llbracket 1,r\rrbracket \end{array} \right), \end{align}\] where the second sum is over all sequences \((k_i)_{0\leq i \leq r}\) such that \(1\leq k_r<k_{r-1}<\dots < k_1< k_0=n\) (note that there are \(\binom{n-1}{r}\) terms in this sum), and the third sum is over all \((j_1,\dots, j_r)\) such that \(a_{k_m}<j_m<a_{k_{m-1}}\) and \(j_m\not\in A\) for all \(m\) (note that we can sum over \(j_m\not\in A\) because \(J\cap A=\emptyset\)). The events \(a_{k_{m}+1}\prec_\ell j_{m-1}\) and \(a_1\prec_\ell j_r\) are all independent (this stems from the fact that \(J\cap A=\emptyset\) and from the independence of the \((Y_i)_{i\in \mathbb{Z}}\)). This allows us to compute an upper bound for \(\mathbb{P}\left( E_r \right)\).
\[\begin{align} \mathbb{P}(E_r)\leq& \sum_{j_0=a_n+1}^\infty\sum_{(k_i)_{0\leq i \leq r}}\sum_{\substack{j_m, 1\leq m\leq r\\ a_{k_m}<j_m <a_{k_{m-1}}\\ j_m\not\in A}}\rho^{j_r-a_1}\prod_{m=1}^r \rho^{j_{m-1}-a_{k_m+1}}\\ \leq &\sum_{j_0=a_n+1}^\infty \rho^{j_0-a_n}\sum_{(k_i)_{0\leq i \leq r}}\rho^{a_{n}-a_{k_1+1}}\left(\sum_{j_r=a_{k_r}+1}^\infty \rho^{j_r-a_1}\right) \\ & \times \prod_{m=2}^r \left(\sum_{j_{m-1}=a_{k_{m-1}}+1}^\infty \rho^{j_{m-1}-a_{k_m+1}}\right) \\ =& \sum_{j_0=a_n+1}^\infty \rho^{j_0-a_n}\sum_{(k_i)_{0\leq i \leq r}}\rho^{a_{n}-a_{k_1+1}}\frac{\rho^{a_{k_r}+1-a_1}}{1-\rho}\prod_{m=2}^r \frac{\rho^{a_{k_{m-1}}+1-a_{k_m+1}}}{1-\rho} \\ \leq & \sum_{j_0=a_n+1}^\infty \rho^{j_0-a_n}\sum_{(k_i)_{0\leq i \leq r}}\rho^{n-k_1-1}\frac{\rho^{k_r+1-1}}{1-\rho}\prod_{m=2}^r \frac{\rho^{k_{m-1}+1-k_m-1}}{1-\rho}\\ = &\sum_{j_0=a_n+1}^\infty \rho^{j_0-a_n}\sum_{(k_i)_{0\leq i \leq r}} \frac{\rho^{n-1}}{(1-\rho)^r} \\ = &\binom{n-1}{r}\frac{\rho^n}{(1-\rho)^{r+1}}. \end{align}\] Summing these inequalities for \(r\) going from \(0\) to \(n-1\) we get \[\begin{align} \mathbb{P}(\forall i\in A, \alpha(i)>\ell) &\leq \sum_{r=0}^{n-1}\mathbb{P}(E_r) \\ &\leq \sum_{r=0}^{n-1}\binom{n-1}{r}\frac{\rho^n}{(1-\rho)^{r+1}}\\ &= \frac{\rho^n}{1-\rho}\left(\frac{2-\rho}{1-\rho}\right)^{n-1}\\ &\leq \left(\frac{\rho(2-\rho)}{1-\rho}\right)^n. \end{align}\] Now, let \(\varepsilon>0\). We choose \(\ell\geq x_0\) large enough that \(\rho=e^{-c'\ell}\) satisfies \(\frac{\rho(2-\rho)}{1-\rho} <\varepsilon.\) This concludes the proof of Proposition 3 for \(A\subset \mathbb{Z}\times \{0\}\). To deduce the result for \(A\subset \mathbb{Z}^2\), observe that any finite set \(A\subset \mathbb{Z}^2\) can be written as \(A=\sqcup_{i\in \mathbb{Z}} A_i\) where \(A_i\) is a subset of \(\mathbb{Z}\times \{i\}\) (in fact \(A_i = A\cap (\mathbb{Z}\times \{i\})\)).
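As a quick sanity check (ours, not part of the proof), the binomial identity used in the next-to-last step, \(\sum_{r=0}^{n-1}\binom{n-1}{r}\frac{\rho^n}{(1-\rho)^{r+1}}=\frac{\rho^n}{1-\rho}\left(\frac{2-\rho}{1-\rho}\right)^{n-1}\), can be verified numerically; the helper names below are illustrative.

```python
import math

def lhs(n, rho):
    # sum_{r=0}^{n-1} C(n-1, r) * rho^n / (1-rho)^(r+1)
    return sum(math.comb(n - 1, r) * rho**n / (1 - rho)**(r + 1)
               for r in range(n))

def rhs(n, rho):
    # closed form: rho^n/(1-rho) * ((2-rho)/(1-rho))^(n-1)
    return rho**n / (1 - rho) * ((2 - rho) / (1 - rho))**(n - 1)

for n in (1, 2, 5, 12):
    for rho in (0.05, 0.1, 0.3):
        assert math.isclose(lhs(n, rho), rhs(n, rho), rel_tol=1e-12)
```

The identity is simply the binomial theorem applied to \(\left(1+\frac{1}{1-\rho}\right)^{n-1}\).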
Since the collections \((\alpha(u))_{u\in \mathbb{Z}\times \{i\}}\) are mutually independent and identically distributed when \(i\) ranges over \(\mathbb{Z}\) we get \[\begin{align} \mathbb{P}\left( \forall u\in A,\;\alpha(u)>\ell \right) &=\mathbb{P}\left( \forall i\in \mathbb{Z},\forall u\in A_i,\;\alpha(u)>\ell \right)\\ &=\prod_{i\in \mathbb{Z}}\mathbb{P}\left( \forall u \in A_i,\;\alpha(u)>\ell \right) \\ &\leq \prod_{i\in \mathbb{Z}} \varepsilon^{|A_i|} \\ &= \varepsilon^{\sum_{i\in \mathbb{Z}}|A_i|}\\&=\varepsilon^{|A|}. \end{align}\] This concludes the proof of Proposition 3. ◻
We now focus our attention on the behavior of our system when \(\ell\) is a small positive number. The main result of this section is Proposition 13, which is the analogue of Proposition 3 in the regime of small positive \(\ell\).
Proposition 13. Let \(X : \mathbb{Z}^2 \to \mathbb{R}\) and \(\mu\) be such that Assumption 1 holds. Assume that \(\mu\) satisfies Assumption 2. Let \(\varepsilon>0\). There exists \(\ell_0(\varepsilon)>0\) such that, for any \(\ell\in [0,\ell_0(\varepsilon)]\), for any finite subset \(A\subset \mathbb{Z}^2\) we have \[\mathbb{P}\left( \forall u\in A,\;\alpha(u)\leq \ell \right)\leq \varepsilon^{|A|}.\]
Here is an immediate corollary of Proposition 13.
Corollary 2. Let \(X : \mathbb{Z}^2 \to \mathbb{R}\) and \(\mu\) be such that Assumption 1 holds. Assume that \(\mu\) satisfies Assumption 2. Then, \(\ell_c^\leq>0\) and \(\ell_c^\geq>0.\)
Proof. The proof is completely similar to the proof of Corollary 1 (using Proposition 13 in place of Proposition 3). ◻
We remark that we can now prove Theorem 1.
The rest of this section is devoted to the proof of Proposition 13. As previously, we start by working on the one-dimensional system; we write \(Y_i=X_{i,0}\) for \(i\in \mathbb{Z}\). We introduce the following notation.
Definition 3. Let \(h \in \mathbb{R}\). Let \(x,y\in \mathbb{R}\). We write \(x>_h y\) for \(x+h>y\). We will also write \(x_1>_h x_2 >_h x_3 >_h \dots\) to mean \(x_1+h>x_2\), \(x_2+h>x_3\) etc.
The next lemma is the key ingredient to the proof of Proposition 13.
Lemma 1. Let \(X : \mathbb{Z}^2 \to \mathbb{R}\) and \(\mu\) be such that Assumptions 1 and 2 hold. Let \(\varepsilon>0\). There exist \(n_0\in \mathbb{N}^*\) and \(h_0>0\) (depending on \(\varepsilon\)) such that \[\forall h\leq h_0,\;\mathbb{P}(Y_1>_h Y_2>_h\dots >_h Y_{n_0})\leq \varepsilon^{2n_0}.\]
Proof. The key observation is that for any \(n_0\geq 1\) we have \[\label{a4:eq:key95factorial} \mathbb{P}(Y_1>Y_2>\dots >Y_{n_0})=\frac{1}{n_0!}.\tag{7}\] Indeed, 7 is due to the fact that the probability of observing a specific ordering of \(n_0\) independent random variables with a density does not depend on this specific ordering (we also implicitly use the fact that, almost surely, no two of the \(Y_i\) are equal).
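The symmetry argument behind 7 is easy to test by simulation. The following Monte Carlo sketch (ours, not part of the proof; the function name and parameters are illustrative) estimates the probability that \(n_0\) i.i.d. uniform variables appear in decreasing order and compares it with \(1/n_0!\).

```python
import math
import random

def p_decreasing(n, trials=200_000, seed=0):
    """Estimate P(Y_1 > Y_2 > ... > Y_n) for i.i.d. variables with a density."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        y = [rng.random() for _ in range(n)]
        if all(y[i] > y[i + 1] for i in range(n - 1)):
            hits += 1
    return hits / trials

est = p_decreasing(4)
# the true value is 1/4!; the estimate should be within a few standard errors
assert abs(est - 1 / math.factorial(4)) < 0.005
```

By exchangeability, each of the \(n!\) orderings of the sample is equally likely, which is exactly what the simulation reflects.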
Let \(n_0\) be such that \[\frac{1}{n_0!}\leq \frac{\varepsilon^{2n_0}}{2}.\] For this specific \(n_0\), which is now fixed (and depends on \(\varepsilon\)), we observe that \[\label{a4:eq:key95factorial952} \mathbb{P}(Y_1>_h Y_2>_h\dots >_h Y_{n_0}) \xrightarrow[h\to 0]{} \mathbb{P}(Y_1> Y_2>\dots > Y_{n_0}) = \frac{1}{n_0!}.\tag{8}\] Indeed, 8 comes from the continuity in \(h\) of the left-hand side (which can be seen by writing this probability as an integral and using Assumption 2).
Therefore, we may find \(h_0>0\) depending on \(\varepsilon\) such that \[\forall h\leq h_0, \;\mathbb{P}(Y_1>_h Y_2>_h\dots >_h Y_{n_0}) < \varepsilon^{2n_0}.\] This is precisely the conclusion. ◻
Definition 4. Let \(R\geq 1\). We introduce the field \(\alpha_R\) as a truncated version of the field \(\alpha\), that is \[\forall u \in \mathbb{Z}^2,\;\alpha_R(u) := \max_{r\in \llbracket 1 , R\rrbracket} \frac{X_{u+re_1}-X_u}{r}.\]
Remark 14. Note that the difference between \(\alpha_R\) and \(\alpha\) is that instead of taking the supremum over \(r\in \mathbb{N}^*\) for \(\alpha\) we restrict ourselves to \(r\in \llbracket 1, R\rrbracket\) for \(\alpha_R\). As such, we may understand \(\alpha_R\) as a variant of \(\alpha\) where we only consider the shadows cast by the first \(R\) points ahead of us. A consequence of the definition is that if \(\alpha(u)\leq \ell\) then \(\alpha_R(u)\leq \ell\) (for any \(u,\ell,R\)).
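For concreteness, here is a minimal sketch (ours; the heights and function name are illustrative) of the truncated field on a finite row. It checks the monotonicity in \(R\) that underlies the implication above.

```python
def alpha_R(X, u, R):
    """Truncated slope field: max slope towards the first R points ahead of u."""
    return max((X[u + r] - X[u]) / r for r in range(1, R + 1))

X = [0.3, 1.7, 0.2, 0.9, 2.4, 0.1, 1.1, 0.8, 1.9, 0.5]
u = 2
vals = [alpha_R(X, u, R) for R in range(1, len(X) - u)]
# alpha_R(u) is nondecreasing in R (the max runs over more and more slopes),
# so alpha(u) <= l forces alpha_R(u) <= l for every R
assert all(vals[i] <= vals[i + 1] for i in range(len(vals) - 1))
```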
Lemma 2. Let \(\eta>0\). There exist \(R_0\geq 1\) and \(\ell_0>0\) such that \[\forall u\in \mathbb{Z}^2,\;\forall \ell\in [0,\ell_0],\;\mathbb{P}\left( \alpha_{R_0}(u)\leq \ell \right)\leq \eta.\]
Proof. By stationarity, it is enough to prove the result for \(u_0=(0,0)\). First, note that for any given \(R_0\geq 1\) we have \[\label{a4:eq:factorial953} \mathbb{P}\left( \alpha_{R_0}(u_0)\leq 0 \right)=\frac{1}{R_0+1}.\tag{9}\] Indeed, for \(\alpha_{R_0}(u_0)\) to be non-positive, \(X_{u_0}\) needs to be greater than each of the \(X_{u_0+re_1}\) for \(1\leq r\leq R_0\). So the above probability is the probability that \(X_{u_0}=\max_{0\leq r\leq R_0}{X_{u_0+re_1}}.\) Since the \((X_v)_{v\in \mathbb{Z}^2}\) are independent with a density, this probability is precisely \(\frac{1}{R_0+1}\) (each point of this finite sequence is equally likely to be its maximum).
Due to 9 , we may find \(R_0\geq 1\) depending on \(\eta\) such that \[\label{a4:eq:factorial954} \mathbb{P}\left( \alpha_{R_0}(u_0)\leq 0 \right)\leq \frac{\eta}{2}.\tag{10}\] Now, we argue that the map \(\ell\mapsto \mathbb{P}\left( \alpha_{R_0}(u_0)\leq \ell \right)\) is continuous. Indeed, one may write \[\begin{align} \mathbb{P}\left( \alpha_{R_0}(u_0)\leq \ell \right)&=\mathbb{P}\left( \forall r\in \llbracket 1, R_0\rrbracket,\; X_{u_0+re_1}< X_{u_0}+r\ell \right) \\ &=\int_{\mathbb{R}}\varphi(x)\prod_{r=1}^{R_0}F(x+r\ell) dx, \end{align}\] where \(\varphi\in L^1(\mathbb{R})\) is the density of \(X_{u_0}\) and \(F\) is the cumulative distribution function associated to this density (\(F\) is continuous, positive and bounded by \(1\)). The dominated convergence theorem ensures continuity in \(\ell\). This continuity together with 10 concludes the proof. ◻
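The identity 9 can likewise be checked by simulation. The sketch below (ours; names and parameters are illustrative, not part of the proof) estimates \(\mathbb{P}(\alpha_{R_0}(u_0)\leq 0)\) as the probability that the first of \(R_0+1\) i.i.d. values is the largest.

```python
import random

def p_no_positive_slope(R0, trials=200_000, seed=1):
    """Estimate P(alpha_{R0}(u0) <= 0) for i.i.d. heights with a density."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        xs = [rng.random() for _ in range(R0 + 1)]
        # alpha_{R0}(u0) <= 0 iff X_{u0} dominates the R0 points ahead of it
        if all(xs[0] > xs[r] for r in range(1, R0 + 1)):
            hits += 1
    return hits / trials

est = p_no_positive_slope(4)
assert abs(est - 1 / 5) < 0.01   # true value 1/(R0+1) = 0.2
```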
Definition 5. Let \(A\subset \mathbb{Z}\) be a finite subset, \(A=\{a_1,a_2,\dots,a_n\}\) with \(a_1<a_2<\dots<a_n\). Let \(R_0\geq 1\). We say that \(A\) is \(R_0\)-connected if \[\forall i\in \llbracket 1,n-1\rrbracket,\;|a_{i+1}-a_i|\leq R_0.\]
Lemma 3. Let \(A\subset \mathbb{Z}\) be a finite subset. Let \(R_0\geq 1\). There exist \(k\in \mathbb{N}\) and subsets \(A_1,\dots,A_k\) of \(A\) such that
\(A_i\) is \(R_0\)-connected for any \(i\in \llbracket 1, k\rrbracket.\)
\(A=\sqcup_{i=1}^k A_i\)
\(\forall 1\leq i\leq k-1, \max A_i + R_0 < \min A_{i+1}.\)
Such subsets of \(A\) form what we call an \(R_0\)-decomposition of \(A\).
Proof. We argue by induction on the cardinality of \(A\). If \(A=\emptyset\), we may take \(k=0\) and there is nothing to check. Assume the property holds for sets of cardinality \(n\). Let us consider a set \(A\) of cardinality \(n+1\), which we may write as \(A=\{a_1,\dots,a_{n+1}\}\) with \(a_1<a_2<\dots<a_{n+1}.\) Then consider \(\tilde{A}=A\setminus \{a_{n+1}\}\). We may find an \(R_0\)-decomposition of \(\tilde{A}\) which we denote by \((\tilde{A}_i)_{1\leq i \leq k}.\) If \(a_{n+1}-a_n \leq R_0\) we may add \(a_{n+1}\) to \(\tilde{A}_k\) and leave the other \(\tilde{A}_i\) unchanged, which gives an \(R_0\)-decomposition of \(A\). Otherwise we may define \(\tilde{A}_{k+1}:=\{a_{n+1}\}\); then \((\tilde{A}_i)_{1\leq i \leq k+1}\) is an \(R_0\)-decomposition of \(A\). ◻
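The inductive construction in this proof is effectively a greedy left-to-right scan; the following short sketch (ours, with illustrative names) implements it and checks the defining properties of an \(R_0\)-decomposition.

```python
def r0_decomposition(A, R0):
    """Greedy split of a finite set A of integers into maximal R0-connected blocks."""
    blocks = []
    for a in sorted(A):
        if blocks and a - blocks[-1][-1] <= R0:
            blocks[-1].append(a)      # gap <= R0: extend the current block
        else:
            blocks.append([a])        # gap > R0: start a new block
    return blocks

blocks = r0_decomposition({1, 3, 4, 10, 12, 30}, R0=3)
assert blocks == [[1, 3, 4], [10, 12], [30]]
# consecutive blocks are separated by more than R0, so max A_i + R0 < min A_{i+1}
assert all(blocks[i + 1][0] - blocks[i][-1] > 3 for i in range(len(blocks) - 1))
```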
We now prove Proposition 13.
Proof of Proposition 13. Let \(\varepsilon\in ]0,1[\). Applying Lemma 1 we let \(h_0>0\) and \(n_0\geq 1\) (depending on \(\varepsilon\)) be such that \[\label{a4:eq:f1} \forall 0\leq h\leq h_0,\;\mathbb{P}(Y_1>_h Y_2>_h\dots >_h Y_{n_0})\leq \varepsilon^{2n_0}.\tag{11}\] We apply Lemma 2 with \(\eta = \varepsilon^{2n_0}\). This yields \(\ell_0>0\) and \(R_0\geq 1\) (depending on \(\varepsilon\)) such that \[\label{a4:eq:f2} \forall u\in \mathbb{Z}^2,\; \forall \ell\in [0, \ell_0],\;\mathbb{P}\left( \alpha_{R_0}(u)\leq \ell \right)\leq \varepsilon^{2n_0}.\tag{12}\] In the rest of the proof we fix some \(\ell>0\) (depending on \(\varepsilon\)) such that \(\ell\leq \ell_0\) and \(\ell< \frac{h_0}{R_0}\).
Let \(A\) be a finite subset of \(\mathbb{Z}\) (which we interpret as a subset of \(\mathbb{Z}\times \{0\}\) via the natural inclusion \(i\mapsto (i,0)\)). As mentioned earlier, by the definitions of \(\alpha\) and \(\alpha_{R_0}\), we always have the implication \[\alpha(i)\leq \ell \Rightarrow \alpha_{R_0}(i)\leq \ell.\] Therefore, we obtain the following upper bound: \[\label{a4:eq:alpha95to95alpha95R950} \mathbb{P}(\forall i \in A,\;\alpha(i)\leq \ell)\leq \mathbb{P}(\forall i \in A,\;\alpha_{R_0}(i)\leq \ell).\tag{13}\] By Lemma 3, we may write \[A = A_1 \sqcup A_2\sqcup \dots \sqcup A_r,\] where \((A_k)_{1\leq k \leq r}\) is an \(R_0\)-decomposition of \(A\). We can then write \[\begin{align} \label{a4:eq:product95R950} \mathbb{P}(\forall i \in A,\; \alpha_{R_0}(i)\leq \ell) &= \mathbb{P}(\forall k\in \llbracket 1,r \rrbracket,\; \forall i\in A_k,\;\alpha_{R_0}(i)\leq \ell)\nonumber\\ &= \prod_{k=1}^r \mathbb{P}(\forall i\in A_k,\;\alpha_{R_0}(i)\leq \ell). \end{align}\tag{14}\] Indeed, by the definition of an \(R_0\)-decomposition, consecutive sets \(A_k\) are at distance greater than \(R_0\), and since \(\alpha_{R_0}(i)\) only depends on the collection \((Y_j)_{i\leq j\leq i+R_0}\), the restrictions of \(\alpha_{R_0}\) to the different \(A_k\) are independent.
We may therefore focus on the case of a set \(A\subset \mathbb{Z}\) which is finite and \(R_0\)-connected. Let \(A\) be such a set. We want to prove the following: \[\mathbb{P}(\forall i\in A,\;\alpha_{R_0}(i)\leq \ell) \leq \varepsilon^{|A|}.\] Let us write \(A=\{a_1,\dots,a_n\}\) with \(a_1<a_2<\dots<a_n\) and where \(n=|A|\geq 1\) (in the case \(n=0\) there is nothing to prove). We distinguish two cases. If \(n\leq 2n_0\), we may take some \(i_0\in A\) (arbitrarily chosen) and write \[\mathbb{P}(\forall i\in A,\;\alpha_{R_0}(i)\leq \ell) \leq \mathbb{P}(\alpha_{R_0}(i_0)\leq \ell) \leq \varepsilon^{2n_0}\leq \varepsilon^n.\] We now consider the case where \(n>2n_0\). First, observe that since \(A\) is \(R_0\)-connected, we have \[\alpha_{R_0}(a_1)\leq \ell \Rightarrow Y_{a_2}\leq Y_{a_1}+\ell(a_2-a_1).\] Moreover, since \(\ell\geq 0\), \(a_2-a_1\leq R_0\) and \(R_0\ell< h_0\), we have \[\alpha_{R_0}(a_1)\leq \ell \Rightarrow Y_{a_1} >_{h_0} Y_{a_2},\] where we recall that \(Y_{a_1}>_{h_0}Y_{a_2}\) stands for \(Y_{a_1}+h_0>Y_{a_2}.\) We may therefore write \[\begin{align} \mathbb{P}(\forall i\in A,\;\alpha_{R_0}(i)\leq \ell) & \leq \mathbb{P}(Y_{a_1}>_{h_0}Y_{a_2}>_{h_0} \dots >_{h_0} Y_{a_n}). \end{align}\] Since the \(Y_i\) are i.i.d., this probability is the same as \[\mathbb{P}\left( \forall i\in \llbracket 1, n-1\rrbracket,\;Y_i>_{h_0}Y_{i+1} \right).\] We upper-bound this probability by discarding one inequality out of every \(n_0\) and then using independence. More precisely, \[\begin{align} &\mathbb{P}\left( \forall i\in \llbracket 1, n-1\rrbracket,\;Y_i>_{h_0}Y_{i+1} \right) \\ \leq \quad & \mathbb{P}\left( \forall j\in \llbracket 1, \lfloor \frac{n}{n_0}\rfloor \rrbracket,\;\forall i \in \llbracket (j-1)n_0+1, jn_0-1\rrbracket,\;Y_i>_{h_0} Y_{i+1} \right) \\ \leq\quad & \mathbb{P}(Y_1>_{h_0}Y_2>_{h_0} \dots >_{h_0} Y_{n_0})^{\lfloor \frac{n}{n_0}\rfloor} \\ \leq \quad&\varepsilon^{2n_0 \lfloor \frac{n}{n_0}\rfloor} \\ \leq \quad & \varepsilon^{2n - 2n_0} \\ \leq \quad & \varepsilon^n. \end{align}\] To go from the second to the third line, we use independence together with the fact that, by stationarity, all the block probabilities coincide. From the third to the fourth line we use 11 . In the last two lines we use that \(\varepsilon\in ]0,1[\), that \(\lfloor x \rfloor \geq x-1\), and our assumption \(n> 2n_0\).
By 13 , we have thus proven Proposition 13 when \(A\subset \mathbb{Z}\times\{0\}\) is finite and \(R_0\)-connected. By 14 and 13 , Proposition 13 then holds for any finite subset \(A\subset \mathbb{Z}\times \{0\}\). To deduce the result for any finite \(A\subset \mathbb{Z}^2\), we proceed as in the proof of Proposition 3, writing \(A=\sqcup_{i\in \mathbb{Z}} A\cap (\mathbb{Z}\times \{i\})\) and using the fact that the restrictions of \(\alpha\) to the lines \(\mathbb{Z}\times \{i\}\), \(i\in\mathbb{Z}\), are independent and identically distributed. This concludes the proof of Proposition 13. ◻
In this appendix we consider the problem of understanding how much information about \(X\) the field \(\alpha\) really contains. Indeed, we have viewed the problem in the following way: we start from a field \(X : \mathbb{Z}^2 \to \mathbb{R}\) and we compute a new field \(\alpha : \mathbb{Z}^2 \to \overline{\mathbb{R}}\) by the formula \[\forall u\in \mathbb{Z}^2,\;\alpha(u) := \sup_{r\geq 1}\frac{X(u+re_1)-X(u)}{r}.\] That is, we may write \(\alpha = \Phi(X)\) where \(\Phi\) is a deterministic map \[\Phi : \mathbb{R}^{\mathbb{Z}^2}\to \overline{\mathbb{R}}^{\mathbb{Z}^2}.\] We ask whether it is possible to build another deterministic map \(\Psi : \overline{\mathbb{R}}^{\mathbb{Z}^2} \to \mathbb{R}^{\mathbb{Z}^2}\) such that \(\Psi(\Phi(X))=X.\) We will work under the following assumption.
Assumption 4. The measure \(\mu\) admits a finite first moment, that is \[\int_\mathbb{R}|x|\mu(dx)<\infty.\]
Our main result is the following.
Theorem 15. There exists a deterministic map \(\Psi : \overline{\mathbb{R}}^{\mathbb{Z}^2} \to \mathbb{R}^{\mathbb{Z}^2}\) such that the following holds. Let \(X: \mathbb{Z}^2 \to \mathbb{R}\) and \(\mu\) be such that Assumptions 1 and 4 hold; then \[\mathbb{P}\left( \Psi(\Phi(X))=X-\overline{\mu}\mathbb{1} \right)=1,\] where \(\overline{\mu}\in \mathbb{R}\) denotes the first moment of \(\mu\) and \(\mathbb{1}\in \mathbb{R}^{\mathbb{Z}^2}\) has all coordinates equal to \(1\).
This theorem ensures that under our assumptions it is possible to recover \(X\) from \(\alpha\) up to an additive constant. Moreover, if one has access to the first moment of \(\mu\), then complete recovery is almost surely possible. We comment that it is impossible to invert \(\Phi\) deterministically on every configuration (even up to an additive constant). Indeed, consider the following counterexample. Consider two sequences \((x_i)_{i\in \mathbb{Z}}\) and \((y_i)_{i\in \mathbb{Z}}\) such that \[\forall i\in \mathbb{Z},\; x_i=0 \text{ and }y_i = \begin{cases} 0 &\text{ if }i\geq 0 \\ 1 &\text{ if }i < 0 \end{cases}.\] Then if one defines \(X,Y\in \mathbb{R}^{\mathbb{Z}^2}\) by \(X(i,j):=x_i\) and \(Y(i,j):=y_i\) we see that \(X\) and \(Y\) do not differ by an additive constant but \(\Phi(X)=\Phi(Y).\) In order to prove Theorem 15 we introduce some notation.
Definition 6. Let \(X\in \mathbb{R}^{\mathbb{Z}^2}\) and \(j\in \mathbb{Z}\). For \(u\in \mathbb{Z}\) we denote by \(\alpha_j(u)\) the quantity \[\alpha_j(u):= \alpha(u,j)=\sup_{r\geq 1}\frac{X(u+r,j)-X(u,j)}{r}.\]
Let \(u,v\in \mathbb{Z}\) be such that \(u<v\), we denote by \(\tau^X_j(u,v)\) the quantity \[\tau^X_j(u,v):= \frac{X(v,j)-X(u,j)}{v-u}.\]
Fact 16. Let \(X\in \mathbb{R}^{\mathbb{Z}^2}\), \(j\in \mathbb{Z}\) and \(u,v,w\) be three points of \(\mathbb{Z}\) such that \(u<v<w\), then \[(\tau^X_j(u,w)-\tau^X_j(u,v))(\tau^X_j(u,w)-\tau^X_j(v,w))\leq 0.\] Moreover, this product is zero only if both factors are equal to zero.
Proof. The proof is a direct consequence of the fact that \(\tau_j^X(u,w)\) can be written as a mean between \(\tau_j^X(u,v)\) and \(\tau_j^X(v,w).\) Indeed, \[\tau_j^X(u,w) = \frac{w-v}{w-u}\tau_j^X(v,w)+\frac{v-u}{w-u}\tau_j^X(u,v).\] ◻
We now prove a technical lemma.
Lemma 4. Assume that \(X : \mathbb{Z}^2 \to \mathbb{R}\) and \(\mu\) are such that Assumptions 1 and 4 hold. Let \(j\in \mathbb{Z}\). Then, almost surely, the following statements hold.
For all \(u\in \mathbb{Z}\) the set \(M(u):=\left\{v\in \mathbb{Z}\;|\;v>u \text{ and }\alpha_j(u) = \tau^X_j(u,v)\right\}\) is non-empty; being a non-empty set of integers bounded from below, it admits a minimum, which we denote by \(T(u)\in \mathbb{Z}\).
For all \(u\in \mathbb{Z}\), we have the following identity, \[T(u)=\min\{v\in \mathbb{Z}\;|\;v>u\text{ and }\alpha_j(v)\leq \alpha_j(u)\}.\]
For any \(u,v\in \mathbb{Z}\) such that \(u<v<T(u)\), we have \(T(v)\leq T(u).\)
Proof. We start with the first item of the lemma. Let \(u\in \mathbb{Z}\). Let \(N\in \mathbb{N}\). For \(r\in \mathbb{N}^*\) we have by stationarity \[\begin{align} \mathbb{P}\left( \frac{X(u+r,j)-X(u,j)}{r}\geq 2^{-N} \right) \leq 2\mathbb{P}\left( 2^{N+1}|X(0,0)|\geq r \right). \end{align}\] Since \(\mathbb{E}[|X(0,0)|]<\infty\), we have \(\sum_{r\in \mathbb{N}^*}\mathbb{P}\left( 2^{N+1}|X(0,0)|\geq r \right)<\infty\) and the Borel–Cantelli lemma implies that almost surely there exists a finite \(r_N\) depending on \(N\) such that for all \(r\geq r_N\), \(\frac{X(u+r,j)-X(u,j)}{r}< 2^{-N}\). Since there are countably many \(N\in \mathbb{N}\), almost surely all the \(r_N\) are simultaneously finite.
Now denote by \(\mathcal{A}_u\) the event \[\mathcal{A}_u := \{\exists r\in \mathbb{N}^*\;|\;X(u+r,j)>X(u,j)\}.\] On the event \(\mathcal{A}_u\), let \(r_0\in \mathbb{N}^*\) be such that \(X(u+r_0,j)>X(u,j)\). We denote by \(\varepsilon_0 := \frac{X(u+r_0,j)-X(u,j)}{r_0}>0.\) Let \(N_0\in \mathbb{N}\) be such that \(2^{-N_0}<\varepsilon_0\); by our previous observation, for \(r\geq r_{N_0}\) we have \(\frac{X(u+r,j)-X(u,j)}{r}<\frac{X(u+r_0,j)-X(u,j)}{r_0}\), which proves that \[\alpha_j(u) = \max_{1\leq r \leq r_{N_0}}\frac{X(u+r,j)-X(u,j)}{r},\] which in turn proves the first item of the lemma on the event \(\mathcal{A}_u\). On the event \(\mathcal{A}_u^c\), for all \(r\in \mathbb{N}^*\) we have \(X(u+r,j)\leq X(u,j)\), which implies \(\alpha_j(u)\leq 0.\) We now show that almost surely there exists \(r\geq 1\) such that \(X(u+r,j)\geq X(u,j)\), which proves \(0=\alpha_j(u)=\frac{X(u+r,j)-X(u,j)}{r}\) and concludes the proof of the first item of the lemma. First, we write \[\begin{align} \mathbb{P}\left( \forall r\in \mathbb{N}^*,\;X(u+r,j)<X(u,j) \right)=\int_{\mathbb{R}}\prod_{r=1}^\infty\mu(]-\infty,x[)\mu(dx). \end{align}\] This comes from the fact that by Assumption 1, all the \(X(v)\) are independent and distributed according to the same probability measure \(\mu.\) Denote by \(I\) the set \[I := \{x\in \mathbb{R}\;|\;\mu(]-\infty,x[)=1\}.\] It turns out that \(I\) is a measurable set: either \(I=\emptyset\) or \(I\) is an interval of the form \(]a,\infty[\) or \([a,\infty[.\) The above computation can then be written as \[\begin{align} \mathbb{P}\left( \forall r\in \mathbb{N}^*,\;X(u+r,j)<X(u,j) \right)=\mu(I). \end{align}\] If \(I=\emptyset\), then the probability is \(0\) and we are done. Otherwise, let \(a=\inf I\in \mathbb{R}\) and let \((a_n)_n\) be a decreasing sequence of elements of \(I\) such that \(a_n \to a.\) We have \[\mu(]-\infty,a])=\mu(\bigcap_{n\in \mathbb{N}}]-\infty,a_n[)=1.\] There are two cases.
If \(\mu(\{a\})>0\), then \(\mu(]-\infty,a[)<1\) so that \(a\not\in I\), and \(I=]a,\infty[\). Therefore \(\mu(I)=\mu(]a,\infty[)=1-\mu(]-\infty,a])=0.\) Otherwise, if \(\mu(\{a\})=0\), then \(I=[a,\infty[\) and \(\mu(I)=1-\mu(]-\infty,a])+\mu(\{a\})=0.\) In both cases we have \(\mu(I)=0\), which shows that with probability \(1\) there exists \(r\in \mathbb{N}^*\) such that \(X(u+r,j)\geq X(u,j)\), and this implies the conclusion as mentioned earlier.
We now prove the second item of the lemma. We first show that \(\alpha_j(T(u))\leq \alpha_j(u)\). Indeed, let \(v>T(u)\) be such that \(\alpha_j(T(u))=\tau^X_j(T(u),v).\) By definition of \(\alpha_j(u)\) and \(T(u)\) we have \[\alpha_j(u)=\tau^X_j(u,T(u))\geq \tau^X_j(u,v).\] Therefore, by Fact 16, we have \[\tau^X_j(T(u),v)\leq \tau^X_j(u,v).\] This entails \[\alpha_j(T(u))\leq \alpha_j(u).\] We now show that for any \(v\in \mathbb{Z}\) such that \(u<v<T(u)\) we have \(\alpha_j(v)>\alpha_j(u)\). Indeed, for such \(v\), by definition of \(T(u)\) we have \[\tau^X_j(u,v)<\tau^X_j(u,T(u)).\] Therefore, by Fact 16 we have \[\tau^X_j(v,T(u))>\tau^X_j(u,T(u))=\alpha_j(u).\] This implies \(\alpha_j(v)>\alpha_j(u).\) Combining these two facts yields the identity claimed in the second item.
For the third item of the lemma, assume by contradiction that there are \(u,v\in \mathbb{Z}\) such that \(u<v<T(u)\) and \(T(v)>T(u).\) Then by definition of \(T(v)\) we have \[\tau^X_j(v,T(u))<\tau^X_j(v,T(v))=\alpha_j(v).\] Therefore, by Fact 16 we have \[\tau^X_j(T(u),T(v))>\tau^X_j(v,T(v))=\alpha_j(v).\] This implies \(\alpha_j(T(u))>\alpha_j(v)\), which contradicts the second item of the lemma, as we would have \(\alpha_j(u)\geq \alpha_j(T(u))>\alpha_j(v)>\alpha_j(u)\). ◻
We now provide the proof of Theorem 15.
Proof of Theorem 15. We will build a function \(\Psi : \overline{\mathbb{R}}^{\mathbb{Z}^2}\to \mathbb{R}^{\mathbb{Z}^2}\) such that almost surely \(\Psi(\Phi(X))=X-\overline{\mu}\mathbb{1}.\) In order to do so, we first build a function \(\Psi_0 : \overline{\mathbb{R}}^{\mathbb{Z}^2}\to \mathbb{R}^{\mathbb{Z}^2}\) such that almost surely \[\Psi_0(\Phi(X))=(X(i,j)-X(0,j))_{i,j\in \mathbb{Z}}.\] Fix an element \(\alpha\in \overline{\mathbb{R}}^{\mathbb{Z}^2}.\) We build \(Y=\Psi_0(\alpha)\) algorithmically. During the various steps of the algorithm we will need to make some assumptions on \(\alpha\) (that are verified almost surely when \(\alpha=\Phi(X)\)). Therefore, to define \(\Psi_0\) and \(\Psi\) as functions on the whole domain \(\overline{\mathbb{R}}^{\mathbb{Z}^2}\) we arbitrarily choose an element \(\dagger\) of \(\mathbb{R}^{\mathbb{Z}^2}\) and set \(\Psi_0(\alpha)=\Psi(\alpha)=\dagger\) whenever \(\alpha\) is a "bad configuration" (which almost surely does not happen). First, if there exists \(u\in \mathbb{Z}^2\) such that \(\alpha(u)\not\in \mathbb{R}\) we define \(\Psi_0(\alpha)=\dagger.\) We now assume that for all \(u\in \mathbb{Z}^2\), \(\alpha(u)\in \mathbb{R}.\) Let \(j\in \mathbb{Z}\); we first define \(Y\) on \(\mathbb{Z}\times \{j\}\). Let \(u\in \mathbb{Z}\). As in Lemma 4 we denote by \(T(u)\) the index \[T(u):= \inf\{v\in \mathbb{Z}\;|\;v>u\text{ and }\alpha_j(v)\leq \alpha_j(u)\},\] with the convention \(\inf \emptyset =\infty.\) If there exists \(u\in \mathbb{Z}\) such that \(T(u)=\infty\) then we set \(\Psi_0(\alpha)=\dagger.\) We now assume that all the \(T(u)\) are finite, which is almost surely the case by Lemma 4.
Let \((x_n)_{n\geq 0}\in \mathbb{Z}^\mathbb{N}\) the sequence defined by \[x_0=0 \quad \quad \forall n\in \mathbb{N},\;x_{n+1} := T(x_n).\] We now define \(\Psi_0(\alpha)\) on the collection \((x_n,j)_{n\geq 0}\) by \[Y(x_0,j):=0\] \[\forall n\in \mathbb{N},\;Y(x_{n+1},j) := Y(x_{n},j)+(x_{n+1}-x_{n})\alpha_j(x_n).\] We proceed to extend this definition on \(\mathbb{N}\times \{j\}.\) Let \(n\in \mathbb{N}\), such that \(x_n+1<x_{n+1}\) we define \(Y\) on \(\llbracket x_n+1, x_{n+1}\rrbracket\times \{j\}\) by induction. \(Y(x_{n+1},j)\) was already defined earlier. If we assume that \(Y\) is defined on \(\llbracket x_{n+1}-r, x_{n+1}\rrbracket\times \{j\}\) for some \(0\leq r \leq x_{n+1}-x_n-2.\) Then let \(x:= x_{n+1}-(r+1)\in \mathbb{Z}\) we define \(Y(x,j)\) by the following rule.
If \(T(x)\leq x_{n+1}\) we set \(Y(x,j):= Y(T(x),j)-\alpha_j(x)(T(x)-x).\)
Otherwise we set \(\Psi_0(\alpha)=\dagger.\)
By the third item of Lemma 4 we see that if \(\alpha=\Phi(X)\) then, almost surely, we will have \(T(x)\leq x_{n+1}\) and therefore we will not set \(\Psi_0(\alpha)=\dagger.\)
Once we have constructed \(Y\) on \(\mathbb{N}\times \{j\}\) by the above procedure, we may proceed to define \(Y\) on \(\mathbb{Z}\times \{j\}.\) We proceed by induction: assuming that \(Y\) is constructed on \(\llbracket -r, \infty \llbracket\times \{j\}\) for some \(r\geq 0\), we extend this construction to \(\llbracket -(r+1),\infty\llbracket\times \{j\}\) by setting \[Y(-(r+1),j) := Y(T(-(r+1)),j)-\alpha_j(-(r+1))(T(-(r+1))+(r+1)).\]
By repeating this for all \(j\in \mathbb{Z}\) we build the process \(Y\) on \(\mathbb{Z}\times \mathbb{Z}.\) This process was built so that \(Y(0,j)=0\) and \(Y(T(u),j)-Y(u,j)=\alpha_j(u)(T(u)-u)\) for all \(u,j\in \mathbb{Z}\). It is therefore straightforward to verify by Lemma 4 that almost surely we have \[Y=\Psi_0(\Phi(X))=(X(i,j)-X(0,j))_{i,j\in \mathbb{Z}}.\]
Once we have computed \(Y=\Psi_0(\alpha)\), we define \(\Psi(\alpha)\) in the following way: \[\forall (i,j)\in \mathbb{Z}^2,\;\Psi(\alpha)(i,j):=Y(i,j) - \lim_{N\to \infty}\frac{1}{N}\sum_{r=1}^N Y(r,j).\] As usual, if the limit above does not exist then we set \(\Psi(\alpha)=\dagger.\) When \(\alpha=\Phi(X)\), almost surely we have \(Y(r,j)=X(r,j)-X(0,j)\) and by the law of large numbers the limit exists: \[\frac{1}{N}\sum_{r=1}^N Y(r,j) \xrightarrow[N\to \infty]{} \overline{\mu}-X(0,j).\] This shows that almost surely \[\Psi(\Phi(X))=X-\overline{\mu}\mathbb{1},\] which concludes the proof of Theorem 15. ◻
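The construction of \(\Psi_0\) above can be sketched in code on a single finite row, with the last index of the window playing the role of the case \(T(u)=\infty\); this finite truncation, and all function names, are our own illustrative choices rather than part of the proof. The forward pass follows the chain \(x_{n+1}=T(x_n)\) and the backward pass fills the remaining points via \(Y(u)=Y(T(u))-\alpha(u)(T(u)-u)\).

```python
import random

def alphas_from_heights(X):
    """alpha(u) = max_{r>=1} (X[u+r]-X[u])/r, computed on a finite window."""
    m = len(X) - 1
    return [max((X[u + r] - X[u]) / r for r in range(1, m - u + 1))
            for u in range(m)]

def reconstruct(alpha):
    """Recover Y = X - X[0] from alpha alone, following the construction of Psi_0.

    Index m = len(alpha) acts as a sentinel for the finite window (the analogue
    of T(u) = infinity never occurring in the almost-sure setting).
    """
    m = len(alpha)

    def T(u):
        # T(u) = min{ v > u : alpha(v) <= alpha(u) }  (second item of Lemma 4)
        for v in range(u + 1, m + 1):
            if v == m or alpha[v] <= alpha[u]:
                return v

    Y = [None] * (m + 1)
    Y[0] = 0.0
    u = 0
    while u < m:                      # forward pass along the chain x_{n+1} = T(x_n)
        v = T(u)
        Y[v] = Y[u] + (v - u) * alpha[u]
        u = v
    for u in range(m - 1, -1, -1):    # backward fill: Y(u) = Y(T(u)) - alpha(u)(T(u)-u)
        if Y[u] is None:
            v = T(u)
            Y[u] = Y[v] - (v - u) * alpha[u]
    return Y

rng = random.Random(7)
X = [rng.gauss(0.0, 1.0) for _ in range(50)]
Y = reconstruct(alphas_from_heights(X))
assert all(abs(Y[u] - (X[u] - X[0])) < 1e-9 for u in range(len(X)))
```

The key point, as in the proof, is that \(\alpha(u)=\tau(u,T(u))\) exactly, so both passes recover the increments of \(X\) without ever reading \(X\) itself.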
David Vernotte
Institut Fourier, UMR 5582, Laboratoire de Mathématiques, Université Grenoble Alpes, CS 40700, 38058 Grenoble cedex 9, France