April 28, 2025
The main purpose of this paper is to study the limit distribution of the number of successes in inhomogeneous Markovian Bernoulli trials. Let \(\{\eta_i\}_{i\ge1}\) be a sequence of Bernoulli random variables. It is well-known that the Chen–Stein method, named after Chen [1] and Stein [2], provides an efficient framework for bounding the approximation error between the distribution of \(\sum_{i=1}^n\eta_i\) and a Poisson distribution. In this paper, from a different perspective, we aim to establish novel convergence regimes characterized by non-Poisson limits. As applications, we study the number of weak cutspheres of a \(d(\ge3)\)-dimensional standard Brownian motion, the number of weak cutpoints of a geometric Brownian motion, and the number of times the population size of a Galton-Watson process or a branching process with immigration in varying environments reaches a given threshold.
The approximation of \(\sum_{i=1}^n\eta_i\) by a Poisson distribution has a long history under various assumptions. For the case where \(\{\eta_i\}_{i\ge1}\) are independent, we refer the reader to [3]–[7] and references therein. Choi and Xia [8] also showed that the binomial approximation is generally better than the Poisson approximation for independent Bernoulli trials. When \(\{\eta_i\}_{i\ge1}\) are dependent, using an approach similar to that of Stein [2], who studied normal approximation for dependent random variables, Chen [1] established a general method for bounding the error in the Poisson approximation of \(\sum_{i=1}^n\eta_i.\) This methodology, now widely recognized as the Chen–Stein method, has proven to be remarkably effective for Poisson approximation problems. For further advancements in this area, see [9]–[12] and references therein.
It is important to note that the primary aim of these works is to bound the approximation error between the distribution of \(\sum_{i=1}^n\eta_i\) (for fixed \(n\)) and a Poisson distribution. For instance, in the case where \(\{\eta_i\}_{i\ge1}\) are independent, \(P(\eta_i=1)=p_i\) and \(\max_{1\le i\le n}p_i\le 1/4,\) Le Cam [5] proved that for any real-valued function \(h\) defined on the nonnegative integers with \(\sup_{i\ge0}|h(i)|\le 1,\) \(\left|E\left(h\left(\sum_{i=1}^n\eta_i\right)\right)-\mathscr P_{\lambda}h\right|<16\lambda^{-1}\sum_{i=1}^np_i^2,\) where \(\mathscr P_\lambda h=\sum_{i=0}^\infty e^{-\lambda}\lambda^ih(i)/i!\) and \(\lambda=\sum_{i=1}^np_i.\)
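To make such bounds concrete, here is a minimal numerical sketch (ours, not part of the paper; the success probabilities are arbitrary choices): it computes the exact law of \(\sum_{i=1}^n\eta_i\) for independent trials by convolution and compares its total variation distance from the Poisson law with the quantity \(16\lambda^{-1}\sum_{i=1}^np_i^2\) appearing above.

```python
import math
import numpy as np

# Independent Bernoulli trials with small, arbitrary success probabilities.
p = np.array([0.01 * (i % 5 + 1) for i in range(200)])   # max p_i = 0.05 <= 1/4
lam = p.sum()

# Exact distribution of the sum, built up by convolving one trial at a time.
dist = np.array([1.0])
for pi in p:
    new = np.zeros(len(dist) + 1)
    new[:-1] += dist * (1 - pi)   # trial contributes 0
    new[1:] += dist * pi          # trial contributes 1
    dist = new

# Poisson(lam) probabilities on the same support, computed recursively.
pois = np.zeros(len(dist))
pois[0] = math.exp(-lam)
for k in range(1, len(dist)):
    pois[k] = pois[k - 1] * lam / k

# sup over |h| <= 1 of |E h(S) - P_lam h| equals twice the TV distance.
tv = 0.5 * np.abs(dist - pois).sum()
print(f"TV distance {tv:.6f} vs Le Cam-type bound {16 / lam * (p ** 2).sum():.6f}")
```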
In contrast, our work adopts a distinct perspective. When the sequence \(\{\eta_i\}_{i\ge1}\) is governed by a Markovian dependence structure, we show that under suitable scaling conditions, the sum \(\sum_{i=1}^n\eta_i\) converges almost surely or in distribution as \(n\rightarrow\infty\) to a random variable with a geometric or gamma distribution.
Prior to presenting the main results, we introduce some notation and conventions.
In this paper, we use \(\xrightarrow{d}\) and \(\xrightarrow{a.s.}\) to denote convergence in distribution and almost sure convergence, respectively. We denote by \(\mathbb{N},\) \(\mathbb{R},\) and \(\mathbb{C}\) the sets of natural, real and complex numbers, respectively. For \(x\in \mathbb{R},\) \(\lfloor x\rfloor\) denotes the floor function, i.e., the greatest integer less than or equal to \(x\). Furthermore, the asymptotic notation \(f(n)\sim g(n)\) indicates that \(\lim_{n\rightarrow\infty}f(n)/g(n)=1.\) We write \(f(n)=O(g(n))\) if there is a constant \(c>0\) such that \(|f(n)|\le cg(n)\) for all sufficiently large \(n.\) For \(\alpha>0,\) the Gamma function is defined as \(\Gamma(\alpha):=\int_{0}^\infty x^{\alpha-1}e^{-x}dx.\) For \(\beta,\;r>0,\) we write \(\xi\sim\mathrm{Gamma}(r,\beta)\) if the random variable \(\xi\) has the density function \(\beta^{r}x^{{r-1}}e^{-\beta x}/\Gamma({r}),~x>0.\) Similarly, for \(\lambda>0\) and \(p\in (0,1),\) we write \(\xi\sim\mathrm{Exp}(\lambda)\) if \(\xi\) has the density function \(\lambda{e}^{-\lambda x},\;x>0,\) and \(\xi\sim\mathrm{Geo}(p)\) if \(P(\xi=i)=(1-p)^{i}p\) for \(i\geq 0\). Finally, for a set \(A\), we denote by \(\#A\) its cardinality.
To state the main results, we need the notion of a regularly varying sequence.
Definition 1. Let \(\{S(n)\}_{n\ge 1}\) be a sequence of positive numbers. If for each \(x>0,\) \[\begin{align} \lim_{n\rightarrow\infty}\frac{S(\lfloor xn\rfloor)}{S(n)}=x^{\tau}, \end{align}\] we say the sequence \(\{S(n)\}_{n\ge 1}\) is regularly varying with index \(\tau.\)
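For instance (our illustration, not from the paper), \(S(n)=\sum_{i=1}^ni^{-1/2}\) is regularly varying with index \(\tau=1/2,\) while \(S(n)=\sum_{i=1}^ni^{-1}\sim\log n\) is regularly varying with index \(\tau=0;\) the defining ratio is easy to inspect numerically.

```python
import numpy as np

# S1(n) = sum_{i<=n} i^(-1/2) should be regularly varying with index 1/2,
# S0(n) = sum_{i<=n} i^(-1)   with index 0 (slowly varying).
S1 = (1.0 / np.sqrt(np.arange(1, 10**6 + 1))).cumsum()
S0 = (1.0 / np.arange(1, 10**6 + 1)).cumsum()
x = 3
for n in (10**3, 10**4, 10**5):
    print(n, S1[x * n - 1] / S1[n - 1], x ** 0.5,
          S0[x * n - 1] / S0[n - 1], x ** 0.0)
```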
Suppose that \(\{\eta_n\}_{n\ge 1}\) is a sequence of Bernoulli variables such that for \(i\ge1,\) \(k\ge 1,\) \(1\le j_1\le\ldots\le j_k\le j_{k+1}<\infty,\) \[\label{mc}\begin{align} &P\left(\eta_i=1\right)=\frac{1}{\rho(0,i)},\;P\left(\eta_{j_{k+1}}=1\middle|\, \eta_{j_1}=1,\ldots,\eta_{j_k}=1\right)=P\left(\eta_{j_{k+1}}=1\middle|\,\eta_{j_k}=1\right)=\frac{1}{\rho(j_k,j_{k+1})}, \end{align}\tag{1}\] where \(\rho(j,j)=1,\) \(\rho(i,j)> 1\) for \(j>i\ge 0.\)
We have the following theorem, which says that under certain conditions, when properly scaled, \(\sum_{j=1}^n\eta_j\) converges to a non-degenerate random variable as \(n\rightarrow\infty.\)
Theorem 1. Let \(\{\eta_{n}\}_{n\ge 1}\) be a sequence of Bernoulli variables such that (1) is satisfied. Assume that \(\{D(n)\}_{n\ge0}\) is a sequence of numbers such that \(D(0)=1\) and \(D(n)>1\) for all \(n\ge1.\)
Assume that \(\zeta(D):=\sum_{n=1}^\infty \frac{1}{D(n)}<\infty.\) If \(\rho(i,j)=D(j-i)\) for all \(j\ge i\ge 0,\) then \[\sum_{j=1}^n\eta_j\xrightarrow{a.s.}\xi \text{ as }n\rightarrow\infty,\] where \(\xi\sim\mathrm{Geo}\Big(\frac{1}{\zeta(D)+1}\Big).\)
Suppose that \(\sum_{n=1}^\infty \frac{1}{D(n)}=\infty.\) For \(n\ge 1,\) let \(S(n)=\sum_{i=1}^n\frac{1}{D(i)}.\) Assume \(\{S(n)\}_{n\ge1}\) is regularly varying with index \(\sigma\in [0,1].\) If for each \(\varepsilon>0,\) there exists \(n_1\ge 1\) such that \[\begin{align} \label{sigma01} 1-\varepsilon\le \frac{\lambda_\sigma D(i)}{\rho(0,i)}\le 1+\varepsilon, \quad 1-\varepsilon\le \frac{D(j-i)}{\rho(i,j)}\le 1+\varepsilon \end{align}\tag{2}\] for \(j-i\ge n_1,\;i\ge n_1\) with \(\lambda_0=1\) and \(\lambda_\sigma=\frac{\Gamma(1+2\sigma)}{\sigma\Gamma(\sigma)\Gamma(1+\sigma)}\) for \(\sigma\in (0,1],\) then \[\frac{\sum_{j=1}^n\eta_j}{S(n)}\xrightarrow{d}\xi \text{ as }n\rightarrow\infty,\] where \(\xi\sim\mathrm{Exp}(\lambda_\sigma).\)
Remark 1. For \(j\ge i\ge1,\) \(\rho(i,j)\) characterizes the dependence between \(\eta_i\) and \(\eta_j.\) From 2 , we note that to apply the theorem, it is not necessary to strictly require \(\rho(i,j)\) to be an explicit function of the distance \(j-i.\) In practice, if \(\rho(i,j)\) is asymptotically characterized by the distance between \(i\) and \(j,\) the theorem remains valid.
In certain special cases, even if condition 2 fails, it is still possible to determine the limit distribution of \(\sum_{i=1}^n \eta_i\).
Theorem 2. Let \(\{\eta_{n}\}_{n\ge 1}\) be a sequence of Bernoulli variables such that (1) is satisfied. Fix \(\alpha>0,\) \(\beta>0.\) Suppose that for each \(\varepsilon>0,\) there exists \(n_2\ge 1\) such that \[\begin{align} \label{al} &1-\varepsilon\le \frac{\beta i}{\rho(0,i)}\le 1+\varepsilon,\quad 1-\varepsilon\le \frac{\beta j^{{1-\alpha}}(j^{{\alpha}}-i^{{\alpha}})}{\rho(i,j)}\le 1+\varepsilon, \end{align}\tag{3}\] for \(j-i\ge n_2,\;i\ge n_2.\) Then \[\frac{{\alpha}\beta}{\log n}\sum_{j=1}^n\eta_j\overset{d}\to\xi \text{ as }n\rightarrow\infty,\] where \(\xi\sim \mathrm{Gamma}({\alpha},1).\)
Remark 2. Assume that 3 holds. If \(\alpha= 1,\) then 2 is naturally satisfied with \(D(n)=n\beta\) for \(n\ge 1.\) However, if \(\alpha\ne 1,\) no function \(D(\cdot)\) can satisfy 2 . Specifically, for all \(j\ge i\ge 1,\) \[\begin{align} \label{ijlu} (\alpha\wedge 1)(j-i)\le j^{1-\alpha}(j^\alpha-i^\alpha)\leq(1\vee \alpha)(j-i). \end{align}\tag{4}\] But for any fixed \(k\ge 1,\) the limit \[\lim_{ j=ki,\;i\rightarrow\infty }\frac{j^{{1-\alpha}}(j^{{\alpha}}-i^{{\alpha}})}{j-i}=\frac{k^{1-\alpha}(k^\alpha-1)}{k-1},\] depends on \(k\) when \(\alpha\ne 1.\)
We outline here the key ideas for proving Theorems 1 and 2. To determine the limit distribution of \(\sum_{i=1}^n\eta_i,\) we employ the method of moments. A primary step is to show that for \(k\ge1,\) when properly scaled, the \(k\)-th moment \(E\left(\sum_{i=1}^n\eta_i\right)^k\) converges to a finite limit as \(n\rightarrow\infty.\) Through detailed computations, we derive the expansion (see 32 below) \[\begin{align} E\Bigg(\sum_{j=1}^n\eta_j\Bigg)^k=\sum_{m=1}^k\sum_{ \substack{l_1+\dots+l_m=k,\\ l_s\ge1,\;s=1,...,m }}\frac{k!}{l_1!\cdots l_m!}\sum_{1\le j_1<...<j_m\le n}\frac{1}{\rho(0,j_1)\rho(j_1,j_2)\cdots \rho(j_{m-1},j_m)}. \end{align}\] When \(\rho(i,j)\) asymptotically behaves like \(D(j-i)\) for some function \(D(\cdot)\) and sufficiently large \(i\) and \(j-i,\) the analysis reduces to studying multiple sums of the form \[\begin{align} \label{cms} \sum_{ 1\le j_1<...<j_m\le n }\frac{1}{D(j_1,s)D(j_2-j_1,s)\cdots D(j_m-j_{m-1},s)},\quad n\ge m\ge 1, \;s\in \mathbb{C}. \end{align}\tag{5}\] In Section 2, we rigorously characterize the asymptotics of these sums. Our results yield some neat formulae which may be of independent interest in analysis. For cases where \(\rho(i,j)\) satisfies 3 , we leverage a result from [13], which addresses the limit behavior of related multiple sums, see Proposition 1 below.
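Anticipating Section 2, the multiple sums in 5 can be evaluated numerically through the convolution recursion \(\Phi(n,m)=\sum_{j=1}^{n}D(j)^{-1}\Phi(n-j,m-1)\) employed there. The following sketch (ours; the test cases \(D(n)=n^2\) and \(D(n)=\sqrt n\) are arbitrary choices) illustrates the convergent regime and the regularly varying regime with \(\tau=1/2\) and \(m=2.\)

```python
import numpy as np

def phi(inv_d, m):
    """Phi(n, m) = sum over 1 <= j_1 < ... < j_m <= n of
    prod_i 1/D(j_i - j_{i-1}) (with j_0 = 0 and no gap constraint),
    via the recursion Phi(n, m) = sum_j (1/D(j)) Phi(n - j, m - 1)."""
    cur = np.ones(len(inv_d))                    # Phi(., 0) = 1
    for _ in range(m):
        cur = np.convolve(inv_d, cur)[: len(inv_d)]
    return cur                                   # cur[n] = Phi(n, m)

N = 5000
j = np.arange(N + 1, dtype=float)
inv_d = np.zeros(N + 1)

# Summable case D(n) = n^2: Phi(n, m) -> (pi^2/6)^m  (cf. Theorem 3, part (i)).
inv_d[1:] = 1.0 / j[1:] ** 2
print(phi(inv_d, 3)[N], (np.pi ** 2 / 6) ** 3)

# Divergent case D(n) = sqrt(n): S(n) ~ 2 sqrt(n) has index tau = 1/2, and
# Phi(n, 2)/S(n)^2 -> 1/lambda_{1/2} = pi/4  (cf. Theorem 3, part (ii), m = 2).
inv_d[1:] = 1.0 / np.sqrt(j[1:])
S = inv_d.cumsum()
print(phi(inv_d, 2)[N] / S[N] ** 2, np.pi / 4)
```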
We employ Theorems 1 and 2 to investigate several stochastic processes. Specific applications include determining the number of weak cutspheres for a \(d (\ge 3)\)-dimensional standard Brownian motion, counting the weak cutpoints of geometric Brownian motion, and analyzing the frequency with which the population size hits a predefined level in both Galton-Watson processes and branching processes with immigration in varying environments. We next describe the problems in detail.
Fix \(d\ge 3\) and let \(\{B_t\}_{t\ge0}\) be a standard \(d\)-dimensional Brownian motion starting at the origin. For \(x\in \mathbb{R}^d\) and \(r>0,\) denote by \[\mathcal{B}(x,r):=\{y\in \mathbb{R}^d: |y-x|\le r\}\text{ and } \mathcal{S}(x,r):=\{y\in \mathbb{R}^d: |y-x|=r\}\] the ball and sphere of radius \(r\) centered at \(x,\) respectively.
Definition 2. Fix \(a>0.\) Suppose \(\{B_t\}_{t\ge 0}\) is a standard \(d\)-dimensional Brownian motion. For \(r>0,\) if the motion never returns to \(\mathcal{B}(0,r)\) after its first visit to \(\mathcal{S}(0,r+a),\) then we call \(\mathcal{S}(0,r)\) a weak \(a\)-cutsphere of \(\{B_t\}_{t\ge0}.\) For \(b>a>0,\) set \[\begin{align} \label{decnab} C(a,b):=\left\{k\in \mathbb{N}: \mathcal{S}(0,kb) \text{ is a weak } a\text{-cutsphere of } \{B_{t}\}_{t\ge 0} \right\}. \end{align}\tag{6}\]
Corollary 1. Fix \(d\ge 3.\) Let \(\{B_t\}_{t\ge 0}\) be a standard \(d\)-dimensional Brownian motion. For \(b>a>0,\) let \(C(a,b)\) be as in 6 . Then \[\begin{align} \frac{|C(a,b)\cap [1,n]|}{(a/b)\log n}\xrightarrow{d}\xi, \end{align}\] as \(n\to\infty,\) where \(\xi\sim \mathrm{Gamma}(d-2,1).\)
Remark 3. For \(1\)- and \(2\)-dimensional Brownian motions, \(C(a,b)=\emptyset\) since these motions are recurrent. Therefore we need only consider dimensions \(d\ge 3.\)
Consider the stochastic differential equation \[\begin{align} dX_t=\mu X_tdt+\sigma X_tdB_t, \label{sde} \end{align}\tag{7}\] where \(\mu>0,\) \(\sigma> 0\) are constants, \(\{B_t\}_{t\ge0}\) is a standard 1-dimensional Brownian motion, and \(X_0=x_0>0.\) It is known that the solution to 7 is given by a geometric Brownian motion \[\begin{align} X_t=X_0\exp\left\{\left(\mu-\frac{\sigma^2}{2}\right)t+\sigma B_t\right\},\;t\ge 0.\label{geob} \end{align}\tag{8}\] We refer to Klebaner [14] for the basics of geometric Brownian motion. Analogous to the concept of weak cutspheres for \(d\)-dimensional Brownian motion, we define the weak cutpoints for geometric Brownian motion as follows.
Definition 3. Fix \(a>0.\) Let \(\{X_t\}_{t\ge 0}\) be the geometric Brownian motion in 8 . For \(x>0,\) if \(\{X_t\}_{t\ge 0}\) never returns to \((0,x]\) after its first visit to \(x+a,\) then we call \(x\) a weak \(a\)-cutpoint of \(\{X_t\}_{t\ge0}.\) For \(b>a>0,\) set \[\begin{align} \label{decnabg} \hat{C}(a,b):=\left\{k\in \mathbb{N}: kb \text{ is a weak } a\text{-cutpoint of } \{X_{t}\}_{t\ge 0} \right\}. \end{align}\tag{9}\]
Corollary 2. Fix \(b>a>0.\) Suppose \(2\mu>\sigma^2\) and let \(\{X_t\}_{t\ge 0}\) be the geometric Brownian motion given in 8 with \(x_0\in (0,b)\). Let \(\hat{C}(a,b)\) be as in 9 . Then \[\begin{align} \frac{|\hat{C}(a,b)\cap [1,n]|}{(a/b)\log n}\xrightarrow{d}\xi, \end{align}\] as \(n\to\infty,\) where \(\xi\sim \mathrm{Gamma}\left(\frac{2\mu}{\sigma^2}-1,1\right).\)
Remark 4. It is easy to see that \(X_t\to 0\) a.s. as \(t\to\infty\) if \(2\mu<\sigma^2,\) while \(\{X_t\}_{t\ge 0}\) is recurrent if \(2\mu=\sigma^2.\) Therefore, if \(2\mu\le \sigma^2,\) we always have \(\hat{C}(a,b)=\emptyset.\)
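Before turning to branching processes, we record a quick simulation sketch (ours; the parameter values are arbitrary) of the closed-form solution 8 : it integrates the SDE 7 by an Euler–Maruyama scheme along one Brownian path and compares the terminal value with 8 evaluated on the same path.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, x0, T, n_steps = 1.0, 0.5, 1.0, 1.0, 10**5
dt = T / n_steps

# One Brownian path, shared by both constructions.
dB = rng.normal(0.0, np.sqrt(dt), size=n_steps)

# Euler-Maruyama discretization of the SDE (7).
x = x0
for db in dB:
    x += mu * x * dt + sigma * x * db

# Closed-form solution (8) at the same terminal time.
x_exact = x0 * np.exp((mu - sigma**2 / 2) * T + sigma * dB.sum())
print(x, x_exact)   # the two values agree up to discretization error
```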
Now, we consider a branching process with geometric offspring distribution. Set \[f(s)=\frac{1}{2-s},\quad s\in [0,1].\] Let \(\{Y_n\}_{n\ge 0}\) be a Markov chain satisfying \(P(Y_0=1)=1\) and \[\begin{align} E\left(s^{Y_n}\,\middle |\,Y_{n-1},\ldots,Y_{0}\right)=f(s)^{Y_{n-1}},\quad \;s\in [0,1],\;n\ge 1. \end{align}\] This defines a critical Galton-Watson branching process with geometric offspring distribution. For \(n\ge1,\) define \[\begin{align} N_n:=\#\{1\le t\le n:Y_t=1\} \end{align}\] as the number of generations where the population size is exactly \(1.\) The following corollary characterizes the limit distribution of \(N_n\).
Corollary 3. We have \(N_n\xrightarrow{a.s.}\xi\) as \(n\rightarrow\infty,\) where \(\xi\sim\mathrm{Geo}\left(\frac{6}{\pi^2}\right)\).
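Corollary 3 is easy to probe by simulation. The sketch below (ours) runs the critical Galton-Watson process with offspring generating function \(f(s)=1/(2-s)\) and compares the empirical behavior of \(N_n\) with the \(\mathrm{Geo}(6/\pi^2)\) limit; the sum of \(y\) geometric offspring counts is drawn in one step as a negative binomial variable.

```python
import numpy as np

rng = np.random.default_rng(1)

def N_n(n_gen):
    """Critical GW process with pgf f(s) = 1/(2-s), i.e. Geometric(1/2)
    offspring on {0,1,2,...}, started from Y_0 = 1; counts t <= n_gen with Y_t = 1."""
    y, count = 1, 0
    for _ in range(n_gen):
        if y == 0:
            break                            # extinction is absorbing
        y = rng.negative_binomial(y, 0.5)    # sum of y i.i.d. Geometric(1/2) counts
        if y == 1:
            count += 1
    return count

samples = np.array([N_n(10**4) for _ in range(20000)])
p = 6 / np.pi**2
print(samples.mean(), np.pi**2 / 6 - 1)     # Geo(p) has mean (1-p)/p = pi^2/6 - 1
print((samples == 0).mean(), p)             # and P(xi = 0) = p = 6/pi^2
```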
Next, we consider a branching process in varying environments (BPVE) with immigration. It is well-known that compared to Galton-Watson processes, BPVEs exhibit new phenomena due to the influence of time-dependent environments. For foundational studies on this topic, we refer to [15]–[21]. To formalize the model, let \(0<p_i<1\) for \(i\ge 1\) be a sequence of numbers. For \(i\ge 1\), define the probability generating function \[\begin{align} \label{fis} \tilde{f}_i(s)=\frac{p_i}{1-(1-p_i)s},\quad s\in [0,1], \end{align}\tag{10}\] which governs the offspring distribution of an individual from the \((i-1)\)-th generation. Let \(\{Z_n\}_{n\ge 0}\) be a Markov chain satisfying \(P(Z_0=0)=1\) and \[\begin{align} E\left(s^{Z_{n}}\,\middle |\,Z_0,\ldots,Z_{n-1}\right)=\tilde{f}_n(s)^{Z_{n-1}+1},\quad n\ge 1. \label{mb} \end{align}\tag{11}\] This defines a BPVE with exactly one immigrant per generation. It follows from 11 that \[m_i:=E\left(Z_i\,\middle |\,Z_{i-1}=0\right)=\tilde{f}_i'(1)=\frac{1-p_i}{p_i},\quad i\ge 1.\] We focus on the near-critical regime, for which \(m_i\to 1\) (equivalently \(p_i\to 1/2\)) as \(i\to\infty.\) Specifically, set \[\begin{align} \label{dpa} p_i= \frac{1}{2}-\frac{r_i}{4},\quad i\ge 1, \end{align}\tag{12}\] where \(r_i\in [0,1]\) for all \(\;i\ge1.\) For \(n\ge 1,\) define \[I_n=\#\{1\le t\le n: Z_t=0\}\] as the number of regenerating times within \([1,n].\) A time \(t\) is termed a regenerating time if \(Z_t=0.\) On such events, all individuals (including the immigrant) from generation \(t-1\) leave no descendants, leading to local extinction. However, an immigrant arrives almost surely in generation \(t,\) thereby regenerating the process. Thus, \(I_n\) quantifies the frequency of such regenerative events.
Corollary 4. (i) Suppose \(p_i,\;i\ge 1\) are defined by 12 and \(\lim_{i\to\infty}\sum_{n\ge i}r_n=0\). Then \(\frac{I_n}{\log n}\xrightarrow{d}\xi\) as \(n\rightarrow\infty,\) where \(\xi\sim\mathrm{Exp}(1)\). (ii) Fix \(B\in [0,1).\) If \(p_i=\frac{1}{2}-\frac{B}{4i}\) for \(i\ge1\) (that is, 12 with \(r_i=B/i\)), then \(\frac{I_n}{\log n}\xrightarrow{d}\xi\) as \(n\to\infty,\) where \(\xi\sim\mathrm{Gamma}(1-B,1)\).
Remark 5. In [21], the first author studies the case \(p_i=\frac{1}{2}-\frac{B}{4i},\; i\ge1.\) Fix a positive integer \(a,\) and define \(C(a)=\{t\ge 0: Z_t=a\}.\) It is shown that the set \(C(a)\) is almost surely infinite if \(0\le B<1,\) and finite if \(B\ge 1.\) Specifically, when \(B=0,\) it is further proved in [21] that \(\#\left(C(a)\cap[1,n]\right)/\log n\xrightarrow{d}\xi\) as \(n\rightarrow\infty,\) where \(\xi\sim \mathrm{Exp}(1).\)
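The following Monte Carlo sketch (ours; the choice \(B=1/2\) is arbitrary, and the convergence is logarithmic, so only rough agreement should be expected) illustrates Part (ii) of Corollary 4 by simulating the BPVE with one immigrant per generation.

```python
import numpy as np

rng = np.random.default_rng(2)

def I_n(n, B):
    """BPVE with one immigrant per generation; offspring pgf p_i/(1-(1-p_i)s)
    is Geometric(p_i) on {0,1,...} with p_i = 1/2 - B/(4i).  Z_t given Z_{t-1}
    is a sum of Z_{t-1}+1 such geometric variables, i.e. negative binomial.
    Returns #{1 <= t <= n : Z_t = 0}."""
    z, hits = 0, 0
    for i in range(1, n + 1):
        z = rng.negative_binomial(z + 1, 0.5 - B / (4 * i))
        if z == 0:
            hits += 1
    return hits

n, B = 10**4, 0.5
vals = [I_n(n, B) / np.log(n) for _ in range(400)]
# Gamma(1-B, 1) has mean 1-B; expect rough agreement at this n.
print(np.mean(vals), 1 - B)
```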
Outline of the paper. The remainder of the paper is organized as follows. As mentioned above, Section 2 investigates the asymptotic behavior of the multiple sums defined in 5 . Building on these results, Section 3 establishes the proofs of Theorems 1 and 2. The proofs of Corollaries 1, 2, 3 and 4, which are derived from Theorems 1 and 2, are detailed in Section 4. Finally, concluding remarks are presented in Section 5.
A key step in proving Theorem 1 involves studying the asymptotic behavior of the multiple sums defined in 5 . The following theorem characterizes the limiting behavior of these sums across distinct regimes.
Theorem 3. (i) Fix \(n_0\ge 1.\) For \(s\in \mathbb{C},\) let \(D(n,s),\;n\ge 1,\) be complex functions such that \(\sum_{n=1}^\infty\frac{1}{|D(n,s)|}<\infty\) and denote \(\lambda(s):=\sum_{n=n_0}^\infty \frac{1}{D(n,s)}.\) For \(1\le m\le n\) and \(s\in \mathbb{C},\) set \[\begin{align} \Phi(n,m,s)=\sum_{\substack{ 1\le j_1<...<j_m\le n\\ j_i-j_{i-1}\ge n_0,\;i=1,...,m }}\frac{1}{D(j_1,s)D(j_2-j_1,s)\cdots D(j_m-j_{m-1},s)}, \end{align}\] where and throughout, we assume \(j_0=0.\) Then, we have \[\begin{align} \label{lgnm} \lim_{n\rightarrow\infty}\Phi(n,m,s)=\lambda(s)^m. \end{align}\tag{13}\] (ii) Fix \(n_0\ge 1.\) Let \(D(n),\;n\ge1\) be a sequence of numbers such that \(\inf_{n\ge1}D(n)>0\) and \(\sum_{n=1}^\infty \frac{1}{D(n)}=\infty.\) For \(n\ge 1,\) set \(S(n):=\sum_{i=1}^n \frac{1}{D(i)}.\) For \(m\ge 1,\) denote \[\begin{align} \Phi(n,m)=\sum_{\substack{ 1\le j_1<...<j_m\le n\\ j_i-j_{i-1}\ge n_0,\;1\le i\le m}}\frac{1}{D(j_1)D(j_2-j_1)\cdots D(j_m-j_{m-1})}. \end{align}\] If \(\{S(n)\}_{n\ge1}\) is regularly varying with index \(\tau\in [0,1],\) then we have \[\begin{align} \label{lgnmd} \lim_{n\rightarrow\infty}\frac{\Phi(n,m)}{(S(n))^m}=\lambda_\tau^{-(m-1)}, \end{align}\tag{14}\] where \(\lambda_0=1\) and \(\lambda_\tau=\frac{\Gamma(1+2\tau)}{\tau\Gamma(1+\tau)\Gamma(\tau)}\) for \(\tau\in (0,1].\)
Based on Theorem 3, we derive some formulae that may have potential applications in analysis. To formalize these results, we first introduce necessary notations. Let \(\log_0i=i,\) and for \(n\ge1,\) set \(\log_n i=\log\left(\log_{n-1}i\right).\) For an integer \(m\ge 0\) and a complex number \(s,\) set \[\begin{align} \label{dom} \mathscr O_m=\min\left\{n\in \mathbb{N}: \log_m n>0\right\},\;\lambda(m,s,i)=\left(\log_mi\right)^s\prod_{j=0}^{m-1} \log_{j}i, \end{align}\tag{15}\] where and in what follows, we assume that an empty product equals 1 and an empty sum equals \(0.\) Now fix integers \(m\ge0\) and \(k\ge1.\) For \(s=\sigma+t\mathrm i\in \mathbb{C}\) and \(n_0\ge \mathscr O_m,\) define \[\begin{align} \label{defg} &U_n(k,m,n_0, s)=\sum_{\substack{ 1\le j_1<...<j_k\le n\\ j_i-j_{i-1}\ge n_0,\;i=1,...,k }}\frac{1}{\lambda(m,s,j_1)\lambda(m,s,j_2-j_1)\cdots \lambda(m,s,j_{k}-j_{k-1})}. \end{align}\tag{16}\]
Corollary 5. Fix integers \(m\ge0,\) \(k\ge1\) and let \(\mathscr O_m\) be as in 15 . For \(n_0\ge \mathscr O_m\) and \(s=\sigma+t\mathrm i\in \mathbb{C},\) let \(U_n(k,m,n_0, s)\) be defined as in 16 .
If \(\sigma>1,\) then \[\begin{align} \lim_{n\rightarrow\infty} U_n(k,m,n_0, s)=\zeta(m,s)^k, \label{rzf} \end{align}\tag{17}\] where \(\zeta(m,s)=\sum_{i=n_0}^\infty \frac{1}{\lambda(m,s,i)}\) is absolutely convergent.
If \(\sigma=1,\) then \[\begin{align} \lim_{n\rightarrow\infty} \frac{U_n(k,m,n_0,\sigma)}{\left(\log_{m+1}n\right)^k}=1. \end{align}\]
If \(0\le \sigma<1\) and \(m\ge1,\) then \[\begin{align} \lim_{n\rightarrow\infty} \frac{(1-\sigma)^kU_n(k,m,n_0,\sigma)}{(\log_{m}n)^k}=1. \end{align}\]
If \(0\le\sigma<1\) and \(m=0,\) then \[\begin{align} \label{sl1} &\lim_{n\rightarrow\infty}\frac{U_n(k,m,n_0,\sigma)}{n^{k(1-\sigma)}}=\frac{1}{1-\sigma}\left(\frac{\Gamma(2-\sigma)\Gamma(1-\sigma)}{\Gamma(3-2\sigma)}\right)^{k-1}. \end{align}\tag{18}\]
Remark 6. (a) We shed here some light on the case \(m=0.\) Here \(\lambda(0,s,i)=i^s\) and \(\mathscr O_0=1.\) Consider \[\zeta(s)=\sum_{n=1}^\infty \frac{1}{n^s},\quad s\in \mathbb{C},\] which is known as the Riemann zeta function and is absolutely convergent if \(\mathrm{Re}(s)>1.\) Setting \(n_0=\mathscr O_0=1,\) Part (i) of Corollary 5 yields that \[\begin{align} &\lim_{n\to\infty}\sum_{\begin{subarray}{c} 1\le j_1<...<j_k\le n \end{subarray}}\frac{1}{j_1^s(j_2-j_1)^s(j_3-j_2)^{s}\cdots (j_{k}-j_{k-1})^s}=\zeta(s)^k,\quad k\ge1,\; \mathrm{Re}(s)>1. \end{align}\] This provides an alternative representation of the Riemann zeta function. We remark that the function \(\zeta(s),\) initially defined for \(s\in \mathbb{C}\) such that \(\mathrm{Re}(s)>1,\) admits an analytic continuation to the entire complex plane which is holomorphic except for a simple pole at \(s=1\) with residue \(1.\) For further details, see [22], [23].
Furthermore, from 18 , for \(0<\sigma<1\) and \(n_0\ge 1,\) we have \[\begin{align} \lim_{n\rightarrow\infty}\frac{1}{n^{k(1-\sigma)}}&\sum_{\substack{ 1\le j_1<...<j_k\le n\\ j_s-j_{s-1}\ge n_0,\;s=1,...,k }}\frac{1}{j_1^\sigma(j_2-j_1)^\sigma\cdots (j_{k}-j_{k-1})^\sigma}=\frac{1}{1-\sigma}\left(\frac{\Gamma(2-\sigma)\Gamma(1-\sigma)}{\Gamma(3-2\sigma)}\right)^{k-1}. \end{align}\]
(b) Notice that in Parts (ii)–(iv) of Corollary 5, we consider \(U_n(k,m,n_0,\sigma)\) with \(\sigma\in \mathbb{R},\) rather than the complex variable \(s\). This restriction arises because it makes no sense to consider the asymptotics of \(U_n(k,m,n_0,s)\) for \(s\in \mathbb{C}\) with \(0<\text{Re}(s)\le 1.\) For example, let \(s=\sigma+\mathrm{i}t\in \mathbb{C}.\) Then by definition we have \[U_n(1,m,n_0,s)=\sum_{j=n_0}^n \frac{\cos \left(t\log_{m+1} j\right)}{\lambda(m,\sigma, j)}-\mathrm{i}\frac{\sin \left(t\log_{m+1} j\right)}{\lambda(m,\sigma, j)}.\] For \(t\ne 0\) and \(0<\sigma\le1,\) neither the real part \(\sum_{j=n_0}^{n}\cos \left(t\log_{m+1} j\right)/\lambda(m,\sigma, j)\) nor the imaginary part \(\sum_{j=n_0}^n\sin \left(t\log_{m+1} j\right)/\lambda(m,\sigma, j)\) converges as \(n\rightarrow\infty\). Thus, for \(\sigma\in (0,1],\) the asymptotics of \(U_n(k,m,n_0,s)\) are meaningful only when \(t=0.\)
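As a quick numerical check (ours) of the multiple-sum representation in part (a), take \(k=2\) and \(s=2\): the double sum can be evaluated with a single convolution.

```python
import numpy as np

# Check: sum over 1 <= j1 < j2 <= n of 1/(j1^2 (j2-j1)^2) -> zeta(2)^2 = (pi^2/6)^2.
n = 10**4
inv = np.zeros(n + 1)
inv[1:] = 1.0 / np.arange(1, n + 1, dtype=float) ** 2
# (inv * inv)[j2] = sum_{j1} 1/(j1^2 (j2-j1)^2); summing over j2 <= n gives the double sum.
print(np.convolve(inv, inv)[: n + 1].sum(), (np.pi**2 / 6) ** 2)
```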
Next, we quote a result from [13], which will be used to prove Theorem 2.
Proposition 1 ([13]). Fix a number \({\alpha}>0\) and an integer \(k\ge 1\). Suppose \(b_s,\) \(s=1,\ldots,k\) are positive integers and let \(j_0=0.\) Then, \[\begin{align} \lim_{n\rightarrow\infty}\frac{1}{(\log n)^k}{\sum_{\substack{ 1\le j_1<...<j_k\le n\\ j_s-j_{s-1}\ge b_s,\, s=1,...,k} }\frac{1}{j_1j_2^{1-\alpha}(j_2^{{\alpha}}-j_1^{{\alpha}})\cdots j_k^{1-\alpha}(j_k^{{\alpha}}-j_{k-1}^{{\alpha}})}}=\frac{\prod_{j=0}^{k-1}(j+\alpha)}{k!\alpha^k}.\label{aln} \end{align}\qquad{(1)}\]
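Proposition 1 can be checked numerically as well; the sketch below (ours; \(k=2,\) \(\alpha=2,\) \(b_1=b_2=1\) are arbitrary choices) evaluates the double sum directly. Because of the \((\log n)^k\) normalization, convergence is slow, so only rough agreement (within roughly ten percent at this \(n\)) should be expected.

```python
import numpy as np

# k = 2, alpha = 2: the limit is (0+alpha)(1+alpha)/(2! alpha^2) = 3/4.
n, alpha = 10**4, 2.0
j = np.arange(1, n + 1, dtype=float)
total = 0.0
for j2 in range(2, n + 1):
    j1 = j[: j2 - 1]                       # j1 = 1, ..., j2 - 1
    total += np.sum(1.0 / (j1 * j2 ** (1 - alpha) * (j2 ** alpha - j1 ** alpha)))
print(total / np.log(n) ** 2, (alpha + 1) / (2 * alpha))
```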
We divide the rest of this section into three subsections. Parts (i) and (ii) of Theorem 3 are proved in Subsections 2.1 and 2.2 respectively. Corollary 5 is proved in Subsection 2.3.
We prove Part (i) of Theorem 3 by induction. Since \(\sum_{j_1=n_0}^\infty \frac{1}{D(j_1,s)}=\lambda(s),\) 13 holds trivially for \(m=1.\) Fix \(k\ge 2\) and suppose 13 holds for \(m=k-1.\) We now prove it also holds for \(m=k.\) It is easy to see that for \(1\leq k\leq n\), \[\begin{align} \label{gkk1} \Phi&(n,k,s)=\sum_{\substack{ 1\le j_1<...<j_k\le n\\ j_i-j_{i-1}\ge n_0,\, i=1,...,k}}\frac{1}{D(j_1,s)D(j_2-j_1,s)\cdots D(j_k-j_{k-1},s)}\\ &=\sum_{j_1=n_0}^{n-(k-1)n_0}\frac{1}{D(j_1,s)}\sum_{\substack{ j_1<j_2<...<j_k\le n\\ j_i-j_{i-1}\ge n_0,\, i=2,...,k}}\frac{1}{D(j_2-j_1,s)D(j_3-j_2,s)\cdots D(j_k-j_{k-1},s)}\nonumber\\ & =\sum_{j_1=n_0}^{n-(k-1)n_0}\frac{1}{D(j_1,s)}\sum_{\substack{ 1\le j_1'<...<j_{k-1}'\le n-j_1\\ j_i'-j_{i-1}'\ge n_0, \,i=1,...,k-1}}\frac{1}{D(j_1',s)D(j_2'-j_1',s)\cdots D(j_{k-1}'-j_{k-2}',s)}\nonumber\\ &=\sum_{j_1=n_0}^{n-(k-1)n_0}\frac{1}{D(j_1,s)}\Phi(n-j_1,k-1,s)\nonumber\\ &=\left[\sum_{j_1=n_0}^{\lfloor n/2\rfloor}+\sum_{j_1=\lfloor n/2\rfloor+1}^{n-(k-1)n_0}\right]\frac{1}{D(j_1,s)}\Phi(n-j_1,k-1,s)\nonumber\\ &=: {\rm I_n(s)+II_n(s)},\nonumber \end{align}\tag{19}\] where \(j_0'=0.\)
Consider first the term \(\mathrm I_n(s)\) in 19 . Observe that \[\begin{align} \label{ione} \mathrm I_n(s)&=\sum_{j_1=n_0}^{\lfloor n/2\rfloor}\frac{1}{D(j_1,s)}\Phi(n-j_1,k-1,s)\\ &=\sum_{j_1=n_0}^{\lfloor n/2\rfloor}\frac{1}{D(j_1,s)}\lambda(s)^{k-1}+\sum_{j_1=n_0}^{\lfloor n/2\rfloor}\frac{1}{D(j_1,s)}\left(\Phi(n-j_1,k-1,s)-\lambda(s)^{k-1}\right).\nonumber \end{align}\tag{20}\] Fix \(\varepsilon>0.\) Since \(\lim_{n\rightarrow\infty}\Phi(n,k-1,s)=\lambda(s)^{k-1},\) there exists \(N>0\) such that \[\begin{align} |\Phi(n,k-1,s)-\lambda(s)^{k-1}|<\varepsilon, \text{ for all }n\ge N. \end{align}\] Thus, for \(n\ge 2N,\) we have \[\begin{align} \left| \sum_{j_1=n_0}^{\lfloor n/2\rfloor}\frac{1}{D(j_1,s)}\left(\Phi(n-j_1,k-1,s)-\lambda(s)^{k-1}\right)\right|\le \varepsilon\sum_{j_1=n_0}^{\lfloor n/2\rfloor}\frac{1}{|D(j_1,s)|}\le \varepsilon\sum_{j_1=n_0}^{\infty}\frac{1}{|D(j_1,s)|}.\nonumber \end{align}\] Therefore, we deduce that \[\begin{align} \lim_{n\rightarrow\infty}\sum_{j_1=n_0}^{\lfloor n/2\rfloor}\frac{1}{D(j_1,s)}\left(\Phi(n-j_1,k-1,s)-\lambda(s)^{k-1}\right)=0.\nonumber \end{align}\] Consequently, letting \(n\rightarrow\infty\) in 20 , we get \[\begin{align} \label{lp1} \lim_{n\rightarrow\infty}\mathrm I_n(s)=\lambda(s)^{k}. \end{align}\tag{21}\]
Consider next the term \(\mathrm{II}_n(s)\) in 19 . Since \(\lim_{n\rightarrow\infty}\Phi(n,k-1,s)=\lambda(s)^{k-1},\) there exists \(M>0\) such that \(|\Phi(n,k-1,s)|<M\) for all \(n\ge1.\) Consequently, \[\begin{align} |\mathrm{II}_n(s)|&\le \sum_{j_1=\lfloor n/2\rfloor+1}^{n-(k-1)n_0}\frac{1}{|D(j_1,s)|}|\Phi(n-j_1,k-1,s)|\le M \sum_{j_1=\lfloor n/2\rfloor+1}^{n-(k-1)n_0}\frac{1}{|D(j_1,s)|}. \end{align}\] Since \(\sum_{n=1}^\infty\frac{1}{|D(n,s)|}<\infty,\) we have \[\begin{align} \lim_{n\rightarrow\infty}|\mathrm{II}_n(s)|\le M\lim_{n\rightarrow\infty}\sum_{j_1=\lfloor n/2\rfloor+1}^{n-(k-1)n_0}\frac{1}{|D(j_1,s)|}=0. \label{lp2} \end{align}\tag{22}\] Plugging 21 and 22 into 19 , we conclude that \(\lim_{n\rightarrow\infty}\Phi(n,k, s)={\lambda(s)}^{k}.\) Thus, by induction, 13 holds for all \(k\ge1.\) This completes the proof of Part (i) of Theorem 3. \(\Box\)
By assumption, we have \(\inf_{n\ge1}D(n)>0\) and \(\sum_{n=1}^\infty \frac{1}{D(n)}=\infty.\) Consequently, \[\begin{align} \label{uubd} \lim_{n\rightarrow\infty}S(n)=\infty \text{ and } \sup_{n\ge1}{1}/{D(n)}<C, \end{align}\tag{23}\] for some \(C>0.\) Throughout the remainder of the proof, we use the facts stated in 23 without further explicit reference.
By virtue of 23 , it follows that \[\lim_{n\rightarrow\infty}\frac{\Phi(n,1)}{S(n)}=\lim_{n\rightarrow\infty}\sum_{j_1=n_0}^n\frac{1}{D(j_1)}\bigg/\sum_{i=1}^n\frac{1}{D(i)}=1,\] which confirms that 14 holds for \(m=1.\) For \(k\ge1,\) a recursive argument analogous to 19 yields that \[\begin{align} \label{pki} \Phi(n,k+1)&=\sum_{\substack{ 1\le j_1<...<j_{k+1}\le n\\ j_i-j_{i-1}\ge n_0,\, 1\le i\le k+1}}\frac{1}{D(j_1)D(j_2-j_1)\cdots D(j_{k+1}-j_{k})}\\ &=\sum_{j_1=n_0}^{n-kn_0}\frac{1}{D(j_1)}\Phi(n-j_1,k)\nonumber\\ &=\left[\sum_{j_1=n_0}^{\lfloor n/2\rfloor}+\sum_{j_1=\lfloor n/2\rfloor+1}^{n-kn_0}\right]\frac{1}{D(j_1)}\Phi(n-j_1,k).\nonumber \end{align}\tag{24}\] Fix \(k\ge 1,\) and assume 14 holds for \(m=k.\) We now prove it holds for \(m=k+1.\) The proof is divided into two cases: \(\tau=0\) and \(\tau\in (0,1].\)
First, assume that \(\tau=0.\) Since \(\Phi(n,k+1)\) is increasing in \(n,\) we obtain \[\begin{align} \sum_{j_1=n_0}^{\lfloor n/2\rfloor}\frac{1}{D(j_1)}\Phi(n-\lfloor n/2\rfloor,k)\le \Phi(n,k+1)\le \sum_{j_1=n_0}^{n}\frac{1}{D(j_1)}\Phi(n,k).\label{lupk} \end{align}\tag{25}\] Given the inductive hypothesis and the regular variation of \(\{S(n)\}_{n\ge1}\) with index \(\tau,\) we have \[\lim_{n\rightarrow\infty}\frac{\sum_{j_1=n_0}^{\lfloor n/2\rfloor}\frac{1}{D(j_1)}}{S(n)}=\lim_{n\rightarrow\infty}\frac{S(\lfloor n/2\rfloor)}{S(n)}=1\] and \[\lim_{n\rightarrow\infty}\frac{\Phi(n-\lfloor n/2 \rfloor,k)}{(S(n))^k}=\lim_{n\rightarrow\infty}\frac{\Phi(n,k)}{(S(n))^k}=1.\] Dividing 25 by \((S(n))^{k+1}\) and taking \(n\to\infty\), we obtain \[\lim_{n\rightarrow\infty}\frac{\Phi(n,k+1)}{(S(n))^{k+1}}=1.\] Thus by induction, 14 holds for \(m=k+1.\)
Next, assume \(\tau \in (0, 1]\). Fix an integer \(l \ge 2\). From 24 we have \[\begin{align} \Phi(n,k+1) &= \sum_{j_1=n_0}^{n-kn_0}\frac{1}{D(j_1)}\Phi(n-j_1,k) \nonumber \\ &= \sum_{s=1}^{l-1}\sum_{i=(s-1)u+n_0+1}^{su+n_0}\frac{1}{D(i)}\Phi(n-i,k) \nonumber \\ &\quad + \sum_{i=(l-1)u+n_0+1}^{n-kn_0}\frac{1}{D(i)}\Phi(n-i,k) + \frac{\Phi(n-n_0,k)}{D(n_0)}, \end{align}\] where \(u = \lfloor(n - (k+1)n_0)/l \rfloor\) is temporarily defined. Since \(\Phi(n,m)\) is monotonically increasing in \(n\), we obtain \[\begin{align} \label{pkul} \Phi&(n,k+1)\le \sum_{s=1}^{l-1}\sum_{i=(s-1)u+n_0+1}^{su+n_0}\frac{1}{D(i)}\Phi(n-(s-1)u-n_0-1,k)\\ &\quad\quad\quad\quad\quad\quad+\sum_{i=(l-1)u+n_0+1}^{n-kn_0}\frac{1}{D(i)}\Phi(n-(l-1)u-n_0-1,k)+\frac{\Phi(n-n_0,k)}{D(n_0)}\nonumber\\ &= \frac{\Phi(n-n_0,k)}{D(n_0)}+\sum_{s=1}^{l-1}\left[S(su+n_0)-S((s-1)u+n_0)\right]\Phi(n-(s-1)u-n_0-1,k)\nonumber\\ &\quad\quad+ \left[S(n-kn_0)-S((l-1)u+n_0)\right]\Phi(n-(l-1)u-n_0-1,k). \nonumber \end{align}\tag{26}\] Recall that 14 is assumed to be true for \(m=k.\) Dividing 26 by \(S(n)^{k+1}\) and taking the upper limit, we obtain \[\begin{align} \label{uul} \varlimsup_{n\rightarrow\infty}\frac{\Phi(n,k+1)}{\left(S(n)\right)^{k+1}}&\le \lambda_\tau^{-(k-1)}\sum_{s=1}^{l-1}\left[\left({s}/{l}\right)^{\tau}-\left(({s-1})/{l}\right)^{\tau}\right]\left(({l-s+1})/{l}\right)^{\tau}\\ &\quad\quad\quad\quad+ \lambda_\tau^{-(k-1)}\left[\left({l}/{l}\right)^{\tau}-\left(({l-1})/{l}\right)^{\tau}\right]\left({1}/{l}\right)^{\tau}\nonumber\\ &= \lambda_\tau^{-(k-1)}\sum_{s=1}^{l}\left[\left({s}/{l}\right)^{\tau}-\left(({s-1})/{l}\right)^{\tau}\right]\left[{(l-s+1)}/{l}\right]^{\tau}.\nonumber \end{align}\tag{27}\] For the lower limit, similar arguments yield \[\begin{align} \Phi(n,k+1)&= \sum_{j_1=n_0}^{n-kn_0}\frac{1}{D(j_1)}\Phi(n-j_1,k) \ge\sum_{s=1}^{l-1}\sum_{i=(s-1)u+n_0+1}^{su+n_0}\frac{1}{D(i)}\Phi(n-su-n_0,k)\nonumber\\ &= \sum_{s=1}^{l-1}\left[S(su+n_0)-S((s-1)u+n_0)\right]\Phi(n-su-n_0,k).\nonumber \end{align}\] Thus, using 14 for \(m=k,\) we deduce that \[\begin{align} \label{lul} \varliminf_{n\rightarrow\infty}\frac{\Phi(n,k+1)}{\left(S(n)\right)^{k+1}}&\ge \lambda_\tau^{-(k-1)}\sum_{s=1}^{l-1}\left[\left({s}/{l}\right)^{\tau}-\left({(s-1)}/{l}\right)^{\tau}\right]\left(({l-s})/{l}\right)^{\tau}. \end{align}\tag{28}\] Combining 27 and 28 gives \[\begin{align} \label{uslu} \lambda_\tau^{-(k-1)}&\sum_{s=1}^{l-1}[\left({s}/{l}\right)^{\tau} -\left({(s-1)}/{l}\right)^{\tau}] \left(({l-s})/{l}\right)^{\tau}\le \varliminf_{n\rightarrow\infty}\frac{\Phi(n,k+1)}{\left(S(n)\right)^{k+1}}\\ & \le \varlimsup_{n\rightarrow\infty}\frac{\Phi(n,k+1)}{\left(S(n)\right)^{k+1}}\le \lambda_\tau^{-(k-1)}\sum_{s=1}^{l}\left[\left({s}/{l}\right)^{\tau}-\left(({s-1})/{l}\right)^{\tau}\right]\left[({l-s+1})/{l}\right]^{\tau}.\nonumber \end{align}\tag{29}\] Next, we demonstrate that \[\begin{align} \tag{30} \lim_{l\rightarrow\infty}&\frac{1}{l^{2\tau}}\sum_{s=1}^{l}\left(s^{\tau}-\left(s-1\right)^{\tau}\right)\left(l-s+1\right)^{\tau}= \lambda_\tau^{-1},\\ \tag{31} \lim_{l\rightarrow\infty}&\frac{1}{l^{2\tau}}\sum_{s=1}^{l-1}\left(s^{\tau}-\left(s-1\right)^{\tau}\right)\left(l-s\right)^{\tau}= \lambda_\tau^{-1}. \end{align}\] For \(\tau=1,\) observe that \(\lambda_1=\frac{\Gamma(3)}{\Gamma(2)\Gamma(1)}=2.\) Direct computation confirms that both 30 and 31 hold in this case.
Suppose next \(\tau\in (0,1),\) and write \[g(s)=\left(s^\tau-\left(s-1\right)^\tau\right)\left(l-s+1\right)^\tau,\quad s\in[1,l].\] Since both \(s^\tau-\left(s-1\right)^\tau\) and \((l-s+1)^\tau\) are decreasing in \(s\), \(g(s)\) is also decreasing on \([1,l]\). By the mean value theorem, for each \(s\geq1\) there exists \(\eta\in(s-1,s)\) such that \[\begin{align} \tau s^{\tau-1}\le s^{\tau}-(s-1)^{\tau}=\tau\eta^{\tau-1}<\tau(s-1)^{\tau-1}.\nonumber \end{align}\] Combining this with the monotonicity of \(g(s)\), we infer that \[\begin{align} \varlimsup_{l\rightarrow\infty}&\frac{1}{l^{2\tau}}\sum_{s=1}^{l}\left(s^{\tau}-\left(s-1\right)^{\tau}\right)\left(l-s+1\right)^{\tau}\nonumber\\ &\le\varlimsup_{l\rightarrow\infty}\frac{1}{l^{2\tau}}\left[\int^{l}_{1}\left(s^{\tau}-\left(s-1\right)^{\tau}\right)\left(l-s+1\right)^{\tau}ds+l^\tau\right]\nonumber\\ &\le \tau\varlimsup_{l\rightarrow\infty}\frac{1}{l^{2\tau}}\int_{1}^l (s-1)^{\tau-1}(l-s+1)^{\tau}ds\nonumber\\ &=\tau\varlimsup_{l\rightarrow\infty}\int_{1/l}^1 \left(x-\frac{1}{l}\right)^{\tau-1}\left(1-x+\frac{1}{l}\right)^{\tau}dx\nonumber\\ &=\tau\int_{0}^1 x^{\tau-1}\left(1-x\right)^{\tau}dx=\frac{\tau\Gamma(\tau+1)\Gamma(\tau)}{\Gamma(2\tau+1)}=\lambda_\tau^{-1}.\nonumber \end{align}\] Similarly, we can show that \[\begin{align} \varliminf_{l\rightarrow\infty}&\frac{1}{l^{2\tau}}\sum_{s=1}^{l}\left(s^{\tau}-\left(s-1\right)^{\tau}\right)\left(l-s+1\right)^{\tau}\ge \lambda_\tau^{-1}.\nonumber \end{align}\] Thus, 30 holds. An analogous argument confirms 31 .
Since 29 holds for all \(l\ge 2,\) taking \(l\rightarrow\infty\) and applying 30 and 31 , we conclude that \[\begin{align} \lim_{n\rightarrow\infty}\frac{\Phi(n,k+1)}{\left(S(n)\right)^{k+1}}= \lambda_\tau^{-k},\nonumber \end{align}\] which establishes 14 for \(m=k+1.\) By induction, 14 holds for all \(k\ge1.\) Part (ii) of Theorem 3 is proved. \(\Box\)
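The Riemann-sum limits 30 and 31 are also easy to verify numerically (our sketch; \(\tau=1/2\) is an arbitrary choice, for which \(\lambda_\tau^{-1}=\pi/4\)).

```python
import math

# l^{-2 tau} * sum_{s=1}^{l} (s^tau - (s-1)^tau)(l-s+1)^tau
#   -> tau * Gamma(tau+1) * Gamma(tau) / Gamma(2 tau + 1) = 1/lambda_tau.
tau = 0.5
target = tau * math.gamma(tau + 1) * math.gamma(tau) / math.gamma(2 * tau + 1)
for l in (10**2, 10**4, 10**6):
    s_sum = sum((s**tau - (s - 1)**tau) * (l - s + 1)**tau for s in range(1, l + 1))
    print(l, s_sum / l**(2 * tau), target)   # target = pi/4 for tau = 1/2
```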
Fix an integer \(m\ge0\) and a real number \(\sigma\ge 0.\) Let \(\mathscr O_m\) be as in 15 . For \(k\ge \mathscr O_m\) and \(s=\sigma+\mathrm it\in \mathbb{C},\) set \(D(k,s)=\lambda(m,s,k)=\left(\log_mk\right)^s\prod_{j=0}^{m-1}\log_j k.\) For \(x\ge \mathscr O_m,\) let \(A(x):=\int_{\mathscr O_m}^x\frac{1}{D(u,\sigma)}du.\) A direct computation yields \[\begin{align} A(x)=\int_{\mathscr O_m}^x\frac{1}{\lambda(m,\sigma,u)}du= \left\{\begin{array}{ll} \frac{1}{1-\sigma}\left(\left(\log_m x\right)^{1-\sigma}-\left(\log_m \mathscr O_m\right)^{1-\sigma}\right), & \text{if } \sigma\ne 1,\\ \log_{m+1}x-\log_{m+1}\mathscr O_m, &\text{if } \sigma=1. \end{array}\right. \end{align}\]
If \(\sigma>1,\) the series \(\sum_{n=\mathscr O_m}^\infty\frac{1}{D(n,s)}\) is absolutely convergent. Applying Part (i) of Theorem 3 we get 17 .
For \(n\ge1,\) let \[D(n)=\left\{\begin{array}{ll} D(n,\sigma), & \text{if }n\ge \mathscr O_m,\\ D(\mathscr O_m,\sigma),&\text{if }1\le n< \mathscr O_m, \end{array}\right.\] and set \(S(n)=\sum_{i=1}^n\frac{1}{D(i)}.\)
If \(0\le \sigma\le 1,\) it is easy to see that \(\inf_{n\ge1}D(n)\ge \left(\log_m \mathscr O_m\right)^{\sigma}\prod_{j=0}^{m-1}\log_j \mathscr O_m>0\) and \(S(n)\sim A(n)\to \infty\) as \(n\rightarrow\infty.\) Thus for \(x>0,\) we have \[\begin{align} \lim_{n\to\infty}\frac{S(\lfloor xn\rfloor)}{S(n)} =x^{\tau} \text{ with } \tau=\left\{\begin{array}{ll} 0, & \text{if } m\ge1,\\ 1-\sigma,&\text{if }m=0. \end{array}\right. \end{align}\] Consequently, the sequence \(\{S(n)\}_{n\ge1}\) is regularly varying with index \(\tau.\) Applying Part (ii) of Theorem 3, we obtain Parts (ii)–(iv) of Corollary 5. \(\Box\)
In this section, we prove Theorems 1 and 2 by the method of moments. The key step involves computing the moments \(E\left(\sum_{j=1}^n\eta_j\right)^k\) for \(k\ge1.\)
Fix \(k\ge1.\) Applying the multinomial expansion theorem, we get \[\begin{align} E\Bigg(\sum_{j=1}^n\eta_j\Bigg)^k&=\sum_{\begin{subarray}{c} i_1+...+i_n=k,\\ i_s\ge 0,\, s=1,...,n \end{subarray}}\frac{k!}{i_1!i_2!\cdots i_n!}E\left(\eta_1^{i_1}\eta_{2}^{i_2}\cdots \eta_{n}^{i_n}\right)\nonumber\\ &=\sum_{m=1}^k\sum_{1\le j_1<...<j_m\le n}\sum_{\begin{subarray}{c} i_{j_1}+...+i_{j_m}=k\\ i_{j_s}>0,\, s=1,...,m \end{subarray}}\frac{k!}{i_{j_1}!i_{j_2}!\cdots i_{j_m}!}E\left(\eta_{j_1}^{i_{j_1}}\eta_{j_2}^{i_{j_2}}\cdots \eta_{j_m}^{i_{j_m}}\right),\nonumber \end{align}\] where we adopt the convention \(0^0=1.\) Since \(\eta_i\in \{0,1\}\) for all \(i\ge 1,\) taking (1) into account, we have \[\begin{align} \label{exs} E\Bigg(\sum_{j=1}^n\eta_j\Bigg)^k&=\sum_{m=1}^k\sum_{\begin{subarray}{c} l_1+...+l_m=k\\ l_{s}>0,\,s=1,...,m \end{subarray}}\frac{k!}{l_1!l_{2}!\cdots l_{m}!}\sum_{1\le j_1<...<j_m\le n}E\Big(\eta_{j_1}\eta_{j_2}\cdots \eta_{j_m}\Big)\\ &=\sum_{m=1}^k\sum_{\begin{subarray}{c}l_1+\dots+l_m=k,\\ l_s\ge1,\,s=1,...,m \end{subarray}}\frac{k!}{l_1!\cdots l_m!}\sum_{1\le j_1<...<j_m\le n}P\left(\eta_{j_1}=1,\eta_{j_2}=1,\cdots ,\eta_{j_m}=1\right)\nonumber\\ &=\sum_{m=1}^k\sum_{\begin{subarray}{c}l_1+\dots+l_m=k,\\ l_s\ge1,\,s=1,...,m \end{subarray}}\frac{k!}{l_1!\cdots l_m!}\Psi_n(m),\nonumber \end{align}\tag{32}\] where and in what follows \[\begin{align} \label{depsi} \Psi_n(m)=\sum_{1\le j_1<...<j_m\le n}P\left(\eta_{j_1}=1,\eta_{j_2}=1,\cdots ,\eta_{j_m}=1\right),\quad n\ge m \ge 1. \end{align}\tag{33}\]
Based on 32 , we divide the remainder of this section into three subsections, which complete the proofs of Part (i) of Theorem 1, Part (ii) of Theorem 1, and Theorem 2, respectively.
Let \(D(n),\;n\ge 0\) be a sequence satisfying \(D(0)=1,\) \(D(n)>1\) for \(n\ge1\) and \[\begin{align} \zeta(D)= \sum_{n=1}^\infty \frac{1}{D(n)}<\infty.\label{zed} \end{align}\tag{34}\] By assumption, \(\rho(i,j)=D(j-i)\) for \(j\ge i\ge 0,\) so we have \[\begin{align} P(\eta_i=1)=\frac{1}{\rho(0,i)}=\frac{1}{D(i)}, \quad P\left(\eta_j=1\middle|\,\eta_i=1\right)=\frac{1}{\rho(i,j)}=\frac{1}{D(j-i)},\label{dns} \end{align}\tag{35}\] for \(j\ge i\ge 1.\)
Write simply \(\xi_n:=\sum_{j=1}^n\eta_j\) for \(n\ge1.\) Since \(\xi_n\) is nondecreasing in \(n,\) the limit \(\xi:=\lim_{n\rightarrow\infty}\xi_n\) exists almost surely. Moreover, by the monotone convergence theorem, we have \[\begin{align} \label{exm1} E\xi&=E\Bigg(\lim_{n\rightarrow\infty}\sum_{j=1}^n\eta_j\Bigg)=\sum_{j=1}^\infty P(\eta_j=1)=\sum_{j=1}^\infty\frac{1}{D(j)}=\zeta(D)<\infty, \end{align}\tag{36}\] where \(\zeta(D)\) is defined in (34). Consequently, \(\xi<\infty\) almost surely.
In view of 32 , 33 , and 35 , the \(k\)-th moment of \(\xi_n\) is given by \[\begin{align} E\Bigg(\sum_{j=1}^n\eta_j\Bigg)^k=\sum_{m=1}^k\sum_{\begin{subarray}{c}l_1+\dots+l_m=k,\\ l_s\ge1,\,s=1,...,m \end{subarray}}\frac{k!}{l_1!\cdots l_m!}\Psi_n(m),\nonumber \end{align}\] where \[\begin{align} \Psi_n(m)= \sum_{1\le j_1<...<j_m\le n}\frac{1}{D(j_1)D(j_2-j_1)\cdots D(j_{m}-j_{m-1})}. \end{align}\] By monotone convergence, Part (i) of Theorem 3 and 36 , we obtain \[\begin{align} \label{eexk} E\left(\xi^k\right)&=\lim_{n\rightarrow\infty} E\Bigg(\sum_{j=1}^n\eta_j\Bigg)^k\\ &=\lim_{n\rightarrow\infty}\sum_{m=1}^k\sum_{\begin{subarray}{c}l_1+\dots+l_m=k,\\ l_s\ge1,\,s=1,...,m \end{subarray}}\frac{k!}{l_1!\cdots l_m!}\Psi_n(m)\nonumber\\ &=\sum_{m=1}^k\sum_{\begin{subarray}{c}l_1+\dots+l_m=k,\\ l_s\ge1,\,s=1,...,m \end{subarray}}\frac{k!}{l_1!\cdots l_m!}\zeta(D)^m=\sum_{m=1}^k\sum_{\begin{subarray}{c}l_1+\dots+l_m=k,\\ l_s\ge1,\, s=1,...,m \end{subarray}}\frac{k!}{l_1!\cdots l_m!} (E\xi)^m.\nonumber \end{align}\tag{37}\] For \(k\ge2,\) by expanding recursively, we get \[\begin{align} E(\xi^k)&=E\xi+\sum_{m=2}^k\sum_{\begin{subarray}{c} l_1+...+l_m=k\\ l_{s}\ge 1,\,s=1,...,m \end{subarray}}\frac{k!}{l_1!l_{2}!\cdots l_{m}!}\left(E\xi\right)^m\nonumber\\ &=E\xi+\sum_{m=2}^k\sum_{l_1=1}^{k-m+1}\frac{k!}{l_1!\left(k-l_1\right)!}\sum_{\begin{subarray}{c} l_2+...+l_m=k-l_1\\ l_{s}\ge 1,\,s=2,...,m \end{subarray}}\frac{\left(k-l_1\right)!}{l_{2}!\cdots l_{m}!}\left(E\xi\right)^m\nonumber\\ &=E\xi+\sum_{l_1=1}^{k-1}\frac{k!}{l_1!\left(k-l_1\right)!}\sum_{m=2}^{k-l_1+1}\sum_{\begin{subarray}{c} l_2+...+l_m=k-l_1\\ l_{s}\ge 1,\,s=2,...,m \end{subarray}}\frac{\left(k-l_1\right)!}{l_{2}!\cdots l_{m}!}\left(E\xi\right)^m\nonumber\\ &=E\xi+E\xi\sum_{l_1=1}^{k-1}\frac{k!}{l_1!\left(k-l_1\right)!}\sum_{m=1}^{k-l_1}\sum_{\begin{subarray}{c} h_1+...+h_m=k-l_1\\ h_{s}\ge 1,\,s=1,...,m \end{subarray}}\frac{\left(k-l_1\right)!}{h_{1}!\cdots h_{m}!}\left(E\xi\right)^m\cr &=E\xi+E\xi\sum_{l_1=1}^{k-1}\frac{k!}{l_1!\left(k-l_1\right)!}E\left(\xi^{k-l_1}\right).\nonumber \end{align}\] Here the last equality follows from (37). Noticing that \(E(\xi^0)=1\) and recalling 36 and 37 , we conclude that \(\xi\) satisfies \(E\xi=\zeta(D)\) and \[\begin{align} E\left(\xi^k\right)&=E\xi+E\xi\sum_{j=1}^{k-1}\frac{k!}{j!(k-j)!}E\left(\xi^{k-j}\right) =E\xi \left(E\left((\xi+1)^k\right)- E\left(\xi^k\right)\right).\nonumber \end{align}\] Consequently, we get \[\begin{align} \frac{E\left((\xi+1)^k\right)}{E(\xi^k)}= \frac{1+E(\xi)}{E(\xi)}\text{ for } k\ge1 \text{ and } E\xi= \zeta(D).\nonumber \end{align}\] Suppose \(\eta\) is a random variable with \(\eta\sim \mathrm{Geo}\left(\frac{1}{\zeta(D)+1}\right).\) Then direct computation shows that \[\begin{align} \frac{E\left((\eta+1)^k\right)}{E(\eta^k)}=\frac{1+E\eta}{E\eta}\text{ for } k\ge 1\text{ and } E\eta=\zeta(D).\nonumber \end{align}\] Thus, \(\xi\) and \(\eta\) share identical moments. Since there exists \(C>0\) such that for \(|t|<C\), \[E\left(e^{t\eta}\right)=\frac{\left(1+\zeta(D)\right)^{-1}}{1-e^t\zeta(D)\left(1+\zeta(D)\right)^{-1}}<\infty,\] according to [24], the moment sequence uniquely determines the distribution. Therefore, \[\xi\sim \mathrm{Geo}\left(\frac{1}{\zeta(D)+1}\right).\] This completes the proof of Part (i) of Theorem 1. \(\Box\)
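The moment identity used above, \(E\left((\eta+1)^k\right)/E(\eta^k)=(1+E\eta)/E\eta\) for \(\eta\sim\mathrm{Geo}(p),\) can be confirmed by a quick Monte Carlo sketch (ours; note that the common ratio equals \(1/(1-p)\)).

```python
import numpy as np

# For eta ~ Geo(p) on {0,1,...}: E((eta+1)^k)/E(eta^k) = (1 + E eta)/E eta = 1/(1-p).
rng = np.random.default_rng(3)
p = 0.4
eta = (rng.geometric(p, size=10**6) - 1).astype(float)  # numpy samples {1,2,...}
for k in (1, 2, 3, 4):
    print(k, np.mean((eta + 1) ** k) / np.mean(eta ** k), 1 / (1 - p))
```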
Assume \(0\le \sigma\le 1\). Fix \(0<\varepsilon<1.\) Let \(n_1\) be as in Part (ii) of Theorem 1. By assumption, we have \[\begin{align} \label{dnul} 1-\varepsilon\le \frac{\lambda_\sigma D(n)}{\rho(0,n)}\le 1+\varepsilon,\quad 1-\varepsilon\le \frac{D(n)}{\rho(i,i+n)}\le 1+\varepsilon, \quad n\ge n_1,\;i\ge n_1. \end{align}\tag{38}\] In order to apply Theorem 3, we set \(n_0=n_1.\) Let \(j_0=0\). For \(m\geq1\), put \[\begin{align} &H=\{(j_1,...,j_m)\mid 1\le j_1<....<j_m\le n,\;j_{i+1}-j_i>n_0,\; \forall~0\le i\le m-1\},\tag{39}\\ &A=\{(j_1,...,j_m)\mid 1\le j_1<....<j_m\le n,\;(j_1,...,j_m)\notin H\}.\tag{40} \end{align}\] Fix an integer \(k\ge 1.\) From 32 , \[\begin{align} \label{eks} E&\Bigg(\sum_{j=1}^n\eta_j\Bigg)^k=\sum_{m=1}^k\sum_{\begin{subarray}{c}l_1+\dots+l_m=k,\\ l_s\ge1,\,s=1,...,m \end{subarray}}\frac{k!}{l_1!\cdots l_m!}\Psi_n(m), \end{align}\tag{41}\] where, for \(1\le m\le k,\) \[\Psi_n(m)=\sum_{1\le j_1<...<j_m\le n}\frac{1}{\rho(0,j_1)\rho(j_1,j_2)\cdots \rho(j_{m-1},j_m)}.\] Next, we aim to prove \[\begin{align} \label{lpa} \lim_{n\rightarrow\infty}\frac{\Psi_n(m)}{\left(S(n)\right)^m}=\lambda_\sigma^{-m}. \end{align}\tag{42}\] For a set \(V\subset\mathbb{N}^m,\) write \[\begin{align} \label{dpc} \Psi_n(m,V)=\sum_{(j_1,...,j_m)\in V} \frac{1}{\rho(0,j_1)\rho(j_1,j_2)\cdots \rho(j_{m-1},j_m)}. \end{align}\tag{43}\] Then, \[\begin{align} \label{gab} \Psi_n(m)=\Psi_n(m,H)+\Psi_n(m,A), \end{align}\tag{44}\] where \(H\) and \(A\) are defined in 39 and 40 .
Consider first the term \(\Psi_n(m,H).\) From 38 , we get \[\begin{align} (1-\varepsilon)^m&\lambda_\sigma^{-1}\sum_{(j_1,...,j_m)\in H} \frac{1}{D(j_1)D(j_2-j_1)\cdots D(j_m-j_{m-1})}\cr &\le\Psi_n(m,H)=\sum_{(j_1,...,j_m)\in H}\frac{1}{\rho(0,j_1)\rho(j_1,j_2)\cdots \rho(j_{m-1},j_m)}\\ &\le(1+\varepsilon)^m\lambda_\sigma^{-1} \sum_{(j_1,...,j_m)\in H} \frac{1}{D(j_1)D(j_2-j_1)\cdots D(j_m-j_{m-1})}.\nonumber \end{align}\] Then applying Theorem 3, we obtain \[\begin{align} (1-\varepsilon)^m\lambda_\sigma^{-m}\le \varliminf_{n\rightarrow\infty} \frac{\Psi_n(m,H)}{\left(S(n)\right)^{m}} \le \varlimsup_{n\rightarrow\infty} \frac{\Psi_n(m,H)}{\left(S(n)\right)^{m}}\le (1+\varepsilon)^m\lambda_\sigma^{-m}. \end{align}\] Taking \(\varepsilon\to 0,\) we get \[\begin{align} \label{lpn} \lim_{n\rightarrow\infty}\frac{\Psi_n(m,H)}{\left(S(n)\right)^{m}}=\lambda_\sigma^{-m}. \end{align}\tag{45}\] Next, we consider the term \(\Psi_n(m,A)\) in 44 . We shall show that \[\begin{align} \lim_{n\rightarrow\infty}\frac{\Psi_n(m,A)}{\left(S(n)\right)^{m}}=0.\label{gbi} \end{align}\tag{46}\] To this end, for \(0\le i_1<...<i_l\le m-1\) and \(k_1,...,k_l\ge1,\) set \[A\left(i_1,k_1,...,i_l,k_l\right)=\left\{(j_1,...,j_m)\left|\begin{array}{c}1\le j_1<....<j_m\le n, \\ j_{i_s+1}-j_{i_s}=k_s,\, 1\le s\le l,\\ j_{i+1}-j_i>n_0,\, i\in\{0,\ldots,m-1\}\setminus\{i_1,...,i_l\}\end{array}\right.\right\}.\] Then, we have \[\begin{align} A=\bigcup_{l=1}^{m-1}\bigcup_{\begin{subarray}{c} 0\le i_1<...<i_l\le m-1\\ 1\le k_1,...,k_l\le n_0 \end{subarray} }A\left(i_1,k_1,...,i_l,k_l\right). \end{align}\] Thus, it follows that \[\begin{align} \Psi_n(m,A)=\sum_{l=1}^{m-1}\sum_{\begin{subarray}{c} 0\le i_1<...<i_l\le m-1\\ 1\le k_1,...,k_l\le n_0 \end{subarray}} \Psi_n\left(m,{A(i_1,k_1,...,i_l,k_l)}\right).\label{gbd} \end{align}\tag{47}\] Fix \(1\le l\le m-1,\) \(0\le i_1<...<i_l\le m-1\) and \(1\le k_1,...,k_l\le n_0.\) By 47 , to prove 46 , it suffices to show that \[\begin{align} \lim_{n\rightarrow\infty}\frac{1}{(S(n))^{m}} \Psi_n\left(m,{A(i_1,k_1,...,i_l,k_l)}\right)=0.\label{bik0} \end{align}\tag{48}\] In fact, by 38 and the fact \(\rho(i,j)\ge 1,\) we have \[\begin{align} & \Psi_n\left(m,{A(i_1,k_1,...,i_l,k_l)}\right)=\sum_{(j_1,...,j_m)\in A\left(i_1,k_1,...,i_l,k_l\right)} \frac{1}{\rho\left(0,j_1\right)\rho\left(j_1,j_2\right)\cdots \rho\left(j_{m-1},j_m\right)}\nonumber\\ &\le\sum_{(j_1,...,j_m)\in A(i_1,k_1,...,i_l,k_l)} \;\prod_{i\in \{0,\ldots,m-1\}\setminus\{i_1,...,i_l\}}\frac{1}{\rho\left(j_i, j_{i+1}\right)}\nonumber\\ &\le (\lambda_{\sigma}^{-1}\vee 1)\sum_{(j_1,...,j_m)\in A(i_1,k_1,...,i_l,k_l)} \;\prod_{i\in \{0,\ldots,m-1\}\setminus\{i_1,...,i_l\}}\frac{1+\varepsilon}{D\left(j_{i+1}-j_i\right)}\nonumber\\ & =(\lambda_{\sigma}^{-1}\vee 1)\sum_{1\le j_1'<...<j'_{m-l}\le n-\sum_{s=1}^lk_s}(1+\varepsilon)^{m-l}\frac{1}{D\left(j_1'\right)D\left(j_2'-j_1'\right)\cdots D\left(j_{m-l}'-j_{m-l-1}'\right)}.\nonumber \end{align}\] Therefore, we conclude that \[\begin{align} \varlimsup_{n\rightarrow\infty}\frac{1}{(S(n))^{m}}&\Psi_n\left(m,{A(i_1,k_1,...,i_l,k_l)}\right) \le \varlimsup_{n\rightarrow\infty}\frac{ (1+\varepsilon)^{m-l}}{(S(n))^{l}}\frac{\lambda_{\sigma}^{-1}\vee 1 }{(S(n))^{m-l}}\nonumber\\ &\times\sum_{1\le j_1'<...<j'_{m-l}\le n-\sum_{s=1}^lk_s}\frac{1}{D\left(j_1'\right)D\left(j_2'-j_1'\right)\cdots D\left(j_{m-l}'-j_{m-l-1}'\right)}=0,\nonumber \end{align}\] where we use Theorem 3 to get the last step. Thus, 48 is true and so is 46 .
Dividing by \((S(n))^{m}\) on both sides of 44 and taking \(n\to\infty,\) owing to 45 and 46 , we get 42 . Now, putting 41 and 42 together, we deduce that \[\begin{align} \lim_{n\rightarrow\infty}\frac{E\left(\sum_{j=1}^n\eta_j\right)^k}{(S(n))^{k}}=k!\lambda_\sigma^{-k}.\nonumber \end{align}\] Finally, since \([(2n)!]^{-\frac{1}{2n}}\sim \frac{e}{2n}\) as \(n\rightarrow\infty,\) we get \(\sum_{k=1}^\infty \left[(2k)!\lambda_\sigma^{-2k}\right]^{-\frac{1}{2k}}=\infty.\) By Carleman’s criterion (see e.g., [25]), the moment problem has a unique solution. Hence, \[\begin{align} \frac{\sum_{j=1}^n\eta_j}{S(n)}\overset{d}\to \xi,\text{ as }n\rightarrow\infty, \end{align}\] where \(\xi\sim \mathrm{Exp}\left(\lambda_\sigma\right)\). Part (ii) of Theorem 1 is proved. \(\Box\)
Based on Proposition 1, the proof of Theorem 2 parallels Part (ii) of Theorem 1. We outline the argument below for completeness.
Assume \(\alpha>0.\) Fix \(0<\varepsilon<1.\) Let \(n_2\) be as in Theorem 2. From 3 , we have \[\begin{align} \label{dnul2} 1-\varepsilon\le \frac{\beta n}{\rho(0,n)}\le 1+\varepsilon, \quad 1-\varepsilon\le \frac{\beta (i+n)^{1-\alpha}((i+n)^\alpha-i^\alpha)}{\rho(i,i+n)}\le 1+\varepsilon, \quad n\ge n_2,\;i\ge n_2. \end{align}\tag{49}\] Fix \(k\ge 1.\) From 32 , we see that \[\begin{align} \label{eks2} E&\Bigg(\sum_{j=1}^n\eta_j\Bigg)^k=\sum_{m=1}^k\sum_{\begin{subarray}{c}l_1+\dots+l_m=k,\\ l_s\ge1,\, s=1,...,m \end{subarray}}\frac{k!}{l_1!\cdots l_m!}\Psi_n(m), \end{align}\tag{50}\] where \[\Psi_n(m)=\sum_{1\le j_1<...<j_m\le n}\frac{1}{\rho(0,j_1)\rho(j_1,j_2)\cdots \rho(j_{m-1},j_m)}.\] Next, we prove \[\begin{align} \label{lpa2} \lim_{n\rightarrow\infty}\frac{\Psi_n(m)}{(\log n)^m}=\frac{\prod_{j=0}^{m-1}(j+\alpha)}{m!\alpha^m \beta^m}, \quad m\ge1. \end{align}\tag{51}\] To this end, let \(j_0=0\) and set \(n_0=n_2.\) For \(m\ge 1,\) let \(H\) and \(A\) be the sets defined in 39 and 40 . Then we have \[\begin{align} \label{gab2} \Psi_n(m)=\Psi_n(m,H)+\Psi_n(m,A), \end{align}\tag{52}\] where for a set \(V\subset\mathbb{N}^m,\) \(\Psi_n(m,V)\) is defined by 43 . Due to 49 , we have \[\begin{align} (1-\varepsilon)^m\beta^{-m}&\sum_{(j_1,...,j_m)\in H} \frac{1}{j_1j_2^{1-\alpha}(j_2^{\alpha}-j_1^{\alpha})\cdots j_m^{1-\alpha}(j_m^{\alpha}-j_{m-1}^{\alpha})}\\ &\le\Psi_n(m,H)=\sum_{(j_1,...,j_m)\in H}\frac{1}{\rho(0,j_1)\rho(j_1,j_2)\cdots \rho(j_{m-1},j_m)}\\ &\le(1+\varepsilon)^m\beta^{-m}\sum_{(j_1,...,j_m)\in H} \frac{1}{j_1j_2^{1-\alpha}(j_2^{\alpha}-j_1^{\alpha})\cdots j_m^{1-\alpha}(j_m^{\alpha}-j_{m-1}^{\alpha})}.\nonumber \end{align}\] Applying Proposition 1 with \(b_s\equiv n_0,\;s=1,...,m,\) we get \[\begin{align} \label{gau2} (1-\varepsilon)^m\frac{\prod_{j=0}^{m-1}(j+\alpha)}{m!\alpha^m\beta^{m}}&\le \varliminf_{n\rightarrow\infty} \frac{\Psi_n(m,H)}{(\log n)^{m}} \\ &\le \varlimsup_{n\rightarrow\infty} \frac{\Psi_n(m,H)}{(\log n)^{m}}\le (1+\varepsilon)^m \frac{\prod_{j=0}^{m-1}(j+\alpha)}{m!\alpha^m\beta^{m}}.\nonumber \end{align}\tag{53}\] Since \(\varepsilon\) is arbitrary, letting \(\varepsilon\to 0\) in 53 , we deduce that \[\begin{align} \label{lpn2} \lim_{n\rightarrow\infty}\frac{\Psi_n(m,H)}{(\log n)^{m}}=\frac{\prod_{j=0}^{m-1}(j+\alpha)}{m!\alpha^m\beta^{m}}. \end{align}\tag{54}\] Using 4 and Proposition 1, analogous to 46 , we show that \[\begin{align} \lim_{n\rightarrow\infty}\frac{\Psi_n(m,A)}{(\log n)^{m}}=0.\label{gbi2} \end{align}\tag{55}\] Putting 52 , 54 and 55 together, we get 51 . With 51 in hand, dividing 50 by \((\log n)^k\) and taking \(n\to\infty\), we infer that \[\begin{align} \lim_{n\rightarrow\infty}\frac{E\left(\sum_{j=1}^n\eta_j\right)^k}{(\log n)^{k}}=\frac{\prod_{j=0}^{k-1}(j+\alpha)}{\alpha^k\beta^{k}}.\nonumber \end{align}\] Let \(\xi\sim \mathrm{Gamma}(\alpha,1)\). Then \[\begin{align} E\left(\xi^k\right)&=\frac{1}{\Gamma(\alpha)}\int_{0}^\infty x^{k+\alpha-1}e^{-x}dx=\frac{\Gamma(k+\alpha)}{\Gamma(\alpha)}=\prod_{j=0}^{k-1}(j+\alpha).\nonumber \end{align}\] Using Stirling’s approximation, we get \[\left[E\left({\xi}^{2n}\right)\right]^{-\frac{1}{2n}}=\left(\prod_{j=0}^{2n-1}(j+\alpha)\right)^{-\frac{1}{2n}}\ge (\lfloor 2n+\alpha\rfloor !)^{-\frac{1}{2n}}\sim \frac{e}{2n}\] as \(n\rightarrow\infty.\) Thus \(\sum_{n=1}^\infty\left(E\left(\xi^{2n}\right)\right)^{-\frac{1}{2n}}=\infty.\) Applying Carleman’s test for the uniqueness of the moment problem (see e.g., [25]), we have \[\begin{align} \frac{\alpha \beta\sum_{j=1}^n\eta_j}{\log n}\overset{d}\to \xi, \text{ as } n\rightarrow\infty, \end{align}\] where \(\xi\sim \mathrm{Gamma}(\alpha, 1)\). This completes the proof of Theorem 2. \(\Box\)
In this section, we give the proofs of Corollaries 1–4.
As usual we use \(\mathbb{P}_x\) and \(\mathbb{E}_x\) to denote the probability measure and the corresponding expectation operator induced by a \(d\)-dimensional Brownian motion starting at \(x\in\mathbb{R}^d.\) For \(r>0,\) define the stopping time \[T_r=\inf\{t>0:B_t\in \mathcal{S}(0,r)\}.\] Fix \(0<r<R<\infty\) and \(x\in\mathbb{R}^d\) such that \(r<|x|<R.\) Then by solving the Dirichlet problem on the annulus \(A=\{y\in \mathbb{R}^d: r<|y|<R\},\) we obtain (see [26]) \[\begin{align} \label{prr} \mathbb{P}_x(T_r>T_R)=\frac{|x|^{2-d}-r^{2-d}}{R^{2-d}-r^{2-d}}. \end{align}\tag{56}\] Define \(\eta_j=1_{\{j\in C(a,b)\}}\) for \(j\in \mathbb{N},\) so that \[|C(a,b)\cap [1,n]|=\sum_{j=1}^n\eta_j,\quad n\ge1.\] It follows immediately from 56 that \[\begin{align} \mathbb{P}_0(\eta_i=1)=\frac{(bi)^{2-d}-(bi+a)^{2-d}}{(bi)^{2-d}}, \;i\ge 1. \end{align}\] Now fix \(m\ge 1\) and \(1\le j_1<j_2<...<j_m<\infty.\) Using 56 and the strong Markov property, we infer that \[\begin{align} &\mathbb{P}_0\left(\eta_{j_1}=1,...,\eta_{j_m}=1\right)=\mathbb{P}_0\left(j_1\in C(a,b),...,j_m\in C(a,b)\right)\nonumber\\ &=\frac{(j_1b+a)^{2-d}-(j_1b)^{2-d}}{(j_2b+a)^{2-d}-(j_1b)^{2-d}}\cdots \frac{(j_{m-1}b+a)^{2-d}-(j_{m-1}b)^{2-d}}{(j_mb+a)^{2-d}-(j_{m-1}b)^{2-d}} \frac{(j_{m}b)^{2-d}-(j_{m}b+a)^{2-d}}{(j_mb)^{2-d}}\nonumber\\ &=\frac{(j_1+a/b)^{2-d}-j_1^{2-d}}{(j_2+a/b)^{2-d}-j_1^{2-d}}\cdots \frac{(j_{m-1}+a/b)^{2-d}-j_{m-1}^{2-d}}{(j_m+a/b)^{2-d}-j_{m-1}^{2-d}} \frac{j_{m}^{2-d}-(j_{m}+a/b)^{2-d}}{j_m^{2-d}}\nonumber. \end{align}\] Similar to 32 , we have \[\begin{align} \label{eek} \mathbb{E}_0\Bigg(\sum_{j=1}^n\eta_j\Bigg)^k&=\sum_{m=1}^k\sum_{\begin{subarray}{c}l_1+\dots+l_m=k,\\ l_s\ge1,\,s=1,...,m \end{subarray}}\frac{k!}{l_1!\cdots l_m!}\Psi_n(m),\quad k\ge 1, \end{align}\tag{57}\] where for \(n\ge m \ge 1,\) \[\begin{align} \Psi_n(m)=\sum_{1\le j_1<...<j_m\le n}&\frac{(j_1+a/b)^{2-d}-j_1^{2-d}}{(j_2+a/b)^{2-d}-j_1^{2-d}}\cdots \frac{(j_{m-1}+a/b)^{2-d}-j_{m-1}^{2-d}}{(j_m+a/b)^{2-d}-j_{m-1}^{2-d}} \\ &\times\frac{j_{m}^{2-d}-(j_{m}+a/b)^{2-d}}{j_m^{2-d}}. \end{align}\] Clearly, we have \[\begin{align} &(j+a/b)^{2-d}-j^{2-d}\sim (a/b)(2-d)j^{1-d},\quad j\to\infty.\label{sjab} \end{align}\tag{58}\] Moreover, by the mean value theorem, there exist \(\kappa_1\in (j,j+a/b),\) \(\kappa_2\in (i,j)\) such that \[\begin{align} \frac{(j+a/b)^{2-d}-i^{2-d}}{j^{2-d}-i^{2-d}}=1+\frac{(j+a/b)^{2-d}-j^{2-d}}{j^{2-d}-i^{2-d}} =1+\frac{(a/b)(2-d)\kappa_1^{1-d}}{(2-d)(j-i)\kappa_2^{1-d}}, \end{align}\] which implies \[\begin{align} 1<\frac{(j+a/b)^{2-d}-i^{2-d}}{j^{2-d}-i^{2-d}}<1+\frac{a}{b(j-i)}, \quad j>i\ge 1.\label{jiab} \end{align}\tag{59}\] Fix \(\varepsilon>0.\) Due to 58 and 59 , there exists \(n_3>0\) such that \[\begin{gather} \tag{60} (a/b)(d-2)j^{1-d}(1-\varepsilon)\le j^{2-d}-(j+a/b)^{2-d}\le (a/b)(d-2)j^{1-d}(1+\varepsilon),\quad j\ge n_3,\\ i^{2-d}-j^{2-d}<i^{2-d}-(j+a/b)^{2-d}<(1+\varepsilon)(i^{2-d}-j^{2-d}),\quad j-i\ge n_3.\tag{61} \end{gather}\] Let \(n_0=n_3\) and define \(H\) and \(A\) as in 39 and 40 .
Then, \[\begin{align} \label{gaba} \Psi_n(m)=\Psi_n(m,H)+\Psi_n(m,A), \end{align}\tag{62}\] where \[\begin{align} \Psi_n(m,V)=\sum_{(j_1,...,j_m)\in V}& \frac{(j_1+a/b)^{2-d}-j_1^{2-d}}{(j_2+a/b)^{2-d}-j_1^{2-d}}\cdots \frac{(j_{m-1}+a/b)^{2-d}-j_{m-1}^{2-d}}{(j_m+a/b)^{2-d}-j_{m-1}^{2-d}} \\&\times \frac{j_{m}^{2-d}-(j_{m}+a/b)^{2-d}}{j_m^{2-d}} \end{align}\] for \(V=A\) or \(H.\) Owing to 60 and 61 , we have \[\begin{align} \label{pmhu} \Psi_n(m,H)&\le \left((a/b)(d-2)(1+\varepsilon)\right)^{m}\sum_{(j_1,...,j_m)\in H} \left(\prod_{l=1}^{m-1}\frac{j_l^{1-d}}{j_{l}^{2-d}-j_{l+1}^{2-d}}\right)\frac{1}{j_m}\\ &=\left((a/b)(d-2)(1+\varepsilon)\right)^{m}\sum_{(j_1,...,j_m)\in H} \frac{1}{j_1}\prod_{l=1}^{m-1}\frac{1}{j_{l+1}^{3-d}\left(j_{l+1}^{d-2}-j_{l}^{d-2}\right)}\nonumber \end{align}\tag{63}\] and \[\begin{align} \label{pmhl} \Psi_n(m,H)&\ge \left(\frac{(a/b)(d-2)(1-\varepsilon)}{1+\varepsilon}\right)^{m}\sum_{(j_1,...,j_m)\in H} \left(\prod_{l=1}^{m-1}\frac{j_l^{1-d}}{j_{l}^{2-d}-j_{l+1}^{2-d}}\right)\frac{1}{j_m}\\ &=\left(\frac{(a/b)(d-2)(1-\varepsilon)}{1+\varepsilon}\right)^{m}\sum_{(j_1,...,j_m)\in H} \frac{1}{j_1}\prod_{l=1}^{m-1}\frac{1}{j_{l+1}^{3-d}\left(j_{l+1}^{d-2}-j_{l}^{d-2}\right)}.\nonumber \end{align}\tag{64}\] Based on 63 and 64 , putting \(\alpha=d-2,\) \(b_s=n_0,\) \(s=1,2,...,m\) and applying Proposition 1, we get \[\begin{align} \left(\frac{(a/b)(d-2)(1-\varepsilon)}{1+\varepsilon}\right)^{m}&\frac{\prod_{j=0}^{m-1}(j+d-2)}{m!(d-2)^m}\le \varliminf_{n\rightarrow\infty} \frac{\Psi_n(m,H)}{\left(\log n\right)^{m}}\nonumber\\ &\le \varlimsup_{n\rightarrow\infty} \frac{\Psi_n(m,H)}{\left(\log n\right)^{m}}\le \left((a/b)(d-2)(1+\varepsilon)\right)^{m}\frac{\prod_{j=0}^{m-1}(j+d-2)}{m!(d-2)^m}. \end{align}\] Since \(\varepsilon\) is arbitrary, we have \[\begin{align} \label{ph} \lim_{n\rightarrow\infty} \frac{\Psi_n(m,H)}{\left(\log n\right)^{m}}= (a/b)^{m}\frac{\prod_{j=0}^{m-1}(j+d-2)}{m!}. \end{align}\tag{65}\] On the other hand, based on 60 and 61 , by an argument similar to the proof of 47 , we show that \[\begin{align} \label{pa} \lim_{n\rightarrow\infty} \frac{\Psi_n(m,A)}{\left(\log n\right)^{m}}=0. \end{align}\tag{66}\] In view of 62 , 65 and 66 , dividing 57 by \(\left((a/b)\log n\right)^k\) and taking the limit, we obtain \[\begin{align} \lim_{n\rightarrow\infty}\frac{1}{\left((a/b)\log n\right)^k}\mathbb{E}_0\Bigg(\sum_{j=1}^n\eta_j\Bigg)^k=\prod_{j=0}^{k-1}(j+d-2), \quad k\ge1. \end{align}\] Applying again Carleman’s test for the uniqueness of the moment problem (see e.g., [25]), we have \[\begin{align} \frac{ \sum_{j=1}^n\eta_j}{(a/b)\log n}\overset{d}\to \xi, \text{ as } n\rightarrow\infty, \end{align}\] where \(\xi\sim \mathrm{Gamma}(d-2, 1)\). This completes the proof of Corollary 1. \(\Box\)
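The exit formula 56 underlying this proof can itself be checked by simulation. The sketch below (ours; \(d=3,\) \(r=1,\) \(R=4,\) \(|x|=2\) are arbitrary choices) discretizes Brownian motion with a small time step, so the estimate carries a small overshoot bias of order \(\sqrt{dt}.\)

```python
import numpy as np

# Monte Carlo check of (56) for d = 3: starting from |x| = 2 in the annulus
# 1 < |y| < 4, the exact value is (2^{-1} - 1)/(4^{-1} - 1) = 2/3.
rng = np.random.default_rng(4)
d, r, R, dt, n_paths = 3, 1.0, 4.0, 1e-3, 2000
pos = np.zeros((n_paths, d)); pos[:, 0] = 2.0
active = np.ones(n_paths, dtype=bool)
outer_first = 0
while active.any():
    idx = np.flatnonzero(active)
    pos[idx] += rng.normal(0.0, np.sqrt(dt), size=(idx.size, d))
    rad = np.linalg.norm(pos[idx], axis=1)
    outer_first += np.count_nonzero(rad >= R)           # outer sphere hit first
    active[idx[(rad <= r) | (rad >= R)]] = False        # retire exited paths
exact = (2.0**(2 - d) - r**(2 - d)) / (R**(2 - d) - r**(2 - d))
print(outer_first / n_paths, exact)
```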
To prove Corollary 2, recall that the generator of the geometric Brownian motion \(\{X_t\}_{t\ge0}\) defined in 7 or 8 is given by \[Lf(x)=\frac{1}{2}\sigma^2x^2f''(x)+\mu xf'(x),\] where \(f\) is bounded and twice continuously differentiable. A non-degenerate solution \(S(x)\) of the equation \[\begin{align} \frac{1}{2}\sigma^2x^2S''(x)+\mu xS'(x)=0, \text{ or }LS=0\label{hf} \end{align}\tag{67}\] is called the scale function of \(\{X_t\}_{t\ge 0}.\) It is easy to check that \[\begin{align} \label{sx} S(x)=\left\{\begin{array}{ll} x^{1-2\mu/\sigma^2}, & \text{if } 2\mu/\sigma^2\ne 1,\\ \log x, &\text{if } 2\mu/\sigma^2=1 \end{array}\right. \end{align}\tag{68}\] is a particular solution of equation 67.
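As a small sanity check (ours, not part of the proof), one can confirm symbolically that 68 satisfies \(LS=0\):

```python
import sympy as sp

# Symbolic check that (68) solves (67); a sanity check, not part of the proof.
x, mu, sigma = sp.symbols('x mu sigma', positive=True)

def L(f):
    # generator of geometric Brownian motion applied to f
    return sp.Rational(1, 2) * sigma**2 * x**2 * sp.diff(f, x, 2) + mu * x * sp.diff(f, x)

S = x**(1 - 2 * mu / sigma**2)               # case 2*mu/sigma^2 != 1
print(sp.simplify(L(S)))                     # prints 0

S_crit = sp.log(x)                           # case 2*mu/sigma^2 = 1
print(sp.simplify(L(S_crit).subs(mu, sigma**2 / 2)))   # prints 0
```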
In what follows, we denote by \(P_x\) and \(E_x\) the probability measure and the corresponding expectation operator of a geometric Brownian motion starting from \(x.\) For \(y>0,\) define \[T_y=\inf\{t>0: X_t=y \}.\] Then, for \(0<r<x<R<\infty\) we have (see for example [14]) \[\begin{align} \label{exgb} P_x(T_r>T_R)=\frac{S(x)-S(r)}{S(R)-S(r)}. \end{align}\tag{69}\] If \(2\mu/\sigma^2>1,\) then from 68 and 69 we obtain \[\begin{align} \label{exgbg} P_x(T_r>T_R)=\frac{x^{1-2\mu/\sigma^2}-r^{1-2\mu/\sigma^2}}{R^{1-2\mu/\sigma^2}-r^{1-2\mu/\sigma^2}}, \end{align}\tag{70}\] for \(0<r<x<R<\infty.\)
Based on 70, the rest of the proof proceeds exactly as the analogous part of the proof of Corollary 1 (following 56). \(\Box\)
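In the same spirit as the check after Corollary 1, the following minimal Monte Carlo sketch probes 70 with illustrative parameters of ours; the one-step lognormal update is exact, but crossings between grid points introduce a small bias.

```python
import numpy as np

# Minimal Monte Carlo sketch of (70) with 2*mu/sigma^2 = 2 > 1.
rng = np.random.default_rng(1)
mu, sigma = 1.0, 1.0
x, r, R = 2.0, 1.0, 4.0
dt, n_paths = 1e-3, 2_000

hits_R_first = 0
for _ in range(n_paths):
    X = x
    while r < X < R:
        # exact one-step update of geometric Brownian motion
        X *= np.exp((mu - 0.5 * sigma**2) * dt
                    + sigma * np.sqrt(dt) * rng.standard_normal())
    hits_R_first += X >= R

nu = 1 - 2 * mu / sigma**2                   # exponent in (70); here nu = -1
theory = (x**nu - r**nu) / (R**nu - r**nu)
print(hits_R_first / n_paths, theory)        # both should be close to 2/3
```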
Let \(f_0(s)=s\) for \(s\in[0,1]\). Recall that \(f(s)=1/(2-s)\) for \(s\in[0,1]\). Define recursively \(f_n(s)=f(f_{n-1}(s))\) for \(n\ge 1.\) By induction, we get \(f_n(s)=1-\frac{1-s}{1+n(1-s)}\) for \(n\ge 0,\) with derivatives \(f^{(j)}_n(0)=\frac{j!n^{j-1}}{(1+n)^{j+1}}\) for \(j\ge 1.\) For the Galton-Watson process \(\{Y_n\}_{n\geq0}\), the generating function satisfies (see, e.g., [27]) \[\begin{align} E\left(s^{Y_{k+n}}\,\middle|\,Y_k=1\right)=f_n(s), \quad k\ge 0,\;n\ge 0. \nonumber \end{align}\] Set \(\eta_i=1_{\left\{Y_i=1\right\}}\) for \(i\ge 1.\) Then \(P(\eta_i=1)=P(Y_i=1)=f_i'(0)=\frac{1}{(1+i)^2},\) \(i\ge1.\) By the Markov property of \(\{Y_n\}_{n\ge 0},\) for \(j\ge i\ge 1\), we have \[\begin{align} P\left(\eta_{j}=1\,\middle|\,\eta_i=1,...,\eta_1=1\right)&=P\left(Y_{j}=1\,\middle|\,Y_i=1\right)=f_{j-i}'(0)=\frac{1}{(j-i+1)^2}. \end{align}\] Therefore, 1 holds with \(D(n)=(1+n)^2,\) \(n\ge1\) and \(\rho(i,j)=D(j-i)=(j-i+1)^2,\;j\ge i\ge 0.\) Notice that \[\begin{align} \zeta(D):=\sum_{n=1}^\infty \frac{1}{D(n)}=\sum_{n=1}^\infty\frac{1}{(1+n)^2}=\frac{\pi^2}{6}-1. \end{align}\] Thus, an application of Part (i) of Theorem 1 yields that \[\sum_{i=1}^n\eta_i\overset{a.s.}{\to} \xi\text{ as }n\rightarrow\infty,\] where \(\xi\sim \mathrm{Geo}(6/\pi^2).\) Corollary 3 is proved. \(\Box\)
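The offspring law behind \(f(s)=1/(2-s)\) is geometric: \(P(k\text{ offspring})=2^{-(k+1)}\) for \(k\ge0.\) Corollary 3 can therefore be probed by direct simulation. In the sketch below (truncation level and sample sizes are ours), the sum of \(y\) i.i.d. such offspring counts is negative binomial, and the empirical law of \(\#\{i\ge1:Y_i=1\}\) is compared with \(\mathrm{Geo}(6/\pi^2)\).

```python
import numpy as np

# Simulation sketch for Corollary 3 (truncation and sample sizes are ours).
rng = np.random.default_rng(0)

def visits_to_one(max_gen=100_000):
    """One critical GW path from Y_0 = 1; returns #{i >= 1 : Y_i = 1}.
    Visits after max_gen are ignored; their probability is O(1/max_gen)."""
    y, count = 1, 0
    for _ in range(max_gen):
        # Sum of y iid offspring with pgf 1/(2-s) is NegativeBinomial(y, 1/2).
        y = rng.negative_binomial(y, 0.5)
        if y == 1:
            count += 1
        if y == 0:
            break
    return count

samples = np.array([visits_to_one() for _ in range(20_000)])
p = 6 / np.pi**2                             # parameter of the limiting law
for k in range(4):
    print(k, round((samples == k).mean(), 4), round((1 - p)**k * p, 4))
```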
Recall that \(\{Z_n\}_{n\ge 0}\) is a BPVE with immigration which satisfies \(P(Z_0=0)=1\) and \[\begin{align} \label{mb2} E\left(s^{Z_{k+1}}\,\middle|\,Z_{k},...,Z_0\right)=\left(\tilde{f}_{k+1}(s)\right)^{Z_{k}+1}, \quad k\ge 0, \end{align}\tag{71}\] where \(\tilde{f}_k,\;k\ge 1\) are given in 10. For \(j\ge i\ge 0\), define \[\begin{align} F_{i,j}(s)=E\left(s^{Z_{j}}\,\middle |\, Z_i=0\right),\quad s\in [0,1]. \end{align}\] By induction and 71, for \(j\ge i\ge 0\), we have \[\begin{align} \label{fn} F_{i,j}(s)=\prod_{k=i+1}^j \tilde{f}_{k,j}(s),\quad s\in [0,1], \end{align}\tag{72}\] where \(\tilde{f}_{k,j}(s)=\tilde{f}_k\left(\tilde{f}_{k+1}\left(\cdots\left(\tilde{f}_j(s)\right)\right)\right)\) for \(j\ge k\ge 1.\) Recall that \(m_k=\tilde{f}_k'(1)=\frac{1-p_k}{p_k}\) for \(k\ge1.\) Thus, a direct calculation yields \(\tilde{f}_k(s)=1-\frac{m_k(1-s)}{1+m_k(1-s)},\;k\ge1.\) As a consequence, by induction we deduce that \[\begin{align} \tilde{f}_{k,j}(s)&=1-\frac{m_k\cdots m_j(1-s)}{1+\sum_{t=k}^j m_t\cdots m_j(1-s)} =\frac{1+\sum_{t=k+1}^j m_t\cdots m_j(1-s)}{1+\sum_{t=k}^j m_t\cdots m_j(1-s)},\quad j\ge k\ge1.\label{fkn} \end{align}\tag{73}\] Substituting 73 into 72, we obtain \[\begin{align} F_{i,j}(s)=\frac{1}{1+\sum_{t=i+1}^j m_t\cdots m_j(1-s)},\quad j\ge i\ge 0. \end{align}\] For \(j\ge i\ge 0,\) write \[\begin{align} \rho(i,j)=1+\sum_{t=i+1}^j m_t\cdots m_j. \end{align}\] Then \[\begin{align} \nonumber P\left(Z_j=0\,\middle|\,Z_i=0\right)=F_{i,j}(0)=\frac{1}{1+\sum_{t=i+1}^j m_t\cdots m_j}=\frac{1}{\rho(i,j)}. \end{align}\] Let \(\eta_i=1_{\{Z_i=0\}}\) for \(i\ge 1.\) Then, for \(j\ge i\geq1\), we have \[\begin{align} &P(\eta_i=1)=P(Z_i=0)=F_{0,i}(0)=\frac{1}{\rho(0,i)},\tag{74}\\ & P(\eta_j=1\,|\,\eta_i=1,...,\eta_1=1)=P(Z_j=0\, |\,Z_i=0)=\frac{1}{\rho(i,j)}. \tag{75} \end{align}\]
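The closed form in 72 and 73 is easy to sanity-check numerically by composing the maps \(\tilde f_k\) directly; in the sketch below, the means \(m_k\) are arbitrary positive numbers of our choosing.

```python
import numpy as np

# Numerical sanity check of (72)-(73); the m_k are arbitrary positive numbers.
rng = np.random.default_rng(1)
i, j, s = 3, 12, 0.4
m = {k: float(np.exp(rng.normal(scale=0.1))) for k in range(1, j + 1)}

def f_tilde(k, u):
    return 1 - m[k] * (1 - u) / (1 + m[k] * (1 - u))

def f_comp(k, u):
    # computes f~_{k,j}(u) = f~_k(f~_{k+1}(... f~_j(u)))
    for l in range(j, k - 1, -1):
        u = f_tilde(l, u)
    return u

lhs = np.prod([f_comp(k, s) for k in range(i + 1, j + 1)])      # product in (72)
tail = sum(np.prod([m[l] for l in range(t, j + 1)]) for t in range(i + 1, j + 1))
rhs = 1 / (1 + tail * (1 - s))                                  # closed form F_{i,j}(s)
print(lhs, rhs)                               # agree to machine precision
```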
Lemma 1. Let \(p_i,\;i\ge1\) be as in 12. (i) Suppose \(\sum_{k=i}^\infty r_k \to 0\) as \(i \to \infty\). Then for any \(\varepsilon > 0\), there exists \(N \ge 1\) such that for all \(i \ge N\) and \(j - i > N\), \[\begin{align} \label{d0n} 1 - \varepsilon \le \frac{i}{\rho(0,i)} \le 1 + \varepsilon, \quad 1 - \varepsilon \le \frac{j-i}{\rho(i,j)} \le 1 + \varepsilon. \end{align}\tag{76}\]
(ii) Fix \(B \in [0,1)\). If \(r_i = \frac{B}{i}\) for \(i \ge 1\), then for any \(\varepsilon > 0\), there exists \(N \ge 1\) such that for all \(i \ge N\) and \(j - i \ge N\), \[\begin{align} \label{d1n} 1 - \varepsilon \le \frac{i}{(1-B)\rho(0,i)} \le 1 + \varepsilon, \quad 1 - \varepsilon \le \frac{j^{B}\left(j^{1-B} - i^{1-B}\right)}{(1-B)\rho(i,j)} \le 1 + \varepsilon. \end{align}\tag{77}\]
Observe that as \(k\rightarrow\infty,\) \[\begin{align} \label{567hyhyt1} m_k=\frac{1-p_k}{p_k}=\frac{1/2+r_k/4}{1/2-r_k/4}=1+r_k+O\left(r_k^{2}\right). \end{align}\tag{78}\] By Taylor expansion, there exists \(\theta_k\in (0, m_k-1)\) such that \[\begin{align} \log m_k=m_k-1-\frac{(m_k-1)^2}{2(1+\theta_k)^2}. \end{align}\] Combining this with 78, we obtain, for \(j\ge t-1\ge 0,\) \[\begin{align} \label{46hyyr4} m_t \cdots m_j&= \exp\left(\sum_{k=t}^j \log m_k \right) = \exp\left(\sum_{k=t}^j \left[m_k - 1 - \frac{(m_k - 1)^2}{2(1 + \theta_k)^2}\right]\right)\\ &= \exp\left(\sum_{k=t}^j \left[r_k + O(r_k^2)\right]\right).\nonumber \end{align}\tag{79}\]
We first prove Part (i). Fix \(\eta>0.\) Since \(\sum_{k=i}^\infty r_k\to 0\) as \(i\to\infty,\) there exists \(i^*\ge 1\) such that for all \(j\ge t-1\ge i^{*},\) \[\begin{align} 1 - \eta < \exp\left(\sum_{k=t}^j \left[r_k + O\left(r_k^2\right)\right]\right) < 1 + \eta. \end{align}\] This implies \[\begin{align} \label{meta} 1 - \eta \le m_t \cdots m_j \le 1 + \eta, \quad j \ge t-1 \ge i^*. \end{align}\tag{80}\] Consequently, \[\begin{align} \label{rhilu} (j - i + 1)(1 - \eta) \le \rho(i,j) \le (j - i + 1)(1 + \eta), \quad j \ge i \ge i^*. \end{align}\tag{81}\] Choose \(N_1 \ge i^*\) such that \(\frac{j - i + 1}{j - i} < 1 + \eta\) for all \(i \ge N_1\) and \(j \ge N_1 + i\). Then, \[\begin{align} \label{rije1} (1 - \eta)^2 \le \frac{\rho(i,j)}{j - i} \le (1 + \eta)^2, \quad j \ge i + N_1, \, i \ge N_1. \end{align}\tag{82}\] For \(i > i^*\), note that \[\begin{align} \rho(0,i) = \sum_{k=1}^{i+1} m_k \cdots m_i = m_{i^*+1} \cdots m_i \sum_{k=1}^{i^*} m_k \cdots m_{i^*} + \rho(i^*,i). \end{align}\] By 80 and 81, \[\begin{align} (C^* + i - i^* + 1)(1 - \eta) \le \rho(0,i) \le (C^* + i - i^* + 1)(1 + \eta), \quad i > i^*, \end{align}\] where \(C^* = \sum_{k=1}^{i^*} m_k \cdots m_{i^*}\). Choose \(N_2 > i^*\) such that \[\begin{align} 1 - \eta \le \frac{C^* + i - i^* + 1}{i} \le 1 + \eta, \quad i\ge N_2. \end{align}\] Then, \[\begin{align} \label{rie2} (1 - \eta)^2 \le \frac{\rho(0,i)}{i} \le (1 + \eta)^2, \quad i \ge N_2. \end{align}\tag{83}\] For any \(\varepsilon > 0\), choose \(\eta\) sufficiently small so that \[\begin{align} 1 - \varepsilon < (1 + \eta)^{-2} < (1 - \eta)^{-2} < 1 + \varepsilon, \end{align}\] and set \(N = N_1 \vee N_2\). From 82 and 83, we obtain 76. This completes the proof of Part (i).
Next, we turn to the proof of Part (ii). Fix \(B\in [0,1)\) and assume \(r_i=B/i\) for \(i\ge1.\) By 79, \[\begin{align} \label{46hyyr42} m_t \cdots m_j = \exp\left(\sum_{k=t}^j \left[\frac{B}{k} + O\left(k^{-2}\right)\right]\right), \quad j \ge t-1 \ge 0. \end{align}\tag{84}\] Fix \(\eta > 0\). There exists \(i^{**} \ge 1\) such that for all \(j \ge t-1 \ge i^{**}\), \[\begin{align} 1 - \eta < \exp\left(\sum_{k=t}^j O\left(k^{-2}\right)\right) < 1 + \eta. \end{align}\] Observe that for \(j \ge t-1 > 0\), \[\begin{align} B \log\left(\frac{j+1}{t}\right) \le \sum_{k=t}^j \frac{B}{k} \le B \log\left(\frac{j}{t-1}\right). \end{align}\] Combining this with 84, we obtain for \(j \ge t-1 \ge i^{**}\): \[\begin{align} \label{min} (1 - \eta)\left(\frac{j+1}{t}\right)^B \le m_t \cdots m_j \le (1 + \eta)\left(\frac{j}{t-1}\right)^B. \end{align}\tag{85}\] Thus, for \(j \ge i \ge i^{**}\), \[\begin{align} \rho(i,j) &\le (1 + \eta) \sum_{t=i+1}^{j+1} \left(\frac{j}{t-1}\right)^B = (1 + \eta)j^B \sum_{t=i}^j \frac{1}{t^B} \le \frac{1 + \eta}{1 - B} j^B \left(j^{1-B} - (i-1)^{1-B}\right). \end{align}\] For \(j > i + 2/\eta\), by the mean value theorem, there exist \(\kappa_1 \in (i, i + 2/\eta)\) and \(\kappa_2 \in (i-1, i)\) such that \[\begin{align} \frac{j^{1-B} - (i-1)^{1-B}}{j^{1-B} - i^{1-B}} \le 1 + \frac{i^{1-B} - (i-1)^{1-B}}{(i + 2/\eta)^{1-B} - i^{1-B}} = 1 + \left(\frac{\kappa_1}{\kappa_2}\right)^B \frac{\eta}{2} \le 1 + \left(\frac{i + 2/\eta}{i - 1}\right)^B \frac{\eta}{2}. \end{align}\] Choose \(\tilde{N}_1 > i^{**} \vee \left(\lfloor 2/\eta \rfloor + 1\right)\) such that \(\left(\frac{i + 2/\eta}{i - 1}\right)^B < 1 + \eta/2\) for all \(i\ge\tilde{N}_1\). Then, \[\begin{align} \frac{j^{1-B} - (i-1)^{1-B}}{j^{1-B} - i^{1-B}} \le 1 + \frac{\eta}{2}(1 + \eta/2) < 1 + \eta, \quad i \ge \tilde{N}_1, \, j - i \ge \tilde{N}_1. \end{align}\] Consequently, \[\begin{align} \rho(i,j) \le \frac{(1 + \eta)^2}{1 - B} j^B \left(j^{1-B} - i^{1-B}\right), \quad j - i \ge \tilde{N}_1, \, i \ge \tilde{N}_1. \end{align}\] Similarly, there exists \(\tilde{N}_2\ge 1\) such that \[\begin{align} \rho(i,j) \ge \frac{(1 - \eta)^2}{1 - B} j^B \left(j^{1-B} - i^{1-B}\right), \quad j - i \ge \tilde{N}_2, \, i \ge \tilde{N}_2. \end{align}\] Let \(\tilde{N}_3 = \tilde{N}_1 \vee \tilde{N}_2\). Then, \[\begin{align} \label{dij} \frac{(1 - \eta)^2}{1 - B} j^B \left(j^{1-B} - i^{1-B}\right) \le \rho(i,j) \le \frac{(1 + \eta)^2}{1 - B} j^B \left(j^{1-B} - i^{1-B}\right) \end{align}\tag{86}\] for \(j - i \ge \tilde{N}_3\) and \(i > \tilde{N}_3\). For \(\rho(0,i)\), fix \(i_0 \ge \tilde{N}_3\) and write \[\begin{align} \rho(0,i) = \sum_{k=1}^{i+1} m_k \cdots m_i = m_{i_0+1} \cdots m_i \sum_{k=1}^{i_0} m_k \cdots m_{i_0} + \rho(i_0,i), \quad i> i_0. \end{align}\] Let \(C_0 = \sum_{k=1}^{i_0} m_k \cdots m_{i_0}\). By 85 and 86, for \(i-i_0\ge \tilde{N}_3\) we have \[\begin{align} \rho(0,i) &\le C_0(1 + \eta)\left(\frac{i}{i_0}\right)^B + \frac{(1 + \eta)^2}{1 - B} i^B \left(i^{1-B} - i_0^{1-B}\right), \\ \rho(0,i) &\ge C_0(1 - \eta)\left(\frac{i+1}{i_0+1}\right)^B + \frac{(1 - \eta)^2}{1 - B} i^B \left(i^{1-B} - i_0^{1-B}\right). \end{align}\] Therefore, we can choose \(N > \tilde{N}_3 + i_0\) such that \[\begin{align} \label{pin} \frac{i}{1-B}(1 - \eta)^3 \le \rho(0,i) \le \frac{i}{1 - B} \left(\eta + (1 + \eta)^2\right), \quad i > N. \end{align}\tag{87}\] For any \(\varepsilon > 0\), selecting sufficiently small \(\eta\) yields 77 from 86 and 87. Part (ii) of the lemma is proved. \(\Box\)
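For illustration, the asymptotics in Lemma 1 can be observed numerically through the recursion \(\rho(0,i)=1+m_i\,\rho(0,i-1).\) In the sketch below we read 12 as \(p_k=1/2-r_k/4\) (consistent with 78), which is an assumption on our part.

```python
import numpy as np

# Numerical illustration of Lemma 1 via rho(0,i) = 1 + m_i * rho(0,i-1).
# Assumption: p_k = 1/2 - r_k/4 (our reading of (12), consistent with (78)).
def print_rho_ratios(r, norm):
    rho = 1.0
    for idx, rk in enumerate(r, start=1):
        mk = (0.5 + rk / 4) / (0.5 - rk / 4)    # m_k = (1 - p_k)/p_k
        rho = 1.0 + mk * rho
        if idx in (10**3, 10**4, 10**5):
            print(idx, rho / norm(idx))         # should approach 1

k = np.arange(1, 10**5 + 1)
print_rho_ratios(1.0 / k**2, float)             # Part (i):  rho(0,i) ~ i
B = 0.5
print_rho_ratios(B / k, lambda i: i / (1 - B))  # Part (ii): rho(0,i) ~ i/(1-B)
```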
We are now ready to complete the proof of Corollary 4. To prove Part (i) of Corollary 4, for \(n\ge 1,\) we set \(D(n)=n\) and let \[S(n):=\sum_{i=1}^n\frac{1}{D(i)}.\] Clearly, \(S(n)\sim \log n\) as \(n\rightarrow\infty.\) Thus, the sequence \(\{S(n)\}_{n\ge1}\) is regularly varying with index \(\tau=0.\) By 74, 75 and Part (i) of Lemma 1, we may apply Part (ii) of Theorem 1 to conclude that \[\frac{\sum_{i=1}^n\eta_i}{\log n}\overset{d}{\to} \xi\text{ as }n\rightarrow\infty,\] where \(\xi\sim \mathrm{Exp}(1).\)
Finally, we give the proof of Part (ii) of Corollary 4. By Part (ii) of Lemma 1, we may apply Theorem 2 with parameters \(\alpha=1-B\) and \(\beta=(1-B)^{-1}.\) This yields \[\frac{\sum_{i=1}^n\eta_i}{\log n}\overset{d}{\to} \xi\text{ as }n\rightarrow\infty,\] where \(\xi\sim \mathrm{Gamma}(1-B, 1).\) \(\Box\)
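Both parts of Corollary 4 can be probed by simulating the chain directly: by 71 and the geometric form of \(\tilde f_k,\) the conditional law of \(Z_{k+1}\) given \(Z_k\) is negative binomial with parameters \(Z_k+1\) and \(p_{k+1}.\) The sketch below again assumes \(p_k=1/2-B/(4k)\) as our reading of 12; since the scaling is logarithmic, convergence is slow and the output is only a qualitative check.

```python
import numpy as np

# Simulation sketch for Corollary 4 (assumes p_k = 1/2 - B/(4k), our reading
# of (12)). B = 0 probes Part (i) (Exp(1) limit); 0 < B < 1 probes Part (ii)
# (Gamma(1-B, 1) limit). Convergence under the log n scaling is slow.
rng = np.random.default_rng(2)

def scaled_zero_count(n, B):
    z, count = 0, 0                             # Z_0 = 0
    for k in range(n):
        p = 0.5 - B / (4 * (k + 1))             # offspring parameter p_{k+1}
        z = rng.negative_binomial(z + 1, p)     # Z_{k+1} given Z_k, cf. (71)
        if z == 0:
            count += 1
    return count / np.log(n)

for B in (0.0, 0.5):
    samples = np.array([scaled_zero_count(20_000, B) for _ in range(500)])
    print(B, samples.mean())                    # limit means are 1 and 1 - B
```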
We conclude with some questions related to Corollaries 3 and 4.
Corollary 3 focuses on critical Galton-Watson processes with geometric offspring distributions. A natural extension, as explored in [13], involves considering near-critical branching processes in varying environments. Specifically, let \(p_i=\frac{1}{2}+\frac{B}{4 i}\) for \(i\ge1\) and define \(f_i,\;i\ge1\) as in 10. Let \(\{Z_n\}_{n\ge0}\) be a branching process satisfying \(Z_0=1\) and \(E\left(s^{Z_n}\,\middle|\,Z_0,...,Z_{n-1}\right)=\left(f_n(s)\right)^{Z_{n-1}},\; n\ge 1.\) The problem is to determine the limit distribution of \(\#\{1\le t\le n: Z_t=1\}\) (or, more generally, of \(\#\{1\le t\le n: Z_t=a\}\) for \(a\ge 1\)), which should depend on the value of \(B.\)
Corollary 4 addresses the case \(0 \le B < 1\). The case \(B \ge 1\), however, exhibits distinct behavior: Remark 5 indicates that \(C(0) = \{t \ge 1 : Z_t = 0\}\) is almost surely finite, so that \(\#C(0)\) is a finite random variable. A key challenge is to determine its exact distribution, which likely requires analyzing the asymptotics of multiple sums similar to the one in ??.
Both Corollaries 3 and 4 assume geometric offspring distributions. As an extension, one may try to generalize these results to arbitrary offspring distributions. A potential strategy involves bounding the probability generating function of \(Z_n\) by linear-fractional functions (see, e.g., [17]).
Supported by the Natural Science Foundation of Anhui Educational Committee, Anhui, China (Grant No. 2023AH040025).