May 17, 2024
In this paper we propose an \(\ell_1\)-regularized GLS estimator for high-dimensional regressions with potentially autocorrelated errors. We establish non-asymptotic oracle inequalities for estimation accuracy in a framework that allows for highly persistent autoregressive errors. In practice, the Whitening matrix required to implement the GLS is unknown; we present a feasible estimator for this matrix, derive consistency results, and ultimately show how our proposed feasible GLS can closely recover the optimal performance (as if the errors were white noise) of the LASSO. A simulation study verifies the performance of the proposed method, demonstrating that the penalized (feasible) GLS-LASSO estimator performs on par with the LASSO in the case of white-noise errors, whilst outperforming it in terms of sign recovery and estimation error when the errors exhibit significant correlation.
When performing regression in the high-dimensional setting, where the number of covariates \(p\) is greater than the number of data-points \(n\), it is common to utilize regularized estimators that constrain parameters to lie in some restricted sub-space. Consider the linear regression model \[y=X\beta_{0}+\epsilon,\quad\mathrm{Cov}(\epsilon)=\Gamma_{0}\;,\label{eq:32regression}\tag{1}\] where \(X\in\mathbb{R}^{n\times p}\) is a high-dimensional (reduced-rank) matrix of stochastic explanatory variables and \(\beta_{0}\in\mathbb{R}^{p}\) is the true vector of unknown parameters. If we assume sparsity on the coefficients \(\beta_{0}\), i.e. many are zero, then a popular estimator for the regression coefficients is given by the least-absolute shrinkage and selection operator (LASSO hereafter). This is defined as the minimizer:
\[\begin{align} \hat{\beta} & =\arg\min_{\beta\in\mathbb{R}^{p}}\left[\frac{1}{2n}\|y-X\beta\|_{2}^{2}+\lambda_{n}\|\beta\|_{1}\right],\label{eq:lasso} \end{align}\tag{2}\] for \(\lambda_{n}\geq0.\) There has been much research on such estimators (see [1], [2] for a review), including many alternative forms of regularization, e.g. ridge regression (Tikhonov regularization), elastic-net, and the group-LASSO penalty, among others. Broadly speaking, different regularization functions enable one to easily impose restrictions or priors on the parameters [3]–[5]. In our case, we desire to understand, and improve, such estimators in the presence of autocorrelated noise - that is, when \(\Gamma_{0}\) in (1 ) is no longer diagonal, but instead has elements such as \(\Gamma_{0}^{ij}=\rho^{\lvert i-j\rvert}\) for some \(\lvert\rho\rvert<1\).
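To fix ideas, the estimator (2 ) can be computed with off-the-shelf solvers; below is a minimal sketch (our own illustration, not part of the formal development) using scikit-learn, whose Lasso objective uses the same \(\frac{1}{2n}\) scaling as (2 ), so that alpha plays the role of \(\lambda_{n}\).

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p, s = 100, 200, 5                      # n < p: high-dimensional regime
X = rng.standard_normal((n, p))            # isotropic Gaussian design
beta0 = np.zeros(p); beta0[:s] = 1.0       # sparse true coefficients
y = X @ beta0 + rng.standard_normal(n)     # i.i.d. errors for illustration

lam = 2 * np.sqrt(np.log(p) / n)           # regularizer of order sqrt(log p / n)
# sklearn's Lasso minimizes (1/(2n))||y - Xb||_2^2 + alpha*||b||_1, matching (2)
beta_hat = Lasso(alpha=lam, fit_intercept=False).fit(X, y).coef_
print(np.linalg.norm(beta_hat - beta0))    # l2 estimation error
```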
If one considers (2 ), then we would intuitively expect the estimator to be more efficient with independent errors, compared to when the errors possess some general autocorrelation structure1. Generalized Least Squares (GLS) represents a simple extension to the least-squares objective that can mitigate this increased variation, whereby we whiten the data prior to performing the estimation, i.e., \(\tilde{y}=Ry\) and \(\tilde{X}=RX\) where \(R=\Gamma_{0}^{-1/2}\)—thus after Whitening, the errors will appear to have zero correlation. In a classical \(n>p\), \(n\rightarrow\infty\) asymptotic setting, GLS estimators have been well studied theoretically [6], [7]. For our study, we consider the simple GLS-LASSO estimator of the following form: \[\tilde{\beta}=\arg\min_{\beta\in\mathbb{R}^{p}}\left[\frac{1}{2n}\|R(y-X\beta)\|_{2}^{2}+\tilde{\lambda}_{n}\|\beta\|_{1}\right]\;,\label{eq:regGLS}\tag{3}\] where \(\tilde{\lambda}_{n}\) is distinct from \(\lambda_{n}\) in (2 ). In practice, we need to estimate the covariance structure of the errors, resulting in a feasible GLS-LASSO (FGLS) where we replace \(R\) with \(\hat{R}\). In classical regression settings where \(p<n\), asymptotically, if we can provide a consistent estimator of the Whitening matrix \(R\), then the GLS attains best linear unbiased estimator (BLUE) status [8] and also satisfies the Cramér-Rao lower bound.
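As an illustration of the rotation in (3 ), a hypothetical sketch assuming the error covariance \(\Gamma_{0}\) is known: whiten \(y\) and \(X\) with \(R=\Gamma_{0}^{-1/2}\) and re-run the LASSO on the transformed data. Any square root of \(\Gamma_{0}^{-1}\) (e.g. a Cholesky factor) would serve equally; the function name gls_lasso is our own.

```python
import numpy as np
from scipy.linalg import sqrtm
from sklearn.linear_model import Lasso

def gls_lasso(X, y, Gamma, lam):
    """LASSO on data whitened with R = Gamma^{-1/2}, i.e. the estimator (3)."""
    R = np.linalg.inv(sqrtm(Gamma)).real      # symmetric inverse square root of Gamma
    y_tilde, X_tilde = R @ y, R @ X           # whitened response and design
    return Lasso(alpha=lam, fit_intercept=False).fit(X_tilde, y_tilde).coef_
```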
The perils of autocorrelated errors within a least-squares framework and spurious regression are famously discussed in [9] and [10], and whilst there has been considerable research that considers applications of the LASSO to time-series, to our knowledge little work has been done to investigate how to correct for autocorrelation, to enable more efficient finite sample estimation. Examples of work in the general time-series setting include the study of asymptotic robustness to autocorrelated errors [11], and applications of the LASSO to vector auto-regressive models [12]. There has also been work looking at how heteroskedasticity can be taken into account when using the LASSO, e.g., when \(\Gamma_{0}\) is diagonal, but has time-varying entries [13]–[16]. In contrast to these studies, we here look at how one can correct for autocorrelated errors and thus potentially improve the efficiency of our estimator. As a motivating application, one could consider regressing, say, asset returns \(r_{t}\) against some financial indicator, such as the dividend-to-price ratio \(q_{t}.\) In such circumstances, \(r_{t}\) is often stationary, i.e., \(r_{t}\sim I(0)\), whereas \(q_{t}\) may be integrated of order \(d\), where \(d\in\mathbb{N}\), or \(q_{t}\) is a long-memory and fractionally integrated process with \(1/2\leq d<1\), and infinite variance [10], [17]. In such scenarios the returns and the indicator are not co-integrated, and the error of the regression may be a non-stationary process. Whilst we do not study this case specifically, we do consider the case where the error process of the regression may approach that of a random-walk—we let \(\epsilon_{t}\) be an AR process with parameter \(\rho\), and allow \(|\rho|\) to approach one from below, thus examining how the performance of the LASSO and subsequent GLS-LASSO relates to the level of persistence in the error process.
The advantage of working with this simple error assumption is that we can carefully track the errors incurred at each stage of the regularized GLS procedure. Specifically, one of our paper’s contributions is to understand how the error incurred by the first-stage LASSO estimate impacts the subsequent (feasible) estimation of the autocorrelation structure. We then look at how the estimated Whitening matrix \(\hat{R}\) impacts the performance of the second-stage estimator (3 ), for which we provide an oracle inequality. Our main result shows that in the autoregressive error situation (even in the highly persistent \(|\rho|\rightarrow1\) setting) the feasible GLS-LASSO is able to provide consistent and efficient estimation. This contrasts with the results of [16] who show poor performance of the GLS-LASSO with certain kinds of heteroskedasticity, and underlines the importance of mapping how the matrix \(\hat{R}\) impacts the eigenvalues of the design matrix. When completing this paper, we came across the recent work of [18] who propose the same GLS procedure as we do. Although the estimators are the same, our paper has a slightly different focus. We consider a simpler form of temporal dependence; however, rather than imposing the assumption that a restricted eigenvalue condition holds on \(\hat{R}X\), we argue precisely that if an RE condition holds on \(X\) then \(\hat{R}X\) also satisfies such a condition.
The structure of the paper is as follows: Section 2 provides the theoretical framework of the LASSO estimator in terms of error bounds and its empirical behavior in the presence of autocorrelated errors. Section 3 introduces the GLS extension of the LASSO estimator; we present a result on a restricted eigenvalue condition for the GLS, and subsequently provide an oracle inequality for the GLS-LASSO. Bounding the error of the first-stage LASSO, we derive a bound on the error incurred by \(\hat{\rho}\), based on the estimated residuals. We use this bound to derive conditions on the regularizer and thus provide an error bound for the FGLS-LASSO procedure. Finally, Section 4 assesses the empirical performance of the LASSO, GLS-LASSO and FGLS-LASSO estimators in terms of estimation error and sign recovery, before concluding the paper in Section 5.
Define the support of the vector \(x\in\mathbb{R}^{p}\) to be \(\mathrm{supp}(x)=\left\{ i\in\{1,\cdots,p\}\:\vert\:x_{i}\ne0\right\}\); the \(\ell_{0}\) norm is given by the number of non-zero elements, \(\|x\|_{0}=|\mathrm{supp}(x)|\). The \(\ell_{q}\) norm of a vector is denoted \(\|x\|_{q}=(\sum_{i=1}^{p}\lvert x_{i}\rvert^{q})^{1/q}\). Matrix norms are denoted as \(\|X\|_{F}:=(\sum_{ij}\lvert X_{ij}\rvert^{2})^{1/2}\) and \(\|X\|_{\infty}:=\max_{ij}\lvert X_{ij}\rvert\), and the \(\ell_{2}\) operator norm is denoted \(\|X\|_{2}:=\sup_{\|v\|_{2}\le1}\|Xv\|_{2}\). If \(\mathcal{S}\subseteq\{1,\ldots,p\}\) we refer to \(x_{\mathcal{S}}\) as the vector formed by this subset of elements, with the rest of the elements set to zero, i.e. \(x_{\mathcal{S}}=(x_{i}\;\mathrm{if}\;i\in\mathcal{S},\;0\;\mathrm{otherwise})_{i=1}^{p}\in\mathbb{R}^{p}\). We define the sub-Gaussian and sub-exponential norms of the random variable \(z\) according to \(\normiii{z}_{2}=\inf\{t>0\:|\;\mathbb{E}[e^{z^{2}/t^{2}}]\le2\}\) and \(\normiii{z}_{1}=\inf\{t>0\:|\;\mathbb{E}[e^{|z|/t}]\le2\}\).
In this section, we provide a discussion of how the LASSO behaves when faced with autocorrelated (autoregressive) errors. As one might expect, in certain situations when the persistence of the error process becomes too great - i.e., the autocorrelation function of the errors becomes very flat - the LASSO can experience a significant increase in estimation error. To motivate the GLS correction we give theoretical and empirical evidence for this increase in error. Whilst asymptotically the LASSO may still recover the coefficients, the finite-sample error associated with the estimation may decay slowly, which can be of particular concern in applications where we believe a highly persistent error structure may be present.
Let \(\hat{\Delta}:=\hat{\beta}-\beta_{0}\) be the estimation error vector obtained by the LASSO regression in Eq. (2 ). If the errors are independent, i.e. \(L_{0}=I_{n}\) in the model (4 ) below, it is known that an estimation error of order \(\|\hat{\Delta}\|_{2}=\mathcal{O}_{P}(n^{-1/2}(s\log p)^{1/2})\) can be obtained under appropriate conditions on the eigenvalues of the design matrix \(X\), as long as the regularizer is set to be sufficiently large - i.e., \(\lambda_{n}\geq2n^{-1}\lVert X^{\top}u\rVert_{\infty}\) [1]. A deviation bound on this empirical quantity thus forms a first step in controlling the estimation error in the regression parameters, as it allows us to calibrate the level of regularization required.
Consider the linear time-series regression \[y=X\beta_{0}+L_{0}u\;,\label{eq:32regression1}\tag{4}\] where \(X\) and \(u\) are sub-Gaussian with \(\mathrm{Cov}(X)=\Sigma\otimes I_{n}\) and \(\mathrm{Cov}(u)=\sigma_{u}^{2}I_{n}\), and we define \(\mathcal{S}_{0}=\mathrm{supp}(\beta_{0})\) with \(s=|\mathcal{S}_{0}|\). Thus, we allow our design matrix to be dependent across columns, but uncorrelated across rows, and the autocovariance of the errors is given by \(\Gamma_{0}=\sigma_{u}^{2}L_{0}L_{0}^{\top}\). To illustrate how implementing GLS impacts the LASSO, we will focus most of our attention on the simple AR(1) setting, where the errors \(\epsilon=L_{0}u=(\epsilon_{1},\epsilon_{2},\cdots,\epsilon_{n})^{\top}\) in (1 ,4 ) follow the process \(\epsilon_{t}=\rho\epsilon_{t-1}+u_{t}\) for \(t=2,\cdots,n\). In such a case, the autocovariance matrix has a well-known Toeplitz form \(\Gamma_{0;ij}=a^{2}\sigma_{u}^{2}\rho^{|i-j|}\) where \(a=(1-\rho^{2})^{-1/2}\), and we have \(L_{0}=\Psi_{0}\) where: \[\Psi_{0}=\begin{pmatrix}a & 0 & \cdots & 0 & 0\\ a\rho & 1 & \cdots & 0 & 0\\ \vdots & \vdots & \ddots & \vdots & \vdots\\ a\rho^{n-1} & \rho^{n-2} & \cdots & \rho & 1 \end{pmatrix}\;.\label{eq:chol95cov95AR}\tag{5}\]
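For reference, a short sketch (under the stationary AR(1) assumption, with our own helper name psi0) of how \(\Psi_{0}\) in (5 ) can be constructed and used to draw an error vector \(\epsilon=\Psi_{0}u\):

```python
import numpy as np

def psi0(n, rho, sigma_u=1.0):
    """Lower-triangular matrix from (5), scaled by sigma_u, so that with a
    standard-normal u the vector eps = psi0 @ u has covariance
    Gamma_0[i, j] = a^2 * sigma_u^2 * rho^|i-j|, with a^2 = 1/(1 - rho^2)."""
    a = (1.0 - rho ** 2) ** -0.5
    L = np.zeros((n, n))
    for i in range(n):
        L[i, 0] = a * rho ** i          # first column: a, a*rho, ..., a*rho^(n-1)
        for j in range(1, i + 1):
            L[i, j] = rho ** (i - j)    # remaining columns: ..., rho^2, rho, 1
    return sigma_u * L

eps = psi0(500, rho=0.9) @ np.random.default_rng(1).standard_normal(500)
```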
Lemma 1. Define \(K=\max_{ij}\normiii{X_{ij}}_{2}\normiii{u_{i}}_{2}\), then \[P\left[\|X^{\top}L_{0}u\|_{\infty}\ge t\right]\le2p\exp\left(-c\min\left[\frac{t^{2}}{4K^{2}\|L_{0}\|_{F}^{2}},\frac{t}{K\|L_{0}\|_{2}}\right]\right)\;,\label{eq:bi-linear}\qquad{(1)}\] for some absolute constant \(c>0\).
We can immediately see from the above that the autocovariance of \(\epsilon=L_{0}u\) manifests itself in the tail of the bound, e.g. we need to consider how the term \(\|L_{0}\|_{F}^{2}\) may grow as a function of \(n\). For a stationary error process, we should expect this to grow linearly with \(n\); however, the specific rate achieved will in general depend on the specification of \(L_{0}\). It is also worth commenting that the covariance structure of \(X\) relates directly to \(K\), where we note a larger variance will result in a larger \(K\) in the bound of Lemma 1, and thus a comparatively larger bound on the probability.
To illustrate how the above bound can potentially impact the rate of convergence, we could consider the case where the errors are not initialized from the stationary variance of the AR(1) process, e.g., if \(\mathrm{Var}[\epsilon_{1}]\ne a^{2}\). In this case, there will be a period of adaptation in the variance of the process as \(n\) increases. For instance, if we set \(\mathrm{Var}[\epsilon_{1}]=1\), then the term \(\|\Psi_{0}\|_{F}^{2}\) would initially grow as a quadratic in \(t\) until \(\mathrm{Var}[\epsilon_{t}]\) approaches \(a^{2}\), after which it grows linearly (c.f. Figure 1). Although not typical in time-series applications, this setting may prove relevant if one assumes the regression errors compound from a known starting distribution. This case also relates to the so-called local-to-unity framework [19], where one lets \(\rho_{n}=\exp(\tilde{c}/n)\), so that \(\rho_{n}\rightarrow1\) as \(n\to\infty\) while the non-centrality parameter \(\tilde{c}\) is kept fixed near zero. In such a setting, it would be important to track not just the scale, but also the rate (in \(n\)) of the \(\|\Psi_{0}\|_{F}^{2}\) term.
To make our discussion more precise, we can consider the setting of \(L_{0}=\Psi_{0}\), which enables us to make a choice on the regularizer \(\lambda_{n}\) which we will express in relation to a scaling function \[\delta(n,r):=\sqrt{\frac{\log r}{n}}\;.\]
Corollary 1. Thresholding with AR(1) errors
Define the event \[E_{1}:=\left\{ \frac{2}{n}\|X^{\top}\Psi_{0}u\|_{\infty}<\lambda_{n}\right\} \;.\] Then, with \(n\ge(2c)^{-1}\tau\log p\), and \(\tau>1\), we have for any \[\lambda_{n}\ge\frac{4K}{c^{1/2}}\delta\left(a^{-2}n,p^{\tau}\right)\;,\label{eq:lambda95choice}\qquad{(2)}\] \(P[E_{1}]\ge1-2p^{1-\tau}\rightarrow1\).
From this choice of \(\lambda_{n}\) we see the regularizer is comparable with standard choices [1], [4], [5]; however, it also adjusts in line with the variance of the AR errors \((\propto a^{2})\), i.e. as \(|\rho|\rightarrow1\) we expect the choice of \(\lambda_{n}\) to be very large. For example, if we consider the local-to-unity setting \(\rho=\exp(\tilde{c}/n)\), for \(\tilde{c}<0\) we find that \(\lambda_{n}=\Omega(\sqrt{\log p^{\tau}})\) is required to enable \(P[E_{1}]\rightarrow1\).
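To make the dependence on \(\rho\) concrete, a small numerical sketch of the choice in Corollary 1; the constant \(4K/c^{1/2}\) is collapsed into a user-chosen const, which is an assumption made purely for illustration.

```python
import numpy as np

def delta(n, r):
    """Scaling function delta(n, r) = sqrt(log(r) / n)."""
    return np.sqrt(np.log(r) / n)

def lambda_ar1(n, p, rho, tau=1.5, const=1.0):
    """Regularizer of Corollary 1: const * delta(a^{-2} n, p^tau), a^2 = 1/(1-rho^2)."""
    a2 = 1.0 / (1.0 - rho ** 2)
    return const * delta(n / a2, p ** tau)

for rho in (0.0, 0.5, 0.9, 0.99):
    print(rho, lambda_ar1(n=500, p=512, rho=rho))   # inflates as |rho| -> 1
```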
Let us now examine how the inflation in the scale of \(\|X^{\top}\epsilon\|_{\infty}\) due to autocorrelation impacts the estimation error \(\|\hat{\Delta}\|\). Since the LASSO (2 ) is an M-estimator, the estimator \(\hat{\beta}\) necessarily attains an objective value over the sample data no greater than that of the true parameters, i.e. \(\mathrm{Loss}(\hat{\beta};\{x_{t},y_{t}\}_{t=1}^{n})\le\mathrm{Loss}(\beta_{0};\{x_{t},y_{t}\}_{t=1}^{n})\), and thus
\[0\leq\frac{1}{n}\lVert X\hat{\Delta}\rVert_{2}^{2}\leq\frac{2}{n}(X\hat{\Delta})^{\top}Lu+2\lambda_{n}\{\lVert\beta_{0}\rVert_{1}-\lVert\hat{\beta}\rVert_{1}\}\;.\label{eq:32rev3}\tag{6}\] Furthermore, based on the linear decomposability of the \(\ell_{1}\) norm, i.e. \(\|\beta\|_{1}=\|\beta_{\mathcal{S}}\|_{1}+\|\beta_{\mathcal{S}^{\perp}}\|_{1}\) for any \(\mathcal{S}\subset\{1,\ldots,p\}\) and \(\mathcal{S}^{\perp}=\{1,\ldots,p\}\backslash\mathcal{S}\), applying Hölder’s inequality we find an upper bound based on \(\|\hat{\Delta}\|_{1}\), specifically \[0\leq\frac{1}{n}\lVert X\hat{\Delta}\rVert_{2}^{2}\le\frac{2}{n}\lVert X^{\top}Lu\rVert_{\infty}\lVert\hat{\Delta}\rVert_{1}+2\lambda_{n}\{\lVert\beta_{0;\mathcal{S}}\rVert_{1}-\lVert\beta_{0;\mathcal{S}}+\hat{\Delta}_{\mathcal{S}}\rVert_{1}-\lVert\hat{\Delta}_{\mathcal{S}^{\perp}}\rVert_{1}\}\;.\label{eq:lasso95general95upper95bound}\tag{7}\]
It is now apparent how the choice of \(\lambda_{n}\) in Corollary 1 can act to help bound the error, controlling with high probability the scale of the random component, \(\lVert X^{\top}Lu\rVert_{\infty}\), on the right of the above expression. To ultimately control the error, we also need to lower bound the left-hand side of Eq. 7 . Typically, this involves assuming a so-called restricted eigenvalue (RE) condition, that is, the quadratic form \(n^{-1}\|X\Delta\|_{2}^{2}\) is bounded from below for all vectors in a cone \(\Delta\in\mathbb{C}_{\alpha}(\mathcal{S})\subset\mathbb{R}^{p}\) where: \[\mathbb{C}_{\alpha}(\mathcal{S}):=\{\Delta\in\mathbb{R}^{p}\;|\;\|\Delta_{\mathcal{S}^{\perp}}\|_{1}\le\alpha\|\Delta_{\mathcal{S}}\|_{1}\}\;.\] Intuitively, the definition of this cone allows us to bound the size of the vector on the out-of-subspace (\(\mathcal{S}^{\perp}\)) components based on those within the model sub-space—if we can say something about the on-support errors, we can also bound the whole error vector.
Definition 1. We say that \(X\) satisfies the RE condition over \(\mathcal{S}\), with \(\kappa,\alpha\) if the event: \[E_{2}:=\left\{ \frac{1}{n}\|Xv\|_{2}^{2}\ge\kappa\|v\|_{2}^{2}\quad\forall v\in\mathbb{C}_{\alpha}(\mathcal{S})\;\right\} \;\label{eq:RE}\qquad{(3)}\] holds.
If \(\hat{\Delta}\in\mathbb{C}_{\alpha}(\mathcal{S})\) and the RE condition of Definition 1 holds with \((\kappa,\alpha)\), then we can apply it to lower bound the left-hand side of (7 ) and obtain a direct bound on the estimation error: \[\|\hat{\Delta}\|_{2}^{2}\le\frac{1}{\kappa}\left(\frac{2}{n}\|X^{\top}\epsilon\|_{\infty}\lVert\hat{\Delta}\rVert_{1}+2\lambda_{n}\left[\lVert\beta_{0;\mathcal{S}}\rVert_{1}-\lVert\beta_{0;\mathcal{S}}+\hat{\Delta}_{\mathcal{S}}\rVert_{1}-\lVert\hat{\Delta}_{\mathcal{S}^{\perp}}\rVert_{1}\right]\right)\;.\label{eq:RE95hold95lasso95upper}\tag{8}\]
Combining the above with Corollary 1 we obtain the following bound on the errors of the LASSO.
Proposition 1. LASSO Estimation (with AR errors)
Let \(\tau>1\); then for \(n\ge(4c)^{-1}\tau\log p\), and conditional on \(E_{2}\) holding with \(\alpha=3\) and \(\mathcal{S}=\mathcal{S}_{0}\), we have2: \[\|\hat{\Delta}\|_{2}\le\frac{12K}{c^{1/2}\kappa}\delta\left(a^{-2}n,p^{s\tau}\right)\;,\] with probability greater than \(1-2p^{1-\tau}\).
In summary, if the autoregressive errors are stationary then the increased bound for the threshold (Corollary 1) will not change the rate of consistency for the LASSO (in \(n,p\)), which is of order \(\|\hat{\Delta}\|_{2}=\mathcal{O}_{P}(\sqrt{s\log p/n})\). However, in the non-stationary, or local-to-unity setting (where \(\rho_{n}\) is a sequence in \(n\)) the error rate itself may also change—either way, the persistence of the errors will inflate the error of the LASSO estimator by reducing the effective sample size from \(n\) to \(n/a^{2}\).
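As a quick arithmetic illustration of this effective-sample-size reduction (with \(n=500\) chosen purely for illustration):

```python
# Effective sample size n / a^2 = n * (1 - rho^2) for a nominal n = 500.
for rho in (0.0, 0.5, 0.9, 0.99):
    print(rho, 500 * (1.0 - rho ** 2))   # 500, 375, 95, ~10 effective observations
```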
To illustrate the behavior of the LASSO with autocorrelated errors, here we provide a simple synthetic study, where the coefficient \(\rho\) is varied from \(0\) to very near \(1\). In this case, both the design matrix \(X\) and the errors \(\epsilon\) are assumed to be Gaussian, with the former being isotropic in nature. To investigate the specific impact of the autocorrelation, we contrast the performance of the LASSO with AR errors to that of i.i.d. errors, and let \(\sigma_{u}^{2}=1\). The results of this experiment are given in Figure 2, where we see that the performance of the LASSO degrades both as a function of the overall error variance and of the autocorrelation. To illustrate the increased probability of exceedance (for a given \(\lambda\)), in this experiment \(\hat{\lambda}_{n}\) was fixed by minimizing \(\|\hat{\Delta}\|\) over a hold-out sample of size \(n\). The same \(\hat{\lambda}_{n}\) was used for both the AR(\(1\)) and i.i.d. error examples.
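A condensed sketch of this type of experiment is given below; the constants, the fixed regularizer and the number of replications are simplifications of our own rather than the exact settings used for Figure 2.

```python
import numpy as np
from sklearn.linear_model import Lasso

def ar1_errors(n, rho, rng):
    """Stationary AR(1) errors: eps_t = rho * eps_{t-1} + u_t, u_t ~ N(0, 1)."""
    eps = np.empty(n)
    eps[0] = rng.standard_normal() / np.sqrt(1.0 - rho ** 2)
    for t in range(1, n):
        eps[t] = rho * eps[t - 1] + rng.standard_normal()
    return eps

def lasso_l2_error(n, p, s, rho, lam, rng):
    beta0 = np.zeros(p); beta0[:s] = 1.0
    X = rng.standard_normal((n, p))            # isotropic Gaussian design
    y = X @ beta0 + ar1_errors(n, rho, rng)
    beta_hat = Lasso(alpha=lam, fit_intercept=False).fit(X, y).coef_
    return np.linalg.norm(beta_hat - beta0)

rng = np.random.default_rng(2)
for rho in (0.0, 0.5, 0.9, 0.99):
    errs = [lasso_l2_error(200, 512, 10, rho, lam=0.2, rng=rng) for _ in range(20)]
    print(rho, round(float(np.mean(errs)), 2))  # error inflates with persistence
```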
In this section we detail our regularized GLS approach, with the aim to reduce the error inflation caused by persistent autocorrelation in the error terms. We consider first a version which requires knowledge of the true autocovariance structure, which we show leads to performance similar to that of the LASSO with independent errors. An important step here is to verify that the RE condition can still be met with a similarly high probability. The second estimator constitutes a feasible GLS estimator, whereby the autocovariance of the errors is estimated via a simple parametric model - in our case, we consider the AR(1) error series and OLS estimation of the related regression coefficients.
Let us turn our attention to regression (4 ) with the additional assumption of Gaussian errors and design, and suppose that the Whitening matrix \(L\) is known a priori. It is immediately evident that pre-multiplying both sides of said equation by \(R=L^{-1}\) gives a standard regression \(\tilde{y}=\tilde{X}\beta_{0}+u\), with \(\tilde{y}=Ry\), \(\tilde{X}=RX\) and \(u\sim N(0,\sigma_{u}^{2}I_{n})\). We now consider the second-stage (GLS) estimator \[\tilde{\beta}=\arg\min_{\beta\in\mathbb{R}^{p}}\left[\frac{1}{2n}\lVert\tilde{y}-\tilde{X}\beta\rVert_{2}^{2}+\tilde{\lambda}_{n}\lVert\beta\rVert_{1}\right]\;,\label{eq:32lasso2}\tag{9}\] and develop a set of bounds that hold for a range of support sets \(\mathcal{S}\) of bounded size. In practice, the estimated support set will depend on both the Whitened data and the chosen value of \(\tilde{\lambda}>0\), a larger \(\tilde{\lambda}\) resulting in a smaller support—we note that an upper bound on the size of the support set implies a lower bound on the size of \(\tilde{\lambda}\). Ultimately, the size of the support we can consider is dictated by the RE condition; we therefore need to assess how likely this is to hold under the Whitened design.
Proposition 2. Eigenvalues of Whitened Design
Assuming \(X\sim\mathcal{N}_{n\times p}(0,\Sigma\otimes I_{n})\), let \(\sigma_{\max}^{2}=\max_{i}\Sigma_{ii}\), and \(\gamma_{n}=(2n)^{-1}\|R\|_{F}^{2}\). For all \(v\in\mathbb{R}^{p}\) we have: \[\frac{1}{n}\|\tilde{X}v\|_{2}^{2}\ge2^{-4}(1+\gamma_{n})^{2}\eta_{\min}^{2}(\Sigma^{1/2})\|v\|_{2}^{2}-3^{2}\sigma_{\max}^{2}\delta^{2}(n,p)\|v\|_{1}^{2}\;\] with probability greater than \(1-4\exp(-c_{1}n)\) for all \(n\ge4c_{1}^{-1}\), where \(c_{1}=3^{-2}2^{-1}(1-\gamma_{n})^{2}\lVert R\rVert_{2}^{-2}\).
The proof (see Appendix) of the above proceeds as in [20], however, with careful tracking of the matrix \(R\). In contrast to [20], we see that the term multiplying \(\|v\|_{2}^{2}\) is dependent on the average eigenvalues of \(R\) via \(\gamma_{n}\); however, for a stationary error series, this will be given by a constant. Following our example, in the stationary AR(1) case (5 ), we find: \[R=\Psi^{-1}=\left(\begin{array}{cccc} a^{-1}\\ -\rho & 1\\ & \ddots & 1\\ & & -\rho & 1 \end{array}\right)\;,\label{eq:Rdef}\tag{10}\] leading to \(\|R\|_{2}=1\). Furthermore, a bound on \(\gamma_{n}\) can be obtained by noting \(\sigma_{\max}(R)\le\max(\mathrm{diag}(R))+|\rho|=1+|\rho|\), and thus \(1/2\le\gamma_{n}<1\) for \(\rho\in(-1,1)\). Whilst \(c_{1}\) is technically dependent on \(n\) through \(\gamma_{n}\), this can always be bounded by a constant for large enough \(n\) and a stationary error process, and thus we can expect an RE condition (Definition 1) to hold with high probability.
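Following (10 ), a minimal sketch of \(R\) together with a numerical check that \(R\Gamma_{0}R^{\top}=I\) for the AR(1) covariance (taking \(\sigma_{u}^{2}=1\); the helper name whitening_matrix is ours):

```python
import numpy as np

def whitening_matrix(n, rho):
    """R = Psi^{-1} from (10): a^{-1} in the (1, 1) entry, ones on the remaining
    diagonal and -rho on the sub-diagonal."""
    R = np.eye(n)
    R[0, 0] = np.sqrt(1.0 - rho ** 2)                 # a^{-1}
    R[np.arange(1, n), np.arange(n - 1)] = -rho
    return R

n, rho = 6, 0.9
a2 = 1.0 / (1.0 - rho ** 2)
Gamma0 = a2 * rho ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
R = whitening_matrix(n, rho)
print(np.round(R @ Gamma0 @ R.T, 8))                  # recovers the identity matrix
```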
Corollary 2. Proposition 2 implies an RE condition (Definition 1) holds for support sets \(\mathcal{S}\) up to size \[|\mathcal{S}|\le\frac{(1+\gamma_{n})^{2}\eta_{\min}^{2}(\Sigma^{1/2})}{3^{2}2^{4}\sigma_{\max}^{2}(1+\alpha)^{2}}\frac{n}{\log p}\;,\] with probability greater than \(1-4\exp(-c_{1}n)\rightarrow1\).
Theorem 1. GLS-LASSO Oracle Inequality
Let \(\tilde{\Delta}:=\tilde{\beta}-\beta_{0}\), \(\tilde{\kappa}=2^{-5}(1+\gamma_{n})^{2}\eta_{\min}^{2}(\Sigma^{1/2})\), and set \(\tilde{\lambda}\ge4c^{-1/2}K\delta(n,p^{\tau})\) for \(\tau>1\). Then, for any \(\mathcal{S}\subset\{1,\ldots,p\}\) such that \(|\mathcal{S}|\le3^{-2}2^{-5}\sigma_{\max}^{-2}\tilde{\kappa}(\log p)^{-1}n\), and \(n\ge\|R\|_{2}^{2}\tau\log p\max(3^{2}2^{3},2^{-2}c^{-1})\) we have \[\|\tilde{\Delta}\|_{2}^{2}\le\frac{27|\mathcal{S}|\tilde{\lambda}_{n}^{2}}{8\tilde{\kappa}^{2}}+\frac{\tilde{\lambda}_{n}\|\beta_{0;\mathcal{S}^{\perp}}\|_{1}}{\tilde{\kappa}}+\frac{144}{\tilde{\kappa}}\sigma_{\max}^{2}\delta^{2}(n,p)\lVert\beta_{0;\mathcal{S}^{\perp}}\rVert_{1}^{2}\] with probability greater than \(1-8p^{1-\tau}\rightarrow1\).
Overall, the bound is of order \(\|\tilde{\Delta}\|_{2}^{2}=\mathcal{O}_{P}(|\mathcal{S}|\delta^{2}(n,p))\); however, we see there is a trade-off between the estimation error (from the first term) and the bias incurred due to the incorrect specification of the support, i.e. in this case the bound holds for \(\mathcal{S}\ne\mathcal{S}_{0}\) up to a bounded size. As desired, we note that the sample size \(n\) in the scaling function \(\delta(n,p)\) is now corrected, and is no longer a function of \(a\), or \(\rho\). Of course, this version of the GLS has assumed exact knowledge of the Whitening matrix, which in practice is not available to us; we consider a feasible version of the estimator in the following section.
To enable a feasible GLS procedure, we first require an estimate of the AR parameter based on an estimated residual series. Let us consider \(\hat{\epsilon}_{t}=y_{t}-X_{t\cdot}\hat{\beta}\), where \(\hat{\beta}\) is obtained from the first round of LASSO regression, i.e. Eq. 2 . For an AR(1) model the coefficient can be estimated using least-squares according to \[\hat{\rho}=\frac{\sum_{t=2}^{n}\hat{\epsilon}_{t-1}\hat{\epsilon}_{t}}{\sum_{t=2}^{n}\hat{\epsilon}_{t-1}^{2}}\;.\label{eq:32OLS}\tag{11}\]
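In code, the estimator (11 ) is a one-line ratio; a sketch, where resid denotes the first-stage LASSO residual series \(\hat{\epsilon}_{t}=y_{t}-X_{t\cdot}\hat{\beta}\):

```python
import numpy as np

def ar1_coefficient(resid):
    """Least-squares AR(1) estimate (11) from the residual series eps_hat."""
    return np.dot(resid[:-1], resid[1:]) / np.dot(resid[:-1], resid[:-1])
```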
Proposition 3. Feasible AR(1) Error
Using (11 ) where \(\hat{\epsilon}=y-X\hat{\beta}\) is obtained from an initial LASSO estimate (2 ), we have for some \(c_{2}>0\) and \(n\ge c_{2}s\log(p^{\tau})\) that
\[\frac{|\hat{\rho}-\rho|}{|\rho|}\le C\delta(n,p^{s\tau})\;,\] where \(C=(30)^{1/2}2^{3}(c\kappa\sigma_{u}^{2})^{-1/2}\), with probability greater than \(1-10p^{(1-\tau)}\rightarrow1\) as \(p\rightarrow\infty\).
The above bound shows that asymptotically in \(n,p\) we obtain consistency of the AR coefficient in terms of relative error. The proof relies on splitting the error into a component driven by the error \(\|\hat{\Delta}\|_{2}\) from the first-stage LASSO, and a self-normalized process based on the AR(1) error structure. The latter allows us to achieve the standard \(O_{P}(n^{-1/2})\) rate when \(n=\Omega(s\log(p^{\tau}))\); however, if the sample size grows at a slower rate then the bound becomes of order \(\delta^{2}(n,p)\). As our proof directly harnesses results on the LASSO error, we see that \(C\) also depends on quantities such as the lower bound on the eigenvalues via \(\kappa\).3 Finally, it should be noted that in Prop. 3 we use the bound on \(\|\hat{\Delta}\|_{2}\) in relation to the true sparsity structure, that is, we consider \(s\) known and assume that the RE condition is satisfied over supports of this size. The specific \(c_{2}\) required in the sample size condition can broadly be understood as the maximum of all the sufficient sample sizes from the prior arguments; it is worth remarking that this does not depend on \(\rho\).
With a bound on the estimation error for \(\hat{\rho}\), we can apply the argument of Theorem 1 replacing \(R\) with \(\hat{R}\) (i.e. Eq. 10 where \(\rho\) is replaced with \(\hat{\rho}\)) to obtain a bound on the FGLS estimator \[\bar{\beta}=\arg\min_{\beta\in\mathbb{R}^{p}}\left[\frac{1}{2n}\lVert\hat{R}(y-X\beta)\rVert_{2}^{2}+\bar{\lambda}_{n}\lVert\beta\rVert_{1}\right]\;.\]
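Putting the pieces together, a hypothetical end-to-end sketch of the feasible procedure: a first-stage LASSO, the AR(1) fit (11 ) on its residuals, and a second LASSO on data whitened with \(\hat{R}\). The two regularizers lam1 and lam2 are left to the user (e.g. the temporally ordered cross-validation of Section 4), and the small numerical guard on \(1-\hat{\rho}^{2}\) is our own practical addition.

```python
import numpy as np
from sklearn.linear_model import Lasso

def fgls_lasso(X, y, lam1, lam2):
    """Two-stage FGLS-LASSO under AR(1) errors.
    Stage 1: ordinary LASSO; Stage 2: LASSO on data whitened with R(rho_hat)."""
    n = len(y)
    beta1 = Lasso(alpha=lam1, fit_intercept=False).fit(X, y).coef_
    resid = y - X @ beta1
    rho_hat = np.dot(resid[:-1], resid[1:]) / np.dot(resid[:-1], resid[:-1])  # (11)

    R = np.eye(n)                                      # feasible whitening matrix (10)
    R[0, 0] = np.sqrt(max(1.0 - rho_hat ** 2, 1e-12))  # guard against |rho_hat| >= 1
    R[np.arange(1, n), np.arange(n - 1)] = -rho_hat

    beta2 = Lasso(alpha=lam2, fit_intercept=False).fit(R @ X, R @ y).coef_
    return beta2, rho_hat
```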
Corollary 3. FGLS Error Bound
Let \(\bar{\Delta}=\bar{\beta}-\beta_{0}\); then with the choice \[\bar{\lambda}_{n}=\frac{4}{c^{1/2}}\left(1+\rho^{2}C\delta\left(a^{-2}n,p^{s\tau}\right)\right)^{1/2}\delta(n,p^{\tau})\;,\] and \(n\ge c_{2}s\log(p^{\tau})\), we have \[\begin{align} \|\bar{\Delta}\|_{2}\le & \frac{2^{7}}{c^{1/2}\eta_{\min}^{2}(\Sigma^{1/2})}\sqrt{\frac{2}{3}}\bigg(1+\underset{\rightarrow0}{\underbrace{\frac{\rho^{2}}{1-\rho^{2}}C\sqrt{\frac{s\log(p^{\tau})}{n}}}}\bigg)\delta(n,p^{s\tau}) \end{align}\] with probability greater than \(1-12p^{(1-\tau)}\).
We thus find that the FGLS estimation error is inflated over the optimal GLS correction (Theorem 1), however, this inflation decays as a function of \(n\) such that our procedure corrects for the autocorrelated errors both in finite samples and to an optimal level asymptotically, i.e. we observe the limiting term \(\delta(n,p^{s\tau})\) is no longer a function of \(\rho\).
In this section, we present the Monte Carlo simulation results in terms of estimation error and sign recovery for the LASSO, GLS-LASSO and FGLS-LASSO, for a data-generating process as described by the model (1 ) with varying degrees of error persistence. For simplicity of exposition, the GLS-LASSO and FGLS-LASSO will be referred to as GLS and FGLS hereafter. In our setup, we assume a sparsity parameter of \(s=p/10\), such that for \(p=100\) we would have \(\lVert\beta_{0}\rVert_{0}=10\), and the design is simulated independently across rows according to \(X_{t\cdot}\sim N(0,\Sigma)\). The estimators \(\hat{\beta}\) are obtained for GLS and FGLS according to the description in Section 3. Finally, given the temporal nature of the regression, the regularization parameter \(\lambda_{n}\) is tuned using 2-fold cross-validation, where the folds are defined as the first and second halves of the data to preserve temporal ordering.
We conduct the simulation study for sample sizes \(n=50,100,\cdots,450,500\) and with different degrees of dimensionality \(p=128,256,512\). We further consider different autocorrelation parameters \(\rho\in\{0,0.5,0.9,0.99\}\), where \(\rho=0\) is the classic case of independent errors, in which the LASSO and GLS are expected to perform on par with or superior to the FGLS estimator. On the other hand, with \(\rho=0.9\) and \(\rho=0.99\), the error is highly persistent, and realizations would look similar to those of a random walk. The Monte Carlo study entails \(1000\) runs for each of the aforementioned scenarios, whereby the errors are measured by \(\lVert\hat{\Delta}\rVert_{q}\) for \(q=1,2,\infty\), and sign recovery is calculated by the empirical probability \(\hat{P}[\mathrm{sign}(\hat{\beta})=\mathrm{sign}(\beta_{0})]\), where \(\hat{P}\) is estimated by averaging over the 1000 replications. Note that \(\mathrm{sign}(\hat{\beta})=\mathrm{sign}(\beta_{0})\) requires equality for all elements, i.e. \(\mathrm{sign}(\hat{\beta}_{i})=\mathrm{sign}(\beta_{0;i})\) for all \(i=1,\ldots,p\).
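The sign-recovery metric and the temporally ordered 2-fold tuning rule can be written compactly; below is a sketch under our own simplifying choice of a fixed grid of candidate \(\lambda\) values.

```python
import numpy as np
from sklearn.linear_model import Lasso

def sign_recovered(beta_hat, beta0):
    """1.0 if sign(beta_hat_i) == sign(beta0_i) for every i, else 0.0."""
    return float(np.array_equal(np.sign(beta_hat), np.sign(beta0)))

def two_fold_temporal_lambda(X, y, grid):
    """2-fold CV where the folds are the first and second halves of the sample,
    preserving temporal ordering; returns the lambda with the lowest held-out MSE."""
    n = len(y)
    half = n // 2
    folds = [(slice(0, half), slice(half, n)), (slice(half, n), slice(0, half))]
    scores = []
    for lam in grid:
        mse = 0.0
        for train, test in folds:
            fit = Lasso(alpha=lam, fit_intercept=False).fit(X[train], y[train])
            mse += np.mean((y[test] - fit.predict(X[test])) ** 2)
        scores.append(mse)
    return grid[int(np.argmin(scores))]
```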
Figure 3 presents the outcome of the simulations in terms of the mean \(\ell_{2}\) norm of the difference between the true parameter vector \(\beta_{0}\) and its estimated counterpart \(\hat{\beta}\), using the three approaches of GLS, FGLS and LASSO for the high-dimensional case with \(p=512\). The \(95\%\) (empirical) confidence bands have also been included in the plots to show the degree of dispersion under different autocorrelation regimes. Similar results have been compiled using both the \(\ell_{1}\)- and \(\ell_{\infty}\)-norms, as well as the lower-dimensional cases of \(p=128,256\), with similar outcomes. These results can be found in the supplementary material.
Figure 3: Estimation error (top: \(p^{-1/2}\|\hat{\Delta}\|_{2}\); bottom: \(\|\hat{\Delta}\|_{\infty}\)) as a function of \(n\) for \(p=512\) and different settings of \(\rho\); dashed lines indicate empirical 95% confidence intervals.
As shown in Fig. 3, the performances of the three estimators are comparable when the errors exhibit zero to moderate autocorrelation - i.e., \(\rho=0\) and \(\rho=0.5\). On the other hand, the lower panel with \(\rho=0.9\) shows that while the performances of the GLS and FGLS are comparable, they significantly out-perform the LASSO. As one may expect, given that the FGLS estimator is predicated on \(\hat{\rho}\) as opposed to the true autocorrelation parameter \(\rho\), the FGLS estimates appear to exhibit higher variance. In the case of the LASSO, the larger degree of dispersion around the mean error is striking, with a 95% confidence band almost twice that of both the GLS and FGLS estimators. Moreover, the convergence of the LASSO error towards zero is slower than that of both our proposed estimators, with a mean \(\ell_{2}\)-error of \(\approx1.5\) at \(n=500\) for the LASSO, compared with an \(\ell_{2}\)-error of less than \(1\) for the GLS and FGLS estimators. The most interesting results are presented in the bottom-right panel (Fig. 3), where, with \(\rho=0.99\), the autocorrelation parameter is close to unity. While the GLS and FGLS estimators demonstrate similar performance to the earlier results, the LASSO does not show any signs of consistency, with its \(\ell_{2}\)-error being significantly larger - approximately \(3\) to \(6\) times greater than that of both the GLS and FGLS estimators.
Figure 4: Empirical probability of sign recovery \(P(\mathrm{sgn}(\hat{\beta})=\mathrm{sgn}(\beta_{0}))\).
Similar findings can be observed in the simulations concerning support recovery. Sign-recovery performances are identical between the different estimators when the error process exhibits zero or moderate autocorrelation. However, Figure 4 demonstrates a significant difference between the estimators when the errors are highly persistent. When \(\rho=0.9\) and \(p=128,256\) and \(512\), although the performances of the GLS and FGLS are comparable to one another, they clearly outperform the LASSO. A more interesting case is when \(\rho=0.99\), where even in the low-dimensional setting - i.e., \(p=128\) - sign recovery for the GLS and FGLS converges towards \(100\%\) with a relatively small sample size of \(n\approx100\), whereas even with \(n=500\) the LASSO only recovers about \(70\%\) of the support. This contrast becomes increasingly evident in the high-dimensional setting - i.e., \(p=512\) - where the GLS and FGLS estimators eventually recover the entire support, yet the LASSO only recovers roughly \(25\%\) of the support when \(n=500\).
This paper provides a detailed study of the properties of a simple regularized GLS procedure for linear regressions with potentially autocorrelated error terms, in a high-dimensional setting. We study the case with a Gaussian design and sub-Gaussian errors, where the error process is assumed to be autoregressive in nature. Our contributions are three-fold:
First, we confirm that in the presence of autocorrelated errors, and without the GLS (or FGLS) transformation, the choice of the regularization parameter should be inflated in relation to the degree of persistence in the error terms. A quick glance at Corollary 1 reveals that for any fixed \(p,n\), we have \(\lambda\propto(1-\rho^{2})^{-1/2}\), hence \(\lambda\to\infty\) if \(|\rho|\to1\). Consequently, with regards to estimation consistency, it is evident from Proposition 1 that the estimator’s convergence rate (under the \(\ell_{2}\) norm) can slow as the autocorrelation of the error process approaches unity. These theoretical results are further fortified by the Monte Carlo simulation exercise in Section 4, specifically in the local-to-unity settings, i.e., \(\rho=0.99\).
Second, in the case of the GLS estimator, we show that a restricted eigenvalue condition for the transformed design matrix holds with high probability. This result generalizes that of [20] to the autocorrelated setting, where pre-multiplication by the whitening matrix induces dependence across the rows of the design. Once this is obtained, we subsequently provide an oracle inequality for the GLS-LASSO, which holds over a range of supports of bounded size. Crucially, we demonstrate that when whitening is performed in relation to the AR errors, the eigenvalues of the design are still adequately lower-bounded. We note that this is in contrast to the re-scaling that the design may undergo under different error assumptions, e.g., as in [16]—whilst we have shown the GLS-LASSO works well with autoregressive errors, it may not generally be optimal to perform the GLS rotation.
Third, we present non-asymptotic bounds for the parameter of the AR(1) errors that take into account the estimation error (in the regression coefficients) from the first stage of the FGLS-LASSO. While the asymptotic consistency of this parameter under FGLS is well-established in classical settings (\(p<n\)), to our knowledge our work is the first of its kind in the high-dimensional setting using the LASSO. We further use this result to enable the construction of an error bound for the FGLS procedure through an appropriate choice of regularization parameter in the second stage of estimation. Our result shows that whilst the choice of \(\bar{\lambda}_{n}\) is inflated slightly compared to the nominal choice (in the presence of i.i.d. errors), this inflation is bounded and tends to zero asymptotically in \(n,p\) for \(n=\Omega(s\log p)\).
Finally, our simulation exercises in Section 4 compare the LASSO, GLS-LASSO and FGLS-LASSO estimators in terms of estimation error and sign recovery, for different degrees of persistence, and with different numbers of covariates and sample sizes. Our results in this section corroborate the theoretical findings obtained in terms of estimation error. Furthermore, while we have not presented theoretical results pertaining to sparsistency, the simulations indicate the superiority of the GLS-LASSO and FGLS-LASSO in terms of sign recovery when the errors are highly persistent, whilst giving identical performance in the absence of correlation in the error terms. Theoretical results for sparsistency could be derived by extending the now-standard primal-dual witness argument, c.f. [1].
Our work further paves the way for future research on performing GLS-type corrections within a regularized M-estimation framework. Beyond generalizing to other structured penalties/priors, c.f. the group-LASSO and trend-filtering, we can also consider relaxing the (rather strict) distributional assumptions used in our results. For example, one could consider distributions with polynomial tails, designs that exhibit more general dependence, or more general assumptions (e.g., mixing conditions) on the covariance structure of the errors. To some extent, the work of [18] provides a step in this direction, and we note they also consider the task of post-selection inference; however, there are still key differences between that work and our own, in particular with regards to the analysis of the RE condition in the FGLS setting. Overall, we have shown that GLS (and a feasible variant) can be highly effective when combined with the LASSO.
Proof. Lemma 1
Let \(x_{j}\) be the \(j\)th column of \(X\), and define \(w_{j}=x_{j}^{\top}Lu\). We note that this is the inner product of two random sub-Gaussian vectors of dimension \(n\) with sub-exponential norm bounded by: \[K=\max_{i,j}\normiii{X_{ij}}_{2}\normiii{u_{j}}_{2}\;.\] Via comparison with the Gaussian chaos we find the moment-generating function of this quadratic form bounded according to \[\begin{align} \mathbb{E}[\exp(\omega w_{j})] & \le\mathbb{E}[\exp(c_{1}K\omega z^{\top}Lz')]\\ & \le\exp(c_{1}K^{2}\omega^{2}\|L\|_{F}^{2}) \end{align}\] where \(z,z'\sim\mathcal{N}(0,I_{n})\), and \(c_{1}>0\) is an absolute constant (see Section 6.2 of [21]). The second inequality comes from a bound on the Gaussian Chaos and holds for all \(\omega\) where \(|\omega|\le K^{-1}\|L\|_{2}^{-1}c_{2}\). Applying the Markov inequality we obtain \[P[\exp(\omega w_{j})\ge\exp(\omega t)]\le\frac{\mathbb{E}[\exp(\omega w_{j})]}{\exp(\omega t)}\leq\frac{\exp(c_{1}K^{2}\omega^{2}\|L\|_{F}^{2})}{\exp(\omega t)}\;.\]
Minimizing over \(\omega\) and applying the union bound gives \[\begin{align} P[\|X^{\top}Lu\|_{\infty}\ge t] & \le2p\exp\left(-\min\left[\frac{t^{2}}{c_{1}4K^{2}\|L\|_{F}^{2}},\frac{c_{2}t}{K\|L\|_{2}}\right]\right)\;,\\ & \le2p\exp\left(-c\min\left[\frac{t^{2}}{4K^{2}\|L\|_{F}^{2}},\frac{t}{K\|L\|_{2}}\right]\right) \end{align}\] for some absolute constant \(0<c\le\min[c_{1}^{-1},c_{2}]\). ◻
Proof. Corollary 1
Note that \(\|\Psi_{0}\|_{F}^{2}=na^{2}\) and \(\|\Psi_{0}\|_{2}=a\), from Lemma 1, we have
\[P[n^{-1}\|X^{\top}\epsilon\|_{\infty}\ge t/2]\le2p\exp\left(-c\min\left[\frac{t^{2}n}{16K^{2}a^{2}},\frac{tn}{2Ka}\right]\right)\;.\] Letting \(\lambda_{n}\) be set according to the lower bound stated in Corollary 1, and assuming \(n\ge(2c)^{-1}\tau\log p\), we choose the Gaussian tail (order \(t^{2}\) decay) and obtain the stated result. ◻
Proof. Proposition 1
We know from Corollary 1 that \(E_{1}\) holds with \(\lambda_{n}=c^{-1/2}4K\delta(a^{-2}n,p^{\tau})\). Now, if \(E_{2}\) holds with \(\kappa\) and \(\alpha=3\), then we have \(\|\hat{\Delta}\|_{2}^{2}\le\kappa^{-1}\lambda_{n}(3\|\hat{\Delta}_{\mathcal{S}}\|_{1}-\|\hat{\Delta}_{\mathcal{S}^{\perp}}\|_{1})\), and we note since \(3\|\hat{\Delta}_{\mathcal{S}}\|_{1}-\|\hat{\Delta}_{\mathcal{S}^{\perp}}\|_{1}>0\) we have \(\hat{\Delta}\in\mathbb{C}_{\alpha}(\mathcal{S})\) holds with \(\alpha=3\). Focussing on the on-support terms, we have \(\|\hat{\Delta}_{\mathcal{S}}\|_{1}\le\sqrt{|\mathcal{S}|}\|\hat{\Delta}\|_{2}\) and the result follows. ◻
Proof. Our argument is based closely on that of [20] whilst carefully tracking the impact of the Whitening matrix \(R\). Let us consider the rescaling \(\tilde{v}=\lVert\Sigma^{1/2}v\rVert_{2}^{-1}v\) and note that \(\lVert\Sigma^{1/2}\tilde{v}\rVert_{2}=1\) by construction. We will adopt a peeling argument, that is, we consider a restricted set \(V(r):=\{v\in\mathbb{R}^{p}\mid\lVert\Sigma^{1/2}v\rVert_{2}=1,\lVert v\rVert_{1}\leq r\}\) and then expand this set to cover all \(v\in\mathbb{R}^{p}\). For a given \(\ell_{1}\) radius, we consider the quantity: \[M(r,\tilde{X}):=1-\inf_{v\in V(r)}\frac{\lVert RXv\rVert_{2}}{\sqrt{n}}=\sup_{v\in V(r)}\left\{ 1-\frac{\lVert RXv\rVert_{2}}{\sqrt{n}}\right\} \;.\] We desire to construct a deviation bound based on this quantity, and its expectation, to show that in high-probability it can be bounded from below by a constant. There are two steps, first calculate the expectation (or a bound on this) via Gordon’s inequality, and then noting that \(M(r,\tilde{X})\) is a Lipschitz function of \(X\) consider the deviation away from this expectation.
For the first step, consider the sphere \(S^{n-1}=\{u\in\mathbb{R}^{n}\mid\lVert u\rVert_{2}=1\}\) and the fact \(-\inf_{v\in V(r)}\lVert RXv\rVert_{2}=\sup_{v\in V(r)}\inf_{u\in S^{n-1}}(u^{\top}RXv)\) where we have applied the definition of the norm, and swapped the minimum and maximum. Let \(Y_{u,v}:=u^{\top}RXv\) be a Gaussian process indexed by \(u\in S^{n-1}\) and \(v\in V(r)\), we can then relate \(M(r,\tilde{X})\) to the extrema of this process via \[M(r,\tilde{X})=1+n^{-1/2}\sup_{v\in V(r)}\inf_{u\in S^{n-1}}(Y_{u,v})\] Consider the process \(Z_{u,v}\) given by \[Z_{u,v}=g^{\top}R^{\top}u+h^{\top}\Sigma^{1/2}v\quad\forall\;v\in V(r),u\in S^{n-1}\;,\] where \(g\sim N(0,I_{n\times n})\) and \(h\sim N(0,I_{p\times p})\) are standard Gaussian vectors. Calculating the variance of the difference process, we find \[\begin{align} \mathrm{Var}(Z_{u,v}-Z_{u',v'}) & =\lVert R^{\top}(u-u')\rVert_{2}^{2}+\lVert\tilde{v}-\tilde{v}'\rVert_{2}^{2}\\ \mathrm{Var}(Y_{u,v}-Y_{u',v'}) & =\lVert R^{\top}u\tilde{v}^{\top}-R^{\top}u'\tilde{v}'^{\top}\rVert_{F}^{2}\\ & =\lVert R^{\top}(u-u')\rVert_{2}^{2}+\lVert\tilde{v}-\tilde{v}'\rVert_{2}^{2}-2(\underset{\ge0}{\underbrace{\lVert R^{\top}u'\rVert_{2}^{2}-u^{\top}RR^{\top}u'}})(\underset{\ge0}{\underbrace{\tilde{v}^{\top}\tilde{v}-\lVert\tilde{v}'\rVert_{2}^{2}}})\\ & \le\mathrm{Var}(Z_{u,v}-Z_{u',v'})\quad\forall\quad(u,v),(u',v')\in S^{n-1}\times V(r)\;, \end{align}\] where the third line comes from expanding the Frobenius norm and the final inequality utilizes the Cauchy-Schwarz inequality. We now obtain an upper bound on \(\mathbb{E}_{X}[M(r,X)]\) using the Gaussian comparison inequality (e.g., Gordon’s inequality) and the definition of \(Z_{u,v}\), i.e. \[\begin{align} \mathbb{E}[\sup_{u\in S^{n-1}}\inf_{v\in V(r)}Y_{u,v}] & \leq\mathbb{E}[\sup_{v\in V(r)}\inf_{u\in S^{n-1}}Z_{u,v}]\nonumber \\ & \le\mathbb{E}[\inf_{u\in S^{n-1}}g^{\top}R^{\top}u]+\mathbb{E}[\sup_{v\in V(r)}h^{\top}\Sigma^{1/2}v]\nonumber \\ & \leq-\mathbb{E}[\inf_{u\in S^{n-1}}\lVert R^{\top}g\rVert_{2}\lVert u\rVert_{2}]+\mathbb{E}[\sup_{v\in V(r)}h^{\top}\Sigma^{1/2}v]\nonumber \\ & =-\mathbb{E}[\lVert R^{\top}g\rVert_{2}]+\mathbb{E}[\sup_{v\in V(r)}h^{\top}\Sigma^{1/2}v]\;.\label{eq:supinfY} \end{align}\tag{12}\] Let \(m:=\|R^{\top}g\|_{2}^{2}\), a lower bound on \(n^{-1/2}\mathbb{E}[\|R^{\top}g\|_{2}]\) is obtained through the bound \[\frac{\mathbb{E}[m]+m}{2n}-\frac{\sigma_{m}^{2}}{2n^{2}}\le\sqrt{\frac{m}{n}}\le\frac{\mathbb{E}[m]+m}{2n}\] where \(\sigma_{m}^{2}=\mathrm{Var}[\|R^{\top}g\|_{2}^{2}]\). Calculating expectations we obtain \(\mathrm{tr}(RR^{\top})-2^{-1}\sigma_{m}^{2}\le\mathbb{E}[\|R^{\top}g\|_{2}]\le\mathrm{tr}(RR^{\top})\). It is known that \(\lVert R^{\top}g\rVert_{2}^{2}=g^{\top}RR^{\top}g\) is a generalized \(\chi^{2}\) distribution, where \(\sigma_{m}^{2}=2\mathrm{tr}(RR^{\top}RR^{\top})\). Noting \(RR^{\top}\) is symmetric, then \(|\mathrm{tr}(RR^{\top}RR^{\top})|\le\|R\|_{2}^{2}\|R\|_{F}^{2}\) and \[\frac{1}{\sqrt{n}}\mathbb{E}[\|R^{\top}g\|_{2}]\ge\gamma_{n}:=\frac{\|R\|_{F}^{2}}{2n}\;,\] for all \(n\ge2\|R\|_{2}^{2}\). Correspondingly, we have \(-\mathbb{E}[\lVert R^{\top}g\rVert_{2}]\le-\gamma_{n}n^{1/2}\).
By definition of \(V(r)\), we have \[\sup_{v\in V(r)}\lvert h^{\top}\Sigma^{1/2}v\rvert\leq\sup_{v\in V(r)}\lVert v\rVert_{1}\lVert\Sigma^{1/2}h\rVert_{\infty}\leq r\lVert\Sigma^{1/2}h\rVert_{\infty}\;.\] Each element \((\Sigma^{1/2}h)_{j}\) is Gaussian with variance \(\Sigma_{jj}\). Moreover, it is known that for a Gaussian \(h\), we have \(\mathbb{E}[\lVert\Sigma^{1/2}h\rVert_{\infty}]\leq3\sqrt{\sigma_{\max}^{2}\log p}\). Combining with (12 ), dividing by \(\sqrt{n}\) and adding \(1\) to both sides yields:
\[\begin{align} \mathbb{E}[M(r,RX)] & \leq\underset{=:t(r)}{\underbrace{(1-\gamma_{n})+3r\sigma_{\max}\delta(n,p)}}\;\forall\gamma_{n}>0. \end{align}\]
The second step is to provide a concentration around the expectation. This can be done by considering \(M(r,\tilde{X})\) as a Lipschitz function of a standard Gaussian random variable. In our case, we have \[M(r,\tilde{X})=h(W):=\sup_{v\in V(r)}(1-n^{-1/2}\lVert RW\Sigma^{1/2}v\rVert_{2}),\] where \(W\sim\mathcal{N}_{n\times p}(0,I_{p}\otimes I_{n})\) is an isotropic Gaussian matrix. Comparing \(h()\) across two matrices \(W,W'\), we find \[n^{1/2}[h(W)-h(W')]=\sup_{v\in V(r)}\left(-\lVert RW\Sigma^{1/2}v\rVert_{2}\right)-\sup_{v\in V(r)}\left(-\lVert RW'\Sigma^{1/2}v\rVert_{2}\right)\;,\label{eq:lip95part1}\tag{13}\] and since \(V(r)\) is closed, bounded and non-empty, and the objective function is continuous, there exists a \(\hat{v}\in V(r)\) such that \(\hat{v}=\arg\max_{v\in V(r)}-\lVert RW\Sigma^{1/2}v\rVert_{2}\). Applying this definition to (13 ) gives \[\begin{align} n^{1/2}[h(W)-h(W')] & =-\lVert RW\Sigma^{1/2}\hat{v}\rVert_{2}-(\sup_{v\in V(r)}-\lVert RW'\Sigma^{1/2}v\rVert_{2})\\ & \le\lVert RW'\Sigma^{1/2}\hat{v}\rVert_{2}-\lVert RW\Sigma^{1/2}\hat{v}\rVert_{2}\\ & \le\sup_{v\in V(r)}(\lVert R(W'-W)\Sigma^{1/2}v\rVert_{2})\\ & \le\lVert R\rVert_{2}\lVert W-W'\rVert_{F}\sup_{v\in V(r)}\lVert\Sigma^{1/2}v\rVert_{2}\\ & \le\lVert R\rVert_{2}\lVert W-W'\rVert_{F}\;. \end{align}\] Thus, we have shown that \(h\) has Lipschitz constant \(L\leq n^{-1/2}\lVert R\rVert_{2}\) with respect to the Euclidean norm on \(W\) (viewed as a vector of \(np\) entries). Applying Massart’s concentration inequality for Lipschitz functions, \(P[\lvert h(w)-\mathbb{E}[h(w)]\rvert\geq t]\leq2\exp(-(2L^{2})^{-1}t^{2})\), we obtain \[\begin{align} P\left[\lvert M(r,\tilde{X})-\mathbb{E}[M(r,\tilde{X})]\rvert\geq\frac{t(r)}{2}\right] & \leq2\exp\left(-\frac{nt^{2}(r)}{2^{3}\lVert R\rVert_{2}^{2}}\right)\;, \end{align}\] and recalling \(\mathbb{E}[M(r,RX)]\le t(r)\) we have
\[P\left[M(r,\tilde{X})\geq\frac{1}{4}t(r)\right]\le2\exp\left(-\frac{nt^{2}(r)}{2^{5}3^{2}\lVert R\rVert_{2}^{2}}\right)\;.\label{eq:Mbound}\tag{14}\]
We now apply the same peeling argument as in [20], whose Lemma 3 states that if we have \[P\left[\sup_{v\in A\;q(v)\le r}f(v,\tilde{X})\ge g(r)\right]\le2\exp(-ca_{n}g^{2}(r))\;,\label{eq:RaskuttiLemma3}\tag{15}\] for some positive sequence \(a_{n}>0\), constraint function \(q:\mathbb{R}^{p}\mapsto\mathbb{R}_{+}\), non-empty set \(A\), and constant \(c>0\). Then the event \[\mathcal{B}:=\left\{ \exists v\in\mathbb{R}^{p}\:|\;f(v,\tilde{X})\ge2g(q(v))\right\}\] can be bounded in probability \[P[\mathcal{B}]\le\frac{2\exp(-4ca_{n}\mu_{n}^{2})}{1-2\exp(-4ca_{n}\mu_{n}^{2})}\;,\] where \(\mu_{n}\le g(r)\) for all \(r>0\).
We can apply this result, with the following choices of function \[f(v,\tilde{X})=1-\|\tilde{X}v\|_{2}/\sqrt{n}\quad;q(v)=\|v\|_{1}\quad;\quad g(r)=\frac{1}{4}t(r)\] Consider the bad event \[\mathcal{B}:=\left\{ \exists v\in\mathbb{R}^{p}\:|\:\|\Sigma^{1/2}v\|_{2}=1\;,\;(1-\frac{1}{\sqrt{n}}\|\tilde{X}v\|_{2})\ge\frac{1}{2}t(\|v\|_{1})\right\} \;.\] Let \(\mu_{n}=2^{-2}(1-\gamma_{n})\le g(r)\) and choose \(a_{n}=n\) then applying (14 ) in lieu of (15 ), we find \(c=(3^{2}2\|R\|_{2}^{2})^{-1}\) leading to \[P[\mathcal{B}]\le\frac{2\exp(-c_{1}n)}{1-2\exp(-c_{1}n)}\] where \(c_{1}=3^{-2}2^{-1}(1-\gamma_{n})^{2}\lVert R\rVert_{2}^{-2}\). Correspondingly, we have \(P[\mathcal{B}^{c}]\ge1-4\exp(-c_{1}n)\) for all \(n\ge4/c_{1}\), and conditioning on \(\mathcal{B}^{c}\) gives \[\begin{align} 1-\frac{1}{\sqrt{n}}\|\tilde{X}v\|_{2} & \le\frac{1}{2}t(\|v\|_{1})\\ & =\left[1-\frac{1}{2}(1-\gamma_{n})-\frac{3}{2}\|v\|_{1}\sigma_{\max}\delta(n,p)\right] \end{align}\] and therefore \[\frac{1}{\sqrt{n}}\|\tilde{X}v\|_{2}\ge\underset{wlog=1}{\underbrace{\|\Sigma^{1/2}v\|_{1}}}[\frac{1}{2}(1+\gamma_{n})]-\frac{3}{2}\|v\|_{1}\sigma_{\max}\delta(n,p)\;.\label{eq:l295lb}\tag{16}\] Now consider \(c=\max(a-b,0)\), then for all \(d\in(0,1)\) we have \(c^{2}\ge(1-d)^{2}a^{2}-d^{-2}b^{2}\), picking \(d=2^{-1}\) and applying to (16 ) gives the final result. ◻
Proof. For \(\Delta\in\mathbb{C}_{\alpha}(\mathcal{S})\), the constraint \(\|\Delta_{\mathcal{S}^{\perp}}\|_{1}\le\alpha\|\Delta_{\mathcal{S}}\|_{1}\) implies \(\|\Delta\|_{1}\le(1+\alpha)\|\Delta_{\mathcal{S}}\|_{1}\le(1+\alpha)\sqrt{|\mathcal{S}|}\|\Delta\|_{2}\), thus \[3^{2}\sigma_{\max}^{2}\frac{\log p}{n}\|\Delta\|_{1}^{2}\le3^{2}\sigma_{\max}^{2}\frac{\log p}{n}(1+\alpha)^{2}|\mathcal{S}|\|\Delta\|_{2}^{2}\;,\] and applying Proposition 2 with \(v=\Delta\), \[\begin{align} \frac{1}{n}\|\tilde{X}\Delta\|_{2}^{2} & \ge\left(2^{-4}(1+\gamma_{n})^{2}\eta_{\min}^{2}(\Sigma^{1/2})-3^{2}\sigma_{\max}^{2}\frac{\log p}{n}(1+\alpha)^{2}|\mathcal{S}|\right)\|\Delta\|_{2}^{2}\\ & =\underset{\kappa}{\underbrace{\left(2\tilde{\kappa}-3^{2}\sigma_{\max}^{2}\frac{\log p}{n}(1+\alpha)^{2}|\mathcal{S}|\right)}}\|\Delta\|_{2}^{2}\;, \end{align}\] and the result follows since \(\kappa>0\). ◻
Proof. Consider the LASSO basic inequality, for a generic support set \(\mathcal{S}\subset\{1,\ldots,p\}\), we obtain the upper bound \[\begin{align} 0 & \leq\frac{1}{n}\lVert\tilde{X}\hat{\Delta}\rVert_{2}^{2}\leq\frac{2(Lu)^{\top}RX\hat{\Delta}}{n}+2\tilde{\lambda}_{n}\left\{ \lVert\beta_{0;\mathcal{S}}\rVert_{1}-\lVert\beta_{0;\mathcal{S}}+\hat{\Delta}_{\mathcal{\mathcal{S}}}\rVert_{1}-\lVert\hat{\Delta}_{\mathcal{S}^{\perp}}\rVert_{1}\right\} \nonumber \\ & \le\tilde{\lambda}_{n}\left\{ 3\lVert\hat{\Delta}_{\mathcal{S}}\rVert_{1}-\lVert\hat{\Delta}_{\mathcal{S}^{\perp}}\rVert_{1}+2\lVert\beta_{0;\mathcal{S}^{\perp}}\rVert_{1}\right\} \label{eq:basic-inequality952} \end{align}\tag{17}\] for suitably chosen \(\tilde{\lambda}_{n}\ge2n^{-1}\|X^{\top}u\|_{\infty}\). Noting \(3\lVert\hat{\Delta}_{\mathcal{S}}\rVert_{1}-\lVert\hat{\Delta}_{\mathcal{S}^{\perp}}\rVert_{1}+2\lVert\beta_{0;\mathcal{S}^{\perp}}\rVert_{1}\ge0\) leads to \[\begin{align} \lVert\hat{\Delta}\rVert_{1}^{2} & \le(4\lVert\hat{\Delta}_{\mathcal{S}}\rVert_{1}+2\lVert\beta_{0;\mathcal{S}^{\perp}}\rVert_{1})^{2}\\ & \le2^{5}|\mathcal{S}|\lVert\hat{\Delta}_{\mathcal{S}}\rVert_{2}^{2}+2^{3}\lVert\beta_{0;\mathcal{S}^{\perp}}\rVert_{1}^{2}\;, \end{align}\] substituting into (17 ), and letting \(\tilde{\kappa}=2^{-5}(1+\gamma_{n})^{2}\eta_{\min}^{2}(\Sigma^{1/2})\) gives \[\begin{align} \frac{1}{n}\|\tilde{X}\Delta\|_{2}^{2} & \ge2\tilde{\kappa}\|\Delta\|_{2}^{2}-3^{2}\sigma_{\max}^{2}\frac{\log p}{n}\left(2^{5}|\mathcal{S}|\lVert\hat{\Delta}_{\mathcal{S}}\rVert_{2}^{2}+2^{3}\lVert\beta_{0;\mathcal{S}^{\perp}}\rVert_{1}^{2}\right)\\ & \ge\|\Delta\|_{2}^{2}\left(2\tilde{\kappa}-3^{2}\sigma_{\max}^{2}|\mathcal{S}|\frac{\log p}{n}\right)-3^{2}2^{3}\sigma_{\max}^{2}\frac{\log p}{n}\lVert\beta_{0;\mathcal{S}^{\perp}}\rVert_{1}^{2}\;, \end{align}\] assuming \(|\mathcal{S}|\le3^{-2}2^{-5}\tilde{\kappa}\sigma_{\max}^{-2}(\log p)^{-1}n\), we have \[\frac{1}{n}\|\tilde{X}\Delta\|_{2}^{2}\ge\underset{a}{\underbrace{\tilde{\kappa}\|\Delta\|_{2}^{2}}}-\underset{b}{\underbrace{3^{2}2^{3}\sigma_{\max}^{2}\frac{\log p}{n}\lVert\beta_{0;\mathcal{S}^{\perp}}\rVert_{1}^{2}}}\;.\] First, consider the case where the approximation error term (\(b\)) is dominated by the estimation error, such that \(b\le a/2\). In this case, we find \(2^{-1}\tilde{\kappa}\|\Delta\|_{2}^{2}\le\tilde{\lambda}_{n}\left(3|\mathcal{S}|^{1/2}\|\Delta\|_{2}+2\lVert\beta_{0;\mathcal{S}^{\perp}}\rVert_{1}\right)\), solving the quadratic leads to a bound on the \(\ell_{2}\) error. In the case where \(b>a/2\), the estimation error is directly bound by the approximation error. Overall we have \[\begin{align} \implies\|\Delta\|_{2}^{2} & \le\begin{cases} \frac{3^{3}2|\mathcal{S}|\tilde{\lambda}_{n}^{2}}{\tilde{\kappa}^{2}}+\frac{2^{2}\tilde{\lambda}_{n}\|\beta_{0;\mathcal{S}^{\perp}}\|_{1}}{\tilde{\kappa}} & b\le\frac{a}{2}\\ \frac{3^{2}2^{4}}{\tilde{\kappa}}\sigma_{\max}^{2}\frac{\log p}{n}\lVert\beta_{0;\mathcal{S}^{\perp}}\rVert_{1}^{2} & \mathrm{otherwise} \end{cases} \end{align}\] which gives the stated bound.
From Lemma 1, and letting \(\tilde{\lambda}_{n}\ge c^{-1/2}\sigma_{\max}\sigma_{u}\sqrt{\log p^{\tau}/n}\), we have \[P[\|X^{\top}R^{\top}u\|_{\infty}\ge2^{-1}n\tilde{\lambda}_{n}]\le4p^{1-\tau}\] if \(n\ge2^{-2}c^{-1}\|R\|_{2}^{2}\tau\log p\). Recall that the probability that the RE condition fails (Prop. 2) is bounded by \(4\exp(-c_{R}n)\); then we have \(4\exp(-c_{R}n)\le4p^{1-\tau}\) if \(n\ge(\tau-1)c_{R}^{-1}\log p\). Combining the sample size conditions leads to our oracle inequality holding with probability at least \(1-8p^{1-\tau}\rightarrow1\). ◻
Proof. Feasible AR(1) Error
Let us note \(\hat{\epsilon}_{t}=(y_{t}-x_{t}^{\top}\beta_{0}+x_{t}^{\top}\beta_{0}-x_{t}^{\top}\hat{\beta})=\epsilon_{t}+z_{t}\), where \(z_{t}:=x_{t}^{\top}\hat{\Delta}\). We note that the bounds and events considered here are taken to be conditional on \(\hat{\Delta}\), specific bounds can be obtained by bounding this error from the first-stage LASSO:
\[\begin{align} \hat{\rho}-\rho & =\rho\underset{A}{\underbrace{\left(\frac{-\sum_{t=1}^{n-1}\epsilon_{t}z_{t}-\sum_{t=1}^{n-1}z_{t}^{2}}{\sum_{t=1}^{n-1}\epsilon_{t}^{2}+2\sum_{t=1}^{n-1}\epsilon_{t}z_{t}+\sum_{t=1}^{n-1}z_{t}^{2}}\right)}}+\underset{B}{\underbrace{\left(\frac{\sum_{t=2}^{n}\hat{\epsilon}_{t-1}(u_{t}+z_{t})}{\sum_{t=2}^{n}\hat{\epsilon}_{t-1}^{2}}\right)}}\\ |\hat{\rho}-\rho| & \le|\rho||A|+|B|\;. \end{align}\] Let \(S_{n}^{(\epsilon)}=\sum_{t=1}^{n}\epsilon_{t}^{2}\), \(S_{n}^{(z)}=\sum_{t=1}^{n}z_{t}^{2}=\|X\hat{\Delta}\|_{2}^{2}\), and \(S_{n}^{(\epsilon,z)}=\sum_{t=1}^{n}\epsilon_{t}z_{t}\), then: \[\begin{align} |A| & =\frac{S_{n-1}^{(\epsilon,z)}+S_{n-1}^{(z)}}{S_{n-1}^{(\epsilon)}+2S_{n-1}^{(\epsilon,z)}+S_{n-1}^{(z)}}\\ & =\frac{S_{n}^{(\epsilon,z)}+S_{n}^{(z)}-\epsilon_{n}z_{n}-z_{n}^{2}}{S_{n}^{(\epsilon)}+2S_{n}^{(\epsilon,z)}+S_{n}^{(z)}-\hat{\epsilon}_{n}^{2}}\\ & \le\frac{|S_{n}^{(\epsilon,z)}|+|S_{n}^{(z)}|+\hat{\epsilon}_{n}^{2}}{S_{n}^{(\epsilon)}+2S_{n}^{(\epsilon,z)}+S_{n}^{(z)}-\hat{\epsilon}_{n}^{2}} \end{align}\] From the Lagrangian basic inequality (6 ), we have
\[S_{n}^{(z)}\le\underset{2S_{n}^{(\epsilon,z)}}{\underbrace{2(X\hat{\Delta})^{\top}Lu}}+2n\underset{\le b_{n}}{\underbrace{\lambda_{n}\{\lVert\beta_{0}\rVert_{1}-\lVert\hat{\beta}\rVert_{1}\}}}\] For the sequence \(b_{n}\), letting \(|\mathcal{S}|=s\), and considering \(\lambda_{n}(\lVert\beta_{0}\rVert_{1}-\lVert\hat{\beta}\rVert_{1})\le\lambda_{n}\sqrt{s}\|\hat{\Delta}\|_{2}\) we can use \[\begin{align} \|\hat{\Delta}\|_{2}^{2} & \le\kappa^{-1}\lambda_{n}(3\|\hat{\Delta}_{\mathcal{S}}\|_{1}-\|\hat{\Delta}_{\mathcal{S}^{\perp}}\|_{1})\\ \|\hat{\Delta}\|_{2} & \le\frac{3\sqrt{s}\lambda_{n}}{\kappa} \end{align}\] for \(\kappa>0\), that allows for a choice of \(b_{n}:=\kappa{}^{-1}3s\lambda_{n}^{2}\). Thus
\[S_{n}^{(z)}\le2S_{n}^{(\epsilon,z)}+2nb_{n}\;,\] where (from the LASSO bound above) we note \(\lambda_{n}\sqrt{s}\|\hat{\Delta}\|_{2}\le b_{n}\), and from the RE condition over \(\mathcal{S}\), with parameters \((\kappa,\alpha)\), we have \[S_{n}^{(z)}\ge\kappa n\|\hat{\Delta}\|_{2}^{2}\quad\text{whenever }\hat{\Delta}\in\mathbb{C}_{\alpha}(\mathcal{S})\;.\] To simplify our expression, condition on \(E_{3}:=\{\hat{\epsilon}_{n}^{2}<nb_{n}\}\) to obtain \[\begin{align} |A| & \le\frac{3|S_{n}^{(\epsilon,z)}|+3nb_{n}}{S_{n}^{(\epsilon)}+2n\kappa\|\hat{\Delta}\|_{2}^{2}-3nb_{n}}\le\frac{3|S_{n}^{(\epsilon,z)}|+3nb_{n}}{S_{n}^{(\epsilon)}-3nb_{n}}\;. \end{align}\] Now consider that \(E_{1}\) holds, i.e., \(\lambda_{n}\ge2n^{-1}\|X^{\top}\epsilon\|_{\infty}\), and \(|S_{n}^{(\epsilon,z)}|\le\|X^{\top}\epsilon\|_{\infty}\|\hat{\Delta}\|_{1}\le(1+\alpha)n\lambda_{n}\sqrt{s}\|\hat{\Delta}\|_{2}\le(1+\alpha)nb_{n}\), where for the latter we have used \(\hat{\Delta}\in\mathbb{C}_{\alpha}(\mathcal{S})\). Using this we obtain \[|A|\le\frac{3(2+\alpha)b_{n}}{n^{-1}S_{n}^{(\epsilon)}-3b_{n}}\;,\] conditioning on \(E_{4}:=\{n^{-1}S_{n}^{(\epsilon)}>\frac{1}{2}\sigma_{\epsilon}^{2}\}\) and requiring \(b_{n}\le\sigma_{\epsilon}^{2}/12\) gives
\[|A|\le\frac{12(2+\alpha)b_{n}}{\sigma_{\epsilon}^{2}}\;.\]
We are considering a stochastic process \(\{(x_{t}^{\top},y_{t}):\Omega\to\mathbb{R}^{p+1},t=0,1,2,\cdots,n\}\) defined on a filtered probability space \((\Omega,\mathcal{F},\mathbb{F},P)\). Recall the definition \(\hat{\epsilon}_{t}=\epsilon_{t}+x_{t}^{\top}\hat{\Delta}\), and further assume that \(u\) and \(z\) are independent. Define \(m_{t}:=\hat{\epsilon}_{t-1}(u_{t}+z_{t})\); from \(B\) we have the definition \[B=\frac{\sum_{t=2}^{n}m_{t}}{\sum_{t=2}^{n}\hat{\epsilon}_{t-1}^{2}}\;,\] where \(M_{n}=\sum_{t=2}^{n}m_{t}\) is a martingale adapted to the filtration \(\mathbb{F}=(\mathcal{F}_{t})_{0\leq t\leq n}\), i.e., \(\mathbb{E}[M_{n+1}\mid\mathcal{F}_{n}]=M_{n}\), with \(M_{1}=0\), and \(\mathcal{F}_{s}\subseteq\mathcal{F}_{t}\) for \(s<t\), where \(\mathcal{F}_{t}=\sigma(\{(x_{0}^{\top},y_{0}),\cdots,(x_{t}^{\top},y_{t})\})\) is the \(\sigma\)-field generated by the observations up to time \(t\). Further let \[m_{t}|\mathcal{F}_{t-1}\sim\mathcal{N}(0,\hat{\epsilon}_{t-1}^{2}\sigma_{m}^{2})\;,\] where \[\begin{align} \sigma_{m}^{2} & =\mathrm{Var}[u_{t}+z_{t}]\\ & =\sigma_{u}^{2}+\hat{\Delta}^{\top}\Sigma\hat{\Delta}\;. \end{align}\] Importantly, we note that the predictable conditional variance \[\begin{align} \langle M\rangle_{n} & =\sum_{t=2}^{n}\mathbb{E}[m_{t}^{2}|\mathcal{F}_{t-1}]\\ & =\sigma_{m}^{2}\sum_{t=2}^{n}\hat{\epsilon}_{t-1}^{2}\;, \end{align}\] normalizes \(B\), i.e. \[B=\sigma_{m}^{2}\frac{M_{n}}{\langle M\rangle_{n}}\;,\] and thus \(B\) is an example of a self-normalized process. Such processes are well studied (see [22] for a review of self-normalized processes), and for our purposes we can use the pre-established results of [23] to provide a deviation bound on \(|B|\). Applying Corollary 5.2 of [23], for all \(n\geq2\) and \(0<t<2^{-1}\) we obtain \[\begin{align} P\left[|B|\geq t\right] & \le2\exp\left(-\frac{nt^{2}}{2[1+2t]}\right)\nonumber \\ & \le2\exp\left(-\frac{nt^{2}}{4}\right)\;.\label{eq:B95prob} \end{align}\tag{18}\]
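As a brief numerical aside, the deviation behaviour in (18 ) can be probed by Monte Carlo: below, an AR(1) sequence stands in for \(\hat{\epsilon}_{t-1}\) and an independent Gaussian sequence for \(u_{t}+z_{t}\). The stand-in process, the scale \(\sigma_{m}\), and the threshold grid are illustrative assumptions, so the comparison with the stated bound is indicative only.

```python
# Monte Carlo sketch of the self-normalized ratio B under one illustrative design.
import numpy as np

rng = np.random.default_rng(1)
n, reps, rho, sigma_m = 200, 20000, 0.5, 1.0

# AR(1) stand-in for the residual sequence, simulated across replications.
u = rng.standard_normal((reps, n))
w = np.zeros((reps, n))
w[:, 0] = u[:, 0] / np.sqrt(1.0 - rho**2)
for t in range(1, n):
    w[:, t] = rho * w[:, t - 1] + u[:, t]

g = sigma_m * rng.standard_normal((reps, n))      # stand-in for u_t + z_t
B = (w[:, :-1] * g[:, 1:]).sum(axis=1) / (w[:, :-1] ** 2).sum(axis=1)

for t in (0.10, 0.15, 0.20):
    print(f"t={t:.2f}  empirical={np.mean(np.abs(B) >= t):.4f}"
          f"  bound={2 * np.exp(-n * t**2 / 4):.4f}")
```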
For convenience, let us choose \[\lambda_{n}=\frac{4K}{c^{1/2}}\delta(a^{-2}n,p^{\tau})\] as per Corollary 1. This leads to a choice of \[\begin{align} b_{n} & =\frac{3s\lambda_{n}^{2}}{\kappa}=\frac{48K^{2}}{\kappa c}\delta^{2}(a^{-2}n,p^{\tau s}) \end{align}\] and working with \(\alpha=3\) we have \[|A|\le\underset{=:C^{2}}{\underbrace{\frac{3^{2}\cdot2^{6}\cdot5K^{2}}{\sigma_{u}^{2}\kappa c}}}\delta^{2}(n,p^{\tau s})\;,\] which holds if \(n\ge5^{-1}C^{2}s\log(p^{\tau})\).
Noting that our bounds for \(A\) and \(B\) decay at different rates, we consider a split bound, adding the deviations for \(|A|\) and \(\rho|B|\). Conditioning on \(E_{5}:=\{|B|\le\rho^{-1}C\delta(n,p^{\tau s})\}\) and the bound for \(A\), we have \[\begin{align} \frac{|\hat{\rho}-\rho|}{|\rho|} & \le|A|+\rho|B|\\ & \le2C\delta(n,p^{\tau s}) \end{align}\]
for any \(n>C^{2}s\log(p^{\tau})\), with probability greater than \(1-5\max_{k=1,\ldots,5}P[E_{k}^{c}]\). We recall the definition of these (good) events below: \[\begin{align} E_{1} & =\{\lambda_{n}\ge2n^{-1}\|X^{\top}\epsilon\|_{\infty}\}\\ E_{2} & =\left\{ \frac{1}{n}\|Xv\|_{2}^{2}\ge\kappa\|v\|_{2}^{2}\quad\forall v\in\mathbb{C}_{\alpha}(\mathcal{S})\;\right\} \\ E_{3} & =\{\hat{\epsilon}_{n}^{2}<\frac{48K^{2}}{\kappa c}\delta^{2}(a^{-2},p^{\tau s})\}\\ E_{4} & =\{n^{-1}S_{n}^{(\epsilon)}>\sigma_{\epsilon}^{2}/2\}\\ E_{5} & =\{|B|<\frac{C}{\rho}\delta(n,p^{\tau s})\}\;. \end{align}\]
From Corollary 1, with the choice of regularizer \(\lambda_{n}=4Kc^{-1/2}\delta(a^{-2}n,p^{\tau})\), for any \(n\ge2^{-1}c^{-1}\log(p^{\tau})\) we have \(P[E_{1}^{c}]\le2p^{1-\tau}\).
Applying Corollary 2 with \(R=I_{n}\) and \(\alpha=3\), for all \[n\ge s\log p^{\tau}\underset{c_{3}}{\underbrace{\max\left[\frac{2^{10}\sigma_{\max}^{2}}{\eta_{\min}^{2}(\Sigma^{1/2})},2^{5}3^{2}\right]}}\] there exists a \(\kappa>0\) such that \(P[E_{2}^{c}]\le4\exp(-c_{3}s\log p^{\tau})\le p^{1-\tau}\).
Since the errors are assumed Gaussian, \(\hat{\epsilon}_{n}^{2}\) has an exponential tail whose scale is controlled by \(\mathrm{Var}[\epsilon_{n}]+\mathrm{Var}[x_{n}^{\top}\hat{\Delta}]\), which is upper-bounded by \(\sigma_{\epsilon}^{2}+\eta_{\max}(\Sigma)\|\hat{\Delta}\|_{2}^{2}\). Using the exponential tail we obtain \[\begin{align} P[E_{3}^{c}]=P[\hat{\epsilon}_{n}^{2}\ge nb_{n}] & \le\exp\left[-\frac{nb_{n}}{(\sigma_{\epsilon}^{2}+\eta_{\max}(\Sigma)\|\hat{\Delta}\|_{2}^{2})^{1/2}}\right]\\ & \le\exp\left[-\frac{48K^{2}a^{2}}{\kappa c}\frac{\log(p^{\tau s})}{(\sigma_{\epsilon}^{2}+\eta_{\max}(\Sigma)\|\hat{\Delta}\|_{2}^{2})^{1/2}}\right]\\ & \le\exp\left[-\underset{c_{4}}{\underbrace{\frac{24K^{2}a}{\kappa c\sigma_{u}}}}\log(p^{\tau s})\right]\quad\mathrm{if}\;\{\eta_{\max}(\Sigma)\|\hat{\Delta}\|_{2}^{2}\le\sigma_{\epsilon}^{2}\} \end{align}\] such that \(P[E_{3}^{c}]\le p^{-c_{4}\tau s}\le p^{1-\tau}\), provided the sample size satisfies \(n\ge3C^{2}(\kappa\sigma_{u}^{2})^{-1}\eta_{\max}(\Sigma)s\log p^{\tau}\), which ensures that \(\{\eta_{\max}(\Sigma)\|\hat{\Delta}\|_{2}^{2}\le\sigma_{\epsilon}^{2}\}\) holds.
We write \(S_{n}^{(\epsilon)}=u^{\top}\Psi^{\top}\Psi u\) and apply the Hanson-Wright inequality [21] to obtain \[P\big[S_{n}^{(\epsilon)}<\mathbb{E}[S_{n}^{(\epsilon)}]-t\big]\le\exp\left[-c_{5}\min\left(\frac{t^{2}}{\sigma_{u}^{4}\lVert\Psi^{\top}\Psi\rVert_{F}^{2}},\frac{t}{\sigma_{u}^{2}\lVert\Psi^{\top}\Psi\rVert_{2}}\right)\right]\;,\] noting that \(\mathbb{E}[S_{n}^{(\epsilon)}]=n\sigma_{\epsilon}^{2}\), \(\lVert\Psi^{\top}\Psi\rVert_{2}=a^{2}\), and \(\|\Psi^{\top}\Psi\|_{F}^{2}\le2na^{6}\). Letting \(t=2^{-1}\sigma_{\epsilon}^{2}n\), we obtain \[P\big[n^{-1}S_{n}^{(\epsilon)}<\frac{\sigma_{\epsilon}^{2}}{2}\big]\le\exp\left[-c_{5}\min\left(\frac{\sigma_{\epsilon}^{4}n}{8\sigma_{u}^{4}a^{6}},\frac{\sigma_{\epsilon}^{2}n}{2\sigma_{u}^{2}a^{2}}\right)\right]\;,\] from which it follows that \(P[E_{4}^{c}]\le\exp(-c_{6}n)\) where \(c_{6}=c_{5}/(8a^{2})\).
Applying the self-normalized deviation bound (18 ), we have \[\begin{align} P\Big[|B|\geq\frac{C}{\rho}\delta(n,p^{\tau s})\Big] & \le2\exp\left(-n\frac{C^{2}}{4\rho^{2}}\delta^{2}(n,p^{\tau s})\right)\\ & =2\exp\left(-\underset{c_{7}}{\underbrace{\frac{C^{2}}{4\rho^{2}}}}\log p^{\tau s}\right)\;, \end{align}\] and hence \(P[E_{5}^{c}]\le2p^{-c_{7}\tau s}\).
Finally, we collect the sample size requirements for our bounds, that is
\[n\ge\underset{c_{8}}{\underbrace{\max\left[\frac{C^{2}}{5},C^{2},2^{-1}c^{-1},\frac{2^{10}\sigma_{\max}^{2}}{\eta_{\min}^{2}(\Sigma^{1/2})},2^{5}3^{2},\frac{3C^{2}}{\kappa\sigma_{u}^{2}}\eta_{\max}(\Sigma)\right]}}s\log(p^{\tau})\;.\] In general, one can find a \(c_{2}>c_{8}\) such that \(P[E_{k}^{c}]\le2p^{1-\tau}\) for each \(k\), i.e. we can use \(E_{1}\) as the limiting case, and thus overall we obtain \[P\left[\frac{|\hat{\rho}-\rho_{0}|}{|\rho_{0}|}\le2C\sqrt{\frac{s\log(p^{\tau})}{n}}\right]\ge1-10p^{1-\tau}\rightarrow1\;.\] ◻
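For concreteness, the estimator analysed here is the AR(1) coefficient fitted to the first-stage LASSO residuals: writing \(\hat{\rho}=\sum_{t=2}^{n}\hat{\epsilon}_{t-1}\hat{\epsilon}_{t}/\sum_{t=2}^{n}\hat{\epsilon}_{t-1}^{2}\) and substituting \(\hat{\epsilon}_{t}=\epsilon_{t}+z_{t}\) recovers the decomposition into \(A\) and \(B\) above. A minimal Python sketch of this feasible step (the helper name and penalty level are illustrative assumptions) is:

```python
# Sketch of the feasible step: first-stage LASSO, then an AR(1) fit to the residuals.
import numpy as np
from sklearn.linear_model import Lasso

def estimate_rho(X, y, lam):
    """Fit the LASSO, form residuals e_t, then regress e_t on e_{t-1} (no intercept)."""
    beta_hat = Lasso(alpha=lam, fit_intercept=False).fit(X, y).coef_
    e = y - X @ beta_hat                               # \hat{eps}_t = eps_t + z_t
    return float(np.dot(e[:-1], e[1:]) / np.dot(e[:-1], e[:-1]))
```

No intercept is included since the errors are assumed mean zero; with the data generated in the earlier sketch, `estimate_rho(X, y, lam)` provides a direct numerical check of the rate displayed above.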
Proof (Proposition 4).
Consider establishing \(\bar{\lambda}_{n}\) based on Lemma 1, subject to estimation error in the whitening matrix. Let \(\hat{a}=(1-\hat{\rho}^{2})^{-1/2}\); then \(\|\hat{R}^{\top}L_{0}\|_{F}^{2}=\sum_{i=1}^{n}\sigma_{i}^{2}(\hat{R}^{\top}L_{0})=(n-1)+a\hat{a}^{-1}\) and we have
\[\begin{align} P\left[\|X^{\top}\hat{R}L_{0}u\|_{\infty}\leq\frac{tn}{2}\right] & \ge1-2p\exp\left(-c\min\left[\frac{n^{2}t^{2}}{16((n-1)+a\hat{a}^{-1})},\frac{nt}{2\max\{a\hat{a}^{-1},1\}}\right]\right)\\ & \ge1-2p\exp\left(-c\frac{nt^{2}}{16a\hat{a}^{-1}}\right) \end{align}\] where on the second line we have used \(\hat{a}/a<1\). Taylor expanding \(\hat{a}^{-1}=(1-\hat{\rho}^{2})^{1/2}\) (a concave function of \(\hat{\rho}\)) around \(\hat{\rho}=\rho\) gives \[(1-\hat{\rho}^{2})^{1/2}\le a^{-1}+a|\rho(\rho-\hat{\rho})|\;,\] and using Prop. 3 allows us to choose any \(\bar{\lambda}_{n}\) that satisfies \[\bar{\lambda}_{n}\ge\left(\frac{16}{c}+\frac{16}{c}a^{2}\rho^{2}C\sqrt{\frac{|\mathcal{S}|\log(p^{\tau})}{n}}\right)^{1/2}\sqrt{\frac{\log p^{\tau}}{n}}\;.\] Now we apply Theorem 1: considering the feasible \(\hat{\gamma}_{n}=(2n)^{-1}\|\hat{R}\|_{F}^{2}=(2n)^{-1}(a^{-2}+n-1)\ge2^{-1}\), then with \(|\mathcal{S}|\le|\mathcal{S}_{0}|=s\) and \(\hat{\kappa}=2^{-7}3^{2}\eta_{\min}^{2}(\Sigma^{1/2})\) we have \[\begin{align} \|\Delta\|_{2} & \le2^{5}\sqrt{\frac{2}{3}}\frac{s^{1/2}\bar{\lambda}_{n}}{\eta_{\min}^{2}(\Sigma^{1/2})}\\ & \le\frac{2^{7}}{c^{1/2}\eta_{\min}^{2}(\Sigma^{1/2})}\sqrt{\frac{2}{3}}\left(1+\frac{\rho^{2}}{1-\rho^{2}}C\sqrt{\frac{s\log(p^{\tau})}{n}}\right)\delta(n,p^{s\tau})\;. \end{align}\] ◻
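Putting the two stages together, a self-contained sketch of the feasible GLS-LASSO analysed in this proposition is given below; the function name, the Prais-Winsten-style construction of \(\hat{R}\), the guard on \(|\hat{\rho}|\), and the penalty levels are illustrative assumptions rather than the exact implementation behind the reported simulations.

```python
# Feasible GLS-LASSO sketch: estimate rho from first-stage residuals, build the
# AR(1) whitener, and refit on the whitened data. Constants are ad hoc.
import numpy as np
from sklearn.linear_model import Lasso

def fgls_lasso(X, y, lam, lam_bar):
    n, p = X.shape
    # First stage: plain LASSO and the residual AR(1) coefficient.
    e = y - X @ Lasso(alpha=lam, fit_intercept=False).fit(X, y).coef_
    rho_hat = np.dot(e[:-1], e[1:]) / np.dot(e[:-1], e[:-1])
    rho_hat = float(np.clip(rho_hat, -0.99, 0.99))     # guard against |rho_hat| >= 1
    # Feasible whitening matrix R_hat (lower-bidiagonal, Prais-Winsten style).
    R_hat = np.eye(n)
    R_hat[0, 0] = np.sqrt(1.0 - rho_hat**2)
    R_hat[np.arange(1, n), np.arange(n - 1)] = -rho_hat
    # Second stage: LASSO on the whitened data.
    return Lasso(alpha=lam_bar, fit_intercept=False).fit(R_hat @ X, R_hat @ y).coef_
```

The second-stage penalty \(\bar{\lambda}_{n}\) can be taken of the same \(\sqrt{\log p/n}\) order as the first-stage penalty, as suggested by the choice displayed above.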
Figure 5: Estimation error \(p^{-1/2}\|\hat{\Delta}\|_{2}\) as a function of \(n\) for different settings of \(\rho\) (top: \(p=128\); bottom: \(p=256\)); dashed lines indicate empirical 95% confidence intervals.
Figure 6: Estimation error \(\|\hat{\Delta}\|_{\infty}\) as a function of \(n\) for different settings of \(\rho\) (top: \(p=128\); bottom: \(p=256\)); dashed lines indicate empirical 95% confidence intervals.
For a given stationary variance \(\mathrm{diag}(\Gamma)=\sigma^{2}\).
We are not assuming here that the true support \(\mathcal{S}_{0}\) is recovered, but only that the RE condition holds over the true support.
Note: the stated probability for Prop 3 holds with a choice of \(\kappa=2^{-2}\eta_{\min}^{2}(\Sigma^{1/2})-3^{2}\sigma_{\max}^{2}\frac{\log p}{n}(1+\alpha)^{2}s\) as per Corollary 2.