May 04, 2025
This paper highlights a formal connection between two families of widely used matrix factorization algorithms in numerical linear algebra. One family consists of the Jacobi eigenvalue algorithm and its variants for computing the Hermitian eigendecomposition and singular value decomposition. The other consists of Gaussian elimination and the Gram-Schmidt procedure with various pivoting rules for computing the Cholesky decomposition and QR decomposition respectively.
Both families are cast as special cases of a more general class of factorization algorithms. We provide a randomized pivoting rule for this general class (one which differs substantially from the usual pivoting rules for Gaussian elimination / Gram-Schmidt) that admits a unified analysis of the entire class of algorithms. The result is the same linear rate of convergence for every algorithm in the class, irrespective of which factorization it computes.
One important consequence of this randomized pivoting rule is a provable, effective bound on the numerical stability of the Jacobi eigenvalue algorithm, which addresses a longstanding open problem of Demmel and Veselić ’92 [1].
At first glance, the Jacobi eigenvalue algorithm for computing the eigendecomposition of Hermitian matrices and the Gram-Schmidt procedure for computing the QR decomposition of tall matrices are quite distinct: they take different inputs, compute different factorizations, have different asymptotic complexities, etc. However, there is an important commonality: both methods are iterative and consider only pairs of columns of their inputs at a time. In this work, this connection is taken further, and both algorithms are realized as special cases of a more general matrix factorization algorithm.
Our primary result, Corollary 1, is a unified analysis of all special cases of this general procedure when using a particular randomized pivoting rule; we show that these algorithms are all linearly convergent in expectation, and further that the rate of convergence is unaffected by which particular factorization is being computed.
In finite arithmetic, we provide a framework for showing that instances of this general procedure are numerically stable. This framework is applied to several orthogonalization methods (Theorem 3), as well as the Jacobi eigenvalue algorithm (Theorems 4, 5, and Corollary 4), building on the work of [1] in the latter case.
We first give a brief overview of the history and context of these algorithms. As these algorithms have been extensively studied, we cannot give a comprehensive overview. We instead highlight their origins, practical relevance today, and interactions between them.
Eigendecomposition / Singular value decomposition: The classical Jacobi eigenvalue algorithm (which we henceforth refer to as the two-sided Jacobi eigenvalue algorithm) is an iterative algorithm for computing an approximate eigendecomposition of a given Hermitian matrix that dates back to 1846 [3]. It was rediscovered and popularized in the 1940s and 50s when the advent of computers made large numerical computations feasible (we refer the reader to [4] for a detailed history). Today, it is included in popular textbooks [5], [6] and software packages [7]. Although it is often less preferred due to a slower runtime [5], it has several strengths when compared with other popular methods for diagonalization. First, if the given matrix \(B\) is already close to diagonal, the arithmetic cost is greatly reduced [8]. Second, the computation is highly parallelizable, and each thread only needs a small amount of memory (see, for example, [9]). Third, numerical experiments reveal classes of matrices for which all the eigenvalues appear to be computed with small relative error [1].
The corresponding algorithm for computing the singular value decomposition of a given matrix \(A\in{\mathbb{C}}^{d\times n}\), today called the one-sided Jacobi eigenvalue algorithm, was introduced as an independent algorithm by Hestenes [10], then rediscovered and connected to the classical two-sided Jacobi eigenvalue algorithm by Chartres [11]. It can be viewed as implicitly running the two-sided Jacobi eigenvalue algorithm on the matrix \(B=A^*A\), without ever forming \(B\). It enjoys many of the same advantages as the two-sided Jacobi eigenvalue algorithm [1], [12].
The numerical stability of both the one- and two-sided variants was investigated by Demmel and Veselić in a highly influential paper [1]. They primarily focus on the accuracy with which the small eigenvalues of positive definite matrices are computed. They find “modulo an assumption based on extensive numerical tests” that the Jacobi eigenvalue algorithm essentially produces the best approximation of the small eigenvalues one could hope for in the floating point arithmetic model. A family of matrices with more problematic behavior than those tested by [1] was published by [13], though these matrices do not result in instability of the method overall.
Cholesky decomposition / QR decomposition: Gaussian elimination is the oldest method for factoring a given matrix \(B\) as \(B=LU\) where \(L\) is lower triangular and \(U\) is upper triangular [14]. When \(B\) is positive definite, one is guaranteed that \(U=L^*\) resulting in the Cholesky decomposition \(B=LL^*\). Gaussian elimination with various pivoting rules is the primary algorithm used in practice for the Cholesky decomposition [7], [15], [16].
There are many methods for the QR decomposition. Both classical and modified Gram-Schmidt applied to \(A\in{\mathbb{C}}^{d\times n}\) are mathematically equivalent to Gaussian elimination applied to the matrix \(B=A^*A\) [17]. Some other algorithms actively exploit the connection between the QR and Cholesky decompositions: [18] proposes computing the Cholesky decomposition \(A^*A=R^*R\) and then computing \(Q=AR^{-1}\), and the algorithm(s) of [19] and [20] explore this idea further, with modifications to improve stability.
Several works consider using these algorithms in conjunction to improve accuracy. For instance, [21] observes that one can accelerate the one-sided Jacobi eigenvalue algorithm for computing the SVD by first computing \(A=QR\) and then using the one-sided Jacobi algorithm to find the SVD of the upper triangular factor \(R\). Similarly, [22] shows that the stability of the two-sided Jacobi algorithm can be improved by first running Cholesky on a positive definite matrix \(B\), then applying one-sided Jacobi to the Cholesky factor of \(B\).
Other works take inspiration from algorithms for one task for the other. For instance, [23] proposes a Jacobi-like method for the QR decomposition, applying Givens rotations to transform a matrix \(A\) into an upper triangular one \(R\). The accumulation of all the Givens rotations then forms \(Q\) (which is akin to the method for the QR decomposition using Householder reflections [15]). This method for QR now commonly appears in numerical linear algebra textbooks, e.g. [5].
The central focus of this paper is the striking similarity between the operations performed by the one-sided Jacobi eigenvalue algorithm and the modified Gram-Schmidt procedure. This similarity has been noted previously; for instance, [24] observes that the operations are actually the same under a certain kind of infinite matrix limit. We show that this similarity can be exploited to obtain a unified analysis of these algorithms and more.
In Section 2, we present the Jacobi eigenvalue algorithm and modified Gram-Schmidt procedure (MGS) in a way that highlights their similarity. In Section 3 we formally describe an abstract algorithm which encompasses the Jacobi eigenvalue algorithm and MGS as special cases.
Our results about this generalized algorithm are presented in the final two sections. Section 4 focuses on exact-arithmetic implementations of the algorithm, in the real RAM model. Modulo a mild condition number assumption (which is satisfied by small perturbations of fixed matrices), we show that \(O(n^2\log(n/\delta))\) iterations of this procedure suffice to produce \(\delta\)-approximate factorizations (stated precisely in Corollary 1). For many reasonable special cases of this generalized algorithm, each iteration costs only \(O(n)\) arithmetic operations, resulting in many \(\widetilde{O}(n^3)\)-time factorization algorithms. Additionally, this section confirms a recent conjecture of Steinerberger about the convergence of a particular method for orthogonalization [25] (see Remark 6).
Section 5 studies the general algorithm in finite arithmetic. Here, we have two primary results: the first is that a large class of orthogonalization algorithms are numerically stable (including an algorithm for computing the QR decomposition specifically). The second is that with randomized pivoting, the Jacobi eigenvalue algorithm is stable. Specifically, the “assumption based on extensive numerical tests” needed by Demmel and Veselić in [1] is proved to hold with high probability over the randomness of the pivoting (see Corollary 2).
Here we describe a simple connection between the SVD and QR decomposition (and the corresponding connection between the eigendecomposition and Cholesky decomposition) and algorithms for computing them. To see the connection, we first express the QR decomposition and a factorization that is almost the SVD in a common form: given \(A\) with full column rank, write \[{\label{a11}A=QT^{-1}}\tag{1}\] where \(Q\) has orthogonal columns and \(T\) is nonsingular. When \(T\) is unitary, by further decomposing \(Q=U\sqrt{Q^*Q}\), one obtains the singular value decomposition. When \(T\) is upper triangular and \(Q^*Q=I\), this is exactly the QR decomposition. Analogously, the eigendecomposition and Cholesky decomposition of \(B=A^*A\) can both be expressed as \(B=(T^{-1})^*DT^{-1}\) where \(D=Q^*Q\). The four cases are summarized in the following table.
|  | \(B=(T^{-1})^*DT^{-1}\) | \(A=QT^{-1}\) |
|---|---|---|
| \(T\) is unitary and \(D=Q^*Q\) is diagonal | Eigendecomposition | Singular value decomposition |
| \(T\) is upper triangular and \(D=Q^*Q=I\) | Cholesky decomposition | QR decomposition |
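To make the common form 1 concrete, here is a small numpy check (an illustration of ours, not code from any referenced implementation) that both the QR decomposition and the SVD of a random matrix fit the pattern \(A=QT^{-1}\) with the constraints from the table above.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 5))   # tall matrix, full column rank almost surely

# QR decomposition: A = QR, i.e. T = R^{-1} is upper triangular and Q^*Q = I.
Q, R = np.linalg.qr(A)
T = np.linalg.inv(R)
assert np.allclose(A, Q @ np.linalg.inv(T))
assert np.allclose(Q.T @ Q, np.eye(5))

# SVD: A = U S V^*, i.e. Q = US has orthogonal columns and T = V is unitary.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
Q, T = U * s, Vt.T
assert np.allclose(A, Q @ np.linalg.inv(T))
assert np.allclose(Q.T @ Q, np.diag(s**2))   # D = Q^*Q is diagonal
```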
Notably, both Modified Gram Schmidt (MGS) and one-sided Jacobi can be interpreted as methods for producing an orthogonal basis \(Q=\begin{bmatrix}q_1&\cdots&q_n\end{bmatrix}\) for a given collection of vectors, expressed as columns of \(A=\begin{bmatrix}a_1&\cdots&a_n\end{bmatrix}\) (along with the corresponding column operation sending \(A\) to \(Q\)). The algorithms themselves closely resemble each other too: they both iteratively pick pairs of indices \((i,j)\) and replace the columns \(a_i,a_j\) with a different orthogonal basis for their span. By repeating this many times with appropriate selection of \((i,j)\) each time, the collection of vectors will approach an orthogonal set. More specifically let \(A^{(t)}=\begin{bmatrix}a_1^{(t)}&\cdots&a_n^{(t)}\end{bmatrix}\) be the state after this has been done for \(t\) pairs \((i,j)\). For each \(t\), the Jacobi eigenvalue algorithm selects a pair \((i,j)\) and then updates \[\label{a12}\begin{align} a_i^{(t+1)}&=\cos(\theta)a_i^{(t)}+ \sin(\theta)a_j^{(t)} \\ a_j^{(t+1)}&=-\sin(\theta)a_i^{(t)}+ \cos(\theta)a_j^{(t)} \\ a_\ell^{(t+1)}&=a_\ell^{(t)}\quad\forall\ell\not\in{\left\{{i,j}\right\}}\end{align}\tag{2}\] where \(\theta\) is such that the resulting columns \(a_i^{(t+1)}\) and \(a_j^{(t+1)}\) are orthogonal. MGS selects \(i<j\) then updates \[\label{a13}\begin{align} a_{j}^{(t+1)}&=\frac{a^{(t)}_{j}-\left\langle a^{(t)}_{j},a^{(t)}_{i}\right\rangle a^{(t)}_{i}}{\left\|a^{(t)}_{j}-\left\langle a^{(t)}_{j},a^{(t)}_{i}\right\rangle a^{(t)}_{i}\right\|} \\ a_\ell^{(t+1)}&=a_\ell^{(t)}\quad\forall\ell\neq j.\end{align}\tag{3}\] The update equation 2 is a Givens rotation of \(A^{(t)}\) (also called a plane rotation); importantly it is unitary. The product of these rotations forms \(T\) in the decomposition 1 . The update equation 3 is an elementary matrix operation followed by a scaling; importantly, it is upper triangular. The product of these elementary operations forms \(T\) in the decomposition 1 .
In both cases, the result of this update is that the \(i\)th and \(j\)th columns of \(A^{(t+1)}\) will be orthogonal. Despite their similarity, the existing analysis of these algorithms is quite different, as is their empirical performance measured in terms of computational and communication costs and numerical stability.
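Before discussing pivoting rules, the following minimal numpy sketch spells out one way to realize each pairwise update, 2 and 3 , for real-valued columns; the explicit formula for the angle \(\theta\) and the division by \(\left\|a_i\right\|^2\) in the projection are standard choices that we fill in here, since the text above leaves them implicit.

```python
import numpy as np

def jacobi_pair_update(A, i, j):
    """Update (2): rotate columns i and j so that they become orthogonal (real case).
    The angle satisfies tan(2*theta) = 2<a_i, a_j> / (||a_i||^2 - ||a_j||^2)."""
    ai, aj = A[:, i].copy(), A[:, j].copy()
    theta = 0.5 * np.arctan2(2 * (ai @ aj), ai @ ai - aj @ aj)
    c, s = np.cos(theta), np.sin(theta)
    A[:, i], A[:, j] = c * ai + s * aj, -s * ai + c * aj

def mgs_pair_update(A, i, j):
    """Update (3): project column j off column i and renormalize.
    The coefficient divides by ||a_i||^2 so that a non-unit a_i is also handled."""
    ai, aj = A[:, i], A[:, j]
    r = aj - (aj @ ai) / (ai @ ai) * ai
    A[:, j] = r / np.linalg.norm(r)

# Either update leaves the two pivoted columns orthogonal.
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 4))
jacobi_pair_update(A, 0, 1)
mgs_pair_update(A, 2, 3)
assert abs(A[:, 0] @ A[:, 1]) < 1e-10 and abs(A[:, 2] @ A[:, 3]) < 1e-10
```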
In both cases, the strategy for picking \((i,j)\) at each time \(t\) is determined by a “pivoting rule”. Multiple pivoting rules have been proposed; we group them into three categories to discuss: fixed, adaptive, and randomized. Before doing so, we must clarify some terminology. There is some subtlety in what the literature refers to as “one iteration” of either procedure. For the Jacobi eigenvalue algorithm, one iteration is sometimes taken to mean one “sweep” (which we describe in a moment) of all \(n(n-1)/2\) pairs \((i,j)\). For MGS (more specifically, left-looking MGS), one iteration typically refers to the grouping of updates \[{\label{a14} (1,i),(2,i),\ldots,(i-1,i)}\tag{4}\] for each \(i\). For Gaussian elimination (and right-looking MGS), it typically refers to the grouping of updates \[{\label{a15} (i,i+1),(i,i+2),\ldots,(i,n)}\tag{5}\] for each \(i\). Because of this, the word “pivot” in those contexts typically refers to the single column index \(i\). In this paper, we refer to each selection of \((i,j)\) and update of the corresponding column(s) as one iteration, i.e. \(A^{(t)}\) defined by either 2 or 3 is the state after \(t\) iterations. One pivot then corresponds to a pair of column indices \((i,j)\).
Fixed pivot rules: These are rules that fix a sequence of \((i,j)\) ahead of time. The ones we highlight for the Jacobi eigenvalue algorithm are called cyclic or sweep rules, due to [26] (see also [27]). The row-cyclic version uses the sequence \[(1,2),(1,3),(1,4),(1,5),\ldots,(1,n),(2,3),(2,4),(2,5),\ldots,(2,n),\ldots (n-1,n)\] and repeats it over and over again. The column-cyclic version uses the sequence \[(1,2),(1,3),(2,3),(1,4),(2,4),(3,4),(1,5),(2,5),(3,5),(4,5),\ldots,(n-1,n)\] and repeats it over and over again. Each run through the sequence of \(n(n-1)/2\) pairs is referred to as one “sweep.” In the context of Gaussian elimination and MGS, those sweeps correspond to applying the groupings 4 or 5 for \(i=1,\ldots,n-1\). Both running MGS for a single sweep and running it for two sweeps are popular proposals.
For appropriately chosen rotations, [27] show that row-cyclic and column-cyclic Jacobi converges, and [8] improve this to (eventual) quadratic convergence. However, there is no rigorous theory to predict the number of sweeps required to achieve a desired distance to diagonal [18]. In practice, it is observed to be a constant number of sweeps [5]. Better theoretical results are available for the next category of pivoting rules.
Adaptive pivot rules: These are pivoting rules that use the current value of \(A^{(t)}\) to decide the next pivots. The original rule proposed by Jacobi [3], which we call the greedy pivoting rule, was to pick \[{\label{a16}(i,j)=\mathop{\mathrm{arg\,max}}_{i,j:i\neq j}\mleft|\left\langle a_i,a_j\right\rangle\mright|.}\tag{6}\] This is somewhat similar to the column-pivoting rule of Gram-Schmidt (implicitly equivalent to the complete pivoting rule of Gaussian elimination). This rule for Gram-Schmidt maintains a list of indices \(\mathcal{I}\) initialized to \([n]\). Then \[{\label{a17}i=\mathop{\mathrm{arg\,max}}_{i\in\mathcal{I}}\left\|a_i\right\|}\tag{7}\] is computed, \(i\) is removed from \(\mathcal{I}\), and the updates \({\left\{{(i,j):j\in\mathcal{I}}\right\}}\) are performed (in any order). This is repeated until \(\mathcal{I}\) is empty. Strictly speaking, these rules will not necessarily compute the QR decomposition of \(A\), since we are no longer assured that the update pairs \((i,j)\) satisfy \(i<j\). However, if one lets \(\sigma\) be the permutation which records the order in which indices are removed from \(\mathcal{I}\), then we are assured \(\sigma(i)<\sigma(j)\), and therefore the method computes the QR decomposition of \(AP_\sigma\) where \(P_\sigma\) is the permutation matrix associated with \(\sigma\).
The same pivoting strategy has been considered for one-sided Jacobi [28]. Numerical experiments showed that this pivoting strategy can improve the rate of convergence in several (but not all) cases (see [28] for more details).
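As a rough illustration (ours, not taken from any of the cited implementations), the two adaptive rules 6 and 7 can be sketched as follows; in the second routine the residual columns are deliberately left unnormalized so that their norms can drive the next selection, which matches the usual column-pivoted Gram-Schmidt formulation, and the returned order records the permutation \(\sigma\).

```python
import numpy as np

def greedy_jacobi_pivot(B):
    """Greedy rule (6): the off-diagonal pair maximizing |<a_i, a_j>| = |b_ij|."""
    C = np.abs(B).astype(float)
    np.fill_diagonal(C, -np.inf)
    return np.unravel_index(np.argmax(C), C.shape)

def column_pivot_order(A):
    """Rule (7): repeatedly pick the remaining column of largest (residual) norm,
    then project it out of the columns still in the index list."""
    A = np.array(A, dtype=float)
    remaining, order = list(range(A.shape[1])), []
    while remaining:
        i = max(remaining, key=lambda c: np.linalg.norm(A[:, c]))
        remaining.remove(i)
        order.append(i)                      # records the permutation sigma
        q = A[:, i] / np.linalg.norm(A[:, i])
        for j in remaining:                  # the updates {(i, j) : j in I}
            A[:, j] -= (A[:, j] @ q) * q
    return order
```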
Randomized pivot rules: There seems to be much less attention given to randomized pivot rules. A recent work of Chen et al. [29] considers the partial Cholesky and QR decompositions for computing low-rank approximations. They study a randomized variant of the complete pivoting rule 7 where \(i\in\mathcal{I}\) is sampled with probability proportional to \(\left\|a_i\right\|^2\). This and similar ideas had been considered before as well [30]–[32].
One exceedingly simple randomized rule is just to sample \((i,j)\) uniformly at random among all pairs of indices. We were unable to find a reference that explicitly describes this rule (or any randomized rule) for the Jacobi eigenvalue algorithm. This is somewhat surprising because the proof of global convergence of the greedy pivoting rule 6 essentially applies the probabilistic method to the hypothetical algorithm that uses this uniformly random pivot (see, for example, Section 5 of [33]).
In the context of orthogonalization, we note an important distinction between sampling \((i,j)\) uniformly at random among all pairs of indices and sampling only among pairs with \(i<j\) (or \(\sigma(i)<\sigma(j)\) for some permutation \(\sigma\)). In the latter case, we are assured the computed decomposition (if the method converges) is the QR decomposition \(AP_\sigma=QR\). However, in the former case, we are given no assurances about \(R\) being upper triangular. That rule (without the \(i<j\) constraint) was proposed by Steinerberger [25], and both rules (with and without the \(i<j\) constraint) were concurrently proposed by the authors in [34]. Both of those works were motivated by designing a preconditioner for the Kaczmarz linear system solver and drew analogies between the method and the Kaczmarz solver itself; neither work identified the connection to the Jacobi eigenvalue algorithm. Steinerberger conjectured that the rate of convergence of this procedure is \(\kappa(A^{(t)})\approx\exp(-O(t/n^2))\kappa(A)\) based on numerical experiments and a heuristic. The authors proved that \((-\log\det|A^{(t)}|)\le\exp(-O(t/n^2))(-\log\det\mleft|A\mright|)\).
Both Gram Schmidt and Jacobi admit block variants, which can be abstractly described as follows: rather than pick a pair of indices \((i,j)\) at each step and updating columns \(i\) and \(j\) of \(A^{(t)}\), one can instead pick any subset of indices \(J\subset[n]\) and update all the corresponding columns. We call \(J\) the pivot set. In the non-block case, pivoting on \((i,j)\) ensures that \[\left\langle a^{(t+1)}_i,a^{(t+1)}_j\right\rangle=0.\] In the block variant, pivoting on \(J\) ensures that \[\left\langle a^{(t+1)}_i,a^{(t+1)}_j\right\rangle=0\quad\forall i,j\in J,i\neq j.\] These methods are often motivated by considering the practical performance of these algorithms on modern hardware. For example, one may be able to take advantage of large cache sizes to perform one iteration for \(\mleft|J\mright|>2\) very quickly.
The block Jacobi eigenvalue algorithm of Drmač [35] is one such method. It fixes a partition \(I_1\sqcup\cdots\sqcup I_m\subset[n]\) and divides the given matrix \(B\) into blocks corresponding to those index sets. Then, the pivot set \(J\) is selected to be \(I_{\ell}\cup I_{\ell'}\), where the pair \((\ell,\ell')\) can be selected analogously to how \((i,j)\) were selected in the non-block version. Both [36] and [37] show convergence of block Jacobi under various pivoting strategies. There are many block versions of Gram Schmidt along similar lines, see [38] for a recent survey.
In describing the connection between Gram Schmidt and the one-sided Jacobi eigenvalue algorithm, we’ve already hinted at what the generalized algorithm is. Once a pivot set \(J\) has been selected, we may abstract away the process by which the corresponding columns are updated: all we assume is that it is an invertible column operation which results in orthogonal columns. We will use the letter \(S\) for the \(\mleft|J\mright|\times \mleft|J\mright|\) matrix performing this column operation. The Jacobi eigenvalue algorithm corresponds to the special case where \(S\) is unitary, and Gram Schmidt to the case where \(S\) is upper triangular. The general pseudo-code for this and the two-sided version is below. We first clarify some notation. \(\text{GL}_k({\mathbb{C}})\) is the set of all invertible \(k\times k\) matrices over \({\mathbb{C}}\). \(\binom{[n]}k\) is the set of all subsets of \([n]\) of size \(k\). For an index set \(J={\left\{{i_1,\ldots,i_k}\right\}}\in\binom{[n]}k,i_1<\cdots<i_k\), denote the submatrices \[A_J=\begin{bmatrix} a_{1,i_1} &\cdots& a_{1,i_k}\\ \vdots&&\vdots\\ a_{d,i_1} &\cdots& a_{d,i_k}\\ \end{bmatrix}\in{\mathbb{C}}^{d\times k} \quad\And\quad B_{JJ}=\begin{bmatrix}b_{i_1i_1} &\cdots& b_{i_1i_k} \\ \vdots && \vdots \\ b_{i_ki_1} &\cdots& b_{i_ki_k}\end{bmatrix}\in{\mathbb{C}}^{k\times k}.\] For a permutation \(\sigma\) on \([n]\) let \(P_\sigma\) be the associated permutation matrix, i.e. \(P_\sigma e_{j}=e_{\sigma(j)}\).
We present both versions of the algorithms side-by-side to highlight their mathematical equivalence: one can start with \(B=A^*A\) and maintain \(A^{(t)*}A^{(t)}=B^{(t)}\) by selecting the same \(T\) in line 7 of either algorithm. Because of this correspondence, we will refer to various lines of the algorithm without needing to specify which version we are talking about. We also highlight that \(T\) need not be formed and inverted explicitly.
There are many pivot rules one can use for the selection in line 5. This paper uses a randomized pivoting rule where \(J\) is sampled uniformly at random from \(\binom{[n]}k\). We refer to this as “randomized size \(k\) pivoting”.
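The following numpy sketch is our own rendering of the one-sided version with randomized size \(k\) pivoting; the function names are ours, the line numbers in the comments are our reading of the corresponding steps of Algorithm 1, and \(T\) is formed and inverted explicitly only to keep the sketch short (as noted above, this is not necessary in practice).

```python
import numpy as np

def randomized_factorization(A, k, iters, orthogonalize, seed=0):
    """One-sided version of Algorithm 1 with randomized size-k pivoting (a sketch).
    `orthogonalize(A_J)` plays the role of line 6: it returns an invertible S
    such that the columns of A_J @ S are orthogonal."""
    A = np.array(A, dtype=float)
    n = A.shape[1]
    T_acc = np.eye(n)                        # invariant: A_input = A_current @ T_acc
    rng = np.random.default_rng(seed)
    for _ in range(iters):
        J = np.sort(rng.choice(n, size=k, replace=False))   # line 5: random pivot set
        S = orthogonalize(A[:, J])                          # line 6
        A[:, J] = A[:, J] @ S                               # column update
        T = np.eye(n)
        T[np.ix_(J, J)] = np.linalg.inv(S)                  # line 7
        T_acc = T @ T_acc
    return A, T_acc

def S_unitary(AJ):
    """Unitary choice of S (Jacobi / eigendecomposition row of the table below)."""
    _, V = np.linalg.eigh(AJ.T @ AJ)
    return V

def S_upper_triangular(AJ):
    """Upper triangular choice of S (Gram-Schmidt / QR row of the table below)."""
    _, R = np.linalg.qr(AJ)
    return np.linalg.inv(R)

rng = np.random.default_rng(1)
A0 = rng.standard_normal((10, 6))
Q, T_acc = randomized_factorization(A0, k=2, iters=200, orthogonalize=S_upper_triangular)
assert np.allclose(A0, Q @ T_acc)            # the exact factorization A = Q T_acc
print(np.linalg.norm(Q.T @ Q - np.diag(np.diag(Q.T @ Q))))  # off-diagonal mass is tiny
```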
We obtain a unified analysis of all special cases of Algorithm 1 with randomized size \(k\) pivoting. The mechanism by which \(S\) is selected in line 6 and the additional structure one wishes to impose upon \(S\) do not affect our analysis in any way—in expectation, the rate of convergence depends only on \(k\) and \(n\). For concreteness, Table 2 lists several decompositions one can obtain by imposing different constraints on \(S\) and \((A_J^{(t)}S)^*(A_J^{(t)}S)=S^*B^{(t)}S\).
Decomposition name | \(S\) | \((A_J^{(t)}S)^*(A_J^{(t)}S)=S^*B^{(t)}S\) |
---|---|---|
Eigendecomposition / SVD | Unitary | Diagonal |
Cholesky / QR | Upper triangular | Identity |
LDL / unnamed | Unit upper triangular | Diagonal |
unnamed / QL | Lower triangular | Identity |
Gram matrix / Orthogonalization | Nonsingular | Identity |
Remark 1 (Using recursion to compute \(S\)). Let’s revisit line 6 of either algorithm. Computing \(S\) is itself a matrix decomposition problem. In fact, the decomposition of \(B_{JJ}\) or \(A_J\) one must find is exactly the kind of decomposition one seeks to compute of \(B\) or \(A\). This points to the natural choice of using the algorithm recursively. Alternatively, one may use any off-the-shelf factorization algorithm, even an expensive one, since the problem instance is much smaller.
Remark 2 (Parallelizability). When two pivot sets are disjoint, the corresponding updates may occur in parallel. The results in this paper can be extended to the case where multiple disjoint pivots \(J_1,\ldots,J_\ell\in\binom{[n]}k\) are sampled at a time where the marginal distribution of each \(J_j\) is uniform, and then the corresponding pivots are performed in any order.
Remark 3 (Fixed pivot size). There is no reason to keep the pivot size \(\mleft|J\mright|=k\) constant, other than that it simplifies the analysis. \(J\) may be sampled from any distribution such that the probability that \({\left\{{i,j}\right\}}\subset J\) is equal for all distinct pairs \(i,j\).
This section shows that one- and two-sided versions of Algorithm 1 quickly converge to the factorizations \(A=QT_{\textrm{acc}}\) and \(B=T_{\textrm{acc}}^*DT_{\textrm{acc}}\) where \(Q\) is nearly orthogonal and \(D\) is nearly diagonal. Concretely, we have the following results, stated in terms of the diagonal-normalized condition number \[\label{a20} \hat{\kappa}(B) = \kappa(D_B^{-1/2}BD_B^{-1/2})\tag{8}\] where \(\kappa(B)=\left\|B\right\|\cdot\|B^{-1}\|\) and \(D_B = B \odot I\). This condition number can be arbitrarily smaller than the usual condition number \(\kappa(\cdot)\) (consider, for example, any diagonal matrix), but cannot be much larger, \[\hat{\kappa}(B)\le n\kappa(B).\]
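As a quick sketch (our own helper, not from any library), the diagonal-normalized condition number 8 can be computed and contrasted with the usual condition number as follows.

```python
import numpy as np

def kappa_hat(B):
    """Diagonal-normalized condition number of (8): kappa(D_B^{-1/2} B D_B^{-1/2})."""
    d = np.sqrt(np.diag(B))
    return np.linalg.cond(B / np.outer(d, d))

B = np.diag([1.0, 1e-8])
print(np.linalg.cond(B), kappa_hat(B))   # kappa(B) = 1e8 while kappa_hat(B) = 1
```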
Corollary 1. Let \(0<\delta < 1\). Given positive definite \(B\in{\mathbb{C}}^{n\times n}\), run Algorithm 1 with randomized size \(k\) pivoting for \[t\ge \frac{n(n-1)}{k(k-1)}\log\mleft({4n\hat{\kappa}(B)}/{\delta^2}\mright)\] iterations. Let \(T_{\textrm{acc}}=T_{\textrm{acc}}^{(t)}\), \(D=B^{(t)}\) be the values returned by the algorithm. Let \(D'=D\odot I\) be just the diagonal entries of \(D\). Then \[\left\|B-T_{\textrm{acc}}^*D'T_{\textrm{acc}}\right\|\le\left\|B\right\|\frac{n\delta}{1-n\delta}\] with probability at least \(1-1/n\). Additionally, \(B=T_{\textrm{acc}}^* DT_{\textrm{acc}}\) exactly where \(D\) is close to diagonal in the sense that \[\mathop{\mathrm{\mathbb{E}}}\sqrt{\sum_{i\neq j}\frac{\mleft|d_{ij}\mright|^2}{d_{ii}d_{jj}} }\le\delta\] where \(d_{ij}\) is the \((i,j)\) entry of \(D\). Equivalently, if \(B=A^*A\), then the one-sided version returns \(Q=A^{(t)}\) forming the exact factorization \(A=QT_{\textrm{acc}}\) where \(Q\) is nearly orthogonal in the sense that \[\mathop{\mathrm{\mathbb{E}}}\sqrt{\sum_{i\neq j}\frac{\mleft|\left\langle q_i,q_j\right\rangle\mright|^2}{\left\|q_i\right\|^2\left\|q_j\right\|^2} }\le\delta\] where \(q_i\) is the \(i^{th}\) column of \(Q\). Finally, it is possible to execute lines 6-8 so that \(T_{\textrm{acc}}\) is ensured to be (special) unitary or (unit) upper/lower triangular, giving any of the factorizations listed in Table 2.
Our analysis of Algorithm 1 follows a similar strategy to the classical analysis of the Jacobi eigenvalue algorithm. To highlight the connection, we review the classical analysis here. Say that \(B=B^{(0)},B^{(1)},B^{(2)},\ldots\) is the sequence of matrices produced by the iterations of the Jacobi eigenvalue algorithm. The proof strategy is to identify a monotone invariant, i.e. some quantity which strictly decreases as iterations are performed. In the classical analysis, this monotone invariant is taken to be the squared Frobenius-norm distance of the current iterate \(B^{(t)}\) to the closest diagonal matrix, denoted \(\mathop{\mathrm{off}}(\cdot)\). This quantity can be compactly expressed as \[\mathop{\mathrm{off}}(B) :=\inf_{D\text{ diagonal}}\left\|B-D\right\|_F^2 =\sum_{i,j\in[n],i\neq j}\mleft|b_{ij}\mright|^2.\] A straightforward calculation reveals that \[{\label{a21}\mathop{\mathrm{off}}(B^{(t+1)})=\mathop{\mathrm{off}}(B^{(t)})-2|b^{(t)}_{ij}|^2}\tag{9}\] where \((i,j)\) is the pivot. By appealing to the probabilistic method, if \((i,j)\) is picked according to the greedy pivot rule (i.e. \((i,j)=\mathop{\mathrm{arg\,max}}_{i,j:i\neq j}|b^{(t)}_{ij}|^2\)), then one has \[|b^{(t)}_{ij}|^2\ge\frac{1}{n(n-1)}\mathop{\mathrm{off}}(B^{(t)})\] which ensures global convergence of \(\mathop{\mathrm{off}}(B^{(t)})\to0\) at a linear rate. By using a randomized size 2 pivoting strategy, one has equality in expected value, \[{\label{a22}\mathop{\mathrm{\mathbb{E}}}\mathop{\mathrm{off}}(B^{(t+1)})=\mleft(1-\frac{2}{n(n-1)}\mright)\mathop{\mathrm{off}}(B^{(t)}),}\tag{10}\] which again ensures global convergence in expectation at a linear rate.
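The update formula 9 is easy to check numerically; in the sketch below (ours, real symmetric case) the rotation angle \(\theta=\tfrac12\arctan\frac{2b_{ij}}{b_{jj}-b_{ii}}\) is one standard choice that zeroes the pivot entry.

```python
import numpy as np

def off(B):
    """off(B): squared Frobenius distance to the nearest diagonal matrix."""
    return np.sum(np.abs(B) ** 2) - np.sum(np.abs(np.diag(B)) ** 2)

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 5))
B = X @ X.T                       # real symmetric test matrix
i, j = 1, 3
theta = 0.5 * np.arctan2(2 * B[i, j], B[j, j] - B[i, i])   # zeroes the (i, j) entry
G = np.eye(5)
G[i, i] = G[j, j] = np.cos(theta)
G[i, j], G[j, i] = np.sin(theta), -np.sin(theta)
B_next = G.T @ B @ G
assert np.isclose(B_next[i, j], 0.0)
assert np.isclose(off(B_next), off(B) - 2 * B[i, j] ** 2)   # update formula (9)
```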
In the more general setting where \(B^{(t)}\) is the sequence produced by non-unitary transformations, we no longer have 9 (and therefore not 10 either). Instead, we obtain an expression similar to 10 but with a different function in place of \(\mathop{\mathrm{off}}(\cdot)\).
Our potential function is the following, \[{\label{a23} \Gamma(B):=\mathop{\mathrm{tr}}(B\odot B^{-1}-I) =\sum_{i\in[n]}(b_{ii}\cdot(B^{-1})_{ii}-1) =-\sum_{i,j\in[n],i\neq j}b_{ij}\cdot(B^{-1})_{ji}.}\tag{11}\] It is not immediately clear that convergence of \(\Gamma(\cdot)\to0\) implies convergence of positive definite \(B^{(t)}\) to diagonal, and in fact this is not true for indefinite matrices. For definite matrices, this implication does hold; see Section 6 for several proofs and quantitative statements of this implication.
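In code, the potential 11 is a one-liner; the sketch below (ours) also checks two properties used later: invariance under diagonal scaling (see Section 6) and vanishing exactly on diagonal matrices (Lemma 6).

```python
import numpy as np

def gamma(B):
    """Potential function (11): Gamma(B) = tr(B ⊙ B^{-1} - I)."""
    return float(np.sum(np.diag(B) * np.diag(np.linalg.inv(B)))) - B.shape[0]

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 5))
B = X @ X.T + np.eye(5)                       # positive definite
D = np.diag(rng.uniform(0.1, 10.0, size=5))
assert np.isclose(gamma(B), gamma(D @ B @ D))         # invariant under diagonal scaling
assert np.isclose(gamma(np.diag(np.diag(B))), 0.0)    # zero on diagonal matrices
```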
The analysis to show \(\Gamma(B^{(t)}) \to 0\) consists of two lemmas: the first is an update formula analogous to 9 for \(\Gamma(\cdot)\), and the second is the computation of its expectation under the randomized pivoting rule analogous to 10 .
Lemma 1 (Deterministic update formula). Fix positive definite \(B\in{\mathbb{C}}^{n\times n}\). Let \(J\subset[n]\) be any subset of indices and \(S\in{\mathbb{C}}^{\mleft|J\mright|\times\mleft|J\mright|}\) be any nonsingular matrix such that \(S^*B_{J,J}S\) is diagonal. Denote \(T=P_\sigma\begin{bmatrix}S^{-1}\\&I\end{bmatrix}P_\sigma^*\) where \(\sigma\) is any permutation with \(\sigma(J)=[\mleft|J\mright|]\). Then \[\Gamma((T^{-1})^*BT^{-1})=\Gamma(B)+\sum_{i,j\in J,i\neq j}b_{ij}\cdot(B^{-1})_{ji}.\]
Proof. Put \(P=P_\sigma\). Note that \(\Gamma(\cdot)\) is invariant under conjugation by permutation matrices, so \[\Gamma(T^{-*}BT^{-1})=\Gamma((P^*T^{-*}P)(P^*BP)(P^*T^{-1}P)).\] We can express \(P^*BP\) and \(P^*TP\) as block matrices \[P^*BP=\begin{bmatrix}B_{JJ}&B_{J,J^c}\\B_{J^c,J}&B_{J^c,J^c}\end{bmatrix} \quad\And\quad P^*TP=\begin{bmatrix}S^{-1}\\&I\end{bmatrix}.\] For the remainder of the proof, we replace \(P^*BP\) with \(B\) and \(P^*TP\) with \(T\). Then \[\begin{align} \Gamma(T^{-*}BT^{-1})-\Gamma(B) =&\mathop{\mathrm{tr}}\mleft(T^{-*}BT^{-1}\odot TB^{-1}T^*\mright)-\mathop{\mathrm{tr}}\mleft(B\odot B^{-1}\mright) \\=&\mathop{\mathrm{tr}}\mleft(\begin{bmatrix} S^*\\&I\end{bmatrix}B\begin{bmatrix} S\\&I\end{bmatrix}\odot\begin{bmatrix} S^{-1}\\&I\end{bmatrix}B^{-1}\begin{bmatrix} S^{-*}\\&I\end{bmatrix}\mright)-\mathop{\mathrm{tr}}\mleft(B\odot B^{-1}\mright) \\=&\mathop{\mathrm{tr}}\mleft( S^*B_{JJ} S\odot S^{-1}(B^{-1})_{JJ} S^{-*}\mright)-\mathop{\mathrm{tr}}(B_{JJ}\odot(B^{-1})_{JJ}) \end{align}\] By assumption, \(S^*B_{J,J} S=D\) is diagonal. In particular, we have \[\mathop{\mathrm{tr}}\mleft( S^*B_{JJ} S\odot S^{-1}(B^{-1})_{JJ} S^{-*}\mright) =\mathop{\mathrm{tr}}\mleft(S^*B_{JJ} S\cdot S^{-1}(B^{-1})_{JJ} S^{-*}\mright) =\mathop{\mathrm{tr}}\mleft(B_{J,J}\cdot(B^{-1})_{J,J}\mright).\] This gives \[\begin{align} \Gamma(T^{-*}BT^{-1})-\Gamma(B) &=\mathop{\mathrm{tr}}\mleft(B_{JJ}\cdot(B^{-1})_{JJ}\mright)-\mathop{\mathrm{tr}}\mleft(B_{JJ}\odot(B^{-1})_{JJ}\mright) \\&= \sum_{i,j\in J}b_{ij}\cdot(B^{-1})_{ji} - \sum_{j\in J}b_{jj}\cdot(B^{-1})_{jj} \\&=\sum_{i,j\in J,i\neq j}b_{ij}\cdot(B^{-1})_{ji} \end{align}\] as required. ◻
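A small numerical sanity check of Lemma 1 (ours, real symmetric case), with \(S\) taken to be the eigenvector matrix of the pivot block:

```python
import numpy as np

gamma = lambda M: float(np.sum(np.diag(M) * np.diag(np.linalg.inv(M)))) - M.shape[0]

rng = np.random.default_rng(0)
n = 6
J = np.array([0, 2, 5])
X = rng.standard_normal((n, n))
B = X @ X.T + np.eye(n)                      # positive definite
Binv = np.linalg.inv(B)

_, S = np.linalg.eigh(B[np.ix_(J, J)])       # S^* B_JJ S is diagonal
Tinv = np.eye(n)
Tinv[np.ix_(J, J)] = S                       # T^{-1} from Lemma 1 (J is already sorted)
B_next = Tinv.T @ B @ Tinv                   # (T^{-1})^* B T^{-1}

predicted = gamma(B) + sum(B[i, j] * Binv[j, i] for i in J for j in J if i != j)
assert np.isclose(gamma(B_next), predicted)
```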
As promised, we now compute the expected value of the bound from Lemma 1 under the randomized size \(k\) pivot rule.
Lemma 2 (Expected update). Fix positive definite \(B\in{\mathbb{C}}^{n\times n}\). Let \(T\) be as in Lemma 1 for a uniformly randomly sampled size \(k\) pivot set \(J\in\binom{[n]}k\). Then \[\mathop{\mathrm{\mathbb{E}}}\Gamma((T^{-1})^*BT^{-1})=\mleft(1-\frac{k(k-1)}{n(n-1)}\mright)\Gamma(B).\]
Proof. \[\mathop{\mathrm{\mathbb{E}}}\mleft(\sum_{i,j\in J,i\neq j}b_{ij}\cdot(B^{-1})_{ji}\mright) =\binom nk^{-1}\sum_{J\in\binom{[n]}k} \sum_{i,j\in J,i\neq j}b_{ij}\cdot(B^{-1})_{ji}.\] Note for each ordered pair \((i,j)\) that the term \(b_{ij}\cdot(B^{-1})_{ji}\) appears exactly \(\binom{n-2}{k-2}\) times. Thus \[\begin{align} \mathop{\mathrm{\mathbb{E}}}\mleft(\sum_{i,j\in J,i\neq j}b_{ij}\cdot(B^{-1})_{ji}\mright) &={\binom nk}^{-1}\binom{n-2}{k-2}\sum_{i,j\in [n],i\neq j}b_{ij}\cdot(B^{-1})_{ji} \\&=-\frac{k(k-1)}{n(n-1)}\Gamma(B).\end{align}\] ◻
An immediate consequence is global linear convergence of Algorithm 1.
Theorem 1 (Global linear convergence of Algorithm 1). Let \(A^{(t)},B^{(t)}\) be the sequences of matrices produced by any instantiation of the one- and two-sided versions of Algorithm 1, respectively, with randomized size \(k\) pivoting. Then\[\mathop{\mathrm{\mathbb{E}}}\Gamma(B^{(t)})=\mleft(1-\frac{k(k-1)}{n(n-1)}\mright)^t\Gamma(B^{(0)}) \quad\And\quad \mathop{\mathrm{\mathbb{E}}}\Gamma(A^{(t)*}A^{(t)})=\mleft(1-\frac{k(k-1)}{n(n-1)}\mright)^t\Gamma(A^{(0)*}A^{(0)}).\]
Proof. Lemma 2 implies \[{\label{a28}\mathop{\mathrm{\mathbb{E}}}\mleft(\Gamma(B^{(t)})\,|\,B^{(t-1)}\mright)=\mleft(1-\frac{k(k-1)}{n(n-1)}\mright)\Gamma(B^{(t-1)}).}\tag{12}\] By using induction with the law of iterated expectation, we obtain \[\begin{align} \mathop{\mathrm{\mathbb{E}}}\Gamma(B^{(t)}) &=\mathop{\mathrm{\mathbb{E}}}\mleft(\mathop{\mathrm{\mathbb{E}}}\mleft(\Gamma(B^{(t)})\,|\,B^{(t-1)}\mright)\mright) \\&=\mleft(1-\frac{k(k-1)}{n(n-1)}\mright)\mathop{\mathrm{\mathbb{E}}}\Gamma(B^{(t-1)}) \\&=\mleft(1-\frac{k(k-1)}{n(n-1)}\mright)^t\Gamma(B^{(0)}).\end{align}\] ◻
Remark 4 (Monotonicity and determinism). Unlike \(\mathop{\mathrm{off}}(\cdot)\), the potential function \(\Gamma(\cdot)\) is not strictly monotone; it is only monotone in expectation. If one seeks a deterministic pivoting strategy with monotone convergence, one could adopt a greedy strategy which is analogous to 6 . Specifically, pick \[(i,j)=\mathop{\mathrm{arg\,min}}_{i,j:i\neq j}\mleft|b_{ij}\mright|\mleft|(B^{-1})_{ji}\mright|.\] With this pivoting rule, the bounds of Theorem 1 would hold with an inequality \(\le\) instead of equality in expectation. We speculate that this could be done efficiently in the setting where one is initially given both \(B\) and \(B^{-1}\), and updates both \(B^{(t)}\) and \((B^{(t)})^{-1}\) in each iteration.
Remark 5 (Indefinite matrices). Theorem 1 holds for indefinite matrices as well. However, \(\Gamma(B)\) is only a reasonable measure of distance to diagonal matrices among positive definite matrices, so for indefinite matrices even \(\Gamma(B^{(t)})=0\) does not imply that \(B^{(t)}\) is diagonal.
Remark 6 (Kaczmarz-Kac walk). Consider the special case of the one-sided Algorithm 1, where \(k=2\) and \(S\) is either upper or lower triangular. This is precisely the procedure proposed in [25] (where it is dubbed the “Kaczmarz-Kac walk”) and concurrently in [34]. With Lemma 9 to relate \(\Gamma(\cdot)\) to \(\kappa(\cdot)\), Theorem 1 confirms the rate of convergence of this method conjectured by Steinerberger.
Proof of Corollary 1. First note that as outlined in Table 2, we can apply constraints to \(S\) to produce a specific factorization. Note that each listed property of \(S\) implies the corresponding property of \(T\) in line 7. That is, if \(S\) is unitary, \(T\) will be unitary, if \(S\) is upper triangular, \(T\) will be upper triangular, etc. These properties also are closed under multiplication, so the returned transformation \(T_{\textrm{acc}}\) will have the corresponding property as well. Picking \(S\) with the desired constraint is also always possible: these are just the corresponding decompositions of \(B_{JJ}\) and \(A_J\), which always exist.
We now turn to the quantitative statements in the two-sided case. By Theorem 1, our choice of \(t\), and Lemma 9 we have \[\mathop{\mathrm{\mathbb{E}}}\Gamma(B^{(t)})=\mleft(1-\frac{k(k-1)}{n(n-1)}\mright)^t\Gamma(B^{(0)}) \leq \delta^2/4.\] By Lemma 8 and Jensen’s inequality, we have \[\mathop{\mathrm{\mathbb{E}}}\sqrt{\sum_{i\neq j}\frac{\mleft|d_{ij}\mright|^2}{d_{ii}d_{jj}} } \leq \delta.\] The statements for the one-sided case follow identically. The first statement in the corollary follows by considering the event \[\sqrt{\sum_{i\neq j}\frac{\mleft|d_{ij}\mright|^2}{d_{ii}d_{jj}} } \leq n\delta,\] which occurs with probability \(1-1/n\) by Markov’s inequality. This is equivalent to \[\left\| D'^{-1/2}T_{\textrm{acc}}^{-*}BT_{\textrm{acc}}^{-1}D'^{-1/2} - I\right\|_F \le n\delta\] which by the triangle inequality and submultiplicativity implies \[\begin{align} \left\|B - T_{\textrm{acc}}^* D'T_{\textrm{acc}}\right\|_F &\le\|D'^{1/2}T_{\textrm{acc}}\|^2n\delta \\&=\|T_{\textrm{acc}}^* D'T_{\textrm{acc}}\|n\delta \\&\le(\left\|B\right\|+\|B-T_{\textrm{acc}}^* D'T_{\textrm{acc}}\|)n\delta. \end{align}\] The desired result follows by rearranging. ◻
The previous section showed that every instantiation of Algorithm 1 (i.e. every process for selecting \(S\) in line 6) with randomized size \(k\) pivoting has the same expected behavior in terms of \(\Gamma(\cdot)\). To obtain a convergence result in the presence of round-off error, one needs to assume that the updates in lines 8 and 9 are performed with small mixed forward-backward error (see the diagram 13 ).
In this setting, \(\Gamma(\cdot)\) will play two roles: the first role, as before, is that the convergence of \(\Gamma(\cdot)\to0\) tracks the convergence of the algorithm. The second role is in controlling the accumulation of the error acquired during each iteration.
We let \(\widetilde{A}^{(t)}\), \(\widetilde{B}^{(t)}\) denote the state of the one- and two-sided algorithm respectively after \(t\) iterations are performed in finite precision with randomized size \(k\) pivoting (to contrast with \(A^{(t)}\), \(B^{(t)}\) without tildes computed exactly). The stability of each iteration can be defined in terms of the following commutative diagram, corresponding to a mixed forward-backward guarantee. \[\label{a31} \begin{tikzcd} {\widetilde{A}^{(t)}} & {\widetilde{A}^{(t+1)}} \\ {\widetilde{A}^{(t+1/3)}} & {\widetilde{A}^{(t+2/3)}} \arrow["\textrm{floating}", from=1-1, to=1-2] \arrow["\approx"{description}, tail reversed, from=1-1, to=2-1] \arrow["\textrm{exact}", from=2-1, to=2-2] \arrow["\approx"{description}, tail reversed, from=1-2, to=2-2] \end{tikzcd} \quad\quad\quad\quad\quad \begin{tikzcd} {\widetilde{B}^{(t)}} & {\widetilde{B}^{(t+1)}} \\ {\widetilde{B}^{(t+1/3)}} & {\widetilde{B}^{(t+2/3)}} \arrow["\textrm{floating}", from=1-1, to=1-2] \arrow["\approx"{description}, tail reversed, from=1-1, to=2-1] \arrow["\textrm{exact}", from=2-1, to=2-2] \arrow["\approx"{description}, tail reversed, from=1-2, to=2-2] \end{tikzcd}\tag{13}\] To avoid ambiguity, we always take \(t\) to be an integral index and \(s\) an index in \(\frac{1}{3}{\mathbb{N}}\). The diagram commutes in the following sense: if \(\widetilde{A}^{(t+1)}\) is the result of performing one iteration on \(\widetilde{A}^{(t)}\) in floating point arithmetic, then we are guaranteed the existence of “intermediate states” \(\widetilde{A}^{(t+1/3)}\) which is close to \(\widetilde{A}^{(t)}\) and \(\widetilde{A}^{(t+2/3)}\) which is close to \(\widetilde{A}^{(t+1)}\) such that \(\widetilde{A}^{(t+2/3)}\) is the result of applying one iteration to \(\widetilde{A}^{(t+1/3)}\) in exact arithmetic. Analogous states exist for the two-sided variant. We define “closeness” by \[A\approx A'\iff \mathop{\mathrm{dist}}_{\textrm{o}}(A,A')\le\varepsilon\quad\And\quad B\approx B'\iff \mathop{\mathrm{dist}}_{\textrm{t}}(B,B')\le\varepsilon\] for pseudo-metrics \[\mathop{\mathrm{dist}}_{\textrm{o}}(A,A')= \sup_j \left\|\frac{a_j}{ \left\|a_j\right\| }-\frac{a_j'}{ \|a_j'\| }\right\| \quad\And\quad \mathop{\mathrm{dist}}_{\textrm{t}}(B,B')=\sup_{ij}\mleft| \frac{b_{ij}}{\sqrt{b_{ii}b_{jj}}} - \frac{b'_{ij}}{\sqrt{b'_{ii}b'_{jj}}}\mright|\] where \(a_j,a'_j\) is the \(j\)th column of \(A,A'\) respectively. Note that we are implicitly assuming here that the diagonal entries of \(B\) and \(B'\) are positive. Importantly, \(\mathop{\mathrm{dist}}_{\textrm{o}}(A,A')\le\varepsilon\) implies that \(\mathop{\mathrm{dist}}_{\textrm{t}}(A^T A,A'^TA')\le2\varepsilon+\varepsilon^2=O(\varepsilon)\) so it suffices to prove our theorem under the hypothesis of just the two-sided assumption of 13 .
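For concreteness, the two pseudo-metrics can be written out as follows (a sketch of ours; it assumes the columns of \(A,A'\) are nonzero and the diagonals of \(B,B'\) are positive, as above).

```python
import numpy as np

def dist_o(A, Ap):
    """One-sided pseudo-metric: worst-case distance between normalized columns."""
    An = A / np.linalg.norm(A, axis=0)
    Apn = Ap / np.linalg.norm(Ap, axis=0)
    return np.max(np.linalg.norm(An - Apn, axis=0))

def dist_t(B, Bp):
    """Two-sided pseudo-metric: entrywise distance after diagonal normalization."""
    def normalize(M):
        d = np.sqrt(np.diag(M))
        return M / np.outer(d, d)
    return np.max(np.abs(normalize(B) - normalize(Bp)))
```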
We begin by proving a finite-arithmetic analog of Theorem 1, i.e., we wish to show that \(\Gamma(\widetilde{B}^{(t)})\) exhibits linear convergence. In the previous section (specifically, in 12 ) we showed that the stochastic process \(C_{n,k}^{-t}\Gamma(B^{(t)})\) is a martingale, where \(C_{n,k}=\mleft(1-\frac{k(k-1)}{n(n-1)}\mright)\). Thus \(C_{n,k}^{t}\to0\) roughly implied \(\Gamma(B^{(t)})\to0\) at the same rate. In this section, we define a new martingale and show that \(C_{n,k}^{-t}\Gamma(\widetilde{B}^{(t)})\) is (nearly) bounded by it with high probability. To define the martingale, we introduce the following random variables.\[X_t=\frac{\Gamma(\widetilde{B}^{(t+2/3)})}{\Gamma(\widetilde{B}^{(t+1/3)})} \quad\And\quad Y_{t_1t_2}=\prod_{t=t_1}^{t_2}X_t\] where we use the convention \(Y_{t_1t_2}=1\) if \(t_2<t_1\).
Lemma 3 (Martingale behavior). For any index \(t_1\), the process\[{\left\{{C_{n,k}^{t_1-1-t}Y_{t_1,t}}\right\}}_{t=t_1}^\infty\] is a nonnegative martingale.
Proof. Lemma 2 states that \[\mathop{\mathrm{\mathbb{E}}}\mleft(X_t\,|\,\widetilde{B}^{(t+1/3)}\mright)=\mleft(1-\frac{k(k-1)}{n(n-1)}\mright).\] Lemma 1 shows that \(X_t\) is a deterministic function of \(\widetilde{B}^{(t+1/3)}\) and the randomly chosen pivot \(J\in\binom{[n]}k\). Since the pivot chosen at each step is independent of previously chosen pivots, one has \[\mathop{\mathrm{\mathbb{E}}}\mleft(X_t\,|\,X_{t-1},\ldots,X_0\mright) =C_{n,k} \implies\mathop{\mathrm{\mathbb{E}}}(Y_{t_1,t}\,|\,Y_{t_1,t-1},\ldots,Y_{t_1,t_1})=C_{n,k} Y_{t_1,t-1} .\] ◻
Our next lemma allows us to control the change in \(\Gamma(\cdot)\) between the states \(\widetilde{B}^{(t\pm 1/3)}\) and the state \(\widetilde{B}^{(t)}\).
Lemma 4 (Perturbation theory for \(\Gamma(\cdot)\)). Let \(B\) and \(P\) be square matrices with positive diagonal entries. Suppose that \(B\) is positive definite and that \(\mathop{\mathrm{dist}}_{\textrm{t}}(B,P)\le\varepsilon\) for \(\varepsilon<1/(n\Gamma(B)+n^2)\) then \(P\) is also positive definite and \[\mleft|\Gamma(B)-\Gamma(P)\mright|\le n\varepsilon\cdot \frac{(\Gamma(B)+n)^2}{1-n\varepsilon(\Gamma(B)+n)}.\]
Proof. Let \(X=D_B^{-1/2}BD_B^{-1/2}\) and \(Y=D_P^{-1/2}PD_P^{-1/2}\). By definition of \(\Gamma(\cdot)\), we have \[\Gamma(B)=\Gamma(X)=\mathop{\mathrm{tr}}(X^{-1})-n\quad\And\quad\Gamma(P)=\Gamma(Y)=\mathop{\mathrm{tr}}(Y^{-1})-n.\] Consequently, we can express their difference as \[\begin{align} \mleft|\Gamma(X)-\Gamma(Y)\mright| =\mleft|\mathop{\mathrm{tr}}(X^{-1}-Y^{-1})\mright| &=\mleft|\mathop{\mathrm{tr}}\mleft(X^{-1}\mleft(Y-X\mright)Y^{-1}\mright)\mright| \\&=\mleft|\mathop{\mathrm{tr}}\mleft(Y^{-1}X^{-1}\mleft(Y-X\mright)\mright)\mright|.\end{align}\] Since the entries of \(Y-X\) are bounded by \(\varepsilon\), we can apply Hölder’s inequality and Cauchy-Schwarz to obtain \[\mleft|\mathop{\mathrm{tr}}\mleft(Y^{-1}X^{-1}\mleft(Y-X\mright)\mright)\mright| \le\varepsilon\sum_{i,j}\mleft|(Y^{-1}X^{-1})_{ij}\mright| \le\varepsilon\left\|Y^{-1}\right\|_F\left\|X^{-1}\right\|_F.\] Then by rearranging the triangle inequality \[\begin{align} \left\|Y^{-1}\right\|_F &\le\left\|X^{-1}\right\|_F+\left\|X^{-1}-Y^{-1}\right\|_F \\&\le\left\|X^{-1}\right\|_F+\left\|X^{-1}\right\|_F\left\|Y^{-1}\right\|_F\left\|X-Y\right\|_F \end{align}\] one obtains \[\left\|Y^{-1}\right\|_F \le\frac{\left\|X^{-1}\right\|_F}{1 - \left\|Y-X\right\|_F\left\|X^{-1}\right\|_F } \le\frac{\left\|X^{-1}\right\|_F}{1 - n\varepsilon\left\|X^{-1}\right\|_F }.\] Finally, note that the above quantity is monotone in \(\|X^{-1}\|_F\) and apply \[\left\|X^{-1}\right\|_F\le\mathop{\mathrm{tr}}(X^{-1})=\Gamma(X)+n.\] Note that this argument applies if \(Y\) is replaced with any matrix in the line segment \(Y_s=sY+(1-s)X\). In particular, \(\left\|Y_s^{-1}\right\|_F\) is finite for each choice of \(s\in[0,1]\). Since \(s\mapsto Y_s\) is a continuous path of Hermitian invertible matrices with one endpoint \(X\) positive definite, we conclude that the other endpoint \(Y\) is positive definite as well. ◻
To stitch these two lemmas together, we essentially argue that by selecting \(\varepsilon\) sufficiently small, one can ensure that the additive error one obtains from Lemma 4 between states \(\widetilde{B}^{(t-1/3)}\) to \(\widetilde{B}^{(t+1/3)}\) does not accumulate too much.
Theorem 2. For any time step \(\tau\) and parameters \(\rho,\delta>0\), if \[\varepsilon\le\frac{\delta\rho}{ 2n(\Gamma(B)\tau\rho^{-1}+\delta+n)^2\cdot\tau^2}\] then \[\Pr\mleft(\Gamma(\widetilde{B}^{(t)})\le\Gamma(B)\mleft(1-\frac{k(k-1)}{n(n-1)}\mright)^t\rho^{-1}+\delta\quad\forall t<\tau\mright)\ge1-2\rho.\]
Proof. Set \(C=1-\frac{k(k-1)}{n(n-1)}\). Let \(y=\tau\rho^{-1}\). Let \(\Omega\) be the event that \[\Omega\equiv\sup_{0\le t_1,t_2<\tau}C^{t_1-1-t_2}Y_{t_1t_2}\le y\] and \(E_t\) the event that \[E_t\equiv \Gamma(\widetilde{B}^{(t)})\le\Gamma(B)Y_{0,t-1}+\delta.\] We claim that \(\Omega\subset E_0\cap\cdots\cap E_{\tau-1}\). We argue via induction on \(t\). The base case of \(\Omega\subset E_0\) follows since \(E_0\) is guaranteed. It thus suffices to argue that \(\Omega\cap E_0\cap\ldots\cap E_t\subset E_{t+1}\). Condition on the events \(\Omega,E_0,\ldots,E_t\). Define \[d_t=\Gamma(\widetilde{B}^{(t+1/3)})-\Gamma(\widetilde{B}^{(t-1/3)}) \quad\And\quad d'_t=\Gamma(\widetilde{B}^{(t)})-\Gamma(\widetilde{B}^{(t-1/3)})\] where we adopt the convention \(\widetilde{B}^{(-1/3)}=\widetilde{B}^{(0)}\). Then \[\label{a35}\begin{align} \Gamma(\widetilde{B}^{(t+1)}) &= \Gamma(B)Y_{0t}+\sum_{j=0}^td_jY_{jt}+d'_{t+1} \\&\le\Gamma(B)Y_{0t}+(t+1)y\sup_{0\le j\le t}d_j+d'_{t+1}.\end{align}\tag{14}\] Lemma 4 shows how to bound \(d_j,d'_j\) in terms of \(\Gamma(\widetilde{B}^{(j)})\). Specifically, \[{\label{a36} d_j,d'_j \le 2n\varepsilon\cdot\frac{(\Gamma(\widetilde{B}^{(j)}) + n)^2}{1 - n\varepsilon(\Gamma(\widetilde{B}^{(j)}) + n)} \le 2n\varepsilon\cdot\frac{(\Gamma(B)C^jy+\delta + n)^2}{1 - n\varepsilon(\Gamma(B)C^jy+\delta + n)}}\tag{15}\] where we use \(\Omega,E_j\) in the second inequality. The selection of \(\varepsilon\) is such that 14 with 15 implies \(E_{t+1}\) occurs, completing the induction step. The consequence is \[\Pr(E_{\tau-1}\cap\cdots\cap E_0)\ge\Pr(\Omega).\] Now consider the event \[\Omega'\equiv\sup_{t<\tau}C^{-1-t}Y_{0t}\le\rho^{-1}.\] Then \[\Omega'\cap E_t\implies \Gamma(\widetilde{B}^{(t)})\le\Gamma(B) C^t\rho^{-1}+\delta.\] Therefore by union bound, \[\label{a37}\begin{align} \Pr\mleft( \Gamma(\widetilde{B}^{(t)})\le\Gamma(B) C^t\rho^{-1}+\delta\quad\forall t<\tau \mright) &\ge \Pr(E_{\tau-1}\cap\cdots\cap E_0\cap\Omega') \\&\ge \Pr(E_{\tau-1}\cap\cdots\cap E_0)-\Pr(\neg\Omega') \\&\ge \Pr(\Omega)-\Pr(\neg\Omega').\end{align}\tag{16}\] By Lemma 3, events \(\Omega,\Omega'\) concern the concentration of a nonnegative supermartingale. For each \(t_1\), one has by Ville’s inequality that \[{\label{a38} \Pr\mleft(\sup_{t_2\ge t_1}C^{t_1-1-t_2} Y_{t_1,t_2} \ge a\mright)\le a^{-1}.}\tag{17}\] Apply 17 for \(t_1\) such that \(0 \leq t_1 < \tau\) and \(a=y\) to obtain by union bound \[\Pr(\neg\Omega)\le y^{-1}\tau=\rho.\] Apply 17 for \(t_1=0\) and \(a=\rho^{-1}\) to obtain \[\Pr(\neg\Omega')\le\rho.\] Plugging these inequalities into 16 gives the final result. ◻
As we stated at the outset, \(\Gamma(\cdot)\) plays two roles: it must converge to \(0\) to show the algorithm halts, and it also must never see a large intermediate value to ensure the numerical error does not blow up. To these ends, we have two consequences of Theorem 2. In both, we set \[\hat{B}^{(t)}= \mathop{\mathrm{diag}}\mleft(\widetilde{b}_{11},\ldots,\widetilde{b}_{nn}\mright)^{-1/2} \widetilde{B}^{(t)} \mathop{\mathrm{diag}}\mleft(\widetilde{b}_{11},\ldots,\widetilde{b}_{nn}\mright)^{-1/2}\] to be the diagonal-normalization of a matrix.
Corollary 2 (Boundedness of \(\Gamma(\cdot)\)). For any time step \(\tau\) and parameter \(\rho>0\), if \[\varepsilon\le\frac{\rho^3}{3n^3\|\hat{B}^{-1}\|^2\tau^4}\] then \[\Pr\mleft( \sup_{t<\tau}\|(\hat{B}^{(t)})^{-1}\| \le\|\hat{B}^{-1}\|\cdot n\rho^{-1}+3\mright)\ge1-2\rho.\]
Proof. Apply Lemma 7 to Theorem 2, set \(\delta=1\), and use \(\mleft(1-\frac{k(k-1)}{n(n-1)}\mright)^t\le1\). ◻
Corollary 3 (Approximate convergence of \(\Gamma(\cdot)\)). For any \(\rho\in(0,1/2),\delta\in(0,1/4)\), let \[t\ge \frac{n(n-1)}{k(k-1)}\log\mleft({n\|\hat{B}^{-1}\|}/{\rho\delta}\mright).\] If \[\varepsilon\le\frac{\delta\rho^3}{3n^3\|\hat{B}^{-1}\|^2\tau^4}\] then \[\max\mleft(\|\hat{B}^{(t)}\|,\|(\hat{B}^{(t)})^{-1}\|\mright)\le1+2\sqrt\delta\] with probability \(1-2\rho\).
The most general result is that if each iteration of the one-sided algorithm satisfies 13 , then the overall algorithm stably computes an orthonormal basis of the column-space of \(A=\begin{bmatrix}a_1&\cdots&a_n\end{bmatrix}\in{\mathbb{C}}^{d\times n}\). This can be extended to a basis of the flag \(\mathop{\mathrm{span}}(a_1), \mathop{\mathrm{span}}(a_1,a_2), \mathop{\mathrm{span}}(a_1,a_2,a_3),\cdots.\) We give a handful of examples of implementations of the algorithm which satisfy 13 .
We quantify error in terms of the following metric on subspaces: \[\mathop{\mathrm{dist}}(V,W)=\max\mleft( \sup_{v\in V,\left\|v\right\|=1}\inf_{w\in W}\left\|v-w\right\|, \sup_{w\in W,\left\|w\right\|=1}\inf_{v\in V}\left\|v-w\right\|\mright).\]
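This metric can be evaluated directly from orthonormal bases of the two column spaces: each one-sided term equals \(\|(I-P_W)Q_V\|\), where \(P_W\) is the orthogonal projector onto \(W\) and \(Q_V\) is an orthonormal basis of \(V\). A sketch of ours (real matrices with full column rank assumed):

```python
import numpy as np

def subspace_dist(V, W):
    """dist(Col(V), Col(W)): the larger of the two one-sided distances, each computed
    as the spectral norm of (I - P_W) Q_V for an orthonormal basis Q_V of Col(V)."""
    QV, _ = np.linalg.qr(V)
    QW, _ = np.linalg.qr(W)
    d_vw = np.linalg.norm(QV - QW @ (QW.T @ QV), 2)
    d_wv = np.linalg.norm(QW - QV @ (QV.T @ QW), 2)
    return max(d_vw, d_wv)
```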
Lemma 5 (Perturbation theory for subspaces). Let \(\hat{A},\hat{A}'\in{\mathbb{C}}^{d\times n}\) have unit length columns. Then \[\mathop{\mathrm{dist}}(\mathop{\mathrm{Col}}(\hat{A}),\mathop{\mathrm{Col}}(\hat{A}'))\le \frac{\sqrt n \mathop{\mathrm{dist}}_{\textrm{o}}(\hat{A},\hat{A}')}{\min(\sigma_n(\hat{A}),\sigma_n(\hat{A}'))}.\]
Proof. Let \(\hat{A}=QR\) and \(\hat{A}'=Q'R'\) be the QR decompositions of \(\hat{A}\) and \(\hat{A}'\) respectively. Then \[\begin{align} \mathop{\mathrm{dist}}(\mathop{\mathrm{Col}}(\hat{A}),\mathop{\mathrm{Col}}(\hat{A}')) &\le \max\mleft( \left\|Q-Q'R'R^{-1}\right\|, \left\|QRR'^{-1}-Q'\right\|\mright) \\&= \max\mleft( \left\|(\hat{A}-\hat{A}')R^{-1}\right\|, \left\|(\hat{A}-\hat{A}')R'^{-1}\right\|\mright) \\&\le \|\hat{A}-\hat{A}'\|\max\mleft( \left\|R^{-1}\right\|,\left\|R'^{-1}\right\|\mright) \\&= \|\hat{A}-\hat{A}'\|\max\mleft( \frac{1}{\sigma_n(\hat{A})},\frac{1}{\sigma_n(\hat{A}')}\mright).\end{align}\] Finally, since \(\hat{A}\) and \(\hat{A}'\) have unit length columns, \(\|\hat{A}-\hat{A}'\|\le\|\hat{A}-\hat{A}'\|_F\le\sqrt n\mathop{\mathrm{dist}}_{\textrm{o}}(\hat{A},\hat{A}')\). ◻
Theorem 3. Say one is given \(A\in{\mathbb{C}}^{d\times n}\). Let \(\hat{A}=A\mathop{\mathrm{diag}}\mleft(\left\|a_1\right\|,\ldots,\left\|a_n\right\|\mright)^{-1}\) normalize the columns of \(A\). Let \(Q=A^{(\tau)}\) be the result of running the one-sided Algorithm 1 with randomized size \(k\) pivoting on \(A\in{\mathbb{C}}^{d\times n}\) with full column rank for \[\tau=O\mleft(n^2k^{-2}\log\mleft(\frac{n}{\sigma_n(\hat{A})\delta}\mright)\mright)\] iterations. Assume each iteration is computed stably in the sense of 13 for \[\varepsilon\le\frac{\delta\sigma_n(\hat{A})^4}{n^6\tau^4}.\] Then with probability \(1-2/n\), \[\left\|Q^*Q-I\right\|\le3\sqrt\delta\quad\And\quad\mathop{\mathrm{dist}}(\mathop{\mathrm{Col}}(Q),\mathop{\mathrm{Col}}(A))\le\delta.\]
Proof. Corollary 3 immediately implies \(\left\|Q^*Q-I\right\|\le3\sqrt\delta\). By the triangle inequality, \[\begin{align} \mathop{\mathrm{dist}}(\mathop{\mathrm{Col}}(A),\mathop{\mathrm{Col}}(\widetilde{A}^{(\tau)}))\le\sum_{t=1}^\tau \biggr( &\mathop{\mathrm{dist}}(\mathop{\mathrm{Col}}(\widetilde{A}^{(t-1)}),\mathop{\mathrm{Col}}(\widetilde{A}^{(t-2/3)})) \\+ &\mathop{\mathrm{dist}}(\mathop{\mathrm{Col}}(\widetilde{A}^{(t-2/3)}),\mathop{\mathrm{Col}}(\widetilde{A}^{(t-1/3)})) \\+ &\mathop{\mathrm{dist}}(\mathop{\mathrm{Col}}(\widetilde{A}^{(t-1/3)}),\mathop{\mathrm{Col}}(\widetilde{A}^{(t)})) \biggr) .\end{align}\] Since \(\widetilde{A}^{(t-1/3)}\) is exactly a column operation applied to \(\widetilde{A}^{(t-2/3)}\), the middle term in the summand vanishes. Note that one may replace \(\widetilde{A}^{(s)}\) with its column-normalization \(\hat{A}^{(s)}\). The first and last terms are bounded by Lemma 5: \[\mathop{\mathrm{dist}}(\mathop{\mathrm{Col}}(A),\mathop{\mathrm{Col}}(\widetilde{A}^{(\tau)}))\le\tau\sup_{t\in[\tau]} \mleft( \frac{\sqrt n\mathop{\mathrm{dist}}_{\textrm{o}}(\hat{A}^{(t-1)},\hat{A}^{(t-2/3)})}{ \min(\sigma_n(\hat{A}^{(t-1)}),\sigma_n(\hat{A}^{(t-2/3)})) } + \frac{\sqrt n\mathop{\mathrm{dist}}_{\textrm{o}}(\hat{A}^{(t-1/3)},\hat{A}^{(t)})}{ \min(\sigma_n(\hat{A}^{(t-2/3)}),\sigma_n(\hat{A}^{(t)})) } \mright).\] But the distances \(\mathop{\mathrm{dist}}_{\textrm{o}}(\cdot,\cdot)\) in the numerator are bounded by \(\varepsilon\) by assumption. Thus we have \[\mathop{\mathrm{dist}}(\mathop{\mathrm{Col}}(A),\mathop{\mathrm{Col}}(\widetilde{A}^{(\tau)}))\le\frac{2\tau\sqrt n}{ \inf_{s\le\tau} \sigma_n(\hat{A}^{(s)}) }\cdot\varepsilon.\] Corollary 2 shows that \[\mleft(\inf_{s\le\tau} \sigma_n(\hat{A}^{(s)})\mright)^{-2}\le\sigma_n(\hat{A})^{-2}n^2+3\] with probability \(1-2/n\). The selection of \(\varepsilon\) gives the desired result. ◻
Remark 7. The following update rules orthogonalize a pair of vectors and can be performed in floating point arithmetic with unit round-off \({\mathbf{u}}\) to satisfy 13 with \(\varepsilon=O(n){\mathbf{u}}\). We describe them for \(J={\left\{{1,2}\right\}}\) with \(\alpha=\left\langle a_1,a_2\right\rangle\). \[\begin{bmatrix}a_1&a_2\end{bmatrix}\gets \begin{bmatrix} \frac{a_1+a_2}{\sqrt{2+2\alpha}} & \frac{a_1-a_2}{\sqrt{2-2\alpha}}\end{bmatrix}.\] \[\begin{bmatrix}a_1&a_2\end{bmatrix}\gets \begin{bmatrix} a_1& \frac{a_2-\alpha a_1}{\sqrt{1-\alpha^2}}\end{bmatrix}.\] \[\begin{bmatrix}a_1&a_2\end{bmatrix}\gets \begin{bmatrix} \frac{ (\sqrt{2-2\alpha}+\sqrt{2+2\alpha})a_1 + (\sqrt{2-2\alpha}-\sqrt{2+2\alpha})a_2 }{\left\|(\sqrt{2-2\alpha}+\sqrt{2+2\alpha})a_1 + (\sqrt{2-2\alpha}-\sqrt{2+2\alpha})a_2\right\|} & \frac{ (\sqrt{2-2\alpha}-\sqrt{2+2\alpha})a_1 + (\sqrt{2-2\alpha}+\sqrt{2+2\alpha})a_2 }{\left\|(\sqrt{2-2\alpha}-\sqrt{2+2\alpha})a_1 + (\sqrt{2-2\alpha}+\sqrt{2+2\alpha})a_2\right\|}. \end{bmatrix}.\]
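A sketch of the three updates (ours), checking on random unit vectors that each returns an orthonormal pair:

```python
import numpy as np

def update_sum_difference(a1, a2):
    alpha = a1 @ a2
    return (a1 + a2) / np.sqrt(2 + 2 * alpha), (a1 - a2) / np.sqrt(2 - 2 * alpha)

def update_mgs(a1, a2):
    alpha = a1 @ a2
    return a1, (a2 - alpha * a1) / np.sqrt(1 - alpha ** 2)

def update_symmetric(a1, a2):
    alpha = a1 @ a2
    u, v = np.sqrt(2 - 2 * alpha), np.sqrt(2 + 2 * alpha)
    b1 = (u + v) * a1 + (u - v) * a2
    b2 = (u - v) * a1 + (u + v) * a2
    return b1 / np.linalg.norm(b1), b2 / np.linalg.norm(b2)

# Each rule takes unit vectors a1, a2 with alpha = <a1, a2> and orthogonalizes them.
rng = np.random.default_rng(0)
a1, a2 = rng.standard_normal((2, 7))
a1, a2 = a1 / np.linalg.norm(a1), a2 / np.linalg.norm(a2)
for update in (update_sum_difference, update_mgs, update_symmetric):
    b1, b2 = update(a1, a2)
    assert abs(b1 @ b2) < 1e-12 and np.isclose(b1 @ b1, 1) and np.isclose(b2 @ b2, 1)
```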
Remark 8 (The QR decomposition). One can strengthen the result of Theorem 3 if one implements the update \(A_J\gets A_JS\) by performing modified Gram-Schmidt on \(A_J\). The returned basis \(Q=\begin{bmatrix}q_1&\cdots&q_n\end{bmatrix}\) approximates the flag: \[\mathop{\mathrm{dist}}(\mathop{\mathrm{span}}(q_1,\ldots,q_m), \mathop{\mathrm{span}}(a_1,\ldots,a_m))\le\delta\] for all \(m\le n\). To see this, note that when using MGS in this way, the ordering of the pivot indices \(i_1<\cdots<i_k\) in line 5 ensures that each column is completely unaffected by the columns to its right. Because of this, the algorithm using MGS run on input \(\begin{bmatrix}a_1&\cdots&a_n\end{bmatrix}\) can be seen as simulating the same algorithm on all the inputs \[\begin{bmatrix}a_1\end{bmatrix}\quad\And\quad\begin{bmatrix}a_1&a_2\end{bmatrix}\quad\And\quad\begin{bmatrix}a_1&a_2&\cdots&a_m\end{bmatrix}\] with a certain number of no-ops when the pivot \(J\) (partially) falls outside the index set. However, these no-ops exactly counteract the reduced number of iterations required by the smaller input size for convergence: just replace \(k\) with the random variable \(\mleft|J\cap{\left\{{1,\cdots,m}\right\}}\mright|\). The result then follows by Theorem 3.
Demmel and Veselić study the stability of the Jacobi eigenvalue algorithm with an arbitrary pivoting rule. Their results make two assumptions. The first is that the pivoting rule is such that the method converges after some specified number of steps. The second is that the diagonal-normalized condition number of any iterate remains bounded. Under our randomized pivoting rule, both properties are satisfied. We restate the results of [1] concerning eigenvalue and singular value error in terms of our notation.
Theorem 4 (Corollary 3.2 of [1]). Given positive definite \(B\in{\mathbb{C}}^{n\times n}\), let \(\widetilde{B}^{(t)}\) be the \(t^{\text{th}}\) iterate of the two-sided Algorithm 1 using a Givens rotation in step 6 (i.e. use \(k=2\) and follow the Jacobi eigenvalue algorithm) with unit round-off \({\mathbf{u}}\). Then for each \(\tau\), \[\sup_{j\in[n]}\frac{\mleft|\lambda_j(B)-\widetilde{\lambda}_j^{(\tau)}\mright|}{\lambda_j(B)} \le O\mleft(\sqrt n\tau {\mathbf{u}}+n\mathop{\mathrm{dist}}_{\textrm{t}}(\widetilde{B}^{(\tau)},I)\mright)\sup_{t\le\tau}\hat{\kappa}(\widetilde{B}^{(t)})\] where \(\widetilde{\lambda}_j^{(\tau)}\) is the \(j\)th largest diagonal entry of \(\widetilde{B}^{(\tau)}\).
Theorem 5 (Corollary 4.2 of [1]). Given nonsingular \(A\in{\mathbb{C}}^{n\times n}\), let \(\widetilde{A}^{(t)}\) be the \(t^{\text{th}}\) iterate of the one-sided Algorithm 1 using a Givens rotation in step 6 (i.e. use \(k=2\) and follow the Jacobi eigenvalue algorithm) with unit round-off \({\mathbf{u}}\). Then for each \(\tau\), \[\sup_{j\in[n]}\frac{\mleft|\sigma_j(A)-\widetilde{\sigma}_j^{(\tau)}\mright|}{\sigma_j(A)} \le O\mleft( \tau{\mathbf{u}}+ n^2{\mathbf{u}}+ n\mathop{\mathrm{dist}}_{\textrm{t}}(\widetilde{A}^{(\tau)*}\widetilde{A}^{(\tau)},I)\mright)\sup_{t\le\tau}\hat{\kappa}(\widetilde{A}^{(t)*}\widetilde{A}^{(t)})\] where \(\widetilde{\sigma}_j^{(\tau)}\) is the length of the \(j\)th longest column of \(\widetilde{A}^{(\tau)}\) and \(\widetilde{B}^{(t)}=\widetilde{A}^{(t)*}\widetilde{A}^{(t)}\).
Our results allow us to provide a concrete upper bound for randomized size 2 pivoting.
Corollary 4. When using a randomized size \(2\) pivoting rule in the above theorems with \(\varepsilon\) as in Corollary 3, taking \[\tau=O(n^2\log(n\hat{\kappa}(B)/\delta))\] iterations ensures \[\mathop{\mathrm{dist}}_{\textrm{t}}(\widetilde{B}^{(\tau)},I)\le \sqrt\delta \qquad \text{and} \qquad \sup_{t\le\tau}\hat{\kappa}(\widetilde{B}^{(t)}) \le\kappa(\widetilde{B})n^3+3n\] with probability \(1-O(1/n)\).
Proof. The bound on \(\mathop{\mathrm{dist}}_{\textrm{t}}(\widetilde{B}^{(\tau)},I)\) follows from Corollary 3. The bound on \(\sup_{t\le\tau}\hat{\kappa}(\widetilde{B}^{(t)})\) follows from Corollary 2, along with the observation that \(\|\hat{B}^{(t)}\|\le n\). ◻
Similar bounds for the eigenvectors and singular vectors can also be found in [1] (see their Theorems 3.3 and 4.3). We do not repeat the precise statements here, but note that applying Corollary 4 to them yields concrete bounds analogous to those for the eigenvalues and singular values.
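For intuition, the following simplified sketch runs a real-symmetric two-sided Jacobi iteration with uniformly random size-2 pivots, which is our reading of the randomized pivoting rule; it is not the paper's Algorithm 1, it ignores the finite-precision bookkeeping of Corollary 4, and it uses the off-diagonal Frobenius norm as a stand-in for \(\mathop{\mathrm{dist}}_{\textrm{t}}(\cdot,I)\). It tracks the two quantities that drive Theorems 4 and 5: the progress toward diagonal form and the largest diagonal-normalized condition number encountered.

```python
import numpy as np

def diag_normalized_cond(B):
    """kappa-hat: condition number of D^{-1/2} B D^{-1/2} with D = diag(B)."""
    d = 1.0 / np.sqrt(np.diag(B))
    Bp = d[:, None] * B * d[None, :]
    return np.linalg.cond(Bp)

def jacobi_random_pivot(B, num_steps, seed=0):
    """Two-sided Jacobi on a real symmetric positive definite B with uniformly
    random pairs (a sketch of randomized size-2 pivoting, not Algorithm 1)."""
    B = np.array(B, dtype=float, copy=True)
    n = B.shape[0]
    rng = np.random.default_rng(seed)
    khat_sup = diag_normalized_cond(B)
    for _ in range(num_steps):
        p, q = sorted(rng.choice(n, size=2, replace=False))
        if B[p, q] == 0.0:
            continue
        # Classical symmetric Schur rotation: choose c, s so that the (p, q)
        # entry of G^T B G vanishes.
        theta = (B[q, q] - B[p, p]) / (2.0 * B[p, q])
        t = np.sign(theta) / (abs(theta) + np.hypot(theta, 1.0)) if theta != 0 else 1.0
        c = 1.0 / np.sqrt(1.0 + t * t)
        s = t * c
        G = np.eye(n)
        G[p, p] = G[q, q] = c
        G[p, q], G[q, p] = s, -s
        B = G.T @ B @ G
        khat_sup = max(khat_sup, diag_normalized_cond(B))
    off = np.linalg.norm(B - np.diag(np.diag(B)))
    return B, off, khat_sup

rng = np.random.default_rng(3)
M = rng.standard_normal((8, 8))
B0 = M @ M.T + 8 * np.eye(8)          # random positive definite input
B, off, khat_sup = jacobi_random_pivot(B0, num_steps=400)
print(f"off-diagonal Frobenius norm after 400 random pivots: {off:.2e}")
print(f"largest diagonal-normalized condition number seen:   {khat_sup:.2f}")
```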
We thank Nikhil Srivastava, who brought the Jacobi eigenvalue algorithm to our attention while discussing extensions of our earlier work [34]. We also thank Jim Demmel and Ryan Schneider for helpful conversations about the Jacobi eigenvalue algorithm.
Note that \(\Gamma(\cdot)\) is invariant under diagonal scaling, i.e. \[\Gamma(B)=\Gamma(DBD)\] for any nonsingular diagonal \(D\). In this section, we fix a positive definite \(B\in{\mathbb{C}}^{n\times n}\) and let \[B'= \mathop{\mathrm{diag}}\mleft(b_{11},\cdots,b_{nn}\mright)^{-1/2} B \mathop{\mathrm{diag}}\mleft(b_{11},\cdots,b_{nn}\mright)^{-1/2}\] be the “diagonal normalization” of \(B\), so that \(B'\) has all ones along the diagonal and \[\Gamma(B)=\Gamma(B')=\mathop{\mathrm{tr}}(B'^{-1})-n.\] We let \(\lambda_j(B')\) denote the \(j\)th largest eigenvalue of \(B'\).
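As a concrete illustration (ours, not part of the paper), \(\Gamma\) can be computed directly from this definition, and the scaling invariance spot-checked for a positive diagonal \(D\):

```python
import numpy as np

def gamma(B):
    """Gamma(B) = tr(B'^{-1}) - n, where B' is the diagonal normalization of B."""
    d = 1.0 / np.sqrt(np.diag(B))
    Bp = d[:, None] * B * d[None, :]          # B' has unit diagonal
    return np.trace(np.linalg.inv(Bp)) - B.shape[0]

rng = np.random.default_rng(0)
n = 6
M = rng.standard_normal((n, n))
B = M @ M.T + n * np.eye(n)                   # random positive definite B
D = np.diag(rng.uniform(0.5, 2.0, size=n))    # positive diagonal scaling
print(gamma(B), gamma(D @ B @ D))             # agree up to round-off
```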
Lemma 6. \(\Gamma(B) = 0\) if and only if \(B\) is diagonal.
Proof. Since \(\mathop{\mathrm{tr}}(B')=n\), Jensen’s inequality applied to the convex function \(x\mapsto 1/x\) gives \[\frac{\Gamma(B)+n}{n}=\frac{\mathop{\mathrm{tr}}(B'^{-1})}{n}=\frac{1}{n}\sum_{j=1}^n\frac{1}{\lambda_j}\ge\mleft(\frac{1}{n}\sum_{j=1}^n\lambda_j\mright)^{-1}=\frac{n}{\mathop{\mathrm{tr}}(B')}=1,\] with equality if and only if all eigenvalues of \(B'\) are equal. In particular, \(\Gamma(B)=0\) forces all eigenvalues of \(B'\) to equal \(1\), so \(B'=I\), which in turn implies \(B=\mathop{\mathrm{diag}}\mleft(b_{11},\cdots,b_{nn}\mright)\). Conversely, if \(B\) is diagonal then \(B'=I\) and \(\Gamma(B)=\mathop{\mathrm{tr}}(I)-n=0\). ◻
A few quantitative versions of this lemma are available. Each can be derived by considering an optimization problem over the eigenvalues \(\lambda_j\) of \(B'\), subject to the following constraints: \[\label{a43}\begin{align} \lambda_1\ge\cdots\ge\lambda_n>0\quad\And\quad\lambda_1+\cdots+\lambda_n=n\quad\And\quad\lambda_1^{-1}+\cdots+\lambda_n^{-1}=n+\Gamma(B').\end{align}\tag{18}\]
Lemma 7. \[\max\mleft(\|B'\|,\|B'^{-1}\|\mright)\le1+\frac{\Gamma(B)}{2}+\sqrt{\mleft(1+\frac{\Gamma(B)}{2}\mright)^2-1}.\]
Proof. Put \(g=\Gamma(B')=\Gamma(B)\). Consider minimizing \(\left\|B'^{-1}\right\|^{-1}={\lambda_n}\) subject to 18 . Using Lagrange multipliers, one sees that the minimum is achieved at the smaller root of \[(1+g/n)\lambda_n^2-(2+g)\lambda_n+1=0.\] This root is a monotonically decreasing function of \(n\), so it is bounded below by its limit as \(n\to\infty\). Applying the quadratic formula gives \[\lambda_n\ge 1+\frac{\Gamma(B)}{2}-\sqrt{\mleft(1+\frac{\Gamma(B)}{2}\mright)^2-1}.\] Since the two roots of the limiting quadratic \(\lambda^2-(2+g)\lambda+1=0\) multiply to \(1\), this bound is equivalent to the claimed bound on \(\left\|B'^{-1}\right\|=\lambda_n^{-1}\). Now consider maximizing \(\left\|B'\right\|={\lambda_1}\) subject to 18 . Using Lagrange multipliers, one sees that the maximum is achieved at the larger root of \[(1+g/n)\lambda_1^2-(2+g)\lambda_1+1=0.\] This root is again monotone in \(n\), so it is bounded above by its limit as \(n\to\infty\). Applying the quadratic formula gives \[\lambda_1\le 1+\frac{\Gamma(B)}{2}+\sqrt{\mleft(1+\frac{\Gamma(B)}{2}\mright)^2-1}.\] ◻
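A quick numerical spot check of Lemma 7 on random positive definite matrices (ours, not part of the paper; a sanity check rather than a proof):

```python
import numpy as np

# Spot check of Lemma 7: the extreme eigenvalues of the diagonal
# normalization B' are controlled by Gamma(B).
rng = np.random.default_rng(0)
for _ in range(100):
    n = int(rng.integers(2, 10))
    M = rng.standard_normal((n, n))
    B = M @ M.T + 0.1 * np.eye(n)
    d = 1.0 / np.sqrt(np.diag(B))
    Bp = d[:, None] * B * d[None, :]                       # diagonal normalization B'
    g = np.trace(np.linalg.inv(Bp)) - n                    # Gamma(B)
    bound = 1 + g / 2 + np.sqrt((1 + g / 2) ** 2 - 1)
    lam = np.linalg.eigvalsh(Bp)                           # ascending eigenvalues
    assert max(lam[-1], 1 / lam[0]) <= bound * (1 + 1e-9)  # ||B'||, ||B'^{-1}||
print("Lemma 7 bound verified on random instances")
```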
Lemma 8. \[\left\|B'-I\right\|_F^2 \le\Gamma(B)\cdot\mleft(\sqrt{1+{\Gamma(B)}/4}+\sqrt{\Gamma(B)/4}\mright)^2.\]
Proof. Put \(g=\Gamma(B')=\Gamma(B)\). Since \(B'\) has unit diagonal and \(\mathop{\mathrm{tr}}(B')=n\), we have \[\left\|B'-I\right\|_F^2=-n+\left\|B'\right\|_F^2=-n+\lambda_1^2+\cdots+\lambda_n^2.\] We consider maximizing this quantity subject to 18 . Using Lagrange multipliers, we see that the maximum is achieved for \(\lambda_i\) satisfying \(\lambda_i^3 + a\lambda_i^2 - b = 0\) for some \(a, b\). This polynomial has at most two distinct positive roots: if all three of its roots were positive, their pairwise products would sum to the coefficient of the linear term, which is zero, contradicting positivity. Solving for the optimal values of \(\lambda_i\), we have \[\lambda_1=\cdots=\lambda_{n-1}= \frac{1+\frac{g}{2\left(n-1\right)}-\sqrt{\frac{g}{\left(n-1\right)n}+\frac{g^{2}}{4\left(n-1\right)^{2}}}}{1+{g}/{n}}\quad\And\quad\lambda_n=n-(n-1)\lambda_1.\] Then the maximum of the objective function can be expressed as \[\begin{align} -n+(n-1)\lambda_1^2+\lambda_n^2 &=-n+(n-1)\lambda_1^2+\mleft(n-(n-1)\lambda_1\mright)^2 \\&=n(n-1)(\lambda_1-1)^2. \end{align}\] The resulting expression is monotonically increasing in \(n\), so one may upper bound this quantity by taking the limit as \(n\to\infty\). This yields \[\left\|B'-I\right\|_F^2\le\mleft(g/2+\sqrt{g+{g^2}/4}\mright)^2=g\cdot\mleft(\sqrt{1+g/4}+\sqrt{g/4}\mright)^2.\] ◻
Lemma 9. \[1+\Gamma(B')/n\le\kappa(B')=\hat{\kappa}(B)\le(1+\sqrt{\Gamma(B)/2}+\Gamma(B)/2)^2.\]
Proof. For the lower bound on \(\kappa(B')\), note that \[n\Gamma(B')=n\mathop{\mathrm{tr}}(B'^{-1})-n^2=\mathop{\mathrm{tr}}(B')\cdot\mathop{\mathrm{tr}}(B'^{-1})-n^2\le n\left\|B'\right\|\cdot n\left\|B'^{-1}\right\|-n^2=n^2\kappa(B')-n^2.\] For the upper bound, put \(g=\Gamma(B)=\Gamma(B')\). We have \[\kappa(B')=\lambda_1/\lambda_n.\] We consider maximizing this objective subject to 18 . Using Lagrange multipliers, we see that the optimum is achieved for \[\lambda_2=\cdots=\lambda_{n-1}=\sqrt{\frac{n}{n+g}},\] with \(\lambda_1,\lambda_n\) the larger and smaller roots of \[z^2-\mleft(n-(n-2)\lambda_2\mright)z+\lambda_2^2=0\] respectively. Simplifying the resulting value of \(\lambda_1/\lambda_n\) yields \[\kappa(B')\le1+\frac{1}{2}\mleft(x(x+4)+(x+2)\sqrt{x(x+4)}\mright)\] where \[x=\frac{g}{\lambda_2^{-1}+1}.\] From this expression, one can see that our bound on \(\kappa(B')\) is increasing in \(n\), so for fixed \(g\) it is upper bounded by its limit as \(n\to\infty\). Since \(\lim_{n\to\infty}\lambda_2=1\), one has \(\lim_{n\to\infty}x=g/2\), so \[\kappa(B')\le 1+\sqrt{2g+g^2(g^{2}+16g+80)/64}+g+{g^{2}}/{8}.\] Treating the right-hand side as a function of \(\sqrt g\) and applying Taylor’s theorem gives \[\begin{align} \kappa(B') &\le 1+\sqrt{2g}+g+\frac{5}{8\sqrt{2}}g^{1.5}+\frac{g^{2}}{4} \\&\le(1+\sqrt{g/2}+g/2)^2 \end{align}\] as required. ◻
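Similarly, Lemmas 8 and 9 can be spot-checked numerically on random positive definite instances (again a sanity check we include for illustration, not a proof):

```python
import numpy as np

# Spot check of Lemmas 8 and 9: compare ||B' - I||_F^2 and kappa(B') = kappa-hat(B)
# against the stated bounds in terms of g = Gamma(B).
rng = np.random.default_rng(1)
for _ in range(100):
    n = int(rng.integers(2, 10))
    M = rng.standard_normal((n, n))
    B = M @ M.T + 0.1 * np.eye(n)
    d = 1.0 / np.sqrt(np.diag(B))
    Bp = d[:, None] * B * d[None, :]                       # diagonal normalization B'
    g = np.trace(np.linalg.inv(Bp)) - n                    # Gamma(B)
    frob2 = np.linalg.norm(Bp - np.eye(n)) ** 2
    kappa = np.linalg.cond(Bp)
    # Lemma 8.
    assert frob2 <= g * (np.sqrt(1 + g / 4) + np.sqrt(g / 4)) ** 2 * (1 + 1e-8)
    # Lemma 9, lower and upper bounds.
    assert 1 + g / n <= kappa * (1 + 1e-8)
    assert kappa <= (1 + np.sqrt(g / 2) + g / 2) ** 2 * (1 + 1e-8)
print("Lemmas 8 and 9 verified on random instances")
```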
Supported by NSF CCF-2420130.
The literature also uses “two-sided” to refer to a version of the Jacobi eigenvalue algorithm due to [2] for computing the singular value decomposition; that method is different from the version of the Jacobi eigenvalue algorithm for computing the singular value decomposition that this paper considers.
One can replace \({\mathbb{C}}\) with \({\mathbb{R}}\) throughout the paper: Hermitian becomes symmetric, (special) unitary becomes (special) orthogonal, and the Hermitian adjoint \((\cdot)^*\) becomes the transpose \((\cdot)^\top\).