Compressed indexing enables powerful queries over massive and repetitive textual datasets using space proportional to the compressed input. While theoretical advances have led to highly efficient index structures, their practical construction remains a bottleneck, especially for complex components like recompression RLSLP — a grammar-based representation crucial for building powerful text indexes that support widely used suffix and LCP array queries.
In this work, we present the first implementation of recompression RLSLP construction that runs in compressed time, operating on an LZ77-like approximation of the input. Compared to state-of-the-art uncompressed-time methods, our approach achieves up to \(46\times\) speedup and \(17\times\) lower RAM usage on large, repetitive inputs. These gains unlock scalability to larger datasets and affirm compressed computation as a practical path forward for fast index construction.
Data compression is a classic area in computer science, where the goal is to reduce the size of a given file for transfer (e.g., over a slow network) or for storage. In recent years, however, compression has been applied in a new way. The field of compressed indexing aims to store any string \(T\in \Sigma^{n}\) of \(n\) symbols over an alphabet \(\Sigma\) in space proportional to the size of \(T\) in compressed form, while simultaneously supporting queries over the (uncompressed) text \(T\).
Such functionality is a fundamental part of the toolbox in applications where massive textual datasets exhibit significant redundancy, including storage and retrieval of source code repositories (such as GitHub) [1], [2], versioned text documents (such as Wikipedia), and computational biology, where massive genomic datasets—due to projects like the 100,000 Genomes Project [3] or the ongoing 1+ Million Human Genome Initiative [4]—produce datasets on the order of terabytes (and are projected to reach exabytes [5], [6]).
The queries currently supported by compressed text indexes include a rich repertoire of operations, ranging from basic queries such as random access to \(T\) [7]–[12] to more complex longest common extension (LCE) queries [13]–[16], and even full suffix array and LCP array functionality [15], [17].
One of the biggest and most widely recognized challenges in the field of compressed indexing is index construction [1]. This is particularly difficult for the most powerful indexes, such as [15], [17], which are capable of supporting the widely utilized suffix and LCP array queries [18]. Continuous progress on this problem has resulted in algorithms that achieve improved scalability [19]–[27], but index construction remains a major bottleneck in practice [1].
A common feature of all the above algorithms is that they construct the index in a single step; that is, the algorithm takes the text \(T\in \Sigma^{n}\) as input and, in \(\mathcal{O}(n)\) time, directly outputs the index. However, in the worst case, these methods use \(\Theta(n)\) extra disk or RAM space, even for compressible strings.
Recently, an alternative method for constructing powerful text indexes was proposed in [15], [28]. In this method, the index is constructed by first applying a lightweight but fast compression that shrinks the input text close to the size of its LZ77-compressed representation (e.g., [29]). This step takes \(\mathcal{O}(n)\) time and shrinks the input text \(T\) down from \(n\) characters to, say, \(C(T)\) bytes, where \(C(T) \ll n\). Following this step is a sequence of algorithms operating on the compressed representation of \(T\), all taking time \(\mathcal{O}(C(T) \log^{\mathcal{O}(1)} n)\), i.e., roughly proportional to the size of the data in compressed form. This new approach to index construction is still in its early stages of adoption in practice, as each of the steps running in compressed time is usually a complex algorithm that relies on not-yet-implemented data structures.
Recently, however, the potential of compressed computation has started to materialize, and one of the first algorithms running in compressed time was implemented in [30]. More specifically, [30] implemented an algorithm that takes the LZ77-like compressed representation of size \(C(T)\) of the input text \(T\) as input and, in \(\mathcal{O}(C(T) \log^{\mathcal{O}(1)} n)\) time, outputs a grammar-compressed representation of \(T\) (a variant of the so-called AVL grammar [8]). Indeed, [30] showed that this conversion represents only a tiny fraction of the time needed for the initial LZ77 approximation, confirming the potential of compressed computation.
In this work, we continue the quest for achieving fast index construction via compressed computation and address the second step in the pipeline of index construction algorithms proposed in [15], [28], i.e., the conversion of the grammar-compressed text into the so-called recompression RLSLP [31], [32]. Recompression RLSLP is a compressed representation of a text \(T\) as a context-free grammar (CFG) producing only \(T\) that additionally satisfies the local consistency property; that is, different occurrences of the same substring of \(T\) are represented in a very similar way in the grammar. Recompression RLSLP represents one of the most complex steps in the pipeline of compressed-time index construction from [15], [28]—it is the basis of all indexes for the so-called “internal pattern matching” (see Section 5 in [28] and Section 5 in [15]), which are the key data structures in the construction of indexes like those in [15], [28].
Unlike in the case of [30] (where no prior implementation existed for the problem addressed in the paper), construction of the recompression RLSLP does already have a highly engineered (and parallelized) implementation [33], which runs in \(\mathcal{O}(n)\) time. Given the importance of efficient construction of recompression RLSLP in building powerful text indexes, we thus ask:
Can we speed up the construction of recompression RLSLP by using compressed computation?
Our Results
We present the first implementation of recompression running in compressed time. Compared to the state-of-the-art implementation of recompression [33], our modular pipeline, in which we first compute an LZ77 approximation (using our own prototype of the Bentley–McIlroy LZ77 approximation [34]) and then construct the recompression RLSLP in compressed time, achieves the following:
Drastic time reduction: On a highly repetitive file of size 4 GiB, our implementation is \(46\times\) faster than the state-of-the-art sequential implementation of recompression in uncompressed \(\mathcal{O}(n)\) time. When enabling parallelism in [33], our variant is still at least \(18\times\) faster.
Massive RAM reduction: On the same 4 GiB file as above, our implementation uses \(17\times\) less RAM. When compared to the parallel implementation from [33], we use \(14\times\) less RAM.
This improved scalability translates not only into time and RAM usage gains but also into the ability to process larger datasets. For example, when attempting to run our experiments on an 8 GiB test file, the state-of-the-art implementation always exceeds the 94 GiB of RAM on our system and resorts to swap, resulting in significantly worse performance.
Our implementation confirms the enormous potential of computation in compressed time for the construction of compressed text indexes [15], [28]. All our implementations are available at https://github.com/AnkithReddy02/fast-recompression.
Strings
For any string \(S\), we write \(S[i \mathinner{.\,.}j]\), where \(1 \leq i, j \leq |S|\), to denote the substring of \(S\) starting at position \(i\) and ending at \(j\). If \(i > j\), we define \(S[i \mathinner{.\,.}j]\) to be the empty string \(\varepsilon\). By \([i \mathinner{.\,.}j)\) we denote \([i \mathinner{.\,.} j-1]\). Throughout the paper, we consider a string (text) \(T[1 \mathinner{.\,.} n]\) of length \(n\geq 1\) over an integer alphabet \(\Sigma = [0 \mathinner{.\,.}\sigma)\). The function \(\mathrm{LCE}_{T}(i,i')\) denotes the length of the longest common prefix of \(T[i \mathinner{.\,.} n]\) and \(T[i' \mathinner{.\,.}n]\).
LZ77 Compression
An LZ77-like factorization of a string \(T\) is a partition \(T= F_1 \cdots F_f\) into non-empty phrases such that each phrase \(F_j\) with \(|F_j| > 1\) appears earlier in \(T\), i.e., letting \(i = 1 + |F_1 \cdots F_{j-1}|\) and \(\ell = |F_j|\), there exists \(p \in [1 \mathinner{.\,.}i)\) such that \(\mathrm{LCE}_{T}(p,i) \geq \ell\). The phrase \(F_j = T[i \mathinner{.\,.}i + \ell)\) is represented by the pair \((p, \ell)\). If multiple values of \(p\) are valid, one is chosen arbitrarily. The segment \(T[p \mathinner{.\,.}p + \ell)\) is called the source of \(F_j\). If \(|F_j| = 1\), then we do not require that \(F_j = T[i]\) occurs earlier. Such phrases are represented as \((T[i], 0)\).
The LZ77 factorization [35] (or LZ77 parsing) of a string \(T\) is an LZ77-like factorization obtained by greedily scanning \(T\) from left to right and selecting the longest possible phrases. More precisely, the \(j\)th phrase \(F_j\) is the longest substring starting at position \(i = 1 + |F_1 \cdots F_{j-1}|\) that has a prior occurrence in \(T\). If no such substring exists, then \(F_j = T[i]\). The number of phrases in the LZ77 parsing is denoted by \(z(T)\). For example, the string \(\texttt{bbabaababababaababa}\) has the LZ77 parsing \(\texttt{b} \cdot \texttt{b} \cdot \texttt{a} \cdot \texttt{ba} \cdot \texttt{aba} \cdot \texttt{bababa} \cdot \texttt{ababa}\), with \(z(T) = 7\) phrases, represented by the sequence \((\texttt{b},0), (1,1), (\texttt{a},0), (2,2), (3,3), (7,6), (10,5)\).
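To make the greedy parsing rule concrete, below is a minimal, quadratic-time C++ sketch (a hypothetical illustration with names of our own choosing; practical parsers use suffix structures to reach near-linear time):

```cpp
// A minimal, quadratic-time sketch of the greedy LZ77 parsing rule described
// above (hypothetical illustration; practical parsers use suffix structures).
#include <cstdint>
#include <iostream>
#include <string>
#include <utility>
#include <vector>

// Each phrase is (p, len) with a 1-based source p, or (symbol, 0) for a literal.
std::vector<std::pair<std::int64_t, std::int64_t>> lz77_greedy(const std::string& T) {
  std::vector<std::pair<std::int64_t, std::int64_t>> phrases;
  const std::int64_t n = (std::int64_t)T.size();
  std::int64_t i = 0;  // 0-based start of the next phrase
  while (i < n) {
    std::int64_t best_len = 0, best_p = 0;
    for (std::int64_t p = 0; p < i; ++p) {  // try every earlier source position
      std::int64_t len = 0;  // LCE(p, i); the source may overlap the phrase
      while (i + len < n && T[p + len] == T[i + len]) ++len;
      if (len > best_len) { best_len = len; best_p = p; }
    }
    if (best_len == 0) {  // literal phrase (T[i], 0)
      phrases.emplace_back((std::int64_t)(unsigned char)T[i], 0);
      i += 1;
    } else {              // copy phrase (p, len)
      phrases.emplace_back(best_p + 1, best_len);
      i += best_len;
    }
  }
  return phrases;
}

int main() {
  for (auto [p, len] : lz77_greedy("bbabaababababaababa"))
    std::cout << "(" << (len == 0 ? std::string(1, (char)p) : std::to_string(p))
              << "," << len << ") ";
  std::cout << "\n";  // expect the 7 phrases from the example above
}
```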
The non-overlapping LZ77 factorization is defined similarly, except that the previous occurrence of each phrase must not overlap the phrase itself. The number of phrases in this variant is denoted \(z_{\rm no}(T)\).
Grammar Compression
A context-free grammar is a tuple \(G = (N, \Sigma, R, S)\), where \(N\) is a finite set of nonterminals, \(\Sigma\) is a finite set of terminals, and \(R \subseteq N \times (N \cup \Sigma)^*\) is a set of rules. We assume \(N \cap \Sigma = \emptyset\) and \(S \in N\). The symbol \(S\) is called the start symbol. If \((A, \gamma) \in R\), we write \(A \rightarrow \gamma\). The language of \(G\) is the set \(L(G) \subseteq \Sigma^*\) obtained by starting from \(S\) and repeatedly replacing nonterminals according to \(R\).
A grammar \(G = (N, \Sigma, R, S)\) is called a straight-line grammar (SLG) if each \(A \in N\) appears in exactly one production on the left-hand side, and the nonterminals can be ordered as \(A_1, \ldots, A_{|N|}\) such that, whenever \(A_i \rightarrow \gamma\), it holds that \(\gamma \in (\Sigma \cup \{A_{i+1}, \ldots, A_{|N|}\})^*\). This ensures that the grammar rule graph is acyclic. The unique \(\gamma\) such that \(A \rightarrow \gamma\) is called the definition of \(A\), and denoted \(\mathrm{rhs}_{G}(A)\). For any \(u \in (N \cup \Sigma)^*\), there is exactly one \(w \in \Sigma^*\) derivable from \(u\). We call such \(w\) the expansion of \(u\), and denote it \(\mathrm{exp}_{G}(u)\).
The principle of grammar compression involves constructing, for a given text \(T\), a small SLG \(G\) such that \(L(G) = \{T\}\). The size of the grammar is the total length of all definitions: \(|G| := \sum_{A \in N}|\mathrm{rhs}_{G}(A)|\).
A straight-line program (SLP) is an SLG in which each production is either \(A \rightarrow XY\) with \(X,Y \in N\), or \(A \rightarrow c\) with \(c \in \Sigma\). Every SLG \(G\) can be converted in \(\mathcal{O}(|G|)\) time into an SLP \(G'\) representing the same string.
A run-length SLP (RLSLP) is a generalized SLP that also allows rules of the form \(A \rightarrow X^k\), where \(X \in N\) and \(k \in \mathbb{Z}_{>0}\). For each such nonterminal, we define \(|\mathrm{rhs}_{G}(A)| = 2\). The size of an RLSLP is defined analogously to that of SLPs.
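To make these definitions concrete, the following self-contained sketch (our own hypothetical encoding, with nonterminal IDs starting at 256; not the representation used in our implementation) stores RLSLP rules and computes the expansion \(\mathrm{exp}_{G}\):

```cpp
// A minimal sketch of an RLSLP representation and its expansion function.
// Values < 256 are terminals; nonterminal A is stored at rules[A - 256].
// Rule forms follow the text: A -> XY, A -> c, and run-length rules A -> X^k.
#include <iostream>
#include <string>
#include <vector>

struct Rule {
  int left = -1, right = -1;  // A -> left right  (right == -1: run-length rule)
  int power = 1;              // power k of a run-length rule A -> left^k
};

std::string expand(int sym, const std::vector<Rule>& rules) {
  if (sym < 256) return std::string(1, (char)sym);  // terminal
  const Rule& r = rules[sym - 256];
  if (r.right == -1) {                              // A -> X^k
    std::string block = expand(r.left, rules), out;
    for (int i = 0; i < r.power; ++i) out += block;
    return out;
  }
  return expand(r.left, rules) + expand(r.right, rules);  // A -> XY
}

int main() {
  // Grammar for "aaab": X -> a^3 (ID 256), S -> X b (ID 257).
  std::vector<Rule> rules = {{'a', -1, 3}, {256, 'b', 1}};
  std::cout << expand(257, rules) << "\n";  // prints aaab
}
```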
Theorem 1 ([7], [8]). Given the non-overlapping LZ77 factorization of a string \(T\) of length \(n\), we can in \(\mathcal{O}(z_{\rm no}(T) \log n)\) time construct an SLP \(G\) such that \(L(G) = \{T\}\).
Theorem 2 ([36], [28]). Given the LZ77 factorization of a string \(T\) of length \(n\), we can in \(\mathcal{O}(z(T) \log n)\) time construct an SLP \(G\) such that \(L(G) = \{T\}\).
Theorem 3 ([15]). Given a string \(T\) of length \(n\), represented using an LZ77-like parsing consisting of \(f\) phrases, the LZ77 parsing of \(T\) can be constructed in \(\mathcal{O}(f \log^4 n)\) time.
Recompression is a lossless compression algorithm introduced by Jeż [31], [32] that builds a run-length straight-line program (RLSLP) by repeatedly applying local rewriting rules to a string. In each recompression round, the input string is reduced using two key operations:
Block Compression (BComp). This step replaces every maximal block of at least two identical symbols with a fresh nonterminal, ensuring that adjacent characters in the resulting string are distinct. For example, applying BComp to \(T= \texttt{abaaabccaaa}\) produces two new rules, \(X \rightarrow \texttt{a}^3\) and \(Y \rightarrow \texttt{c}^2\), and replaces \(T\) with \(\texttt{ab}X\texttt{b}YX\).
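A minimal sketch of BComp applied to an uncompressed symbol sequence follows (a hypothetical illustration of the operation just defined; Section 4 describes how our implementation performs it on grammars instead):

```cpp
// A minimal sketch of BComp on an uncompressed symbol sequence. Symbols are
// ints; fresh nonterminals get IDs >= 256, and each new rule X -> c^k is
// recorded in run_rules. Hypothetical illustration, not the paper's code.
#include <iostream>
#include <map>
#include <utility>
#include <vector>

std::vector<int> bcomp(const std::vector<int>& T,
                       std::map<std::pair<int, int>, int>& run_rules,  // (c, k) -> X
                       int& next_nonterminal) {
  std::vector<int> out;
  for (size_t i = 0; i < T.size();) {
    size_t j = i;
    while (j < T.size() && T[j] == T[i]) ++j;  // maximal block T[i..j)
    size_t k = j - i;
    if (k == 1) {
      out.push_back(T[i]);
    } else {  // replace the block by a (possibly fresh) nonterminal X -> c^k
      auto key = std::make_pair(T[i], (int)k);
      if (!run_rules.count(key)) run_rules[key] = next_nonterminal++;
      out.push_back(run_rules[key]);
    }
    i = j;
  }
  return out;
}

int main() {
  std::vector<int> T = {'a','b','a','a','a','b','c','c','a','a','a'};  // abaaabccaaa
  std::map<std::pair<int,int>, int> rules; int next_nt = 256;
  for (int s : bcomp(T, rules, next_nt)) std::cout << s << " ";
  std::cout << "\n";  // 97 98 256 98 257 256, i.e., ab X b Y X with X=256, Y=257
}
```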
Pair Compression (PComp). This step replaces frequent pairs of adjacent symbols with new nonterminals. The algorithm partitions the current alphabet into two disjoint sets: a left set \(\Sigma_L\) and a right set \(\Sigma_R\). Only pairs \(ab\) with \(a \in \Sigma_L\) and \(b \in \Sigma_R\) are replaced. For example, applying PComp to \(T= \texttt{abacaabad}\) with \(\Sigma_{L} = \{\texttt{a}, \texttt{c}\}\) and \(\Sigma_{R} = \{\texttt{b}, \texttt{d}\}\) creates two new rules, \(X \rightarrow \texttt{ab}\) and \(Y \rightarrow \texttt{ad}\), and \(T\) is replaced with \(X\texttt{aca}XY\). The partition \(\Sigma = \Sigma_L \cup \Sigma_R\) is computed so as to maximize the number of replaced pairs (i.e., pairs \(ab\) in \(T\) with \(a \in \Sigma_L\) and \(b \in \Sigma_R\)). One simple strategy is to assign every \(c \in \Sigma\) to either \(\Sigma_L\) or \(\Sigma_R\) uniformly at random. The expected number of replaced pairs is then \(\tfrac{1}{4}(|T|-1)\). This strategy can be derandomized, resulting in a linear-time partitioning algorithm that guarantees at least one quarter of all adjacent symbol pairs are replaced. This ensures that the string shrinks by a constant factor [31], [32].
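The following sketch implements one PComp round with the randomized partition on an uncompressed sequence (again a hypothetical illustration; the derandomized partition is discussed in Section 4):

```cpp
// A minimal sketch of one PComp round on an uncompressed symbol sequence,
// using the randomized partition described above. Hypothetical illustration.
#include <iostream>
#include <map>
#include <random>
#include <set>
#include <utility>
#include <vector>

std::vector<int> pcomp(const std::vector<int>& T,
                       std::map<std::pair<int, int>, int>& pair_rules,  // (a, b) -> X
                       int& next_nonterminal, std::mt19937& rng) {
  std::set<int> alphabet(T.begin(), T.end());
  std::map<int, bool> in_left;            // random partition Sigma_L / Sigma_R
  for (int c : alphabet) in_left[c] = rng() & 1;
  std::vector<int> out;
  for (size_t i = 0; i < T.size();) {
    if (i + 1 < T.size() && in_left[T[i]] && !in_left[T[i + 1]]) {
      auto key = std::make_pair(T[i], T[i + 1]);
      if (!pair_rules.count(key)) pair_rules[key] = next_nonterminal++;  // X -> ab
      out.push_back(pair_rules[key]);
      i += 2;  // consume the replaced pair
    } else {
      out.push_back(T[i]);
      i += 1;
    }
  }
  return out;
}

int main() {
  std::vector<int> T = {'a','b','a','c','a','a','b','a','d'};  // abacaabad
  std::map<std::pair<int,int>, int> rules; int next_nt = 256;
  std::mt19937 rng(42);
  for (int s : pcomp(T, rules, next_nt, rng)) std::cout << s << " ";
  std::cout << "\n";  // depends on the partition; with Sigma_L = {a,c}: X a c a X Y
}
```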
The BComp and PComp steps are applied alternately in rounds, resulting in a sequence of strings \(T_0, T_1, T_2, \dots\) such that \(T_0 = T\), and for every \(i > 0\), \(T_i\) is obtained from \(T_{i-1}\) by applying BComp (if \(i\) is odd) or PComp (if \(i\) is even). The process stops when the string reduces to a single symbol, i.e., when \(|T_k| = 1\) for some \(k\). For a string of length \(n\), each BComp and PComp round runs in \(\mathcal{O}(n)\) time and space. Since the string length is reduced by a constant factor every two rounds, the overall algorithm runs in \(\mathcal{O}(n)\) time and uses \(\mathcal{O}(n)\) space [31], [32].
A complete example of the recompression process, including round-by-round transformations, can be found in [31], [32]. We refer the reader to that work for additional technical details and formal analysis.
Recompression in Compressed Time
In the above scenario, we assumed that the recompression algorithm was applied to an uncompressed text \(T\) of length \(n\). However, the main goal of our paper is to implement the recompression algorithm directly on the SLP-compressed input. In [31], [32], Jeż describes how to achieve this. More precisely, the paper shows that, given an SLP \(G\) of size \(|G| = g\), one can perform recompression of the text \(T\) (obtained by decompressing \(G\)) in only \(\mathcal{O}(g \log n)\) time.
Our implementation is modeled after the description of a variant of recompression by Kempa and Kociumaka in [15], which achieves the optimal space bound of\(^{4}\) \[\mathcal{O}\!\left(\delta(T)\log \frac{n\log \sigma}{\delta(T)\log n}\right)\] (where the substring complexity \(\delta(T)\leq z(T)\) is a fundamental measure of compressibility [37]), although we did not implement the more intricate optimizations needed to achieve this space, and the result of our computation is still the original recompression grammar, as defined in [31], [32]. More specifically, in our implementation we follow Section 5.2 of [15], notably Definitions 5.2 and 5.4, Lemmas 5.3 and 5.5, and Construction 5.6.
In this section, we describe the practical techniques we employed in our implementation to achieve small runtime and low peak RAM usage.
In the construction from [15], the strings \(T_0, T_1, T_2, \dots\) during recompression are represented using their LZ77 factorization. When computing \(T_{i}\) from \(T_{i-1}\), the algorithm first converts the LZ77 factorization of \(T_{i-1}\) into an SLP \(G_{i-1}\) in \(\mathcal{O}(z(T_{i-1}) \log n)\) time (Theorem 2). In \(\mathcal{O}(|G_{i-1}|)\) time it then performs either BComp or PComp on the resulting SLP. The result of the computation is an SLG \(G'_i\) representing \(T_{i}\). This SLG is then converted in \(\mathcal{O}(|G'_i|)\) time into an LZ77-like factorization of \(T_{i}\) and finally into the LZ77 factorization of \(T_{i}\) using Theorem 3.
One of the changes in our implementation, compared to [15], is that each of the intermediate strings \(T_0, T_1, T_2, \dots\) during recompression is represented as an SLG. This affects the implementation as follows:
Adopting the SLG representation slightly changes the implementation of BComp and PComp, since the right-hand side of a nonterminal can now be longer than two symbols. However, this way we avoid the conversions to and from the LZ77 factorization.
Storing an SLG in small space is more challenging than storing an SLP, since definitions of nonterminals are variable-length strings. In our implementation we store the definitions of all nonterminals in a single array. Each nonterminal then stores a single integer indicating the starting position of its definition. This reduces memory overhead while supporting fast access to every definition.
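The following hypothetical sketch illustrates this flat-array layout (names are ours, and details differ from our actual code):

```cpp
// A minimal sketch of the flat-array SLG layout described above: all
// right-hand sides live in one array, and each nonterminal stores only the
// starting offset of its definition. Nonterminal IDs begin at 256.
#include <cstdint>
#include <iostream>
#include <utility>
#include <vector>

struct FlatSLG {
  std::vector<std::int32_t> rhs;    // concatenated definitions of all nonterminals
  std::vector<std::int64_t> start;  // start[i] = offset of nonterminal i's definition

  // Append a new nonterminal with the given definition; returns its ID.
  int add_rule(const std::vector<std::int32_t>& def) {
    if (start.empty()) start.push_back(0);
    rhs.insert(rhs.end(), def.begin(), def.end());
    start.push_back((std::int64_t)rhs.size());
    return 256 + (int)start.size() - 2;
  }
  // Definition of A as a (pointer, length) view; O(1) access.
  std::pair<const std::int32_t*, std::int64_t> definition(int A) const {
    int i = A - 256;
    return {rhs.data() + start[i], start[i + 1] - start[i]};
  }
};

int main() {
  FlatSLG g;
  int A = g.add_rule({'a', 'b'});   // A -> ab
  int B = g.add_rule({A, A, 'c'});  // B -> AAc (SLG definitions may exceed two symbols)
  auto [def, len] = g.definition(B);
  for (std::int64_t i = 0; i < len; ++i) std::cout << def[i] << " ";
  std::cout << "\n";  // prints 256 256 99
}
```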
The general approach to performing the BComp procedure over a text \(T_{i-1}\) represented using an SLG \(G\) is to perform a bottom-up construction of another SLG \(G'\) representing the run-length compressed string; see Lemma 5.3 in [15].
Computing Left and Right Runs
The construction of grammar \(G'\) requires computing, for each nonterminal \(A\), the maximal prefix and suffix blocks that consist of the same symbol. These are denoted \(\mathsf{LR}(A) = (X, \ell)\) and \(\mathsf{RR}(A) = (Y, r)\), representing runs of length \(\ell\) and \(r\), respectively. To optimize storage, we use two arrays: \[\texttt{LR\_vec}[A] = (X, \ell), \qquad \texttt{RR\_vec}[A] = (Y, r),\] where \(\ell, r < 255\) are stored as 8-bit values packed alongside the symbol ID. In the rare cases where \(\ell\) or \(r\) is at least 255, we store the value in an auxiliary array (large_LR_vec or large_RR_vec), set the corresponding 8-bit value to 255, and use binary search to retrieve the actual count.
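A hypothetical sketch of this packing scheme follows (names are ours; a single generic table stands in for both LR_vec and RR_vec, and it assumes entries are added in increasing order of nonterminal ID, so the overflow array remains sorted for binary search):

```cpp
// A minimal sketch of the packed run-length storage described above: each
// entry packs a symbol ID with an 8-bit run length, and lengths >= 255
// escape to a sorted overflow array queried by binary search.
#include <algorithm>
#include <cstdint>
#include <iostream>
#include <utility>
#include <vector>

struct RunTable {
  // entry[A] = (symbol << 8) | min(run, 255), indexed by nonterminal ID.
  std::vector<std::uint64_t> entry;
  std::vector<std::pair<std::uint32_t, std::uint64_t>> overflow;  // (A, run)

  void set(std::uint32_t A, std::uint32_t symbol, std::uint64_t run) {
    if (A >= entry.size()) entry.resize(A + 1, 0);
    entry[A] = ((std::uint64_t)symbol << 8) | std::min<std::uint64_t>(run, 255);
    if (run >= 255) overflow.emplace_back(A, run);  // IDs arrive in increasing
                                                    // order, so this stays sorted
  }
  std::pair<std::uint32_t, std::uint64_t> get(std::uint32_t A) const {
    std::uint32_t symbol = (std::uint32_t)(entry[A] >> 8);
    std::uint64_t run = entry[A] & 255;
    if (run == 255) {  // rare case: binary-search the overflow array
      auto it = std::lower_bound(overflow.begin(), overflow.end(),
                                 std::make_pair(A, (std::uint64_t)0));
      run = it->second;
    }
    return {symbol, run};
  }
};

int main() {
  RunTable LR;
  LR.set(256, 'a', 3);     // LR(A) = (a, 3), stored inline
  LR.set(257, 'b', 1000);  // long run: escapes to the overflow array
  auto [s, r] = LR.get(257);
  std::cout << (char)s << " " << r << "\n";  // prints b 1000
}
```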
Supporting Grammar Updates
During the construction of grammar \(G'\), we need to add new nonterminals and their definitions. An important property of the construction of \(G'\) is that it never deletes or changes any already added rule of \(G'\). This lets us implement dynamic updates in \(G'\) in a relatively straightforward way, i.e., the array storing the definitions of all nonterminals is implemented as a standard dynamic array that supports insertion at the end.
Our implementation uses a custom dynamic array (which we call a space_efficient_vector, a drop-in replacement for standard C++ solutions like std::vector) that stores elements in a bounded number of dynamically allocated blocks. Each block's size doubles only when all existing blocks are full, so growth requires reallocating and copying at most two adjacent blocks at a time rather than the entire array. This chunked strategy keeps the logical interface of a random-access array while reducing peak memory during expansions, since no full duplicate of the data is ever created.
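As an illustration of the chunked-growth idea, here is a simplified, hypothetical sketch (it is not the actual space_efficient_vector: for simplicity it never copies old blocks at all, adding a fresh block that doubles the total capacity, whereas our structure additionally merges blocks to bound their number):

```cpp
// A simplified sketch of a chunked dynamic array: new capacity is added as a
// fresh block, so old elements are never copied and no full duplicate of the
// data is ever created. Hypothetical code, not the actual implementation.
#include <algorithm>
#include <cstddef>
#include <iostream>
#include <memory>
#include <vector>

template <typename T>
class chunked_vector {
  std::vector<std::unique_ptr<T[]>> blocks_;
  std::vector<std::size_t> block_start_;  // prefix sums of block capacities
  std::size_t size_ = 0, capacity_ = 0;

 public:
  void push_back(const T& x) {
    if (size_ == capacity_) {  // all blocks full: add a block as large as the
      std::size_t cap = capacity_ ? capacity_ : 16;  // total so far (doubling)
      blocks_.emplace_back(new T[cap]);
      block_start_.push_back(capacity_);
      capacity_ += cap;
    }
    (*this)[size_++] = x;
  }
  T& operator[](std::size_t i) {
    // Locate the block containing index i; the number of blocks is
    // logarithmic in n, so this binary search is cheap.
    std::size_t b = std::upper_bound(block_start_.begin(), block_start_.end(), i)
                    - block_start_.begin() - 1;
    return blocks_[b][i - block_start_[b]];
  }
  std::size_t size() const { return size_; }
};

int main() {
  chunked_vector<int> v;
  for (int i = 0; i < 100; ++i) v.push_back(i * i);
  std::cout << v[0] << " " << v[50] << " " << v[99] << "\n";  // 0 2500 9801
}
```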
The implementation of the PComp procedure, which, given the SLG \(G\) representing the string \(T_{i-1}\), computes the SLG \(G'\) representing \(T_i\) (i.e., the result of applying PComp to \(T_{i-1}\)), is slightly more complex than BComp, since it needs to first compute a good partitioning of the alphabet \(\Sigma\) into \(\Sigma_{L}\) and \(\Sigma_{R}\), and then perform the transformation of \(T_{i-1}\) that replaces every pair of symbols \(ab\), with \(a \in \Sigma_{L}\) and \(b \in \Sigma_{R}\), by a fresh nonterminal.
The second step is similar to BComp and reduces to processing all nonterminals of \(G\) bottom up; see Lemma 5.5 of [15].
The more complex step of PComp is the computation of the alphabet partitioning. In our implementation, we employ two strategies, which alternate in subsequent executions of PComp.
The randomized partition simply assigns terminals to the left or right sets uniformly at random.
The deterministic partition uses a MaxCut-inspired greedy strategy that guarantees the string shrinks by at least 25% after applying PComp [31].
Implementing the random partition is straightforward. One of the key steps in implementing the deterministic partition is to compute the frequency of every pair \(ab\) in the string \(T_{i-1}\) represented by the SLG \(G\). It is known (see, e.g., [7]) that the number of such distinct pairs is bounded by \(2|G|\), and the computation can be done in \(\mathcal{O}(|G|)\) time (see, e.g., [13]). The computation of the partition \(\Sigma_{L}\) and \(\Sigma_{R}\) is then a straightforward adaptation of the MaxCut approximation algorithm (see, e.g., [15] or [13]).
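A sketch of the greedy partitioning step, given the pair frequencies, is shown below (our own simplified, hypothetical version in the spirit of the MaxCut adaptation; the guarantee proven in [31], [32] relies on the more careful accounting described there):

```cpp
// A minimal sketch of a greedy, MaxCut-style alphabet partitioning driven by
// the frequencies of adjacent pairs ab in the current string. Each symbol is
// placed on the side that captures more already-fixed pair weight.
#include <iostream>
#include <map>
#include <set>
#include <utility>
#include <vector>

// Returns in_left[c] == true iff c is assigned to Sigma_L.
std::map<int, bool> greedy_partition(
    const std::map<std::pair<int, int>, long long>& freq) {
  std::set<int> alphabet;
  for (const auto& [pr, f] : freq) { alphabet.insert(pr.first); alphabet.insert(pr.second); }
  std::map<int, bool> in_left;
  std::set<int> placed;
  for (int c : alphabet) {
    long long gain_left = 0, gain_right = 0;
    for (int d : placed) {
      auto cd = freq.find({c, d}), dc = freq.find({d, c});
      if (!in_left[d] && cd != freq.end()) gain_left += cd->second;   // pair cd, d in R
      if (in_left[d] && dc != freq.end()) gain_right += dc->second;   // pair dc, d in L
    }
    in_left[c] = (gain_left >= gain_right);
    placed.insert(c);
  }
  return in_left;
}

int main() {
  // Pair frequencies for T = abacaabad: ab:2, ba:2, ac:1, ca:1, aa:1, ad:1.
  std::map<std::pair<int,int>, long long> freq =
      {{{'a','b'},2}, {{'b','a'},2}, {{'a','c'},1}, {{'c','a'},1}, {{'a','a'},1}, {{'a','d'},1}};
  for (auto& [c, left] : greedy_partition(freq))
    std::cout << (char)c << (left ? " -> L\n" : " -> R\n");
}
```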
All experiments were conducted on a machine equipped with two Intel Xeon X5690 CPUs, each operating at 3.47 GHz, totaling 12 physical cores. The system had 94 GiB of DDR3 RAM and 931 GiB of local disk space. Disk throughput was measured at approximately 102 MiB/s. The operating system was Ubuntu 16.04.7 running Linux kernel 4.15.0. All programs were compiled with g++ version 5.4.0 using the flags -funroll-loops -O3 -DNDEBUG -march=native -std=c++17 -pthread. All runtimes are reported as wall-clock times.
We evaluate our algorithms on highly repetitive datasets derived from the Saccharomyces cerevisiae genome. Specifically, we use the chr16.fsa file (941.4 KiB) from the Saccharomyces Genome Database (SGD)\(^{5}\) as the base input. Larger instances are generated by repeated duplication of the sequence followed by uniform random mutations at rates \(10^{-4}\) and \(10^{-5}\) per symbol. Basic statistics of our datasets are listed in Table 1.
Each 4 GiB dataset is a strict prefix of its corresponding 8 GiB version, allowing fair comparison across scales while preserving structure. We refer to each dataset using the shorthand YSME, where S is the input size in GiB and E denotes the exponent in the mutation rate \(10^{-E}\). For example, Y4M5 refers to a 4 GiB input with a mutation rate of \(10^{-5}\). Unless otherwise specified, experiments default to Y4M5.
Table 1: Statistics of our datasets (\(n\) denotes the text length and \(z\) the number of phrases in the LZ77 parsing).

| Name | \(n/2^{30}\) (GiB) | \(|\Sigma|\) | \(n/z\) |
|---|---|---|---|
| Y4M4 | 4 | 4 | 4,625 |
| Y4M5 | 4 | 4 | 23,087 |
| Y8M4 | 8 | 4 | 5,043 |
| Y8M5 | 8 | 4 | 30,990 |
In our experiments, we evaluate two categories of algorithms for computing a recompression RLSLP.
Standard (uncompressed) recompression: We use the recompression implementation by Osthues [33], available at https://github.com/christopherosthues/recompression. This implementation performs recompression directly on the input string without prior LZ parsing. We evaluate the following variants:
fast_seq: A sequential implementation that alternates between block compression (on repeated symbols) and pair compression (on disjoint symbol pairs).
parallel_rnd: A parallel variant that randomly partitions the alphabet during pair compression, improving parallel scalability at the cost of nondeterminism.
parallel_gr: A parallel method that applies a greedy MaxCut-based partitioning strategy to deterministically improve compression quality compared to the randomized variant.
Recompression in compressed time (this work): Our pipeline performs compression in five stages:
Text \(\rightarrow\) Approximate LZ77: Approximate LZ77 parsing is computed using the Bentley–McIlroy algorithm [34].\(^{6}\) Our implementation of the Bentley–McIlroy algorithm is available at https://github.com/AnkithReddy02/fast-recompression.
Approximate LZ77 \(\rightarrow\) SLG: The approximate LZ77 factorization is converted into a straight-line grammar (SLG) using the lazy AVL grammar construction described in [30]. This implementation is available at https://github.com/dominikkempa/lz77-to-slp.
SLG \(\rightarrow\) SLP: The SLG is expanded into a full straight-line program (SLP) generating the original string. Our tool for this task is available at https://github.com/AnkithReddy02/fast-recompression.
SLP \(\rightarrow\) Pruned SLP: Unused rules are eliminated via pruning without affecting the represented string. Our implementation of this pruning step is also available at https://github.com/AnkithReddy02/fast-recompression.
Pruned SLP \(\rightarrow\) RLSLP: A recompression RLSLP is constructed in compressed time from the SLP-compressed input text, as described in Section 4. This implementation constitutes the main contribution of our paper and is available at https://github.com/AnkithReddy02/fast-recompression.
During the Pruned SLP \(\rightarrow\) RLSLP stage, we evaluate three partitioning strategies for pairwise compression: deterministic, randomized, and mixed. The deterministic strategy applies a greedy MaxCut-based heuristic to partition the alphabet; the randomized strategy selects partitions uniformly at random; and the mixed strategy alternates between deterministic and randomized partitioning at each pairwise compression round. Unless otherwise stated, we use the mixed strategy. A comparison of the three strategies is presented in Section 5.8.
The correctness of the recompression RLSLPs produced by our compressed pipeline was verified by decompressing them and comparing the results with the original texts. To verify the local consistency property, we implemented longest-common-extension (LCE) queries on the recompression RLSLPs (as described in [15]) and compared their outputs with those from an independent LCE implementation.
The peak RAM usage in all implementations was measured using manual tracking of memory allocations. In the case of the uncompressed versions, we used the provided tracking mechanism. All tools in our compressed pipeline use our custom memory tracking.
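As an illustration, the following hypothetical sketch shows one standard way to implement such manual tracking, by replacing the global allocation operators (our tools instrument allocations in their own way; alignment subtleties are ignored here):

```cpp
// A minimal sketch of manual allocation tracking: the global operator
// new/delete maintain a running total and a peak. Hypothetical illustration;
// over-aligned allocations are not handled in this sketch.
#include <atomic>
#include <cstdlib>
#include <iostream>
#include <new>
#include <vector>

static std::atomic<std::size_t> g_current{0}, g_peak{0};

void* operator new(std::size_t bytes) {
  // Store the size in a small header so operator delete can subtract it.
  void* p = std::malloc(bytes + sizeof(std::size_t));
  if (!p) throw std::bad_alloc();
  *(std::size_t*)p = bytes;
  std::size_t cur = g_current.fetch_add(bytes) + bytes;
  std::size_t peak = g_peak.load();
  while (cur > peak && !g_peak.compare_exchange_weak(peak, cur)) {}
  return (char*)p + sizeof(std::size_t);
}

void operator delete(void* p) noexcept {
  if (!p) return;
  char* base = (char*)p - sizeof(std::size_t);
  g_current.fetch_sub(*(std::size_t*)base);
  std::free(base);
}

int main() {
  { std::vector<int> v(1 << 20); }  // ~4 MiB, freed at scope exit
  std::cout << "peak bytes: " << g_peak.load() << "\n";
}
```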
In the first experiment, we evaluate the end-to-end performance of our pipeline for constructing a recompression RLSLP in compressed time. We compare it against three state-of-the-art uncompressed baselines (fast_seq, parallel_gr, and parallel_rnd).
We run all methods on the four datasets described in Section 5.2 and report runtime, peak memory usage, and the number of productions in the resulting recompression RLSLP in Table 2. Our compressed method is evaluated with block sizes 50 and 500 (a parameter in the Bentley–McIlroy algorithm controlling the trade-off between time/space and output size). We tested a wider range of values and found these two to represent a good balance.
Table 2: Runtime, peak RAM usage, and output size (number of RLSLP productions) for all evaluated methods.

| Dataset | Algorithm | Runtime (s) | Peak RAM (GiB) | RLSLP Productions |
|---|---|---|---|---|
| Y4M4 | Compressed (50) | 109.61 | 4.06 | 5,307,093 |
| Y4M4 | Compressed (500) | 171.48 | 4.25 | 5,332,128 |
| Y4M4 | fast_seq | 1394.00 | 70.56 | 4,890,650 |
| Y4M4 | parallel_gr | 529.00 | 58.38 | 6,036,842 |
| Y4M4 | parallel_rnd | 484.00 | 58.38 | 6,084,384 |
| Y4M5 | Compressed (50) | 41.99 | 4.02 | 936,193 |
| Y4M5 | Compressed (500) | 24.45 | 4.03 | 934,699 |
| Y4M5 | fast_seq | 1134.00 | 70.56 | 864,360 |
| Y4M5 | parallel_gr | 520.00 | 58.38 | 1,046,708 |
| Y4M5 | parallel_rnd | 457.00 | 58.38 | 1,055,740 |
| Y8M4 | Compressed (50) | 241.83 | 8.09 | 8,998,801 |
| Y8M4 | Compressed (500) | 365.52 | 8.44 | 9,038,760 |
| Y8M4 | fast_seq | 4674.00 | 137.83 | 8,953,448 |
| Y8M4 | parallel_gr | 2594.00 | 114.06 | 10,903,809 |
| Y8M4 | parallel_rnd | 2366.00 | 114.06 | 10,967,268 |
| Y8M5 | Compressed (50) | 101.81 | 8.02 | 1,521,123 |
| Y8M5 | Compressed (500) | 48.16 | 8.04 | 1,508,313 |
| Y8M5 | fast_seq | 3750.00 | 137.83 | 1,526,272 |
| Y8M5 | parallel_gr | 2560.00 | 114.06 | 1,853,228 |
| Y8M5 | parallel_rnd | 2633.00 | 114.06 | 1,866,394 |
Key Observations
Drastic RAM Reduction. The peak RAM usage of the compressed pipeline scales linearly with the input size: from 4.02–4.25 GiB for 4 GiB inputs to 8.02–8.44 GiB for 8 GiB inputs. In contrast, the uncompressed methods require 58.38–70.56 GiB on 4 GiB inputs and 114.06–137.83 GiB on 8 GiB inputs, corresponding to a memory overhead of 13–17\(\times\).
The memory usage of the uncompressed variants affects not only space requirements but also performance. On the 8 GiB datasets, all uncompressed methods exceed the 94 GiB memory limit of the system and trigger swap usage, whereas the compressed pipeline runs fully in memory.
Massive Runtime Gains. Our compressed pipeline delivers its most striking gains on large, highly repetitive datasets, which are the primary focus of this work. Against the sequential uncompressed recompression (fast_seq), the advantage is overwhelming: on Y4M5 our Compressed (500) configuration finishes in 24 s versus 1,134 s, a 47\(\times\) speedup; on Y8M5 the gap widens to 48 s versus 3,750 s, or 78\(\times\) faster.\(^{7}\) Even when compared to parallel uncompressed implementations (which benefit from multiple cores), our single-threaded approach remains far ahead: the parallel variants need at least 457 s on 4 GiB inputs and at least 2,560 s on 8 GiB inputs, yielding speedups of 19\(\times\) and 53\(\times\), respectively.
Even for the less repetitive datasets (Y4M4 and Y8M4), the benefits of compressed computation remain substantial; e.g., Compressed (500) requires 171 s on Y4M4 and 366 s on Y8M4, cutting sequential runtime by roughly 8\(\times\)–13\(\times\) and staying faster than every parallel uncompressed variant despite their multicore advantage.
Stable Output Size. Our compressed pipeline produces RLSLPs that are either smaller or only slightly larger (but never by more than 10%) than those of the sequential uncompressed implementation of recompression. Furthermore, our RLSLPs are always smaller than those produced by the parallel variants of uncompressed recompression.
Figure 1: Impact of block size on the compressed-pipeline components (dataset: Y4M5). (a) Number of LZ77 phrases; (b) number of SLP productions; (c) number of RLSLP productions; (d) pipeline runtime.
This experiment examines the effect of block size on intermediate metrics within the compressed pipeline. We report the number of phrases in the resulting LZ77-like parsing, the number of productions in the pruned SLP, the number of productions in the final recompression RLSLP, and the overall runtime. All results are computed for the Y4M5 dataset (we also ran the experiment on Y8M5 and found the trends to be very similar to Y4M5). Results are shown in Figures 1 (a)–1 (d).
Number of Phrases
Figure 1 (a) shows that the number of phrases increases with block size. At block size 20, the parser emits roughly 1.1 million phrases, increasing to over 2 million at block size 500.
Number of SLP Productions
As shown in Figure 1 (b), the number of productions in the pruned SLP correlates closely with the phrase count. The smallest SLP occurs at block size 20, while the largest occurs at block size 500.
Number of RLSLP Productions
Figure 1 (c) indicates that the number of productions in the resulting recompression RLSLP remains stable across all block sizes, fluctuating within the narrow range of 925,000–940,000 productions. This stability suggests that downstream compression is effective in absorbing variation introduced by the LZ77 approximation.
Pipeline Runtime
As shown in Figure 1 (d), runtime decreases substantially as block size increases. At block size 20, the full pipeline takes nearly 100 s. Beyond block size 100, the runtime flattens, with block sizes 100 and 500 both completing in approximately 25 s. The largest gains come from reducing parsing overhead at small block sizes.
These results highlight a trade-off between compression and speed: smaller blocks allow longer matches, producing fewer phrases and slightly smaller intermediate grammars, but they sharply increase the cost of parsing. When the pipeline starts from raw text, that extra effort brings little benefit: the downstream recompression stage quickly reduces the grammar to nearly the same size regardless of block size. The size of the initial LZ77-like parsing is not, however, irrelevant. As shown in Experiment 4, if a high-quality LZ77-like parse is already available — for example, computed on another machine — the peak RAM of the entire pipeline scales with the size of that parse, so a smaller, more refined factorization can reduce memory use.
This experiment analyzes how runtime is distributed across different stages of the compressed pipeline. For each block size, we measure the time spent in five steps: (1) computing the approximate LZ77 from the input text, (2) converting the approximate LZ77 to an SLG, (3) expanding the SLG to an SLP, (4) pruning the SLP, and (5) converting the pruned SLP to a recompression RLSLP. The results are reported on the Y4M5 and Y8M5 datasets.
Figure 2 shows the breakdown for selected block sizes. Stages (3) and (4) are omitted from the plot, as their runtimes are consistently negligible (\(<0.5\) s).
The Text \(\rightarrow\) LZ77 step dominates the runtime at small block sizes. For Y4M5, at block size 20, it accounts for over 80 s of the total time. Its runtime decreases sharply with increasing block size, stabilizing below 8 s beyond block size 100. In contrast, the final step (Pruned SLP \(\rightarrow\) RLSLP) increases with block size, reaching 13.8 s at block size 500. The intermediate LZ77 \(\rightarrow\) SLG step remains inexpensive across all settings, with runtime below 7 s. The results for Y8M5 are analogous.
These results align with trends from Experiment 2. Small blocks result in smaller LZ77-like parsings but increase parsing overhead. Larger blocks reduce front-end cost but shift more work to the back-end stages.
This experiment measures the peak memory consumption of individual components in the compressed pipeline, assuming a precomputed LZ77-like parse as input. This setup models a scenario in which the LZ77-like factorization is produced externally and the remaining grammar construction must operate under memory constraints. We report the peak RAM usage of the four downstream components: LZ77 \(\rightarrow\) SLG, SLG \(\rightarrow\) SLP, SLP \(\rightarrow\) Pruned SLP, and Pruned SLP \(\rightarrow\) RLSLP. Results are based on the Y4M5 and Y8M5 datasets.
Figure 3 shows peak memory usage across block sizes. For Y4M5, the final stage (Pruned SLP \(\rightarrow\) RLSLP) consistently uses the most memory, increasing from 104 MiB at block size 20 to 134 MiB at block size 500. Earlier stages remain lightweight: LZ77 \(\rightarrow\) SLG, SLG \(\rightarrow\) SLP, and SLP \(\rightarrow\) Pruned SLP all stay well under, or only slightly above, 50 MiB. The results on Y8M5 are analogous.
These findings show that memory usage grows moderately with block size and that the increase is concentrated in the final RLSLP construction step. This observation is consistent with Experiments 2 and 3: a larger block size leads to a slightly bigger LZ77-like parse (Experiment 2), which in turn increases both the runtime of the final step (Experiment 3) and the peak RAM usage observed here.
Table 3: Comparison of alphabet-partitioning strategies in the Pruned SLP \(\rightarrow\) RLSLP step (block size 50).

| Strategy | Dataset | Runtime (s) | Peak RAM (MiB) | RLSLP Productions |
|---|---|---|---|---|
| Deterministic | Y4M4 | 87.68 | 699.36 | 4,981,200 |
| Mixed | Y4M4 | 77.44 | 753.22 | 5,307,093 |
| Randomized | Y4M4 | 56.20 | 778.78 | 6,080,788 |
| Deterministic | Y4M5 | 11.37 | 103.36 | 869,055 |
| Mixed | Y4M5 | 10.36 | 109.33 | 936,193 |
| Randomized | Y4M5 | 9.00 | 118.31 | 1,073,603 |
This experiment examines how different partitioning strategies (Section 5.3) affect the Pruned SLP \(\rightarrow\) RLSLP transformation. We compare the deterministic, randomized, and mixed strategies on the Y4M4 and Y4M5 datasets, using a block size of 50 in the Bentley–McIlroy step. For each strategy, we record the RLSLP construction time, the peak RAM usage during construction, and the number of productions in the resulting recompression RLSLP. The results are summarized in Table 3.
The deterministic strategy yields the smallest output RLSLP and the lowest peak RAM usage, but it also incurs the highest runtime on both datasets. This is because deterministic partitioning guarantees a constant-factor reduction of the current string, leading to fewer recompression rounds and a more compact output. However, determining the partition \(\Sigma = \Sigma_{L} \cup \Sigma_{R}\) is computationally expensive. In contrast, the randomized partition is fast and easy to compute, yet it reduces the string length less in each PComp step, requiring more recompression rounds and producing a larger output. The mixed strategy, which alternates between deterministic and randomized steps, offers a balance between these two approaches.
1. Department of Computer Science, Stony Brook University, Stony Brook, NY, USA, aadudodla@cs.stonybrook.edu.
2. Department of Computer Science, Stony Brook University, Stony Brook, NY, USA, kempa@cs.stonybrook.edu.
3. The full version of this paper is available at https://arxiv.org/abs/2506.12011. This work has been partially funded by the NSF CAREER Award 2337891 and the Simons Foundation Junior Faculty Fellowship.
4. The \(\mathcal{O}\!\left(\delta(T)\log \frac{n\log \sigma}{\delta(T)\log n}\right)\) bound is known to be the smallest possible space to represent a string \(T\in [0 \mathinner{.\,.}\sigma)^{n}\) for every combination of \(n\), \(\sigma\), and \(\delta(T)\) [37].
5. http://sgd-archive.yeastgenome.org/sequence/S288C_reference/chromosomes/fasta/
6. We use the term approximate LZ77 parsing as an alternative to LZ77-like parsing. The approximate LZ77 parsing still encodes the text \(T\) exactly, i.e., we can restore \(T\) without any errors.
7. This increased gap is likely at least partially due to the uncompressed implementation resorting to swap (disk) memory, and hence we accept the 47\(\times\) speedup as our maximal reliable speedup.