Imitater: An Efficient Shared Mempool Protocol
with Application to Byzantine Fault Tolerance


Abstract

Byzantine Fault Tolerant (BFT) consensus, a cornerstone of blockchain technology, has seen significant advancements. While existing BFT protocols ensure security guarantees, they often suffer from efficiency challenges, particularly under conditions of network instability or malicious exploitation of system mechanisms.

We propose a novel Shared Mempool (SMP) protocol, named Imitater, which can be seamlessly integrated into BFT protocols. By chaining microblocks and applying coding techniques, Imitater efficiently achieves totality and availability. Furthermore, a BFT protocol augmented with Imitater ensures order preservation of client transactions while mitigating the risks of over-distribution and unbalanced workload.

In the experiment, we integrate Imitater into the HotStuff protocol, resulting in Imitater-HS. The performance of Imitater-HS is validated in a system with up to 256 nodes. Experimental results demonstrate the efficiency of our approach: Imitater-HS achieves higher throughput and lower latency in the presence of faulty nodes compared to Stratus-HS, the state-of-the-art protocol. Notably, the throughput improvement increases with the number of faulty nodes.

1 Introduction

With the rapid advancement of blockchain technology, Byzantine Fault Tolerant (BFT) consensus has become a cornerstone of modern blockchain systems, driving extensive research and applications [1]–[4]. At its core, BFT consensus protocols are designed to implement State Machine Replication (SMR) in the presence of Byzantine faults, ensuring consistent transaction ordering across distributed nodes by maintaining identical state machines and logs. They provide strong guarantees of both safety—all honest nodes agree on a single transaction order—and liveness—all valid client requests are eventually processed despite Byzantine faults.

Existing methods focus on optimizing the message complexity of protocols to enhance performance [5]–[7], culminating in HotStuff [8], which achieves linear complexity. Most of these protocols are leader-based: a designated principal node packages transactions into proposals, and the other replicas vote on the leader’s proposal to reach consensus. Under this leader-replica structure, the leader becomes a bottleneck, limiting scalability. To address this, some approaches scale out via sharding [9]–[11], dividing replicas into groups. However, these methods rely on a different fault model that limits fault tolerance within each group, rather than across the entire system.

1.0.1 Shared Mempool Based Protocols.

The scalability bottleneck in leader-based BFT protocols primarily stems from the leader’s responsibility for transaction distribution. Recent studies [12]–[15] have explored another approach to improving the scalability of BFT consensus by decoupling the block distribution and consensus phases, as described in [16]–[18], and have demonstrated significant performance improvements in recent implementations [13]. Each node independently packages its local transactions into microblocks and distributes them to other nodes. The leader then aggregates the identifiers (e.g., hash values) of these microblocks into a candidate block for consensus at the start of each consensus round. This decoupling improves the utilization of both computational and bandwidth resources, and the microblock distribution phase is abstracted as a Shared Mempool (SMP) [14], [15].

Under the decoupled framework, a key challenge is ensuring the microblock availability property. When a replica receives a proposal, the microblocks it references may not be locally available, requiring it to wait until all are available before proceeding (e.g., voting). This may involve passive waiting or active requests to other nodes—both of which can significantly delay consensus.

To address this, existing Shared Mempool implementations [13], [14] ensure microblock availability by having replicas generate an availability certificate during microblock distribution. The leader attaches these certificates to the proposal, enabling replicas to proceed with subsequent steps as soon as they receive the proposal. The certificates assure replicas that all referenced microblocks will eventually be received, eliminating the need for local availability at the time of proposal processing.

1.0.2 Efficient Totality.

An essential property of Byzantine reliable broadcast [19] is totality, which ensures that if an honest node delivers a message \(v\), then all honest nodes will eventually deliver \(v\). This property is equally critical in BFT state machine replication (BFT-SMR). Specifically:

  • Sequential Execution. In State Machine Replication, the execution of transactions is contingent upon the order established by consensus. However, Byzantine faults or the asynchronous nature of the network may cause some nodes to miss certain blocks. When blocks are missing, subsequent blocks, despite achieving consensus, cannot be executed due to the sequential execution requirement.

  • Garbage Collection. From the implementation perspective, nodes aim to release memory resources associated with a given request as soon as a decision is reached. However, because other nodes may later request this data, nodes cannot predict when it is safe to release the associated memory, forcing them to maintain these resources indefinitely.

It is worth noting that achieving totality is not inherently guaranteed by microblock availability. Most current systems do not address totality explicitly within BFT protocols. Instead, they rely on alternative mechanisms, such as state transfer (i.e., reliably broadcasting the decision value of each block) [5], [6] or a pull mechanism (i.e., requesting missing data from other nodes) [13], [15]. While resource consumption prior to reaching consensus is often a primary focus, post-consensus resource usage also deserves significant attention. Both state transfer and pull mechanisms operate after the consensus decision point, potentially contributing to further resource overhead.

However, these approaches are often communication-inefficient and may even undermine the original performance of the protocol. State transfer requires quadratic communication overhead, which remains unavoidable even in synchronous environments [20]. Under the pull mechanism, faulty nodes might request every block from all nodes, regardless of whether the block is genuinely missing, imposing unnecessary bandwidth consumption on the system. In the worst-case scenario, if all faulty nodes request blocks from all honest nodes, this results in quadratic bandwidth consumption, severely degrading system performance. Even one faulty node could halve the throughput, as confirmed by the experiments in Section 4.

1.0.3 Contribution.

In this paper, we address the above concerns by exploiting a series of techniques, making the following contributions.

We propose a novel Shared Mempool (SMP) protocol (Section 3) named Imitater that efficiently guarantees the totality of microblock distribution through the use of erasure codes, ensuring that all microblocks are eventually ready for execution without relying on communication-intensive request-based mechanisms.

We demonstrate how to compile Imitater into an efficient BFT protocol, Imitater-BFT (Section 6), which offers the following advantages: (i) totality and microblock availability are naturally inherited from the SMP totality; (ii) by appropriately scheduling the Dispersal and Retrieval components within SMP, we achieve bandwidth adaptability (Section [sec:trigger-dispersal]); (iii) the protocol ensures order preservation of client transactions in the final decision, effectively handles over-distribution risks, and accommodates unbalanced workload scenarios without compromising protocol performance (Section [sec:practical-problems]).

We integrate Imitater into the HotStuff protocol, creating Imitater-HS, and validate it through large-scale experiments (Section 4). In a 100-node system with up to 1/3 faulty nodes, Imitater-HS achieves 21.5 Kops/s throughput—approximately 9× higher than Stratus-HS’s 2.3 Kops/s—demonstrating its superior performance and practical scalability.

2 Background

2.1 System Model

In our setup, the system consists of \(n = 3f + 1\) nodes or replicas, with up to \(f\) Byzantine nodes controlled by an adversary \(\mathcal{A}\). Honest nodes strictly follow the protocol, while Byzantine nodes can behave arbitrarily. We assume reliable communication and a public key infrastructure (PKI). The adversary \(\mathcal{A}\) has limited computational power and cannot break cryptographic primitives.

We consider a partially synchronous network as outlined in [21]. After an unknown Global Stabilization Time (GST), messages transmitted between honest nodes arrive within a known maximum bound \(\Delta\). However, the transmission of network messages can be manipulated by the adversary \(\mathcal{A}\), who may delay or reorder messages. Our consensus protocol follows the leader-replica framework, with the leader proposing blocks and the replicas deciding to accept or reject them.

External clients continuously submit transaction requests to the system, choosing replicas randomly, by proximity, or preferentially. Given that Byzantine nodes may censor transactions, clients use a timeout mechanism to resubmit transactions to other replicas until they find an honest node.

2.2 Primitives

We use a \((2f+1,n)\) threshold signature scheme [22], [23], which includes algorithms \((TSign, TComb, TVerf)\). The \(TSign\) algorithm allows node \(i\) to sign message \(m\) with its private key \(tsk_i\), producing a partial signature \(\sigma_i\). The \(TComb\) algorithm combines \(2f+1\) partial signatures for the same message \(m\) into an aggregated signature \(\sigma\). The \(TVerf\) algorithm verifies the aggregated signature \(\sigma\) against message \(m\) using the public key \(pk\).
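For concreteness, the following Go sketch shows one way to express this interface; the type and field names are illustrative assumptions of this paper’s notation, not the API of the BLS library used by our prototype.

```go
// Illustrative interface for the (2f+1, n) threshold signature scheme;
// names mirror the notation above, concrete types are assumptions.
package tsig

// PartialSig is a partial signature sigma_i produced by node i.
type PartialSig struct {
	Signer int
	Sig    []byte
}

type ThresholdScheme interface {
	// TSign signs message m with node i's key share tsk_i, yielding sigma_i.
	TSign(i int, m []byte) PartialSig
	// TComb aggregates 2f+1 partial signatures on the same m into sigma.
	TComb(m []byte, parts []PartialSig) ([]byte, error)
	// TVerf checks an aggregated signature sigma against m under pk.
	TVerf(m, sigma []byte) bool
}
```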

We employ an \((f+1,n)\)-erasure code, Reed-Solomon code [24] by default, which includes a pair of encoding and decoding algorithms \(( Enc, Dec )\). The encoding algorithm \(Enc\) encodes a message into \(n\) chunks \(s_1,s_2,...,s_n\), while the decoding algorithm decodes the message using any \(f+1\) out of \(n\) coded chunks.
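As a concrete illustration of the \((f+1,n)\) code, the sketch below uses the klauspost/reedsolomon Go package (our prototype uses templexxx/reedsolomon, which offers a similar API): with \(f+1\) data shards and \(2f\) parity shards, any \(f+1\) of the \(n\) chunks suffice to reconstruct the message.

```go
// Sketch of the (f+1, n) encoding/decoding used for microblocks,
// illustrated with the klauspost/reedsolomon package.
package main

import (
	"bytes"
	"fmt"

	"github.com/klauspost/reedsolomon"
)

func main() {
	f := 1
	n := 3*f + 1 // n = 4 nodes; any f+1 = 2 chunks suffice to decode

	enc, err := reedsolomon.New(f+1, n-(f+1)) // f+1 data, 2f parity shards
	if err != nil {
		panic(err)
	}

	microblock := []byte("serialized transactions of microblock b_p^i")
	shards, _ := enc.Split(microblock) // split into f+1 data shards (padded)
	_ = enc.Encode(shards)             // fill in the 2f parity shards

	// Simulate loss: up to n-(f+1) shards may be missing.
	shards[0], shards[2] = nil, nil
	if err := enc.Reconstruct(shards); err != nil {
		panic(err)
	}

	var out bytes.Buffer
	_ = enc.Join(&out, shards, len(microblock)) // trim padding
	fmt.Println(bytes.Equal(out.Bytes(), microblock)) // true
}
```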

We utilize Merkle trees [25] to demonstrate the set relationships of coded chunks, including a set of Merkle tree generation functions and Merkle proof verification functions denoted as \((M.Gen, M.Vrf)\). We use \(R(\cdot)\) to represent the root of the Merkle tree and \(\boldsymbol{P}(\cdot)\) to denote the Merkle proofs. \(M.Gen\) generates the Merkle root and the Merkle proofs for a set of elements \(\boldsymbol{s}\), i.e., \((\boldsymbol{P}(\boldsymbol{s}), R(\boldsymbol{s})) \gets M.Gen(\boldsymbol{s})\), while \(M.Vrf\) verifies the validity of a proof \(P_i(\boldsymbol{s})\) for an element \(s_i\), i.e., \(\{True, False\} \gets M.Vrf(P_i(\boldsymbol{s}), s_i, R(\boldsymbol{s}))\).
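The following is a minimal, self-contained Go sketch of \(M.Gen\) and \(M.Vrf\) (assuming, for brevity, a power-of-two number of leaves); it illustrates the primitive rather than reproducing the prototype’s implementation.

```go
// Minimal Merkle tree sketch for chunk identifiers (M.Gen / M.Vrf).
package merkle

import (
	"bytes"
	"crypto/sha256"
)

// ProofStep is one sibling hash on the path from a leaf to the root.
type ProofStep struct {
	Hash  []byte
	Right bool // true if the sibling sits to the right of the running hash
}

func hash2(a, b []byte) []byte {
	h := sha256.Sum256(append(append([]byte{}, a...), b...))
	return h[:]
}

// Gen implements M.Gen: it returns the root R(s) and a proof P_i(s)
// for every leaf s_i. Assumes len(leaves) is a power of two.
func Gen(leaves [][]byte) ([]byte, [][]ProofStep) {
	level := make([][]byte, len(leaves))
	for i, l := range leaves {
		h := sha256.Sum256(l)
		level[i] = h[:]
	}
	proofs := make([][]ProofStep, len(leaves))
	pos := make([]int, len(leaves)) // each leaf's index at the current level
	for i := range pos {
		pos[i] = i
	}
	for len(level) > 1 {
		for i := range proofs {
			sib := pos[i] ^ 1 // sibling index at this level
			proofs[i] = append(proofs[i], ProofStep{level[sib], sib > pos[i]})
			pos[i] /= 2
		}
		next := make([][]byte, len(level)/2)
		for j := 0; j < len(level); j += 2 {
			next[j/2] = hash2(level[j], level[j+1])
		}
		level = next
	}
	return level[0], proofs
}

// Vrf implements M.Vrf: it checks P_i(s) for leaf s_i against R(s).
func Vrf(proof []ProofStep, leaf, root []byte) bool {
	h := sha256.Sum256(leaf)
	cur := h[:]
	for _, st := range proof {
		if st.Right {
			cur = hash2(cur, st.Hash)
		} else {
			cur = hash2(st.Hash, cur)
		}
	}
	return bytes.Equal(cur, root)
}
```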

3 Imitater Protocol

3.1 Overview

In this paper, we adopt a Shared Mempool (SMP)-based architecture [12]–[15], where each node independently packages its local transactions into microblocks and distributes them to other nodes. The leader then packages the identifiers (e.g., hash values) of these microblocks into a candidate block for consensus. The collection of distributed microblocks is referred to as the Shared Mempool (SMP) [14], [15], which conceptually represents a shared memory pool. We use the term mempool to denote each node’s actual local storage.

Figure 1: Chain-based organization of a node’s microblock mempool. Each node maintains n local microblock chains, where the i-th chain sequentially records microblocks disseminated by node i.

3.1.1 Chained Microblocks.

Imitater is designed in the form of a chained structure, as shown in Figure 1. Let \(b_p^{i,j}\) denote the microblock at position \(p\) on the \(i\)th chain in the mempool of node \(j\), and let \(R(b_p^{i,j})\) be the identifier of \(b_p^{i,j}\). In this paper, we use the Merkle root \(R(\cdot)\) as the identifier of a microblock, which we elaborate on in Section 3.2.1. Since all honest nodes eventually store the same chains (see Theorems 1 and 2), \(j\) is omitted, and the microblock is denoted as \(b_p^i\) for simplicity.

When node \(i\) distributes a microblock, each node receiving the distribution message replies with an Ack message containing a partial signature of the corresponding identifier. The partial signatures are aggregated into a threshold signature \(\sigma(b_p^i)\), forming the Availability Certificate (AC) \(C_p^i = (\sigma(b_p^i), R(b_p^i))\). Once node \(i\) has collected the AC for \(b_p^i\), it can construct and distribute the next microblock \(b_{p+1}^i\); hence \(b_{p+1}^i\) implies the availability of \(b_{p}^i\). Since a microblock includes the AC of its predecessor, this naturally forms a chain.

Now we formally define the structure of a microblock as \(b_p^i = (R(b_p^i), C_{p-1}^i, t_p^i)\), consisting of the identifier \(R(b_p^i)\), the availability certificate of the previous block \(C_{p-1}^i\), and the content \(t_p^i\) of the current block.
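In Go terms, a microblock and its AC might be represented as below; this is a sketch whose field types are assumptions matching the paper’s notation, not the prototype’s exact structures. The later SMP sketches build on these definitions.

```go
// AC is the Availability Certificate C_p^i = (sigma(b_p^i), R(b_p^i)).
type AC struct {
	Sig  []byte // aggregated threshold signature sigma(b_p^i)
	Root []byte // identifier R(b_p^i): Merkle root of the coded chunks
}

// Microblock is b_p^i = (R(b_p^i), C_{p-1}^i, t_p^i).
type Microblock struct {
	Root   []byte   // identifier R(b_p^i)
	PrevAC AC       // C_{p-1}^i: AC of the predecessor on the same chain
	Txs    [][]byte // t_p^i: the transaction payload of this microblock
}
```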

A chain-based SMP efficiently maintains the order of microblocks and reduces proposal size during consensus. At the start of each round, the leader packages the highest-position identifier from each chain into the proposal. If some microblocks were not included in earlier proposals, a later proposal that includes the latest identifier of the chain implicitly incorporates all preceding uncommitted microblocks, ensuring efficient consensus and sequential integrity.

For example, in Figure 1, \(b_2^4\) and \(b_3^4\) were not included in earlier proposals. When a successor microblock such as \(b_4^4\) is included, \(b_2^4\) and \(b_3^4\) are implicitly added in the correct order, preserving their sequence.

3.1.2 Coding and Totality.

To avoid request-based methods with high communication overhead, Imitater achieves totality by exploiting coding techniques. More concretely, instead of directly broadcasting complete microblocks, we distribute encoded chunks, which nodes forward and collect.

Distributing blocks via coding techniques has been extensively explored in previous works [26]–[28]. Imitater repurposes these coding techniques to solve the totality problem, ensuring complete microblock delivery to all nodes. In addition, this use of coding yields bandwidth adaptability, detailed in Section [sec:trigger-dispersal].

3.2 SMP Protocol

Our SMP protocol, based on the communication pattern in Figure 2, consists of two phases: Dispersal and Retrieval, defined in Figures 3 and 4. The Dispersal phase handles chunk distribution and Availability Certificate (AC) generation, while the Retrieval phase ensures all honest nodes acquire the complete microblock content.

Figure 2: The communication pattern of disseminating a microblock in SMP

3.2.1 Dispersal Phase.

The Dispersal phase is initiated by an Mb-Dispersal event. When a node \(i\) distributes a microblock \(b_p^i\), it must utilize the AC for its predecessor \(b_{p-1}^i\). Upon triggering an Mb-Dispersal event, node \(i\) employs \(Enc\) to encode the microblock \(b_p^i\) into \(n\) chunks \(s^{i,1}_p,\ldots,s^{i,n}_p\). It then uses \(M.Gen\) to compute the corresponding Merkle root \(R(b_p^i)\) and Merkle proof \(\boldsymbol{P}(s_p^{i,j})\) for each \(j\in [n]\). The Merkle root enables direct verification of the correspondence between any given data chunk and its parent microblock, thus serving as its identifier. Node \(i\) then sends the message \(\langle\text{Mb-Dis},R(b_p^i),C_{p-1}^i,s^{i,j}_p,P(s_p^{i,j})\rangle_{i}\) to all nodes \(j\).

Upon receiving an Mb-Dis message, node \(j\) verifies the AC \(C_{p-1}^i\) and the Merkle proof using \(TVerf\) and \(M.Vrf\). It also ensures that this is the first Mb-Dis message for position \(p\) from node \(i\); otherwise, the message is ignored. After all verifications pass, node \(j\) stores the data in its local SMP and signs \(R(b_p^i)\) using \(TSign\), sending the resulting partial signature \(\sigma_j(b_p^i)\) in an Mb-Ack message to node \(i\). Once node \(i\) collects \(2f+1\) Mb-Ack messages for \(R(b_p^i)\) from different nodes, it aggregates the signatures into a proof \(\sigma(b_p^i)\) using \(TComb\). Thus, \((\sigma(b_p^i),R(b_p^i))\) forms the AC for \(b_p^i\), which node \(i\) can use to initiate the Dispersal of its next microblock \(b_{p+1}^i\).

Figure 3: Microblock Dispersal with b_p^i at node i

The AC serves two purposes. First, it organizes microblocks from the same node into a chained structure, which resolves the sequencing of microblocks from honest nodes. Second, once an honest node collects an AC, it is assured that all honest nodes will eventually obtain the complete microblock. Thus, during consensus, nodes can proceed without waiting for full content, avoiding delays due to missing microblocks.
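Putting the pieces together, the following hedged Go sketch traces the Dispersal message flow. It reuses the tsig and merkle sketches and the Microblock/AC structures above; the Node type and its function-valued fields stand in for networking and storage plumbing that a real implementation provides elsewhere, and are assumptions of this illustration.

```go
// Mb-Dis and Mb-Ack messages, mirroring the notation above.
type MbDis struct {
	Root   []byte             // identifier R(b_p^i)
	PrevAC AC                 // C_{p-1}^i
	Chunk  []byte             // coded chunk s_p^{i,j}
	Proof  []merkle.ProofStep // Merkle proof P(s_p^{i,j})
}

type MbAck struct {
	Root []byte
	Part tsig.PartialSig
}

type Node struct {
	ID, N, F int
	TS       tsig.ThresholdScheme
	Encode   func(txs [][]byte) [][]byte   // Enc: n chunks of size ~m/(f+1)
	Send     func(to int, msg interface{}) // point-to-point send
	Seen     func(from int, root []byte) bool // first Mb-Dis for this position?
	Store    func(from int, m MbDis)       // keep the chunk in the local SMP
}

// Disperse runs at the sender: encode, build proofs, send Mb-Dis to all.
func (nd *Node) Disperse(b *Microblock) {
	chunks := nd.Encode(b.Txs)
	root, proofs := merkle.Gen(chunks) // (P(s), R(s)) <- M.Gen(s)
	b.Root = root
	for j := 0; j < nd.N; j++ {
		nd.Send(j, MbDis{root, b.PrevAC, chunks[j], proofs[j]})
	}
}

// OnMbDis runs at each receiver: verify the predecessor AC and the
// chunk's Merkle proof, store the chunk, then ack with a partial signature.
func (nd *Node) OnMbDis(from int, m MbDis) {
	if !nd.TS.TVerf(m.PrevAC.Root, m.PrevAC.Sig) ||
		!merkle.Vrf(m.Proof, m.Chunk, m.Root) ||
		nd.Seen(from, m.Root) {
		return
	}
	nd.Store(from, m)
	nd.Send(from, MbAck{m.Root, nd.TS.TSign(nd.ID, m.Root)})
}

// CombineAcks runs back at the sender once 2f+1 Mb-Ack messages arrive.
func (nd *Node) CombineAcks(root []byte, acks []MbAck) (AC, error) {
	parts := make([]tsig.PartialSig, len(acks))
	for k, a := range acks {
		parts[k] = a.Part
	}
	sig, err := nd.TS.TComb(root, parts)
	return AC{Sig: sig, Root: root}, err
}
```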

Figure 4: Microblock Retrieval with R(b_p^j) at node i

3.2.2 Retrieval Phase.

When a node triggers the \(\langle\text{Mb-Retrieval}, R(b_p^j)\rangle\) event, it enters the retrieval phase for \(R(b_p^j)\). Upon entering the retrieval phase, node \(i\) checks whether it has previously received a chunk \(s_i\) corresponding to \(R(b_p^j)\). If so, it broadcasts the message \(\langle\text{Mb-Chk}, R(b_p^j), s_i, P_i\rangle\). It then checks whether an Mb-Retrieval event for the predecessor microblock of \(b_p^j\) has ever been triggered; if not, it recursively triggers the retrieval event for the predecessor microblock.

Each node triggers the retrieval of each microblock at most once. This means that once consensus has been reached and the microblock has been executed, it can be safely removed from memory without worrying that other nodes will later need to fetch its contents.

Upon receiving an \(\langle \text{Mb-Chk}\rangle\) message corresponding to \(R(b_p^i)\), the node verifies the accompanying Merkle proof. If the verification is successful, the chunk is added to the local mempool; otherwise, the message is ignored.

When a node collects \(f+1\) chunks for \(R(b_p^j)\), decoding is performed to reconstruct \(b'\). The decoded \(b'\) is then re-encoded using \(Enc\) into a set of chunks \(\boldsymbol{Chk'}\), and its Merkle root is recomputed. If the recomputed Merkle root matches \(R(b_p^j)\), decoding is successful. Otherwise, \(b_p^j\) is marked as empty (Nil), and future consensus on this microblock will regard it as empty.

Additionally, if a node receives \(2f+1\) Mb-Chk messages for \(R(b_p^i)\) before broadcasting its own chunk, it can consider the Retrieval of the corresponding microblock completed: at least \(f+1\) of these messages come from honest nodes and are broadcast to all, so every honest node will eventually reach the decoding threshold.
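The decode-and-verify step at the heart of Retrieval can be sketched as follows, assuming further helpers on the same illustrative Node as above (ChunksFor, Decode, MarkNil, and Deliver are elided plumbing, and bytes/merkle are imported as before); the Merkle re-check is what makes the delivered copies consistent.

```go
// TryDecode runs whenever a new verified Mb-Chk chunk for root arrives.
func (nd *Node) TryDecode(root []byte) {
	chunks := nd.ChunksFor(root) // verified chunks collected so far
	if len(chunks) < nd.F+1 {
		return // below the (f+1, n) decoding threshold
	}
	payload, err := nd.Decode(chunks) // Dec: reconstruct b' from f+1 chunks
	if err != nil {
		nd.MarkNil(root) // undecodable: consensus will treat b_p^j as Nil
		return
	}
	reChunks := nd.Encode(payload)     // re-encode the decoded content (Enc)
	newRoot, _ := merkle.Gen(reChunks) // recompute the identifier
	if !bytes.Equal(newRoot, root) {
		nd.MarkNil(root) // root mismatch: mark the microblock as empty (Nil)
		return
	}
	nd.Deliver(root, payload) // a consistent copy is now locally available
}
```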

3.2.3 Phase Triggering.

We have so far deferred the question of when to trigger the Dispersal and Retrieval phases. By default, the \(\langle\text{Mb-Dispersal}\rangle\) and \(\langle\text{Mb-Retrieval}\rangle\) events are triggered automatically, but when and how to trigger them depends on the specific requirements of the BFT consensus.

A straightforward approach is to periodically trigger the \(\langle\text{Mb-Dispersal}\rangle\) event and have nodes broadcast their AC(s) once they are collected. Upon receiving an AC message, the corresponding \(\langle\text{Mb-Retrieval}\rangle\) event is triggered. Once all nodes have triggered the Retrieval event, they can decode the complete content of the microblock, thereby achieving totality. Consequently, each node can maintain a local mempool in a chained structure, allowing microblocks to be efficiently shared among nodes.
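A minimal sketch of this default triggering follows, reusing the Disperse entry point above; period, PackPending, Retrieve, and Broadcast are illustrative placeholders rather than the prototype’s API.

```go
// Run periodically triggers Mb-Dispersal for pending transactions.
func (nd *Node) Run(period time.Duration) {
	tick := time.NewTicker(period)
	defer tick.Stop()
	for range tick.C {
		if b := nd.PackPending(); b != nil {
			nd.Disperse(b) // periodic Mb-Dispersal event
		}
	}
}

// OnAC handles a broadcast availability certificate from any node:
// a valid AC is the Mb-Retrieval trigger for the identified microblock.
func (nd *Node) OnAC(c AC) {
	if nd.TS.TVerf(c.Root, c.Sig) {
		nd.Retrieve(c.Root) // enter the Retrieval phase for R(b)
	}
}
```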

We will discuss in Section [Sec:bft-protocol] how we trigger the \(\langle\)Mb-Dispersal\(\rangle\) and \(\langle\)Mb-Retrieval\(\rangle\) events to meet our requirements when combined with BFT consensus.

3.3 Analysis of Imitater

3.3.1 Communication Efficiency.

To achieve a Shared Mempool (SMP) with the totality property, we employ coding techniques for microblock distribution. Here, we briefly analyze the efficiency of completing the distribution of a microblock.

Assume the microblock size is \(m\), and both signatures and hash values have the same size \(\lambda\) for simplicity. During the Dispersal phase, the distributor encodes the microblock into \(n\) chunks, each of size \(\frac{m}{f+1}\), and each Mb-Dis message contains an identifier (\(\lambda\)), an availability certificate (AC) (\(\lambda\)), a chunk (\(\frac{m}{f+1}\)), and a Merkle proof (\(\lambda \log n\)). Upon receiving an Mb-Dis message, each node sends back an Mb-Ack message containing a partial signature (\(\lambda\)). The total communication cost for this phase is \(3n\lambda + n\lambda \log n + \frac{mn}{f+1}\). For large messages (\(m \gg n\lambda\log n\)), the Dispersal cost approximates \(3m\) with \(n = 3f + 1\).

In the Retrieval phase, each node broadcasts a message with an identifier (\(\lambda\)), a chunk (\(\frac{m}{f+1}\)), and a Merkle proof (\(\lambda \log n\)). The total communication cost for this phase is \(n^2\lambda + \frac{mn^2}{f+1} + n^2\lambda \log n\), which approximates \(3mn\) for large microblocks.

In summary, the total communication overhead for completing microblock distribution is \((n^2+3n)\lambda + (n^2+n)\lambda \log n + \frac{m(n^2+n)}{f+1}\); for large microblocks, the cost is dominated by the Retrieval phase at \(3mn\).
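To make the large-message regime concrete, instantiating the bounds with \(n = 3f+1\) gives (our own arithmetic, consistent with the approximations above):

```latex
\underbrace{\frac{mn}{f+1}}_{\text{Dispersal}}
  = \Bigl(3 - \frac{2}{f+1}\Bigr)m \approx 3m,
\qquad
\underbrace{\frac{mn^{2}}{f+1}}_{\text{Retrieval}}
  = n\Bigl(3 - \frac{2}{f+1}\Bigr)m \approx 3mn.
```

For example, with \(n = 49\) (\(f = 16\)) and a 1 MB microblock, Dispersal moves roughly \(49/17 \approx 2.9\) MB in total, while Retrieval moves about \(49^2/17 \approx 141\) MB across all \(n\) broadcasts, i.e., again roughly 2.9 MB sent per node.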

3.3.2 Correctness.

Here we briefly show the correctness of Imitater.

Lemma 1 (Dispersal Termination). Every honest node initiating microblock Dispersal will eventually obtain a corresponding AC after GST.

Proof. An honest node distributes its Mb-Dis messages to all nodes, and every honest node responds with an Mb-Ack message upon receipt. Since at least \(2f+1\) nodes are honest, the sender can collect \(2f+1\) Mb-Ack messages and combine the partial signatures into the AC. ◻

While Lemma 1 ensures that nodes engaging in honest distribution will collect a certificate, it is also essential to guarantee that all other honest nodes can obtain the corresponding microblock. We first show that any node can collect at most one valid AC while distributing a microblock at position \(p\).

Lemma 2 (Microblock Uniqueness). For any position \(p\) on any mempool chain \(i\), at most one microblock can have a valid AC.

Proof. Suppose node \(i\) distributes two microblocks, \(b_p^i\) and \(\hat{b}_p^i\), at position \(p\), and collects valid ACs \(C_p^i\) and \(\hat{C}_p^i\) for both. Each AC aggregates partial signatures from at least \(2f+1\) nodes, and any two quorums of size \(2f+1\) among \(n = 3f+1\) nodes intersect in at least \(f+1\) nodes, at least one of which is honest. That honest node must therefore have sent Ack messages for both microblocks at position \(p\), which contradicts the protocol, as an honest node acks at most one microblock per position. Therefore, at most one valid AC can be formed. ◻

Next, we need to ensure that once an AC is formed for a microblock, all other honest nodes can obtain that microblock.

Theorem 1 (SMP Totality). If an honest node collects an AC for a microblock, then all honest nodes will eventually obtain a consistent copy of that microblock.

Proof. We first show that any honest node will ultimately be able to decode a microblock that has an AC. Since an AC comprises \(2f+1\) Mb-Ack messages, of which at least \(f+1\) must originate from honest nodes, at least \(f+1\) honest nodes possess distinct chunks. These honest nodes will broadcast their chunks when triggering the retrieval, allowing all honest nodes to decode.

Since each chunk comes with a Merkle proof, a node re-encodes the decoded result, regenerates the corresponding Merkle tree, and checks whether the recomputed root matches the original one. We must therefore show that either (i) every honest node’s verification succeeds, or (ii) every honest node’s verification fails. Suppose, for contradiction, that for a set of chunks \(Chk\) with valid Merkle proofs there exist two subsets \(S_1\) and \(S_2\) of size \(f+1\) such that, without loss of generality, verification succeeds for \(S_1\) and fails for \(S_2\).

Since the verification for \(S_1\) succeeds, let its decoding result be \(b\); re-encoding \(b\) correctly yields \(Chk' = Chk\). By the principles of erasure codes, any \(f+1\) chunks from \(Chk'\) decode consistently to \(b\). In other words, any size-\((f+1)\) subset of \(Chk\) yields the same result and thus passes verification, which contradicts the assumed failure for \(S_2\).

Hence, different honest nodes decoding chunks corresponding to the same Merkle root \(R(b_p)\) will obtain consistent results. ◻

Since our SMP is maintained in a chained structure, we also need to provide a chain-based consistency constraint for microblocks.

Theorem 2 (SMP Chain-Consistency). For any microblock chain \(i\) on the SMP, where \(i \in [n]\), if two honest nodes \(j\) and \(k\) hold \(b_p^{i,j}\) and \(b_p^{i,k}\), respectively, and \(b_p^{i,j} = b_p^{i,k}\), then \(b_{p'}^{i,j} = b_{p'}^{i,k}\) for all \(p' \leq p\).

Proof. Given that \(b_p^{i,j} = b_p^{i,k}\), the predecessor identifiers included within the two microblocks are also equal, that is, \(R(b_{p-1}^{i,j}) = R(b_{p-1}^{i,k})\). By Theorem 1, we have \(b_{p-1}^{i,j} = b_{p-1}^{i,k}\), and applying this argument recursively yields the claim. ◻

4 Evaluation

4.1 Implementation and Experimental Setup

We implemented a prototype of the Imitater protocol in Golang, utilizing threshold signatures^2 [22] and employing Reed-Solomon codes^3 [24] for erasure coding.

To evaluate its performance, we compared it with Stratus, a state-of-the-art protocol, using a modified version of bamboo-stratus^4, in which we replaced the memory pool logic with Imitater. We compared against Stratus-HS from bamboo-stratus, which, to our knowledge, demonstrates the best throughput as the number of replicas increases; the consensus component was HotStuff.

Experiments were conducted on cloud SA5.2XLARGE16 instances within a single datacenter, each with 8 vCPUs and 3 Gbps of internal bandwidth. Each replica ran on a separate instance, and we simulated a WAN environment by limiting each replica to 100 Mbit/s of bandwidth using tc.

Four instances ran client processes, continuously sending requests to the replicas. Latency was measured as the time from when a node (the microblock proposer) receives a request to when it completes the consensus process. Throughput is the number of requests committed per second, averaged across all replicas. All measurements were taken after the system’s performance had stabilized.

4.2 Performance

To evaluate the performance of Imitater, we conducted a thorough assessment and compared it with Stratus, the state-of-the-art SMP protocol.

Figure 5: Throughput with n=49 and bandwidth set to 100 Mbps, varying the number of faulty nodes f

First, we analyzed the impact of faulty nodes on throughput. Figure 5 shows that, in a 49-node system with 100 Mbps bandwidth, Stratus-HS throughput declines sharply as the number of faulty nodes increases from 0 to 16. For instance, just one faulty node reduces its throughput by nearly half. In contrast, Imitater-HS maintains constant throughput regardless of faulty nodes. With one faulty node, Imitater-HS achieves 0.8\(\times\) the throughput of Stratus-HS, and with 16 faulty nodes, it reaches 6\(\times\) the throughput.

Figure 6: Performance comparison of Imitater-HS and Stratus-HS with bandwidth setting to 100 Mbps and up to n/3 nodes being faulty. (a) Throughput vs. nodes. (b) Latency vs. nodes.

Figure 6(a) demonstrates how throughput varies with the number of nodes under maximum fault tolerance, i.e., with \(n/3\) faulty nodes. The throughput of Stratus-HS decreases much faster than that of Imitater-HS as the number of nodes increases. At 100 nodes, Imitater-HS achieves 21.5 Kops/s, nearly 9\(\times\) that of Stratus-HS (2.3 Kops/s). The decline of Stratus-HS is primarily due to malicious nodes requesting all microblocks from each honest node, forcing honest nodes to spend bandwidth sending redundant microblocks. By design, Imitater-HS gives malicious nodes no such request to abuse and thus avoids this degradation, whereas in Stratus-HS the number of malicious nodes—and hence the attack surface—grows with the system size under maximum fault tolerance, leading to a noticeable performance decline.

Similarly, the latency of Stratus-HS is affected by attacks from malicious nodes. Figure 6(b) shows how latency changes as the number of nodes increases. Imitater-HS maintains a relatively low latency, while Stratus-HS experiences a sharp increase as the number of nodes grows. Specifically, with 100 nodes, Stratus-HS’s latency reaches 5565 ms, whereas Imitater-HS’s latency remains around 550 ms.

5 Conclusion

In this paper, we propose a novel SMP protocol, Imitater, to improve the efficiency of microblock distribution in the presence of Byzantine faulty nodes under a partially synchronous setting. Imitater ensures that all microblocks are correctly distributed, available, and ordered even in the presence of faulty nodes, and it is easy to integrate into a BFT protocol, achieving improved performance by addressing specific practical problems and making better use of bandwidth. Our experiments, conducted in a large-scale deployment, showed that the protocol is efficient, maintaining high throughput and low latency even when some nodes are faulty.

5.0.1 Acknowledgments.

This work was supported by National Natural Science Foundation of China (Grant 62301190), Shenzhen Colleges and Universities Stable Support Program (Grant GXWD20231129135251001), National Key Research and Development Program of China (Grant 2023YFB3106504) and Major Key Project of PCL (Grant PCL2023A09).

5.0.2 Competing Interests.

The authors have no competing interests to declare that are relevant to the content of this article.

6 Integrating Imitater into Fast-HotStuff

In this section, we illustrate how to utilize Imitater to implement a partially synchronous consensus protocol, namely the Imitater-BFT protocol.

6.1 Overview

As shown in Figure [fig:Imitater_BFT], Imitater family BFT consensus works in a disperse-consensus-retrieval framework, where the disperse and retrieval parts are derived from the Dispersal and Retrieval phases, respectively. In each round of consensus, all nodes disperse microblocks, collect corresponding ACs and report them to the leader. The leader packages the latest ACs reported by all nodes, along with the corresponding microblock identifiers, into a block for consensus. When the block carrying ACs is broadcast, the retrieval part can be triggered.

To prevent a malicious leader from censoring certain nodes’ microblocks by deliberately excluding some fully dispersed microblocks from the proposal, we integrate Imitater into a leader-rotation BFT framework. Since our SMP is maintained in a chain structure where each microblock contains the AC of its predecessor, all uncommitted predecessor blocks on the chain of a microblock included in a block are implicitly included in that block as well. With the leader-rotation mechanism, as long as an honest node is elected as the leader, even if certain microblocks were censored by a malicious leader in a previous round, subsequent honest leaders can include these censored or omitted microblocks in the consensus process without additional cost. This design ensures 2/3 chain quality, meaning that at least 2/3 of the final consensus content originates from honest nodes.

6.2 Details of Imitater-FHS

We now take Fast-HotStuff [7], an improved version of HotStuff [8], as an example and present the details of how we integrate Imitater into a BFT protocol. Imitater-FHS operates in a pipelined manner, with the leader changing in each round. Note that Imitater is compatible with most SMP-based BFT protocols.

Algorithm [alg:utilities]: Utilities for Imitater-FHS (for node \(i\))

procedure CreateBlock\((v, qc, aggQc, B')\):
    \(B.view \gets v\)
    \(B.qc \gets qc\)
    \(B.aggQc \gets aggQc\)
    \(B.parent \gets B'\)
    for \(i\in [n]\):
        \(B.mbs\gets B.mbs\cup hC^i\)    // \(hC^i\): the latest AC reported for chain \(i\)
    return \(B\)

procedure CreateQC\((v, \boldsymbol{V_s})\):
    \(qc.view\gets v\)
    \(qc.block\gets m.block : m\in \boldsymbol{V_s}\)
    \(qc.sigs\gets TComb(qc.block,\{m.sig \mid m\in\boldsymbol{V_s}\})\)
    return \(qc\)

procedure CreateAggQc\((\boldsymbol{N_s})\):
    \(aggQc.qc_{set}\gets\) extract Quorum Certificates from \(\boldsymbol{N_s}\)
    return \(aggQc\)

procedure SafeProposal\((P, v)\):
    \(B\gets P.block\)
    for \(C^j\in B.mbs\):
        if \(C^j\) is not valid:
            return False
    if \(B.qc\):
        return \((B.view \geq v)\wedge (B.view=B.qc.view+1)\)
    if \(B.aggQc\):
        \(hqc\gets\) highest QC in \(B.aggQc\)
        return \(B\) extends from \(hqc\)

6.2.1 Utilities

Before detailing Imitater-FHS, we introduce some useful utilities, as shown in Algorithm [alg:utilities].

View. The protocol operates in rounds, each referred to as a view. Each view has a designated leader and is assigned a view number. A deterministic algorithm is used to select the leader, following a round-robin scheme, so that every node knows which node is the leader for the current view. The process of switching the leader is called view change. Note that even in the normal case, where the leader performs honestly, view change occurs due to the leader-rotation scheme.

Quorum Certificate (QC) and Aggregated QC. When the leader starts a proposal, it collects a set \(V_s\) of \(n-f\) Vote messages carrying partial signatures from the previous view and runs the CreateQC procedure, using \(TComb\) to aggregate the partial signatures into a Quorum Certificate (QC). A QC proves that at least \(f+1\) honest nodes received the block in the normal case.

If the leader of the previous view fails, causing the nodes to not vote properly and triggering a view change, all nodes send New-View messages with their known highest QCs to the new leader. Upon receiving a set \(N_s\) of \(2f+1\) New-View messages, the new leader runs the CreateAggQc procedure to generate an Aggregated QC (AggQC) by concatenating the \(2f+1\) QCs. An AggQC proves that at least \(f+1\) honest nodes contributed their highest known QC to it.

Block Structure. To propose a block, the leader runs the CreateBlock procedure. The block includes the view number \(v\) of the current view, a QC of the previous block (or an AggQC), the latest-position ACs reported by all nodes, and the hash of the parent block. By default, the first block is empty.

SafeProposal. Upon receiving a proposal message, node \(i\) executes the SafeProposal procedure to check the validity of proposal \(P\) in the current view. This involves verifying the legitimacy of all included ACs and checking that the block is derived from (1) the highest QC in the normal case, or (2) the highest QC in the AggQC when the previous leader fails.
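As a Go rendition of this check (a sketch building on the earlier illustrative Node: the Proposal and Block field names and the HighestQC and Extends helpers are assumptions, not the prototype’s code):

```go
// SafeProposal validates proposal p in view v, per Algorithm [alg:utilities].
func (nd *Node) SafeProposal(p Proposal, v int) bool {
	B := p.Block
	for _, ac := range B.MBs { // verify every attached availability certificate
		if !nd.TS.TVerf(ac.Root, ac.Sig) {
			return false
		}
	}
	if B.QC != nil { // normal case: the block must extend the previous QC
		return B.View >= v && B.View == B.QC.View+1
	}
	if B.AggQC != nil { // view-change case: extend the highest QC in AggQC
		return nd.Extends(B, HighestQC(B.AggQC))
	}
	return false
}
```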

6.2.2 Imitater-FHS Protocol

By combining the utilities in Algorithm [alg:utilities], we obtain Imitater-FHS, as shown in Figure 7. Note that we omit the Dispersal phase of SMP in Figure 7 because (1) it can be executed in parallel with the protocol, and (2) triggering it depends on the bandwidth adaptability requirements and involves many details. We defer the discussion of triggering the Dispersal phase to Section [sec:trigger-dispersal].

Figure 7: Imitater-FHS Protocol (for node i)

In each round of consensus, the leader constructs a QC \(qc\) from collected Vote messages, or an AggQC \(aggQc\) from New-View messages, to prove that its proposal is safe. The leader then constructs a block on top of the aforementioned \(qc\) or \(aggQc\) by calling the CreateBlock procedure in Algorithm [alg:utilities] and broadcasts the block.

As a non-leader node, upon receiving a proposal, node \(i\) executes SafeProposal to verify its validity. If verification passes, the node signs the block and sends a Vote message to the leader of view \(v+1\), attaching the latest AC for the \(i\)th chain. Node \(i\) then enters view \(v+1\).

If the parent block \(B' = B.parent\) and its grandparent block \(B'' = B'.parent\) have consecutive view numbers, the grandparent block \(B''\) can be safely committed. Node \(i\) then initiates the Mb-Retrieval event for all microblocks within \(B''\) and waits for these microblocks to become available. As discussed in Section 3.2.2, the retrieval of a microblock recursively triggers the retrieval of any predecessor microblocks that have not been triggered before, ensuring that the chain of blocks is fully available for processing, in adherence to the protocol’s consistency requirements.

Once all the microblocks in block \(B\) have been decoded and are available, the confirmation process for the block can begin. Specifically, all requests contained within these microblocks are extracted and sorted by their microblock position and timestamp to form a transaction list. Following this, the node executes all transactions in the transaction list in sequence and returns response values to the clients.
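A sketch of this deterministic list construction in Go follows; the exact sort key, and in particular the tie-breaking order, is an assumption of the sketch—what matters for consistency (Lemma 7) is only that every node applies the same deterministic key.

```go
// Deterministic construction of the transaction list during confirmation.
package confirm

import "sort"

type Request struct {
	Chain, Pos int    // originating microblock chain and position
	Ts         int64  // timestamp carried by the request
	Payload    []byte // the client request itself
}

// BuildTxList sorts the extracted requests into one canonical order.
func BuildTxList(reqs []Request) []Request {
	sort.Slice(reqs, func(a, b int) bool {
		if reqs[a].Pos != reqs[b].Pos {
			return reqs[a].Pos < reqs[b].Pos // microblock position first
		}
		if reqs[a].Ts != reqs[b].Ts {
			return reqs[a].Ts < reqs[b].Ts // then timestamp
		}
		return reqs[a].Chain < reqs[b].Chain // deterministic tie-break
	})
	return reqs
}
```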

Each node sets a timer when entering a new view. When the local timer expires, node \(i\) sends its highest local QC, along with the latest AC for the \(i\)th chain, in a New-View message to the leader of view \(v+1\) and enters view \(v+1\).

6.3 Analysis

6.3.1 Safety and Liveness

Imitater-FHS ensures both safety and liveness, justified in Theorem 3 and Theorem 4, respectively.

Theorem 3. Safety: No equivocated requests will be committed by honest nodes at the same position.

Here, we provide a brief proof and the complete proof is provided in Appendix 7.

Requests are organized into microblocks and distributed among nodes. During consensus, the identifiers of microblocks are included in the proposal, and the confirmation of requests is achieved by confirming the proposal block. Thus, we need to prove: (i) All honest nodes commit a consistent proposal block in each round (Lemma 6), and (ii) they can obtain a consistent transaction list from a consistent proposal (Lemma 7).

For point (i), the proof aligns with Fast-HotStuff, which ensures that all honest nodes commit a consistent block. This consistency in microblock identifiers is inherited from the guarantees provided by Fast-HotStuff. For point (ii), a microblock may be explicitly or implicitly included in a block. Since the block includes the AC for each microblock, all honest nodes can access consistent content for explicitly included microblocks (Lemma [lemma:Microblock32consistency]). Furthermore, all microblocks that are implicitly included in the proposal can be retrieved as well (Theorem 2). A consistent sorting algorithm ensures that all honest nodes confirm requests in the same order, leading to consistent execution.

Theorem 4. Liveness: After GST, the confirmation for a pending request will always be reached.

Proof. Under a leader-rotation rule, for Byzantine nodes to prevent liveness, they would need to interrupt the formation of a one-chain by ensuring a Byzantine leader follows each honest leader. In a system with \(n = 3f + 1\) nodes, of which \(f\) are Byzantine, each Byzantine node could in the worst case be strategically placed after an honest node; even then, \(f + 1\) additional honest nodes remain, enough to yield two consecutive honest leaders. Therefore, after GST, there will eventually be two consecutive honest leaders forming a direct chain, and a decision will be reached.

Given that a chain-based shared mempool (SMP) is employed and a Round-robin election method is used, microblocks from all honest nodes will eventually be included in some proposal. Consequently, every request will ultimately be executed. ◻

7 Proof of Theorem 3

We define a sequence of blocks with consecutive views as a direct chain. Specifically, if there are two blocks \(B\) and \(B'\) such that \(B'.view = B.view + 1\) and \(B'.parent = B\), then \(B\) and \(B'\) are said to form a one-direct chain. Similarly, if there exists a block \(B''\) such that \(B''.view = B'.view + 1\), \(B'.view = B.view + 1\), \(B''.parent = B'\), and \(B'.parent = B\), then \(B''\), \(B'\), and \(B\) together constitute a two-direct chain.

We define two blocks \(B\) and \(B'\) as conflicting if \(B\) is neither a predecessor nor a successor of \(B'\). This notion of conflict naturally extends to Quorum Certificates; specifically, \(qc1\) and \(qc2\) are considered conflicting if \(qc1.block\) and \(qc2.block\) are conflicting.

Lemma 3. If any two valid Quorum Certificates, \(qc1\) and \(qc2\), are conflicting, then it must be the case that \(qc1.view \neq qc2.view\).

Proof. We will prove this by contradiction. Assume that in the same view \(v\), two conflicting blocks \(B\) and \(B^*\) both receive sufficient votes and respectively form \(qc\) and \(qc^*\). Since a QC requires \(2f+1\) vote messages, it implies that at least one honest node voted twice in view \(v\), which is a contradiction because honest nodes vote at most once in each view. Therefore, for any two conflicting QCs, \(qc\) and \(qc^*\), it must be true that \(qc.view \neq qc^*.view\). ◻

Lemma 4. If \(B\) and \(B^*\) are conflicting, then only one of them can be committed by an honest replica.

Proof. From Lemma 3, we know that \(B.view \neq B^*.view\). Assume block \(B\) has been committed by an honest node \(r\); then at least \(f+1\) honest nodes are aware of its corresponding QC, \(qc\). Suppose node \(r\) receives a proposal for a conflicting block \(B^*\) whose highQC points to an ancestor \(B^\circ\) of \(B\), i.e., \(B^*.qc.block = B^\circ\). Taking \(B^\circ\) to be the parent block of \(B\) (i.e., \(B.parent = B^\circ\)), the AggQC of \(B^*\) will be deemed invalid: any QC with a view number greater than or equal to that of \(qc\) should appear in the AggQC, yet the included highQC is \(qc^\circ\), the QC of \(B\)’s parent, not \(qc\). Thus, this proposal fails the SafeProposal check. ◻

Next, we demonstrate that if a node commits a block \(B\) at view \(v\), then all other honest nodes will also commit the same block at view \(v\).

Lemma 5. If an honest node \(r\) commits a block \(B\), then \(B\)’s QC, \(qc\), will be used as the highQC for the next block \(B'\).

Proof. When the primary node starts a new view \(v'\), where \(v' > v\), with \(2f+1\) new-view messages and proposes block \(B'\), we need to demonstrate that among any set of \(2f+1\) new-view messages, at least one will contain the QC of block \(B\), \(qc\), or a successor QC of \(qc\).

In view \(v\), node \(r\) committed block \(B\), meaning that at least \(2f+1\) nodes voted to form \(B\)’s QC, so at least \(f+1\) honest nodes are aware of \(qc\)’s existence. Therefore, in any later view, when the primary node collects \(2f+1\) New-View messages, at least one of these messages, originating from one of these \(f+1\) honest nodes, will include \(qc\) or a QC inherited from \(qc\). Hence, block \(B'\) will have \(B\) as an ancestor. ◻

Next, we will prove that if an honest node commits block \(B\), then all other honest nodes will not commit any conflicting block \(B^*\).

Figure 8: B and B^* both getting committed (impossible).

Lemma 6. It is impossible for any two conflicting blocks, \(B\) and \(B^*\), to be committed each by an honest node.

Proof. We will prove this by contradiction. Suppose there is a fork in the blockchain, as shown in Figure 8. Without loss of generality, let us assume that an honest node \(i\) commits block \(B\) on the upper branch chain, and that another honest node \(i'\) commits a conflicting block \(B^*\).

Let us analyze the possible scenarios involving \(B^*\)’s quorum certificate \(qc^*\). Since \(qc^*\) conflicts with both \(qc\) and \(qc'\), Lemma 3 implies \(qc^*.view \neq qc.view\) and \(qc^*.view \neq qc'.view\).

Case 1: \(qc^*.view < qc.view\). In this case, \(qc^*\) will not be chosen by any honest node as a highQC, because there exists a QC, \(qc\), with a view number higher than \(qc^*.view\).

Case 2: \(qc.view < qc^*.view < qc'.view\). Since \(B\) was committed through a two-direct chain, it follows that \(qc.view + 1 = qc'.view\). Therefore, there cannot exist a \(qc^*\) whose view number lies between \(qc\) and \(qc'\).

Case 3: \(qc'.view < qc^*.view\). The formation of \(qc'\) arises from \(2f+1\) nodes voting for \(B'\) and \(qc\), indicating that at least \(f+1\) honest nodes have seen \(qc\). In any view higher than \(qc'.view\), the primary node collects \(2f+1\) new-view messages, at least one of which will include \(qc\) or a QC inherited from \(qc\) (Lemma 5). Since \(qc^*\) does not inherit from \(qc\), the quorum certificate \(qc^*\) for the conflicting block \(B^*\) cannot form.

In conclusion, \(B^*\) cannot be committed by any honest node, rendering the assumption invalid. Therefore, it is impossible for two conflicting blocks \(B\) and \(B^*\) to be committed by different honest nodes respectively. ◻

Lemmas 4, 5, and 6 ensure that all honest nodes commit a consistent block. However, since our protocol organizes transactions in microblocks and the block contains only microblock identifiers, it is also necessary to confirm that the set of requests referenced by each block is identical. This requires verifying that all nodes decode the same transactions from these microblock identifiers to achieve a fully consistent state across the network.

Lemma 7. If two honest nodes have the same block \(B\), then it is guaranteed that they can construct the same transaction list.

Proof. Since all microblocks referenced in the block are accompanied by an AC, Theorem 1 ensures that each microblock associated with an AC can be consistently retrieved. Moreover, Theorem 2 guarantees that the predecessors of these microblocks are also accessible and consistent. Therefore, all honest nodes can obtain a consistent set of microblocks. Utilizing a deterministic local sorting algorithm, each node can achieve a consistent ordering of these requests. ◻

Combining the above lemmas, the proof of Theorem 3 proceeds as follows.

Proof. Lemma 6 ensures that all honest nodes will commit the same block at the same blockchain height, while Lemma 7 guarantees that all honest nodes can derive a consistent transaction list based on a consistent block. Therefore, all honest nodes will execute requests in a consistent order. ◻

References

[1]
Aptos Team: The Aptos blockchain: Safe, scalable, and upgradeable web3 infrastructure. https://aptosfoundation.org/whitepaper (2022).
[2]
Diem: The Libra blockchain. https://developers.diem.com/docs/technical-papers/the-diem-blockchain-paper (2020).
[3]
Buterin, V.: Ethereum: A next-generation smart contract and decentralized application platform. https://ethereum.org/en/whitepaper (2014).
[4]
Androulaki, E., Barger, A., Bortnikov, V., Cachin, C., Christidis, K., De Caro, A., Enyeart, D., Ferris, C., Laventman, G., Manevich, Y., et al.: Hyperledger fabric: a distributed operating system for permissioned blockchains. In: Proceedings of the thirteenth EuroSys conference. pp. 1–15 (2018).
[5]
Castro, M., Liskov, B., et al.: Practical byzantine fault tolerance. In: OSDI, vol. 99, pp. 173–186 (1999).
[6]
Kotla, R., Alvisi, L., Dahlin, M., Clement, A., Wong, E.: Zyzzyva: speculative byzantine fault tolerance. In: Proceedings of twenty-first ACM SIGOPS symposium on Operating systems principles. pp. 45–58 (2007).
[7]
Jalalzai, M.M., Niu, J., Feng, C., Gai, F.: Fast-hotstuff: A fast and robust bft protocol for blockchains. IEEE Transactions on Dependable and Secure Computing (2023).
[8]
Yin, M., Malkhi, D., Reiter, M.K., Gueta, G.G., Abraham, I.: Hotstuff: Bft consensus with linearity and responsiveness. In: Proceedings of the 2019 ACM Symposium on Principles of Distributed Computing. pp. 347–356 (2019).
[9]
Kokoris-Kogias, E., Jovanovic, P., Gasser, L., Gailly, N., Syta, E., Ford, B.: Omniledger: A secure, scale-out, decentralized ledger via sharding. In: 2018 IEEE symposium on security and privacy (SP). pp. 583–598. IEEE (2018).
[10]
Wang, J., Wang, H.: Monoxide: Scale out blockchains with asynchronous consensus zones. In: 16th USENIX symposium on networked systems design and implementation (NSDI 19). pp. 95–112 (2019).
[11]
Zamani, M., Movahedi, M., Raykova, M.: Rapidchain: Scaling blockchain via full sharding. In: Proceedings of the 2018 ACM SIGSAC conference on computer and communications security. pp. 931–948 (2018).
[12]
Hu, K., Guo, K., Tang, Q., Zhang, Z., Cheng, H., Zhao, Z.: Leopard: Towards high throughput-preserving bft for large-scale systems. In: 2022 IEEE 42nd International Conference on Distributed Computing Systems (ICDCS). pp. 157–167. IEEE (2022).
[13]
Danezis, G., Kokoris-Kogias, L., Sonnino, A., Spiegelman, A.: Narwhal and tusk: a dag-based mempool and efficient bft consensus. In: Proceedings of the Seventeenth European Conference on Computer Systems. pp. 34–50 (2022).
[14]
Gai, F., Niu, J., Beschastnikh, I., Feng, C., Wang, S.: Scaling blockchain consensus via a robust shared mempool. In: 2023 IEEE 39th International Conference on Data Engineering (ICDE). pp. 530–543. IEEE (2023).
[15]
Hu, Z., Guan, S., Xu, W., Xiao, Z., Shi, J., Li, P., Ding, Q., Ding, H., Zeng, C.: A data flow framework with high throughput and low latency for permissioned blockchains. In: 2023 IEEE 43rd International Conference on Distributed Computing Systems (ICDCS). pp. 1–12. IEEE (2023).
[16]
Biely, M., Milosevic, Z., Santos, N., Schiper, A.: S-paxos: Offloading the leader for high throughput state machine replication. In: 2012 IEEE 31st Symposium on Reliable Distributed Systems. pp. 111–120. IEEE (2012).
[17]
Zhao, H., Zhang, Q., Yang, Z., Wu, M., Dai, Y.: Sdpaxos: Building efficient semi-decentralized geo-replicated state machines. In: Proceedings of the ACM Symposium on Cloud Computing. pp. 68–81 (2018).
[18]
Bagaria, V., Kannan, S., Tse, D., Fanti, G., Viswanath, P.: Prism: Deconstructing the blockchain to approach physical limits. In: Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security. pp. 585–602 (2019).
[19]
Cachin, C., Guerraoui, R., Rodrigues, L.: Introduction to reliable and secure distributed programming. Springer Science & Business Media (2011).
[20]
Dolev, D., Strong, H.R.: Authenticated algorithms for byzantine agreement. SIAM Journal on Computing 12(4), 656–666 (1983).
[21]
Dwork, C., Lynch, N., Stockmeyer, L.: Consensus in the presence of partial synchrony. Journal of the ACM (JACM) 35(2), 288–323 (1988).
[22]
Boneh, D., Lynn, B., Shacham, H.: Short signatures from the weil pairing. In: International conference on the theory and application of cryptology and information security. pp. 514–532. Springer (2001).
[23]
Shoup, V.: Practical threshold signatures. In: Advances in Cryptology—EUROCRYPT 2000: International Conference on the Theory and Application of Cryptographic Techniques Bruges, Belgium, May 14–18, 2000 Proceedings 19. pp. 207–220. Springer (2000).
[24]
Reed, I.S., Solomon, G.: Polynomial codes over certain finite fields. Journal of the society for industrial and applied mathematics 8(2), 300–304 (1960).
[25]
Kocher, P.C.: On certificate revocation and validation. In: International Conference on Financial Cryptography. pp. 172–177. Springer, Anguilla, British West Indies (1998).
[26]
Miller, A., Xia, Y., Croman, K., Shi, E., Song, D.: The honey badger of bft protocols. In: Proceedings of the 2016 ACM SIGSAC conference on computer and communications security. pp. 31–42 (2016).
[27]
Kaklamanis, I., Yang, L., Alizadeh, M.: Poster: Coded broadcast for scalable leader-based bft consensus. In: Proceedings of the 2022 ACM SIGSAC Conference on Computer and Communications Security. pp. 3375–3377 (2022).
[28]
Yang, L., Park, S.J., Alizadeh, M., Kannan, S., Tse, D.: DispersedLedger: High-Throughput Byzantine Consensus on Variable Bandwidth Networks. In: 19th USENIX Symposium on Networked Systems Design and Implementation (NSDI 22). pp. 493–512 (2022).

  1. Corresponding author

  2. https://github.com/dfinity-side-projects/bn

  3. https://github.com/templexxx/reedsolomon

  4. https://github.com/gitferry/bamboo-stratus