Faithful and Privacy-Preserving Implementation of Average Consensus\(^\ast\)


Abstract

We propose a protocol based on mechanism design theory and encrypted control to solve average consensus problems among rational and strategic agents while preserving their privacy. The proposed protocol provides a mechanism that incentivizes the agents to faithfully implement the intended behavior specified in the protocol. Furthermore, the protocol runs over encrypted data using homomorphic encryption and secret sharing to protect the privacy of agents. We also analyze the security of the proposed protocol using a simulation paradigm in secure multi-party computation. The proposed protocol demonstrates that mechanism design and encrypted control can complement each other to achieve security under rational adversaries.

1 Introduction

Average consensus is a fundamental problem in multi-agent systems to reach an agreement on the average of agents’ states. It arises in numerous applications, such as rendezvous of mobile robots, data fusion in sensor networks, and distributed optimization [1][3]. This problem is usually solved by exchanging information among agents and updating their states locally based on the information when they are cooperative. However, a rational and strategic agent may be incentivized to manipulate the average consensus algorithm (e.g., by misreporting information) to drive an outcome to its own benefit. Furthermore, adversarial agents may learn the secrets of honest agents through information exchanges, thereby compromising their privacy.

We address these challenges by designing a protocol that combines mechanism design and encrypted control. Mechanism design theory deals with the design of rules to achieve preferable social outcomes in the presence of strategic agents [4]. In classical mechanism design, a social planner asks agents to report their private information and announces a social decision and tax computed using the collected information. In contrast, distributed mechanism design considers determining the outcome in a distributed manner [5]. Previous studies [6], [7] have shown that some distributed optimization and control algorithms can be faithfully implemented using distributed mechanisms.

Encrypted control is a framework that applies cryptographic primitives to decision-making in dynamical systems [8]. Within this framework, previous studies considered consensus control [9], [10], formation control [11], and cooperative control [12], [13] using homomorphic encryption and secret sharing. These cryptographic primitives enable the computation of sensitive information in an encrypted form. Thus, encrypted control is effective in mitigating privacy compromises in multi-agent systems.

Although mechanism design and encrypted control have so far been developed independently, integrating the two methodologies offers a promising approach to simultaneously achieving faithfulness and privacy in cooperative decision-making. Traditional mechanism design based on the revelation principle [4] attains faithful implementation by requiring agents to disclose their private information to a social planner. This process is clearly undesirable from a privacy perspective and can be improved by running the computation of mechanisms over encrypted data. On the other hand, existing encrypted control methods assume semi-honest agents, who may attempt to learn private information from received messages but do not deviate from a protocol; they thus fail to address strategic manipulation by agents. A mechanism can dissuade strategic agents from manipulating a protocol, which is expected to enhance the achievable security of encrypted control.

The main contribution of this study lies in clarifying the synergy between mechanism design and encrypted control. We demonstrate how incentivization by mechanism design and secure computation using cryptographic primitives can achieve average consensus in the presence of rational agents rather than semi-honest agents. Specifically, our contributions are as follows. 1) We propose a privacy-preserving protocol that provides a mechanism implementing average consensus by adapting the algorithm in [6] to multi-party computation. In contrast to the previous algorithm, the mechanism computation in the proposed protocol is performed by the agents in a distributed manner instead of by a single leader. 2) Building on previous studies [6], [7], we show that the agents do not deviate from the intended behavior even though their private information reports are encrypted. 3) We also demonstrate that the proposed protocol fulfills a standard privacy requirement for secure multi-party computation through simulation-based proofs [14], [15].

The remainder of this paper is organized as follows. Section 2 describes a problem setting. Section 3 introduces definitions of mechanism design and secure multi-party computation. Section 4 presents a solution to the problem and demonstrates its security under semi-honest adversaries. Section 5 proposes a protocol providing a mechanism that implements the solution under rational adversaries. Section 6 illustrates the effectiveness of the proposed protocol through numerical simulations. Section 7 describes the conclusions of this study.

2 Problem Setting

In this study, we consider the multi-agent system that consists of \(N\) agents shown in Fig. 1. Each agent is directly connected to the supervisor, whose role will be explained later. The network topology of the agents is described by a strongly connected and balanced digraph \(G = (V, E)\) with a vertex set \(V = \{1, \dots, N\}\) and edge set \(E \subset V \times V\). A weighted adjacency matrix of \(G\) is \(A = [a_{ij}] \in \mathbb{R}^{N \times N}\), where \(a_{ij} > 0\) if \((i, j) \in E\) and \(a_{ij} = 0\) otherwise. Define the input and output neighbors of agent \(i\) as \(N^{\mathrm{in}}_i \mathrel{\vcenter{:}}= \{ j \in V \mid (i, j) \in E \}\) and \(N^{\mathrm{out}}_i \mathrel{\vcenter{:}}= \{ j \in V \mid (j, i) \in E \}\), respectively. Here, \((i, j) \in E\) means that agent \(j\) can send a message to agent \(i\). The dynamics of agent \(i\) is given by \[\mathrm{x}_i(k + 1) = \mathrm{x}_i(k) + \mathrm{u}_i(k), \label{eq:agent}\tag{1}\] where \(k \in \mathbb{N}_0 \mathrel{\vcenter{:}}= \{ 0, 1, 2, \dots \}\) is the time index, \(\mathrm{x}_i(k) \in \mathbb{R}\) is the state, \(\mathrm{u}_i(k) \in \mathbb{R}\) is the input, and the initial state is \(\mathrm{x}_i(0) = \mathrm{x}_{i, 0}\). A common average-consensus control for (1) is \[\mathrm{u}_i(k) = \mathrm{w}_{ii} \mathrm{x}_i(k) + \sum_{j \in N^{\mathrm{in}}_i} \mathrm{w}_{ij} \mathrm{x}_j(k), \label{eq:control}\tag{2}\] where \(\mathrm{w}_{ij} = \epsilon a_{ij}\) for \(i \ne j\), \(\mathrm{w}_{ii} = -\sum_{j \in N^{\mathrm{in}}_i} \mathrm{w}_{ij}\), and \(\epsilon \in (0, 1 / \max_i \sum_{j \ne i} a_{ij})\). Applying (2) to (1), the system achieves average consensus, i.e., \(\lim_{k \to \infty} \mathrm{x}_i(k) = \frac{1}{N} \sum_{j=1}^N \mathrm{x}_{j,0}\) for all \(i \in V\), if \(G\) is strongly connected and balanced [2].
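As a quick sanity check of this convergence claim, the following minimal Python sketch simulates (1) and (2) on an illustrative balanced graph; the undirected ring, step size, and initial states here are our own assumptions, not the setup of Section 6.

```python
# Minimal sketch: simulate (1)-(2) on an undirected ring, which is
# strongly connected and balanced, and check convergence to the average.
import numpy as np

N = 5
A = np.zeros((N, N))
for i in range(N):
    A[i, (i - 1) % N] = 1.0
    A[i, (i + 1) % N] = 1.0
eps = 0.1                                # eps < 1 / max_i sum_{j != i} a_ij = 1/2
W = eps * A                              # w_ij = eps * a_ij for i != j
W -= np.diag(W.sum(axis=1))              # w_ii = -sum_{j in N_i^in} w_ij

x = np.array([3.0, 2.0, 1.0, 0.0, -1.0])
for k in range(200):
    x = x + W @ x                        # x_i(k+1) = x_i(k) + u_i(k)
assert np.allclose(x, x.mean()) and abs(x.mean() - 1.0) < 1e-9
```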

Figure 1: Multi-agent system with a supervisor (\(N = 5\)).

The goal of this study is to design a secure multi-party computation protocol that achieves average consensus under the following computation and security models.

2.0.0.1 Computation model

We employ a preprocessing model [16] to simplify a protocol and to improve its efficiency. A protocol in this model consists of offline and online phases. In the offline phase, agents receive auxiliary inputs from a third party independently of their inputs. In the online phase, the agents compute outputs using the auxiliary inputs without the third party. The supervisor in Fig. 1 plays the role of both the third party in this model and a social planner in mechanism design. More precisely, the supervisor distributes random numbers to the agents in the offline phase and verifies tax payments once the protocol is terminated.

2.0.0.2 Security model

We propose a secure protocol under rational adversaries. Unlike a semi-honest adversary, a rational adversary not only attempts to compromise the privacy of honest agents but also acts strategically to minimize its own cost. Such an adversary may rationally deviate from the designated protocol if the protocol is not incentive compatible (defined below). The proposed protocol aims to prevent rational adversaries from learning information beyond their inputs and outputs while attaining incentive compatibility.

Remark 1. The supervisor is a third party independent from the multi-agent system. It provides the moderation and coordination service for the agents to faithfully and privately realize an average consensus task. In this scenario, the tax payment can be thought of as the fee that each agent is asked to pay for the service, which is ensured by contract.

3 Mechanism Design and Multi-party Computation

3.1 Mechanism design

The notation used in this section is consistent with [7]. Suppose each agent \(i \in \{1, \dots, N\}\) has private information (called type) \(\theta_i \in \Theta_i\), which defines its cost \(u_i(d(\theta), t(\theta); \theta_i) = v_i(d(\theta); \theta_i) + t_i(\theta)\) for a decision rule \(d: \Theta \to X\) and transfer rule \(t: \Theta \to \mathbb{R}^N\), where \(t(\theta) = (t_1(\theta), \dots, t_N(\theta))\), \(\theta = (\theta_1, \dots, \theta_N) \in \Theta = \Theta_1 \times \cdots \times \Theta_N\), \(v_i: X \to \mathbb{R}\), and \(X\) is the set of feasible outcomes. The pair \(f = (d, t): \Theta \to X \times \mathbb{R}^N\) is called a social choice function.

Consider that each agent \(i\) reports a message \(s_i(\theta_i) \in \Sigma_i\) to a social planner using a message function \(s_i: \Theta_i \to \Sigma_i\) based on its type while attempting to minimize its own cost. The social planner obtains the value of the social choice function from the messages using an outcome function \(g: \Sigma \to X \times \mathbb{R}^N\) such that \(g \circ s(\theta) = f(\theta)\), where \(\Sigma = \Sigma_1 \times \cdots \times \Sigma_N\) and \(s(\theta) = (s_1(\theta_1), \dots, s_N(\theta_N))\). Under this setting, the goal of the social planner is to design a mechanism consisting of \(g\), \(\Sigma\), and \(s\) that implements \(f\).

Definition 1 (Incentive compatibility [7]). A mechanism \(M = (g, \Sigma, s)\) is incentive compatible or implements a social choice function \(f\) in ex-post Nash equilibria if \(g \circ s = f\) and \(u_i(g(s_i(\theta_i), s_{-i}(\theta_{-i})); \theta_i) \le u_i(g(\sigma_i, s_{-i}(\theta_{-i})); \theta_i)\) hold for every \(i \in \{1, \dots, N\}\), for all \(\sigma_i \in \Sigma_i\), and for all \(\theta \in \Theta\), where \(\theta_{-i} \mathrel{\vcenter{:}}= (\theta_1, \dots, \theta_{i-1}, \theta_{i+1}, \dots, \theta_N)\), and \(s_{-i}\) is defined in the same manner as \(\theta_{-i}\).

If a social choice function is implemented in ex-post Nash equilibria, no agent benefits by adopting a strategy \(\sigma_i \ne s_i(\theta_i)\). Thus, no agent is incentivized to deviate from the equilibrium provided all other agents play their equilibrium strategies. Meanwhile, in algorithmic mechanism design, the outcome \((d(\theta), t(\theta))\) is often given by the solution of an optimization algorithm. In that case, the exact optimal outcome cannot be obtained within a finite number of iterations \(n\) and must be approximated by a near-optimal one. A mechanism then cannot be incentive compatible in general [17]. To avoid this difficulty, the following notion is introduced.

Definition 2 (Asymptotic incentive compatibility [7]). For every \(n \in \mathbb{N}\), let \(M^n = (g^n, \Sigma^n, s^n)\) be a mechanism. A sequence of mechanisms \(\{M^n\}_{n \in \mathbb{N}}\) is asymptotically incentive compatible or asymptotically implements a social choice function \(f\) in ex-post Nash equilibria if, for every \(\varepsilon > 0\), there exists \(n_0 \in \mathbb{N}\) such that \(\lim_{n \to \infty} g^n \circ s^n = f\) and \(u_i(g^n(s^n_i(\theta_i), s^n_{-i}(\theta_{-i})); \theta_i) \le u_i(g^n(\sigma^n_i, s^n_{-i}(\theta_{-i})); \theta_i) + \varepsilon\) hold for all \(n \ge n_0\), for every \(i \in \{1, \dots, N\}\), for all \(\sigma_i^n \in \Sigma_i^n\), and for all \(\theta \in \Theta\).

This definition implies that a mechanism given by an iterative algorithm, if asymptotically incentive compatible, becomes incentive compatible as the number of iterations \(n\) goes to infinity. Moreover, even when \(n\) is finite, the cost reduction any agent can obtain by manipulating its messages is bounded by an arbitrarily small \(\varepsilon\), provided \(n\) is sufficiently large.

3.2 Secure multi-party computation

Let \(F_i : \{0, 1\}^\ast \times \cdots \times \{0, 1\}^\ast \to \{0, 1\}^\ast\) be a deterministic function such that \(\mathsf{y}_i = F_i(\mathsf{x}_1, \dots, \mathsf{x}_N)\), where \(\mathsf{x}_i\) and \(\mathsf{y}_i\) are private inputs and outputs of agent \(i\), and \(\{0, 1\}^\ast\) is the set of binary sequences of any length. Secure multi-party computation aims to compute a functionality \(F = (F_1, \dots, F_N)\) on \((\mathsf{x}_1, \dots, \mathsf{x}_N)\) without revealing any information other than \(\mathsf{x}_i\) and \(\mathsf{y}_i\) for each agent \(i\). In an ideal world, the agents can achieve the objective by sending their inputs \(\mathsf{x}_i\) to a trusted third party that computes and returns \(\mathsf{y}_i\) to each agent \(i\). By contrast, in the real world, they jointly compute \(F\) with communication because there is no trusted third party. From this perspective, if all information that adversaries can obtain in the real world is also obtained in the ideal world, a protocol in the real world is considered at least as secure as one in the ideal world.

The simulation paradigm is a standard approach to formally defining such security in multi-party computation. In this approach, a protocol is considered secure under semi-honest adversaries if the views of the adversaries are computationally indistinguishable from information computable from their inputs and outputs alone. Here, the view of agent \(i\) during an execution of a protocol \(\Pi\) on a security parameter \(\lambda\) and inputs \((\mathsf{x}_1, \dots, \mathsf{x}_N)\), denoted \(\mathsf{view}^{\Pi}_i(\lambda, \mathsf{x}_1, \dots, \mathsf{x}_N)\), is a tuple of \(\mathsf{x}_i\), the internal random coins \(\mathsf{r}_i\) of the agent, and the messages \(\mathsf{m}_i\) it has received [14]. Note that the computational indistinguishability of two families of random variables implies that no polynomial-time algorithm can distinguish them [14].

Definition 3 (Secure multi-party computation [15]). Let \(F\) be a functionality that takes \((\mathsf{x}_1, \dots, \mathsf{x}_N)\) as input and outputs \((\mathsf{y}_1, \dots, \mathsf{y}_N)\). A protocol \(\Pi\) \(h\)-privately computes \(F\) in the presence of semi-honest adversaries if there exist probabilistic polynomial-time algorithms (called simulators) \(\mathsf{Sim}\) such that two families of random variables \(\{ \mathsf{Sim}(1^\lambda, C, \{ \mathsf{x}_i, \mathsf{y}_i \mid i \in C \}) \}_{\lambda \in \mathbb{N}, \mathsf{x}_1, \dots, \mathsf{x}_N \in \{0, 1\}^\ast}\) and \(\{ \{ \mathsf{view}^{\Pi}_i(\lambda, \mathsf{x}_1, \dots, \mathsf{x}_N) \mid i \in C \} \}_{\lambda \in \mathbb{N}, \mathsf{x}_1, \dots, \mathsf{x}_N \in \{0, 1\}^\ast}\) are computationally indistinguishable for every \(C \subset \{1, \dots, N\}\) satisfying \(|C| < h\), where \(\mathsf{x}_i\) are of equal length for all \(i\).

Intuitively, the definition implies that information obtained by a coalition of adversaries through the execution of a secure protocol can be simulated from their inputs and outputs. In other words, the adversaries can learn nothing except for information given by their inputs and outputs.

3.3 Homomorphic encryption and secret sharing

Let \(\mathcal{M}\) be a plaintext space, and \(\mathcal{C}\) be a ciphertext space. Additively homomorphic encryption, such as learning with errors (LWE) encryption, is an encryption scheme that allows addition over encrypted data. That is, there exists a binary operation \(\oplus: \mathcal{C}\times \mathcal{C}\to \mathcal{C}\) such that \(\mathsf{Dec}(\mathsf{sk}, \mathsf{Enc}(\mathsf{pk}, m_1) \oplus \mathsf{Enc}(\mathsf{pk}, m_2)) = m_1 + m_2 \in \mathcal{M}\) for all \(m_1, m_2 \in \mathcal{M}\), where \(\mathsf{Enc}\) is an encryption algorithm, \(\mathsf{Dec}\) is a decryption algorithm, \(\mathsf{pk}\) is a public key, and \(\mathsf{sk}\) is a secret key. With the homomorphic addition \(\oplus\), a binary operation \(\odot: (\mathsf{ct}, n) \mapsto \mathsf{ct}\oplus \dots \oplus \mathsf{ct}\) is defined for \(\mathsf{ct}= \mathsf{Enc}(\mathsf{pk}, m)\) and \(n \in \mathbb{N}\) such that \(\mathsf{Dec}(\mathsf{sk}, \mathsf{ct}\odot n) = mn \in \mathcal{M}\). Furthermore, we assume that additively homomorphic encryption satisfies semantic security [15]. This implies that the encryption of a plaintext gives no information on the plaintext to a polynomial-time adversary.
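As a concrete illustration of the \(\oplus\) and \(\odot\) interface, the following sketch implements textbook Paillier encryption, a classical additively homomorphic scheme. This is only an aid to intuition: the paper's protocols use LWE encryption, and the parameters below are toy values with no security whatsoever.

```python
# Toy textbook Paillier: additively homomorphic, insecure demo parameters.
import math, random

p1, p2 = 999_983, 1_000_003        # small primes (demo only; real keys are huge)
n = p1 * p2
n2 = n * n
lam = math.lcm(p1 - 1, p2 - 1)     # requires Python >= 3.9
g = n + 1

def enc(m):
    while True:
        r = random.randrange(2, n)
        if math.gcd(r, n) == 1:    # r must be a unit mod n
            break
    return pow(g, m, n2) * pow(r, n, n2) % n2

def dec(c):
    L = lambda x: (x - 1) // n
    mu = pow(L(pow(g, lam, n2)), -1, n)
    return L(pow(c, lam, n2)) * mu % n

add = lambda c1, c2: c1 * c2 % n2  # the homomorphic addition "oplus"
scl = lambda c, k: pow(c, k, n2)   # the scaling "odot" (k-fold oplus)

assert dec(add(enc(40), enc(2))) == 42
assert dec(scl(enc(7), 6)) == 42
```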

Additive secret sharing over \(\mathbb{Z}_q \mathrel{\vcenter{:}}= \{0, 1, \dots, q - 1\}\) is a cryptographic technique to store a secret distributedly. It splits a message \(m \in \mathbb{Z}_q\) into \(n\) shares by a share generation algorithm \((\mathrm{s}_1, \dots, \mathrm{s}_n) \gets \mathsf{Share}(m, n)\), where \(\mathrm{s}_1, \dots, \mathrm{s}_{n - 1}\) are sampled from \(\mathbb{Z}_q\) uniformly at random, and \(\mathrm{s}_n = m - \sum_{i = 1}^{n - 1} \mathrm{s}_i \bmod q\). The message can be recovered by a reconstruction algorithm as \(\mathsf{Reconst}(\mathrm{s}_1, \dots, \mathrm{s}_n) = \sum_{i = 1}^n \mathrm{s}_i \bmod q\). Correctness of additive secret sharing is obvious, namely \(\mathsf{Reconst}(\mathsf{Share}(m, n)) = m\) for all \(m \in \mathbb{Z}_q\) and \(n \ge 2\). Moreover, any \(n - 1\) shares are uniformly distributed over \(\mathbb{Z}_q\) and independent of \(m\) by construction.
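A minimal sketch of \(\mathsf{Share}\) and \(\mathsf{Reconst}\) follows; the modulus \(q\) is an illustrative choice.

```python
# Minimal sketch of additive secret sharing over Z_q.
import random

q = 2**61 - 1  # a large prime (illustrative)

def share(m, n):
    """Split m into n shares: n-1 uniform values plus a correction term."""
    s = [random.randrange(q) for _ in range(n - 1)]
    s.append((m - sum(s)) % q)
    return s

def reconst(shares):
    return sum(shares) % q

assert reconst(share(12345, 3)) == 12345
# Any n-1 shares are uniform over Z_q and independent of m, so sharing 0
# (as in the protocols of Sections 4 and 5) yields a perfect mask.
```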

4 Privacy-preserving Average Consensus

Using the cryptographic tools in Section 3.3, we present Protocol 2 that computes \[\mathrm{v}_i(k) \mathrel{\vcenter{:}}= \sum_{j \in N^{\mathrm{in}}_i} \mathrm{v}_{ij}(k), \quad \mathrm{v}_{ij}(k) \mathrel{\vcenter{:}}= \mathrm{w}_{ij} \mathrm{x}_j(k) \label{eq:aggregation}\tag{3}\] over encrypted data. Here, we focus on securely computing the second term in (2) because agent \(i\) can locally compute the first term. For the sake of simplicity, assume that \(\mathrm{w}_{ij}\) are rational numbers for all \(i, j \in V\), and the plaintext space is \(\mathcal{M}= \mathbb{Z}_q\) with a large prime \(q\). These assumptions are reasonable in practice because a real-valued weight can be approximated by a rational number with any desired precision so as to preserve the stability and performance of the original control law, and \(q\) can be chosen freely.

Figure 2: Privacy-preserving average consensus

In the offline phase, the supervisor generates and sends \(\{ \mathsf{ct}_{\mathrm{s}, ji}(k) \gets \mathsf{Enc}(\mathsf{pk}_j, \mathrm{s}_{ji}(k)) \}_{k = 0, \, j \in N^{\mathrm{out}}_i}^{n - 1}\) to agent \(i\), where \((\mathrm{s}_{ij}(k))_{j \in N^{\mathrm{in}}_i} \gets \mathsf{Share}(0, |N^{\mathrm{in}}_i|)\) for every \(k \in \mathbb{N}_0\). Simultaneously, agent \(i\) encrypts its weights as \(\mathsf{ct}_{\mathrm{w}, ij} \gets \mathsf{Enc}(\mathsf{pk}_i, \tilde{\mathrm{w}}_{ij})\) and broadcasts \(\{\mathsf{ct}_{\mathrm{w}, ij}\}_{j \in N^{\mathrm{in}}_i}\) to all agents, where \(\tilde{\mathrm{w}}_{ij} \mathrel{\vcenter{:}}= \bar{\mathrm{w}}_{ij} \bmod q\), \(\bar{\mathrm{w}}_{ij} \mathrel{\vcenter{:}}= \Delta_\mathrm{w}^{-1} \mathrm{w}_{ij}\), and \(\Delta_\mathrm{w}> 0\). Note that since \(\mathrm{w}_{ij}\) are rational numbers, there exists \(\Delta_\mathrm{w}\) such that \(\bar{\mathrm{w}}_{ij} \in \mathbb{Z}\) for all \(i \in V\) and for all \(j \in N^{\mathrm{in}}_i\). Additionally, the broadcast process can be performed by message passing via the supervisor.
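The purpose of distributing encrypted shares of zero can be seen in plaintext: masking each term \(\mathrm{v}_{ij}\) with one share of zero hides the individual term, yet the shares cancel when agent \(i\) sums over its input neighbors. A sketch with illustrative values:

```python
# Plaintext sketch (encryption stripped away) of masking with Share(0, n):
# each masked term is uniformly random, but the sum still equals v_i.
import random

q = 2**61 - 1
v = [40, 2, 100]                             # encoded terms w_ij * x_j mod q
s = [random.randrange(q) for _ in range(2)]  # first two shares of Share(0, 3)
s.append((-sum(s)) % q)                      # last share; all shares sum to 0
masked = [(vi + si) % q for vi, si in zip(v, s)]
assert sum(masked) % q == sum(v) % q         # reconstruction recovers v_i
```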

In the online phase, agent \(i\) encodes its state as \(\tilde{\mathrm{x}}_i(k) \mathrel{\vcenter{:}}= \bar{\mathrm{x}}_i(k) \bmod q\), where \(\bar{\mathrm{x}}_i(k) \mathrel{\vcenter{:}}= \lfloor \Delta_\mathrm{x}^{-1} \mathrm{x}_i(k) \rceil\), \(\Delta_\mathrm{x}> 0\), and \(\lfloor \cdot \rceil\) represents a rounding of a real number into the nearest integer. It then computes and sends \(\mathsf{ct}_{\mathrm{v}, ji}(k) = \mathsf{ct}_{\mathrm{w}, ji} \odot \tilde{\mathrm{x}}_i(k) \oplus \mathsf{ct}_{\mathrm{s}, ji}(k)\) to agent \(j \in N^{\mathrm{out}}_i\). Upon receiving \(\mathsf{ct}_{\mathrm{v}, ij}(k)\), agent \(i\) computes \(\mathrm{v}_i(k) = \Delta [ \mathsf{Reconst}( (\mathsf{Dec}(\mathsf{sk}_i, \mathsf{ct}_{\mathrm{v}, ij}(k)))_{j \in N^{\mathrm{in}}_i} ) ]_q\), where \(\Delta = \Delta_\mathrm{w}\Delta_\mathrm{x}\), and \([z]_q \mathrel{\vcenter{:}}= z - \lfloor \frac{z + q / 2}{q} \rfloor q\) is the minimal residue of \(z\) modulo \(q\). The agent then updates its state as \(\mathrm{x}_i(k + 1) = \mathrm{x}_i(k) + \mathrm{w}_{ii} \mathrm{x}_i(k) + \mathrm{v}_i(k)\). Note that although the resultant \(\mathrm{v}_i(k)\) includes a quantization error due to the rounding process, we ignore it in the following because it can be made arbitrarily small by choosing a sufficiently small \(\Delta_\mathrm{x}\). Consequently, we obtain the proposition below.
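A minimal sketch of this fixed-point coding, with illustrative \(q\) and \(\Delta_\mathrm{x}\), may help; note that in the protocol the aggregate is decoded with \(\Delta = \Delta_\mathrm{w}\Delta_\mathrm{x}\) because each term carries both scalings.

```python
# Minimal sketch of the online-phase fixed-point coding: encode into Z_q
# with scale Delta_x, decode via the minimal residue [z]_q in (-q/2, q/2].
q = 2**61 - 1
delta_x = 0.01

def encode(x):
    return round(x / delta_x) % q          # x~ = round(x / Delta_x) mod q

def min_residue(z):
    return z - ((z + q // 2) // q) * q     # [z]_q

def decode(z):
    return delta_x * min_residue(z)

x = -1.23
assert abs(decode(encode(x)) - x) <= delta_x / 2
```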

Proposition 1. The outputs of Protocol 2 achieve \(\lim_{n \to \infty} \mathrm{x}_i(n) = \frac{1}{N} \sum_{j=1}^N \mathrm{x}_{j,0}\) for all \(i \in V\) if, for every \(k = 0, \dots, n\), it holds that \(| \bar{\mathrm{v}}_{ij}(k) | < q / 2\) and \(| \bar{\mathrm{v}}_i(k) | < q / 2\) for all \(i \in V\) and for all \(j \in N^{\mathrm{in}}_i\), where \(\bar{\mathrm{v}}_i(k) = \sum_{j \in N^{\mathrm{in}}_i} \bar{\mathrm{v}}_{ij}(k)\) and \(\bar{\mathrm{v}}_{ij}(k) = \bar{\mathrm{w}}_{ij} \bar{\mathrm{x}}_j(k)\).

Proof. The claim holds because \(\mathrm{v}_i(k)\) in the protocol is equal to (3) by the homomorphism and correctness of additively homomorphic encryption and secret sharing. ◻

Proposition 1 shows that Protocol 2 achieves average consensus when \(| \bar{\mathrm{v}}_{ij}(k) | < q / 2\) and \(| \bar{\mathrm{v}}_i(k) | < q / 2\) hold; recall that \(q\) can be chosen freely so that these conditions are satisfied. However, the protocol guarantees nothing about whether the agents follow it faithfully. Indeed, the protocol output deviates from the average if an agent misreports its initial state or modifies its input. This problem will be solved later based on mechanism design theory.

The rest of this section demonstrates the security of Protocol 2 under semi-honest adversaries. In what follows, the supervisor is regarded as the \(0\)th party. The following assumptions are also made to specify an attack scenario.

Assumption 1. Assume the following conditions.

  • The supervisor does not collude with any agent.

  • Every agent has at least two input neighbors.

  • \(n\), \(G\), \(\Delta_\mathrm{w}\), \(\Delta_\mathrm{x}\), \(\mathcal{M}\), \(\mathcal{C}\), and \(\{\mathsf{pk}_i\}_{i \in V}\) are public.

Note that the first and second assumptions are necessary in our scenario. If agent \(i\) colludes with the supervisor, it can identify \(\tilde{\mathrm{x}}_j(k)\) for all \(j \in N^{\mathrm{in}}_i\) and for all \(k \in \mathbb{N}_0\) because the supervisor has all shares \(\mathrm{s}_{ij}(k)\). Additionally, if \(| N^{\mathrm{in}}_i | = 1\), agent \(i\) can easily identify agent \(j\)’s state as \(\mathrm{x}_j(k) = \mathrm{w}_{ij}^{-1} \mathrm{v}_i(k)\) from (3). Under this assumption, the lemma below shows the security of Protocol 2 against coalitions of fewer than \(\min_i |N^{\mathrm{in}}_i|\) semi-honest adversaries.

Lemma 1. Let \(\mathsf{x}_i = (\mathrm{w}_{ii}, \{\mathrm{w}_{ij}\}_{j \in N^{\mathrm{in}}_i}, \mathrm{x}_{i,0}, \mathsf{sk}_i)\) and \(\mathsf{y}_i = \{ \mathrm{x}_i(k) \}_{k=0}^n\) for \(i \in V\) and \(n \in \mathbb{N}\). Protocol 2 \(h\)-privately computes functionality \((\Lambda, \mathsf{y}_1, \dots, \mathsf{y}_N) = F(\Lambda, \mathsf{x}_1, \dots, \mathsf{x}_N)\) in the presence of semi-honest adversaries under Assumption 1, where \(h = \min_i |N^{\mathrm{in}}_i|\), and \(\Lambda\) is the empty string.

Proof. Let \(\Pi\) be Protocol 2. Under Assumption 1, this proof constructs simulators \(\mathsf{Sim}\) that satisfy the condition in Definition 3 for adversarial agents (\(C \subset V\)) and the supervisor (\(C = \{0\}\)) separately. From Corollary 2 in [18], a sequential composition of protocols \(\Pi_1, \dots, \Pi_T\), which respectively and privately compute functionalities \(F_1, \dots, F_T\), privately computes a composition of the functionalities in the presence of semi-honest adversaries. Thus, it suffices to prove the claim for \(n = 1\) because \(\Pi\) with any \(n \in \mathbb{N}\) can be realized by sequentially repeating \(\Pi\) with \(n = 1\).

Simulator for agents: The view of agent \(i\) is given by \(\mathsf{view}^{\Pi}_i(\lambda, \Lambda, \mathsf{x}_1, \dots, \mathsf{x}_N) \!=\! (\mathsf{x}_i, \mathsf{r}_i, \mathsf{m}_i)\), where \(\mathsf{r}_i \!=\! \{r_{ij}\}_{j \in N^{\mathrm{in}}_i}\) are seeds for random numbers used in the encryption of \(\tilde{\mathrm{w}}_{ij}\), and \(\mathsf{m}_i = (\mathsf{m}_{i, \mathrm{s}}, \mathsf{m}_{i, \mathrm{w}}, \mathsf{m}_{i, \mathrm{v}}) = (\{\mathsf{ct}_{\mathrm{s}, ji}(0)\}_{j \in N^{\mathrm{out}}_i}, \{\mathsf{ct}_{\mathrm{w}, ji}\}_{i \in V, j \in N^{\mathrm{out}}_i}, \{\mathsf{ct}_{\mathrm{v}, ij}(0)\}_{j \in N^{\mathrm{in}}_i})\). Construct a simulator \(\mathsf{Sim}\) as follows: 1) Generate seeds \(\hat{\mathsf{r}}_i\) of the same length as \(\mathsf{r}_i\) uniformly at random for all \(i \in C\). 2) Sample \(\hat{\mathsf{ct}}_{\mathrm{s}, ji}\) from \(\mathcal{C}\) uniformly at random for all \(i \in C\) and for all \(j \in N^{\mathrm{out}}_i\). 3) For all \(i \in V\) and for all \(j \in N^{\mathrm{out}}_i\), compute \(\hat{\mathsf{ct}}_{\mathrm{w}, ji} \gets \mathsf{Enc}(\mathsf{pk}_j, \tilde{\mathrm{w}}_{ji})\) if \(j \in C\); otherwise sample \(\hat{\mathsf{ct}}_{\mathrm{w}, ji}\) from \(\mathcal{C}\) uniformly at random. 4) For all \(i \in C\) and for all \(j \in N^{\mathrm{in}}_i\), compute \(\hat{\mathsf{ct}}_{\mathrm{v}, ij} \gets \mathsf{Enc}(\mathsf{pk}_i, \tilde{\mathrm{w}}_{ij} \tilde{\mathrm{x}}_{j,0} \bmod q) \oplus \hat{\mathsf{ct}}_{\mathrm{s}, ij}\) if \(j \in C\); otherwise sample \(\hat{\mathsf{ct}}_{\mathrm{v}, ij}\) from \(\mathcal{C}\) uniformly at random. 5) Let \(\hat{\mathsf{m}}_i \!=\! (\hat{\mathsf{m}}_{i, \mathrm{s}}, \hat{\mathsf{m}}_{i, \mathrm{w}}, \hat{\mathsf{m}}_{i, \mathrm{v}}) \!=\! (\{ \hat{\mathsf{ct}}_{\mathrm{s}, ji} \}_{j \in N^{\mathrm{out}}_i}, \! \{ \hat{\mathsf{ct}}_{\mathrm{w}, ji} \}_{i \in V, j \in N^{\mathrm{out}}_i}, \{ \hat{\mathsf{ct}}_{\mathrm{v}, ij} \}_{j \in N^{\mathrm{in}}_i})\). Output \(\{ (\mathsf{x}_i, \hat{\mathsf{r}}_i, \hat{\mathsf{m}}_i) \mid i \in C \}\).

By construction, it holds that \(\mathsf{Dec}(\mathsf{sk}_i, \mathsf{ct}_{\mathrm{v}, ij}(0)) = \tilde{\mathrm{w}}_{ij} \tilde{\mathrm{x}}_{j,0} + \mathrm{s}_{ij}(0) \bmod q\) and \(\mathsf{Dec}(\mathsf{sk}_i, \hat{\mathsf{ct}}_{\mathrm{v}, ij}) = \tilde{\mathrm{w}}_{ij} \tilde{\mathrm{x}}_{j,0} + \hat{\mathrm{s}}_{ij} \bmod q\) for all \(i \in C\) and for all \(j \in N^{\mathrm{in}}_i\), where \(\hat{\mathrm{s}}_{ij}\) is uniformly random over \(\mathbb{Z}_q\). From the randomness of additive secret sharing, if \(|C| < \min_i |N^{\mathrm{in}}_i|\), \(\{ \{\hat{\mathrm{s}}_{ji}\}_{j \in N^{\mathrm{out}}_i} \mid i \in C\}\) and \(\{ \{\mathsf{Dec}(\mathsf{sk}_i, \hat{\mathsf{ct}}_{\mathrm{v}, ij})\}_{j \in N^{\mathrm{in}}_i} \mid i \in C \}\) have the same distribution as \(\{ \{\mathrm{s}_{ji}(0)\}_{j \in N^{\mathrm{out}}_i} \mid i \in C\}\) and \(\{ \{\mathsf{Dec}(\mathsf{sk}_i, \mathsf{ct}_{\mathrm{v}, ij}(0))\}_{j \in N^{\mathrm{in}}_i} \mid i \in C \}\), respectively. Hence, semantic security of additively homomorphic encryption implies that \(\{ (\hat{\mathsf{m}}_{i, \mathrm{s}}, \hat{\mathsf{m}}_{i, \mathrm{v}}) \mid i \in C \}\) and \(\{ (\mathsf{m}_{i, \mathrm{s}}, \mathsf{m}_{i, \mathrm{v}}) \mid i \in C \}\) are computationally indistinguishable even given \(\{ (\mathsf{x}_i, \mathsf{r}_i) \mid i \in C\}\). It also implies that \(\{ \mathsf{m}_{i, \mathrm{w}} \mid i \in C \}\) and \(\{ \hat{\mathsf{m}}_{i, \mathrm{w}} \mid i \in C \}\) are computationally indistinguishable even given \(\{ (\mathsf{x}_i, \mathsf{r}_i) \mid i \in C\}\), and \(\{ \mathsf{m}_{i, \mathrm{w}} \mid i \in C \}\) is conditionally independent of \(\{ (\mathsf{m}_{i, \mathrm{s}}, \mathsf{m}_{i, \mathrm{v}}) \mid i \in C \}\) given \(\{ \mathsf{x}_i \mid i \in C\}\). Consequently, the condition in Definition 3 holds.

Simulator for the supervisor: The construction is obvious because the supervisor receives no message. ◻

Note that the supervisor in Lemma 1 takes and outputs the empty string, which means that it provides no input and receives no output in the protocol. This is because it only sends the encryptions of shares in the offline phase to assist the agents’ computation.

5 Distributed Mechanism for Privacy-Preserving Average Consensus

In this section, we assume a rational adversary model instead of a semi-honest adversary model.

Definition 4. Agent \(i\) is a rational adversary if it minimizes \(u_i(d(\theta), t(\theta); \theta_i)\) over \(s_i(\theta_i)\) and attempts to learn information about other agents from its view, where \(s_i(\theta_i) = \{\sigma_{i, k} \mid k = 0, \dots, n - 1\}\), and \(\sigma_{i, k}\) are the outgoing messages that agent \(i\) sends to its output neighbors at time \(k\).

The rational adversaries formulated in the definition are allowed to cooperate with each other to learn the private information of honest agents. Meanwhile, they are supposed to minimize their own costs individually. This is a natural setting because, in practice, adversaries would have conflicting objectives (i.e., minimizing each cost), even if they agree to compromise the privacy of honest agents.

Our objective is to design a privacy-preserving protocol for providing a mechanism that implements a social choice function with decision rule \(d(\theta) = (\frac{1}{N} \sum_{i = 1}^N \mathrm{x}_{i,0}, \dots, \frac{1}{N} \sum_{i = 1}^N \mathrm{x}_{i,0})\) under rational adversaries. Here, computing the average \(\frac{1}{N} \sum_{i = 1}^N \mathrm{x}_{i,0}\) is equivalent to minimizing \(\sum_{i = 1}^N (\mathrm{z}_i - \theta_i)^2\) over \((\mathrm{z}_1, \dots, \mathrm{z}_N)\) subject to the consensus constraint \(\mathrm{z}_1 = \cdots = \mathrm{z}_N\), with \(\mathrm{x}_{i,0} = \theta_i\) [19]. This fact suggests that the average consensus problem can be regarded as a mechanism design problem with individual costs \(u_i(d(\theta), t(\theta); \theta_i) = (\mathrm{z}_i - \theta_i)^2 + t_i(\theta)\), where \(d(\theta) = (\mathrm{z}_1, \dots, \mathrm{z}_N)\).
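For completeness, the equivalence follows by imposing the consensus constraint \(\mathrm{z}_1 = \cdots = \mathrm{z}_N = z\) and applying the first-order optimality condition: \[\frac{d}{dz} \sum_{i = 1}^N (z - \theta_i)^2 = 2 \sum_{i = 1}^N (z - \theta_i) = 0 \iff z = \frac{1}{N} \sum_{i = 1}^N \theta_i.\]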

We propose Protocol 3 based on the above observation. In the offline phase, the supervisor generates and sends \(\{ \mathsf{ct}_{\mathrm{t}, ji} \gets \mathsf{Enc}(\mathsf{pk}_j, \mathrm{t}_{ji}) \}_{j \in V \setminus \{i\}}\) to each agent \(i \in V\), where \((\mathrm{t}_{ij})_{j \in V \setminus \{i\}} \gets \mathsf{Share}(0, N - 1)\) for every \(i \in V\). Then, the protocol invokes Protocol 2 with \(\mathrm{x}_{i,0} = \theta_i\). After the online phase of Protocol 2, agent \(1\) encrypts its state as \(\{ \mathsf{ct}_{\mathrm{x}, i1} \gets \mathsf{Enc}(\mathsf{pk}_i, \tilde{\mathrm{x}}_1(n)) \}_{i \in V}\) and broadcasts them to all agents, where \(\tilde{\mathrm{x}}_1(n) = \bar{\mathrm{x}}_1(n) \bmod q\) is the encoded terminal state of agent \(1\). Simultaneously, agent \(i\) broadcasts \(\{ \mathsf{ct}_{v, ji} \gets \mathsf{Enc}(\mathsf{pk}_j, \tilde{v}_i) \oplus \mathsf{ct}_{\mathrm{t}, ji} \}_{j \in V \setminus \{i\}}\) to all agents, where \(\tilde{v}_i = \lfloor \Delta_\mathrm{x}^{-1} (\mathrm{x}_i(n) - \theta_i)^2 \rceil \bmod q\). Then, agent \(i\) obtains the social decision and tax as \(d_i(\theta) = \Delta_\mathrm{x}[ \mathsf{Dec}(\mathsf{sk}_i, \mathsf{ct}_{\mathrm{x}, i1}) ]_q\) and \(t_i(\theta) = \Delta_\mathrm{x}[ \mathsf{Reconst}( (\mathsf{Dec}(\mathsf{sk}_i, \mathsf{ct}_{v, ij}))_{j \in V \setminus \{i\}} ) ]_q\). Consequently, for a sufficiently large \(n\), the social outcome is given as \(d_i(\theta) = \mathrm{x}_1(n) \approx \frac{1}{N} \sum_{j = 1}^N \theta_j\) with \(t_i(\theta) = \sum_{j \ne i} (d_j(\theta) - \theta_j)^2\).
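To make the outcome computation concrete, the following plaintext sketch (encryption, secret sharing, and quantization omitted; the terminal states are hypothetical) computes the decision and tax that Protocol 3 delivers.

```python
# Plaintext sketch of the outcome of Protocol 3:
# d_i = x_1(n) and t_i = sum_{j != i} (x_j(n) - theta_j)^2.
def mechanism_outcome(theta, x_final):
    N = len(theta)
    v = [(x_final[j] - theta[j]) ** 2 for j in range(N)]  # v_j terms
    d = [x_final[0]] * N                   # agent 1's terminal state, broadcast
    t = [sum(v) - v[i] for i in range(N)]  # t_i excludes agent i's own term
    return d, t

theta = [3.0, 2.0, 1.0, 0.0, -1.0]         # reported types
x_final = [1.0] * 5                        # hypothetical near-consensus states
d, t = mechanism_outcome(theta, x_final)
# each agent's cost is u_i = (d_i - theta_i)^2 + t_i
```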

Proposition 2. Suppose that, for every \(k = 0, \dots, n\) and \(n \in \mathbb{N}\), \(| \bar{\mathrm{v}}_{ij}(k) | < q / 2\), \(| \bar{\mathrm{v}}_i(k) | < q / 2\), \(|\bar{\mathrm{x}}_1(n)| < q / 2\), and \(\sum_{j \ne i} \lfloor \Delta_\mathrm{x}^{-1} (\mathrm{x}_j(n) - \theta_j)^2 \rceil < q / 2\) hold for all \(i \in V\) and for all \(j \in N^{\mathrm{in}}_i\), where \(\bar{\mathrm{v}}_{ij}\) and \(\bar{\mathrm{v}}_i\) are as in Proposition 1. Let \(M^n = (g^n, \Sigma^n, s^n)\) be a mechanism provided by Protocol 3. The sequence of mechanisms \(\{M^n\}_{n \in \mathbb{N}}\) asymptotically implements the social choice function \(f = (d, t)\) given by \(d(\theta) = ( d_1(\theta), \dots, d_N(\theta) )\), \(d_i(\theta) = \frac{1}{N} \sum_{j = 1}^N \theta_j\), \(t(\theta) = ( t_1(\theta), \dots, t_N(\theta) )\), and \(t_i(\theta) = \sum_{j \ne i} (d_j(\theta) - \theta_j)^2\) in ex-post Nash equilibria.

Proof. Let \(\hat{d}(\theta) = (\hat{d}_1(\theta), \dots, \hat{d}_N(\theta))\) and \(\hat{t}(\theta) = (\hat{t}_1(\theta), \dots, \hat{t}_N(\theta))\) be decision and transfer rules computed by the protocol. By construction, it follows that \(\hat{d}_i(\theta) = \Delta_\mathrm{x}[ \mathsf{Dec}(\mathsf{sk}_i, \mathsf{ct}_{\mathrm{x}, i1}) ]_q = \mathrm{x}_1(n)\) and \(\hat{t}_i(\theta) = \Delta_\mathrm{x}[ \mathsf{Reconst}( (\tilde{v}_j + \mathrm{t}_{ij})_{j \in V \setminus \{i\}} ) ]_q = \sum_{j \ne i} (\mathrm{x}_j(n) - \theta_j)^2\) for every \(i \in V\). Proposition 1 implies \(\lim_{n \to \infty} g^n \circ s^n = f\) because \(\lim_{n \to \infty} \mathrm{x}_i(n) = \frac{1}{N} \sum_{j = 1}^N \theta_j\) for all \(i \in V\). Therefore, the claim follows from Proposition 2 in [6]. ◻

Figure 3: Privacy-preserving distributed mechanism

Proposition 2 implies that, as \(n \to \infty\), all agents report their types honestly, i.e., \(\theta_i = \mathrm{x}_{i,0}\), and follow Protocol 3 faithfully. The behavior of rational adversaries in Protocol 3 is then equivalent to that of semi-honest adversaries. The theorem below demonstrates the security of Protocol 3 in the same manner as Lemma 1.

Theorem 1. Let \(\mathsf{x}_i = (\mathrm{w}_{ii}, \{\mathrm{w}_{ij}\}_{j \in N^{\mathrm{in}}_i}, \theta_i, \mathsf{sk}_i)\) and \(\mathsf{y}_i = (\{ \mathrm{x}_i(k) \}_{k=0}^n, d_i(\theta), t_i(\theta))\) for all \(i \in V\). Protocol 3 \(h\)-privately computes functionality \((\Lambda, \mathsf{y}_1, \dots, \mathsf{y}_N) = F(\Lambda, \mathsf{x}_1, \dots, \mathsf{x}_N)\) in the presence of semi-honest adversaries under Assumption 1, where \(h\) and \(\Lambda\) are as in Lemma 1.

Proof. Let \(\Pi\) be Protocol 3, and \(\Pi'\) be the protocol obtained by excluding line 2 from \(\Pi\). The claim follows from Corollary 2 in [18] and Lemma 1 by composing Protocol 2 and \(\Pi'\), provided \(\Pi'\) \(h\)-privately computes \(F\) in the presence of semi-honest adversaries. This proof constructs simulators \(\mathsf{Sim}\) for \(\Pi'\).

Simulator for agents: The view of agent \(i\) is given by \(\mathsf{view}^{\Pi'}_i(\lambda, \Lambda, \mathsf{x}_1, \dots, \mathsf{x}_N) = (\mathsf{x}_i, \mathsf{r}_i, \mathsf{m}_i)\), where \(\mathsf{r}_i = \{r_{ij}\}_{j \in V \setminus \{i\}}\) for \(i \ne 1\), \(\mathsf{r}_1 = (\{r_\ell\}_{\ell \in V}, \{r_{1j}\}_{j \in V \setminus \{1\}})\), and \(\mathsf{m}_i = (\mathsf{m}_{i, \mathrm{t}}, \mathsf{m}_\mathrm{x}, \mathsf{m}_v) = (\{\mathsf{ct}_{\mathrm{t}, ji}\}_{j \in V \setminus\{i\}}, \{\mathsf{ct}_{\mathrm{x}, \ell 1}\}_{\ell \in V}, \{\mathsf{ct}_{v, j \ell}\}_{\ell \in V, j \in V \setminus \{\ell\}})\). Here, \(\{r_\ell\}_{\ell \in V}\) and \(\{r_{ij}\}_{j \in V \setminus \{i\}}\) are seeds for random numbers used in the encryption of \(\tilde{\mathrm{x}}_1(n)\) and \(\tilde{v}_i\), respectively. Construct a simulator \(\mathsf{Sim}\) as follows: 1) Generate seeds \(\hat{\mathsf{r}}_i\) of the same length as \(\mathsf{r}_i\) uniformly at random for all \(i \in C\). 2) Compute \(\hat{\mathsf{ct}}_{\mathrm{t}, ji} \gets \mathsf{Enc}(\mathsf{pk}_j, \hat{\mathrm{t}}_{ji})\) with \((\hat{\mathrm{t}}_{ij})_{j \in V \setminus \{i\}} \gets \mathsf{Share}(0, N - 1)\) for all \(i \in V\). 3) For all \(\ell \in V\), compute \(\hat{\mathsf{ct}}_{\mathrm{x}, \ell 1} \gets \mathsf{Enc}(\mathsf{pk}_\ell, \Delta_\mathrm{x}^{-1} d_i(\theta) \bmod q)\) with some \(i \in C\). 4) For all \(i \in V\) and for all \(j \in V \setminus \{i\}\), compute \(\hat{\mathsf{ct}}_{v, ji} \gets \mathsf{Enc}(\mathsf{pk}_j, \tau_{ji}) \oplus \hat{\mathsf{ct}}_{\mathrm{t}, ji}\), where \(\tau_{ji} = \tilde{v}_i\) if \(i \in C\), \(\tau_{ji} = 0\) if \(i \in V \setminus C\) and \(j \in (V \setminus \{i\}) \setminus C\), and \((\tau_{ji})_{i \in V \setminus C} \gets \mathsf{Share}(\Delta_\mathrm{x}^{-1} t_j(\theta) - \sum_{\ell \in C \setminus \{j\}} \tilde{v}_\ell, | V \setminus C |)\) if \(i \in V \setminus C\) and \(j \in C\). 5) Let \(\hat{\mathsf{m}}_i = (\hat{\mathsf{m}}_{i, \mathrm{t}}, \hat{\mathsf{m}}_\mathrm{x}, \hat{\mathsf{m}}_v) = (\{\hat{\mathsf{ct}}_{\mathrm{t}, ji}\}_{j \in V \setminus \{i\}}, \{\hat{\mathsf{ct}}_{\mathrm{x}, \ell 1}\}_{\ell \in V}, \{\hat{\mathsf{ct}}_{v, \ell j}\}_{\ell \in V, j \in V \setminus \{\ell\}})\). 6) Output \(\{ (\mathsf{x}_i, \hat{\mathsf{r}}_i, \hat{\mathsf{m}}_i) \mid i \in C \}\).

By construction, \(\{ (\mathsf{m}_{i, \mathrm{t}}, \mathsf{m}_\mathrm{x}) \mid i \in C \}\) and \(\{ (\hat{\mathsf{m}}_{i, \mathrm{t}}, \hat{\mathsf{m}}_\mathrm{x}) \mid i \in C \}\) have the same distribution. For all \(i \in V\) and for all \(j \in V \setminus \{i\}\), \(\mathsf{ct}_{v, ji}\) is computationally indistinguishable from \(\hat{\mathsf{ct}}_{v, ji}\) due to semantic security. Furthermore, for all \(i \in C\), it holds that \(\mathsf{Reconst}( (\mathsf{Dec}(\mathsf{sk}_i, \hat{\mathsf{ct}}_{v, ij}) )_{j \in V \setminus \{i\}} ) = \sum_{j \in C \setminus \{i\}} \tau_{ij} + \sum_{j \in V \setminus C} \tau_{ij} = (\sum_{j \in C \setminus \{i\}} \tilde{v}_j) + (\Delta_\mathrm{x}^{-1} t_i(\theta) - \sum_{\ell \in C \setminus \{i\}} \tilde{v}_\ell) = \Delta_\mathrm{x}^{-1} t_i(\theta)\), which means that \(\{ \mathsf{m}_v \mid i \in C \}\) and \(\{ \hat{\mathsf{m}}_v \mid i \in C \}\) are computationally indistinguishable even given \(\{ (\mathsf{x}_i, \mathsf{r}_i) \mid i \in C \}\). Therefore, the condition in Definition 3 holds.

Simulator for the supervisor: The construction is obvious because the supervisor receives no message. ◻

Combining Proposition 2 and Theorem 1, as \(n \to \infty\), the security of Protocol 3 is guaranteed in the sense of Definition 3 with \(h = \min_i |N^{\mathrm{in}}_i|\) under Assumption 1 even for rational adversaries. Note that, for finite \(n\), rational adversaries are not equivalent to semi-honest ones and might not completely follow the proposed protocol. However, according to Definition 2 and Proposition 2, the decrease in the adversaries’ costs obtained by deviating from the protocol is bounded by an arbitrarily small value if \(n\) is sufficiently large. In this light, their behavior can be made arbitrarily close to that of semi-honest agents by choosing a large \(n\).

Remark 2. A mechanism is (weakly) budget balanced if \(\sum_{i = 1}^N t_i(\theta) = 0\). Moreover, it is individually rational if \(u_i(d(\theta), t(\theta); \theta_i) \le 0\) for every \(i \in V\). The mechanism provided by Protocol 3 is neither budget balanced nor individually rational, although it is asymptotically incentive compatible. Further development of the proposed protocol to satisfy these properties is future work.

Remark 3. The supervisor must verify that all agents pay \(t_i(\theta)\) correctly after executing Protocol 3. This is not straightforward because the supervisor does not know the exact values of \(t_i(\theta)\) due to encryption. Nevertheless, the sum of the \(t_i(\theta)\) can be verified without compromising privacy as follows. Suppose \(t_i'(\theta)\) is the value that agent \(i\) actually paid. Let \(v_i(\theta) = (\mathrm{x}_i(n) - \theta_i)^2\) and \(u_i(\theta) = v_i(\theta) + t_i(\theta)\). It follows that \(u_i(\theta) = \sum_{j=1}^N v_j(\theta)\) and \(\sum_{i=1}^N t_i(\theta) = (N - 1) \sum_{j=1}^N v_j(\theta)\). Combining these equations, we obtain \(\sum_{i=1}^N t_i(\theta) = (N - 1) u_i(\theta)\) for every \(i\). Therefore, the supervisor can check whether \(\sum_{i=1}^N t_i'(\theta)\) is correct by asking each agent whether \((N - 1)^{-1} \sum_{i=1}^N t_i'(\theta)\) equals \(u_i(\theta)\).
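A small numeric check of this identity, with hypothetical terminal states:

```python
# Check: sum_i t_i(theta) = (N - 1) * u_i(theta) for every agent i.
theta = [3.0, 2.0, 1.0, 0.0, -1.0]
x_n = [1.02, 0.97, 1.01, 1.00, 0.99]   # hypothetical terminal states x_i(n)
v = [(x - th) ** 2 for x, th in zip(x_n, theta)]
t = [sum(v) - vi for vi in v]          # t_i = sum_{j != i} v_j
u = [vi + ti for vi, ti in zip(v, t)]  # u_i = v_i + t_i = sum_j v_j
N = len(theta)
assert all(abs(sum(t) - (N - 1) * ui) < 1e-9 for ui in u)
```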

6 Numerical Examples

This section presents numerical examples using the LWE encryption scheme. We used ECLib [20] and the lattice-estimator [21] to implement the proposed protocols and the encryption scheme with \(\lambda = 128\)-bit security.

Let \(n = 30\), \(N = 5\), \(\Delta_\mathrm{w}= 0.1\), \(\Delta_\mathrm{x}= 0.01\), \(\epsilon = 0.1\), and \[A = \begin{bmatrix} 0 & 0 & 0 & 1 & 1 \\ 1 & 0 & 0 & 1 & 1 \\ 0 & 1 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 & 1 \\ 0 & 1 & 1 & 0 & 0 \end{bmatrix}.\] Fig. 4(a) depicts the state trajectories of the agents during the execution of Protocol 2 and Protocol 3 with \((\theta_1, \dots, \theta_5) = (3, 2, 1, 0, -1)\). The cost of agent \(2\) was \(u_2(d(\theta), t(\theta); \theta_2) = 10.65\), where \(v_2(d(\theta); \theta_2) = 1.85\) and \(t_2(\theta) = 8.80\). Fig. 4(b) shows the state trajectories when agent \(2\) held its state constant (i.e., \(\mathrm{x}_2(k) = 2\)) in violation of Protocol 2. In that case, although \(v_2\) was reduced to \(0\), \(t_2\) increased to \(14.43\), thereby increasing \(u_2\) to \(14.43\). This implies that a rational agent would never behave in such a manner.

Figure 4: State trajectories during the execution of the proposed protocols. (a) All agents follow the protocols faithfully. (b) Agent \(2\) deviates by holding its state constant.

7 Conclusions

We proposed a privacy-preserving protocol to solve average consensus problems for rational and strategic agents. The proposed protocol provides a distributed mechanism that incentivizes such agents to implement the intended behavior faithfully and protects the privacy of agents using additively homomorphic encryption and additive secret sharing. Combining the mechanism and cryptographic primitives, the proposed protocol fulfills security under rational adversaries rather than semi-honest adversaries. Future work will generalize the results of this study to other cooperative control tasks.

References

[1]
W. Ren, R. W. Beard, and E. M. Atkins, “A survey of consensus problems in multi-agent coordination,” in American Control Conference, 2005, pp. 1859–1864.
[2]
R. Olfati-Saber, J. A. Fax, and R. M. Murray, “Consensus and cooperation in networked multi-agent systems,” Proceedings of the IEEE, vol. 95, no. 1, pp. 215–233, 2007.
[3]
S. S. Kia, B. Van Scoy, J. Cortes, R. A. Freeman, K. M. Lynch, and S. Martinez, “Tutorial on dynamic average consensus: The problem, its applications, and the algorithms,” IEEE Control Systems Magazine, vol. 39, no. 3, pp. 40–72, 2019.
[4]
Y. Shoham and K. Leyton-Brown, Multiagent systems: Algorithmic, game-theoretic, and logical foundations. Cambridge University Press, 2008.
[5]
D. C. Parkes and J. Shneidman, “Distributed implementations of Vickrey-Clarke-Groves mechanisms,” in International Joint Conference on Autonomous Agents and Multiagent Systems, 2004, pp. 261–268.
[6]
T. Tanaka, F. Farokhi, and C. Langbort, “A faithful distributed implementation of dual decomposition and average consensus algorithms,” in IEEE Conference on Decision and Control, 2013, pp. 2985–2990.
[7]
——, “Faithful implementations of distributed algorithms and control laws,” IEEE Transactions on Control of Network Systems, vol. 4, no. 2, pp. 191–201, 2017.
[8]
M. S. Darup, A. B. Alexandru, D. E. Quevedo, and G. J. Pappas, “Encrypted control for networked systems: An illustrative introduction and current challenges,” IEEE Control Systems Magazine, vol. 41, no. 3, pp. 58–78, 2021.
[9]
M. Kishida, “Encrypted average consensus with quantized control law,” in IEEE Conference on Decision and Control, 2018, pp. 5850–5856.
[10]
M. Ruan, H. Gao, and Y. Wang, “Secure and privacy-preserving consensus,” IEEE Transactions on Automatic Control, vol. 64, no. 10, pp. 4035–4049, 2019.
[11]
M. Marcantoni, B. Jayawardhana, M. P. Chaher, and K. Bunte, “Secure formation control via edge computing enabled by fully homomorphic encryption and mixed uniform-logarithmic quantization,” IEEE Control Systems Letters, vol. 7, pp. 395–400, 2023.
[12]
M. S. Darup, A. Redder, and D. E. Quevedo, “Encrypted cooperative control based on structured feedback,” IEEE Control Systems Letters, vol. 3, no. 1, pp. 37–42, 2019.
[13]
A. B. Alexandru, M. S. Darup, and G. J. Pappas, “Encrypted cooperative control revisited,” in IEEE Conference on Decision and Control, 2019, pp. 7196–7202.
[14]
Y. Lindell, “How to simulate it – A tutorial on the simulation proof technique,” Cryptology ePrint Archive, 2016.
[15]
O. Goldreich, Foundations of cryptography: Basic applications. Cambridge University Press, 2009.
[16]
I. Damgård, V. Pastro, N. Smart, and S. Zakarias, “Multiparty computation from somewhat homomorphic encryption,” in Lecture Notes in Computer Science. Springer Berlin Heidelberg, 2012, pp. 643–662.
[17]
N. Nisan and A. Ronen, “Computationally feasible VCG mechanisms,” Journal of Artificial Intelligence Research, vol. 29, no. 1, pp. 19–47, 2007.
[18]
R. Canetti, “Security and composition of multiparty cryptographic protocols,” Journal of Cryptology, vol. 13, no. 1, pp. 143–202, 2000.
[19]
M. G. Rabbat, R. D. Nowak, and J. A. Bucklew, “Generalized consensus computation in networked systems with erasure links,” in IEEE Workshop on Signal Processing Advances in Wireless Communications, 2005, pp. 1088–1092.
[20]
ECLib, https://kaoruteranishi.github.io/EncryptedControl/index.html, accessed: 2025-03-10.
[21]
M. R. Albrecht, R. Player, and S. Scott, “On the concrete hardness of learning with errors,” Journal of Mathematical Cryptology, vol. 9, no. 3, pp. 169–203, 2015.

  1. \(^{\ast}\)This work was supported by JSPS Grant-in-Aid for JSPS Fellows Grant Number JP21J22442 and JSPS KAKENHI Grant Number JP23K22779.

  2. \(^{1}\)School of Aeronautics and Astronautics, Purdue University, West Lafayette, IN 47907, USA kteranis@purdue.edu

  3. \(^{2}\)Japan Society for the Promotion of Science, Chiyoda, Tokyo, Japan

  4. \(^{3}\)Department of Mechanical and Intelligent Systems Engineering, The University of Electro-Communications, 1-5-1 Chofugaoka, Chofu, Tokyo 182-8585, Japan kogiso@uec.ac.jp

  5. \(^{4}\)School of Aeronautics and Astronautics, Elmore Family School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN 47907, USA tanaka16@purdue.edu