Multi-Agent Reinforcement Learning with Control-Theoretic Safety Guarantees for Dynamic Network Bridging


Addressing complex cooperative tasks in safety-critical environments poses significant challenges for Multi-Agent Systems, especially under conditions of partial observability. This work introduces a hybrid approach that integrates Multi-Agent Reinforcement Learning with control-theoretic methods to ensure safe and efficient distributed strategies. Our contributions include a novel setpoint update algorithm that dynamically adjusts agents’ positions to preserve safety conditions without compromising the mission’s objectives. Through experimental validation, we demonstrate significant advantages over conventional MARL strategies, achieving comparable task performance with zero safety violations. Our findings indicate that integrating safe control with learning approaches not only enhances safety compliance but also achieves good performance in mission objectives.

1 Introduction↩︎

Despite the promising performance of multi-agent reinforcement learning (MARL) in simulations, its application in real-world, safety-critical scenarios raises significant concerns. A key challenge is collision avoidance, especially in applications like autonomous driving, where failure to prevent collisions could result in severe consequences. Moreover, multi-agent systems (MAS) are often subject to partial observability, which further increases the complexity of these problems by limiting the information available to each agent about the environment and the states of other agents.

This challenge highlights a fundamental limitation of all RL-based approaches: while agents learn to optimize predefined reward functions, their behavior lacks guarantees and remains inherently unpredictable. Relying solely on MARL’s reward functions to ensure safety is insufficient [1]. Even with extensive training and monitoring of an agent’s performance based on expected rewards or other performance measures, it is impossible to exhaustively examine all potential scenarios in which the agent might fail to act safely or predictably.

To ensure safety among agents, various control-theoretic and hybrid learning approaches have been developed, ranging from Model Predictive Control (MPC) [2] to Reference Governor (RG) techniques [3], and from Control Barrier Functions (CBFs) [4] to Reachability Analysis (RA) methods [5]. In this work, we focus on developing a hybrid learning approach, built upon the method of [6], that can learn to achieve complex objectives while providing safety guarantees in distributed MAS.

To demonstrate and evaluate the effectiveness of our hybrid approach we consider the cooperative task of Dynamic Network Bridging [7] (Fig. 1). In this task the goal is to establish and maintain a connection between two moving targets A and B in a 2D plane, relying on a swarm of \(N\) agents with limited observation of the environment. The agents must form an ad-hoc mobile network that establishes a communication path between the two targets as they move, dynamically adjusting their positions to ensure an uninterrupted multi-hop communication link.

Figure 1: Decentralized swarm coordination for dynamic network bridging

In this paper we introduce a hybrid approach combining MARL and control-theoretic methods, accomplishing task objectives while providing safety guarantees. Our key contributions are: 1) a decentralized, MARL-compatible control framework that restricts the effect of each agent’s movement updates to its one-hop neighbors, enabling efficient local coordination; 2) an algorithm that updates setpoints while preserving safety conditions through communication with affected neighbors; 3) an analytical, computationally tractable condition for verifying potential safety violations during updates. Overall, our approach provides effective coordination among agents while ensuring safety constraints are met.

2 Preliminaries↩︎

In this section, we present the foundational theoretical elements, drawing from both control theory and MARL, that form the basis of our proposed hybrid approach.

Consider a linear time invariant (LTI) system expressed in the state space form \[\label{eqn:lti95sys} \dot{x} = Ax + Bu\tag{1}\]

where \(x \in \mathbb{R}^n\) is the state vector with \(n\) components, and \(u \in \mathbb{R}^p\) is the input vector with \(p\) components. Thus, the dynamical matrices are \(A \in \mathbb{R}^{n \times n}\) and \(B \in \mathbb{R}^{n \times p}\). If the pair \((A,B)\) is controllable, there exists a state feedback controller: \[\label{eqn:s-f95control} u = -Kx\tag{2}\]

such that the origin (\(x=0\)) of the resulting closed-loop (C-L) system: \[\label{eqn:c-l95sys} \dot{x} = (A - BK)x = A_f x\tag{3}\] is an asymptotically stable equilibrium point (EP): \(x(t) \rightarrow 0 \quad \text{as} \quad t \rightarrow +\infty\). Hence, the matrix \(A_f\) is Hurwitz, i.e., all of its eigenvalues have negative real part [8]. The problem of designing 2 so that 1 is asymptotically stable is called the stabilization problem.
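As a concrete illustration, the Hurwitz check on \(A_f = A - BK\) can be sketched in a few lines of NumPy. The double-integrator matrices and the gain \(K\) below are illustrative choices, not values from this work:

```python
import numpy as np

# Double integrator as a toy LTI system: x = [position, velocity].
# A, B, and the gain K are illustrative, not taken from the paper.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
K = np.array([[2.0, 3.0]])  # places the closed-loop poles at -1 and -2

# Closed-loop matrix A_f = A - B K
Af = A - B @ K

# A_f is Hurwitz iff every eigenvalue has a negative real part
eigenvalues = np.linalg.eigvals(Af)
assert np.all(eigenvalues.real < 0), "A_f is not Hurwitz"
```

The same check applies to any controllable pair \((A, B)\): only the sign of the real parts of the closed-loop eigenvalues matters for asymptotic stability.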

2.1 Tracking Control Problem↩︎

When the system 1 is required to follow a specific trajectory, the problem is called tracking control. It can be solved by generating a sequence of EPs [9] that we call setpoints, denoted by \(x_{sp}\). The C-L system 3 can then be represented as: \[\label{eqn:c-l95sp} \dot{x} = A_f(x - x_{sp})\tag{4}\] where the control input 2 becomes \(u = -K(x - x_{sp}).\)

2.2 Lyapunov’s Theory↩︎

Since \(A_f\) is Hurwitz, there exists a symmetric positive definite matrix \(P > 0\) such that: \[\label{eqn:lyapunov} A_f^T P + PA_f = -Q\tag{5}\] for a given symmetric positive definite matrix \(Q > 0\). Equation 5 is called Lyapunov’s equation. Defining the \(P\)-norm as: \[\label{P-norm} \lVert x - x_{sp} \rVert_P \triangleq \sqrt{(x - x_{sp})^T P (x - x_{sp})}\tag{6}\] the \(P\)-norm squared is a Lyapunov function \(V(x) = \lVert x - x_{sp} \rVert_P^2\), which is always positive except at \(x_{sp}\) where it is zero. For all trajectories \(x(t)\) starting within the ellipsoid: \[\label{pinvset} \mathcal{E}_{c}(x_{sp}) = \{ x \in \mathbb{R}^n \,|\, \lVert x - x_{sp} \rVert_P^2 \leq c \},\tag{7}\] where \(c > 0\), \(V(x(t))\) is monotonically decreasing. Therefore, the trajectory \(x(t)\) never leaves \(\mathcal{E}_c\), which is called a positively invariant set [8].
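A minimal numerical sketch of solving Lyapunov’s equation 5 by vectorizing it into a linear system; the closed-loop matrix and \(Q\) below are illustrative choices:

```python
import numpy as np

# Closed-loop matrix of a toy stabilized double integrator (illustrative).
Af = np.array([[0.0, 1.0],
               [-1.0, -2.0]])
n = Af.shape[0]
Q = np.eye(n)  # any symmetric positive definite Q works

# Solve A_f^T P + P A_f = -Q by vectorizing: with row-major vec(),
# vec(A_f^T P) = kron(A_f.T, I) vec(P) and vec(P A_f) = kron(I, A_f.T) vec(P).
I = np.eye(n)
M = np.kron(Af.T, I) + np.kron(I, Af.T)
P = np.linalg.solve(M, -Q.flatten()).reshape(n, n)

# P is symmetric positive definite, so ||x - x_sp||_P is a valid norm
assert np.allclose(Af.T @ P + P @ Af, -Q)
assert np.all(np.linalg.eigvals(P).real > 0)

def p_norm_sq(x, x_sp):
    """Squared P-norm ||x - x_sp||_P^2, i.e. the Lyapunov function V(x)."""
    e = x - x_sp
    return e @ P @ e
```

Membership of a state \(x\) in the invariant ellipsoid \(\mathcal{E}_c(x_{sp})\) then reduces to the scalar test `p_norm_sq(x, x_sp) <= c`.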

Remark 1. In the case of a nonlinear dynamical system \(\dot{x} = f(x,u)\) with \(f(0,0) = 0\), \(x = 0\) is an equilibrium point for \(u = 0\). The control input 2 can be designed by considering the linearized system 1 around the equilibrium point. Therefore, the closed-loop (C-L) system 3 can be used to solve Lyapunov’s equation 5 , which in turn yields the positively invariant set \(\mathcal{E}_c(x_{sp})\) 7 . While for LTI systems \(\mathcal{E}_c(x_{sp})\) is positively invariant for any positive \(c \in \mathbb{R}\), for nonlinear systems this condition must be verified in a neighborhood of the equilibrium point \(x_{sp}\) [8]. Furthermore, for nonlinear systems, \(x_{sp}\) has to satisfy \(f(x_{sp},0) = 0\).

2.3 6-DOF UAV↩︎

In this work, we consider a 6-DOF UAV as the agent, described by a state vector of 12 components: \[\label{UAV95state} x = \begin{bmatrix} \dot{p}_x, & p_x, & \dot{p}_y, & p_y, & \dot{p}_z, & p_z, & \dot{\phi}, & \phi, & \dot{\theta}, & \theta, & \dot{\psi}, & \psi \end{bmatrix}^T\tag{8}\] where \(p_x\), \(p_y\), \(p_z\) and their derivatives represent linear position and velocity, respectively. Similarly, \(\phi\), \(\theta\), \(\psi\) and their derivatives represent the Euler angles (roll, pitch, and yaw) and their angular velocities, respectively. This model is consistent with the one used in [9] and references therein. The controller is given by 2 , obtained by linearizing the quadrotor model around an equilibrium point. For this case, the equilibrium point is indicated with \(x_{sp}\), which has the same structure as 8 with all components equal to zero except for \(p_x\), \(p_y\), \(p_z\).
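A small sketch of how a setpoint with this structure might be assembled; the index layout follows 8 , while the helper name is our own:

```python
import numpy as np

# Index layout follows the state ordering in (8):
# [dpx, px, dpy, py, dpz, pz, dphi, phi, dtheta, theta, dpsi, psi]
def hover_setpoint(px, py, pz):
    """Setpoint vector: all components zero except the linear positions."""
    x_sp = np.zeros(12)
    x_sp[1] = px   # p_x
    x_sp[3] = py   # p_y
    x_sp[5] = pz   # p_z
    return x_sp

x_sp = hover_setpoint(0.3, 0.7, 1.0)
```

All velocity and attitude components are zero at the setpoint, consistent with a hover equilibrium.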

2.4 Multi-Agent Reinforcement Learning↩︎

In a typical multi-agent cooperative scenario, \(N\) agents interact within an environment to complete a task. Noisy and limited sensors may prevent each agent from observing the full state of the environment, allowing access only to partial observations. With partial observability, an agent no longer knows the true state of the environment, but instead needs to maintain a belief state, i.e., a probability distribution over states estimated from its observation history, which it uses to select actions. Decision-making in such contexts can be formulated as a decentralized partially observable Markov decision process (Dec-POMDP) [10], [11].

A Dec-POMDP is defined by a tuple \((S, \mathbf{A}, \mathcal{O}, T, R, \Omega, \gamma, N)\), where \(S\) is a finite set of states of the environment; \(\mathbf{A}\) is the joint action space; \(N\) is the number of agents; \(T\) is the state transition probability function, which gives the probability of transitioning from state \(s\) to state \(s^{\prime}\) when joint action \(\mathbf{a}=(a_1, \cdots, a_N)\) is taken; \(R\) is the joint reward function that maps states and joint actions to real numbers and is used to specify the goal of the agents; \(\Omega\) is the set of joint observations, with \(O_i\) the observation set of agent \(i\); \(\mathcal{O}\) is the joint observation probability function, which gives the probability of receiving joint observation \(\mathbf{o}=(o_1, \cdots, o_N)\) after taking joint action \(\mathbf{a}\) and ending up in state \(s^{\prime}\); and \(\gamma\) is the reward discount factor, \(0 \leq \gamma \leq 1\).

In a Dec-POMDP, agents must collaborate and coordinate their actions based on their individual observations to maximize the joint reward function. Solving a Dec-POMDP optimally is computationally intractable, as the policy space grows exponentially with the number of agents and the planning horizon. Therefore, various approximate solution methods [12], [13] have been used, such as heuristic search, dynamic programming, and value- or policy-based approximators with neural networks. The problem facing the team is to find the optimal joint policy, i.e., the combination of individual agent policies that maximizes the team’s expected reward.

3 Problem Formulation↩︎

We address the challenge of dynamically forming and sustaining a connection between two moving targets, A and B, in a 2D space using a swarm of \(N\) agents [7]. These agents must collaboratively adjust their positions to maintain a continuous multi-hop link between the targets, operating under decentralized control and limited local perception, while avoiding collisions and respecting physical boundaries. We consider all the agents as nonlinear systems that can be reduced to the form 1 , controlled by 2 around the equilibrium point \(x_{sp,i}\) [9]: \[\label{MAS} \dot{x}_i = A_f(x_i - x_{sp,i}) = (A - BK)(x_i - x_{sp,i})\tag{9}\] for \(i = 1, \ldots, N\). Here we consider \(N\) agents described by the same model, but our approach generalizes to heterogeneous dynamics and multidimensional scenarios.

3.1 One-hop Communication↩︎

We assume that for each agent, there is a ball of radius \(r\) in the 2- or 3-dimensional Euclidean space centered at the agent’s current linear position \(p_i(t) = [p_{x,i}(t), p_{y,i}(t), p_{z,i}(t)]^T\): \[\mathcal{B}_r(p_i(t)) = \left\lbrace p \in \mathbb{R}^3 : \|p - p_i(t) \| ^2 \leq r^2 \right\rbrace\] Given two agents \(i\) and \(j\) with \(i \neq j\), they can communicate if and only if \[\label{1hop} \mathcal{B}_r(p_i(t)) \cap \mathcal{B}_r(p_j(t)) \neq \emptyset\tag{10}\]
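Since both balls share the same radius, condition 10 reduces to checking that the inter-agent distance is at most \(2r\). A minimal sketch, with purely illustrative values:

```python
import numpy as np

def one_hop(p_i, p_j, r):
    """Condition (10): two radius-r balls centered at p_i and p_j
    intersect iff the distance between the centers is at most 2r."""
    return np.linalg.norm(np.asarray(p_i) - np.asarray(p_j)) <= 2 * r

# Illustrative positions and radius:
assert one_hop([0.0, 0.0], [0.3, 0.0], 0.2)       # 0.3 <= 0.4 -> can communicate
assert not one_hop([0.0, 0.0], [0.5, 0.0], 0.2)   # 0.5 >  0.4 -> out of range
```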

In our setup, since \(p_{z,i}\) is fixed, the one-hop communication region is described by a 2-D ball. Note that we use a different notation when referring to the 2-D communication space to avoid confusion between the quadrotor \(x\)-axis and the state vector \(x\).

3.2 Safety↩︎

Definition 1. Given two agents \(i\) and \(j\) with \(x_{sp,i} \neq x_{sp,j}\), \(x_i(t_0) \in \mathcal{E}_c(x_{sp,i})\), and \(x_j(t_0) \in \mathcal{E}_c(x_{sp,j})\), agents \(i\) and \(j\) are safe with respect to each other if \[\label{safe1} \mathcal{E}_c(x_{sp,i}) \cap \mathcal{E}_c(x_{sp,j}) = \emptyset.\qquad{(1)}\]

The above definition follows from the positively invariant property of 7 : since the initial condition of each agent lies inside its respective ellipsoid, \(x_i(t) \in \mathcal{E}_c(x_{sp,i})\) and \(x_j(t) \in \mathcal{E}_c(x_{sp,j})\) for all \(t \geq t_0\). This means that the trajectories of the two agents cannot intersect, eliminating the risk of collision.

In order to guarantee safety in our setup, we need the following assumption:

Assumption 1. The projection of \(\mathcal{E}_c(x_{sp,i})\) onto the 2-D space formed by the components \(p_x\) and \(p_y\) must be contained within the 2-dimensional ball \(\mathcal{B}_r(p_i(t))\), where \(p_i(t) = [p_{x,i}(t), p_{y,i}(t)]\) denotes the position coordinates of agent \(i\), for any state \(x_i \in \mathcal{E}_c(x_{sp,i})\).

3.3 Cooperative MARL for Dynamic Network Bridging↩︎

Each agent \(i\) has a local observation \(o_i\) comprising the structure of its neighborhood and features representing its neighbors at a given time-step \(t\). These features include the node ID, current coordinates, the action taken, and the coordinates of the targets A and B to be connected. At the beginning of every time-step, agents make decentralized decisions, generating target points \(w\) and utilizing their local observations to choose the direction of their next movement. This is encoded in a two-dimensional action space, with each dimension having three options: move forward, move backward, or hold along the corresponding \(p_x\) or \(p_y\) axis. Given the direction chosen by an agent, a new target point is calculated using a fixed offset, equal for both the \(p_x\) and \(p_y\) axes. Finally, agents are rewarded using a reward function that motivates them to form a connected network bridging the targets. This reward signal combines three components:

Base Connectivity Reward: The base reward, \(R_{\text{base}}\), encourages larger connected components: \[R_{\text{base}}(s) = \frac{|C_{\text{max}}(s)|}{|\mathcal{V}|},\] where \(|C_{\text{max}}(s)|\) represents the size of the largest connected component in state \(s\), and \(|\mathcal{V}|\) is the total number of entities.

Centroid Distance Penalty: The centroid distance penalty, \(P_{\text{cent}}\), is based on agents’ positions relative to the targets, penalizing them based on the Euclidean distance between the centroid of the agents’ positions and the central point between the targets.

Target Path Bonus: A bonus, \(B_{\text{path}}=100\), is awarded if a path exists between the two targets.

The overall reward combines these three elements:

\[\label{eq:final95reward} R(s, a) = \begin{cases} B_{\text{path}}(s), & \text{if } \exists\, \mathrm{path}(T_1, T_2); \\ R_{\text{base}}(s) - P_{\text{cent}}(s), & \text{otherwise.} \end{cases}\tag{11}\]
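A sketch of the reward 11 under stated assumptions: the graph utilities below are our own minimal implementation, and the centroid distance penalty is given a unit weight, a scaling the text does not specify:

```python
import numpy as np

B_PATH = 100.0  # bonus awarded when a path exists between the targets

def components(nodes, edges):
    """Connected components of an undirected graph (minimal DFS helper)."""
    adj = {v: set() for v in nodes}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    seen, comps = set(), []
    for v in nodes:
        if v in seen:
            continue
        stack, comp = [v], set()
        while stack:
            u = stack.pop()
            if u in comp:
                continue
            comp.add(u)
            stack.extend(adj[u] - comp)
        seen |= comp
        comps.append(comp)
    return comps

def reward(nodes, edges, t1, t2, agent_positions, target_positions):
    """Piecewise reward (11): full bonus if the targets are bridged, otherwise
    base connectivity reward minus the centroid distance penalty (unit weight)."""
    comps = components(nodes, edges)
    if any(t1 in comp and t2 in comp for comp in comps):
        return B_PATH
    r_base = max(len(comp) for comp in comps) / len(nodes)
    centroid = np.mean(agent_positions, axis=0)
    midpoint = np.mean(target_positions, axis=0)
    p_cent = float(np.linalg.norm(centroid - midpoint))
    return r_base - p_cent

# Targets "A" and "B" bridged through agents 1-2-3 -> full bonus
nodes = ["A", "B", 1, 2, 3]
assert reward(nodes, [("A", 1), (1, 2), (2, 3), (3, "B")], "A", "B",
              [[0.5, 0.5]] * 3, [[0.0, 0.0], [1.0, 1.0]]) == B_PATH
```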

We also include a physics simulator modeling the continuous nonlinear dynamics of the agents, and the safe tracking control system proposed in the next section (Algorithm 3).

4 Safe Tracking Control↩︎

The safe tracking control algorithm that we propose consists of two steps. The first is the setpoint update of agent \(i\), which guarantees that its current state belongs to \(\mathcal{E}_c\) of the new setpoint. This condition is fundamental for the second step, in which the safety condition of Definition 1 is checked for all agents within the one-hop communication ball.

Let us start with the first step, which is borrowed from [6]. For each agent, we select two scalars \(c\) and \(s\) such that \(0 < s < c\); therefore \(\mathcal{E}_{s}(x_{sp,i})\subset \mathcal{E}_{c}(x_{sp,i})\).

Definition 2. We say that the agent has reached the setpoint \(x_{sp,i}\) if and only if \(x_i(t) \in \mathcal{E}_s(x_{sp,i})\).

Assume that agent \(i\) is moving from the target point \(w_i\) to the target point \(w_{i+1}\) generated by the MARL policy. The idea is to generate a sequence of setpoints on the line that connects the two target points. Once the state \(x_i\) of agent \(i\) reaches the setpoint \(x_{sp,i}\), a new setpoint \(x'_{sp,i}\) is generated following the rule \[\label{update} x'_{sp,i} = x_{sp,i} + (\sqrt{c} - \sqrt{s})v\tag{12}\] where \(v \in \mathbb{R}^n\) with \(\| v \|_2 = 1\) indicates the direction of the line connecting the two target points. As shown in [6],

\[\label{cond:safe1} x_i(t) \in \mathcal{E}_s(x_{sp,i}) \Rightarrow x_i(t) \in \mathcal{E}_c(x'_{sp,i}).\tag{13}\]
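The update rule 12 and the implication 13 can be sketched as follows; the 2-D state, identity \(P\), and the values of \(c\) and \(s\) are illustrative (in the paper the state has \(n = 12\) components):

```python
import numpy as np

P = np.eye(2)      # Lyapunov matrix (identity here for readability)
c, s = 1.0, 0.25   # 0 < s < c, so E_s is strictly inside E_c

def reached(x, x_sp):
    """Definition 2: the setpoint is reached once x enters E_s(x_sp)."""
    e = x - x_sp
    return e @ P @ e <= s

def next_setpoint(x_sp, w_next):
    """Rule (12): advance the setpoint by (sqrt(c) - sqrt(s)) along the
    unit vector v pointing toward the next target point."""
    v = (w_next - x_sp) / np.linalg.norm(w_next - x_sp)
    return x_sp + (np.sqrt(c) - np.sqrt(s)) * v

x_sp = np.array([0.0, 0.0])
w_next = np.array([5.0, 0.0])
x = np.array([0.1, 0.0])          # inside E_s(x_sp)
assert reached(x, x_sp)
x_sp_new = next_setpoint(x_sp, w_next)
# Condition (13): the current state still lies inside E_c of the new setpoint
assert (x - x_sp_new) @ P @ (x - x_sp_new) <= c
```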

Fig. 2 illustrates the setpoint update process, presented in 2-D space for clearer visualization, though it actually takes place in a state space with \(n = 12\) components, as applicable to quadrotors.

Figure 2: Setpoint update for agent \(i\) in the state space. Note that the target points \(w_i\) and \(w_{i+1}\) have the same structure as the setpoints \(x_{sp,i}\).

Thanks to 12 and Assumption 1, the projection of \(\mathcal{E}_c(x'_{sp,i})\) is contained in \(\mathcal{B}_r(p_i(t))\).

Proposition 1. Consider agent \(i\) moving from target point \(w_i\) to \(w_{i+1}\), updating its setpoint \(x_{sp,i}\) via 12 every time \(x_{i}(t)\in \mathcal{E}_s(x_{sp,i})\). Under Assumption 1, the safety condition of Definition 1 can be violated only with agents \(j\neq i\) that satisfy 10 (i.e., that are within the one-hop communication range).

Proof. From Assumption 1, the ball \(\mathcal{B}_r(p_i(t))\) encompasses the projection onto the 2-D space of \(\mathcal{E}_c(x_{sp,i})\) for any current state \(x_i(t) \in \mathcal{E}_c(x_{sp,i})\). Utilizing 12 to adjust the setpoint ensures that \(x_i(t)\) remains within \(\mathcal{E}_c(x'_{sp,i})\), signifying that its 2-D projection still falls within \(\mathcal{B}_r(p_i(t))\). Thus, should the updated setpoint \(x'_{sp,i}\) violate the safety condition of Definition 1, such a violation could only involve agents situated within the one-hop communication range. ◻

Proposition 1 highlights a critical property: a setpoint update only impacts the safety of agents within one-hop range. Agent \(i\) can therefore communicate with the affected neighbors before executing an update, coordinating to preemptively address any potential safety concern raised by the new setpoint.

Building upon Proposition 1, let us define \(\mathcal{D}_i(t)\) as the set of all agents \(j \neq i\) within the one-hop communication range of agent \(i\) at time \(t\). The safe setpoint update algorithm for each agent \(i\) is given in Algorithm 3. Once agent \(i\)’s state \(x_i(t)\) reaches its current setpoint \(x_{sp,i}\) (as per Definition 2), it computes a new target setpoint \(x'_{sp,i}\). Before updating, agent \(i\) checks for possible safety violations by evaluating the safety condition of Definition 1 for all agents in \(\mathcal{D}_i(t)\). If no violation is detected, \(x_{sp,i}\) is updated to \(x'_{sp,i}\); otherwise, the previous setpoint is retained.

Figure 3: Setpoint Update Algorithm
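A minimal sketch of the algorithm’s decision step: agent \(i\) adopts the proposed setpoint only if the ellipsoid-disjointness test (condition 16 , derived below) passes for every one-hop neighbor. \(P\), \(c\), and all coordinates are illustrative:

```python
import numpy as np

P = np.eye(2)   # Lyapunov matrix (illustrative; identity for readability)
c = 1.0         # ellipsoid level

def safe_wrt(x_sp_new, x_sp_j):
    """E_c(x_sp_new) and E_c(x_sp_j) are disjoint iff ||d||_P^2 > 4c,
    i.e. condition (16) does not hold."""
    d = x_sp_j - x_sp_new
    return d @ P @ d > 4 * c

def setpoint_update(x_sp_i, x_sp_new, neighbor_setpoints):
    """Agent i adopts x_sp_new only if it is safe w.r.t. every one-hop
    neighbor j in D_i(t); otherwise it keeps its current setpoint."""
    if all(safe_wrt(x_sp_new, x_sp_j) for x_sp_j in neighbor_setpoints):
        return x_sp_new
    return x_sp_i

x_sp_i = np.array([0.0, 0.0])
proposed = np.array([0.5, 0.0])
near = [np.array([1.0, 0.0])]   # ||d||_P^2 = 0.25 <= 4c -> unsafe, keep old
assert np.allclose(setpoint_update(x_sp_i, proposed, near), x_sp_i)
far = [np.array([5.0, 0.0])]    # ||d||_P^2 = 20.25 > 4c -> safe, adopt new
assert np.allclose(setpoint_update(x_sp_i, proposed, far), proposed)
```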

By applying Algorithm 3 to each agent, we ensure that the safety condition of Definition 1 is satisfied throughout the system. This is guaranteed by the fact that an agent can only update its setpoint after verifying that the new target does not violate safety for any agent within its one-hop communication range, as stated in Proposition 1. The proof of safety relies on the following assumptions:

Assumption 2. The initial setpoints of all agents satisfy the safety condition of Definition 1.

Assumption 3. During the setpoint update for agent \(i\), all agents \(j \in \mathcal{D}_i(t)\) cannot change their setpoints \(x_{sp,j}\).

Proposition 2. Consider the multi-agent system described by 9 under Assumptions 1, 2 and 3. Then applying Algorithm 3 to each agent guarantees that the safety condition of Definition 1 is satisfied for all agents.

Proof. By Assumption 2, the initial setpoints of all agents satisfy the safety condition of Definition 1. Whenever an agent \(i\) updates its setpoint using Algorithm 3, it checks whether the new target setpoint \(x'_{sp,i}\) violates the safety condition for any agent \(j \in \mathcal{D}_i(t)\). If a violation is detected, the setpoint is not updated, so the current safe configuration is maintained. Proposition 1 guarantees that safety is only impacted for agents within the one-hop communication range, with which agent \(i\) can directly communicate before executing the update. Furthermore, Assumption 3 prevents agents in \(\mathcal{D}_i(t)\) from changing their setpoints during agent \(i\)’s update, ensuring a consistent safety evaluation. Therefore, by applying Algorithm 3 to each agent, the safety condition of Definition 1 is upheld throughout the system’s operation. ◻

We now need to verify when the safety condition of Definition 1 is violated. Our objective is to show that if it is violated, then there exists a point on the segment connecting \(x_{sp,i}\) and \(x_{sp,j}\) inside the overlap region of the two ellipsoids \(\mathcal{E}_c(x_{sp,i})\) and \(\mathcal{E}_c(x_{sp,j})\) (Fig. 4). This overlap region, denoted by \(\mathcal{E}_c(x_{sp,i}) \cap \mathcal{E}_c(x_{sp,j})\), satisfies the following condition: \[\label{intersect95cond} \lambda(x-x_{sp,i})^TP(x-x_{sp,i}) + (1 - \lambda)(x-x_{sp,j})^TP(x-x_{sp,j}) \leq c\tag{14}\] for \(0 \leq \lambda \leq 1\) with \(x_{sp,i} \neq x_{sp,j}\).

Proposition 3. Let us consider the ellipsoids \(\mathcal{E}_c(x_{sp,i})\) and \(\mathcal{E}_c(x_{sp,j})\) defined as in 7 . For a given \(\lambda \in [0,1]\), the set of points \(x\) satisfying 14 is either empty, a single point, or an ellipsoid: \[\label{intersect95ellips} \hat{\mathcal{E}}_{K_\lambda}(m_\lambda) = \left\lbrace x \in \mathbb{R}^n : (x-m_\lambda)^TP(x-m_\lambda) \leq K_\lambda \right\rbrace\qquad{(2)}\] where \[\begin{align} m_\lambda &= \lambda x_{sp,i} + (1-\lambda)x_{sp,j}\\ K_\lambda &= c - \lambda(1-\lambda)(x_{sp,j} - x_{sp,i})^TP(x_{sp,j} - x_{sp,i}). \end{align}\]

Proof. See [14]. ◻

Proposition 3 shows that if the two ellipsoids intersect, then for some \(\lambda\) the point \(m_\lambda\) belongs to the intersection (Fig. 4): \[\mathcal{E}_c(x_{sp,i})\cap \mathcal{E}_c(x_{sp,j}) \neq \emptyset \rightarrow m_\lambda \in \mathcal{E}_c(x_{sp,i})\cap \mathcal{E}_c(x_{sp,j})\]

Figure 4: Illustration of Proposition 3: \(m_\lambda\) lies on the segment that connects the two setpoints.

Therefore, to test whether the two ellipsoids intersect, we need to determine whether there exists at least one point on the segment joining \(x_{sp,i}\) and \(x_{sp,j}\) that satisfies 14 . To do so we consider both ellipsoids \[\label{two95ellips} \left\lbrace\;\begin{array}{l} (x-x_{sp,i})^TP(x-x_{sp,i})\leq c\\ (x-x_{sp,j})^TP(x-x_{sp,j})\leq c\\ \end{array} \right.\tag{15}\]

We replace \(x\) with \(m_\lambda\) since we are checking only points on the segment joining the two setpoints. Defining \(d \triangleq x_{sp,j}- x_{sp,i}\), 15 can be rewritten as \[\left\lbrace\;\begin{array}{r} \lambda^2 d^TPd\leq c\\ (\lambda -1)^2d^TPd\leq c\\ \end{array} \right. \Rightarrow \left\lbrace\;\begin{array}{r} \lambda^2 \Vert d \Vert_P^2\leq c\\ (\lambda -1)^2\Vert d \Vert_P^2\leq c\\ \end{array} \right.\] Solving for \(\lambda\), the first condition gives \(\lambda \leq \sqrt{c/\Vert d \Vert_P^2}\) and the second gives \(\lambda \geq 1 - \sqrt{c/\Vert d \Vert_P^2}\); a common solution exists if and only if \[1-2\sqrt{\frac{c}{\Vert d \Vert_P^2}}\leq 0\] The two ellipsoids \(\mathcal{E}_c(x_{sp,i})\) and \(\mathcal{E}_c(x_{sp,j})\) intersect if \[\label{int95cond} 2\sqrt{\frac{c}{\Vert d \Vert_P^2}}\geq 1 \rightarrow \Vert d \Vert_P^2\leq 4c\tag{16}\]
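Condition 16 is cheap to evaluate. The sketch below also cross-checks it against midpoint membership, which by the derivation above is an equivalent test; the matrix \(P\) and level \(c\) are illustrative:

```python
import numpy as np

P = np.array([[2.0, 0.0], [0.0, 0.5]])   # illustrative positive definite P
c = 1.0

def intersect(x_sp_i, x_sp_j):
    """Condition (16): E_c(x_sp_i) and E_c(x_sp_j) intersect iff ||d||_P^2 <= 4c."""
    d = x_sp_j - x_sp_i
    return d @ P @ d <= 4 * c

def midpoint_in_both(x_sp_i, x_sp_j):
    """Cross-check: the midpoint of the connecting segment lies in both
    ellipsoids exactly when ||d/2||_P^2 <= c, i.e. ||d||_P^2 <= 4c."""
    m = 0.5 * (x_sp_i + x_sp_j)
    return all((m - sp) @ P @ (m - sp) <= c for sp in (x_sp_i, x_sp_j))

# The two tests agree on random setpoint pairs
rng = np.random.default_rng(0)
for _ in range(1000):
    a, b = rng.normal(size=2), rng.normal(size=2)
    assert intersect(a, b) == midpoint_in_both(a, b)
```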

Remark 2. Although Algorithm 3 ensures safety according to Proposition 2, it does not guarantee target achievement. Agents within one-hop communication range may become immobilized at a certain position because any attempt to reach the target points could potentially violate the safety condition of Definition 1. We intentionally refrain from employing any enforcer to resolve this deadlock scenario, as our aim is to assess whether our MARL framework can autonomously generate target points that circumvent such situations.

In summary, our analysis provides the following key theoretical results:

  • Each agent’s setpoint update only impacts the safety of neighboring agents within its one-hop communication range. This localized effect is an important property that enables decentralized coordination.

  • We establish that Algorithm 3 guarantees the preservation of the safety condition of Definition 1 for all agents.

  • We provide, with 16 , a computationally tractable condition for evaluating possible violations of the safety condition of Definition 1.

Together, these theoretical results form the basis for our safe and distributed multi-agent control framework, ensuring that agents can dynamically update their setpoints while maintaining the prescribed safety guarantees through local communication and coordination.

Table 1: Comparison of agent performance, safety mechanisms, explicit penalties, and safety violations.

Agent Type    Safety Mechanism    Explicit Penalty    Truncation    Avg Coverage    Avg Safety Violations
Baseline A    No                  No                  No            41%             13,880
Baseline B    No                  Yes                 No            0.01%           0
Approach A    Yes                 No                  No            39%             0
Approach B    Yes                 Yes                 No            16%             0
Approach C    Yes                 Yes                 Yes           22%             0

5 Neural Network Architecture↩︎

We employ the neural network architecture for action-value estimation presented in [7], which is tailored to the challenges of dynamic network bridging in a cooperative multi-agent setting, enabling the agents to capture spatial and temporal dependencies while collaborating effectively. The architecture combines graph neural networks (GNNs) for relational modeling, exploiting their message-passing mechanisms in a manner compatible with the task optimized in the environment [Ref Section]. Furthermore, a long short-term memory (LSTM) layer is employed to cope with temporal dependencies and the partial observability of the task. The model is trained in a centralized-training decentralized-execution (CTDE) [15] fashion, where the agents optimize the same action-value function parameterization during training. During execution, however, each agent acts based on its local observations and the shared representations from its neighbors, adhering to the decentralized nature of the task.

We continue by describing the key components of the architecture below.

5.0.1 Graph Attention Layers↩︎

DGN [16] facilitates cooperation between agents by enabling them to share their latent representations with their immediate neighbors. To this end, we leverage graph attention networks (GAT) [17] to capture the spatial and relational dependencies between agents in the network. Specifically, we employ two multi-headed GAT layers that enable each agent to adaptively attend to and integrate information from its neighboring agents. This encoding allows the agents to effectively condition their actions on the dynamic network topology and inter-agent relationships.

Additionally, we integrate target entities into the sharing process by equipping them with this same encoding module to produce their latent representation. If an agent’s neighborhood includes target entities, the agent can gather their representations and condition its actions accordingly.

5.0.2 Long Short-Term Memory↩︎

To handle the partial observability and temporal dynamics of the environment, we integrate an LSTM [18] layer into our architecture. The LSTM layer maintains a memory of past observations, allowing the agents to make informed decisions based on their history of interactions with the evolving, partially observable environment.

During training, we employ observation stacking, aggregating observations over multiple time steps. This process gives the LSTM layer the temporal context necessary for effective learning under partial observability.
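A minimal sketch of observation stacking with a fixed-length buffer; the window length, observation size, and zero padding are illustrative choices, not specified details of the architecture:

```python
from collections import deque
import numpy as np

k = 4        # number of stacked time steps (illustrative)
obs_dim = 8  # per-step observation size (illustrative)

# Zero-padded at episode start so the stack always has k entries
history = deque([np.zeros(obs_dim)] * k, maxlen=k)

def stack_observation(new_obs):
    """Append the newest observation and return the stacked window,
    shape (k, obs_dim): the time axis consumed by the LSTM."""
    history.append(new_obs)
    return np.stack(history)

out = stack_observation(np.ones(obs_dim))
assert out.shape == (k, obs_dim)
assert np.allclose(out[-1], 1.0)  # newest observation is last
```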

5.0.3 Dueling Action Decoder↩︎

To estimate action values effectively, we incorporate a Dueling Action Decoder [19] in our architecture. This component comprises separate streams for estimating state values and advantages, which are then combined to produce the final action value estimates.
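The dueling combination of the two streams can be sketched as follows, operating in plain NumPy on precomputed stream outputs; the shapes and values are illustrative:

```python
import numpy as np

def dueling_q(value, advantages):
    """Dueling combination [19]: Q(s, a) = V(s) + A(s, a) - mean_a' A(s, a').
    Subtracting the mean advantage keeps the V and A streams identifiable."""
    advantages = np.asarray(advantages, dtype=float)
    return value + advantages - advantages.mean()

# 9 discrete actions: 3 options (forward/backward/hold) per axis
q = dueling_q(1.0, np.zeros(9))
assert np.allclose(q, 1.0)  # zero advantages -> Q equals the state value
```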

6 Experimental Setup↩︎

We evaluated our approach in a simulated 2D environment with three agents and two moving targets. The environment was normalized with axes ranging from 0 to 1. Agents and targets had a communication range of 0.25, and their movement offset was set to 0.05 for the calculation of the next target point \(w\). The physics simulator allowed us to simulate the nonlinear UAV dynamics while we updated the positions of the agents on their paths toward the decided target point.

Our multi-agent strategies were trained for 1M training steps. Training episodes were truncated, and Partial Episode Bootstrapping [20] performed, after 100 steps taken by each agent. Agents were initialized at fixed starting positions satisfying the safety condition of Definition 1, and the moving targets were randomly placed a certain distance apart to ensure realistic difficulty and a feasible connection. Target motion and placement followed a seeded random pattern to control the scenarios during training and testing. The primary focus of our experiments was to demonstrate the effectiveness of our safe tracking control system in enabling the MARL approach to learn policies that satisfy safety constraints. To establish the importance of this contribution, we designed a set of experiments with varying training configurations:

  • Baseline A: We first trained a typical MARL approach without any safe tracking control, serving as a baseline.

  • Baseline B: To investigate the impact of explicit penalization for safety violations, we trained the same MARL approach as in Baseline A but with a high penalty (-100) imposed whenever an agent violated the safety constraints.

  • Approach A: This configuration represents our key contribution, where we trained the agents with the safe tracking control system enabled. Agents employed Algorithm 3 to update their target points while respecting the safety conditions.

The results from Approach A demonstrated the effectiveness of our safe tracking control system in enabling the agents to learn policies that accomplish the task while adhering to safety constraints.

To further solidify our approach and highlight its advantages, we conducted additional experiments:

  • Approach B: Here, we combined the safe tracking control system with explicit penalization for safety violations, similar to Baseline B. This configuration aimed to analyze the agents’ behavior when both safety measures and penalization for safety violations were employed simultaneously.

  • Approach C: Building upon Approach B, we additionally introduced episode termination whenever safety constraints were violated. This experiment assessed the impact of a more stringent safety enforcement mechanism.

If enabled, the safe tracking control system prevented agents from continuing their movement if they violated the safety condition of Definition 1, e.g., by moving too close to other agents and risking collision. During training, in case the safety condition was violated, the agents involved were forced to stop and remained blocked until a new action toward a safe direction was computed by at least one of the agents involved. To evaluate our agents, we ran 100 evaluation episodes with the safe tracking control system disabled, counting the number of times the safety condition was violated for every setpoint update 12 .

7 Results↩︎

Table 1 shows the results of our experiments evaluating the different learned MARL strategies. We report the average communication coverage achieved by the agents as well as the number of safety violations observed when the safe tracking control system was disabled during evaluation.

Baseline A agents, trained without any safety mechanisms, achieved 41% coverage but incurred 13,880 safety violations on average. Introducing an explicit penalty of -100 for violating safety constraints (Baseline B) reduced the coverage to only 0.01% but did successfully prevent any safety violations.

In our Approach A, agents achieved 39% coverage (2 percentage points less than Baseline A) while completely avoiding any safety violations. This demonstrates that safe tracking control allows the agents to learn policies that respect safety constraints without significantly compromising task performance and without the need for explicit constraint violation penalties. Adding an explicit penalty for safety violations to the safe tracking control (Approach B) reduced coverage to 16% while still avoiding violations. Further adding episode termination on safety violations (Approach C) increased coverage to 22% while maintaining zero violations.

8 Discussion↩︎

Our work presents a novel hybrid approach that combines marl with control-theoretic methods to address a complex cooperative task while ensuring safety guarantees. The theoretical analysis provides three key results. First, we prove that each agent’s setpoint update only affects the safety of its one-hop neighboring agents, enabling decentralized coordination. Second, we establish that Algorithm 3 guarantees the preservation of the safety condition for all agents under specific assumptions. Third, we derive an analytical condition to efficiently evaluate potential safety violations during setpoint updates.
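The first result, that a setpoint update only affects the safety of one-hop neighbors, means each agent can screen a candidate setpoint locally. The analytical condition itself is not reproduced here; as an illustrative stand-in (a simple minimum-distance test, not the paper's exact condition), the screening step could look like:

```python
import math

def setpoint_update_is_safe(candidate, neighbor_setpoints, d_safe):
    """Illustrative local safety screen for a candidate setpoint.

    Only one-hop neighbors need to be checked, since a setpoint update
    cannot affect the safety of agents further away. The distance test
    below is a placeholder for the analytical safety condition.
    """
    return all(math.dist(candidate, sp) >= d_safe for sp in neighbor_setpoints)
```

An agent would evaluate this check against its neighbors' current setpoints before committing to a new target point, falling back to its previous setpoint when the check fails.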

The experimental results highlight the importance of our hybrid approach. Agents trained without any safety mechanisms (Baseline A) achieved high task coverage but incurred numerous safety violations. Introducing an explicit penalty for violating safety constraints (Baseline B) successfully prevented violations but at the cost of significantly compromised task performance. In contrast, our Approach A, which incorporates the safe tracking control system based on Algorithm 3, allowed agents to learn policies that respect safety constraints while maintaining reasonable task coverage, without explicit constraint violation penalties.

Despite these strengths, there are some limitations to our current work. While we have demonstrated the approach's effectiveness on the network bridging task, we have not extensively evaluated its scalability to significantly larger swarm sizes or other domains. Moving forward, we plan to investigate the scalability and adaptability of our approach across different domains and with varying swarm sizes.


I. ElSayed-Aly, S. Bharadwaj, C. Amato, R. Ehlers, U. Topcu, and L. Feng, “Safe multi-agent reinforcement learning via shielding,” arXiv preprint arXiv:2101.11196, 2021.
L. Dai, Q. Cao, Y. Xia, and Y. Gao, “Distributed mpc for formation of multi-agent systems with collision avoidance and obstacle avoidance,” Journal of the Franklin Institute, vol. 354, no. 4, pp. 2068–2085, 2017.
Y. Li, N. Li, H. E. Tseng, A. Girard, D. Filev, and I. Kolmanovsky, “Safe reinforcement learning using robust action governor,” in Learning for Dynamics and Control, pp. 1093–1104, PMLR, 2021.
Z. Gao, G. Yang, and A. Prorok, “Online control barrier functions for decentralized multi-agent navigation,” in 2023 International Symposium on Multi-Robot and Multi-Agent Systems (MRS), pp. 107–113, IEEE, 2023.
N. Kochdumper, H. Krasowski, X. Wang, S. Bak, and M. Althoff, “Provably safe reinforcement learning via action projection using reachability analysis and polynomial zonotopes,” IEEE Open Journal of Control Systems, vol. 2, pp. 79–92, 2023.
R. Romagnoli, B. H. Krogh, D. de Niz, A. D. Hristozov, and B. Sinopoli, “Software rejuvenation for safe operation of cyber–physical systems in the presence of run-time cyberattacks,” IEEE Transactions on Control Systems Technology, 2023.
R. Galliera, T. Möhlenhof, A. Amato, D. Duran, K. B. Venable, and N. Suri, “Distributed autonomous swarm formation for dynamic network bridging,” in (To Appear) The 17th International Workshop on Networked Robotics and Communication Systems (IEEE INFOCOM), 2024.
H. K. Khalil, Nonlinear systems; 3rd ed. Upper Saddle River, NJ: Prentice-Hall, 2002.
A. Chen, K. Mitsopoulos, and R. Romagnoli, “Reinforcement learning-based optimal control and software rejuvenation for safe and efficient uav navigation,” in 2023 62nd IEEE Conference on Decision and Control (CDC), pp. 7527–7532, IEEE, 2023.
D. S. Bernstein, R. Givan, N. Immerman, and S. Zilberstein, “The complexity of decentralized control of markov decision processes,” Mathematics of operations research, vol. 27, no. 4, pp. 819–840, 2002.
F. A. Oliehoek and C. Amato, A Concise Introduction to Decentralized POMDPs. SpringerBriefs in Intelligent Systems, Springer International Publishing, 2016.
F. A. Oliehoek, S. Whiteson, M. T. Spaan, et al., “Approximate solutions for factored dec-pomdps with many agents.,” in AAMAS, pp. 563–570, 2013.
R. Lowe, Y. I. Wu, A. Tamar, J. Harb, O. Pieter Abbeel, and I. Mordatch, “Multi-agent actor-critic for mixed cooperative-competitive environments,” Advances in neural information processing systems, vol. 30, 2017.
I. Gilitschenski and U. D. Hanebeck, “A robust computational test for overlap of two arbitrary-dimensional ellipsoids in fault-detection of kalman filters,” in 2012 15th International Conference on Information Fusion, pp. 396–401, IEEE, 2012.
S. V. Albrecht, F. Christianos, and L. Schäfer, Multi-Agent Reinforcement Learning: Foundations and Modern Approaches. MIT Press, 2023.
J. Jiang, C. Dun, T. Huang, and Z. Lu, “Graph convolutional reinforcement learning,” in International Conference on Learning Representations, 2020.
P. Veličković, G. Cucurull, A. Casanova, A. Romero, P. Liò, and Y. Bengio, “Graph attention networks,” in International Conference on Learning Representations, 2018.
M. J. Hausknecht and P. Stone, “Deep recurrent q-learning for partially observable mdps,” in 2015 AAAI Fall Symposia, Arlington, Virginia, USA, November 12-14, 2015, pp. 29–37, AAAI Press, 2015.
Z. Wang, T. Schaul, M. Hessel, H. Van Hasselt, M. Lanctot, and N. De Freitas, “Dueling network architectures for deep reinforcement learning,” in Proceedings of the 33rd International Conference on International Conference on Machine Learning - Volume 48, ICML’16, pp. 1995–2003, 2016.
F. Pardo, A. Tavakoli, V. Levdik, and P. Kormushev, “Time limits in reinforcement learning,” in Proceedings of the 35th International Conference on Machine Learning (J. Dy and A. Krause, eds.), vol. 80 of Proceedings of Machine Learning Research, pp. 4045–4054, PMLR, 10–15 Jul 2018.

  1. R. Romagnoli is with the Department of Electrical and Computer Engineering, Carnegie Mellon University, 5000 Forbes Ave, Pittsburgh, PA 15213, USA.↩︎

  2. R. Galliera, N. Suri, and K. Mitsopoulos are with the Institute for Human and Machine Cognition, 40 South Alcaniz St, Pensacola, FL 32502, USA.↩︎