GCBF+: A Neural Graph Control Barrier Function Framework for Distributed Safe Multi-Agent Control

Massachusetts Institute of Technology

Swapping task for the Crazyflie drone platform using GCBF+

Swapping with a moving obstacle for the Crazyflie drone platform using GCBF+

Docking on a moving target for the Crazyflie drone platform using GCBF+

Abstract

Distributed, scalable, and safe control of large-scale multi-agent systems (MAS) is a challenging problem. In this paper, we design a distributed framework for safe multi-agent control in large-scale environments with obstacles, where a large number of agents are required to maintain safety using only local information and to reach their goal locations. We introduce a new class of certificates, termed graph control barrier functions (GCBF), which are based on the well-established control barrier function (CBF) theory for safety guarantees and utilize a graph structure for scalable and generalizable distributed control of MAS. We develop a novel theoretical framework to prove the safety of an arbitrary-sized MAS with a single GCBF. We propose a new training framework, GCBF+, that uses graph neural networks (GNNs) to parameterize a candidate GCBF and a distributed control policy. The proposed framework is distributed and capable of directly taking point clouds from LiDAR, instead of actual state information, for real-world robotic applications. We illustrate the efficacy of the proposed method through various hardware experiments on a swarm of drones with objectives ranging from exchanging positions to docking on a moving target without collision. Additionally, we perform extensive numerical experiments in which the number and density of agents, as well as the number of obstacles, increase. Empirical results show that in complex environments with nonlinear agents (e.g., Crazyflie drones), GCBF+ outperforms the best-performing handcrafted CBF-based method by up to \(20\%\) for relatively small-scale MAS with up to 256 agents, and leading reinforcement learning (RL) methods by up to \(40\%\) for MAS with 1024 agents. Furthermore, the proposed method does not compromise goal-reaching performance to achieve high safety rates, which is a common trade-off in RL-based methods.

Simulations

GCBF+ controller in the DoubleIntegrator environment trained with 8 agents and tested with 64/128/512 agents
GCBF+ controller with 2D obstacles and 32/64/512 agents
GCBF+ controller with 3D obstacles and 64/128/512 agents

GCBF: safety certificate for arbitrary-sized MAS

A MAS can be naturally viewed as a graph, and we define the GCBF as a function of only the neighboring nodes.

Assume:
1. For a given agent \(i\), a neighboring node \(j\) with \(\|p_i - p_j\| \geq R\) does not affect the GCBF value.
2. The 0-superlevel set of the GCBF is a subset of the safe set.

Then, a single GCBF certifies the safety of an arbitrary-sized MAS.
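
To make these assumptions concrete, here is a minimal JAX sketch of a hand-written GCBF candidate for 2D agents. The names `gcbf_candidate` and `r_safe`, and the min-distance form, are illustrative assumptions, not the paper's learned function: masking out nodes at distance \(R\) or more makes \(h\) independent of them (Assumption 1), and \(h > 0\) only when every neighbor is farther than the safety distance (Assumption 2).

```python
# Minimal sketch: a toy GCBF candidate that depends only on neighbors within R.
import jax.numpy as jnp

R = 1.0          # sensing radius: nodes with ||p_i - p_j|| >= R are ignored
r_safe = 0.25    # minimum allowed inter-agent distance (illustrative)

def gcbf_candidate(p_i: jnp.ndarray, p_all: jnp.ndarray) -> jnp.ndarray:
    """Toy GCBF candidate for agent i: positive iff every neighbor
    within radius R is farther than r_safe (Assumption 2)."""
    d = jnp.linalg.norm(p_all - p_i, axis=-1)
    # Mask out the agent itself and non-neighbors (d >= R), so they
    # cannot change h (Assumption 1).
    d_masked = jnp.where((d > 0.0) & (d < R), d, jnp.inf)
    return jnp.min(d_masked) - r_safe

positions = jnp.array([[0.0, 0.0], [0.4, 0.0], [5.0, 5.0]])  # agent 2 is outside R
h0 = gcbf_candidate(positions[0], positions)   # 0.4 - 0.25 = 0.15 > 0: safe
# Moving the far-away agent does not change h for agent 0:
h0_moved = gcbf_candidate(positions[0], positions.at[2].set(jnp.array([9.0, 9.0])))
assert jnp.isclose(h0, h0_moved)
```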

GCBF+ training architecture

The sampled input features are labeled as safe (control-invariant) \(\mathcal D_{\mathcal C}\) or unsafe \(\mathcal D_{\mathcal A}\) using the control policy \(\pi_\phi\) learned in the previous step. A nominal goal-reaching control policy \(\pi_{\mathrm{nom}}\) is combined with the previously learned GCBF \(h_\theta\) in a CBF-QP to generate \(\pi_{\mathrm{QP}}\). Finally, the QP policy, along with the CBF conditions, defines the loss \(\mathcal L\).
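
The CBF-QP step can be viewed as projecting the nominal control onto the set where the CBF derivative condition holds. Below is a hedged sketch for a single-integrator agent (\(\dot x = u\)) with one affine constraint, which admits a closed-form solution; the names `cbf_qp` and `alpha` are illustrative, not the paper's implementation.

```python
# Hedged sketch of a CBF-QP filter with a closed-form single-constraint solution.
import jax
import jax.numpy as jnp

def cbf_qp(h, x: jnp.ndarray, u_nom: jnp.ndarray, alpha: float = 1.0):
    """Solve min_u ||u - u_nom||^2  s.t.  dh/dx . u + alpha * h(x) >= 0.
    With one affine constraint, the QP reduces to the projection below."""
    grad_h = jax.grad(h)(x)            # the "a" of the constraint a.u + b >= 0
    b = alpha * h(x)
    slack = grad_h @ u_nom + b
    # If u_nom already satisfies the constraint, keep it; otherwise project
    # onto the constraint boundary.
    correction = jnp.minimum(slack, 0.0) / (grad_h @ grad_h + 1e-9)
    return u_nom - correction * grad_h

# Toy example: h(x) = ||x||^2 - 1 keeps the agent outside the unit disk.
h = lambda x: x @ x - 1.0
x = jnp.array([1.5, 0.0])
u_nom = jnp.array([-2.0, 0.0])         # nominal control pushes toward the obstacle
u_qp = cbf_qp(h, x, u_nom)             # filtered control backs off just enough
```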

The learned CBF contour.

An improved loss design compared with GCBFv0.

GCBF+ uses an improved loss compared with GCBFv0 so that safety does not compete with goal-reaching in the loss. In the following figures, the orange arrows show the learned controls and the black arrows show the reference controls (\(u_\mathrm{nom}\) for GCBFv0 and \(u_\mathrm{QP}\) for GCBF+).

For the GCBFv0 loss:
• With a small \(\eta_\mathrm{ctrl}\), training focuses more on safety, and the learned controller cannot reach the goal.
• With a large \(\eta_\mathrm{ctrl}\), training focuses more on behavior-cloning the nominal controller, leading to unsafe behavior.
• Only with a carefully tuned \(\eta_\mathrm{ctrl}\) can the learned controller be both safe and goal-reaching. Even then, the training loss cannot reach zero.

For the GCBF+ loss:
\(\pi_\phi\) is not sensitive to \(\eta_\mathrm{ctrl}\), and the learned control aligns much better with the guidance signal, so the loss can theoretically reach zero (see the sketch below).
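
A minimal sketch of the loss terms described above, with illustrative names and margin `gamma` (not the paper's exact code): the safety terms shape \(h_\theta\) on \(\mathcal D_{\mathcal C}\) and \(\mathcal D_{\mathcal A}\), and the control term clones \(u_\mathrm{QP}\) rather than \(u_\mathrm{nom}\).

```python
# Hedged sketch of the GCBF+ loss: three CBF-condition terms plus a term
# cloning the QP-filtered control, so safety and goal-reaching do not compete.
import jax.numpy as jnp

def gcbf_plus_loss(h_safe, h_unsafe, hdot_plus_alpha_h, u_learned, u_qp,
                   gamma: float = 0.02, eta_ctrl: float = 1.0):
    loss_safe = jnp.mean(jnp.maximum(gamma - h_safe, 0.0))        # h > 0 on D_C
    loss_unsafe = jnp.mean(jnp.maximum(gamma + h_unsafe, 0.0))    # h < 0 on D_A
    loss_deriv = jnp.mean(jnp.maximum(gamma - hdot_plus_alpha_h, 0.0))  # hdot + alpha*h >= 0
    loss_ctrl = eta_ctrl * jnp.mean(jnp.sum((u_learned - u_qp) ** 2, axis=-1))
    return loss_safe + loss_unsafe + loss_deriv + loss_ctrl
```

Because \(u_\mathrm{QP}\) itself satisfies the CBF condition whenever the QP is feasible, the control term does not pull \(\pi_\phi\) away from safety, which is why the result is insensitive to \(\eta_\mathrm{ctrl}\).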

Using attention to satisfy GCBF assumptions

To certify safety, the GCBF needs the assumption that for a given agent \(i\), a neighboring node \(j\) with \(\|p_i - p_j\| \geq R\) does not affect the GCBF value. To satisfy this assumption, we apply a GNN with graph attention to the GCBF \(h\), such that:
\(h_\theta(z_i) = \psi_{\theta_4}\left(\sum_{j\in\tilde{\mathcal{N}}_i} \underbrace{\mathrm{softmax}\big( \psi_{\theta_2}(q_{ij}) \big)}_{w_{ij}}\, \psi_{\theta_3}(q_{ij})\right)\)
where \(q_{ij} = \psi_{\theta_1}(z_{ij})\) is the edge encoding. Without explicit supervision, the learned weight \(w_{ij}\) approaches \(0\) as the inter-agent distance \(d_{ij}\) approaches \(R\), showing that GCBF+ automatically learns to satisfy the assumption.
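
A minimal JAX sketch of this attention aggregation, assuming small randomly initialized two-layer MLPs for \(\psi_{\theta_1},\dots,\psi_{\theta_4}\) (the paper trains them end to end; the dimensions here are illustrative):

```python
# Sketch of the attention-based GCBF readout over one agent's neighborhood.
import jax
import jax.numpy as jnp

def mlp(params, x):
    """Two-layer MLP with ReLU, applied row-wise."""
    (W1, b1), (W2, b2) = params
    return jax.nn.relu(x @ W1 + b1) @ W2 + b2

def init_mlp(key, d_in, d_hidden, d_out):
    k1, k2 = jax.random.split(key)
    return ((jax.random.normal(k1, (d_in, d_hidden)) / jnp.sqrt(d_in),
             jnp.zeros(d_hidden)),
            (jax.random.normal(k2, (d_hidden, d_out)) / jnp.sqrt(d_hidden),
             jnp.zeros(d_out)))

def gcbf_attention(params, z_ij):
    """h_theta(z_i): attention-weighted aggregation over neighbor features z_ij."""
    psi1, psi2, psi3, psi4 = params
    q = mlp(psi1, z_ij)                          # q_ij: per-edge encoding
    logits = mlp(psi2, q).squeeze(-1)            # scalar attention logits
    w = jax.nn.softmax(logits)                   # w_ij over the neighborhood
    v = mlp(psi3, q)                             # per-edge values
    return mlp(psi4, (w[:, None] * v).sum(0)).squeeze()  # scalar h

key = jax.random.PRNGKey(0)
keys = jax.random.split(key, 4)
d_z, d_h, d_q = 4, 32, 16
params = (init_mlp(keys[0], d_z, d_h, d_q),   # psi_1: encoder
          init_mlp(keys[1], d_q, d_h, 1),     # psi_2: attention logits
          init_mlp(keys[2], d_q, d_h, d_q),   # psi_3: values
          init_mlp(keys[3], d_q, d_h, 1))     # psi_4: readout
z_ij = jax.random.normal(key, (5, d_z))        # features of 5 neighbors of agent i
h_i = gcbf_attention(params, z_ij)
```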

Numerical Results

The performance of GCBF+ and the baselines in 2D and 3D environments and in environments with obstacles.

GCBF+ outperforms the baselines across all environments because it is guided by a learned GCBF, does not need to balance safety against performance during training, and can handle actuator limits.

Related Work

This work improves on our previous work, GCBFv0. For a survey of learning safe control for multi-robot systems, see this paper.