Before autonomous systems can be deployed in safety-critical applications, we must be able to understand and verify the safety of these systems. For cases where the risk or cost of real-world testing is prohibitive, we propose a simulation-based framework for a) predicting ways in which an autonomous system is likely to fail and b) automatically adjusting the system's design to preemptively mitigate those failures. We frame this problem through the lens of approximate Bayesian inference and use differentiable simulation for efficient failure case prediction and repair. We apply our approach to a range of robotics and control problems, including optimizing search patterns for robot swarms and reducing the severity of outages in power transmission networks. Compared to optimization-based falsification techniques, our method predicts a more diverse, representative set of failure modes, and we also find that our use of differentiable simulation yields solutions that have up to 10x lower cost and require up to 2x fewer iterations to converge relative to gradient-free techniques.
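To make the Bayesian framing concrete, the sketch below shows one way gradient-based failure sampling with a differentiable simulator can look in JAX. It is illustrative only: the toy failure_severity function, the standard-normal prior over disturbances, and the Metropolis-adjusted Langevin (MALA) sampler are assumptions for this example, not the implementation used in the paper.

import jax
import jax.numpy as jnp

# Toy differentiable "simulator": maps a disturbance vector to a scalar
# failure severity (higher = closer to failure). Purely illustrative; the
# real framework rolls out the full autonomous system in simulation.
def failure_severity(x):
    return -jnp.sum((x - 2.0) ** 2)

# Unnormalized log-density over disturbances: a standard-normal prior plus a
# severity term, so samples concentrate on plausible, high-severity failures.
def log_density(x, temperature=1.0):
    log_prior = -0.5 * jnp.sum(x ** 2)
    return log_prior + failure_severity(x) / temperature

grad_log_density = jax.grad(log_density)

def mala_step(key, x, step_size=1e-2):
    # One Metropolis-adjusted Langevin step toward likely failure modes.
    key_prop, key_accept = jax.random.split(key)
    noise = jax.random.normal(key_prop, x.shape)
    x_prop = x + step_size * grad_log_density(x) + jnp.sqrt(2 * step_size) * noise

    def log_q(x_to, x_from):
        # Log proposal density (up to a constant) for the MH correction.
        mean = x_from + step_size * grad_log_density(x_from)
        return -jnp.sum((x_to - mean) ** 2) / (4 * step_size)

    log_alpha = (log_density(x_prop) - log_density(x)
                 + log_q(x, x_prop) - log_q(x_prop, x))
    accept = jnp.log(jax.random.uniform(key_accept)) < log_alpha
    return jnp.where(accept, x_prop, x)

key = jax.random.PRNGKey(0)
x = jnp.zeros(3)  # initial guess for the exogenous disturbance
for _ in range(500):
    key, subkey = jax.random.split(key)
    x = mala_step(subkey, x)
print("sampled failure disturbance:", x)

Because the simulator is differentiable, each proposal is guided by the gradient of the failure severity, which is what lets this kind of sampler find diverse, high-likelihood failure modes with far fewer simulations than gradient-free search.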
This work is part of a broader research thread on predicting and repairing failure modes in safety-critical autonomous systems. Other work on this topic from our lab includes:
@inproceedings{dawson2023_breaking_things,
  author    = {Dawson, Charles and Fan, Chuchu},
  title     = {A Bayesian approach to breaking things: efficiently predicting and repairing failure modes via sampling},
  booktitle = {Conference on Robot Learning (CoRL)},
  year      = {2023},
}