To navigate complex environments, robots must increasingly use high-dimensional visual feedback (e.g., images) for control. However, relying on high-dimensional image data to make control decisions raises an important question: how can we prove the safety of a visual-feedback controller?
Control barrier functions (CBFs) are powerful tools for certifying the safety of feedback controllers in the state-feedback setting, but they have traditionally been poorly suited to visual-feedback control because evaluating the barrier function requires predicting future observations. In this work, we address this issue by leveraging recent advances in neural radiance fields (NeRFs), which learn implicit representations of 3D scenes and can render images from previously unseen camera perspectives. The NeRF provides single-step visual foresight for a discrete-time CBF-based controller. This novel combination can filter out unsafe actions and intervene to preserve safety. We demonstrate our controller in real-time simulation experiments, where it successfully prevents the robot from taking dangerous actions.
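The safety filter described above can be sketched in a few lines. This is a minimal illustration, not the paper's actual implementation: the names `barrier_h`, `dynamics_step`, and `render_view` are hypothetical stand-ins for the learned barrier function, a one-step dynamics model, and the NeRF renderer, respectively, and the discrete-time CBF condition shown (`h` may shrink by at most a fraction `alpha` per step) is one common formulation.

```python
# Hedged sketch of a discrete-time CBF safety filter with single-step
# visual foresight. All function names here are illustrative assumptions,
# not the paper's API.

def is_safe_action(state, action, barrier_h, dynamics_step, render_view,
                   alpha=0.5):
    """Check the discrete-time CBF condition:
    h(x_{t+1}) - h(x_t) >= -alpha * h(x_t), with h evaluated on
    rendered observations rather than raw states."""
    next_state = dynamics_step(state, action)    # predict next state
    h_now = barrier_h(render_view(state))        # barrier on current view
    h_next = barrier_h(render_view(next_state))  # barrier on predicted view
    return h_next - h_now >= -alpha * h_now

def filter_action(state, nominal_action, fallback_actions, **kw):
    """Keep the nominal action if it satisfies the CBF condition;
    otherwise fall back to the first certified-safe alternative."""
    for a in [nominal_action, *fallback_actions]:
        if is_safe_action(state, a, **kw):
            return a
    raise RuntimeError("no certified-safe action available")
```

In this pattern the NeRF plays the role of `render_view`: it turns a predicted next state into a predicted image, so the barrier function can be evaluated on observations the camera has not yet seen.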
This work is part of a broader research thread around learned certificates, which allow us to build data-driven proofs of controller correctness. For a survey of the field of learned certificates, see this paper.
Other work on learned certificates from our lab includes:
@inproceedings{tong2022enforcing,
  title     = {Enforcing safety for vision-based controllers via Control Barrier Functions and Neural Radiance Fields},
  author    = {Tong, Mukun and Dawson, Charles and Fan, Chuchu},
  booktitle = {2023 IEEE International Conference on Robotics and Automation (ICRA)},
  year      = {2023}
}