Verifying controllers with vision-based perception using safe approximate abstractions
Jan 1, 2022
Chiao Hsieh
Yangge Li
Dawei Sun
Keyur Joshi
Sasa Misailovic
Sayan Mitra
Abstract
Fully formal verification of perception models is likely to remain challenging in the foreseeable future, and yet these models are being integrated into safety-critical control systems. We present a practical method for reasoning about the safety of such systems. Our method systematically constructs approximations of perception models from system-level safety requirements, data, and program analysis of the modules downstream from perception. These approximations have desirable properties: they are low-dimensional, intelligible, and tractable. The closed-loop system, with the approximation substituting for the actual perception model, is verified to be safe. Establishing a formal relationship between the actual and approximate perception models remains well beyond available verification techniques; however, we provide a useful empirical measure of their closeness, called precision. Overall, our method can trade off the size of the approximation against its precision. We apply the method to two significant case studies: 1) a vision-based lane-tracking controller for an autonomous vehicle and 2) a controller for an agricultural robot. We show how the generated approximations for each system can be composed with the downstream modules and verified using program-analysis tools such as CBMC. Detailed evaluations of the impact of approximation size and of environmental parameters (e.g., lighting, road surface, and plant type) on the precision of the generated approximations suggest that the approach can be useful for realistic control systems.
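To make the precision measure concrete, below is a minimal Python sketch of how one might estimate it empirically: sample ground-truth states, run both the actual perception model and a set-valued approximation of it, and report the fraction of samples whose actual perception output falls inside the approximation's output set. Everything here (`sample_state`, `true_perception`, `approx_perception`, the interval abstraction, and all constants) is a hypothetical placeholder for illustration, not the paper's implementation; in the case studies the actual perception is a vision pipeline and the abstractions are constructed from data and program analysis.

```python
import numpy as np

def sample_state(rng):
    # Hypothetical: draw a ground-truth system state, e.g., lateral
    # deviation and heading error for a lane-tracking vehicle.
    return rng.uniform(low=[-1.0, -0.2], high=[1.0, 0.2])

def true_perception(state, rng):
    # Hypothetical stand-in for the actual vision pipeline: the real
    # system would render an image of `state` and run a perception DNN.
    # Here we model its output as the true state plus sensing noise.
    noise = rng.normal(scale=0.05, size=state.shape)
    return state + noise

def approx_perception(state):
    # Hypothetical set-valued abstraction: snap the state to a grid cell
    # and return an interval (lower, upper) intended to bound the
    # perception output for all states in that cell.
    cell = np.round(state, decimals=1)   # grid resolution: 0.1
    radius = 0.15                        # interval half-width
    return cell - radius, cell + radius

def empirical_precision(n_samples=10_000, seed=0):
    # Fraction of sampled states whose actual perception output is
    # contained in the abstraction's output interval.
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_samples):
        s = sample_state(rng)
        y = true_perception(s, rng)
        lo, hi = approx_perception(s)
        hits += np.all((lo <= y) & (y <= hi))
    return hits / n_samples

if __name__ == "__main__":
    print(f"estimated precision: {empirical_precision():.3f}")
```

Adjusting the grid resolution and interval radius in `approx_perception` changes both the abstraction's size and its measured precision, mirroring the size-versus-precision trade-off described in the abstract.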
Type
Publication
IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems