A Model Guardian to Enforce and Improve Intelligent Vehicle Safety
Principal Investigators:
Matei Zaharia and Peter Bailis
TRI Liaison:
Project Summary
We propose to develop a software-based Model Guardian system that makes machine learning model behavior more reliable, safer, and more predictable by adapting two critical practices from conventional software engineering to the problem of AI-powered intelligent vehicle safety: runtime assertions and large-scale interactive backtesting.
Research Goals
Driving Goal: design a platform that enables scalable runtime evaluation of assertions on model behavior, together with interactive backtesting, to improve the safety, reliability, and development speed of models in autonomous and ADAS vehicles.
- Assertions with continuous evaluation: We will develop programming interfaces for specifying, testing, and deploying constraints that capture domain expertise, governing the behavior of model outputs and warning drivers when the model is uncertain (see the first sketch after this list).
- Efficient regression evaluation for violations: When Model Guardian’s assertions flag a constraint violation, the system will efficiently search for and retrieve related, representative historical scenarios to compare model behavior, obtain additional training data, and test new models (see the second sketch below).
- Interactive search and backtesting of historical data: For interactive data exploration and backtesting over historical or simulated data, we will develop a declarative SQL-like query system for video, LIDAR, and multi-modal drive data (see the third sketch below).
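As a concrete illustration of the assertion interface, the following minimal Python sketch shows how a domain constraint might be specified and evaluated continuously over model outputs. All names here (`Guardian`, `Assertion`, `Detection`, `flicker_free`) are hypothetical placeholders rather than a committed API, and the "flicker" constraint is just one example of the kind of temporal rule a domain expert might encode.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Detection:
    label: str         # e.g. "car", "pedestrian"
    confidence: float  # model score in [0, 1]
    box: tuple         # (x1, y1, x2, y2) in pixels

@dataclass
class Assertion:
    name: str
    check: Callable[[List[Detection], List[Detection]], bool]

def flicker_free(prev: List[Detection], curr: List[Detection]) -> bool:
    """Objects should not vanish between consecutive frames: a label
    detected with high confidence at frame t-1 should still appear
    at frame t."""
    prev_labels = {d.label for d in prev if d.confidence > 0.8}
    curr_labels = {d.label for d in curr}
    return prev_labels <= curr_labels

class Guardian:
    def __init__(self):
        self.assertions: List[Assertion] = []
        self.violations: List[str] = []

    def register(self, assertion: Assertion) -> None:
        self.assertions.append(assertion)

    def evaluate(self, prev, curr) -> None:
        for a in self.assertions:
            if not a.check(prev, curr):
                # In deployment this would trigger a driver warning
                # and log the scenario for later backtesting.
                self.violations.append(a.name)

guardian = Guardian()
guardian.register(Assertion("flicker_free", flicker_free))

frame_t0 = [Detection("pedestrian", 0.95, (10, 10, 50, 90))]
frame_t1 = []  # pedestrian vanished: the assertion should fire
guardian.evaluate(frame_t0, frame_t1)
print(guardian.violations)  # ['flicker_free']
```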
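One plausible realization of violation-driven retrieval is similarity search over scenario embeddings: a minimal sketch, assuming each logged scenario can be summarized as a fixed-length vector. The embedding corpus and dimensions below are synthetic stand-ins, not real drive data.

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for embeddings of 10,000 logged historical scenarios.
historical_embeddings = rng.normal(size=(10_000, 128))
historical_embeddings /= np.linalg.norm(
    historical_embeddings, axis=1, keepdims=True
)

def most_similar(query: np.ndarray, k: int = 5) -> np.ndarray:
    """Return indices of the k historical scenarios whose embeddings
    have the highest cosine similarity to the violating scenario."""
    q = query / np.linalg.norm(query)
    scores = historical_embeddings @ q  # cosine similarity per scenario
    return np.argsort(scores)[-k:][::-1]

# When an assertion fires, embed the offending scenario and pull
# related historical cases to re-test models and mine training data.
violating_scenario = rng.normal(size=128)
related = most_similar(violating_scenario, k=5)
print("Replay these scenario ids against old and new models:", related)
```

At fleet scale, the brute-force similarity computation above would be replaced by an approximate nearest-neighbor index so that retrieval remains interactive over large collections of drive logs.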
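To convey the flavor of the declarative interface, the sketch below runs a SQL query over a toy table of pre-extracted detections using Python's built-in sqlite3 module. The schema and column names are hypothetical; the proposed system would instead evaluate such predicates directly over video, LIDAR, and other drive modalities, invoking models as needed rather than querying a pre-materialized table.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE detections (
        drive_id TEXT, frame INTEGER, label TEXT, confidence REAL
    )
""")
conn.executemany(
    "INSERT INTO detections VALUES (?, ?, ?, ?)",
    [("d1", 10, "pedestrian", 0.91),
     ("d1", 11, "pedestrian", 0.42),
     ("d2", 87, "cyclist", 0.88)],
)

# "Find frames with low-confidence pedestrian detections" -- the kind
# of backtesting query an engineer could issue interactively.
rows = conn.execute("""
    SELECT drive_id, frame, confidence
    FROM detections
    WHERE label = 'pedestrian' AND confidence < 0.5
""").fetchall()
print(rows)  # [('d1', 11, 0.42)]
```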