
A Model Guardian to Enforce and Improve Intelligent Vehicle Safety


Principal Investigators:

Matei Zaharia and Peter Bailis

TRI Liaison:

Nikos Arechiga

Project Summary

We propose to develop a software-based Model Guardian system that makes machine learning model performance more reliable, safer, and more predictable by adapting two critical practices from conventional software engineering to the problem of AI-powered intelligent vehicle safety: runtime assertions and large-scale interactive backtesting.

The system will enable machine learning model builders throughout TRI and Toyota to efficiently test for and reason about model safety.

Research Goals

Driving Goal: design a platform that enables scalable runtime evaluation of assertions on model behavior and interactive backtesting, improving the safety, reliability, and development speed of models in autonomous and ADAS vehicles.

  1. Assertions with continuous evaluation: We will develop programming interfaces for specifying, testing, and deploying constraints that capture domain expertise to govern the behavior of model outputs and warn drivers of uncertainty (see the first sketch after this list).
  2. Efficient regression evaluation for violations: When Model Guardian’s assertions highlight a constraint violation, the system will perform efficient search and retrieval of related and representative historical scenarios to compare model behavior, obtain additional training data, and test new models (see the second sketch after this list).
  3. Interactive search and backtesting of historical data: We will develop a declarative, SQL-like query system over video, LIDAR, and other multi-modal drive data to support interactive data exploration and backtesting on historical or simulated drives (see the third sketch after this list).
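As an illustration of the first goal, the sketch below shows one possible shape for a runtime assertion over perception outputs in Python. The Box record, the track-dropout and confidence checks, and all names are hypothetical choices for illustration, not an existing Model Guardian interface.

    # Minimal sketch of runtime model assertions (hypothetical API).
    # Domain constraints over model outputs are written as plain functions
    # and evaluated continuously on predictions, flagging violations.

    from dataclasses import dataclass
    from typing import List


    @dataclass
    class Box:
        """A single detected object in one frame."""
        track_id: int
        label: str
        confidence: float


    def assert_no_track_dropout(prev_frame: List[Box], cur_frame: List[Box]) -> List[int]:
        """Flag tracks present in the previous frame but missing from the
        current one; frequent dropouts suggest unstable detections rather
        than objects actually leaving the scene."""
        prev_ids = {b.track_id for b in prev_frame}
        cur_ids = {b.track_id for b in cur_frame}
        return sorted(prev_ids - cur_ids)


    def assert_confidence_floor(frame: List[Box], floor: float = 0.3) -> List[int]:
        """Flag safety-critical detections whose confidence is suspiciously low."""
        return [b.track_id for b in frame
                if b.label == "pedestrian" and b.confidence < floor]


    if __name__ == "__main__":
        prev = [Box(1, "pedestrian", 0.90), Box(2, "car", 0.80)]
        cur = [Box(2, "car", 0.82), Box(3, "pedestrian", 0.12)]
        print("dropout violations:", assert_no_track_dropout(prev, cur))   # [1]
        print("low-confidence violations:", assert_confidence_floor(cur))  # [3]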
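For the second goal, the sketch below assumes each logged scenario has already been reduced to a fixed-length embedding vector; a simple cosine-similarity search then retrieves the historical scenarios most similar to a flagged violation. The embeddings and corpus here are random stand-ins, not real drive data.

    # Sketch of retrieving historical scenarios similar to a flagged
    # violation (illustrative assumptions only). Nearest neighbors in
    # embedding space become candidates for comparing model behavior,
    # obtaining additional training data, or regression-testing a new model.

    import numpy as np


    def nearest_scenarios(query_vec: np.ndarray, corpus: np.ndarray, k: int = 5) -> np.ndarray:
        """Return indices of the k historical scenarios whose embeddings
        have the highest cosine similarity to the flagged scenario."""
        corpus_norm = corpus / np.linalg.norm(corpus, axis=1, keepdims=True)
        query_norm = query_vec / np.linalg.norm(query_vec)
        sims = corpus_norm @ query_norm
        return np.argsort(-sims)[:k]


    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        historical = rng.normal(size=(10_000, 128))  # stand-in scenario embeddings
        flagged = rng.normal(size=128)               # embedding of the violating scenario
        print("similar scenario ids:", nearest_scenarios(flagged, historical))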
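For the third goal, the sketch below stands in for the declarative, SQL-like interface: it assumes per-frame detection metadata has already been extracted from video and LIDAR into a relational table, so an ordinary SQL query (here over an in-memory SQLite database) can select the drive segments of interest. The schema, column names, and values are invented for illustration.

    # Sketch of a declarative query for backtesting (illustrative only):
    # select night-time frames with a nearby pedestrian from per-frame
    # detection metadata assumed to live in a relational table.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("""
        CREATE TABLE detections (
            drive_id TEXT, frame INTEGER, label TEXT,
            distance_m REAL, sun_elevation_deg REAL
        )
    """)
    conn.executemany(
        "INSERT INTO detections VALUES (?, ?, ?, ?, ?)",
        [
            ("drive_001", 10, "pedestrian", 8.5, -5.0),   # night, close pedestrian
            ("drive_001", 11, "car", 30.0, -5.0),
            ("drive_002", 87, "pedestrian", 25.0, 40.0),  # daytime, far away
        ],
    )

    rows = conn.execute("""
        SELECT drive_id, frame, distance_m
        FROM detections
        WHERE label = 'pedestrian' AND distance_m < 10 AND sun_elevation_deg < 0
        ORDER BY drive_id, frame
    """).fetchall()

    print(rows)  # [('drive_001', 10, 8.5)]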