
Uncertainty on Uncertainty, Robustness, and Simulation

Principal Investigators:

John Duchi, Peter Glynn, and Ramesh Johari

TRI Liaison:

Hongkai Dai

Project Summary

This proposal develops a principled approach to “uncertainty about uncertainty.” Given a system, we investigate how to quantify its robustness, both to events that are rare in the training data and to events that are absent from the training data entirely. The approaches we develop are valuable both for improving the robustness of existing system designs and for suggesting limits to robustness. An ambitious grand challenge goal for AI-assisted driving, to which we hope this project points the way, can be measured in miles: with optimal learning and system design, how many miles must we drive (or simulate) before we can assert with confidence that a car and its control algorithms are safe?
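To make the notion of a "robust risk" concrete, here is a minimal, illustrative sketch (not the project's own method) of one standard robust risk measure: the conditional value-at-risk (CVaR), which averages the worst α-fraction of observed losses and equals the worst-case expected loss over all reweightings of the empirical distribution with density ratio at most 1/α.

```python
import numpy as np

def cvar(losses, alpha=0.05):
    """Conditional value-at-risk: the mean of the worst alpha-fraction
    of losses. Illustrative only -- one common robust risk measure,
    not the specific formulation studied in this project."""
    losses = np.sort(np.asarray(losses, dtype=float))
    k = max(1, int(np.ceil(alpha * len(losses))))  # number of tail samples
    return losses[-k:].mean()

# Example: with alpha = 0.2, CVaR averages the worst 20% of losses.
tail_risk = cvar([1, 2, 3, 4, 5, 6, 7, 8, 9, 10], alpha=0.2)  # mean of {9, 10}
```

Unlike the ordinary mean loss, this quantity is driven entirely by the rare, high-loss tail, which is exactly the regime the project's robustness questions target.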

Research Goals

  1. Robustness against “known unknowns” (rare events already present in the training data)
    1. Convergence rates for robust risk
    2. Importance sampling for robust risk evaluation
    3. Calibration and certification of risk
    4. Optimization and design
  2. Robustness against “unknown unknowns” (rare events not present in training data)
    1. Introducing additional noise in training data
    2. Risk-sensitive importance sampling
    3. Robustness against actions of others
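As a hedged illustration of why importance sampling matters for evaluating rare-event risk (goal 1.2), the toy example below estimates the probability that a standard normal variable exceeds 4, an event with probability roughly 3.17e-5. Plain Monte Carlo sees only a handful of such events in 100,000 draws; sampling from a proposal shifted toward the rare region and reweighting by the likelihood ratio gives a far lower-variance estimate. The threshold, proposal, and sample sizes here are arbitrary choices for the sketch, not parameters from the project.

```python
import numpy as np

rng = np.random.default_rng(0)

def rare_event_prob_naive(n=100_000, threshold=4.0):
    # Plain Monte Carlo: only a handful of samples exceed the
    # threshold, so the estimate is extremely noisy.
    x = rng.standard_normal(n)
    return np.mean(x > threshold)

def rare_event_prob_is(n=100_000, threshold=4.0):
    # Importance sampling: draw from a proposal N(threshold, 1)
    # centered on the rare region, then reweight each sample by the
    # likelihood ratio p(x)/q(x) (normalizing constants cancel).
    x = rng.normal(loc=threshold, scale=1.0, size=n)
    log_w = -0.5 * x**2 + 0.5 * (x - threshold) ** 2  # log N(0,1) - log N(threshold,1)
    return np.mean((x > threshold) * np.exp(log_w))
```

The same reweighting idea underlies rare-event evaluation of robust risks: steer simulation effort toward the failures you care about, then correct for the steering.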