Interaction-Aware Control for Cars that Anticipate and Explain Complex Environments

Principal Investigator:

Dorsa Sadigh

TRI Liaison:

Guy Rosman and Adrien Gaidon

Project Summary

We propose to build models that explain and predict human drivers’ actions in challenging driving scenarios. We will also develop algorithms that automatically generate interesting and challenging test cases for evaluating these models.

We are directly addressing pressing issues in autonomy, decision making, human modeling, verification, and validation for self-driving cars, which should be of immediate relevance to TRI.

Research Goals

Develop robust and explainable human models in challenging scenarios:

  1. Robustness analysis of learned human models.
  2. Explainable modeling of human driving behavior in complex scenarios.
  3. Generating risky scenarios using learned human models (see the sketch after this list).
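
As an illustration of goal 3, the following is a minimal sketch of searching for risky scenarios against a learned human driver model. It is not the project's actual method: the driver model, the simple 1-D car-following dynamics, and the minimum-gap risk score are all hypothetical placeholders.

```python
"""Illustrative sketch: search for risky scenarios by sampling initial
conditions and scoring them against a (placeholder) learned human model."""

import numpy as np


def learned_human_policy(state: np.ndarray) -> float:
    """Placeholder for a learned human driver model: maps the current state
    (ego position/speed, lead position/speed) to an acceleration command.
    A real model would be fit to driving data."""
    gap = state[2] - state[0]            # distance to the lead car
    rel_speed = state[3] - state[1]      # closing speed
    return float(np.clip(0.5 * (gap - 10.0) + 0.8 * rel_speed, -4.0, 2.0))


def rollout_min_gap(init_state: np.ndarray, horizon: int = 50, dt: float = 0.1) -> float:
    """Simulate a simple 1-D car-following scene and return the minimum gap,
    used here as a crude risk score (smaller gap = riskier scenario)."""
    state = init_state.copy()
    min_gap = np.inf
    for _ in range(horizon):
        accel = learned_human_policy(state)
        # Ego car follows the learned model; the lead car brakes mildly.
        state[1] += accel * dt
        state[0] += state[1] * dt
        state[3] += -1.0 * dt
        state[2] += state[3] * dt
        min_gap = min(min_gap, state[2] - state[0])
    return min_gap


def search_risky_scenarios(num_samples: int = 200, seed: int = 0):
    """Random search over initial gaps and speeds for scenarios in which the
    learned model ends up with a small gap (a stand-in for 'risky')."""
    rng = np.random.default_rng(seed)
    best = None
    for _ in range(num_samples):
        init = np.array([
            0.0,                          # ego position
            rng.uniform(5.0, 20.0),       # ego speed
            rng.uniform(8.0, 40.0),       # lead position (initial gap)
            rng.uniform(0.0, 15.0),       # lead speed
        ])
        score = rollout_min_gap(init)
        if best is None or score < best[0]:
            best = (score, init)
    return best


if __name__ == "__main__":
    min_gap, scenario = search_risky_scenarios()
    print(f"riskiest sampled scenario (min gap {min_gap:.2f} m): {scenario}")
```

In practice, the learned model would be trained on real driving data and the search would use a more principled optimizer than random sampling; the sketch only conveys the structure of scoring learned-model rollouts to surface risky cases.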

Models will be evaluated on data collected from Berkeley DeepDrive and in simulators such as CARLA and CARS.
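
For the simulator-based evaluation, a minimal CARLA sketch is shown below. It assumes a CARLA server running locally on the default port (2000) and the `carla` Python package installed; the chosen vehicle, the built-in autopilot standing in for a learned driver model, and the printed speed metric are placeholders rather than the project's evaluation protocol.

```python
"""Minimal sketch of spawning and observing a vehicle in CARLA."""

import carla


def run_short_episode(num_ticks: int = 100):
    # Connect to a locally running CARLA server.
    client = carla.Client("localhost", 2000)
    client.set_timeout(10.0)
    world = client.get_world()

    # Spawn one vehicle at a predefined spawn point and let CARLA's autopilot
    # drive it; a learned human model could be substituted for the autopilot.
    blueprint = world.get_blueprint_library().filter("vehicle.tesla.model3")[0]
    spawn_point = world.get_map().get_spawn_points()[0]
    vehicle = world.spawn_actor(blueprint, spawn_point)
    vehicle.set_autopilot(True)

    try:
        for _ in range(num_ticks):
            world.wait_for_tick()
            velocity = vehicle.get_velocity()
            speed = (velocity.x ** 2 + velocity.y ** 2 + velocity.z ** 2) ** 0.5
            print(f"speed: {speed:.2f} m/s")
    finally:
        # Clean up the spawned actor so repeated runs do not accumulate vehicles.
        vehicle.destroy()


if __name__ == "__main__":
    run_short_episode()
```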