Interaction-Aware Control for Cars that Anticipate and Explain Complex Environments
Principal Investigator:
TRI Liaison:
Project Summary
We propose to build models that explain and predict human drivers’ actions in challenging driving scenarios, and to develop algorithms that automatically generate such interesting and challenging test cases.
This work directly addresses pressing issues in autonomy, decision making, human modeling, and verification and validation for self-driving cars, and should be of immediate relevance to TRI.
Research Goals
Develop robust and explainable human models in challenging scenarios:
- Robustness analysis of learned human models.
- Explainable modeling of human driving behavior in complex scenarios.
- Generation of risky scenarios using learned human models.
Models will be evaluated on data collected from Berkeley DeepDrive and in simulators such as CARLA and CARS.
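As a rough illustration of the simulator-based evaluation, the sketch below rolls out a learned human-driver model in CARLA using its standard Python client API. The `HumanDriverModel` interface and its `predict_control` method are hypothetical placeholders for the learned models described above, not part of CARLA or of this proposal's codebase.

```python
# Minimal sketch: rolling out a (hypothetical) learned human-driver model in CARLA.
import carla

def rollout(human_model, host="localhost", port=2000, steps=200):
    client = carla.Client(host, port)
    client.set_timeout(10.0)
    world = client.get_world()

    # Run the simulator in synchronous mode so model queries and
    # simulation steps stay aligned.
    settings = world.get_settings()
    settings.synchronous_mode = True
    settings.fixed_delta_seconds = 0.05
    world.apply_settings(settings)

    blueprint = world.get_blueprint_library().filter("vehicle.*")[0]
    spawn_point = world.get_map().get_spawn_points()[0]
    vehicle = world.spawn_actor(blueprint, spawn_point)

    trajectory = []
    try:
        for _ in range(steps):
            state = (vehicle.get_transform(), vehicle.get_velocity())
            # Hypothetical model call: maps the current state to a control action.
            throttle, steer = human_model.predict_control(state)
            vehicle.apply_control(carla.VehicleControl(throttle=throttle, steer=steer))
            world.tick()
            trajectory.append(state)
    finally:
        vehicle.destroy()
        settings.synchronous_mode = False
        world.apply_settings(settings)
    return trajectory
```

Logged trajectories from such rollouts could then be compared against held-out human driving data as one way of assessing model robustness and realism.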