We develop new algorithms for visual forecasting of the trajectories, actions, and behaviors (multiple levels of abstraction) of vehicles in a dynamic, realistic driving environment. We envision that such forecasting ability will help build more contextually aware intelligent vehicles that can make quicker navigational decisions.
Predicting the behavior of surrounding cars in traffic scenes will play a critical role in safety and navigation: forecasting other vehicles' trajectories and actions lets an intelligent vehicle reason about its context before acting.
We seek to forecast vehicle dynamics (of both the ego vehicle and other vehicles) from vehicle-centric RGB and LIDAR data.
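To make the trajectory-forecasting task concrete, a minimal sketch follows: a constant-velocity extrapolation, a standard reference baseline for this kind of problem. The function name and interface are illustrative assumptions, not part of the project.

```python
# Hypothetical constant-velocity baseline for trajectory forecasting.
# Assumes positions are (x, y) pairs sampled at a fixed timestep; all
# names here are illustrative, not from the project itself.
from typing import List, Tuple

Point = Tuple[float, float]

def constant_velocity_forecast(past: List[Point], horizon: int) -> List[Point]:
    """Extrapolate the last observed velocity for `horizon` future steps."""
    if len(past) < 2:
        raise ValueError("need at least two past positions")
    (x0, y0), (x1, y1) = past[-2], past[-1]
    vx, vy = x1 - x0, y1 - y0  # displacement per timestep
    return [(x1 + vx * (t + 1), y1 + vy * (t + 1)) for t in range(horizon)]
```

Learned methods are typically evaluated against simple extrapolations like this one, since they capture much of the short-horizon motion of vehicles.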
- Dataset Collection: using open datasets such as Berkeley DeepDrive (BDD), and exploring simulators such as CARLA to collect data for benchmarking.
- Forecasting Vehicle Trajectories, Actions, and Behaviors: implementing several baseline methods, along with novel methods (based on SSL and IL) that forecast each concept independently.
- Joint Learning and Understanding: developing methods that learn these trajectories, actions, and behaviors jointly rather than forecasting each in isolation.
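One common way to frame the joint-learning step above is as minimizing a weighted sum of per-task losses, e.g. a trajectory regression term plus an action classification term. This is a hedged sketch under that assumption; the function names and weights are illustrative, not the project's actual formulation.

```python
# Sketch of a multi-task objective combining a trajectory head and an
# action head. All names and the fixed weighting scheme are assumptions.
import math
from typing import List, Tuple

def trajectory_error(pred: List[Tuple[float, float]],
                     gt: List[Tuple[float, float]]) -> float:
    """Mean Euclidean distance between predicted and ground-truth waypoints."""
    return sum(math.dist(p, g) for p, g in zip(pred, gt)) / len(gt)

def action_nll(probs: List[float], label: int) -> float:
    """Negative log-likelihood of the correct discrete action."""
    return -math.log(probs[label])

def joint_loss(traj_pred: List[Tuple[float, float]],
               traj_gt: List[Tuple[float, float]],
               action_probs: List[float], action_label: int,
               w_traj: float = 1.0, w_act: float = 0.5) -> float:
    """Weighted multi-task objective over the trajectory and action heads."""
    return (w_traj * trajectory_error(traj_pred, traj_gt)
            + w_act * action_nll(action_probs, action_label))
```

In practice the weights can be fixed, scheduled, or learned; the point is only that a single scalar objective ties the levels of abstraction together during training.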