This research will produce a set of open-source data corpora of real people in simulated and on-road driving situations, enabling academic researchers to develop competing machine learning algorithms that recognize emotional state, stress, cognitive load, distraction, and engagement from facial expressions, physiological responses, and spoken statements. The project will demonstrate real applications of machine learning in real-time driver-vehicle interaction, using the recognized driver states together with interactive responses to predict appropriate vehicle responses to driver behavior.
- Develop testbeds for collecting driver data in a lab simulator and on the road
- Collect human-centric multimodal data from semi-controlled, state-induction experiments
- Create a corpus of driver state data ready for the machine learning community
- Begin modeling and testing future driving interactions based on driver state information
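To make the multimodal recognition goal concrete, the sketch below shows one common way such signals could be combined: late fusion, where each modality (facial expression, physiology, speech) produces its own per-state scores and a weighted average picks the overall driver state. All names, weights, and the schema here are illustrative assumptions, not the project's actual models or data format.

```python
from dataclasses import dataclass

@dataclass
class ModalityScores:
    """One synchronized multimodal sample (hypothetical schema).

    Each field maps candidate driver states to a confidence score
    in [0, 1] produced by that modality's recognizer.
    """
    face: dict     # from facial expression analysis
    physio: dict   # from physiological signals (e.g. heart rate, skin conductance)
    speech: dict   # from spoken statements

def fuse_driver_state(sample: ModalityScores,
                      weights=(0.5, 0.3, 0.2)) -> str:
    """Late fusion: weighted average of per-modality scores, then
    argmax over all candidate driver states. Weights are illustrative."""
    w_face, w_physio, w_speech = weights
    states = set(sample.face) | set(sample.physio) | set(sample.speech)
    fused = {
        s: w_face * sample.face.get(s, 0.0)
           + w_physio * sample.physio.get(s, 0.0)
           + w_speech * sample.speech.get(s, 0.0)
        for s in states
    }
    return max(fused, key=fused.get)

# Example: face and physiology indicate stress, speech leans engaged;
# the weighted fusion favors "stress" (0.59 vs 0.34).
sample = ModalityScores(
    face={"stress": 0.7, "engaged": 0.2},
    physio={"stress": 0.6, "engaged": 0.4},
    speech={"stress": 0.3, "engaged": 0.6},
)
print(fuse_driver_state(sample))  # → stress
```

A real system would replace the hand-set weights with learned fusion (or early fusion over raw features), but the structure above captures how independent per-modality recognizers can be combined into a single driver-state estimate.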