Towards Context-Aware Modeling of Situation Awareness in Conditionally Automated Driving

Lilit Avetisyan; X. Jessie Yang; Feng Zhou · 2024 · arXiv

URL: http://arxiv.org/abs/2405.07088v1


Abstract

Maintaining adequate situation awareness (SA) is crucial for the safe operation of conditionally automated vehicles (AVs), which requires drivers to regain control during takeover (TOR) events. This study developed a predictive model for real-time assessment of driver SA using multimodal data (e.g., galvanic skin response, heart rate and eye tracking data, and driver characteristics) collected in a simulated driving environment. Sixty-seven participants experienced automated driving scenarios with TORs, with conditions varying in risk perception and the presence of automation errors. A LightGBM (Light Gradient Boosting Machine) model trained on the top 12 predictors identified by SHAP (SHapley Additive exPlanations) achieved promising performance with RMSE=0.89, MAE=0.71, and Corr=0.78. These findings have implications towards context-aware modeling of SA in conditionally automated driving, paving the way for safer and more seamless driver-AV interactions.

Summary

Driving simulator study developed a real-time predictive model of driver situation awareness (SA) in conditionally automated (SAE Level 3) driving. Sixty-seven participants experienced automated drives with takeover requests (TORs) under a 2x2 mixed design crossing risk perception (high vs. low, between-subjects) with automation reliability (error vs. no error, within-subjects). After exclusions for sensor or simulator malfunctions, data from 44 participants were analyzed. Multimodal inputs included galvanic skin response, photoplethysmography-derived heart rate and HRV, eye-tracking metrics (fixations, dwell, pupil), and driver characteristics. Self-reported SA was collected every 30 s on a 4-item scale during two 15-min drives. A LightGBM regressor was trained with 10-fold cross-validation; SHAP was used for feature interpretation. Using the top 12 SHAP-ranked features (out of 21), the model achieved RMSE = 0.89, MAE = 0.71, and correlation = 0.78 with self-reported SA. High-risk exposure and automation errors increased reported SA and shifted gaze allocation toward the road and traffic. The authors frame the work as a step toward context-aware, continuously updated SA estimation to support driver-AV handover.

Key finding

A LightGBM model trained on multimodal physiological, eye-tracking, and demographic predictors estimated continuous driver SA in conditionally automated driving with RMSE = 0.89, MAE = 0.71, and r = 0.78 against self-report. Risk perception and automation errors significantly elevated reported SA, with eye-tracking showing more road-center and traffic-checking gaze under risk and error conditions.

Methodology

Driving simulator experiment, 2x2 mixed design (risk: high vs. low between-subjects; automation reliability: error vs. no error within-subjects). Each participant completed two ~15-min Level-3 automated drives with TOR events while engaging in a Tetris secondary task. Continuous GSR, PPG-derived HR/HRV, and eye-tracking data were recorded; SA was self-reported every 30 s on a 4-item scale. Trust, perceived risk, and demographic surveys were collected. A LightGBM regressor predicted SA from 21 features (1634 samples) using 10-fold cross-validation; SHAP identified the top 12 predictors, after which the model was retrained.

Sample size: 67 enrolled (30 female, mean age 28.3, SD 11.5; 37 male, mean age 25.9, SD 12.3); 44 analyzed after sensor/simulator exclusions

Quality score: 5 / 5

Topics