Improving Driver Situation Awareness Prediction using Human Visual Sensory and Memory Mechanism

Haibei Zhu; Teruhisa Misu; Sujitha Martin; Xingwei Wu; Kumar Akash · 2021 · arXiv

URL: http://arxiv.org/abs/2111.00087v1

Abstract

Situation awareness (SA) is generally defined as the perception, understanding, and projection of the properties and positions of objects in the environment. If a system can sense a driver's SA, it can selectively provide warnings for objects the driver is not yet aware of. To investigate drivers' awareness, this study conducted a human-subject driving-simulation experiment for data collection. Whereas a previous predictive model of driver SA used gaze movement alone, this work additionally exploits object properties and characteristics of the human visual sensory system and memory mechanism. The proposed driver SA prediction model achieves over 70% accuracy and outperforms the baselines.

Summary

A driving-simulator study (Zhu, Misu, Martin, Wu, and Akash; Honda Research Institute) that builds a driver situation-awareness (SA) prediction model by augmenting a previous gaze-only baseline with three additions: (1) object properties (prior information about road objects), (2) gaze-behavioral features that distinguish foveal from peripheral vision, and (3) an awareness-score adjustment grounded in short-term-memory capacity theory. Ground-truth SA labels were obtained via SAGAT (Situation Awareness Global Assessment Technique) queries administered during simulated drives. The proposed model achieves over 70% prediction accuracy and outperforms the gaze-only baseline. This supports the argument that integrating visual sensory characteristics and short-term-memory dynamics yields more accurate driver SA predictions than aggregated gaze movement alone, with implications for ADAS warning systems that adapt to whether the driver has already perceived a hazard. An illustrative sketch of this kind of scoring scheme follows below.
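The paper's actual model is not reproduced here; the following is a minimal Python sketch of the general idea the summary describes: per-object awareness evidence accumulated from gaze, weighted differently for foveal versus peripheral viewing, with decay and a capacity cap standing in for the short-term-memory adjustment. All names, thresholds, and rates (FOVEAL_RADIUS_DEG, PERIPHERAL_WEIGHT, DECAY_PER_SEC, MEMORY_CAPACITY, etc.) are hypothetical assumptions for illustration, not values or code from the paper.

```python
import math
from dataclasses import dataclass

FOVEAL_RADIUS_DEG = 2.0       # hypothetical foveal half-angle (deg)
PERIPHERAL_RADIUS_DEG = 30.0  # hypothetical peripheral field limit (deg)
PERIPHERAL_WEIGHT = 0.2       # hypothetical discount for peripheral glances
MEMORY_CAPACITY = 7           # short-term-memory slots (Miller-style assumption)
DECAY_PER_SEC = 0.1           # hypothetical forgetting rate (1/s)

@dataclass
class ObjectState:
    score: float = 0.0      # accumulated awareness evidence
    last_seen: float = 0.0  # time of the last update (s)

def update_awareness(objects: dict, gaze_angles_deg: dict, t: float, dt: float) -> None:
    """Accumulate per-object awareness evidence from one gaze sample.

    gaze_angles_deg maps object id -> angular distance (deg) between the
    current gaze direction and that object's position.
    """
    for obj_id, angle in gaze_angles_deg.items():
        state = objects.setdefault(obj_id, ObjectState(last_seen=t))
        # Exponentially decay old evidence to mimic short-term forgetting.
        state.score *= math.exp(-DECAY_PER_SEC * (t - state.last_seen))
        if angle <= FOVEAL_RADIUS_DEG:
            state.score += dt                      # full-weight foveal evidence
        elif angle <= PERIPHERAL_RADIUS_DEG:
            state.score += PERIPHERAL_WEIGHT * dt  # discounted peripheral evidence
        state.last_seen = t

def aware_objects(objects: dict, threshold: float = 0.1) -> list:
    """Threshold the scores, then keep at most MEMORY_CAPACITY objects:
    a crude stand-in for a capacity-limited memory adjustment."""
    ranked = sorted(objects.items(), key=lambda kv: kv[1].score, reverse=True)
    return [obj_id for obj_id, st in ranked[:MEMORY_CAPACITY] if st.score >= threshold]

if __name__ == "__main__":
    objs = {}
    # Two 100 ms gaze samples: "car_12" is fixated, "ped_3" stays peripheral.
    update_awareness(objs, {"car_12": 1.0, "ped_3": 15.0}, t=0.0, dt=0.1)
    update_awareness(objs, {"car_12": 0.5, "ped_3": 40.0}, t=0.1, dt=0.1)
    print(aware_objects(objs))  # -> ['car_12'] with these toy thresholds
```

The capacity cap in aware_objects is what distinguishes this from a plain gaze-dwell heuristic: even objects that accumulated evidence can drop out of the predicted aware set once more salient objects fill the assumed memory slots.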

Key finding

Augmenting gaze-based driver SA prediction with object properties, foveal/peripheral vision features, and short-term-memory adjustments raises accuracy above 70%, outperforming the gaze-only baseline.

Methodology

experimental

Sample size: Exp 1: N=10; Exp 2: N=20

Quality score: 5 / 5

Topics