Functional near-infrared spectroscopy (fNIRS) and Eye tracking for Cognitive Load classification in a Driving Simulator Using Deep Learning

Mehshan Ahmed Khan; Houshyar Asadi; Mohammad Reza Chalak Qazani; Chee Peng Lim; Saied Nahavandi · 2024 · arXiv

URL: http://arxiv.org/abs/2408.06349v1

archive: archived · pipeline: cataloged, verified

Abstract

Motion simulators allow researchers to safely investigate the interaction of drivers with a vehicle. However, many studies that use driving-simulator data to predict cognitive load employ only two workload levels, leaving a gap in research on applying deep learning methodologies to analyze cognitive load, especially in challenging low-light conditions; studies often overlook such scenarios or focus solely on bright daylight. To address this gap and understand the correlation between performance and cognitive load, this study employs functional near-infrared spectroscopy (fNIRS) and eye-tracking data, including fixation duration and gaze direction, during simulated driving tasks in low-visibility conditions that induce various mental workloads. The first stage involves the statistical estimation of useful features from fNIRS and eye-tracking data; ANOVA is applied to the signals to identify significant fNIRS channels. Optimal features from fNIRS, eye tracking, and vehicle dynamics are then combined into a single input to a CNN-LSTM model to predict workload variations. The proposed CNN-LSTM model achieved 99% accuracy with neurological data and 89% with vehicle dynamics, indicating potential for real-time assessment of driver mental state and for guiding the design of safe adaptive systems.

Summary

Khan et al. (DSC 2024) tested whether deep learning models could classify cognitive load during simulated nighttime/rainy driving using a fusion of fNIRS, eye-tracking, and vehicle dynamics signals. Ten healthy adults completed three n-back levels (0/1/2) plus a baseline while driving Euro Truck Simulator 2; ANOVA selected significant fNIRS channels and a CNN-LSTM hybrid was trained on the multimodal feature set. The authors report 99% classification accuracy with neurological data (fNIRS + eye tracking) versus 89% with vehicle dynamics alone, framing the result as evidence that physiological signals dominate driving-behavior features for workload prediction in low-visibility conditions.

Key finding

A CNN-LSTM model fusing fNIRS and eye-tracking achieved 99% cognitive-load classification accuracy in nighttime/rainy simulated driving, outperforming vehicle-dynamics-only features (89%).

Methodology

Driving simulator study (Next Level Racing Motion Platform + ETS2) with 10 adults (9M, 1F) completing baseline, 0-back, 1-back, and 2-back auditory n-back conditions during simulated nighttime rainy driving. Recorded fNIRS (OBELAB), eye-tracking (Pupil Core), and vehicle dynamics (speed, angular velocity, linear acceleration, steering, throttle, brake). Features selected via ANOVA, then fed to a hybrid CNN-LSTM classifier comparing neurological-only vs vehicle-dynamics-only inputs.
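The ANOVA channel-selection step can be sketched as follows. This is a minimal pure-Python illustration, not the authors' code: it assumes each fNIRS channel is summarized by per-condition lists of feature values (e.g., mean hemodynamic response per trial) across the four conditions (baseline, 0-back, 1-back, 2-back), and the F-statistic threshold and channel names are illustrative.

```python
def f_oneway(*groups):
    """One-way ANOVA F-statistic across k groups of samples."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    # Between-group sum of squares (k - 1 degrees of freedom)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups)
    # Within-group sum of squares (n - k degrees of freedom)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g)
                    for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

def select_channels(channel_data, f_threshold=3.0):
    """Keep channels whose F-statistic across the workload
    conditions exceeds a (hypothetical) threshold."""
    return [name for name, groups in channel_data.items()
            if f_oneway(*groups) > f_threshold]

# Toy example: CH1 varies strongly with workload level, CH2 does not.
channel_data = {
    "CH1": [[1.0, 1.1, 0.9], [2.0, 2.1, 1.9],
            [3.0, 3.1, 2.9], [4.0, 4.1, 3.9]],
    "CH2": [[1.0, 2.0, 3.0]] * 4,
}
significant = select_channels(channel_data)  # ["CH1"]
```

In practice one would test F against the F-distribution's critical value (e.g., via `scipy.stats.f_oneway`, which returns a p-value) rather than a fixed cutoff; the selected channels then form the feature chunk passed to the CNN-LSTM alongside eye-tracking and vehicle-dynamics features.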

Sample size: N=10 (9 male, 1 female)

Quality score: 5 / 5

Topics