Owl and Lizard: Patterns of Head Pose and Eye Pose in Driver Gaze Classification
URL: http://arxiv.org/abs/1508.04028v3
Abstract
Accurate, robust, inexpensive gaze tracking in the car can help keep a driver safe by facilitating the more effective study of how to improve (1) vehicle interfaces and (2) the design of future Advanced Driver Assistance Systems. In this paper, we estimate head pose and eye pose from monocular video using methods developed extensively in prior work and ask two new interesting questions. First, how much better can we classify driver gaze using head and eye pose versus just using head pose? Second, how much do individual drivers differ in the degree to which they move their head versus their eyes when glancing away from the road?
Summary
On-road study of driver gaze classification combining head pose and eye pose estimation from monocular video. The authors evaluate ~1.35 million annotated frames from 40 drivers across six glance regions (road, center stack, instrument cluster, rearview mirror, left, right) using a face-alignment and pupil-detection pipeline that feeds a random forest classifier. They introduce an 'owl vs. lizard' analogy: 'owls' move the head along with the gaze, while 'lizards' move only the eyes. Adding eye pose to head pose yields little gain for owls but a substantial gain for lizards, and an 'owlness' metric is proposed to characterize this inter-driver variation.
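The classification stage of such a pipeline can be sketched compactly. The code below is a minimal illustration, not the authors' implementation: it assumes per-frame head pose angles (yaw, pitch, roll) from face alignment and normalized pupil-offset features from pupil detection, stacks them into a feature vector, and trains a random forest over the six glance regions. The placeholder data, feature layout, and hyperparameters are assumptions for illustration only.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

REGIONS = ["road", "center_stack", "instrument_cluster",
           "rearview_mirror", "left", "right"]

# Hypothetical per-frame features: head pose (yaw, pitch, roll) from face
# alignment, plus eye pose proxies (normalized pupil offsets within the eye
# region) from pupil detection. Random placeholder data stands in for real
# annotated frames, so the printed accuracies are meaningless chance-level
# numbers; only the wiring is illustrated.
rng = np.random.default_rng(0)
n_frames = 5000
head_pose = rng.normal(size=(n_frames, 3))   # yaw, pitch, roll (degrees)
eye_pose = rng.normal(size=(n_frames, 2))    # pupil offset x, y (normalized)
labels = rng.integers(0, len(REGIONS), size=n_frames)  # annotated glance region

# Head-pose-only features vs. combined head + eye pose features.
X_head = head_pose
X_combined = np.hstack([head_pose, eye_pose])

clf = RandomForestClassifier(n_estimators=100, random_state=0)
acc_head = cross_val_score(clf, X_head, labels, cv=5).mean()
acc_combined = cross_val_score(clf, X_combined, labels, cv=5).mean()
print(f"head pose only:  {acc_head:.3f}")
print(f"head + eye pose: {acc_combined:.3f}")
```

Training the same classifier twice, once with head-pose features alone and once with the combined feature set, mirrors the paper's central comparison of head-only versus head-plus-eye gaze classification.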
Key finding
Adding eye pose on top of head pose increases six-region gaze classification accuracy from 89.2% to 94.6% on average (a 5.4 percentage point gain) at ~1.3 decisions per second. The gain is concentrated in 'lizard' drivers, who keep the head relatively still, while 'owl' drivers see little or no improvement; this motivates an 'owlness' metric to explain inter-individual differences.
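The paper's exact 'owlness' definition is not reproduced here. As a rough, hypothetical proxy for the idea, the sketch below measures what fraction of a driver's horizontal gaze displacement during off-road glances is carried by head rotation rather than eye-in-head rotation. The 10-degree glance threshold, the ratio definition, and the function name are assumptions, not the paper's formulation.

```python
import numpy as np

def owlness(head_yaw, gaze_yaw):
    """Illustrative 'owlness' proxy (not the paper's exact metric):
    fraction of total horizontal gaze displacement during off-road
    glances carried by head rotation rather than eye-in-head rotation.

    head_yaw : per-frame head yaw (degrees) relative to the road-facing pose
    gaze_yaw : per-frame overall gaze yaw (degrees), i.e. head yaw + eye yaw
    """
    head_yaw = np.abs(np.asarray(head_yaw, dtype=float))
    gaze_yaw = np.abs(np.asarray(gaze_yaw, dtype=float))
    off_road = gaze_yaw > 10.0   # assumed threshold for an off-road glance
    if not off_road.any():
        return float("nan")
    return float(head_yaw[off_road].sum() / gaze_yaw[off_road].sum())

# An 'owl' moves the head with the gaze (ratio near 1);
# a 'lizard' keeps the head still and moves only the eyes (ratio near 0).
print(owlness([18, 20, 22], [20, 22, 25]))  # owl-like driver
print(owlness([2, 3, 2], [20, 22, 25]))     # lizard-like driver
```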
Methodology
Study type: on-road
Sample size: 40 drivers; ~1,351,864 annotated frames across 6 gaze regions
Quality score: 5 / 5