Eye Gaze for Intelligent Driving
Main Author:
Format: Dissertation
Language: English
Published: ProQuest Dissertations & Theses, 01-01-2024
Subjects:
Online Access: Get full text
Summary:

Intelligent vehicles have been proposed as one path to increasing traffic safety and reducing on-road crashes. Driving “intelligence” today takes many forms, ranging from simple blind spot occupancy or forward collision warnings to distance-aware cruise control and, in certain situations, full driving autonomy. These methods are primarily outward-facing: they operate on information about the state of the vehicle and surrounding traffic elements. A less explored domain of intelligence is cabin-facing: modeling the driver’s cognitive states. In this thesis, I investigate the utility of a signal that can help us achieve cabin-facing intelligence: driver eye gaze. Eye gaze allows us to infer the driver’s internal cognitive states, and I explore how this can improve both autonomous driving methods and intelligent driving assistance.

To enable this research, I first contribute DReyeVR, an open-source virtual reality driving simulator designed with behavioral and interaction research priorities in mind, yet living in the same experimental environments used by vehicular autonomy researchers, effectively bridging the two fields. I show how DReyeVR can be used to conduct psychophysical experiments by designing one that characterizes the extent and dynamics of driver peripheral vision. Making good on the promise of bridging behavioral and autonomy research, I show how similar naturalistic driver gaze data can provide additional supervision to autonomous driving agents trained via imitation learning, mitigating causal confusion.

I then turn to the assistive domain. First, I describe a study of false positives in a real-world dataset of forward collision warnings (FCWs) deployed in vehicles during a previous longitudinal, in-the-wild study. Deploying FCWs purely based on scene physics, without accounting for driver attention, overwhelms drivers with redundant alerts. I demonstrate a warning strategy that accounts for driver attention and explicitly models the driver’s hypothesis of other vehicles’ behavior, improving both the true positive and true negative rates. Finally, I propose the shared awareness paradigm, a framework for continuously supporting driver situational awareness (SA) with an intelligent perception system. To build the driver SA model, I first collect data using a novel SA labeling method that yields continuous, per-object driver awareness labels along with gaze, driving actions, and the simulated world state. I use this data to train a learned model that predicts drivers’ situational awareness of traffic elements given a history of their gaze and the scene context. In parallel, I reason about the importance of objects in a counterfactual fashion by studying the impact of perturbing object actions on the ego vehicle’s motion plan. Finally, I put it all together in an offline demonstration on replayed simulated drives, showing how we could alert drivers to important objects they are unaware of.

I conclude by reflecting on how eye gaze can be used to model the internal cognitive states of human drivers, in service of improving both vehicle autonomy and driving assistance.
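The gaze-as-supervision idea from the summary can be pictured as an auxiliary attention loss added to a behavior-cloning objective. Below is a minimal PyTorch sketch assuming an agent with a spatial attention map that is nudged toward the driver’s gaze heatmap via a KL term; the `GazeBCAgent` name, the tiny encoder, and the `lambda_gaze` weight are illustrative assumptions, not the thesis’s actual method.

```python
# Minimal sketch, assuming PyTorch: behavior cloning plus an auxiliary
# gaze-alignment loss. Architecture and names are illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GazeBCAgent(nn.Module):
    def __init__(self):
        super().__init__()
        # Small conv encoder over front-camera frames.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2), nn.ReLU(),
        )
        self.attn = nn.Conv2d(64, 1, 1)   # spatial attention logits
        self.head = nn.Linear(64, 2)      # predicts [steer, throttle]

    def forward(self, img):
        feats = self.encoder(img)                         # (B, 64, H, W)
        logits = self.attn(feats)                         # (B, 1, H, W)
        attn = torch.softmax(logits.flatten(2), dim=2).view_as(logits)
        pooled = (feats * attn).sum(dim=(2, 3))           # attention-weighted pool
        return self.head(pooled), attn

def bc_with_gaze_loss(model, img, expert_action, gaze_heatmap, lambda_gaze=0.5):
    pred, attn = model(img)
    bc_loss = F.mse_loss(pred, expert_action)             # imitate expert controls
    # Downsample the driver's gaze heatmap to the attention grid, renormalize
    # it into a distribution, and align the model's attention to it with KL.
    gaze = F.adaptive_avg_pool2d(gaze_heatmap, attn.shape[-2:])
    gaze = gaze / gaze.sum(dim=(2, 3), keepdim=True).clamp_min(1e-8)
    gaze_loss = F.kl_div(attn.clamp_min(1e-8).log(), gaze, reduction="batchmean")
    return bc_loss + lambda_gaze * gaze_loss
```

The `lambda_gaze` weight trades off imitating expert controls against attending where the human looked, which is one plausible way extra gaze supervision could counter causal confusion.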
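The attention-aware FCW strategy can likewise be pictured as a gate on a physics-only trigger. The sketch below assumes a constant-behavior “driver hypothesis” (the lead vehicle keeps doing what it was doing when last seen) and hand-picked thresholds; all of these are illustrative assumptions, not the system studied in the thesis.

```python
# Hypothetical sketch of an attention-aware forward-collision-warning gate.
# Thresholds and the constant-behavior "driver hypothesis" are assumptions.
from dataclasses import dataclass

TTC_CRITICAL_S = 2.5       # warn when time-to-collision drops below this
LOOKED_RECENTLY_S = 1.0    # gaze on the hazard within this window counts as aware
SURPRISE_DECEL = 3.0       # m/s^2 deviation that violates the driver's hypothesis

@dataclass
class LeadVehicle:
    gap_m: float           # bumper-to-bumper distance
    rel_speed_mps: float   # closing speed (positive = closing)
    decel_mps2: float      # current braking of the lead vehicle

def time_to_collision(lead: LeadVehicle) -> float:
    return lead.gap_m / lead.rel_speed_mps if lead.rel_speed_mps > 0 else float("inf")

def should_warn(lead: LeadVehicle, seconds_since_gaze_on_lead: float) -> bool:
    if time_to_collision(lead) > TTC_CRITICAL_S:
        return False                      # physics alone says no hazard yet
    aware = seconds_since_gaze_on_lead <= LOOKED_RECENTLY_S
    # Hard braking violates the constant-behavior hypothesis, so warn even
    # a recently attentive driver; otherwise suppress alerts they have seen.
    surprising = lead.decel_mps2 >= SURPRISE_DECEL
    return (not aware) or surprising
```

Suppressing physics-triggered warnings for hazards the driver has recently fixated is one plausible mechanism by which such a gate could raise the true negative rate without sacrificing true positives.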
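For the learned situational-awareness model, one plausible shape is a small recurrent network over a short history of gaze and object features that emits a per-object awareness probability. The feature choices and sizes below are assumptions for illustration.

```python
# Hypothetical sketch of the per-object awareness predictor: a GRU over a
# short history of gaze/object features. Features and sizes are assumed.
import torch
import torch.nn as nn

class AwarenessModel(nn.Module):
    def __init__(self, feat_dim=6, hidden=32):
        super().__init__()
        self.gru = nn.GRU(feat_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)

    def forward(self, history):
        # history: (B, T, feat_dim) per object, e.g. per timestep:
        # [gaze-to-object angle, distance, bearing, object speed,
        #  occlusion flag, time since last fixation on the object]
        _, h = self.gru(history)
        return torch.sigmoid(self.out(h[-1]))  # (B, 1) awareness probability
```

Such a model could be trained against the continuous per-object awareness labels from the labeling study, e.g. with a binary cross-entropy loss.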
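Finally, the counterfactual notion of object importance can be sketched as: perturb one object’s behavior, re-run the ego planner, and score how much the motion plan changes. The `plan` and `perturb` interfaces below are assumed for illustration, not the thesis’s actual planner.

```python
# Hypothetical sketch of counterfactual object importance. Assumes
# plan(ego_state, objects) returns fixed-horizon (T, 2) x/y waypoints.
import numpy as np

def object_importance(ego_state, objects, plan, perturb, idx):
    baseline = plan(ego_state, objects)
    perturbed = list(objects)
    perturbed[idx] = perturb(objects[idx])     # e.g. sudden brake or lane cut
    counterfactual = plan(ego_state, perturbed)
    # Larger plan deviation => the object matters more to the ego's plan.
    return float(np.linalg.norm(baseline - counterfactual, axis=1).max())
```

Combining such an importance score with the awareness model above yields the shared-awareness alerting rule described in the summary: flag objects that are important but predicted unattended.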
ISBN: 9798896073161