Grayscale and Event-Based Sensor Fusion for Robust Steering Prediction for Self-Driving Cars
Date
2023
Publication Type
Conference Paper
ETH Bibliography
yes
Abstract
Event-based vision, exemplified by the dynamic vision sensor (DVS), is a bio-inspired sensing model that captures timestamped, pixel-level brightness changes in non-static scenes. The DVS architecture therefore records the dynamics of a scene and filters out static information. Although machine learning algorithms based on DVS inputs outperform those based on active pixel sensors (APS), they still struggle in challenging conditions: DVS-based models excel in highly dynamic scenes but suffer in static landscapes. In this paper, we present GEFU (Grayscale and Event-based FUsor), a sensor-fusion approach that combines grayscale and event-based inputs. In particular, we evaluate GEFU's performance on a practical task: predicting a vehicle's steering angle under realistic driving conditions. GEFU is built on top of a consolidated convolutional neural network and trained on realistic driving data. Our approach outperforms DVS-only and APS-only models on non-trivial driving cases, such as static scenes for the former and suboptimal light exposure for the latter. Our results show that GEFU (i) reduces the root-mean-squared error to ~2° and (ii) always predicts the correct steering direction (left/right), even when the magnitude of the steering angle does not exactly match the ground truth.
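The abstract does not spell out GEFU's architecture, so the PyTorch sketch below is only an illustration of the kind of two-branch grayscale/event fusion regressor it describes. The class name FusionSteeringNet, the layer sizes, the 64x64 input resolution, and the late-fusion-by-concatenation design are all assumptions for illustration, not details taken from the paper.

```python
import torch
import torch.nn as nn

class FusionSteeringNet(nn.Module):
    """Hypothetical two-branch CNN: one branch per sensor modality,
    features concatenated (late fusion) before a steering-angle regressor.
    Layer sizes are illustrative, not the paper's actual architecture."""
    def __init__(self):
        super().__init__()
        def branch():
            # Small conv stack shared in shape by both modalities.
            return nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=5, stride=2), nn.ReLU(),
                nn.Conv2d(16, 32, kernel_size=3, stride=2), nn.ReLU(),
                nn.AdaptiveAvgPool2d((4, 4)), nn.Flatten(),
            )
        self.aps_branch = branch()   # grayscale (APS) frames
        self.dvs_branch = branch()   # DVS events, accumulated into a frame
        self.head = nn.Sequential(   # fuse features, regress one angle
            nn.Linear(2 * 32 * 4 * 4, 128), nn.ReLU(),
            nn.Linear(128, 1),       # predicted steering angle (degrees)
        )

    def forward(self, aps, dvs):
        feats = torch.cat([self.aps_branch(aps), self.dvs_branch(dvs)], dim=1)
        return self.head(feats)

def rmse_degrees(pred, target):
    # Root-mean-squared error in degrees, the metric the abstract reports
    # (~2 degrees for GEFU).
    return torch.sqrt(torch.mean((pred - target) ** 2))

if __name__ == "__main__":
    model = FusionSteeringNet()
    aps = torch.randn(8, 1, 64, 64)   # batch of grayscale frames
    dvs = torch.randn(8, 1, 64, 64)   # batch of event-count frames
    print(model(aps, dvs).shape)      # torch.Size([8, 1])
```

Concatenating per-branch features before the regression head is one common way to fuse a static-appearance signal (APS) with a motion signal (DVS); whether GEFU fuses early, late, or at an intermediate layer is not stated in the abstract.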
Publication status
published
Book title
2023 IEEE Sensors Applications Symposium (SAS)
Pages / Article No.
10254055
Publisher
IEEE
Event
18th IEEE Sensors Applications Symposium (SAS 2023)
Subject
Sensor Vision; Machine Learning; Sensor Fusion