Grayscale and Event-Based Sensor Fusion for Robust Steering Prediction for Self-Driving Cars
Metadata only
Date
2023
Type
- Conference Paper
ETH Bibliography
yes
Abstract
Event-based vision, led by the dynamic vision sensor (DVS), is a bio-inspired vision model that leverages timestamped, pixel-level brightness changes of non-static scenes. Thus, the DVS architecture captures the dynamics of a scene and filters out static information. Although machine learning algorithms based on DVS inputs outperform those based on active pixel sensors (APS), they still struggle in challenging conditions. For example, DVS-based models outperform APS-based ones in highly dynamic scenes but suffer in static landscapes. In this paper, we present GEFU (Grayscale and Event-based FUsor), an approach that embraces sensor fusion by combining grayscale and event-based inputs. In particular, we evaluate GEFU's performance on a practical task: predicting a vehicle's steering angle under realistic driving conditions. GEFU is built on top of a well-established convolutional neural network and trained on realistic driving data. Our approach outperforms standalone DVS- or APS-based models on non-trivial driving cases, such as static scenes for the former and suboptimal light exposure for the latter. Our results show that GEFU (i) reduces the root-mean-squared error to approximately 2 degrees and (ii) always predicts the steering direction (left/right) correctly, even when the predicted magnitude of the steering angle does not exactly match the ground truth.
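To make the fusion idea concrete, below is a minimal, hypothetical PyTorch sketch of a two-branch network that combines a grayscale (APS) frame with an accumulated event (DVS) frame to regress a steering angle. The layer sizes, the 64x64 input resolution, and the concatenation-based fusion are illustrative assumptions, not the architecture evaluated in the paper.

```python
# Minimal sketch of grayscale + event fusion for steering regression.
# All layer shapes and the fusion strategy are illustrative assumptions.
import torch
import torch.nn as nn

class FusionSteeringNet(nn.Module):
    def __init__(self):
        super().__init__()
        # One convolutional branch per modality: a grayscale (APS) frame
        # and an accumulated event (DVS) frame, each a single channel.
        def branch():
            return nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=5, stride=2), nn.ReLU(),
                nn.Conv2d(16, 32, kernel_size=3, stride=2), nn.ReLU(),
                nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
        self.aps_branch = branch()
        self.dvs_branch = branch()
        # Fuse by concatenating per-branch features, then regress a
        # single steering angle (in degrees).
        self.head = nn.Sequential(
            nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 1)
        )

    def forward(self, aps, dvs):
        feats = torch.cat([self.aps_branch(aps), self.dvs_branch(dvs)], dim=1)
        return self.head(feats)

# Example: a batch of 4 grayscale and event frames at 64x64 resolution.
model = FusionSteeringNet()
angle = model(torch.randn(4, 1, 64, 64), torch.randn(4, 1, 64, 64))
print(angle.shape)  # torch.Size([4, 1])
```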
Publication status
published
Book title
2023 IEEE Sensors Applications Symposium (SAS)
Publisher
IEEE
Subject
Sensor Vision; Machine Learning; Sensor Fusion