Martin Hahner
Publications 1 - 6 of 6
- Texture Underfitting for Domain Adaptation
  Item type: Conference Paper
  2019 IEEE Intelligent Transportation Systems Conference (ITSC)
  Zaech, Jan-Nico; Dai, Dengxin; Hahner, Martin; et al. (2019)

- Fog Simulation on Real LiDAR Point Clouds for 3D Object Detection in Adverse Weather
  Item type: Conference Paper
  2021 IEEE/CVF International Conference on Computer Vision (ICCV)
  Hahner, Martin; Sakaridis, Christos; Dai, Dengxin; et al. (2021)
  This work addresses the challenging task of LiDAR-based 3D object detection in foggy weather. Collecting and annotating data in such a scenario is very time-, labor- and cost-intensive. In this paper, we tackle this problem by simulating physically accurate fog into clear-weather scenes, so that the abundant existing real datasets captured in clear weather can be repurposed for our task. Our contributions are twofold: 1) We develop a physically valid fog simulation method that is applicable to any LiDAR dataset. This unleashes the acquisition of large-scale foggy training data at no extra cost. These partially synthetic data can be used to improve the robustness of several perception methods, such as 3D object detection and tracking or simultaneous localization and mapping, on real foggy data. 2) Through extensive experiments with several state-of-the-art detection approaches, we show that our fog simulation can be leveraged to significantly improve the performance for 3D object detection in the presence of fog. Thus, we are the first to provide strong 3D object detection baselines on the Seeing Through Fog dataset. Our code is available at www.trace.ethz.ch/lidar_fog_simulation.

- Scene Understanding for Driving under Adverse Conditions
  Item type: Doctoral Thesis
  Hahner, Martin (2022)

- Learning Accurate and Human-Like Driving using Semantic Maps and Attention
  Item type: Conference Paper
  2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
  Hecker, Simon; Dai, Dengxin; Liniger, Alexander; et al. (2020)
  This paper investigates how end-to-end driving models can be improved to drive more accurately and human-like. To tackle the first issue, we exploit semantic and visual maps from HERE Technologies and augment the existing Drive360 dataset with them. The maps are used in an attention mechanism that promotes segmentation confidence masks, thus focusing the network on semantic classes in the image that are important for the current driving situation. Human-like driving is achieved using adversarial learning, by not only minimizing the imitation loss with respect to the human driver but by further defining a discriminator that forces the driving model to produce action sequences that are human-like. Our models are trained and evaluated on the Drive360 + HERE dataset, which features 60 hours and 3000 km of real-world driving data. Extensive experiments show that our driving models are more accurate and behave more human-like than previous methods.

- LiDAR Snowfall Simulation for Robust 3D Object Detection
  Item type: Conference Paper
  2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
  Hahner, Martin; Sakaridis, Christos; Bijelic, Mario; et al. (2022)
  3D object detection is a central task for applications such as autonomous driving, in which the system needs to localize and classify surrounding traffic agents, even in the presence of adverse weather. In this paper, we address the problem of LiDAR-based 3D object detection under snowfall. Due to the difficulty of collecting and annotating training data in this setting, we propose a physically based method to simulate the effect of snowfall on real clear-weather LiDAR point clouds. Our method samples snow particles in 2D space for each LiDAR line and uses the induced geometry to modify the measurement for each LiDAR beam accordingly. Moreover, as snowfall often causes wetness on the ground, we also simulate ground wetness on LiDAR point clouds. We use our simulation to generate partially synthetic snowy LiDAR data and leverage these data for training 3D object detection models that are robust to snowfall. We conduct an extensive evaluation using several state-of-the-art 3D object detection methods and show that our simulation consistently yields significant performance gains on the real snowy STF dataset compared to clear-weather baselines and competing simulation approaches, while not sacrificing performance in clear weather. Our code is available at github.com/SysCV/LiDAR_snow_sim.

- Semantic Understanding of Foggy Scenes with Purely Synthetic Data
  Item type: Conference Paper
  2019 IEEE Intelligent Transportation Systems Conference (ITSC)
  Hahner, Martin; Dai, Dengxin; Sakaridis, Christos; et al. (2019)
  This work addresses the problem of semantic scene understanding under foggy road conditions. Although marked progress has been made in semantic scene understanding over recent years, it has mainly concentrated on clear-weather outdoor scenes. Extending semantic segmentation methods to adverse weather conditions like fog is crucially important for outdoor applications such as self-driving cars. In this paper, we propose a novel method, which uses purely synthetic data to improve the performance on unseen real-world foggy scenes captured in the streets of Zurich and its surroundings. Our results highlight the potential and power of photo-realistic synthetic images for training and especially fine-tuning deep neural nets. Our contributions are threefold: 1) we created a purely synthetic, high-quality foggy dataset of 25,000 unique outdoor scenes, which we call Foggy Synscapes and plan to release publicly; 2) we show that with this data we outperform previous approaches on real-world foggy test data; and 3) we show that a combination of our data and previously used data can further improve the performance on real-world foggy data.