HR-SAR-Net: A Deep Neural Network for Urban Scene Segmentation from High-Resolution SAR Data


METADATA ONLY

Date

2020

Publication Type

Conference Paper

ETH Bibliography

yes

Abstract

Synthetic aperture radar (SAR) data is becoming increasingly available to a wide range of users through commercial service providers, with resolutions reaching 0.5 m/px. Segmenting SAR data still requires skilled personnel, limiting the potential for large-scale use. We show that it is possible to automatically and reliably perform urban scene segmentation from next-generation resolution SAR data (0.15 m/px) using deep neural networks (DNNs), achieving a pixel accuracy of 95.19% and a mean intersection-over-union (mIoU) of 74.67% with data collected over a region of merely 2.2 km². The presented DNN is not only effective, but is also very small, with only 63k parameters, and computationally simple enough to achieve a throughput of around 500 Mpx/s using a single GPU. We further identify that additional SAR receive antennas and data from multiple flights massively improve the segmentation accuracy. We describe a procedure for generating a high-quality segmentation ground truth from multiple inaccurate building and road annotations, which has been crucial to achieving these segmentation results.
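The two evaluation metrics quoted in the abstract, pixel accuracy and mean intersection-over-union (mIoU), can be illustrated with a minimal sketch. This is a generic NumPy implementation of the standard confusion-matrix definitions, not the authors' evaluation code; the function names and the assumption of integer label maps are our own.

```python
import numpy as np

def pixel_accuracy(pred, target):
    # Fraction of pixels whose predicted class matches the ground truth.
    return float((pred == target).mean())

def mean_iou(pred, target, num_classes):
    # Confusion matrix: cm[t, p] counts pixels of true class t predicted as p.
    cm = np.zeros((num_classes, num_classes), dtype=np.int64)
    for p, t in zip(pred.ravel(), target.ravel()):
        cm[t, p] += 1
    intersection = np.diag(cm).astype(float)
    # Union of prediction and ground truth for each class
    # (row sum + column sum minus the double-counted intersection).
    union = cm.sum(axis=0) + cm.sum(axis=1) - np.diag(cm)
    valid = union > 0  # ignore classes absent from both pred and target
    return float((intersection[valid] / union[valid]).mean())
```

On a toy 2x2 label map with two classes, e.g. `pred = [[0, 1], [1, 1]]` and `target = [[0, 1], [0, 1]]`, pixel accuracy is 0.75 while mIoU is lower (about 0.583), since mIoU averages per-class overlap and so penalizes errors in small classes more heavily.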

Publication status

published

Book title

2020 IEEE Sensors Applications Symposium (SAS)

Pages / Article No.

9220068

Publisher

IEEE

Event

15th IEEE Sensors Applications Symposium (SAS 2020)

Organisational unit

03996 - Benini, Luca / Benini, Luca
