Metadata only
Date
2020-03
Type
- Conference Paper
ETH Bibliography
yes
Abstract
Digitizing historical maps automatically poses a multitude of challenges. This is particularly true for label extraction, since labels vary strongly in shape, size, orientation and type. In addition, characters may overlap with other features such as roads or hachures, which makes extraction even harder. To tackle this issue, we propose a novel semi-automatic workflow consisting of a sequence of deep learning and conventional text processing steps in conjunction with tailor-made correction software. To demonstrate its efficiency, the workflow is applied to the Siegfried Map Series (1870-1949), which covers all of Switzerland at scales of 1:25,000 and 1:50,000.

The workflow consists of the following steps. First, we decide for each pixel whether its content is text or background. For this purpose, we use a convolutional neural network with the U-Net architecture, which was originally developed for biomedical image segmentation (Ronneberger, 2015). The weights are trained with four manually annotated map sheets as ground truth; the trained model can then predict the segmentation of any other map sheet. The results are clustered with DBSCAN (Ester, Kriegel, Sander, & Xu, 1996) to aggregate individual pixels into letters and words. In this way, each label can be localized and extracted without background. Since this is still a non-vectorized representation of the labels, we use the Google Vision API to interpret the text of each label and search for matching entries in the Swiss Names database by Swisstopo for verification. As in most label extraction workflows, the last step consists of manually checking all labels and correcting possible mistakes. For this purpose, we modified the VGG Image Annotator to simplify the selection of the correct entry.

Our framework reduces the time needed to digitize labels drastically, by a factor of around 5. The fully automatic part (segmentation, interpretation, matching) takes around 5-10 min per sheet and the manual processing around 1.5-2 h. Compared to a fully manual digitizing process, time efficiency is not the only benefit: the chance of missing labels also decreases strongly, since a human cannot detect labels as reliably as a computer algorithm. Most problems that lead to additional manual work occur during clustering and during text recognition with the Google Vision API. Since the model is trained on maps of a flat, German-speaking part of Switzerland, the algorithm performs worse in other regions. In Alpine areas, rock hachures are often misinterpreted as labels, leading to many false positives. French labels are often composed of several words, which DBSCAN does not cluster into one label. Possible further work includes retraining with more diverse ground truth or extending the U-Net model so that it can also recognize and learn textual information.
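The clustering step described in the abstract can be illustrated with a short sketch. The code below is not the authors' implementation; it is a minimal example, assuming a binary text/background mask from a segmenter such as U-Net, of how predicted text pixels could be grouped into label candidates with scikit-learn's DBSCAN and reduced to bounding boxes. The function name and the eps/min_samples values are illustrative assumptions, not values from the paper.

```python
# Sketch: from a binary text/background mask to label bounding boxes.
# Assumes a segmentation model (e.g. U-Net) has already produced `text_mask`,
# a 2D array where nonzero pixels are classified as text.
# eps and min_samples are illustrative, not the paper's parameters.
import numpy as np
from sklearn.cluster import DBSCAN


def mask_to_label_boxes(text_mask, eps=5.0, min_samples=20):
    """Cluster text pixels into label candidates and return bounding boxes."""
    # Coordinates (row, col) of all pixels predicted as text.
    coords = np.argwhere(text_mask > 0)
    if coords.size == 0:
        return []

    # DBSCAN groups nearby pixels into letters/words; label -1 is noise.
    clustering = DBSCAN(eps=eps, min_samples=min_samples).fit(coords)

    boxes = []
    for cluster_id in set(clustering.labels_):
        if cluster_id == -1:  # skip noise pixels
            continue
        pts = coords[clustering.labels_ == cluster_id]
        y_min, x_min = pts.min(axis=0)
        y_max, x_max = pts.max(axis=0)
        boxes.append((int(x_min), int(y_min), int(x_max), int(y_max)))
    return boxes
```

Each resulting box could then be cropped from the map sheet and passed to an OCR service (the paper uses the Google Vision API) before matching against a gazetteer such as the Swiss Names database.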
Publication status
published
Editor
Book title
Automatic Vectorisation of Historical Maps
Pages / Article No.
Publisher
Department of Cartography and Geoinformatics, ELTE Eötvös Loránd University
Event
Subject
Historical maps; Vectorization; Deep learning; Convolutional neural network; Label extraction
Organisational unit
03466 - Hurni, Lorenz / Hurni, Lorenz