Storytelling with Animated Interactive Objects in Real-time 3D Maps


Date

2023

Publication Type

Doctoral Thesis

ETH Bibliography

yes

Abstract

Digital story maps present geographic information embedded in a narrative structure, often supplemented by multimedia elements. The currently predominant extrinsic approach, however, impedes following the storyline, since the reader's attention is split between the map and additional illustrative content in the user interface. A further shortcoming is the incoherent representation of abstract map elements alongside realistic multimedia elements. To close these gaps, this dissertation proposes an intrinsically oriented approach inspired by historical and contemporary pictorial maps: figurative objects, acting as storytellers, are introduced into the map itself. By spatially anchoring these complementary entities within 3D maps in particular, the connection between the cartographic model and geographic reality is strengthened.

The overall goal of this work is to automatically turn static objects from prevalent 2D pictorial maps into animated objects for interactive 3D story maps. Artificial neural networks, primarily convolutional neural networks (CNNs), are applied in a sequence of discriminative and generative tasks to achieve this goal. For each task, data is prepared to train the networks in a supervised manner.

Firstly, pictorial maps are identified among publicly available images on the internet by classification CNNs, since image metadata is not always present or reliable; different strategies for feeding the images into the CNNs are investigated. Secondly, bounding boxes of objects on pictorial maps are detected, using sailing ships as an example. Although map descriptions may mention which objects occur, their positions and sizes are usually unknown; to determine these two measures, CNNs for object detection are examined while their hyperparameters are varied. Thirdly, silhouettes of pictorial objects are recognised, exemplified by human figures.
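The classification, detection, and silhouette-recognition steps are all discriminative CNN tasks. As a purely illustrative sketch (not code from the thesis), the 2D convolution at the heart of such networks can be written in plain Python:

```python
def conv2d(image, kernel):
    """Valid 2D cross-correlation, the core operation of a CNN layer.

    `image` and `kernel` are 2D lists of numbers; the kernel is slid over
    the image and a weighted sum is taken at every position.
    """
    kh, kw = len(kernel), len(kernel[0])
    oh = len(image) - kh + 1   # output height ("valid" mode, no padding)
    ow = len(image[0]) - kw + 1
    out = [[0.0] * ow for _ in range(oh)]
    for i in range(oh):
        for j in range(ow):
            s = 0.0
            for di in range(kh):
                for dj in range(kw):
                    s += image[i + di][j + dj] * kernel[di][dj]
            out[i][j] = s
    return out
```

Real CNN layers add multiple channels, learned kernels, non-linearities, and pooling; deep-learning frameworks implement this operation efficiently on GPUs.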
As the manual preparation of training data would be too labour-intensive for this task, combinations of figurative and realistic entities are evaluated instead. Fourthly, body parts and pose points are extracted from the silhouettes of the figures, a prerequisite for skeletal animations and for inserting speech bubbles at head positions; CNNs with varying numbers of skip connections are compared for this task. Lastly, 3D figures are derived from their 2D counterparts, based on the outputs of the previous step, by a series of networks, which significantly facilitates and accelerates the 3D modelling process. The figures are represented by implicit surfaces, which are advantageous for curved shapes, and rendered in real time by a ray tracing algorithm.

Quantitative metrics, such as accuracy and rendering speed, as well as qualitative results are reported for each task. The inferred 3D pictorial objects may guide readers through the map while providing background information and offering interaction possibilities such as quizzes. This is especially suitable for atlases and for touristic or educational applications, including augmented and virtual reality. Animated interactive objects are intended to engage map readers through an immersive storytelling approach, increase their map literacy and frequency of use, and create long-lasting memories.
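Ray tracing implicit surfaces, as in the final rendering step, is commonly realised by sphere tracing a signed distance function (SDF): the ray advances by the distance the SDF reports, which is always safe. A minimal sketch for a single spherical SDF (a hypothetical example, not the thesis's renderer):

```python
import math


def sphere_sdf(p, center=(0.0, 0.0, 3.0), radius=1.0):
    # Signed distance to a sphere: negative inside, positive outside.
    return math.dist(p, center) - radius


def sphere_trace(origin, direction, sdf, max_steps=64, eps=1e-4, max_t=100.0):
    """March a ray from `origin` along `direction` until the SDF reports a hit."""
    n = math.sqrt(sum(d * d for d in direction))
    d = tuple(c / n for c in direction)  # normalise the direction
    t = 0.0
    for _ in range(max_steps):
        p = tuple(o + t * c for o, c in zip(origin, d))
        dist = sdf(p)
        if dist < eps:
            return t       # hit: distance along the ray to the surface
        t += dist          # safe step: cannot overshoot the nearest surface
        if t > max_t:
            break
    return None            # miss
```

For example, a ray from the origin along +z hits the unit sphere centred at (0, 0, 3) at a distance of about 2, while a ray along +y misses it. A production renderer would evaluate such marches per pixel, compute surface normals from SDF gradients, and shade the hit points.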

Publication status

published

Contributors

Examiner: Hurni, Lorenz
Examiner: Bandrova, Temenoujka
Examiner: Öztireli, A. Cengiz

Publisher

ETH Zurich

Subject

Pictorial maps; Machine learning; 3D cartography; Storytelling; Computer graphics

Organisational unit

03466 - Hurni, Lorenz / Hurni, Lorenz

Funding

ETH-11 17-1 - Storytelling with Animated Interactive Objects in Real-time 3D Maps (ETHZ)

Related publications and datasets

Has part: