Lars Gebraad


Search Results

Publications 1 - 10 of 21
  • Zunino, Andrea; Mosegaard, Klaus; Boehm, Christian; et al. (2021)
    EGUsphere
    The Hamiltonian Monte Carlo (HMC) method is gaining popularity in the geophysical community to fully address nonlinear inverse problems and related uncertainty quantification. We present here an application of HMC to invert seismic data in the acoustic approximation in the context of reflection seismology. We address a 2-D problem, in the form of a vertical cross section where both sources and receivers are located near the surface of the model. To solve the forward problem we utilise the finite-difference method with PML absorbing boundary conditions. The observed data are represented by a set of shot gathers. The crucial aspect for a successful application of HMC lies in the capability to perform gradient computations efficiently. To this end, we use the adjoint state method to compute the gradient of the misfit functional, which has a computational cost of only about twice that of the forward computation, a very efficient strategy. From the collection of samples characterising the posterior distribution obtained with HMC, we can derive quantities of interest using statistical analysis and assess uncertainties. We illustrate an application of this methodology on a synthetic test mimicking the setup encountered in exploration problems.
  • Fichtner, Andreas; Zunino, Andrea; Gebraad, Lars (2019)
    Geophysical Journal International
    We present the theory for and applications of Hamiltonian Monte Carlo (HMC) solutions of linear and nonlinear tomographic problems. HMC rests on the construction of an artificial Hamiltonian system where a model is treated as a high-dimensional particle moving along a trajectory in an extended model space. Using derivatives of the forward equations, HMC is able to make long-distance moves from the current towards a new independent model, thereby promoting model independence, while maintaining high acceptance rates. Following a brief introduction to HMC using common geophysical terminology, we study linear (tomographic) problems. Though these may not be the main target of Monte Carlo methods, they provide valuable insight into the geometry and the tuning of HMC, including the design of suitable mass matrices and the length of Hamiltonian trajectories. This is complemented by a self-contained proof of the HMC algorithm in Appendix A. A series of tomographic/imaging examples is intended to illustrate (i) different variants of HMC, such as constrained and tempered sampling, (ii) the independence of samples produced by the HMC algorithm and (iii) the effects of tuning on the number of samples required to achieve practically useful convergence. Most importantly, we demonstrate the combination of HMC with adjoint techniques. This allows us to solve a fully nonlinear, probabilistic traveltime tomography with several thousand unknowns on a standard laptop computer, without any need for supercomputing resources.
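The leapfrog-plus-Metropolis mechanics described in this abstract can be sketched in a few lines. The following is a minimal illustration on a standard 2-D Gaussian target with an identity mass matrix; all function names and tuning values are illustrative choices for this sketch, not taken from the paper:

```python
import numpy as np

def hmc_sample(log_prob_grad, q0, n_samples=2000, eps=0.1, n_leap=20, seed=0):
    """Minimal Hamiltonian Monte Carlo with an identity mass matrix."""
    rng = np.random.default_rng(seed)
    q = np.asarray(q0, dtype=float)
    logp, grad = log_prob_grad(q)
    samples = []
    for _ in range(n_samples):
        p = rng.standard_normal(q.size)              # draw auxiliary momentum
        q_new, grad_new = q.copy(), grad
        # leapfrog integration of Hamilton's equations
        p_new = p + 0.5 * eps * grad_new             # first momentum half-step
        for i in range(n_leap):
            q_new = q_new + eps * p_new              # full position step
            logp_new, grad_new = log_prob_grad(q_new)
            if i != n_leap - 1:
                p_new = p_new + eps * grad_new       # full momentum step
        p_new = p_new + 0.5 * eps * grad_new         # final momentum half-step
        # Metropolis accept/reject on the total energy H = -log p + |p|^2 / 2
        h_old = -logp + 0.5 * p @ p
        h_new = -logp_new + 0.5 * p_new @ p_new
        if rng.random() < np.exp(min(0.0, h_old - h_new)):
            q, logp, grad = q_new, logp_new, grad_new
        samples.append(q.copy())
    return np.array(samples)

# Target: standard 2-D Gaussian, log p(q) = -0.5 |q|^2, gradient -q
def log_prob_grad(q):
    return -0.5 * q @ q, -q

samples = hmc_sample(log_prob_grad, q0=np.array([3.0, -3.0]))
```

In a tomographic setting, `log_prob_grad` would wrap a forward solver and an adjoint gradient computation, and the identity mass matrix would be replaced by a tuned one, as discussed in the paper.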
  • Bozzi, Emanuele; Gebraad, Lars; Fichtner, Andreas; et al. (2024)
    GNGTS Book of Abstract 42nd National Conference
  • Gebraad, Lars; Fichtner, Andreas (2018)
    AGU Fall Meeting Abstracts
  • Bozzi, Emanuele; Piana Agostinetti, Nicola; Saccorotti, Gilberto; et al. (2024)
    EGUsphere
    Distributed Acoustic Sensing (DAS) technology is currently used to monitor seismic activity, offering a unique spatially-dense representation of the along-the-cable strain wavefield. Traditional seismic networks typically rely on the timing of specific seismic phases to estimate source locations. In this context, DAS arrays may fail to provide accurate traveltimes because of spatially-heterogeneous waveforms. The causes include (but are not limited to) directional sensitivity, heterogeneous cable-to-ground coupling and enhanced sensitivity to lateral variations in the elastic properties of the medium. The resulting fluctuations in signal-to-noise ratios of the dense DAS channels pose significant challenges in the automatic picking of body phases, e.g., P-wave Absolute Arrival Times (P-ARTs). Consequently, the complex distribution of the estimated traveltimes impacts the accuracy of event locations, especially if incorrect assumptions on error statistics (e.g., Normal distribution) are considered. In this study, we address this issue by exploiting the intrinsic spatial density of DAS measurements and testing selected P-wave Differential Arrival Times (P-DATs) for source location. We estimate P-DATs for all the possible DAS channel pairs by identifying the time delay corresponding to the peak of each cross-correlation function. Subsequently, we select P-DATs based on two criteria: interchannel distance and cross-correlation index value. This procedure is often employed to reduce the risk of mixing delay times from coherent and incoherent waveforms. As a first test, using a probabilistic inversion (Hamiltonian Monte Carlo method), we demonstrate how the selected P-DATs provide a better constraint on the event's azimuthal direction compared to P-ARTs. Then, as a second experiment, we move beyond a subjective selection of P-DATs. To do so, we test a fully-automated and data-driven covariance matrix weighting procedure in a probabilistic inversion scheme.
Specifically, we compute posterior probability distributions for both the physical parameters (event location) and hyperparameters related to data features (interchannel distance and cross-correlation index thresholds). In this scheme, the hyperparameters define each weight along the diagonal of the covariance matrix. These tests offer useful insights into the utilization of P-DATs for event location with DAS. Moreover, we provide an automatic approach to avoid subjective biases based on preconceptions in the a priori data selection.
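The cross-correlation picking step described above can be sketched as follows: a differential arrival time between two channels is estimated as the lag of the cross-correlation peak, and the normalised peak value serves as a selection index. The function name and synthetic test signal are illustrative, not the authors' code:

```python
import numpy as np

def differential_arrival_time(trace_a, trace_b, dt):
    """Estimate the differential arrival time between two channels as the lag
    that maximises their cross-correlation.  Returns the lag in seconds
    (positive means trace_a arrives later than trace_b) and a normalised
    cross-correlation index in [0, 1] usable as a selection criterion."""
    cc = np.correlate(trace_a, trace_b, mode="full")
    lag = np.argmax(cc) - (len(trace_b) - 1)     # lag in samples
    index = cc.max() / (np.linalg.norm(trace_a) * np.linalg.norm(trace_b))
    return lag * dt, index

# Synthetic check: the same pulse recorded on a second channel 25 samples later
dt = 0.001
t = np.arange(400) * dt
pulse = np.exp(-((t - 0.1) / 0.01) ** 2)         # Gaussian pulse near t = 0.1 s
delayed = np.roll(pulse, 25)                     # identical pulse, 25 samples late
lag_time, cc_index = differential_arrival_time(delayed, pulse, dt)
```

In the study, such pairwise estimates would then be filtered by interchannel distance and by a threshold on the cross-correlation index before entering the probabilistic location scheme.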
  • Gebraad, Lars; van Herwaarden, Dirk-Philip; Thrastarson, Sölvi; et al. (2023)
    Special Publications of the International Union of Geodesy and Geophysics: Applications of Data Assimilation and Inverse Problems in the Earth Sciences
    The continuously increasing quantity and quality of seismic waveform data carry the potential to provide images of the Earth’s internal structure with unprecedented detail. Harnessing this rapidly growing wealth of information, however, constitutes a formidable challenge. While the emergence of faster supercomputers helps to accelerate existing algorithms, the daunting scaling properties of seismic inverse problems still demand the development of more efficient solutions. The diversity of seismic inverse problems – in terms of scientific scope, spatial scale, nature of the data, and available resources – precludes the existence of a silver bullet. Instead, efficiency derives from problem adaptation. Within this context, this chapter describes a collection of methods that are smart in the sense of exploiting specific properties of seismic inverse problems, thereby increasing computational efficiency and usable data volumes, sometimes by orders of magnitude. These methods improve different aspects of a seismic inverse problem, for instance, by harnessing data redundancies, adapting numerical simulation meshes to prior knowledge of wavefield geometry, or permitting long-distance moves through model space for Monte Carlo sampling.
  • Zunino, Andrea; Gebraad, Lars; Ghirotto, Alessandro; et al. (2023)
    Geophysical Journal International
    The use of the probabilistic approach to solve inverse problems is becoming more popular in the geophysical community, thanks to its ability to address nonlinear forward problems and to provide uncertainty quantification. However, such a strategy is often tailored to specific applications, and therefore there is a need for common platforms to solve different geophysical inverse problems and to show the potential and pitfalls of the methodology. In this work, we demonstrate a common framework within which it is possible to solve such inverse problems ranging from, for example, earthquake source location to potential field data inversion and seismic tomography. This allows us to fully address nonlinear problems and to derive useful information about the subsurface, including uncertainty estimation. This approach can, in fact, provide probabilities related to certain properties or structures of the subsurface, such as histograms of the value of some physical property, the expected volume of buried geological bodies or the probability of having boundaries defining different layers. Thanks to its ability to address high-dimensional problems, the Hamiltonian Monte Carlo (HMC) algorithm has emerged as the state-of-the-art tool for solving geophysical inverse problems within the probabilistic framework. HMC requires the computation of gradients, which can be obtained by adjoint methods. This unique combination of HMC and adjoint methods is what makes the solution of tomographic problems ultimately feasible. These results can be obtained with 'HMCLab', a numerical laboratory for solving a range of different geophysical inverse problems using sampling methods, focusing in particular on the HMC algorithm. HMCLab consists of a set of samplers (HMC and others) and a set of geophysical forward problems.
For each problem, its misfit function and gradient computation are provided and, in addition, a set of prior models can be combined to inject additional information into the inverse problem. This allows users to experiment with probabilistic inverse problems and also address real-world studies. We show how to solve a selected set of problems within this framework using variants of the HMC algorithm and analyse the results. HMCLab is provided as an open-source package written in both Python and Julia, welcoming contributions from the community.
  • Randomising in and over Seismology
    Item type: Doctoral Thesis
    Gebraad, Lars (2024)
    This thesis presents a collection of works centered on computational statistics in Bayesian seismology. Bayesian seismology interprets inverse problems in seismology as questions of inference, striving not to produce a single answer to an inverse problem, but to ascribe a probability to all possible solutions. To avoid evaluating every potential solution, or scenario, algorithms from computational statistics are necessary. However, the selection of appropriate algorithms is non-trivial, often demanding a deep understanding of the inverse problem at hand and knowledge of the potential algorithms available. This work focuses on the use of a specific algorithm, Hamiltonian Monte Carlo (HMC), and related variants. Its application to Bayesian seismology is studied from various perspectives. Firstly, a general case study for appraising a computationally demanding inverse problem in seismology with HMC is presented. It is demonstrated that the use of the HMC algorithm enables successful consideration of Full-Waveform Inversion within a Bayesian inference framework, unlocking inference on parameters such as density, which have traditionally been poorly resolved. This is followed by an effort to quantify the performance of algorithms on a given class of inverse problems. The collection of No-Free-Lunch theorems precludes any single algorithm from being universally efficient, guiding the investigation into whether HMC and related algorithms might be optimal for a reduced set of relevant problems. While this is confirmed, the attempt is restricted by the curse of dimensionality, confining the analysis to inverse problems of limited dimensionality. The expertise gained on these appraisal algorithms is subsequently distilled into an accessible and well-documented collection of open-source codes called HMCLab. This collection includes numerous didactic materials aimed at showcasing HMC and its variants to the general geophysicist.
It covers various inverse problems and their Bayesian treatment, along with instructions on implementing inverse problems posed by the user. Next, two approaches to writing efficient wavefield simulation codes are proposed. The first, an open-source package named psvWave, is a C++-written, Python-accessible software package designed to simulate 2D wavefields in parallel. The second approach demonstrates how to leverage modern unified chips using the Metal Shading Language to accelerate existing C++. Its ease of use is demonstrated on the psvWave package. Efficient wavefield modeling is integral to Bayesian seismology, as reducing computational costs can enable more extensive evaluations of wavefield-based inverse problems. The thesis concludes with a report on multiple seismological field campaigns that are extensively documented using aerial and ground-based photogrammetry. In the three field campaigns, Structure-from-Motion methods were innovatively used to digitise the field sites. It is shown that these methods are accessible with limited resources and consumer electronics. The digitisation employing remotely operated drones enables safe surveying of hazardous fields and the ability to rapidly create meshes of structures and topography for wavefield simulations, while ground-based imagery offers a low-cost, low-risk alternative.
  • Gebraad, Lars; Fichtner, Andreas (2023)
    Seismological Research Letters
    The M series of chips produced by Apple has proven a capable and power-efficient alternative to mainstream Intel and AMD x86 processors for everyday tasks. In addition, the unified design integrating the central processing unit (CPU) and graphics processing unit (GPU) has allowed these M series chips to excel at many tasks with heavy graphical requirements without the need for a discrete GPU, in some cases even outperforming discrete GPUs. In this work, we show how the M series chips can be leveraged using the Metal Shading Language (MSL) to accelerate typical array operations in C++. More importantly, we show how the usage of MSL avoids the typical complexity of compute unified device architecture (CUDA) or OpenACC memory management by allowing the CPU and GPU to work in unified memory. We demonstrate how performant the M series chips are on standard 1D and 2D array operations such as array addition, single-precision A·X plus Y, and finite-difference stencils, with respect to serial and OpenMP-accelerated CPU code. The reduced complexity of implementing MSL also allows us to accelerate an existing elastic wave equation solver (originally based on OpenMP-accelerated C++) while retaining all CPU and OpenMP functionality without modification. The resulting performance gain of simulating the wave equation is near an order of magnitude for large domain sizes. This gain attained from using MSL is similar to that of other GPU-accelerated wave-propagation codes with respect to their CPU variants, but does not come with the greatly increased programming complexity that prevents the typical scientific programmer from leveraging these accelerators. This result shows how unified processing units can be a valuable tool for seismologists and computational scientists in general, lowering the bar to writing performant codes that leverage modern GPUs.
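For reference, the array operations benchmarked in the paper look as follows in plain NumPy. These CPU reference kernels only illustrate the arithmetic that the MSL versions execute on the GPU in unified memory; the function names are illustrative, not taken from the paper's code:

```python
import numpy as np

def saxpy(a, x, y):
    """Single-precision a*X plus Y, one of the benchmarked kernels."""
    return a * x.astype(np.float32) + y.astype(np.float32)

def laplacian_1d(u, dx):
    """Second-order central finite-difference stencil (interior points only),
    the building block of the benchmarked finite-difference kernels."""
    return (u[:-2] - 2.0 * u[1:-1] + u[2:]) / dx**2

# Small sanity checks of both kernels
x = np.arange(10, dtype=np.float32)
y = np.ones(10, dtype=np.float32)
z = saxpy(2.0, x, y)                  # elementwise 2*x + 1

u = np.arange(10.0) ** 2              # u(x) = x^2 sampled on a unit grid
lap = laplacian_1d(u, 1.0)            # second derivative of x^2 is 2
```

The paper's point is that the same kernels, written once in MSL, run on the GPU without explicit host-to-device copies, because CPU and GPU share one memory pool on M series chips.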
  • Gebraad, Lars; Boehm, Christian; Fichtner, Andreas (2020)
    Journal of Geophysical Research: Solid Earth