A hybrid approach to seismic deblending: when physics meets self-supervision

Nick Luiken, Matteo Ravasi, Claire Birnie
To limit the time, cost, and environmental impact associated with the acquisition of seismic data, considerable effort has been put in recent decades into so-called simultaneous shooting acquisitions, where seismic sources are fired at short time intervals from one another. As a consequence, waves originating from consecutive shots become entangled within the seismic recordings, yielding so-called blended data. For processing and imaging purposes, the data generated by each individual shot must…
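The blending process described above can be pictured as a simple superposition of time-shifted shot records. The following toy sketch (all sizes, delays, and variable names are illustrative, not taken from the paper) shows how waves from consecutive shots overlap in a single continuous recording:

```python
import numpy as np

# Toy illustration of blended acquisition: each shot record is shifted by
# its (short) firing time and summed into one continuous recording, so
# waves from consecutive shots become entangled.
rng = np.random.default_rng(0)
nt, nshots = 100, 3                        # samples per shot record, number of shots
shots = rng.standard_normal((nshots, nt))  # stand-ins for clean shot gathers
delays = [0, 30, 55]                       # firing times in samples (short intervals)

blended = np.zeros(nt + max(delays))
for shot, d in zip(shots, delays):
    blended[d:d + nt] += shot              # superposition -> blended data
```

Deblending is then the inverse task: recovering the individual `shots` from `blended` given the firing times.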

Transfer learning for self-supervised, blind-spot seismic denoising

Noise is ever present in seismic data; it arises from numerous sources and continually evolves, both spatially and temporally. The use of supervised deep learning procedures for denoising of

Posterior sampling with CNN-based, Plug-and-Play regularization with applications to Post-Stack Seismic Inversion

Uncertainty quantification is crucial to inverse problems, as it provides decision-makers with valuable information about the inversion results. For example, seismic inversion is a notoriously

Deep preconditioners and their application to seismic wavefield processing

M. Ravasi, Frontiers in Earth Science, 2022
Seismic data processing heavily relies on the solution of physics-driven inverse problems. In the presence of unfavourable data acquisition conditions (e.g., regular or irregular coarse sampling of

Iterative Deblending of Simultaneous-Source Seismic Data With Structuring Median Constraint

Numerical experiments demonstrate that iterative deblending based on the SMF constraint achieves better performance and faster convergence than low-rank and compressed-sensing constraint-based deblending approaches.

Seismic Simultaneous Source Separation via Patchwise Sparse Representation

This work proposes an alternative method to separate the blended data by combining patchwise dictionary learning with sparse inversion, in which the dictionary is directly learned from the measured blended data.

An amplitude-preserving deblending approach for simultaneous sources

Inspired by compressive sensing (CS), deblending is formulated as an analysis-based sparse inversion problem, with an algorithm derived from the classic alternating direction method (ADM) combined with variable splitting and nonmonotone line-search techniques.
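The core idea behind such sparsity-promoting deblending can be illustrated with a much simpler solver than the ADM scheme the paper derives. The sketch below uses generic iterative soft thresholding (ISTA) on a synthetic linear system; the operator `A`, the sparse model, and all sizes are arbitrary stand-ins, not the paper's formulation:

```python
import numpy as np

def soft_threshold(z, tau):
    """Shrinkage operator promoting sparsity (proximal map of the l1 norm)."""
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

rng = np.random.default_rng(1)
A = rng.standard_normal((40, 80))
A /= np.linalg.norm(A, 2)               # normalize so a unit step size is stable
x_true = np.zeros(80)
x_true[[5, 20, 60]] = [1.0, -2.0, 1.5]  # sparse model (few nonzero coefficients)
y = A @ x_true                          # noise-free "blended" data

x = np.zeros(80)
for _ in range(500):
    # gradient step on the data misfit, followed by soft thresholding
    x = soft_threshold(x + A.T @ (y - A @ x), 1e-3)
```

Despite having only 40 measurements for 80 unknowns, the sparsity constraint recovers the few active coefficients, which is the same principle CS-based deblending exploits in a suitable transform domain.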

A convolutional neural network approach to deblending seismic data

A data-driven deep-learning-based method for fast and efficient seismic deblending using a convolutional neural network designed according to the special characteristics of seismic data, which performs deblending with results comparable to those obtained with conventional industry deblending algorithms.

Hybrid-Sparsity Constrained Dictionary Learning for Iterative Deblending of Extremely Noisy Simultaneous-Source Data

A hybrid-sparsity constraint model is proposed that incorporates dictionary learning into a sparsity-promoting, transform-based deblending framework to handle extremely noisy simultaneous-source data.

The importance of transfer learning in seismic modeling and imaging

Accurate forward modeling is essential for solving inverse problems in exploration seismology. Unfortunately, being physically or numerically accurate is often unaffordable. To overcome

Separation of blended data by iterative estimation and subtraction of blending interference noise

Seismic acquisition is a trade-off between economy and quality. In conventional acquisition the time intervals between successive records are large enough to avoid interference in time. To obtain an

The potential of self-supervised networks for random noise suppression in seismic data

Interpolation and Denoising of Seismic Data using Convolutional Neural Networks

Inspired by the great advances achieved in image processing and computer vision, a particular convolutional neural network architecture, referred to as U-net, is investigated; it implements a convolutional autoencoder able to describe the complex features of clean and regularly sampled data in order to reconstruct corrupted ones.