GeoSeg

Implementations of various neural network architectures to infer the interface depth of simulations of the elastic wave equation with the frozen Gaussian approximation.

About

This is an API for building neural networks for a particular kind of inverse problem arising in geophysics, as described in Hateley, Roberts, Mylonakis, and Yang [1]. The frozen Gaussian approximation (FGA) is a mathematical technique for quickly solving an approximation to the 3D elastic wave equation. The FGA solves for the waveform given an initial epicenter and boundary data. The full inverse problem is to determine the boundary data and epicenter from the waveform recorded at a specified set of receivers.

The goal of this API is to make it quick to build different neural network architectures in order to test increasingly complicated versions of this inverse problem.

The basic components of models in the package are Blocks and Meta-Architectures. A Block is a combination of network layers together with up- and down-sampling methods, and a Meta-Architecture is the scaffolding the blocks are placed upon. The currently supported meta-architectures and blocks are:

  • Meta-Architectures:
    • CNN: A basic feed-forward convolutional network which down-samples its input.
    • EncoderDecoder: A feed-forward network with an encoding branch which down-samples and a decoding branch which up-samples back to the original resolution.
    • UNet: Similar to the encoder-decoder but with skip connections from the encoding branch to the decoding branch.
  • Blocks:
    • Convolutional: A single convolutional layer.
    • Residual: A single convolutional layer with a skip connection.
    • Dense: A multi-layer block where each layer receives input from all previous layers.

The down-sample layer for each block is a strided convolution, and the up-sample layer is a strided transposed convolution. All blocks have the option of adding a batch-normalization layer and/or a bottleneck layer, as was done in [3]. Each Meta-Architecture can be implemented with any of the Blocks. Details of the implementation can be found in the documentation.
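
As a rough illustration only (this is not the package's actual API), a Residual block with an optional batch-normalization layer and a strided-convolution down-sample could look like the following Keras sketch, assuming 1D convolutions over the time axis:

from tensorflow.keras import layers

def residual_block(x, filters, use_batch_norm=True):
    # Hypothetical sketch of a Residual block: one convolutional layer
    # plus a skip connection, followed by a strided-convolution down-sample.
    shortcut = x
    y = layers.Conv1D(filters, kernel_size=3, padding="same")(x)
    if use_batch_norm:
        y = layers.BatchNormalization()(y)
    y = layers.Activation("relu")(y)
    # A 1x1 convolution matches channel counts so the skip connection can be added.
    if shortcut.shape[-1] != filters:
        shortcut = layers.Conv1D(filters, kernel_size=1, padding="same")(shortcut)
    y = layers.Add()([y, shortcut])
    # Down-sample with a strided convolution, as described above.
    return layers.Conv1D(filters, kernel_size=3, strides=2, padding="same")(y)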

Running Experiments

To run an experiment you need seismograph data together with labels in NumPy array form. The seismograph data should have shape (N, 3, r), corresponding to N time samples of P- or S-wave seismic data in the x, y, and z directions recorded by r receivers.
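
For example, placeholder arrays of this shape can be created and saved with NumPy as below; the label array is purely illustrative, since the actual label format is described in the documentation:

import numpy as np

N, r = 1000, 20                      # N time samples, r receivers (example sizes)
data = np.random.randn(N, 3, r)      # x, y, z components at each receiver
np.save("path/to/train/data.npy", data)

# Placeholder labels only: the real label array encodes the quantity being
# inferred (e.g. interface depth) in the format the documentation specifies.
labels = np.zeros(N)
np.save("path/to/train/labels.npy", labels)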

  1. Clone the repo
git clone https://github.com/KyleMylonakis/GeoSeg
  2. Train the network
python3 main.py --config path/to/config.json \
                --save-dir path/to/save/model

At the end of training the final model is saved to path/to/save/model/model.h5, along with the checkpoint weights with the highest validation accuracy in path/to/save/model/model_chkpt.h5.
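
Since the model is saved as an HDF5 file, it can then be reloaded for inference. A minimal sketch, assuming a standard Keras model and evaluation data in the (N, 3, r) layout described above:

import numpy as np
from tensorflow.keras.models import load_model

# Reload either the final model or the best-validation-accuracy checkpoint.
model = load_model("path/to/save/model/model.h5")
eval_data = np.load("path/to/eval/data.npy")
predictions = model.predict(eval_data, batch_size=16)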

Configuring Experiments

Each experiment is defined by an experiment_config.json. There are three main configurations to set: the model config, and the train and eval configs.

Model Config

The model config contains a meta_arch config and a block config.

"model":
"meta_arch":{
    "name": "Unet",
    **kwargs   
    },
"block":{
    "name": "dense",
    **kwargs
    }

The only required field is "name"; any other parameters that define the model are set to their defaults if not provided. For a description of all the parameters, see the documentation in meta_arch.MetaModel and blocks.Blocks.

Train and Eval Config

"train": {
    "save_every": 100,
    "data": "path/to/train/data.npy",
    "labels": "path/to/train/labels.npy",
    "downsample": 5,
    "optimizer": {
      "algorithm": "nadam",
      "parameters": {
        **kwargs
      }
    },
    "batch_size": 16,
    "epochs": 1,
    "shuffle": true
  },
"eval": {
    "data": "path/to/eval/data.npy",
    "labels": "path/to/eval/labels.npy",
    "batch_size": 16,
    "shuffle": false
  }
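
Putting the sections together, a complete experiment_config.json can be assembled as in the following sketch (written in Python for convenience; all values are placeholders, and any omitted parameters fall back to their defaults):

import json

config = {
    "model": {
        "meta_arch": {"name": "Unet"},
        "block": {"name": "dense"},
    },
    "train": {
        "save_every": 100,
        "data": "path/to/train/data.npy",
        "labels": "path/to/train/labels.npy",
        "downsample": 5,
        "optimizer": {"algorithm": "nadam", "parameters": {}},
        "batch_size": 16,
        "epochs": 1,
        "shuffle": True,
    },
    "eval": {
        "data": "path/to/eval/data.npy",
        "labels": "path/to/eval/labels.npy",
        "batch_size": 16,
        "shuffle": False,
    },
}

# Write the assembled configuration to disk for use with main.py.
with open("experiment_config.json", "w") as f:
    json.dump(config, f, indent=2)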

References

  1. Deep Learning Seismic Interface Detection using the Frozen Gaussian Approximation; James C. Hateley, Jay Roberts, Kyle Mylonakis, Xu Yang. (2018)
  2. U-Net: Convolutional Networks for Biomedical Image Segmentation; Olaf Ronneberger, Philipp Fischer, and Thomas Brox. (2015)
  3. Densely Connected Convolutional Networks; Gao Huang, Zhuang Liu, Laurens van der Maaten, Kilian Q. Weinberger. (2018)
  4. Deep Residual Learning for Image Recognition; Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun. (2015)
