Commit 7f95217

docs: Elaborate on SubDomains for BCs w MPI

georgebisbas committed Nov 20, 2024
1 parent bedb1a9 commit 7f95217
Showing 4 changed files with 48 additions and 8 deletions.
35 changes: 33 additions & 2 deletions FAQ.md
@@ -26,6 +26,7 @@
- [Can I control the MPI domain decomposition](#can-i-control-the-mpi-domain-decomposition)
- [How should I use MPI on multi-socket machines](#how-should-i-use-mpi-on-multi-socket-machines)
- [How do I make sure my code is "MPI safe"](#how-do-i-make-sure-my-code-is-mpi-safe)
- [My code is generating different results when running with MPI](#my-code-is-generating-different-results-when-running-with-mpi)
- [Why does my Operator kernel die suddenly](#why-does-my-operator-kernel-die-suddenly)
- [Can I manually modify the C code generated by Devito and test these modifications](#can-i-manually-modify-the-c-code-generated-by-devito-and-test-these-modifications)
- [How do I find the source of my bug quickly and get support](#how-do-i-find-the-source-of-my-bug-quickly-and-get-support)
@@ -127,7 +128,6 @@ Yes, just set the environment variable `TMPDIR` to your favorite location.

[top](#Frequently-Asked-Questions)


## I create an Operator, look at the generated code, and the equations appear in a different order than I expected.

The Devito compiler computes a topological ordering of the input equations based on data dependency analysis. Heuristically, some equations might be moved around to improve performance (e.g., data locality). Therefore, the order of the equations in the generated code might be different than that used as input to the Operator.
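
For instance, here is a minimal sketch (with assumed names) of a pair of equations the compiler must reorder, since the first consumes a value that the second produces:

```python
from devito import Grid, TimeFunction, Eq, Operator

grid = Grid(shape=(4, 4))
u = TimeFunction(name='u', grid=grid)
v = TimeFunction(name='v', grid=grid)

# `v.forward` reads `u.forward`, so the `u` update is scheduled first in the
# generated code, regardless of the input order of the equations
op = Operator([Eq(v.forward, u.forward + 1),
               Eq(u.forward, u + 1)])
print(op)  # inspect the generated C code to see the final ordering
```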
@@ -677,8 +677,39 @@ NUMA node1 CPU(s): 32-63

There are a few things you may want to check:

* To refer to the actual ("global") shape of the domain, you should always use `grid.shape` (or, analogously through a `Function`, `f.grid.shape`). And unless you know well what you're doing, you should never use the function shape, namely `f.shape` or `f.data.shape`, as that will return the "local" domain shape, that is, the data shape after domain decomposition, which might differ across the various MPI ranks (see the sketch below).

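A minimal sketch of the distinction, assuming a two-rank run such as `DEVITO_MPI=1 mpirun -np 2 python script.py`:

```python
from devito import Grid, Function

grid = Grid(shape=(8, 8))
f = Function(name='f', grid=grid)

print(grid.shape)  # (8, 8) on every rank: the "global" domain shape
print(f.shape)     # the "local" shape, e.g. (4, 8) on each of the two ranks
```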

[top](#Frequently-Asked-Questions)

## My code is generating different results when running with MPI

There are a few things you may want to check:

* Run your code through an MPI launcher (e.g., `mpirun`, `mpiexec`, `srun`), specifying the desired number of ranks and any other relevant arguments, e.g., `mpirun -np <num processes> [options] <executable> [arguments]`. The computed results should be exactly the same regardless of the number of ranks; however, some pre- and post-processing may be rank-specific (e.g., plotting on a given MPI rank only).

* If you hardcoded boundary conditions into your equations, consider switching to `SubDomain`s. Devito's support for distributed NumPy arrays enables domain decomposition without requiring changes to user code: the data is physically distributed, but from the user's perspective it remains a logically centralized entity. Users can interact with the data using familiar indexing schemes (e.g., slicing) without concern for the underlying layout (see the sketch after the examples below). You can find related tutorials [here](https://github.com/devitocodes/devito/tree/master/examples/userapi).

For example, instead of
```python
from devito import Grid, TimeFunction, Eq, Operator

grid = Grid(shape=(3, 3))  # assumed setup, matching the boundary indices below
u = TimeFunction(name='u', grid=grid)

t = grid.stepping_dim
x, y = grid.dimensions

op = Operator([Eq(u.forward, u + 1),
               Eq(u[t+1, x, 0], 0),
               Eq(u[t+1, x, 2], 0),
               Eq(u[t+1, 0, y], 0),
               Eq(u[t+1, 2, y], 0)])
```

one should express a semantically equivalent computation by exploiting `SubDomain`s:

```python
u.data[:] = 0
op = Operator(Eq(u.forward, u + 1, subdomain=grid.interior))
```
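
As a sketch of the transparent data access mentioned above (reusing `u` from the snippets here), slicing into `u.data` looks exactly the same with and without MPI:

```python
# `u.data` accepts global indexing whether or not the run is distributed;
# Devito routes each assignment to the ranks owning the touched data
u.data[:] = 0               # zero the whole (possibly distributed) array
u.data[0, 1:-1, 1:-1] = 1.  # assign to the interior of the first time buffer
```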

[top](#Frequently-Asked-Questions)

1 change: 1 addition & 0 deletions devito/operator/operator.py
@@ -93,6 +93,7 @@ class Operator(Callable):
... Eq(u[t+1, 2, y], 0)])
A semantically equivalent computation can be expressed exploiting SubDomains.
This is the only way to express BCs when running with MPI.
>>> u.data[:] = 0
>>> op = Operator(Eq(u.forward, u + 1, subdomain=grid.interior))
5 changes: 5 additions & 0 deletions devito/types/grid.py
@@ -496,6 +496,11 @@ class SubDomain(AbstractSubDomain):
--------
Domain : An example of preset SubDomain.
Interior : An example of preset SubDomain.

Notes
-----
SubDomains are the only way to harness the benefits of domain decomposition,
especially when defining BCs.
"""

def __subdomain_finalize__(self, grid, **kwargs):
15 changes: 9 additions & 6 deletions examples/userapi/04_boundary_conditions.ipynb
@@ -11,7 +11,10 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"This tutorial aims to demonstrate how users can implement various boundary conditions in Devito, building on concepts introduced in previous tutorials. Over the course of this notebook we will go over the implementation of both free surface boundary conditions and perfectly-matched layers (PMLs) in the context of the first-order acoustic wave equation. This tutorial is based on a simplified version of the method outlined in Liu and Tao's 1997 paper (https://doi.org/10.1121/1.419657).\n",
"This tutorial aims to demonstrate how users can implement various boundary conditions in Devito, building on concepts introduced in previous tutorials. More specifically, this tutorial will use `SubDomain`s to model boundary conditions. `SubDomain`s are the recommended way to model BCs in order to run with MPI and distributed memory parallelism,\n",
"as they align with Devito's support for distributed NumPy arrays with zero changes needed to the user coded.\n",
"\n",
"Over the course of this notebook we will go over the implementation of both free surface boundary conditions and perfectly-matched layers (PMLs) in the context of the first-order acoustic wave equation. This tutorial is based on a simplified version of the method outlined in Liu and Tao's 1997 paper (https://doi.org/10.1121/1.419657).\n",
"\n",
"We will set up our domain with PMLs along the left, right, and bottom edges, and free surface boundaries at the top as shown below.\n",
"\n",
@@ -85,7 +88,7 @@
" def define(self, dimensions):\n",
" x, y = dimensions\n",
" return {x: ('right', self.PMLS), y: y}\n",
" \n",
"\n",
"class Base(SubDomain): # Base PML region\n",
" name = 'base'\n",
" def __init__(self, PMLS):\n",
@@ -102,7 +105,7 @@
" super().__init__()\n",
" self.PMLS = PMLS\n",
" self.S_O = S_O\n",
" \n",
"\n",
" def define(self, dimensions):\n",
" x, y = dimensions\n",
" return {x: ('middle', self.PMLS, self.PMLS), y: ('left', self.S_O//2)}\n",
@@ -394,7 +397,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"We next add our free surface boundary conditions. Note that this implementation is based off that found in `examples/seismic/acoustic/operators.py`."
"Next, we will add our free surface boundary conditions. Note that this implementation is based on that found in `examples/seismic/acoustic/operators.py`."
]
},
{
@@ -536,9 +539,9 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"It is important to remember that the ordering of equations when an `Operator` is created dictates the order of loops within the generated c code. As such, the `v` equations all need to be placed before the `p` ones otherwise the operator will attempt to use the updated `v` fields before they have been updated.\n",
"It is important to remember that the ordering of equations when an `Operator` is created dictates the order of loops within the generated c code. As such, the `v` equations need to be placed before the `p` ones; otherwise, the operator will attempt to use the updated `v` fields before they have been updated.\n",
"\n",
"Now let's plot the wavefield."
"Now, let's plot the wavefield."
]
},
{
