Merge branch 'main' of https://github.com/TomF98/torchphysics into main
nheilenkoetter committed Jul 13, 2023
2 parents ac15ec1 + 14d8095 commit de97cb9
Showing 59 changed files with 9,562 additions and 315 deletions.
13 changes: 9 additions & 4 deletions CHANGELOG.rst
@@ -1,10 +1,15 @@
=========
Changelog
=========
All notable changes to this project will be documented in this file.

Version 1.0
===========
First official release of TorchPhysics on PyPI.

Version 1.0.1
=============
- Updated documentation and error messages
- Simplified creation/definition of DeepONets
- Added more evaluation types for the DeepONet
36 changes: 29 additions & 7 deletions README.rst
@@ -9,13 +9,15 @@ You can use TorchPhysics e.g. to
- train a neural network to approximate solutions for different parameters
- solve inverse problems and interpolate external data

The following approaches are implemented using high-level concepts to make their usage as easy as possible:

- physics-informed neural networks (PINN) [1]_
- QRes [2]_
- the Deep Ritz method [3]_
- DeepONets [4]_ and Physics-Informed DeepONets [5]_

We aim to also include further implementations in the future.


TorchPhysics is built upon the machine learning library PyTorch_.
@@ -76,20 +78,40 @@ to have a look at the following sections:

Installation
============
TorchPhysics requires the following dependencies to be installed:

- PyTorch_ >= 1.7.1, < 2.0.0
- `PyTorch Lightning`_ >= 1.3.4, < 2.0.0
- Numpy_ >= 1.20.2
- Matplotlib_ >= 3.0.0
- Scipy_ >= 1.6.3

Installing TorchPhysics with ``pip`` automatically downloads everything that is needed:

.. code-block:: shell

   pip install torchphysics

Additionally, to use the ``Shapely`` and ``Trimesh`` functionalities, install the library
with the option ``all``:

.. code-block:: shell

   pip install torchphysics[all]

If you want to add functionalities or modify the code, we recommend cloning the
repository and installing it locally:

.. code-block:: shell

   git clone https://github.com/boschresearch/torchphysics
   cd path_to_torchphysics_folder
   pip install .[all]

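To check which of the packages are actually importable in the current environment, a small standard-library sketch can help; the package names below are only examples and should be adjusted to your setup:

```python
import importlib.util

def is_installed(package_name: str) -> bool:
    """Return True if the package can be imported in this environment."""
    return importlib.util.find_spec(package_name) is not None

if __name__ == "__main__":
    # example package names; adjust to whatever you expect to be installed
    for name in ("torchphysics", "torch", "shapely", "trimesh"):
        print(f"{name}: {'found' if is_installed(name) else 'missing'}")
```

`find_spec` only locates the module without importing it, so the check is cheap and has no side effects.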
.. _Numpy: https://numpy.org/
.. _Matplotlib: https://matplotlib.org/
.. _Scipy: https://scipy.org/

About
=====
25 changes: 23 additions & 2 deletions docs/examples.rst
Expand Up @@ -20,7 +20,7 @@ One of the simplest applications is the forward solution of a Poisson equation:
u &= \sin(\frac{\pi}{2} x_1)\cos(2\pi x_2), \text{ on } \partial \Omega
\end{align}
This problem is part of the tutorial and is therefore explained in a lot of detail.
The corresponding implementation can be found here_.

.. _here : tutorial/solve_pde.html
@@ -38,7 +38,7 @@ A simple example would be the problem:
u(0) &= 1
\end{align*}
where we want to train a family of solutions for :math:`k \in [0, 2]`. We
want to find the function :math:`u(x, k) = e^{kx}`.
This example is implemented in: `simple-parameter-dependency-notebook`_
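As a plain-Python sanity check, independent of TorchPhysics, a finite difference confirms that :math:`u(x, k) = e^{kx}` really solves :math:`\partial_x u = k \cdot u` with :math:`u(0) = 1`:

```python
import math

def u(x: float, k: float) -> float:
    # the analytic solution family u(x, k) = e^{k x}
    return math.exp(k * x)

def dudx(x: float, k: float, h: float = 1e-6) -> float:
    # central finite-difference approximation of du/dx
    return (u(x + h, k) - u(x - h, k)) / (2 * h)

# check the ODE u' = k * u and the initial condition u(0) = 1
for k in (0.0, 1.0, 2.0):
    assert u(0.0, k) == 1.0
    for x in (0.0, 0.5, 1.0):
        assert abs(dudx(x, k) - k * u(x, k)) < 1e-5
```

This is exactly the residual a PINN drives to zero during training, evaluated here on the known solution.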

@@ -96,6 +96,27 @@ Link to the notebook: `moving-domain-notebook`_

.. _`moving-domain-notebook`: https://github.com/boschresearch/torchphysics/blob/main/examples/pinn/moving-heat-equation.ipynb


Using hard constraints
======================
For some problems, it is advantageous to build prior knowledge into the network architecture
(e.g. scaling the network output or fixing the values on the boundary). This can easily be achieved
in TorchPhysics and is demonstrated in this `hard-constrains-notebook`_. There we consider the system:

.. math::

   \begin{align*}
   \partial_y u(x,y) &= \frac{u(x,y)}{y}, \text{ in } [0, 1] \times [0, 1] \\
   u_1(x, 0) &= 0 , \text{ for } x \in [0, 1] \\
   u_2(x, 1) &= \sin(20\pi x) , \text{ for } x \in [0, 1] \\
   \vec{n} \cdot \nabla u(x, y) &= 0 , \text{ for } x \in \{0, 1\}, y \in \{0, 1\}\\
   \end{align*}
where the high frequency is problematic for the usual PINN-approach.

.. _`hard-constrains-notebook`: https://github.com/boschresearch/torchphysics/blob/main/examples/pinn/hard-constrains.ipynb
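A common way to realize such a hard constraint is to multiply the raw network output by a function that vanishes on the relevant boundary. The sketch below is generic and not TorchPhysics-specific; ``net`` is an illustrative stand-in for a trained network:

```python
import math

def net(x: float, y: float) -> float:
    # illustrative stand-in for a trained neural network
    return math.sin(3.0 * x) * math.cos(2.0 * y) + 0.5

def u_hat(x: float, y: float) -> float:
    # hard-constrained ansatz: the factor y forces u_hat(x, 0) = 0
    # for ANY network output, so a condition like u_1(x, 0) = 0 holds exactly
    return y * net(x, y)

# the boundary condition at y = 0 is satisfied by construction
for x in (0.0, 0.25, 0.7, 1.0):
    assert u_hat(x, 0.0) == 0.0
```

Because the constraint holds exactly for every network, no boundary loss term is needed for it, which often stabilizes training.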


Interface jump
==============
For an example where we want to solve a problem with a discontinuous solution,
2 changes: 1 addition & 1 deletion docs/index.rst
@@ -34,7 +34,7 @@ can be found.
:maxdepth: 2

Overview <readme>
Tutorial <tutorial/main_page>
Examples <examples>


45 changes: 45 additions & 0 deletions docs/tutorial/applied_tutorial_start.rst
@@ -0,0 +1,45 @@
==============================
Applied TorchPhysics Tutorials
==============================
Here, we explain the TorchPhysics library along the implementation of different
examples.

To start, we consider a heat equation problem of the form

.. math::

   \begin{align}
   \partial_t u(x,t) &= \Delta_x u(x,t) \text{ on } \Omega\times I, \\
   u(x, t) &= u_0 \text{ on } \Omega\times \{0\},\\
   u(x,t) &= h(t) \text{ at } \partial\Omega_{heater}\times I, \\
   \nabla_x u(x, t) \cdot n(x) &= 0 \text{ at } (\partial \Omega \setminus \partial\Omega_{heater}) \times I,
   \end{align}
that we will solve with PINNs. This example is a nice starting point for a new user and can
be found here_. The notebook gives a lot of information about TorchPhysics and even repeats the
basic ideas of PINNs.

.. _here : https://github.com/boschresearch/torchphysics/blob/main/examples/tutorial/Introduction_Tutorial_PINNs.ipynb
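Independent of the library, it is instructive to see what the heat equation demands of a solution. In one space dimension, :math:`u(x, t) = e^{-t}\sin(x)` satisfies :math:`\partial_t u = \partial_x^2 u`, which a small finite-difference check confirms:

```python
import math

def u(x: float, t: float) -> float:
    # a classical 1-D heat-equation solution: u_t = u_xx
    return math.exp(-t) * math.sin(x)

def du_dt(x: float, t: float, h: float = 1e-5) -> float:
    # central difference in time
    return (u(x, t + h) - u(x, t - h)) / (2 * h)

def du_dxx(x: float, t: float, h: float = 1e-4) -> float:
    # second-order central difference in space
    return (u(x + h, t) - 2 * u(x, t) + u(x - h, t)) / (h * h)

# the PDE residual u_t - u_xx vanishes (up to discretization error)
for x in (0.1, 0.8, 2.0):
    for t in (0.0, 0.5):
        assert abs(du_dt(x, t) - du_dxx(x, t)) < 1e-4
```

A PINN minimizes exactly this kind of residual, but with the network output in place of the analytic ``u`` and automatic differentiation in place of finite differences.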

A next step is to make the problem more complicated, such that not a single solution
is sought, but a whole family of solutions for different functions :math:`h`.
As long as the different :math:`h` can be defined through some parameters, the solution operator
can still be learned through PINNs. This is explained in this notebook_, which is very similar to the
previous one and highlights the small aspects that have to be changed.

.. _notebook : https://github.com/boschresearch/torchphysics/blob/main/examples/tutorial/Tutorial_PINNs_Parameter_Dependency.ipynb

For more complex :math:`h`, we arrive at the DeepONet. DeepONets can also be trained
physics-informed, which is demonstrated in this tutorial_.

.. _tutorial : https://github.com/boschresearch/torchphysics/blob/main/examples/tutorial/Introduction_Tutorial_DeepONet.ipynb

Similar examples, with a description of each step, can be found in the two notebooks `PINNs for Poisson`_
and `DRM for Poisson`_. The second notebook
uses the Deep Ritz Method instead of PINNs.

More applications can be found on the `example page`_.

.. _`PINNs for Poisson`: https://github.com/boschresearch/torchphysics/blob/main/examples/tutorial/solve_pde.ipynb
.. _`DRM for Poisson`: https://github.com/boschresearch/torchphysics/blob/main/examples/tutorial/solve_pde_drm.ipynb
.. _`example page`: https://torchphysics.readthedocs.io/en/latest/examples.html
16 changes: 14 additions & 2 deletions docs/tutorial/condition_tutorial.rst
@@ -4,7 +4,7 @@ Conditions
The **Conditions** are the central concept of TorchPhysics. They transform the conditions of
the underlying differential equation into the training conditions for the neural network.

Many kinds of conditions are pre-implemented; they can be found in
the docs_. Depending on the type used, the conditions get different inputs. The five arguments
that almost all conditions need are:

@@ -65,7 +65,7 @@ need the output ``u`` and the input ``t`` and ``x``. Therefore the corresponding
.. code-block:: python

   def boundary_residual(u, t, x):
       return u - torch.sin(t * x[:, :1])*torch.sin(t * x[:, 1:])

Here many **important** things are happening:

1) The inputs of the function ``boundary_residual`` only correspond to the variables required.
The needed variables will then internally be correctly passed to the method. Here it is important
@@ -76,10 +76,22 @@ Here many important things are happening:
If we assume ``x`` is two-dimensional the value ``x[:, :1]`` corresponds to the entries
of the first axis, while ``x[:, 1:]`` is the second axis.
One could also write something like ``t[:, :1]``, but this is equal to ``t``.
Now the question may arise whether one could also use ``x[:, 0]`` instead of ``x[:, :1]``.
Both expressions return the first axis of our two-dimensional input points. **But**
there is one important difference in the output: the final **shape**. Using
``x[:, 0]`` leads to an output of shape [batch-dimension] (essentially just
a list/tensor with the values of the first axis), while ``x[:, :1]`` has the
shape [batch-dimension, 1]. So ``x[:, :1]`` preserves the shape of the
input. This is important to get the correct behavior for operations like
addition, multiplication, etc. Therefore, ``x[:, 0]`` should generally not be used
inside the residuals and can even lead to errors while training. For more info
on tensor shapes, one should check the documentation of `PyTorch`_.
4) The output at the end has to be a PyTorch tensor.
5) We have to rewrite the residual in such a way that the right-hand side is zero, e.g.
here we have to bring the sin-function to the other side.

.. _`PyTorch`: https://pytorch.org/tutorials/beginner/introyt/tensors_deeper_tutorial.html
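The shape difference discussed in point 3 can be made concrete. The snippet below uses NumPy arrays, whose slicing and broadcasting semantics match those of PyTorch tensors:

```python
import numpy as np

# a batch of 5 two-dimensional input points, shape [batch, 2]
x = np.arange(10.0).reshape(5, 2)

first_axis_flat = x[:, 0]    # shape (5,)   -> trailing dimension is dropped
first_axis_kept = x[:, :1]   # shape (5, 1) -> point layout is preserved

assert first_axis_flat.shape == (5,)
assert first_axis_kept.shape == (5, 1)

# broadcasting behaves very differently for the two shapes:
# (5,) * (5, 1) broadcasts to a (5, 5) matrix -- usually NOT what is wanted
assert (first_axis_flat * first_axis_kept).shape == (5, 5)
# (5, 1) * (5, 1) stays element-wise with shape (5, 1)
assert (first_axis_kept * first_axis_kept).shape == (5, 1)
```

The silent `(5, 5)` broadcast is exactly the kind of bug that makes ``x[:, 0]`` dangerous inside residuals.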

The defined method could then be passed to a condition. Let us assume we have already created
our model and sampler:

48 changes: 48 additions & 0 deletions docs/tutorial/main_page.rst
@@ -0,0 +1,48 @@
=========================
The TorchPhysics Tutorial
=========================
Here one can find all important information and knowledge to get started with the
software library TorchPhysics.

To make getting started as easy and user-friendly as possible for all users,
both complete novices and professionals in machine learning and PDEs, the tutorial
starts with some basics regarding differential equations and neural networks. More
experienced users can skip these points.

Afterward, we give a rough overview of different Deep Learning approaches for solving
differential equations, with a focus on PINNs and DeepONet.

The main and final topic is the use of TorchPhysics. Here we split the tutorial into two parts.
The first part guides you along the implementation of different small examples, while showing
all the important aspects and steps to solve a differential equation in TorchPhysics. This tutorial
series is aimed at an audience that is more interested in the direct utilization of the library
and wants a quick and compact overview of the possibilities.

To get a deeper understanding of the library, we show in the second part how the library is
internally structured. This series is aimed more at users who plan to add or change functionalities.


Basics of Deep Learning and Differential Equations
=====================================================
Will be added in the future.


Overview of Deep Learning Methods for Differential Equations
============================================================
Will be added in the future.


Usage of TorchPhysics
=====================
As mentioned at the beginning, here we explain the aspects of TorchPhysics in more
detail. We split the tutorial into two categories:

1) A more applied tutorial to learn TorchPhysics by implementing some examples.
All basic features will be explained. The start can be found here_.

2) A more in-depth tutorial that focuses on the library architecture. This
tutorial begins on this page_.


.. _here : applied_tutorial_start.html
.. _page : tutorial_start.html
17 changes: 16 additions & 1 deletion docs/tutorial/sampler_tutorial.rst
@@ -30,7 +30,7 @@ created with:
.. code-block:: python

   import torchphysics as tp
   X = tp.spaces.R2('x') # the space of the object
   R = tp.domains.Parallelogram(X, [0, 0], [1, 0], [0, 1]) # unit square
   C = tp.domains.Circle(X, [0, 0], 1) # unit circle
   random_R = tp.samplers.RandomUniformSampler(R, n_points=50) # 50 random points in R
   grid_C = tp.samplers.GridSampler(C, density=10) # grid points with density 10 in C

@@ -51,6 +51,20 @@ be used in the following way:
.. code-block:: python

   # first argument the space, afterwards all samplers:
   tp.utils.scatter(X, random_R, random_R_bound)

To create one batch of points, the ``.sample_points()`` method can be used. This function
creates ``n_points`` (or density-dependent) different points with the desired sampling strategy
and returns the corresponding ``Points`` object:

.. code-block:: python

   created_points = random_R.sample_points()

The utilities of the ``Points`` objects were part of the `spaces and points tutorial`_.
As a general reminder, these ``created_points`` are essentially a dictionary with the
different space variables as keys and PyTorch-tensors as values. To extract a stacked
PyTorch-tensor of all values, one can use the property ``.as_tensor``. Generally the
``.sample_points()`` function will be called only internally.
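Roughly, the relation between the dictionary view and ``.as_tensor`` can be sketched as follows (NumPy stands in for PyTorch here, and the variable names and shapes are purely illustrative):

```python
import numpy as np

# sketch of the data held by a Points-like object:
# space variables as keys, per-variable arrays as values
created_points = {
    "x": np.random.rand(50, 2),  # 50 samples of a 2-D variable 'x'
    "t": np.random.rand(50, 1),  # 50 samples of a 1-D variable 't'
}

# roughly what .as_tensor provides: one array stacked over all variables
stacked = np.concatenate([created_points["x"], created_points["t"]], axis=1)
assert stacked.shape == (50, 3)
```

The batch dimension stays first, so each row of the stacked array is one sampled point with all its coordinates side by side.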

The default behavior of each sampler is that in each iteration of the training process new
points are created and used. If this is not desired, not useful (grid sampling) or not
efficient (e.g. for really complex domains), one can make every sampler ``static``. This will
@@ -118,4 +132,5 @@ found in the `sampler-docs`_.
Now you know all about the creation of points and can either go to the conditions or
definition of neural networks. Click here_ to go back to the main tutorial page.

.. _`spaces and points tutorial`: tutorial_spaces_and_points.html
.. _here: tutorial_start.html