
Commit

Merge branch 'Version2.0' into main
Signed-off-by: Tom Freudenberg <[email protected]>
Tom Freudenberg committed Jun 20, 2024
2 parents 3b99722 + c055588 commit 79a4b15
Showing 45 changed files with 23,732 additions and 1,001 deletions.
4 changes: 2 additions & 2 deletions .gitignore
@@ -16,9 +16,9 @@ __pycache__/*

# Log folders
**/lightning_logs/**
**/fluid_logs/**
**/Multiscale/**
**/bosch/**

**/fluid_logs/**
# Project files
.ropeproject
.project
22 changes: 7 additions & 15 deletions README.rst
@@ -80,32 +80,24 @@ Installation
============
TorchPhysics requires the following dependencies to be installed:

- PyTorch_ >= 1.7.1, < 2.0.0
- `PyTorch Lightning`_ >= 1.3.4, < 2.0.0
- PyTorch_ >= 2.0.0
- `PyTorch Lightning`_ >= 2.0.0
- Numpy_ >= 1.20.2
- Matplotlib_ >= 3.0.0
- Scipy_ >= 1.6.3

Installing TorchPhysics with ``pip`` automatically downloads everything that is needed:
To install the TorchPhysics version that is compatible with PyTorch >= 2.0.0,
you have to install it manually from this branch, either by

.. code-block:: python
pip install torchphysics
pip install "git+https://github.com/boschresearch/torchphysics@Version2.0"
Additionally, to use the ``Shapely`` and ``Trimesh`` functionalities, install the library
with the option ``all``:
or by

.. code-block:: python
pip install torchphysics[all]
If you want to add functionalities or modify the code, we recommend cloning the
repository and installing it locally:

.. code-block:: python
git clone https://github.com/boschresearch/torchphysics
git clone -b Version2.0 https://github.com/boschresearch/torchphysics
cd path_to_torchphysics_folder
pip install .[all]
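
A quick, optional sanity check after installing (a minimal sketch; it assumes only that
``torchphysics`` imports cleanly and that the 2.x versions of PyTorch and PyTorch Lightning
were picked up):

.. code-block:: python

    import torch
    import pytorch_lightning as pl
    import torchphysics as tp  # should import without errors after the steps above

    # TorchPhysics 2.x targets PyTorch >= 2.0.0 and PyTorch Lightning >= 2.0.0.
    print(torch.__version__)
    print(pl.__version__)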
4 changes: 2 additions & 2 deletions docs/requirements.txt
@@ -5,8 +5,8 @@ protobuf~=3.19.0 # fix github test and docu creation
sphinx>=3.2.1
sphinx_rtd_theme

torch>=1.7.1
pytorch-lightning>=1.3.4
torch>=2.0.0
pytorch-lightning>=2.0.0
numpy>=1.20.2
matplotlib>=3.4.2
trimesh>=3.9.19
11 changes: 6 additions & 5 deletions docs/tutorial/solve_pde.rst
@@ -169,11 +169,12 @@ Afterwards we switch to LBFGS:
solver = tp.solver.Solver(train_conditions=[bound_cond, pde_cond], optimizer_setting=optim)
trainer = pl.Trainer(gpus=1,
max_steps=3000, # number of training steps
logger=False,
benchmark=True,
checkpoint_callback=False)
trainer = pl.Trainer(devices=1, accelerator="gpu",
num_sanity_val_steps=0,
benchmark=True,
max_steps=3000,
logger=False,
enable_checkpointing=False)
trainer.fit(solver)
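
For readers following this hunk in isolation: the ``optim`` object passed to the solver above
wraps ``torch.optim.LBFGS``. A minimal sketch of how it might be built (the ``OptimizerSetting``
keyword names are assumed from the 1.x tutorials and may differ in this branch):

.. code-block:: python

    import torch
    import torchphysics as tp

    # LBFGS works best on a fixed point set, which is why the tutorial switches the
    # samplers to static ones before this training stage.
    optim = tp.OptimizerSetting(optimizer_class=torch.optim.LBFGS, lr=0.05,
                                optimizer_args={'max_iter': 2, 'history_size': 100})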
15 changes: 10 additions & 5 deletions docs/tutorial/solver_info.rst
@@ -22,15 +22,20 @@ the basic trainer is defined as follows:
import pytorch_lightning as pl
trainer = pl.Trainer(gpus=1, max_steps=4000, check_val_every_n_epoch=10)
trainer = pl.Trainer(devices=1, accelerator="gpu",
max_steps=4000, check_val_every_n_epoch=10)
trainer.fit(solver) # start training
Some important keywords are:

- **gpus**: The number of GPUs that should be used. If only CPUs are available,
set ``gpus=None``. Depending on the operating system, the GPUs may have to be further
specified beforehand via ``os.environ`` or other ways. There are more possibilities to
specify the used device, see the above-mentioned documentation.
- **devices**: The number of devices that should be used.
- **accelerator**: On what kind of device the network will be trained.
Usually, one needs at least one GPU to train a neural network sufficiently well. If only
CPUs are available, set ``accelerator="cpu"``.
Depending on the operating system, the GPUs may have to be further
specified beforehand via ``os.environ`` or other ways.
There are further possibilities to specify the used device;
see the above-mentioned documentation and the device-agnostic sketch after this list.
- **max_steps**: The maximum number of training iterations. In each iteration
all defined training conditions will be evaluated
(e.g. points sampled, model output computed, residuals evaluated, etc.) and a
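
A device-agnostic variant of the trainer setup discussed above (a sketch; it uses only the
standard ``pytorch_lightning.Trainer`` keywords shown in this diff plus
``torch.cuda.is_available()``):

.. code-block:: python

    import torch
    import pytorch_lightning as pl

    # Fall back to the CPU when no GPU is visible, mirroring the old
    # "gpus='-1' if torch.cuda.is_available() else None" pattern from the 1.x notebooks.
    trainer = pl.Trainer(devices=1,
                         accelerator="gpu" if torch.cuda.is_available() else "cpu",
                         max_steps=4000,
                         check_val_every_n_epoch=10)
    trainer.fit(solver)  # `solver` is defined as in the surrounding tutorial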
18 changes: 8 additions & 10 deletions examples/deeponet/inverse_ode.ipynb
@@ -258,13 +258,12 @@
"\n",
"solver = tp.solver.Solver([data_condition], optimizer_setting=optim)\n",
"\n",
"trainer = pl.Trainer(gpus='-1' if torch.cuda.is_available() else None,\n",
"trainer = pl.Trainer(devices=1, accelerator=\"gpu\",\n",
" num_sanity_val_steps=0,\n",
" benchmark=True,\n",
" max_steps=10000,\n",
" logger=False,\n",
" checkpoint_callback=False\n",
" )\n",
" max_steps=10000, \n",
" logger=False, \n",
" enable_checkpointing=False)\n",
"\n",
"trainer.fit(solver)"
]
@@ -327,13 +326,12 @@
"\n",
"solver = tp.solver.Solver([data_condition], optimizer_setting=optim)\n",
"\n",
"trainer = pl.Trainer(gpus='-1' if torch.cuda.is_available() else None,\n",
"trainer = pl.Trainer(devices=1, accelerator=\"gpu\",\n",
" num_sanity_val_steps=0,\n",
" benchmark=True,\n",
" max_steps=1000,\n",
" logger=False,\n",
" checkpoint_callback=False\n",
" )\n",
" max_steps=1000, \n",
" logger=False, \n",
" enable_checkpointing=False)\n",
"\n",
"trainer.fit(solver)"
]
18 changes: 9 additions & 9 deletions examples/deeponet/ode.ipynb
@@ -191,13 +191,12 @@
"\n",
"solver = tp.solver.Solver([ode_cond, initial_cond], optimizer_setting=optim)\n",
"\n",
"trainer = pl.Trainer(gpus='-1' if torch.cuda.is_available() else None,\n",
"trainer = pl.Trainer(devices=1, accelerator=\"gpu\",\n",
" num_sanity_val_steps=0,\n",
" benchmark=True,\n",
" max_steps=5000,\n",
" logger=False,\n",
" checkpoint_callback=False\n",
" )\n",
" max_steps=5000, \n",
" logger=False, \n",
" enable_checkpointing=False)\n",
"\n",
"trainer.fit(solver)"
]
@@ -284,11 +283,12 @@
"\n",
"solver = tp.solver.Solver(train_conditions=[ode_cond, initial_cond], optimizer_setting=optim)\n",
"\n",
"trainer = pl.Trainer(gpus=1,\n",
" max_steps=3000, \n",
" logger=False,\n",
"trainer = pl.Trainer(devices=1, accelerator=\"gpu\",\n",
" num_sanity_val_steps=0,\n",
" benchmark=True,\n",
" checkpoint_callback=False)\n",
" max_steps=3000, \n",
" logger=False, \n",
" enable_checkpointing=False)\n",
" \n",
"trainer.fit(solver)"
]
25 changes: 12 additions & 13 deletions examples/deeponet/oscillator.ipynb
@@ -8,8 +8,8 @@
"source": [
"import os\n",
"os.environ[\"CUDA_VISIBLE_DEVICES\"] = \"3\"\n",
"import torch\n",
"import torchphysics as tp\n",
"import torch\n",
"import pytorch_lightning as pl"
]
},
@@ -219,13 +219,12 @@
"source": [
"solver = tp.solver.Solver([ode_cond])\n",
"\n",
"trainer = pl.Trainer(gpus='-1' if torch.cuda.is_available() else None,\n",
"trainer = pl.Trainer(devices=1, accelerator=\"gpu\",\n",
" num_sanity_val_steps=0,\n",
" benchmark=True,\n",
" max_steps=2000,\n",
" logger=False,\n",
" checkpoint_callback=False\n",
" )\n",
" max_steps=30000, \n",
" logger=False, \n",
" enable_checkpointing=False)\n",
"\n",
"trainer.fit(solver)"
]
@@ -311,12 +310,12 @@
"\n",
"solver = tp.solver.Solver(train_conditions=[ode_cond], optimizer_setting=optim)\n",
"\n",
"trainer = pl.Trainer(gpus=1,\n",
" max_steps=2500, \n",
" logger=False,\n",
"trainer = pl.Trainer(devices=1, accelerator=\"gpu\",\n",
" num_sanity_val_steps=0,\n",
" benchmark=True,\n",
" checkpoint_callback=False)\n",
" \n",
" max_steps=2500, \n",
" logger=False, \n",
" enable_checkpointing=False)\n",
"trainer.fit(solver)"
]
},
@@ -454,10 +453,10 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.15"
"version": "3.11.7"
},
"orig_nbformat": 4
},
"nbformat": 4,
"nbformat_minor": 2
}
}
19 changes: 10 additions & 9 deletions examples/deepritz/corner_pde.ipynb
@@ -161,12 +161,12 @@
"source": [
"import pytorch_lightning as pl\n",
"\n",
"trainer = pl.Trainer(gpus=1, # or None if CPU is used\n",
" max_steps=5000, # number of training steps\n",
" logger=False,\n",
"trainer = pl.Trainer(devices=1, accelerator=\"gpu\",\n",
" num_sanity_val_steps=0,\n",
" benchmark=True,\n",
" checkpoint_callback=False)\n",
" \n",
" max_steps=5000, \n",
" logger=False, \n",
" enable_checkpointing=False)\n",
"trainer.fit(solver)"
]
},
@@ -245,11 +245,12 @@
"bound_cond.sampler = bound_cond.sampler.make_static() \n",
"solver = tp.solver.Solver(train_conditions=[bound_cond, pde_cond], optimizer_setting=optim)\n",
"\n",
"trainer = pl.Trainer(gpus=1,\n",
" max_steps=3000, # number of training steps\n",
" logger=False,\n",
"trainer = pl.Trainer(devices=1, accelerator=\"gpu\",\n",
" num_sanity_val_steps=0,\n",
" benchmark=True,\n",
" checkpoint_callback=False)\n",
" max_steps=3000, \n",
" logger=False, \n",
" enable_checkpointing=False)\n",
" \n",
"trainer.fit(solver)"
]
18 changes: 9 additions & 9 deletions examples/deepritz/poisson-equation.ipynb
@@ -14,8 +14,11 @@
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"os.environ[\"CUDA_VISIBLE_DEVICES\"] = \"0\"\n",
"import torch\n",
"import torchphysics as tp"
"import torchphysics as tp\n",
"import pytorch_lightning as pl"
]
},
{
@@ -238,15 +241,12 @@
"solver = tp.solver.Solver([pde_condition,\n",
" boundary_condition])\n",
"\n",
"import pytorch_lightning as pl\n",
"import os\n",
"os.environ[\"CUDA_VISIBLE_DEVICES\"] = \"0\"\n",
"\n",
"trainer = pl.Trainer(gpus=None, # or None for CPU\n",
" max_steps=2000,\n",
" logger=False,\n",
"trainer = pl.Trainer(devices=1, accelerator=\"gpu\",\n",
" num_sanity_val_steps=0,\n",
" benchmark=True,\n",
" checkpoint_callback=False)\n",
" max_steps=2000, \n",
" logger=False, \n",
" enable_checkpointing=False)\n",
"trainer.fit(solver)"
]
},
263 changes: 14 additions & 249 deletions examples/oscillation/duffing_nonlinear.ipynb

Large diffs are not rendered by default.

