Unified OCP Trainer #520

Merged: 67 commits, Jan 5, 2024

Commits
9599f42
initial single trainer commit
mshuaibii Jul 7, 2023
68afdeb
more general evaluator
mshuaibii Jul 11, 2023
3c62f4a
backwards tasks
mshuaibii Jul 11, 2023
569375c
debug config
mshuaibii Jul 11, 2023
2e284cc
predict support, evaluator cleanup
mshuaibii Jul 12, 2023
ba97e97
cleanup, remove hpo
mshuaibii Jul 12, 2023
8af0f90
loss bugfix, cleanup hpo
mshuaibii Jul 13, 2023
d452675
backwards compatibility for old configs
mshuaibii Jul 13, 2023
adba02c
backwards breaking fix
mshuaibii Jul 14, 2023
8bac184
eval fix
mshuaibii Jul 14, 2023
4961bb1
remove old imports
janiceblue Jul 17, 2023
99eb482
default for get task metrics
janiceblue Jul 18, 2023
a269544
rebase cleanup
mshuaibii Jul 18, 2023
448c567
config refactor support
mshuaibii Jul 19, 2023
12ec31f
Merge branch 'main' into ocp_trainer
mshuaibii Jul 19, 2023
15fdc56
black
mshuaibii Jul 19, 2023
c47111f
reorganize free_atoms
mshuaibii Jul 20, 2023
eacd66b
output config fix
mshuaibii Jul 20, 2023
024bc86
config naming
mshuaibii Jul 20, 2023
5f47f8a
support loss mean over all dimensions
janiceblue Jul 21, 2023
0a7d815
config backwards support
mshuaibii Jul 21, 2023
73fba56
equiformer can now run
janiceblue Jul 25, 2023
efd956d
add example equiformer config
janiceblue Jul 26, 2023
4477f90
handle arbitrary torch loss fns
mshuaibii Jul 27, 2023
0bd8935
correct primary metric def
mshuaibii Aug 1, 2023
ac13093
update s2ef portion of OCP tutorial
mshuaibii Aug 1, 2023
929c2fb
add type annotations
mshuaibii Aug 9, 2023
f7b76ec
cleanup
mshuaibii Aug 9, 2023
55e71b3
Type annotations
r-barnes Aug 9, 2023
4b5e2a0
Abstract out _get_timestamp
r-barnes Aug 9, 2023
32ef93c
don't double ids when saving prediction results
janiceblue Aug 31, 2023
18f77dc
clip_grad_norm should be float
janiceblue Sep 7, 2023
49076b5
Merge branch 'main' into ocp_trainer
mshuaibii Oct 27, 2023
c1d06aa
model compatibility
mshuaibii Oct 27, 2023
7fa3870
evaluator test fix
mshuaibii Oct 27, 2023
4371bfa
lint
mshuaibii Oct 27, 2023
1abf998
remove old models
mshuaibii Oct 27, 2023
8395a3a
pass calculator test
mshuaibii Nov 2, 2023
a49bb4a
remove DP, cleanup
mshuaibii Nov 3, 2023
1f5a6be
remove comments
mshuaibii Nov 3, 2023
72a90d7
eqv2 support
mshuaibii Nov 3, 2023
396c1e7
Merge branch 'main' into ocp_trainer
mshuaibii Nov 3, 2023
2a82f56
odac energy trainer merge fix
mshuaibii Nov 3, 2023
843fbbd
is2re support
mshuaibii Nov 6, 2023
4566c23
cleanup
mshuaibii Nov 6, 2023
92336ec
config cleanup
mshuaibii Nov 6, 2023
371ad84
oc22 support
mshuaibii Nov 7, 2023
de2a6ad
introduce collater to handle otf_graph arg
mshuaibii Nov 7, 2023
5df5120
organize methods
mshuaibii Nov 7, 2023
2f793a8
include parent in targets
mshuaibii Nov 7, 2023
26179df
shape flexibility
mshuaibii Nov 7, 2023
cc6c6c2
cleanup debug lines
mshuaibii Nov 8, 2023
d2bdc6e
cleanup
mshuaibii Nov 8, 2023
9984ae7
normalizer bugfix for new configs
mshuaibii Nov 14, 2023
d278b6e
calculator normalization fix, backwards support for ckpt loads
mshuaibii Nov 17, 2023
caf611f
New weight_decay config -- defaults in BaseModel, extendable by other…
abhshkdz Dec 11, 2023
e7e2282
Doc update
abhshkdz Dec 11, 2023
af06723
Throw a warning instead of a hard error for optim.weight_decay
abhshkdz Dec 11, 2023
ccda09f
EqV2 readme update
abhshkdz Dec 11, 2023
e11dba6
Config update
abhshkdz Dec 11, 2023
9f86d2e
don't need transform on inference lmdbs with no ground truth
janiceblue Dec 20, 2023
54d606e
Merge branch 'main' into ocp_trainer
mshuaibii Jan 4, 2024
e8c1c6f
remove debug configs
mshuaibii Jan 4, 2024
d3d7e1c
ocp-2.0 example.yml
mshuaibii Jan 4, 2024
ddac40a
take out ocpdataparallel from fit.py
janiceblue Jan 4, 2024
3ab12b4
linter
janiceblue Jan 5, 2024
bc7b5cf
update tutorials
mshuaibii Jan 5, 2024
12 changes: 6 additions & 6 deletions DATASET.md
@@ -340,7 +340,7 @@ Please consider citing the following paper in any research manuscript using the



```
```bibtex
@article{ocp_dataset,
author = {Chanussot*, Lowik and Das*, Abhishek and Goyal*, Siddharth and Lavril*, Thibaut and Shuaibi*, Muhammed and Riviere, Morgane and Tran, Kevin and Heras-Domingo, Javier and Ho, Caleb and Hu, Weihua and Palizhati, Aini and Sriram, Anuroop and Wood, Brandon and Yoon, Junwoong and Parikh, Devi and Zitnick, C. Lawrence and Ulissi, Zachary},
title = {Open Catalyst 2020 (OC20) Dataset and Community Challenges},
@@ -462,12 +462,12 @@ The Open Catalyst 2022 (OC22) dataset is licensed under a [Creative Commons Attr
Please consider citing the following paper in any research manuscript using the OC22 dataset:


```
```bibtex
@article{oc22_dataset,
author = {Tran*, Richard and Lan*, Janice and Shuaibi*, Muhammed and Wood*, Brandon and Goyal*, Siddharth and Das, Abhishek and Heras-Domingo, Javier and Kolluru, Adeesh and Rizvi, Ammar and Shoghi, Nima and Sriram, Anuroop and Ulissi, Zachary and Zitnick, C. Lawrence},
title = {The Open Catalyst 2022 (OC22) Dataset and Challenges for Oxide Electrocatalysis},
year = {2022},
journal={arXiv preprint arXiv:2206.08917},
title = {The Open Catalyst 2022 (OC22) dataset and challenges for oxide electrocatalysts},
journal = {ACS Catalysis},
year={2023},
}
```

@@ -513,7 +513,7 @@ The OpenDAC 2023 (ODAC23) dataset is licensed under a [Creative Commons Attribut
Please consider citing the following paper in any research manuscript using the ODAC23 dataset:


```
```bibtex
@article{odac23_dataset,
author = {Anuroop Sriram and Sihoon Choi and Xiaohan Yu and Logan M. Brabson and Abhishek Das and Zachary Ulissi and Matt Uyttendaele and Andrew J. Medford and David S. Sholl},
title = {The Open DAC 2023 Dataset and Challenges for Sorbent Discovery in Direct Air Capture},
20 changes: 13 additions & 7 deletions README.md
@@ -11,28 +11,34 @@ library of state-of-the-art machine learning algorithms for catalysis.
</div>

It provides training and evaluation code for tasks and models that take arbitrary
chemical structures as input to predict energies / forces / positions, and can
be used as a base scaffold for research projects. For an overview of tasks, data, and metrics, please read our papers:
chemical structures as input to predict energies / forces / positions / stresses,
and can be used as a base scaffold for research projects. For an overview of
tasks, data, and metrics, please read our papers:
- [OC20](https://arxiv.org/abs/2010.09990)
- [OC22](https://arxiv.org/abs/2206.08917)
- [ODAC23](https://arxiv.org/abs/2311.00341)

Projects developed on `ocp`:
Projects and models built on `ocp`:

- CGCNN [[`arXiv`](https://arxiv.org/abs/1710.10324)] [[`code`](https://github.com/Open-Catalyst-Project/ocp/blob/main/ocpmodels/models/cgcnn.py)]
- SchNet [[`arXiv`](https://arxiv.org/abs/1706.08566)] [[`code`](https://github.com/Open-Catalyst-Project/ocp/blob/main/ocpmodels/models/schnet.py)]
- DimeNet [[`arXiv`](https://arxiv.org/abs/2003.03123)] [[`code`](https://github.com/Open-Catalyst-Project/ocp/blob/main/ocpmodels/models/dimenet.py)]
- ForceNet [[`arXiv`](https://arxiv.org/abs/2103.01436)] [[`code`](https://github.com/Open-Catalyst-Project/ocp/blob/main/ocpmodels/models/forcenet.py)]
- DimeNet++ [[`arXiv`](https://arxiv.org/abs/2011.14115)] [[`code`](https://github.com/Open-Catalyst-Project/ocp/blob/main/ocpmodels/models/dimenet_plus_plus.py)]
- SpinConv [[`arXiv`](https://arxiv.org/abs/2106.09575)] [[`code`](https://github.com/Open-Catalyst-Project/ocp/blob/main/ocpmodels/models/spinconv.py)]
- GemNet-dT [[`arXiv`](https://arxiv.org/abs/2106.08903)] [[`code`](https://github.com/Open-Catalyst-Project/ocp/tree/main/ocpmodels/models/gemnet)]
- PaiNN [[`arXiv`](https://arxiv.org/abs/2102.03150)] [[`code`](https://github.com/Open-Catalyst-Project/ocp/tree/main/ocpmodels/models/painn)]
- Graph Parallelism [[`arXiv`](https://arxiv.org/abs/2203.09697)] [[`code`](https://github.com/Open-Catalyst-Project/ocp/tree/main/ocpmodels/models/gemnet_gp)]
- GemNet-OC [[`arXiv`](https://arxiv.org/abs/2204.02782)] [[`code`](https://github.com/Open-Catalyst-Project/ocp/tree/main/ocpmodels/models/gemnet_oc)]
- SCN [[`arXiv`](https://arxiv.org/abs/2206.14331)] [[`code`](https://github.com/Open-Catalyst-Project/ocp/tree/main/ocpmodels/models/scn)]
- AdsorbML [[`arXiv`](https://arxiv.org/abs/2211.16486)] [[`code`](https://github.com/open-catalyst-project/adsorbml)]
- eSCN [[`arXiv`](https://arxiv.org/abs/2302.03655)] [[`code`](https://github.com/Open-Catalyst-Project/ocp/tree/main/ocpmodels/models/escn)]
- EquiformerV2 [[`arXiv`](https://arxiv.org/abs/2306.12059)] [[`code`](https://github.com/Open-Catalyst-Project/ocp/tree/main/ocpmodels/models/equiformer_v2)]

Older model implementations that are no longer supported:

- CGCNN [[`arXiv`](https://arxiv.org/abs/1710.10324)] [[`code`](https://github.com/Open-Catalyst-Project/ocp/blob/e7a8745eb307e8a681a1aa9d30c36e8c41e9457e/ocpmodels/models/cgcnn.py)]
- DimeNet [[`arXiv`](https://arxiv.org/abs/2003.03123)] [[`code`](https://github.com/Open-Catalyst-Project/ocp/blob/e7a8745eb307e8a681a1aa9d30c36e8c41e9457e/ocpmodels/models/dimenet.py)]
- SpinConv [[`arXiv`](https://arxiv.org/abs/2106.09575)] [[`code`](https://github.com/Open-Catalyst-Project/ocp/blob/e7a8745eb307e8a681a1aa9d30c36e8c41e9457e/ocpmodels/models/spinconv.py)]
- ForceNet [[`arXiv`](https://arxiv.org/abs/2103.01436)] [[`code`](https://github.com/Open-Catalyst-Project/ocp/blob/e7a8745eb307e8a681a1aa9d30c36e8c41e9457e/ocpmodels/models/forcenet.py)]


## Installation

See [installation instructions](https://github.com/Open-Catalyst-Project/ocp/blob/main/INSTALL.md).
32 changes: 0 additions & 32 deletions configs/is2re/100k/cgcnn/cgcnn.yml

This file was deleted.

32 changes: 0 additions & 32 deletions configs/is2re/10k/cgcnn/cgcnn.yml

This file was deleted.

32 changes: 0 additions & 32 deletions configs/is2re/all/cgcnn/cgcnn.yml

This file was deleted.

5 changes: 3 additions & 2 deletions configs/is2re/all/painn/painn_h1024_bs8x4.yml
@@ -20,7 +20,9 @@ optim:
load_balancing: atoms
num_workers: 2
optimizer: AdamW
optimizer_params: {"amsgrad": True}
optimizer_params:
amsgrad: True
weight_decay: 0 # 2e-6 (TF weight decay) / 1e-4 (lr) = 2e-2
lr_initial: 1.e-4
scheduler: ReduceLROnPlateau
mode: min
@@ -31,4 +33,3 @@
ema_decay: 0.999
clip_grad_norm: 10
loss_energy: mae
weight_decay: 0 # 2e-6 (TF weight decay) / 1e-4 (lr) = 2e-2
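The same change repeats across the config diffs in this PR: `weight_decay` moves from the top level of `optim` into `optimizer_params`, which is forwarded directly to the optimizer constructor. A minimal sketch of the updated block, using the values from the diff above:

```yaml
# New-style optimizer settings: weight_decay now lives under
# optimizer_params (passed to the optimizer constructor)
# rather than as a top-level `optim` key.
optim:
  optimizer: AdamW
  optimizer_params:
    amsgrad: True
    weight_decay: 0  # 2e-6 (TF weight decay) / 1e-4 (lr) = 2e-2
  lr_initial: 1.e-4
```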
7 changes: 4 additions & 3 deletions configs/is2re/example.yml
@@ -95,9 +95,10 @@ optim:
# Learning rate. Passed as an `lr` argument when initializing the optimizer.
lr_initial: 1.e-4
# Additional args needed to initialize the optimizer.
optimizer_params: {"amsgrad": True}
# Weight decay to use. Passed as an argument when initializing the optimizer.
weight_decay: 0
optimizer_params:
amsgrad: True
# Weight decay to use. Passed as an argument when initializing the optimizer.
weight_decay: 0
# Learning rate scheduler. Should work for any scheduler specified in
# in torch.optim.lr_scheduler: https://pytorch.org/docs/stable/optim.html
# as long as the relevant args are specified here.
5 changes: 3 additions & 2 deletions configs/oc22/is2re/painn/painn.yml
@@ -20,7 +20,9 @@ optim:
load_balancing: atoms
num_workers: 2
optimizer: AdamW
optimizer_params: {"amsgrad": True}
optimizer_params:
amsgrad: True
weight_decay: 0 # 2e-6 (TF weight decay) / 1e-4 (lr) = 2e-2
lr_initial: 1.e-4
scheduler: ReduceLROnPlateau
mode: min
@@ -31,4 +33,3 @@
ema_decay: 0.999
clip_grad_norm: 10
loss_energy: mae
weight_decay: 0 # 2e-6 (TF weight decay) / 1e-4 (lr) = 2e-2
5 changes: 3 additions & 2 deletions configs/oc22/s2ef/gemnet-oc/gemnet_oc.yml
@@ -65,7 +65,9 @@ optim:
num_workers: 2
lr_initial: 5.e-4
optimizer: AdamW
optimizer_params: {"amsgrad": True}
optimizer_params:
amsgrad: True
weight_decay: 0 # 2e-6 (TF weight decay) / 1e-4 (lr) = 2e-2
warmup_steps: -1 # don't warm-up the learning rate
# warmup_factor: 0.2
lr_gamma: 0.8
@@ -81,4 +83,3 @@
max_epochs: 80
ema_decay: 0.999
clip_grad_norm: 10
weight_decay: 0 # 2e-6 (TF weight decay) / 1e-4 (lr) = 2e-2
5 changes: 3 additions & 2 deletions configs/oc22/s2ef/gemnet-oc/gemnet_oc_finetune.yml
@@ -65,7 +65,9 @@ optim:
num_workers: 2
lr_initial: 1.e-4
optimizer: AdamW
optimizer_params: {"amsgrad": True}
optimizer_params:
amsgrad: True
weight_decay: 0 # 2e-6 (TF weight decay) / 1e-4 (lr) = 2e-2
warmup_steps: -1 # don't warm-up the learning rate
# warmup_factor: 0.2
lr_gamma: 0.8
@@ -94,7 +96,6 @@ optim:
max_epochs: 15
ema_decay: 0.999
clip_grad_norm: 10
weight_decay: 0 # 2e-6 (TF weight decay) / 1e-4 (lr) = 2e-2
loss_energy: mae
loss_force: l2mae
force_coefficient: 100
5 changes: 3 additions & 2 deletions configs/oc22/s2ef/gemnet-oc/gemnet_oc_oc20_oc22.yml
@@ -65,15 +65,16 @@ optim:
num_workers: 2
lr_initial: 5.e-4
optimizer: AdamW
optimizer_params: {"amsgrad": True}
optimizer_params:
amsgrad: True
weight_decay: 0 # 2e-6 (TF weight decay) / 1e-4 (lr) = 2e-2
scheduler: ReduceLROnPlateau
mode: min
factor: 0.8
patience: 3
max_epochs: 80
ema_decay: 0.999
clip_grad_norm: 10
weight_decay: 0 # 2e-6 (TF weight decay) / 1e-4 (lr) = 2e-2
loss_energy: mae
loss_force: atomwisel2
force_coefficient: 1
(changes to an additional config file; filename not captured in this view)
@@ -67,15 +67,16 @@ optim:
num_workers: 2
lr_initial: 5.e-4
optimizer: AdamW
optimizer_params: {"amsgrad": True}
optimizer_params:
amsgrad: True
weight_decay: 0 # 2e-6 (TF weight decay) / 1e-4 (lr) = 2e-2
scheduler: ReduceLROnPlateau
mode: min
factor: 0.8
patience: 3
max_epochs: 80
ema_decay: 0.999
clip_grad_norm: 10
weight_decay: 0 # 2e-6 (TF weight decay) / 1e-4 (lr) = 2e-2
loss_energy: mae
loss_force: atomwisel2
force_coefficient: 1
5 changes: 3 additions & 2 deletions configs/oc22/s2ef/painn/painn.yml
Expand Up @@ -22,7 +22,9 @@ optim:
eval_every: 5000
num_workers: 2
optimizer: AdamW
optimizer_params: {"amsgrad": True}
optimizer_params:
amsgrad: True
weight_decay: 0 # 2e-6 (TF weight decay) / 1e-4 (lr) = 2e-2
lr_initial: 1.e-4
warmup_steps: -1 # don't warm-up the learning rate
# warmup_factor: 0.2
Expand All @@ -39,4 +41,3 @@ optim:
max_epochs: 80
ema_decay: 0.999
clip_grad_norm: 10
weight_decay: 0 # 2e-6 (TF weight decay) / 1e-4 (lr) = 2e-2
43 changes: 0 additions & 43 deletions configs/oc22/s2ef/spinconv/spinconv.yml

This file was deleted.

36 changes: 0 additions & 36 deletions configs/oc22/s2ef/spinconv/spinconv_finetune.yml

This file was deleted.
