
Run model from hydra setup #16

Merged
merged 13 commits into main from hydra on Apr 11, 2024
Conversation

@AdrianM0 AdrianM0 (Collaborator) commented Apr 11, 2024

Squash and merge for clean commits and progress tracking would be nice?

This was linked to issues Apr 11, 2024
@AdrianM0 AdrianM0 linked an issue Apr 11, 2024 that may be closed by this pull request
@AdrianM0 AdrianM0 marked this pull request as ready for review April 11, 2024 10:59
@kjappelbaum kjappelbaum (Collaborator) replied:

> Squash and merge for clean commits and progress tracking would be nice?

was this a real question or a statement?

offline: False
project: "molbind"
entity: "adrianmirza"
entity: "wandb_username"

not sure if this is the ideal default. Perhaps nothing, or an environment variable, would be even better.
I say "better" because hard-coding a username here can lead to silent or hard-to-follow bugs.

But this is a minor point.
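A minimal sketch of the environment-variable approach the reviewer suggests, assuming the standard `WANDB_ENTITY` variable that wandb itself honors (the function name and config key here are hypothetical, not from the PR):

```python
import os

# Sketch: resolve the W&B entity from an explicit config value first,
# then from the WANDB_ENTITY environment variable, else None. A None
# result fails loudly at wandb login time instead of silently logging
# runs to a hard-coded user's account.
def resolve_wandb_entity(config_entity=None):
    """Prefer an explicit config value, then the environment, else None."""
    return config_entity or os.environ.get("WANDB_ENTITY")
```

With this, the Hydra config could leave `entity` unset for local runs and each contributor would set `WANDB_ENTITY` in their shell.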


rootutils.setup_root(__file__, indicator=".project-root", pythonpath=True)

didn't know this module!

what didn't work without this?
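For context on the reviewer's question: `rootutils.setup_root` searches upward from the given file for the `.project-root` marker and puts that directory on `sys.path`, so package imports and config paths resolve no matter where the script is launched from. A simplified stdlib sketch of that idea (the real library also sets a `PROJECT_ROOT` environment variable and can change the working directory):

```python
import sys
from pathlib import Path

# Sketch: walk up from a file until a directory containing the
# indicator file is found; that directory is treated as the project root.
def find_root(start: str, indicator: str = ".project-root") -> Path:
    resolved = Path(start).resolve()
    for candidate in [resolved, *resolved.parents]:
        if (candidate / indicator).exists():
            return candidate
    raise FileNotFoundError(f"no {indicator} found above {start}")

# Sketch: prepend the root to sys.path so absolute imports of the
# project package work from any working directory.
def setup_root(start: str, indicator: str = ".project-root") -> Path:
    root = find_root(start, indicator)
    if str(root) not in sys.path:
        sys.path.insert(0, str(root))
    return root
```

Without something like this, running `python src/train.py` from a subdirectory would typically break imports of the project package.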


device_count = torch.cuda.device_count()
trainer = L.Trainer(
max_epochs=100,

you maybe want to also make max_epochs and accelerator customizable (for debugging, mps on a MacBook might be fine, too)
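One way to act on this suggestion, sketched with hypothetical `config.trainer.*` keys (not the PR's actual schema); `"auto"` lets Lightning pick cuda, mps, or cpu:

```python
from types import SimpleNamespace

# Sketch: read trainer settings from the Hydra config with sensible
# fallbacks instead of hard-coding them, so a debug run can override
# max_epochs or force the mps accelerator from the command line.
def trainer_kwargs(config) -> dict:
    trainer_cfg = getattr(config, "trainer", SimpleNamespace())
    return {
        "max_epochs": getattr(trainer_cfg, "max_epochs", 100),
        "accelerator": getattr(trainer_cfg, "accelerator", "auto"),
        "devices": getattr(trainer_cfg, "devices", "auto"),
    }
```

The call site would then become something like `L.Trainer(**trainer_kwargs(config))`.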

data_modalities=train_modality_data,
batch_size=config.data.batch_size,
shuffle=True,
num_workers=1,

if you have a more complex loader, you might be happy about having more workers
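A small sketch of a CPU-aware default for `num_workers` along the lines of this comment (the function and cap are illustrative assumptions, not from the PR):

```python
import os

# Sketch: derive num_workers from available CPUs instead of pinning it
# at 1. Heavier per-sample preprocessing benefits from more workers;
# passing an explicit value (e.g. 0 for easier debugging) still wins.
def default_num_workers(requested=None, cap=8):
    if requested is not None:
        return requested
    return min(os.cpu_count() or 1, cap)
```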

dataset=data_modalities[modality],
modality=modality,
central_modality=central_modality,
context_length=256,

should this be fixed here (rather than configurable)?
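If the answer is "configurable", the same config-with-default pattern applies; `config.data.context_length` is an assumed key, with 256 kept as the fallback from the code under review:

```python
# Sketch: thread context_length through the data config rather than
# fixing it at 256 in the dataset constructor, so different modalities
# or experiments can override it without code changes.
def get_context_length(config, default=256):
    data_cfg = getattr(config, "data", None)
    if data_cfg is None:
        return default
    return getattr(data_cfg, "context_length", default)
```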

@kjappelbaum kjappelbaum left a comment
looks cool, nice progress 🚀

@AdrianM0 AdrianM0 merged commit 0d5a79b into main Apr 11, 2024
0 of 11 checks passed
@AdrianM0 AdrianM0 deleted the hydra branch April 12, 2024 08:08
Successfully merging this pull request may close these issues.

- StringDataset assumes SMILES as central_modality
- Wrong dimensionality in example
- Migrate config to hydra
2 participants