
Allow attaching past evaluations that are outside of the current parameter range #185

Merged: 13 commits into main, Jun 11, 2024

Conversation

@AngelFP (Member) commented Mar 15, 2024

This enables, among other things, resuming an old exploration with an updated range of the varying parameters.

Previously, trying to add trials outside of the current design space when using an AxServiceGenerator would fail because the AxClient would not allow it. In this PR, a workaround is used when fit_out_of_design=True so that trials outside of the current range still contribute to the surrogate model.

When fit_out_of_design=False (default), the out-of-range trials will be ignored by the generator and the user will be informed about it.
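The intended behavior can be illustrated with a minimal, self-contained sketch. Note that the function name `filter_trials` and the dict-based trial format are purely illustrative stand-ins, not the actual optimas API:

```python
import warnings


def filter_trials(trials, var_range, fit_out_of_design=False):
    """Split past trials by the current range and apply the policy.

    `trials` is a list of {"x": value} dicts and `var_range` a (lo, hi)
    tuple; both are simplified stand-ins for the real optimas objects.
    """
    lo, hi = var_range
    in_range = [t for t in trials if lo <= t["x"] <= hi]
    out_of_range = [t for t in trials if not lo <= t["x"] <= hi]
    if out_of_range and not fit_out_of_design:
        # Default behavior: ignore out-of-range trials and inform the user.
        warnings.warn(
            f"Ignoring {len(out_of_range)} trial(s) outside the range "
            f"[{lo}, {hi}]; pass fit_out_of_design=True to include them."
        )
        return in_range
    # With fit_out_of_design=True, all past trials feed the surrogate model.
    return in_range + out_of_range
```

With `fit_out_of_design=True`, a trial evaluated under an old, wider range is kept and contributes to the model; with the default, it is dropped with a warning.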

@AngelFP added the `enhancement` label Mar 15, 2024
@delaossa delaossa self-assigned this Jun 5, 2024
@delaossa (Collaborator) left a comment:

Hello! I tried this today and it worked, but I found some issues (which might not be directly related with this PR):

  • I could resume the optimization with changes in the parameter ranges without problems. However, I then realized that I wanted to use fit_out_of_design=True (instead of the default, which is False), so I cancelled the run and started over. This time the history file was different and contained the cancelled simulations from the previous resume attempt. I had to manually remove these entries from the history file, rename it, and revert the changes to the parameter ranges in exploration_parameters.json to start over properly.
  • Building models with AxModelManager from the history file does not work in this case at the moment; it complains about points out of range. I guess we will have to implement the same workaround there too.
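The check that would need replicating in AxModelManager can be sketched roughly as follows. The class and method names below only mirror the discussion; the implementation is purely illustrative, not the real optimas code:

```python
class ModelManagerSketch:
    """Illustrative stand-in for AxModelManager's range handling."""

    def __init__(self, var_range, fit_out_of_design=False):
        self.lo, self.hi = var_range
        self.fit_out_of_design = fit_out_of_design

    def fit(self, history):
        """'Fit' a model from history points, applying the range policy."""
        out = [x for x in history if not self.lo <= x <= self.hi]
        if out and not self.fit_out_of_design:
            # Mimics Ax complaining about points outside the design space.
            raise ValueError(
                f"{len(out)} point(s) out of range [{self.lo}, {self.hi}]"
            )
        # A real manager would fit a surrogate model here.
        return history
```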

@delaossa (Collaborator) commented Jun 6, 2024

Regarding my first comment, I wonder if it would be a good idea to automatically back up the history file from which the exploration run is resuming. Previous versions had a feature like this, if I recall correctly.

About the other point, I just added a commit where the same strategy for considering (or not) trials with parameters outside the design range is replicated in AxModelManager.

@delaossa (Collaborator) left a comment:

PR approved from my side after the addition of one commit.
Please check before pulling.

@delaossa (Collaborator) commented Jun 7, 2024

A unit test is failing here:

FAILED tests/test_ax_generators.py::test_ax_single_fidelity_resume - AssertionError: assert 'Sobol' == 'GPEI'

This happens when the exploration resumes with fit_out_of_design=True.
It is expected that, when trials with parameters outside the design range are added, the next generated point uses the model. However, the generation method is still 'Sobol' in this case.

Could the expected behavior be in conflict with the changes implemented here? -> #207
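For context, an Ax generation strategy typically starts with a quasi-random Sobol stage and switches to a model-based stage (such as GPEI) once enough trials have been completed; the symptom above is consistent with attached (resumed) trials not being counted toward that transition. A toy illustration of this failure mode — none of the names below are the actual Ax API:

```python
class GenerationStrategySketch:
    """Toy two-stage strategy: Sobol until enough trials, then GPEI."""

    def __init__(self, sobol_trials=4):
        self.sobol_trials = sobol_trials
        self.num_counted = 0

    def attach_trial(self, counts_toward_transition=True):
        # If attached (resumed) trials are not counted, the strategy stays
        # in the Sobol stage longer than expected -- the symptom seen in
        # test_ax_single_fidelity_resume.
        if counts_toward_transition:
            self.num_counted += 1

    @property
    def current_method(self):
        return "Sobol" if self.num_counted < self.sobol_trials else "GPEI"
```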

@delaossa (Collaborator) commented:

I have made additional checks and it seems that this issue is always present when one resumes an optimas run. So it is not particular to this PR, but rather related to #207, or perhaps even older. However, only the new unit test associated with this PR is failing, because it is the only one that checks the generation method of new trials after resuming.

@AngelFP (Member, Author) commented Jun 11, 2024

> Regarding my first comment, I wonder if it would be a good idea to automatically back up the history file from which the exploration run is resuming. Previous versions had a feature like this, if I recall correctly.
>
> About the other point, I just added a commit where the same strategy for considering (or not) trials with parameters outside the design range is replicated in AxModelManager.

Thanks for adding fit_out_of_design to the AxModelManager. I'm not a fan of how we currently have to duplicate some code in the generator and the AxModelManager, so we should find a way of unifying this in a future PR.

I will have a look at the other issues.

@AngelFP (Member, Author) commented Jun 11, 2024

> A unit test is failing here:
>
> FAILED tests/test_ax_generators.py::test_ax_single_fidelity_resume - AssertionError: assert 'Sobol' == 'GPEI'
>
> This happens when the exploration resumes with fit_out_of_design=True. It is expected that, when trials with parameters outside the design range are added, the next generated point uses the model. However, the generation method is still 'Sobol' in this case.
>
> Could the expected behavior be in conflict with the changes implemented here? -> #207

Ok, this issue should be fixed now.

@AngelFP (Member, Author) commented Jun 11, 2024

> Regarding my first comment, I wonder if it would be a good idea to automatically back up the history file from which the exploration run is resuming. Previous versions had a feature like this, if I recall correctly.

I think that saving a backup (or just using a different name for the history file after resuming) could indeed be a good idea. Otherwise it will always be overwritten, which can lead to issues like the one you describe. This should be the focus of a future PR.
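Such a backup could be as simple as copying the history file aside with a timestamp suffix before resuming. A sketch under the assumption that the history file lives at a known path; optimas' actual file layout and naming conventions are not shown here:

```python
import shutil
from datetime import datetime
from pathlib import Path


def backup_history(history_path):
    """Copy the history file aside before it is overwritten on resume."""
    src = Path(history_path)
    stamp = datetime.now().strftime("%Y%m%d_%H%M%S")
    dst = src.with_name(f"{src.stem}_backup_{stamp}{src.suffix}")
    # copy2 preserves metadata (e.g. modification time) along with content.
    shutil.copy2(src, dst)
    return dst
```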

@AngelFP AngelFP merged commit 9490ea3 into main Jun 11, 2024
6 checks passed
@AngelFP AngelFP deleted the feature/allow_out_of_design branch June 11, 2024 17:46