Add MOEA/D sampler #152
Conversation
…for weight vector generation
We sincerely appreciate your contribution of one of the most popular EMO algorithms, MOEA/D.
I confirmed that the examples worked as expected. I added some minor comments, but they are just suggestions, so let me merge this PR.
```
author: Hiroaki Natsume
title: MOEA/D sampler
description: Sampler using the MOEA/D algorithm. MOEA/D stands for "Multi-Objective Evolutionary Algorithm based on Decomposition."
tags: [sampler, multiobjective]
```
A nit, but other samplers seem to use `multi-objective optimization` as a tag.
Suggested change:

```
tags: [sampler, multi-objective optimization]
```
## Installation

```
pip install scipy
```
This is a nit, but it might be kind to users if we also provided a common way to install the package's dependencies:

```
pip install -r https://hub.optuna.org/samplers/moead/requirements.txt
```
```python
if __name__ == "__main__":
    population_size = 100
    n_trials = 1000

    mod = optunahub.load_module("samplers/moead")
    sampler = mod.MOEADSampler(
        population_size=population_size,
        scalar_aggregation_func="tchebycheff",
        n_neighbors=population_size // 10,
    )
    study = optuna.create_study(sampler=sampler)
    study.optimize(objective, n_trials=n_trials)
```
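For reference, the two aggregation functions this sampler exposes via `scalar_aggregation_func` can be sketched as standalone functions. This is an illustrative sketch based on the cited paper, not the PR's actual implementation; `ref` stands for the ideal (reference) point z*:

```python
import numpy as np


def weighted_sum(weight: np.ndarray, f: np.ndarray, ref: np.ndarray) -> float:
    # Weighted-sum aggregation: a convex combination of the objective values.
    return float(np.sum(weight * f))


def tchebycheff(weight: np.ndarray, f: np.ndarray, ref: np.ndarray) -> float:
    # Tchebycheff aggregation: the worst weighted deviation from the
    # reference point; minimizing it pushes solutions toward z*.
    return float(np.max(weight * np.abs(f - ref)))
```

MOEA/D minimizes one such scalarized subproblem per weight vector, which is why a single-objective aggregation suffices for a multi-objective search.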
Not a strong opinion, but removing `if __name__ == "__main__":` might be simpler for an example.
Suggested change:

```python
population_size = 100
n_trials = 1000

mod = optunahub.load_module("samplers/moead")
sampler = mod.MOEADSampler(
    population_size=population_size,
    scalar_aggregation_func="tchebycheff",
    n_neighbors=population_size // 10,
)
study = optuna.create_study(sampler=sampler)
study.optimize(objective, n_trials=n_trials)
```
Q. Zhang and H. Li, "MOEA/D: A Multiobjective Evolutionary Algorithm Based on Decomposition," in IEEE Transactions on Evolutionary Computation, vol. 11, no. 6, pp. 712-731, Dec. 2007, doi: 10.1109/TEVC.2007.892759.
Since you provide the doi, how about using a link to the paper?
Suggested change:

```
[doi: 10.1109/TEVC.2007.892759](https://doi.org/10.1109/TEVC.2007.892759).
```
```python
    parent_population: list[FrozenTrial],
    neighbors: dict[int, list[int]],
) -> dict[str, Any]:
    """Generate a child parameter from the given parent population by NSGA-II algorithm.
```
A nit.
"""Generate a child parameter from the given parent population by NSGA-II algorithm. | |
"""Generate a child parameter from the given parent population by MOEA/D algorithm. |
```python
        subproblem_parent_population = [
            parent_population[i] for i in neighbors[self._subproblem_id]
        ]
```
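For context, the `neighbors` mapping in MOEA/D is typically built once from the weight vectors: each subproblem's neighborhood consists of the indices of its `n_neighbors` closest weight vectors. A minimal sketch, assuming a `weight_vectors` array of shape `(population_size, n_objectives)` (names are illustrative, not the PR's code):

```python
import numpy as np


def build_neighbors(weight_vectors: np.ndarray, n_neighbors: int) -> dict[int, list[int]]:
    # Pairwise Euclidean distances between all weight vectors.
    dists = np.linalg.norm(
        weight_vectors[:, None, :] - weight_vectors[None, :, :], axis=-1
    )
    # For each subproblem i, keep the n_neighbors nearest subproblem indices
    # (i itself is included, since its self-distance is zero).
    return {
        i: np.argsort(dists[i])[:n_neighbors].tolist()
        for i in range(len(weight_vectors))
    }
```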
I'm not sure, but this implementation might have unexpected behavior in the case of multi-threading.
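To illustrate the concern: if `self._subproblem_id` is read and advanced by multiple worker threads (e.g. `study.optimize(..., n_jobs=2)`), the read-then-increment is not atomic, so two trials could be assigned the same subproblem. One hedged way to make it safe, with a lock around the shared counter (the class and names here are hypothetical, not the PR's code):

```python
import threading


class SubproblemCursor:
    """Illustrative thread-safe round-robin over subproblem ids."""

    def __init__(self, population_size: int) -> None:
        self._lock = threading.Lock()
        self._subproblem_id = 0
        self._population_size = population_size

    def next_id(self) -> int:
        # Read and advance under the lock so concurrent callers
        # cannot observe the same id.
        with self._lock:
            current = self._subproblem_id
            self._subproblem_id = (current + 1) % self._population_size
            return current
```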
```python
        *,
        population_size: int = 100,
        n_neighbors: int | None = None,
        scalar_aggregation_func: str = "tchebycheff",
```
Since `scalar_aggregation_func` seems to be either `tchebycheff` or `weighted_sum`, we may use `Literal` as a type hint.
Suggested change:

```python
        scalar_aggregation_func: Literal['weighted_sum', 'tchebycheff'] = "tchebycheff",
```
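For illustration, `Literal` restricts the accepted values at type-check time (mypy or pyright will flag any other string literal), while a runtime check still guards untyped callers. A sketch, where `make_sampler` is a hypothetical stand-in for the sampler's `__init__`:

```python
from typing import Literal

# The two aggregation functions the reviewer identified.
ScalarAggregationFunc = Literal["weighted_sum", "tchebycheff"]


def make_sampler(scalar_aggregation_func: ScalarAggregationFunc = "tchebycheff") -> str:
    # Runtime guard for callers that bypass the type checker.
    if scalar_aggregation_func not in ("weighted_sum", "tchebycheff"):
        raise ValueError(f"unknown aggregation: {scalar_aggregation_func}")
    return scalar_aggregation_func
```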
@toshihikoyanase, thank you for your comment.
Contributor Agreements
Please read the contributor agreements and if you agree, please click the checkbox below.
Tip
Please follow the Quick TODO list to smoothly merge your PR.
Motivation
There already exist samplers for optimization with more than three objectives, such as NSGA-III, but I would like to add another one.
Description of the changes
Add MOEA/D sampler
TODO List towards PR Merge
Please remove this section if this PR is not an addition of a new package.
Otherwise, please check the following TODO list:
- Copy `./template/` to create your package
- Replace `<COPYRIGHT HOLDER>` in `LICENSE` of your package with your name
- Fill out `README.md` in your package
- Add import statements of your function or class names to be used in `__init__.py`
- Apply the formatter based on the tips in `README.md`
- Check the package works as intended based on the tips in `README.md`