Update README
hrntsm committed Sep 5, 2024
1 parent 1cf3b68 commit b79ba4e
Showing 8 changed files with 148 additions and 122 deletions.
114 changes: 42 additions & 72 deletions package/samplers/moead/README.md
@@ -1,104 +1,74 @@
---
author: Hiroaki Natsume
title: MOEA/D sampler
description: Sampler using the MOEA/D algorithm. MOEA/D stands for "Multi-Objective Evolutionary Algorithm based on Decomposition".
tags: [sampler, multiobjective]
optuna_versions: [4.0.0]
license: MIT License
---



## Abstract

Sampler using the MOEA/D algorithm. MOEA/D stands for "Multi-Objective Evolutionary Algorithm based on Decomposition".
This sampler is specialized for multiobjective optimization. The objective function is internally decomposed into multiple single-objective subproblems to perform optimization.
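
As a rough, standalone illustration of the decomposition idea (this sketch is not the sampler's internal code), each subproblem scalarizes the objective vector with its own weight vector, for example via a weighted sum or the Tchebycheff function:

```python
import numpy as np

# Illustrative scalarization of a 2-objective value for one subproblem.
# `weights` and `reference_point` are hypothetical inputs chosen for this sketch.
objective_values = np.array([1.2, 0.8])
weights = np.array([0.3, 0.7])          # weight vector of one subproblem
reference_point = np.array([0.0, 0.0])  # ideal (best-so-far) point

# Weighted sum: simple linear aggregation of the objectives.
weighted_sum = float(np.dot(weights, objective_values))

# Tchebycheff: the worst weighted deviation from the reference point.
tchebycheff = float(np.max(weights * np.abs(objective_values - reference_point)))

print(weighted_sum, tchebycheff)
```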

## Class or Function Names

- MOEADSampler

## Installation

```shell
pip install scipy
```

## Example

```python
import optuna
import optunahub


def objective(trial: optuna.Trial) -> tuple[float, float]:
    x = trial.suggest_float("x", 0, 5)
    y = trial.suggest_float("y", 0, 3)

    v0 = 4 * x**2 + 4 * y**2
    v1 = (x - 5) ** 2 + (y - 5) ** 2
    return v0, v1


if __name__ == "__main__":
    population_size = 100
    n_trials = 1000

    mod = optunahub.load_module("samplers/moead")
    sampler = mod.MOEADSampler(
        population_size=population_size,
        scalar_aggregation_func="tchebycheff",
        n_neighbors=population_size // 10,
    )
    # The objective returns two values, so the study must be created as multi-objective.
    study = optuna.create_study(sampler=sampler, directions=["minimize", "minimize"])
    study.optimize(objective, n_trials=n_trials)
```
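
Continuing the example above (inside the same ``__main__`` block), the Pareto-optimal trials can then be inspected with Optuna's standard multi-objective API:

```python
    # Trials on the Pareto front of the finished study.
    for best in study.best_trials:
        print(best.number, best.values, best.params)
```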

## Others

Comparison between Random, NSGAII, and MOEA/D with ZDT1 as the objective function.
See `compare_2objectives.py` in the moead directory for details.

### Pareto Front Plot

| MOEA/D | NSGAII | Random |
| --------------------------- | ---------------------------- | ---------------------------- |
| ![MOEA/D](images/moead.png) | ![NSGAII](images/nsgaii.png) | ![Random](images/random.png) |

### Compare

![Compare](images/compare_pareto_front.png)

### Reference
Q. Zhang and H. Li,
"MOEA/D: A Multiobjective Evolutionary Algorithm Based on Decomposition," in IEEE Transactions on Evolutionary Computation, vol. 11, no. 6, pp. 712-731, Dec. 2007,
doi: 10.1109/TEVC.2007.892759.
54 changes: 54 additions & 0 deletions package/samplers/moead/compare_2objectives.py
@@ -0,0 +1,54 @@
import numpy as np
import optuna
import optunahub


def objective(trial: optuna.Trial) -> tuple[float, float]:
    # ZDT1
    n_variables = 30

    x = np.array([trial.suggest_float(f"x{i}", 0, 1) for i in range(n_variables)])
    g = 1 + 9 * np.sum(x[1:]) / (n_variables - 1)
    f1 = x[0]
    f2 = g * (1 - (f1 / g) ** 0.5)

    return f1, f2


if __name__ == "__main__":
    mod = optunahub.load_module("samplers/moead")

    seed = 42
    population_size = 100
    n_trials = 10000
    crossover = optuna.samplers.nsgaii.BLXAlphaCrossover()
    samplers = [
        optuna.samplers.RandomSampler(seed=seed),
        optuna.samplers.NSGAIISampler(
            seed=seed,
            population_size=population_size,
            crossover=crossover,
        ),
        mod.MOEADSampler(
            seed=seed,
            population_size=population_size,
            n_neighbors=population_size // 5,
            scalar_aggregation_func="tchebycheff",
            crossover=crossover,
        ),
    ]
    studies = []
    for sampler in samplers:
        study = optuna.create_study(
            sampler=sampler,
            study_name=f"{sampler.__class__.__name__}",
            directions=["minimize", "minimize"],
        )
        study.optimize(objective, n_trials=n_trials)
        studies.append(study)

    optuna.visualization.plot_pareto_front(study).show()

    m = optunahub.load_module("visualization/plot_pareto_front_multi")
    fig = m.plot_pareto_front(studies)
    fig.show()
69 changes: 23 additions & 46 deletions package/samplers/moead/example.py
@@ -1,56 +1,33 @@
import optuna
import optunahub


def objective(trial: optuna.Trial) -> tuple[float, float]:
    x = trial.suggest_float("x", 0, 5)
    y = trial.suggest_float("y", 0, 3)

    v0 = 4 * x**2 + 4 * y**2
    v1 = (x - 5) ** 2 + (y - 5) ** 2
    return v0, v1


if __name__ == "__main__":
    mod = optunahub.load_module("samplers/moead")

    population_size = 100
    n_trials = 1000
    crossover = optuna.samplers.nsgaii.BLXAlphaCrossover()
    sampler = mod.MOEADSampler(
        population_size=population_size,
        scalar_aggregation_func="tchebycheff",
        n_neighbors=population_size // 10,
        crossover=crossover,
    )
    study = optuna.create_study(
        sampler=sampler,
        study_name=f"{sampler.__class__.__name__}",
        directions=["minimize", "minimize"],
    )
    study.optimize(objective, n_trials=n_trials)

    optuna.visualization.plot_pareto_front(study).show()
Binary file added package/samplers/moead/images/moead.png
Binary file added package/samplers/moead/images/nsgaii.png
Binary file added package/samplers/moead/images/random.png
33 changes: 29 additions & 4 deletions package/samplers/moead/moead.py
@@ -51,12 +51,37 @@ def __init__(
        Args:
            seed:
                Seed for random number generator.
            n_neighbors:
                The number of the weight vectors in the neighborhood of each weight vector.
                The larger this value, the more weight is applied to exploration.
            scalar_aggregation_func:
                The scalar aggregation function to use. The default is "tchebycheff". The other option is "weight_sum".
            population_size:
                Number of individuals (trials) in a generation.
                ``population_size`` must be greater than or equal to ``crossover.n_parents``.
                For :class:`~optuna.samplers.nsgaii.UNDXCrossover` and
                :class:`~optuna.samplers.nsgaii.SPXCrossover`, ``n_parents=3``, and for the other
                algorithms, ``n_parents=2``.
            mutation_prob:
                Probability of mutating each parameter when creating a new individual.
                If :obj:`None` is specified, the value ``1.0 / len(parent_trial.params)`` is used
                where ``parent_trial`` is the parent trial of the target individual.
            crossover:
                Crossover to be applied when creating child individuals.
                For more information on each of the crossover methods, please refer to
                the Optuna crossover documentation.
            crossover_prob:
                Probability that a crossover (parameters swapping between parents) will occur
                when creating a new individual.
            swapping_prob:
                Probability of swapping each parameter of the parents during crossover.
        """
        if population_size < 2:
            raise ValueError("`population_size` must be greater than or equal to 2.")
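
The neighborhood described for ``n_neighbors`` above can be pictured with a small standalone sketch (not the sampler's internal code): weight vectors are spread evenly over the objective space, and each subproblem is updated mainly using the subproblems whose weight vectors lie closest to its own.

```python
import numpy as np

# Illustrative only: evenly spaced weight vectors for 2 objectives and the
# index sets of their `n_neighbors` nearest neighbors (Euclidean distance).
population_size = 10
n_neighbors = 3

w = np.linspace(0.0, 1.0, population_size)
weight_vectors = np.stack([w, 1.0 - w], axis=1)  # shape: (population_size, 2)

distances = np.linalg.norm(
    weight_vectors[:, None, :] - weight_vectors[None, :, :], axis=-1
)
neighbors = np.argsort(distances, axis=1)[:, :n_neighbors]  # includes the vector itself

print(neighbors)
```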
