diff --git a/docs/source/reference/integration.rst b/docs/source/reference/integration.rst index 18cbfd10e7..34cc1fa589 100644 --- a/docs/source/reference/integration.rst +++ b/docs/source/reference/integration.rst @@ -26,51 +26,51 @@ We summarize the necessary dependencies for each integration. +-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------+ | Integration | Dependencies | +===================================================================================================================================================================================+====================================+ -| `AllenNLP `_ | allennlp, torch, psutil, jsonnet | +| `AllenNLP `__ | allennlp, torch, psutil, jsonnet | +-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------+ -| `BoTorch `_ | botorch, gpytorch, torch | +| `BoTorch `__ | botorch, gpytorch, torch | +-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------+ -| `CatBoost `_ | catboost | +| `CatBoost `__ | catboost | +-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------+ -| `ChainerMN `_ | chainermn | +| `ChainerMN `__ | chainermn | +-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------+ -| `Chainer `_ | chainer | +| `Chainer `__ | chainer | +-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------+ -| `pycma `_ | cma | +| `pycma `__ | cma | +-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------+ -| `Dask `_ | distributed | +| `Dask `__ | distributed | +-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------+ -| `FastAI `_ | fastai | +| `FastAI `__ | fastai | +-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------+ -| `Keras `_ | keras | +| `Keras `__ | keras | +-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------+ -| `LightGBMTuner `_ | lightgbm, scikit-learn | +| `LightGBMTuner `__ | lightgbm, scikit-learn | 
+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------+ -| `LightGBMPruningCallback `_ | lightgbm | +| `LightGBMPruningCallback `__ | lightgbm | +-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------+ -| `MLflow `_ | mlflow | +| `MLflow `__ | mlflow | +-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------+ -| `MXNet `_ | mxnet | +| `MXNet `__ | mxnet | +-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------+ -| PyTorch `Distributed `_ | torch | +| PyTorch `Distributed `__ | torch | +-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------+ -| PyTorch (`Ignite `_) | pytorch-ignite | +| PyTorch (`Ignite `__) | pytorch-ignite | +-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------+ -| PyTorch (`Lightning `_) | pytorch-lightning | +| PyTorch (`Lightning `__) | pytorch-lightning | +-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------+ -| `SHAP `_ | scikit-learn, shap | +| `SHAP `__ | scikit-learn, shap | +-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------+ -| `Scikit-learn `_ | pandas, scipy, scikit-learn | +| `Scikit-learn `__ | pandas, scipy, scikit-learn | +-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------+ -| `SKorch `_ | skorch | +| `SKorch `__ | skorch | +-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------+ -| `TensorBoard `_ | tensorboard, tensorflow | +| `TensorBoard `__ | tensorboard, tensorflow | +-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------+ -| `TensorFlow `_ | tensorflow, tensorflow-estimator | +| `TensorFlow `__ | tensorflow, tensorflow-estimator | +-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------+ -| `TensorFlow + Keras `_ | tensorflow | +| 
`TensorFlow + Keras `__ | tensorflow | +-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------+ -| `Weights & Biases `_ | wandb | +| `Weights & Biases `__ | wandb | +-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------+ -| `XGBoost `_ | xgboost | +| `XGBoost `__ | xgboost | +-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------+ diff --git a/optuna/_hypervolume/hssp.py b/optuna/_hypervolume/hssp.py index 670891d1be..ed40815b96 100644 --- a/optuna/_hypervolume/hssp.py +++ b/optuna/_hypervolume/hssp.py @@ -132,7 +132,7 @@ def _solve_hssp( paper: - `Greedy Hypervolume Subset Selection in Low Dimensions - `_ + `__ """ if subset_size == rank_i_indices.size: return rank_i_indices diff --git a/optuna/importance/__init__.py b/optuna/importance/__init__.py index e21aad7b7c..c8451e7718 100644 --- a/optuna/importance/__init__.py +++ b/optuna/importance/__init__.py @@ -68,7 +68,7 @@ def get_param_importances( .. note:: :class:`~optuna.importance.FanovaImportanceEvaluator` takes over 1 minute when given a study that contains 1000+ trials. We published - `optuna-fast-fanova `_ library, + the `optuna-fast-fanova `__ library, which is a Cython-accelerated fANOVA implementation. By using it, you can get hyperparameter importances within a few seconds. If ``n_trials`` is more than 10000, the Cython implementation takes more than diff --git a/optuna/importance/_fanova/_evaluator.py b/optuna/importance/_fanova/_evaluator.py index 1bbf358634..d90817c694 100644 --- a/optuna/importance/_fanova/_evaluator.py +++ b/optuna/importance/_fanova/_evaluator.py @@ -23,7 +23,7 @@ class FanovaImportanceEvaluator(BaseImportanceEvaluator): Implements the fANOVA hyperparameter importance evaluation algorithm in `An Efficient Approach for Assessing Hyperparameter Importance - `_. + `__. fANOVA fits a random forest regression model that predicts the objective values of :class:`~optuna.trial.TrialState.COMPLETE` trials given their parameter configurations. @@ -33,13 +33,13 @@ class FanovaImportanceEvaluator(BaseImportanceEvaluator): .. note:: This class takes over 1 minute when given a study that contains 1000+ trials. - We published `optuna-fast-fanova `_ library, + We published the `optuna-fast-fanova `__ library, which is a Cython-accelerated fANOVA implementation. By using it, you can get hyperparameter importances within a few seconds. .. note:: - Requires the `sklearn `_ Python package. + Requires the `sklearn `__ Python package. .. note:: diff --git a/optuna/importance/_mean_decrease_impurity.py b/optuna/importance/_mean_decrease_impurity.py index 635cfa2ad4..5a35e23986 100644 --- a/optuna/importance/_mean_decrease_impurity.py +++ b/optuna/importance/_mean_decrease_impurity.py @@ -31,9 +31,9 @@ class MeanDecreaseImpurityImportanceEvaluator(BaseImportanceEvaluator): .. note:: - This evaluator requires the `sklearn `_ Python package + This evaluator requires the `sklearn `__ Python package and is based on `sklearn.ensemble.RandomForestClassifier.feature_importances_ - `_. + `__.
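For context, both evaluators touched by the hunks above are consumed through :func:`optuna.importance.get_param_importances`. The following is a minimal sketch of that call, with a toy objective invented purely for illustration; only ``get_param_importances`` and the two evaluator classes are real Optuna APIs.

.. code-block:: python

    import optuna


    def objective(trial):
        # Toy objective used only to populate the study with trials.
        x = trial.suggest_float("x", -10, 10)
        y = trial.suggest_int("y", 0, 10)
        return x**2 + y


    study = optuna.create_study()
    study.optimize(objective, n_trials=100)

    # Swap in optuna.importance.FanovaImportanceEvaluator() for the fANOVA variant.
    evaluator = optuna.importance.MeanDecreaseImpurityImportanceEvaluator(n_trees=64)
    importances = optuna.importance.get_param_importances(study, evaluator=evaluator)
    print(importances)  # an OrderedDict mapping parameter names to importances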
Args: n_trees: diff --git a/optuna/importance/_ped_anova/evaluator.py b/optuna/importance/_ped_anova/evaluator.py index bfe1ddadbf..6cc3a3c9e7 100644 --- a/optuna/importance/_ped_anova/evaluator.py +++ b/optuna/importance/_ped_anova/evaluator.py @@ -72,7 +72,7 @@ class PedAnovaImportanceEvaluator(BaseImportanceEvaluator): For further information about the PED-ANOVA algorithm, please refer to the following paper: - `PED-ANOVA: Efficiently Quantifying Hyperparameter Importance in Arbitrary Subspaces - `_ + `__ .. note:: @@ -81,7 +81,7 @@ class PedAnovaImportanceEvaluator(BaseImportanceEvaluator): .. note:: - Please refer to `the original work `_. + Please refer to `the original work `__. Args: baseline_quantile: diff --git a/optuna/pruners/_hyperband.py b/optuna/pruners/_hyperband.py index 4f341e0acc..cf847868ab 100644 --- a/optuna/pruners/_hyperband.py +++ b/optuna/pruners/_hyperband.py @@ -21,7 +21,7 @@ class HyperbandPruner(BasePruner): :math:`n` as its hyperparameter. For a given finite budget :math:`B`, all the configurations have the resources of :math:`B \\over n` on average. As you can see, there will be a trade-off of :math:`B` and :math:`B \\over n`. - `Hyperband `_ attacks this trade-off + `Hyperband `__ attacks this trade-off by trying different :math:`n` values for a fixed budget. .. note:: is used. * Optuna uses :class:`~optuna.samplers.TPESampler` by default. * `The benchmark result - `_ + `__ shows that :class:`optuna.pruners.HyperbandPruner` supports both samplers. .. note:: @@ -58,7 +58,7 @@ (\\frac{\\texttt{max}\\_\\texttt{resource}}{\\texttt{min}\\_\\texttt{resource}})) + 1`. Please set ``reduction_factor`` so that the number of brackets is not too large (about 4 – 6 in most use cases). Please see Section 3.6 of the `original paper - `_ for the detail. + `__ for details. Example: @@ -235,7 +235,7 @@ def _get_bracket_id( """Compute the index of the bracket for a trial of ``trial_number``. The index of a bracket is noted as :math:`s` in - `Hyperband paper `_. + the `Hyperband paper `__. """ if len(self._pruners) == 0: diff --git a/optuna/pruners/_successive_halving.py b/optuna/pruners/_successive_halving.py index 1c8da0abf1..45402d2549 100644 --- a/optuna/pruners/_successive_halving.py +++ b/optuna/pruners/_successive_halving.py @@ -11,18 +11,18 @@ class SuccessiveHalvingPruner(BasePruner): """Pruner using Asynchronous Successive Halving Algorithm. - `Successive Halving `_ is a bandit-based + `Successive Halving `__ is a bandit-based algorithm to identify the best one among multiple configurations. This class implements an asynchronous version of Successive Halving. Please refer to the paper of `Asynchronous Successive Halving `_ for detailed descriptions. + a06f20b349c6cf09a6b171c71b88bbfc-Paper.pdf>`__ for detailed descriptions. Note that this class does not take care of the parameter for the maximum resource, referred to as :math:`R` in the paper. The maximum resource allocated to a trial is typically limited inside the objective function (e.g., ``step`` number in `simple_pruning.py - `_, + `__, ``EPOCH`` number in `chainer_integration.py - `_). + `__). .. seealso:: Please refer to :meth:`~optuna.trial.Trial.report`. @@ -71,7 +71,7 @@ def objective(trial): min_resource: A parameter for specifying the minimum resource allocated to a trial (in the `paper `_ this parameter is referred to as + a06f20b349c6cf09a6b171c71b88bbfc-Paper.pdf>`__ this parameter is referred to as :math:`r`).
This parameter defaults to 'auto' where the value is determined based on a heuristic that looks at the number of required steps for the first trial to complete. @@ -98,14 +98,14 @@ def objective(trial): reduction_factor: A parameter for specifying reduction factor of promotable trials (in the `paper `_ this parameter is + a06f20b349c6cf09a6b171c71b88bbfc-Paper.pdf>`__ this parameter is referred to as :math:`\\eta`). At the completion point of each rung, about :math:`{1 \\over \\mathsf{reduction}\\_\\mathsf{factor}}` trials will be promoted. min_early_stopping_rate: A parameter for specifying the minimum early-stopping rate (in the `paper `_ this parameter is + a06f20b349c6cf09a6b171c71b88bbfc-Paper.pdf>`__ this parameter is referred to as :math:`s`). bootstrap_count: Minimum number of trials that need to complete a rung before any trial diff --git a/optuna/pruners/_wilcoxon.py b/optuna/pruners/_wilcoxon.py index 7b08025611..78d8eab14c 100644 --- a/optuna/pruners/_wilcoxon.py +++ b/optuna/pruners/_wilcoxon.py @@ -22,7 +22,7 @@ @experimental_class("3.6.0") class WilcoxonPruner(BasePruner): - """Pruner based on the `Wilcoxon signed-rank test `_. + """Pruner based on the `Wilcoxon signed-rank test `__. This pruner performs the Wilcoxon signed-rank test between the current trial and the current best trial, and stops whenever the pruner is sure up to a given p-value that the current trial is worse than the best one. diff --git a/optuna/samplers/_cmaes.py b/optuna/samplers/_cmaes.py index 475f1780af..b7cdeda973 100644 --- a/optuna/samplers/_cmaes.py +++ b/optuna/samplers/_cmaes.py @@ -59,7 +59,7 @@ class _CmaEsAttrKeys(NamedTuple): class CmaEsSampler(BaseSampler): - """A sampler using `cmaes `_ as the backend. + """A sampler using `cmaes `__ as the backend. Example: @@ -95,27 +95,27 @@ def objective(trial): For further information about CMA-ES algorithm, please refer to the following papers: - `N. Hansen, The CMA Evolution Strategy: A Tutorial. arXiv:1604.00772, 2016. - `_ + `__ - `A. Auger and N. Hansen. A restart CMA evolution strategy with increasing population size. In Proceedings of the IEEE Congress on Evolutionary Computation (CEC 2005), - pages 1769–1776. IEEE Press, 2005. `_ + pages 1769–1776. IEEE Press, 2005. `__ - `N. Hansen. Benchmarking a BI-Population CMA-ES on the BBOB-2009 Function Testbed. - GECCO Workshop, 2009. `_ + GECCO Workshop, 2009. `__ - `Raymond Ros, Nikolaus Hansen. A Simple Modification in CMA-ES Achieving Linear Time and Space Complexity. 10th International Conference on Parallel Problem Solving From Nature, - Sep 2008, Dortmund, Germany. inria-00287367. `_ + Sep 2008, Dortmund, Germany. inria-00287367. `__ - `Masahiro Nomura, Shuhei Watanabe, Youhei Akimoto, Yoshihiko Ozaki, Masaki Onishi. Warm Starting CMA-ES for Hyperparameter Optimization, AAAI. 2021. - `_ + `__ - `R. Hamano, S. Saito, M. Nomura, S. Shirakawa. CMA-ES with Margin: Lower-Bounding Marginal Probability for Mixed-Integer Black-Box Optimization, GECCO. 2022. - `_ + `__ - `M. Nomura, Y. Akimoto, I. Ono. CMA-ES with Learning Rate Adaptation: Can CMA-ES with Default Population Size Solve Multimodal and Noisy Problems?, GECCO. 2023. - `_ + `__ .. seealso:: - You can also use `optuna_integration.PyCmaSampler `_ which is a sampler using cma + You can also use `optuna_integration.PyCmaSampler `__ which is a sampler using cma library as the backend. Args: @@ -196,7 +196,7 @@ def objective(trial): :class:`~optuna.pruners.MedianPruner` is used. 
On the other hand, it is suggested to set this flag :obj:`True` when the :class:`~optuna.pruners.HyperbandPruner` is used. Please see `the benchmark result - `_ for the details. + `__ for the details. use_separable_cma: If this is :obj:`True`, the covariance matrix is constrained to be diagonal. diff --git a/optuna/samplers/_nsgaiii/_sampler.py b/optuna/samplers/_nsgaiii/_sampler.py index 1c66f68330..53c7c9c82b 100644 --- a/optuna/samplers/_nsgaiii/_sampler.py +++ b/optuna/samplers/_nsgaiii/_sampler.py @@ -47,11 +47,11 @@ class NSGAIIISampler(BaseSampler): - `An Evolutionary Many-Objective Optimization Algorithm Using Reference-Point-Based Nondominated Sorting Approach, Part I: Solving Problems With Box Constraints - `_ + `__ - `An Evolutionary Many-Objective Optimization Algorithm Using Reference-Point-Based Nondominated Sorting Approach, Part II: Handling Constraints and Extending to an Adaptive Approach - `_ + `__ Args: reference_points: diff --git a/optuna/samplers/_qmc.py b/optuna/samplers/_qmc.py index 4f1ddfc860..6d9aefd292 100644 --- a/optuna/samplers/_qmc.py +++ b/optuna/samplers/_qmc.py @@ -42,11 +42,11 @@ class QMCSampler(BaseSampler): - `Bergstra, James, and Yoshua Bengio. Random search for hyper-parameter optimization. Journal of machine learning research 13.2, 2012. - `_ + `__ We use the QMC implementations in Scipy. For the details of the QMC algorithm, see the Scipy API references on `scipy.stats.qmc - `_. + `__. .. note: If your search space contains categorical parameters, it samples the categorical diff --git a/optuna/samplers/_tpe/sampler.py b/optuna/samplers/_tpe/sampler.py index 2f289e2bd2..51344401d2 100644 --- a/optuna/samplers/_tpe/sampler.py +++ b/optuna/samplers/_tpe/sampler.py @@ -73,17 +73,17 @@ class TPESampler(BaseSampler): For further information about TPE algorithm, please refer to the following papers: - `Algorithms for Hyper-Parameter Optimization - `_ + `__ - `Making a Science of Model Search: Hyperparameter Optimization in Hundreds of - Dimensions for Vision Architectures `_ + Dimensions for Vision Architectures `__ - `Tree-Structured Parzen Estimator: Understanding Its Algorithm Components and Their Roles for - Better Empirical Performance `_ + Better Empirical Performance `__ For multi-objective TPE (MOTPE), please refer to the following papers: - `Multiobjective Tree-Structured Parzen Estimator for Computationally Expensive Optimization - Problems `_ - - `Multiobjective Tree-Structured Parzen Estimator `_ + Problems `__ + - `Multiobjective Tree-Structured Parzen Estimator `__ Example: An example of a single-objective optimization is as follows: @@ -154,23 +154,23 @@ def objective(trial): weights: A function that takes the number of finished trials and returns a weight for them. See `Making a Science of Model Search: Hyperparameter Optimization in Hundreds of - Dimensions for Vision Architectures `_ - for more details. + Dimensions for Vision Architectures + `__ for more details. .. note:: In the multi-objective case, this argument is only used to compute the weights of bad trials, i.e., trials to construct `g(x)` in the `paper - `_ + `__ ). The weights of good trials, i.e., trials to construct `l(x)`, are computed by a rule based on the hypervolume contribution proposed in the `paper of MOTPE - `_. + `__. seed: Seed for random number generator. multivariate: If this is :obj:`True`, the multivariate TPE is used when suggesting parameters. The multivariate TPE is reported to outperform the independent TPE. 
See `BOHB: Robust and Efficient Hyperparameter Optimization at Scale - `_ for more details. + `__ for more details. .. note:: Added in v2.2.0 as an experimental feature. The interface may change in newer @@ -547,7 +547,7 @@ def hyperopt_parameters() -> dict[str, Any]: Example: Create a :class:`~optuna.samplers.TPESampler` instance with the default - parameters of `hyperopt `_. + parameters of `hyperopt `__. .. testcode:: diff --git a/optuna/samplers/nsgaii/_crossovers/_blxalpha.py b/optuna/samplers/nsgaii/_crossovers/_blxalpha.py index 62d9d77a91..f226e0041f 100644 --- a/optuna/samplers/nsgaii/_crossovers/_blxalpha.py +++ b/optuna/samplers/nsgaii/_crossovers/_blxalpha.py @@ -22,7 +22,7 @@ class BLXAlphaCrossover(BaseCrossover): - `Eshelman, L. and J. D. Schaffer. Real-Coded Genetic Algorithms and Interval-Schemata. FOGA (1992). - `_ + `__ Args: alpha: diff --git a/optuna/samplers/nsgaii/_crossovers/_sbx.py b/optuna/samplers/nsgaii/_crossovers/_sbx.py index 5ca6842c51..7fed3ab7c5 100644 --- a/optuna/samplers/nsgaii/_crossovers/_sbx.py +++ b/optuna/samplers/nsgaii/_crossovers/_sbx.py @@ -23,7 +23,7 @@ class SBXCrossover(BaseCrossover): - `Deb, K. and R. Agrawal. “Simulated Binary Crossover for Continuous Search Space.” Complex Syst. 9 (1995): n. pag. - `_ + `__ Args: eta: diff --git a/optuna/samplers/nsgaii/_crossovers/_spx.py b/optuna/samplers/nsgaii/_crossovers/_spx.py index 254827572c..61725e0b58 100644 --- a/optuna/samplers/nsgaii/_crossovers/_spx.py +++ b/optuna/samplers/nsgaii/_crossovers/_spx.py @@ -25,7 +25,7 @@ class SPXCrossover(BaseCrossover): David E. Goldberg and Kumara Sastry and Kumara Sastry Progress Toward Linkage Learning in Real-Coded GAs with Simplex Crossover. IlliGAL Report. 2000. - `_ + `__ Args: epsilon: diff --git a/optuna/samplers/nsgaii/_crossovers/_undx.py b/optuna/samplers/nsgaii/_crossovers/_undx.py index b5cb7af82d..e6bb13bcbb 100644 --- a/optuna/samplers/nsgaii/_crossovers/_undx.py +++ b/optuna/samplers/nsgaii/_crossovers/_undx.py @@ -25,7 +25,7 @@ class UNDXCrossover(BaseCrossover): for real-coded genetic algorithms, Proceedings of the 1999 Congress on Evolutionary Computation-CEC99 (Cat. No. 99TH8406), 1999, pp. 1581-1588 Vol. 2 - `_ + `__ Args: sigma_xi: diff --git a/optuna/samplers/nsgaii/_crossovers/_uniform.py b/optuna/samplers/nsgaii/_crossovers/_uniform.py index 80e7e3c28b..673ea80d03 100644 --- a/optuna/samplers/nsgaii/_crossovers/_uniform.py +++ b/optuna/samplers/nsgaii/_crossovers/_uniform.py @@ -20,7 +20,7 @@ class UniformCrossover(BaseCrossover): - `Gilbert Syswerda. 1989. Uniform Crossover in Genetic Algorithms. In Proceedings of the 3rd International Conference on Genetic Algorithms. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, 2-9. - `_ + `__ Args: swapping_prob: diff --git a/optuna/samplers/nsgaii/_crossovers/_vsbx.py b/optuna/samplers/nsgaii/_crossovers/_vsbx.py index 26d11e6f7c..112dac0cf8 100644 --- a/optuna/samplers/nsgaii/_crossovers/_vsbx.py +++ b/optuna/samplers/nsgaii/_crossovers/_vsbx.py @@ -24,7 +24,7 @@ class VSBXCrossover(BaseCrossover): - `Pedro J. Ballester, Jonathan N. Carter. Real-Parameter Genetic Algorithms for Finding Multiple Optimal Solutions in Multi-modal Optimization. 
GECCO 2003: 706-717 - `_ + `__ Args: eta: diff --git a/optuna/samplers/nsgaii/_sampler.py b/optuna/samplers/nsgaii/_sampler.py index 3d242fbf88..19355e85ce 100644 --- a/optuna/samplers/nsgaii/_sampler.py +++ b/optuna/samplers/nsgaii/_sampler.py @@ -43,7 +43,7 @@ class NSGAIISampler(BaseSampler): For further information about NSGA-II, please refer to the following paper: - `A fast and elitist multiobjective genetic algorithm: NSGA-II - `_ + `__ Args: population_size: diff --git a/optuna/storages/_journal/file.py b/optuna/storages/_journal/file.py index 51d3e7c16e..de89052d02 100644 --- a/optuna/storages/_journal/file.py +++ b/optuna/storages/_journal/file.py @@ -147,7 +147,7 @@ class JournalFileStorage(BaseJournalLogStorage): Compared to SQLite3, the benefit of this backend is that it is more suitable for environments where the file system does not support ``fcntl()`` file locking. - For example, as written in the `SQLite3 FAQ `_, + For example, as written in the `SQLite3 FAQ `__, SQLite3 might not work on NFS (Network File System) since ``fcntl()`` file locking is broken on many NFS implementations. In such scenarios, this backend provides several workarounds for locking files. For more details, refer to the `Medium blog post`_. diff --git a/optuna/storages/_rdb/storage.py b/optuna/storages/_rdb/storage.py index 2d855847d8..25f7d4a970 100644 --- a/optuna/storages/_rdb/storage.py +++ b/optuna/storages/_rdb/storage.py @@ -183,7 +183,7 @@ def objective(trial): mechanism. Set ``heartbeat_interval``, ``grace_period``, and ``failed_trial_callback`` appropriately according to your use case. For more details, please refer to the :ref:`tutorial ` and `Example page - `_. + `__. .. seealso:: You can use :class:`~optuna.storages.RetryFailedTrialCallback` to automatically retry diff --git a/optuna/study/_multi_objective.py b/optuna/study/_multi_objective.py index 134c93f626..7aeba8cd0e 100644 --- a/optuna/study/_multi_objective.py +++ b/optuna/study/_multi_objective.py @@ -49,7 +49,7 @@ def _fast_non_domination_rank( The fast non-dominated sort algorithm assigns a rank to each trial based on the dominance relationship of the trials, determined by the objective values and the penalty values. The algorithm is based on `the constrained NSGA-II algorithm - `_, but the handling of the case when penalty + `__, but the handling of the case when penalty values are None is different. The algorithm assigns the rank according to the following rules: diff --git a/optuna/study/study.py b/optuna/study/study.py index 1841a32ea5..44a04ac575 100644 --- a/optuna/study/study.py +++ b/optuna/study/study.py @@ -434,7 +434,7 @@ def objective(trial): .. note:: ``n_jobs`` allows parallelization using :obj:`threading` and may suffer from - `Python's GIL `_. + `Python's GIL `__. It is recommended to use :ref:`process-based parallelization` if ``func`` is CPU bound. diff --git a/optuna/terminator/improvement/evaluator.py b/optuna/terminator/improvement/evaluator.py index 8da9a96307..5dcdf42509 100644 --- a/optuna/terminator/improvement/evaluator.py +++ b/optuna/terminator/improvement/evaluator.py @@ -42,7 +42,7 @@ def _get_beta(n_params: int, n_trials: int, delta: float = 0.1) -> float: # The following div is according to the original paper: "We then further scale it down # by a factor of 5 as defined in the experiments in - # `Srinivas et al. (2010) `_" + # `Srinivas et al. 
(2010) `__" beta /= 5 return beta @@ -85,7 +85,7 @@ class RegretBoundEvaluator(BaseImprovementEvaluator): For further information about this evaluator, please refer to the following paper: - - `Automatic Termination for Hyperparameter Optimization `_ + - `Automatic Termination for Hyperparameter Optimization `__ """ # NOQA: E501 def __init__( diff --git a/optuna/testing/visualization.py b/optuna/testing/visualization.py index d0b1c2b9a3..d462a31adc 100644 --- a/optuna/testing/visualization.py +++ b/optuna/testing/visualization.py @@ -14,8 +14,9 @@ def prepare_study_with_trials( This function is added to reduce the code needed to set up a dummy study object in each test case. However, you can only use this function for unit tests that are loosely coupled with the dummy study object. Unit tests that are tightly coupled with the study become difficult to - read because of `Mystery Guest `_ - and/or `Eager Test `_ anti-patterns. + read because of + `Mystery Guest `__ and/or + `Eager Test `__ anti-patterns. Args: n_objectives: Number of objective values. diff --git a/optuna/trial/_trial.py b/optuna/trial/_trial.py index 42a7b853c9..9f4c87b7a0 100644 --- a/optuna/trial/_trial.py +++ b/optuna/trial/_trial.py @@ -248,7 +248,7 @@ def suggest_int( Example: Suggest the number of trees in `RandomForestClassifier `_. + stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html>`__. .. testcode:: @@ -357,7 +357,7 @@ def suggest_categorical( Example: Suggest a kernel function of `SVC `_. + sklearn.svm.SVC.html>`__. .. testcode:: @@ -429,7 +429,7 @@ def report(self, value: float, step: int) -> None: Example: Report intermediate scores of `SGDClassifier `_ training. + generated/sklearn.linear_model.SGDClassifier.html>`__ training. .. testcode:: diff --git a/optuna/visualization/_param_importances.py b/optuna/visualization/_param_importances.py index 78db50f436..6f56735438 100644 --- a/optuna/visualization/_param_importances.py +++ b/optuna/visualization/_param_importances.py @@ -159,7 +159,7 @@ def objective(trial): .. note:: :class:`~optuna.importance.FanovaImportanceEvaluator` takes over 1 minute when given a study that contains 1000+ trials. We published - `optuna-fast-fanova `_ library, + the `optuna-fast-fanova `__ library, which is a Cython-accelerated fANOVA implementation. By using it, you can get hyperparameter importances within a few seconds. diff --git a/tutorial/10_key_features/002_configurations.py b/tutorial/10_key_features/002_configurations.py index 924ee1a518..aa0729b797 100644 --- a/tutorial/10_key_features/002_configurations.py +++ b/tutorial/10_key_features/002_configurations.py @@ -48,7 +48,7 @@ def objective(trial): # # Also, you can use branches or loops depending on the parameter values, as sketched below. # -# For more various use, see `examples `_. +# For more use cases, see the `examples `__.
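For context, the "branches" mentioned in the hunk above mean that the sampled value of one parameter decides which other parameters get suggested. A minimal sketch follows; the classifier names, ranges, and the ``evaluate_svc`` / ``evaluate_rf`` helpers are hypothetical placeholders, not code from the tutorial.

.. code-block:: python

    import optuna


    def objective(trial):
        # The value suggested for "classifier" decides which branch, and hence
        # which further parameters, Optuna sees in this trial.
        classifier = trial.suggest_categorical("classifier", ["SVC", "RandomForest"])
        if classifier == "SVC":
            svc_c = trial.suggest_float("svc_c", 1e-10, 1e10, log=True)
            return evaluate_svc(svc_c)  # hypothetical helper
        else:
            rf_max_depth = trial.suggest_int("rf_max_depth", 2, 32, log=True)
            return evaluate_rf(rf_max_depth)  # hypothetical helper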
################################################################################################### # - Branches: diff --git a/tutorial/10_key_features/003_efficient_optimization_algorithms.py b/tutorial/10_key_features/003_efficient_optimization_algorithms.py index b6970bd755..9c6e4d0c7f 100644 --- a/tutorial/10_key_features/003_efficient_optimization_algorithms.py +++ b/tutorial/10_key_features/003_efficient_optimization_algorithms.py @@ -83,11 +83,11 @@ # # - Threshold pruning algorithm implemented in :class:`~optuna.pruners.ThresholdPruner` # -# - A pruning algorithm based on `Wilcoxon signed-rank test `_ implemented in :class:`~optuna.pruners.WilcoxonPruner` +# - A pruning algorithm based on the `Wilcoxon signed-rank test `__, implemented in :class:`~optuna.pruners.WilcoxonPruner` # # We use :class:`~optuna.pruners.MedianPruner` in most examples, # though it is generally outperformed by :class:`~optuna.pruners.SuccessiveHalvingPruner` and -# :class:`~optuna.pruners.HyperbandPruner` as in `this benchmark result `_. +# :class:`~optuna.pruners.HyperbandPruner`, as shown in `this benchmark result `__. # # # Activating Pruners # ------------------ @@ -97,7 +97,7 @@ # :func:`~optuna.trial.Trial.should_prune` decides whether to terminate a trial that does not meet a predefined condition. # # We would recommend using integration modules for major machine learning frameworks. -# Exclusive list is :mod:`~optuna.integration` and usecases are available in `optuna-examples `_. +# The exhaustive list is :mod:`~optuna.integration`, and use cases are available in `optuna-examples `__. import logging @@ -148,7 +148,7 @@ # Which Sampler and Pruner Should be Used? # ---------------------------------------- # -# From the benchmark results which are available at `optuna/optuna - wiki "Benchmarks with Kurobako" `_, at least for not deep learning tasks, we would say that +# From the benchmark results, which are available at `optuna/optuna - wiki "Benchmarks with Kurobako" `__, at least for non-deep-learning tasks, we would say that # # * For :class:`~optuna.samplers.RandomSampler`, :class:`~optuna.pruners.MedianPruner` is the best. # * For :class:`~optuna.samplers.TPESampler`, :class:`~optuna.pruners.HyperbandPruner` is the best. @@ -156,7 +156,7 @@ # However, note that the benchmark does not cover deep learning tasks. # For deep learning tasks, # consult the table below. -# This table is from the `Ozaki et al., Hyperparameter Optimization Methods: Overview and Characteristics, in IEICE Trans, Vol.J103-D No.9 pp.615-631, 2020 `_ paper, +# This table is from the `Ozaki et al., Hyperparameter Optimization Methods: Overview and Characteristics, in IEICE Trans, Vol.J103-D No.9 pp.615-631, 2020 `__ paper, # which is written in Japanese. # # +---------------------------+-----------------------------------------+---------------------------------------------------------------+ @@ -179,8 +179,8 @@ # # For the complete list of Optuna's integration modules, see :mod:`~optuna.integration`. # -# For example, `LightGBMPruningCallback `_ introduces pruning without directly changing the logic of training iteration. -# (See also `example `_ for the entire script.) +# For example, `LightGBMPruningCallback `__ introduces pruning without directly changing the logic of the training iteration. +# (See also the `example `__ for the entire script.) # # .. code-block:: text
# diff --git a/tutorial/10_key_features/004_distributed.py b/tutorial/10_key_features/004_distributed.py index bf9385b012..96a4062ebe 100644 --- a/tutorial/10_key_features/004_distributed.py +++ b/tutorial/10_key_features/004_distributed.py @@ -12,7 +12,7 @@ 2. create a study with ``--storage`` argument 3. share the study among multiple nodes and processes -Of course, you can use Kubernetes as in `the kubernetes examples `_. +Of course, you can use Kubernetes as in `the Kubernetes examples `__. To see how parallel optimization works in Optuna, check the video below. diff --git a/tutorial/10_key_features/005_visualization.py b/tutorial/10_key_features/005_visualization.py index faa1a025e9..1877f12ec6 100644 --- a/tutorial/10_key_features/005_visualization.py +++ b/tutorial/10_key_features/005_visualization.py @@ -12,7 +12,7 @@ please refer to the tutorial of :ref:`multi_objective`. .. note:: - By using `Optuna Dashboard `_, you can also check the optimization history, + By using `Optuna Dashboard `__, you can also check the optimization history, hyperparameter importances, hyperparameter relationships, etc. in graphs and tables. Please make your study persistent using the :ref:`RDB backend ` and execute the following commands to run Optuna Dashboard. @@ -21,7 +21,7 @@ $ pip install optuna-dashboard $ optuna-dashboard sqlite:///example-study.db - Please check out `the GitHub repository `_ for more details. + Please check out `the GitHub repository `__ for more details. .. list-table:: :header-rows: 1 diff --git a/tutorial/20_recipes/001_rdb.py b/tutorial/20_recipes/001_rdb.py index d2e1dcf96f..28e6329604 100644 --- a/tutorial/20_recipes/001_rdb.py +++ b/tutorial/20_recipes/001_rdb.py @@ -11,7 +11,7 @@ .. note:: You can also utilize other RDB backends, e.g., PostgreSQL or MySQL, by setting the storage argument to the DB's URL. - Please refer to `SQLAlchemy's document `_ for how to set up the URL. + Please refer to `SQLAlchemy's documentation `__ for how to set up the URL. New Study diff --git a/tutorial/20_recipes/002_multi_objective.py b/tutorial/20_recipes/002_multi_objective.py index 5f82cdbf48..c54054facf 100644 --- a/tutorial/20_recipes/002_multi_objective.py +++ b/tutorial/20_recipes/002_multi_objective.py @@ -7,7 +7,7 @@ This tutorial showcases Optuna's multi-objective optimization feature by optimizing the validation accuracy of the Fashion MNIST dataset and the FLOPS of the model implemented in PyTorch. -We use `fvcore `_ to measure FLOPS. +We use `fvcore `__ to measure FLOPS. """ import torch diff --git a/tutorial/20_recipes/005_user_defined_sampler.py b/tutorial/20_recipes/005_user_defined_sampler.py index 2124611d25..2cd7605602 100644 --- a/tutorial/20_recipes/005_user_defined_sampler.py +++ b/tutorial/20_recipes/005_user_defined_sampler.py @@ -8,7 +8,7 @@ - experiment with your own sampling algorithms, - implement task-specific algorithms to refine the optimization performance, or -- wrap other optimization libraries to integrate them into Optuna pipelines (e.g., `BoTorchSampler `_). +- wrap other optimization libraries to integrate them into Optuna pipelines (e.g., `BoTorchSampler `__). This section describes the internal behavior of sampler classes and shows an example of implementing a user-defined sampler.
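For context, a user-defined sampler only has to implement the three methods shown below. This is a minimal sketch that simply defers every parameter to independent random sampling; it is not the tutorial's simulated annealing sampler, which the next hunk refers to.

.. code-block:: python

    import optuna
    from optuna.samplers import BaseSampler


    class PassthroughSampler(BaseSampler):
        """A do-nothing sampler: every parameter falls back to random sampling."""

        def __init__(self, seed=None):
            self._fallback = optuna.samplers.RandomSampler(seed=seed)

        def infer_relative_search_space(self, study, trial):
            # An empty dict means no parameters are sampled "relatively"
            # (i.e., jointly); everything goes through sample_independent().
            return {}

        def sample_relative(self, study, trial, search_space):
            return {}

        def sample_independent(self, study, trial, param_name, param_distribution):
            return self._fallback.sample_independent(
                study, trial, param_name, param_distribution
            )


    study = optuna.create_study(sampler=PassthroughSampler(seed=0))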
@@ -37,7 +37,7 @@ -------------------------------------------------- For example, the following code defines a sampler based on -`Simulated Annealing (SA) `_: +`Simulated Annealing (SA) `__: """ import numpy as np @@ -107,7 +107,7 @@ def sample_independent(self, study, trial, param_name, param_distribution): # For the sake of code simplicity, the above implementation doesn't support some features (e.g., maximization). # If you're interested in how to support those features, please see # `examples/samplers/simulated_annealing.py -# `_. +# `__. # # # You can use ``SimulatedAnnealingSampler`` in the same way as built-in samplers as follows: diff --git a/tutorial/20_recipes/007_optuna_callback.py b/tutorial/20_recipes/007_optuna_callback.py index 91309812ad..490932f7fd 100644 --- a/tutorial/20_recipes/007_optuna_callback.py +++ b/tutorial/20_recipes/007_optuna_callback.py @@ -9,7 +9,7 @@ ``Callback`` is called after every evaluation of ``objective``, and it takes :class:`~optuna.study.Study` and :class:`~optuna.trial.FrozenTrial` as arguments, and does some work. -`MLflowCallback `_ is a great example. +`MLflowCallback `__ is a great example. """ ################################################################################################### diff --git a/tutorial/20_recipes/008_specify_params.py b/tutorial/20_recipes/008_specify_params.py index 75e84110ec..d68f906c36 100644 --- a/tutorial/20_recipes/008_specify_params.py +++ b/tutorial/20_recipes/008_specify_params.py @@ -26,7 +26,7 @@ Optuna has :func:`optuna.study.Study.enqueue_trial`, which lets you pass those sets of hyperparameters to Optuna, and Optuna will evaluate them. -This section walks you through how to use this lit API with `LightGBM `_. +This section walks you through how to use this API with `LightGBM `__. """ import lightgbm as lgb diff --git a/tutorial/20_recipes/009_ask_and_tell.py b/tutorial/20_recipes/009_ask_and_tell.py index 7c04a760b5..ad59d4c69a 100644 --- a/tutorial/20_recipes/009_ask_and_tell.py +++ b/tutorial/20_recipes/009_ask_and_tell.py @@ -64,7 +64,7 @@ def objective(trial): # This interface is not flexible enough. # For example, if ``objective`` requires additional arguments other than ``trial``, # you need to define a class as in -# `How to define objective functions that have own arguments? <../../faq.html#how-to-define-objective-functions-that-have-own-arguments>`_. +# `How to define objective functions that have own arguments? <../../faq.html#how-to-define-objective-functions-that-have-own-arguments>`__. # The ask-and-tell interface provides a more flexible syntax to optimize hyperparameters. # The following example is equivalent to the previous code block. diff --git a/tutorial/20_recipes/012_artifact_tutorial.py b/tutorial/20_recipes/012_artifact_tutorial.py index 9a183de64a..a224cd610b 100644 --- a/tutorial/20_recipes/012_artifact_tutorial.py +++ b/tutorial/20_recipes/012_artifact_tutorial.py @@ -11,7 +11,7 @@ as files. Introduced in Optuna v3.3, this module finds a broad range of applications, such as utilizing snapshots of large models for hyperparameter tuning, optimizing massive chemical structures, and even human-in-the-loop optimization employing images or sounds. Use of Optuna's artifact module allows you to handle data that would be too large to store in a database.
Furthermore, -by integrating with `optuna-dashboard `_, saved artifacts can be automatically visualized +by integrating with `optuna-dashboard `__, saved artifacts can be automatically visualized with the web UI, which significantly reduces the effort of experiment management. TL;DR @@ -42,7 +42,7 @@ chemical structures, image and audio data, etc.) for each trial. Also, while this tutorial does not touch upon it, it's possible to manage artifacts associated not only with trials but also with -studies. Please refer to the `official documentation `_ +studies. Please refer to the `official documentation `__ if you are interested. Situations where artifacts are useful @@ -92,7 +92,7 @@ parameters were, when each trial started and ended, etc. This file is in the SQLite format, and it is not suitable for storing large data. Writing large data entries may cause performance degradation. Note that SQLite is not suitable for distributed parallel optimization. If you want to perform distributed optimization, please use MySQL as we will explain later, or JournalStorage -(`example `_). +(`example `__). So, let's use the artifact module to save large data in a different format. Suppose the data is generated for each trial and you want to save it in some format (e.g., png format if it's an image). The specific destination for saving the artifacts can be any @@ -156,7 +156,7 @@ def objective(trial: optuna.Trial) -> float: As the scale of optimization increases, it becomes difficult to complete all calculations locally. Optuna's storage objects can persist data remotely by specifying a URL, enabling distributed optimization. Here, we will use MySQL as a remote relational database server. MySQL is an open-source relational database management system and well-known software used for various purposes. For using MySQL with Optuna, the `tutorial `__ can be a good reference. However, it is also not appropriate to read and write large data in a relational database like MySQL. In Optuna, it is common to use the artifact module when you want to read and write such data for each trial. Unlike Scenario 1, diff --git a/tutorial/20_recipes/013_wilcoxon_pruner.py b/tutorial/20_recipes/013_wilcoxon_pruner.py index c92ce17888..c55281ff86 100644 --- a/tutorial/20_recipes/013_wilcoxon_pruner.py +++ b/tutorial/20_recipes/013_wilcoxon_pruner.py @@ -4,11 +4,11 @@ Early-stopping independent evaluations by Wilcoxon pruner ============================================================ -This tutorial showcases Optuna's `WilcoxonPruner `_. +This tutorial showcases Optuna's `WilcoxonPruner `__. This pruner is effective for objective functions that average multiple evaluations. -We solve `Traveling Salesman Problem (TSP) `_ -by `Simulated Annealing (SA) `_. +We solve the `Traveling Salesman Problem (TSP) `__ +by `Simulated Annealing (SA) `__. Overview: Solving Traveling Salesman Problem with Simulated Annealing ---------------------------------------------------------------------------- @@ -39,7 +39,7 @@ decreases to zero. There are several ways to define a neighborhood for TSP, but we use a -simple neighborhood called `2-opt `_. 2-opt neighbor chooses a path in +simple neighborhood called `2-opt `__. A 2-opt neighbor chooses a path in the current solution and reverses the visiting order in the path. For example, if the initial solution is `a→b→c→d→e→a`, `a→d→c→b→e→a` is a 2-opt neighbor (the path from `b` to `d` is reversed).
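For context, :class:`~optuna.pruners.WilcoxonPruner` expects one :meth:`~optuna.trial.Trial.report` call per problem instance, with the instance id as the step, so that results can be paired against those of the current best trial. A minimal sketch of that reporting pattern follows; ``evaluate`` and ``instances`` are hypothetical placeholders for the tutorial's TSP-specific code.

.. code-block:: python

    import numpy as np

    import optuna


    def objective(trial):
        x = trial.suggest_float("x", 0, 1)
        results = []
        # Evaluate the instances in a random order to avoid systematic bias.
        for instance_id in np.random.permutation(len(instances)):
            loss = evaluate(x, instances[instance_id])  # hypothetical helper
            results.append(loss)
            trial.report(loss, int(instance_id))
            if trial.should_prune():
                # Returning the average so far (instead of raising
                # optuna.TrialPruned) gives the sampler a usable value.
                return float(np.mean(results))
        return float(np.mean(results))


    study = optuna.create_study(pruner=optuna.pruners.WilcoxonPruner(p_threshold=0.1))
    study.optimize(objective, n_trials=100)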