`nevergrad` is a Python 3.6+ library. It can be installed with:

```bash
pip install nevergrad
```
You can also install the master branch instead of the latest release with:

```bash
pip install git+https://github.com/facebookresearch/nevergrad@master#egg=nevergrad
```
Alternatively, you can clone the repository and run `python3 setup.py develop` from inside the repository folder.
The goals of this package are to provide:
- gradient/derivative-free optimization algorithms, including algorithms able to handle noise.
- tools to instrument any code, making it painless to optimize your parameters/hyperparameters, whether they are continuous, discrete or a mixture of continuous and discrete variables.
- functions on which to test the optimization algorithms.
- benchmark routines in order to compare algorithms easily.
The structure of the package follows its goals; you will therefore find the following subpackages:
- `optimization`: implementing optimization algorithms
- `instrumentation`: tooling to convert code into a well-defined function to optimize
- `functions`: implementing both simple and complex benchmark functions
- `benchmark`: for running experiments comparing the algorithms on benchmark functions
- `common`: a set of tools used throughout the package
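For orientation, the subpackage names map directly onto import paths. A minimal sketch (the `optimization` imports are the ones used later in this README; the other lines simply import the subpackage modules listed above, and the benchmark tooling may need extra dependencies depending on how `nevergrad` was installed):

```python
# the subpackage structure maps directly onto import paths, e.g.:
from nevergrad.optimization import optimizerlib, registry  # optimizers and their registry
from nevergrad import instrumentation                      # parameter instrumentation tooling
from nevergrad import functions, benchmark, common         # test functions, benchmark routines, shared tools
```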
*Convergence of a population of points to the minima with two-points DE.*
This README is very general; here are links with more details on:
- how to perform optimization using `nevergrad`, including parallelization and a few recommendations on which algorithm to use depending on the settings.
- how to instrument functions with any kind of parameters in order to convert them into a function defined on a continuous vectorial space where optimization can be performed (a small sketch follows this list). It also provides a tool to instantiate a script or non-Python code as a Python function so that some of its parameters can be tuned.
- how to benchmark all optimizers on various test functions.
- benchmark results of some standard optimizers on simple test cases.
- examples of optimization for machine learning.
- how to contribute through issues and pull requests, and how to set up your dev environment.
- guidelines on how to contribute by adding a new algorithm.
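To make the instrumentation idea concrete, here is a minimal sketch of instrumenting a function with a mix of discrete and continuous arguments. The names used here (`instru.variables.SoftmaxCategorical`, `instru.variables.Gaussian`, `instru.InstrumentedFunction`, `.dimension`) are assumptions based on the instrumentation documentation of early `nevergrad` releases and may differ in your version; check the instrumentation documentation linked above.

```python
from nevergrad import instrumentation as instru
from nevergrad.optimization import optimizerlib

def myfunction(category, scale):
    # toy objective mixing a discrete choice and a continuous value
    return (0.0 if category == "b" else 1.0) + (scale - 2.0) ** 2

# NOTE: the variable and class names below are assumptions (see lead-in)
category = instru.variables.SoftmaxCategorical(["a", "b", "c"])  # discrete argument
scale = instru.variables.Gaussian(mean=0, std=1)                 # continuous argument

ifunc = instru.InstrumentedFunction(myfunction, category, scale)  # function on a continuous space
optimizer = optimizerlib.OnePlusOne(dimension=ifunc.dimension, budget=200)
recommendation = optimizer.optimize(ifunc)
```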
All optimizers assume a centered and reduced prior at the beginning of the optimization (i.e. 0 mean and unit standard deviation). They are however able to find solutions far from this initial prior.
Optimizing (minimizing!) a function using an optimizer (here `OnePlusOne`) can easily be done with:
```python
from nevergrad.optimization import optimizerlib

def square(x):
    return sum((x - .5)**2)

optimizer = optimizerlib.OnePlusOne(dimension=1, budget=100)
# alternatively, you can use optimizerlib.registry which is a dict containing all optimizer classes
recommendation = optimizer.optimize(square)
```
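Since optimizers assume a centered and reduced prior (see above), parameters that live on a very different scale can simply be rescaled inside the objective function. A minimal sketch reusing the same API (the scaling constants are arbitrary and only for illustration); the returned recommendation then lives in the optimizer's standardized space and must be mapped back with the same transform:

```python
from nevergrad.optimization import optimizerlib

def loss(x):
    # the optimizer explores around 0 with unit scale, so map its variables
    # onto the range the problem actually lives in (here roughly [0, 1000])
    rescaled = 500 + 300 * x
    return sum((rescaled - 742) ** 2)

optimizer = optimizerlib.OnePlusOne(dimension=2, budget=300)
recommendation = optimizer.optimize(loss)   # point in the standardized space
best_parameters = 500 + 300 * recommendation  # map back to the problem's own scale
```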
You can print the full list of optimizers with:

```python
from nevergrad.optimization import registry
print(sorted(registry.keys()))
```
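Since `registry` is a dict of optimizer classes, you can also instantiate an optimizer by name. A minimal sketch, where `"TwoPointsDE"` is used only as an example key (it corresponds to the two-points DE mentioned above):

```python
from nevergrad.optimization import registry

def square(x):
    return sum((x - .5)**2)

optimizer_cls = registry["TwoPointsDE"]        # look up an optimizer class by name
optimizer = optimizer_cls(dimension=2, budget=300)
recommendation = optimizer.optimize(square)
```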
The optimization documentation contains more information on how to use several workers, how to take full control of the optimization through the `ask` and `tell` interface, and some advice on how to choose the proper optimizer for your problem.
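As an illustration of those two points, here is a minimal sketch of an explicit `ask`/`tell` loop and of running with several workers through a standard executor. It assumes `ask()` returns a candidate point, `tell(point, value)` records its loss, `provide_recommendation()` returns the current best guess, and that the optimizer accepts a `num_workers` argument and `optimize` an `executor` argument; these follow the optimization documentation, so check it if your version differs.

```python
from concurrent import futures
from nevergrad.optimization import optimizerlib

def square(x):
    return sum((x - .5)**2)

# explicit ask/tell loop (assumed interface, see the optimization documentation)
optimizer = optimizerlib.OnePlusOne(dimension=1, budget=100)
for _ in range(optimizer.budget):   # optimizer.budget: budget passed at construction (assumed attribute)
    x = optimizer.ask()             # get a candidate point
    optimizer.tell(x, square(x))    # report its loss
recommendation = optimizer.provide_recommendation()

# running with several workers through an executor (assumed arguments)
optimizer = optimizerlib.OnePlusOne(dimension=1, budget=100, num_workers=4)
with futures.ThreadPoolExecutor(max_workers=optimizer.num_workers) as executor:
    recommendation = optimizer.optimize(square, executor=executor)
```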
If you use `nevergrad` in your research, you can cite it with:

```bibtex
@misc{nevergrad,
    author = {J. Rapin and O. Teytaud},
    title = {{Nevergrad - A gradient-free optimization platform}},
    year = {2018},
    publisher = {GitHub},
    journal = {GitHub repository},
    howpublished = {\url{https://GitHub.com/FacebookResearch/Nevergrad}},
}
```
`nevergrad` is released under the MIT license. See LICENSE for additional details.