Releases: LLNL/MuyGPyS
v0.6.3
v0.6.3 is exactly the same as v0.6.2, but the fmfn/BayesianOptimization maintainers have fixed the PyPI version, so the v0.6.2 workaround is no longer necessary to get the newest version of `bayes_opt`. Users no longer need to manually update it in their environment.
v0.6.2
v0.6.2 is exactly the same as v0.6.1, but the fmfn/BayesianOptimization dependency is rolled back to the old PyPI-tracking behavior. This is because the maintainers of that project have lost control of their PyPI credentials, but the GitHub-tracking dependency introduced in v0.6.1 does not work when uploading MuyGPyS to PyPI. Hence, a user that needs a recent version of `bayesian-optimization` will need to manually update it in their environment.
v0.6.1
v0.6.1 introduces a new loss function, the leave-one-out-likelihood loss, referred to throughout the library as `"lool"`. Changes in detail:
- Added new `MuyGPyS.optimize.loss.lool_fn()` and implementations.
- Added `"lool"` as a supported `loss_method` option (see the sketch after this list).
- Modified `MuyGPyS.gp.muygps.MuyGPS.get_opt_fn()` -> `MuyGPyS.gp.muygps.MuyGPS.get_opt_mean_fn()` to accept an `opt_method` argument specifying which form of mean function to return.
- Added `MuyGPyS.gp.muygps.MuyGPS.get_opt_var_fn()` to return an unscaled variance function of the form specified by the `opt_method` argument.
- Added some MPI documentation.
- Changed the bayesian-optimization dependency to track the GitHub repo instead of PyPI, per bayesian-optimization/BayesianOptimization#366.
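The following minimal sketch illustrates opting into the new loss. It is a hypothetical usage sketch: the exact `optimize_from_tensors()` argument order, the `MuyGPS` constructor arguments, and the tensor shapes are assumptions for illustration, not the documented API.

```python
# Hypothetical sketch only: argument order and tensor shapes are assumptions.
import numpy as np

from MuyGPyS.gp.muygps import MuyGPS
from MuyGPyS.optimize.chassis import optimize_from_tensors

batch_count, nn_count, response_count = 200, 30, 1

# Placeholder tensors standing in for real batched training data.
crosswise_dists = np.random.rand(batch_count, nn_count)
pairwise_dists = np.random.rand(batch_count, nn_count, nn_count)
batch_targets = np.random.rand(batch_count, response_count)
batch_nn_targets = np.random.rand(batch_count, nn_count, response_count)

muygps = MuyGPS(kern="matern")  # kernel choice is illustrative

# Optimize hyperparameters against the new leave-one-out-likelihood loss.
muygps = optimize_from_tensors(
    muygps,
    batch_targets,
    batch_nn_targets,
    crosswise_dists,
    pairwise_dists,
    loss_method="lool",  # new in v0.6.1
    opt_method="bayes",  # the project default as of v0.6.0
)
```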
v0.6.0
v0.6.0 introduces support for distributed memory processing using MPI. `MuyGPyS` now supports three distinct implementations of all of the math functions. The implementation to be used is determined at import time, in the same way as the JAX support introduced in v0.5.0. Some MPI feature details:
- `pip` installation now supports the additional `mpi` extras flag to install the Python MPI bindings. However, the user must manually install MPI in their environment; `MuyGPyS` does not and will not do this for the end user. Please refer to the README for clarification.
- Currently, if JAX dependencies and bindings are found, they will supersede the MPI bindings by default. This can be overridden by modifying the `MuyGPyS.config` object or passing `absl` arguments at the command line. Future releases will support options to simultaneously use MPI and JAX.
- Notably, the various implementations (`numpy`, `JAX`, and `MPI`) of `MuyGPyS` include only the kernel math and optimization functions. `MuyGPyS.NN_Wrapper` currently wraps third-party libraries, and so the nearest neighbors computations do not take advantage of distributed memory or hardware acceleration. This may change in future releases.
- Just like the `JAX` implementations, the `MPI` implementations of `MuyGPyS` functions share the same API as the single-core `numpy` implementation. Thus, existing workflows should trivially generalize to distributed memory (except for the nearest neighbors and sampling portions). However, the `MPI` implementation partitions data across all available processors. This means that querying said data outside of `MuyGPyS` functions (e.g. for visualization purposes) will require the user to use MPI directly, as in the sketch after this list.
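For example, a user who wants to visualize predictions computed under the MPI implementation might gather the locally held partitions onto a single rank using mpi4py directly. This standalone sketch assumes nothing about MuyGPyS itself; `local_predictions` is an illustrative placeholder for the slice of output left on each rank.

```python
# Minimal mpi4py sketch of collecting distributed results onto rank 0.
# local_predictions is a placeholder for the partition of the output that
# the MPI implementation leaves on each rank; it is not a MuyGPyS name.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD

local_predictions = np.zeros((10, 1))  # this rank's slice (illustrative)

gathered = comm.gather(local_predictions, root=0)
if comm.Get_rank() == 0:
    all_predictions = np.vstack(gathered)  # full prediction matrix on rank 0
    print(all_predictions.shape)
```

Run under MPI as usual, e.g. `mpirun -n 4 python gather_example.py`.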
There are also a number of quality-of-life and future-proofing changes to the `MuyGPyS` API. Changes in detail:
- `MuyGPyS.gp.muygps.MuyGPS.sigma_sq_optim()` is removed. Independent `sigma_sq` optimization can now be accomplished with `MuyGPyS.optimize.sigma_sq.muygps_sigma_sq_optim()` (see the sketch after this list). The function has a similar signature, except that it (1) accepts a `MuyGPS` object as an argument and returns a new `MuyGPS` object with an optimized `sigma_sq` parameter, and (2) uses an `nn_targets` tensor instead of the `nn_indices` + `targets` matrices, which is a necessary change for the distributed memory workflows.
- `MuyGPyS.gp.muygps.MuyGPS.get_opt_fn()` and `MuyGPyS.gp.kernel.KernelFn.get_opt_fn()` now require an `opt_method` argument that specifies which format the returned optimization functions should support.
- Loss functions are moved out of `MuyGPyS.optimize.objective` and into the new `MuyGPyS.optimize.loss`.
- Objective function choice is now modular. `MuyGPyS.optimize.objective.make_obj_fn()` is now the function used to construct an objective function. In addition to the original arguments, it now expects two new arguments: (1) `obj_method`, which specifies the form of the objective function and currently only supports `"loo_crossval"` but will support other options in the future, and (2) `opt_method`, which specifies the format of the optimizers and currently supports `"scipy"` and `"bayes"`. `obj_method` is now a kwarg in the high-level example workflows.
- The default value of `opt_method` has changed from `"scipy"` to `"bayes"` throughout the project.
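A hedged sketch of the new `sigma_sq` workflow follows; the argument order and tensor shapes are assumptions inferred from the description above, not the documented signature.

```python
# Hypothetical sketch: argument order and tensor shapes are assumptions
# inferred from the release notes, not the documented signature.
import numpy as np

from MuyGPyS.gp.muygps import MuyGPS
from MuyGPyS.optimize.sigma_sq import muygps_sigma_sq_optim

batch_count, nn_count, response_count = 200, 30, 1
pairwise_dists = np.random.rand(batch_count, nn_count, nn_count)
nn_targets = np.random.rand(batch_count, nn_count, response_count)

muygps = MuyGPS(kern="matern")  # kernel choice is illustrative

# Returns a new model with an optimized sigma_sq parameter, rather than
# modifying the given model in place.
muygps = muygps_sigma_sq_optim(muygps, pairwise_dists, nn_targets)
```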
v0.5.2
v0.5.2 fixes a critical bug introduced in v0.5.1. It is now possible to import `MuyGPyS` without installing `JAX`, as intended. There are no other semantic changes to the code, so all features of v0.5.1 are preserved. Changes in detail:
- Added `MuyGPyS._src.jaxconfig.Config`, which provides a local copy of `jax._src.config.Config` for the inheritance of `MuyGPyS._src.config.MuyGPySConfig` in case `jax` is not installed.
v0.5.1
v0.5.1 adds support for batch optimization of hyperparameters using BayesianOptimization. Changes in detail:
- High-level optimization functions now support an `opt_method` kwarg that accepts `"scipy"` and `"bayesian"` (alternately `"bayes"` or `"bayes_opt"`) options.
- High-level optimization functions now forward additional kwargs to the optimization method. This is not relevant for scipy, but can drastically affect performance using BayesianOptimization. See the documentation notebooks for examples.
- `MuyGPyS.optimize.chassis.scipy_optimize_from_tensors()` is now deprecated and will be removed in the future. Instead use `MuyGPyS.optimize.chassis.optimize_from_tensors()` with the kwarg `opt_method="scipy"`, as in the sketch after this list. The same is true for `*_from_indices`.
- Significantly changed how the `MuyGPyS.config` object works. See the updated README and documentation.
- Fixed a simple but major SigmaSq value assignment bug.
- Fixed a minor bug related to optimizing epsilon.
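The following hypothetical migration sketch contrasts the deprecated call with its replacement. The positional tensor arguments and `MuyGPS` constructor are assumptions for illustration, while `init_points` and `n_iter` are standard BayesianOptimization controls of the kind that the new kwarg forwarding passes through.

```python
# Hypothetical migration sketch: positional arguments are assumptions
# carried over from the deprecated function's workflow.
import numpy as np

from MuyGPyS.gp.muygps import MuyGPS
from MuyGPyS.optimize.chassis import optimize_from_tensors

batch_count, nn_count, response_count = 200, 30, 1
crosswise_dists = np.random.rand(batch_count, nn_count)
pairwise_dists = np.random.rand(batch_count, nn_count, nn_count)
batch_targets = np.random.rand(batch_count, response_count)
batch_nn_targets = np.random.rand(batch_count, nn_count, response_count)
muygps = MuyGPS(kern="matern")  # illustrative

# Old (deprecated): scipy_optimize_from_tensors(muygps, ...)
# New, equivalent behavior:
muygps = optimize_from_tensors(
    muygps,
    batch_targets,
    batch_nn_targets,
    crosswise_dists,
    pairwise_dists,
    opt_method="scipy",
)

# BayesianOptimization backend, forwarding extra kwargs to the optimizer:
muygps = optimize_from_tensors(
    muygps,
    batch_targets,
    batch_nn_targets,
    crosswise_dists,
    pairwise_dists,
    opt_method="bayes",
    init_points=5,  # forwarded BayesianOptimization controls (illustrative)
    n_iter=20,
)
```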
v0.5.0
v0.5.0 introduces just-in-time compilation and GPU support using JAX. This change allows for the acceleration of workflows on CPU (1-2.5x) and NVidia GPU (30-60x). The code API is unchanged from v0.4.1 - all codes should still work. The only major changes for the user surround installation.
Briefly, pip installation (from either PyPI or source) now uses extras flags to manage optional dependencies - see the README for supported optional flags and their effects. Installing MuyGPyS in an environment without JAX (e.g. `pip install muygpys`) will result in the use of numpy implementations for all math functions. On CPU, `pip install muygpys[jax_cpu]` will install the JAX dependencies.
GPU installation is more complicated; please refer to the README.
Although JAX operates by default on 32 bit types, we force it by default to use 64 bit types in order to maintain agreement with the numpy implementations (up to machine precision). The user can override this and use 32 bit types for faster computation. See the README for details.
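To illustrate the mechanism, the standalone JAX snippet below uses JAX's standard `jax_enable_x64` switch. MuyGPyS manages this setting internally, so this is only a sketch of the underlying behavior, not a required user step.

```python
# Standalone JAX sketch of the precision switch the note describes.
# MuyGPyS handles this internally; shown only to illustrate the mechanism.
import jax
import jax.numpy as jnp

jax.config.update("jax_enable_x64", True)  # 64-bit, matching numpy results

x = jnp.ones(3)
print(x.dtype)  # float64; would be float32 under JAX's 32-bit default
```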
v0.4.1
v0.4.1 streamlines the codebase by moving all of the computation inside of pure functions.
This will hopefully improve readability and changeability of the codebase for future development.
The object-oriented API remains largely unchanged; member functions are now wrappers around static pure functions.
The only breaking API changes are to `MuyGPyS.optimize.objective.loo_crossval()`, which is most likely masked by `MuyGPyS.optimize.chassis.scipy_optimize_from_tensors()` for most users. The latter function now returns an optimized model instead of modifying the given model in place.
Major changes are as follows:
- Gave `SigmaSq` a `trained()` boolean member function to check whether it has been set. No longer assumes "unlearned" values.
- Modified the optimization chassis to use pure functions. `scipy_optimize_from_*` functions now return an optimized model. (See the pattern sketch after this list.)
- Moved opt function preparation into the `KernelFn` and `MuyGPS` classes.
- `Hyperparameter` bounds no longer take the value `"fixed"`. They instead default to `(0.0, 0.0)`. The fixed status of hyperparameters is now accessed via `Hyperparameter.fixed()`.
- Relaxed the dependency to `scipy>=1.4.1` and incremented `hnswlib>=0.6.0`.
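The sketch below is an illustrative example of the wrapping pattern this release adopts, not MuyGPyS source: a thin member function delegating to a stateless pure function.

```python
# Illustrative example (not MuyGPyS source) of the refactoring pattern:
# the object-oriented API delegates to a stateless pure function.
import numpy as np


def _rbf_kernel_fn(pairwise_dists: np.ndarray, length_scale: float) -> np.ndarray:
    """Pure function: output depends only on its inputs, with no hidden state."""
    return np.exp(-pairwise_dists**2 / (2.0 * length_scale**2))


class KernelFn:
    def __init__(self, length_scale: float):
        self.length_scale = length_scale

    def __call__(self, pairwise_dists: np.ndarray) -> np.ndarray:
        # Thin member-function wrapper around the pure implementation.
        return _rbf_kernel_fn(pairwise_dists, self.length_scale)
```

Keeping the math in pure functions makes each implementation independently testable and easy to swap, while the class preserves the familiar object-oriented API.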
v0.4.0
v0.4.0 overhauls the handling of the `sigma_sq` parameter throughout the code based upon user feedback. These changes will hopefully reduce confusion surrounding how we've implemented this scaling parameter in the past. Major changes are as follows:
- `sigma_sq` is segregated from the usual hyperparameter handling. Users can no longer set `sigma_sq` or provide bounds (which were ignored in old versions anyway), and must train it directly. Currently the only method for doing so uses the analytic approximation in `MuyGPS.sigma_sq_optim()`.
- `do_regress()` gains the `sigma_method` kwarg, whose default argument `"analytic"` automates the above process. The only other accepted value, `None`, results in not training `sigma_sq`, which mostly arises in classification or settings where the user does not need variance.
- `do_regress()` (as well as `MuyGPS.regress()` and related functions) gains the `apply_sigma_sq` boolean kwarg, whose default value `True` results in automatically scaling the predicted variance using `sigma_sq`.
- `do_regress()` gains the `return_distances` boolean kwarg (default `False`). If true, the API call will return the crosswise and pairwise distances of the test data and its nearest neighbor sets as additional return values. This allows the user to retain the distance tensors for later use, if so desired (see the sketch after this list).
- Added convenience functions `MuyGPyS.gp.distance.make_regress_tensors()` and `MuyGPyS.gp.distance.make_train_tensors()` that provide a simpler interface to simultaneously create the `crosswise_dists`, `pairwise_dists`, `batch_nn_targets`, and `batch_targets` (latter only) tensors.
- `MuyGPyS.testing.gp.BaselineGP` now uses the current kernel API.
- Tutorial boilerplate has moved out of the README and into Jupyter notebooks stored in `docs/examples/`. These notebooks also compile into pages on the readthedocs.io web documentation using `nbsphinx`.
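The following hypothetical sketch exercises the new kwargs. The positional arguments, data shapes, and the return signature under `return_distances=True` are assumptions based on the notes above, not the documented interface; see the tutorial notebooks for the real workflow.

```python
# Hypothetical sketch: positional arguments, return values, and shapes are
# assumptions based on the release notes, not the documented signature.
import numpy as np

from MuyGPyS.examples.regress import do_regress

train_features = np.random.rand(1000, 10)
train_targets = np.random.rand(1000, 1)
test_features = np.random.rand(100, 10)

muygps, nbrs_lookup, predictions, variance, crosswise_dists, pairwise_dists = (
    do_regress(
        test_features,
        train_features,
        train_targets,
        sigma_method="analytic",  # train sigma_sq via the analytic method
        apply_sigma_sq=True,      # scale the returned variance by sigma_sq
        return_distances=True,    # also return the distance tensors
    )
)
```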
Initial public release
Stable prerelease version 0.3.0. Includes support for singleton and multivariate MuyGPs models for both regression and classification, and includes some support for computing the posterior variance and uncertainty quantification tuning, although more features along these lines are planned in future releases. Full documentation included.