
Releases: LLNL/MuyGPyS

v0.6.3

21 Oct 00:50

v0.6.3 is functionally identical to v0.6.1, but rolls the fmfn/BayesianOptimization dependency back to tracking PyPI. The github-based dependency adopted in v0.6.1 (a workaround for the maintainers' loss of their PyPI credentials) cannot be expressed in a package uploaded to PyPI. The BayesianOptimization maintainers have since fixed the PyPI release, so the workaround is no longer necessary to obtain the newest version of bayesian-optimization.

v0.6.2

21 Oct 00:45

v0.6.2 is functionally identical to v0.6.1, but rolls the fmfn/BayesianOptimization dependency back to tracking PyPI. The maintainers of that project have lost control of their PyPI credentials, but their preferred workaround (tracking the github repo directly) does not work for a package uploaded to PyPI. Consequently, users who need a recent version of bayesian-optimization must update it manually in their environment.

v0.6.1

21 Oct 00:24

v0.6.1 introduces a new loss function, the leave-one-out-likelihood loss, referred to throughout the library as "lool". Changes in detail:

  • Added the new MuyGPyS.optimize.loss.lool_fn() and its backend implementations.
  • Added "lool" as a supported loss_method option.
  • Renamed MuyGPyS.gp.muygps.MuyGPS.get_opt_fn() to MuyGPyS.gp.muygps.MuyGPS.get_opt_mean_fn(), which now accepts an opt_method argument specifying which form of mean function to return.
  • Added MuyGPyS.gp.muygps.MuyGPS.get_opt_var_fn(), which returns an unscaled variance function in the form specified by its opt_method argument.
  • Added some MPI documentation.
  • Changed the bayesian-optimization dependency to track the github repo instead of PyPI, per bayesian-optimization/BayesianOptimization#366.
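For intuition, a leave-one-out likelihood loss scores each batch element's held-out prediction under a Gaussian likelihood. The following plain-numpy sketch is a hypothetical illustration of that idea, not the library's actual lool_fn implementation, and its signature and variance scaling are assumptions:

```python
import numpy as np

def lool_fn(predictions, targets, variances, sigma_sq):
    # Hypothetical sketch of a leave-one-out likelihood ("lool") loss:
    # the summed Gaussian negative log likelihood (up to constants) of
    # each held-out target under its predicted mean and scaled variance.
    scaled = variances * sigma_sq  # apply the sigma_sq variance scale
    residuals = targets - predictions
    return np.sum(np.log(scaled) + residuals**2 / scaled)
```

Minimizing such a quantity favors hyperparameters that make the leave-one-out residuals small relative to their predicted variances, which is why it can serve as an alternative to cross-entropy or mse losses.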

v0.6.0

12 Oct 22:06

v0.6.0 introduces support for distributed memory processing using MPI. MuyGPyS now supports three distinct implementations of all of its math functions. The implementation to be used is determined at import time, in the same way as the JAX support introduced in v0.5.0. Some MPI feature details:

  • pip installation now supports an additional mpi extras flag that installs the Python MPI bindings. However, the user must manually install MPI itself in their environment; MuyGPyS does not and will not do this for the end user. Please refer to the README for clarification.
  • Currently, if JAX dependencies and bindings are found, they will supersede the MPI bindings by default. This can be overridden by modifying the MuyGPyS.config object or by passing absl arguments at the command line. Future releases will support options to use MPI and JAX simultaneously.
  • Notably, the various implementations (numpy, JAX, and MPI) of MuyGPyS include only the kernel math and optimization functions. MuyGPyS.NN_Wrapper currently wraps third-party libraries, so the nearest neighbors computations do not take advantage of distributed memory or hardware acceleration. This may change in future releases.
  • Just like the JAX implementations, the MPI implementations of MuyGPyS functions share the same API as the single-core numpy implementation. Thus, existing workflows should generalize trivially to distributed memory (apart from the nearest neighbors and sampling portions). However, the MPI implementation partitions data across all available processors, so querying that data outside of MuyGPyS functions (e.g. for visualization purposes) requires the user to use MPI directly.
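MuyGPyS's exact partitioning scheme is internal, but a generic contiguous block partition (a hypothetical stand-in, not the library's code) shows why each rank only ever sees a slice of the data:

```python
def block_partition(count, comm_size, rank):
    # Hypothetical contiguous block partition of `count` items across
    # `comm_size` MPI ranks; rank `rank` owns the half-open index range
    # [low, high). Any cross-rank query (e.g. gathering predictions for
    # plotting) must communicate via MPI to reassemble the full tensor.
    low = rank * count // comm_size
    high = (rank + 1) * count // comm_size
    return low, high
```

For example, 10 data points on 4 ranks yields the ranges [0, 2), [2, 5), [5, 7), and [7, 10); reassembling all predictions on one rank then requires an explicit MPI collective such as a gather.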

There are also a number of quality-of-life and future-proofing changes to the MuyGPyS API. Changes in detail:

  • MuyGPyS.gp.muygps.MuyGPS.sigma_sq_optim() is removed. Independent sigma_sq optimization can now be accomplished with MuyGPyS.optimize.sigma_sq.muygps_sigma_sq_optim(). The function has a similar signature, except that it (1) accepts a MuyGPS object as an argument and returns a new MuyGPS object with an optimized sigma_sq parameter, and (2) uses an nn_targets tensor instead of nn_indices + targets matrices, a change necessary for the distributed memory workflows.
  • MuyGPyS.gp.muygps.MuyGPS.get_opt_fn() and MuyGPyS.gp.kernel.KernelFn.get_opt_fn() now require an opt_method argument that specifies which format the returned optimization functions should support.
  • Loss functions are moved out of MuyGPyS.optimize.objective and into the new MuyGPyS.optimize.loss.
  • Objective function choice is now modular. MuyGPyS.optimize.objective.make_obj_fn() is now the function to use to construct an objective function. In addition to the original arguments, it expects two new ones: (1) obj_method specifies the form of the objective function, and currently supports only "loo_crossval" (other options will follow in future releases), and (2) opt_method specifies the format of the optimizers, and currently supports "scipy" and "bayes". obj_method is now a kwarg in the high-level example workflows.
  • The default value of opt_method has changed from "scipy" to "bayes" throughout the project.
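For intuition about the analytic sigma_sq approximation mentioned above, here is a plain-numpy sketch that averages the quadratic form y^T K^{-1} y over the batch. This is an illustration only; the tensor shapes and function name are assumptions, not the library's implementation:

```python
import numpy as np

def analytic_sigma_sq(pairwise_kernels, nn_targets):
    # Hypothetical sketch of an analytic sigma_sq estimate.
    # pairwise_kernels: (batch, nn, nn) kernel matrices over each
    # nearest neighbor set; nn_targets: (batch, nn) neighbor targets.
    # The estimate is the mean of y^T K^{-1} y / nn over the batch.
    batch_count, nn_count, _ = pairwise_kernels.shape
    total = sum(
        y @ np.linalg.solve(K, y)
        for K, y in zip(pairwise_kernels, nn_targets)
    )
    return total / (batch_count * nn_count)
```

Because the estimate depends only on each batch element's neighbor tensors, it fits the new nn_targets-based signature and parallelizes naturally across a partitioned batch.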

v0.5.2

20 Apr 17:59

v0.5.2 fixes a critical bug introduced in v0.5.1. It is now possible to import MuyGPyS without installing JAX, as intended. There are no other semantic changes to the code, so all features of v0.5.1 are preserved. Changes in detail:

  • Added MuyGPyS._src.jaxconfig.Config, which provides a local copy of jax._src.config.Config for the inheritance of MuyGPyS._src.config.MuyGPySConfig in case jax is not installed.

v0.5.1

23 Mar 00:54
2fe7405

v0.5.1 adds support for batch optimization of hyperparameters using BayesianOptimization. Changes in detail:

  • High-level optimization functions now support an opt_method kwarg that accepts the options "scipy" and "bayesian" (alternatively "bayes" or "bayes_opt").
  • High-level optimization functions now forward additional kwargs to the optimization method. This is not relevant for scipy, but can drastically affect performance using BayesianOptimization. See the documentation notebooks for examples.
  • MuyGPyS.optimize.chassis.scipy_optimize_from_tensors() is now deprecated and will be removed in the future. Instead use MuyGPyS.optimize.chassis.optimize_from_tensors() with the kwarg opt_method="scipy". The same is true for *_from_indices.
  • Significantly changed how the MuyGPyS.config object works. See the updated README and documentation.
  • Fixed a simple but major SigmaSq value assignment bug.
  • Fixed a minor bug related to optimizing epsilon.
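The deprecation described above follows a common shim pattern; the sketch below is hypothetical (placeholder bodies, not the library's code) and only illustrates how the old name can warn and forward to the new entry point:

```python
import warnings

def optimize_from_tensors(muygps, *args, opt_method="bayes", **kwargs):
    # Unified entry point that dispatches on opt_method.
    # Placeholder body; a real implementation returns an optimized model.
    if opt_method not in ("scipy", "bayes"):
        raise ValueError(f"unsupported opt_method: {opt_method}")
    return muygps

def scipy_optimize_from_tensors(muygps, *args, **kwargs):
    # Deprecated alias retained for backward compatibility.
    warnings.warn(
        "scipy_optimize_from_tensors is deprecated; use "
        "optimize_from_tensors with opt_method='scipy'",
        DeprecationWarning,
    )
    return optimize_from_tensors(muygps, *args, opt_method="scipy", **kwargs)
```

Existing call sites keep working while emitting a DeprecationWarning, which gives users a release cycle to migrate before the alias is removed.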

v0.5.0

01 Mar 21:13
cb56c95

v0.5.0 introduces just-in-time compilation and GPU support using JAX. This change allows for the acceleration of workflows on CPU (1-2.5x) and NVIDIA GPU (30-60x). The API is unchanged from v0.4.1 - all existing code should still work. The only major changes for the user surround installation.

Briefly, pip installation (from either PyPI or source) now uses extras flags to manage optional dependencies - see the README for the supported optional flags and their effects. Installing MuyGPyS in an environment without JAX (e.g. pip install muygpys) will result in the use of numpy implementations for all math functions, while pip install muygpys[jax_cpu] will install the JAX dependencies for CPU use. GPU installation is more complicated; please refer to the README.

Although JAX defaults to 32 bit types, we force it to use 64 bit types by default in order to maintain agreement with the numpy implementations (up to machine precision). The user can override this and use 32 bit types for faster computation. See the README for details.
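As a quick plain-numpy illustration (not MuyGPyS or JAX code) of the precision gap that motivates the 64 bit default:

```python
import numpy as np

# float32 machine epsilon is ~1.2e-7, so a 1e-8 perturbation of 1.0
# vanishes entirely in 32 bit arithmetic but survives in 64 bit.
lost = np.float32(1.0) + np.float32(1e-8)  # rounds back to exactly 1.0
kept = np.float64(1.0) + np.float64(1e-8)  # strictly greater than 1.0
```

Differences of this kind accumulate through kernel evaluations and linear solves, which is why 32 bit results can disagree with the numpy reference beyond simple rounding.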

v0.4.1

19 Jan 22:10

v0.4.1 streamlines the codebase by moving all of the computation inside of pure functions.
This will hopefully improve readability and changeability of the codebase for future development.
The object-oriented API remains largely unchanged; member functions are now wrappers around static pure functions.
The only breaking API changes are to MuyGPyS.optimize.objective.loo_crossval(), which for most users is masked by MuyGPyS.optimize.chassis.scipy_optimize_from_tensors().
The latter function now returns an optimized model instead of modifying the given model in place.
Major changes are as follows:

  • Gave SigmaSq a trained() boolean member function to check whether it has been set. No longer assumes "unlearned" values.
  • Modified optimization chassis to use pure functions. scipy_optimize_from_* functions now return an optimized model.
  • Moved opt function preparation into KernelFn and MuyGPS classes.
  • Hyperparameter bounds no longer take the value "fixed". They instead default to (0.0, 0.0). The fixed status of hyperparameters is now accessed via Hyperparameter.fixed().
  • Relaxed the scipy dependency to scipy>=1.4.1 and incremented hnswlib to hnswlib>=0.6.0.
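The new fixed-parameter convention can be sketched as follows. This is a hypothetical minimal class for illustration, not the library's Hyperparameter implementation:

```python
class Hyperparameter:
    # Hypothetical sketch of the convention described above: bounds of
    # (0.0, 0.0) (the new default) mark a parameter as fixed, replacing
    # the old string sentinel "fixed".
    def __init__(self, val, bounds=(0.0, 0.0)):
        self._val = val
        self._bounds = bounds

    def fixed(self):
        # a degenerate bounds interval means "do not optimize"
        return self._bounds == (0.0, 0.0)

    def get_bounds(self):
        return self._bounds
```

The advantage of the sentinel-free design is that bounds are always a numeric tuple, so optimization chassis code never needs to special-case a string value.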

v0.4.0

09 Dec 01:30
f60e671

v0.4.0 overhauls the handling of the sigma_sq parameter throughout the code based upon user feedback. These changes will hopefully reduce confusion surrounding how we've implemented this scaling parameter in the past. Major changes are as follows:

  • sigma_sq is segregated from the usual hyperparameter handling. Users can no longer set sigma_sq or provide bounds (which were ignored in old versions anyway), and must train it directly. Currently the only method for doing so uses the analytic approximation in MuyGPS.sigma_sq_optim().
  • do_regress() gains the sigma_method kwarg, whose default argument "analytic" automates the above process. The only other accepted value, None, results in not training sigma_sq; this mostly arises in classification or in settings where the user does not need the variance.
  • do_regress() (as well as MuyGPS.regress() and related functions) gains the apply_sigma_sq boolean kwarg, whose default value True results in automatically scaling the predicted variance using sigma_sq.
  • do_regress() gains the return_distances boolean kwarg (default False). If true, the API call will return the crosswise and pairwise distances of the test data and its nearest neighbor sets as additional return values. This allows the user to retain the distance tensors for later use, if so desired.
  • Added convenience functions MuyGPyS.gp.distance.make_regress_tensors() and MuyGPyS.gp.distance.make_train_tensors() that provide a simpler interface to simultaneously create the crosswise_dists, pairwise_dists, batch_nn_targets, and batch_targets tensors (the last created by make_train_tensors() only).
  • MuyGPyS.testing.gp.BaselineGP now uses the current kernel API.
  • Tutorial boilerplate has moved out of the README and into jupyter notebooks stored in docs/examples/. These notebooks also compile into pages on the readthedocs.io web documentation using nbsphinx.
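For intuition about the distance tensors these convenience functions produce, here is a plain-numpy sketch. The shapes and the l2 metric are assumptions for illustration, not the library's exact conventions:

```python
import numpy as np

def crosswise_dists(test, train, nn_indices):
    # (batch, nn) distances between each test point and its nearest
    # neighbors drawn from the training data.
    diffs = test[:, None, :] - train[nn_indices]
    return np.sqrt((diffs**2).sum(axis=-1))

def pairwise_dists(train, nn_indices):
    # (batch, nn, nn) distances among each point's nearest neighbor set.
    nn = train[nn_indices]
    diffs = nn[:, :, None, :] - nn[:, None, :, :]
    return np.sqrt((diffs**2).sum(axis=-1))
```

Retaining these tensors (e.g. via return_distances) avoids recomputing them when evaluating several kernels or models over the same batch.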

Initial public release

29 Jul 21:50
Pre-release

Stable prerelease version 0.3.0. Includes support for singleton and multivariate MuyGPs models for both regression and classification, as well as some support for computing the posterior variance and for uncertainty quantification tuning; more features along these lines are planned for future releases. Full documentation is included.