Releases: LLNL/MuyGPyS
v0.9.0
v0.9.0 makes several API changes, generalizes the backend math to support multivariate kernels (thanks in large part to Alex Geringer-Sameth), adds several other features, and overhauls the documentation. Major additions include:
- External distance/difference tensor creation functions are removed. This functionality is now achieved with, e.g., `MuyGPS.make_predict_tensors()` or `MuyGPS.kernel.deformation.crosswise_tensor()` (see the sketch after this list).
- `AnalyticScale` now supports iterative optimization to resolve unidentifiability with the noise prior.
- Added a `ShearKernel` functor that uses these multivariate kernel features to implement physics-informed lensing shear interpolation. This feature is still experimental.
- The old `MultivariateMuyGPS` class and its derivatives are deprecated and will be removed in a future release.
- The `MuyGPyS.examples` package and its contents are deprecated and will be either removed or seriously refactored in a future release.
- The old build system is replaced by a `pyproject.toml`, and the source code has been restructured into a src-layout.
- Added and improved several documentation notebooks.
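For example, prediction tensor creation now goes through the model object. The following is a minimal sketch, assuming that the new method's signature mirrors the removed module-level `make_predict_tensors()` and that `muygps` is a `MuyGPS` model constructed as in the tutorials:

```python
import numpy as np

from MuyGPyS.neighbors import NN_Wrapper

rng = np.random.default_rng(0)
train_features = rng.normal(size=(1000, 4))
train_targets = rng.normal(size=(1000, 1))
test_features = rng.normal(size=(100, 4))

# index the training data and look up nearest neighbors of the test points
nbrs = NN_Wrapper(train_features, nn_count=30, nn_method="exact")
test_indices = np.arange(test_features.shape[0])
nn_indices, _ = nbrs.get_nns(test_features)

# v0.9: tensor creation lives on the model rather than in free functions.
# `muygps` is assumed to be a MuyGPS model constructed as in the tutorials.
crosswise_diffs, pairwise_diffs, nn_targets = muygps.make_predict_tensors(
    test_indices, nn_indices, test_features, train_features, train_targets
)
```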
What's Changed
- reordered tiling for shear kernel to a 3x3 grid of nxm kernels by @bwpriest in #206
- Add kwargs to shear functions. by @gsallaberry in #208
- generalized tensor solves for multivariate kernels by @bwpriest in #209
- renaming K to Kin throughout for clarity by @bwpriest in #210
- removed now-unneeded scipy limitation by @bwpriest in #211
- removed support for vector scale parameter by @bwpriest in #212
- optimization test overhaul by @bwpriest in #213
- gp math overhaul to support more diverse tensor shapes by @bwpriest in #214
- made shear kernel more general purpose by @bwpriest in #215
- tensorize lool_fn and analytic scale optimization by @bwpriest in #216
- Bug fixes and shear_kernel notebook updates. by @gsallaberry in #217
- Add variance and optimization tests to shear_kernel.ipynb by @gsallaberry in #219
- Iterative analytic scale optimization by @igoumiri in #207
- moved tensor creation inside of MuyGPS class hierarchy by @bwpriest in #218
- Make tests for ShearKernel by @gsallaberry in #220
- added objective function access method to OptimizeFn by @bwpriest in #221
- added 2-in-3-out shear kernel implementation by @bwpriest in #222
- added target_mask kwarg to optimizer functors that specifies which re… by @bwpriest in #223
- updated readme, contributing guidelines, and added a notebook by @bwpriest in #224
- [skip ci] added more visualization for the 2x3 shear kernel by @bwpriest in #226
- [skip ci] fixed error in shear kernel notebook by @bwpriest in #227
- [skip ci] added shear kernel confidence interval exploration by @bwpriest in #228
- reattaching nonstationary hyperparameters to optimization chassis by @bwpriest in #229
- Fix nonstationary unit tests by @igoumiri in #230
- modernize build system by @bwpriest in #231
- added a notebook illustrating the difference between the 2x3 and 3x3 shear means by @bwpriest in #232
- [skip ci] added math descriptions to the 2x3 offset notebook by @bwpriest in #234
- updated shear notebook with (2+1)x3 kernel and solved the offset problem by @bwpriest in #237
- added ShearNoise33 noise model that assumes that convergence variables have a noise prior 2x the other parameters. by @bwpriest in #238
- bulk documentation updates for v0.9 by @bwpriest in #240
- deprecated MuyGPyS.examples submodule and MultivariateMuyGPS classes. by @bwpriest in #241
- switched to a src-layout for source code by @bwpriest in #242
- fixed fast coefficient computation by @bwpriest in #243
New Contributors
- @gsallaberry made their first contribution in #208
Full Changelog: v0.8.2...v0.9.0
v0.8.2
v0.8.2 adds some robustness features to optimization. Changes in detail include:
- The `looph` loss function has a slightly different form, and the `boundary_scale` parameter now defaults to `3.0` and should not be set lower.
- The documentation for all loss functions is improved.
- There is a new `DownSampleScale` variance scale parameter whose optimization method uses the analytic form, but downsamples neighborhoods across several rounds and returns the median in an attempt to avoid influence from outliers in the training data (see the sketch after this list).
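The downsample-and-median idea is straightforward. The following numpy sketch illustrates the concept only; it is not the library's implementation, and assumes the usual analytic estimate that averages `y^T K^{-1} y / k` over a batch of neighborhoods:

```python
import numpy as np

def analytic_scale(Kin, nn_targets):
    # Kin: (batch, k, k) neighborhood kernel tensors
    # nn_targets: (batch, k) neighborhood training responses
    batch, k, _ = Kin.shape
    solves = np.linalg.solve(Kin, nn_targets[..., None])[..., 0]
    return np.einsum("bk,bk->", nn_targets, solves) / (batch * k)

def downsample_median_scale(Kin, nn_targets, down_count=100, iterations=20, seed=0):
    # repeat the analytic estimate on random subsets of neighborhoods and
    # return the median, dulling the influence of outlier neighborhoods
    rng = np.random.default_rng(seed)
    batch = Kin.shape[0]
    estimates = []
    for _ in range(iterations):
        idx = rng.choice(batch, size=min(down_count, batch), replace=False)
        estimates.append(analytic_scale(Kin[idx], nn_targets[idx]))
    return np.median(estimates)
```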
What's Changed
- corrected and streamlined some MuyGPs function documentation. [skip ci] by @bwpriest in #199
- added downsampling scale class for robustness. by @bwpriest in #200
- updated looph to new form and improved loss documentation. by @bwpriest in #201
- unitless looph implementation by @bwpriest in #202
- updated .readthedocs.yaml to meet new RTD requirements [skip ci] by @bwpriest in #203
- fixed some documentation errors for loss functions [skip ci] by @bwpriest in #204
- release 0.8.2 prep by @bwpriest in #205
Full Changelog: v0.8.1...v0.8.2
v0.8.1
v0.8.1 is a small update to v0.8.0 that adds two planned changes that got overlooked:
- The `nu` kwarg and member of `Matern` is now called `smoothness` (a snippet follows this list), and
- Ancillary codes like `performance/benchmark.py` are updated to use the new API.
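Concretely, the rename looks like the following. This is a minimal sketch assuming import paths as in the v0.8 tutorials:

```python
from MuyGPyS.gp.kernels import Matern
from MuyGPyS.gp.hyperparameter import Parameter

# v0.8.0 and earlier:  Matern(nu=Parameter(0.5), ...)
# v0.8.1 and later:
kernel = Matern(smoothness=Parameter(0.5))
```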
What's Changed
- update extraneous bits with new API by @bwpriest in #197
- replaced "nu" with "smoothness" throughout by @bwpriest in #198
Full Changelog: v0.8.0...v0.8.1
v0.8.0
v0.8.0 introduces significant changes to the library's namespaces and to the optimization API. It also streamlines the library's optimization workflows and makes the library largely ifless. The primary changes felt by users will be the following:
- The API now accepts no `"*_method"` string arguments (other than those to `NN_Wrapper`). Instead, one should import and use loss and optimization functions directly. See the univariate regression tutorial and the optimization docs for details.
- Several keyword arguments, member objects, and functor classes are renamed. References to Greek character names deriving from equations have been dropped in favor of interpretable English words. Some examples follow (see also the sketch after this list):
  - The `eps` kwarg and member of `MuyGPS` is now called `noise`.
  - The `sigma_sq` kwarg and member is now called `scale`, as in the variance scale parameter, and the `SigmaSq` class has been replaced: `FixedScale` is insensitive to optimization, while `AnalyticScale` contains the analytic optimization internally.
  - `DistortionFn` is replaced by the more precise `DeformationFn`, and is contained in the `MuyGPS` kwarg and member `deformation` instead of `distortion_fn`. The usable functors are renamed `Isotropy` and `Anisotropy` for brevity.
  - `ScalarHyperparameter` now has the simpler alias `Parameter`.
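Put together, model construction under the renamed API looks roughly like the following. This is a hedged sketch assuming import paths as in the v0.8 univariate regression tutorial; note that `nu` is further renamed to `smoothness` in v0.8.1:

```python
from MuyGPyS.gp import MuyGPS
from MuyGPyS.gp.deformation import Isotropy, l2
from MuyGPyS.gp.hyperparameter import AnalyticScale, Parameter
from MuyGPyS.gp.kernels import Matern
from MuyGPyS.gp.noise import HomoscedasticNoise

# eps -> noise, sigma_sq -> scale, distortion_fn -> deformation
muygps = MuyGPS(
    kernel=Matern(
        nu=Parameter("log_sample", (0.1, 2.0)),  # optimizable smoothness
        deformation=Isotropy(l2, length_scale=Parameter(1.0)),
    ),
    noise=HomoscedasticNoise(1e-5),
    scale=AnalyticScale(),
)
```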
There are other changes in this update that are less obvious to users:
- The whole optimization workflow has been made ifless. Optimization choices are now purely functions of the classes involved and their member functors.
- Loss functions are now all objects of the `LossFn` class, and similarly outer-loop optimization functions are objects of the `OptimizeFn` class (a usage sketch follows this list).
  - `OptimizeFn` takes an objective function maker function in its constructor, which makes it easier to define and incorporate new objective functions.
  - As a consequence, it is now much easier to add alternative loss or objective functions, or to wrap different optimization libraries entirely.
  - It will also be easier to add different ways to optimize the variance scale parameter.
- There are no more "toss-catch"-style optimization function preparations. The primary functions held by `MuyGPS.kernel`, `MuyGPS.posterior_mean`, etc., are now suitable for optimization and are all created by `MuyGPS._make()`. If a user for some reason changes a parameter value directly, it is necessary to run `MuyGPS._make()` to update the downstream functors.
- Backend-sensitive classes have `_backend_fn`-type kwargs in their constructors that allow a user to override the default backend specified by the `MUYGPYS_BACKEND` environment variable. This makes testing the backends against one another simpler. Users should most likely ignore these kwargs, as it is unclear if they have other uses.
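For example, invoking one of the packaged `OptimizeFn` objects with one of the packaged `LossFn` objects looks roughly like the following. The tensor arguments are assumed to come from the usual batch and tensor construction steps of the univariate regression tutorial:

```python
from MuyGPyS.optimize import Bayes_optimize
from MuyGPyS.optimize.loss import lool_fn

# Bayes_optimize is an OptimizeFn object and lool_fn is a LossFn object;
# batch_targets, batch_nn_targets, crosswise_diffs, and pairwise_diffs
# are assumed to be precomputed batch tensors.
muygps_trained = Bayes_optimize(
    muygps,
    batch_targets,
    batch_nn_targets,
    crosswise_diffs,
    pairwise_diffs,
    loss_fn=lool_fn,
    verbose=False,
)
```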
What's Changed
- Feature/ifless loss by @bwpriest in #190
- improved loss abstraction by @bwpriest in #191
- made noise perturbation logic ifless by @bwpriest in #192
- made sigma_sq logic ifless by @bwpriest in #193
- streamlined and made DistortionFns ifless by @bwpriest in #194
- made outer-loop optimization ifless by @bwpriest in #195
- divested namespace of notation in favor of English by @bwpriest in #196
Full Changelog: v0.7.2...v0.8.0
v0.7.2
v0.7.1
v0.7.1 fixes some of the loss documentation from v0.7.0 and includes a new experimental kernel. Changes in detail include:
- Improvements to the docstrings for all loss functions.
- Corrections to the docstring and tutorial for the `looph` loss in particular.
- Added `MuyGPyS.gp.kernel.experimental.ShearKernel` and `experimental/shear_kernel.ipynb`.
v0.7.0
v0.7.0 introduces major changes to the API and several new model features, as well as a major overhaul of the library's backend. Changes in detail include:
- The API now reduces the number of string arguments, and instead largely expects objects directly. See the univariate regression tutorial for details.
- Added explicit distance models, currently supporting anisotropic and isotropic options. See the anisotropic tutorial for details of the anisotropic distance model (and the conceptual sketch after this list).
- Added several new loss functions. See the new loss function tutorial for details.
- Added several experimental features that are not fully hardened:
- Heteroscedastic noise model, where each training observation has a separate Gaussian noise prior.
- Hierarchical nonstationary parameters, where a scalar hyperparameter has several (possibly optimized) knot values throughout the domain and interpolates values at new locations using a lower-level GP.
- An alternative optimization workflow that reconstructs neighborhoods during optimization, meant to handle cases like anisotropy where neighborhoods change during optimization.
- Added experimental notebooks investigating several of these experimental features.
- Made major changes to the library's backend. Control flow is increasingly functional. Member functions such as `MuyGPS.posterior_mean()` are composed at object creation depending on different model choices. The specific way in which this is done is subject to change.
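The anisotropic distance model amounts to giving each feature dimension its own (possibly optimized) length scale. The numpy sketch below illustrates the concept only; it does not use the library's v0.7 classes:

```python
import numpy as np

def isotropic_l2(x, y, length_scale):
    # a single length scale shared by every feature dimension
    return np.linalg.norm((x - y) / length_scale)

def anisotropic_l2(x, y, length_scales):
    # one length scale per feature dimension, stretching or shrinking
    # each axis of the feature space independently
    return np.linalg.norm((x - y) / np.asarray(length_scales))

x = np.array([1.0, 2.0])
y = np.array([2.0, 4.0])
print(isotropic_l2(x, y, 1.0))           # sqrt(1 + 4) ~= 2.236
print(anisotropic_l2(x, y, [1.0, 2.0]))  # sqrt(1 + 1) ~= 1.414
```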
What's Changed
- Refactoring math to remove global numpy dependency by @bwpriest in #95
- Housekeeping to tidy up the import interface by @bwpriest in #97
- All optimization functions now rely on kwargs form by @bwpriest in #98
- removed all `*_from_indices` functions from the library (less MuyGPyS.examples) by @bwpriest in #100
- Removing `MuyGPS.regress` in favor of separate mean and variance functions by @bwpriest in #102
- Making all member functions modular pure functions by @bwpriest in #104
- coalesced noise model application logic to single function by @bwpriest in #107
- Feature/heteroscedasticity by @alecmdunton in #109
- Exposing object creation to the user by @bwpriest in #118
- Refactored code so length scale lives in distortion models by @alecmdunton in #122
- Fixed bug that was fixing length_scale in optimizations by @bwpriest in #128
- Reorganize hyperparameters and init hierarchical nonstationary hyperparameter by @igoumiri in #129
- Anisotropic feature integrated into library. Tests added to gp.py and kernel.py by @alecmdunton in #127
- Added performance benchmarking script by @bwpriest in #135
- Updated format throughout to match PEP by @bwpriest in #133
- fixed tests/optimize.py to actually work by @bwpriest in #139
- removed jax cuda extras; user must now install manually by @bwpriest in #140
- Added pretty print overloads for MuyGPS classes by @bwpriest in #142
- Iss/141 by @alecmdunton in #143
- All torch tests passing by @alecmdunton in #144
- removed hardcoded float commands in examples/muygps_torch by @alecmdunton in #146
- Streamlined doc notebooks to obscure sampling and plotting code by @bwpriest in #148
- Refactored distortion class to take metric Callable as argument by @alecmdunton in #147
- Fix develop tests being skipped by @igoumiri in #151
- Fixed test harness errors introduced by merge. Updated some documentation. by @bwpriest in #152
- Feature: hierarchical RBF by @igoumiri in #145
- Added opt parameter indirection in preparation for hierarchical parameters by @bwpriest in #153
- Optimization for hierarchical GPs by @igoumiri in #154
- Fix knot optimization by @igoumiri in #156
- pulled boilerplate functions out of optimization pipelines by @bwpriest in #157
- minor nonstationary notebook cleanup [skip ci]. by @bwpriest in #159
- removed regress api tutorial [skip ci] by @bwpriest in #160
- [skip ci] refactored nb names and added a flat optimization for sanit… by @bwpriest in #161
- Adding new pseudo Huber loss function for outlier robustness. by @bwpriest in #164
- Added a variance-regularized pseudo-Huber loss function similar in fo… by @bwpriest in #166
- fixed bug to actually forward the looph function by @bwpriest in #167
- first implementation commit by @akilandrews in #165
- documentation nb updates/cleanup by @bwpriest in #169
- Update .readthedocs.yaml to supported python version [skip ci] by @bwpriest in #170
- Roll back required ipython versions in setup.py [skip ci] by @bwpriest in #171
- fixed RTD builds by @bwpriest in #172
- precomputing torch tutorial as it does not seem possible to run on RT… by @bwpriest in #173
- anisotropic tutorial and nb cleanup by @bwpriest in #175
- Added loss function tutorial. Fixed some mistakes in the documentatio… by @bwpriest in #176
- moved UnivariateSampler* into MuyGPyS._test.sampler for convenience by @bwpriest in #177
- moved mini batch tests to their own file by @bwpriest in #178
- Passing loss functions instead of strings. Unhooked experimental opti… by @bwpriest in #180
- improvements to samplers and tutorials by @bwpriest in #182
- reduced training ratio for anisotropic tutorial by @bwpriest in #183
- Optimization loop chassis by @akilandrews in #181
- partial fix to torch parameter optimization by @bwpriest in #184
- final updates for v0.7.0 by @bwpriest in #185
New Contributors
- @akilandrews made their first contribution in #165
Full Changelog: v0.6.6...v0.7.0
v0.6.6
v0.6.6 introduces a new interface for handling backend implementations of the math functions, as well as a new PyTorch backend and a bespoke PyTorch MuyGPs layer for use in deep kernel models built in PyTorch. The new torch tutorial explains how to use this new feature step-by-step. Changes in detail include:
- The backend configuration is now controlled by the `MUYGPYS_BACKEND` environment variable, which can accept the values `"numpy"`, `"jax"`, `"mpi"`, and `"torch"` (a sketch follows this list). A future release will support fusing the accelerator backends with MPI. It is also possible to select the backend at runtime by manipulating the `MuyGPyS.config` object. See the README for details.
- Added a PyTorch backend for all math functions.
- Added a `MuyGPyS.torch` module that supports creating a MuyGPs layer that can fit into a PyTorch deep neural network architecture and be optimized using a normal PyTorch workflow. `MuyGPyS.torch.muygps_layer.MuyGPs_layer` only supports the Matérn kernel as of this release. It also only supports a fixed set of values of the smoothness parameter `nu in [0.5, 1.5, 2.5, $\infty$]`, because `torch` does not presently support the modified Bessel function of the second kind. The class will be modularized to support other kernels as they are added to the library.
- Added a tutorial to the docs explaining the use of `MuyGPyS.torch`.
- Refactored the noise prior so that the homoscedastic noise prior is explicitly modularized. Although the noise model behavior is unchanged, this will afford the easy addition of other noise models in future releases.
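Since the backend is controlled by an environment variable, the simplest selection route is to set it before the library is first imported; a minimal sketch, assuming the variable is read at import time:

```python
import os

# Must be set before MuyGPyS is first imported; valid values are
# "numpy", "jax", "mpi", and "torch".
os.environ["MUYGPYS_BACKEND"] = "jax"

import MuyGPyS  # math functions now dispatch to the JAX implementations
```

Runtime selection through the `MuyGPyS.config` object is also possible, as noted above; see the README for the exact mechanism.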
v0.6.5
v0.6.4
v0.6.4 introduces the fast mean prediction features described in [Dunton2022]. Notably, this fast mean prediction workflow is only supported in shared memory (numpy and JAX backends). Attempts to access the new functions while in MPI mode will raise `NotImplementedError`s. Changes in detail include:
- Added functions for creating the precomputed coefficient tensor for the fast regression feature (a conceptual sketch follows below)
- Added `fast_regress()` functions to `MuyGPyS.gp.muygps.MuyGPS` and `MuyGPyS.gp.muygps.MultivariateMuyGPS`
- Added a high-level workflow for implementing fast regression in `MuyGPyS.examples.fast_regress`
- Added a new documentation notebook explaining the fast regression workflow
- Added some performance improvements for the MPI backend
[Dunton2022] Dunton, Alec M., Benjamin W. Priest, and Amanda Muyskens. “Fast Gaussian Process Posterior Mean Prediction via Local Cross Validation and Precomputation.” arXiv preprint arXiv:2205.10879 (2022).
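The precomputation trick at the heart of [Dunton2022] is that, for each training neighborhood, the linear solve against the neighborhood kernel matrix can be cached offline, reducing a posterior mean prediction to a single dot product. The numpy sketch below illustrates the idea only; it is not the library's `fast_regress()` API:

```python
import numpy as np

# Offline: cache one coefficient vector per training neighborhood.
# Kin: (train_count, k, k) neighborhood kernel tensors,
# nn_targets: (train_count, k) neighborhood training responses.
def precompute_coefficients(Kin, nn_targets):
    return np.linalg.solve(Kin, nn_targets[..., None])[..., 0]

# Online: map the test point to the neighborhood i of its nearest
# training point, build only the cross-covariance vector kcross against
# that neighborhood, and dot -- no linear solve at prediction time.
def fast_mean(coeffs, kcross, i):
    return kcross @ coeffs[i]
```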