Efficient learning

This repository contains the code for our online learning method based on optimizer-driven sampling.
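As a rough illustration of the idea (a minimal sketch, not the repository's actual workflow, which lives in main_workflow.py), points visited by a mystic optimizer can be logged with a Monitor and used to fit a surrogate; the quadratic objective and the RBF surrogate below are placeholder assumptions:

  import numpy as np
  from scipy.interpolate import RBFInterpolator
  from mystic.solvers import fmin
  from mystic.monitors import Monitor

  def objective(x):
      # placeholder for the expensive model being learned
      return sum(xi**2 for xi in x)

  evalmon = Monitor()   # records every objective evaluation
  fmin(objective, [2.0, -1.5], evalmon=evalmon, disp=False)

  X = np.array(evalmon.x)            # points the optimizer visited
  y = np.array(evalmon.y)            # objective values at those points
  surrogate = RBFInterpolator(X, y)  # surrogate trained on optimizer samples
  print(surrogate(np.array([[0.1, 0.2]])))  # evaluate it at a new point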

To run the scripts that reproduce Table I and Table II, first install mystic (a framework for highly-constrained non-convex optimization and uncertainty quantification) and its requirements: https://github.com/uqfoundation/mystic

Adjustable settings are in _model.py and are preset to the values used in the manuscript. Results may differ slightly due to randomness in the sampler and optimizer.

Execute with:

  $ python main_workflow.py

and may take anywhere from a few minutes to a few days, depending on the model, the tolerance used, and randomness. Relevant information is printed to stdout and is also dumped to several files:

  • cost.pkl: the objective function
  • hist.pkl: history of the testing misfit
  • size.pkl: history of the endpoint cache size
  • score.txt: convergence of the test score versus model evaluations
  • stop: a database of optimizer endpoints
  • func.db: a database of the learned surrogates
  • eval: a database of objective evaluations

Any of the "pkl" files can be read like this:

 >>> import dill
 >>> cost = dill.load(open('cost.pkl', 'rb'))
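The databases ("stop", "func.db", and "eval") come from mystic's caching layer. Assuming they are klepto archives (klepto is mystic's usual storage backend; the exact archive type used here is an inference, not stated above), they can be inspected like this:

 >>> from klepto.archives import file_archive
 >>> arch = file_archive('func.db')   # or dir_archive, if the workflow used one
 >>> arch.load()                      # pull stored entries from disk into memory
 >>> list(arch.keys())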

Relevant results from the databases are plotted with:

  $ python plot_func.py

and test score convergence is plotted with:

  $ python plot_*_converge.py    (* = loose, tight)

where "loose" corresponds to the loose tolerance and "tight" to the strict tolerance.
