AutoDiscJax v0.3
New modules:
- ClampModule: new util module that clamps module outputs between low and high bounds
- UniformGoalGenerator: samples goals uniformly between a low and a high value
- RandomInterventionSelector: randomly selects an intervention from the history of prior interventions
- NullRolloutStatisticsEncoder: lets users skip computing statistics on the system outputs
- PushPerturbationGenerator: new module that generates push-perturbation parameters (split out from the NoisePerturbationGenerator)
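For intuition, the ideas behind ClampModule and UniformGoalGenerator can be sketched as below. This is an illustrative NumPy sketch, not the library's actual API; the function names `clamp` and `sample_uniform_goals` are hypothetical.

```python
import numpy as np

def clamp(outputs, low, high):
    # Clamp module outputs element-wise to [low, high],
    # as a ClampModule-style wrapper would do.
    return np.clip(outputs, low, high)

def sample_uniform_goals(rng, batch_size, low, high):
    # Sample a batch of goals uniformly between low and high,
    # mirroring the idea behind UniformGoalGenerator.
    low, high = np.asarray(low), np.asarray(high)
    return rng.uniform(low, high, size=(batch_size,) + low.shape)

rng = np.random.default_rng(0)
low, high = np.array([0.0, -1.0]), np.array([1.0, 1.0])
goals = sample_uniform_goals(rng, 4, low, high)
clamped = clamp(goals * 3.0, low, high)  # out-of-range values get clipped back
```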
Updated modules:
- Module: each autodiscjax module now returns a log_data object
- BaseGenerator and BaseOptimizer now inherit from the ClampModule module
- Big refactoring of the optimizers, BaseGCInterventionOptimizer, and LearningProgressIM modules to allow saving local optimization runs in imgep_experiment_pipeline
- EAOptimizer: now has a noise scheduler
- GRNRollout: removed the vmap of grn_step (no longer needed)
- GRNRolloutStatisticsEncoder: added the is_converging statistic
- WallPerturbationGenerator:
  - takes a sigma parameter to tune the perpendicular vs. radial repulsive forces exerted by the wall
  - positions the wall based on distance travelled (instead of time travelled)
- NoisePerturbationGenerator: added the possibility to generate noise for the w and c parameters
- L2GoalAchievementLoss: added a sqrt (previously the squared L2 was used) and adaptive scaling by the current extent of the reached goal space
- NearestNeighborInterventionSelector: added adaptive scaling by the current extent of the reached goal space
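To make the L2GoalAchievementLoss change concrete, here is a minimal sketch of a root L2 loss with per-dimension scaling by the extent of the reached goal space. This is hypothetical illustration code, not the library's implementation; `l2_goal_achievement_loss` and its signature are assumptions.

```python
import numpy as np

def l2_goal_achievement_loss(reached_goal, target_goal, reached_goals_history):
    # Per-dimension extent of the goal space reached so far; used to
    # rescale each dimension so no single dimension dominates the distance.
    extent = reached_goals_history.max(axis=0) - reached_goals_history.min(axis=0)
    extent = np.where(extent > 0, extent, 1.0)  # avoid division by zero
    diff = (reached_goal - target_goal) / extent
    # sqrt of the summed squares, i.e. a true L2 distance
    # (the v0.3 change: previously the squared L2 was returned)
    return np.sqrt(np.sum(diff ** 2))

history = np.array([[0.0, 0.0], [2.0, 4.0]])  # two previously reached goals
loss = l2_goal_achievement_loss(np.array([2.0, 4.0]), np.array([0.0, 0.0]), history)
# extent per dimension is [2, 4]; the scaled diff is [1, 1], so loss = sqrt(2)
```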
New util functions:
- create_modules.py: a set of util functions to instantiate autodiscjax modules from a given config
- append_to_log: util function to append a module's log_data outputs to the log
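The append_to_log helper can be pictured as follows: each module returns a log_data mapping, and its entries are appended to per-key lists in a running log. This is a hypothetical sketch, not the actual implementation.

```python
def append_to_log(log, log_data):
    # Append each entry of a module's log_data dict to the running
    # per-key lists in the global log (creating keys as needed).
    for key, value in log_data.items():
        log.setdefault(key, []).append(value)
    return log

log = {}
append_to_log(log, {"loss": 0.5})
append_to_log(log, {"loss": 0.25, "goal": [1.0, 2.0]})
# log == {"loss": [0.5, 0.25], "goal": [[1.0, 2.0]]}
```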
Updated util functions:
- is_converging: now checks whether the signal amplitude has decreased (compared to a prior phase of the signal)
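A rough sketch of the updated is_converging idea: compare the signal's amplitude in a late phase against an earlier phase and flag convergence when it has shrunk enough. The code below is hypothetical (phase boundaries and the 0.5 threshold are assumptions), not the library's implementation.

```python
import numpy as np

def is_converging(signal, ratio_threshold=0.5):
    # Split the signal into an earlier phase and a final phase and
    # compare their amplitudes (max - min): if the amplitude has
    # shrunk enough, the signal is considered converging.
    n = len(signal)
    early = signal[n // 3: 2 * n // 3]
    late = signal[2 * n // 3:]
    early_amp = early.max() - early.min()
    late_amp = late.max() - late.min()
    return late_amp <= ratio_threshold * max(early_amp, 1e-8)

t = np.linspace(0.0, 30.0, 300)
damped = np.exp(-0.15 * t) * np.sin(t)  # decaying oscillation
steady = np.sin(t)                      # constant-amplitude oscillation
```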
Bug fixes:
- HypercubeGoalGenerator: fixed the hypercube center calculation
- GRNRolloutStatisticsEncoder: statistics were mixed up; fixed
API changes:
- Big refactoring of the optimizers, BaseGCInterventionOptimizer, and LearningProgressIM modules to allow saving local optimization runs in imgep_experiment_pipeline
- imgep_experiment_pipeline: now saves local optimization runs in the history
- rs_experiment_pipeline: added a random-search experiment pipeline
- imgep_evaluation_pipeline was renamed to robustness_evaluation_pipeline (and run_imgep_evaluation to run_robustness_tests), and was modified with:
  - vmap vectorization of the modules
  - logging
  - the perturbation generator now takes the raw system outputs as input (instead of just ys, as this is more general)
- jit decorators were removed/added where needed to optimize compute time
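For intuition about the new rs_experiment_pipeline, a random-search loop can be sketched as below. All names here (`run_random_search` and the callables passed to it) are hypothetical; the actual pipeline additionally handles vectorization, logging, and saving.

```python
import numpy as np

def run_random_search(system_rollout, sample_random_intervention,
                      statistics_encoder, n_trials, rng):
    # Minimal random-search loop: sample a random intervention, roll out
    # the system, encode statistics of the outputs, and keep a history.
    history = []
    for _ in range(n_trials):
        intervention = sample_random_intervention(rng)
        outputs = system_rollout(intervention)
        stats = statistics_encoder(outputs)
        history.append({"intervention": intervention,
                        "outputs": outputs,
                        "stats": stats})
    return history
```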
Tests: renamed, updated, and new tests covering the different modules

Examples: updated with the new modules/pipelines
TODOs for next release:
- docs
- pip package
- SGDOptimizer: deal with NaN values
- IMFlowGoalGenerator bug fixes:
  - saving IM_vals into the logs can crash the code
  - it can return NaN values