MLJFlux v0.5.0
- (new model) Add `NeuralNetworkBinaryClassifier`, an optimised form of `NeuralNetworkClassifier` for the special case of two target classes. Use `Flux.σ` instead of `softmax` for the default finaliser (#248). See the first sketch after this list.
- (internals) Switch from implicit to explicit differentiation (#251)
- (breaking) Use optimisers from Optimisers.jl instead of Flux.jl (#251). Note that the new optimisers are immutable.
- (RNG changes.) Change the default value of the model field `rng` from `Random.GLOBAL_RNG` to `Random.default_rng()`. Change the seeded RNG, obtained by specifying an integer value for `rng`, from `MersenneTwister` to `Xoshiro` (#251)
- (RNG changes.) Update the `Short` builder so that the `rng` argument of `build(::Short, rng, ...)` is passed on to the `Dropout` layer, as these layers now support this on a GPU, at least for `rng=Random.default_rng()` (#251)
- (weakly breaking) Change the implementation of L1/L2 regularization from explicit loss penalization to weight/sign decay (internally chained with the user-specified optimiser). The only breakage for users is that the losses reported in the history will no longer be penalized, because the penalty is not explicitly computed (#251). See the second sketch after this list.
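The first sketch below pulls the user-facing changes together: it builds the new `NeuralNetworkBinaryClassifier` with an Optimisers.jl rule and an integer `rng` seed (now backed by `Xoshiro`). The `make_moons` helper and the particular hyperparameter values are illustrative assumptions, not part of this release.

```julia
using MLJ            # machine, fit!, predict, make_moons, ...
import MLJFlux       # model types and builders
import Optimisers    # optimiser rules now come from Optimisers.jl, not Flux.jl

# Small synthetic two-class problem (illustrative only).
X, y = make_moons(200)

clf = MLJFlux.NeuralNetworkBinaryClassifier(
    builder   = MLJFlux.Short(n_hidden = 16, dropout = 0.2),
    optimiser = Optimisers.Adam(0.001),  # immutable Optimisers.jl rule; Flux.jl optimisers are no longer accepted (#260)
    rng       = 123,                     # integer seed now gives a Xoshiro RNG
    epochs    = 20,
)

mach = machine(clf, X, y) |> fit!
predict(mach, X)   # probabilistic predictions; the default finaliser is Flux.σ rather than softmax
```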
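A second sketch for the regularization change: `lambda` and `alpha` are specified on the model as before, but the penalty is now applied as weight/sign decay chained with the optimiser, so the per-epoch losses exposed after training are the raw, unpenalized losses. The `make_regression` helper, the hyperparameter values, and the assumption that the report exposes `training_losses` are illustrative; consult the MLJFlux documentation for the exact report fields.

```julia
using MLJ
import MLJFlux, Optimisers

# Small synthetic regression problem (illustrative only).
X, y = make_regression(100, 3)

rgs = MLJFlux.NeuralNetworkRegressor(
    optimiser = Optimisers.Adam(0.01),
    lambda    = 0.01,   # overall regularization strength
    alpha     = 0.5,    # L1/L2 mix: sign decay vs weight decay
    epochs    = 10,
)

mach = machine(rgs, X, y) |> fit!

# These are now the *unpenalized* training losses: the L1/L2 penalty is folded
# into the optimiser as decay and is never added to the reported loss.
report(mach).training_losses
```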
Merged pull requests:
- Fix metalhead breakage (#250) (@ablaom)
- Omnibus PR, including switch to explicit style differentiation (#251) (@ablaom)
- 🚀 Instate documentation for MLJFlux (#252) (@EssamWisam)
- Update examples/MNIST Manifest, including Julia 1.10 (#254) (@ablaom)
- ✨ Add 7 workflow examples for MLJFlux (#256) (@EssamWisam)
- Add binary classifier (#257) (@ablaom)
- For a 0.5.0 release (#259) (@ablaom)
- Add check that Flux optimiser is not being used (#260) (@ablaom)
Closed issues: