Releases: FluxML/MLJFlux.jl
v0.6.4
MLJFlux v0.6.4
Merged pull requests:
- Fix mlj_embedder_interface.jl (#299) (@EssamWisam)
- For a 0.6.4 release (#300) (@ablaom)
v0.6.3
MLJFlux v0.6.3
- Extend compatibility: `Flux = "0.14, 0.15"`
Merged pull requests:
- CompatHelper: bump compat for Flux to 0.16, (keep existing compat) (#290) (@github-actions[bot])
- CompatHelper: bump compat for Optimisers to 0.4, (keep existing compat) (#291) (@github-actions[bot])
- CompatHelper: bump compat for ColorTypes to 0.12, (keep existing compat) (#292) (@github-actions[bot])
- For a 0.6.3 release (extension of compat bounds only) (#293) (@ablaom)
- Revert [compat] Flux = "0.14" (#294) (@ablaom)
- Extend Flux [compat] to Flux = "0.14, 0.15" (#296) (@ablaom)
v0.6.2
MLJFlux v0.6.2
Merged pull requests:
- fixes to metadata of entityembedder (#288) (@tiemvanderdeure)
- For a 0.6.2 release (#289) (@ablaom)
v0.6.1
MLJFlux v0.6.1
- Add model wrapper `EntityEmbedder(model)` to transform supervised MLJFlux models into entity embedding transformers (#286)
- Make some performance improvements around unwrapping of `CategoricalArray`s (#281)
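The wrapper behaves like any MLJ transformer. A minimal sketch (the column names and data below are made up for illustration; assumes MLJ and MLJFlux ≥ 0.6.1 are installed):

```julia
using MLJ, MLJFlux

# Toy data: one categorical and one continuous feature (hypothetical)
X = (color  = coerce(["red", "blue", "red", "green"], Multiclass),
     height = Float32[1.0, 2.0, 3.0, 4.0])
y = coerce(["big", "small", "big", "small"], Multiclass)

# Wrap any supervised MLJFlux model that learns entity embeddings:
clf = NeuralNetworkClassifier(epochs=5)
embedder = EntityEmbedder(clf)

mach = machine(embedder, X, y)
fit!(mach)

# `transform` returns the table with categorical columns replaced by
# their learned continuous embeddings:
Xcontinuous = transform(mach, X)
```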
Merged pull requests:
- ⭐️ Add entity embeddings workflow example (#278) (@EssamWisam)
- performance improvements (#281) (@tiemvanderdeure)
- Fix failing test caused by new Julia release (#284) (@OkonSamuel)
- For a 0.6.1 release (#285) (@ablaom)
- ⭐️ Entity embedder interface is here (#286) (@EssamWisam)
- Rebase entity-tutorial (#287) (@EssamWisam)
v0.6.0
MLJFlux v0.6.0
All models, except `ImageClassifier`, now support categorical features (presented as table columns with a `CategoricalVector` type). Rather than one-hot encoding, embeddings into a continuous space are learned (i.e., by adding an embedding layer), and the dimensions of these spaces can be specified by the user, using a new dictionary-valued hyperparameter, `embedding_dims`. The learned embeddings are exposed by a new implementation of `transform`, which means they can be used with other models (transfer learning), as described in Cheng Guo and Felix Berkhahn (2016): "Entity Embeddings of Categorical Variables".
Also, all continuous input presented to these models is now forced to be `Float32`, but this is the only breaking change.
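For example, to request a two-dimensional embedding space for a hypothetical categorical column `:color` (a sketch, assuming MLJ and MLJFlux ≥ 0.6.0; the data and hyper-parameter values are illustrative):

```julia
using MLJ, MLJFlux

X = (color  = coerce(["red", "blue", "red", "green"], Multiclass),
     height = Float32[1.0, 2.0, 3.0, 4.0])
y = coerce(["big", "small", "big", "small"], Multiclass)

# Request a 2-dimensional embedding space for the :color feature:
clf = NeuralNetworkClassifier(epochs=5, embedding_dims=Dict(:color => 2))

mach = machine(clf, X, y)
fit!(mach)

# The new `transform` exposes the learned embeddings, so the output
# can be fed to other (continuous-input) models for transfer learning:
Xembedded = transform(mach, X)
```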
Merged pull requests:
- Update docs (#265) (@ablaom)
- Introduce EntityEmbeddings (#267) (@EssamWisam)
- Fix `l2` loss in `MultitargetNeuralNetworkRegressor` docstring (#270) (@ablaom)
- Automatically convert input matrix to `Float32` (#272) (@tiemvanderdeure)
- Force `Float32` as type presented to Flux chains (#276) (@ablaom)
- For a 0.6.0 release (#277) (@ablaom)
v0.5.1
v0.5.0
MLJFlux v0.5.0
- (new model) Add `NeuralNetworkBinaryClassifier`, an optimised form of `NeuralNetworkClassifier` for the special case of two target classes. Use `Flux.σ` instead of `softmax` for the default finaliser (#248)
- (internals) Switch from implicit to explicit differentiation (#251)
- (breaking) Use optimisers from Optimisers.jl instead of Flux.jl (#251). Note that the new optimisers are immutable.
- (RNG changes.) Change the default value of the model field `rng` from `Random.GLOBAL_RNG` to `Random.default_rng()`. Change the seeded RNG, obtained by specifying an integer value for `rng`, from `MersenneTwister` to `Xoshiro` (#251)
- (RNG changes.) Update the `Short` builder so that the `rng` argument of `build(::Short, rng, ...)` is passed on to the `Dropout` layer, as these layers now support this on a GPU, at least for `rng=Random.default_rng()` (#251)
- (weakly breaking) Change the implementation of L1/L2 regularization from explicit loss penalization to weight/sign decay (internally chained with the user-specified optimiser). The only breakage for users is that the losses reported in the history will no longer be penalized, because the penalty is not explicitly computed (#251)
Merged pull requests:
- Fix metalhead breakage (#250) (@ablaom)
- Omnibus PR, including switch to explicit style differentiation (#251) (@ablaom)
- 🚀 Instate documentation for MLJFlux (#252) (@EssamWisam)
- Update examples/MNIST Manifest, including Julia 1.10 (#254) (@ablaom)
- ✨ Add 7 workflow examples for MLJFlux (#256) (@EssamWisam)
- Add binary classifier (#257) (@ablaom)
- For a 0.5.0 release (#259) (@ablaom)
- Add check that Flux optimiser is not being used (#260) (@ablaom)
v0.4.0
MLJFlux v0.4.0
Merged pull requests:
- Bump Metalhead to 0.9 - making CUDA optional (#240) (@mohamed82008)
- For a 0.4.0 release (#243) (@ablaom)
v0.3.1
v0.3.0
MLJFlux v0.3.0
Merged pull requests: