MLJFlux v0.6.0
All models, except `ImageClassifier`, now support categorical features (presented as table columns with a `CategoricalVector` type). Rather than being one-hot encoded, categorical features are mapped into a learned continuous space (i.e., by adding an embedding layer), and the dimensions of these spaces can be specified by the user using a new dictionary-valued hyperparameter, `embedding_dims`. The learned embeddings are exposed by a new implementation of `transform`, which means they can be re-used by other models (transfer learning), as described in Cheng Guo and Felix Berkhahn (2016): *Entity Embeddings of Categorical Variables*.
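Assuming the standard MLJ workflow, usage might look like the following sketch (the table `X`, target `y`, column name `:color`, and embedding size are all illustrative, not from this release):

```julia
using MLJ  # provides machine, fit!, transform

# Load the model type from MLJFlux (assumes MLJFlux is installed):
NeuralNetworkClassifier = @load NeuralNetworkClassifier pkg=MLJFlux

# `X` is a table with a categorical column `:color`; `y` is the target.
# Request a 2-dimensional embedding space for `:color`:
clf = NeuralNetworkClassifier(embedding_dims=Dict(:color => 2))

mach = machine(clf, X, y)
fit!(mach)

# New in v0.6.0: `transform` returns the input with categorical
# columns replaced by their learned continuous embeddings, ready
# for use with other models (transfer learning):
X_embedded = transform(mach, X)
```

Columns not listed in `embedding_dims` are handled with default dimensions, so only the features you want to control need an entry.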
Also, all continuous input presented to these models is now forced to be `Float32`, but this is the only breaking change.
Merged pull requests:
- Update docs (#265) (@ablaom)
- Introduce EntityEmbeddings (#267) (@EssamWisam)
- Fix `l2` loss in `MultitargetNeuralNetworkRegressor` docstring (#270) (@ablaom)
- Automatically convert input matrix to `Float32` (#272) (@tiemvanderdeure)
- Force `Float32` as type presented to Flux chains (#276) (@ablaom)
- For a 0.6.0 release (#277) (@ablaom)
Closed issues: