
Commit

🚑 No need to load standardizer
EssamWisam committed Jun 7, 2024
1 parent d737f71 commit 801bc18
Showing 3 changed files with 17 additions and 29 deletions.
40 changes: 14 additions & 26 deletions docs/src/workflow examples/Composition/composition.ipynb
@@ -30,15 +30,7 @@
"metadata": {}
},
{
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[ Info: Precompiling Imbalance [c709b415-507b-45b7-9a3d-1767c89fde68]\n"
]
}
],
"outputs": [],
"cell_type": "code",
"source": [
"using MLJ # Has MLJFlux models\n",
@@ -118,9 +110,7 @@
"output_type": "stream",
"text": [
"[ Info: For silent loading, specify `verbosity=0`. \n",
"import MLJFlux ✔\n",
"[ Info: For silent loading, specify `verbosity=0`. \n",
"import MLJModels ✔\n"
"import MLJFlux ✔\n"
]
},
{
@@ -136,7 +126,7 @@
"source": [
"BorderlineSMOTE1 = @load BorderlineSMOTE1 pkg=Imbalance verbosity=0\n",
"NeuralNetworkClassifier = @load NeuralNetworkClassifier pkg=MLJFlux\n",
"Standardizer = @load Standardizer pkg=MLJModels\n",
"# We didn't need to load Standardizer because it is a local model for MLJ (see `localmodels()`)\n",
"\n",
"clf = NeuralNetworkClassifier(\n",
" builder=MLJFlux.MLP(; hidden=(5,4), σ=Flux.relu),\n",
@@ -215,7 +205,7 @@
"cell_type": "markdown",
"source": [
"### Training the Composed Model\n",
"It's indistinguishable from training a single model. Isn't MLJ beautiful?"
"It's indistinguishable from training a single model."
],
"metadata": {}
},
@@ -231,40 +221,38 @@
"[ Info: Training machine(BorderlineSMOTE1(m = 5, …), …).\n",
"[ Info: Training machine(:model, …).\n",
"[ Info: After filtering, the mapping from each class to number of borderline points is (\"virginica\" => 1, \"versicolor\" => 2).\n",
"\rProgress: 13%|█████▌ | ETA: 0:00:01\u001b[K\rProgress: 100%|█████████████████████████████████████████| Time: 0:00:00\u001b[K\n",
"\rProgress: 67%|███████████████████████████▍ | ETA: 0:00:01\u001b[K\r\n",
" class: virginica\u001b[K\r\u001b[A[ Info: After filtering, the mapping from each class to number of borderline points is (\"virginica\" => 1, \"versicolor\" => 2).\n",
"[ Info: After filtering, the mapping from each class to number of borderline points is (\"virginica\" => 1, \"versicolor\" => 2).\n",
"┌ Warning: Layer with Float32 parameters got Float64 input.\n",
"│ The input will be converted, but any earlier layers may be very slow.\n",
"│ layer = Dense(4 => 5, relu) # 25 parameters\n",
"│ summary(x) = \"4×8 Matrix{Float64}\"\n",
"└ @ Flux ~/.julia/packages/Flux/Wz6D4/src/layers/stateless.jl:60\n",
"\rOptimising neural net: 4%[> ] ETA: 0:00:00\u001b[K\rOptimising neural net: 6%[=> ] ETA: 0:00:00\u001b[K\rOptimising neural net: 8%[=> ] ETA: 0:00:00\u001b[K\rOptimising neural net: 10%[==> ] ETA: 0:00:00\u001b[K\rOptimising neural net: 12%[==> ] ETA: 0:00:00\u001b[K\rOptimising neural net: 14%[===> ] ETA: 0:00:00\u001b[K\rOptimising neural net: 16%[===> ] ETA: 0:00:00\u001b[K\rOptimising neural net: 18%[====> ] ETA: 0:00:00\u001b[K\rOptimising neural net: 20%[====> ] ETA: 0:00:00\u001b[K\rOptimising neural net: 22%[=====> ] ETA: 0:00:00\u001b[K\rOptimising neural net: 24%[=====> ] ETA: 0:00:00\u001b[K\rOptimising neural net: 25%[======> ] ETA: 0:00:00\u001b[K\rOptimising neural net: 27%[======> ] ETA: 0:00:00\u001b[K\rOptimising neural net: 29%[=======> ] ETA: 0:00:00\u001b[K\rOptimising neural net: 31%[=======> ] ETA: 0:00:00\u001b[K\rOptimising neural net: 33%[========> ] ETA: 0:00:00\u001b[K\rOptimising neural net: 35%[========> ] ETA: 0:00:00\u001b[K\rOptimising neural net: 37%[=========> ] ETA: 0:00:00\u001b[K\rOptimising neural net: 39%[=========> ] ETA: 0:00:00\u001b[K\rOptimising neural net: 41%[==========> ] ETA: 0:00:00\u001b[K\rOptimising neural net: 43%[==========> ] ETA: 0:00:00\u001b[K\rOptimising neural net: 45%[===========> ] ETA: 0:00:00\u001b[K\rOptimising neural net: 47%[===========> ] ETA: 0:00:00\u001b[K\rOptimising neural net: 49%[============> ] ETA: 0:00:00\u001b[K\rOptimising neural net: 51%[============> ] ETA: 0:00:00\u001b[K\rOptimising neural net: 53%[=============> ] ETA: 0:00:00\u001b[K\rOptimising neural net: 55%[=============> ] ETA: 0:00:00\u001b[K\rOptimising neural net: 57%[==============> ] ETA: 0:00:00\u001b[K\rOptimising neural net: 59%[==============> ] ETA: 0:00:00\u001b[K\rOptimising neural net: 61%[===============> ] ETA: 0:00:00\u001b[K\rOptimising neural net: 63%[===============> ] ETA: 0:00:00\u001b[K\rOptimising neural net: 65%[================> ] ETA: 0:00:00\u001b[K\rOptimising neural net: 67%[================> ] ETA: 0:00:00\u001b[K\rOptimising neural net: 69%[=================> ] ETA: 0:00:00\u001b[K\rOptimising neural net: 71%[=================> ] ETA: 0:00:00\u001b[K\rOptimising neural net: 73%[==================> ] ETA: 0:00:00\u001b[K\rOptimising neural net: 75%[==================> ] ETA: 0:00:00\u001b[K\rOptimising neural net: 76%[===================> ] ETA: 0:00:00\u001b[K\rOptimising neural net: 78%[===================> ] ETA: 0:00:00\u001b[K\rOptimising neural net: 80%[====================> ] ETA: 0:00:00\u001b[K\rOptimising neural net: 82%[====================> ] ETA: 0:00:00\u001b[K\rOptimising neural net: 84%[=====================> ] ETA: 0:00:00\u001b[K\rOptimising neural net: 86%[=====================> ] ETA: 0:00:00\u001b[K\rOptimising neural net: 88%[======================> ] ETA: 0:00:00\u001b[K\rOptimising neural net: 90%[======================> ] ETA: 0:00:00\u001b[K\rOptimising neural net: 92%[=======================> ] ETA: 0:00:00\u001b[K\rOptimising neural net: 94%[=======================> ] ETA: 0:00:00\u001b[K\rOptimising neural net: 96%[========================>] ETA: 0:00:00\u001b[K\rOptimising neural net: 98%[========================>] ETA: 0:00:00\u001b[K\rOptimising neural net: 100%[=========================] Time: 0:00:00\u001b[K\n",
"\rOptimising neural net: 4%[> ] ETA: 0:00:00\u001b[K\rOptimising neural net: 47%[===========> ] ETA: 0:00:00\u001b[K\rOptimising neural net: 49%[============> ] ETA: 0:00:00\u001b[K\rOptimising neural net: 51%[============> ] ETA: 0:00:00\u001b[K\rOptimising neural net: 53%[=============> ] ETA: 0:00:00\u001b[K\rOptimising neural net: 55%[=============> ] ETA: 0:00:00\u001b[K\rOptimising neural net: 57%[==============> ] ETA: 0:00:00\u001b[K\rOptimising neural net: 59%[==============> ] ETA: 0:00:00\u001b[K\rOptimising neural net: 61%[===============> ] ETA: 0:00:00\u001b[K\rOptimising neural net: 63%[===============> ] ETA: 0:00:00\u001b[K\rOptimising neural net: 65%[================> ] ETA: 0:00:00\u001b[K\rOptimising neural net: 67%[================> ] ETA: 0:00:00\u001b[K\rOptimising neural net: 69%[=================> ] ETA: 0:00:00\u001b[K\rOptimising neural net: 71%[=================> ] ETA: 0:00:00\u001b[K\rOptimising neural net: 73%[==================> ] ETA: 0:00:00\u001b[K\rOptimising neural net: 75%[==================> ] ETA: 0:00:00\u001b[K\rOptimising neural net: 76%[===================> ] ETA: 0:00:00\u001b[K\rOptimising neural net: 78%[===================> ] ETA: 0:00:00\u001b[K\rOptimising neural net: 80%[====================> ] ETA: 0:00:00\u001b[K\rOptimising neural net: 82%[====================> ] ETA: 0:00:00\u001b[K\rOptimising neural net: 84%[=====================> ] ETA: 0:00:00\u001b[K\rOptimising neural net: 86%[=====================> ] ETA: 0:00:00\u001b[K\rOptimising neural net: 88%[======================> ] ETA: 0:00:00\u001b[K\rOptimising neural net: 90%[======================> ] ETA: 0:00:00\u001b[K\rOptimising neural net: 92%[=======================> ] ETA: 0:00:00\u001b[K\rOptimising neural net: 94%[=======================> ] ETA: 0:00:00\u001b[K\rOptimising neural net: 96%[========================>] ETA: 0:00:00\u001b[K\rOptimising neural net: 98%[========================>] ETA: 0:00:00\u001b[K\rOptimising neural net: 100%[=========================] Time: 0:00:00\u001b[K\n",
"[ Info: After filtering, the mapping from each class to number of borderline points is (\"virginica\" => 3, \"versicolor\" => 1).\n",
"[ Info: After filtering, the mapping from each class to number of borderline points is (\"virginica\" => 3, \"versicolor\" => 1).\n",
"[ Info: After filtering, the mapping from each class to number of borderline points is (\"versicolor\" => 2).\n",
"┌ Warning: Cannot oversample a class with no borderline points. Skipping.\n",
"└ @ Imbalance ~/.julia/packages/Imbalance/gOviV/src/oversampling_methods/borderline_smote1/borderline_smote1.jl:67\n",
"└ @ Imbalance ~/.julia/packages/Imbalance/knJL1/src/oversampling_methods/borderline_smote1/borderline_smote1.jl:67\n",
"[ Info: After filtering, the mapping from each class to number of borderline points is (\"versicolor\" => 2).\n",
"┌ Warning: Cannot oversample a class with no borderline points. Skipping.\n",
"└ @ Imbalance ~/.julia/packages/Imbalance/gOviV/src/oversampling_methods/borderline_smote1/borderline_smote1.jl:67\n",
"\rEvaluating over 5 folds: 40%[==========> ] ETA: 0:00:09\u001b[K[ Info: After filtering, the mapping from each class to number of borderline points is (\"virginica\" => 1, \"versicolor\" => 2).\n",
"└ @ Imbalance ~/.julia/packages/Imbalance/knJL1/src/oversampling_methods/borderline_smote1/borderline_smote1.jl:67\n",
"\rEvaluating over 5 folds: 40%[==========> ] ETA: 0:00:00\u001b[K[ Info: After filtering, the mapping from each class to number of borderline points is (\"virginica\" => 1, \"versicolor\" => 2).\n",
"[ Info: After filtering, the mapping from each class to number of borderline points is (\"virginica\" => 1, \"versicolor\" => 2).\n",
"\rEvaluating over 5 folds: 60%[===============> ] ETA: 0:00:04\u001b[K[ Info: After filtering, the mapping from each class to number of borderline points is (\"virginica\" => 1).\n",
"[ Info: After filtering, the mapping from each class to number of borderline points is (\"virginica\" => 1).\n",
"┌ Warning: Cannot oversample a class with no borderline points. Skipping.\n",
"└ @ Imbalance ~/.julia/packages/Imbalance/gOviV/src/oversampling_methods/borderline_smote1/borderline_smote1.jl:67\n",
"└ @ Imbalance ~/.julia/packages/Imbalance/knJL1/src/oversampling_methods/borderline_smote1/borderline_smote1.jl:67\n",
"[ Info: After filtering, the mapping from each class to number of borderline points is (\"virginica\" => 1).\n",
"┌ Warning: Cannot oversample a class with no borderline points. Skipping.\n",
"└ @ Imbalance ~/.julia/packages/Imbalance/gOviV/src/oversampling_methods/borderline_smote1/borderline_smote1.jl:67\n",
"└ @ Imbalance ~/.julia/packages/Imbalance/knJL1/src/oversampling_methods/borderline_smote1/borderline_smote1.jl:67\n",
"[ Info: After filtering, the mapping from each class to number of borderline points is (\"virginica\" => 3, \"versicolor\" => 3).\n",
"[ Info: After filtering, the mapping from each class to number of borderline points is (\"virginica\" => 3, \"versicolor\" => 3).\n",
"\rEvaluating over 5 folds: 100%[=========================] Time: 0:00:06\u001b[K\n"
"\rEvaluating over 5 folds: 100%[=========================] Time: 0:00:00\u001b[K\n"
]
},
{
"output_type": "execute_result",
"data": {
"text/plain": "PerformanceEvaluation object with these fields:\n model, measure, operation,\n measurement, per_fold, per_observation,\n fitted_params_per_fold, report_per_fold,\n train_test_rows, resampling, repeats\nExtract:\n┌────────────┬──────────────┬─────────────\n│\u001b[22m measure \u001b[0m│\u001b[22m operation \u001b[0m│\u001b[22m measurement \u001b[0m│\n├────────────┼──────────────┼─────────────┤\n│ Accuracy() │ predict_mode │ 0.98 │\n└───────────────────────────────────────┘\n┌──────────────────────────────────────┐\n│\u001b[22m per_fold \u001b[0m│\u001b[22m 1.96*SE \u001b[0m│\n├──────────────────────────────────────┤\n│ [1.0, 1.0, 0.95, 0.95, 1.0] │ 0.0268 │\n└──────────────────────────────────────┘\n"
"text/plain": "PerformanceEvaluation object with these fields:\n model, measure, operation, measurement, per_fold,\n per_observation, fitted_params_per_fold,\n report_per_fold, train_test_rows, resampling, repeats\nExtract:\n┌────────────┬──────────────┬─────────────┬─────────┬───────────────────────────\n│\u001b[22m measure \u001b[0m│\u001b[22m operation \u001b[0m│\u001b[22m measurement \u001b[0m│\u001b[22m 1.96*SE \u001b[0m│\u001b[22m per_fold \u001b[0m ⋯\n├───────────────────────────────────────────────────────────────────────────\n│ Accuracy() │ predict_mode │ 0.98 │ 0.0268 │ [1.0, 1.0, 0.95, 0.95, 1 ⋯\n└───────────────────────────────────────┴────────────────────────────────────\n\u001b[36m 1 column omitted\u001b[0m\n"
},
"metadata": {},
"execution_count": 7
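The notebook change above boils down to one fact: `Standardizer` ships with MLJ itself, so it is usable as soon as `using MLJ` has run, with no `@load Standardizer pkg=MLJModels` step. A minimal sketch of how one might confirm this, assuming a recent MLJ version (the exact list printed by `localmodels()` depends on what else is loaded):

```julia
using MLJ

# `localmodels()` lists the models whose code is already available
# without an explicit `@load`; Standardizer is expected among them.
builtin = localmodels()
@show any(m -> m.name == "Standardizer", builtin)

# Consequently the transformer can be constructed directly:
stand = Standardizer()
```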
2 changes: 1 addition & 1 deletion docs/src/workflow examples/Composition/composition.jl
@@ -36,7 +36,7 @@ Imbalance.checkbalance(y)
# Let's load `BorderlineSMOTE1` to oversample the data and `Standardizer` to standardize it.
BorderlineSMOTE1 = @load BorderlineSMOTE1 pkg=Imbalance verbosity=0
NeuralNetworkClassifier = @load NeuralNetworkClassifier pkg=MLJFlux
Standardizer = @load Standardizer pkg=MLJModels
## We didn't need to load Standardizer because it is a local model for MLJ (see `localmodels()`)

clf = NeuralNetworkClassifier(
builder=MLJFlux.MLP(; hidden=(5,4), σ=Flux.relu),
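For context, the tutorial being edited composes the oversampler, the standardizer, and the neural network into a single model. The sketch below shows one plausible way to wire them together, assuming MLJBalancing's `BalancedModel` wrapper and MLJ's `|>` pipeline syntax; apart from `m = 5`, which appears in the training log above, the hyperparameter values are illustrative rather than the tutorial's exact settings:

```julia
using MLJ, MLJBalancing, MLJFlux, Flux

BorderlineSMOTE1 = @load BorderlineSMOTE1 pkg=Imbalance verbosity=0
NeuralNetworkClassifier = @load NeuralNetworkClassifier pkg=MLJFlux
# Standardizer needs no @load: it is a local MLJ model (see `localmodels()`).

clf = NeuralNetworkClassifier(
    builder=MLJFlux.MLP(; hidden=(5, 4), σ=Flux.relu),
    epochs=50,                        # illustrative value
)

# Standardize then classify; the balancer resamples only the data seen at
# fit time, so predictions on new data are unaffected by the oversampling.
pipeline = BalancedModel(
    model=Standardizer() |> clf,
    balancer1=BorderlineSMOTE1(m=5),
)
```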
4 changes: 2 additions & 2 deletions docs/src/workflow examples/Composition/composition.md
@@ -44,7 +44,7 @@ Let's load `BorderlineSMOTE1` to oversample the data and `Standardizer` to stand
````@example composition
BorderlineSMOTE1 = @load BorderlineSMOTE1 pkg=Imbalance verbosity=0
NeuralNetworkClassifier = @load NeuralNetworkClassifier pkg=MLJFlux
Standardizer = @load Standardizer pkg=MLJModels
# We didn't need to load Standardizer because it is a local model for MLJ (see `localmodels()`)
clf = NeuralNetworkClassifier(
builder=MLJFlux.MLP(; hidden=(5,4), σ=Flux.relu),
@@ -75,7 +75,7 @@ for inference, the standardizer will automatically use the training set's mean a
will be transparent.

### Training the Composed Model
It's indistinguishable from training a single model. Isn't MLJ beautiful?
It's indistinguishable from training a single model.

````@example composition
mach = machine(pipeline, X, y)
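As the edited line says, training the composed model is indistinguishable from training a single one. A hedged sketch of the fit-and-evaluate pattern consistent with the notebook output above (5-fold cross-validated accuracy via `predict_mode`); here `X` and `y` are the feature table and labels prepared earlier in the tutorial, and the resampling settings are assumptions inferred from that output:

```julia
# Bind the composed model to the data and fit it like any single model.
mach = machine(pipeline, X, y)
fit!(mach)

# Cross-validated accuracy; `predict_mode` converts the classifier's
# probabilistic predictions to class labels before scoring.
evaluate!(mach,
          resampling=CV(nfolds=5),
          measure=accuracy,
          operation=predict_mode)
```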

0 comments on commit 801bc18
