From 26fbab5dd2cbc9ecfc71df26e83b4bb4cd91f524 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Fr=C3=A9d=C3=A9rik=20Paradis?= Date: Sun, 19 Mar 2023 14:45:44 -0400 Subject: [PATCH 1/2] Remove support for Python 3.7 (#162) --- .github/workflows/main.yml | 2 +- README.md | 2 +- docs/source/index.rst | 2 +- setup.py | 3 +-- 4 files changed, 4 insertions(+), 5 deletions(-) diff --git a/.github/workflows/main.yml b/.github/workflows/main.yml index 802adae8..81a0b42d 100644 --- a/.github/workflows/main.yml +++ b/.github/workflows/main.yml @@ -11,7 +11,7 @@ jobs: cicd-pipeline: strategy: matrix: - python-version: ["3.7", "3.8", "3.9", "3.10"] + python-version: ["3.8", "3.9", "3.10"] os: [ubuntu-latest] include: - python-version: "3.10" diff --git a/README.md b/README.md index 482a4fe3..b2f5d2a5 100644 --- a/README.md +++ b/README.md @@ -18,7 +18,7 @@ Use Poutyne to: Read the documentation at [Poutyne.org](https://poutyne.org). -Poutyne is compatible with the __latest version of PyTorch__ and __Python >= 3.7__. +Poutyne is compatible with the __latest version of PyTorch__ and __Python >= 3.8__. ### Cite ``` diff --git a/docs/source/index.rst b/docs/source/index.rst index 066b60e9..11043310 100644 --- a/docs/source/index.rst +++ b/docs/source/index.rst @@ -21,7 +21,7 @@ Use Poutyne to: - Train models easily. - Use callbacks to save your best model, perform early stopping and much more. -Poutyne is compatible with the **latest version of PyTorch** and **Python >= 3.7**. +Poutyne is compatible with the **latest version of PyTorch** and **Python >= 3.8**. Cite ---- diff --git a/setup.py b/setup.py index 2da3d352..fcf64bca 100644 --- a/setup.py +++ b/setup.py @@ -58,7 +58,6 @@ def main(): 'Intended Audience :: Science/Research', 'License :: OSI Approved :: GNU Lesser General Public License v3 (LGPLv3)', 'Programming Language :: Python :: 3', - 'Programming Language :: Python :: 3.7', 'Programming Language :: Python :: 3.8', 'Programming Language :: Python :: 3.9', 'Programming Language :: Python :: 3.10', @@ -69,7 +68,7 @@ def main(): ], packages=packages, install_requires=['numpy', 'torch', 'torchmetrics'], - python_requires='>=3.7', + python_requires='>=3.8', description='A simplified framework and utilities for PyTorch.', long_description=readme, long_description_content_type='text/markdown', From 8a3d96c2a9acbe108fd624af1cb5e64e4f33db30 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Fr=C3=A9d=C3=A9rik=20Paradis?= Date: Sun, 19 Mar 2023 14:46:56 -0400 Subject: [PATCH 2/2] Format markdown --- CHANGELOG.md | 265 +++++++++++++++++++++++---------------------- CODE_OF_CONDUCT.md | 22 ++-- CONTRIBUTING.md | 35 +++--- README.md | 69 ++++++------ 4 files changed, 201 insertions(+), 190 deletions(-) diff --git a/CHANGELOG.md b/CHANGELOG.md index 8ceb62f9..1527bbbe 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -1,190 +1,193 @@ # v1.x.x -* +- + +# v1.15 + +- Remove support for Python 3.7 # v1.14 -* Update examples using classification metrics from torchmetrics to add the now required `task` argument. -* Fix the no LR scheduler bug when using PyTorch 2.0. +- Update examples using classification metrics from torchmetrics to add the now required `task` argument. +- Fix the no LR scheduler bug when using PyTorch 2.0. # v1.13 Breaking changes: -* The deprecated `torch_metrics` keyword argument has been removed. Users should use the `batch_metrics` or `epoch_metrics` +- The deprecated `torch_metrics` keyword argument has been removed. 
Users should use the `batch_metrics` or `epoch_metrics` keyword argument for torchmetrics' metrics. -* The deprecated `EpochMetric` class has been removed. Users should implement the +- The deprecated `EpochMetric` class has been removed. Users should implement the [`Metric` class](https://poutyne.org/metrics.html#poutyne.Metric) instead. # v1.12.1 -* Fix memory leak when using recursive structure as data in the `Model.fit()` or the `ModelBundle.train_data()` methods. +- Fix memory leak when using recursive structure as data in the `Model.fit()` or the `ModelBundle.train_data()` methods. # v1.12 -* Fix a bug when transfering the optimizer on another device caused by a new feature in PyTorch 1.12, i.e. the "capturable" +- Fix a bug when transfering the optimizer on another device caused by a new feature in PyTorch 1.12, i.e. the "capturable" parameter in Adam and AdamW. -* Add utilitary functions for saving ([`save_random_states`](https://poutyne.org/utils.html#poutyne.save_random_states)) +- Add utilitary functions for saving ([`save_random_states`](https://poutyne.org/utils.html#poutyne.save_random_states)) and loading ([`load_random_states`](https://poutyne.org/utils.html#poutyne.load_random_states)) Python, Numpy and Pytorch's (both CPU and GPU) random states. Furthermore, we also add the [`RandomStatesCheckpoint`](https://poutyne.org/callbacks.html#poutyne.RandomStatesCheckpoint) callback. This callback is now used in ModelBundle. - # v1.11 -* Remove support for Python 3.6 as PyTorch. -* Add Dockerfile +- Remove support for Python 3.6 as PyTorch. +- Add Dockerfile # v1.10.1 -* Major bug fix: the state of the loss function was not reset after each epoch/evaluate calls so the values returned +- Major bug fix: the state of the loss function was not reset after each epoch/evaluate calls so the values returned were averages for the whole lifecycle of the Model class. # v1.10 -* Add a [WandB logger](https://poutyne.org/callbacks.html#poutyne.WandBLogger). -* [Epoch and batch metrics are now unified.](https://poutyne.org/metrics.html) Their only difference is whether the +- Add a [WandB logger](https://poutyne.org/callbacks.html#poutyne.WandBLogger). +- [Epoch and batch metrics are now unified.](https://poutyne.org/metrics.html) Their only difference is whether the metric for the batch is computed. The main interface is now the [`Metric` class](https://poutyne.org/metrics.html#poutyne.Metric). It is compatible with [TorchMetrics](https://torchmetrics.readthedocs.io/). Thus, TorchMetrics metrics can now be passed as either batch or epoch metrics. The metrics with the interface `metric(y_pred, y_true)` are internally wrapped into a `Metric` object and are still fully supported. The `torch_metrics` keyword argument and the `EpochMetric` class are now **deprecated** and will be removed in future versions. -* `Model.get_batch_size` is replaced by +- `Model.get_batch_size` is replaced by [`poutyne.get_batch_size()`](https://poutyne.org/utils.html#poutyne.get_batch_size). # v1.9 -* Add support for [TorchMetrics](https://torchmetrics.readthedocs.io/) metrics. -* [`Experiment`](https://poutyne.org/experiment.html#poutyne.Experiment) is now an alias for +- Add support for [TorchMetrics](https://torchmetrics.readthedocs.io/) metrics. 
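(Editor's note: a minimal, hedged sketch of the metric unification described in the v1.10 and v1.9 entries above — a TorchMetrics object passed as a batch metric alongside a built-in epoch metric. The `task`/`num_classes` arguments follow recent torchmetrics releases, and the toy network and data are placeholders; none of this code comes from the patch itself.)

```python
# Hedged sketch: TorchMetrics metrics used directly as Poutyne metrics (v1.10/v1.9).
# The torchmetrics arguments (task, num_classes) are assumptions based on recent
# torchmetrics releases; the network and data are toy placeholders.
import torch
import torch.nn as nn
import torchmetrics
from poutyne import Model

network = nn.Linear(10, 3)
model = Model(
    network, 'sgd', 'cross_entropy',
    batch_metrics=[torchmetrics.Accuracy(task='multiclass', num_classes=3)],
    epoch_metrics=['f1'],  # built-in epoch metric, still available by name
)

x, y = torch.randn(64, 10), torch.randint(3, (64,))
print(model.evaluate(x, y, batch_size=16))  # returns the loss and both metrics
```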
+- [`Experiment`](https://poutyne.org/experiment.html#poutyne.Experiment) is now an alias for [`ModelBundle`](https://poutyne.org/experiment.html#poutyne.ModelBundle), a class quite similar to `Experiment` except that it allows to instantiate an "Experiment" from a Poutyne Model or a network. -* Add support for PackedSequence. -* Add flag to [`TensorBoardLogger`](https://poutyne.org/callbacks.html#poutyne.TensorBoardLogger) to allow to put +- Add support for PackedSequence. +- Add flag to [`TensorBoardLogger`](https://poutyne.org/callbacks.html#poutyne.TensorBoardLogger) to allow to put training and validation metrics in different graphs. This allow to have a behavior closer to Keras. -* Add support for fscore on binary classification. -* Add `convert_to_numpy` flag to be able to obtain tensors instead of NumPy arrays in evaluate\* and predict\*. +- Add support for fscore on binary classification. +- Add `convert_to_numpy` flag to be able to obtain tensors instead of NumPy arrays in evaluate\* and predict\*. # v1.8 Breaking changes: -* When using epoch metrics `'f1'`, `'precision'`, `'recall'` and associated classes, the default average has been changed +- When using epoch metrics `'f1'`, `'precision'`, `'recall'` and associated classes, the default average has been changed to `'macro'` instead of `'micro'`. This changes the names of the metrics that is displayed and that is in the log dictionnary in callbacks. This change also applies to `Experiment` when using `task='classif'`. -* Exceptions when loading checkpoints in `Experiment` are now propagated instead of being silenced. +- Exceptions when loading checkpoints in `Experiment` are now propagated instead of being silenced. # v1.7 -* Add [`plot_history`](https://poutyne.org/utils.html#poutyne.plot_history) and +- Add [`plot_history`](https://poutyne.org/utils.html#poutyne.plot_history) and [`plot_metric`](https://poutyne.org/utils.html#poutyne.plot_metric) functions to easily plot the history returned by Poutyne. [`Experiment`](https://poutyne.org/experiment.html#poutyne.Experiment) also saves the figures at the end of the training. -* All text files (e.g. CSVs in CSVLogger) are now saved using UTF-8 on all platforms. +- All text files (e.g. CSVs in CSVLogger) are now saved using UTF-8 on all platforms. # v1.6 -* PeriodicSaveCallback and all its subclasses now have the `restore_best` argument. -* `Experiment` now contains a `monitoring` argument that can be set to false to avoid monitoring any metric and saving uneeded checkpoints. -* The format of the ETA time and total time now contains days, hours, minutes when appropriate. -* Add `predict` methods to Callback to allow callback to be call during prediction phase. -* Add `infer` methods to Experiment to more easily make inference (predictions) with an experiment. -* Add a progress bar callback during predictions of a model. -* Add a method to compare the results of two experiments. -* Add `return_ground_truth` and `has_ground_truth` arguments to +- PeriodicSaveCallback and all its subclasses now have the `restore_best` argument. +- `Experiment` now contains a `monitoring` argument that can be set to false to avoid monitoring any metric and saving uneeded checkpoints. +- The format of the ETA time and total time now contains days, hours, minutes when appropriate. +- Add `predict` methods to Callback to allow callback to be call during prediction phase. +- Add `infer` methods to Experiment to more easily make inference (predictions) with an experiment. 
+- Add a progress bar callback during predictions of a model. +- Add a method to compare the results of two experiments. +- Add `return_ground_truth` and `has_ground_truth` arguments to [`predict_dataset`](https://poutyne.org/model.html#poutyne.Model.predict_dataset) and [`predict_generator`](https://poutyne.org/model.html#poutyne.Model.predict_generator). # v1.5 -* Add [`LambdaCallback`](https://poutyne.org/callbacks.html#poutyne.LambdaCallback) to more easily define a callback +- Add [`LambdaCallback`](https://poutyne.org/callbacks.html#poutyne.LambdaCallback) to more easily define a callback from lambdas or functions. -* In Jupyter Notebooks, when coloring is enabled, the print rate of progress output is limited to one output every +- In Jupyter Notebooks, when coloring is enabled, the print rate of progress output is limited to one output every 0.1 seconds. This solves the slowness problem (and the memory problem on Firefox) when there is a great number of steps per epoch. -* Add `return_dict_format` argument to [`train_on_batch`](https://poutyne.org/model.html#poutyne.Model.train_on_batch) +- Add `return_dict_format` argument to [`train_on_batch`](https://poutyne.org/model.html#poutyne.Model.train_on_batch) and [`evaluate_on_batch`](https://poutyne.org/model.html#poutyne.Model.evaluate_on_batch) and allows to return predictions and ground truths in [`evaluate_*`](https://poutyne.org/model.html#poutyne.Model.evaluate) even when `return_dict_format=True`. Furthermore, [`Experiment.test*`](https://poutyne.org/experiment.html#poutyne.Experiment.test_data) now support `return_pred=True` and `return_ground_truth=True`. -* Split [Tips and Tricks](https://poutyne.org/examples/tips_and_tricks.html) example into two examples: +- Split [Tips and Tricks](https://poutyne.org/examples/tips_and_tricks.html) example into two examples: [Tips and Tricks](https://poutyne.org/examples/tips_and_tricks.html) and [Sequence Tagging With an RNN](https://poutyne.org/examples/sequence_tagging.html). # v1.4 -* Add examples for image reconstruction and semantic segmentation with Poutyne. -* Add the following flags in [`ProgressionCallback`](https://poutyne.org/callbacks.html#poutyne.ProgressionCallback): +- Add examples for image reconstruction and semantic segmentation with Poutyne. +- Add the following flags in [`ProgressionCallback`](https://poutyne.org/callbacks.html#poutyne.ProgressionCallback): `show_every_n_train_steps`, `show_every_n_valid_steps`, `show_every_n_test_steps`. They allow to show only certain steps instead of all steps. -* Fix bug where all warnings were silenced. -* Add `strict` flag when loading checkpoints. In Model, a NamedTuple is returned as in PyTorch's `load_state_dict`. In +- Fix bug where all warnings were silenced. +- Add `strict` flag when loading checkpoints. In Model, a NamedTuple is returned as in PyTorch's `load_state_dict`. In Experiment, a warning is raised when there are missing or unexpected keys in the checkpoint. -* In CSVLogger, when multiple learning rates are used, we use the column names `lr_group_0`, `lr_group_1`, etc. instead +- In CSVLogger, when multiple learning rates are used, we use the column names `lr_group_0`, `lr_group_1`, etc. instead of `lr`. -* Fix bug where EarlyStopping would be one epoch late and would anyway disregard the monitored metric at the last epoch. +- Fix bug where EarlyStopping would be one epoch late and would anyway disregard the monitored metric at the last epoch. 
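(Editor's note: a short, hedged sketch tying together two items above — the `LambdaCallback` added in v1.5 and the `EarlyStopping` behavior fixed in v1.4. Keyword names such as `monitor` and `patience`, and the lambda signature mirroring `Callback.on_epoch_end(epoch_number, logs)`, are assumptions from the public callback API rather than part of this patch; the network and data are placeholders.)

```python
# Hedged sketch: using the callbacks mentioned in the v1.5/v1.4 entries above.
# `monitor`/`patience` and the on_epoch_end signature are assumptions based on the
# documented Callback API; they are not introduced by this patch.
import torch
import torch.nn as nn
from poutyne import Model, EarlyStopping, LambdaCallback

train_x, train_y = torch.randn(800, 10), torch.randint(2, (800,))
valid_x, valid_y = torch.randn(200, 10), torch.randint(2, (200,))

network = nn.Linear(10, 2)
model = Model(network, 'sgd', 'cross_entropy', batch_metrics=['accuracy'])

callbacks = [
    # Stops training when the monitored metric stops improving (see the v1.4 fix).
    EarlyStopping(monitor='val_loss', patience=3),
    # LambdaCallback (v1.5) builds a callback from plain functions or lambdas.
    LambdaCallback(on_epoch_end=lambda epoch_number, logs: print(epoch_number, logs['val_loss'])),
]

model.fit(train_x, train_y, validation_data=(valid_x, valid_y),
          epochs=20, batch_size=32, callbacks=callbacks)
```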
# v1.3.1 -* Bug fix for when changing the GPU device twice with optimizer having a state would crash. +- Bug fix for when changing the GPU device twice with optimizer having a state would crash. # v1.3 -* A progress bar is now set on validation a model (similar to training). It is disableable by passing -`progress_options=dict(show_on_valid=False)` in the `fit*` methods. -* A progress bar is now set testing a model (similar to training). It is disableable by passing `verbose=False` in the -`evaluate*` methods. -* A new notification callback [`NotificationCallback`](https://poutyne.org/callbacks.html#poutyne.NotificationCallback) +- A progress bar is now set on validation a model (similar to training). It is disableable by passing + `progress_options=dict(show_on_valid=False)` in the `fit*` methods. +- A progress bar is now set testing a model (similar to training). It is disableable by passing `verbose=False` in the + `evaluate*` methods. +- A new notification callback [`NotificationCallback`](https://poutyne.org/callbacks.html#poutyne.NotificationCallback) allowing to received message at specific time (start/end training/testing an at any given epoch). -* A new logging callback, [`MLflowLogger`](https://poutyne.org/callbacks.html#poutyne.MLFlowLogger), this callback allows +- A new logging callback, [`MLflowLogger`](https://poutyne.org/callbacks.html#poutyne.MLFlowLogger), this callback allows you to log experimentation configuration and metrics during training, validation and testing. -* Fix bug where [`evaluate_generator`](https://poutyne.org/model.html#poutyne.Model.evaluate_generator) did not support +- Fix bug where [`evaluate_generator`](https://poutyne.org/model.html#poutyne.Model.evaluate_generator) did not support generators with StopIteration exception. -* Experiment now has a [`train_data`](https://poutyne.org/experiment.html#poutyne.Experiment.train_data) and a +- Experiment now has a [`train_data`](https://poutyne.org/experiment.html#poutyne.Experiment.train_data) and a [`test_data`](https://poutyne.org/experiment.html#poutyne.Experiment.test_data) method. -* The [Lambda layer](https://poutyne.org/layers.html#poutyne.Lambda) now supports multiple arguments in its forward method. +- The [Lambda layer](https://poutyne.org/layers.html#poutyne.Lambda) now supports multiple arguments in its forward method. # v1.2 -* A `device` argument is added to [`Model`](https://poutyne.org/model.html#poutyne.Model). -* The argument `optimizer` of [`Model`](https://poutyne.org/model.html#poutyne.Model) can now be a dictionary. This +- A `device` argument is added to [`Model`](https://poutyne.org/model.html#poutyne.Model). +- The argument `optimizer` of [`Model`](https://poutyne.org/model.html#poutyne.Model) can now be a dictionary. This allows to pass different argument to the optimizer, e.g. `optimizer=dict(optim='sgd', lr=0.1)`. -* The progress bar now uses 20 characters instead of 25. -* The progress bar is now more fluid since partial blocks are used allowing increments of 1/8th of a block at once. -* The function [`torch_to_numpy`](https://poutyne.org/utils.html#poutyne.torch_to_numpy) now does .detach() before +- The progress bar now uses 20 characters instead of 25. +- The progress bar is now more fluid since partial blocks are used allowing increments of 1/8th of a block at once. +- The function [`torch_to_numpy`](https://poutyne.org/utils.html#poutyne.torch_to_numpy) now does .detach() before .cpu(). This might slightly improves performances in some cases. 
-* In Experiment, the [`load_checkpoint`](https://poutyne.org/experiment.html#poutyne.Experiment.load_checkpoint) method +- In Experiment, the [`load_checkpoint`](https://poutyne.org/experiment.html#poutyne.Experiment.load_checkpoint) method can now load arbitrary checkpoints by passing a filename instead of the usual argument. -* Experiment now has a `train_dataset` and a `test_dataset` method. -* Experiment is not considered a beta feature anymore. +- Experiment now has a `train_dataset` and a `test_dataset` method. +- Experiment is not considered a beta feature anymore. **Breaking changes:** -* In [`evaluate`](https://poutyne.org/model.html#poutyne.Model.evaluate), `dataloader_kwargs` is now a dictionary +- In [`evaluate`](https://poutyne.org/model.html#poutyne.Model.evaluate), `dataloader_kwargs` is now a dictionary keyword argument instead of arbitrary keyword arguments. Other methods are already this way. This was an oversight of the last release. # v1.1 -* There is now a batch metric [`TopKAccuracy`](https://poutyne.org/metrics.html#poutyne.TopKAccuracy) and it is possible +- There is now a batch metric [`TopKAccuracy`](https://poutyne.org/metrics.html#poutyne.TopKAccuracy) and it is possible to use them as strings for `k` in 1 to 10 and 20, 30, …, 100, e.g. `'top5'`. -* Add [`fit_dataset`](https://poutyne.org/model.html#poutyne.Model.fit_dataset) - , [`evaluate_dataset`](https://poutyne.org/model.html#poutyne.Model.evaluate_dataset) - and [`predict_dataset`](https://poutyne.org/model.html#poutyne.Model.predict_dataset) methods which allow to pass +- Add [`fit_dataset`](https://poutyne.org/model.html#poutyne.Model.fit_dataset) + , [`evaluate_dataset`](https://poutyne.org/model.html#poutyne.Model.evaluate_dataset) + and [`predict_dataset`](https://poutyne.org/model.html#poutyne.Model.predict_dataset) methods which allow to pass PyTorch Datasets and creates DataLoader internally. Here is [an example with MNIST](https://github.com/GRAAL-Research/poutyne/blob/master/examples/basic_mnist_classification.py) . -* Colors now work correctly in Colab. -* The default colorscheme was changed so that it looks good in Colab, notebooks and command line. The previous one was +- Colors now work correctly in Colab. +- The default colorscheme was changed so that it looks good in Colab, notebooks and command line. The previous one was not readable in Colab. -* Checkpointing callbacks now don't use the Python [`tempfile` package](https://docs.python.org/3/library/tempfile.html) +- Checkpointing callbacks now don't use the Python [`tempfile` package](https://docs.python.org/3/library/tempfile.html) anymore for the temporary file. The use of this package caused problem when the temp filesystem was not on the same partition as the final destination of the checkpoint. The temporary file is now created at the same place as the final destination. Thus, in most use cases, this will render the use of the `temporary_filename` argument not necessary. The argument is still available for those who need it. -* In Experiment, it is not possible to call the method `test` when training without logging. +- In Experiment, it is not possible to call the method `test` when training without logging. # v1.0.1 @@ -194,27 +197,27 @@ Update following bug in new PyTorch version: https://github.com/pytorch/pytorch/ ## Version 1.0.0 of Poutyne is here! -* Output is now very nicely colored and now has a progress bar. Both are disableable with the ``progress_options`` - arguments. 
The ``colorama`` package needs to be installed to have the colors. See the documentation of +- Output is now very nicely colored and now has a progress bar. Both are disableable with the `progress_options` + arguments. The `colorama` package needs to be installed to have the colors. See the documentation of the [fit](https://poutyne.org/model.html#poutyne.Model.fit) method for details. -* Multi-GPU support: Uses ``torch.nn.parallel.data_parallel`` under the hood. -* Huge update to the documentation with a documentation of metrics and a lot of examples. -* No need to import ``framework`` anymore. Everything now can be imported from ``poutyne``directly, - i.e. ``from poutyne import whatever_you_want``. -* [``PeriodicSaveCallbacks``](https://poutyne.org/callbacks.html#poutyne.PeriodicSaveCallback) (such - as [``ModelCheckpoint``](https://poutyne.org/callbacks.html#poutyne.ModelCheckpoint)) now has a - flag ``keep_only_last_best`` which allow to only keep the last best checkpoint even when the names differ between +- Multi-GPU support: Uses `torch.nn.parallel.data_parallel` under the hood. +- Huge update to the documentation with a documentation of metrics and a lot of examples. +- No need to import `framework` anymore. Everything now can be imported from `poutyne`directly, + i.e. `from poutyne import whatever_you_want`. +- [`PeriodicSaveCallbacks`](https://poutyne.org/callbacks.html#poutyne.PeriodicSaveCallback) (such + as [`ModelCheckpoint`](https://poutyne.org/callbacks.html#poutyne.ModelCheckpoint)) now has a + flag `keep_only_last_best` which allow to only keep the last best checkpoint even when the names differ between epochs. -* [``FBeta``](https://poutyne.org/metrics.html#poutyne.FBeta) now supports an ``ignore_index`` as - in ``nn.CrossEntropyLoss``. -* Epoch metrics strings ``'precision'`` and ``'recall'`` now available directly without instantiating ``FBeta``. -* Better ETA estimation in output by weighting more recent batches than older batches. -* Batch metrics [``acc``](https://poutyne.org/metrics.html#poutyne.acc) - and [``bin_acc``](https://poutyne.org/metrics.html#poutyne.bin_acc) now have class - counterparts [``Accuracy``](https://poutyne.org/metrics.html#poutyne.Accuracy) - and [``BinaryAccuracy``](https://poutyne.org/metrics.html#poutyne.BinaryAccuracy) in addition to a ``reduction`` +- [`FBeta`](https://poutyne.org/metrics.html#poutyne.FBeta) now supports an `ignore_index` as + in `nn.CrossEntropyLoss`. +- Epoch metrics strings `'precision'` and `'recall'` now available directly without instantiating `FBeta`. +- Better ETA estimation in output by weighting more recent batches than older batches. +- Batch metrics [`acc`](https://poutyne.org/metrics.html#poutyne.acc) + and [`bin_acc`](https://poutyne.org/metrics.html#poutyne.bin_acc) now have class + counterparts [`Accuracy`](https://poutyne.org/metrics.html#poutyne.Accuracy) + and [`BinaryAccuracy`](https://poutyne.org/metrics.html#poutyne.BinaryAccuracy) in addition to a `reduction` keyword argument as in PyTorch. -* Various bug fixes. +- Various bug fixes. # v0.8.2 @@ -235,7 +238,7 @@ Update following bug in new PyTorch version: https://github.com/pytorch/pytorch/ disabled as instructed, the new behavior takes place. 
(See documentation of [evaluate_generator](https://poutyne.org/model.html#poutyne.framework.Model.evaluate_generator) and [predict_generator](https://poutyne.org/model.html#poutyne.framework.Model.predict_generator)) -- Names of methods `on_batch_begin` and `on_batch_end` changed to `on_train_batch_begin` and `on_train_batch_end` +- Names of methods `on_batch_begin` and `on_batch_end` changed to `on_train_batch_begin` and `on_train_batch_end` respectively. When the old names are used, a warning is issued with backward compatibility added. This backward compatibility will be removed in the next version. - `EpochMetric` classes now have an obligatory reset method. @@ -253,62 +256,62 @@ This is not legal advice. You should consult your lawyer about the implication o # v0.7.1 -* Fix a bug introduced in v0.7 when only one of epoch metrics and batch metrics were provided and we would try to +- Fix a bug introduced in v0.7 when only one of epoch metrics and batch metrics were provided and we would try to concatenate a tuple and a list. # v0.7 -* Add automatic naming for class object in `batch_metrics` and `epoch_metrics`. -* Add get_saved_epochs method to Experiment -* `optimizer` parameter can now be set to None in `Model`in the case where there is no need for it. -* Fixes warning from new PyTorch version. -* Various improvement of the code. +- Add automatic naming for class object in `batch_metrics` and `epoch_metrics`. +- Add get_saved_epochs method to Experiment +- `optimizer` parameter can now be set to None in `Model`in the case where there is no need for it. +- Fixes warning from new PyTorch version. +- Various improvement of the code. -*Breaking changes:* +_Breaking changes:_ -* Threshold of the binary_accuracy metric is now 0 instead of 0.5 so that it works using the logits instead of the +- Threshold of the binary_accuracy metric is now 0 instead of 0.5 so that it works using the logits instead of the probabilities. -* The attribute `model` of the `Model` class is now called `network` instead. A deprecation warning is in place until +- The attribute `model` of the `Model` class is now called `network` instead. A deprecation warning is in place until the next version. # v0.6 -* Poutyne now has a new logo! -* Add a beta `Experiment` class that encapsulates logging and checkpointing callbacks so that it is possible to stop and +- Poutyne now has a new logo! +- Add a beta `Experiment` class that encapsulates logging and checkpointing callbacks so that it is possible to stop and resume optimization at any time. -* Add epoch metrics allowing to compute metrics over an epoch that are not decomposable such as F1 scores, precision, +- Add epoch metrics allowing to compute metrics over an epoch that are not decomposable such as F1 scores, precision, recall. While only these former epoch metrics are currently available in Poutyne, epoch metrics can allow to compute the AUROC metric, PCC metric, etc. -* Support for multiple batches per optimizer step. This allows to have smaller batches that fit in memory instead of a +- Support for multiple batches per optimizer step. This allows to have smaller batches that fit in memory instead of a big batch that does not fit while retaining the advantage of the big batch. -* Add return_ground_truth argument to evaluate_generator. -* Data loading is now taken into account time for progress estimation. -* Various doc updates and example finetunings. +- Add return_ground_truth argument to evaluate_generator. 
+- Data loading is now taken into account time for progress estimation. +- Various doc updates and example finetunings. -*Breaking changes:* +_Breaking changes:_ -* `metrics` argument in Model is now deprecated. This argument will be removed in the next version. Use `batch_metrics` +- `metrics` argument in Model is now deprecated. This argument will be removed in the next version. Use `batch_metrics` instead. -* `pytoune` package is now removed. -* If steps_per_epoch or validation_steps are greater than the generator length in *_generator methods, then the +- `pytoune` package is now removed. +- If steps_per_epoch or validation_steps are greater than the generator length in \*\_generator methods, then the generator is cycled through instead of stopping as before. # v0.5.1 -* Update for PyTorch 1.1. -* Transfers metric modules to GPU when appropriate. +- Update for PyTorch 1.1. +- Transfers metric modules to GPU when appropriate. # v0.5 -* Adding a new `OptimizerPolicy` class allowing to have Phase-based learning rate policies. The two following learning +- Adding a new `OptimizerPolicy` class allowing to have Phase-based learning rate policies. The two following learning policies are also provided: - * "Super-Convergence: Very Fast Training of Neural Networks Using Large Learning Rates", Leslie N. Smith, Nicholay - Topin, https://arxiv.org/abs/1708.07120 - * "SGDR: Stochastic Gradient Descent with Warm Restarts", Ilya Loshchilov, Frank - Hutter, https://arxiv.org/abs/1608.0398 -* Adding of "bin_acc" metric for binary classification in addition to the "accuracy" metric". -* Adding "time" in callbacks' logs. -* Various refactoring and small bug fixes. + - "Super-Convergence: Very Fast Training of Neural Networks Using Large Learning Rates", Leslie N. Smith, Nicholay + Topin, https://arxiv.org/abs/1708.07120 + - "SGDR: Stochastic Gradient Descent with Warm Restarts", Ilya Loshchilov, Frank + Hutter, https://arxiv.org/abs/1608.0398 +- Adding of "bin_acc" metric for binary classification in addition to the "accuracy" metric". +- Adding "time" in callbacks' logs. +- Various refactoring and small bug fixes. # v0.4.1 @@ -329,7 +332,7 @@ Non-breaking changes: # v0.4 - New usage example using MNIST -- New *_on_batch methods to Model +- New \*\_on_batch methods to Model - Every Numpy array is converted into a tensor and vice-versa everywhere it applies i.e. methods return Numpy arrays and can take Numpy arrays as input. - New convenient simple layers (Flatten, Identity and Lambda layers) @@ -354,23 +357,23 @@ Other changes: Last release before an upgrade with breaking changes due to the update of PyTorch 0.4.0. -* Add an on_backward_end callback function -* Add a ClipNorm callback -* Fix various bugs. +- Add an on_backward_end callback function +- Add a ClipNorm callback +- Fix various bugs. # v0.2.1 -* Fix warning bugs and bad logic in checkpoints. -* Fix bug where we did not display metric when its value was equal to zero. +- Fix warning bugs and bad logic in checkpoints. +- Fix bug where we did not display metric when its value was equal to zero. # v0.2 -* ModelCheckpoint now writes off the checkpoint atomically. -* New initial_epoch parameter to Model. -* Mean of losses and metrics done with batch size weighted by len(y) instead of just the mean of the losses and metrics. -* Update to the documentation. -* Model's predict and evaluate makes more sense now and have now a generator version. -* Few other bug fixes. +- ModelCheckpoint now writes off the checkpoint atomically. 
+- New initial_epoch parameter to Model. +- Mean of losses and metrics done with batch size weighted by len(y) instead of just the mean of the losses and metrics. +- Update to the documentation. +- Model's predict and evaluate makes more sense now and have now a generator version. +- Few other bug fixes. # v0.1.1 diff --git a/CODE_OF_CONDUCT.md b/CODE_OF_CONDUCT.md index 52e62891..246a4cb3 100644 --- a/CODE_OF_CONDUCT.md +++ b/CODE_OF_CONDUCT.md @@ -17,23 +17,23 @@ diverse, inclusive, and healthy community. Examples of behavior that contributes to a positive environment for our community include: -* Demonstrating empathy and kindness toward other people -* Being respectful of differing opinions, viewpoints, and experiences -* Giving and gracefully accepting constructive feedback -* Accepting responsibility and apologizing to those affected by our mistakes, +- Demonstrating empathy and kindness toward other people +- Being respectful of differing opinions, viewpoints, and experiences +- Giving and gracefully accepting constructive feedback +- Accepting responsibility and apologizing to those affected by our mistakes, and learning from the experience -* Focusing on what is best not just for us as individuals, but for the +- Focusing on what is best not just for us as individuals, but for the overall community Examples of unacceptable behavior include: -* The use of sexualized language or imagery, and sexual attention or +- The use of sexualized language or imagery, and sexual attention or advances of any kind -* Trolling, insulting or derogatory comments, and personal or political attacks -* Public or private harassment -* Publishing others' private information, such as a physical or email +- Trolling, insulting or derogatory comments, and personal or political attacks +- Public or private harassment +- Publishing others' private information, such as a physical or email address, without their explicit permission -* Other conduct which could reasonably be considered inappropriate in a +- Other conduct which could reasonably be considered inappropriate in a professional setting ## Enforcement Responsibilities @@ -106,7 +106,7 @@ Violating these terms may lead to a permanent ban. ### 4. Permanent Ban **Community Impact**: Demonstrating a pattern of violation of community -standards, including sustained inappropriate behavior, harassment of an +standards, including sustained inappropriate behavior, harassment of an individual, or aggression toward or disparagement of classes of individuals. **Consequence**: A permanent ban from any sort of public interaction within diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index e74ce2a5..aa530ca6 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -1,4 +1,5 @@ # Contributing to Poutyne + We love your input! We want to make contributing to this project as easy and transparent as possible, whether it's: - Reporting a bug @@ -8,9 +9,11 @@ We love your input! We want to make contributing to this project as easy and tra - Becoming a maintainer ## We Develop with Github + We use github to host code, to track issues and feature requests, as well as accept pull requests. ## We Use [Github Flow](https://guides.github.com/introduction/flow/index.html), So All Code Changes Happen Through Pull Requests + Pull requests are the best way to propose changes to the codebase. We actively welcome your pull requests: 1. Fork the repo and create your branch from the **`dev` branch**. @@ -21,6 +24,7 @@ Pull requests are the best way to propose changes to the codebase. 
We actively w 6. Submit that pull request! ## Any contributions you make will be under the LGPLv3 Software License + In short, when you submit code changes, your submissions are understood to be under the same [LGPLv3 License](https://choosealicense.com/licenses/lgpl-3.0/) that covers the project. Feel free to contact the maintainers if that's a concern. ## Write bug reports with detail, background, and sample code @@ -36,26 +40,26 @@ We use GitHub issues to track public bugs. Report a bug by [opening a new issue] - What you expected would happen - What actually happens - Notes (possibly including why you think this might be happening, or stuff you tried that didn't work) -Feel free to include any print screen or other file you feel may further clarify your point. + Feel free to include any print screen or other file you feel may further clarify your point. + ## Do you have a suggestion for an enhancement? -We use GitHub issues to track enhancement requests. Before you create an enhancement request: +We use GitHub issues to track enhancement requests. Before you create an enhancement request: -* Make sure you have a clear idea of the enhancement you would like. If you have a vague idea, consider discussing -it first on the users list. +- Make sure you have a clear idea of the enhancement you would like. If you have a vague idea, consider discussing + it first on the users list. -* Check the documentation to make sure your feature does not already exist. +- Check the documentation to make sure your feature does not already exist. -* Do a [quick search](https://github.com/GRAAL-Research/poutyne/issues) to see whether your enhancement has already been suggested. +- Do a [quick search](https://github.com/GRAAL-Research/poutyne/issues) to see whether your enhancement has already been suggested. When creating your enhancement request, please: -* Provide a clear title and description. - -* Explain why the enhancement would be useful. It may be helpful to highlight the feature in other libraries. +- Provide a clear title and description. -* Include code examples to demonstrate how the enhancement would be used. +- Explain why the enhancement would be useful. It may be helpful to highlight the feature in other libraries. +- Include code examples to demonstrate how the enhancement would be used. ## Prerequisites @@ -70,6 +74,7 @@ pip install -r docs/requirements.txt ``` Also, you should run `python setup.py develop` to build the project and be able to build the documentation. + ``` python setup.py develop ``` @@ -77,6 +82,7 @@ python setup.py develop ## Use a Consistent Coding Style All of the code is formatted using [black](https://black.readthedocs.io) with the associated [config file](https://github.com/GRAAL-Research/poutyne/blob/master/pyproject.toml). In order to format the code of your submission, simply run + > See the [styling requirements](https://github.com/GRAAL-Research/poutyne/blob/master/styling_requirements.txt) for the proper black version to use. ``` @@ -84,6 +90,7 @@ black . ``` We also have our own `pylint` [config file](https://github.com/GRAAL-Research/poutyne/blob/master/.pylintrc). Try not to introduce code incoherences detected by the linting. You can run the linting procedure with + > See the [styling requirements](https://github.com/GRAAL-Research/poutyne/blob/master/styling_requirements.txt) for the proper pylint version to use. 
``` @@ -105,12 +112,12 @@ pytest tests When submitting a pull request for a new feature, try to include documentation for the new objects/modules introduced and their public methods. - All of Poutyne's html documentation is automatically generated from the Python files' documentation. To have a preview of what the final html will look like with your modifications, first start by rebuilding the html pages. +All of Poutyne's html documentation is automatically generated from the Python files' documentation. To have a preview of what the final html will look like with your modifications, first start by rebuilding the html pages. - ``` +``` cd docs ./rebuild_html_doc.sh - ``` +``` You can then see the local html files in your favorite browser. Here is an example using Firefox: @@ -119,7 +126,9 @@ firefox _build/html/index.html ``` ## License + By contributing, you agree that your contributions will be licensed under its LGPLv3 License. ## References + This document was adapted from the open-source contribution guidelines for [Facebook's Draft](https://github.com/facebook/draft-js/blob/a9316a723f9e918afde44dea68b5f9f39b7d9b00/CONTRIBUTING.md). diff --git a/README.md b/README.md index b2f5d2a5..307f76dd 100644 --- a/README.md +++ b/README.md @@ -13,14 +13,16 @@ Poutyne is a simplified framework for [PyTorch](https://pytorch.org/) and handles much of the boilerplating code needed to train neural networks. Use Poutyne to: + - Train models easily. - Use callbacks to save your best model, perform early stopping and much more. Read the documentation at [Poutyne.org](https://poutyne.org). -Poutyne is compatible with the __latest version of PyTorch__ and __Python >= 3.8__. +Poutyne is compatible with the **latest version of PyTorch** and **Python >= 3.8**. ### Cite + ``` @misc{Paradis_Poutyne_A_Simplified_2020, author = {Paradis, Frédérik and Beauchemin, David and Godbout, Mathieu and Alain, Mathieu and Garneau, Nicolas and Otte, Stefan and Tremblay, Alexis and Bélanger, Marc-Antoine and Laviolette, François}, @@ -30,9 +32,7 @@ Poutyne is compatible with the __latest version of PyTorch__ and __Python >= 3 } ``` - ------------------- - +--- ## Getting started: few seconds to Poutyne @@ -105,7 +105,7 @@ model.fit( Since Poutyne is inspired by [Keras](https://keras.io), one might have notice that this is really similar to some of its [functions](https://keras.io/models/model/). -You can evaluate the performances of your network using the ``evaluate`` method of Poutyne's model: +You can evaluate the performances of your network using the `evaluate` method of Poutyne's model: ```python loss, (accuracy, f1score) = model.evaluate(test_x, test_y) @@ -136,8 +136,7 @@ model_bundle.test_data(test_x, test_y) [See the complete code here.](https://github.com/GRAAL-Research/poutyne/blob/master/examples/basic_random_classification_with_model_bundle.py) Also, [see this](https://github.com/GRAAL-Research/poutyne/blob/master/examples/basic_random_regression_with_model_bundle.py) for an example for regression. - ------------------- +--- ## Installation @@ -161,67 +160,67 @@ pip install -U git+https://github.com/GRAAL-Research/poutyne.git@dev docker pull ghcr.io/graal-research/poutyne:latest ``` ------------------- +--- ## Learning Material ### Blog posts -* [Medium PyTorch post](https://medium.com/pytorch/poutyne-a-simplified-framework-for-deep-learning-in-pytorch-74b1fc1d5a8b) - Presentation of the basics of Poutyne and how it can help you be more efficient when developing neural networks with PyTorch. 
+- [Medium PyTorch post](https://medium.com/pytorch/poutyne-a-simplified-framework-for-deep-learning-in-pytorch-74b1fc1d5a8b) - Presentation of the basics of Poutyne and how it can help you be more efficient when developing neural networks with PyTorch. ### Examples Look at notebook files with full working [examples](https://github.com/GRAAL-Research/poutyne/blob/master/examples/): -* [introduction.ipynb](https://github.com/GRAAL-Research/poutyne/blob/master/examples/introduction.ipynb) ([tutorial version](https://github.com/GRAAL-Research/poutyne/blob/master/tutorials/introduction_pytorch_poutyne_tutorial.ipynb)) - comparison of Poutyne with bare PyTorch and usage examples of Poutyne callbacks and the ModelBundle class. -* [tips_and_tricks.ipynb](https://github.com/GRAAL-Research/poutyne/blob/master/examples/tips_and_tricks.ipynb) - tips and tricks using Poutyne -* [sequence_tagging.ipynb](https://github.com/GRAAL-Research/poutyne/blob/master/examples/sequence_tagging.ipynb) - Sequence tagging with an RNN -* [transfer_learning.ipynb](https://github.com/GRAAL-Research/poutyne/blob/master/examples/transfer_learning.ipynb) - transfer learning on `ResNet-18` on the [CUB-200](http://www.vision.caltech.edu/visipedia/CUB-200-2011.html) dataset. -* [policy_interface.ipynb](https://github.com/GRAAL-Research/poutyne/blob/master/examples/policy_interface.ipynb) - example of policies -* [image_reconstruction.ipynb](https://github.com/GRAAL-Research/poutyne/blob/master/examples/image_reconstruction.ipynb) - example of image reconstruction -* [classification_and_regression.ipynb](https://github.com/GRAAL-Research/poutyne/blob/master/examples/classification_and_regression.ipynb) - example of multitask learning with classification and regression -* [semantic_segmentation.ipynb](https://github.com/GRAAL-Research/poutyne/blob/master/examples/semantic_segmentation.ipynb) - example of semantic segmentation - -or in ``Google Colab``: - -* [introduction.ipynb](https://colab.research.google.com/github/GRAAL-Research/poutyne/blob/master/examples/introduction.ipynb) ([tutorial version](https://colab.research.google.com/github/GRAAL-Research/poutyne/blob/master/tutorials/introduction_pytorch_poutyne_tutorial.ipynb)) - comparison of Poutyne with bare PyTorch and usage examples of Poutyne callbacks and the ModelBundle class. -* [tips_and_tricks.ipynb](https://colab.research.google.com/github/GRAAL-Research/poutyne/blob/master/examples/tips_and_tricks.ipynb) - tips and tricks using Poutyne -* [sequence_tagging.ipynb](https://colab.research.google.com/github/GRAAL-Research/poutyne/blob/master/examples/sequence_tagging.ipynb) - Sequence tagging with an RNN -* [transfer_learning.ipynb](https://colab.research.google.com/github/GRAAL-Research/poutyne/blob/master/examples/transfer_learning.ipynb) - transfer learning on `ResNet-18` on the [CUB-200](http://www.vision.caltech.edu/visipedia/CUB-200-2011.html) dataset. 
-* [policy_interface.ipynb](https://colab.research.google.com/github/GRAAL-Research/poutyne/blob/master/examples/policy_interface.ipynb) - example of policies -* [image_reconstruction.ipynb](https://colab.research.google.com/github/GRAAL-Research/poutyne/blob/master/examples/image_reconstruction.ipynb) - example of image reconstruction -* [classification_and_regression.ipynb](https://colab.research.google.com/github/GRAAL-Research/poutyne/blob/master/examples/classification_and_regression.ipynb) - example of multitask learning with classification and regression -* [semantic_segmentation.ipynb](https://colab.research.google.com/github/GRAAL-Research/poutyne/blob/master/examples/semantic_segmentation.ipynb) - example of semantic segmentation +- [introduction.ipynb](https://github.com/GRAAL-Research/poutyne/blob/master/examples/introduction.ipynb) ([tutorial version](https://github.com/GRAAL-Research/poutyne/blob/master/tutorials/introduction_pytorch_poutyne_tutorial.ipynb)) - comparison of Poutyne with bare PyTorch and usage examples of Poutyne callbacks and the ModelBundle class. +- [tips_and_tricks.ipynb](https://github.com/GRAAL-Research/poutyne/blob/master/examples/tips_and_tricks.ipynb) - tips and tricks using Poutyne +- [sequence_tagging.ipynb](https://github.com/GRAAL-Research/poutyne/blob/master/examples/sequence_tagging.ipynb) - Sequence tagging with an RNN +- [transfer_learning.ipynb](https://github.com/GRAAL-Research/poutyne/blob/master/examples/transfer_learning.ipynb) - transfer learning on `ResNet-18` on the [CUB-200](http://www.vision.caltech.edu/visipedia/CUB-200-2011.html) dataset. +- [policy_interface.ipynb](https://github.com/GRAAL-Research/poutyne/blob/master/examples/policy_interface.ipynb) - example of policies +- [image_reconstruction.ipynb](https://github.com/GRAAL-Research/poutyne/blob/master/examples/image_reconstruction.ipynb) - example of image reconstruction +- [classification_and_regression.ipynb](https://github.com/GRAAL-Research/poutyne/blob/master/examples/classification_and_regression.ipynb) - example of multitask learning with classification and regression +- [semantic_segmentation.ipynb](https://github.com/GRAAL-Research/poutyne/blob/master/examples/semantic_segmentation.ipynb) - example of semantic segmentation + +or in `Google Colab`: + +- [introduction.ipynb](https://colab.research.google.com/github/GRAAL-Research/poutyne/blob/master/examples/introduction.ipynb) ([tutorial version](https://colab.research.google.com/github/GRAAL-Research/poutyne/blob/master/tutorials/introduction_pytorch_poutyne_tutorial.ipynb)) - comparison of Poutyne with bare PyTorch and usage examples of Poutyne callbacks and the ModelBundle class. +- [tips_and_tricks.ipynb](https://colab.research.google.com/github/GRAAL-Research/poutyne/blob/master/examples/tips_and_tricks.ipynb) - tips and tricks using Poutyne +- [sequence_tagging.ipynb](https://colab.research.google.com/github/GRAAL-Research/poutyne/blob/master/examples/sequence_tagging.ipynb) - Sequence tagging with an RNN +- [transfer_learning.ipynb](https://colab.research.google.com/github/GRAAL-Research/poutyne/blob/master/examples/transfer_learning.ipynb) - transfer learning on `ResNet-18` on the [CUB-200](http://www.vision.caltech.edu/visipedia/CUB-200-2011.html) dataset. 
+- [policy_interface.ipynb](https://colab.research.google.com/github/GRAAL-Research/poutyne/blob/master/examples/policy_interface.ipynb) - example of policies +- [image_reconstruction.ipynb](https://colab.research.google.com/github/GRAAL-Research/poutyne/blob/master/examples/image_reconstruction.ipynb) - example of image reconstruction +- [classification_and_regression.ipynb](https://colab.research.google.com/github/GRAAL-Research/poutyne/blob/master/examples/classification_and_regression.ipynb) - example of multitask learning with classification and regression +- [semantic_segmentation.ipynb](https://colab.research.google.com/github/GRAAL-Research/poutyne/blob/master/examples/semantic_segmentation.ipynb) - example of semantic segmentation ### Videos -* [Presentation on Poutyne](https://youtu.be/gQ3SW5r7HSs) given at one of the weekly presentations of the Institute Intelligence and Data (IID) of Université Laval. [Slides](https://github.com/GRAAL-Research/poutyne/blob/master/slides/poutyne.pdf) and the [associated Latex source code](https://github.com/GRAAL-Research/poutyne/blob/master/slides/src/) are also available. +- [Presentation on Poutyne](https://youtu.be/gQ3SW5r7HSs) given at one of the weekly presentations of the Institute Intelligence and Data (IID) of Université Laval. [Slides](https://github.com/GRAAL-Research/poutyne/blob/master/slides/poutyne.pdf) and the [associated Latex source code](https://github.com/GRAAL-Research/poutyne/blob/master/slides/src/) are also available. ------------------- +--- ## Contributing to Poutyne We welcome user input, whether it is regarding bugs found in the library or feature propositions ! Make sure to have a look at our [contributing guidelines](https://github.com/GRAAL-Research/poutyne/blob/master/CONTRIBUTING.md) for more details on this matter. ------------------- +--- ## Sponsors This project supported by [Frédérik Paradis](https://github.com/freud14/) and [David Beauchemin](https://github.com/davebulaval). [Join the sponsors - show your ❤️ and support, and appear on the list](https://github.com/sponsors/freud14)! ------------------- +--- ## License Poutyne is LGPLv3 licensed, as found in the [LICENSE file](https://github.com/GRAAL-Research/poutyne/blob/master/LICENSE). ------------------- +--- ## Why this name, Poutyne? Poutyne's name comes from [poutine](https://en.wikipedia.org/wiki/Poutine), the well-known dish from Quebec. It is usually composed of French fries, squeaky cheese curds and brown gravy. However, in Quebec, poutine also has the meaning of something that is an ["ordinary or common subject or activity"](https://fr.wiktionary.org/wiki/poutine). Thus, Poutyne will get rid of the ordinary boilerplate code that plain [PyTorch](https://pytorch.org) training usually entails. ![Poutine](https://upload.wikimedia.org/wikipedia/commons/4/4e/La_Banquise_Poutine_%28cropped%29.jpg) -*Yuri Long from Arlington, VA, USA \[[CC BY 2.0](https://creativecommons.org/licenses/by/2.0)\]* +_Yuri Long from Arlington, VA, USA \[[CC BY 2.0](https://creativecommons.org/licenses/by/2.0)\]_ ------------------- +---
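(Editor's appendix: PATCH 1/2 above raises `python_requires` to `>=3.8`, which makes pip refuse to install the package on Python 3.7. A library can additionally guard at import time; the check below is an illustrative sketch only and is not added by either patch.)

```python
# Illustrative sketch only: a runtime guard matching the python_requires='>=3.8'
# bump in PATCH 1/2. Neither patch adds such a check; this is an editor's example.
import sys

if sys.version_info < (3, 8):
    raise RuntimeError("poutyne requires Python >= 3.8; support for 3.7 was removed")
```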