✨ Merge pull request #111 from ENSTA-U2IS-AI/dev
✨ Add ChannelLayerNorm, Conflictual Loss, AUGRC & improve code quality
o-laurent authored Sep 4, 2024
2 parents 63e1fb6 + d47f0f3 commit 8dc7b3f
Showing 95 changed files with 1,600 additions and 766 deletions.
2 changes: 1 addition & 1 deletion .github/workflows/build-docs.yml
@@ -40,7 +40,7 @@ jobs:

       - name: Install dependencies
         run: |
-          python3 -m pip install torch torchvision --extra-index-url https://download.pytorch.org/whl/cpu
+          python3 -m pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cpu
           python3 -m pip install .[image,dev,docs]
       - name: Sphinx build
2 changes: 1 addition & 1 deletion .github/workflows/run-tests.yml
@@ -64,7 +64,7 @@ jobs:
       - name: Install dependencies
         if: steps.changed-files-specific.outputs.only_changed != 'true'
         run: |
-          python3 -m pip install torch torchvision --extra-index-url https://download.pytorch.org/whl/cpu
+          python3 -m pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cpu
           python3 -m pip install .[all]
       - name: Check style & format
12 changes: 6 additions & 6 deletions auto_tutorials_source/tutorial_bayesian.py
@@ -20,7 +20,7 @@
 Training a Bayesian LeNet using TorchUncertainty models and Lightning
 ---------------------------------------------------------------------
-In this part, we train a bayesian LeNet, based on the model and routines already implemented in TU.
+In this part, we train a Bayesian LeNet, based on the model and routines already implemented in TU.
 1. Loading the utilities
 ~~~~~~~~~~~~~~~~~~~~~~~~
@@ -30,13 +30,13 @@
 - the Trainer from Lightning
 - the model: bayesian_lenet, which lies in the torch_uncertainty.model
 - the classification training routine from torch_uncertainty.routines
-- the bayesian objective: the ELBOLoss, which lies in the torch_uncertainty.losses file
+- the Bayesian objective: the ELBOLoss, which lies in the torch_uncertainty.losses file
 - the datamodule that handles dataloaders: MNISTDataModule from torch_uncertainty.datamodules
-We will also need to define an optimizer using torch.optim, the
-neural network utils from torch.nn, as well as the partial util to provide
-the modified default arguments for the ELBO loss.
+We will also need to define an optimizer using torch.optim and PyTorch's
+neural network utils from torch.nn.
 """
+# %%
 from pathlib import Path

 from lightning.pytorch import Trainer
@@ -94,7 +94,7 @@ def optim_lenet(model: nn.Module):
     loss = ELBOLoss(
         model=model,
         inner_loss=nn.CrossEntropyLoss(),
-        kl_weight=1 / 50000,
+        kl_weight=1 / 10000,
         num_samples=3,
     )
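For context, a minimal sketch of how the updated loss would be used end to end. The constructor arguments come from the hunk above; the bayesian_lenet import path and the call convention (inputs and targets, so the loss can draw num_samples stochastic forward passes) are assumptions:

import torch
from torch import nn
from torch_uncertainty.losses import ELBOLoss
from torch_uncertainty.models.lenet import bayesian_lenet  # assumed import path

model = bayesian_lenet(in_channels=1, num_classes=10)  # assumed signature
loss_fn = ELBOLoss(
    model=model,
    inner_loss=nn.CrossEntropyLoss(),
    kl_weight=1 / 10000,  # KL term weight, as set in this commit
    num_samples=3,  # stochastic forward passes per batch
)
x, y = torch.randn(8, 1, 28, 28), torch.randint(0, 10, (8,))
loss_fn(x, y).backward()  # assumption: the loss samples the Bayesian model internally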
13 changes: 7 additions & 6 deletions auto_tutorials_source/tutorial_corruption.py
@@ -2,15 +2,16 @@
 Corrupting Images with TorchUncertainty to Benchmark Robustness
 ===============================================================
-This tutorial shows the impact of the different corruptions available in the
-TorchUncertainty library. These corruptions were first proposed in the paper
+This tutorial shows the impact of the different corruption transforms available in the
+TorchUncertainty library. These corruption transforms were first proposed in the paper
 Benchmarking Neural Network Robustness to Common Corruptions and Perturbations
 by Dan Hendrycks and Thomas Dietterich.
 For this tutorial, we will only load the corruption transforms available in
-torch_uncertainty.transforms.corruptions. We also need to load utilities from
+torch_uncertainty.transforms.corruption. We also need to load utilities from
 torchvision and matplotlib.
 """
+# %%
 from torchvision.datasets import CIFAR10
 from torchvision.transforms import Compose, ToTensor, Resize
@@ -60,7 +61,7 @@ def show_images(transforms):
 # %%
 # 1. Noise Corruptions
 # ~~~~~~~~~~~~~~~~~~~~
-from torch_uncertainty.transforms.corruptions import (
+from torch_uncertainty.transforms.corruption import (
     GaussianNoise,
     ShotNoise,
     ImpulseNoise,
@@ -79,7 +80,7 @@ def show_images(transforms):
 # %%
 # 2. Blur Corruptions
 # ~~~~~~~~~~~~~~~~~~~~
-from torch_uncertainty.transforms.corruptions import (
+from torch_uncertainty.transforms.corruption import (
     GaussianBlur,
     GlassBlur,
     DefocusBlur,
@@ -96,7 +97,7 @@ def show_images(transforms):
 # %%
 # 3. Other Corruptions
 # ~~~~~~~~~~~~~~~~~~~~
-from torch_uncertainty.transforms.corruptions import (
+from torch_uncertainty.transforms.corruption import (
     JPEGCompression,
     Pixelate,
     Frost,
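As a quick illustration of the renamed module, a hedged sketch applying one corruption to a CIFAR-10 image. The severity argument and its value are assumptions; the dataset and transform imports mirror the tutorial above:

import matplotlib.pyplot as plt
from torchvision.datasets import CIFAR10
from torchvision.transforms import Compose, Resize, ToTensor

from torch_uncertainty.transforms.corruption import GaussianNoise

img, _ = CIFAR10("./data", train=False, download=True)[0]
tensor_img = Compose([ToTensor(), Resize(256)])(img)
corrupted = GaussianNoise(severity=3)(tensor_img)  # assumed severity scale: 1 (mild) to 5 (strong)
plt.imshow(corrupted.permute(1, 2, 0).clamp(0, 1))
plt.axis("off")
plt.show()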
1 change: 1 addition & 0 deletions auto_tutorials_source/tutorial_der_cubic.py
@@ -29,6 +29,7 @@
 We also need to define an optimizer using torch.optim and the neural network utils within torch.nn.
 """
+# %%
 import torch
 from lightning.pytorch import Trainer
 from lightning import LightningDataModule
@@ -24,6 +24,7 @@
 We also need to define an optimizer using torch.optim, the neural network utils within torch.nn.
 """
+# %%
 from pathlib import Path

 import torch
5 changes: 3 additions & 2 deletions auto_tutorials_source/tutorial_from_de_to_pe.py
@@ -30,6 +30,7 @@
 The dataset is automatically downloaded using torchvision. We then visualize a few images to see a bit what we are working with.
 """
 # Create the transforms for the images
+# %%
 import torch
 import torchvision.transforms as T
@@ -241,7 +242,7 @@ def optim_recipe(model, lr_mult: float = 1.0):
 # We have put the pre-trained models on Hugging Face that you can download with the utility function
 # "hf_hub_download" imported just below. These models are trained for 75 epochs and are therefore not
 # comparable to the all the other models trained in this notebook. The pretrained models can be seen
-# `here <https://huggingface.co/ENSTA-U2IS/tutorial-models>`_ and TorchUncertainty's are `here <https://huggingface.co/torch-uncertainty>`_.
+# on `HuggingFace <https://huggingface.co/ENSTA-U2IS/tutorial-models>`_ and TorchUncertainty's are `here <https://huggingface.co/torch-uncertainty>`_.

 from torch_uncertainty.utils.hub import hf_hub_download
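A hedged sketch of such a download. The helper is assumed to mirror huggingface_hub's hf_hub_download signature, and the checkpoint filename is a hypothetical placeholder:

from torch_uncertainty.utils.hub import hf_hub_download

# repo_id comes from the Hugging Face link above; the filename is illustrative
path = hf_hub_download(repo_id="ENSTA-U2IS/tutorial-models", filename="resnet18_c10.ckpt")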
@@ -297,7 +298,7 @@ def optim_recipe(model, lr_mult: float = 1.0):
 # This modification is particularly useful when the ensemble size is large, as it is often the case in practice.
 #
 # We will need to update the model and replace the layers with their Packed equivalents. You can find the
-# documentation of the Packed-Linear layer `here <https://torch-uncertainty.github.io/generated/torch_uncertainty.layers.PackedLinear.html>`_,
+# documentation of the Packed-Linear layer using this `link <https://torch-uncertainty.github.io/generated/torch_uncertainty.layers.PackedLinear.html>`_,
 # and the Packed-Conv2D, `here <https://torch-uncertainty.github.io/generated/torch_uncertainty.layers.PackedLinear.html>`_.

 import torch
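A minimal sketch of the replacement described above. The PackedConv2d and PackedLinear keyword arguments (alpha, num_estimators, first, last) are assumptions based on the Packed-Ensembles design, not taken from this diff:

from torch import nn
from torch_uncertainty.layers import PackedConv2d, PackedLinear

M = 4  # number of estimators packed into a single forward pass

packed_net = nn.Sequential(
    PackedConv2d(3, 32, kernel_size=3, alpha=2, num_estimators=M, first=True),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    PackedLinear(32, 10, alpha=2, num_estimators=M, last=True),
)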
3 changes: 2 additions & 1 deletion auto_tutorials_source/tutorial_mc_batch_norm.py
@@ -22,6 +22,7 @@
 We also need import the neural network utils within `torch.nn`.
 """
+# %%
 from pathlib import Path

 from lightning import Trainer
@@ -98,7 +99,7 @@
 # 6. Testing the Model
 # ~~~~~~~~~~~~~~~~~~~~
 # Now that the model is trained, let's test it on MNIST. Don't forget to call
-# .eval() to enable Monte Carlo batch normalization at inference.
+# .eval() to enable Monte Carlo batch normalization at evaluation (sometimes called inference).
 # In this tutorial, we plot the most uncertain images, i.e. the images for which
 # the variance of the predictions is the highest.
 # Please note that we apply a reshape to the logits to determine the dimension corresponding to the ensemble
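A hedged sketch of the evaluation step described in this hunk. The MCBatchNorm wrapper arguments and its fit method are assumptions, and trained_model, train_dataset, and images are hypothetical placeholders:

import torch
from torch_uncertainty.post_processing import MCBatchNorm

mc_model = MCBatchNorm(trained_model, num_estimators=8, convert=True)  # assumed signature
mc_model.fit(train_dataset)  # re-estimate stochastic BN statistics (assumed API)
mc_model.eval()  # keeps Monte Carlo batch normalization active at evaluation

logits = mc_model(images)  # (num_estimators * batch, num_classes)
logits = logits.reshape(8, -1, logits.size(-1))  # ensemble dimension first
uncertainty = logits.softmax(-1).var(0).sum(-1)  # per-image predictive variance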
2 changes: 1 addition & 1 deletion auto_tutorials_source/tutorial_mc_dropout.py
@@ -28,7 +28,7 @@
 We also need import the neural network utils within `torch.nn`.
 """
-
+# %%
 from pathlib import Path

 from torch_uncertainty.utils import TUTrainer
1 change: 1 addition & 0 deletions auto_tutorials_source/tutorial_scaler.py
@@ -25,6 +25,7 @@
 If you use the classification routine, the plots will be automatically available in the tensorboard logs if you use the `log_plots` flag.
 """
+# %%
 from torch_uncertainty.datamodules import CIFAR100DataModule
 from torch_uncertainty.metrics import CalibrationError
 from torch_uncertainty.models.resnet import resnet
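For context, a hedged sketch of the CalibrationError metric imported above. TorchUncertainty's version is assumed to follow the torchmetrics update/compute protocol; the task and num_classes arguments are assumptions:

import torch
from torch_uncertainty.metrics import CalibrationError

ece = CalibrationError(task="multiclass", num_classes=100)
probs = torch.randn(32, 100).softmax(-1)  # predicted probabilities
target = torch.randint(0, 100, (32,))
ece.update(probs, target)
print(ece.compute())  # expected calibration error over the batch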
82 changes: 72 additions & 10 deletions docs/source/api.rst
@@ -156,8 +156,6 @@ Models

 Wrappers
 ^^^^^^^^
-
-
 Functions
 """""""""

@@ -188,30 +186,82 @@ Metrics

 Classification
 ^^^^^^^^^^^^^^

 .. currentmodule:: torch_uncertainty.metrics.classification

+Proper Scores
+"""""""""""""
+
 .. autosummary::
    :toctree: generated/
    :nosignatures:
    :template: class.rst

+   BrierScore
+   CategoricalNLL
+
+Out-of-Distribution Detection
+"""""""""""""""""""""""""""""
+
+.. autosummary::
+   :toctree: generated/
+   :nosignatures:
+   :template: class.rst
+
    AURC
    AUSE
+   FPRx
    FPR95

+Selective Classification
+""""""""""""""""""""""""
+
+.. autosummary::
+   :toctree: generated/
+   :nosignatures:
+   :template: class.rst
+
+   AUGRC
+   RiskAtxCov
+   RiskAt80Cov
+   CovAtxRisk
+   CovAt5Risk
+
+Calibration
+"""""""""""
+
+.. autosummary::
+   :toctree: generated/
+   :nosignatures:
+   :template: class.rst
+
    AdaptiveCalibrationError
-   BrierScore
    CalibrationError
-   CategoricalNLL
-   CovAt5Risk
+
+Diversity
+"""""""""
+
+.. autosummary::
+   :toctree: generated/
+   :nosignatures:
+   :template: class.rst
+
    Disagreement
    Entropy
-   GroupingLoss
-   MeanIntersectionOverUnion
    MutualInformation
-   RiskAt80Cov
    VariationRatio

+Others
+""""""
+
+.. autosummary::
+   :toctree: generated/
+   :nosignatures:
+   :template: class.rst
+
+   AUSE
+   GroupingLoss
+
 Regression
 ^^^^^^^^^^

Expand All @@ -232,6 +282,18 @@ Regression
SILog
ThresholdAccuracy

Segmentation
^^^^^^^^^^^^

.. currentmodule:: torch_uncertainty.metrics.classification

.. autosummary::
:toctree: generated/
:nosignatures:
:template: class.rst

MeanIntersectionOverUnion

Losses
------

Expand Down
2 changes: 1 addition & 1 deletion docs/source/cli_guide.rst
Original file line number Diff line number Diff line change
Expand Up @@ -89,7 +89,7 @@ This command will display the available subcommands of the CLI tool.
fit Runs the full optimization routine.
validate Perform one evaluation epoch over the validation set.
test Perform one evaluation epoch over the test set.
predict Run inference on your data.
predict Run evaluation on your data.
You can execute whichever subcommand you like and set up all your hyperparameters directly using the command line

Expand Down
2 changes: 1 addition & 1 deletion docs/source/conf.py
@@ -15,7 +15,7 @@
     f"{datetime.now().year!s}, Adrien Lafage and Olivier Laurent"
 )
 author = "Adrien Lafage and Olivier Laurent"
-release = "0.2.1.post0"
+release = "0.2.2"

 # -- General configuration ---------------------------------------------------
 # https://www.sphinx-doc.org/en/master/usage/configuration.html#general-configuration
25 changes: 25 additions & 0 deletions docs/source/references.rst
@@ -243,6 +243,19 @@ For Laplace Approximation, consider citing:
 * Authors: *Erik Daxberger, Agustinus Kristiadi, Alexander Immer, Runa Eschenhagen, Matthias Bauer, and Philipp Hennig*
 * Paper: `NeurIPS 2021 <https://arxiv.org/abs/2106.14806>`__.

+Losses
+------
+
+Conflictual Loss
+^^^^^^^^^^^^^^^^
+
+For the conflictual loss, consider citing:
+
+**On the Calibration of Epistemic Uncertainty: Principles, Paradoxes and Conflictual Loss**
+
+* Authors: *Mohammed Fellaji, Frédéric Pennerath, Brieuc Conan-Guez, and Miguel Couceiro*
+* Paper: `ArXiv 2024 <https://arxiv.org/pdf/2407.12211>`__.
+
 Metrics
 -------

@@ -278,6 +291,18 @@ For the area under the risk-coverage curve, consider citing:
 * Authors: *Yonatan Geifman and Ran El-Yaniv*
 * Paper: `NeurIPS 2017 <https://arxiv.org/pdf/1705.08500.pdf>`__.

+Area Under the Generalized Risk-Coverage curve
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+For the area under the generalized risk-coverage curve, consider citing:
+
+**Overcoming Common Flaws in the Evaluation of Selective Classification Systems**
+
+* Authors: *Jeremias Traub, Till J. Bungert, Carsten T. Lüth, Michael Baumgartner, Klaus H. Maier-Hein, Lena Maier-Hein, and Paul F. Jaeger*
+* Paper: `ArXiv <https://arxiv.org/pdf/2407.01032.pdf>`__.
+
 Grouping Loss
 ^^^^^^^^^^^^^
2 changes: 1 addition & 1 deletion experiments/classification/mnist/configs/lenet_swa.yaml
@@ -57,7 +57,7 @@ optimizer:
     weight_decay: 5e-4
     nesterov: true
 lr_scheduler:
-  class_path: torch_uncertainty.optim_recipes.FullSWALR
+  class_path: torch_uncertainty.optim_recipes.CosineSWALR
   init_args:
     milestone: 20
     swa_lr: 0.01
2 changes: 1 addition & 1 deletion experiments/classification/mnist/configs/lenet_swag.yaml
@@ -57,7 +57,7 @@ optimizer:
     weight_decay: 5e-4
     nesterov: true
 lr_scheduler:
-  class_path: torch_uncertainty.optim_recipes.FullSWALR
+  class_path: torch_uncertainty.optim_recipes.CosineSWALR
   init_args:
     milestone: 10
     swa_lr: 0.01
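For context, a hedged sketch of what the Lightning CLI builds from the two configs above: the class at class_path is instantiated with init_args plus the linked optimizer. The CosineSWALR keyword arguments are inferred from the config keys, and the optimizer hyperparameters are illustrative:

import torch
from torch_uncertainty.optim_recipes import CosineSWALR

model = torch.nn.Linear(10, 10)
optimizer = torch.optim.SGD(
    model.parameters(), lr=0.05, momentum=0.9, weight_decay=5e-4, nesterov=True
)
# anneal toward swa_lr after `milestone` epochs (assumed semantics)
scheduler = CosineSWALR(optimizer, milestone=10, swa_lr=0.01)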
4 changes: 2 additions & 2 deletions pyproject.toml
@@ -4,7 +4,7 @@ build-backend = "flit_core.buildapi"

 [project]
 name = "torch_uncertainty"
-version = "0.2.1.post0"
+version = "0.2.2"
 authors = [
     { name = "ENSTA U2IS", email = "[email protected]" },
     { name = "Adrien Lafage", email = "[email protected]" },
@@ -32,7 +32,7 @@ classifiers = [
 ]
 dependencies = [
     "timm",
-    "lightning[pytorch-extra]",
+    "lightning[pytorch-extra]>=2.0",
     "torchvision>=0.16",
     "tensorboard",
     "einops",
2 changes: 1 addition & 1 deletion tests/datamodules/classification/test_cifar10.py
@@ -13,7 +13,7 @@ def test_cifar10_main(self):
         dm = CIFAR10DataModule(root="./data/", batch_size=128, cutout=16)

         assert dm.dataset == CIFAR10
-        assert isinstance(dm.train_transform.transforms[2], Cutout)
+        assert isinstance(dm.train_transform.transforms[1], Cutout)

         dm.dataset = DummyClassificationDataset
         dm.ood_dataset = DummyClassificationDataset