🎨 Merge pull request #117 from ENSTA-U2IS-AI/dev
🎨 On the road to 0.3.0: Adding shift evaluation & more
o-laurent authored Oct 22, 2024
2 parents 74e641a + f38d7be commit e0b7a54
Showing 98 changed files with 2,256 additions and 699 deletions.
4 changes: 2 additions & 2 deletions .github/workflows/build-docs.yml
@@ -31,7 +31,7 @@ jobs:
echo "PYTHON_VERSION=$(python -c "import platform; print(platform.python_version())")"
- name: Cache folder for TorchUncertainty
uses: actions/cache@v3
uses: actions/cache@v4
id: cache-folder
with:
path: |
@@ -41,7 +41,7 @@ jobs:
- name: Install dependencies
run: |
python3 -m pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cpu
python3 -m pip install .[image,dev,docs]
python3 -m pip install .[all]
- name: Sphinx build
if: github.event.pull_request.draft == false
1 change: 1 addition & 0 deletions .gitignore
@@ -3,6 +3,7 @@
data/
logs/
lightning_logs/
auto_tutorials_source/*.png
docs/*/generated/
docs/*/auto_tutorials/
*.pth
10 changes: 5 additions & 5 deletions auto_tutorials_source/tutorial_bayesian.py
@@ -27,7 +27,7 @@
To train a BNN using TorchUncertainty, we have to load the following modules:
- the Trainer from Lightning
- our TUTrainer
- the model: bayesian_lenet, which lies in the torch_uncertainty.model
- the classification training routine from torch_uncertainty.routines
- the Bayesian objective: the ELBOLoss, which lies in the torch_uncertainty.losses file
@@ -39,9 +39,9 @@
# %%
from pathlib import Path

from lightning.pytorch import Trainer
from torch import nn, optim

from torch_uncertainty import TUTrainer
from torch_uncertainty.datamodules import MNISTDataModule
from torch_uncertainty.losses import ELBOLoss
from torch_uncertainty.models.lenet import bayesian_lenet
@@ -65,12 +65,12 @@ def optim_lenet(model: nn.Module):
# 3. Creating the necessary variables
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#
# In the following, we define the Lightning trainer, the root of the datasets and the logs.
# In the following, we instantiate our trainer, define the root of the datasets and the logs.
# We also create the datamodule that handles the MNIST dataset, dataloaders and transforms.
# Please note that the datamodules can also handle OOD detection by setting the eval_ood
# parameter to True. Finally, we create the model using the blueprint from torch_uncertainty.models.

trainer = Trainer(accelerator="cpu", enable_progress_bar=False, max_epochs=1)
trainer = TUTrainer(accelerator="cpu", enable_progress_bar=False, max_epochs=1)

# datamodule
root = Path("data")
@@ -111,7 +111,7 @@ def optim_lenet(model: nn.Module):
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#
# Now that we have prepared all of this, we just have to gather everything in
# the main function and to train the model using the Lightning Trainer.
# the main function and to train the model using our wrapper of Lightning Trainer.
# Specifically, it needs the routine, which includes the model as well as the
# training/eval logic, and the datamodule.
# The dataset will be downloaded automatically in the root/data folder, and the
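For context, here is how the pieces this tutorial assembles fit together: a minimal sketch assuming the 0.3.0 import paths shown in the diff. The ELBOLoss and ClassificationRoutine argument names and values below are assumptions based on the tutorial's prose, not lines from this commit.

from pathlib import Path

from torch import nn, optim

from torch_uncertainty import TUTrainer
from torch_uncertainty.datamodules import MNISTDataModule
from torch_uncertainty.losses import ELBOLoss
from torch_uncertainty.models.lenet import bayesian_lenet
from torch_uncertainty.routines import ClassificationRoutine

datamodule = MNISTDataModule(root=Path("data"), batch_size=128)
model = bayesian_lenet(datamodule.num_channels, datamodule.num_classes)

# The ELBO combines the data-fit term (cross-entropy) with the KL divergence
# between the variational posterior and the prior over the weights.
loss = ELBOLoss(
    model=model,
    inner_loss=nn.CrossEntropyLoss(),
    kl_weight=1 / 50000,  # assumed: roughly 1 / number of training samples
    num_samples=3,  # Monte Carlo samples used to estimate the ELBO
)

routine = ClassificationRoutine(
    model=model,
    num_classes=datamodule.num_classes,
    loss=loss,
    optim_recipe=optim.Adam(model.parameters(), lr=1e-2),  # assumed recipe form
)

trainer = TUTrainer(accelerator="cpu", enable_progress_bar=False, max_epochs=1)
trainer.fit(routine, datamodule=datamodule)
trainer.test(routine, datamodule=datamodule)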
80 changes: 64 additions & 16 deletions auto_tutorials_source/tutorial_corruption.py
@@ -12,23 +12,35 @@
torchvision and matplotlib.
"""
# %%
from torchvision.datasets import CIFAR10
from torchvision.transforms import Compose, ToTensor, Resize
from torchvision.transforms import Compose, ToTensor, Resize, CenterCrop

import matplotlib.pyplot as plt
from PIL import Image
from urllib import request

ds = CIFAR10("./data", train=False, download=True)
urls = [
"https://upload.wikimedia.org/wikipedia/commons/d/d9/Carduelis_tristis_-Michigan%2C_USA_-male-8.jpg",
"https://upload.wikimedia.org/wikipedia/commons/5/5d/Border_Collie_Blanca_y_Negra_Hembra_%28Belen%2C_Border_Collie_Los_Baganes%29.png",
"https://upload.wikimedia.org/wikipedia/commons/f/f8/Birmakatze_Seal-Point.jpg",
"https://upload.wikimedia.org/wikipedia/commons/a/a9/Garranos_fight.jpg",
"https://upload.wikimedia.org/wikipedia/commons/8/8b/Cottontail_Rabbit.jpg",
]

def download_img(url, i):
request.urlretrieve(url, f"tmp_{i}.png")
return Image.open(f"tmp_{i}.png").convert('RGB')

images_ds = [download_img(url, i) for i, url in enumerate(urls)]


def get_images(main_corruption, index: int = 0):
"""Create an image showing the 6 levels of corruption of a given transform."""
images = []
for severity in range(6):
ds_transforms = Compose(
[ToTensor(), main_corruption(severity), Resize(256, antialias=True)]
transforms = Compose(
[Resize(256, antialias=True), CenterCrop(256), ToTensor(), main_corruption(severity), CenterCrop(224)]
)
ds = CIFAR10("./data", train=False, download=False, transform=ds_transforms)
images.append(ds[index][0].permute(1, 2, 0).numpy())
images.append(transforms(images_ds[index]).permute(1, 2, 0).numpy())
return images


@@ -65,49 +65,77 @@ def show_images(transforms):
GaussianNoise,
ShotNoise,
ImpulseNoise,
SpeckleNoise,
)

show_images(
[
GaussianNoise,
ShotNoise,
ImpulseNoise,
SpeckleNoise,
]
)

# %%
# 2. Blur Corruptions
# ~~~~~~~~~~~~~~~~~~~~
from torch_uncertainty.transforms.corruption import (
GaussianBlur,
MotionBlur,
GlassBlur,
DefocusBlur,
ZoomBlur,
)

show_images(
[
GaussianBlur,
GlassBlur,
MotionBlur,
DefocusBlur,
ZoomBlur,
]
)

# %%
# 3. Other Corruptions
# ~~~~~~~~~~~~~~~~~~~~
# 3. Weather Corruptions
# ~~~~~~~~~~~~~~~~~~~~~~
from torch_uncertainty.transforms.corruption import (
JPEGCompression,
Pixelate,
Frost,
Snow,
Fog,
)

show_images(
[
Fog,
Frost,
Snow,
]
)

# %%
# 4. Other Corruptions
# ~~~~~~~~~~~~~~~~~~~~

from torch_uncertainty.transforms.corruption import (
Brightness, Contrast, Elastic, JPEGCompression, Pixelate)

show_images(
[
Brightness,
Contrast,
JPEGCompression,
Pixelate,
Frost,
Elastic,
]
)

# %%
# 5. Unused Corruptions
# ~~~~~~~~~~~~~~~~~~~~~

# The following corruptions are not used in the paper Benchmarking Neural Network Robustness to Common Corruptions and Perturbations.

from torch_uncertainty.transforms.corruption import (
GaussianBlur,
SpeckleNoise,
Saturation,
)

show_images(
[
GaussianBlur,
SpeckleNoise,
Saturation,
]
)

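To make the severity scale concrete: each corruption takes a severity from 0 (identity) to 5 (strongest), exactly as get_images calls main_corruption(severity) above. A sketch applying a single corruption with the tutorial's transform chain (images_ds is the list of PIL images loaded above):

from torchvision.transforms import CenterCrop, Compose, Resize, ToTensor

from torch_uncertainty.transforms.corruption import GaussianNoise

corrupt = Compose(
    [
        Resize(256, antialias=True),
        CenterCrop(256),
        ToTensor(),
        GaussianNoise(3),  # severity 3 out of 5
        CenterCrop(224),
    ]
)
corrupted = corrupt(images_ds[0])  # a 3 x 224 x 224 corrupted image tensor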
6 changes: 3 additions & 3 deletions auto_tutorials_source/tutorial_der_cubic.py
@@ -21,7 +21,7 @@
To train a MLP with the DER loss function using TorchUncertainty, we have to load the following modules:
- the Trainer from Lightning
- our TUTrainer
- the model: mlp from torch_uncertainty.models.mlp
- the regression training routine from torch_uncertainty.routines
- the evidential objective: the DERLoss from torch_uncertainty.losses. This loss contains the classic NLL loss and a regularization term.
@@ -31,10 +31,10 @@
"""
# %%
import torch
from lightning.pytorch import Trainer
from lightning import LightningDataModule
from torch import nn, optim

from torch_uncertainty import TUTrainer
from torch_uncertainty.models.mlp import mlp
from torch_uncertainty.datasets.regression.toy import Cubic
from torch_uncertainty.losses import DERLoss
@@ -67,7 +67,7 @@ def optim_regression(
# Please note that this MLP finishes with a NormalInverseGammaLayer that interprets the outputs of the model
# as the parameters of a Normal Inverse Gamma distribution.

trainer = Trainer(accelerator="cpu", max_epochs=50) #, enable_progress_bar=False)
trainer = TUTrainer(accelerator="cpu", max_epochs=50) #, enable_progress_bar=False)

# dataset
train_ds = Cubic(num_samples=1000)
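For reference, the evidential objective itself is compact. A sketch (the reg_weight value is an assumption, not taken from this diff): the loss is the Normal-Inverse-Gamma negative log-likelihood plus an evidence regularizer scaled by reg_weight, which is why the MLP must end with the NormalInverseGammaLayer mentioned above.

from torch_uncertainty.losses import DERLoss

# NIG NLL + reg_weight * evidence regularizer; the wrapped model must output
# the four Normal-Inverse-Gamma parameters for each regression target.
loss = DERLoss(reg_weight=1e-2)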
9 changes: 4 additions & 5 deletions auto_tutorials_source/tutorial_evidential_classification.py
@@ -16,7 +16,7 @@
To train a LeNet with the DEC loss function using TorchUncertainty, we have to load the following utilities from TorchUncertainty:
- the Trainer from Lightning
- our wrapper of the Lightning Trainer
- the model: LeNet, which lies in torch_uncertainty.models
- the classification training routine in the torch_uncertainty.routines
- the evidential objective: the DECLoss from torch_uncertainty.losses
@@ -28,9 +28,9 @@
from pathlib import Path

import torch
from lightning.pytorch import Trainer
from torch import nn, optim

from torch_uncertainty import TUTrainer
from torch_uncertainty.datamodules import MNISTDataModule
from torch_uncertainty.losses import DECLoss
from torch_uncertainty.models.lenet import lenet
@@ -53,10 +53,9 @@ def optim_lenet(model: nn.Module) -> dict:
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#
# In the following, we need to define the root of the logs, and to
# fake-parse the arguments needed for using the PyTorch Lightning Trainer. We
# also use the same MNIST classification example as that used in the
# We use the same MNIST classification example as that used in the
# original DEC paper. We only train for 3 epochs for the sake of time.
trainer = Trainer(accelerator="cpu", max_epochs=3, enable_progress_bar=False)
trainer = TUTrainer(accelerator="cpu", max_epochs=3, enable_progress_bar=False)

# datamodule
root = Path() / "data"
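A minimal sketch of how the DEC objective slots into the classification routine; the argument names are assumptions from the 0.3.x API, and annealing_step mirrors the KL warm-up of the original DEC paper rather than a value from this commit.

from torch_uncertainty.losses import DECLoss
from torch_uncertainty.models.lenet import lenet
from torch_uncertainty.routines import ClassificationRoutine

model = lenet(in_channels=1, num_classes=10)

# DECLoss interprets the network outputs as Dirichlet evidence; its KL
# regularizer is annealed in over training (annealing_step is an assumption).
routine = ClassificationRoutine(
    model=model,
    num_classes=10,
    loss=DECLoss(annealing_step=10),
)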
4 changes: 2 additions & 2 deletions auto_tutorials_source/tutorial_from_de_to_pe.py
@@ -149,7 +149,7 @@ def optim_recipe(model, lr_mult: float = 1.0):


from torch_uncertainty.routines import ClassificationRoutine
from torch_uncertainty.utils import TUTrainer
from torch_uncertainty import TUTrainer

# Create the trainer that will handle the training
trainer = TUTrainer(accelerator="cpu", max_epochs=max_epochs)
@@ -242,7 +242,7 @@ def optim_recipe(model, lr_mult: float = 1.0):
# We have put the pre-trained models on Hugging Face that you can download with the utility function
# "hf_hub_download" imported just below. These models are trained for 75 epochs and are therefore not
# comparable to all the other models trained in this notebook. The pretrained models can be seen
# on `HuggingFace <https://huggingface.co/ENSTA-U2IS/tutorial-models>`_ and TorchUncertainty's are `here <https://huggingface.co/torch-uncertainty>`_.
# on `HuggingFace <https://huggingface.co/ENSTA-U2IS/tutorial-models>`_ and TorchUncertainty's are `there <https://huggingface.co/torch-uncertainty>`_.

from torch_uncertainty.utils.hub import hf_hub_download

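A sketch of the pretrained-checkpoint download mentioned above, assuming TorchUncertainty's hf_hub_download wrapper mirrors huggingface_hub's (repo_id, filename) interface; the filename below is hypothetical, not a file guaranteed to exist in the repository.

from torch_uncertainty.utils.hub import hf_hub_download

ckpt_path = hf_hub_download(
    repo_id="ENSTA-U2IS/tutorial-models",  # the Hugging Face page linked above
    filename="version_0.ckpt",  # hypothetical checkpoint name
)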
6 changes: 3 additions & 3 deletions auto_tutorials_source/tutorial_mc_batch_norm.py
@@ -13,7 +13,7 @@
First, we have to load the following utilities from TorchUncertainty:
- the Trainer from Lightning
- the TUTrainer from our framework
- the datamodule handling dataloaders: MNISTDataModule from torch_uncertainty.datamodules
- the model: LeNet, which lies in torch_uncertainty.models
- the MC Batch Normalization wrapper: mc_batch_norm, which lies in torch_uncertainty.post_processing
@@ -25,9 +25,9 @@
# %%
from pathlib import Path

from lightning import Trainer
from torch import nn

from torch_uncertainty import TUTrainer
from torch_uncertainty.datamodules import MNISTDataModule
from torch_uncertainty.models.lenet import lenet
from torch_uncertainty.optim_recipes import optim_cifar10_resnet18
@@ -41,7 +41,7 @@
# logs. We also create the datamodule that handles the MNIST dataset
# dataloaders and transforms.

trainer = Trainer(accelerator="cpu", max_epochs=2, enable_progress_bar=False)
trainer = TUTrainer(accelerator="cpu", max_epochs=2, enable_progress_bar=False)

# datamodule
root = Path("data")
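For context, a sketch of the wrapper this tutorial demonstrates; the argument names are assumptions from the 0.3.x post-processing API rather than lines of this diff.

from torch_uncertainty.post_processing import MCBatchNorm

# Wrap a trained model: BatchNorm statistics are re-estimated on small batches
# so that eval-time forward passes become stochastic ensemble members.
mc_model = MCBatchNorm(model, num_estimators=8, convert=True, mc_batch_size=32)
mc_model.fit(datamodule.train)  # assumption: fit() draws the sampled statistics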
2 changes: 1 addition & 1 deletion auto_tutorials_source/tutorial_mc_dropout.py
@@ -31,7 +31,7 @@
# %%
from pathlib import Path

from torch_uncertainty.utils import TUTrainer
from torch_uncertainty import TUTrainer
from torch import nn

from torch_uncertainty.datamodules import MNISTDataModule
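And the corresponding MC-Dropout wrapper, as a sketch (the mc_dropout signature is an assumption from torch_uncertainty.models, not shown in this diff):

from torch_uncertainty.models import mc_dropout

# Keep dropout active at inference and draw num_estimators stochastic forward
# passes; last_layer=False applies dropout throughout the network.
mc_model = mc_dropout(model, num_estimators=16, last_layer=False)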
6 changes: 6 additions & 0 deletions docs/source/api.rst
@@ -320,6 +320,12 @@ Losses
ELBOLoss
BetaNLL
DECLoss
DERLoss
FocalLoss
ConflictualLoss
ConfidencePenaltyLoss
KLDiv
ELBOLoss

Post-Processing Methods
-----------------------
2 changes: 1 addition & 1 deletion docs/source/cli_guide.rst
@@ -22,7 +22,7 @@ Let's see how to implement the CLI, by checking out the ``experiments/classifica
from torch_uncertainty.baselines.classification import ResNetBaseline
from torch_uncertainty.datamodules import CIFAR10DataModule
from torch_uncertainty.utils import TULightningCLI
from torch_uncertainty import TULightningCLI
class ResNetCLI(TULightningCLI):
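A minimal sketch of the CLI entry point the guide builds with the new import path; the surrounding boilerplate is assumed, since the diff only shows the import change.

from torch_uncertainty import TULightningCLI
from torch_uncertainty.baselines.classification import ResNetBaseline
from torch_uncertainty.datamodules import CIFAR10DataModule


def cli_main() -> TULightningCLI:
    # TULightningCLI extends Lightning's LightningCLI, so it is constructed
    # from a model class and a datamodule class.
    return TULightningCLI(ResNetBaseline, CIFAR10DataModule)


if __name__ == "__main__":
    cli_main()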
2 changes: 1 addition & 1 deletion docs/source/conf.py
@@ -15,7 +15,7 @@
f"{datetime.now().year!s}, Adrien Lafage and Olivier Laurent"
)
author = "Adrien Lafage and Olivier Laurent"
release = "0.2.2.post2"
release = "0.3.0"

# -- General configuration ---------------------------------------------------
# https://www.sphinx-doc.org/en/master/usage/configuration.html#general-configuration
6 changes: 3 additions & 3 deletions docs/source/quickstart.rst
@@ -86,16 +86,16 @@ CIFAR10 datamodule.
.. code:: python
from torch_uncertainty.datamodules import CIFAR10DataModule
from lightning.pytorch import Trainer
from torch_uncertainty import TUTrainer
dm = CIFAR10DataModule(root="data", batch_size=32)
trainer = Trainer(gpus=1, max_epochs=100)
trainer = TUTrainer(gpus=1, max_epochs=100)
trainer.fit(routine, dm)
trainer.test(routine, dm)
That's it: you have trained your first model with TorchUncertainty! As a result, you will get access to various metrics
measuring the ability of your model to handle uncertainty. You can find other examples of training with Lightning Trainers by
looking at the `Tutorials <tutorials.html#layers>`_.
looking at the `Tutorials <auto_tutorials/index.html>`_.

More metrics
^^^^^^^^^^^^
10 changes: 10 additions & 0 deletions docs/source/references.rst
@@ -246,6 +246,16 @@ For Laplace Approximation, consider citing:
Losses
------

Focal Loss
^^^^^^^^^^

For the focal loss, consider citing:

**Focal Loss for Dense Object Detection**

* Authors: *Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He, and Piotr Dollár*
* Paper: `TPAMI 2020 <https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=8417976>`__.

Conflictual Loss
^^^^^^^^^^^^^^^^

