✨ Refactor wrappers & PP, Add Checkpoint Ensembles, EMA, SWA, & SWAG, Add LaplaceApprox & ABNN #98

Merged
61 commits merged on Jun 26, 2024
Changes from all commits (61 commits)
870af02
:sparkles: Add LeNet experiment on MNIST
o-laurent May 29, 2024
ec61536
:bug: Fix notMNIST
o-laurent May 30, 2024
d268911
:bug: Fix MNIST datamodule OODs
o-laurent May 31, 2024
e137610
Merge branch 'main' of github.com:ENSTA-U2IS-AI/torch-uncertainty int…
o-laurent May 31, 2024
19fafbd
:sparkles: Add Laplace wrapper
o-laurent Jun 2, 2024
acf90eb
:books: Add Laplace to the references
o-laurent Jun 3, 2024
63fb874
:hammer: Refactor Mixup params
o-laurent Jun 5, 2024
acbd582
:bug: Fix #99 error in calibration plots
o-laurent Jun 6, 2024
8c2de92
:shirt: Slightly improve dropout
o-laurent Jun 7, 2024
8655bec
:bug: Fix MC Dropout test
o-laurent Jun 7, 2024
91cf1c0
:book: Remove Packed-Ensembles mentioned twice
o-laurent Jun 12, 2024
988a89b
:sparkles: Add Trajectory Ensemble
o-laurent Jun 12, 2024
8e7c188
:sparkles: Add EMA & SWA & Reformat models
o-laurent Jun 12, 2024
8b0a02a
:book: Add SWA to docs
o-laurent Jun 12, 2024
3c231e2
:hammer: Refactor EMA, SWA, & Checkpoint Ens.
o-laurent Jun 12, 2024
1f72ead
:book: Fix conf error
o-laurent Jun 12, 2024
1c7059d
:sparkles: Merge pull request #96 from ENSTA-U2IS-AI/laplace
o-laurent Jun 13, 2024
c091c9a
:shirt: Small changes
o-laurent Jun 13, 2024
4bfd351
:hammer: Refactor the post processing methods
o-laurent Jun 13, 2024
2a15dce
Merge branch 'trajectory' of github.com:ENSTA-U2IS-AI/torch-uncertain…
o-laurent Jun 13, 2024
be85bf8
:hammer: Refactor the AbstractDatamodule
o-laurent Jun 13, 2024
951ff09
:bug: Fix test of abstract methods
o-laurent Jun 13, 2024
83c4cce
:hammer: Refactor pp methods
o-laurent Jun 16, 2024
e48368d
:sparkles: Add first version of SWAG
o-laurent Jun 16, 2024
df5330e
:hammer: Refactor wrappers
o-laurent Jun 16, 2024
1d0c595
:hammer: Refactor the classification routine
o-laurent Jun 16, 2024
6e989f6
:book: Add links to the conf. in ReadMe
o-laurent Jun 16, 2024
fbc8c55
:white_check_mark: Update tests
o-laurent Jun 16, 2024
27bb610
:sparkles: Improve SWAG code
o-laurent Jun 17, 2024
0010d95
:shirt: Minor fix
o-laurent Jun 17, 2024
2c63b3b
:wrench: Fix online install
o-laurent Jun 17, 2024
914599b
:sparkles: Add a full scheduler for SWA & SWAG & update config
o-laurent Jun 17, 2024
a3443e3
:bug: Improve SWA & SWAG
o-laurent Jun 17, 2024
4a4eeac
:hammer: Refactor stochastic models
o-laurent Jun 17, 2024
737d862
:bug: Fix Stochastic MLP error
o-laurent Jun 17, 2024
6f57332
:books: Update documentation
o-laurent Jun 17, 2024
00eb701
:books: Fix bugs in docs
o-laurent Jun 17, 2024
4e1e8af
:white_check_mark: Add first battery of tests
o-laurent Jun 17, 2024
501b5d4
:heavy_check_mark: Fix tests
o-laurent Jun 17, 2024
d135612
:white_check_mark: Improve SWAG tests
o-laurent Jun 17, 2024
f1b6546
:white_check_mark: Improve Stochastic tests
o-laurent Jun 17, 2024
edbb88e
:shirt: Minor changes
o-laurent Jun 17, 2024
fdbaf76
:white_check_mark: Finetune tests
o-laurent Jun 17, 2024
63e874a
Merge pull request #101 from ENSTA-U2IS-AI/trajectory
o-laurent Jun 17, 2024
a601ff9
:ok_hand: Take review comments into account
o-laurent Jun 18, 2024
3ea90e9
:books: Improve documentation & tutorials
o-laurent Jun 18, 2024
84fb04f
:book: Add a tutorial on Packed-Ensembles
o-laurent Jun 18, 2024
f6fb41c
:white_check_mark: Improve tests
o-laurent Jun 18, 2024
676f272
:shirt: Improve ReadMe
o-laurent Jun 18, 2024
80aaaa8
:bug: Fix SWAG
o-laurent Jun 18, 2024
06b990f
:sparkles: Propagate changes to the other routines & update tests
o-laurent Jun 18, 2024
725bd9c
:hammer: rename inference_size to eval_size
o-laurent Jun 18, 2024
fcbfeaa
:bug: Fix regression routines
o-laurent Jun 18, 2024
c9e0404
:sparkles: Add first version for ABNN
o-laurent Jun 18, 2024
4fe4aec
:white_check_mark: Improve coverage
o-laurent Jun 18, 2024
7a57586
:wrench: Lock plt version
o-laurent Jun 18, 2024
0a3a5a7
:bug: Minor changes
o-laurent Jun 19, 2024
41f2f80
:fire: Remove webdataset
o-laurent Jun 19, 2024
17c3071
:books: Improve API Page
o-laurent Jun 21, 2024
f56866a
:white_check_mark: Slightly improve tests
o-laurent Jun 21, 2024
2d84fe6
:ok_hand: Make review modifications before merging
o-laurent Jun 26, 2024
2 changes: 1 addition & 1 deletion .github/workflows/run-tests.yml
@@ -65,7 +65,7 @@ jobs:
if: steps.changed-files-specific.outputs.only_changed != 'true'
run: |
python3 -m pip install torch torchvision --extra-index-url https://download.pytorch.org/whl/cpu
python3 -m pip install .[image,dev,docs]
python3 -m pip install .[all]

- name: Check style & format
if: steps.changed-files-specific.outputs.only_changed != 'true'
46 changes: 23 additions & 23 deletions README.md
@@ -18,21 +18,21 @@ _TorchUncertainty_ is a package designed to help you leverage [uncertainty quant

:books: Our webpage and documentation is available here: [torch-uncertainty.github.io](https://torch-uncertainty.github.io). :books:

TorchUncertainty contains the *official implementations* of multiple papers from *major machine-learning and computer vision conferences* and was/will be featured in tutorials at **WACV 2024** and **ECCV 2024**.
TorchUncertainty contains the *official implementations* of multiple papers from *major machine-learning and computer vision conferences* and was/will be featured in tutorials at **[WACV](https://wacv2024.thecvf.com/) 2024**, **[HAICON](https://haicon24.de/) 2024** and **[ECCV](https://eccv.ecva.net/) 2024**.

---

This package provides a multi-level API, including:

- easy-to-use ⚡️ lightning **uncertainty-aware** training & evaluation routines for **4 tasks**: classification, probabilistic and pointwise regression, and segmentation.
- easy-to-use :zap: lightning **uncertainty-aware** training & evaluation routines for **4 tasks**: classification, probabilistic and pointwise regression, and segmentation.
- ready-to-train baselines on research datasets, such as ImageNet and CIFAR
- [pretrained weights](https://huggingface.co/torch-uncertainty) for these baselines on ImageNet and CIFAR (work in progress 🚧).
- [pretrained weights](https://huggingface.co/torch-uncertainty) for these baselines on ImageNet and CIFAR (:construction: work in progress :construction:).
- **layers**, **models**, **metrics**, & **losses** available for use in your networks
- scikit-learn style post-processing methods such as Temperature Scaling.

Have a look at the [Reference page](https://torch-uncertainty.github.io/references.html) or the [API reference](https://torch-uncertainty.github.io/api.html) for a more exhaustive list of the implemented methods, datasets, metrics, etc.
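
For orientation, here is a minimal sketch of how these pieces fit together, pieced together from the tutorial diffs further down in this PR. The module paths, the `ClassificationRoutine` and `MNISTDataModule` names, and the exact constructor arguments are assumptions rather than guarantees, so treat it as illustrative only.

```python
# Hedged sketch of the routine-based API; names and signatures are assumptions.
from pathlib import Path

from lightning.pytorch import Trainer
from torch import nn, optim

from torch_uncertainty.datamodules import MNISTDataModule  # assumed module path
from torch_uncertainty.routines import ClassificationRoutine  # assumed class name

datamodule = MNISTDataModule(root=Path("data"), batch_size=128)
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # any classifier nn.Module
routine = ClassificationRoutine(
    model=model,
    num_classes=10,
    loss=nn.CrossEntropyLoss(),
    optim_recipe=optim.Adam(model.parameters(), lr=1e-3),
)
trainer = Trainer(accelerator="cpu", max_epochs=1)
trainer.fit(routine, datamodule=datamodule)  # uncertainty-aware training
trainer.test(routine, datamodule=datamodule)  # uncertainty-aware evaluation
```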

## ⚙️ Installation
## :gear: Installation

TorchUncertainty requires Python 3.10 or greater. Install the desired PyTorch version in your environment.
Then, install the package from PyPI:
@@ -51,7 +51,6 @@ We make a quickstart available at [torch-uncertainty.github.io/quickstart](https

TorchUncertainty currently supports **classification**, **probabilistic** and pointwise **regression**, **segmentation** and **pixelwise regression** (such as monocular depth estimation). It includes the official codes of the following papers:

- *A Symmetry-Aware Exploration of Bayesian Neural Network Posteriors* - [ICLR 2024](https://arxiv.org/abs/2310.08287)
- *LP-BNN: Encoding the latent posterior of Bayesian Neural Networks for uncertainty quantification* - [IEEE TPAMI](https://arxiv.org/abs/2012.02818)
- *Packed-Ensembles for Efficient Uncertainty Estimation* - [ICLR 2023](https://arxiv.org/abs/2210.09184) - [Tutorial](https://torch-uncertainty.github.io/auto_tutorials/tutorial_pe_cifar10.html)
- *MUAD: Multiple Uncertainties for Autonomous Driving, a benchmark for multiple uncertainty types and tasks* - [BMVC 2022](https://arxiv.org/abs/2203.01437)
@@ -60,17 +59,16 @@ We also provide the following methods:

### Baselines

To date, the following deep learning baselines have been implemented:
To date, the following deep learning baselines have been implemented. **Click on the methods for tutorials**:

- Deep Ensembles
- MC-Dropout - [Tutorial](https://torch-uncertainty.github.io/auto_tutorials/tutorial_mc_dropout.html)
- BatchEnsemble
- Masksembles
- MIMO
- Packed-Ensembles (see [Blog post](https://medium.com/@adrien.lafage/make-your-neural-networks-more-reliable-with-packed-ensembles-7ad0b737a873)) - [Tutorial](https://torch-uncertainty.github.io/auto_tutorials/tutorial_pe_cifar10.html)
- Bayesian Neural Networks :construction: Work in progress :construction: - [Tutorial](https://torch-uncertainty.github.io/auto_tutorials/tutorial_bayesian.html)
- [Deep Ensembles](https://torch-uncertainty.github.io/auto_tutorials/tutorial_from_de_to_pe.html), BatchEnsemble, Masksembles, & MIMO
- [MC-Dropout](https://torch-uncertainty.github.io/auto_tutorials/tutorial_mc_dropout.html)
- [Packed-Ensembles](https://torch-uncertainty.github.io/auto_tutorials/tutorial_from_de_to_pe.html) (see [Blog post](https://medium.com/@adrien.lafage/make-your-neural-networks-more-reliable-with-packed-ensembles-7ad0b737a873))
- [Variational Bayesian Neural Networks](https://torch-uncertainty.github.io/auto_tutorials/tutorial_bayesian.html)
- Checkpoint Ensembles & Snapshot Ensembles
- Stochastic Weight Averaging & Stochastic Weight Averaging Gaussian
- Regression with Beta Gaussian NLL Loss
- Deep Evidential Classification & Regression - [Tutorial](https://torch-uncertainty.github.io/auto_tutorials/tutorial_evidential_classification.html)
- [Deep Evidential Classification](https://torch-uncertainty.github.io/auto_tutorials/tutorial_evidential_classification.html) & [Regression](https://torch-uncertainty.github.io/auto_tutorials/tutorial_der_cubic.html)
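
The checkpoint and weight-averaging baselines above (Checkpoint Ensembles, EMA, SWA, SWAG) follow the standard weight-averaging recipes. As a hedged illustration of the underlying SWA idea only — built on PyTorch's `torch.optim.swa_utils`, not on TorchUncertainty's own wrapper API, which this PR refactors:

```python
import torch
from torch import nn, optim
from torch.optim.swa_utils import SWALR, AveragedModel, update_bn

# Toy setup: any model, optimizer, and dataloader would do.
model = nn.Sequential(nn.Linear(10, 10))
optimizer = optim.SGD(model.parameters(), lr=0.05)
loader = [(torch.randn(8, 10), torch.randint(0, 10, (8,))) for _ in range(4)]

swa_model = AveragedModel(model)          # keeps the running average of the weights
swa_scheduler = SWALR(optimizer, swa_lr=0.01)

for epoch in range(10):
    for x, y in loader:
        optimizer.zero_grad()
        nn.functional.cross_entropy(model(x), y).backward()
        optimizer.step()
    if epoch >= 5:                        # only average after a warm-up phase
        swa_model.update_parameters(model)
        swa_scheduler.step()

update_bn(loader, swa_model)              # refresh BatchNorm statistics for the averaged weights
```

Checkpoint and Snapshot Ensembles keep the individual checkpoints instead of averaging them, and SWAG additionally fits a Gaussian over the averaged iterates to enable posterior sampling.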

### Augmentation methods

@@ -82,16 +80,18 @@ The following data augmentation methods have been implemented:

To date, the following post-processing methods have been implemented:

- Temperature, Vector, & Matrix scaling - [Tutorial](https://torch-uncertainty.github.io/auto_tutorials/tutorial_scaler.html)
- Monte Carlo Batch Normalization - [Tutorial](https://torch-uncertainty.github.io/auto_tutorials/tutorial_mc_batch_norm.html)
- [Temperature](https://torch-uncertainty.github.io/auto_tutorials/tutorial_scaler.html), Vector, & Matrix scaling
- [Monte Carlo Batch Normalization](https://torch-uncertainty.github.io/auto_tutorials/tutorial_mc_batch_norm.html)
- Laplace approximation using the [Laplace library](https://github.com/aleximmer/Laplace)
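
To make the "scikit-learn style" claim concrete, below is a self-contained sketch of what temperature scaling itself does. TorchUncertainty's own scaler classes (refactored in this PR) expose a similar fit-then-apply workflow, but their exact names and signatures are not asserted here; the class below is purely illustrative.

```python
import torch
from torch import nn, optim


class SimpleTemperatureScaler(nn.Module):
    """Minimal temperature scaling: learn a single temperature on held-out logits."""

    def __init__(self) -> None:
        super().__init__()
        self.log_temp = nn.Parameter(torch.zeros(1))  # temperature = exp(log_temp) > 0

    def forward(self, logits: torch.Tensor) -> torch.Tensor:
        return logits / self.log_temp.exp()

    def fit(self, logits: torch.Tensor, labels: torch.Tensor) -> "SimpleTemperatureScaler":
        opt = optim.LBFGS([self.log_temp], lr=0.1, max_iter=50)

        def closure():
            opt.zero_grad()
            loss = nn.functional.cross_entropy(self(logits), labels)
            loss.backward()
            return loss

        opt.step(closure)
        return self


# Usage on a held-out calibration split (random tensors as stand-ins):
scaler = SimpleTemperatureScaler().fit(torch.randn(256, 10), torch.randint(0, 10, (256,)))
calibrated_probs = scaler(torch.randn(4, 10)).softmax(dim=-1)
```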

## Tutorials

Our documentation contains the following tutorials:
Check out our tutorials at [torch-uncertainty.github.io/auto_tutorials](https://torch-uncertainty.github.io/auto_tutorials/index.html).

## :telescope: Projects using TorchUncertainty

The following projects use TorchUncertainty:

- *A Symmetry-Aware Exploration of Bayesian Neural Network Posteriors* - [ICLR 2024](https://arxiv.org/abs/2310.08287)

- [From a Standard Classifier to a Packed-Ensemble](https://torch-uncertainty.github.io/auto_tutorials/tutorial_pe_cifar10.html)
- [Training a Bayesian Neural Network in 3 minutes](https://torch-uncertainty.github.io/auto_tutorials/tutorial_bayesian.html)
- [Improve Top-label Calibration with Temperature Scaling](https://torch-uncertainty.github.io/auto_tutorials/tutorial_scaler.html)
- [Deep Evidential Regression on a Toy Example](https://torch-uncertainty.github.io/auto_tutorials/tutorial_der_cubic.html)
- [Training a LeNet with Monte-Carlo Dropout](https://torch-uncertainty.github.io/auto_tutorials/tutorial_mc_dropout.html)
- [Training a LeNet with Deep Evidential Classification](https://torch-uncertainty.github.io/auto_tutorials/tutorial_evidential_classification.html)
**If you are using TorchUncertainty in your project, please let us know, we will add your project to this list!**
28 changes: 20 additions & 8 deletions auto_tutorials_source/tutorial_bayesian.py
@@ -55,12 +55,12 @@
# We will use the Adam optimizer with the default learning rate of 0.001.


def optim_lenet(model: nn.Module) -> dict:
def optim_lenet(model: nn.Module):
optimizer = optim.Adam(
model.parameters(),
lr=1e-3,
)
return {"optimizer": optimizer}
return optimizer


# %%
@@ -75,7 +75,7 @@ def optim_lenet(model: nn.Module) -> dict:
trainer = Trainer(accelerator="cpu", enable_progress_bar=False, max_epochs=1)

# datamodule
root = Path("") / "data"
root = Path("data")
datamodule = MNISTDataModule(root=root, batch_size=128, eval_ood=False)

# model
@@ -105,6 +105,7 @@ def optim_lenet(model: nn.Module) -> dict:
num_classes=datamodule.num_classes,
loss=loss,
optim_recipe=optim_lenet(model),
is_ensemble=True
)

# %%
@@ -125,8 +126,10 @@ def optim_lenet(model: nn.Module) -> dict:
# 6. Testing the Model
# ~~~~~~~~~~~~~~~~~~~~
#
# Now that the model is trained, let's test it on MNIST

# Now that the model is trained, let's test it on MNIST.
# Please note that we apply a reshape to the logits to determine the dimension corresponding to the ensemble
# and to the batch. As of TorchUncertainty 0.2.0, the ensemble dimension is merged with the batch dimension
# in this order (num_estimator x batch, classes).
import matplotlib.pyplot as plt
import numpy as np
import torch
@@ -148,14 +151,23 @@ def imshow(img):
imshow(torchvision.utils.make_grid(images[:4, ...]))
print("Ground truth: ", " ".join(f"{labels[j]}" for j in range(4)))

logits = model(images)
# Put the model in eval mode to use several samples
model = model.eval()
logits = model(images).reshape(16, 128, 10) # num_estimators, batch_size, num_classes

# We apply the softmax on the classes and average over the estimators
probs = torch.nn.functional.softmax(logits, dim=-1)
avg_probs = probs.mean(dim=0)
var_probs = probs.std(dim=0)

_, predicted = torch.max(probs, 1)
_, predicted = torch.max(avg_probs, 1)

print("Predicted digits: ", " ".join(f"{predicted[j]}" for j in range(4)))

print("Std. dev. of the scores over the posterior samples", " ".join(f"{var_probs[j][predicted[j]]:.3}" for j in range(4)))
# %%
# Here, we show the variance of the top prediction. This is a non-standard but intuitive way to show the diversity of the predictions
# of the ensemble. Ideally, the variance should be high when the average top prediction is incorrect.
#
# References
# ----------
#
@@ -1,6 +1,6 @@
"""
Image Corruptions
=================
Corrupting Images with TorchUncertainty to Benchmark Robustness
===============================================================

This tutorial shows the impact of the different corruptions available in the
TorchUncertainty library. These corruptions were first proposed in the paper
9 changes: 3 additions & 6 deletions auto_tutorials_source/tutorial_der_cubic.py
@@ -29,7 +29,6 @@

We also need to define an optimizer using torch.optim and the neural network utils within torch.nn.
"""
# %%
import torch
from lightning.pytorch import Trainer
from lightning import LightningDataModule
@@ -49,15 +48,13 @@
def optim_regression(
model: nn.Module,
learning_rate: float = 5e-4,
) -> dict:
):
optimizer = optim.Adam(
model.parameters(),
lr=learning_rate,
weight_decay=0,
)
return {
"optimizer": optimizer,
}
return optimizer


# %%
@@ -69,7 +66,7 @@ def optim_regression(
# Please note that this MLP finishes with a NormalInverseGammaLayer that interprets the outputs of the model
# as the parameters of a Normal Inverse Gamma distribution.
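# For reference (standard Deep Evidential Regression parameterization, Amini et al., 2020; assumed
# here rather than read from this file): the four outputs (gamma, nu, alpha, beta) yield a predicted
# mean E[mu] = gamma, an aleatoric uncertainty E[sigma^2] = beta / (alpha - 1), and an epistemic
# uncertainty Var[mu] = beta / (nu * (alpha - 1)).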

trainer = Trainer(accelerator="cpu", max_epochs=50)#, enable_progress_bar=False)
trainer = Trainer(accelerator="cpu", max_epochs=50) #, enable_progress_bar=False)

# dataset
train_ds = Cubic(num_samples=1000)