This code is based on ECINN: Efficient Counterfactuals from Invertible Neural Networks.
I have made all entry points accessible through the `main.py` file to reduce clutter.
$> python main.py -h
This will show you the way. There are three modules:

- First, the `train` module is used to train an IB-INN model.
- Next, the `list` module displays indexed output folders, which makes it easy to pick a model by its index when choosing which one to explain.
- Finally, the `counterfactual` module computes counterfactual examples and stores them in the directory specified in `config.ini` (a hypothetical end-to-end invocation is sketched below).
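A hypothetical end-to-end invocation could look as follows; the subcommand names mirror the module names above, but the exact arguments are assumptions, so consult `python main.py -h` for the real interface.

$> python main.py train              # train an IB-INN model
$> python main.py list               # show indexed output folders
$> python main.py counterfactual 3   # explain the model at index 3 (index argument is an assumption)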
The Python requirements are listed in `requirements.yml` and can be installed with conda as follows:

$> conda env create -f requirements.yml

This creates a conda environment named `ecinn` with the necessary dependencies.
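Afterwards, activate the environment before running the commands below:

$> conda activate ecinn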
Creating the environment will also create a directory named `src` in the repository root, which contains the FrEIA framework for invertible neural networks in PyTorch.
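For orientation, here is a minimal FrEIA sketch, independent of this repository's models, that builds a small invertible network and runs it forward and in reverse; it follows FrEIA's documented `SequenceINN` API.

```python
import torch
import torch.nn as nn
import FrEIA.framework as Ff
import FrEIA.modules as Fm


def subnet_fc(dims_in, dims_out):
    # Small fully connected subnetwork used inside each coupling block.
    return nn.Sequential(nn.Linear(dims_in, 128), nn.ReLU(), nn.Linear(128, dims_out))


# A 4-block invertible network on 2-dimensional inputs.
inn = Ff.SequenceINN(2)
for _ in range(4):
    inn.append(Fm.AllInOneBlock, subnet_constructor=subnet_fc, permute_soft=True)

x = torch.randn(16, 2)
z, log_jac_det = inn(x)          # forward pass: data -> latent, with log |det J|
x_rec, _ = inn(z, rev=True)      # inverse pass recovers the input (up to numerics)
```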
Furthermore, this code makes use of the IB-INN code base. To clone the code into the submodule directory, run the following command.
$> git submodule update --init
When counterfactuals are computed, they are stored as separate files in subdirectories under the root directory specified in the `config.ini` file.
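For illustration, one way to enumerate the stored counterfactuals from Python; the `output` section and `root_dir` key are hypothetical placeholders, since the actual key names in `config.ini` are not shown here.

```python
import configparser
from pathlib import Path

cfg = configparser.ConfigParser()
cfg.read("config.ini")
# "output" / "root_dir" are hypothetical names for the configured root directory.
root = Path(cfg.get("output", "root_dir", fallback="results"))

# Counterfactuals are stored as separate files inside subdirectories of the root.
for subdir in sorted(p for p in root.iterdir() if p.is_dir()):
    n_files = sum(1 for f in subdir.iterdir() if f.is_file())
    print(f"{subdir.name}: {n_files} counterfactual files")
```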
In the paper, we introduce a new dataset called FakeMNIST, for which we took the MNIST data, scrambled the labels, and added a little dot in the top-left corner to indicate the new label. `dataset/fakemnist.py` contains a PyTorch data loader that deterministically scrambles the labels and draws the dots in the top-left corner.
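A minimal sketch of the idea (not the implementation in `dataset/fakemnist.py`); it assumes the labels are scrambled by a fixed class permutation and that the dot's position within the top-left corner encodes the new label.

```python
import torch
from torch.utils.data import Dataset
from torchvision import datasets, transforms


class FakeMNISTSketch(Dataset):
    """Illustrative only: MNIST with deterministically scrambled labels and a
    label-indicating dot drawn in the top-left corner."""

    def __init__(self, root, train=True, seed=0):
        self.base = datasets.MNIST(root, train=train, download=True,
                                   transform=transforms.ToTensor())
        # Fixed seed -> the same label permutation on every run (deterministic scrambling).
        g = torch.Generator().manual_seed(seed)
        self.perm = torch.randperm(10, generator=g)

    def __len__(self):
        return len(self.base)

    def __getitem__(self, idx):
        img, label = self.base[idx]            # img: (1, 28, 28) tensor in [0, 1]
        new_label = int(self.perm[label])      # scrambled label
        img = img.clone()
        # Dot in the top-left corner; the column encoding the label is an assumption.
        img[:, 1, 1 + new_label] = 1.0
        return img, new_label
```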