Zemel-et-al.-2013-Learning-Fair-Representations-

Reproduction of Zemel et al., "Learning Fair Representations", ICML 2013. The code takes inspiration from two existing repositories.

The goal was to reproduce part of the experiments proposed in the article, since neither repository provides guidelines and both raised problems when trying to reproduce the paper's results. In the literature, LFR is regarded as a PRE-PROCESSING fair representation algorithm; however, the authors propose it as both a PRE- and IN-PROCESSING algorithm. Zemel et al. use some unusual metrics:
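The three metrics reported in the Results section below (Accuracy, Discrimination, Consistency) can be computed roughly as in the following minimal sketch, based on my reading of the paper; the repository's own implementation may differ in its details.

# Sketch of the Zemel et al. (2013) evaluation metrics, assuming binary
# labels y_true, binary predictions y_pred and a binary sensitive attribute s.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def accuracy(y_true, y_pred):
    # yAcc: 1 minus the mean absolute error between labels and predictions
    return 1.0 - np.mean(np.abs(y_true - y_pred))

def discrimination(y_pred, s):
    # yDiscrim: absolute gap between the mean prediction of the
    # protected (s == 1) and unprotected (s == 0) groups
    return abs(y_pred[s == 1].mean() - y_pred[s == 0].mean())

def consistency(X, y_pred, k=5):
    # yNN: 1 minus the average absolute difference between each
    # prediction and the predictions of its k nearest neighbours
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    _, idx = nn.kneighbors(X)            # idx[:, 0] is the point itself
    neighbour_preds = y_pred[idx[:, 1:]]
    return 1.0 - np.mean(np.abs(y_pred[:, None] - neighbour_preds))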

Below are the metrics used in the plots I produced:
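I assume the standard definitions of Difference in Equal Opportunity (DEO) and Difference in Average Odds (DAO) sketched below; the repository's implementation may differ. The combined scores shown in the plots are Fair_DEO = ACC*(1-DEO) and Fair_DAO = ACC*(1-DAO).

# Sketch of the fairness metrics used in the plots, for binary
# labels y_true, binary predictions y_pred and sensitive attribute s.
import numpy as np

def tpr(y_true, y_pred):
    # true positive rate: mean prediction among actual positives
    return y_pred[y_true == 1].mean()

def fpr(y_true, y_pred):
    # false positive rate: mean prediction among actual negatives
    return y_pred[y_true == 0].mean()

def deo(y_true, y_pred, s):
    # Difference in Equal Opportunity: |TPR(s=1) - TPR(s=0)|
    return abs(tpr(y_true[s == 1], y_pred[s == 1]) - tpr(y_true[s == 0], y_pred[s == 0]))

def dao(y_true, y_pred, s):
    # Difference in Average Odds: mean of the TPR gap and the FPR gap
    tpr_gap = abs(tpr(y_true[s == 1], y_pred[s == 1]) - tpr(y_true[s == 0], y_pred[s == 0]))
    fpr_gap = abs(fpr(y_true[s == 1], y_pred[s == 1]) - fpr(y_true[s == 0], y_pred[s == 0]))
    return 0.5 * (tpr_gap + fpr_gap)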

Installation guidelines:

Follow the instructions below to install the correct packages for running the experiments (numba==0.48.0 is mandatory):

$ python3 -m venv venv_lfr
$ source venv_lfr/bin/activate
$ pip install --upgrade pip
$ pip install -r requirements.txt

Training and testing the model

To train and evaluate LFR with all the metrics, you may run the following command:

$ python -u main.py

or run it on a customized dataset by changing lines 19 & 20 of main.py ("sensitive_features" and "dataset") and the corresponding "dataloader", either with an IDE or with nano main.py. See the sketch below for an illustration.
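For illustration only, such an edit might look like the lines below; the variable names come from the instructions above, while the specific values are hypothetical:

# main.py, around lines 19-20 (the values shown here are hypothetical examples)
sensitive_features = 'sex'   # protected attribute column of the chosen dataset
dataset = 'adult'            # identifier of the dataset to load
# the "dataloader" used later in main.py must then be changed to match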

Results

Below are the train and test plots at various thresholds for different metrics (Accuracy, Difference in Equal Opportunity (DEO), Difference in Average Odds (DAO), Fair_DEO = ACC*(1-DEO), Fair_DAO = ACC*(1-DAO)) on both the German and Adult datasets. The plots show the accuracy-fairness trade-off at various thresholds for the generated (or predicted) y_hat.
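As an illustration (this is not the repository's plotting code), the trade-off curves can be obtained by binarizing the predicted scores y_hat at a grid of thresholds and evaluating, at each threshold, the metric functions sketched earlier:

# Illustrative threshold sweep using the accuracy/deo/dao sketches above
import numpy as np

def threshold_sweep(y_true, y_hat, s, thresholds=np.linspace(0.05, 0.95, 19)):
    results = []
    for t in thresholds:
        y_pred = (y_hat >= t).astype(int)
        acc = accuracy(y_true, y_pred)
        d_eo = deo(y_true, y_pred, s)
        d_ao = dao(y_true, y_pred, s)
        # combined accuracy-fairness scores shown in the plots
        results.append((t, acc, d_eo, d_ao, acc * (1.0 - d_eo), acc * (1.0 - d_ao)))
    return results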

  • GERMAN dataset

(Figure: train and test accuracy-fairness trade-off plots for the German dataset)

Next, the metrics used by Zemel et al. on the test set:
Accuracy: 0.6702218787753681
Discrimination: 0.03959704798033192
Consistency: 0.4259762244816738
  • ADULT dataset

(Figure: train and test accuracy-fairness trade-off plots for the Adult dataset)

Next, the metrics used by Zemel et al. on the test set:
Accuracy: 0.6303079728728369
Discrimination: 0.001884131225272645
Consistency: 0.7975905194197657
