Official code for *Debiasing Pre-Trained Language Models via Efficient Fine-Tuning*, published in the Second Workshop on Language Technology for Equality, Diversity, Inclusion at ACL 2022.
This repository is currently a placeholder; the code will be polished and published soon. In the meantime, you can take a look at the old code.
Our fine-tuning dataset consists of the WinoBias and CrowS-Pairs datasets. After cloning the Git submodules for the respective datasets, run:
python dataset/prepare.py
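If the submodule checkouts are still empty after cloning, the standard Git command below should fetch them (the exact submodule layout depends on this repository):

git submodule update --init --recursive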
`prepare.py` combines the datasets from both repositories and splits them into training (80%), validation (10%), and test (10%) sets.
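For illustration only, the minimal Python sketch below shows the kind of 80/10/10 split described above; it is not the actual implementation, and the function names, file handling, and randomization are assumptions rather than what `prepare.py` really does.

```python
# Sketch of an 80/10/10 split in the spirit of prepare.py; the real script,
# its input files, and its output format may differ from what is shown here.
import random


def split_dataset(examples, seed=42):
    """Shuffle the combined examples and split them into 80/10/10 subsets."""
    examples = list(examples)
    random.Random(seed).shuffle(examples)
    n = len(examples)
    n_train = int(0.8 * n)
    n_val = int(0.1 * n)
    train = examples[:n_train]
    val = examples[n_train:n_train + n_val]
    test = examples[n_train + n_val:]
    return train, val, test


if __name__ == "__main__":
    # Hypothetical combined pool of WinoBias and CrowS-Pairs sentences.
    combined = [f"example {i}" for i in range(100)]
    train, val, test = split_dataset(combined)
    print(len(train), len(val), len(test))  # 80 10 10
```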