The General Language Understanding Evaluation (GLUE) benchmark is a collection of nine sentence- or sentence-pair language understanding tasks for evaluating and analyzing natural language understanding systems.
Before running any of these GLUE tasks, you should download the GLUE data by running this script and unpack it to some directory `data_dir`.
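The example below targets SST-2. As an alternative to the download script, here is a minimal sketch that fetches the same task with the Hugging Face `datasets` library (this library and call are an assumption about your environment, not the script referenced above):

```python
# Alternative to the GLUE download script (illustrative only):
# fetch the SST-2 task with the Hugging Face `datasets` library.
from datasets import load_dataset

sst2 = load_dataset("glue", "sst2")
print(sst2["train"][0])  # {'sentence': ..., 'label': ..., 'idx': ...}
```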
```shell
cd examples/pytorch/nlp/huggingface_models/pruning/magnitude/eager
pip install -r requirements.txt
```
Pruning currently supports basic magnitude pruning for DistilBERT and gradient-sensitivity pruning for BERT-base:
- Enable magnitude pruning example:
```shell
bash run_pruning.sh --topology=distilbert_SST-2 --data_dir=path/to/dataset --output_model=path/to/output_model --config=path/to/conf.yaml
```
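For intuition about what magnitude pruning does, here is a minimal, standalone sketch using PyTorch's built-in `torch.nn.utils.prune` utilities. This is illustrative only and is not the example's actual implementation, which is driven by `run_pruning.sh` and the YAML config:

```python
# A minimal sketch of magnitude pruning with PyTorch's built-in utilities.
# The layer here is a hypothetical stand-in for one transformer weight matrix.
import torch.nn as nn
import torch.nn.utils.prune as prune

layer = nn.Linear(768, 768)

# Zero out the 50% of weights with the smallest absolute value (L1 magnitude).
prune.l1_unstructured(layer, name="weight", amount=0.5)

# The mask is applied on the fly during forward passes; make it permanent:
prune.remove(layer, "weight")

sparsity = (layer.weight == 0).float().mean().item()
print(f"sparsity: {sparsity:.2%}")  # roughly 50%
```

`l1_unstructured` removes the weights with the smallest absolute values, which is the "basic magnitude" criterion named above; the gradient-sensitivity variant for BERT-base instead ranks weights by their effect on the loss.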