We introduce the Spoken Language Understanding Evaluation (SLUE) benchmark. This toolkit provides code to download and pre-process the SLUE datasets, train the baseline models, and evaluate SLUE tasks. Refer to https://arxiv.org/abs/2111.10367 for more details.
- Nov. 22: We release the SLUE paper on arXiv along with the slue-toolkit repository. The repository contains data processing and evaluation scripts. We will publish the scripts for training the baseline models soon.
- Git clone this repository and install slue-toolkit (development mode):
```sh
git clone https://github.com/asappresearch/slue-toolkit.git
cd slue-toolkit
pip install -e .
```
or install directly from GitHub:
```sh
pip install git+https://github.com/asappresearch/slue-toolkit.git
```
- Install additional dependencies as needed (e.g., you need `fairseq` and `transformers` for the baselines).
Although ASR is not an SLU task itself, it can help analyze the performance of downstream SLU tasks on the same domain. Additionally, pipeline approaches depend on ASR outputs, making ASR relevant to SLU. ASR is evaluated using word error rate (WER).
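For intuition, here is a minimal sketch of how WER is computed: the word-level edit distance (substitutions, insertions, deletions) divided by the number of reference words. The official numbers come from the toolkit's evaluation scripts; the `wer` helper below is illustrative only.
```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance over reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[-1][-1] / max(len(ref), 1)

print(wer("we all agreed at the last session", "we agreed at last session"))
# 2 deletions / 7 reference words ≈ 0.286
```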
Named entity recognition (NER) involves detecting named entities and their tags (types) in a given sentence. We evaluate performance using micro-averaged F1 and label-F1 scores. The F1 score evaluates an unordered list of named entity phrase and tag pairs predicted for each sentence. For label-F1, only the tag predictions are considered.
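Concretely, a prediction counts toward F1 only if both the phrase and the tag match a gold pair, while label-F1 drops the phrase. Below is a simplified sketch of this idea using multiset matching; the `micro_f1` helper and the toy sentences are hypothetical, and the official scoring lives in the toolkit's eval scripts.
```python
from collections import Counter

def micro_f1(gold_sents, pred_sents, label_only=False):
    """Micro-averaged F1 over unordered (phrase, tag) pairs per sentence."""
    tp = n_gold = n_pred = 0
    for gold, pred in zip(gold_sents, pred_sents):
        if label_only:  # label-F1: keep only the tag of each pair
            gold = [tag for _, tag in gold]
            pred = [tag for _, tag in pred]
        g, p = Counter(gold), Counter(pred)
        tp += sum((g & p).values())  # multiset intersection = matched pairs
        n_gold += len(gold)
        n_pred += len(pred)
    prec = tp / n_pred if n_pred else 0.0
    rec = tp / n_gold if n_gold else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0

gold = [[("strasbourg", "GPE")]]              # one sentence, one gold entity
pred = [[("strasburg", "GPE")]]               # phrase is wrong, tag is right
print(micro_f1(gold, pred))                   # 0.0: phrase and tag must both match
print(micro_f1(gold, pred, label_only=True))  # 1.0: only tags are compared
```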
Sentiment analysis (SA) refers to classifying a given speech segment as having negative, neutral, or positive sentiment. We evaluate SA using macro-averaged (unweighted) recall and F1 scores.
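With scikit-learn these metrics are one-liners; a minimal sketch with made-up labels (the official script in the toolkit may differ in details):
```python
from sklearn.metrics import f1_score, recall_score

# Toy labels for illustration only.
y_true = ["positive", "neutral", "negative", "neutral"]
y_pred = ["positive", "negative", "negative", "neutral"]

# "macro" gives each class equal weight regardless of its frequency.
print(recall_score(y_true, y_pred, average="macro"))  # mean per-class recall
print(f1_score(y_true, y_pred, average="macro"))      # mean per-class F1
```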
| Corpus | Fine-tune set, utts (h) | Dev set, utts (h) | Test set, utts (h) | Tasks | License |
|---|---|---|---|---|---|
| SLUE-VoxPopuli | 5,000 (14.5) | 1,753 (5.0) | 1,842 (4.9) | ASR, NER | CC0 (check complete license here) |
| SLUE-VoxCeleb | 5,777 (12.8) | 955 (2.1) | 4,052 (9.0) | ASR, SA | CC-BY 4.0 (check complete license here) |
For SLUE, you need the VoxCeleb and VoxPopuli datasets. We carefully curated subsets of these datasets for fine-tuning and evaluation on the SLUE tasks, and we re-distribute the subsets, so you don't need to download the whole original datasets. The subsets also include the human annotations and transcriptions for the SLUE tasks. All you need to do is run the script below; it downloads and pre-processes the data.
```sh
bash scripts/download_datasets.sh
```
The test set data and annotations will be used for the official SLUE score evaluation; however, we will not release the test set annotations. Instead, the SLUE score can be obtained by submitting your prediction results in tsv format. We will prepare a website to accept submissions; please stay tuned.
To train a model, you can use the fine-tune and dev sets (audio, transcriptions, and annotations), but not the SLUE test set. Additionally, you can use any kind of external dataset, labeled or unlabeled, for any purpose of training (e.g., pre-training and fine-tuning).
For validation of your model, you can use the official dev set we provide, or you can make your own splits or cross-validation splits by mixing the fine-tune and dev sets together.
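For example, a hedged sketch of pooling the two official splits and re-splitting them (the file names are assumptions based on the manifest layout used below; fairseq-style manifests also carry a root-directory header line that you would need to keep aside):
```python
import random

def make_split(paths, valid_fraction=0.1, seed=0):
    """Pool manifest lines from several files and re-split them."""
    lines = []
    for path in paths:
        with open(path) as f:
            lines.extend(line.rstrip("\n") for line in f)
    random.Random(seed).shuffle(lines)
    n_valid = int(len(lines) * valid_fraction)
    return lines[n_valid:], lines[:n_valid]  # (train, valid)

# Hypothetical file names; adapt to your preprocessed manifests.
train, valid = make_split(
    ["manifest/slue-voxceleb/fine-tune.tsv", "manifest/slue-voxceleb/dev.tsv"]
)
```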
Assuming that the preprocessed manifest files are in `manifest/slue-voxceleb` and `manifest/slue-voxpopuli` for SLUE-VoxCeleb and SLUE-VoxPopuli, the following commands fine-tune a wav2vec 2.0 base model on these two datasets using one GPU.
```sh
bash baselines/asr/ft-w2v2-base.sh manifest/slue-voxceleb save/asr/w2v2-base-vc
bash baselines/asr/ft-w2v2-base.sh manifest/slue-voxpopuli save/asr/w2v2-base-vp
```
To evaluate the fine-tuned wav2vec 2.0 ASR models on the dev set, please run the following commands.
```sh
python slue_toolkit/eval/eval_w2v.py eval_asr save/asr/w2v2-base-vc --data manifest/slue-voxceleb --subset dev
python slue_toolkit/eval/eval_w2v.py eval_asr save/asr/w2v2-base-vp --data manifest/slue-voxpopuli --subset dev
```
The WER will be printed directly. The predictions are saved in `save/asr/w2v2-base-vc/pred-dev.wrd` and `save/asr/w2v2-base-vp/pred-dev.wrd`, and can be used for pipeline models.
More detailed baseline experiments are described here.
Assuming that the preprocessed manifest files for SLUE-VoxPopuli are in `manifest/slue-voxpopuli`, the following command fine-tunes a wav2vec 2.0 base model using one GPU.
```sh
bash baselines/ner/e2e_scripts/ft-w2v2-base.sh manifest/slue-voxpopuli/e2e_ner save/e2e_ner/w2v2-base
```
To evaluate the fine-tuned wav2vec 2.0 E2E NER model on the dev set, please run the following command (decoding without a language model).
```sh
bash baselines/ner/e2e_scripts/eval-ner.sh w2v2-base dev combined nolm
```
More detailed baseline experiments are described here.
This command fine-tunes a wav2vec 2.0 base model on the SLUE-VoxCeleb dataset:
```sh
bash baselines/sentiment/e2e_scripts/ft-w2v2-base-senti.sh manifest/slue-voxceleb save/sentiment/w2v2-base
```
To evaluate the fine-tuned wav2vec 2.0 sentiment model, run the following command, or run `baselines/sentiment/e2e_scripts/eval.sh`:
```sh
python3 slue_toolkit/eval/eval_w2v_sentiment.py --save-dir save/sentiment/w2v2-base --data manifest/slue-voxceleb --subset dev
```
More detailed baseline experiments are described here.
Prepare your predictions in .tsv format with the 4 columns shown in the example below. Specify `none` when a prediction does not exist for a task (for example, SLUE-VoxPopuli covers ASR and NER, so it needs the `id`, `pred_text`, and `pred_ner` columns, and every entry in the `pred_sentiment` column should be `none`).
```
id	pred_text	pred_ner	pred_sentiment
id10012_0AXjxNXiEzo_00001	like i said less manicured in a good way i think i think that what you know people	none	Positive
20150518-0900-PLENARY-15-en_20150518-18:48:27_2	we all agreed at the last session in strasbourg that development is important but we need to remember it now when we are talking about the financial contributions	[['GPE', 37, 10]]	none
```
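A minimal sketch of writing such a file with Python's csv module (the row content below is a made-up placeholder; follow the column layout shown above):
```python
import csv

# One row per utterance; use "none" for tasks the corpus is not annotated for.
rows = [
    {
        "id": "utt_0001",                      # placeholder utterance id
        "pred_text": "example transcription",  # placeholder ASR output
        "pred_ner": "none",                    # no NER prediction for this corpus
        "pred_sentiment": "Positive",
    },
]

with open("predictions.tsv", "w", newline="") as f:
    writer = csv.DictWriter(
        f,
        fieldnames=["id", "pred_text", "pred_ner", "pred_sentiment"],
        delimiter="\t",
    )
    writer.writeheader()
    writer.writerows(rows)
```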
Send the .tsv file to [email protected].