### opus1m-2021-05-16.zip

* dataset: opus1m
* model: transformer-align
* source language(s): afr ang deu eng enm fry gos
* target language(s): heb
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download: opus1m-2021-05-16.zip
* test set translations: opus1m-2021-05-16.test.txt
* test set scores: opus1m-2021-05-16.eval.txt

## Benchmarks

| testset | BLEU | chr-F | #sent | #words | BP |
|---------|------:|------:|------:|-------:|------:|
| Tatoeba-test.afr-heb | 35.4 | 0.788 | 1 | 4 | 1.000 |
| Tatoeba-test.ang-heb | 2.9 | 0.132 | 3 | 15 | 1.000 |
| Tatoeba-test.deu-heb | 33.0 | 0.544 | 3090 | 20329 | 0.982 |
| Tatoeba-test.eng-heb | 34.7 | 0.571 | 10000 | 60344 | 1.000 |
| Tatoeba-test.enm-heb | 8.7 | 0.168 | 11 | 56 | 0.926 |
| Tatoeba-test.fry-heb | 29.7 | 0.375 | 2 | 5 | 1.000 |
| Tatoeba-test.gos-heb | 9.1 | 0.310 | 2 | 13 | 0.834 |
| Tatoeba-test.hrx-heb | 16.0 | 0.116 | 1 | 4 | 1.000 |
| Tatoeba-test.multi-heb | 34.5 | 0.564 | 10000 | 61637 | 1.000 |
| Tatoeba-test.nld-heb | 6.6 | 0.278 | 2500 | 15745 | 1.000 |
| Tatoeba-test.stq-heb | 16.0 | 0.161 | 1 | 3 | 1.000 |
| Tatoeba-test.yid-heb | 0.0 | 0.056 | 570 | 3128 | 1.000 |
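The BP column above is BLEU's brevity penalty: 1.0 when the system output is at least as long as the reference, and exp(1 - r/c) when it is shorter (r = reference length, c = candidate length in tokens). A minimal sketch of that formula (the function name here is illustrative, not part of the evaluation tooling):

```python
import math

def brevity_penalty(candidate_len: int, reference_len: int) -> float:
    """BLEU brevity penalty: no penalty when the candidate corpus is at
    least as long as the reference, otherwise exp(1 - r/c)."""
    if candidate_len >= reference_len:
        return 1.0
    return math.exp(1 - reference_len / candidate_len)

# A candidate corpus 10% shorter than its reference is penalized:
print(round(brevity_penalty(90, 100), 3))  # → 0.895
```

This explains why rows with BP below 1.000 (e.g. gos-heb at 0.834) have their BLEU scores scaled down: the model's output for those test sets was shorter than the references.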