# opus1m-2021-05-16.zip

* dataset: opus1m
* model: transformer-align
* source language(s): ast cat fra gcf glg ita lad lat mol oci pob por ron
* target language(s): rus
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download: opus1m-2021-05-16.zip
* test set translations: opus1m-2021-05-16.test.txt
* test set scores: opus1m-2021-05-16.eval.txt

## Benchmarks

| testset | BLEU | chr-F | #sent | #words | BP |
|---------|-----:|------:|------:|-------:|---:|
| newstest2012.fra-rus | 16.6 | 0.442 | 3003 | 64830 | 0.998 |
| newstest2013.fra-rus | 18.0 | 0.452 | 3000 | 58560 | 0.994 |
| Tatoeba-test.ast-rus | 11.3 | 0.635 | 1 | 6 | 1.000 |
| Tatoeba-test.cat-rus | 45.5 | 0.637 | 185 | 1216 | 0.987 |
| Tatoeba-test.fra-rus | 38.3 | 0.585 | 10000 | 60596 | 1.000 |
| Tatoeba-test.gcf-rus | 0.0 | 0.230 | 1 | 3 | 1.000 |
| Tatoeba-test.glg-rus | 38.3 | 0.600 | 37 | 217 | 0.981 |
| Tatoeba-test.ita-rus | 40.5 | 0.603 | 10000 | 65459 | 0.987 |
| Tatoeba-test.lad_Latn-rus | 16.0 | 0.445 | 13 | 59 | 0.966 |
| Tatoeba-test.lad-rus | 3.0 | 0.301 | 18 | 81 | 1.000 |
| Tatoeba-test.lat-rus | 13.6 | 0.323 | 1041 | 6289 | 1.000 |
| Tatoeba-test.multi-rus | 39.6 | 0.599 | 10000 | 63240 | 0.997 |
| Tatoeba-test.oci-rus | 17.5 | 0.358 | 84 | 530 | 1.000 |
| Tatoeba-test.por-rus | 39.9 | 0.609 | 10000 | 65172 | 0.999 |
| Tatoeba-test.ron-rus | 43.9 | 0.631 | 782 | 4461 | 0.982 |
| Tatoeba-test.spa-rus | 23.8 | 0.443 | 10000 | 65627 | 1.000 |
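The BP column above is the BLEU brevity penalty: it is 1.0 when the system's output corpus is at least as long as the reference, and decays exponentially when the output is shorter, so values just below 1.0 indicate slightly short translations. A minimal sketch of the standard definition (function and variable names here are illustrative, not from the evaluation scripts):

```python
import math

def brevity_penalty(hyp_len: int, ref_len: int) -> float:
    """BLEU brevity penalty over a whole test corpus.

    Returns 1.0 when the hypothesis corpus is at least as long as the
    reference, and exp(1 - r/c) when it is shorter (c = hyp_len,
    r = ref_len), per the standard BLEU definition.
    """
    if hyp_len >= ref_len:
        return 1.0
    return math.exp(1.0 - ref_len / hyp_len)

# A hypothesis corpus slightly shorter than the 64830-word reference
# (newstest2012 row) yields a BP just under 1.0:
print(round(brevity_penalty(64_700, 64_830), 3))
```

Corpus lengths are totals over the whole test set, which is why even a BP of 0.987 (e.g. Tatoeba-test.ita-rus) reflects only a small overall length deficit.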