## opus4m+btTCv20210807-2022-01-19.zip

* dataset: opus4m+btTCv20210807
* model: transformer
* source language(s): dan swe
* target language(s): fin
* raw source language(s): dan swe
* raw target language(s): fin
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download: opus4m+btTCv20210807-2022-01-19.zip
* test set translations: opus4m+btTCv20210807-2022-01-19.test.txt
* test set scores: opus4m+btTCv20210807-2022-01-19.eval.txt

## Benchmarks

| testset | BLEU | chr-F | #sent | #words | BP |
|---------|-----:|------:|------:|-------:|---:|
| Tatoeba-test-v2021-08-07.dan-fin | 37.8 | 0.624 | 2665 | 14297 | 0.940 |
| Tatoeba-test-v2021-08-07.isl-fin | 9.5 | 0.075 | 2 | 8 | 1.000 |
| Tatoeba-test-v2021-08-07.multi-fin | 38.0 | 0.623 | 2665 | 14305 | 0.936 |
| Tatoeba-test-v2021-08-07.nor-fin | 22.0 | 0.460 | 2488 | 13054 | 0.933 |
| Tatoeba-test-v2021-08-07.swe-fin | 43.2 | 0.664 | 2841 | 15615 | 0.960 |
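The BP column above is BLEU's brevity penalty: 1.0 when the system output is at least as long as the reference, and below 1.0 when translations are shorter overall. A minimal sketch of how it is computed (the function name is ours, not part of any evaluation toolkit):

```python
import math

def brevity_penalty(candidate_len: int, reference_len: int) -> float:
    """BLEU brevity penalty over a whole test set.

    candidate_len: total length of the system output (e.g. in tokens)
    reference_len: total length of the reference translations
    Returns 1.0 if the output is at least as long as the reference,
    else exp(1 - reference_len / candidate_len), which is < 1.0.
    """
    if candidate_len >= reference_len:
        return 1.0
    return math.exp(1 - reference_len / candidate_len)

# Output at least as long as the reference: no penalty.
print(brevity_penalty(100, 90))   # 1.0
# Output 10% shorter than the reference: penalty below 1.0.
print(round(brevity_penalty(90, 100), 3))
```

A BP below 1.0 (as in most rows above) indicates the model's Finnish output is slightly shorter than the references; BLEU multiplies the n-gram precision score by this factor.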