### opus1m-2021-05-16.zip

* dataset: opus1m
* model: transformer-align
* source language(s): cat
* target language(s): fin
* pre-processing: normalization + SentencePiece (spm12k,spm12k)
* download: opus1m-2021-05-16.zip
* test set translations: opus1m-2021-05-16.test.txt
* test set scores: opus1m-2021-05-16.eval.txt

## Benchmarks

| testset | BLEU | chr-F | #sent | #words | BP |
|---------|------|-------|-------|--------|----|
| Tatoeba-test.cat-fin | 19.1 | 0.407 | 5 | 40 | 0.867 |
| Tatoeba-test.fra-fin | 2.4 | 0.148 | 1930 | 9759 | 0.844 |
| Tatoeba-test.ita-fin | 2.0 | 0.172 | 1039 | 5437 | 0.905 |
| Tatoeba-test.lad-fin | 0.0 | 0.178 | 1 | 3 | 1.000 |
| Tatoeba-test.lat-fin | 1.1 | 0.133 | 294 | 1479 | 0.728 |
| Tatoeba-test.multi-fin | 19.1 | 0.407 | 5 | 40 | 0.867 |
| Tatoeba-test.oci-fin | 3.5 | 0.137 | 6 | 31 | 0.532 |
| Tatoeba-test.por-fin | 2.9 | 0.207 | 477 | 2375 | 0.967 |
| Tatoeba-test.ron-fin | 3.4 | 0.139 | 14 | 89 | 0.918 |
| Tatoeba-test.spa-fin | 5.4 | 0.222 | 2500 | 14057 | 0.854 |
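For reference, below is a minimal usage sketch with the Marian classes from Hugging Face `transformers`. The model identifier `Helsinki-NLP/opus-mt-cat-fin` is an assumption (converted OPUS-MT checkpoints are usually published under names of this form); if you work directly with the raw Marian checkpoint from the zip above, you would instead decode with Marian after applying the SentencePiece models listed under pre-processing.

```python
# Minimal sketch, assuming a converted checkpoint exists on the Hugging Face Hub.
# The model id below is hypothetical; substitute the actual repository name.
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-cat-fin"  # assumption: adjust to the real repo id
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# The tokenizer applies the same SentencePiece preprocessing (spm12k) used at training time.
batch = tokenizer(["Bon dia, com estàs?"], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```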
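The BLEU, chr-F, and BP (brevity penalty) columns above can be recomputed with `sacrebleu` from the released test-set translations. The sketch below assumes the hypotheses and references have already been split into two parallel plain-text files (`hyp.fin`, `ref.fin` are placeholder names); the exact column layout of opus1m-2021-05-16.test.txt may differ.

```python
# Sketch of re-scoring the test set with sacrebleu.
# Assumes hypotheses and references are parallel files, one sentence per line;
# the file names are placeholders, not part of the release.
import sacrebleu

with open("hyp.fin", encoding="utf-8") as f:
    hypotheses = [line.strip() for line in f]
with open("ref.fin", encoding="utf-8") as f:
    references = [line.strip() for line in f]

bleu = sacrebleu.corpus_bleu(hypotheses, [references])
chrf = sacrebleu.corpus_chrf(hypotheses, [references])

print(f"BLEU  = {bleu.score:.1f}")
print(f"BP    = {bleu.bp:.3f}")    # brevity penalty, as in the BP column
print(f"chr-F = {chrf.score:.3f}") # recent sacrebleu versions report chr-F on a
                                   # 0-100 scale; divide by 100 to match the table
```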