# opus+bt-2021-04-10.zip

* dataset: opus+bt
* model: transformer-align
* source language(s): eng
* target language(s): tur
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download: opus+bt-2021-04-10.zip
* test set translations: opus+bt-2021-04-10.test.txt
* test set scores: opus+bt-2021-04-10.eval.txt

## Benchmarks

| testset | BLEU | chr-F | #sent | #words | BP |
|---------|------|-------|-------|--------|----|
| newsdev2016-entr.eng-tur | 21.5 | 0.575 | 1001 | 16127 | 1.000 |
| newstest2016-entr.eng-tur | 21.4 | 0.558 | 3000 | 50782 | 0.986 |
| newstest2017-entr.eng-tur | 22.8 | 0.572 | 3007 | 51977 | 0.960 |
| newstest2018-entr.eng-tur | 20.8 | 0.561 | 3000 | 53731 | 0.963 |
| Tatoeba-test.eng-tur | 41.5 | 0.684 | 10000 | 60469 | 0.932 |

# opusTCv20210807+bt_transformer-big_2022-02-25.zip

* dataset: opusTCv20210807+bt
* model: transformer-big
* source language(s): eng
* target language(s): tur
* raw source language(s): eng
* raw target language(s): tur
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download: opusTCv20210807+bt_transformer-big_2022-02-25.zip
* test set translations: opusTCv20210807+bt_transformer-big_2022-02-25.test.txt
* test set scores: opusTCv20210807+bt_transformer-big_2022-02-25.eval.txt

## Benchmarks

| testset | BLEU | chr-F | #sent | #words | BP |
|---------|------|-------|-------|--------|----|
| newsdev2016-entr.eng-tur | 23.4 | 0.59349 | 1001 | 16127 | 1.000 |
| newstest2016-entr.eng-tur | 23.4 | 0.57623 | 3000 | 50782 | 1.000 |
| newstest2017-entr.eng-tur | 25.4 | 0.58858 | 3007 | 51977 | 0.977 |
| newstest2018-entr.eng-tur | 22.6 | 0.57863 | 3000 | 53731 | 0.986 |
| Tatoeba-test-v2021-08-07.eng-tur | 42.3 | 0.68784 | 10000 | 60634 | 0.964 |
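
## Usage example

A minimal sketch of how an OPUS-MT English-to-Turkish checkpoint like the transformer-big release above can be run through the `transformers` MarianMT classes. The Hugging Face model id `Helsinki-NLP/opus-mt-tc-big-en-tr` and the example sentence are assumptions for illustration and are not taken from this card; substitute the id of the checkpoint you actually use.

```python
# Assumed Hub id for the transformer-big eng->tur checkpoint (not stated in this card).
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-tc-big-en-tr"  # assumption, adjust to your checkpoint
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Tokenize an English source sentence (SentencePiece is applied by the tokenizer)
# and generate the Turkish translation.
src_text = ["The weather is nice today."]
batch = tokenizer(src_text, return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```

The same pattern applies to the older transformer-align release, assuming it is also published as a MarianMT-compatible checkpoint.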