The Hebrew-to-English model's output is essentially nonsensical.
This is the input:
"רוב האמריקאים רואים בישראל את בעלת ברית העליונה של ארה""ב ומדגישים את הערכים המשותפים של המדינות. כך על פי סקר חדש בארה""ב. יותר רפובליקנים ועצמאיים כינו את ישראל כבעלת הברית הבכירה של ארה""ב מאשר דמוקרטים"
I am using the models directly. To briefly describe the process: the script reads text from a .txt file, translates it, and stores the result in an output .txt file.
The texts I am using are meaningful articles, actual data from production, so the original text is legitimate.
This is the code snippet I am using for the translation:
import torch
from transformers import MarianMTModel, MarianTokenizer

def translate_text_file(input_filename, output_filename):
    # Load tokenizer and model
    model_name = fetch_model_name(source_language, target_language)
    if "tc-big" in model_name:
        tokenizer = MarianTokenizer.from_pretrained(model_name)
        model = MarianMTModel.from_pretrained(model_name)
    # Read the source text
    with open(input_filename, "r", encoding="utf-8") as file:
        input_text = file.read()
    # Translate text
    inputs = tokenizer(
        input_text, return_tensors="pt", padding=True, truncation=True
    )
    with torch.no_grad():
        outputs = model.generate(**inputs)
    translated_text = [
        tokenizer.decode(t, skip_special_tokens=True) for t in outputs
    ]
    # Save translated text to output file
    with open(output_filename, "w", encoding="utf-8") as file:
        for translation in translated_text:
            file.write(translation + "\n")
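One thing worth noting: the MarianMT models are trained on sentence pairs, and the snippet above feeds the whole file as a single sequence, so everything past the tokenizer's maximum length gets truncated. A minimal sketch of a pre-processing step (the helper name split_into_segments is my own, not part of the script) that breaks the text into sentence-like segments before batching:

```python
import re

def split_into_segments(text):
    """Split raw text into sentence-like segments for batch translation.

    Splits on newlines first, then after sentence-ending punctuation,
    and drops empty segments. A rough heuristic, not a full sentence
    tokenizer.
    """
    segments = []
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        # Split after ., !, or ? followed by whitespace.
        parts = re.split(r"(?<=[.!?])\s+", line)
        segments.extend(p for p in parts if p)
    return segments
```

The resulting list can then be passed to the tokenizer in place of the raw file contents, e.g. tokenizer(segments, return_tensors="pt", padding=True, truncation=True), so that each segment is translated as its own sequence.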
This code is part of a larger script. As far as I know, I have written it as documented, though it is possible that the way I am calling the model for translation is wrong.
Right now, however, the translation this script generates is not proper.
Slow translation of models
Is there any hardware factor that might affect the speed of the process?
What is the general speed at which the opus-mt-<src_lang>- models translate?
These are the snapshots I took after timing the executions.
This one was timed when translating Hebrew to English:
Execution time: 172 seconds
This one was timed for the Russian-to-English translation:
Execution time: 22 seconds
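For reference, timings like these can be captured consistently with a small context manager (my own helper, not part of the script above), so both runs are measured the same way:

```python
import time
from contextlib import contextmanager

@contextmanager
def timed(label):
    """Measure the wall-clock time of a block and print it."""
    start = time.perf_counter()
    yield
    elapsed = time.perf_counter() - start
    print(f"{label}: Execution time: {elapsed:.0f} seconds")
```

Usage: with timed("he-en"): translate_text_file("input.txt", "output.txt").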
These models are built on Marian NMT, which is heavily dependent on the hardware. So if the speed of the translation models depends on the hardware, what would the speed be on a typical machine?
Looking at the results, the time it took to translate Hebrew to English is far too long, and despite being patient, the result was not fruitful.
This seems to be the case with all of their models that originate from the Tatoeba Challenge. Only the models included here seem to work with Hugging Face. Up until a month ago, I hadn't encountered such problems.
That is probably why the translation is so slow! The ru-en model must be one of the older models that still work.
Yes, this narrows some things down, though I am not really sure what the Tatoeba Challenge is. In this thread, the author explains the possible problem. I hope this helps you.
Yeah, I solved the problem. It is mainly a problem with the sampling/decoding: the default decoding approach for all of these models is greedy search. This article is very helpful for learning more about how to sample/decode your generated text: https://huggingface.co/blog/how-to-generate
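The difference the article describes can be seen even on a toy example: greedy search commits to the locally best token at each step, while beam search keeps several candidates and can recover a sequence with higher overall probability. A self-contained sketch over a made-up log-probability table (nothing here is the Marian decoder; it only illustrates the search behaviour):

```python
import math

# Toy model: log-probabilities of the next token given the previous one.
# Token "a" looks best at step 1, but the path through "b" scores higher overall.
LOGPROBS = {
    "<s>": {"a": math.log(0.6), "b": math.log(0.4)},
    "a":   {"x": math.log(0.3), "y": math.log(0.3)},
    "b":   {"x": math.log(0.9), "y": math.log(0.1)},
}

def greedy(start, steps):
    """Pick the single best next token at every step."""
    seq, score = [start], 0.0
    for _ in range(steps):
        token, lp = max(LOGPROBS[seq[-1]].items(), key=lambda kv: kv[1])
        seq.append(token)
        score += lp
    return seq[1:], score

def beam(start, steps, width=2):
    """Keep the `width` best partial sequences at every step."""
    beams = [([start], 0.0)]
    for _ in range(steps):
        candidates = []
        for seq, score in beams:
            for token, lp in LOGPROBS[seq[-1]].items():
                candidates.append((seq + [token], score + lp))
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:width]
    best_seq, best_score = beams[0]
    return best_seq[1:], best_score
```

Here greedy decoding follows "a" and ends with total probability 0.6 * 0.3 = 0.18, while beam search finds "b", "x" with 0.4 * 0.9 = 0.36. With transformers, the equivalent switch is made by passing e.g. num_beams=4, or do_sample=True with top_k/top_p, to model.generate(); the linked blog post walks through each option.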
The Tatoeba Challenge models are trained on this data compilation: https://github.com/Helsinki-NLP/Tatoeba-Challenge/
For speed, I recommend using the native Marian-NMT models rather than the PyTorch versions from the transformers library. Alternatively, you can convert the models to CTranslate2 for fast decoding.
Otherwise, is the output still broken when using the transformers models? I think this has been fixed, hasn't it? If not, it would be a question to ask at the Hugging Face repositories.