Hi,
I use run_lm_finetuning.py from pytorch_pretrained_BERT/examples to fine-tune the model on a monolingual set of sentences, using the BERT multilingual cased model. Once the model is fine-tuned, I get the loss for given sentences with the following code:
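Roughly like this (a minimal sketch, not my exact script: the example sentence is a placeholder, I load 'bert-base-multilingual-cased' here, and for the fine-tuned model I point from_pretrained at the output directory of run_lm_finetuning.py instead; feeding the unmasked input ids as masked_lm_labels is how I get an average cross-entropy over all tokens):

```python
import math
import torch
from pytorch_pretrained_bert import BertTokenizer, BertForMaskedLM

# Placeholder: swap 'bert-base-multilingual-cased' for the output_dir
# written by run_lm_finetuning.py to score the fine-tuned model.
tokenizer = BertTokenizer.from_pretrained('bert-base-multilingual-cased',
                                          do_lower_case=False)
model = BertForMaskedLM.from_pretrained('bert-base-multilingual-cased')
model.eval()

sentence = "This is a sentence from the training data."  # placeholder
tokens = tokenizer.tokenize(sentence)
input_ids = torch.tensor([tokenizer.convert_tokens_to_ids(tokens)])

with torch.no_grad():
    # When masked_lm_labels is given, BertForMaskedLM returns the mean
    # cross-entropy loss over the labelled positions instead of logits.
    loss = model(input_ids, masked_lm_labels=input_ids)

print(loss.item())            # average per-token loss
print(math.exp(loss.item()))  # perplexity-style score
```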
78637.05198167797 (fine-tuned model)
7.919475431571431 (original model)
The strange thing is that the fine-tuned model returns a much higher loss score, even when the evaluated sentence appeared in the monolingual training data.
Am I doing something wrong? I just want to check how well a given sentence fits the language model.
Regards and thanks in advance