
XLM-R Hyperparameters #2

Open

MaksymDel opened this issue Apr 6, 2020 · 2 comments

MaksymDel commented Apr 6, 2020

Hi @JunjieHu!

I am trying to reproduce the XLM-R results. The paper suggests using lr=3e-5 and an effective batch size of 16 for XLM-R.
It would be very helpful if you could share some more details on the hyperparameters (the code in the repo is configured only for mBERT).

Specifically:

  1. Are the hyperparameters the same for all the datasets?
  2. Are they also the same for XLM and XLM-R-large?
  3. Did you use warmup updates and/or learning rate scheduling?
  4. Were there any other non-default decisions that could impact training (dropout, gradient clipping)?

Thanks in advance for your answers and thanks for this resource.

JunjieHu (Owner) commented Apr 10, 2020

Hi @maksym-del

Here are my answers to your questions.

  1. Mostly we used the hyperparameters provided in scripts/*.sh. For all tasks except QA, I used lr=2e-5, an effective batch size of 32, and a maximum of 5, 10, or 20 epochs depending on the size of the English training data (see the sketch after this list). I did search over [2e-6, 2e-5, 3e-5, 5e-5], and 2e-5 generally worked well.
  2. The same hyperparameters in the bash scripts are used for XLM/XLM-R-large/mBERT for most tasks. For the QA tasks, Sebastian may have tuned the parameters slightly differently.
  3. For the fine-tuning, we didn't use warmup or learning rate scheduling in this work. In other work, I tried polynomial warmup with the AdamW optimizer on the XNLI task, and it worked better than a constant learning rate without warmup (also shown in the sketch below).
  4. In my experience, the learning rate and the optimizer can impact performance greatly. In some cases a smaller learning rate works better. As for the optimizer, LAMB and SGD are slightly better than Adam, although we used Adam in this work to avoid adding complexity to the code.
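
For concreteness, here is a minimal sketch of the setup described in answers 1 and 3, assuming a PyTorch/transformers-style fine-tuning loop. This is not the repository's exact training code: the dataloader, task head, step counts, accumulation factor, and warmup fraction are illustrative placeholders.

```python
import torch
from transformers import (
    AutoModelForSequenceClassification,
    get_polynomial_decay_schedule_with_warmup,
)

# Assumed task head: a 3-way classifier on top of XLM-R large (e.g. for XNLI).
model = AutoModelForSequenceClassification.from_pretrained(
    "xlm-roberta-large", num_labels=3
)

# Answer 1: lr=2e-5, effective batch size 32, plain Adam, no LR schedule.
# Here the effective batch size comes from a per-device batch of 8 with
# 4 gradient-accumulation steps (8 * 4 = 32) -- an assumption, not the
# repository's exact configuration.
optimizer = torch.optim.Adam(model.parameters(), lr=2e-5)
grad_accum_steps = 4

for step, batch in enumerate(train_dataloader):  # train_dataloader: placeholder
    loss = model(**batch).loss / grad_accum_steps
    loss.backward()
    if (step + 1) % grad_accum_steps == 0:
        optimizer.step()
        optimizer.zero_grad()

# Answer 3 (from the author's other work, not this repo): AdamW with a
# polynomial-decay schedule and warmup. The 10% warmup fraction and the
# total step count are illustrative assumptions.
num_training_steps = 10_000
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
scheduler = get_polynomial_decay_schedule_with_warmup(
    optimizer,
    num_warmup_steps=int(0.1 * num_training_steps),
    num_training_steps=num_training_steps,
)
# scheduler.step() would then be called after each optimizer.step().
```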

MaksymDel (Author) commented Apr 14, 2020

Hi @JunjieHu

Thanks for the clarification!

Regarding answer 3, I can see from the config file that you are indeed not using warmup steps, but from the code it looks like a linear LR decay is still applied (just without warmup), as in the sketch below.

Please correct me if I'm wrong.
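
In scheduler terms, my reading of the code corresponds to something like the following minimal sketch, assuming the transformers scheduler helpers; the model, learning rate, and step count are placeholders.

```python
import torch
from transformers import get_linear_schedule_with_warmup

model = torch.nn.Linear(8, 2)  # stand-in for the fine-tuned model
optimizer = torch.optim.Adam(model.parameters(), lr=2e-5)

# num_warmup_steps=0: no warmup phase, but the LR still decays linearly
# from 2e-5 down to 0 over the (placeholder) 10,000 training steps,
# i.e. "no warmup" is not the same as a constant learning rate.
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=0, num_training_steps=10_000
)
```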
