
Deprecated Warning in RAdam with torch==1.7.1 #58

Closed

wenmin-wu opened this issue Feb 5, 2021 · 2 comments

Comments

@wenmin-wu

Hi @LiyuanLucasLiu, thanks for your incredible lib. With RAdam I got better performance without changing any hyperparameters. However, there is a deprecation warning in RAdam with torch==1.7.1:

UserWarning: This overload of addcmul_ is deprecated:
	addcmul_(Number value, Tensor tensor1, Tensor tensor2)
Consider using one of the following signatures instead:
	addcmul_(Tensor tensor1, Tensor tensor2, *, Number value) (Triggered internally at  /pytorch/torch/csrc/utils/python_arg_parser.cpp:882.)
  exp_avg_sq.mul_(beta2).addcmul_(1 - beta2, grad, grad)

According to the docstring of addcmul_ in torch==1.7.1:

Docstring:
addcmul_(tensor1, tensor2, *, value=1) -> Tensor

In-place version of :meth:`~Tensor.addcmul`
Type:      builtin_function_or_method

So to adapt to 1.7.1 and silence this warning, I only need to change exp_avg_sq.mul_(beta2).addcmul_(1 - beta2, grad, grad) to exp_avg_sq.mul_(beta2).addcmul_(grad, grad, value=1 - beta2), right? (In the new signature the * makes value keyword-only, so it has to be passed by name.)
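
For reference, a minimal sketch (assuming torch==1.7.1, where the deprecated overload still runs but emits the UserWarning) checking that the two calls compute the same update; the tensors below are illustrative stand-ins, not the actual optimizer state:

import torch

beta2 = 0.999
grad = torch.randn(8)

exp_avg_sq_old = torch.ones(8)
exp_avg_sq_new = exp_avg_sq_old.clone()

# Deprecated overload: value passed positionally first (triggers the warning).
exp_avg_sq_old.mul_(beta2).addcmul_(1 - beta2, grad, grad)

# torch 1.7.1 signature: tensors first, value is keyword-only.
exp_avg_sq_new.mul_(beta2).addcmul_(grad, grad, value=1 - beta2)

assert torch.allclose(exp_avg_sq_old, exp_avg_sq_new)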

@wenmin-wu changed the title from "Deprecated Warning in RAdam" to "Deprecated Warning in RAdam with torch==1.7.1" on Feb 5, 2021
@LiyuanLucasLiu
Owner

Yes, there is a PR you can refer to: #51

@LiyuanLucasLiu
Owner

LiyuanLucasLiu commented Feb 5, 2021

BTW, glad you enjoyed the paper (the performance improvement is indeed a side product of our study) and thanks for letting me know :-)
