optimizer = Ranger21(params=model.parameters(), lr=learning_rate)
File "/mnt/Drive1/florian/msblob/Ranger21/ranger21/ranger21.py", line 179, in __init__
    self.total_iterations = num_epochs * num_batches_per_epoch
TypeError: unsupported operand type(s) for *: 'NoneType' and 'NoneType' #12
Comments
Well, have you tried as shown below?
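A minimal sketch of that suggestion, assuming the num_epochs and num_batches_per_epoch keyword arguments visible in the traceback above; the toy model, loader, and hyperparameters are placeholders, not from the original report:

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
from ranger21 import Ranger21

# Toy setup so the example is self-contained.
model = nn.Linear(10, 2)
train_loader = DataLoader(
    TensorDataset(torch.randn(64, 10), torch.randint(0, 2, (64,))),
    batch_size=8,
)
learning_rate = 1e-3
num_epochs = 10

# Ranger21 needs both values so it can compute
# total_iterations = num_epochs * num_batches_per_epoch internally.
optimizer = Ranger21(
    params=model.parameters(),
    lr=learning_rate,
    num_epochs=num_epochs,                    # total epochs you plan to train
    num_batches_per_epoch=len(train_loader),  # batches per epoch from your DataLoader
)
```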
Hi @neuronflow,
Thank you, with the above the training seems to start, until it crashes with this error:
If I understand it correctly, ranger21 contains an lr scheduler, so it does not make sense to combine it with cosine annealing and warm restarts?
Hi @neuronflow, to your other point - by default Ranger21 will handle the lr scheduling internally for you, so you would not want to use it with cosine annealing or other lr scheduling. I can see that it might be simpler if it had a single use_lr_scheduling = True/False flag, so I think I'll add that soon... but for now, turning warmup and warmdown off will have r21 operate as an optimizer with no scheduling, and then you can drive the lr with your own schedule.
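A sketch of that configuration, assuming use_warmup and warmdown_active are the constructor flags that control the built-in warmup/warmdown (check the Ranger21 constructor for the exact names); with both disabled, an external scheduler such as CosineAnnealingWarmRestarts can drive the learning rate:

```python
import torch
from torch import nn
from ranger21 import Ranger21

model = nn.Linear(10, 2)  # toy model for illustration

# Turn off Ranger21's internal warmup/warmdown so it behaves like a plain
# optimizer; the scheduler below then controls the learning rate.
optimizer = Ranger21(
    params=model.parameters(),
    lr=1e-3,
    use_warmup=False,        # assumed flag name: disables internal warmup
    warmdown_active=False,   # assumed flag name: disables internal warmdown
    num_epochs=10,           # still required so total_iterations can be computed
    num_batches_per_epoch=100,
)

scheduler = torch.optim.lr_scheduler.CosineAnnealingWarmRestarts(optimizer, T_0=10)

# In the training loop: call optimizer.step() per batch and
# scheduler.step() per epoch (or per batch, depending on your schedule).
```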
Thank you once again for the fast and detailed response, with the latest update it seems to work! :)
One further question: I have a training setup where I use multiple training data loaders with different batch lengths... is it possible to apply ranger21 in this context?
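One possible way to set this up, purely as a sketch under the assumption that num_batches_per_epoch should count every optimizer step taken per epoch, i.e. the sum of the lengths of all loaders; the loaders and sizes below are made up for illustration:

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
from ranger21 import Ranger21

model = nn.Linear(10, 2)

# Toy loaders with different lengths / batch sizes.
loader_a = DataLoader(TensorDataset(torch.randn(64, 10)), batch_size=8)
loader_b = DataLoader(TensorDataset(torch.randn(40, 10)), batch_size=4)

# If every batch from every loader triggers one optimizer.step(),
# the per-epoch step count is the sum of the loader lengths.
num_batches_per_epoch = len(loader_a) + len(loader_b)

optimizer = Ranger21(
    params=model.parameters(),
    lr=1e-3,
    num_epochs=10,
    num_batches_per_epoch=num_batches_per_epoch,
)
```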
I get the following error when starting my training:
initializing ranger with: