Debugging help: How can I trace an error that occurs before a model restart? #375
Comments
`options(nlmixr2.retryFocei=FALSE)` would probably work.
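A minimal sketch of how this might be combined with the usual tracing hook (the model and data names below are placeholders, and `est = "focei"` is assumed from the context of this thread):

```r
# Sketch only: `myModel` and `myData` are placeholders for the actual model/data.
options(nlmixr2.retryFocei = FALSE)  # turn off the automatic FOCEi retry/restart
options(error = recover)             # keep the usual interactive debugging hook set

library(nlmixr2)
fit <- nlmixr2(myModel, myData, est = "focei")
```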
Thanks, that helped. My error is coming from here: line 1397 in 74a732e.
The error message comes from that point. Looking more into the code, my best guess is that the relevant code is at line 1988 in 74a732e.
Any insight into how I could troubleshoot further (realizing that I can't share the model/data itself)? P.S. I may be able to make a reproducible example eventually, because the underlying issue is a parameter that is not informed by any of the data.
P.P.S. My attempt to replicate the error in a model I could share gave a different zero-gradient error. I don't know of a simple way to replicate it.
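As an abstract illustration of the "parameter not informed by the data" point (this is not the actual model; it only shows how an uninformative parameter produces an exactly zero gradient, assuming the numDeriv package is available):

```r
# Toy objective: the second parameter never touches the objective, so its
# gradient is identically zero and gradient-based steps cannot move it.
obj <- function(p) (p[1] - 2)^2

numDeriv::grad(obj, c(0, 0))
#> approximately c(-4, 0): a zero gradient for the uninformed parameter
```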
This is probably the hardest part to debug, but perhaps I can do something...
Is there a sprinkling of ...?
You could add it where it is, wherever the ... I can't simply look at the code and figure out what is going on in your case (sorry).
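A generic way to do that kind of sprinkling without editing the installed source is `trace()`. This is only a sketch: `internalFoceiHelper` is a placeholder for whatever function the line numbers above point to, and the assumption that the FOCEi code lives in the `nlmixr2est` namespace is mine:

```r
# Sketch only: replace `internalFoceiHelper` with the real function name.
trace("internalFoceiHelper",
      tracer = browser,                    # drop into the debugger on entry
      where  = asNamespace("nlmixr2est"))  # assumed namespace for the FOCEi code

# ... run the estimation and inspect the state at the zero-gradient point ...

untrace("internalFoceiHelper", where = asNamespace("nlmixr2est"))
```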
I just changed the function so that it moved line 1988 in 74a732e to just above line 2009 in 74a732e.
Then, I got the following message, and estimation continued:
That seems like the desired behavior when there is a zero derivative, but maybe I'm missing something important. Would that be a good fix? It does appear to be related to not calling the ...
Well, if there is a zero gradient, then yes, that is a good fix. But the adjustment readjusts the parameters to the correct scale, sometimes changing from an eta drift to modifying the theta, and then continues the estimation.
(TL;DR: Should I move line 1988 to just before line 2009 as a PR for this? I'm not sure.) My real issue with this model is that there is a zero gradient that does not appear until after the initial estimation; I don't know exactly what the cause is (I would have thought that one of the parameters would have a zero gradient on the first estimation step), so I can't readily make a reproducible example that I can share. The "But..." part makes me wonder/worry that I shouldn't make the suggested fix. It looks like the code after line 1988 is possibly shifting the etas around, and I don't think that doing the work of the lines after 1988 would be a problem, but I'm also not sure whether I may be missing something.
I don't think that is the right solution. Without the adjustment, you may not continue the optimization at the same point.
Could try #384.
I have a model where I get the following error:
and, after the error, I see "Restart 1" (or 2 or 3). I'd like to trace where that error occurs, but my normal method of
options(error = recover)
doesn't work, because it's not an error (or, I think more accurately, it's captured internally). Is there a way that I can trace the error to its source?
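(For context, here is a toy illustration, not the package's actual restart logic, of why a globally set `options(error = recover)` never fires when the failure is handled internally:)

```r
# Toy illustration only -- not nlmixr2's code.
options(error = recover)

res <- tryCatch(
  stop("zero gradient"),      # stand-in for the internal estimation failure
  error = function(e) {
    message("Restart 1")      # the error is swallowed and a restart begins,
    NULL                      # so the global `error = recover` hook never runs
  }
)
```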