Adding a thin wrapper to NLPModels #790
Comments
Sounds good.
I've been exploring the codebase. Since this would be a wrapper of a wrapper, I think the levels of indirection compound and could result in some unwieldy code. Or at least that is what I could see happening by wrapping CUTEst.jl, which is the only package currently wrapped that I'm somewhat familiar with.
Could you link to some code to understand this better?
Right, this was my concern in the Slack thread as well; I don't have an answer for you right away. If you could link to some relevant parts of the codebase that you were looking at so far, it would help to come up with a concrete answer here.
In this case, (I think) the Fortran side of the code won't break AD.
Okay, I think an API along the lines of the sketch below might be doable? Take a look at the NonlinearFunction/Problem -> OptimizationFunction/Problem conversion in https://github.com/SciML/SciMLBase.jl/blob/56ba8819c1637fffbae5722057c5532b8d48c21c/src/problems/optimization_problems.jl#L130-L145 for some reference, though that one is pretty trivial comparatively since the two types already look quite similar.
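A minimal sketch, assuming a hypothetical helper `nlpmodel_to_optimization` that only wires up the objective and gradient (constraints and Hessians would need the same treatment via `cons!`, `hess_coord!`, etc.):

```julia
using Optimization, NLPModels

# Hypothetical helper, not an existing API: forwards the NLPModels
# callbacks and metadata into an OptimizationFunction/OptimizationProblem.
# Only the objective and gradient are handled; constraints and Hessians
# are omitted for brevity.
function nlpmodel_to_optimization(nlp::AbstractNLPModel)
    f(u, p)     = NLPModels.obj(nlp, u)       # objective value
    g!(G, u, p) = NLPModels.grad!(nlp, u, G)  # in-place gradient
    optf = OptimizationFunction(f; grad = g!)
    return OptimizationProblem(optf, nlp.meta.x0;
                               lb = nlp.meta.lvar, ub = nlp.meta.uvar)
end
```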
The NLPModel could also be passed as the parameter `p` to the objective, along the lines of the sketch below.
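A minimal sketch of that pattern, using hypothetical `nlp_obj`/`nlp_grad!` names and an `ADNLPModel` as a stand-in test model:

```julia
using Optimization, NLPModels, ADNLPModels

# The NLPModel itself is passed as the parameter `p`, so one generic
# objective/gradient pair works for any AbstractNLPModel.
nlp_obj(u, p)      = NLPModels.obj(p, u)
nlp_grad!(G, u, p) = NLPModels.grad!(p, u, G)

# Small test model; a CUTEstModel would work the same way.
nlp  = ADNLPModel(x -> (x[1] - 1)^2 + 4 * (x[2] - x[1]^2)^2, [-1.2, 1.0])
optf = OptimizationFunction(nlp_obj; grad = nlp_grad!)
prob = OptimizationProblem(optf, nlp.meta.x0, nlp)
```

Passing the model as `p` keeps the closures generic, so the same `OptimizationFunction` can be reused for any `AbstractNLPModel`.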
While working on issue SciML/SciMLBenchmarks.jl#935 to include problems from CUTEst.jl in the SciML Benchmarks, we discussed that a wrapper to `NLPModels` would be useful for that issue and possibly for other users. Currently, CUTEst.jl only produces `NLPModels`, so being able to create an `OptimizationProblem` from them would make including not only CUTEst but also other libraries in the benchmarks much easier.

If this wrapper would be welcome, I'd like to start a PR to work on it. A possible end-to-end workflow is sketched below.
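For illustration only, assuming the proposed (not yet existing) `OptimizationProblem(::AbstractNLPModel)` constructor:

```julia
using CUTEst, Optimization, OptimizationOptimJL

nlp  = CUTEstModel("ROSENBR")    # CUTEst.jl hands back an NLPModel
prob = OptimizationProblem(nlp)  # proposed thin wrapper, hypothetical
sol  = solve(prob, LBFGS())
finalize(nlp)                    # CUTEst models must be finalized
```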