Add option to use direct model in the JuMP model creation #1031
Comments
@clizbe, are you interested in this one? It could be nice to do some coding in the model if you fancy it and have time to work on it 😉
Thanks for the rec - I'll see if I have time.
Great! I also think we can add a section in our docs with this option (maybe in the performance tips?). Adding that to the main description of this issue 😉 BTW: this is a recommendation from the JuMP developers 😄
@datejada In the docs for direct_model, it says that the optimizer cannot be changed once the model is created. Since we currently have the optimizer argument in solve_model, that means we would have to either:

Oh wait, maybe I just add the optimizer argument to create_model with a default of HiGHS, then add to the docs that changing the optimizer in solve_model won't work if they're using direct_model in create_model.
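That idea could be sketched roughly like this (a hypothetical signature; the package's actual create_model arguments may differ):

```julia
using JuMP, HiGHS

# Hypothetical sketch: let create_model choose between the two modes.
# The keyword names here are illustrative, not the package's real API.
function create_model(; use_direct_model = false,
                        optimizer = HiGHS.Optimizer)
    if use_direct_model
        # Optimizer is fixed at creation time and cannot be changed later.
        return direct_model(optimizer())
    else
        # Solver-independent cache; set_optimizer can still swap solvers.
        return Model(optimizer)
    end
end
```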
Those are good questions. I am uncertain whether it is worth covering the use case of changing the solver when using a direct model. It seems highly unlikely that a user would switch solvers, re-solve the model with a different solver, and then proceed again. Typically, when creating a model, users choose one solver and stick with it throughout the entire process (for example, Gurobi, Xpress, CPLEX, or HiGHS). I haven't seen instances where an optimization is first created and solved with Gurobi and then switched to Xpress or HiGHS to re-solve the same model, primarily because solver licenses are expensive. Most users will only have a license for one solver, whether that's Gurobi, Xpress, or CPLEX, or they will use an open-source solver.

My main point is that, by default, direct_model should not be used, so that the code always works, even in the unlikely event that the solver is changed. In the performance tips, we can guide users on using direct_model with the understanding that they cannot switch solvers on the fly while running a sequence of optimizations that reuse the model. If we proceed like that, the changes in the code are simpler, and we don't over-engineer for rare use cases. Only if this becomes a highly requested feature or a must-have should we go into the details. What do you think?
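To illustrate the restriction being discussed (a minimal sketch, assuming HiGHS; the commented-out call is the one that would fail):

```julia
using JuMP, HiGHS

# Default mode: the model is a solver-independent cache, so the
# optimizer can be attached or replaced at any time.
cached = Model(HiGHS.Optimizer)
set_optimizer(cached, HiGHS.Optimizer)  # allowed: just swaps the backend

# Direct mode: the model lives inside the solver chosen at creation.
direct = direct_model(HiGHS.Optimizer())
# set_optimizer(direct, HiGHS.Optimizer)  # errors: a direct-mode model
#                                         # cannot change optimizers
```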
Blocked because I can't generate the docs locally.
@datejada I agree it's a fringe case. So what do you suggest for the docs? I was going to add ... But if we don't want to emphasize it, I could explain it thoroughly in the How-To and have something like this in the docstring: "For the args enable_names, direct_model, and optimizer_with_attributes, see this [How-To]."
Description
This option can be used to create a direct_model instead of a Model, to reduce memory allocation (and potentially solution time). See the docs here: https://jump.dev/JuMP.jl/stable/api/JuMP/#direct_model
example:
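A minimal sketch of the two creation styles, assuming HiGHS as the solver:

```julia
using JuMP, HiGHS

# Standard model: builds a solver-independent copy of the problem first;
# the optimizer can still be changed later.
model = Model(HiGHS.Optimizer)

# Direct model: writes variables and constraints straight into the
# solver, skipping the intermediate copy (less memory), at the cost of
# fixing the optimizer at creation time.
model = direct_model(HiGHS.Optimizer())
```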
Validation and testing

Add a section about this option in the docs (maybe in the performance tips?).

Motivation
Provide an option to reduce the memory allocations of the model
Target audience
Developers
Can you help?
Always ;)