Add option to use direct model in the JuMP model creation #1031

Open · 3 tasks
datejada opened this issue Feb 18, 2025 · 8 comments · May be fixed by #1067
Labels: good first issue (Good for newcomers)

Comments
datejada (Member) commented Feb 18, 2025

Description

This option can be used to create the model with direct_model instead of Model, reducing memory allocations (and potentially solution time); see the docs here:

https://jump.dev/JuMP.jl/stable/api/JuMP/#direct_model

example:

model = direct_model(HiGHS.Optimizer())
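
For context, a minimal end-to-end sketch of direct mode in plain JuMP (not TulipaEnergyModel code; the tiny LP is only for illustration):

using JuMP, HiGHS

# In direct mode, variables and constraints are created straight in the
# HiGHS in-memory model instead of being stored in JuMP's cache first.
model = direct_model(HiGHS.Optimizer())
@variable(model, x >= 0)
@constraint(model, x >= 1)
@objective(model, Min, 2x)
optimize!(model)
value(x)  # 1.0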

Validation and testing

  • Run the benchmark case study with and without the option and compare the memory allocations; test with HiGHS, Gurobi, and Xpress (see the sketch after this list).
  • Add a test that uses the option.
  • Add a docs section (in the how-to?) explaining the use of this option and of enable_names to speed up model creation (something like performance tips?).
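
A rough sketch of the allocation comparison from the first checklist item, using plain JuMP and HiGHS (the real benchmark should go through create_model with the benchmark case study; the build! helper and problem size here are made up):

using JuMP, HiGHS

# Hypothetical helper: build the same LP into whatever model it is given.
function build!(model)
    @variable(model, x[1:100_000] >= 0)
    @objective(model, Min, sum(x))
    return model
end

# Default (cached) mode: JuMP keeps its own copy of the problem.
cached = @timed build!(Model(HiGHS.Optimizer))

# Direct mode: rows and columns go straight into the HiGHS instance.
direct = @timed build!(direct_model(HiGHS.Optimizer()))

println("cached: ", cached.bytes, " bytes, direct: ", direct.bytes, " bytes")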

Motivation

Have options to reduce the memory allocations of the model

Target audience

Developers

Can you help?

Always ;)

datejada added the good first issue label on Feb 18, 2025
datejada (Member, Author) commented

@clizbe, are you interested in this one? It could be nice to do some coding in the model if you fancy it and have time to work on it 😉

clizbe (Member) commented Feb 19, 2025

Thanks for the rec - I'll see if I have time.

datejada (Member, Author) commented Feb 19, 2025

> Thanks for the rec - I'll see if I have time.

Great! I also think we can add a section in our docs for this option (maybe in the how-to) and another for enable_names, which improves performance when creating the model, so users know there are a couple of tricks to speed up the model even more.

Adding that to the main description of this issue 😉

BTW: this is a recommendation from the JuMP developers 😄
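
For reference, the JuMP-level switch behind an enable_names-style option is set_string_names_on_creation; how (or whether) TulipaEnergyModel exposes it is exactly what this issue should decide, so the snippet below is only a plain-JuMP sketch:

using JuMP, HiGHS

model = direct_model(HiGHS.Optimizer())

# Skip registering "x[1]", "c[2]", ... name strings with the solver.
# This speeds up model creation but makes printing/debugging less readable.
set_string_names_on_creation(model, false)

@variable(model, x[1:1_000] >= 0)
@constraint(model, c[i in 1:1_000], x[i] <= i)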

clizbe self-assigned this Mar 3, 2025
clizbe (Member) commented Mar 3, 2025

@datejada In the docs for direct_model, it says that the optimizer cannot be changed once the model is created. Since we currently have the optimizer argument in solve_model, that means we would have to either:

  • Add the optimizer argument to create_model and remove it from solve_model.
    • This would mean the user cannot create the model once and then solve it multiple times with different optimizers... which I think is something we want to keep.
  • OR add the optimizer argument to create_model only if they set direct_model = true, which is then passed on to solve_model. And include in the docs that setting/changing the optimizer in solve_model is only allowed when not using direct_model.
    • But I'm not sure how to implement this. How do I make the optimizer argument required in create_model only if direct_model = true? (See the sketch after this list.)
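
One way to make optimizer required only when direct_model = true is to default it to nothing and validate inside create_model. A sketch under the assumption that create_model grows these two keyword arguments (its real signature in TulipaEnergyModel may differ):

using JuMP, HiGHS

function create_model(args...; direct_model::Bool = false, optimizer = nothing)
    if direct_model
        optimizer === nothing &&
            throw(ArgumentError("`optimizer` is required when `direct_model = true`"))
        model = JuMP.direct_model(optimizer())  # e.g. optimizer = HiGHS.Optimizer
    else
        model = JuMP.Model()  # the optimizer can still be attached later in solve_model
    end
    # ... build the variables and constraints as usual ...
    return model
end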

clizbe (Member) commented Mar 3, 2025

Oh wait, maybe I can just add the optimizer argument to create_model with a default of HiGHS, and then add to the docs that changing the optimizer in solve_model won't work if they're using direct_model in create_model (sketched below).
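
That alternative, sketched with the same caveats as above (placeholder signature; defaulting to HiGHS as suggested in this comment):

using JuMP, HiGHS

function create_model(args...; direct_model::Bool = false, optimizer = HiGHS.Optimizer)
    model = direct_model ? JuMP.direct_model(optimizer()) : JuMP.Model()
    # ... build the variables and constraints ...
    return model
end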

datejada (Member, Author) commented Mar 3, 2025

Those are good questions. I am uncertain whether it is worth covering the use case of changing the solver when using a direct model. It seems highly unlikely that a user would switch solvers, re-solve the model with a different solver, and then proceed again. Typically, when creating a model, users choose one solver and stick with it throughout the entire process (for example, Gurobi, Xpress, CPLEX, or HiGHS). I haven't seen instances where a model is first created and solved with Gurobi and then switched to Xpress or HiGHS to re-solve the same model, primarily because solver licenses are expensive. Most users will only have a license for one solver, whether that's Gurobi, Xpress, or CPLEX, or they will use an open-source solver.

My main point is that, by default, direct_model should not be used so that it always works, even in the unlikely event that the solver is changed. With performance tips, we can guide users on using direct_model with the understanding that they cannot switch solvers on the fly while running a sequence of optimizations that reuse the model.
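
To make the restriction concrete (a plain-JuMP sketch; the exact error message depends on the JuMP version):

using JuMP, HiGHS

# Default (cached) mode: the optimizer can be attached or replaced after creation.
cached = Model()
set_optimizer(cached, HiGHS.Optimizer)  # works

# Direct mode: the model is tied to the solver instance it was created with,
# so trying to change the optimizer throws an error.
direct = direct_model(HiGHS.Optimizer())
set_optimizer(direct, HiGHS.Optimizer)  # errors: not supported in direct mode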

If we continue like that, the changes in the code are simpler, and we don't over-engineer for rare use cases. Only if it becomes a highly requested feature or a must-have should we go into the details.

What do you think?

clizbe (Member) commented Mar 3, 2025

Blocked because I can't generate the docs locally.

clizbe (Member) commented Mar 3, 2025

@datejada I agree it's a fringe case. So what do you suggest for the docs?

I was going to add arg = true/false explanations in the docstrings as well as a blurb about performance in the How-To.

But if we don't want to emphasize it, I could explain it thoroughly in the How-To and have the docstring say something like "For the args enable_names, direct_model, and optimizer_with_attributes, see this [How-To]."

clizbe linked a pull request on Mar 3, 2025 that will close this issue (4 tasks)