Improve performance v0.11.0 #1015
Conversation
Benchmark Results
Benchmark in progress...
Codecov Report
All modified and coverable lines are covered by tests ✅

Additional details and impacted files:

Coverage Diff (main → #1015):
  Coverage: 95.22% → 95.25% (+0.02%)
  Files:    29 → 29
  Lines:    1152 → 1158 (+6)
  Hits:     1097 → 1103 (+6)
  Misses:   55 → 55

☔ View full report in Codecov by Sentry.
Force-pushed from 48bfb7a to f3cda08
I don't know what the error in the benchmark is, but it doesn't seem related to the cancellation due to exceeded time. Does the benchmark run locally?
Force-pushed from f3cda08 to c99898e
I forgot to update the duration for the incoming inflows. I fixed it, and it runs locally. The results in v0.10.4 are here: #1013 (comment). I expect this benchmark to take around the same time, which is good since we achieve more or less the same performance we got in that version, but... 20 min to create the model is too much :/ Especially since the hourly problem for the whole year (a bigger problem) takes around 5 minutes to create. The bottleneck is this function
That's the counterpart of https://github.com/TulipaEnergy/TulipaEnergyModel.jl/pull/987/files#diff-43e9d15ce9a457e188412a614efc15efee4860e85525d1b47ef2892074e90d43, right? Hopefully something similar to what was done there will work, i.e., change the loops to use DuckDB tables and explicitly mark some parts with types. I used Cthulhu and
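For illustration only, here is a minimal sketch of the pattern being suggested (not the actual function from this PR, whose link is elided above): drive the loop from a DuckDB query instead of Julia-side structures and annotate the hot value with its concrete type so the loop body stays type-stable. The table and column names below are hypothetical, not the real TulipaEnergyModel.jl schema.

```julia
# Hedged sketch; table/column names are made up for illustration.
using DuckDB, DataFrames

con = DBInterface.connect(DuckDB.DB, ":memory:")
DBInterface.execute(con, "CREATE TABLE incoming_flow (asset VARCHAR, duration DOUBLE)")
DBInterface.execute(con, "INSERT INTO incoming_flow VALUES ('wind', 3.0), ('solar', 6.0)")

function total_duration(con)
    # Let DuckDB do the selection/joining and iterate over the materialized result
    df = DataFrame(DBInterface.execute(con, "SELECT asset, duration FROM incoming_flow"))
    total = 0.0
    for row in eachrow(df)
        # The explicit annotation removes the type instability that Cthulhu/@code_warntype flag
        total += row.duration::Float64
    end
    return total
end

total_duration(con)  # 9.0 for the toy data above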
So, to recap (for myself and to keep things clear), we have identified two bottlenecks:
The first point leaves us at a similar performance level to v0.10.4 (which is still slow for the use cases we are solving now). The second point is challenging, but it feels like the way to go. I will try to think about it too, but if you have time between the multi-year refactor and this, it would be appreciated. Thanks!
I will try to spend some time on it
@abelsiqueira I just realized that the benchmark would never work here, since the current main can't create the EU case study within the GitHub time limit 😆 Locally, I can see the improvements in this PR, but it won't work here. So, I want to try something else with the JuMP function, but I think the DuckDB part should be in another PR. We should merge these changes to have the new reference. What do you think?
There is one problem with merging this in its current state: it will become the benchmark! I.e., the next pull request improving it won't run within the GitHub time limit!
I agree with your suggestion. Let's use a smaller version to benchmark here; I will create a case with 36 representative periods (36rp) and commit it for reference. Nevertheless, I would like to keep the 100-representative-period case (100rp) as the benchmark for future comparisons (after we merge this PR). We can consider merging this branch at the end, but we should do so without that commit of the EU case, by using interactive rebasing before merging. Let's discuss this further next week. In the meantime, I will update the data for the 36rp benchmark.

As for the slowness issue: it takes about an hour to create the model on my PC, whereas version 0.10.4 takes around 30 minutes for the same dataset. Thanks!
I checked the logs, and the benchmark doesn't even finish the baseline (i.e., the current main). Do you have an idea of how the time depends on the …? The GitHub time limit is 6h. The model creation is run 4 times (or 5, if it compiles by running; I'm not sure). I just realised there is an extra
Closing this PR in favor of #1017 |
Use JuMP.add_to_expression! and subset when creating the constraints over the clustered year (inter-temporal constraints) to improve efficiency (a hedged sketch of this pattern is included after the checklist below).

Related issues
Closes #1014
Checklist
I am following the contributing guidelines
Tests are passing
Lint workflow is passing
Docs were updated and workflow is passing
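For reference, here is a hedged sketch of the `JuMP.add_to_expression!` pattern the description refers to: building a constraint expression incrementally in place instead of summing freshly allocated expressions. This is not the code in this PR, only the `add_to_expression!` half is shown, and the variable and coefficient names are made up.

```julia
using JuMP

model = Model()
@variable(model, flow[1:1000] >= 0)
duration = rand(1000)  # hypothetical per-block durations

# Building the expression with repeated `+` (or a plain `sum` outside the JuMP macros)
# allocates a new AffExpr for every term. Updating one AffExpr in place avoids that:
expr = AffExpr(0.0)
for i in 1:1000
    add_to_expression!(expr, duration[i], flow[i])  # expr += duration[i] * flow[i], in place
end
@constraint(model, expr <= 500)
```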