Add regression suite for NonEmpty and update current one #92
Conversation
(as discussed in #90)
@nobrakal Thanks! Why did both benchmark suites fail? Also, it's not really obvious which module is benchmarked where -- could you make it visible in the Travis report by using a (possibly unused) environment setting like …?
It fails because of the time needed to download and build the dependencies of the bench suite. It simply takes too long. Normally it fails only the first time; after that the dependencies are cached and everything is OK. Please re-run the failed build to see. I have added better variable names, I hope they really are better ^^
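For context, this is roughly how dependency caching is set up on Travis (a sketch only; the cached directories are assumptions and may not match alga's actual `.travis.yml`):

```yaml
# Sketch: cache the package stores between builds, so the benchmark
# suite's dependencies are downloaded and compiled only once.
cache:
  directories:
    - $HOME/.cabal  # cabal's package store (assumed location)
    - $HOME/.stack  # stack's global build cache (assumed location)
```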
Travis has a 50-minute time limit, but the tasks failed after 45 minutes. Are you forcing the termination earlier?
@nobrakal Yep, that's better, thank you :)
Yep, otherwise Travis does not cache what was built.
Well, at the moment both benchmarking suites are failing consistently. Could you drop the internal time limit? I don't quite understand what it achieves: instead of failing to cache things, it now fails completely (and also fails to cache things because it failed!) -- so I don't see how this makes things better :)
Sorry if I wasn't clear: if the time limit is reached, Travis does not cache anything, but if a job merely fails, it still caches. So the idea is to "manually" fail when we approach the time limit, to force the cache to be saved. For the current state of this PR, see https://travis-ci.org/snowleopard/alga/jobs/402850821#L2019. If you restart it now, it should build just fine.
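A minimal sketch of that trick, assuming a GNU coreutils `timeout` wrapper around the build step (the script name and the 40-minute budget are illustrative, not the PR's actual values):

```yaml
script:
  # Kill the step after 40 minutes, safely before Travis's 50-minute
  # hard limit. `timeout` exits with a non-zero status (124) when it
  # kills the command, so the job *fails* -- and a failed job still
  # uploads its cache, whereas a job terminated by Travis caches nothing.
  - timeout 2400 ./build-and-bench.sh
```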
OK, I've restarted the jobs. Let's see how this goes...
@nobrakal Aha, it worked! I noticed that one of the benchmarks is called …
Great :)
Note that I didn't replace function names with their …
I don't understand: both … I think the regression suite should simply list the functions that have been benchmarked -- exactly as they are named in the library API.
Ah sorry, I thought you were suggesting something for the graph suite itself, not this specific version. OK, it can be done with some …
Oops, sorry for being ambiguous :) I meant just this regression suite.
No problem ^^ I have updated the script; please try to restart the two corresponding Travis jobs :) (only …)
@nobrakal Many thanks, looks good! One more suggestion while we are at it: could you find a way to hide all build/compilation information which is not really useful for performance regression? We should of course show all warnings etc. on the usual Travis instances, but for performance regression we'd like to see only the performance figures, if that's possible.
So I have reworked it. Now there is a separate … It should drop unwanted output and save time :) I have also tweaked the script to hide all output.
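One plausible shape for this (a sketch with hypothetical script names, not the actual change in this PR):

```sh
# Silence the noisy build, but keep the log around: print it only if
# the build itself fails, so warnings are still visible on failure.
./build-benchmarks.sh > build.log 2>&1 || { cat build.log; exit 1; }

# Only the benchmark figures reach the Travis log.
./run-benchmarks.sh
```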
Hmm, for some reason Travis results are now missing (only AppVeyor is shown). Could you push another commit to try to trigger a rebuild?
Oops, it was my fault: the YAML syntax is a bit tricky when it comes to newlines ^^
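For reference, this is the pitfall in miniature (a generic illustration, not the exact lines from this PR): a folded scalar joins lines into one string, so multi-line commands silently collapse, while a literal scalar preserves the newlines.

```yaml
# Folded scalar (>): newlines become spaces, so the two commands
# collapse into a single (broken) shell line.
script: >
  echo "first"
  echo "second"
---
# Literal scalar (|): newlines are preserved, so each command runs
# on its own line.
script: |
  echo "first"
  echo "second"
```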
@nobrakal We now get: …
Perhaps you could output something to indicate progress, e.g. just dots on a line.
Funny that my tests ran without hitting this error! I have corrected my script in nobrakal/benchAlgaPr@9a0547f. It outputs a line every 5 minutes when installing dependencies.
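The usual shape of such a keep-alive (a sketch; the installation step and the message are hypothetical):

```sh
# Print a heartbeat every 5 minutes so the otherwise-silent dependency
# build does not trip Travis's no-output timeout.
while true; do sleep 300; echo "...still installing dependencies"; done &
HEARTBEAT=$!

./install-deps.sh > /dev/null 2>&1  # hypothetical quiet install step

kill "$HEARTBEAT"
```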
@nobrakal Many thanks! All looks good now -- merged.
Hi,

I have updated the script to support the benchmarks of NonEmpty graphs, and created two Travis jobs dedicated to benchmarking the `Algebra.Graph` and `Algebra.Graph.NonEmpty` modules.

The tricky part is that the benchmark suite needs criterion, which has many dependencies. I needed to set a timeout for the action so that it fails before the Travis time limit and allows these dependencies to be cached (if the time limit is reached, there is no caching).

So if the build fails the first time, it is most likely due to the benchmark dependencies: re-run it and, as they have been cached by then, the build should pass.

Example here: https://travis-ci.com/nobrakal/alga/builds/78616254
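For illustration, two dedicated jobs like these are typically declared through the build matrix, with an environment variable selecting the module to benchmark (a sketch; `BENCH_MODULE` and the script name are assumptions, not alga's actual configuration):

```yaml
matrix:
  include:
    # One job per benchmarked module; BENCH_MODULE is a hypothetical
    # variable that both labels the job in the Travis UI and tells the
    # script which suite to run.
    - env: BENCH_MODULE=Algebra.Graph
    - env: BENCH_MODULE=Algebra.Graph.NonEmpty

script:
  - ./run-benchmarks.sh "$BENCH_MODULE"
```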