add old benchmarks.ipynb and updated benchmark plot.jl #62
Conversation
These two files are sourced from the JuliaLang/www.julialang.org repo. I believe it is a good idea to move these plotting scripts here instead of having them in the website repo. Once this is merged in, a PR will be sent to www.julialang.org to remove these two scripts from their repo. This also makes changing the plotting code much easier for us, since we won't have to go through JuliaLang reviewers except when we submit the final benchmarks.svg file to integrate with the Benchmarks webpage. See #48 (comment) for more relevant discussion.
Maybe just use Pluto notebooks if feasible?
That's actually a great idea; I'll add a commit to use the plot.jl file as a Pluto notebook. For the older benchmarks.ipynb file, I'd like to keep it as long as the current graph is up on the julialang.org website, since that was the exact file used to produce the plot. Once the plot on the website is changed (i.e. closure of #48) we can confidently remove the older benchmarks file and keep only the new one around.
We can just update the plot on the Julia website as well. It is really old.
Would it be too crazy to just pull the performance timings right out of GitHub Actions? Maybe that is the easiest way to actually run the benchmarks. The big issue would be that we can't get numbers for commercial software.
So basically like on every commit, get the …
For sure, GitHub Actions has been a boon.
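For concreteness, a minimal sketch of what the per-commit collection step floated above could look like, assuming a CI job runs a Julia script and uploads the resulting CSV as an artifact; the workload, file name, and helper name here are placeholders, not anything defined in this repo.

```julia
# Hypothetical per-commit timing step for a CI job; the workload, file name,
# and helper name are placeholders rather than anything defined in this repo.
using BenchmarkTools, Dates

function record_timing(name, f; outfile = "timings.csv")
    t = @belapsed $f()                     # minimum elapsed time in seconds
    sha = get(ENV, "GITHUB_SHA", "local")  # commit SHA set by GitHub Actions
    open(outfile, "a") do io
        println(io, join((sha, name, t, now(UTC)), ","))
    end
end

# Stand-in workload just to show the call shape:
record_timing("sum_1e6", () -> foldl(+, 1:10^6))
```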
Yeah... The way I'm currently handling this (to get the graph as shown here) is to interpolate the timings from the ratios in the last known timing data for the languages we don't have fresh numbers for (sketched below). I'm not sure whether publishing that kind of interpolated data on the JuliaLang website is honest (even with appropriate disclaimers), but I do think our graph should contain data for those languages, since no other benchmarks do. (I'm personally okay with it; interpolated data is better than no data.) There are options for CI as discussed here, and if it comes down to it, I am still a student and have licenses for these commercial languages, so I can try to run the tests myself on local hardware once I fix up tooling PRs such as this one.
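A rough sketch of that ratio-based interpolation, where the timings and the choice of C as the reference language are made-up assumptions rather than the actual data:

```julia
# Illustrative ratio-based interpolation; the timings and the choice of C as
# the reference language are made up for the example.
old = Dict("c" => 1.00, "matlab" => 9.00)   # last known (normalized) timings
new = Dict("c" => 0.80)                     # freshly measured timings

# Scale a language's old timing by the reference language's new/old ratio
# to estimate the timing we could not measure directly.
estimate(lang; ref = "c") = old[lang] * (new[ref] / old[ref])

new["matlab"] = estimate("matlab")          # ≈ 7.2, marked as interpolated in the plot
```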
While it is old, the information in the new graph is very similar to the previous graph. Rust and Julia both overtake Lua, but that's the only significant (trend) change besides overall improvements in individual benchmarks. Let's try to 1) use interpolated data or 2) get commercial software working (CI/locally). I'm totally fine making PRs to the julialang website with option 1) as a stopgap till we get updated data with 2).
Sadly I wasn't able to get Pluto working. For now I'll merge these scripts from the JuliaLang website repo as-is, and people can make PRs against them to add more info to the graph, e.g. the geometric mean (sketched below).
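On the geometric mean mentioned above, a minimal sketch of how it could be computed per language before being annotated on the graph; the input ratios are placeholders, and StatsBase.jl also ships a geomean if a dependency is acceptable.

```julia
# Minimal geometric-mean helper for a per-language summary; the timing
# ratios below are placeholders.
using Statistics

geomean(xs) = exp(mean(log.(xs)))

ratios = [1.2, 0.9, 3.4, 2.1]   # one timing-vs-C ratio per benchmark
geomean(ratios)                  # single number to annotate on the graph
```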
Maybe @fonsp can help? Can you explain the issues?
I'll make an issue over at the Pluto repo soon enough. Edit: Running … did the trick!
@acxz also feel free to contact me on Zulip or email! [email protected]