
Using run_benchmark() for R benchmarks and the state of result schemas #105

Open · alistaire47 (Contributor) opened this issue May 26, 2022 · 0 comments
I wrote a doc about what is required to run the R benchmarks via BenchmarkR with arrowbench::run_benchmark() (which runs every case of a benchmark) instead of arrowbench::run_one() (which runs a single case). A major part of this is making sure the right data and metadata from each run flow around correctly so that they can eventually be POSTed to conbench, so the doc devotes a lot of time to benchmark result schemas at the case level.
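
To make the difference in granularity concrete, here is a minimal sketch of the two entry points. The benchmark object (`write_file`) and parameter names (`source`, `format`) are illustrative assumptions, not a statement of arrowbench's exact API; only the existence of `run_one()` and `run_benchmark()` and their single-case vs. all-cases split comes from the description above.

```r
library(arrowbench)

# run_one() executes a single case: one specific combination of
# parameters for a benchmark. (Parameter names here are hypothetical.)
run_one(write_file, source = "some_dataset", format = "parquet")

# run_benchmark() executes every case in the benchmark's parameter
# grid, so its return value has to carry a result (data + metadata)
# for each case, which is where the schema question comes in.
run_benchmark(write_file)
```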

The likely implication is a move toward a more standardized, unified benchmark result schema across the different levels and languages, so please add comments with opinions on what that might look like.
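
For concreteness, here is one hypothetical shape for a case-level result record, written as an R list. Every field name below is an illustrative assumption for discussion, not a settled or existing schema; the only grounded requirement is that each case's data and metadata survive intact until the result can be POSTed to conbench.

```r
# Hypothetical case-level result; all field names are illustrative.
result <- list(
  run_id    = "some-uuid",   # groups all cases from one run_benchmark() call
  timestamp = "2022-05-26T00:00:00Z",
  tags = list(               # identifies the case: benchmark + parameters
    name     = "write_file",
    language = "R",
    format   = "parquet"
  ),
  stats = list(              # the measurements themselves
    data       = c(1.23, 1.25, 1.21),  # per-iteration times
    unit       = "s",
    iterations = 3L
  ),
  context = list(arrow_version = "8.0.0")  # environment metadata
)
```

The open question the issue raises is whether a single shape like this could serve at every level (case, benchmark, language) rather than each level having its own ad hoc structure.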
