
Compute significant digits, limit precision in human-readable outputs #564

Open

gcflymoto opened this issue Sep 13, 2022 · 4 comments

gcflymoto commented Sep 13, 2022

Hi, thank you for an excellent tool. By default, hyperfine produces high-precision output in some of the exported file formats (see examples at the bottom). Currently there is no way to control the precision. Depending on the benchmark, high precision can be important, while other benchmarks that run for hours do not need it. I would suggest a switch to control the number of digits of precision, which could also enforce consistent data across all exported formats. I also wondered whether the precision could be chosen automatically, but that might not be a good UX.

hyperfine --precision

hyperfine --precision 2 would reduce the precision to hundredths of a second, and --precision 0 to whole seconds.
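
For illustration, a minimal sketch of what such a setting could do when formatting exported values (the flag and the helper are hypothetical; hyperfine has no --precision option today):

```rust
// Hypothetical sketch only: shows how a user-supplied precision
// could drive the formatting of exported timing values.
fn format_seconds(value: f64, precision: usize) -> String {
    // `precision` = digits after the decimal point;
    // 2 -> hundredths of a second, 0 -> whole seconds.
    format!("{:.*}", precision, value)
}

fn main() {
    let mean = 120.16350913027999;
    assert_eq!(format_seconds(mean, 2), "120.16");
    assert_eq!(format_seconds(mean, 0), "120");
    println!("{}", format_seconds(mean, 2));
}
```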

Here is some of my data:

asciidoc
| 120.164 ± 1.488
| 118.870
| 121.790
| 1.82 ± 0.02

csv
120.16350913027999,1.4883295068241194,119.83107793328,133.77354540000002,42.28216786666667

json
"mean": 120.16350913027999,
"stddev": 1.4883295068241194,
"median": 119.83107793328,
"user": 133.77354540000002,
"system": 42.28216786666667,
"min": 118.86950493527999,
"max": 121.78994452228,
"times": [
118.86950493527999,
121.78994452228,
119.83107793328

markdown
120.164 ± 1.488 | 118.870 | 121.790 | 1.82 ± 0.02

Thanks again!

gcflymoto changed the title from "Feature Precision control or auto precision" to "Feature Request: precision control or auto precision" on Sep 13, 2022
sharkdp (Owner) commented Oct 29, 2022

For JSON and CSV I would assume it doesn't really matter. These formats will be read by a machine, not by a human, right?

For the other outputs, I think we use one decimal digit if the time unit is milliseconds. For seconds, we use three decimal digits (i.e. millisecond resolution). I agree that this seems* too much if the benchmark time itself is on the order of 10s, 100s, or larger. Maybe that should be changed by default? Or do you think we really need a new command-line option for that?

* it actually depends on the measurement resolution, which is typically on the order of milliseconds. Meaning: we can actually measure 4, 5 or even 6 significant digits if a command is long-running.
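
To make that footnote concrete, here is a rough sketch (illustrative only, not hyperfine code; the 1 ms resolution is an assumed value) of how the number of measurable significant digits grows with the run time:

```rust
// Rough sketch: estimate how many significant digits a measured mean
// supports, given an assumed measurement resolution.
fn significant_digits(mean_seconds: f64, resolution_seconds: f64) -> u32 {
    if mean_seconds <= 0.0 || resolution_seconds <= 0.0 {
        return 1;
    }
    // Each factor of 10 between the mean and the resolution adds one digit.
    ((mean_seconds / resolution_seconds).log10().floor() as i64 + 1).max(1) as u32
}

fn main() {
    // With ~1 ms resolution: a 120 s benchmark supports about 6 significant
    // digits, a 50 ms benchmark only about 2.
    println!("{}", significant_digits(120.0, 1e-3)); // 6
    println!("{}", significant_digits(0.050, 1e-3)); // 2
}
```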

gcflymoto (Author) commented Oct 29, 2022

For JSON and CSV I would assume it doesn't really matter. These formats will be read by a machine, not by a human, right?

@sharkdp both :), depending on the use case. For example, CSV files are imported into Excel and used to generate "human readable" tables/spreadsheets.

Maybe that should be changed by default?

I proposed that as part of the "auto precision" question, i.e., should the default precision be based on the magnitude of the timings?

sharkdp (Owner) commented Nov 19, 2022

For JSON and CSV I would assume it doesn't really matter. These formats will be read by a machine, not by a human, right?

@sharkdp both :), depending on the use case. For example, CSV files are imported into Excel and used to generate "human readable" tables/spreadsheets.

Yeah, okay. But it's also easy enough to fix this in Excel.

For Markdown and the other human-readable formats, I'd be okay with a more sophisticated algorithm to compute the number of digits that we display. I think it's not wrong to display everything in ms resolution, as the CPU clock should be that precise. But a result like 123.456 s is only helpful if the standard deviation is also on the order of 0.001 seconds.
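
As an illustration of that idea (a hypothetical sketch, not hyperfine's implementation; the function name and the fallback value are assumptions), the displayed decimal places could be derived from the standard deviation like this:

```rust
// Sketch: choose the number of decimal places so that the last displayed
// digit sits at the order of magnitude of the standard deviation.
fn decimals_from_stddev(stddev_seconds: f64) -> usize {
    if stddev_seconds <= 0.0 {
        return 3; // fall back to millisecond resolution
    }
    (-stddev_seconds.log10().floor()).max(0.0) as usize
}

fn main() {
    // stddev ≈ 1.49 s  -> 0 decimals: "120 s ± 1 s"
    // stddev ≈ 0.0015 s -> 3 decimals: "1.234 s ± 0.002 s"
    println!("{}", decimals_from_stddev(1.4883)); // 0
    println!("{}", decimals_from_stddev(0.0015)); // 3
}
```

With the numbers from the report above (stddev ≈ 1.49 s), this would display the mean as 120 s ± 1 s rather than 120.164 s ± 1.488 s.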

gcflymoto (Author) commented
Taking into account standard deviation is brilliant.

sharkdp changed the title from "Feature Request: precision control or auto precision" to "Compute significant digits, limit precision in human-readable outputs" on Apr 17, 2023