Request: exclude tinybench overhead from the benchmark results #189
Comparing the timing of an algorithm implemented in different languages with the same measurement tool just tells you which language runs it faster. And measuring the overhead of timestamping a block's execution time in an interpreted language has nothing to do with measuring the execution of a noop: in your example the latency median of the noop is zero, with a zero median absolute deviation. Furthermore, a benchmarking tool for an interpreted language that does not include the interpreter overhead in its measurements is just meaningless. The measurement methodology used in BenchmarkDotNet is utterly wrong: #143 (comment)
Before we go any further I have a question: what's the difference between "bench 1" and "bench 2"? Is "bench 2" correct? Or is there a way to benchmark super fast code with tinybench?

```js
import { Bench } from 'tinybench';

function fn() {
  // a small fn that only runs for a few nanoseconds
}

function bigFn() {
  // a big fn that runs for a few milliseconds
  for (let i = 0; i < 100_000_000; i++) {
    fn();
  }
}

const bench = new Bench({ time: 500 });
bench
  .add('bench 1', () => fn())
  .add('bench 2', () => bigFn());

await bench.run();
```
It's two different experiments that have nothing in common.
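One way to see why: in "bench 1" tinybench timestamps every single call to fn, so the per-call harness overhead dominates; in "bench 2" the 100,000,000 calls happen inside one timed invocation, so that overhead is amortized away. A rough sketch of recovering fn's per-call cost from "bench 2" (assuming tinybench's task.result.mean is the mean latency per invocation, in milliseconds):

```js
// Continuing the snippet above, after `await bench.run()`:
const big = bench.tasks.find((t) => t.name === 'bench 2');
if (big?.result) {
  // result.mean is the mean time per invocation of bigFn (in ms);
  // dividing by the loop count approximates one fn() call, in ns.
  const nsPerCall = (big.result.mean * 1e6) / 100_000_000;
  console.log(`~${nsPerCall.toFixed(3)} ns per fn() call`);
}
```

Note that this only estimates fn's cost if the JIT doesn't optimize the empty call away entirely, so treat the number as an upper bound rather than a precise measurement.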
OK, I understand what you mean now.
Hi, thank you for creating such an amazing benchmarking tool! However, the benchmarking results are not exactly what I want.
I have read these issues
I think I'm requesting another feature, so I'm writing this issue.
For example, we write this code to benchmark:
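(The original snippet isn't reproduced here; the following is a minimal sketch of the kind of setup being described, assuming a naive recursive fibonacci and a noop baseline, both of which are mentioned below.)

```js
import { Bench } from 'tinybench';

// Naive recursive fibonacci: a tiny CPU-bound workload.
function fibonacci(n) {
  return n < 2 ? n : fibonacci(n - 1) + fibonacci(n - 2);
}

// Empty function, used below to estimate the per-call harness overhead.
function noop() {}

const bench = new Bench({ time: 500 });
bench
  .add('fibonacci(10)', () => fibonacci(10))
  .add('noop', () => noop());

await bench.run();
console.table(bench.table());
```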
and the result is
Refer to this example to reproduce the result.
Hmm, I don't think this simple fibonacci algorithm would take that long to run. Even a noop function takes 44 ns to run here. A noop function should take zero time, or less than 1 ns for the direct function call if it isn't inlined.
Let's assume the tinybench overhead is 44.29 ns. After subtracting those 44.29 ns, the benchmarking results become:
I cannot say these results are correct, because I don't know whether we can simply treat the noop benchmark result as the tinybench overhead. But at least it shows how we can get closer to the correct result.
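As an illustration of that subtraction (not something tinybench does itself), one could treat the noop task's mean as the harness overhead and subtract it from every other task, again assuming task.result.mean is in milliseconds:

```js
// After `await bench.run()`, treat the noop mean as the harness overhead
// and subtract it from every other task's mean.
const overhead = bench.tasks.find((t) => t.name === 'noop')?.result?.mean ?? 0;

for (const task of bench.tasks) {
  if (!task.result || task.name === 'noop') continue;
  const adjustedNs = (task.result.mean - overhead) * 1e6; // ms -> ns
  console.log(`${task.name}: ~${adjustedNs.toFixed(2)} ns/op (overhead-adjusted)`);
}
```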
I tried other benchmarking tools, and here are the benchmarking results:
Those results are significantly different from the tinybench results.
If we look into the BenchmarkDotNet logs, we will see "OverheadActual", "WorkloadActual", for example:
If we subtract them we get

WorkloadActual - OverheadActual = 8.0104 ns/op

which is pretty close to the average result of 8.0300 ns/op. What BenchmarkDotNet actually does is slightly different from that. You can read "How it works". It says BenchmarkDotNet gets the result by calculating

Result = ActualWorkload - <MedianOverhead>
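To make the formula concrete, here is a rough sketch of that correction in JavaScript. The sample arrays are made up for illustration; they stand in for BenchmarkDotNet's per-iteration overhead and workload timings:

```js
function median(samples) {
  const sorted = [...samples].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 ? sorted[mid] : (sorted[mid - 1] + sorted[mid]) / 2;
}

// Hypothetical per-iteration timings in ns/op.
const overheadSamples = [3.9, 4.0, 4.1, 4.0, 4.2]; // timing an empty body
const workloadSamples = [12.1, 12.0, 12.2, 11.9, 12.0]; // timing the real code

// Result = ActualWorkload - <MedianOverhead>
const medianOverhead = median(overheadSamples);
const results = workloadSamples.map((w) => w - medianOverhead);
console.log(results); // each workload sample corrected by the median overhead
```

Using the median rather than the mean makes the overhead estimate robust against the occasional outlier iteration, which is presumably why BenchmarkDotNet chose it.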