1.0.0 Foundations
This is the first official release of ReBench as a "feature-complete" product.
Feature-complete here means that it is a tried and tested tool for benchmark
execution. It is highly configurable, documented, and successfully used.
This 1.0 release does not introduce any major new features. Instead, it marks a
point at which ReBench has been stable and reliable for a long time.
ReBench is designed to
- enable reproduction of experiments;
- document all benchmark parameters;
- provide a flexible execution model, with support for interrupting and continuing benchmarking;
- enable the definition of complex sets of comparisons and their flexible execution;
- report results to continuous performance monitoring systems, e.g., Codespeed or ReBenchDB;
- provide basic support for building/compiling benchmarks/experiments on demand;
- be extensible to parse output of custom benchmark harnesses.
ReBench isn't
- a framework for microbenchmarks. Instead, it relies on existing harnesses and can be extended to parse their output.
- a performance analysis tool. It is meant to execute experiments and record the corresponding measurements.
- a data analysis tool. It provides only a bare minimum of statistics, but has an easily parseable data format that can be processed, e.g., with R.
To use ReBench, install it with Python's pip:
```bash
pip install rebench
```
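
Experiments are described in a YAML configuration file. The following is a minimal sketch of such a file; the suite, executor, and experiment names (`ExampleSuite`, `MyVM`, `Example`), the harness command, and the paths are placeholders, and only a small subset of the available settings is shown. Please consult the documentation for the full set of supported keys.

```yaml
# A hypothetical rebench.conf; all names below are illustrative.
benchmark_suites:
  ExampleSuite:
    gauge_adapter: RebenchLog        # adapter that parses the harness output
    command: Harness %(benchmark)s   # command template for the suite
    benchmarks:
      - Bench1
      - Bench2

executors:
  MyVM:
    path: bin                        # directory containing the executable
    executable: my-vm

experiments:
  Example:
    suites:
      - ExampleSuite
    executions:
      - MyVM
```

With such a file in place, running `rebench rebench.conf` executes the configured experiments and records the measurements in a data file.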
Acknowledgements
ReBench has been used by a number of people over the years, and their feedback and contributions made it what it is today. Not all of these contributions are recorded, but I'd still like to thank everyone, from the anonymous reviewers of artifacts to the students who had to wade through bugs and missing documentation.
Thank you!