Fairer comparison of Babel and Buble #58
To be absolutely clear: the goal of the Web Tooling Benchmark is not to compare the performance of these tools against each other, but rather to measure the JS engine's performance when running an example payload through each library. This is in no way meant to be a comparison of Babel vs. Bublé! Still, we could consider a PR that changes https://github.com/v8/web-tooling-benchmark/blob/master/src/acorn-benchmark.js to move the parsing out of the measured time region; we have a separate acorn benchmark after all. It's important that the measurements stay of the same order as the other tests, though.
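For context, here is a minimal sketch of what "moving the parsing out of the measured time region" could look like. It assumes a benchmark.js-style harness and an illustrative payload; none of the names below come from the actual web-tooling-benchmark code:

```js
// Sketch only: hoist the parse into one-time setup so the timed function
// measures just the post-parse work. Payload and names are illustrative.
const Benchmark = require("benchmark");
const acorn = require("acorn");

const payload = "const answer = 40 + 2;"; // stand-in for the real payload

// Parse once, outside the measured region.
const ast = acorn.parse(payload, { ecmaVersion: 2020 });

new Benchmark.Suite()
  .add("post-parse work only", () => {
    // Only the work in this function contributes to the runs/s number.
    JSON.stringify(ast);
  })
  .on("cycle", event => console.log(String(event.target)))
  .run();
```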
Did you mean to link to the Buble benchmark?
I'm not sure what you mean. Can you give an example?
@aleclarson Right, I posted the wrong link indeed. Sorry for the confusion! By "the measurements should roughly be of the same order" I mean the following. Here's the example from the README:
> We want each test to be roughly around the same number of runs/s. For example, it would not be acceptable to have a single test that takes 100x as long, or that is 100x as fast, as the mean.
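In other words, the acceptance criterion is just a bounds check against the mean throughput. A hypothetical sketch of that check (the runs/s figures below are invented for illustration):

```js
// Hypothetical check: flag any benchmark whose runs/s deviates from the
// mean by more than a factor of 100. Numbers are made-up examples.
const runsPerSecond = { acorn: 8.1, babel: 6.5, buble: 4.9 };
const values = Object.values(runsPerSecond);
const mean = values.reduce((sum, v) => sum + v, 0) / values.length;

for (const [name, ops] of Object.entries(runsPerSecond)) {
  if (ops > mean * 100 || ops < mean / 100) {
    console.warn(`${name}: ${ops} runs/s is not of the same order as the mean (${mean.toFixed(1)})`);
  }
}
```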
The current Babel benchmark should include Babylon parsing just like the Buble benchmark includes Acorn parsing. Thoughts?
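To make the two measurement choices concrete, here is a hedged sketch of both options using the babel-core 6 / babylon APIs of that era; the payload and options are illustrative, not taken from the benchmark:

```js
const babel = require("babel-core");
const babylon = require("babylon");

const source = "class Foo { bar() { return [...this.items]; } }"; // illustrative payload
const options = { presets: ["es2015"] }; // assumes babel-preset-es2015 is installed

// Option A (what this issue proposes): Babylon parsing is part of the
// measured work, mirroring how Buble's transform parses with Acorn internally.
function transformIncludingParse() {
  return babel.transform(source, options).code;
}

// Option B (the status quo this issue questions): parse once up front and
// time only the transformation.
const ast = babylon.parse(source, { sourceType: "module" });
function transformOnly() {
  return babel.transformFromAst(ast, source, options).code;
}
```

Timing `transformIncludingParse` versus `transformOnly` in the harness is exactly the difference between including and excluding the parser in the measured region.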
PS: This benchmark is a great comparison of existing JavaScript parsers, and you can even verify the results from your browser! Perhaps this repository should take a similar approach?
cc: @hzoo @Rich-Harris