
Browser tests #2

Open
FrameMuse opened this issue Feb 1, 2024 · 4 comments

Comments

@FrameMuse

Please provide tests for browsers, at least Chrome.

I decided to recreate the tests myself and found that the results differ between browsers and Node. It seems that reduce performs the same as for..of, or even better!

Also, please redo the tests on Node 20 LTS, which is the latest LTS release. Node recently received optimization updates, so you may get results similar to the browser environment.
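For reference, here is a minimal, dependency-free timing sketch (not the repo's actual Benchmark.js harness) that compares Array.reduce against for..of on a numeric array; it runs unchanged in modern browsers and in Node 16+, where `performance` is a global:

```javascript
// Minimal, dependency-free timing sketch (NOT the repo's Benchmark.js suite):
// compares Array.reduce against for..of on the same numeric array.
// Runs in both modern browsers and Node >= 16 (performance is global there).

const data = Array.from({ length: 1_000_000 }, (_, i) => i % 100);

function sumReduce(arr) {
  return arr.reduce((acc, x) => acc + x, 0);
}

function sumForOf(arr) {
  let acc = 0;
  for (const x of arr) acc += x;
  return acc;
}

function time(label, fn) {
  const start = performance.now();
  const result = fn(data);
  const ms = performance.now() - start;
  console.log(`${label}: ${ms.toFixed(2)} ms`);
  return result;
}

// Both must agree on the result; relative timings vary per engine/version.
const a = time("Array.reduce", sumReduce);
const b = time("for..of", sumForOf);
console.log(a === b ? "results match" : "MISMATCH");
```

A single pass like this is far noisier than Benchmark.js's sampled runs, so treat the numbers as a rough indication only.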

@Andriy-LL
Member

Here are the test results on a different hardware.

Chrome 121.0.6167.86 on Windows:

Benchmarking forEach:
Array.forEach x 206 ops/sec ±8.41% (61 runs sampled)
for of x 171 ops/sec ±8.06% (59 runs sampled)
for <array.length, indexing x 281 ops/sec ±0.91% (57 runs sampled)
for <len, indexing x 274 ops/sec ±1.21% (62 runs sampled)
for <array.length, tmp element x 277 ops/sec ±1.27% (62 runs sampled)
for <len, tmp element x 274 ops/sec ±1.51% (61 runs sampled)
for <array.length, indexing / for <array.length, tmp element / Array.forEach are fastest

Benchmarking map:
Array.map x 15.00 ops/sec ±19.10% (28 runs sampled)
Array.map, destructuring x 26.15 ops/sec ±11.88% (37 runs sampled)
for of x 25.08 ops/sec ±20.86% (29 runs sampled)
for of, destructuring x 24.39 ops/sec ±20.10% (24 runs sampled)
for, init array x 149 ops/sec ±1.64% (56 runs sampled)
for, init array is faster

Benchmarking reduce:
Array.reduce x 129 ops/sec ±0.81% (62 runs sampled)
Array.reduce, destructuring x 125 ops/sec ±1.05% (60 runs sampled)
for of x 174 ops/sec ±12.01% (55 runs sampled)
for x 441 ops/sec ±1.46% (62 runs sampled)
for is faster

Node v20.10.0 on WSL2 Ubuntu:

Benchmarking forEach:
Array.forEach x 113 ops/sec ±0.67% (82 runs sampled)
for of x 156 ops/sec ±1.57% (79 runs sampled)
for <array.length, indexing x 157 ops/sec ±1.10% (80 runs sampled)
for <len, indexing x 156 ops/sec ±0.48% (80 runs sampled)
for <array.length, tmp element x 151 ops/sec ±1.37% (77 runs sampled)
for <len, tmp element x 154 ops/sec ±0.76% (79 runs sampled)
for <array.length, indexing is faster

Benchmarking map:
Array.map x 58.05 ops/sec ±11.60% (65 runs sampled)
Array.map, destructuring x 64.66 ops/sec ±0.39% (67 runs sampled)
for of x 48.31 ops/sec ±10.03% (51 runs sampled)
for of, destructuring x 46.63 ops/sec ±11.08% (52 runs sampled)
for, init array x 86.84 ops/sec ±1.93% (74 runs sampled)
for, init array is faster

Benchmarking reduce:
Array.reduce x 99.42 ops/sec ±1.05% (73 runs sampled)
Array.reduce, destructuring x 98.16 ops/sec ±0.50% (73 runs sampled)
for of x 211 ops/sec ±0.40% (89 runs sampled)
for x 211 ops/sec ±0.70% (89 runs sampled)
for of / for are fastest

The results vary, but loops are still faster on average.
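My reading of the loop-variant labels in the output above, as a sketch (not necessarily the repo's exact benchmark bodies):

```javascript
// A sketch of the four plain-for variants named in the benchmark labels;
// each consumes the same array once via a shared sink.

const array = Array.from({ length: 1000 }, (_, i) => i + 1);
let sink = 0;
const consume = (x) => { sink += x; };

// "for <array.length, indexing": length read every iteration, indexed access.
for (let i = 0; i < array.length; ++i) consume(array[i]);

// "for <len, indexing": length hoisted into a local before the loop.
for (let i = 0, len = array.length; i < len; ++i) consume(array[i]);

// "for <array.length, tmp element": element copied into a temporary first.
for (let i = 0; i < array.length; ++i) {
  const el = array[i];
  consume(el);
}

// "for <len, tmp element": hoisted length plus a temporary element.
for (let i = 0, len = array.length; i < len; ++i) {
  const el = array[i];
  consume(el);
}

console.log("total:", sink);
```

Modern engines hoist the length read themselves in hot loops, which is consistent with the four variants landing within each other's error margins above.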

@FrameMuse
Author

My initial point was specifically about objects, so I retested it using objects once more.

I used this setup:

function buildReduceSuite(array) {
  return new Benchmark.Suite("reduce")
    .add("Array.reduce", function () {
      return array.reduce((p, x, i) => {
        p[i] = x.a + x.b;
        return p;
      }, {});
    })
    .add("Array.reduce (spread)", function () {
      // note: the computed key must be the index i, not the object x
      return array.reduce((p, x, i) => ({ ...p, [i]: x.a + x.b }), {});
    })
    // .add("for of (indexOf)", function () {
    //   const result = {};
    //   for (let x of array) {
    //     const i = array.indexOf(x);
    //     result[i] = x.a + x.b;
    //   }
    //   return result;
    // })
    .add("for of (entries)", function () {
      const result = {};
      for (let [i, x] of array.entries()) {
        result[i] = x.a + x.b;
      }
      return result;
    })
    .add("for", function () {
      const result = {};
      for (let i = 0; i < array.length; ++i) {
        result[i] = array[i].a + array[i].b;
      }
      return result;
    });
}
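As a quick sanity check, separate from the Benchmark.js timing, the variants in the suite can be verified to build the same object with a small dependency-free script:

```javascript
// Dependency-free check that the suite's reduce/for-of/for variants
// all construct the same { index: a + b } object.

const array = Array.from({ length: 5 }, (_, i) => ({ a: i, b: i * 2 }));

const viaReduce = array.reduce((p, x, i) => {
  p[i] = x.a + x.b;
  return p;
}, {});

const viaEntries = (() => {
  const result = {};
  for (const [i, x] of array.entries()) result[i] = x.a + x.b;
  return result;
})();

const viaFor = (() => {
  const result = {};
  for (let i = 0; i < array.length; ++i) result[i] = array[i].a + array[i].b;
  return result;
})();

const same =
  JSON.stringify(viaReduce) === JSON.stringify(viaEntries) &&
  JSON.stringify(viaEntries) === JSON.stringify(viaFor);
console.log(same ? "all variants agree" : "variants DIFFER");
```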

I got these results with Node 21:

Benchmarking reduce:
Array.reduce x 20.06 ops/sec ±6.04% (37 runs sampled)
Array.reduce (spread) x 9.27 ops/sec ±0.52% (27 runs sampled)
for of (entries) x 15.32 ops/sec ±3.53% (40 runs sampled)
for x 22.92 ops/sec ±4.65% (41 runs sampled)
for is faster

I see that plain for is faster, but, as you can see, not by much compared to reduce: reduce is almost the same as for, and it IS faster than for of.

My conclusions:

  • For numeric workloads, reduce is a bad choice; use for or for of
  • For building objects, for of is a bad choice; use for or reduce

Please update your README.md and the article to reflect the actual results; the current ones are far from reality and don't cover use cases like numbers and objects.
reduce is not that bad nowadays, especially with objects.

@Andriy-LL
Member

Well, objects add a lot of overhead to the tests; that's why there's only a minor difference. In production code, it is best to use whatever is simple and readable, focusing on performance only when it matters, for example when dealing with large arrays and simple operations.

With your setup, I got results similar to yours. However, when I changed the code to a simple sum, removing the fairly heavy object construction:

function buildReduceSuite(array) {
  return new Benchmark.Suite("reduce")
    .add("Array.reduce", function () {
      return array.reduce((p, x, i) => p + x.a + x.b, 0);
    })
    .add("for of (entries)", function () {
      let result = 0;
      for (let [i, x] of array.entries()) {
        result = result + x.a + x.b;
      }
      return result;
    })
    .add("for", function () {
      let result = 0;
      for (let i = 0; i < array.length; ++i) {
        result = result + array[i].a + array[i].b;
      }
      return result;
    });
}

I've got these results:

Benchmarking reduce:
Array.reduce x 112 ops/sec ±1.12% (82 runs sampled)
for of (entries) x 89.08 ops/sec ±3.04% (65 runs sampled)
for x 237 ops/sec ±0.66% (86 runs sampled)
for is faster
Done in 16.58s.

We'll update the README and the article with Node v20 numbers, since reduce is no longer four times slower.

@FrameMuse
Author

FrameMuse commented Feb 7, 2024

@Andriy-LL Nice to hear about the upcoming changes, but my point was specifically about objects. reduce is not always about numbers; it's used with objects as well, as in this question:
https://stackoverflow.com/questions/14446511/most-efficient-method-to-groupby-on-an-array-of-objects
And in many other cases, so it's good to know what reduce is optimized for, even though there are many other cool use cases.

It also ties into convenience and readability, since it's good to know that you can make it fast and readable without pain and without for loops.

To be specific, what I meant in my examples is that using spread syntax in reduce's return value is very slow and inefficient compared to defining properties on the accumulator and returning the same object.
That is important to know, as is the fact that performance can be completely different when working with numbers.
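To make the spread-vs-mutate point concrete, here is a hypothetical groupBy example in the spirit of the Stack Overflow link above (not code from the repo): spreading the accumulator copies every existing key on each iteration, roughly O(n²) total work, while mutating the same accumulator is O(n):

```javascript
// Two groupBy implementations (hypothetical example, not from the repo).
// The spread version copies the whole accumulator each iteration (~O(n^2));
// the mutating version reuses one accumulator (~O(n)). Results are identical.

const items = [
  { type: "fruit", name: "apple" },
  { type: "veg", name: "carrot" },
  { type: "fruit", name: "pear" },
];

// Slow: a fresh object (with all previous keys copied) per element.
function groupBySpread(arr, key) {
  return arr.reduce(
    (acc, item) => ({
      ...acc,
      [item[key]]: [...(acc[item[key]] ?? []), item],
    }),
    {}
  );
}

// Fast: define properties on the same accumulator and return it.
function groupByMutate(arr, key) {
  return arr.reduce((acc, item) => {
    (acc[item[key]] ??= []).push(item);
    return acc;
  }, {});
}

console.log(groupByMutate(items, "type"));
```

Both produce the same grouping; only the amount of copying differs, which is exactly why the spread variant fares so much worse in the benchmark above.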

Anyway, I use reduce because it's incredibly convenient compared to for loops, and I generally agree with what you mentioned. Thanks for this discussion :)
