Benchmarks and Optimizations #2891
-
Can we have an issue to discuss this instead of a discussion? I think it is better to discuss this in a linear way rather than the non-linear way of GitHub Discussions.
-
In my opinion the only requirements for our With that said, it's my opinion that we can do whatever we want in our implementation: take whatever shortcuts we want, make optimizations, etc. As long as we achieve the two requirements I just mentioned, we are in business. @KhafraDev has taken over the majority of the web API maintenance these days, and I know it's his opinion that we shouldn't deviate from the spec even in the implementation, so that it is easier to maintain over time. I think that is fine, especially since he is the one doing most of the work, but I do believe it will limit our performance-optimization capabilities. I spoke to Luca from Deno about how he has improved the performance of fetch. Apparently, their implementation is a lot like what I described: they don't strictly follow every step of the spec, but assert that the behavior is as close to that of the browser as possible (making the necessary server-side changes, as we do).
-
I personally prefer the microbenchmarks being with the PRs. @tsctx previously just posted them into the initial PR comment, but we should keep them in the repo. Usually I check out the performance pull requests, run the benchmarks, and maybe do some code golfing. Also, imho we should optimize everything, even cold paths.
-
I wanted to keep the discussion open for a little longer to gather more of your thoughts; I'll then move it to an issue to potentially draft something there 🙂 @tsctx I really liked what you did for the benchmarks over the I'll try to draft something later this week, but any suggestion is welcome.
-
Hey! 👋
Wanted to open a discussion just to get your thoughts on this, as I have been thinking about this topic for a while.
I've seen quite a few recent PRs doing optimizations over `fetch` especially, and I appreciate all of them, as it's super helpful to have people contributing to those ends. Highly appreciated.

Nevertheless, I'm starting to see a pattern of constant micro-optimizations (sometimes accompanied by micro-benchmarks) that focus on pretty specific parts of the whole `fetch` flow and that, more often than not, might not make a big impact if we take a holistic view over the `fetch` implementation (e.g. cold paths, use cases that are not often seen, etc.).

I believe those contributions can be of even greater value if we shift the focus to the holistic view of `fetch`: how the numbers change based on the changes proposed, and whether they have impact or not. That will help assess the value of each change, as well as identify what other parts we can shift our focus to in order to make `fetch` (or even `undici`) faster.

For instance, we can focus on having a better benchmark setup for `fetch` alone and its Web APIs (e.g. `Cookies`, `CacheStorage`, `WebSocket`, etc.).

Feel free to discard the idea, or maybe propose something different 🙂
cc: @nodejs/undici