One thing I noticed when investigating something with request profiling is that the actual request response time is always a few hundred ms longer than what the resulting stackprof or vernier profile shows.
While both profilers are relatively low overhead while active, I think generating and extracting the profiling data takes a while and impacts the user experience.
When doing continuous profiling, instead of doing all that from `ProfilingMiddleware`, we could register a `rack.after_reply` callback so it's done out of band. Of course, for ad hoc profiling we can't do that, because we need to modify the response, etc.
The advantages are multiple:

- Less impact on the user experience (less latency impact).
- The profile would also include writing the response to the client, so if there is performance to be gained in that part of the server, we'd see it.
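Something along these lines (a rough sketch, not the actual middleware: it assumes stackprof, a server such as Puma or Unicorn that populates `env["rack.after_reply"]` with callables it invokes once the response has been written to the socket, and a hypothetical `write_profile` helper standing in for whatever serializes/ships the profile):

```ruby
require "stackprof"

class ContinuousProfilingMiddleware
  def initialize(app)
    @app = app
  end

  def call(env)
    StackProf.start(mode: :wall, raw: true)
    @app.call(env)
  ensure
    StackProf.stop

    if (after_reply = env["rack.after_reply"])
      # Continuous profiling: defer the expensive part (building and writing
      # out the profile data) until after the client has received the response.
      # Note: calling StackProf.results this late assumes no new profile is
      # started on this worker before the callback runs.
      after_reply << -> { write_profile(StackProf.results) }
    else
      # No rack.after_reply support, or ad hoc profiling where the response
      # itself depends on the profile: pay the cost in-band.
      write_profile(StackProf.results)
    end
  end

  private

  # Hypothetical sink; replace with whatever the setup does with profiles.
  def write_profile(results)
    File.binwrite("/tmp/profile-#{Process.pid}-#{Time.now.to_i}.dump", Marshal.dump(results))
  end
end
```

If I understand `rack.after_reply` correctly, Puma and Unicorn run these callbacks after the response bytes have been flushed to the client, so the profile extraction and write would no longer add to the latency the user sees.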
Oh, I see. I guess my case wasn't exactly continuous profiling: I was adding logic to the profiling middleware and then calling super with a specific parameter object, hence the user impact.
cc @dalehamel @bmansoob, thoughts?