Add request Content-Length to PerformanceResourceTiming entries #1777

Open
kyledevans opened this issue Sep 25, 2024 · 2 comments
Labels
addition/proposal (New features or enhancements) · needs implementer interest (Moving the issue forward requires implementers to express interest)

Comments

kyledevans commented Sep 25, 2024

What problem are you trying to solve?

We would like to display an upload throughput indicator in our UI (e.g. 13.1 Mbps). We are using the @azure/storage-blob SDK to upload files directly to an Azure Storage Blob account. Throughput calculations require: 1) start time, 2) duration, 3) payload size in bytes. The SDK internally splits files into chunks and uploads each chunk with a separate fetch call, which makes getting the raw data for the calculation very difficult.

The PerformanceResourceTiming API has multiple properties describing the response that can be used to measure download performance. But there is no standard place that records the size of the request payload, so uploads cannot be measured the same way.
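For example, on the download side a calculation like the following already works today. This is only a sketch, assuming the resource is same-origin or served with Timing-Allow-Origin (otherwise encodedBodySize and responseStart are zeroed/restricted):

```ts
// Download throughput from an existing PerformanceResourceTiming entry.
// encodedBodySize and responseStart are standard fields, but they are
// restricted for cross-origin resources without Timing-Allow-Origin.
function downloadMbps(url: string): number | undefined {
  const [entry] = performance.getEntriesByName(url, "resource") as PerformanceResourceTiming[];
  if (!entry || entry.encodedBodySize === 0) return undefined;

  const seconds = (entry.responseEnd - entry.responseStart) / 1000;
  if (seconds <= 0) return undefined;

  return (entry.encodedBodySize * 8) / seconds / 1_000_000; // bits per second -> Mbps
}
```

There is no request-side equivalent of encodedBodySize to plug into the same formula.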

It seems clear (perhaps only to me) that request payload size is a key metric that belongs at the standards level. Third-party libraries such as the Azure SDK linked above certainly should try to expose this information, but having the metric at the standards level lets developers fill in the gaps those libraries leave. It also isn't a stretch to imagine how observability platforms would benefit from being able to record and visualize bottlenecks and issues for file uploads.

What solutions exist today?

Current solutions for calculating throughput require measuring the request payload size when initiating the fetch call and then finding the corresponding PerformanceResourceTiming entry. This is difficult in practice because it means directly measuring the size of the payload stream (or serialized JSON, or whatever) and then looking up the matching timing entry to get the start time and duration. It's even harder if the fetch call happens deep inside a third-party library.

Some solutions that come to mind:

  • Monkey-patch the fetch API so I can capture the payload size, then attempt to correlate that request with a PerformanceResourceTiming entry (a rough sketch of this follows the list). yuck
  • Ditch the third-party library I'm using for uploads and implement the file splitting manually. yuck
  • Try to convince my coworkers and higher-ups that, in the year 2024, calculating upload throughput is just too hard. This is the approach I'm going with for now.
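For completeness, here is roughly what the first option would look like. This is only a sketch: it assumes the body is passed via init as a Blob, ArrayBuffer, or string (streams and Request-wrapped bodies can't be sized this way), it correlates by URL, and entry.duration spans the whole request/response cycle rather than just the upload.

```ts
// Sketch of the monkey-patch approach: wrap fetch, measure the request body,
// then pair that size with the resource timing entry for the same URL.
const uploadSizes = new Map<string, number>();

const originalFetch = window.fetch.bind(window);
window.fetch = async (input: RequestInfo | URL, init?: RequestInit) => {
  const url =
    typeof input === "string" ? input : input instanceof URL ? input.href : input.url;
  const body = init?.body;
  if (body instanceof Blob) uploadSizes.set(url, body.size);
  else if (body instanceof ArrayBuffer) uploadSizes.set(url, body.byteLength);
  else if (typeof body === "string") uploadSizes.set(url, new TextEncoder().encode(body).length);
  // Streams, FormData, etc. are not handled here -- part of why this is "yuck".
  return originalFetch(input, init);
};

new PerformanceObserver((list) => {
  for (const entry of list.getEntries() as PerformanceResourceTiming[]) {
    const bytes = uploadSizes.get(entry.name);
    if (bytes === undefined) continue;
    uploadSizes.delete(entry.name);
    // duration includes waiting for the response, so this understates throughput.
    const mbps = (bytes * 8) / (entry.duration / 1000) / 1_000_000;
    console.log(`upload to ${entry.name}: ~${mbps.toFixed(1)} Mbps`);
  }
}).observe({ type: "resource", buffered: true });
```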

How would you solve it?

Waves a magic wand: add a property to PerformanceResourceTiming entries called requestContentLength, populated from the Content-Length header of the request.
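If that property existed, the consumer side could be as small as this. (requestContentLength is the hypothetical property proposed above, not something that exists today.)

```ts
// Hypothetical: requestContentLength is the proposed property, not part of
// any current specification.
type ProposedEntry = PerformanceResourceTiming & { requestContentLength?: number };

new PerformanceObserver((list) => {
  for (const entry of list.getEntries() as ProposedEntry[]) {
    if (!entry.requestContentLength) continue;
    // As with the workaround above, duration covers the whole fetch, so this
    // is still only an approximation of upload throughput.
    const mbps = (entry.requestContentLength * 8) / (entry.duration / 1000) / 1_000_000;
    console.log(`upload to ${entry.name}: ~${mbps.toFixed(1)} Mbps`);
  }
}).observe({ type: "resource", buffered: true });
```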

Waves Dumbledore's Elder Wand: add an API to track fetch progress updates to the browser Performance APIs. This might be over-ambitious, but it sure would be nice to finally have a standard way to measure upload (and download) progress and throughput.

Anything else?

This feature is about getting one key piece of information needed to calculate throughput. The throughput calculation itself is also very difficult, but that is outside the scope of this request. Library authors and application developers who look into calculating throughput will quickly see how deep the rabbit hole goes and may end up steering clear of throughput indicators altogether.

This feature is attempting to simplify (even if only by a little bit) what is already a difficult task.

Some discussions I've found that are relevant:

kyledevans added the addition/proposal and needs implementer interest labels on Sep 25, 2024
annevk (Member) commented Sep 29, 2024

You could grab the Content-Length request header in a service worker, but you'd still run into compression (although maybe no request is compressed currently?) and it wouldn't work for streams.
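A rough sketch of that, assuming a registered service worker scoped to the page (untested; whether the header is actually populated on the intercepted request, and what happens once bodies are streamed, would both need checking):

```ts
// sw.ts -- sketch: read the request size in a service worker and report it
// back to the page. Content-Length may be absent, and streamed bodies can't
// be measured this way at all.
declare const self: ServiceWorkerGlobalScope;

async function requestBytes(request: Request): Promise<number | undefined> {
  const header = request.headers.get("content-length");
  if (header) return Number(header);
  if (request.method === "GET" || request.method === "HEAD") return undefined;
  try {
    // Fallback: buffer a clone of the body and count the bytes.
    return (await request.clone().arrayBuffer()).byteLength;
  } catch {
    return undefined; // e.g. a ReadableStream body that can't be buffered
  }
}

self.addEventListener("fetch", (event) => {
  event.waitUntil(
    (async () => {
      const bytes = await requestBytes(event.request);
      if (bytes === undefined) return;
      const client = await self.clients.get(event.clientId);
      // Correlating this with a PerformanceResourceTiming entry is still
      // left to the page.
      client?.postMessage({ url: event.request.url, requestBytes: bytes });
    })()
  );
});
```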

#607 still seems like the right API shape for exposing more information about a fetch.

kyledevans (Author) commented

I didn't think outgoing payloads were compressed. My hope was that adding the request Content-Length to the performance entry would be more realistically achievable, since the FetchObserver proposal looks to have been stalled for 7 years. I also find it odd that we have similar properties describing the response payload but not the request.

Using a service worker to get this information is an interesting idea. I wish I had the time to try it out. Perhaps in a few months my workload will change and I can experiment with it and follow up with more insight from that experience.
