As I stream the ZIP to the user's browser via Express, I monitor the ZIP size and call stream.destroy() and res.end() to end the operation if it exceeds the max file size.
This ends the stream to the user's browser and causes them to receive a corrupt ZIP (no problem, we'll send them an email alert explaining what happened).
However, under the hood, it seems like s3-zip continues its operation and memory continues to fill up. In a production environment on Heroku, our web app crashes due to memory limits well before stream.destroy is even called.
How can we implement a max file size, or stop all operations if the ZIP being built reaches a certain size? Otherwise, any use of s3-zip leaves the server easily susceptible to overloading and crashing.
Thanks!