AWS and GCP provide serverless services where a server in a container starts on demand. The container has a default timeout that usually gives Scalene enough time to profile the current job without raising a "job was too fast" error. If there are no requests to the service, the container stops.

Given that nearly all server frameworks (Flask, Sanic, etc.) have `on_server_start` and `on_server_stop` hooks, it would be nice to profile in production by starting and stopping Scalene inside those hooks (with `--profile-interval` < the container's default timeout); see the sketch below.
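For reference, Scalene already exposes a programmatic on/off API (`scalene_profiler.start()` / `scalene_profiler.stop()`), which only has an effect when the process is launched under Scalene (e.g. `python -m scalene --off app.py`). Below is a minimal sketch of wiring it into Sanic's server lifecycle listeners; the app name, port, and entrypoint are illustrative assumptions, not a tested Cloud Run setup.

```python
# Minimal sketch: toggle Scalene from server lifecycle hooks.
# Assumes the process is started under Scalene, e.g.:
#   python -m scalene --off app.py
from sanic import Sanic
from sanic.response import text
from scalene import scalene_profiler

app = Sanic("profiled-service")  # app name is illustrative

@app.listener("before_server_start")
async def start_profiling(app, loop):
    # Turn profiling on once the server is up; only meaningful
    # when the process was launched under Scalene.
    scalene_profiler.start()

@app.listener("after_server_stop")
async def stop_profiling(app, loop):
    # Turn profiling off before the container shuts down.
    scalene_profiler.stop()

@app.get("/")
async def handler(request):
    return text("ok")

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)  # port is illustrative
```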
I've tested running Scalene in GCP Cloud Run with `CMD python -m scalene --profile-interval 2.0 --cli --html --outfile scalene.html app.py`, and the container failed to start even though CPU is always allocated (not allocated only during request processing).
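For context, a minimal Dockerfile sketch showing where such a `CMD` would sit; the base image, requirements file, and app layout are illustrative assumptions.

```dockerfile
# Illustrative sketch only; not a verified Cloud Run configuration.
FROM python:3.11-slim

WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt scalene

COPY . .

# Cloud Run routes traffic to $PORT (8080 by default); app.py must bind to it.
CMD ["python", "-m", "scalene", "--profile-interval", "2.0", "--cli", "--html", "--outfile", "scalene.html", "app.py"]
```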
Related: #483 #432 #35