Endpoint performance benchmarking/profiling #10
A note on this: a thing I have played around with in the past is setting up the postgres database with the pg_stat_statements module, and then getting back statistics on how many queries are being made to the database (see e.g. …). I've never quite managed to make this easy to incorporate into e.g. a pytest run (you would maybe need to have pgtest set up the test database with the module, then have a fixture that resets the statistics before each test).
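As a rough sketch of that idea (the fixture shape and cursor name here are hypothetical, and assume a test database where `CREATE EXTENSION pg_stat_statements` has been run; note the timing column is `total_exec_time` in PostgreSQL 13+, `total_time` before that):

```python
# SQL snippets for working with pg_stat_statements.
RESET_SQL = "SELECT pg_stat_statements_reset();"
STATS_SQL = """
SELECT query, calls, total_exec_time
FROM pg_stat_statements
ORDER BY calls DESC;
"""


def summarize(rows):
    """Aggregate (query, calls, total_time_ms) rows into simple metrics."""
    total_calls = sum(calls for _, calls, _ in rows)
    total_time = sum(time_ms for _, _, time_ms in rows)
    return {"queries": len(rows), "calls": total_calls, "time_ms": total_time}


# Hypothetical pytest fixture: reset the statistics before each test,
# then report what the test executed afterwards (the `db_cursor` fixture
# providing the database connection is assumed, not part of any library):
#
# @pytest.fixture
# def query_stats(db_cursor):
#     db_cursor.execute(RESET_SQL)
#     yield
#     db_cursor.execute(STATS_SQL)
#     print(summarize(db_cursor.fetchall()))
```

The per-test reset is what makes the counts attributable to a single test, which is the awkward part to wire up mentioned above.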
Copying some comments by @CasperWA and @flavianojs, from the last aiida meeting here:
Firstly, it would be helpful if you could think of any "metrics" we should be aiming for? In terms of the …
Obviously the time to e.g. retrieve a Node is very dependent on aiida-core, plus the postgres setup, hardware, etc.
But... it would be nice if we had some basic feedback on the magnitude of the times for different endpoints, and could flag any that are particularly problematic.
For example, in aiida-core I set up: https://aiidateam.github.io/aiida-core/dev/bench/ubuntu-18.04/django/
Perhaps there is some pydantic-specific tool to achieve this?
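For getting a first feel for the magnitudes, a framework-agnostic sketch like the following could work before committing to a tool; the function names and the millisecond budget are made up for illustration (pytest-benchmark, as used in the aiida-core setup linked above, does this bookkeeping more thoroughly):

```python
import statistics
import time


def time_endpoint(call, repeats=20):
    """Time a zero-argument callable and return median/max in milliseconds."""
    samples = []
    for _ in range(repeats):
        start = time.perf_counter()
        call()
        samples.append((time.perf_counter() - start) * 1000.0)
    return {"median_ms": statistics.median(samples), "max_ms": max(samples)}


def flag_slow(results, budget_ms):
    """Return the names of endpoints whose median time exceeds the budget."""
    return [name for name, r in results.items() if r["median_ms"] > budget_ms]
```

In practice `call` would be e.g. a lambda wrapping a test-client request to one endpoint, and `flag_slow` gives exactly the "flag any that are particularly problematic" feedback, once a budget per endpoint is agreed on.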