I'm looking into better testing the performance of our software on large systems, both for the general user experience and to safeguard against regressions. I would like to see the runtime of notebook pre-processing broken down per notebook (or per cell).
When pytest is managing the notebooks, I can add `--durations=20` to get an idea of which cells are the slowest. (There's usually something of a Pareto distribution, with fewer than 5 cells taking most of the wall time.)
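For reference, here is a minimal sketch of what per-cell timing could look like, assuming the notebooks are executed through nbconvert's `ExecutePreprocessor`. The `TimedExecutePreprocessor` subclass, the `duration_s` metadata key, and the `example.ipynb` path are illustrative, not existing API:

```python
import time

import nbformat
from nbconvert.preprocessors import ExecutePreprocessor


class TimedExecutePreprocessor(ExecutePreprocessor):
    """Hypothetical subclass that records wall-clock time per cell."""

    def preprocess_cell(self, cell, resources, index):
        start = time.perf_counter()
        cell, resources = super().preprocess_cell(cell, resources, index)
        # Stash the elapsed time in the cell's metadata for later reporting.
        cell.metadata["duration_s"] = time.perf_counter() - start
        return cell, resources


nb = nbformat.read("example.ipynb", as_version=4)  # placeholder path
ep = TimedExecutePreprocessor(timeout=600)
ep.preprocess(nb, {"metadata": {"path": "."}})

# Report the slowest cells, mirroring pytest's --durations=20 output.
timed = sorted(
    ((c.metadata.get("duration_s", 0.0), i) for i, c in enumerate(nb.cells)),
    reverse=True,
)
for duration, i in timed[:20]:
    print(f"cell {i}: {duration:.2f}s")
```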