diff --git a/docs/manpage.rst b/docs/manpage.rst index 7c9227af8..ea3f90339 100644 --- a/docs/manpage.rst +++ b/docs/manpage.rst @@ -129,48 +129,55 @@ Test commands Result storage commands ^^^^^^^^^^^^^^^^^^^^^^^ -.. option:: --delete-stored-session=UUID +.. option:: --delete-stored-sessions=SELECT_SPEC - Delete the stored session with the specified UUID from the results database. + Delete the stored sessions matching the given selection criteria. + + Check :ref:`session-selection` for information on the exact syntax of ``SELECT_SPEC``. .. versionadded:: 4.7 -.. option:: --describe-stored-session=UUID +.. option:: --describe-stored-sessions=SELECT_SPEC + + Get detailed information of the sessions matching the given selection criteria. - Get detailed information of the session with the specified UUID. The output is in JSON format. + Check :ref:`session-selection` for information on the exact syntax of ``SELECT_SPEC``. .. versionadded:: 4.7 -.. option:: --describe-stored-testcases=SESSION_UUID|TIME_PERIOD +.. option:: --describe-stored-testcases=SELECT_SPEC - Get detailed test case information of the session with the specified UUID or from the specified time period. + Get detailed information of the test cases matching the given selection criteria. - If a session UUID is provided only information about the test cases of this session will be provided. - This option can be combined with :option:`--name` to restrict the listing to specific tests. - For the exact syntax of ``TIME_PERIOD`` check the :ref:`time-period-syntax` section. + This option can be combined with :option:`--name` and :option:`--filter-expr` to further restrict the selected test cases. + + Check :ref:`session-selection` for information on the exact syntax of ``SELECT_SPEC``. .. versionadded:: 4.7 .. _--list-stored-sessions: -.. option:: --list-stored-sessions[=TIME_PERIOD] +.. option:: --list-stored-sessions[=SELECT_SPEC|all] + + List sessions stored in the results database matching the given selection criteria. - List sessions stored in the results database. + If ``all`` is given instead of ``SELECT_SPEC``, all stored sessions will be listed. + This is equivalent to ``19700101T0000+0000:now``. + If ``SELECT_SPEC`` is not specified, only the sessions of the last week will be listed (equivalent to ``now-1w:now``). - If ``TIME_PERIOD`` is ``all``, all stored sessions will be listed. - If not specified, only the sessions of last week will be listed. - For the exact syntax of ``TIME_PERIOD`` check the :ref:`time-period-syntax`. + Check :ref:`session-selection` for information on the exact syntax of ``SELECT_SPEC``. .. versionadded:: 4.7 -.. option:: --list-stored-testcases=SESSION_UUID|TIME_PERIOD +.. option:: --list-stored-testcases=CMPSPEC - List all test cases from the session with the specified UUID or from the specified time period. + Select and list information about stored test cases. - If a session UUID is provided only the test cases of this session will be listed. - This option can be combined with :option:`--name` to restrict the listing to specific tests. - For the exact syntax of ``TIME_PERIOD`` check the :ref:`time-period-syntax` section. + The ``CMPSPEC`` argument specifies how test cases will be selected, aggregated and presented. + This option can be combined with :option:`--name` and :option:`--filter-expr` to restrict the listed test cases. + + Check the :ref:`querying-past-results` section for the exact syntax of ``CMPSPEC``. ..
versionadded:: 4.7 @@ -178,8 +185,10 @@ Result storage commands Compare the performance of test cases that have run in the past. - This option can be combined with :option:`--name` to restrict the comparison to specific tests. - Check the :ref:`performance-comparisons` section for the exact syntax of ``CMPSPEC``. + The ``CMPSPEC`` argument specifies how test cases will be selected, aggregated and presented. + This option can be combined with :option:`--name` and :option:`--filter-expr` to restrict the compared test cases. + + Check the :ref:`querying-past-results` section for the exact syntax of ``CMPSPEC``. .. versionadded:: 4.7 @@ -1108,10 +1117,9 @@ Miscellaneous options Print a report summarizing the performance of all performance tests that have run in the current session. For each test all of their performance variables are reported and optionally compared to past results based on the ``CMPSPEC`` specified. + If not specified, ``CMPSPEC`` defaults to ``now:now/last:/+job_nodelist+result``, meaning that the current performance will not be compared to any past run and, additionally, the ``job_nodelist`` and the test result (``pass`` or ``fail``) will be listed. - If not specified, the default ``CMPSPEC`` is ``now:now/last:/+job_nodelist+result``, meaning that the current performance will not be compared to any past run and, additionally, the ``job_nodelist`` and the test result (``pass`` or ``fail``) will be listed. - - For the exact syntax of ``CMPSPEC``, refer to :ref:`performance-comparisons`. + For the exact syntax of ``CMPSPEC``, refer to :ref:`querying-past-results`. .. versionchanged:: 4.7 @@ -1135,10 +1143,13 @@ Miscellaneous options Annotate the current session with custom key/value metadata. The key/value data is specified as a comma-separated list of `key=value` pairs. - When listing stored sessions with the :option:`--list-stored-sessions` option, any associated custom metadata will be presented by default. + When listing stored sessions with the :option:`--list-stored-sessions` option, any associated custom metadata will be presented. + + This option can be specified multiple times, in which case the data from all occurrences will be combined into a single set of key/value pairs. .. versionadded:: 4.7 + .. option:: --system=NAME Load the configuration for system ``NAME``. @@ -1152,21 +1163,19 @@ Miscellaneous options This option can also be set using the :envvar:`RFM_SYSTEM` environment variable. -.. option:: --table-format=csv|plain|pretty +.. option:: --table-format=csv|plain|outline|grid Set the formatting of tabular output printed by the options :option:`--performance-compare`, :option:`--performance-report` and the options controlling the stored sessions. The acceptable values are the following: - ``csv``: Generate CSV output + - ``grid``: Generate a table with grid lines + - ``outline``: (default) Generate a table with lines outlining the table and the header - ``plain``: Generate a plain table without any lines - - ``pretty``: (default) Generate a pretty table - - .. versionadded:: 4.7 - -.. option:: --table-hide-columns=COLUMNS - Hide the specified comma-separated list of columns from the tabular output printed by the options :option:`--performance-compare`, :option:`--performance-report` and the options controlling the stored sessions. + Note that the default ``outline`` format will not render multi-line cells correctly. + In such cases, prefer the ``grid`` or ``plain`` formats. ..
versionadded:: 4.7 @@ -1331,50 +1340,84 @@ The test cases of the session are indexed by their run job completion time for q The database file is controlled by the :attr:`~config.storage.sqlite_db_file` configuration parameter and multiple ReFrame processes can access it safely simultaneously. -There are several command-line options that allow users to query the results database, such as the :option:`--list-stored-sessions`, :option:`--list-stored-testcases`, :option:`--describe-stored-session` etc. +There are several command-line options that allow users to query the results database, such as the :option:`--list-stored-sessions`, :option:`--list-stored-testcases`, :option:`--describe-stored-sessions` etc. Other options that access the results database are the :option:`--performance-compare` and :option:`--performance-report` which compare the performance results of the same test cases in different periods of time or from different sessions. Check the :ref:`commands` section for the complete list and details of each option related to the results database. Since the report file information is now kept in the results database, there is no need to keep the report files separately, although this remains the default behavior for backward compatibility. You can disable the report generation by turning off the :attr:`~config.general.generate_file_reports` configuration parameter. -The file report of any session can be retrieved from the database with the :option:`--describe-stored-session` option. +The file report of any session can be retrieved from the database with the :option:`--describe-stored-sessions` option. -.. _performance-comparisons: +.. _querying-past-results: -Performance comparisons -======================= +Querying past results +===================== .. versionadded:: 4.7 -The :option:`--performance-compare` and :option:`--performance-report` options accept a ``CMPSPEC`` argument that specifies how to select and compare test cases. -The full syntax of ``CMPSPEC`` is the following: +ReFrame provides several options for querying and inspecting past sessions and test case results. +All those options follow a common syntax that builds on top of the following elements: + +1. Selection of sessions and test cases +2. Grouping of test cases and performance aggregations +3. Selection of test case attributes to present + +Throughout the documentation, the combined specification of these elements is referred to as ``CMPSPEC``; it is used either for implicit performance comparisons (see :option:`--performance-report`) or for simple performance aggregations (see :option:`--list-stored-testcases`). + +In the following, we present in detail the exact syntax of each of the above syntactic elements. + +.. _session-selection: + +Selecting sessions and test cases +---------------------------------- + +The syntax for selecting sessions or test cases can take one of the following forms: + +1. ``<session_uuid>``: An explicit session UUID, in which case only the test cases of that session will be retrieved. +2. ``<time_period>``: A time period specification, in which case the test cases that have run within this period will be retrieved. +3. ``?<filter_expr>``: A valid Python expression on the available session information including any user-specific session extras (see also :option:`--session-extras`), e.g., ``?'xyz=="123"'``. + In this case, the testcases from all sessions matching the filter will be retrieved. + +More formally, the exact grammar of the selection specification is the following: + +.. code-block:: bnf + + <select_spec> ::= <session_uuid> | <session_filter> | <time_period> + <session_uuid> ::= /* any valid UUID */ + <session_filter> ::= "?" <filter_expr> + <filter_expr> ::= /* any valid Python expression */ + <time_period> ::= <timestamp> ":" <timestamp> + <timestamp> ::= ("now" | <abs_timestamp>) (("+" | "-") <num> ("w" | "d" | "h" | "m"))?
+ ::= /* any timestamp of the format `%Y%m%d`, `%Y%m%dT%H%M`, `%Y%m%dT%H%M%S` */ + ::= [0-9]+ Environment =========== diff --git a/docs/tutorial.rst b/docs/tutorial.rst index 55c52bd1d..f769256a7 100644 --- a/docs/tutorial.rst +++ b/docs/tutorial.rst @@ -2000,111 +2000,111 @@ its unique identifier, its start and end time and how many test cases have run: ┍━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━┯━━━━━━━━━━━━━┑ │ UUID │ Start time │ End time │ Num runs │ Num cases │ ┝━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┿━━━━━━━━━━━━━━━━━━━━━━┿━━━━━━━━━━━━━━━━━━━━━━┿━━━━━━━━━━━━┿━━━━━━━━━━━━━┥ - │ fedb2cf8-6efa-43d8-a6dc-e72c868deba6 │ 20240823T104554+0000 │ 20240823T104557+0000 │ 1 │ 1 │ - │ 4253d6b3-3926-4c4c-a7e8-3f7dffe9bf23 │ 20240823T104608+0000 │ 20240823T104612+0000 │ 1 │ 1 │ - │ 453e64a2-f941-49e2-b628-bf50883a6387 │ 20240823T104721+0000 │ 20240823T104725+0000 │ 1 │ 1 │ - │ d923cca2-a72b-43ca-aca1-de741b65088b │ 20240823T104753+0000 │ 20240823T104757+0000 │ 1 │ 1 │ - │ 300b973b-84a6-4932-89eb-577a832fe357 │ 20240823T104814+0000 │ 20240823T104815+0000 │ 1 │ 2 │ - │ 1fb8488e-c361-4355-b7df-c0dcf3cdcc1e │ 20240823T104834+0000 │ 20240823T104835+0000 │ 1 │ 4 │ - │ 2a00c55d-4492-498c-89f0-7cf821f308c1 │ 20240823T104843+0000 │ 20240823T104845+0000 │ 1 │ 4 │ - │ 98fe5a68-2582-49ca-9c3c-6bfd9b877143 │ 20240823T104902+0000 │ 20240823T104903+0000 │ 1 │ 4 │ - │ 4bbc27bc-be50-4cca-9d1b-c5fb4988a5c0 │ 20240823T104922+0000 │ 20240823T104933+0000 │ 1 │ 26 │ - │ 200ea28f-6c3a-4973-a2b7-aa08408dbeec │ 20240823T104939+0000 │ 20240823T104943+0000 │ 1 │ 10 │ - │ b756755b-3181-4bb4-9eaa-cc8c3a9d7a43 │ 20240823T104955+0000 │ 20240823T104956+0000 │ 1 │ 10 │ - │ a8a99808-c22d-4b9c-83bc-164289fe6aa7 │ 20240823T105007+0000 │ 20240823T105007+0000 │ 1 │ 4 │ - │ f9b63cdc-7dda-44c5-ab85-1e9752047834 │ 20240823T105019+0000 │ 20240823T105020+0000 │ 1 │ 10 │ - │ 271fc2e7-b550-4325-b8bb-57bdf95f1d0d │ 20240823T105020+0000 │ 20240823T105020+0000 │ 1 │ 1 │ - │ 50cdb774-f231-4f61-8472-7daaa5199d57 │ 20240823T105031+0000 │ 20240823T105032+0000 │ 1 │ 5 │ + │ 340178ef-a51e-4ce8-8476-1e42ceb2efdd │ 20241011T092927+0000 │ 20241011T092930+0000 │ 1 │ 1 │ + │ 68f9a457-f132-459f-8c11-0e6533be3a24 │ 20241011T092931+0000 │ 20241011T092934+0000 │ 1 │ 1 │ + │ c1d3e813-e783-41aa-92b6-e7ff8eb3e4ec │ 20241011T092934+0000 │ 20241011T092935+0000 │ 1 │ 2 │ + │ 6a79ccf5-95c4-4cc0-a4a2-b3e49012565b │ 20241011T092936+0000 │ 20241011T092937+0000 │ 1 │ 4 │ + │ aa953baf-63d9-47b1-8800-1c6d05883334 │ 20241011T092938+0000 │ 20241011T092939+0000 │ 1 │ 4 │ + │ e8b23332-534a-4f48-aff7-1ae9d4085ecc │ 20241011T092939+0000 │ 20241011T092951+0000 │ 1 │ 26 │ + │ 57cfb5f3-94dd-4e7f-87c9-648a651b1337 │ 20241011T092951+0000 │ 20241011T092955+0000 │ 1 │ 10 │ + │ ec116664-5534-462f-aa33-87dad3bd794b │ 20241011T092956+0000 │ 20241011T092957+0000 │ 1 │ 10 │ + │ 92eaa50e-af92-411f-a11e-47e9fa938202 │ 20241011T092957+0000 │ 20241011T092957+0000 │ 1 │ 4 │ + │ 5bb110fd-9f6a-487d-af4f-4ab582406047 │ 20241011T092958+0000 │ 20241011T092959+0000 │ 1 │ 10 │ + │ 4a522d23-6ae4-4a28-bf39-d2872badcf01 │ 20241011T092959+0000 │ 20241011T092959+0000 │ 1 │ 1 │ + │ 2a6bb3b7-93d3-41ed-8618-48c268de5fcb │ 20241011T093000+0000 │ 20241011T093001+0000 │ 1 │ 5 │ ┕━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━┷━━━━━━━━━━━━━┙ -You can use the :option:`--list-stored-testcases` to list the test cases of a specific session or those that have run within a certain period of time: + +You can use 
:option:`--list-stored-testcases` to list the test cases of a specific session or those that have run within a certain period of time. +In the following example, we list the test cases of session ``aa953baf-63d9-47b1-8800-1c6d05883334`` showing the maximum performance for every performance variable. +Note that a session may contain multiple runs of the same test. .. code-block:: bash :caption: Run in the single-node container. - reframe --list-stored-testcases=1fb8488e-c361-4355-b7df-c0dcf3cdcc1e + reframe --list-stored-testcases=aa953baf-63d9-47b1-8800-1c6d05883334/max:/ .. code-block:: console - ┍━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┑ - │ Name │ SysEnv │ Nodelist │ Completion Time │ Result │ UUID │ - ┝━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┿━━━━━━━━━━━━━━━━━━━━━━━━━━━┿━━━━━━━━━━━━┿━━━━━━━━━━━━━━━━━━━━━━┿━━━━━━━━━━┿━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┥ - │ build_stream ~tutorialsys:default+gnu │ tutorialsys:default+gnu │ │ n/a │ pass │ 1fb8488e-c361-4355-b7df-c0dcf3cdcc1e:0:0 │ - │ build_stream ~tutorialsys:default+clang │ tutorialsys:default+clang │ │ n/a │ pass │ 1fb8488e-c361-4355-b7df-c0dcf3cdcc1e:0:1 │ - │ stream_test │ tutorialsys:default+gnu │ myhost │ 20240823T104835+0000 │ pass │ 1fb8488e-c361-4355-b7df-c0dcf3cdcc1e:0:2 │ - │ stream_test │ tutorialsys:default+clang │ myhost │ 20240823T104835+0000 │ pass │ 1fb8488e-c361-4355-b7df-c0dcf3cdcc1e:0:3 │ - ┕━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┙ - + ┍━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━┯━━━━━━━━━┯━━━━━━━━━┑ + │ name │ sysenv │ pvar │ punit │ pval │ + ┝━━━━━━━━━━━━━┿━━━━━━━━━━━━━━━━━━━━━━━━━━━┿━━━━━━━━━━┿━━━━━━━━━┿━━━━━━━━━┥ + │ stream_test │ tutorialsys:default+gnu │ copy_bw │ MB/s │ 25169.3 │ + │ stream_test │ tutorialsys:default+gnu │ triad_bw │ MB/s │ 19387.8 │ + │ stream_test │ tutorialsys:default+clang │ copy_bw │ MB/s │ 25129.7 │ + │ stream_test │ tutorialsys:default+clang │ triad_bw │ MB/s │ 29232.8 │ + ┕━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━┷━━━━━━━━━┷━━━━━━━━━┙ -The test case UUID comprises the UUID of the session where this test case belongs to, its run index (which run inside the session) and its test case index inside the run. -A session may have multiple runs if it has retried some failed test cases (see :option:`--max-retries`) or if it has run its tests repeatedly (see :option:`--reruns` and :option:`--duration`). +The grouping of the test cases, the aggregation and the actual columns shown in the final table are fully configurable. +The exact syntax and the various posibilities are described in :ref:`querying-past-results`. -You can also list the test cases that have run in a certain period of time use the :ref:`time period ` of :option:`--list-stored-testcases`: +You can also list the test cases that have run in a certain period of time by passing a time period argument to :option:`--list-stored-testcases`. +For example, the following will list the mean performance of all test cases that have run the last day: .. code-block:: bash :caption: Run in the single-node container. - reframe --list-stored-testcases=20240823T104835+0000:now + reframe --list-stored-testcases=now-1d:now/mean:/ .. 
code-block:: console - ┍━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┑ - │ Name │ SysEnv │ Nodelist │ Completion Time │ Result │ UUID │ - ┝━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┿━━━━━━━━━━━━━━━━━━━━━━━━━━━┿━━━━━━━━━━━━┿━━━━━━━━━━━━━━━━━━━━━━┿━━━━━━━━━━┿━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┥ - │ stream_test │ tutorialsys:default+gnu │ myhost │ 20240823T104835+0000 │ pass │ 1fb8488e-c361-4355-b7df-c0dcf3cdcc1e:0:2 │ - │ stream_test │ tutorialsys:default+clang │ myhost │ 20240823T104835+0000 │ pass │ 1fb8488e-c361-4355-b7df-c0dcf3cdcc1e:0:3 │ - │ stream_test │ tutorialsys:default+gnu │ myhost │ 20240823T104844+0000 │ pass │ 2a00c55d-4492-498c-89f0-7cf821f308c1:0:2 │ - │ stream_test │ tutorialsys:default+clang │ myhost │ 20240823T104845+0000 │ pass │ 2a00c55d-4492-498c-89f0-7cf821f308c1:0:3 │ - │ stream_test │ tutorialsys:default+gnu │ myhost │ 20240823T104903+0000 │ pass │ 98fe5a68-2582-49ca-9c3c-6bfd9b877143:0:2 │ - │ stream_test │ tutorialsys:default+clang │ myhost │ 20240823T104903+0000 │ pass │ 98fe5a68-2582-49ca-9c3c-6bfd9b877143:0:3 │ + ┍━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━┯━━━━━━━━━┯━━━━━━━━━┑ + │ name │ sysenv │ pvar │ punit │ pval │ + ┝━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┿━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┿━━━━━━━━━━┿━━━━━━━━━┿━━━━━━━━━┥ + │ stream_test │ generic:default+builtin │ copy_bw │ MB/s │ 40288 │ + │ stream_test │ generic:default+builtin │ triad_bw │ MB/s │ 30530.1 │ + │ stream_test │ tutorialsys:default+baseline │ copy_bw │ MB/s │ 40305.1 │ + │ stream_test │ tutorialsys:default+baseline │ triad_bw │ MB/s │ 30540.6 │ ... - │ T6 │ generic:default+builtin │ myhost │ 20240823T105020+0000 │ pass │ 271fc2e7-b550-4325-b8bb-57bdf95f1d0d:0:0 │ - │ T0 │ generic:default+builtin │ myhost │ 20240823T105031+0000 │ pass │ 50cdb774-f231-4f61-8472-7daaa5199d57:0:0 │ - │ T4 │ generic:default+builtin │ myhost │ 20240823T105031+0000 │ pass │ 50cdb774-f231-4f61-8472-7daaa5199d57:0:1 │ - │ T5 │ generic:default+builtin │ myhost │ 20240823T105031+0000 │ pass │ 50cdb774-f231-4f61-8472-7daaa5199d57:0:2 │ - │ T1 │ generic:default+builtin │ myhost │ 20240823T105031+0000 │ pass │ 50cdb774-f231-4f61-8472-7daaa5199d57:0:3 │ - │ T6 │ generic:default+builtin │ myhost │ 20240823T105032+0000 │ pass │ 50cdb774-f231-4f61-8472-7daaa5199d57:0:4 │ - ┕━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┙ + │ stream_test %num_threads=1 %thread_placement=close │ tutorialsys:default+gnu │ copy_bw │ MB/s │ 46906.3 │ + │ stream_test %num_threads=1 %thread_placement=close │ tutorialsys:default+gnu │ triad_bw │ MB/s │ 35309.3 │ + │ stream_test %num_threads=1 %thread_placement=close │ tutorialsys:default+clang │ copy_bw │ MB/s │ 46811.4 │ + │ stream_test %num_threads=1 %thread_placement=close │ tutorialsys:default+clang │ triad_bw │ MB/s │ 35634.3 │ + ┕━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━┷━━━━━━━━━┷━━━━━━━━━┙ -To get all the details of a session or a set of test cases you can use the :option:`--describe-stored-session` and :option:`--describe-stored-testcases` options which will return a JSON record with all the details. +Note that the :option:`--list-stored-testcases` will list only performance tests. 
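As a sketch of how the ``CMPSPEC`` shapes the resulting table, the following hypothetical invocation extends the listing above with two extra columns: the abbreviated ``job_nodelist`` and ``psamples``, a count of the aggregated test cases. Both column names are assumptions drawn from the code changes in this patch, not verified output.

.. code-block:: bash
   :caption: Run in the single-node container.

   # List the mean performance over the last day and additionally show
   # where each group of test cases ran and how many samples were aggregated
   reframe --list-stored-testcases='now-1d:now/mean:/+job_nodelist+psamples'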
+You can get all the details of stored sessions or a set of test cases using the :option:`--describe-stored-sessions` and :option:`--describe-stored-testcases` options which will return a detailed JSON record. -You can also combine the :option:`-n` option with the :option:`--list-stored-testcases` and :option:`--describe-stored-testcases` options in order to restrict the listing to specific tests only: +You can also combine :option:`--list-stored-testcases` and :option:`--describe-stored-testcases` with the :option:`-n` and :option:`-E` options in order to restrict the listing to specific tests only: .. code-block:: bash :caption: Run in the single-node container. - reframe --list-stored-testcases=20240823T104835+0000:now -n stream_test + reframe --list-stored-testcases=now-1d:now/mean:/ -n 'stream_test %' -E 'num_threads == 2' Comparing performance of test cases ----------------------------------- ReFrame can be used to compare the performance of the same test cases run in different time periods using the :option:`--performance-compare` option. -The following will compare the performance of the test cases of the session ``1fb8488e-c361-4355-b7df-c0dcf3cdcc1e`` with any other same test case that has run the last 24h: +The following will compare the performance of the test cases of the session ``aa953baf-63d9-47b1-8800-1c6d05883334`` with any other same test case that has run the last 24h: .. code-block:: bash :caption: Run in the single-node container. - reframe --performance-compare=1fb8488e-c361-4355-b7df-c0dcf3cdcc1e/now-1d:now/mean:/ + reframe --performance-compare=aa953baf-63d9-47b1-8800-1c6d05883334/now-1d:now/mean:/ .. code-block:: console - ┍━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━┯━━━━━━━━━┯━━━━━━━━━┯━━━━━━━━━┑ - │ name │ sysenv │ pvar │ pval │ punit │ pdiff │ - ┝━━━━━━━━━━━━━┿━━━━━━━━━━━━━━━━━━━━━━━━━━━┿━━━━━━━━━━┿━━━━━━━━━┿━━━━━━━━━┿━━━━━━━━━┥ - │ stream_test │ tutorialsys:default+gnu │ copy_bw │ 44139 │ MB/s │ +11.14% │ - │ stream_test │ tutorialsys:default+gnu │ triad_bw │ 39344.7 │ MB/s │ +20.77% │ - │ stream_test │ tutorialsys:default+clang │ copy_bw │ 44979.1 │ MB/s │ +10.81% │ - │ stream_test │ tutorialsys:default+clang │ triad_bw │ 39330.8 │ MB/s │ +8.28% │ - ┕━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━┷━━━━━━━━━┷━━━━━━━━━┷━━━━━━━━━┙ + ┍━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━┯━━━━━━━━━┯━━━━━━━━━━┯━━━━━━━━━━┯━━━━━━━━━┑ + │ name │ sysenv │ pvar │ punit │ pval_A │ pval_B │ pdiff │ + ┝━━━━━━━━━━━━━┿━━━━━━━━━━━━━━━━━━━━━━━━━━━┿━━━━━━━━━━┿━━━━━━━━━┿━━━━━━━━━━┿━━━━━━━━━━┿━━━━━━━━━┥ + │ stream_test │ tutorialsys:default+gnu │ copy_bw │ MB/s │ 25169.3 │ 46554.8 │ -45.94% │ + │ stream_test │ tutorialsys:default+gnu │ triad_bw │ MB/s │ 19387.8 │ 37660.5 │ -48.52% │ + │ stream_test │ tutorialsys:default+clang │ copy_bw │ MB/s │ 25129.7 │ 47072.2 │ -46.61% │ + │ stream_test │ tutorialsys:default+clang │ triad_bw │ MB/s │ 29232.8 │ 40177.2 │ -27.24% │ + ┕━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━┷━━━━━━━━━┷━━━━━━━━━━┷━━━━━━━━━━┷━━━━━━━━━┙ + +Note that the absolute base performance (``pval_A`` column) is listed along with the target performance (``pval_B`` column). + +:option:`--performance-compare` can also be combined with the :option:`-n` and :option:`-E` options in order to restrict the comparison to specific tests only. -The :option:`-n` option can also be combined with :option:`--performance-compare` to restrict the test cases listed. 
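For instance, a sketch of such a restricted comparison, reusing the session UUID and the name and filter expressions shown earlier (illustrative values only):

.. code-block:: bash
   :caption: Run in the single-node container.

   # Compare only selected stream_test variants of the given session
   # against their runs of the last day
   reframe --performance-compare='aa953baf-63d9-47b1-8800-1c6d05883334/now-1d:now/mean:/' \
       -n 'stream_test %' -E 'num_threads == 2'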
-Similarly to the :option:`--performance-compare` option, the :option:`--performance-report` option can compare the performance of the current run with any arbitrary past session or past time period. +Similarly, the :option:`--performance-report` option can compare the performance of the current run with any arbitrary past session or past time period. -Finally, you can delete completely a stored session using the :option:`--delete-stored-session` option: +Finally, a stored session can be deleted using the :option:`--delete-stored-sessions` option: .. code-block:: bash - reframe --delete-stored-session=1fb8488e-c361-4355-b7df-c0dcf3cdcc1e + reframe --delete-stored-sessions=1fb8488e-c361-4355-b7df-c0dcf3cdcc1e Deleting a session will also delete all its test cases from the database. diff --git a/reframe/frontend/cli.py b/reframe/frontend/cli.py index 8e8c72dea..cda760463 100644 --- a/reframe/frontend/cli.py +++ b/reframe/frontend/cli.py @@ -401,20 +401,20 @@ def main(): 'for the selected tests and exit'), ) action_options.add_argument( - '--delete-stored-session', action='store', metavar='UUID', - help='Delete stored session' + '--delete-stored-sessions', action='store', metavar='QUERY', + help='Delete stored sessions' ) action_options.add_argument( '--describe', action='store_true', help='Give full details on the selected tests' ) action_options.add_argument( - '--describe-stored-session', action='store', metavar='UUID', + '--describe-stored-sessions', action='store', metavar='QUERY', help='Get detailed session information in JSON' ) action_options.add_argument( '--describe-stored-testcases', action='store', - metavar='SESSION_UUID|PERIOD', + metavar='QUERY', help='Get detailed test case information in JSON' ) action_options.add_argument( @@ -434,12 +434,11 @@ def main(): ) action_options.add_argument( '--list-stored-sessions', nargs='?', action='store', - const='now-1w:now', metavar='PERIOD', help='List stored sessions' + const='now-1w:now', metavar='QUERY', help='List stored sessions' ) action_options.add_argument( - '--list-stored-testcases', action='store', - metavar='SESSION_UUID|PERIOD', - help='List stored testcases by session or time period' + '--list-stored-testcases', action='store', metavar='QUERY', + help='List performance info for stored testcases' ) action_options.add_argument( '-l', '--list', nargs='?', const='T', choices=['C', 'T'], @@ -617,7 +616,7 @@ def main(): '(default: "now:now/last:+job_nodelist/+result")') ) reporting_options.add_argument( - '--session-extras', action='store', metavar='KV_DATA', + '--session-extras', action='append', metavar='KV_DATA', help='Annotate session with custom key/value data' ) @@ -641,15 +640,10 @@ def main(): envvar='RFM_SYSTEM' ) misc_options.add_argument( - '--table-format', choices=['csv', 'plain', 'pretty'], + '--table-format', choices=['csv', 'plain', 'outline', 'grid'], help='Table formatting', envvar='RFM_TABLE_FORMAT', configvar='general/table_format' ) - misc_options.add_argument( - '--table-hide-columns', metavar='COLS', action='store', - help='Hide specific columns from the final table', - envvar='RFM_TABLE_HIDE_COLUMNS', configvar='general/table_hide_columns' - ) misc_options.add_argument( '-v', '--verbose', action='count', help='Increase verbosity level of output', @@ -831,7 +825,7 @@ def restrict_logging(): if (options.show_config or options.detect_host_topology or options.describe or - options.describe_stored_session or + options.describe_stored_sessions or options.describe_stored_testcases): 
logging.getlogger().setLevel(logging.ERROR) return True @@ -983,11 +977,11 @@ def restrict_logging(): if options.list_stored_sessions: with exit_gracefully_on_error('failed to retrieve session data', printer): - time_period = options.list_stored_sessions - if time_period == 'all': - time_period = None + spec = options.list_stored_sessions + if spec == 'all': + spec = '19700101T0000+0000:now' - printer.table(reporting.session_data(time_period)) + printer.table(reporting.session_data(spec)) sys.exit(0) if options.list_stored_testcases: @@ -995,17 +989,17 @@ def restrict_logging(): with exit_gracefully_on_error('failed to retrieve test case data', printer): printer.table(reporting.testcase_data( - options.list_stored_testcases, namepatt + options.list_stored_testcases, namepatt, options.filter_expr )) sys.exit(0) - if options.describe_stored_session: + if options.describe_stored_sessions: # Restore logging level printer.setLevel(logging.INFO) with exit_gracefully_on_error('failed to retrieve session data', printer): printer.info(jsonext.dumps(reporting.session_info( - options.describe_stored_session + options.describe_stored_sessions ), indent=2)) sys.exit(0) @@ -1020,11 +1014,12 @@ def restrict_logging(): ), indent=2)) sys.exit(0) - if options.delete_stored_session: - session_uuid = options.delete_stored_session + if options.delete_stored_sessions: + query = options.delete_stored_sessions with exit_gracefully_on_error('failed to delete session', printer): - reporting.delete_session(session_uuid) - printer.info(f'Session {session_uuid} deleted successfully.') + for uuid in reporting.delete_sessions(query): + printer.info(f'Session {uuid} deleted successfully.') + sys.exit(0) if options.performance_compare: @@ -1033,7 +1028,9 @@ def restrict_logging(): printer): printer.table( reporting.performance_compare(options.performance_compare, - namepatt=namepatt) + None, + namepatt, + options.filter_expr) ) sys.exit(0) @@ -1598,9 +1595,10 @@ def module_unuse(*paths): if options.session_extras: # Update report's extras extras = {} - for arg in options.session_extras.split(','): - k, v = arg.split('=', maxsplit=1) - extras[k] = v + for sess in options.session_extras: + for arg in sess.split(','): + k, v = arg.split('=', maxsplit=1) + extras[k] = v report.update_extras(extras) diff --git a/reframe/frontend/printer.py b/reframe/frontend/printer.py index a54c333ec..d1324e1cf 100644 --- a/reframe/frontend/printer.py +++ b/reframe/frontend/printer.py @@ -269,29 +269,14 @@ def table(self, data, **kwargs): # Map our options to tabulate if table_format == 'plain': tablefmt = 'plain' - elif table_format == 'pretty': + elif table_format == 'outline': tablefmt = 'mixed_outline' + elif table_format == 'grid': + tablefmt = 'mixed_grid' else: raise ValueError(f'invalid table format: {table_format}') kwargs.setdefault('headers', 'firstrow') kwargs.setdefault('tablefmt', tablefmt) kwargs.setdefault('numalign', 'right') - hide_columns = rt.runtime().get_option('general/0/table_hide_columns') - if hide_columns and kwargs['headers'] == 'firstrow' and data: - hide_columns = hide_columns.split(',') - colidx = [i for i, col in enumerate(data[0]) - if col not in hide_columns] - - def _access(seq, i, default=None): - # Safe access of i-th element of a sequence - try: - return seq[i] - except IndexError: - return default - - tab_data = [[_access(rec, col) for col in colidx] for rec in data] - else: - tab_data = data - - self.info(tabulate(tab_data, **kwargs)) + self.info(tabulate(data, **kwargs)) diff --git 
a/reframe/frontend/reporting/__init__.py b/reframe/frontend/reporting/__init__.py index 1c7254f83..08d3deb21 100644 --- a/reframe/frontend/reporting/__init__.py +++ b/reframe/frontend/reporting/__init__.py @@ -15,6 +15,7 @@ import socket import time import uuid +from collections import UserDict from collections.abc import Hashable from filelock import FileLock @@ -27,16 +28,39 @@ from reframe.core.warnings import suppress_deprecations from reframe.utility import nodelist_abbrev, OrderedSet from .storage import StorageBackend -from .utility import Aggregator, parse_cmp_spec, parse_time_period, is_uuid +from .utility import Aggregator, parse_cmp_spec, parse_query_spec # The schema data version # Major version bumps are expected to break the validation of previous schemas DATA_VERSION = '4.0' -_SCHEMA = os.path.join(rfm.INSTALL_PREFIX, 'reframe/schemas/runreport.json') +_SCHEMA = None +_RESERVED_SESSION_INFO_KEYS = None _DATETIME_FMT = r'%Y%m%dT%H%M%S%z' +def _schema(): + global _SCHEMA + if _SCHEMA is not None: + return _SCHEMA + + with open(os.path.join(rfm.INSTALL_PREFIX, + 'reframe/schemas/runreport.json')) as fp: + _SCHEMA = json.load(fp) + return _SCHEMA + + +def _reserved_session_info_keys(): + global _RESERVED_SESSION_INFO_KEYS + if _RESERVED_SESSION_INFO_KEYS is not None: + return _RESERVED_SESSION_INFO_KEYS + + _RESERVED_SESSION_INFO_KEYS = set( + _schema()['properties']['session_info']['properties'].keys() + ) + return _RESERVED_SESSION_INFO_KEYS + + def _format_sysenv(system, partition, environ): return f'{system}:{partition}+{environ}' @@ -183,11 +207,8 @@ def _restore_session(filename): f'report file {filename!r} is not a valid JSON file') from e # Validate the report - with open(_SCHEMA) as fp: - schema = json.load(fp) - try: - jsonschema.validate(report, schema) + jsonschema.validate(report, _schema()) except jsonschema.ValidationError as e: try: found_ver = report['session_info']['data_version'] @@ -272,10 +293,12 @@ def update_timestamps(self, ts_start, ts_end): def update_extras(self, extras): '''Attach user-specific metadata to the session''' - # We prepend a special character to the user extras in order to avoid - # possible conflicts with existing keys - for k, v in extras.items(): - self.__report['session_info'][f'${k}'] = v + clashed_keys = set(extras.keys()) & _reserved_session_info_keys() + if clashed_keys: + raise ValueError('cannot use reserved keys ' + f'`{",".join(clashed_keys)}` as session extras') + + self.__report['session_info'].update(extras) def update_run_stats(self, stats): session_uuid = self.__report['session_info']['uuid'] @@ -361,7 +384,13 @@ def update_run_stats(self, stats): key = alt_name if alt_name else name try: with suppress_deprecations(): - entry[key] = getattr(check, name) + val = getattr(check, name) + + if name in test_cls.raw_params: + # Attribute is parameter, so format it + val = test_cls.raw_params[name].format(val) + + entry[key] = val except AttributeError: entry[key] = '' @@ -495,27 +524,59 @@ def save_junit(self, filename): ) -def _group_key(groups, testcase): +class _TCProxy(UserDict): + '''Test case proxy class to support dynamic fields''' + _required_keys = ['name', 'system', 'partition', 'environ'] + + def __init__(self, testcase, include_only=None): + if isinstance(testcase, _TCProxy): + testcase = testcase.data + + if include_only is not None: + self.data = {} + for k in include_only + self._required_keys: + if k in testcase: + self.data.setdefault(k, testcase[k]) + else: + self.data = testcase + + def __getitem__(self, key): 
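+        # Fold node lists into their abbreviated form on access, so that
+        # groupings and listings always see the same compact representation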
+ val = super().__getitem__(key) + if key == 'job_nodelist': + val = nodelist_abbrev(val) + + return val + + def __missing__(self, key): + if key == 'basename': + return self.data['name'].split()[0] + elif key == 'sysenv': + return _format_sysenv(self.data['system'], + self.data['partition'], + self.data['environ']) + elif key == 'pdiff': + return None + else: + raise KeyError(key) + + +def _group_key(groups, testcase: _TCProxy): key = [] for grp in groups: with reraise_as(ReframeError, (KeyError,), 'no such group'): val = testcase[grp] + if not isinstance(val, Hashable): + val = str(val) - if grp == 'job_nodelist': - # Fold nodelist before adding as a key element - key.append(nodelist_abbrev(val)) - elif not isinstance(val, Hashable): - key.append(str(val)) - else: key.append(val) return tuple(key) @time_function -def _group_testcases(testcases, group_by, extra_cols): +def _group_testcases(testcases, groups, columns): grouped = {} - for tc in testcases: + for tc in map(_TCProxy, testcases): for pvar, reftuple in tc['perfvalues'].items(): pvar = pvar.split(':')[-1] pval, pref, plower, pupper, punit = reftuple @@ -526,16 +587,16 @@ def _group_testcases(testcases, group_by, extra_cols): plower = pref * (1 + plower) if plower is not None else -math.inf pupper = pref * (1 + pupper) if pupper is not None else math.inf - record = { + record = _TCProxy(tc, include_only=columns) + record.update({ 'pvar': pvar, 'pval': pval, 'pref': pref, 'plower': plower, 'pupper': pupper, 'punit': punit, - **{k: tc[k] for k in group_by + extra_cols if k in tc} - } - key = _group_key(group_by, record) + }) + key = _group_key(groups, record) grouped.setdefault(key, []) grouped[key].append(record) @@ -551,66 +612,98 @@ def _aggregate_perf(grouped_testcases, aggr_fn, cols): delim = '\n' other_aggr = Aggregator.create('join_uniq', delim) + count_aggr = Aggregator.create('count') aggr_data = {} for key, seq in grouped_testcases.items(): aggr_data.setdefault(key, {}) - aggr_data[key]['pval'] = aggr_fn(tc['pval'] for tc in seq) with reraise_as(ReframeError, (KeyError,), 'no such column'): for c in cols: - aggr_data[key][c] = other_aggr( - nodelist_abbrev(tc[c]) if c == 'job_nodelist' else tc[c] - for tc in seq - ) + if c == 'pval': + fn = aggr_fn + elif c == 'psamples': + fn = count_aggr + else: + fn = other_aggr + + if fn is count_aggr: + aggr_data[key][c] = fn(seq) + else: + aggr_data[key][c] = fn(tc[c] for tc in seq) return aggr_data @time_function def compare_testcase_data(base_testcases, target_testcases, base_fn, target_fn, - extra_group_by=None, extra_cols=None): - extra_group_by = extra_group_by or [] - extra_cols = extra_cols or [] - group_by = (['name', 'system', 'partition', 'environ', 'pvar', 'punit'] + - extra_group_by) - - grouped_base = _group_testcases(base_testcases, group_by, extra_cols) - grouped_target = _group_testcases(target_testcases, group_by, extra_cols) - pbase = _aggregate_perf(grouped_base, base_fn, extra_cols) - ptarget = _aggregate_perf(grouped_target, target_fn, []) + groups=None, columns=None): + groups = groups or [] + columns = columns or [] + grouped_base = _group_testcases(base_testcases, groups, columns) + grouped_target = _group_testcases(target_testcases, groups, columns) + pbase = _aggregate_perf(grouped_base, base_fn, columns) + ptarget = _aggregate_perf(grouped_target, target_fn, columns) + + # For visual purposes if `name` is in `groups`, consider also its + # derivative `basename` to be in, so as to avoid duplicate columns + if 'name' in groups: + groups.append('basename') # 
Build the final table data - data = [['name', 'sysenv', 'pvar', 'pval', - 'punit', 'pdiff'] + extra_group_by + extra_cols] - for key, aggr_data in pbase.items(): - pval = aggr_data['pval'] - try: - target_pval = ptarget[key]['pval'] - except KeyError: - pdiff = 'n/a' + extra_cols = set(columns) - set(groups) - {'pdiff'} + + # Header line + header = [] + for c in columns: + if c in extra_cols: + header += [f'{c}_A', f'{c}_B'] else: - if pval is None or target_pval is None: - pdiff = 'n/a' + header.append(c) + + data = [header] + for key, aggr_data in pbase.items(): + pdiff = None + line = [] + for c in columns: + base = aggr_data.get(c) + try: + target = ptarget[key][c] + except KeyError: + target = None + + if c == 'pval': + line.append('n/a' if base is None else base) + line.append('n/a' if target is None else target) + + # compute diff for later usage + if base is not None and target is not None: + if base == 0 and target == 0: + pdiff = math.nan + elif target == 0: + pdiff = math.inf + else: + pdiff = (base - target) / target + pdiff = '{:+7.2%}'.format(pdiff) + elif c == 'pdiff': + line.append('n/a' if pdiff is None else pdiff) + elif c in extra_cols: + line.append('n/a' if base is None else base) + line.append('n/a' if target is None else target) else: - pdiff = (pval - target_pval) / target_pval - pdiff = '{:+7.2%}'.format(pdiff) - - name, system, partition, environ, pvar, punit, *extras = key - line = [name, _format_sysenv(system, partition, environ), - pvar, pval, punit, pdiff, *extras] - # Add the extra columns - line += [aggr_data[c] for c in extra_cols] + line.append('n/a' if base is None else base) + data.append(line) return data @time_function -def performance_compare(cmp, report=None, namepatt=None): +def performance_compare(cmp, report=None, namepatt=None, test_filter=None): with reraise_as(ReframeError, (ValueError,), 'could not parse comparison spec'): match = parse_cmp_spec(cmp) - if match.period_base is None and match.session_base is None: + backend = StorageBackend.default() + if match.base is None: if report is None: raise ValueError('report cannot be `None` ' 'for current run comparisons') @@ -625,37 +718,22 @@ def performance_compare(cmp, report=None, namepatt=None): tcs_base.append(tc) except IndexError: tcs_base = [] - elif match.period_base is not None: - tcs_base = StorageBackend.default().fetch_testcases_time_period( - *match.period_base, namepatt - ) else: - tcs_base = StorageBackend.default().fetch_testcases_from_session( - match.session_base, namepatt - ) - - if match.period_target: - tcs_target = StorageBackend.default().fetch_testcases_time_period( - *match.period_target, namepatt - ) - else: - tcs_target = StorageBackend.default().fetch_testcases_from_session( - match.session_target, namepatt - ) + tcs_base = backend.fetch_testcases(match.base, namepatt, test_filter) + tcs_target = backend.fetch_testcases(match.target, namepatt, test_filter) return compare_testcase_data(tcs_base, tcs_target, match.aggregator, - match.aggregator, match.extra_groups, - match.extra_cols) + match.aggregator, match.groups, match.columns) @time_function -def session_data(time_period): - '''Retrieve all sessions''' +def session_data(query): + '''Retrieve sessions''' data = [['UUID', 'Start time', 'End time', 'Num runs', 'Num cases']] extra_cols = OrderedSet() - for sess_data in StorageBackend.default().fetch_sessions_time_period( - *parse_time_period(time_period) if time_period else (None, None) + for sess_data in StorageBackend.default().fetch_sessions( + parse_query_spec(query) 
): session_info = sess_data['session_info'] record = [session_info['uuid'], @@ -666,12 +744,12 @@ def session_data(time_period): # Expand output with any user metadata for k in session_info: - if k.startswith('$'): - extra_cols.add(k[1:]) + if k not in _reserved_session_info_keys(): + extra_cols.add(k) # Add any extras recorded so far for key in extra_cols: - record.append(session_info.get(f'${key}', '')) + record.append(session_info.get(key, '')) data.append(record) @@ -690,73 +768,43 @@ def session_data(time_period): @time_function -def testcase_data(spec, namepatt=None): - storage = StorageBackend.default() - if is_uuid(spec): - testcases = storage.fetch_testcases_from_session(spec, namepatt) - else: - testcases = storage.fetch_testcases_time_period( - *parse_time_period(spec), namepatt - ) +def testcase_data(spec, namepatt=None, test_filter=None): + with reraise_as(ReframeError, (ValueError,), + 'could not parse comparison spec'): + match = parse_cmp_spec(spec, default_extra_cols=['pval']) - data = [['Name', 'SysEnv', - 'Nodelist', 'Completion Time', 'Result', 'UUID']] - for tc in testcases: - ts_completed = tc['job_completion_time_unix'] - if not ts_completed: - completion_time = 'n/a' - else: - # Always format the completion time as users can set their own - # formatting in the log record - completion_time = time.strftime(_DATETIME_FMT, - time.localtime(ts_completed)) - - data.append([ - tc['name'], - _format_sysenv(tc['system'], tc['partition'], tc['environ']), - nodelist_abbrev(tc['job_nodelist']), - completion_time, - tc['result'], - tc['uuid'] - ]) + if match.base is not None: + raise ReframeError('only one time period or session are allowed: ' + 'if you want to compare performance, ' + 'use the `--performance-compare` option') + + storage = StorageBackend.default() + testcases = storage.fetch_testcases(match.target, namepatt, test_filter) + aggregated = _aggregate_perf( + _group_testcases(testcases, match.groups, match.columns), + match.aggregator, match.columns + ) + data = [match.columns] + for aggr_data in aggregated.values(): + data.append([aggr_data[c] for c in match.columns]) return data @time_function -def session_info(uuid): +def session_info(query): '''Retrieve session details as JSON''' - session = StorageBackend.default().fetch_session_json(uuid) - if not session: - raise ReframeError(f'no such session: {uuid}') - - return session + return StorageBackend.default().fetch_sessions(parse_query_spec(query)) @time_function -def testcase_info(spec, namepatt=None): +def testcase_info(query, namepatt=None, test_filter=None): '''Retrieve test case details as JSON''' - testcases = [] - if is_uuid(spec): - session_uuid, *tc_index = spec.split(':') - session = session_info(session_uuid) - if not tc_index: - for run in session['runs']: - testcases += run['testcases'] - else: - run_index, test_index = tc_index - testcases.append( - session['runs'][run_index]['testcases'][test_index] - ) - else: - testcases = StorageBackend.default().fetch_testcases_time_period( - *parse_time_period(spec), namepatt - ) - - return testcases + return StorageBackend.default().fetch_testcases(parse_query_spec(query), + namepatt, test_filter) @time_function -def delete_session(session_uuid): - StorageBackend.default().remove_session(session_uuid) +def delete_sessions(query): + return StorageBackend.default().remove_sessions(parse_query_spec(query)) diff --git a/reframe/frontend/reporting/storage.py b/reframe/frontend/reporting/storage.py index 686332e2d..b7005f156 100644 --- 
a/reframe/frontend/reporting/storage.py +++ b/reframe/frontend/reporting/storage.py @@ -4,6 +4,7 @@ # SPDX-License-Identifier: BSD-3-Clause import abc +import functools import os import re import sqlite3 @@ -15,6 +16,8 @@ from reframe.core.exceptions import ReframeError from reframe.core.logging import getlogger, time_function, getprofiler from reframe.core.runtime import runtime +from reframe.utility import nodelist_abbrev +from ..reporting.utility import QuerySelector class StorageBackend: @@ -38,12 +41,35 @@ def store(self, report, report_file): '''Store the given report''' @abc.abstractmethod - def fetch_session_time_period(self, session_uuid): - '''Fetch the time period from specific session''' + def fetch_testcases(self, selector: QuerySelector, name_patt=None, + test_filter=None): + '''Fetch test cases based on the specified query selector. + + :arg selector: an instance of :class:`QuerySelector` that will specify + the actual type of query requested. + :arg name_patt: regex to filter test cases by name. + :arg test_filter: arbitrary Python exrpession to filter test cases, + e.g., ``'job_nodelist == "nid01"'``. + :returns: A list of matching test cases. + ''' @abc.abstractmethod - def fetch_testcases_time_period(self, ts_start, ts_end): - '''Fetch all test cases from specified period''' + def fetch_sessions(self, selector: QuerySelector): + '''Fetch sessions based on the specified query selector. + + :arg selector: an instance of :class:`QuerySelector` that will specify + the actual type of query requested. + :returns: A list of matching sessions. + ''' + + @abc.abstractmethod + def remove_sessions(self, selector: QuerySelector): + '''Remove sessions based on the specified query selector + + :arg selector: an instance of :class:`QuerySelector` that will specify + the actual type of query requested. + :returns: A list of the session UUIDs that were succesfully deleted. + ''' class _SqliteStorage(StorageBackend): @@ -80,6 +106,16 @@ def _db_matches(self, patt, item): regex = re.compile(patt) return regex.match(item) is not None + def _db_filter_json(self, expr, item): + if expr is None: + return True + + if 'job_nodelist' in expr: + item['abbrev'] = nodelist_abbrev + expr = expr.replace('job_nodelist', 'abbrev(job_nodelist)') + + return eval(expr, None, item) + def _db_connect(self, *args, **kwargs): timeout = runtime().get_option('storage/0/sqlite_conn_timeout') kwargs.setdefault('timeout', timeout) @@ -190,6 +226,33 @@ def store(self, report, report_file=None): with self._db_lock(): return self._db_store_report(conn, report, report_file) + @time_function + def _decode_sessions(self, results, sess_filter): + '''Decode sessions from the raw DB results. 
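+
+        Sessions whose ``session_info`` does not satisfy ``sess_filter``
+        are silently dropped.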
+ + Return a map of session uuids to decoded session data + ''' + sessions = {} + for uuid, json_blob in results: + sessions.setdefault(uuid, json_blob) + + # Join all sessions and decode them at once + reports_blob = '[' + ','.join(sessions.values()) + ']' + getprofiler().enter_region('json decode') + reports = jsonext.loads(reports_blob) + getprofiler().exit_region() + + # Reindex and filter sessions based on their decoded data + sessions.clear() + for rpt in reports: + try: + if self._db_filter_json(sess_filter, rpt['session_info']): + sessions[rpt['session_info']['uuid']] = rpt + except Exception: + continue + + return sessions + @time_function def _fetch_testcases_raw(self, condition): # Retrieve relevant session info and index it in Python @@ -205,20 +268,7 @@ def _fetch_testcases_raw(self, condition): results = conn.execute(query).fetchall() getprofiler().exit_region() - - sessions = {} - for uuid, json_blob in results: - sessions.setdefault(uuid, json_blob) - - # Join all sessions and decode them at once - reports_blob = '[' + ','.join(sessions.values()) + ']' - getprofiler().enter_region('json decode') - reports = jsonext.loads(reports_blob) - getprofiler().exit_region() - - # Reindex sessions with their decoded data - for rpt in reports: - sessions[rpt['session_info']['uuid']] = rpt + sessions = self._decode_sessions(results, None) # Extract the test case data by extracting their UUIDs getprofiler().enter_region('sqlite testcase query') @@ -248,108 +298,114 @@ def _fetch_testcases_raw(self, condition): return testcases @time_function - def fetch_session_time_period(self, session_uuid): + def _fetch_testcases_from_session(self, selector, + name_patt=None, test_filter=None): + query = 'SELECT uuid, json_blob from sessions' + if selector.by_session_uuid(): + query += f' WHERE uuid == "{selector.value}"' + + getprofiler().enter_region('sqlite session query') with self._db_connect(self._db_file()) as conn: - query = ('SELECT session_start_unix, session_end_unix ' - f'FROM sessions WHERE uuid == "{session_uuid}" ' - 'LIMIT 1') getlogger().debug(query) results = conn.execute(query).fetchall() - if results: - return results[0] - return None, None + getprofiler().exit_region() + if not results: + return [] + + sessions = self._decode_sessions( + results, selector.value if selector.by_session_filter() else None + ) + return [tc for sess in sessions.values() + for run in sess['runs'] for tc in run['testcases'] + if (self._db_matches(name_patt, tc['name']) and + self._db_filter_json(test_filter, tc))] @time_function - def fetch_testcases_time_period(self, ts_start, ts_end, name_pattern=None): + def _fetch_testcases_time_period(self, ts_start, ts_end, name_patt=None, + test_filter=None): expr = (f'job_completion_time_unix >= {ts_start} AND ' f'job_completion_time_unix <= {ts_end}') - if name_pattern: - expr += f' AND name REGEXP "{name_pattern}"' + if name_patt: + expr += f' AND name REGEXP "{name_patt}"' - return self._fetch_testcases_raw( + testcases = self._fetch_testcases_raw( f'({expr}) ORDER BY job_completion_time_unix' ) + filt_fn = functools.partial(self._db_filter_json, test_filter) + return [*filter(filt_fn, testcases)] @time_function - def fetch_testcases_from_session(self, session_uuid, name_pattern=None): - with self._db_connect(self._db_file()) as conn: - query = ('SELECT json_blob from sessions ' - f'WHERE uuid == "{session_uuid}"') - getlogger().debug(query) - results = conn.execute(query).fetchall() - - if not results: - return [] - - session_info = 
jsonext.loads(results[0][0]) - return [tc for run in session_info['runs'] for tc in run['testcases'] - if self._db_matches(name_pattern, tc['name'])] + def fetch_testcases(self, selector: QuerySelector, + name_patt=None, test_filter=None): + if selector.by_time_period(): + return self._fetch_testcases_time_period( + *selector.value, name_patt, test_filter + ) + else: + return self._fetch_testcases_from_session( + selector, name_patt, test_filter + ) @time_function - def fetch_sessions_time_period(self, ts_start=None, ts_end=None): - with self._db_connect(self._db_file()) as conn: - query = 'SELECT json_blob from sessions' - if ts_start or ts_end: - query += ' WHERE (' - if ts_start: - query += f'session_start_unix >= {ts_start}' - - if ts_end: - query += f' AND session_start_unix <= {ts_end}' - - query += ')' + def fetch_sessions(self, selector: QuerySelector): + query = 'SELECT uuid, json_blob FROM sessions' + if selector.by_time_period(): + ts_start, ts_end = selector.value + query += (f' WHERE (session_start_unix >= {ts_start} AND ' + f'session_start_unix <= {ts_end})') + elif selector.by_session_uuid(): + query += f' WHERE uuid == "{selector.value}"' - query += ' ORDER BY session_start_unix' - getlogger().debug(query) - results = conn.execute(query).fetchall() - - if not results: - return [] - - return [jsonext.loads(json_blob) for json_blob, *_ in results] - - @time_function - def fetch_session_json(self, uuid): with self._db_connect(self._db_file()) as conn: - query = f'SELECT json_blob FROM sessions WHERE uuid == "{uuid}"' getlogger().debug(query) results = conn.execute(query).fetchall() - return jsonext.loads(results[0][0]) if results else {} - - def _do_remove(self, uuid): - with self._db_lock(): - with self._db_connect(self._db_file()) as conn: - # Enable foreign keys for delete action to have cascade effect - conn.execute('PRAGMA foreign_keys = ON') - - # Check first if the uuid exists - query = f'SELECT * FROM sessions WHERE uuid == "{uuid}"' - getlogger().debug(query) - if not conn.execute(query).fetchall(): - raise ReframeError(f'no such session: {uuid}') + session = self._decode_sessions( + results, selector.value if selector.by_session_filter() else None + ) + return [*session.values()] + + def _do_remove(self, conn, uuids): + '''Remove sessions''' + + # Enable foreign keys for delete action to have cascade effect + conn.execute('PRAGMA foreign_keys = ON') + uuids_sql = ','.join(f'"{uuid}"' for uuid in uuids) + query = f'DELETE FROM sessions WHERE uuid IN ({uuids_sql})' + getlogger().debug(query) + conn.execute(query).fetchall() + + # Retrieve the uuids that have been removed + query = f'SELECT uuid FROM sessions WHERE uuid IN ({uuids_sql})' + getlogger().debug(query) + results = conn.execute(query).fetchall() + not_removed = {rec[0] for rec in results} + return list(set(uuids) - not_removed) + + def _do_remove2(self, conn, uuids): + '''Remove sessions using the RETURNING keyword''' + + # Enable foreign keys for delete action to have cascade effect + conn.execute('PRAGMA foreign_keys = ON') + uuids_sql = ','.join(f'"{uuid}"' for uuid in uuids) + query = (f'DELETE FROM sessions WHERE uuid IN ({uuids_sql}) ' + 'RETURNING uuid') + getlogger().debug(query) + results = conn.execute(query).fetchall() + return [rec[0] for rec in results] - query = f'DELETE FROM sessions WHERE uuid == "{uuid}"' - getlogger().debug(query) - conn.execute(query) + @time_function + def remove_sessions(self, selector: QuerySelector): + if selector.by_session_uuid(): + uuids = [selector.value] + else: + 
uuids = [sess['session_info']['uuid'] + for sess in self.fetch_sessions(selector)] - def _do_remove2(self, uuid): - '''Remove a session using the RETURNING keyword''' with self._db_lock(): with self._db_connect(self._db_file()) as conn: - # Enable foreign keys for delete action to have cascade effect - conn.execute('PRAGMA foreign_keys = ON') - query = (f'DELETE FROM sessions WHERE uuid == "{uuid}" ' - 'RETURNING *') - getlogger().debug(query) - deleted = conn.execute(query).fetchall() - if not deleted: - raise ReframeError(f'no such session: {uuid}') - - @time_function - def remove_session(self, uuid): - if sqlite3.sqlite_version_info >= (3, 35, 0): - self._do_remove2(uuid) - else: - self._do_remove(uuid) + if sqlite3.sqlite_version_info >= (3, 35, 0): + return self._do_remove2(conn, uuids) + else: + return self._do_remove(conn, uuids) diff --git a/reframe/frontend/reporting/utility.py b/reframe/frontend/reporting/utility.py index 54de563f8..d1a772b7c 100644 --- a/reframe/frontend/reporting/utility.py +++ b/reframe/frontend/reporting/utility.py @@ -10,7 +10,6 @@ from collections import namedtuple from datetime import datetime, timedelta, timezone from numbers import Number -from .storage import StorageBackend class Aggregator: @@ -28,6 +27,8 @@ def create(cls, name, *args, **kwargs): return AggrMin(*args, **kwargs) elif name == 'max': return AggrMax(*args, **kwargs) + elif name == 'count': + return AggrCount(*args, **kwargs) elif name == 'join_uniq': return AggrJoinUniqueValues(*args, **kwargs) else: @@ -85,6 +86,18 @@ def __call__(self, iterable): return self.__delim.join(unique_vals) +class AggrCount(Aggregator): + def __call__(self, iterable): + if hasattr(iterable, '__len__'): + return len(iterable) + + count = 0 + for _ in iterable: + count += 1 + + return count + + def _parse_timestamp(s): if isinstance(s, Number): return s @@ -140,73 +153,132 @@ def is_uuid(s): return _UUID_PATTERN.match(s) is not None +class QuerySelector: + '''A union class for the different session and testcase queries. + + A session or testcase query can be of one of the following kinds: + + - Query by time period + - Query by session uuid + - Query by session filtering expression + + This class holds only a single value that is interpreted differently, + depending on how it was constructed. + There are methods to query the actual kind of the held value, so that + callers can take appropriate action. 
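+
+    Instances are created with one of the ``from_*`` alternative constructors
+    and inspected with the corresponding ``by_*`` predicates.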
+ ''' + BY_SESS_FILTER = 1 + BY_SESS_UUID = 2 + BY_TIME_PERIOD = 3 + + def __init__(self, value, kind): + self.__value = value + self.__kind = kind + + @property + def value(self): + return self.__value + + @property + def kind(self): + return self.__kind + + def by_time_period(self): + return self.__kind == self.BY_TIME_PERIOD + + def by_session_uuid(self): + return self.__kind == self.BY_SESS_UUID + + def by_session_filter(self): + return self.__kind == self.BY_SESS_FILTER + + @classmethod + def from_time_period(cls, ts_start, ts_end): + return cls((ts_start, ts_end), cls.BY_TIME_PERIOD) + + @classmethod + def from_session_uuid(cls, uuid): + return cls(uuid, cls.BY_SESS_UUID) + + @classmethod + def from_session_filter(cls, sess_filter): + return cls(sess_filter, cls.BY_SESS_FILTER) + + def __repr__(self): + clsname = type(self).__name__ + return f'{clsname}(value={self.__value}, kind={self.__kind})' + + def parse_time_period(s): - if is_uuid(s): - # Retrieve the period of a full session - try: - session_uuid = s - except IndexError: - raise ValueError(f'invalid session uuid: {s}') from None - else: - backend = StorageBackend.default() - ts_start, ts_end = backend.fetch_session_time_period( - session_uuid - ) - if not ts_start or not ts_end: - raise ValueError(f'no such session: {session_uuid}') - else: - try: - ts_start, ts_end = s.split(':') - except ValueError: - raise ValueError(f'invalid time period spec: {s}') from None + try: + ts_start, ts_end = s.split(':') + except ValueError: + raise ValueError(f'invalid time period spec: {s}') from None return _parse_timestamp(ts_start), _parse_timestamp(ts_end) -def _parse_extra_cols(s): - if s and not s.startswith('+'): +def _parse_columns(s, base_columns=None): + base_columns = base_columns or [] + if not s: + return base_columns + + if s.startswith('+'): + if ',' in s: + raise ValueError(f'invalid column spec: {s}') + + return base_columns + [x for x in s.split('+')[1:] if x] + + if '+' in s: raise ValueError(f'invalid column spec: {s}') - # Remove any empty columns - return [x for x in s.split('+')[1:] if x] + return s.split(',') -def _parse_aggregation(s): +def _parse_aggregation(s, base_columns=None): try: - op, extra_groups = s.split(':') + op, group_cols = s.split(':') except ValueError: raise ValueError(f'invalid aggregate function spec: {s}') from None - return Aggregator.create(op), _parse_extra_cols(extra_groups) + return Aggregator.create(op), _parse_columns(group_cols, base_columns) -_Match = namedtuple('_Match', - ['period_base', 'period_target', - 'session_base', 'session_target', - 'aggregator', 'extra_groups', 'extra_cols']) +def parse_query_spec(s): + if s is None: + return None + if is_uuid(s): + return QuerySelector.from_session_uuid(s) -def parse_cmp_spec(spec): - def _parse_period_spec(s): - if s is None: - return None, None + if s.startswith('?'): + return QuerySelector.from_session_filter(s[1:]) - if is_uuid(s): - return s, None + return QuerySelector.from_time_period(*parse_time_period(s)) - return None, parse_time_period(s) +_Match = namedtuple('_Match', + ['base', 'target', 'aggregator', 'groups', 'columns']) + +DEFAULT_GROUP_BY = ['name', 'sysenv', 'pvar', 'punit'] +DEFAULT_EXTRA_COLS = ['pval', 'pdiff'] + + +def parse_cmp_spec(spec, default_group_by=None, default_extra_cols=None): + default_group_by = default_group_by or list(DEFAULT_GROUP_BY) + default_extra_cols = default_extra_cols or list(DEFAULT_EXTRA_COLS) parts = spec.split('/') if len(parts) == 3: - period_base, period_target, aggr, cols = None, *parts + 
base_spec, target_spec, aggr, cols = None, *parts elif len(parts) == 4: - period_base, period_target, aggr, cols = parts + base_spec, target_spec, aggr, cols = parts else: raise ValueError(f'invalid cmp spec: {spec}') - session_base, period_base = _parse_period_spec(period_base) - session_target, period_target = _parse_period_spec(period_target) - aggr_fn, extra_groups = _parse_aggregation(aggr) - extra_cols = _parse_extra_cols(cols) - return _Match(period_base, period_target, session_base, session_target, - aggr_fn, extra_groups, extra_cols) + base = parse_query_spec(base_spec) + target = parse_query_spec(target_spec) + aggr_fn, group_cols = _parse_aggregation(aggr, default_group_by) + + # Update base columns for listing + columns = _parse_columns(cols, group_cols + default_extra_cols) + return _Match(base, target, aggr_fn, group_cols, columns) diff --git a/reframe/schemas/config.json b/reframe/schemas/config.json index 4e8b868be..f0e03ecab 100644 --- a/reframe/schemas/config.json +++ b/reframe/schemas/config.json @@ -588,7 +588,7 @@ "general/report_junit": null, "general/resolve_module_conflicts": true, "general/save_log_files": false, - "general/table_format": "pretty", + "general/table_format": "outline", "general/target_systems": ["*"], "general/timestamp_dirs": "%Y%m%dT%H%M%S%z", "general/trap_job_errors": false, diff --git a/unittests/test_cli.py b/unittests/test_cli.py index a8800b9b8..9b67f2068 100644 --- a/unittests/test_cli.py +++ b/unittests/test_cli.py @@ -1257,7 +1257,7 @@ def test_testlib_inherit_fixture_in_different_files(run_reframe): assert 'FAILED' not in stdout -@pytest.fixture(params=['csv', 'plain', 'pretty']) +@pytest.fixture(params=['csv', 'plain', 'grid', 'outline']) def table_format(request): return request.param @@ -1291,27 +1291,20 @@ def test_storage_options(run_reframe, tmp_path, table_format): stdout = assert_no_crash( *run_reframe2(action=f'--describe-stored-session={uuid}') )[1] - session_json = json.loads(stdout) + sessions = json.loads(stdout) # List test cases by session - assert_no_crash(*run_reframe2(action=f'--list-stored-testcases={uuid}')) + assert_no_crash(*run_reframe2( + action=f'--list-stored-testcases={uuid}/mean:/' + )) assert_no_crash( *run_reframe2(action=f'--describe-stored-testcases={uuid}') ) - # Check hiding of table column - stdout = assert_no_crash(*run_reframe2( - action=f'--list-stored-testcases={uuid}', - more_options=['--table-hide-columns=SysEnv,Nodelist,UUID'] - ))[1] - assert 'SysEnv' not in stdout - assert 'Nodelist' not in stdout - assert 'UUID' not in stdout - # List test cases by time period - ts_start = session_json['session_info']['time_start'] + ts_start = sessions[0]['session_info']['time_start'] assert_no_crash( - *run_reframe2(action=f'--list-stored-testcases={ts_start}:now') + *run_reframe2(action=f'--list-stored-testcases={ts_start}:now/mean:/') ) assert_no_crash( *run_reframe2(action=f'--describe-stored-testcases={ts_start}:now') @@ -1334,6 +1327,7 @@ def test_session_annotations(run_reframe): checkpath=['unittests/resources/checks/frontend_checks.py'], action='-r', more_options=['--session-extras', 'key1=val1,key2=val2', + '--session-extras', 'key3=val3', '-n', '^PerformanceFailureCheck'] ), exitcode=1) diff --git a/unittests/test_reporting.py b/unittests/test_reporting.py index 3b602b016..073356aea 100644 --- a/unittests/test_reporting.py +++ b/unittests/test_reporting.py @@ -17,11 +17,17 @@ import reframe.frontend.dependencies as dependencies import reframe.frontend.reporting as reporting import 
reframe.frontend.reporting.storage as report_storage -import reframe.frontend.reporting.utility as report_util +from reframe.frontend.reporting.utility import (parse_cmp_spec, is_uuid, + QuerySelector, + DEFAULT_GROUP_BY, + DEFAULT_EXTRA_COLS) from reframe.core.exceptions import ReframeError from reframe.frontend.reporting import RunReport +_DEFAULT_BASE_COLS = DEFAULT_GROUP_BY + DEFAULT_EXTRA_COLS + + # NOTE: We could move this to utility class _timer: '''Context manager for timing''' @@ -204,9 +210,10 @@ def time_period(request): def test_parse_cmp_spec_period(time_period): spec, duration = time_period duration = int(duration) - match = report_util.parse_cmp_spec(f'{spec}/{spec}/mean:/') - for period in ('period_base', 'period_target'): - ts_start, ts_end = getattr(match, period) + match = parse_cmp_spec(f'{spec}/{spec}/mean:/') + for query in ('base', 'target'): + assert getattr(match, query).by_time_period() + ts_start, ts_end = getattr(match, query).value if 'now' in spec: # Truncate splits of seconds if using `now` timestamps ts_start = int(ts_start) @@ -215,18 +222,18 @@ def test_parse_cmp_spec_period(time_period): assert ts_end - ts_start == duration # Check variant without base period - match = report_util.parse_cmp_spec(f'{spec}/mean:/') - assert match.period_base is None + match = parse_cmp_spec(f'{spec}/mean:/') + assert match.base is None @pytest.fixture(params=['first', 'last', 'mean', 'median', - 'min', 'max']) + 'min', 'max', 'count']) def aggregator(request): return request.param def test_parse_cmp_spec_aggregations(aggregator): - match = report_util.parse_cmp_spec(f'now-1m:now/now-1d:now/{aggregator}:/') + match = parse_cmp_spec(f'now-1m:now/now-1d:now/{aggregator}:/') data = [1, 2, 3, 4, 5] if aggregator == 'first': match.aggregator(data) == data[0] @@ -240,57 +247,72 @@ def test_parse_cmp_spec_aggregations(aggregator): match.aggregator(data) == 3 elif aggregator == 'mean': match.aggregator(data) == sum(data) / len(data) + elif aggregator == 'count': + match.aggregator(data) == len(data) # Check variant without base period - match = report_util.parse_cmp_spec(f'now-1d:now/{aggregator}:/') - assert match.period_base is None + match = parse_cmp_spec(f'now-1d:now/{aggregator}:/') + assert match.base is None -@pytest.fixture(params=[('', []), ('+', []), - ('+col1', ['col1']), ('+col1+', ['col1']), - ('+col1+col2', ['col1', 'col2'])]) -def extra_cols(request): +@pytest.fixture(params=[('', DEFAULT_GROUP_BY), + ('+', DEFAULT_GROUP_BY), + ('+col1', DEFAULT_GROUP_BY + ['col1']), + ('+col1+', DEFAULT_GROUP_BY + ['col1']), + ('+col1+col2', DEFAULT_GROUP_BY + ['col1', 'col2']), + ('col1,col2', ['col1', 'col2'])]) +def group_by_columns(request): return request.param -def test_parse_cmp_spec_group_by(extra_cols): - spec, expected = extra_cols - match = report_util.parse_cmp_spec( +def test_parse_cmp_spec_group_by(group_by_columns): + spec, expected = group_by_columns + match = parse_cmp_spec( f'now-1m:now/now-1d:now/min:{spec}/' ) - assert match.extra_groups == expected + assert match.groups == expected # Check variant without base period - match = report_util.parse_cmp_spec(f'now-1d:now/min:{spec}/') - assert match.period_base is None + match = parse_cmp_spec(f'now-1d:now/min:{spec}/') + assert match.base is None -def test_parse_cmp_spec_extra_cols(extra_cols): - spec, expected = extra_cols - match = report_util.parse_cmp_spec( +@pytest.fixture(params=[('', _DEFAULT_BASE_COLS), + ('+', _DEFAULT_BASE_COLS), + ('+col1', _DEFAULT_BASE_COLS + ['col1']), + ('+col1+', _DEFAULT_BASE_COLS + 
['col1']), + ('+col1+col2', _DEFAULT_BASE_COLS + ['col1', 'col2']), + ('col1,col2', ['col1', 'col2'])]) +def columns(request): + return request.param + + +def test_parse_cmp_spec_extra_cols(columns): + spec, expected = columns + match = parse_cmp_spec( f'now-1m:now/now-1d:now/min:/{spec}' ) - assert match.extra_cols == expected + assert match.columns == expected # Check variant without base period - match = report_util.parse_cmp_spec(f'now-1d:now/min:/{spec}') - assert match.period_base is None + match = parse_cmp_spec(f'now-1d:now/min:/{spec}') + assert match.base is None def test_is_uuid(): # Test a standard UUID - assert report_util.is_uuid('7daf4a71-997b-4417-9bda-225c9cab96c2') + assert is_uuid('7daf4a71-997b-4417-9bda-225c9cab96c2') # Test a run UUID - assert report_util.is_uuid('7daf4a71-997b-4417-9bda-225c9cab96c2:0') + assert is_uuid('7daf4a71-997b-4417-9bda-225c9cab96c2:0') # Test a test case UUID - assert report_util.is_uuid('7daf4a71-997b-4417-9bda-225c9cab96c2:0:1') + assert is_uuid('7daf4a71-997b-4417-9bda-225c9cab96c2:0:1') # Test invalid UUIDs - assert not report_util.is_uuid('7daf4a71-997b-4417-9bda-225c9cab96c') - assert not report_util.is_uuid('7daf4a71-997b-4417-9bda-225c9cab96c2:') - assert not report_util.is_uuid('foo') + assert not is_uuid('7daf4a71-997b-4417-9bda-225c9cab96c') + assert not is_uuid('7daf4a71-997b-4417-9bda-225c9cab96c2:') + assert not is_uuid('foo') @pytest.fixture(params=[ @@ -309,17 +331,38 @@ def _uuids(s): base, target = None, None if len(parts) == 3: base = None - target = parts[0] if report_util.is_uuid(parts[0]) else None + target = parts[0] if is_uuid(parts[0]) else None else: - base = parts[0] if report_util.is_uuid(parts[0]) else None - target = parts[1] if report_util.is_uuid(parts[1]) else None + base = parts[0] if is_uuid(parts[0]) else None + target = parts[1] if is_uuid(parts[1]) else None return base, target - match = report_util.parse_cmp_spec(uuid_spec) + match = parse_cmp_spec(uuid_spec) base_uuid, target_uuid = _uuids(uuid_spec) - assert match.session_base == base_uuid - assert match.session_target == target_uuid + if match.base.by_session_uuid(): + assert match.base.value == base_uuid + + if match.target.by_session_uuid(): + assert match.target.value == target_uuid + + +@pytest.fixture(params=[ + '?xyz == "123"/?xyz == "789"/mean:/', + '?xyz == "789"/mean:/' +]) +def sess_filter(request): + return request.param + + +def test_parse_cmp_spec_with_filter(sess_filter): + match = parse_cmp_spec(sess_filter) + if match.base: + assert match.base.by_session_filter() + assert match.base.value == 'xyz == "123"' + + assert match.target.by_session_filter() + assert match.target.value == 'xyz == "789"' @pytest.fixture(params=['2024:07:01T12:34:56', '20240701', '20240701:', @@ -331,32 +374,41 @@ def invalid_time_period(request): def test_parse_cmp_spec_invalid_period(invalid_time_period): with pytest.raises(ValueError): - report_util.parse_cmp_spec(f'{invalid_time_period}/now-1d:now/min:/') + parse_cmp_spec(f'{invalid_time_period}/now-1d:now/min:/') with pytest.raises(ValueError): - report_util.parse_cmp_spec(f'now-1d:now/{invalid_time_period}/min:/') + parse_cmp_spec(f'now-1d:now/{invalid_time_period}/min:/') -@pytest.fixture(params=['mean', 'foo:', 'mean:col1+col2', 'mean:col1,col2']) +def test_parse_cmp_invalid_filter(): + invalid_sess_filter = 'xyz == "123"' + with pytest.raises(ValueError): + parse_cmp_spec(f'{invalid_sess_filter}/now-1d:now/min:/') + + with pytest.raises(ValueError): + 
parse_cmp_spec(f'now-1d:now/{invalid_sess_filter}/min:/') + + +@pytest.fixture(params=['mean', 'foo:', 'mean:col1+col2']) def invalid_aggr_spec(request): return request.param def test_parse_cmp_spec_invalid_aggregation(invalid_aggr_spec): with pytest.raises(ValueError): - report_util.parse_cmp_spec( + print(parse_cmp_spec( f'now-1m:now/now-1d:now/{invalid_aggr_spec}/' - ) + )) -@pytest.fixture(params=['col1+col2', 'col1,col2']) +@pytest.fixture(params=['col1+col2', '+col1,col2']) def invalid_col_spec(request): return request.param def test_parse_cmp_spec_invalid_extra_cols(invalid_col_spec): with pytest.raises(ValueError): - report_util.parse_cmp_spec( + parse_cmp_spec( f'now-1m:now/now-1d:now/mean:/{invalid_col_spec}' ) @@ -374,7 +426,7 @@ def various_invalid_specs(request): def test_parse_cmp_spec_various_invalid(various_invalid_specs): with pytest.raises(ValueError): - report_util.parse_cmp_spec(various_invalid_specs) + parse_cmp_spec(various_invalid_specs) def test_storage_api(make_async_runner, make_cases, common_exec_ctx, @@ -401,41 +453,30 @@ def _count_failed(testcases): backend = report_storage.StorageBackend.default() - # Test `fetch_sessions_time_period` - stored_sessions = backend.fetch_sessions_time_period() - assert len(stored_sessions) == 2 - for i, sess in enumerate(stored_sessions): - assert sess['session_info']['uuid'] == uuids[i] + from_time_period = QuerySelector.from_time_period + from_session_uuid = QuerySelector.from_session_uuid - # Test the time period version + # Test `fetch_sessions`: time period version now = time.time() - stored_sessions = backend.fetch_sessions_time_period(now - 60, now) + stored_sessions = backend.fetch_sessions(from_time_period(0, now)) assert len(stored_sessions) == 2 for i, sess in enumerate(stored_sessions): assert sess['session_info']['uuid'] == uuids[i] - # Test `fetch_session_json` + # Test `fetch_session`: session uuid version for uuid in uuids: - stored_session = backend.fetch_session_json(uuid) - assert stored_session['session_info']['uuid'] == uuid + stored_sessions = backend.fetch_sessions(from_session_uuid(uuid)) + assert stored_sessions[0]['session_info']['uuid'] == uuid # Test an invalid uuid - assert backend.fetch_session_json(0) == {} + assert backend.fetch_sessions(from_session_uuid(0)) == [] - # Test `fetch_session_time_period` - for i, uuid in enumerate(uuids): - ts_session = backend.fetch_session_time_period(uuid) - assert ts_session == timestamps[i] - - # Test an invalid uuid - assert backend.fetch_session_time_period(0) == (None, None) - - # Test `fetch_testcases_time_period` - testcases = backend.fetch_testcases_time_period(timestamps[0][0], - timestamps[1][1]) + # Test `fetch_testcases`: time period version + testcases = backend.fetch_testcases(from_time_period(timestamps[0][0], + timestamps[1][1])) # NOTE: test cases without an associated (run) job are not fetched by - # `fetch_testcases_time_period`; in + # `fetch_testcases` (time period version); in # this case 3 test cases per session are ignored: `BadSetupCheckEarly`, # `BadSetupCheck`, `CompileOnlyHelloTest`, which requires us to adapt the # expected counts below @@ -443,39 +484,40 @@ def _count_failed(testcases): assert _count_failed(testcases) == 6 # Test name filtering - testcases = backend.fetch_testcases_time_period(timestamps[0][0], - timestamps[1][1], - '^HelloTest') + testcases = backend.fetch_testcases( + from_time_period(timestamps[0][0], timestamps[1][1]), '^HelloTest' + ) assert len(testcases) == 2 assert _count_failed(testcases) == 0 # Test the 
-    assert backend.fetch_testcases_time_period(timestamps[1][1],
-                                               timestamps[0][0]) == []
+    assert backend.fetch_testcases(from_time_period(timestamps[1][1],
+                                                    timestamps[0][0])) == []
 
-    # Test `fetch_testcases_from_session`
+    # Test `fetch_testcases`: session version
     for i, uuid in enumerate(uuids):
-        testcases = backend.fetch_testcases_from_session(uuid)
+        testcases = backend.fetch_testcases(from_session_uuid(uuid))
         assert len(testcases) == 9
         assert _count_failed(testcases) == 5
 
         # Test name filtering
-        testcases = backend.fetch_testcases_from_session(uuid, '^HelloTest')
+        testcases = backend.fetch_testcases(from_session_uuid(uuid),
+                                            '^HelloTest')
         assert len(testcases) == 1
         assert _count_failed(testcases) == 0
 
     # Test an invalid uuid
-    assert backend.fetch_testcases_from_session(0) == []
+    assert backend.fetch_testcases(from_session_uuid(0)) == []
 
     # Test session removal
-    backend.remove_session(uuids[-1])
-    assert len(backend.fetch_sessions_time_period()) == 1
+    removed = backend.remove_sessions(from_session_uuid(uuids[-1]))
+    assert removed == [uuids[-1]]
+    assert len(backend.fetch_sessions(from_time_period(0, now))) == 1
 
-    testcases = backend.fetch_testcases_time_period(timestamps[0][0],
-                                                    timestamps[1][1])
+    testcases = backend.fetch_testcases(from_time_period(timestamps[0][0],
+                                                         timestamps[1][1]))
     assert len(testcases) == 6
     assert _count_failed(testcases) == 3
 
     # Try an invalid uuid
-    with pytest.raises(ReframeError):
-        backend.remove_session(0)
+    assert backend.remove_sessions(from_session_uuid(0)) == []
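
The hunks above spread the new query machinery across several modules, so the sketch below shows how the pieces are meant to fit together once the patch is applied. It is illustrative only and not part of the patch: the ``partition`` and ``result`` column names, the ``xyz`` filter expression and the UUID are made-up placeholders::

    # Illustrative sketch only -- not part of the patch
    from reframe.frontend.reporting.utility import (parse_cmp_spec,
                                                    QuerySelector,
                                                    DEFAULT_GROUP_BY,
                                                    DEFAULT_EXTRA_COLS)

    # Target-only spec: no base to compare against; aggregate the last day's
    # test cases with `mean` and list the default columns
    m = parse_cmp_spec('now-1d:now/mean:/')
    assert m.base is None
    assert m.target.by_time_period()
    assert m.groups == DEFAULT_GROUP_BY
    assert m.columns == DEFAULT_GROUP_BY + DEFAULT_EXTRA_COLS

    # Base selected with a session filter (leading `?`), target by time
    # period; `partition` and `result` are hypothetical column names
    m = parse_cmp_spec('?xyz == "123"/now-1d:now/median:+partition/+result')
    assert m.base.by_session_filter() and m.base.value == 'xyz == "123"'
    assert m.target.by_time_period()
    assert m.groups == DEFAULT_GROUP_BY + ['partition']
    assert m.columns == m.groups + DEFAULT_EXTRA_COLS + ['result']

    # Selectors can also be built directly, e.g. for the storage backend's
    # `fetch_sessions()` / `fetch_testcases()` / `remove_sessions()` methods
    sel = QuerySelector.from_session_uuid('7daf4a71-997b-4417-9bda-225c9cab96c2')
    assert sel.by_session_uuid()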