There are a bunch of things we could test for when actually running the firmware test suites to infer whether the tests ran as expected.
For example, if all the test suites run and pass, but take 5 hours to finish running, that indicates a bug somewhere. In fact, people are hitting issues with long delays as explained in #26. It would be cool to detect this and flag it as a potential issue. Note that this test is definitely separate from the regular firmware test suites we run, since checking the boot time is more internal to the operation of the luv live image and doesn't necessarily indicate a firmware issue.
So, we should build selftests into the live image to detect potential problems.
But we also need offline tests that we can run to check things like the luv test manager or the luv test parser, to ensure that new commits don't break the schema syntax we use to parse results, and to test the robustness of the parsers, as noted in #25.
The offline tests need only be run as part of a continuous integration setup, i.e. when new commits are merged, and don't need to be built into the live image.
Expanding on the above to make things a little more concrete...
For the online tests (those that run from the live image), what I'm thinking is a new recipe named luv-live-selftests that contains tests to check things like the following (a rough sketch follows the list):
- How long did the kernel take to boot? And is that a reasonable time?
- How long did each test suite take to run? And is that a reasonable time?
- Did any unit test crash/exit unexpectedly?
- Did we run out of space when storing the logs?
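To make that a bit more concrete, here is a minimal sketch of what such a check could look like. Everything in it is an assumption for illustration: the thresholds, the `/var/log/luv` results directory, and the idea of reading elapsed time from `/proc/uptime` are placeholders, not the actual luv-live layout.

```python
#!/usr/bin/env python3
"""Rough sketch of a luv-live-selftests check (all names/paths are assumptions)."""
import os
import shutil

# Hypothetical limits; real values would come from measurements on known-good runs.
MAX_BOOT_SECONDS = 300              # flag boots that take longer than 5 minutes
MAX_SUITE_SECONDS = 3600            # flag any single suite that runs over an hour
MIN_FREE_BYTES = 64 * 1024 * 1024   # flag the results location if < 64 MiB free
RESULTS_DIR = "/var/log/luv"        # assumed location of the stored logs


def check_boot_time():
    """Compare system uptime at selftest start against a sanity threshold."""
    with open("/proc/uptime") as f:
        uptime = float(f.read().split()[0])
    if uptime > MAX_BOOT_SECONDS:
        print(f"WARN: boot + startup took {uptime:.0f}s "
              f"(expected under {MAX_BOOT_SECONDS}s) -- see #26")


def check_suite_duration(suite_name, seconds):
    """Hypothetically called with each suite's wall-clock time."""
    if seconds > MAX_SUITE_SECONDS:
        print(f"WARN: suite {suite_name} took {seconds:.0f}s, "
              f"which looks like a hang rather than a real run")


def check_log_space():
    """Make sure we did not run out of space while storing the logs."""
    usage = shutil.disk_usage(RESULTS_DIR if os.path.isdir(RESULTS_DIR) else "/")
    if usage.free < MIN_FREE_BYTES:
        print(f"WARN: only {usage.free // (1024 * 1024)} MiB free for logs")


if __name__ == "__main__":
    check_boot_time()
    check_log_space()
```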
The offline tests will need to be very different, since they're not built into the live image, and for lack of a better idea, we could name them luv-selftests. I'm unsure what the best way to design this recipe is, perhaps a native package?
Some general ideas for offline tests (a rough sketch follows the list):
- If the tests crash, does the parser handle that gracefully? Could we fuzz-test the parsers?
- Has the live image grown unexpectedly (from commits that change the image generation)?
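As a starting point for the parser robustness idea, something like the sketch below could run in CI. The `parse_results()` function and the sample logs here are purely hypothetical stand-ins; the real test would import the actual luv parsers and feed them logs captured from known-good runs plus deliberately corrupted variants (#25).

```python
#!/usr/bin/env python3
"""Rough sketch of an offline luv-selftests parser-robustness check."""
import random
import string


def parse_results(raw_log):
    """Stand-in for a real luv parser: count PASS/FAIL lines, never crash."""
    passed = failed = 0
    for line in raw_log.splitlines():
        if "PASS" in line:
            passed += 1
        elif "FAIL" in line:
            failed += 1
    return {"pass": passed, "fail": failed}


def corrupt(text, rng):
    """Produce a damaged copy of a log: truncation plus random printable noise."""
    cut = rng.randrange(len(text)) if text else 0
    noise = "".join(rng.choice(string.printable) for _ in range(32))
    return text[:cut] + noise


def test_parser_survives_garbage():
    rng = random.Random(25)  # fixed seed so CI failures are reproducible
    good_log = "test_a PASS\ntest_b FAIL\ntest_c PASS\n"
    for _ in range(1000):
        mangled = corrupt(good_log, rng)
        # The only requirement: no crash, and a result dict always comes back.
        result = parse_results(mangled)
        assert isinstance(result, dict)


if __name__ == "__main__":
    test_parser_survives_garbage()
    print("parser robustness sketch: OK")
```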