
Add selftests to check validity of tests themselves #27

Open
mfleming opened this issue Oct 18, 2014 · 2 comments


@mfleming
Contributor

There are a bunch of things we could test for when actually running the firmware test suites to infer whether the tests ran as expected.

For example, if all the test suites run and pass, but take 5 hours to finish running, that indicates a bug somewhere. In fact, people are hitting issues with long delays as explained in #26. It would be cool to detect this and flag it as a potential issue. Note that this test is definitely separate from the regular firmware test suites we run, since checking the boot time is more internal to the operation of the luv live image and doesn't necessarily indicate a firmware issue.

So, we should build selftests into the live image to detect potential problems.

But we also need offline tests that we can run to check things like the luv test manager or the luv test parser, to ensure that new commits don't break the schema syntax we use to parse results, and to test the robustness of the parsers, as noted in #25.

The offline tests need only be run as part of a continuous integration setup, i.e. when new commits are merged, and don't need to be built into the live image.
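
To give a rough idea of the parser side of this, here's a minimal sketch of the kind of offline unit test I have in mind. The parse_results function below is a stand-in, not the real luv parser interface; the shape of the test is the point: feed the parser a truncated, crashed-looking log and check that it fails gracefully instead of raising.

```python
# Hypothetical offline unit test for parser robustness. parse_results is a
# placeholder for whatever interface the real luv test parser exposes.
import unittest

def parse_results(raw_log):
    """Stand-in parser: returns (test, result) tuples for complete records."""
    results = []
    for line in raw_log.splitlines():
        fields = line.split()
        if len(fields) >= 2:
            results.append((fields[0], fields[1]))
    return results

class TestParserRobustness(unittest.TestCase):
    def test_truncated_log(self):
        # A log cut off mid-line, as if the test suite crashed while writing it.
        truncated = "bits PASS\nchipsec FAIL\nfwts PA"
        results = parse_results(truncated)
        # No exception, and the complete records are still parsed.
        self.assertIn(("bits", "PASS"), results)
        self.assertIn(("chipsec", "FAIL"), results)

    def test_empty_log(self):
        self.assertEqual(parse_results(""), [])

if __name__ == "__main__":
    unittest.main()
```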

@mfleming
Contributor Author

mfleming commented Oct 19, 2014

Expanding on the above to make things a little more concrete...

For the online tests (those that run from the live image), what I'm thinking is a new recipe named luv-live-selftests that contains tests for things like the following (a rough sketch of a couple of these checks comes after the list):

  • How long did the kernel take to boot? And is that a reasonable time?
  • How long did each test suite take to run? And is that a reasonable time?
  • Did any unit test crash/exit unexpectedly?
  • Did we run out of space when storing the logs?

The offline tests will need to be very different, since they're not built into the live image, and for lack of a better name, we could call them luv-selftests. I'm unsure what the best way to design this recipe is; perhaps a native package?

Some general ideas for offline tests (a sketch of the image-size check follows the list):

  • If the tests crash, does the parser handle that gracefully? Could we fuzz-test the parsers?
  • Has the live image grown unexpectedly (from commits that change the image generation)?
  • Does the partition scheme work across OSes? (See #23: luv-results partition is not shown in Windows)
  • Does a single commit touch more than one layer (making maintenance difficult)?
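
For instance, the image-size check could be little more than the sketch below, run by a CI job after each build. The image filename, baseline file, and growth threshold are placeholders, not values from the actual build.

```python
#!/usr/bin/env python3
# Sketch of an offline check that flags unexpected growth of the live image.
# The image path, baseline file, and 50% threshold are illustrative only.
import json
import os
import sys

IMAGE = "luv-live-image.img"
BASELINE_FILE = "image-size-baseline.json"
MAX_GROWTH = 0.50

def main():
    size = os.path.getsize(IMAGE)
    if not os.path.exists(BASELINE_FILE):
        # First run: record the current size as the baseline.
        with open(BASELINE_FILE, "w") as f:
            json.dump({"size": size}, f)
        print("baseline recorded: %d bytes" % size)
        return 0
    with open(BASELINE_FILE) as f:
        baseline = json.load(f)["size"]
    growth = (size - baseline) / float(baseline)
    print("image: %d bytes (baseline %d, growth %+.1f%%)" % (size, baseline, growth * 100))
    return 1 if growth > MAX_GROWTH else 0

if __name__ == "__main__":
    sys.exit(main())
```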

@meghadey
Contributor

Most of the online tests have already been incorporated in some form or another.

Offline tests:

  1. Fuzzing parsers: (low priority) need to add unit tests for the parsers themselves.
  2. Image size increase: check on buildbot (flag growth beyond an arbitrary percentage, say 50%).
  3. Partition scheme: check on Mac OS.
  4. Single commit touching more than one layer: write a script (sketched below).
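
For item 4, a script along these lines would do. It assumes, purely for illustration, that each top-level meta-* directory in the repository corresponds to one layer; that naming convention is an assumption, not something checked against the actual luv tree.

```python
#!/usr/bin/env python3
# Sketch for item 4: flag a commit that touches more than one layer. Assumes
# (for illustration) that each top-level meta-* directory is a layer.
import subprocess
import sys

def layers_touched(commit):
    out = subprocess.check_output(
        ["git", "diff-tree", "--no-commit-id", "--name-only", "-r", commit]
    ).decode()
    layers = set()
    for path in out.splitlines():
        top = path.split("/", 1)[0]
        if top.startswith("meta-"):   # assumed layer naming convention
            layers.add(top)
    return layers

def main():
    commit = sys.argv[1] if len(sys.argv) > 1 else "HEAD"
    layers = layers_touched(commit)
    if len(layers) > 1:
        print("commit %s touches multiple layers: %s"
              % (commit, ", ".join(sorted(layers))))
        return 1
    return 0

if __name__ == "__main__":
    sys.exit(main())
```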
