Set up valtools subtree #325
Conversation
The test runners will enable coverage when GOCOVERDIR is set.
This file provides required top-level .scorecard.json fields. mpn.scorecard will stitch this and the other json files into a complete object.
Under 'go test -json', details about the test failures are in the action=output records for a test. Relay that output to hopefully give the test runner a better idea of why a test is failing.
The scripts for VT_TEST_RUNNERS need to implement fairly complex handling. When GOCOVERDIR is set, they need to instrument a binary for integration tests and to pass '-args -test.gocoverdir=$d' for unit tests. They need to do custom package selection, separating unit tests from integration tests and removing this valtools subtree. And for unit tests, they need to pass -coverpkg in order for the tests of one package to count toward the coverage of another package.

On top of that, the files of packages without tests are not included in the coverage profiles for unit tests invoked with -test.gocoverdir. Take the following module:

```
|-- bar
|   `-- bar.go
|-- foo
|   |-- foo.go
|   `-- foo_test.go
`-- go.mod
```

If 'go test -coverprofile coverage.out ./...' is called from the top level, the profile will include entries for bar.go. However, if 'go test -cover ./... -args -test.gocoverdir=$d' followed by 'go tool covdata textfmt -i $d -o coverage.out' is called to create the text format profile, entries for bar.go are _not_ present [*].

Given the above complexity, it's helpful to have a way to verify that the Go files in the final coverage JSON match what's expected.

[*] tested under Go 1.22.5, which includes d1cb5c0605 (cmd/go: improve handling of no-test packages for coverage, 2023-05-09)
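The package-selection step mentioned above might look roughly like this; the path patterns and module name are illustrative assumptions, not bbi's actual layout.

```go
package main

import (
	"fmt"
	"strings"
)

// filterUnitPackages drops the valtools subtree and integration test
// packages from a 'go list ./...' style listing, leaving the packages
// whose unit tests should run. Hypothetical helper for illustration.
func filterUnitPackages(pkgs []string) []string {
	var out []string
	for _, p := range pkgs {
		if strings.Contains(p, "/internal/valtools") ||
			strings.Contains(p, "/integration") {
			continue
		}
		out = append(out, p)
	}
	return out
}

func main() {
	pkgs := []string{
		"example.com/bbi/cmd",
		"example.com/bbi/internal/valtools/fmtcov",
		"example.com/bbi/integration",
	}
	fmt.Println(filterUnitPackages(pkgs))
}
```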
Flagged by golangci-lint: shadow: declaration of "localpath" shadows declaration at line 98 (govet)
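For context, a generic illustration of the finding (not bbi's actual code): using ':=' inside a nested scope declares a new variable rather than assigning to the outer one, which govet's shadow check flags.

```go
package main

import "fmt"

// shadowDemo shows the shadow pitfall and its fix.
func shadowDemo() string {
	localpath := "outer"
	if true {
		// localpath := "inner" // shadow: declaration of "localpath" shadows the outer declaration (govet)
		localpath = "inner" // fix: plain assignment updates the outer variable
	}
	return localpath
}

func main() {
	fmt.Println(shadowDemo()) // prints "inner"
}
```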
This will be handled by a dedicated executable.
These are no longer current. An upcoming commit will generate a fresh set of markdown files in a new location.
Thanks to Seth for spotting.
This is the result of $ go get -u golang.org/x/tools $ go mod tidy The x/tools update is required to get a version that defines cover.ParseProfilesFromReader.
This is pulled in from an outside repo; don't let it crash bbi's lint workflow.
This is the result of 'make vt-gen-docs'.
Now that scripts/run-unit-tests exists, use it rather than repeating the logic for which packages to filter out.
This all looks good to me, and in line with the pkgr PR ([metrumresearchgroup/pkgr#419](metrumresearchgroup/pkgr#419)) I reviewed yesterday. Some docs and tests cleanup/additions are probably warranted but, as discussed previously, those will be in a separate PR.
```
### Synopsis

summarize model(s), for example:
bbi nonmem summary run001/run001
```
A number of these don't render well (in particular, these "examples" are all on the same line), at least in the GitHub rendering of the Markdown.
I guess I'm not sure where/how they end up being rendered for the user, so this may not matter.
Either way, my understanding is that we fix this sort of thing in the code file (e.g. here) and then this is all generated. Also, I think we talked about any docs clean up being in a separate PR anyway, so no need to address now.
> I guess I'm not sure where/how they end up being rendered for the user, so this may not matter.
Right, it depends on how the renderer handles line breaks, but 1) it probably will matter and 2) either way, I think we should adjust these to look okay on GitHub.
@kyleam one other comment, the grid tests did fail for me when I didn't have a worker node available. We may want to put in a wait for that, or at least document that the grid needs to be primed to run the full test suite.
Okay, thanks for running. That's a bummer. The last time I tried with no workers up, it worked, so I'm not sure what's behind this (and am not even sure these failures aren't just the same intermittent failures I've been seeing when the workers are up).
My hesitation with adding the wait is that, as far as I can tell from reading the code and testing, the ...
I've done three un-primed runs since then; one hit the failure. I have an untested guess about what's going on: even though ...

Edit: meant to add that this could then also explain the other intermittent failures when workers are already up, because the race wouldn't be specific to an un-primed grid, although that could make hitting it more likely.
That all makes sense to me.
I think that is sufficient at this point. If you end up playing with the "dumb wait" option and have good results then we can add that in, but I don't think it's worth sinking much more time into it at this point. I'll defer to you though.
This series wires up bbi to produce external scores for mpn.scorecard. Much of the work is done by the new `internal/valtools/` subtree, which has shared tooling that will also be used by pkgr (PR). To set that up on bbi's side, we need to

* define a traceability matrix
* add a command to generate markdown pages for the CLI
* add scripts for running the tests

`internal/valtools/README.md` has more details.

With these changes, mpn.scorecard input can be generated by running `make vt-all` from the top-level directory.

Note that I'm occasionally seeing integration test failures. I think they happen during the SGE tests, but I can't reliably trigger the failure and haven't been able to get to the bottom of it. While I've mostly been running with a worker already up, I checked that the integration tests succeeded when no workers are up yet (which should be the case, given the WaitForSGEToTerminate logic).

The goal of this PR is to get all the setup in place that we need for generating the scores, leaving the additional changes and cleanup that we want for the scorecard (e.g., fleshing out documentation and adding missing tests) to follow-up PRs.
Overview
Minor cleanup related to bringing in the new validation approach.
This is the merge commit that brings in the shared tooling.
These are focused on hooking up the `internal/valtools/` subtree.