
code-coverage validation statistics (feature request) #693

Open
kwatsen opened this issue Jan 7, 2019 · 3 comments
Labels
is:enhancement Request for adding new feature or enhancing functionality.

Comments


kwatsen commented Jan 7, 2019

Precondition:

  • a YANG module (example.yang)
  • a bunch of instance example documents (a.xml, b.json, etc.)
  • a script that validates each instance example document against the YANG module
    • this script would run yanglint multiple times (distinct invocations)
    • presumably, there would be some special directory (e.g. ./.ycov/) that would be used to accumulate the statistics across runs (the user would be responsible for removing this directory before each fresh run).
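The accumulate-across-runs idea could be sketched roughly like this (everything here is hypothetical: yanglint does not emit per-node visit data today, and the `.ycov/visits.json` layout and `record_run` helper are my own invention):

```python
import json
import shutil
from pathlib import Path

COV_DIR = Path(".ycov")  # hypothetical stats directory

# Fresh run: the user removes any stale stats first, as described above.
if COV_DIR.exists():
    shutil.rmtree(COV_DIR)

def record_run(visited_nodes):
    """Merge one validation run's visited schema-node paths into the stats."""
    COV_DIR.mkdir(exist_ok=True)
    stats_file = COV_DIR / "visits.json"
    counts = json.loads(stats_file.read_text()) if stats_file.exists() else {}
    for path in visited_nodes:
        counts[path] = counts.get(path, 0) + 1
    stats_file.write_text(json.dumps(counts))

# Stand-ins for two distinct yanglint invocations (a.xml, then b.json):
record_run(["/example:top", "/example:top/name"])
record_run(["/example:top"])
```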

Postcondition:

  • a report providing code-coverage like validation statistics.
    • presumably, this report would be provided by a final invocation of yanglint that would just output the report (e.g., tree diagram)
  • options (sorted by complexity: easiest to hardest):
    1. a single number representing percentage of nodes tested (e.g., 30% or 80%)
    2. a percentage per top-level statement (data, rpc, notification, yang-data, etc.)
      • perhaps inlined notifications and actions could be included here as well
    3. a tree-diagram like output that tags each node with the number of times it was tested
      • top-level nodes would have the highest numbers.
      • the value here would be in seeing which parts are not tested much, or at all
    4. some combination of all of the above
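To make options 1 and 3 concrete, here is a minimal sketch assuming per-node visit counts keyed by schema path (the example paths and counts are invented; the tree format only loosely imitates yanglint's tree diagram):

```python
# Hypothetical per-node visit counts; in a real implementation these
# would come from the statistics accumulated across validation runs.
counts = {
    "/example:top": 5,
    "/example:top/name": 3,
    "/example:top/port": 0,
}

# Option 1: a single percentage (nodes visited at least once).
tested = sum(1 for n in counts.values() if n > 0)
percent = 100 * tested / len(counts)
print(f"coverage: {percent:.0f}%")  # 2 of 3 nodes -> 67%

# Option 3: tree-diagram-like output tagging each node with its visit count.
for path, n in counts.items():
    depth = path.count("/") - 1
    name = path.rsplit("/", 1)[-1]
    print("  " * depth + f"+--{name}  ({n}x)")
```

A node with `(0x)` immediately shows an untested part of the model, which is the main value of option 3.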
@rkrejci rkrejci added the is:enhancement Request for adding new feature or enhancing functionality. label Jan 8, 2019
rkrejci (Collaborator) commented Jan 8, 2019

I like the feature, but I don't think that we have the capacity to do it anytime soon.

A few points for a future implementation:

  • do you insist on that special working directory? yanglint is able to process multiple input files, so I would prefer to compute the stats during a single run of yanglint, keeping all working data in memory and, mainly, being sure that the context stays the same.
  • my idea is to implement this only in yanglint, not in libyang itself. The reason is that the use case is quite rare, so I don't want to reserve an additional place for these stats directly in the schema structures. Instead, some "database" of the visited nodes would be maintained inside yanglint.
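A minimal sketch of that in-memory, single-run approach (the class and callback names are my assumptions; nothing like this exists in yanglint yet):

```python
from collections import Counter

class CoverageDB:
    """Hypothetical in-memory 'database' of visited schema nodes,
    filled while yanglint validates every input file in one run."""

    def __init__(self):
        self.visits = Counter()

    def node_validated(self, schema_path):
        # Would be called from the validation walk for each data node,
        # keeping the stats out of the schema structures themselves.
        self.visits[schema_path] += 1

    def report(self, all_nodes):
        tested = sum(1 for p in all_nodes if self.visits[p] > 0)
        return f"{100 * tested / len(all_nodes):.0f}% of {len(all_nodes)} nodes tested"

db = CoverageDB()
for f in ["a.xml", "b.json"]:          # multiple inputs, one process, one context
    db.node_validated("/example:top")  # stand-in for the real validation walk
print(db.report(["/example:top", "/example:top/name"]))
```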

kwatsen (Author) commented Jan 9, 2019

Hi Radek,

I don't insist on anything 😇 and, certainly, that might be a reasonable first effort.

But if you look at https://github.com/netconf-wg/zero-touch/blob/master/refs/validate-all.sh, you'll see a typical validation test script (each of my drafts has a similar one, and I soon plan to introduce a BCP recommending that all I-Ds include a test script of this sort). As a reviewer of other people's drafts, I want to be able to use the test scripts they provide, and I want those scripts to, in part, output test-coverage statistics...

Anyway, you can see that the test script linked above is written in a test-case-by-test-case fashion and, for each case, it sometimes needs to do hacky things such as:

  • remove text that belongs in the draft, but not for yanglint (such as HTTP headers)
  • convert RESTCONF examples into NETCONF examples
  • convert "yang-data" statements into "container" statements
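The first and third bullets amount to small textual pre-processing steps; a crude sketch (the `rc:` prefix and sample strings are my assumptions, not taken from the linked script):

```python
def strip_http_headers(text):
    # Drop everything up to and including the first blank line, i.e. the
    # HTTP header block that belongs in the draft but not in the data.
    _, _, body = text.partition("\n\n")
    return body

def yang_data_to_container(module_text):
    # Crude textual rewrite so yanglint treats a yang-data template as an
    # ordinary container; "rc:" is an assumed extension-module prefix.
    return module_text.replace("rc:yang-data", "container")

msg = "HTTP/1.1 200 OK\nContent-Type: application/yang-data+xml\n\n<top/>"
print(strip_http_headers(msg))

mod = "rc:yang-data example-data {\n  container top;\n}"
print(yang_data_to_container(mod))
```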

Perhaps all of this could be done beforehand, with everything then fed into a single yanglint call, but I wouldn't be thrilled about needing to restructure my scripts to do that, or having to decipher yanglint error messages just to understand which of a number of tests failed...

Thank you for your consideration.
Kent

rkrejci (Collaborator) commented Jan 10, 2019

ok, thanks for this input
