As near as I can find, the documentation does not contain any details on how to tell whether the input YAML files have been validated. The "Usage Examples" section shows output from the tool, but does not mention the "No errors..." message.
Some questions we might want to answer:
Is the "No errors..." message the only indication of success, or are there other, better ways to tell?
Is validation performed per HART, and must this message therefore be checked for every HART?
What happens in a multi-HART situation where HARTs 0, 2, and 3 are fine but HART 1 is bad? Does the tool complete and provide all results, or stop at the first failure?
What are common error messages?
Given an error, what is the common way to correct it?
What is the approach for handling multiple errors? Should the first one be corrected and the tool re-run, or should you attempt to fix them all?
Is there a way to get more information for an error, such as by enabling a verbose mode?
This is not a critical gap, but it would be extremely nice to address given that this tool will be called out for explicit use in the "RISC-V Compatible Trademark Permission Process" described on the RISC-V Branding Guidelines & Materials web page.
* Is the "No errors..." message the only indication of success, or are there other, better ways to tell?
Currently there are two indications of success: the absence of errors and a log message stating "Dumping normalised yaml ...".
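For scripting, something like the following minimal sketch could check for both of those indications. The `--isa_spec`/`--platform_spec` flag names and the input file names are assumptions here (check them against your installed riscv-config), as is the idea that a failed run exits with a non-zero status:

```python
import subprocess

# Minimal sketch of a scripted success check (assumptions noted above):
# run the validator, capture its log, and treat the run as successful only
# if the exit status is zero and the "Dumping normalised yaml" message
# mentioned above appears in the output.
cmd = [
    "riscv-config",
    "--isa_spec", "my_isa.yaml",            # hypothetical input file
    "--platform_spec", "my_platform.yaml",  # hypothetical input file
]
result = subprocess.run(cmd, capture_output=True, text=True)
log = result.stdout + result.stderr

if result.returncode == 0 and "Dumping normalised yaml" in log:
    print("validation passed")
else:
    print("validation failed; see log below")
    print(log)
```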
* Is validation performed per HART, and must this message therefore be checked for every HART?
Yes, validation is per HART, although some checks also happen at the overall level (such as the requirement that at least one hart must have hartid 0).
* What happens in a multi-HART situation where HARTs 0, 2, and 3 are fine but HART 1 is bad? Does the tool complete and provide all results, or stop at the first failure?
The tool usually reports all errors in the input YAML during a single run. Each error is printed in the log along with the key hierarchy of the offending field, so it is easy to identify which field in the input YAML caused the error.
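Since all errors come out of a single run, one way to review them together (across all HARTs) is to filter the captured log for error-level lines. A small sketch, assuming error lines carry an "ERROR" marker, which is an assumption about the log format:

```python
def collect_errors(log: str) -> list[str]:
    """Return every error-level line from a captured riscv-config log.

    Assumes error lines contain an "ERROR" marker; adjust the filter to
    match the actual log format of your riscv-config version.
    """
    return [line for line in log.splitlines() if "ERROR" in line]

# Usage with the `log` string captured in the earlier sketch:
# for line in collect_errors(log):
#     print(line)
```

Each reported line should include the key hierarchy described above, so the offending HART and field can be read directly from it.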
* What are common error messages?
This is a difficult question to answer. Each CSR has different rules as defined by the spec, and anything that violates the constraints imposed by the ISA spec is reported as an error.
* Given an error, what is the common way to correct it?
The only correct way to fix an error is to verify that the definition provided is indeed in line with the spec. The spec is the golden reference for any errors related to the definitions.
* What is the approach for handling multiple errors? Should the first one be corrected and the tool re-run, or should you attempt to fix them all?
This is more of a user preference than a requirement/recommendation.
* Is there a way to get more information for an error, such as by enabling a verbose mode?
The logging verbosity can be controlled using --verbose debug on the CLI.
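For completeness, a sketch of the same assumed invocation as in the earlier example with debug-level logging enabled; only the `--verbose debug` pair is taken from this answer, the other flag names remain assumptions to check against the installed tool:

```python
import subprocess

# Re-run the assumed command from the earlier sketch with debug logging
# to get more detail about a reported error.
cmd = [
    "riscv-config",
    "--isa_spec", "my_isa.yaml",
    "--platform_spec", "my_platform.yaml",
    "--verbose", "debug",
]
subprocess.run(cmd)
```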