[feature request] junit test reporter #58
Hi, thank you for this convenient CLI!
I'd like to suggest a feature which might be especially useful in a CI environment.
If markdownlint-cli supported a JUnit-style reporter (the de facto format), it would be much more convenient to check lint results in CI.
Other popular test runners and linters (e.g. jest, eslint) support this feature as well.
How do you feel about this?

Comments
I also vote for adding a mechanism to register custom formatters so that other formats can be supported. We use eslint and stylelint to generate checkstyle reports in Jenkins; it would be nice to be able to do the same with markdownlint.
I'd also be interested in this, to integrate markdownlint into Azure DevOps pipelines using one of their supported unit test report formats (which include JUnit among others). It looks like there are a number of npm packages that produce JUnit, Checkstyle, and probably other formats as well. One request I'd add is to allow the tool to exit with a 0 return code even if there are test failures (either by default when outputting a file, or with an additional flag). That way we can use existing tools to decide whether markdownlint errors should fail the build. For example, Azure DevOps has a setting that specifies whether failed tests in the uploaded results file should fail the build or not.
I have a plan around this, but need some more time to prove it out. Your comment about return values is interesting and I think it’s the first time I’ve heard that. Can you please point to some other tools that support that behavior? I would expect that people would just ignore the return value if they didn’t care about it; I’m not sure why a tool would need to support that explicitly. Thanks for helping me understand!
The one I'm most familiar with is Pester, for unit testing PowerShell. By default it won't generate any kind of "error" on a test failure (though you can opt into failing on test failure), requiring the user/script to interpret the returned information and determine what course of action to take.

This is also incredibly useful in an environment like Azure DevOps, which can "consume" test results to generate reports and track trends. The general workflow for running tests there is a step that runs the tests and generates a report file, followed by the Publish Test Results task to consume that generated report.

One way to do this is to allow the testing task to "fail" if the tests fail, and then set the Publish task to run even if previous steps have failed, so that the results still get uploaded. But the experience for that isn't great, with the reason for failure of the build just being the failed step rather than the test results themselves.

Another way to do it is for the testing task to succeed, indicating that it successfully ran the tests, and for the Publish Test Results task to be configured to fail on test failures. I had that in another project I was building out, and on a run where the tests failed I got a much more useful and friendly build report.

It wouldn't be too hard for me to wrap the call to markdownlint myself to get this behavior. PowerShell also generally isn't very happy when things return non-0 codes, because that indicates a failure of the command; it would prefer the command return 0 to indicate it executed successfully, then find a different way to report its data. The way I'd approach running a command, generally, is that I want to know whether the command succeeded or not, and throwing an error because it successfully found something it was looking for strikes me as an anti-pattern (though that ship has certainly sailed and it's a pattern in use all over the place).
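A wrapper along those lines can be very small. The following is a purely illustrative sketch, not anything from this thread: the glob, the output file name, and the assumption that markdownlint-cli reports its findings on stderr are all mine.

```js
// wrap-markdownlint.js — hypothetical wrapper: run markdownlint, save its
// output for a later pipeline step, and always exit 0 so "the linter ran"
// and "the linter found issues" are reported separately.
const { spawnSync } = require("child_process");
const fs = require("fs");

// Run markdownlint-cli and capture whatever it prints.
const result = spawnSync("markdownlint", ["**/*.md"], { encoding: "utf8" });

// Persist the findings so a publish/report step can decide
// whether they should fail the build.
fs.writeFileSync("lint-results.txt", (result.stdout || "") + (result.stderr || ""));

// Report success regardless; the saved results carry the real outcome.
process.exit(0);
```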
Great explanation, thank you. I might still want to treat these two things separately, but I understand the motivation.
OK, so I decided to play around with this a little bit and ended up making a possibly workable POC for this.

I used the package … to generate the report. This will generate a single Test Suite called … , and I added a new option, … , to turn the new output on.

Based on my JavaScript skill (read: none) I don't want to make a PR for this; I'm mostly providing it as a proof of concept, or as guidance for how this feature might be implemented. But if you think it's good enough for a PR, let me know and I'll open one. But I won't be bothered if you don't 😄
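For illustration only, here is a hypothetical sketch of the general shape such a POC could take — this is not the code from the comment above. It assumes the result shape returned by the markdownlint library API: an object mapping each file name to an array of error objects with `lineNumber`, `ruleNames`, and `ruleDescription` properties.

```js
// junit-poc.js — hypothetical sketch: convert markdownlint results
// into a JUnit-style XML report.
const fs = require("fs");

// Escape the characters that are unsafe inside XML attribute values.
function escapeXml(value) {
  return String(value).replace(/[<>&"]/g, (c) =>
    ({ "<": "&lt;", ">": "&gt;", "&": "&amp;", '"': "&quot;" }[c])
  );
}

function toJunitXml(results) {
  const cases = [];
  let failures = 0;
  for (const [file, errors] of Object.entries(results)) {
    if (errors.length === 0) {
      // A clean file becomes a single passing test case.
      cases.push(`  <testcase classname="${escapeXml(file)}" name="markdownlint"/>`);
      continue;
    }
    for (const error of errors) {
      failures++;
      const name = `${error.ruleNames.join("/")} (line ${error.lineNumber})`;
      cases.push(
        `  <testcase classname="${escapeXml(file)}" name="${escapeXml(name)}">\n` +
          `    <failure message="${escapeXml(error.ruleDescription)}"/>\n` +
          `  </testcase>`
      );
    }
  }
  return (
    '<?xml version="1.0" encoding="UTF-8"?>\n' +
    `<testsuite name="markdownlint" tests="${cases.length}" failures="${failures}">\n` +
    `${cases.join("\n")}\n` +
    "</testsuite>\n"
  );
}

// Example input in the shape described above.
const example = {
  "README.md": [
    { lineNumber: 3, ruleNames: ["MD013", "line-length"], ruleDescription: "Line length" },
  ],
  "docs/intro.md": [],
};

// Write a report file that a CI "publish test results" step could consume.
fs.writeFileSync("markdownlint-junit.xml", toJunitXml(example));
```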
This looks great! You used the same package I’d made a note to use. If this is your first project with JS, you should be really proud. I’ve got a set of nitpicks based on a quick review, but I’d suggest you send a PR because what I see seems very close to final. (After adding a couple of test cases.) I’m in agreement with most of your decisions here. Thanks a lot!
@jjangga0214 @alejandroclaro I had wanted to try some different ideas for a markdownlint CLI and did so recently with https://github.com/DavidAnson/markdownlint-cli2. It supports pluggable output formatters, and one of the ones I built to prove the concept is for JUnit, in the style of what @FISHMANPET proposes: https://www.npmjs.com/package/markdownlint-cli2-formatter-junit. If you find it useful or have any feedback, please let me know!
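Enabling that formatter is a small configuration change. A minimal sketch, assuming the `.markdownlint-cli2.cjs` flavor of markdownlint-cli2's configuration file (see the formatter's README for the available options):

```js
// .markdownlint-cli2.cjs — minimal sketch enabling the JUnit formatter.
module.exports = {
  outputFormatters: [
    ["markdownlint-cli2-formatter-junit"],
  ],
};
```

With that in place, markdownlint-cli2 writes a JUnit-style XML file alongside its normal output, which a CI publish step can then consume.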
I finally got around to trying markdownlint-cli2 and the JUnit formatter included there. It appears to meet my needs running specifically within Azure DevOps. I especially like the change in exit codes, which isn't part of this issue but was mentioned in the comments.