Enhancement: Move SonarQube step after regression tests are executed #11

Open
deors opened this issue Apr 15, 2016 · 4 comments
Comments

deors commented Apr 15, 2016

Moving the SonarQube step to after the regression tests are executed would allow us to pull the results of regression test execution and performance test execution into the SonarQube dashboard.
On a side note, regression test execution should be done with Failsafe (a plugin that is essentially a copy of Surefire) bound to the integration-test phase, to ensure that code coverage from regression tests is reported properly in the SonarQube dashboard. A sketch of that binding is shown below.
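
As an illustration only, this is roughly what that Failsafe binding could look like in the project pom.xml; the plugin version is an assumption, and the JaCoCo agent wiring needed to actually collect the coverage is not shown:

```xml
<!-- Illustrative sketch: bind Failsafe to the integration-test/verify phases so that
     regression tests run there and their coverage (e.g. collected via the JaCoCo agent)
     ends up in a report SonarQube can import. Version number is an assumption. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-failsafe-plugin</artifactId>
  <version>2.19.1</version>
  <executions>
    <execution>
      <goals>
        <goal>integration-test</goal>
        <goal>verify</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```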

deors changed the title from "Enhancement: Enhancement: Move SonarQube step after regression tests are executed" to "Enhancement: Move SonarQube step after regression tests are executed" on Apr 15, 2016
kramos (Contributor) commented Dec 12, 2016

The downside of this is that SonarQube will run much later in the pipeline, i.e. after the deployment and the regression tests.

deors (Author) commented Dec 13, 2016

Yes, that's correct; however, it brings a lot of benefit, including the ability to feed integration test coverage and overall test coverage into the quality gates (which is a very good feature).
Another option would be to run a couple of analyses:

  1. A dry-run analysis after unit tests, with the outputs we have as of now in the cartridge.
  2. A second, full-coverage analysis, persisted into the database, at the end of the line just before pushing to prod, as the ultimate quality gate (see the sketch after this list).

Thoughts?
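
For illustration, a rough sketch of how those two analyses could be invoked with the SonarQube scanner for Maven; the analysis mode and JaCoCo report properties shown here are assumptions based on SonarQube versions of that era and may differ from what the cartridge actually uses:

```sh
# 1. Early, non-persisted (preview/dry-run) analysis right after unit tests,
#    used only to fail fast on issues and unit test coverage.
mvn clean test sonar:sonar -Dsonar.analysis.mode=preview

# ... deploy the application, then run regression/performance tests
#     (Failsafe bound to the integration-test phase) ...

# 2. Full analysis persisted into the SonarQube database, including integration
#    test coverage, as the ultimate quality gate before pushing to prod.
mvn sonar:sonar \
    -Dsonar.jacoco.reportPath=target/jacoco.exec \
    -Dsonar.jacoco.itReportPath=target/jacoco-it.exec
```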

nickdgriffin (Contributor) commented
Would a dry-run analysis still enable the build to be failed if the unit test coverage or the number of criticals/blockers breached a threshold? Because that's the part I'd want to know about before it makes it any further, as knowing I've got good integration test coverage doesn't really help me if I still have to go back and make changes to satisfy the earlier conditions.

deors (Author) commented Feb 13, 2017

A dry run would work and could use the quality gate to break the build; at least in previous versions it worked that way. You would need to set a different quality gate, i.e. one that does not look at IT coverage at this stage. @restalion and @viarellano were already looking into this, to ensure it works fine before sending the PR.
