Run in reporting mode vs failure mode #21

Open · isen0011 opened this issue May 8, 2024 · 0 comments

isen0011 commented May 8, 2024

Thanks for writing this - we look forward to using it widely to help us with the new Title II regulations.

One thing that would help us move forward is the ability to add the gem to all of our projects, but, instead of having every system test fail when it encounters an accessibility issue, have it record the failure and give us a report of all the accessibility issues at the end of the run. We could then use that report to update our management and to track our progress towards closing accessibility gaps.

Ideally we'd love a configuration setting, something like `config.capybara_accessibility_audit.reporting_only`, that, when enabled, would give us output something like:


Finished in 5 minutes 5 seconds (files took 2.45 seconds to load)
238 examples, 0 failures, 24 failed accessibility tests

Failed accessibility tests examples:
... detailed breakdown of the accessibility tests that failed...
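For illustration, here is a minimal sketch of the kind of reporting we have in mind (nothing here exists in the gem today; the constant and the hook that would populate it are hypothetical): collect each violation during the run and print a summary after the suite finishes, instead of failing individual tests.

```ruby
# Hypothetical sketch only -- not part of capybara_accessibility_audit today.
require "minitest"

# Collected as [test name, axe output] pairs by whatever hook records
# violations instead of raising them.
ACCESSIBILITY_VIOLATIONS = []

Minitest.after_run do
  next if ACCESSIBILITY_VIOLATIONS.empty?

  puts "\n#{ACCESSIBILITY_VIOLATIONS.size} failed accessibility tests"
  puts "\nFailed accessibility tests examples:"
  ACCESSIBILITY_VIOLATIONS.each do |name, axe_output|
    puts "#{name}:\n#{axe_output}\n"
  end
end
```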
isen0011 added a commit to isen0011/capybara_accessibility_audit that referenced this issue May 8, 2024
Works on thoughtbot#21.

This doesn't completely fix the problem identified in thoughtbot#21, but
it is a step in that direction.  The idea is that, when the configuration
setting `accessibility_audit_skip_on_error` is set to true and an `axe`
violation is detected, the test is marked as `skipped` with the `axe`
output as the skip message, instead of raising an error.
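A minimal sketch of the idea (the helper name and hook point here are
illustrative only; the gem's actual audit internals may differ): rescue the
audit assertion and convert it into a skip.

```ruby
# Hypothetical sketch -- helper and audit assertion are stand-ins.
def audit_page_for_accessibility!
  assert_no_accessibility_violations(page) # stand-in for the gem's axe audit
rescue Minitest::Assertion => error
  raise unless self.class.accessibility_audit_skip_on_error

  # Surface the axe output as the skip reason rather than as a failure.
  skip(error.message)
end
```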

There is a problem with this approach, the same problem identified in
issue thoughtbot#7: as soon as a test is marked skipped or failed, the
rest of that test does not run.  This means that if a test has both
failing application logic and failing accessibility validation, whichever
failure occurs first is the one that gets reported.  For a system test,
since any interaction with the page triggers the accessibility audits,
it is likely that the test will be marked as skipped and the application
logic failure will be hidden.

This can be worked around by running the tests a second time with
`accessibility_audit_enabled = false` and comparing the results, but
that does require two runs through the test suite.
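For example, the toggle could be driven by an environment variable so the
same suite can be run twice and the two sets of results compared (a sketch,
assuming the class-level `accessibility_audit_enabled` setting named above;
the `DISABLE_A11Y_AUDITS` variable is hypothetical):

```ruby
# Sketch: first pass with audits on, second pass with DISABLE_A11Y_AUDITS=1
# to surface application-logic failures that the skips would otherwise hide.
class ApplicationSystemTestCase < ActionDispatch::SystemTestCase
  self.accessibility_audit_enabled = ENV["DISABLE_A11Y_AUDITS"].nil?
end
```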