@annakrystalli noticed and fixed a silent bug, caused by a leaky mock, in the snapshot tests for validations. My analysis was:
It looks like this bug was introduced along with the urgent fix for #139. It's completely understandable how it got through: it's a symptom of complex snapshot output not being easily scannable (i.e. it's hard to distinguish a check_failure or check_error from a check_success in a long list like this).
While this may appear on the surface to be an isolated incident, it highlights a limitation of snapshot tests: they were intended for human-readable output. As that output grows in complexity, the potential for missed side effects and fragility increases. The fact is that these snapshot tests rely on someone being able to glance at the test output and confirm that it shows what we expect it to show. That depends on the developer's and reviewer's state of mind when inspecting the output, and I've never met a single developer who was 100% attentive every time they were writing or reviewing code.
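To make the failure mode concrete, here is a minimal, hypothetical sketch (not taken from the hubValidations test suite) of the kind of test being described: a snapshot of `str()` output over dozens of check results, where a single `check_failure` is easy to overlook when reviewing the recorded snapshot.

```r
# Hypothetical illustration only: a snapshot test over a long list of check
# results. None of these objects come from hubValidations; the point is that
# the reviewer must spot the one "check_failure" entry in ~50 lines of output.
library(testthat)

test_that("validation output matches snapshot", {
  checks <- setNames(
    lapply(1:50, function(i) {
      structure(list(message = "ok"), class = "check_success")
    }),
    paste0("check_", 1:50)
  )
  # Bury a single failure in the middle of the list.
  checks[["check_25"]] <- structure(
    list(message = "duplicate files found"),
    class = "check_failure"
  )
  # The snapshot only alerts us when the text changes; whether the recorded
  # text is *correct* still depends on someone reading all of it carefully.
  expect_snapshot(str(checks, max.level = 2))
})
```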
## Solution
I would like to go through the snapshot tests that use `str()` or output tibbles and add programmatic validations. It's not a high-priority item, and it's not the most exciting work to consider, but it will pay off in the future as we continue to work on and refactor this codebase.
An example of such a change for this particular snapshot would be to create a logical vector from the `hub_validations` object recording which checks did not pass and compare it to an expected, named vector:
```r
# Confirm that the only check that failed was "file_n"
check_path <- system.file("check_table.csv", package = "hubValidations")
checks <- arrow::read_csv_arrow(check_path) |>
  dplyr::filter(.data$`parent fun` != "validate_model_metadata", !.data$optional)
expected <- setNames(rep(FALSE, nrow(checks)), checks$Name)
expected["file_n"] <- TRUE # there is a failure for duplicate files
failures <- purrr::map_lgl(dup_model_out_val, not_pass)
expect_equal(failures, expected[names(failures)])
```
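If we end up doing this in several test files, the same pattern could be factored into a small helper. The sketch below is purely illustrative: `expect_only_failures()` is a hypothetical name, not an existing hubValidations or testthat function, and it derives the expected names from the validations object itself rather than from `check_table.csv`.

```r
# Hypothetical helper (not part of hubValidations): assert that the only
# checks that did not pass are the ones named in `failing_checks`.
expect_only_failures <- function(validations, failing_checks) {
  # TRUE for every check in the object that did not pass
  failures <- purrr::map_lgl(validations, not_pass)
  # Expect FALSE everywhere except the named checks
  expected <- setNames(names(failures) %in% failing_checks, names(failures))
  testthat::expect_equal(failures, expected)
}

# Usage in the duplicate-files test above (assuming `dup_model_out_val` and
# `not_pass()` from that test's context):
# expect_only_failures(dup_model_out_val, failing_checks = "file_n")
```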