From web-platform-tests/interop#281, we hit a case where a number of scores moved away from 100%, which people weren't expecting.

This is because of results-analysis/interop-scoring/main.js, lines 332 to 334 in eba88da:

// We always normalize against the number of tests we are looking for,
// rather than the total number of tests we found. The trade-off is all
// about new tests being added to the set.
and web-platform-tests/wpt#38413 splitting this test up into variants, thus making the old test URL no longer a test URL. It is, however, still labelled.
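To make the failure mode concrete, here is a minimal sketch of that normalization. This is not the actual scoring code in main.js; allTestsSet is named in this issue, but the paths, result shapes, and scoreRun function below are made up for illustration:

```js
// Minimal sketch: score a run by normalizing against the number of tests we
// are looking for (allTestsSet), not the number of tests we found results for.
const allTestsSet = new Set([
  '/css/css-foo/old-test.html',            // old URL, still labelled, but split into variants upstream
  '/css/css-foo/new-test.html?variant=1',
]);

// Hypothetical results from a run: the old URL no longer exists as a test,
// so it simply has no entry here.
const resultsByTest = new Map([
  ['/css/css-foo/new-test.html?variant=1', { passes: 1, total: 1 }],
]);

function scoreRun(allTestsSet, resultsByTest) {
  let score = 0;
  for (const test of allTestsSet) {
    const result = resultsByTest.get(test);
    if (!result) continue;                 // a labelled test with no result contributes 0, silently
    score += result.passes / result.total;
  }
  // Dividing by allTestsSet.size (what we are looking for) rather than by the
  // number of tests we found is what pulls the score below 100%.
  return score / allTestsSet.size;
}

console.log(scoreRun(allTestsSet, resultsByTest)); // 0.5, even though every test we found passes
```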
We should, at the very least, track what tests in allTestsSet we've seen, and log those that we didn't find a result for. This would put something in the log and make it much easier to understand what's happening. Though perhaps with older runs and labelling changes this would be too noisy?
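A minimal sketch of what that logging could look like, reusing the assumed shapes from the sketch above (scoreRunWithLogging and the result format are hypothetical, not the existing main.js code):

```js
// Track which entries of allTestsSet we actually saw in the results, and log
// the ones we never found, so score drops like this show up in the log.
function scoreRunWithLogging(allTestsSet, resultsByTest) {
  const seen = new Set();
  let score = 0;
  for (const test of allTestsSet) {
    const result = resultsByTest.get(test);
    if (!result) continue;
    seen.add(test);
    score += result.passes / result.total;
  }
  for (const test of allTestsSet) {
    if (!seen.has(test)) {
      console.warn(`No result found for labelled test: ${test}`);
    }
  }
  return score / allTestsSet.size;
}
```

Whether this should warn once per missing test or aggregate into a single summary line probably depends on how noisy it turns out to be for older runs.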