Bugfix #3020 develop grid_stat_seeps #3021
Conversation
… determine if SEEPS information should be written to the Grid-Stat NetCDF matched pairs output file.
Documentation changes look good, and the testing already performed appears to show that the bug is fixed. The only failed checks are SonarQube and the difference checks, both of which are expected.
@georgemccabe merging this PR to fix a bug in Grid-Stat triggered this METplus testing workflow run. That run failed due to differences in the Use Cases (S2S:2) group. I pulled the diffs and see that they are limited to a single .png output file. Visual inspection of the images shows no obvious diffs. See below for truth on the left and output on the right. I'd classify this as a false positive. The MET code changes to Grid-Stat certainly would not directly impact this plot. I assume some other difference (perhaps the OS used to run the METplus tests) has caused the images to no longer be bitwise identical. @georgemccabe how shall we proceed? Should we try to figure out potential diffs in the hardware/OS used? Or should we just update the METplus truth dataset and cross our fingers?
There were earlier develop and main_v6.0 runs, triggered by changes I made to METplotpy, that resulted in these diffs. They are not related to these MET changes. I also confirmed that the images look the same, so I think we can update the truth data and move on.
…idStatConfig_SEEPS config file needs to be updated with nc_pairs_flag.seeps = TRUE in order for the same output to be produced by the unit tests.
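For reference, a sketch of what that update might look like inside the `nc_pairs_flag` dictionary of `GridStatConfig_SEEPS`; the other entry names shown are assumed from the standard Grid-Stat defaults, not taken from this PR:

```
nc_pairs_flag = {
   latlon = TRUE;
   raw    = TRUE;
   diff   = TRUE;
   seeps  = TRUE;  // new entry: write SEEPS data to the NetCDF matched pairs file
}
```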
Expected Differences
These changes were a bit more involved than I expected. I found that the `GridStatNcOutInfo::do_seeps` option was missing entirely. So I updated `grid_stat_conf_info.h/.cc` to parse it from the Config file, store its value, and then include it with the other NetCDF output flags being checked. The update in `grid_stat.cc` checks that flag to determine whether or not SEEPS data should be written to the NetCDF matched pairs file. Prior to this fix, if SEEPS was computed, it was always written to the NetCDF matched pairs file with no way to disable that. A standalone sketch of this logic follows the questions below.

Do these changes introduce new tools, command line arguments, or configuration file options? [No]
If yes, please describe:
Do these changes modify the structure of existing or add new output data types (e.g. statistic line types or NetCDF variables)? [No]
If yes, please describe:
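Here is a minimal standalone sketch of the fix described above, compilable on its own. Only `GridStatNcOutInfo::do_seeps` and the `nc_pairs_flag.seeps` entry come from this PR; every other name (`NcPairsDict`, `lookup_bool`, and so on) is an illustrative stand-in, not MET's actual API:

```cpp
#include <iostream>
#include <map>
#include <string>

// Stand-in for the parsed nc_pairs_flag dictionary from the config file.
struct NcPairsDict {
   std::map<std::string, bool> entries;
   bool lookup_bool(const std::string &key, bool default_val) const {
      auto it = entries.find(key);
      return (it == entries.end() ? default_val : it->second);
   }
};

// Stand-in for GridStatNcOutInfo with the newly added do_seeps member.
struct GridStatNcOutInfo {
   bool do_latlon = false;
   bool do_raw    = false;
   bool do_seeps  = false;  // previously missing; now parsed and stored

   void parse(const NcPairsDict &d) {
      do_latlon = d.lookup_bool("latlon", true);
      do_raw    = d.lookup_bool("raw",    true);
      do_seeps  = d.lookup_bool("seeps",  false);
   }
};

int main() {
   NcPairsDict dict;
   dict.entries["seeps"] = true;  // i.e., nc_pairs_flag.seeps = TRUE;

   GridStatNcOutInfo nc_info;
   nc_info.parse(dict);

   // The gist of the grid_stat.cc change: write SEEPS data to the NetCDF
   // matched pairs file only when the flag is set. Before the fix, computed
   // SEEPS output was always written, with no way to disable it.
   if(nc_info.do_seeps) {
      std::cout << "writing SEEPS matched pairs to NetCDF\n";
   }
   return 0;
}
```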
Pull Request Testing
Manually tested on seneca in `/d1/projects/METplus/discussions/2794`.

This version, when listed in `GridStat_global_24h_precip_SEEPS_testing.conf` and run with `run_gs.sh`, produces the segfault:

This version runs without error:
Recommend testing for the reviewer(s) to perform, including the location of input datasets, and any additional instructions:
Please review unrelated doc updates to confirm that the updated URL links work.
Consider testing the code compiled in `seneca:/d1/personal/johnhg/MET/MET_development/MET-bugfix_3020_develop_grid_stat_seeps`.
Do these changes include sufficient documentation updates, ensuring that no errors or warnings exist in the build of the documentation? [Yes]
No real updates were needed. However, unrelated to this fix, I also repaired some broken links to the NetCDF-CF conventions website, as requested by @michelleharrold.
Do these changes include sufficient testing updates? [No]
I added no additional tests for this bugfix. The test would be running a SEEPS unit test with ONLY SEEPS NetCDF output requested, as sketched below. The overhead of running grid_stat one more time isn't worth it, since once this is fixed, it's unlikely to break again.
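For what it's worth, that hypothetical test would amount to a config like this sketch, with every flag disabled except the new one (entry names assumed from the standard Grid-Stat defaults):

```
nc_pairs_flag = {
   latlon = FALSE;
   raw    = FALSE;
   diff   = FALSE;
   seeps  = TRUE;  // request ONLY the SEEPS NetCDF matched pairs output
}
```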
Will this PR result in changes to the MET test suite? [No]
If yes, describe the new output and/or changes to the existing output:
Will this PR result in changes to existing METplus Use Cases? [No]
If yes, create a new Update Truth METplus issue to describe them.
Do these changes introduce new SonarQube findings? [Yes]
If yes, please describe:
When compared to the `develop` branch, this PR does flag 4 new code smells and increases the overall number of code smells from 18,315 to 18,317. This is not ideal, but I did review the 4 flagged findings and note that they're not easily fixed, since they ask to reduce the Cognitive Complexity and avoid nesting too many conditionals (see the illustration below). Fixing them would require changing the logic and structure of the code, and doing so would introduce significant risk, which is not appropriate for a bugfix.

Please complete this pull request review by [Tues 11/19/21].
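Illustration only, not MET code: the kind of nesting that SonarQube's Cognitive Complexity rule flags, and the guard-clause rewrite it typically suggests. The rewrite is intended to be behavior-preserving, but verifying that requires re-reasoning about the control flow, which is the risk noted above:

```cpp
#include <iostream>

// Deeply nested conditionals: each level adds to Cognitive Complexity.
void write_pairs_nested(bool have_data, bool do_seeps, bool file_ok) {
   if(have_data) {
      if(do_seeps) {
         if(file_ok) {
            std::cout << "writing output\n";
         }
      }
   }
}

// The flattened, guard-clause version the rule prefers.
void write_pairs_flat(bool have_data, bool do_seeps, bool file_ok) {
   if(!have_data || !do_seeps || !file_ok) return;
   std::cout << "writing output\n";
}

int main() {
   write_pairs_nested(true, true, true);
   write_pairs_flat(true, true, true);
   return 0;
}
```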
Pull Request Checklist
See the METplus Workflow for details.
- Select: Reviewer(s) and Development issue
- Select: Milestone as the version that will include these changes
- Select: Coordinated METplus-X.Y Support project for bugfix releases or MET-X.Y.Z Development project for official releases