Here are some of the tasks that will need to be taken care of on the HydroVIS code side as part of the integration.
Maybe we need a separate card earlier in the epic for designs and tests? Maybe this is the actual finalization of the HV changes? ie) metadata is more of a design question at this point; same with error handling.
Note: for Ripple, there is no dynamic inundation.
Update all code in the repo that references the phrase ras2fim. We can take over much of that service, renamed to Ripple. This includes, to name a few:
Fix the mapx files if required (which services? Maybe none, but check anyway), and look for Ripple column changes from ras2fim. DO NOT DO A FIND AND REPLACE. Review each occurrence individually.
Review / update environment / config files to handle the Ripple version. We already know it will be 3.0.
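For illustration only, the config read might end up looking something like the sketch below; the variable name and default are assumptions, not existing HydroVIS config keys.

```python
import os

# Hypothetical: RIPPLE_SERVICE_VERSION and the "3.0" default are assumptions
# for illustration, not actual HydroVIS config keys.
RIPPLE_SERVICE_VERSION = os.environ.get("RIPPLE_SERVICE_VERSION", "3.0")
```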
Work with Ryan and Shawn to learn more of the details about how dynamic processing is done
Update step functions, lambdas, caching, the dynamic load system, etc. for Ripple instead of ras2fim.
Drop the old ras2fim databases and write a script to remove them in UAT and Prod.
Review the DB schema related to ras2fim. It might be removed; TBD.
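A minimal sketch of what that UAT/Prod cleanup script could look like, assuming the old ras2fim objects live in a dedicated `ras2fim` schema; the schema name, env var, and dry-run approach here are all assumptions, not confirmed HydroVIS details.

```python
import os
import psycopg2

def drop_ras2fim_schema(dry_run: bool = True) -> None:
    # Connection string env var is hypothetical; use whatever HV already uses.
    conn = psycopg2.connect(os.environ["VIZ_DB_CONNECTION_STRING"])
    try:
        with conn, conn.cursor() as cur:
            if dry_run:
                # List what would be removed before actually dropping anything.
                cur.execute(
                    "SELECT table_name FROM information_schema.tables "
                    "WHERE table_schema = %s", ("ras2fim",)
                )
                for (table,) in cur.fetchall():
                    print(f"Would drop ras2fim.{table}")
            else:
                cur.execute("DROP SCHEMA IF EXISTS ras2fim CASCADE")
    finally:
        conn.close()
```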
Change the HAND/ras2fim manual data load code. We will, at a min, remove the static pre-load of ras2fim data. It is unlikely that there is anything we can pre-load for Ripple. It will likely all be dynamic and similar to HAND processing on the fly.
Add a new integration system for Ripple. It will require finding the correct Ripple S3 folder based on HUC for processing. Details on processing are TBD. We might be able to let one lambda process one HUC. Note: if we do go the route of a step function with a Ripple lambda processing each HUC, we will need to add code to look through the S3 bucket for the right folder. We have to do it dynamically, unlike HAND, because the folder names will not be standardized to just a HUC name. AND one HUC might have more than one Ripple folder. TBD; see the sketch below.
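A sketch of that dynamic folder discovery, assuming the collections sit under a common prefix and embed the HUC somewhere in the folder name. The bucket, prefix, and naming pattern are assumptions; the real layout is still TBD per the note above.

```python
import boto3

def find_ripple_folders(bucket: str, huc: str, prefix: str = "ripple/") -> list[str]:
    """Return every top-level folder under `prefix` whose name contains the HUC.

    Unlike HAND, we can't just build the key from the HUC: folder names are
    not standardized, and one HUC may have more than one Ripple collection.
    """
    s3 = boto3.client("s3")
    matches = []
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket, Prefix=prefix, Delimiter="/"):
        for cp in page.get("CommonPrefixes", []):
            folder = cp["Prefix"]
            if huc in folder:
                matches.append(folder)
    return matches  # may be empty, or hold multiple collections for one HUC
```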
Multiple Ripple collections for one HUC8
- [ ] Figure out how we handle multiple Ripple models for one HUC. For now, likely check whether a feature exists a second time in the HUC set and prioritize one over the other. ie) if we say mip is prioritized and we find both an mip and a ble record, drop the ble record in the dynamic flow processing. Carson/Derek will figure out which gets priority.
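One possible shape for that de-duplication pass, assuming each record carries a feature_id and a source code; the priority order below is a placeholder until Carson/Derek decide which source actually wins.

```python
# Lower number = higher priority. The ordering is an assumption, not a decision.
SOURCE_PRIORITY = {"mip": 0, "ble": 1}

def dedupe_by_priority(records: list[dict]) -> list[dict]:
    """Keep one record per feature_id, preferring the higher-priority source."""
    best: dict[str, dict] = {}
    for rec in records:
        fid = rec["feature_id"]
        rank = SOURCE_PRIORITY.get(rec["source_code"], 99)
        current = best.get(fid)
        if current is None or rank < SOURCE_PRIORITY.get(current["source_code"], 99):
            best[fid] = rec
    return list(best.values())
```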
Look into CRS. RTX might be able to pre-set the CRS to 3857 for us.
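If RTX pre-sets EPSG:3857, a CRS guard on our side becomes a cheap no-op check; if not, we would have to reproject ourselves. A sketch, assuming Ripple outputs are GeoTIFFs readable by rasterio (the paths and format are assumptions):

```python
import rasterio
from rasterio.crs import CRS
from rasterio.warp import calculate_default_transform, reproject, Resampling

TARGET_CRS = CRS.from_epsg(3857)

def ensure_web_mercator(src_path: str, dst_path: str) -> None:
    """Reproject a raster to EPSG:3857 unless it is already in that CRS."""
    with rasterio.open(src_path) as src:
        if src.crs == TARGET_CRS:
            return  # nothing to do if RTX has pre-set 3857 for us
        transform, width, height = calculate_default_transform(
            src.crs, TARGET_CRS, src.width, src.height, *src.bounds
        )
        meta = src.meta.copy()
        meta.update(crs=TARGET_CRS, transform=transform, width=width, height=height)
        with rasterio.open(dst_path, "w", **meta) as dst:
            for band in range(1, src.count + 1):
                reproject(
                    source=rasterio.band(src, band),
                    destination=rasterio.band(dst, band),
                    src_transform=src.transform,
                    src_crs=src.crs,
                    dst_transform=transform,
                    dst_crs=TARGET_CRS,
                    resampling=Resampling.nearest,
                )
```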
Design topic: Figure out what the current ras2fim.max_geocurves is all about.
Design topic: Metadata:
- [ ] Review and compare metadata from ras2fim to what is needed for flow calcs and what the Ripple datasets have. Likely only the same 5 columns as the HAND Hydrotables.
- [ ] What meta fields are showing up in the UI now for ras2fim? Which do we still want? Some to add? ie) source_code = "ble" and source_name = "Bureau of land... ". Likely a reference to the Ripple dataset version. Maybe also a field showing the actual "ripple collection" folder name that was used to generate it, so we can trace a bug from the UI back to the original S3 folders. (See the sketch after this list.)
- [ ] Talk to Derek / Carson about metadata fields for the UI and whether they want them changed. ie) version, model_version (either Ripple 3.0, or based on their repo like HAND, ie) Ripple v0.8.3; see versioning questions in other places).
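A hypothetical sketch of what a per-feature metadata record could carry, combining the fields discussed above. Every field name and value is a proposal for the design discussion, not a final schema.

```python
# All field names below are proposals/assumptions; values are placeholders.
example_ripple_metadata = {
    "source_code": "ble",                     # dataset code surfaced in the UI
    "source_name": "<full source name>",      # expanded name (truncated in the note above)
    "model_version": "Ripple v0.8.3",         # repo-style version, one of the open options
    "service_version": "3.0",                 # the Ripple service version HV expects
    "ripple_collection": "<s3-folder-name>",  # actual collection folder used, so a UI
                                              # bug can be traced back to the S3 source
}
```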
Design topic: Error Handling:
- [ ] Talk to RTX about error handling and status codes on their part
- [ ] Figure out a way to handle possible outputs from flow2fim.exe during processing. StdOut? StdErr? Other status codes? Can exceptions get through resulting in neither StdOut nor StdErr? TBD, but we need to figure out a way to handle them in HV. Related: do we do something with the results, or maybe run some basic QA tests before finishing the Lambda? Bad data could easily come from the Ripple model folders (notice I did not call them HUC folders). See the sketch below.
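A sketch of defensive handling around flow2fim.exe, assuming it is invoked from the Lambda via subprocess. The executable name, arguments, output file, and QA check are placeholders; the real status-code contract is still TBD with RTX.

```python
import subprocess
from pathlib import Path

def run_flow2fim(args: list[str], output_raster: str, timeout_s: int = 600) -> None:
    try:
        result = subprocess.run(
            ["flow2fim.exe", *args],
            capture_output=True, text=True, timeout=timeout_s,
        )
    except subprocess.TimeoutExpired as exc:
        raise RuntimeError(f"flow2fim timed out after {timeout_s}s") from exc
    except OSError as exc:
        # Covers the "neither StdOut nor StdErr" case: the process never ran.
        raise RuntimeError(f"flow2fim could not be started: {exc}") from exc

    if result.returncode != 0:
        raise RuntimeError(
            f"flow2fim exited with {result.returncode}: "
            f"{result.stderr.strip() or result.stdout.strip()}"
        )

    # Basic QA before letting the Lambda finish: bad data can easily come out
    # of the Ripple model folders, so at least confirm a non-empty output.
    out = Path(output_raster)
    if not out.exists() or out.stat().st_size == 0:
        raise RuntimeError(f"flow2fim produced no usable output at {output_raster}")
```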