Improvement: Include options for model evaluation (runoff versus streamflow) #20

Open
lcunha0118 opened this issue Jul 11, 2022 · 3 comments

lcunha0118 commented Jul 11, 2022

Goal: add the capability of evaluating calibration based on runoff (results in cat files). This is relevant when we want to isolate hydrological uncertainties.

Changes will include additional parameters and a modification of the calibration_set.py function output.

See the description below.

Additional parameters - add to the yaml configuration file:

Eval_method: 'runoff' (default is 'streamflow'; future implementations might accept an array like [precipitation, runoff, soil moisture])
runoff: evaluates the volume of runoff generated in the basin, not necessarily routed.
streamflow: evaluates the routed runoff.

Evaluation_variable: "QOUT"
Column number or name of the variable to be evaluated in the ngen model output. Future implementations might include an array of variables [RAINFALL, QOUT, SW_STORAGE]. This maps the Eval_method to the simulation variable that corresponds to it.

Resampling:
This is an optional configuration. If not provided, evaluation is performed at the temporal resolution of the simulated data. Resampling is especially important when evaluating runoff, since we need to remove uncertainties due to the lack of routing.
Resampling_time: 'D'
Temporal resampling rule. Follows the pandas resample standard; for example, 'D' is day and 'H' is hour.
Resampling_method: 'sum'
Optional parameter; default is 'sum'. Follows the pandas resample standard.
unit: "meter/hour"
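Taken together, the proposed additions might look like the following yaml fragment (a sketch only; the key names follow the descriptions above and are not final):

```yaml
# Proposed evaluation options (sketch, not a final schema)
Eval_method: 'runoff'           # default: 'streamflow'
Evaluation_variable: 'QOUT'     # column name (or number) in the ngen model output
Resampling:                     # optional block
  Resampling_time: 'D'          # pandas resample rule: 'D' = day, 'H' = hour
  Resampling_method: 'sum'      # optional; default is 'sum'
  unit: "meter/hour"
```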

Modification to the calibration_set.py, function output:

If Eval_method == "streamflow": existing logic.
If Eval_method == "runoff":

Aggregate runoff for all cat files upstream of the point of interest, as in the logic below:

import geopandas as gp
import pandas as pd

catchment_file = 'catchment_data.geojson'
results_dir = "./"
zones = gp.GeoDataFrame.from_file(catchment_file)

total_out = None
for index, row in zones.iterrows():
    catstr = row[0]                      # catchment id (first column)
    area = zones.area_sqkm.iloc[index]   # catchment area in km^2

    cat_out_file = results_dir + catstr + ".csv"
    cat_out = pd.read_csv(cat_out_file, parse_dates=True, index_col=1)
    if isinstance(cat_out.index.min(), str):
        cat_out.index = pd.to_datetime(cat_out.index)

    # Area-weighted accumulation of runoff over all upstream catchments
    if total_out is None:
        total_out = cat_out.copy() * area
    else:
        total_out = total_out + cat_out * area

# Divide by total basin area to get the basin-average runoff
total_out = total_out / zones.area_sqkm.sum()

Change the units*: for now we will convert the observed data to match the units of the simulated data (another option is to convert the simulated data instead; eventually this should tie into unit handling, but for now it is done directly). If "unit" is meter/hour, convert the observed discharge: Obs_q_mh = (Obs_Q_cms / total_area) * 3.6 / 1000, with Obs_Q_cms in m^3/s and total_area in km^2.
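A minimal helper for this conversion (the function name is an assumption for illustration, not part of the proposal):

```python
def cms_to_meter_per_hour(obs_q_cms: float, total_area_sqkm: float) -> float:
    """Convert discharge in m^3/s to basin-average depth in meter/hour.

    m^3/s divided by area in m^2 (km^2 * 1e6) gives m/s; multiplying by
    3600 s/hour gives m/hour, i.e. a combined factor of 3.6 / 1000.
    """
    return (obs_q_cms / total_area_sqkm) * 3.6 / 1000
```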
Resample in time: resample both the observed and simulated series in time.
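As a sketch of this step (assuming an hourly simulated series and the Resampling_time='D', Resampling_method='sum' values above; the series here is synthetic):

```python
import pandas as pd

# Hypothetical hourly runoff series (constant 1.0 for illustration)
idx = pd.date_range("2022-07-01", periods=48, freq="h")
sim = pd.Series(1.0, index=idx)

# Resample to daily totals, mirroring Resampling_time='D', Resampling_method='sum'
daily = sim.resample("D").sum()
```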

*Note - this might be a better fit for the evaluate function.

@hellkite500

This is related to #18; these are enhancements that can be added to that PR, or separately after it has merged.

@hellkite500

#130 partially approaches a general mechanism to do this using plugin hooks that can supply model output time series in a generic way from any user-definable function.

#111 will ultimately lead to similar semantics/capabilities for observations.

From those, any customized model output and observation pair functionality can be plugged in. We should consider some core capabilities that we may want to maintain in the repository's ngen_hooks section.

@aaraney another use case directly related to our current efforts.


aaraney commented Aug 5, 2024

Implementing ngen_cal_model_observations (#155) and ngen_cal_model_output (#130) should make this possible.
