Summary statistics for ensembles #208
Comments
I think this would be great - any way(s) to help distill down ensemble results is welcome. I guess you would process by observation group (and it seems like this is mostly about time series, right?). I think @smwesten-usgs might have some good metrics and/or ideas on this front that we could borrow from tsproc. On the idea of comparing history-matching results to synthetic "truth" results, I think the paired simple-complex analysis of Doherty and Christensen is one of the best ways, because it is focused on exposing biases in the relation between synthetic truth obs and history-matching results.
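As a rough sketch of what per-observation-group processing could look like (the file names and ensemble layout below are assumptions, but `Pst.observation_data` does carry the group assignments):

```python
import pandas as pd
import pyemu

# Load the control file and an output ensemble (rows = realizations,
# columns = observation names). File names here are hypothetical.
pst = pyemu.Pst("my_model.pst")
oe = pd.read_csv("my_model.0.obs.csv", index_col=0)

# Map each observation to its group from the control file
# (assumes the ensemble columns match the obsnme index)
obs = pst.observation_data
groups = obs.loc[oe.columns, "obgnme"]

# Summarize by observation group: pool all member values in each
# group and report simple distributional statistics
summary = oe.T.groupby(groups).apply(lambda df: df.stack().describe())
print(summary[["mean", "std", "min", "50%", "max"]])
```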
@wkitlasten - Any movement on this? It would be a nice contrib...
Unfortunately no movement from my end, but there may be an opportunity to work it in for June if nobody beats me to it (but I wouldn't mind at all if they did!).
@wkitlasten and @jtwhite79 - sorry I missed this issue before. I made a framework for metrics like this and included some of them. It should be easy to add the ones that are missing to the framework I laid out there: https://github.com/pypest/pyemu/blob/develop/pyemu/utils/metrics.py If I get a few free moments, I'll try to add the ones that are in the paper you mentioned.
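For concreteness, here is a minimal standalone sketch of two metrics of that flavor (illustrative only - the actual implementations in metrics.py may differ, and `ens`/`obsvals` are hypothetical inputs):

```python
import numpy as np
import pandas as pd

def rmse(mod: pd.Series, obs: pd.Series) -> float:
    """Root mean squared error between modeled and observed values."""
    return float(np.sqrt(np.mean((mod - obs) ** 2)))

def nse(mod: pd.Series, obs: pd.Series) -> float:
    """Nash-Sutcliffe efficiency: 1 is perfect, < 0 is worse than the obs mean."""
    return float(1.0 - np.sum((obs - mod) ** 2) / np.sum((obs - obs.mean()) ** 2))

def ensemble_metrics(ens: pd.DataFrame, obsvals: pd.Series) -> pd.DataFrame:
    """Score every realization (rows = realizations, columns = observation
    names) against a Series of observed values indexed by observation name."""
    obsvals = obsvals.reindex(ens.columns)
    rows = {real: {"rmse": rmse(sim, obsvals), "nse": nse(sim, obsvals)}
            for real, sim in ens.iterrows()}
    return pd.DataFrame(rows).T
```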
Hey @mnfienen, nice work. I have been using the metrics recently. I hope to add something to it in the near future.
@wkitlasten, moving this to discussions like #164 - it'll probably sit in the ongoing-improvements basket?
This issue was moved to a discussion. You can continue the conversation there.
I would like to propose including some simple methods to generate concise quantitative metrics for ensembles, similar to those discussed in the publication below. Any thoughts on those or other metrics that should be included?
https://lib.dr.iastate.edu/cgi/viewcontent.cgi?article=1072&context=ge_at_pubs
On a similar note, I am trying to evaluate how well my IES setup (parameterization, weights, etc.) reproduces "synthetic" observations of interest (not in the history-matching dataset, but "known" from the simulation) from a suite of selected realizations. In my mind, the most logical measure is something along the lines of the number of ensemble standard deviations between the ensemble mean of the observations of interest and the "known" values. Then, if I have 10 such tests for a single site, I could hopefully infer that the same setup likely captures real-world observations of interest within the same number of standard deviations (assuming similar ensemble metrics, as mentioned above, for the observations in the history-matching dataset). Thoughts, or references along those lines?
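A minimal sketch of that standardized-distance idea, assuming `ens` holds the ensemble values of the observations of interest (rows = realizations, columns = observation names) and `truth` holds the known synthetic values (names are hypothetical):

```python
import pandas as pd

def std_distance(ens: pd.DataFrame, truth: pd.Series) -> pd.Series:
    """Number of ensemble standard deviations between the ensemble mean
    and the known "truth" value, per observation of interest."""
    truth = truth.reindex(ens.columns)
    return (ens.mean() - truth).abs() / ens.std()

# e.g. if std_distance(ens, truth) is mostly <= 2, the ensemble mean sits
# within two ensemble standard deviations of the synthetic truth
```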