Currently `validate()` can be run on individual raster and fitted model objects, but there is no way to compare two different spatial predictions in terms of their metrics. The idea is therefore to add a convenience function, `compare()`, that acts as a wrapper around `validate()` outputs (or possibly directly on model objects, see below).
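As a rough sketch, the intended call pattern might look like this (everything below, including the `sort_by` argument, is a proposal rather than an existing API):

```r
# Hypothetical usage sketch: names and arguments are illustrative only.
val1 <- validate(fit1)   # existing validate() on a fitted model
val2 <- validate(fit2)

# Proposed: return a sorted tibble with the metric differentials between
# the best-performing model and each of the others.
compare(val1, val2, sort_by = "AUC")
```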
Steps for implementation:
1. Ensure that each `validate()` output carries an attribute that identifies the tibble as such (see the sketch after this list).
2. Implement an S3 method for `compare()` that accepts two or more of these outputs. The function then calculates the differential between the best model and each of the remaining models and sorts the output (a default sorting parameter is provided); a sketch follows below.
3. The function should also work when called on a list of such outputs.
4. (Optional) Evaluate whether it is possible to run `compare()` directly on `DistributionModels` instead of validation outputs. This requires that each model is of the same family and/or can return some parsimony criterion (AIC, BIC, WAIC, LOO, etc.). This likely requires considerably more work; one possible direction is sketched at the end.
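A minimal sketch of steps 1–3, assuming the validation output is a tibble with one column per metric; all names here (`as_validation`, the `distribution_validate` class tag, `sort_by`) are illustrative assumptions, not the final API:

```r
library(dplyr)

# Step 1: tag each validate() output so compare() can recognise it later.
# A class prefix is used here; a plain attribute would work equally well.
as_validation <- function(x) {
  stopifnot(is.data.frame(x))
  class(x) <- c("distribution_validate", class(x))
  x
}

# Step 2: an S3 generic, dispatching on the first argument.
compare <- function(x, ...) UseMethod("compare")

# Method for individual validation outputs: collect them into a list.
compare.distribution_validate <- function(x, ..., sort_by = "AUC") {
  compare(list(x, ...), sort_by = sort_by)
}

# Step 3: the list method does the actual work, so a pre-assembled
# list of validation outputs goes through the same code path.
compare.list <- function(x, ..., sort_by = "AUC") {
  stopifnot(length(x) >= 2,
            all(vapply(x, inherits, logical(1), "distribution_validate")))
  out <- bind_rows(x, .id = "model")
  out <- arrange(out, desc(.data[[sort_by]]))
  # Differential of each model relative to the best (first row).
  out$differential <- out[[sort_by]][1] - out[[sort_by]]
  out
}
```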
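For the optional last step, one possible direction, sketched under the assumption that each `DistributionModel` can expose some parsimony criterion; `get_criterion()` is a hypothetical accessor, not an existing method:

```r
# Hypothetical: compare models directly via an information criterion.
# get_criterion() is an assumed accessor; engines that cannot provide
# the requested criterion would need to error informatively instead.
compare.DistributionModel <- function(x, ..., criterion = "AIC") {
  models <- list(x, ...)
  scores <- vapply(models, function(m) m$get_criterion(criterion), numeric(1))
  out <- tibble::tibble(model = seq_along(models),
                        criterion = criterion,
                        value = scores)
  out <- out[order(out$value), ]  # lower is better for AIC/BIC-type criteria
  out$differential <- out$value - out$value[1]
  out
}
```

Such criteria are only comparable across models fitted to the same data under compatible likelihoods, which is why the same-family restriction in the step above matters.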