Make a 'sanity check' test to check that the default model returns reasonable results #232

Open
nickmalleson opened this issue Dec 17, 2020 · 2 comments
Labels: enhancement (New feature or request)

Comments

nickmalleson (Collaborator)

No description provided.

nickmalleson added the "enhancement" label Dec 17, 2020
nickmalleson self-assigned this Dec 17, 2020
github-actions

Branch nickmalleson-issue-232 created!

nickmalleson (Collaborator, Author)

Notes about what we might do first (but maybe redundant if we fix #231):

  • Move 'gam_cases.csv' to 'devon_data' (or anywhere else really; it shouldn't sit in the root directory)
  • Add a parameter to default.yml (observations-file) specifying where that file can be found (see the sketch after this list)
I started to make a test, but then gave up. Here's some code I'll put back in next year:

def test_model_performance(setup_results):
    """This is a high-level test that checks whether the model is performing
    reasonably well against a benchmark of real observations. Useful if a change
    means that the default model no longer works properly."""
    # TODO: implement this
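For when this gets picked up again, a rough sketch of what the body could eventually do. The setup_results fixture interface (a daily_cases attribute), the devon_data/gam_cases.csv location, the "cases" column name, and the 25% tolerance are all assumptions for illustration, not the real schema:

import numpy as np
import pandas as pd

def test_model_performance(setup_results):
    """High-level sanity check that the default model tracks the observed
    benchmark reasonably well. Useful if a change means that the default
    model no longer works properly."""
    # Observed cases (file location and column name are assumptions).
    observed = pd.read_csv("devon_data/gam_cases.csv")["cases"].to_numpy()

    # Simulated daily cases from the fixture, truncated to the overlapping period.
    simulated = np.asarray(setup_results.daily_cases)[: len(observed)]

    # Crude benchmark: mean absolute error within an arbitrary tolerance.
    mae = np.mean(np.abs(simulated - observed))
    assert mae < 0.25 * observed.mean(), (
        f"Default model drifted too far from observations (MAE={mae:.1f})")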
