# 143 / Update docs to reflect new schema version v3.0.1 #163
* Loads only data that are needed
* Disadvantages:
  * Harder to work with; teams and people who want to work with files need to install additional libraries

(model-output-schema)=
## The importance of a stable model output file schema

> Note the difference in the following discussion between the [hubverse schema](https://github.com/hubverse-org/schemas) - the schema against which hub config files are validated - and an [`arrow schema`](https://arrow.apache.org/docs/11.0/r/reference/Schema.html) - the mapping of model output columns to data types.

Because we store model output data as separate files but open them as a single `arrow` dataset using the `hubData` package, it is necessary for all files to conform to the same [`arrow schema`](https://arrow.apache.org/docs/11.0/r/reference/Schema.html) (i.e. share the same column data types) across the lifetime of the hub for it to be successfully accessed as an `arrow` dataset. This means that the addition of new rounds should not change the overall hub schema at a later date (i.e. after submissions have already started being collected).
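
As a minimal sketch of why this matters, a hub's model output files are typically opened along the following lines (the hub path here is hypothetical); opening only succeeds if every file can be read with one shared schema:

```r
library(dplyr)

# Connect to all model output files in a hub as a single arrow dataset.
# This only works if every file shares the same column data types.
hub_con <- hubData::connect_hub("path/to/hub")

# Queries are evaluated lazily by arrow and collected into a data frame
hub_con |>
  filter(output_type == "quantile") |>
  collect()
```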
Many common task IDs are covered by the [hubverse schema](#model-tasks-tasks-json-interactive-schema) and are validated during hub config validation, so they should have consistent and stable data types. However, there are a number of situations where a single consistent data type cannot be guaranteed (an example config follows the list), e.g.:
- New rounds introducing changes in the data types of custom task ID values, which are not covered by the hubverse schema.
- New rounds introducing changes in task IDs which are covered by the schema but accept multiple data types (e.g. `scenario_id`, where both `integer` and `character` values are accepted, or `age_group`, where no data type is specified in the hubverse schema).
- Adding new output types, which might introduce `output_type_id` values of a new data type.

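For example, the second situation could arise from a config like the following abridged and hypothetical `tasks.json` sketch, where a later round switches `scenario_id` from integer to character values:

```json
{
  "rounds": [
    {
      "model_tasks": [
        {
          "task_ids": {
            "scenario_id": {"required": [1, 2], "optional": null}
          }
        }
      ]
    },
    {
      "model_tasks": [
        {
          "task_ids": {
            "scenario_id": {"required": ["A-2024", "B-2024"], "optional": null}
          }
        }
      ]
    }
  ]
}
```

Each round passes config validation on its own, but parquet files from the two rounds would encode the `scenario_id` column with different `arrow` data types.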
While validation of config files will alert hub administrators to discrepancies in task ID value data types across modeling tasks and rounds, any change to a hub's config that has the potential to alter the overall data type of model output columns after submissions have been collected could cause issues downstream and should be avoided. This is primarily a problem for parquet files, which encapsulate a schema within the file, but it has a small chance of causing parsing errors in CSV files too.

(output-type-id-datatype)=
### The `output_type_id` column data type

Output types are configured and handled differently from task IDs in the hubverse.

On the one hand, **different output types can have output type ID values of varying data types**, and adherence to these data types is enforced by downstream, output-type-specific hubverse functionality like ensembling or visualisation.
For example, hubs expect `double` output type ID values for `quantile` output types but `character` output type IDs for a `pmf` output type, as in the abridged config sketch below.
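
The following abridged, hypothetical `tasks.json` fragment sketches this difference (the exact properties shown are simplified):

```json
{
  "output_type": {
    "quantile": {
      "output_type_id": {"required": [0.025, 0.25, 0.5, 0.75, 0.975], "optional": null},
      "value": {"type": "double", "minimum": 0}
    },
    "pmf": {
      "output_type_id": {"required": ["low", "moderate", "high"], "optional": null},
      "value": {"type": "double", "minimum": 0, "maximum": 1}
    }
  }
}
```

Here the `quantile` output type IDs are `double` quantile levels, while the `pmf` output type IDs are `character` category names.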
On the other hand, the **use of a long format for hubverse model output files requires that these multiple data types are accommodated in a single `output_type_id` column.**
This makes the output type ID column unique within the model output file in terms of how its data type is determined, configured and validated.

During submission validation, two checks are performed on the `output_type_id` column (an illustrative sketch of the first check follows the list):
1. **Subsets of `output_type_id` column values** associated with a given output type are **checked for whether they can be coerced to the correct data type defined in the config** for that output type. This ensures that correct output-type-specific downstream handling of the data is possible.
2. The **overall data type of the `output_type_id` column** is checked against the overall hub schema expectation.

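A rough illustration of the first check (a simplified sketch, not the actual `hubValidations` implementation; the values are hypothetical):

```r
# Hypothetical output_type_id values submitted for a quantile output type
quantile_ids <- c("0.025", "0.5", "0.975")

# Check that they can be coerced to double, the data type configured
# for quantile output type IDs; coercion failures become NA
coerced <- suppressWarnings(as.double(quantile_ids))
if (anyNA(coerced)) {
  stop("quantile `output_type_id` values cannot be coerced to double")
}
```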
#### Determining the overall `output_type_id` column data type automatically

To determine the overall `output_type_id` data type, the default behaviour is to automatically **detect the simplest data type that can encode all output type ID values across all rounds and output types** from the config.
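
The idea can be illustrated with the following simplified sketch (not the actual hubverse implementation, which works from the values declared in the config):

```r
# Simplified illustration: find the simplest data type that can encode
# all output type ID values gathered from a hub's config
simplest_type <- function(values) {
  coerced <- suppressWarnings(as.double(values))
  if (anyNA(coerced)) "character" else "double"
}

simplest_type(c(0.25, 0.5, 0.75))              # quantile levels only -> "double"
simplest_type(c(0.25, 0.5, "large_increase"))  # quantile + pmf -> "character"
```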
The benefit of this automatic detection is that it provides flexibility to the `output_type_id` column to adapt to the output types a hub is actually collecting. For example, a hub which only collects `mean` and `quantile` output types would, by default, have a `double` `output_type_id` column.

The risk of this automatic detection, however, arises if, in subsequent rounds (after submissions have begun), the hub decides to also start collecting a `pmf` output type. This would change the default `output_type_id` column data type from `double` to `character` and cause a conflict between the `output_type_id` column data types in older and newer files when trying to open the hub as an `arrow` dataset.

**Review comment:** I am not sure it will cause a conflict in this case. Arrow seems to only have an issue in certain cases. It needs more testing, but it seems that casting works in some cases. For example, I have 2 files with different `output_type_id` data types:

```r
> str(arrow::read_parquet("test/team2-modelb/2023-11-12-team2-modelb.parquet"))
Classes ‘data.table’ and 'data.frame': 1885 obs. of 9 variables:
[...]
 $ output_type_id: chr "EW202422" "EW202421" "EW202420" "EW202419" ...
 $ horizon       : num NA NA NA NA NA NA NA NA NA NA ...
> str(arrow::read_parquet("test/teama-model1/2023-11-12-teama-model1.parquet"))
Classes ‘data.table’ and 'data.frame': 1495 obs. of 9 variables:
[...]
 $ output_type_id: num 0.01 0.025 0.05 0.1 0.15 0.2 0.25 0.3 0.35 0.4 ...
 $ horizon       : logi NA NA NA NA NA NA ...
```

If I open the files with the `output_type_id` column expected to be "numeric", I get an error (as expected):

```r
> schema
Schema
[...]
horizon: int32
output_type_id: double
> dc <- hubData::connect_model_output("test/", file_format = "parquet", schema = schema)
> dplyr::collect(dc)
Error in `compute.Dataset()`:
! Invalid: Failed to parse string: 'EW202346' as a scalar of type double
```

but it works well if the `output_type_id` is expected to be "string":

```r
> schema
Schema
[...]
horizon: int32
output_type_id: string
> dc <- hubData::connect_model_output("test/", file_format = "parquet", schema = schema)
> dplyr::collect(dc)
# A tibble: 3,380 × 10
   origin_date scenario_id  target       horizon location age_group output_type output_type_id value
   <date>      <chr>        <chr>          <int> <chr>    <chr>     <chr>       <chr>          <dbl>
 1 2023-11-12  A-2023-10-27 peak time h…      NA 06       0-130     cdf         EW202422           1
 2 2023-11-12  A-2023-10-27 peak time h…      NA 06       0-130     cdf         EW202421           1
 3 2023-11-12  A-2023-10-27 peak time h…      NA 06       0-130     cdf         EW202420           1
 4 2023-11-12  A-2023-10-27 peak time h…      NA 06       0-130     cdf         EW202419           1
 5 2023-11-12  A-2023-10-27 peak time h…      NA 06       0-130     cdf         EW202418           1
 6 2023-11-12  A-2023-10-27 peak time h…      NA 06       0-130     cdf         EW202417           1
 7 2023-11-12  A-2023-10-27 peak time h…      NA 06       0-130     cdf         EW202416           1
 8 2023-11-12  A-2023-10-27 peak time h…      NA 06       0-130     cdf         EW202415           1
 9 2023-11-12  A-2023-10-27 peak time h…      NA 06       0-130     cdf         EW202414           1
10 2023-11-12  A-2023-10-27 peak time h…      NA 06       0-130     cdf         EW202413           1
# ℹ 3,370 more rows
# ℹ 1 more variable: model_id <chr>
# ℹ Use `print(n = ...)` to see more rows
```

And it seems that the …
### Fixing the `output_type_id` column data type with the `output_type_id_datatype` property

To enable hub administrators to configure and communicate the data type of the `output_type_id` column at a hub level, the hubverse schema allows for the use of an optional `output_type_id_datatype` property.
This property should be provided at the top level of `tasks.json` (i.e. as a sibling to `rounds` and `schema_version`), can take any of the following values: `"auto"`, `"character"`, `"double"`, `"integer"`, `"logical"` or `"Date"`, and can be used to fix the `output_type_id` column data type.

```json
{
  "schema_version": "https://raw.githubusercontent.com/hubverse-org/schemas/main/v3.0.1/tasks-schema.json",
  "rounds": [...],
  "output_type_id_datatype": "character"
}
```

If not supplied, or if `"auto"` is set, the default behaviour of automatically detecting the data type from `output_type_id` values is used.
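
On the data access side, `hubData` appears to expose a matching `output_type_id_datatype` argument that can override this when connecting to a hub; treat the exact argument name and accepted values here as an assumption to verify against the `hubData` documentation:

```r
# Assumed hubData API: override the output_type_id data type when
# opening the hub ("path/to/hub" is a hypothetical hub path)
hub_con <- hubData::connect_hub(
  "path/to/hub",
  output_type_id_datatype = "character"
)
```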
This gives hub administrators the ability to future-proof the `output_type_id` column in their model output files if they are unsure whether they may start collecting an output type that could affect the schema, by setting the column to `"character"` (the safest data type, which all other values can be encoded as) at the start of data collection.

**Review comment:** I don't think the "at the start" is necessary here. Setting it later on will also work, I think.

**Review comment:** I think I agree with Lucie. Unrelated: "future proof" should be "future-proof".

**Review comment:** I guess the issue with changing the specification after the start is that analysis/data retrieval/munging code that worked before might need to be changed. So maybe we should specify that it could be changed later, but any changes might impact other code that is in place. So maybe this fits in the "soft recommendation" category (maybe "should" is ok, with an explanation of why that is the recommendation).
**Review comment:** minor suggestion: it's --> it is