143 / Update docs to reflect new schema version v3.0.1 #163

Merged · 5 commits · Aug 5, 2024
Changes from 3 commits
docs/source/conf.py (2 changes: 1 addition & 1 deletion)
@@ -73,7 +73,7 @@
# -- Options for EPUB output
epub_show_urls = 'footnote'

schema_version = "v3.0.0"
schema_version = "v3.0.1"
# Use the schema_branch variable to specify a branch in the schemas repository from which the config schema will be sourced, especially for docson widgets.
# Useful if the schema being documented hasn't been released to the `main` branch in the schemas repo yet. If the version has been released already, set this to "main".
schema_branch = "br-"+schema_version
docs/source/quickstart-hub-admin/tasks-config.md (16 changes: 15 additions & 1 deletion)
@@ -247,4 +247,18 @@ There are [two ways](https://github.com/hubverse-org/schemas/blob/de580d56b8fc5c
```
<br/>


## Step 9: Optional - Set up `"output_type_id_datatype"`

Once all modeling tasks and rounds have been configured, you may also choose to fix the `output_type_id` column data type across all model output files of the hub using the optional `"output_type_id_datatype"` property.

This property should be provided at the top level of `tasks.json` (i.e. sibling to `rounds` and `schema_version`) and can take any of the following values: `"auto"`, `"character"`, `"double"`, `"integer"`, `"logical"`, `"Date"`.

```json
{
    "schema_version": "https://raw.githubusercontent.com/hubverse-org/schemas/main/v3.0.1/tasks-schema.json",
    "rounds": [...],
    "output_type_id_datatype": "character"
}
```

For more context and details on when and how to use this setting, please see the [`output_type_id` column data type](output-type-id-datatype) section on the **model output** page.
docs/source/user-guide/model-output.md (54 changes: 54 additions & 0 deletions)
@@ -122,3 +122,57 @@ Validation of forecast values occurs in two steps:
* Loads only data that are needed
* Disadvantages:
* Harder to work with; teams and people who want to work with files need to install additional libraries

(model-output-schema)=
## The importance of a stable model output file schema

> Note the difference in the following discussion between the [hubverse schema](https://github.com/hubverse-org/schemas) (the schema against which hub config files are validated) and the [`arrow schema`](https://arrow.apache.org/docs/11.0/r/reference/Schema.html) (the mapping of model output columns to data types).

Because we store model output data as separate files but open them as a single `arrow` dataset using the `hubData` package, for a hub to be successfully accessed as an `arrow` dataset, it's necessary to ensure that all files conform to the same [`arrow schema`](https://arrow.apache.org/docs/11.0/r/reference/Schema.html) (i.e. share the same column data types) across the lifetime of the hub. This means that additions of new rounds should not change the overall hub schema at a later date (i.e. after submissions have already started being collected).
> **Review comment (Contributor):** minor suggestion: it's --> it is
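A minimal sketch of what this means in practice, assuming a hypothetical `model-output/` directory of parquet files (this uses plain `arrow` rather than `hubData`):

```r
library(arrow)

# open_dataset() unifies every file in the directory under one schema,
# so files whose column data types disagree can error when read:
ds <- open_dataset("model-output/", format = "parquet")
ds$schema                 # the single schema all files must conform to
head(dplyr::collect(ds))  # materialising the data surfaces type conflicts
```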
Many common task IDs are covered by the [hubverse schema](#model-tasks-tasks-json-interactive-schema) and are validated during hub config validation, so they should have consistent and stable data types. However, there are a number of situations where a single consistent data type cannot be guaranteed, e.g.:
- New rounds introducing changes in custom task ID value data types, which are not covered by the hubverse schema.
- New rounds introducing changes in task IDs that are covered by the schema but accept multiple data types (e.g. `scenario_id`, where both `integer` and `character` are accepted, or `age_group`, where no data type is specified in the hubverse schema); see the config sketch after this list.
- Adding new output types, which might introduce `output_type_id` values of a new data type.
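As a hypothetical, heavily abridged `tasks.json` fragment (round-level properties omitted), the second situation could look like this: an early round configures integer `scenario_id` values while a later round switches to character values, and each round validates fine on its own:

```json
{
    "rounds": [
        {
            "model_tasks": [{
                "task_ids": {
                    "scenario_id": {"required": null, "optional": [1, 2]}
                }
            }]
        },
        {
            "model_tasks": [{
                "task_ids": {
                    "scenario_id": {"required": null, "optional": ["A-2024-01", "B-2024-01"]}
                }
            }]
        }
    ]
}
```

Parquet files submitted to the two rounds would nevertheless carry different `scenario_id` column data types.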

While validation of config files will alert hub administrators to discrepancies in task ID value data types across modeling tasks and rounds, any change to a hub's config that has the potential to alter the overall data type of model output columns after submissions have been collected could cause issues downstream and should be avoided. This is primarily a problem for parquet files, which encapsulate a schema within the file, but it also has a small chance of causing parsing errors in csvs.

(output-type-id-datatype)=
### The `output_type_id` column data type

Output types are configured and handled differently than task IDs in the hubverse.

On the one hand, **different output types can have output type ID values of varying data types**, and adherence to these data types is required by downstream, output-type-specific hubverse functionality like ensembling or visualisation.
For example, hubs expect `double` output type ID values for `quantile` output types but `character` output type IDs for a `pmf` output type.

On the other hand, the **use of a long format for hubverse model output files requires that these multiple data types are accommodated in a single `output_type_id` column.**
This makes the output type ID column unique within the model output file in terms of how its data type is determined, configured and validated, as the illustrative rows below show.
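For illustration, a few hypothetical rows of a long-format model output file, where `double` quantile IDs and `character` pmf category IDs must share the one `output_type_id` column:

```
output_type  output_type_id  value
quantile     0.5             12.3
quantile     0.975           31.8
pmf          low             0.10
pmf          high            0.65
```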

During submission validation, two checks are performed on the `output_type_id` column (a sketch of the first follows the list):
1. **Subsets of `output_type_id` column values** associated with a given output type are **checked to ensure they can be coerced to the correct data type defined in the config** for that output type. This ensures correct output-type-specific downstream handling of the data is possible.
2. The **overall data type of the `output_type_id` column** is checked against the overall hub schema expectation.
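The first check can be sketched roughly as follows (illustrative R, not the actual hubverse validation code):

```r
# Hypothetical output type ID values read from a csv submission, where
# all columns arrive as character:
quantile_ids <- c("0.05", "0.5", "0.95")
pmf_ids <- c("low", "medium", "high")

# Quantile output type IDs must coerce cleanly to double:
coerced <- suppressWarnings(as.double(quantile_ids))
stopifnot(!anyNA(coerced))  # passes: no values were lost in coercion

# pmf category labels are checked against character instead; coercing
# them to double would produce NAs and fail a check like the above:
any(is.na(suppressWarnings(as.double(pmf_ids))))  # TRUE
```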

#### Determining the overall `output_type_id` column data type automatically

To determine the overall `output_type_id` data type, the default behaviour is to automatically **detect, from the config, the simplest data type that can encode all output type ID values across all rounds and output types**.

The benefit of this automatic detection is that it provides flexibility to the `output_type_id` column to adapt to the output types a hub is actually collecting. For example, a hub which only collects `mean` and `quantile` output types would, by default, have a `double` `output_type_id` column.
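R's own vector coercion rules offer a loose analogy for this detection (illustrative only, not hubData's actual implementation):

```r
# mean output types have no output type IDs (NA) and quantile IDs are
# numeric, so double can encode every value:
typeof(c(NA, 0.25, 0.5, 0.975))         # "double"

# adding pmf category labels forces the simplest common type up to
# character:
typeof(c(NA, 0.25, 0.5, 0.975, "low"))  # "character"
```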

The risk of this automatic detection, however, arises if, in subsequent rounds (after submissions have begun), the hub decides to also start collecting a `pmf` output type. This would change the default `output_type_id` column data type from `double` to `character` and cause a conflict between the `output_type_id` column data types in older and newer files when trying to open the hub as an `arrow` dataset.
> **Review comment (@LucieContamin, Aug 2, 2024):** I am not sure it will cause a conflict in this case. Arrow seems to only have issues in certain cases. It needs more testing, but it seems that casting double to character does not cause issues, while casting character to double does.

For example, I have 2 files with output type id as character or numeric and horizon as logical or numeric:
```r
> str(arrow::read_parquet("test/team2-modelb/2023-11-12-team2-modelb.parquet"))
Classes 'data.table' and 'data.frame':	1885 obs. of  9 variables:
[...]
 $ output_type_id: chr  "EW202422" "EW202421" "EW202420" "EW202419" ...
 $ horizon       : num  NA NA NA NA NA NA NA NA NA NA ...

> str(arrow::read_parquet("test/teama-model1/2023-11-12-teama-model1.parquet"))
Classes 'data.table' and 'data.frame':	1495 obs. of  9 variables:
[...]
 $ output_type_id: num  0.01 0.025 0.05 0.1 0.15 0.2 0.25 0.3 0.35 0.4 ...
 $ horizon       : logi  NA NA NA NA NA NA ...
```

If I open the files with the output type id column expected to be "numeric", I get an error (as expected):

```r
> schema
Schema
[...]
horizon: int32
output_type_id: double

> dc <- hubData::connect_model_output("test/", file_format = "parquet", schema = schema)
> dplyr::collect(dc)
Error in `compute.Dataset()`:
! Invalid: Failed to parse string: 'EW202346' as a scalar of type double
```

But it works well if the output type id is expected to be character:

```r
> schema
Schema
[...]
horizon: int32
output_type_id: string

> dc <- hubData::connect_model_output("test/", file_format = "parquet", schema = schema)
> dplyr::collect(dc)
# A tibble: 3,380 × 10
   origin_date scenario_id  target       horizon location age_group output_type output_type_id value
   <date>      <chr>        <chr>          <int> <chr>    <chr>     <chr>       <chr>          <dbl>
 1 2023-11-12  A-2023-10-27 peak time h…      NA 06       0-130     cdf         EW202422           1
 2 2023-11-12  A-2023-10-27 peak time h…      NA 06       0-130     cdf         EW202421           1
 3 2023-11-12  A-2023-10-27 peak time h…      NA 06       0-130     cdf         EW202420           1
 4 2023-11-12  A-2023-10-27 peak time h…      NA 06       0-130     cdf         EW202419           1
 5 2023-11-12  A-2023-10-27 peak time h…      NA 06       0-130     cdf         EW202418           1
 6 2023-11-12  A-2023-10-27 peak time h…      NA 06       0-130     cdf         EW202417           1
 7 2023-11-12  A-2023-10-27 peak time h…      NA 06       0-130     cdf         EW202416           1
 8 2023-11-12  A-2023-10-27 peak time h…      NA 06       0-130     cdf         EW202415           1
 9 2023-11-12  A-2023-10-27 peak time h…      NA 06       0-130     cdf         EW202414           1
10 2023-11-12  A-2023-10-27 peak time h…      NA 06       0-130     cdf         EW202413           1
# ℹ 3,370 more rows
# ℹ 1 more variable: model_id <chr>
# ℹ Use `print(n = ...)` to see more rows
```

And it seems that the horizon column does not encounter any issues either. So some casting works but not all; my guess is that forcing the columns to character will always work and can be used to bypass some issues.


### Fixing the `output_type_id` column data type with the `output_type_id_datatype` property

To enable hub administrators to configure and communicate the data type of the `output_type_id` column at a hub level, the hubverse schema allows for the use of an optional `output_type_id_datatype` property.
This property should be provided at the top level of `tasks.json` (i.e. sibling to `rounds` and `schema_version`) and can take any of the following values: `"auto"`, `"character"`, `"double"`, `"integer"`, `"logical"`, `"Date"`. It can be used to fix the `output_type_id` column data type.

```json
{
    "schema_version": "https://raw.githubusercontent.com/hubverse-org/schemas/main/v3.0.1/tasks-schema.json",
    "rounds": [...],
    "output_type_id_datatype": "character"
}
```
If not supplied or if `"auto"` is set, the default behaviour of automatically detecting the data type from `output_type_id` values is used.

This gives hub administrators the ability to future-proof the `output_type_id` column in their model output files, if they are unsure whether they may start collecting an output type that could affect the schema, by setting the column to `"character"` (the safest data type, which all other values can be encoded as) at the start of data collection.
> **Review comment (Contributor):** I don't think the "at the start" is necessary here. Setting it later on will also work I think

> **Review comment (Contributor):** I think I agree with Lucie.
>
> unrelated: "future proof" should be "future-proof".

> **Review comment (Contributor):** I guess the issue with changing the specification after the start is that analysis/data retrieval/munging code that worked before might need to be changed. So maybe we should specify that it could be changed later, but any changes might impact other code that is in place. So maybe this fits in the "soft recommendation" category (maybe "should" is ok, with an explanation of why that is the recommendation.)