143 / Update docs to reflect new schema version v3.0.1 #163
Conversation
I might have misunderstood something, but I don't think a different format is always an issue. It can cause one, but not always, so I don't think we need to be really "strict" on that. For example, you say:

"While validation of config files will alert hub administrations to discrepancies in task ID value data types across modeling tasks and rounds, any changes to a hub's config which has the potential to change the overall data type of model output columns after submissions have been collected could cause issues downstream and should be avoided."

I don't think "SHOULD" is the word here, because in a lot of cases it might be totally fine, especially as the schema is generated across all the rounds and will often force the column to `character`, which does not cause any issue. The `output_type_id` column can have different formats in multiple files and still work if set to `character`, for example. So the `output_type_id` column can be flexible, and I think it's really important to keep that flexibility, as some hubs might evolve and need it.
> The benefit of this automatic detection is that it provides flexibility to the `output_type_id` column to adapt to the output types a hub is actually collecting. For example, a hub which only collects `mean` and `quantile` output types would, by default, have a `double` `output_type_id` column.

> The risk of this automatic detection however arises if, in subsequent rounds - after submissions have begun - the hub decides to also start collecting a `pmf` output type. This would change the default `output_type_id` column data type from `double` to `character` and cause a conflict between the `output_type_id` column data type in older and newer files when trying to open the hub as an `arrow` dataset.
I am not sure it will cause a conflict in this case. Arrow seems to only have issues in certain cases. It needs more testing, but it seems that casting `double` to `character` does not cause an issue, while casting `character` to `double` does.

For example, I have two files with `output_type_id` as character or numeric and `horizon` as logical or numeric:
```
> str(arrow::read_parquet("test/team2-modelb/2023-11-12-team2-modelb.parquet"))
Classes ‘data.table’ and 'data.frame': 1885 obs. of 9 variables:
[...]
 $ output_type_id: chr "EW202422" "EW202421" "EW202420" "EW202419" ...
 $ horizon       : num NA NA NA NA NA NA NA NA NA NA ...
> str(arrow::read_parquet("test/teama-model1/2023-11-12-teama-model1.parquet"))
Classes ‘data.table’ and 'data.frame': 1495 obs. of 9 variables:
[...]
 $ output_type_id: num 0.01 0.025 0.05 0.1 0.15 0.2 0.25 0.3 0.35 0.4 ...
 $ horizon       : logi NA NA NA NA NA NA ...
```
If I open the files with the `output_type_id` column expected to be numeric, I get an error (as expected):
```
> schema
Schema
[...]
horizon: int32
output_type_id: double
> dc <- hubData::connect_model_output("test/", file_format = "parquet", schema = schema)
> dplyr::collect(dc)
Error in `compute.Dataset()`:
! Invalid: Failed to parse string: 'EW202346' as a scalar of type double
```
but it works well if `output_type_id` is expected to be character:
```
> schema
Schema
[...]
horizon: int32
output_type_id: string
> dc <- hubData::connect_model_output("test/", file_format = "parquet", schema = schema)
> dplyr::collect(dc)
# A tibble: 3,380 × 10
   origin_date scenario_id  target       horizon location age_group output_type output_type_id value
   <date>      <chr>        <chr>          <int> <chr>    <chr>     <chr>       <chr>          <dbl>
 1 2023-11-12  A-2023-10-27 peak time h…      NA 06       0-130     cdf         EW202422           1
 2 2023-11-12  A-2023-10-27 peak time h…      NA 06       0-130     cdf         EW202421           1
 3 2023-11-12  A-2023-10-27 peak time h…      NA 06       0-130     cdf         EW202420           1
 4 2023-11-12  A-2023-10-27 peak time h…      NA 06       0-130     cdf         EW202419           1
 5 2023-11-12  A-2023-10-27 peak time h…      NA 06       0-130     cdf         EW202418           1
 6 2023-11-12  A-2023-10-27 peak time h…      NA 06       0-130     cdf         EW202417           1
 7 2023-11-12  A-2023-10-27 peak time h…      NA 06       0-130     cdf         EW202416           1
 8 2023-11-12  A-2023-10-27 peak time h…      NA 06       0-130     cdf         EW202415           1
 9 2023-11-12  A-2023-10-27 peak time h…      NA 06       0-130     cdf         EW202414           1
10 2023-11-12  A-2023-10-27 peak time h…      NA 06       0-130     cdf         EW202413           1
# ℹ 3,370 more rows
# ℹ 1 more variable: model_id <chr>
# ℹ Use `print(n = ...)` to see more rows
```
And the `horizon` column does not encounter any issues either. So some casting works but not all, and my guess is that forcing the columns to `character` will always work and can be used to bypass some issues.
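For illustration, here is a minimal sketch of that workaround, assuming the column layout of the two files above (the schema construction is hypothetical; only the `hubData::connect_model_output()` call mirrors the thread):

```r
library(arrow)

# Force ambiguous columns to string so that files whose native types differ
# (chr vs num output_type_id, logi vs num horizon) can be opened together.
safe_schema <- arrow::schema(
  origin_date    = arrow::date32(),
  scenario_id    = arrow::utf8(),
  target         = arrow::utf8(),
  horizon        = arrow::int32(),
  location       = arrow::utf8(),
  age_group      = arrow::utf8(),
  output_type    = arrow::utf8(),
  output_type_id = arrow::utf8(),  # string: every other value type can be cast to it
  value          = arrow::float64()
)

dc <- hubData::connect_model_output("test/", file_format = "parquet", schema = safe_schema)
dplyr::collect(dc)
```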
> If not supplied or if `"auto"` is set, the default behaviour of automatically detecting the data type from `output_type_id` values is used.

> This gives hub administrators the ability to future proof the `output_type_id` column in their model output files if they are unsure whether they may start collecting an output type that could affect the schema, by setting the column to `"character"` (the safest data type that all other values can be encoded as) at the start of data collection.
I don't think the "at the start" is necessary here. Setting it later on will also work, I think.
I think I agree with Lucie.
unrelated: "future proof" should be "future-proof".
I guess the issue with changing the specification after the start is that analysis/data retrieval/munging code that worked before might need to be changed. So maybe we should specify that it could be changed later, but that any changes might impact other code that is in place. So maybe this fits in the "soft recommendation" category (maybe "should" is OK, with an explanation of why that is the recommendation).
> Note the difference in the following discussion between [hubverse schema](https://github.com/hubverse-org/schemas) - the schema which hub config files are validated against - and [`arrow schema`](https://arrow.apache.org/docs/11.0/r/reference/Schema.html) - the mapping of model output columns to data types.

> Because we store model output data as separate files but open them as a single `arrow` dataset using the `hubData` package, for a hub to be successfully accessed as an `arrow dataset`, it's necessary to ensure that all files conform to the same [`arrow schema`](https://arrow.apache.org/docs/11.0/r/reference/Schema.html) (i.e. share the same column data types) across the lifetime of the hub. This means that additions of new rounds should not change the overall hub schema at a later date (i.e. after submissions have already started being collected).
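As an aside, one way to verify this conformance is to inspect each file's native schema before opening the hub as a dataset. A rough sketch using standard `arrow` accessors on the directory layout from this thread (the snippet itself is illustrative, not part of the PR):

```r
# Report the native data type of output_type_id in every parquet file,
# to spot discrepancies before they surface as dataset-opening errors.
files <- list.files("test", pattern = "\\.parquet$", recursive = TRUE, full.names = TRUE)
vapply(files, function(f) {
  sch <- arrow::open_dataset(f, format = "parquet")$schema
  sch$GetFieldByName("output_type_id")$type$ToString()
}, character(1))
```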
minor suggestion: it's --> it is
I think overall the documentation here is very clear and well-written. I added a few small grammatical suggestions (ignore them at the risk of incurring the grumpiness of a recovering English major!). And see specific comments/replies to @LucieContamin's comments about including some clarifications about why we make some specific recommendations.
Thanks for the comments @LucieContamin and @nickreich! Regarding your request to relax the language, note firstly that I am careful to use language like "should" and "could" rather than "must".

Having said that, once we have created functionality to inspect a hub for integrity, we are planning to add functionality that could repair any data type discrepancies and update files to conform to a changed schema (hubverse-org/hubValidations#92). This could help admins in a situation where a breaking schema change needs to be introduced. However, in general, and certainly until such functionality exists, I feel very strongly about being firm in our advice to steer users away from hub schema changes.
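For context, a hypothetical sketch of what such a repair could amount to for the files in this thread (this is not the planned `hubValidations` API, just a manual equivalent):

```r
# Manually "repair" a file whose output_type_id was written as numeric:
# read it, cast the column to character, and write it back in place.
path <- "test/teama-model1/2023-11-12-teama-model1.parquet"
df <- arrow::read_parquet(path)
df$output_type_id <- as.character(df$output_type_id)
arrow::write_parquet(df, path)
```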
Thanks @annakrystalli for your reply. To follow up on that:
```
> dc <- hubData::connect_model_output("test/", file_format = "parquet", schema = schema)
> df <- dplyr::filter(dc, location == "06", output_type == "cdf")
> hubData::collect_hub(df)
# A tibble: 145 × 10
   model_id     origin_date scenario_id target horizon location age_group output_type output_type_id
 * <chr>        <date>      <chr>       <chr>    <int> <chr>    <chr>     <chr>       <chr>
 1 team2-modelb 2023-11-12  A-2023-10-… peak …      NA 06       0-130     cdf         EW202422
 2 team2-modelb 2023-11-12  A-2023-10-… peak …      NA 06       0-130     cdf         EW202421
 3 team2-modelb 2023-11-12  A-2023-10-… peak …      NA 06       0-130     cdf         EW202420
 4 team2-modelb 2023-11-12  A-2023-10-… peak …      NA 06       0-130     cdf         EW202419
 5 team2-modelb 2023-11-12  A-2023-10-… peak …      NA 06       0-130     cdf         EW202418
 6 team2-modelb 2023-11-12  A-2023-10-… peak …      NA 06       0-130     cdf         EW202417
 7 team2-modelb 2023-11-12  A-2023-10-… peak …      NA 06       0-130     cdf         EW202416
 8 team2-modelb 2023-11-12  A-2023-10-… peak …      NA 06       0-130     cdf         EW202415
 9 team2-modelb 2023-11-12  A-2023-10-… peak …      NA 06       0-130     cdf         EW202414
10 team2-modelb 2023-11-12  A-2023-10-… peak …      NA 06       0-130     cdf         EW202413
# ℹ 135 more rows
# ℹ 1 more variable: value <dbl>
# ℹ Use `print(n = ...)` to see more rows
```

As a reminder, these are my input files:

```
> str(arrow::read_parquet("test/team2-modelb/2023-11-12-team2-modelb.parquet"))
Classes ‘data.table’ and 'data.frame': 1885 obs. of 9 variables:
[...]
 $ output_type_id: chr "EW202422" "EW202421" "EW202420" "EW202419" ...
 $ horizon       : num NA NA NA NA NA NA NA NA NA NA ...
> str(arrow::read_parquet("test/teama-model1/2023-11-12-teama-model1.parquet"))
Classes ‘data.table’ and 'data.frame': 1495 obs. of 9 variables:
[...]
 $ output_type_id: num 0.01 0.025 0.05 0.1 0.15 0.2 0.25 0.3 0.35 0.4 ...
 $ horizon       : logi NA NA NA NA NA NA ...
```

So I don't know if something changed in Arrow or if they fixed something, but it might be good to re-test to be sure the issue is still there and deserves a "firm" recommendation, especially for the scenario hubs, where it can be really difficult to guess what you are going to do in future rounds.
Yes, but you did not filter on `output_type_id`.
I see, yes, filtering on `output_type_id` does fail. However, I am still not sure about being "strict". For example, the statement that "for a hub to be successfully accessed as an `arrow` dataset, it's necessary to ensure that all files conform to the same `arrow schema`" I think is misleading. As we just saw, you can load your whole dataset and/or filter on other columns, and it will be fine. For a Scenario Hub, my guess is that this rule is going to be difficult to follow. And if a hub has a round that has a different output type, then users can still load the full hub and filter just that particular round and output type (as only the id column will be different). What I am trying to say is that I don't think it should be on hubverse to be "strict"; we can just explain that behavior quickly: filtering on a column with a different format will not work, but other filtering and loading will work (you might just need to provide a schema). And I can be on the hub maintainer side to deal with it.
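To make the behaviour described here concrete, a sketch under the assumption that `schema` declares `output_type_id` as string, as in the earlier examples (the filter values are hypothetical):

```r
dc <- hubData::connect_model_output("test/", file_format = "parquet", schema = schema)

# Filtering on other columns works across files of mixed native types...
dc |>
  dplyr::filter(output_type == "cdf", location == "06") |>
  dplyr::collect()

# ...and filtering on output_type_id also works, provided the value is
# expressed as a string, since every file's column is read as string.
dc |>
  dplyr::filter(output_type_id == "EW202422") |>
  dplyr::collect()
```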
As I said, the language I use is "should" and "could". If you can introduce the potential for errors in a scenario hub and get away with not filtering on those particular columns, then that is lucky for you, but it should not affect our general advice for the best experience. To me that is still a lucky hack. So I still stand by my original comment, especially as it acts as a disclaimer for potential problems that might arise if the advice is not followed.
I totally understand, and I am not saying we should not advise that; sorry if I am not being clear. Also, as English is not my first language, maybe I interpreted the documentation a little wrong; if so, I apologize.

Again, if I misinterpreted the documentation update, I apologize. I am just thinking of the new hubs that might worry about setting up their first rounds with a lot of uncertainty about future rounds, if that makes sense.
@LucieContamin, to your comment about the statement being misleading: I can see your point, so I clarified the meaning of what a successful interaction with a hub would be, i.e. for a hub to be successfully accessed and fully queryable across all columns as an `arrow` dataset. I feel that if a hub breaks these expectations, it should be up to the hub to communicate to their users what functionality will cause errors. Does that seem fair?
BTW, re detailing the different kinds of potential for error, I did add a bit more detail. I think that should be enough, as I don't think hubDocs is the place for detailed discussions of all the ways arrow can go wrong. Regarding adding new output types, that's why we've introduced the `output_type_id_datatype` setting.

It sounds like scenario hubs might in future make good use of the hub repair functionality once developed. And really, I think that should be our best-practice advice in the future: if you do get yourself into a corner, use our hub repair functionality and correct your data rather than put up with limited or hacky ways to access your data.
I agree, it should be enough. Thank you very much for the updates! I might be wrong, but I agree: I think scenario hubs might make good use of the hub repair (US SMH will use it for past rounds), and yes, good practice should be to try to avoid columns of mixed formats.
Thank you again for the update and additional information!
Thanks @LucieContamin! Always appreciate your willingness to engage and think things through!
This PR both resolves #143 and brings documentation up to date with schema version v3.0.1.