[ADAP-1008] [Bug] dbt does not create _dbt_max_partition for subsequent runs of models that are incremental (insert_overwrite) + partitioned + contract enforced #583
Comments
Shower thought: Is it just me or are there just so many knobs (config) for BQ models 😁

👍 I have also encountered this error when trying to build an incremental (insert_overwrite) + partitioned + contract enforced table:

This issue has been marked as Stale because it has been open for 180 days with no activity. If you would like the issue to remain open, please comment on the issue or else it will be closed in 7 days.

I created a PR with a fix for this issue by updating the bigquery__get_empty_subquery_sql macro. Hope it helps.
Is this a new bug in dbt-bigquery?
Current Behavior
As per title, if a model is incremental (insert_overwrite) + partitioned + contract enforced, then the subsequent run does not declare the _dbt_max_partition variable, as opposed to when the model is not contract enforced. This results in an error if the model uses _dbt_max_partition in its body.

Expected Behavior
We should be creating _dbt_max_partition regardless of contract enforcement.

Steps To Reproduce
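A minimal model that should hit this on its second run (a sketch — the model, source, and column names are hypothetical, and a matching schema.yml with column definitions is assumed for contract enforcement):

```sql
-- models/my_incremental_model.sql (hypothetical)
{{ config(
    materialized='incremental',
    incremental_strategy='insert_overwrite',
    partition_by={'field': 'event_date', 'data_type': 'date'},
    contract={'enforced': true}
) }}

select
    event_date,
    user_id
from {{ source('my_source', 'events') }}  -- hypothetical source
{% if is_incremental() %}
  -- On incremental runs this references _dbt_max_partition, which is
  -- normally declared by dbt for insert_overwrite models; with the
  -- contract enforced it is missing, so this is where the run errors.
  where event_date >= _dbt_max_partition
{% endif %}
```

Run it twice (`dbt run -s my_incremental_model`); the first run builds the table, the second takes the incremental path and fails.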
Relevant log output
Environment
Additional Context
Basically, when contract is enforced, we did not do:
Plus other DDL/DML.
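For reference, the declaration dbt normally emits for insert_overwrite models looks roughly like this (a sketch with hypothetical project/dataset/column names, not the exact generated script):

```sql
-- Sketch of the scripting variable dbt-bigquery normally declares
-- before the incremental merge/insert statements
declare _dbt_max_partition date default (
    select max(event_date)
    from `my_project`.`my_dataset`.`my_incremental_model`
    where event_date is not null
);
```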
I think in the schema-enforced scenario, we first try to figure out what the data types are via this introspection query:
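That introspection query is roughly the following dry-run pattern (a sketch of what get_empty_subquery_sql produces, not the exact macro output; table/column names are hypothetical):

```sql
-- dbt wraps the compiled model SQL in an empty subquery so BigQuery
-- returns column names and types without scanning any data
select * from (
    -- compiled model SQL, which may reference _dbt_max_partition
    select event_date, user_id
    from `my_project`.`my_dataset`.`events`
    where event_date >= _dbt_max_partition  -- fails: variable not declared yet
) as __dbt_sbq
where false
limit 0
```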
We need to know the data types to detect whether any have drifted, and thus whether to raise an error or append new columns (the on_schema_change config). Unfortunately, the logic:
has resolved, and since we don't create _dbt_max_partition until later, we error at that introspection query.
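One way to address this, in the spirit of the PR mentioned in the comments (which updates bigquery__get_empty_subquery_sql), is to make sure the variable exists before the dry-run query compiles. A hedged sketch of an override, not the actual PR code — a real fix would use the model's partition data type rather than a hard-coded one:

```sql
{% macro bigquery__get_empty_subquery_sql(select_sql, select_sql_header=none) %}
    -- Declare the variable so model SQL referencing it still compiles;
    -- a null placeholder is enough for a type-only dry run (where false)
    declare _dbt_max_partition timestamp default null;
    select * from (
        {{ select_sql }}
    ) as __dbt_sbq
    where false
    limit 0
{% endmacro %}
```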