1.7.11 airflow 2.8.1 #9852
Closed · marcusintrohive wants to merge 65 commits into dbt-labs:main from marcusintrohive:1.7.11_airflow_2.8.1
Conversation
dbt-labs#8865) (dbt-labs#8878) * fix * test * changelog (cherry picked from commit 35f46da) Co-authored-by: Chenyu Li <[email protected]>
* add test * fix test * first pass with constraint error * add back column checks for temp tables * changelog * Update .changes/unreleased/Fixes-20231024-145504.yaml (cherry picked from commit 98310b6) Co-authored-by: Emily Rockman <[email protected]>
…ion==0 (dbt-labs#8922) Co-authored-by: Kshitij Aranke <[email protected]>
* Fix issues around new get_catalog_by_relations macro (dbt-labs#8856) * Fix issues around new get_catalog_by_relations macro * Add changelog entry * Fix unit test. * Additional unit testing * Fix cased comparison in catalog-retrieval function (dbt-labs#8940) * Fix cased comparison in catalog-retrieval function. * Fix cased comparison in catalog-retrieval function.
(cherry picked from commit 211392c) Co-authored-by: Chenyu Li <[email protected]>
…atalog queries (dbt-labs#8945) (dbt-labs#8963) * changelog * write test case demonstrating the issue * update catalog query to reflect materialized views (cherry picked from commit bb21403) Co-authored-by: Mike Alfare <[email protected]>
…st during release process
* Fix back compat for run_results pre-v5 * Add type annotations * Add functional testing * Add inline annotations * Add changelog entry. * Consolidate upgrade_schema_version + upgrade_run_results_json * Restore accidentally reverted test cases * Pre-commit fixups --------- Co-authored-by: Jeremy Cohen <[email protected]>
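The back-compat commit above consolidates the schema-version check with the upgrade path. As a rough illustration, a pre-v5 `run_results.json` can be upgraded by filling in fields that later schema versions added; the helper and field names below are assumptions for illustration, not a verbatim copy of dbt-core's `upgrade_run_results_json`:

```python
from typing import Any, Dict

RUN_RESULTS_LATEST = 5  # assumed latest run-results schema version


def parse_schema_version(data: Dict[str, Any]) -> int:
    """Pull the version number out of metadata.dbt_schema_version."""
    url = data.get("metadata", {}).get("dbt_schema_version", "")
    # e.g. "https://schemas.getdbt.com/dbt/run-results/v4.json" -> 4
    try:
        return int(url.rsplit("/v", 1)[-1].split(".")[0])
    except ValueError:
        return RUN_RESULTS_LATEST


def upgrade_run_results_json(data: Dict[str, Any]) -> Dict[str, Any]:
    """Fill defaults so pre-v5 payloads deserialize under the v5 schema."""
    if parse_schema_version(data) < RUN_RESULTS_LATEST:
        for result in data.get("results", []):
            # fields assumed to be v5 additions, defaulted conservatively
            result.setdefault("compiled", False)
            result.setdefault("compiled_code", None)
            result.setdefault("relation_name", None)
    return data
```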
…bt-labs#9021) (dbt-labs#9026) * changelog * use MANIFEST.in to identify package data (cherry picked from commit 839c720) Co-authored-by: Mike Alfare <[email protected]>
* wip * add tests * changelog * nits * pr feedback * nits (cherry picked from commit 01d481b) Co-authored-by: Chenyu Li <[email protected]>
…st during release process
(cherry picked from commit 3902137) Co-authored-by: Michelle Ark <[email protected]>
…-labs#9065) (dbt-labs#9074)
* Add test asserting `SavedQuery` configs can be set from `dbt_project.yml`
* Allow extraneous properties in Export configs. This brings the Export config object more in line with how other config objects are specified in the unparsed definition. It allows extra configs to be specified, although they won't get propagated to the final config.
* Add `ExportConfig` options to `SavedQueryConfig` options. This allows `ExportConfig` options to be specified at the `SavedQueryConfig` level, and therefore also in the `dbt_project.yml` config. The plan in the follow-up commit is to merge the `SavedQueryConfig` options into the configs of all `Exports` belonging to the saved query. There are a couple of caveats to call out:
  1. We've used `schema` instead of `schema_name` on the `SavedQueryConfig`, despite it being called `schema_name` on the `ExportConfig`. This is because we need `schema_name` to be the name of the property on the `ExportConfig`, but `schema` is the user-facing specification.
  2. We didn't add the `ExportConfig` `alias` property to the `SavedQueryConfig`, because `alias` will always be specific to a single export, so it doesn't make sense to define it on the `SavedQueryConfig` and have it apply to all `Exports` belonging to the `SavedQuery`.
* Begin inheriting configs from the saved query config, and transitively from the project config. Export configs now inherit from saved query configs, with a preference for export config specifications. That is to say, an export config inherits a config attr from the saved query config only if a value hasn't been supplied on the export config directly. Additionally, because the saved query config has a similar relationship with the project config, export configs can inherit from the project config (again with a preference for export config specifications).
* Correct the conditional in export config building that maps `schema` to `schema_name`. I somehow wrote a really weird, but also valid, conditional statement. Previously the conditional was
  ```
  if combined.get("schema") is not combined.get("schema_name") is None:
  ```
  which chains into an identity comparison between the two values rather than stating the None checks we actually care about. It has now been fixed to do what we want: if `schema` isn't `None` and `schema_name` is `None`, set `schema_name` to the value of `schema`.
* Update parameter names in `_get_export_config` to be more verbose
(cherry picked from commit c2f7d75) Co-authored-by: Quigley Malcolm <[email protected]>
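For clarity, here is a standalone sketch of that conditional fix in plain Python; `combined` is a stub dict standing in for the merged export/saved-query config, not dbt's actual object:

```python
# Stub merged config: user supplied `schema`, not `schema_name`.
combined = {"schema": "analytics", "schema_name": None}

# Before: the chained comparison parses as
#   (combined.get("schema") is not combined.get("schema_name"))
#   and (combined.get("schema_name") is None)
# -- an identity check that happens to work here but obscures the intent.
if combined.get("schema") is not combined.get("schema_name") is None:
    combined["schema_name"] = combined["schema"]

# After: state the intent directly -- if `schema` was supplied and
# `schema_name` was not, map the user-facing key onto the internal one.
if combined.get("schema") is not None and combined.get("schema_name") is None:
    combined["schema_name"] = combined["schema"]
```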
…errupt. (dbt-labs#9042) * During node execution, also treat SystemExit as an interrupt. (dbt-labs#8994) IDE worker process raises SystemExit in multiple scenarios, including user abort of a command. (cherry picked from commit 931b2db) * Add test asserting GraphRunnableTasks attempt to cancel connections on SystemExit (dbt-labs#9101) * Add test asserting GraphRunnableTasks attempt to cancel connections on SystemExit * Add test asserting GraphRunnableTasks attempt to cancel connections on KeyboardInterrupt * Add test asserting GraphRunnableNode doesn't try to cancel connections on generic Exception --------- Co-authored-by: Ben Mosher <[email protected]> Co-authored-by: Quigley Malcolm <[email protected]>
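A simplified sketch of the behavior those tests pin down, with stand-in class and method names rather than dbt's actual `GraphRunnableTask` internals:

```python
class GraphRunnableTask:
    """Stand-in for dbt's task class; only the interrupt handling is shown."""

    def execute_nodes(self) -> None:
        try:
            self._run_queue()
        except (KeyboardInterrupt, SystemExit):
            # SystemExit is raised by IDE worker processes (e.g. on user
            # abort of a command), so treat it like Ctrl-C: cancel open
            # connections before re-raising.
            self._cancel_connections()
            raise
        # A generic Exception deliberately does not trigger cancellation.

    def _run_queue(self) -> None: ...
    def _cancel_connections(self) -> None: ...
```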
…st during release process
* Fixups for deps lock file (dbt-labs#9147) * Update git revision with commit SHA * Use PackageRenderer for lock file * add unit tests for git and tarball packages * deepcopy unrendered_packages_data before iteration, fix remaining tests * Add functional tests * Add changelog entries * Assert one more --------- Co-authored-by: Michelle Ark <[email protected]> * Restore warning on unpinned git packages --------- Co-authored-by: Michelle Ark <[email protected]>
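The deepcopy-before-iteration fix follows a common pattern; a minimal sketch, with assumed function and variable names rather than the exact ones in dbt's deps code:

```python
from copy import deepcopy


def render_lock_packages(unrendered_packages_data: dict, render) -> list:
    # Rendering mutates nested entries; iterating over the live dict while
    # it changes underneath corrupts later iterations, so snapshot the
    # unrendered data first with deepcopy.
    packages = deepcopy(unrendered_packages_data)
    return [render(pkg) for pkg in packages.get("packages", [])]
```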
… `dbt docs generate` (dbt-labs#9163) Co-authored-by: Kshitij Aranke <[email protected]>
…st during release process
…cts for manifest, catalog, sources, and run-results (dbt-labs#9229)
* Drop `all_refs=True` from the jsonschema-ization build process. Passing `all_refs=True` makes everything a ref, even the top-level schema. In jsonschema land, this essentially makes the produced artifact not a full schema, but a fragment to be included in a schema. Thus when `$id` is passed in, jsonschema tools blow up, because `$id` is for identifying a schema, which we explicitly weren't creating. The alternative was to drop the inclusion of `$id`. However, we're intending to create a schema, and having an `$id` is recommended best practice. Additionally, since we were intending to create a schema, not a fragment, it seemed best to create the full schema.
* Explicitly produce jsonschemas using the DRAFT_2020_12 dialect. Previously we were implicitly using the `DRAFT_2020_12` dialect through mashumaro. It felt wise to begin specifying this explicitly. First, it is the closest of mashumaro's available dialects to what we produced pre-1.7. Secondly, if mashumaro changes its default for whatever reason (say a new dialect is added, and mashumaro moves to that), we don't want to inherit that automatically.
* Begin including the schema dialect specification in the produced jsonschema. Jsonschema's documentation states:
  > It's not always easy to tell which draft a JSON Schema is using. You can use the $schema keyword to declare which version of the JSON Schema specification the schema is written to. It's generally good practice to include it, though it is not required.
  and
  > For brevity, the $schema keyword isn't included in most of the examples in this book, but it should always be used in the real world.
  Basically, to know how to parse a schema, it's important to state which schema dialect the schema is written to. The change in this commit ensures we include that information.
* Add change documentation for the jsonschema production fix
* Regenerate dbt jsonschemas with the fixed mashumaro jsonschema production process. Specifically, we regenerated catalog v1, manifest v11, run-results v5, and sources v3 using the command `scripts/collect-artifact-schema.py --path schemas`
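A minimal sketch of the resulting generation call, assuming mashumaro's `build_json_schema` API and a stand-in artifact class:

```python
from dataclasses import dataclass

from mashumaro.jsonschema import build_json_schema
from mashumaro.jsonschema.dialects import DRAFT_2020_12


@dataclass
class RunResultsArtifact:
    """Stand-in for the real artifact dataclass."""
    elapsed_time: float


schema = build_json_schema(
    RunResultsArtifact,
    dialect=DRAFT_2020_12,  # explicit, rather than relying on the default
    with_dialect_uri=True,  # emits the "$schema" keyword in the output
    # all_refs=True is no longer passed, so the result is a full schema
    # (not a bundle of refs) and an "$id" can safely be attached:
).to_dict()
schema["$id"] = "https://schemas.getdbt.com/dbt/run-results/v5.json"
```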
…t-labs#9173 (dbt-labs#9255)
* Move minimum DSI version to 0.4.2. We're backporting the "conversion metrics" feature to 1.7. Conversion metrics don't exist in DSI < 0.4.2, which is problematic if we allow those versions. This ensures that anyone on a version of 1.7 that supports conversion metrics also has the requisite version of DSI.
* Added ConversionTypeParams classes
* Updated parser for ConversionTypeParams
* Added step to populate input_measure for conversion metrics
* Added tests
* Added changelog
* Regenerate v11 manifest jsonschema to include the conversion metrics definition
* Regenerate v11 manifest test artifact for testing version compatibility
--------- Co-authored-by: Will Deng <[email protected]>
…t-labs#9541) (cherry picked from commit 2b6e2e1) Co-authored-by: Gerda Shank <[email protected]>
…g event (dbt-labs#9568) * Add node_info to GenericExceptionOnRun, InternalErrorOnRun & SQLRunnerException * Changie * Formatting
…st during release process
…k run (dbt-labs#9645) * Use log_contextvars context manager in run_hooks * Changie * Add test * Fix unset_contextvars function
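The pattern here is a contextvars-backed context manager; a hedged, self-contained sketch (dbt's real `log_contextvars` lives in its events module and differs in detail):

```python
from contextlib import contextmanager
from contextvars import ContextVar
from typing import Any, Dict, Iterator, Optional

NODE_INFO: ContextVar[Optional[Dict[str, Any]]] = ContextVar(
    "node_info", default=None
)


@contextmanager
def log_contextvars(node_info: Dict[str, Any]) -> Iterator[None]:
    token = NODE_INFO.set(node_info)
    try:
        yield  # run the hook here; log events read NODE_INFO for context
    finally:
        # Resetting the token unsets the contextvar even if the hook
        # raises, so stale node_info doesn't leak into later log lines.
        NODE_INFO.reset(token)
```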
…2195 (dbt-labs#9638) (dbt-labs#9655) CVE-2024-22195 identified an issue in Jinja2 versions <= 3.1.2, so we've changed our dependency requirement specification to 3.1.3 or greater (but less than 4). Note: previously we were using the `~=` version specifier. However, due to some issues with `~=`, we've moved to using `>=` in combination with `<`. This gives us the same range that `~=` gave us, but avoids a pip resolution issue when multiple packages in an environment use `~=` for the same dependency. (cherry picked from commit 7ea4670) Co-authored-by: Quigley Malcolm <[email protected]>
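For illustration, the specifier change looks roughly like this in a `setup.py` dependency list (an illustrative excerpt, not dbt-core's verbatim file):

```python
install_requires = [
    # was: "Jinja2~=3.1.2" -- `~=` can trip pip's resolver when several
    # packages in one environment use it for the same dependency, so an
    # explicit floor and ceiling are used instead.
    "Jinja2>=3.1.3,<4",  # floor picks up the CVE-2024-22195 fix
]
```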
* Restrict protobuf to version 4. * Restrict protobuf to major version 4. --------- Co-authored-by: Peter Allen Webb <[email protected]>
…st during release process
* Restrict protobuf to 4.* versions (dbt-labs#9630) Protobuf v5 has breaking changes. Here we are limiting the protobuf dependency to one major version, 4, so that we don't have to patch over handling 2 different major versions of protobuf. (cherry picked from commit e4fe839) --------- Co-authored-by: Quigley Malcolm <[email protected]> Co-authored-by: Quigley Malcolm <[email protected]>
…labs#9732)
* Stop trying to parse deleted schema files (dbt-labs#9722)
* Add a test around deleting a YAML file containing semantic models and metrics. It was raised in dbt-labs#8860 that an error occurs during partial parsing when files containing metrics/semantic models are deleted. In further testing, it looks like this error specifically happens when a file containing both semantic models and metrics is deleted; if the deleted file contains just semantic models or just metrics, there seems to be no issue. The next commit should contain the fix.
* Skip deleted schema files when scheduling files during partial parsing. Way back (in 7563b99) deleted schema files started being separated out from deleted non-schema files. However, ever since, when it came to scheduling files for reparsing, we've only done so for deleted non-schema files. We even missed this when we refactored the scheduling code in b37e5b5. This change updates `_schedule_for_parsing`, which is used by `schedule_nodes_for_parsing`, to begin skipping deleted schema files in addition to deleted non-schema files.
* Update `add_to_pp_files` to ignore `deleted_schema_files`. As noted in the previous commit, we started separating deleted schema files from deleted non-schema files a long time ago; however, this whole time we've been adding `deleted_schema_files` to the list of files to be parsed. This change corrects that.
* Add changie doc for the partial parsing KeyError fix
(cherry picked from commit deedeeb)
* Empty commit to trigger GitHub actions
--------- Co-authored-by: Quigley Malcolm <[email protected]> Co-authored-by: Quigley Malcolm <[email protected]>
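A simplified sketch of that scheduling fix; `deleted_schema_files` and `_schedule_for_parsing` come from the commit message, while the class and the other attribute names are assumptions:

```python
class PartialParsingSketch:
    """Stand-in for dbt's partial parser; only the skip logic is shown."""

    def __init__(self) -> None:
        self.deleted_non_schema_files: set = set()  # name assumed
        self.deleted_schema_files: set = set()
        self.files_to_parse: list = []

    def _schedule_for_parsing(self, file_id: str) -> None:
        # The fix: a deleted *schema* file must be skipped just like a
        # deleted non-schema file, or rescheduling it raises a KeyError.
        if (
            file_id in self.deleted_non_schema_files
            or file_id in self.deleted_schema_files
        ):
            return
        self.files_to_parse.append(file_id)
```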
* Add tests to check that saved queries show in `dbt list`
* Update `list` task to support saved queries. This is built off of @jtcohen6's work in d6e7cda on jerco/fix-9532. I didn't directly cherry-pick because there was more work to do, as well as merge conflicts. That is to say, @jtcohen6 should be credited with some of the work.
* Update the error message shown when iterating over nodes during list command errors. This was originally suggested by @jtcohen6 in d6e7cda of jerco/fix-9532; this commit just makes sure the change gets included, because I didn't cherry-pick that commit into this work.
* Add changie log for saved query list support
…est during release process
… (dbt-labs#9778) * Handle exceptions during node execution more elegantly. * Add changelog entry. * Fix import * Add task documentation. * Change event type for noting thread exceptions. Co-authored-by: Peter Webb <[email protected]>
* Add factory wrappers to renamed_relations * add test and postgres semantics --------- Co-authored-by: Mila Page <[email protected]> Co-authored-by: Mike Alfare <[email protected]>
Thank you for your pull request! We could not find a changelog entry for this change. For details on how to document a change, see the contributing guide.
resolves #
Problem
Solution
Checklist