Releases: dagster-io/dagster
1.8.3 (core) / 0.24.3 (libraries)
New
- When different assets within a code location have different `PartitionsDefinition`s, there will no longer be an implicit asset job `__ASSET_JOB_...` for each `PartitionsDefinition`; there will just be one with all the assets. This reduces the time it takes to load code locations containing assets with many different `PartitionsDefinition`s.
1.8.2 (core) / 0.24.2 (libraries)
New
- [ui] Improved performance of the Automation history view for partitioned assets
- [ui] You can now delete dynamic partitions for an asset from the UI
- [dagster-sdf] Added support for quoted table identifiers (Thanks, @akbog!)
- [dagster-openai] Added additional configuration options for the `OpenAIResource` (Thanks, @chasleslr!)
- [dagster-fivetran] Fivetran assets now have relation identifier metadata.
Bugfixes
- [ui] Fixed a collection of broken links pointing to renamed Declarative Automation pages.
- [dagster-dbt] Fixed issue preventing usage of `MultiPartitionMapping` with `@dbt_assets` (Thanks, @arookieds!)
- [dagster-azure] Fixed issue that would cause an error when configuring an `AzureBlobComputeLogManager` without a `secret_key` (Thanks, @ion-elgreco and @HynekBlaha!)
Documentation
- Added API docs for `AutomationCondition` and associated static constructors.
- [dagster-deltalake] Corrected some typos in the integration reference (Thanks, @dargmuesli!)
- [dagster-aws] Added API docs for the new `PipesCloudWatchMessageReader`.
1.8.1 (core) / 0.24.1 (libraries)
New
- If the sensor daemon fails while submitting runs, it will now checkpoint its progress and attempt to submit the remaining runs on the next evaluation.
- `build_op_context` and `build_asset_context` now accept a `run_tags` argument.
- Nested partially configured resources can now be used outside of `Definitions`.
- [ui] Replaced GraphQL Explorer with GraphiQL.
- [ui] The run timeline can now be grouped by job or by automation.
- [ui] For users in the experimental navigation flag, schedules and sensors are now in a single merged automations table.
- [ui] Logs can now be filtered by metadata keys and values.
- [ui] Logs for `RUN_CANCELED` events now display relevant error messages.
- [dagster-aws] The new `PipesCloudWatchMessageReader` can consume logs from CloudWatch as pipes messages.
- [dagster-aws] Glue jobs launched via pipes can be automatically canceled if Dagster receives a termination signal.
- [dagster-azure] `AzureBlobComputeLogManager` now supports service principals, thanks @ion-elgreco!
- [dagster-databricks] `dagster-databricks` now supports `databricks-sdk<=0.17.0`.
- [dagster-datahub] `dagster-datahub` now allows pydantic versions below 3.0.0, thanks @kevin-longe-unmind!
- [dagster-dbt] The `DagsterDbtTranslator` class now supports modifying the `AutomationCondition` for dbt models by overriding `get_automation_condition` (see the sketch after this list).
- [dagster-pandera] `dagster-pandera` now supports `polars`.
- [dagster-sdf] Table and column tests can now be used as asset checks.
- [dagster-embedded-elt] Column metadata and lineage can be fetched on Sling assets by chaining the new `replicate(...).fetch_column_metadata()` method.
- [dagster-embedded-elt] dlt resource docstrings will now be used to populate asset descriptions by default.
- [dagster-embedded-elt] dlt assets now generate column metadata.
- [dagster-embedded-elt] dlt transformers now refer to the base resource as an upstream asset.
- [dagster-openai] `OpenAIResource` now supports `organization`, `project`, and `base_url` for configuring the OpenAI client, thanks @chasleslr!
- [dagster-pandas][dagster-pandera][dagster-wandb] These libraries no longer pin `numpy<2`, thanks @judahrand!
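A rough sketch of the translator override described above; the manifest path and the chosen condition are illustrative, and the override signature should be verified against the `DagsterDbtTranslator` API docs:

```python
from dagster import AutomationCondition
from dagster_dbt import DagsterDbtTranslator, DbtCliResource, dbt_assets


class CustomDbtTranslator(DagsterDbtTranslator):
    # Attach an AutomationCondition to every dbt model handled by this translator.
    def get_automation_condition(self, dbt_resource_props):
        return AutomationCondition.eager()


@dbt_assets(
    manifest="target/manifest.json",  # placeholder path to your dbt manifest
    dagster_dbt_translator=CustomDbtTranslator(),
)
def my_dbt_assets(context, dbt: DbtCliResource):
    yield from dbt.cli(["build"], context=context).stream()
```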
Bugfixes
- Fixed a bug where job backfills using backfill policies that materialize multiple partitions in a single run would be launched multiple times.
- Fixed an issue where runs would sometimes move into a FAILURE state rather than a CANCELED state if an error occurred after a run termination request was started.
- [ui] Fixed a bug where an incorrect dialog was shown when canceling a backfill.
- [ui] Fixed the asset page header breadcrumbs for assets with very long key path elements.
- [ui] Fixed the run timeline time markers for users in timezones that have off-hour offsets.
- [ui] Fixed bar chart tooltips to use correct timezone for timestamp display.
- [ui] Fixed an issue introduced in the 1.8.0 release where some jobs created from graph-backed assets were missing the “View as Asset Graph” toggle in the Dagster UI.
Breaking Changes
- [dagster-airbyte] `AirbyteCloudResource` now supports `client_id` and `client_secret` for authentication (sketched below); the `api_key` approach is no longer supported. This is motivated by the deprecation of portal.airbyte.com on August 15, 2024.
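A minimal sketch of the new credential configuration; the environment variable names below are placeholders for your own secrets:

```python
from dagster import EnvVar
from dagster_airbyte import AirbyteCloudResource

# client_id/client_secret replace the removed api_key argument; the env var
# names here are hypothetical placeholders.
airbyte = AirbyteCloudResource(
    client_id=EnvVar("AIRBYTE_CLIENT_ID"),
    client_secret=EnvVar("AIRBYTE_CLIENT_SECRET"),
)
```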
Deprecations
- [dagster-databricks] Removed deprecated authentication clients provided by `databricks-cli` and `databricks_api`.
- [dagster-embedded-elt] Removed deprecated Sling resources `SlingSourceConnection`, `SlingTargetConnection`.
- [dagster-embedded-elt] Removed deprecated Sling methods `build_sling_assets` and `sync`.
Documentation
- The Integrating Snowflake & dbt with Dagster+ Insights guide no longer erroneously references BigQuery, thanks @dnxie12!
1.8.0 (core) / 0.24.0 (libraries)
Major changes since 1.7.0 (core) / 0.23.0 (libraries)
Core definition APIs
- You can now pass `AssetSpec` objects to the `assets` argument of `Definitions`, to let Dagster know about assets without associated materialization functions. This replaces the experimental `external_assets_from_specs` API, as well as `SourceAsset`s, which are now deprecated. Unlike `SourceAsset`s, `AssetSpec`s can be used for non-materializable assets with dependencies on Dagster assets, such as BI dashboards that live downstream of warehouse tables that are orchestrated by Dagster. [docs]
- [Experimental] You can now merge `Definitions` objects together into a single larger `Definitions` object, using the new `Definitions.merge` API (doc). This makes it easier to structure large Dagster projects, as you can construct a `Definitions` object for each sub-domain and then merge them together at the top level.
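A brief sketch of both APIs; the asset and sub-domain names here are illustrative:

```python
from dagster import AssetSpec, Definitions, asset


@asset
def warehouse_table(): ...


# An AssetSpec for a downstream BI dashboard that Dagster does not materialize.
dashboard = AssetSpec("bi_dashboard", deps=[warehouse_table])

warehouse_defs = Definitions(assets=[warehouse_table, dashboard])
reporting_defs = Definitions(assets=[])  # placeholder for another sub-domain

# Combine per-domain Definitions objects at the top level.
defs = Definitions.merge(warehouse_defs, reporting_defs)
```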
Partitions and backfills
- `BackfillPolicy`s assigned to assets are now respected for backfills launched from jobs that target those assets.
- You can now wipe materializations for individual asset partitions.
Automation
- [Experimental] You can now add `AutomationCondition`s to your assets to have them automatically executed in response to specific conditions (docs). These serve as a drop-in replacement and improvement over the `AutoMaterializePolicy` system, which is being marked as deprecated. (See the sketch after this list.)
- [Experimental] Sensors and schedules can now directly target assets, via the new `target` parameter, instead of needing to construct a job.
- [Experimental] The Timeline page can now be grouped by job or automation. When grouped by automation, all runs launched by a sensor responsible for evaluating automation conditions will get bucketed to that sensor in the timeline instead of the "Ad-hoc materializations" row. Enable this by opting in to the `Experimental navigation` feature flag in user settings.
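A minimal sketch of the two experimental APIs above; the asset name and cron string are placeholders:

```python
from dagster import (
    AssetSelection,
    AutomationCondition,
    Definitions,
    ScheduleDefinition,
    asset,
)


# Materialize automatically when the condition is met (e.g. upstream updates).
@asset(automation_condition=AutomationCondition.eager())
def orders(): ...


# Schedules (and sensors) can now target assets directly instead of a job.
orders_schedule = ScheduleDefinition(
    name="orders_schedule",
    cron_schedule="0 * * * *",
    target=AssetSelection.assets(orders),
)

defs = Definitions(assets=[orders], schedules=[orders_schedule])
```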
Catalog
- The Asset Details page now prominently displays row count and relation identifier (table name, schema, database), when corresponding asset metadata values are provided. For more information, see the metadata and tags docs.
- Introduced code reference metadata which can be used to open local files in your editor, or files in source control in your browser. Dagster can automatically attach code references to your assets’ Python source. For more information, see the docs.
Data quality and reliability
- [Experimental] Metadata bounds checks – The new `build_metadata_bounds_checks` API [doc] enables easily defining asset checks that fail if a numeric asset metadata value falls outside given bounds. (See the sketch after this list.)
- [Experimental] Freshness checks from dbt config – Freshness checks can now be set on dbt assets, straight from dbt. Check out the API docs for `build_freshness_checks_from_dbt_assets` for more.
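A rough sketch of the metadata bounds check API, assuming an asset that reports a numeric `dagster/row_count` metadata value; the keyword argument names follow my reading of the API docs and should be verified:

```python
from dagster import (
    Definitions,
    MaterializeResult,
    asset,
    build_metadata_bounds_checks,
)


@asset
def orders():
    # ... produce the table, then report a numeric row count as metadata
    return MaterializeResult(metadata={"dagster/row_count": 42})


# Fails the check when the reported row count falls outside the bounds.
# Keyword argument names here are assumptions; check the API docs.
row_count_checks = build_metadata_bounds_checks(
    assets=[orders],
    metadata_key="dagster/row_count",
    min_value=1,
    max_value=1_000_000,
)

defs = Definitions(assets=[orders], asset_checks=row_count_checks)
```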
Integrations
- Dagster Pipes (`PipesSubprocessClient`) and its integrations with Lambda (`PipesLambdaClient`), Kubernetes (`PipesK8sClient`), and Databricks (`PipesDatabricksClient`) are no longer experimental.
- The new `DbtProject` class (docs) makes it simpler to define dbt assets that can be constructed in both development and production. `DbtProject.prepare_if_dev()` eliminates boilerplate for local development, and the `dagster-dbt project prepare-and-package` CLI can help pull deps and generate the manifest at build time. (See the sketch after this list.)
- [Experimental] The `dagster-looker` package can be used to define a set of Dagster assets from a Looker project that is defined in LookML and is backed by git. See the GitHub discussion for more details.
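A minimal sketch of the `DbtProject` workflow; the project directory is a placeholder:

```python
from pathlib import Path

from dagster import AssetExecutionContext
from dagster_dbt import DbtCliResource, DbtProject, dbt_assets

# Point at your dbt project directory (placeholder path).
my_project = DbtProject(project_dir=Path(__file__).parent / "my_dbt_project")
# In development, (re)generates the manifest at code-load time; in production
# the manifest produced by `dagster-dbt project prepare-and-package` is used.
my_project.prepare_if_dev()


@dbt_assets(manifest=my_project.manifest_path)
def my_dbt_assets(context: AssetExecutionContext, dbt: DbtCliResource):
    yield from dbt.cli(["build"], context=context).stream()
```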
Dagster Plus
- Catalog views — In Dagster+, selections into the catalog can now be saved and shared across an organization as catalog views. Catalog views have a name and description, and can be applied to scope the catalog, asset health, and global asset lineage pages against the view’s saved selection.
- Code location history — Dagster+ now stores a history of code location deploys, including the ability to revert to a previously deployed configuration.
Changes since 1.7.16 (core) / 0.23.16 (libraries)
New
- The target of both schedules and sensors can now be set using an experimental `target` parameter that accepts an `AssetSelection` or list of assets. Any assets passed this way will also be included automatically in the `assets` list of the containing `Definitions` object.
- `ScheduleDefinition` and `SensorDefinition` now have a `target` argument that can accept an `AssetSelection`.
- You can now wipe materializations for individual asset partitions.
- `AssetSpec` now has a `partitions_def` attribute. All the `AssetSpec`s provided to a `@multi_asset` must have the same `partitions_def`.
- The `assets` argument on `materialize` now accepts `AssetSpec`s.
- The `assets` argument on `Definitions` now accepts `AssetSpec`s.
- The new `merge` method on `Definitions` enables combining multiple `Definitions` objects into a single larger `Definitions` object with their combined contents.
- Runs requested through the Declarative Automation system now have a `dagster/from_automation_condition: true` tag applied to them.
- Changed the run tags query to be more performant. Thanks @egordm!
- Dagster Pipes and its integrations with Lambda, Kubernetes, and Databricks are no longer experimental.
- The `Definitions` constructor will no longer raise errors when the provided definitions aren’t mutually resolve-able – e.g. when there are conflicting definitions with the same name, unsatisfied resource dependencies, etc. These errors will still be raised at code location load time. The new `Definitions.validate_loadable` static method also allows performing the validation steps that used to occur in the constructor. (See the sketch after this list.)
- `AssetsDefinition` objects provided to a `Definitions` object will now be deduped by reference equality. That is, the following will now work:

  ```python
  from dagster import asset, Definitions

  @asset
  def my_asset(): ...

  defs = Definitions(assets=[my_asset, my_asset])  # Deduped into just one AssetsDefinition.
  ```

- [dagster-embedded-elt] Adds translator options for the dlt integration to override auto-materialize policy, group name, owners, and tags.
- [dagster-sdf] Introducing the dagster-sdf integration for data modeling and transformations powered by sdf.
- [dagster-dbt] Added a new `with_insights()` method which can be used to more easily attach Dagster+ Insights metrics to dbt executions: `dbt.cli(...).stream().with_insights()`.
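A small sketch of the new validation hook referenced above, assuming a `Definitions` object like the deduping example:

```python
from dagster import Definitions, asset


@asset
def my_asset(): ...


defs = Definitions(assets=[my_asset, my_asset])

# Construction no longer raises for unresolvable definitions; run the same
# checks that code location load performs, on demand.
Definitions.validate_loadable(defs)
```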
Bugfixes
- Dagster now raises an error when an op yields an output corresponding to an unselected asset.
- Fixed a bug that caused downstream ops within a graph-backed asset to be skipped when they were downstream of assets within the graph-backed asset that aren’t part of the selection for the current run.
- Fixed a bug where code references did not work properly for self-hosted GitLab instances. Thanks @cooperellidge!
- [ui] When engine events with errors appear in run logs, their metadata entries are now rendered correctly.
- [ui] The asset catalog greeting now uses your first name from your identity provider.
- [ui] The create alert modal now links to the alerting documentation, and links to the documentation have been updated.
- [ui] Fixed an issue introduced in the 1.7.13 release where some asset jobs were only displaying their ops in the Dagster UI instead of their assets.
- Fixed an issue where terminating a run while it was using the Snowflake python connector would sometimes move it into a FAILURE state instead of a CANCELED state.
- Fixed an issue where backfills would sometimes move into a FAILURE state instead of a CANCELED state when the backfill was canceled.
Breaking Changes
- The experimental and deprecated `build_asset_with_blocking_check` has been removed. Use the `blocking` argument on `@asset_check` instead. (See the sketch after this list.)
- Users with `mypy` and `pydantic` 1 may now experience a “metaclass conflict” error when using `Config`. Previously this would occur when using pydantic 2.
- `AutoMaterializeSensorDefinition` has been renamed `AutomationConditionSensorDefinition`.
- The deprecated methods of the `ComputeLogManager` have been removed. Custom `ComputeLogManager` implementations must also implement the `CapturedLogManager` interface. This will not affect any of the core implementations available in the core `dagster` package or the library packages.
- By default, an `AutomationConditionSensorDefinition` with the name `"default_automation_condition_sensor"` will be constructed for each code location, and will handle evaluating and launching runs for all `AutomationCondition`s and `AutoMaterializePolicy`s within that code location. You can restore the previous behavior by setting the following in your dagster.yaml file:

  ```yaml
  auto_materialize:
    use_sensors: False
  ```

- [dagster-dbt] Support for `dbt-core==1.6.*` has been removed because the version is now end-of-life.
- [dagster-dbt] The following deprecated APIs have been removed:
  - `KeyPrefixDagsterDbtTranslator` has been removed. To modify the asset keys for a set of dbt assets, implement `DagsterDbtTranslator.get_asset_key()` instead.
  - Support for setting ...
1.7.16 (core) / 0.23.16 (libraries)
Experimental
- [pipes] `PipesGlueClient`, an AWS Glue pipes client, has been added to `dagster_aws`.
1.7.15 (core) / 0.23.15 (libraries)
New
- [dagster-celery-k8s] Added a `per_step_k8s_config` configuration option to the `celery_k8s_job_executor`, allowing the k8s configuration of individual steps to be configured at run launch time. Thanks @alekseik1!
- [dagster-dbt] Deprecated the `log_column_level_metadata` macro in favor of the new `with_column_metadata` API.
- [dagster-airbyte] Deprecated `load_assets_from_airbyte_project` as the Octavia CLI has been deprecated.
Bugfixes
- [ui] Fix global search to find matches on very long strings.
- Fixed an issue introduced in the 1.7.14 release where multi-asset sensors would sometimes raise an error about fetching too many event records.
- Fixed an issue introduced in 1.7.13 where type-checkers interpreted the return type of `RunRequest(...)` as `None`.
- [dagster-aws] Fixed an issue where the `EcsRunLauncher` would sometimes fail to launch runs when the `include_sidecars` option was set to `True`.
- [dagster-dbt] Fixed an issue where errors would not propagate through deferred metadata fetches.
Dagster Plus
- On June 20, 2024, AWS changed the AWS CloudMap CreateService API to allow resource-level permissions. The Dagster+ ECS Agent uses this API to launch code locations. We’ve updated the Dagster+ ECS Agent CloudFormation template to accommodate this change for new users. Existing users have until October 14, 2024 to add the new permissions and should have already received similar communication directly from AWS.
- Fixed a bug with BigQuery cost tracking in Dagster+ insights, where some runs would fail if there were null values for either `total_bytes_billed` or `total_slot_ms` in the BigQuery `INFORMATION_SCHEMA.JOBS` table.
- Fixed an issue where code locations that failed to load with extremely large error messages or stack traces would sometimes cause errors with agent heartbeats until the code location was redeployed.
1.7.14 (core) / 0.23.14 (libraries)
New
- [blueprints] When specifying an asset key in `ShellCommandBlueprint`, you can now use slashes as a delimiter to generate an `AssetKey` with multiple path components.
- [community-contribution][mlflow] The mlflow resource now has an `mlflow_run_id` attribute (Thanks Joe Percivall!)
- [community-contribution][mlflow] The mlflow resource will now retry when it fails to fetch the mlflow run ID (Thanks Joe Percivall!)
Bugfixes
- Fixed an issue introduced in the 1.7.13 release where Dagster would fail to load certain definitions when using Python 3.12.4.
- Fixed an issue where in-progress steps would continue running after an unexpected exception caused a run to fail.
- [dagster-dbt] Fixed an issue where column lineage was unable to be built in self-referential incremental models.
- Fixed an issue where `dagster dev` was logging unexpectedly without the `grpcio<1.65.0` pin.
- Fixed an issue where a `ContextVar was created in a different context` error was raised when executing an async asset.
- [community-contribution] `multi_asset` type-checker fix from @aksestok, thanks!
- [community-contribution][ui] Fix to use relative links for manifest/favicon files, thanks @aebrahim!
Documentation
- [community-contribution] Fixed helm repo CLI command typo, thanks @fxd24!
Dagster Plus
- [ui] The deployment settings yaml editor is now on a page with its own URL, instead of within a dialog.
1.7.13 (core) / 0.23.13 (libraries)
New
- The `InputContext` passed to an `IOManager`’s `load_input` function when invoking the `output_value` or `output_for_node` methods on `JobExecutionResult` now has the name `"dummy_input_name"` instead of `None`.
- [dagster-ui] Asset materializations can now be reported from the dropdown menu in the asset list view.
- [dagster-dbt] `DbtProject` is adopted and no longer experimental. Using `DbtProject` helps achieve a setup where the dbt manifest file and dbt dependencies are available and up-to-date, during development and in production. Check out the API docs for more: https://docs.dagster.io/_apidocs/libraries/dagster-dbt#dagster_dbt.DbtProject.
- [dagster-dbt] The `--use-dbt-project` flag was introduced for the CLI command `dagster-dbt project scaffold`. Creating a Dagster project wrapping a dbt project using that flag will include a `DbtProject`.
- [dagster-ui] The Dagster UI now loads events in batches of 1000 in the run log viewer, instead of batches of 10000. This value can be adjusted by setting the `DAGSTER_UI_EVENT_LOAD_CHUNK_SIZE` environment variable on the Dagster webserver.
- Asset backfills will now retry if there is an unexpected exception raised in the middle of the backfill. Previously, they would only retry if there was a problem connecting to the code server while launching runs in the backfill.
- Added the ability to monitor jobs which have failed to start in time with the `RunFailureReason.START_TIMEOUT` run monitoring failure reason. Thanks @jobicarter!
- [experimental] Introduced the ability to attach code references to your assets, which allow you to view source code for an asset in your editor or in git source control. For more information, see the code references docs: https://docs.dagster.io/guides/dagster/code-references.
- [ui] Performance improvements to loading the asset overview tab.
- [ui] Performance improvements for rendering run Gantt charts with 1000s of ops/steps.
- [dagster-celery] Introduced a new Dagster Celery runner, a more lightweight way to run Dagster jobs without an executor. Thanks, @egordm!
Bugfixes
- Fixed a bug that caused tags added to `ObserveResult` objects to not be stored with the produced `AssetObservation` event.
- Fixed a bug which could cause `metadata` defined on `SourceAsset`s to be unavailable when accessed in an IOManager.
- For subselections of graph-backed multi-assets, there are some situations where we used to unnecessarily execute some of the non-selected assets. Now, we no longer execute them in those situations. There are also some situations where we would skip execution of some ops that might be needed. More information on the particulars is available here.
- Fixed the `@graph_asset` decorator overload missing an `owners` argument, thanks @askvinni!
- Fixed behavior of passing custom image config to the `K8sRunLauncher`, thanks @marchinho11!
- [dagster-dbt] Fixed an issue with emitting column lineage when using BigQuery.
- [dagster-k8s] Added additional retries to `execute_k8s_job` when there was a transient failure while loading logs from the launched job. Thanks @piotrmarczydlo!
- [dagster-fivetran] Fixed an issue where the Fivetran connector resource would sometimes hang if there was a networking issue while connecting to the Fivetran API.
- [dagster-aws] Fixed an issue where the EMR step launcher would sometimes fail due to multiple versions of the `dateutil` package being installed in the default EMR Python environment.
- [ui] The “Create date” column in the runs table now correctly shows the time at which a run was created instead of the time when it started to execute.
- [ui] Fixed dark mode colors in run partitions graphs.
- [auto-materialize] Fixed an issue which could cause errors in the `AutoMaterializeRule.skip_on_parent_missing` rule when a parent asset had its `PartitionsDefinition` changed.
- [declarative-automation] Fixed an issue which could cause errors when viewing the evaluation history of assets with `AutomationCondition`s.
- [declarative-automation] Previously, `AutomationCondition.newly_updated()` would trigger on any `ASSET_OBSERVATION` event. Now, it only triggers when the data version on that event changes.
Breaking Changes
- [dagster-dbt] The CLI command `dagster-dbt project prepare-for-deployment` has been replaced by `dagster-dbt project prepare-and-package`.
- [dagster-dbt] During development, `DbtProject` no longer prepares the dbt manifest file and dbt dependencies in its constructor during initialization. This process has been moved to `prepare_if_dev()`, which can be called on the `DbtProject` instance after initialization. Check out the API docs for more: https://docs.dagster.io/_apidocs/libraries/dagster-dbt#dagster_dbt.DbtProject.prepare_if_dev.
Deprecations
- Passing `GraphDefinition` as the `job` argument to schedules and sensors is deprecated. Derive a job from the `GraphDefinition` using `graph_def.to_job()` and pass this instead.
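A minimal sketch of the recommended pattern; the graph, op, and schedule names are placeholders:

```python
from dagster import ScheduleDefinition, graph, op


@op
def do_something(): ...


@graph
def my_graph():
    do_something()


# Derive a job from the graph and pass the job to the schedule,
# rather than passing the GraphDefinition directly.
my_job = my_graph.to_job()
my_schedule = ScheduleDefinition(job=my_job, cron_schedule="0 0 * * *")
```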
Documentation
- Added some additional copy, headings, and other formatting to the dbt quickstart.
- Added information about asset checks to the Testing assets guide.
- Updated `dagster-plus CLI` in the sidenav to correctly be `dagster-cloud CLI`.
- Thanks to Tim Nordenfur and Dimitar Vanguelov for fixing a few typos!
- Introduced guides to migrate Airflow pipelines to Dagster that leverage the TaskFlow API or are containerized and executed with an operator like the KubernetesPodOperator.
- Fixed instructions on setting secrets in Kubernetes Dagster deployments, thanks @abhinavDhulipala!
Dagster Plus
- A history of code location deploys can now be viewed on the Code Locations tab under the Deployment view. Previously deployed versions can now be restored from history.
- [ui] Various improvements have been made to the asset health dashboard, which is now no longer experimental.
- [ui] Fixed issues in per-event asset insights where bar charts incorrectly displayed events in reverse order, and with UTC timestamps.
- Fixed a recent regression where creating an alert that notifies asset owners that are teams raises an error.
1.7.12 (core) / 0.23.12 (libraries)
Bugfixes
- [ui] Fixed behavior issues with jobs and asset pages introduced in 1.7.11.
1.7.11 (core) / 0.23.11 (libraries)
New
- [ui] Improved performance for loading assets that are part of big asset graphs.
- [ui] Improved performance for loading job backfills that have thousands of partitions
- [ui] The code location page can now be filtered by status
- [agent] K8s and ECS agent main loop writes a sentinel file that can be used for liveness checks.
- [agent][experimental] ECS CloudFormation template with private IP addresses using NAT Gateways, security groups, IAM role separation, tighter permissions requirements, and improved documentation.
- Ephemeral asset jobs are now supported in run status sensors (thanks @the4thamigo-uk)!
Bugfixes
- In `AssetsDefinition` construction, enforce a single key per output name.
- Fixed a bug where freshness checks on assets with both observations and materializations would incorrectly miss a materialization if there’s no observation with `dagster/last_updated_timestamp`.
- Fixed a bug with anomaly detection freshness checks where a “not enough records” result would cause the sensor to crash loop.
- Fixed a bug that could cause errors in the Asset Daemon if an asset using the `AutoMaterializeRule.skip_on_not_all_parents_updated_since_cron()` rule gained a new dependency with a different `PartitionsDefinition`.
- [ui] Fixed an issue where filtering by partition on the Runs page wouldn’t work if fetching all of your partitions timed out.
- [dagster-dlt] Fixed bug with dlt integration in which partitioned assets would change the file name when using the filesystem destination.
- [ui] Fixed an issue where an erroring code location would cause multiple toast popups.
- Allow a string to be provided for the `source_key_prefix` arg of `load_assets_from_modules` (thanks @drjlin!).
- Added a missing debug-level log message when loading partitions with polars (thanks Daniel Gafni)!
- Set postgres timeout via statement, which improves storage-layer compatibility with Amazon RDS (thanks @james lewis)!
- In DBT integration, quote the table identifiers to handle cases where table names require quotes due to special characters. (thanks @alex launi)!
- remove deprecated param usage in dagster-wandb integration (thanks @chris histe)!
- Add missing QUEUED state to DatabricksRunLifeCycleState (thanks @gabor ratky)!
- Fixed a bug with dbt-cloud integration subsetting implementation (thanks @ivan tsarev)!
Breaking Changes
- [dagster-airflow] `load_assets_from_airflow_dag` no longer allows multiple tasks to materialize the same asset.
Documentation
- Added type-hinting to backfills example
- Added syntax highlighting to some examples (thanks @Niko)!
- Fixed broken link (thanks @federico caselli)!
Dagster Plus
- The `dagster-cloud ci init` CLI will now use the `--deployment` argument as the base deployment when creating a branch deployment. This base deployment will be used for Change Tracking.
- The BigQuery dbt insights wrapper `dbt_with_bigquery_insights` now respects CLI arguments for profile configuration and also selects location / dataset from the profile when available.
- [experimental feature] Fixes a recent regression where the UI errored upon attempting to create an insights metric alert.