0.8.0 "In The Zone"
Major Changes
Please see the `080_MIGRATION.md` migration guide for details on updating existing code to be
compatible with 0.8.0.
- **Workspace, host and user process separation, and repository definition** Dagit and other tools
  no longer load a single repository containing user definitions such as pipelines into the same
  process as the framework code. Instead, they load a "workspace" that can contain multiple
  repositories sourced from a variety of different external locations (e.g., Python modules and
  Python virtualenvs, with containers and source control repositories soon to come).

  The repositories in a workspace are loaded into their own "user" processes distinct from the
  "host" framework process. Dagit and other tools now communicate with user code over an IPC
  mechanism. This architectural change has a couple of advantages:

  - Dagit no longer needs to be restarted when there is an update to user code.
  - Users can use repositories to organize their pipelines, but still work on all of their
    repositories using a single running Dagit.
  - The Dagit process can now run in a separate Python environment from user code, so pipeline
    dependencies do not need to be installed into the Dagit environment.
  - Each repository can be sourced from a separate Python virtualenv, so teams can manage their
    dependencies (or even their own Python versions) separately.
  We have introduced a new file format, `workspace.yaml`, to support this new architecture. The
  workspace yaml encodes which repositories to load and where they are located, and supersedes the
  `repository.yaml` file and associated machinery.

  As a consequence, Dagster internals are now stricter about how pipelines are loaded. If you have
  written scripts or tests in which a pipeline is defined and then passed across a process boundary
  (e.g., using the `multiprocess_executor` or dagstermill), you may now need to wrap the pipeline
  in the `reconstructable` utility function for it to be reconstructed across the process boundary.

  In addition, rather than instantiating the `RepositoryDefinition` class directly, users should
  now prefer the `@repository` decorator. As part of this change, the `@scheduler` and
  `@repository_partitions` decorators have been removed, and their functionality has been subsumed
  under `@repository`. A short sketch of these APIs follows below.
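  For illustration, here is a minimal sketch of the new pattern (the solid, pipeline, and
  repository names are made up for this example):

  ```python
  from dagster import pipeline, reconstructable, repository, solid


  @solid
  def do_something(_):
      return 1


  @pipeline
  def my_pipeline():
      do_something()


  # Pipelines that cross a process boundary (e.g., with the multiprocess executor or
  # dagstermill) should be wrapped so that the other process can re-import them from
  # their module-level definition rather than receiving a pickled object.
  reconstructable_pipeline = reconstructable(my_pipeline)


  # Repositories are now declared with @repository, which returns a list of definitions
  # (pipelines, schedules, partition sets), rather than by instantiating
  # RepositoryDefinition directly.
  @repository
  def my_repository():
      return [my_pipeline]
  ```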
- **Dagit organization** The Dagit interface has changed substantially and is now oriented around
  pipelines. Within the context of each pipeline in an environment, the previous "Pipelines" and
  "Solids" tabs have been collapsed into the "Definition" tab; a new "Overview" tab provides
  summary information about the pipeline, its schedules, its assets, and recent runs; and the
  previous "Playground" tab has been moved within the context of an individual pipeline. Related
  runs (e.g., runs created by re-executing subsets of previous runs) are now grouped together in
  the Playground for easy reference. Dagit also now includes more advanced support for display of
  scheduled runs that may not have executed ("schedule ticks"), as well as longitudinal views over
  scheduled runs and asset-oriented views of historical pipeline runs.
- **Assets** Assets are named materializations that can be generated by your pipeline solids and
  that support specialized views in Dagit. For example, if we represent a database table with an
  asset key, we can now index all of the pipelines and pipeline runs that materialize that table
  and view them in a single place. To use the asset system, you must enable an asset-aware storage
  such as Postgres. A short sketch follows below.
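  As a rough illustration (a minimal sketch; the solid, table, and asset key names are made up for
  this example, and an asset-aware storage such as Postgres must be enabled for the views to
  populate):

  ```python
  from dagster import AssetKey, Materialization, Output, solid


  @solid
  def update_users_table(_):
      # ... write to the warehouse.users table here ...

      # Emitting a Materialization with an asset_key lets Dagit index this run (and any
      # other run that materializes the same key) under a single asset view.
      yield Materialization(
          label="users_table",
          asset_key=AssetKey(["warehouse", "users"]),
      )
      yield Output(None)
  ```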
- **Run launchers** The distinction between "starting" and "launching" a run has been effaced. All
  pipeline runs instigated through Dagit now make use of the `RunLauncher` configured on the
  Dagster instance, if one is configured. Additionally, run launchers can now support termination
  of previously launched runs. If you have written your own run launcher, you may want to update
  it to support termination. Note also that as of 0.7.9, the semantics of `RunLauncher.launch_run`
  have changed; this method now takes the `run_id` of an existing run and should no longer attempt
  to create the run in the instance.
- **Flexible re-execution** Pipeline re-execution from Dagit is now fully flexible. You may
  re-execute arbitrary subsets of a pipeline's execution steps, and the re-execution now appears
  in the interface as a child run of the original execution.
- **Support for historical runs** Snapshots of pipelines and other Dagster objects are now
  persisted along with pipeline runs, so that historical runs can be loaded for review with the
  correct execution plans even when pipeline code has changed. This prepares the system to be able
  to diff pipeline runs and other objects against each other.
- **Step launchers and expanded support for PySpark on EMR and Databricks** We've introduced a new
  `StepLauncher` abstraction that uses the resource system to allow individual execution steps to
  be run in separate processes (and thus on separate execution substrates). This has made extensive
  improvements to our PySpark support possible, including the option to execute individual PySpark
  steps on EMR using the `EmrPySparkStepLauncher` and on Databricks using the
  `DatabricksPySparkStepLauncher`. The `emr_pyspark` example demonstrates how to use a step
  launcher; a rough sketch of the pattern follows below.
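  Roughly (a minimal sketch, assuming the `emr_pyspark_step_launcher` from `dagster_aws.emr` and
  the `pyspark_resource` from `dagster_pyspark`; the solid, pipeline, and resource-key names are
  illustrative, and the `emr_pyspark` example shows the launcher config that is also required):

  ```python
  from dagster import ModeDefinition, pipeline, solid
  from dagster_aws.emr import emr_pyspark_step_launcher
  from dagster_pyspark import pyspark_resource

  emr_mode = ModeDefinition(
      name="emr",
      resource_defs={
          # The step launcher resource runs each step that requires it on an EMR cluster.
          "pyspark_step_launcher": emr_pyspark_step_launcher,
          "pyspark": pyspark_resource,
      },
  )


  @solid(required_resource_keys={"pyspark_step_launcher", "pyspark"})
  def count_rows(context):
      # Executed on EMR when the "emr" mode is selected; the launcher itself still needs
      # run config (cluster id, staging bucket, etc.) supplied at launch time.
      return context.resources.pyspark.spark_session.range(100).count()


  @pipeline(mode_defs=[emr_mode])
  def my_pyspark_pipeline():
      count_rows()
  ```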
- **Clearer names** What was previously known as the environment dictionary is now called the
  `run_config`, and the previous `environment_dict` argument to APIs such as `execute_pipeline` is
  now deprecated. We renamed this argument to focus attention on the configuration of the run
  being launched or executed, rather than on an ambiguous "environment". We've also renamed the
  `config` argument on definitions to `config_schema`, which should reduce ambiguity between the
  configuration schema and the value being passed in some particular case. We've also consolidated
  and improved documentation of the valid types for a config schema. A brief sketch of the new
  names follows below.
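  For example (a minimal sketch; the solid and config key names are made up for this example):

  ```python
  from dagster import execute_pipeline, pipeline, solid


  # `config_schema` replaces the old `config` argument on definitions.
  @solid(config_schema={"greeting": str})
  def say_hello(context):
      context.log.info(context.solid_config["greeting"])


  @pipeline
  def hello_pipeline():
      say_hello()


  # `run_config` replaces the old `environment_dict` argument.
  execute_pipeline(
      hello_pipeline,
      run_config={"solids": {"say_hello": {"config": {"greeting": "hello"}}}},
  )
  ```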
- **Lakehouse** We're pleased to introduce Lakehouse, an experimental, alternative programming
  model for data applications, built on top of Dagster core. Lakehouse allows developers to define
  data applications in terms of data assets, such as database tables or ML models, rather than in
  terms of the computations that produce those assets. The `simple_lakehouse` example gives a
  taste of what it's like to program in Lakehouse. We'd love feedback on whether this model is
  helpful!
- **Airflow ingest** We've expanded the tooling available to teams with existing Airflow
  installations that are interested in incrementally adopting Dagster. Previously, we provided
  only injection tools that allowed developers to write Dagster pipelines and then compile them
  into Airflow DAGs for execution. We've now added ingestion tools that allow teams to move to
  Dagster for execution without having to rewrite all of their legacy pipelines in Dagster. In
  this approach, Airflow DAGs are kept in their own container/environment, compiled into Dagster
  pipelines, and run via the Dagster orchestrator. See the `airflow_ingest` example for details,
  and the rough sketch below.
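  Roughly (a minimal sketch, assuming `make_dagster_pipeline_from_airflow_dag` as used in the
  `airflow_ingest` example; the Airflow DAG and module names here are hypothetical):

  ```python
  from dagster import repository
  from dagster_airflow.dagster_pipeline_factory import make_dagster_pipeline_from_airflow_dag

  from my_airflow_dags import legacy_dag  # hypothetical existing Airflow DAG


  # Compile the existing Airflow DAG into a Dagster pipeline so it can be run and
  # monitored by the Dagster orchestrator without a rewrite.
  ingested_pipeline = make_dagster_pipeline_from_airflow_dag(dag=legacy_dag)


  @repository
  def airflow_ingest_repo():
      return [ingested_pipeline]
  ```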
Breaking Changes
- dagster
  - The `@scheduler` and `@repository_partitions` decorators have been removed. Instances of
    `ScheduleDefinition` and `PartitionSetDefinition` belonging to a repository should be specified
    using the `@repository` decorator instead.
  - Support for the Dagster solid selection DSL, previously introduced in Dagit, is now uniform
    throughout the Python codebase, with the previous `solid_subset` arguments (`--solid-subset` in
    the CLI) being replaced by `solid_selection` (`--solid-selection`). In addition to the names of
    individual solids, this argument now supports selection queries like `*solid_name++` (i.e.,
    `solid_name`, all of its ancestors, its immediate descendants, and their immediate
    descendants), as in the sketch below.
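    For instance (a minimal sketch with hypothetical pipeline and solid names, and assuming the
    `solid_selection` argument to `execute_pipeline` mirrors the CLI flag):

    ```python
    from dagster import execute_pipeline

    from my_project.repo import my_pipeline  # hypothetical module and pipeline


    # Run `transform`, all of its ancestors, its immediate descendants, and their
    # immediate descendants -- the Python analogue of `--solid-selection "*transform++"`.
    execute_pipeline(my_pipeline, solid_selection=["*transform++"])
    ```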
  - The built-in Dagster type `Path` has been removed.
  - `PartitionSetDefinition` names, including those defined by a `PartitionScheduleDefinition`,
    must now be unique within a single repository.
  - Asset keys are now sanitized for non-alphanumeric characters. All characters besides
    alphanumerics and `_` are treated as path delimiters. Asset keys can also be specified using
    `AssetKey`, which accepts a list of strings as an explicit path. If you are running 0.7.10 or
    later and using assets, you may need to migrate your historical event log data for asset keys
    from previous runs to be attributed correctly. This `event_log` data migration can be invoked
    as follows:

    ```python
    from dagster.core.storage.event_log.migration import migrate_event_log_data
    from dagster import DagsterInstance

    migrate_event_log_data(instance=DagsterInstance.get())
    ```
  - The interface of the `Scheduler` base class has changed substantially. If you've written a
    custom scheduler, please get in touch!
  - The partitioned schedule decorators now generate `PartitionSetDefinition` names using the
    schedule name, suffixed with `_partitions`.
  - The `repository` property on `ScheduleExecutionContext` is no longer available. If you were
    using this property to pass to `Scheduler` instance methods, this interface has changed
    significantly. Please see the `Scheduler` class documentation for details.
  - The CLI option `--celery-base-priority` is no longer available for the command
    `dagster pipeline backfill`. Use the tags option to specify the Celery priority instead, e.g.
    `dagster pipeline backfill my_pipeline --tags '{ "dagster-celery/run_priority": 3 }'`.
  - The `execute_partition_set` API has been removed.
  - The deprecated `is_optional` parameter to `Field` and `OutputDefinition` has been removed. Use
    `is_required` instead.
  - The deprecated `runtime_type` property on `InputDefinition` and `OutputDefinition` has been
    removed. Use `dagster_type` instead.
  - The deprecated `has_runtime_type`, `runtime_type_named`, and `all_runtime_types` methods on
    `PipelineDefinition` have been removed. Use `has_dagster_type`, `dagster_type_named`, and
    `all_dagster_types` instead.
  - The deprecated `all_runtime_types` method on `SolidDefinition` and `CompositeSolidDefinition`
    has been removed. Use `all_dagster_types` instead.
  - The deprecated `metadata` argument to `SolidDefinition` and `@solid` has been removed. Use
    `tags` instead.
  - The graphviz-based DAG visualization in Dagster core has been removed. Please use Dagit!
- dagit
  - `dagit-cli` has been removed, and `dagit` is now the only console entrypoint.
- dagster-aws
  - The AWS CLI has been removed.
  - `dagster_aws.EmrRunJobFlowSolidDefinition` has been removed.
- dagster-bash
  - This package has been renamed to dagster-shell. The `bash_command_solid` and
    `bash_script_solid` solid factory functions have been renamed to `create_shell_command_solid`
    and `create_shell_script_solid`.
- dagster-celery
  - The CLI option `--celery-base-priority` is no longer available for the command
    `dagster pipeline backfill`. Use the tags option to specify the Celery priority instead, e.g.
    `dagster pipeline backfill my_pipeline --tags '{ "dagster-celery/run_priority": 3 }'`.
- dagster-dask
  - The config schema for the `dagster_dask.dask_executor` has changed. The previous config should
    now be nested under the key `local`.
- dagster-gcp
  - The `BigQueryClient` has been removed. Use `bigquery_resource` instead.
- dagster-dbt
  - The dagster-dbt package has been removed. This was inadequate as a reference integration, and
    will be replaced in 0.8.x.
- dagster-spark
  - `dagster_spark.SparkSolidDefinition` has been removed. Use `create_spark_solid` instead.
  - The `SparkRDD` Dagster type, which only worked with an in-memory engine, has been removed.
- dagster-twilio
  - The `TwilioClient` has been removed. Use `twilio_resource` instead.
New
- dagster
  - You may now set `asset_key` on any `Materialization` to use the new asset system. You will
    also need to configure an asset-aware storage, such as Postgres. The `longitudinal_pipeline`
    example demonstrates this system.
  - The partitioned schedule decorators now support an optional `end_time`.
  - Opt-in telemetry now reports the Python version being used.
- dagit
  - Dagit's GraphQL playground is now available at `/graphiql` as well as at `/graphql`.
- dagster-aws
  - The `dagster_aws.S3ComputeLogManager` may now be configured to override the S3 endpoint and
    associated SSL settings.
  - Config string and integer values in the S3 tooling may now be set using either environment
    variables or literals.
- dagster-azure
  - We've added the dagster-azure package, with support for Azure Data Lake Storage Gen2; you can
    use the `adls2_system_storage` or, for direct access, the `adls2_resource` resource. (Thanks
    @sd2k!)
- dagster-dask
  - Dask clusters are now supported by `dagster_dask.dask_executor`. For full support, you will
    need to install extras with `pip install dagster-dask[yarn, pbs, kube]`. (Thanks
    @DavidKatz-il!)
- dagster-databricks
  - We've added the dagster-databricks package, with support for running PySpark steps on
    Databricks clusters through the `databricks_pyspark_step_launcher`. (Thanks @sd2k!)
- dagster-gcp
  - Config string and integer values in the BigQuery, Dataproc, and GCS tooling may now be set
    using either environment variables or literals.
- dagster-k8s
  - Added the `CeleryK8sRunLauncher` to submit execution plan steps to Celery task queues for
    execution as k8s Jobs.
  - Added the ability to specify resource limits on a per-pipeline and per-step basis for k8s
    Jobs.
  - Many improvements and bug fixes to the dagster-k8s Helm chart.
- dagster-pandas
  - Config string and integer values in the dagster-pandas input and output schemas may now be set
    using either environment variables or literals.
- dagster-papertrail
  - Config string and integer values in the `papertrail_logger` may now be set using either
    environment variables or literals.
- dagster-pyspark
  - PySpark solids can now run on EMR, using the `emr_pyspark_step_launcher`, or on Databricks
    using the new dagster-databricks package. The `emr_pyspark` example demonstrates how to use a
    step launcher.
- dagster-snowflake
  - Config string and integer values in the `snowflake_resource` may now be set using either
    environment variables or literals.
- dagster-spark
  - `dagster_spark.create_spark_solid` now accepts a `required_resource_keys` argument, which
    enables setting up a step launcher for Spark solids, like the `emr_pyspark_step_launcher`.
Bugfix
- `dagster pipeline execute` now sets a non-zero exit code when pipeline execution fails.