diff --git a/docs/docs-beta/docs/dagster-plus/features/alerts/index.md b/docs/docs-beta/docs/dagster-plus/features/alerts/index.md
index 74761168028df..b869009c354d0 100644
--- a/docs/docs-beta/docs/dagster-plus/features/alerts/index.md
+++ b/docs/docs-beta/docs/dagster-plus/features/alerts/index.md
@@ -3,7 +3,7 @@ title: Setting up alerts on Dagster+
 sidebar_label: "Dagster+ Alerts"
 sidebar_position: 30
 ---
-[comment]: <> (This file is automatically generated by `dagster-plus/features/alerts/generate_alerts_doc.py`)
+[comment]: <> (This file is automatically generated by `dagster-plus/deployment/alerts/generate_alerts_doc.py`)

 Dagster+ allows you to configure alerts to automatically fire in response to a range of events. These alerts can be sent to a variety of services, depending on your organization's needs.
@@ -80,16 +80,16 @@ If desired, add **tags** in the format `{key}:{value}` to filter the runs that w
 ```
-
+
-
+
-
+
-
+
@@ -120,16 +120,16 @@ If desired, add **tags** in the format `{key}:{value}` to filter the runs that w
 ```
-
+
-
+
-
+
-
+
@@ -160,16 +160,16 @@ If desired, select a **target** from the dropdown menu to scope this alert to a
 ```
-
+
-
+
-
+
-
+
@@ -200,16 +200,16 @@ If desired, select a **target** from the dropdown menu to scope this alert to a
 ```
-
+
-
+
-
+
-
+
@@ -235,16 +235,16 @@ Alerts are sent only when a schedule/sensor transitions from **success** to **fa
 ```
-
+
-
+
-
+
-
+
@@ -268,16 +268,16 @@ You can set up alerts to fire when any code location fails to load due to an err
 ```
-
+
-
+
-
+
-
+
@@ -305,16 +305,16 @@ You can set up alerts to fire if your Hybrid agent hasn't sent a heartbeat in th
 ```
-
+
-
+
-
+
-
+
diff --git a/docs/docs-beta/docs/dagster-plus/features/authentication-and-access-control/rbac/audit-logs.md b/docs/docs-beta/docs/dagster-plus/features/authentication-and-access-control/rbac/audit-logs.md
index 500205ea54000..cfc6c1cd81337 100644
--- a/docs/docs-beta/docs/dagster-plus/features/authentication-and-access-control/rbac/audit-logs.md
+++ b/docs/docs-beta/docs/dagster-plus/features/authentication-and-access-control/rbac/audit-logs.md
@@ -65,4 +65,4 @@ The **Filter** button near the top left of the page can be used to filter the li
 Audit logs can be accessed programmatically over the Dagster+ GraphQL API. You can access a visual GraphiQL interface by navigating to `https://<organization>.dagster.cloud/<deployment>/graphql` in your browser. You can also query the API directly using the Python client.
-
+
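For reference, querying that endpoint directly might look like the sketch below. It assumes only the `requests` library and the documented `Dagster-Cloud-Api-Token` header; the organization name, deployment name, token, and the fields inside the query are illustrative placeholders, so confirm the actual schema in the GraphiQL explorer first.

```python
# Sketch: POST a GraphQL query to the Dagster+ GraphQL endpoint.
import requests

ORGANIZATION = "hooli"   # hypothetical organization name
DEPLOYMENT = "prod"      # hypothetical deployment name
API_TOKEN = "user:..."   # a Dagster+ user token with sufficient permissions

# Illustrative query body; the real audit log fields may differ.
AUDIT_LOG_QUERY = """
query AuditLogEntries {
  auditLog {
    entries {
      timestamp
      actionType
      userId
    }
  }
}
"""

response = requests.post(
    f"https://{ORGANIZATION}.dagster.cloud/{DEPLOYMENT}/graphql",
    headers={"Dagster-Cloud-Api-Token": API_TOKEN},
    json={"query": AUDIT_LOG_QUERY},
)
response.raise_for_status()
print(response.json())
```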
diff --git a/docs/docs-beta/docs/dagster-plus/features/branch-deployments/setting-up-branch-deployments.md b/docs/docs-beta/docs/dagster-plus/features/branch-deployments/setting-up-branch-deployments.md
index 1580a560bb953..39ef33b1362cb 100644
--- a/docs/docs-beta/docs/dagster-plus/features/branch-deployments/setting-up-branch-deployments.md
+++ b/docs/docs-beta/docs/dagster-plus/features/branch-deployments/setting-up-branch-deployments.md
@@ -116,7 +116,7 @@ While you can use your existing production agent, we recommend creating a dedica
 For example:
-
+
@@ -125,7 +125,7 @@ While you can use your existing production agent, we recommend creating a dedica
 2. After the agent is set up, modify your Helm values file to include the following:
-
+
@@ -158,7 +158,7 @@ In the `dagster_cloud.yaml` file, replace `build.registry` with the registry use
 For example:
-
+
 ### Step 4.3: Configure GitHub Action secrets
@@ -267,7 +267,7 @@ In the `dagster_cloud.yaml` file, replace `build.registry` with the registry use
 For example:
-
+
 ### Step 4.3: Configure GitLab CI/CD variables
diff --git a/docs/docs-beta/docs/dagster-plus/features/insights/google-bigquery.md b/docs/docs-beta/docs/dagster-plus/features/insights/google-bigquery.md
index 0aa4b692755a7..1413a49ba0595 100644
--- a/docs/docs-beta/docs/dagster-plus/features/insights/google-bigquery.md
+++ b/docs/docs-beta/docs/dagster-plus/features/insights/google-bigquery.md
@@ -40,10 +40,10 @@ To enable this behavior, replace usage of `BigQueryResource` with `InsightsBigQu
-
+
-
+
@@ -55,10 +55,10 @@ First, add a `.with_insights()` call to your `dbt.cli()` command(s).
-
+
-
+
@@ -66,10 +66,10 @@ Then, add the following to your `dbt_project.yml`:
-
+
-
+
diff --git a/docs/docs-beta/docs/dagster-plus/features/insights/snowflake.md b/docs/docs-beta/docs/dagster-plus/features/insights/snowflake.md
index b826c97305d3c..ae4af2b9f1777 100644
--- a/docs/docs-beta/docs/dagster-plus/features/insights/snowflake.md
+++ b/docs/docs-beta/docs/dagster-plus/features/insights/snowflake.md
@@ -42,10 +42,10 @@ Only use `create_snowflake_insights_asset_and_schedule` in a single code locatio
-
+
-
+
@@ -63,10 +63,10 @@ Only use `create_snowflake_insights_asset_and_schedule` in a single code locatio
-
+
-
+
@@ -74,10 +74,10 @@ Then, add the following to your `dbt_project.yml`:
-
+
-
+
 This adds a comment to each query, which is used by Dagster+ to attribute cost metrics to the correct assets.
diff --git a/docs/docs-beta/docs/guides/automate/asset-sensors.md b/docs/docs-beta/docs/guides/automate/asset-sensors.md
index 597d97030cbcb..2c71bdaca5bd3 100644
--- a/docs/docs-beta/docs/guides/automate/asset-sensors.md
+++ b/docs/docs-beta/docs/guides/automate/asset-sensors.md
@@ -52,7 +52,7 @@ end
 This is an example of an asset sensor that triggers a job when an asset is materialized. The `daily_sales_data` asset is in the same code location as the job and other assets in this example, but the same pattern can be applied to assets in different code locations.
-
+
 ## Customize evaluation logic
@@ -81,7 +81,7 @@ stateDiagram-v2
 In the following example, the `@asset_sensor` decorator defines a custom evaluation function that returns a `RunRequest` object when the asset is materialized and certain metadata is present; otherwise, it skips the run.
-
+
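A minimal sketch of that custom-evaluation pattern, assuming a hypothetical downstream job named `daily_sales_job` and a hypothetical `specific_property` metadata key:

```python
import dagster as dg

@dg.op
def process_sales():
    ...

@dg.job
def daily_sales_job():
    process_sales()

@dg.asset_sensor(asset_key=dg.AssetKey("daily_sales_data"), job=daily_sales_job)
def daily_sales_sensor(context: dg.SensorEvaluationContext, asset_event: dg.EventLogEntry):
    # Pull the materialization event data off the event log entry.
    materialization = asset_event.dagster_event.event_specific_data.materialization
    # Launch a run only when the expected metadata is attached to the event.
    if "specific_property" in materialization.metadata:
        return dg.RunRequest(run_key=context.cursor)
    return dg.SkipReason("Materialization is missing the expected metadata.")
```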
 ## Trigger a job with configuration
@@ -89,7 +89,7 @@ By providing a configuration to the `RunRequest` object, you can trigger a job w
 For example, you might use a sensor to trigger a job when an asset is materialized, but also pass metadata about that materialization to the job:
-
+
 ## Monitor multiple assets
@@ -97,7 +97,7 @@ When building a pipeline, you may want to monitor multiple assets with a single
 The following example uses a `@multi_asset_sensor` to monitor multiple assets and trigger a job when any of the assets are materialized:
-
+
 ## Next steps
diff --git a/docs/docs-beta/docs/guides/automate/schedules.md b/docs/docs-beta/docs/guides/automate/schedules.md
index 3749be2ed9c99..3d749758ecb87 100644
--- a/docs/docs-beta/docs/guides/automate/schedules.md
+++ b/docs/docs-beta/docs/guides/automate/schedules.md
@@ -19,7 +19,7 @@ To follow the steps in this guide, you'll need:
 A basic schedule is defined by a `JobDefinition` and a `cron_schedule` using the `ScheduleDefinition` class. A job can be thought of as a selection of assets or operations executed together.
-
+
 ## Run schedules in a different timezone
@@ -43,14 +43,14 @@ If using partitions and jobs, you can create a schedule using the partition with
 If you have a [partitioned asset](/guides/build/create-a-pipeline/partitioning) and job:
-
+
 If you have a partitioned op job:
-
+
diff --git a/docs/docs-beta/docs/guides/automate/sensors.md b/docs/docs-beta/docs/guides/automate/sensors.md
index 3d05a915b1b6e..71be08e940220 100644
--- a/docs/docs-beta/docs/guides/automate/sensors.md
+++ b/docs/docs-beta/docs/guides/automate/sensors.md
@@ -25,7 +25,7 @@ Sensors are defined with the `@sensor` decorator. The following example includes
 If the sensor finds new files, it starts a run of `my_job`. If not, it skips the run and logs `No new files found` in the Dagster UI.
-
+
 :::tip
 Unless a sensor has a `default_status` of `DefaultSensorStatus.RUNNING`, it won't be enabled when first deployed to a Dagster instance. To find and enable the sensor, click **Automation > Sensors** in the Dagster UI.
@@ -62,7 +62,7 @@ When dealing with a large number of events, you may want to implement a cursor t
 The following example demonstrates how you might use a cursor to only create `RunRequests` for files in a directory that have been updated since the last time the sensor ran.
-
+
 For sensors that consume multiple event streams, you may need to serialize and deserialize a more complex data structure in and out of the cursor string to keep track of the sensor's progress over the multiple streams.
diff --git a/docs/docs-beta/docs/guides/build/assets-concepts/asset-dependencies.md b/docs/docs-beta/docs/guides/build/assets-concepts/asset-dependencies.md
index 575aff2a39367..20495ba819d7f 100644
--- a/docs/docs-beta/docs/guides/build/assets-concepts/asset-dependencies.md
+++ b/docs/docs-beta/docs/guides/build/assets-concepts/asset-dependencies.md
@@ -31,7 +31,7 @@ To follow the steps in this guide, you'll need:
 A common and recommended approach to passing data between assets is explicitly managing data using external storage. This example pipeline uses a SQLite database as external storage:
-
+
 In this example, the first asset opens a connection to the SQLite database and writes data to it. The second asset opens a connection to the same database and reads data from it. The dependency between the first asset and the second asset is made explicit through the asset's `deps` argument.
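A sketch of that external-storage pattern, with hypothetical file and table names:

```python
import sqlite3

import dagster as dg

@dg.asset
def raw_sales_data() -> None:
    # First asset: write data to external storage (a local SQLite file).
    with sqlite3.connect("sales.db") as conn:
        conn.execute("CREATE TABLE IF NOT EXISTS sales (amount REAL)")
        conn.execute("INSERT INTO sales VALUES (100.0), (250.0)")

@dg.asset(deps=[raw_sales_data])
def sales_summary() -> None:
    # Second asset: read from the same storage. The `deps` argument tells
    # Dagster to materialize raw_sales_data first; no data is passed in memory.
    with sqlite3.connect("sales.db") as conn:
        total = conn.execute("SELECT SUM(amount) FROM sales").fetchone()[0]
    dg.get_dagster_logger().info(f"Total sales: {total}")
```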
@@ -56,7 +56,7 @@ I/O managers handle:
 For a deeper understanding of I/O managers, check out the [Understanding I/O managers](/guides/build/configure/io-managers) guide.
-
+
 In this example, a `DuckDBPandasIOManager` is instantiated to run using a local file. The I/O manager handles both reading and writing to the database.
@@ -87,7 +87,7 @@ carefully considering how you have modeled your pipeline:
 Consider this example:
-
+
 This example downloads a zip file from Google Drive, unzips it, and loads the data into a Pandas DataFrame. It relies on each asset running on the same file system to perform these operations.
@@ -95,7 +95,7 @@ The assets are modeled as tasks, rather than as data assets. For more informatio
 In this refactor, the `download_files`, `unzip_files`, and `load_data` assets are combined into a single asset, `my_dataset`. This asset downloads the files, unzips them, and loads the data into a data warehouse.
-
+
 This approach still passes data around explicitly, but it does so within a single asset rather than across assets. This pipeline still assumes enough disk and
diff --git a/docs/docs-beta/docs/guides/build/create-a-pipeline/data-assets.md b/docs/docs-beta/docs/guides/build/create-a-pipeline/data-assets.md
index 24f7e5a7a9c04..5b37ffaaf1ec0 100644
--- a/docs/docs-beta/docs/guides/build/create-a-pipeline/data-assets.md
+++ b/docs/docs-beta/docs/guides/build/create-a-pipeline/data-assets.md
@@ -30,7 +30,7 @@ Dagster has four types of asset decorators:
 The simplest way to define a data asset in Dagster is by using the `@asset` decorator. This decorator marks a Python function as an asset.
-
+
 In this example, `my_data_asset` is an asset that logs its output. Dagster automatically tracks its dependencies and handles its execution within the pipeline.
@@ -43,7 +43,7 @@ When you need to generate multiple assets from a single operation, you can use t
 In this example, `my_multi_asset` produces two assets: `asset_one` and `asset_two`. Each is derived from the same function, which makes it easier to handle related data transformations together:
-
+
 This example could be expressed as:
@@ -57,7 +57,7 @@ flowchart LR
 For cases where you need to perform multiple operations to produce a single asset, you can use the `@graph_asset` decorator. This approach encapsulates a series of operations and exposes them as a single asset, allowing you to model complex pipelines while only exposing the final output.
-
+
 In this example, `complex_asset` is an asset that's the result of two operations: `step_one` and `step_two`. These steps are combined into a single asset, abstracting away the intermediate representations.
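The `@graph_asset` pattern referenced above might look roughly like the following sketch; the op bodies are placeholders:

```python
import dagster as dg

@dg.op
def step_one() -> str:
    # Hypothetical first operation, e.g. fetching raw text.
    return "raw data"

@dg.op
def step_two(raw: str) -> str:
    # Hypothetical second operation that transforms the output of step_one.
    return raw.upper()

@dg.graph_asset
def complex_asset():
    # Only the final output is exposed as an asset; the intermediate
    # value from step_one stays internal to the graph.
    return step_two(step_one())
```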
diff --git a/docs/docs-beta/docs/guides/deploy/deployment-options/kubernetes.md b/docs/docs-beta/docs/guides/deploy/deployment-options/kubernetes.md
index d7b4d47e0cd8b..f29729c61072b 100644
--- a/docs/docs-beta/docs/guides/deploy/deployment-options/kubernetes.md
+++ b/docs/docs-beta/docs/guides/deploy/deployment-options/kubernetes.md
@@ -35,7 +35,7 @@ Next, you'll build a Docker image that contains your Dagster project and all of
 2. Install `dagster`, `dagster-postgres`, and `dagster-k8s`, along with any other libraries your project depends on. The example project has a dependency on `pandas`, so it's included in the `pip install` in the following example Dockerfile.
 3. Expose port 80, which we'll use to set up port-forwarding later.
-
+
 ### Step 1.2: Build and push a Docker image
@@ -114,7 +114,7 @@ To deploy your project, you'll need to set the following options:
 If you are following this guide on your local machine, you will also need to set `pullPolicy: IfNotPresent`. This will use the local version of the image built in Step 1. However, in production use cases, when your Docker images are pushed to image registries, this value should remain `pullPolicy: Always`.
-
+
 In this example, the image `name` and `tag` are set to `iris_analysis` and `1` to match the image that was pushed in Step 1. To run the gRPC server, the path to the Dagster project needs to be specified, so `--python-file` and `/iris_analysis/definitions.py` are set for `dagsterApiGrpcArgs`.
diff --git a/docs/docs-beta/docs/guides/test/data-freshness-testing.md b/docs/docs-beta/docs/guides/test/data-freshness-testing.md
index 6209700255c21..a738941c2d8f5 100644
--- a/docs/docs-beta/docs/guides/test/data-freshness-testing.md
+++ b/docs/docs-beta/docs/guides/test/data-freshness-testing.md
@@ -41,7 +41,7 @@ Materializable assets are assets materialized by Dagster. To calculate whether a
 The example below defines a freshness check on an asset that fails if the asset's latest materialization occurred more than one hour before the current time.
-
+
 ## External asset freshness \{#external-assets}
@@ -51,7 +51,7 @@ To run freshness checks on external assets, the checks need to know when the ext
 The example below defines a freshness check and adds a schedule to run the check periodically.
-
+
 ### Testing freshness with anomaly detection \{#anomaly-detection}
@@ -61,7 +61,7 @@ Anomaly detection is a Dagster+ Pro feature.
 Instead of applying policies on an asset-by-asset basis, Dagster+ Pro users can use `build_anomaly_detection_freshness_checks` to take advantage of a time series anomaly detection model to determine if data arrives later than expected.
-
+
 :::note
 If the asset hasn't been updated enough times, the check will pass with a message indicating that more data is needed to detect anomalies.
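For the materializable-asset case above, a one-hour freshness check might be wired up like this sketch, assuming a hypothetical `hourly_sales` asset and a recent Dagster release where `build_last_update_freshness_checks` is available:

```python
from datetime import timedelta

import dagster as dg

@dg.asset
def hourly_sales() -> None:
    ...  # hypothetical asset materialized by Dagster

# Fails if the latest materialization is more than one hour old.
freshness_checks = dg.build_last_update_freshness_checks(
    assets=[hourly_sales],
    lower_bound_delta=timedelta(hours=1),
)

defs = dg.Definitions(assets=[hourly_sales], asset_checks=freshness_checks)
```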