Merge pull request #308 from dbt-labs/release-0.3.19
b-per authored Oct 23, 2024
2 parents f345654 + 27a93a5 commit fbfa5ae
Showing 19 changed files with 657 additions and 34 deletions.
14 changes: 13 additions & 1 deletion CHANGELOG.md

All notable changes to this project will be documented in this file.

## [Unreleased](https://github.com/dbt-labs/terraform-provider-dbtcloud/compare/v0.3.19...HEAD)

# [0.3.19](https://github.com/dbt-labs/terraform-provider-dbtcloud/compare/v0.3.18...v0.3.19)

### Fixes

- Allow defining `dbtcloud_databricks_credential` resources when using global connections, which don't generate an `adapter_id` (see the docs for the resource for more details)

### Changes

- Add the ability to compare changes in a `dbtcloud_job` resource
- Add deprecation notice for `target_name` in `dbtcloud_databricks_credential` as those can't be set in the UI
- Make `versionless` the default version for environments; it can still be changed

# [0.3.18](https://github.com/dbt-labs/terraform-provider-dbtcloud/compare/v0.3.17...v0.3.18)

1 change: 1 addition & 0 deletions docs/data-sources/job.md
- `id` (String) The ID of this resource.
- `job_completion_trigger_condition` (Set of Object) Which other job should trigger this job when it finishes, and on which conditions. (see [below for nested schema](#nestedatt--job_completion_trigger_condition))
- `name` (String) Given name for the job
- `run_compare_changes` (Boolean) Whether the CI job should compare data changes introduced by the code change in the PR.
- `self_deferring` (Boolean) Whether this job defers on a previous run of itself (overrides value in deferring_job_id)
- `timeout_seconds` (Number) Number of seconds before the job times out
- `triggers` (Map of Boolean) Flags for which types of triggers to use, keys of github_webhook, git_provider_webhook, schedule, on_merge
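For context, reading the new attribute from this data source might look like the following sketch (the ID values are placeholders, not taken from this commit):

```terraform
# sketch only: job_id and project_id values are illustrative
data "dbtcloud_job" "ci_job" {
  job_id     = 123
  project_id = 456
}

# expose whether the job runs Advanced CI data comparisons
output "ci_job_compares_changes" {
  value = data.dbtcloud_job.ci_job.run_compare_changes
}
```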
1 change: 1 addition & 0 deletions docs/data-sources/jobs.md
- `job_type` (String) The type of job (e.g. CI, scheduled)
- `name` (String) The name of the job
- `project_id` (Number) The ID of the project
- `run_compare_changes` (Boolean) Whether the job should compare data changes introduced by the code change in the PR
- `run_generate_sources` (Boolean) Whether the job tests source freshness
- `schedule` (Attributes) (see [below for nested schema](#nestedatt--jobs--schedule))
- `settings` (Attributes) (see [below for nested schema](#nestedatt--jobs--settings))
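A hypothetical use of the plural data source, assuming it accepts a `project_id` filter (the ID is a placeholder), could surface which jobs have the new flag enabled:

```terraform
# sketch only: assumes a project_id filter on the data source
data "dbtcloud_jobs" "project_jobs" {
  project_id = 456
}

# names of jobs that run data comparisons in CI
output "compare_jobs" {
  value = [for job in data.dbtcloud_jobs.project_jobs.jobs : job.name if job.run_compare_changes]
}
```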
18 changes: 13 additions & 5 deletions docs/resources/databricks_credential.md
## Example Usage

```terraform
# when using the Databricks adapter with a new `dbtcloud_global_connection`
# we don't provide an `adapter_id`
resource "dbtcloud_databricks_credential" "my_databricks_cred" {
project_id = dbtcloud_project.dbt_project.id
token = "abcdefgh"
schema = "my_schema"
adapter_type = "databricks"
}
# when using the Databricks adapter with a legacy `dbtcloud_connection`
# we provide an `adapter_id`
resource "dbtcloud_databricks_credential" "my_databricks_cred_legacy" {
project_id = dbtcloud_project.dbt_project.id
adapter_id = dbtcloud_connection.my_databricks_connection.adapter_id
token = "abcdefgh"
schema = "my_schema"
adapter_type = "databricks"
}

resource "dbtcloud_databricks_credential" "my_spark_cred" {
project_id = dbtcloud_project.dbt_project.id
adapter_id = dbtcloud_connection.my_databricks_connection.adapter_id
token = "abcdefgh"
schema = "my_schema"
adapter_type = "spark"
}
```

## Schema

### Required

- `adapter_type` (String) The type of the adapter (databricks or spark)
- `project_id` (Number) Project ID to create the Databricks credential in
- `schema` (String) The schema where to create models
- `token` (String, Sensitive) Token for Databricks user

### Optional

- `adapter_id` (Number) Databricks adapter ID for the credential (do not fill in when using global connections, only to be used for connections created with the legacy connection resource `dbtcloud_connection`)
- `catalog` (String) The catalog where to create models (only for the databricks adapter)
- `target_name` (String, Deprecated) Target name

### Read-Only

6 changes: 3 additions & 3 deletions docs/resources/environment.md

```terraform
resource "dbtcloud_environment" "ci_environment" {
// the dbt_version can be major.minor.0-latest, major.minor.0-pre or versionless (the default when not configured)
dbt_version = "versionless"
name = "CI"
project_id = dbtcloud_project.dbt_project.id
}
```

```terraform
resource "dbtcloud_environment" "dev_environment" {
name = "Dev"
project_id = dbtcloud_project.dbt_project.id
type = "development"
connection_id = dbtcloud_global_connection.my_other_global_connection.id
}
```

## Schema

### Required

- `name` (String) Environment name
- `project_id` (Number) Project ID to create the environment in
- `type` (String) The type of environment (must be either development or deployment)
Expand All @@ -71,6 +70,7 @@ resource "dbtcloud_environment" "dev_environment" {
- To avoid Terraform state issues, when using this field, the `dbtcloud_project_connection` resource should be removed from the project or you need to make sure that the `connection_id` is the same in `dbtcloud_project_connection` and in the `connection_id` of the Development environment of the project
- `credential_id` (Number) Credential ID to create the environment with. A credential is not required for development environments but is required for deployment environments
- `custom_branch` (String) Which custom branch to use in this environment
- `dbt_version` (String) Version number of dbt to use in this environment. It needs to be in the format `major.minor.0-latest` (e.g. `1.5.0-latest`), `major.minor.0-pre` or `versionless`. Defaults to `versionless` if no version is provided
- `deployment_type` (String) The type of environment. Only valid for environments of type 'deployment' and for now can only be 'production', 'staging' or left empty for generic environments
- `extended_attributes_id` (Number) ID of the extended attributes for the environment
- `is_active` (Boolean) Whether the environment is active
1 change: 1 addition & 0 deletions docs/resources/job.md
- `is_active` (Boolean) Should always be set to true as setting it to false is the same as creating a job in a deleted state. To create/keep a job in a 'deactivated' state, check the `triggers` config.
- `job_completion_trigger_condition` (Block Set, Max: 1) Which other job should trigger this job when it finishes, and on which conditions (sometimes referred as 'job chaining'). (see [below for nested schema](#nestedblock--job_completion_trigger_condition))
- `num_threads` (Number) Number of threads to use in the job
- `run_compare_changes` (Boolean) Whether the CI job should compare data changes introduced by the code changes. Requires `deferring_environment_id` to be set and Advanced CI to be activated in the dbt Cloud account settings.
- `run_generate_sources` (Boolean) Flag for whether the job should add a `dbt source freshness` step to the job. The difference between manually adding a step with `dbt source freshness` in the job steps or using this flag is that with this flag, a failed freshness will still allow the following steps to run.
- `schedule_cron` (String) Custom cron expression for schedule
- `schedule_days` (List of Number) List of days of week as numbers (0 = Sunday, 7 = Saturday) to execute the job at if running on a schedule
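Putting the new flag together with its prerequisite might look like the sketch below (resource names and steps are illustrative, and Advanced CI must already be enabled on the dbt Cloud account):

```terraform
# sketch only: names, steps and referenced resources are placeholders
resource "dbtcloud_job" "ci_job" {
  project_id     = dbtcloud_project.dbt_project.id
  environment_id = dbtcloud_environment.ci_environment.environment_id
  name           = "CI job"
  execute_steps  = ["dbt build -s state:modified+"]

  # run_compare_changes needs a deferring environment to diff against
  deferring_environment_id = dbtcloud_environment.prod_environment.environment_id
  run_compare_changes      = true

  triggers = {
    github_webhook       = true
    git_provider_webhook = true
    schedule             = false
    on_merge             = false
  }
}
```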
14 changes: 11 additions & 3 deletions examples/resources/dbtcloud_databricks_credential/resource.tf
# when using the Databricks adapter with a new `dbtcloud_global_connection`
# we don't provide an `adapter_id`
resource "dbtcloud_databricks_credential" "my_databricks_cred" {
project_id = dbtcloud_project.dbt_project.id
token = "abcdefgh"
schema = "my_schema"
adapter_type = "databricks"
}

# when using the Databricks adapter with a legacy `dbtcloud_connection`
# we provide an `adapter_id`
resource "dbtcloud_databricks_credential" "my_databricks_cred_legacy" {
project_id = dbtcloud_project.dbt_project.id
adapter_id = dbtcloud_connection.my_databricks_connection.adapter_id
token = "abcdefgh"
schema = "my_schema"
adapter_type = "databricks"
}

resource "dbtcloud_databricks_credential" "my_spark_cred" {
project_id = dbtcloud_project.dbt_project.id
adapter_id = dbtcloud_connection.my_databricks_connection.adapter_id
token = "abcdefgh"
schema = "my_schema"
adapter_type = "spark"
}
4 changes: 2 additions & 2 deletions examples/resources/dbtcloud_environment/resource.tf
resource "dbtcloud_environment" "ci_environment" {
// the dbt_version can be major.minor.0-latest, major.minor.0-pre or versionless (the default when not configured)
dbt_version = "versionless"
name = "CI"
project_id = dbtcloud_project.dbt_project.id
}

resource "dbtcloud_environment" "dev_environment" {
name = "Dev"
project_id = dbtcloud_project.dbt_project.id
type = "development"
connection_id = dbtcloud_global_connection.my_other_global_connection.id
}