
Update GitHub actions #9

Open · wants to merge 18 commits into main

Conversation

LoHertel (Owner)

No description provided.

@github-actions

Prod Environment 🏞

Terraform Format and Style 🖌 success

Terraform Initialization ⚙️ success

Terraform Validation 🤖 success

Terraform Plan 📖 success

Show Plan
Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # google_compute_firewall.ssh-server will be created
  + resource "google_compute_firewall" "ssh-server" {
      + creation_timestamp = (known after apply)
      + destination_ranges = (known after apply)
      + direction          = (known after apply)
      + enable_logging     = (known after apply)
      + id                 = (known after apply)
      + name               = "default-allow-ssh-prod"
      + network            = "vpc-network-prod"
      + priority           = 1000
      + project            = (known after apply)
      + self_link          = (known after apply)
      + source_ranges      = [
          + "0.0.0.0/0",
        ]
      + target_tags        = [
          + "ssh-server",
        ]

      + allow {
          + ports    = [
              + "22",
            ]
          + protocol = "tcp"
        }
    }

  # google_compute_instance.etl-host will be created
  + resource "google_compute_instance" "etl-host" {
      + allow_stopping_for_update = true
      + can_ip_forward            = false
      + cpu_platform              = (known after apply)
      + current_status            = (known after apply)
      + deletion_protection       = false
      + guest_accelerator         = (known after apply)
      + id                        = (known after apply)
      + instance_id               = (known after apply)
      + label_fingerprint         = (known after apply)
      + machine_type              = "e2-micro"
      + metadata                  = {
          + "enable-oslogin" = "true"
        }
      + metadata_fingerprint      = (known after apply)
      + min_cpu_platform          = (known after apply)
      + name                      = "etl-host-prod"
      + project                   = (known after apply)
      + self_link                 = (known after apply)
      + tags                      = [
          + "ssh-server",
        ]
      + tags_fingerprint          = (known after apply)
      + zone                      = "us-east1-d"

      + boot_disk {
          + auto_delete                = true
          + device_name                = (known after apply)
          + disk_encryption_key_sha256 = (known after apply)
          + kms_key_self_link          = (known after apply)
          + mode                       = "READ_WRITE"
          + source                     = (known after apply)

          + initialize_params {
              + image  = "ubuntu-os-cloud/ubuntu-2204-jammy-v20220420"
              + labels = (known after apply)
              + size   = (known after apply)
              + type   = (known after apply)
            }
        }

      + network_interface {
          + ipv6_access_type   = (known after apply)
          + name               = (known after apply)
          + network            = (known after apply)
          + network_ip         = (known after apply)
          + stack_type         = (known after apply)
          + subnetwork         = (known after apply)
          + subnetwork_project = (known after apply)

          + access_config {
              + nat_ip       = (known after apply)
              + network_tier = "STANDARD"
            }
        }
    }

  # google_compute_network.vpc_network will be created
  + resource "google_compute_network" "vpc_network" {
      + auto_create_subnetworks                   = true
      + delete_default_routes_on_create           = false
      + gateway_ipv4                              = (known after apply)
      + id                                        = (known after apply)
      + internal_ipv6_range                       = (known after apply)
      + mtu                                       = (known after apply)
      + name                                      = "vpc-network-prod"
      + network_firewall_policy_enforcement_order = "AFTER_CLASSIC_FIREWALL"
      + project                                   = (known after apply)
      + routing_mode                              = (known after apply)
      + self_link                                 = (known after apply)
    }

  # google_storage_bucket.data-lake-buckets["org-example-111111-prod-us-east1-data-lake"] will be created
  + resource "google_storage_bucket" "data-lake-buckets" {
      + force_destroy               = true
      + id                          = (known after apply)
      + labels                      = (known after apply)
      + location                    = "US-EAST1"
      + name                        = "org-example-111111-prod-us-east1-data-lake"
      + project                     = (known after apply)
      + public_access_prevention    = "enforced"
      + self_link                   = (known after apply)
      + storage_class               = "STANDARD"
      + uniform_bucket_level_access = true
      + url                         = (known after apply)

      + versioning {
          + enabled = false
        }
    }

  # google_storage_bucket.data-lake-buckets["org-example-111111-prod-us-east1-data-lake-anon"] will be created
  + resource "google_storage_bucket" "data-lake-buckets" {
      + force_destroy               = true
      + id                          = (known after apply)
      + labels                      = (known after apply)
      + location                    = "US-EAST1"
      + name                        = "org-example-111111-prod-us-east1-data-lake-anon"
      + project                     = (known after apply)
      + public_access_prevention    = "enforced"
      + self_link                   = (known after apply)
      + storage_class               = "STANDARD"
      + uniform_bucket_level_access = true
      + url                         = (known after apply)

      + versioning {
          + enabled = false
        }
    }

Plan: 5 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + ip = (known after apply)

─────────────────────────────────────────────────────────────────────────────

Note: You didn't use the -out option to save this plan, so Terraform can't
guarantee to take exactly these actions if you run "terraform apply" now.

Pushed by: @LoHertel, Action: pull_request
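
The plan above is consistent with a firewall definition along these lines. This is a sketch only: the `var.environment` variable and the reference to `google_compute_network.vpc_network` are assumptions inferred from the `-prod`/`-stage` name suffixes in the plan output, not taken from the repository.

```hcl
# Hypothetical sketch: allow inbound SSH to instances tagged "ssh-server".
# var.environment ("prod" or "stage") is an assumed variable, inferred from
# the resource name suffixes shown in the plan.
resource "google_compute_firewall" "ssh-server" {
  name    = "default-allow-ssh-${var.environment}"
  network = google_compute_network.vpc_network.name

  allow {
    protocol = "tcp"
    ports    = ["22"]
  }

  source_ranges = ["0.0.0.0/0"]
  target_tags   = ["ssh-server"]
}
```

Note that `source_ranges = ["0.0.0.0/0"]` exposes port 22 to the whole internet; the plan mitigates this somewhat by enabling OS Login on the instance.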

@github-actions

Stage Environment 🏞

Terraform Format and Style 🖌 success

Terraform Initialization ⚙️ success

Terraform Validation 🤖 success

Terraform Plan 📖 success

Show Plan
Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # google_compute_firewall.ssh-server will be created
  + resource "google_compute_firewall" "ssh-server" {
      + creation_timestamp = (known after apply)
      + destination_ranges = (known after apply)
      + direction          = (known after apply)
      + enable_logging     = (known after apply)
      + id                 = (known after apply)
      + name               = "default-allow-ssh-stage"
      + network            = "vpc-network-stage"
      + priority           = 1000
      + project            = (known after apply)
      + self_link          = (known after apply)
      + source_ranges      = [
          + "0.0.0.0/0",
        ]
      + target_tags        = [
          + "ssh-server",
        ]

      + allow {
          + ports    = [
              + "22",
            ]
          + protocol = "tcp"
        }
    }

  # google_compute_instance.etl-host will be created
  + resource "google_compute_instance" "etl-host" {
      + allow_stopping_for_update = true
      + can_ip_forward            = false
      + cpu_platform              = (known after apply)
      + current_status            = (known after apply)
      + deletion_protection       = false
      + guest_accelerator         = (known after apply)
      + id                        = (known after apply)
      + instance_id               = (known after apply)
      + label_fingerprint         = (known after apply)
      + machine_type              = "e2-micro"
      + metadata                  = {
          + "enable-oslogin" = "true"
        }
      + metadata_fingerprint      = (known after apply)
      + min_cpu_platform          = (known after apply)
      + name                      = "etl-host-stage"
      + project                   = (known after apply)
      + self_link                 = (known after apply)
      + tags                      = [
          + "ssh-server",
        ]
      + tags_fingerprint          = (known after apply)
      + zone                      = "us-east1-d"

      + boot_disk {
          + auto_delete                = true
          + device_name                = (known after apply)
          + disk_encryption_key_sha256 = (known after apply)
          + kms_key_self_link          = (known after apply)
          + mode                       = "READ_WRITE"
          + source                     = (known after apply)

          + initialize_params {
              + image  = "ubuntu-os-cloud/ubuntu-2204-jammy-v20220420"
              + labels = (known after apply)
              + size   = (known after apply)
              + type   = (known after apply)
            }
        }

      + network_interface {
          + ipv6_access_type   = (known after apply)
          + name               = (known after apply)
          + network            = (known after apply)
          + network_ip         = (known after apply)
          + stack_type         = (known after apply)
          + subnetwork         = (known after apply)
          + subnetwork_project = (known after apply)

          + access_config {
              + nat_ip       = (known after apply)
              + network_tier = "STANDARD"
            }
        }
    }

  # google_compute_network.vpc_network will be created
  + resource "google_compute_network" "vpc_network" {
      + auto_create_subnetworks                   = true
      + delete_default_routes_on_create           = false
      + gateway_ipv4                              = (known after apply)
      + id                                        = (known after apply)
      + internal_ipv6_range                       = (known after apply)
      + mtu                                       = (known after apply)
      + name                                      = "vpc-network-stage"
      + network_firewall_policy_enforcement_order = "AFTER_CLASSIC_FIREWALL"
      + project                                   = (known after apply)
      + routing_mode                              = (known after apply)
      + self_link                                 = (known after apply)
    }

  # google_storage_bucket.data-lake-buckets["org-example-111111-stage-us-east1-data-lake"] will be created
  + resource "google_storage_bucket" "data-lake-buckets" {
      + force_destroy               = true
      + id                          = (known after apply)
      + labels                      = (known after apply)
      + location                    = "US-EAST1"
      + name                        = "org-example-111111-stage-us-east1-data-lake"
      + project                     = (known after apply)
      + public_access_prevention    = "enforced"
      + self_link                   = (known after apply)
      + storage_class               = "STANDARD"
      + uniform_bucket_level_access = true
      + url                         = (known after apply)

      + versioning {
          + enabled = false
        }
    }

  # google_storage_bucket.data-lake-buckets["org-example-111111-stage-us-east1-data-lake-anon"] will be created
  + resource "google_storage_bucket" "data-lake-buckets" {
      + force_destroy               = true
      + id                          = (known after apply)
      + labels                      = (known after apply)
      + location                    = "US-EAST1"
      + name                        = "org-example-111111-stage-us-east1-data-lake-anon"
      + project                     = (known after apply)
      + public_access_prevention    = "enforced"
      + self_link                   = (known after apply)
      + storage_class               = "STANDARD"
      + uniform_bucket_level_access = true
      + url                         = (known after apply)

      + versioning {
          + enabled = false
        }
    }

Plan: 5 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + ip = (known after apply)

─────────────────────────────────────────────────────────────────────────────

Note: You didn't use the -out option to save this plan, so Terraform can't
guarantee to take exactly these actions if you run "terraform apply" now.

Pushed by: @LoHertel, Action: pull_request
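
The bracketed keys on the two `google_storage_bucket.data-lake-buckets[...]` addresses indicate a single resource block expanded with `for_each`. A minimal sketch, assuming a set variable (the name `data_lake_buckets` is hypothetical, inferred from the plan's instance keys):

```hcl
# Hypothetical sketch: one bucket per entry in an assumed set variable.
variable "data_lake_buckets" {
  type = set(string)
  default = [
    "org-example-111111-stage-us-east1-data-lake",
    "org-example-111111-stage-us-east1-data-lake-anon",
  ]
}

resource "google_storage_bucket" "data-lake-buckets" {
  for_each = var.data_lake_buckets

  name                        = each.key # full bucket name used as the instance key
  location                    = "US-EAST1"
  storage_class               = "STANDARD"
  force_destroy               = true
  public_access_prevention    = "enforced"
  uniform_bucket_level_access = true
}
```

With `for_each` over a set, each bucket gets a stable address keyed by its name, so adding or removing one entry does not disturb the others in state.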

@github-actions
Copy link

Prod Environment 🏞

Terraform Format and Style 🖌success

Terraform Initialization ⚙️success

Terraform Validation 🤖success

Terraform Plan 📖success

Show Plan
Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # google_compute_firewall.ssh-server will be created
  + resource "google_compute_firewall" "ssh-server" {
      + creation_timestamp = (known after apply)
      + destination_ranges = (known after apply)
      + direction          = (known after apply)
      + enable_logging     = (known after apply)
      + id                 = (known after apply)
      + name               = "default-allow-ssh-prod"
      + network            = "vpc-network-prod"
      + priority           = 1000
      + project            = (known after apply)
      + self_link          = (known after apply)
      + source_ranges      = [
          + "0.0.0.0/0",
        ]
      + target_tags        = [
          + "ssh-server",
        ]

      + allow {
          + ports    = [
              + "22",
            ]
          + protocol = "tcp"
        }
    }

  # google_compute_instance.etl-host will be created
  + resource "google_compute_instance" "etl-host" {
      + allow_stopping_for_update = true
      + can_ip_forward            = false
      + cpu_platform              = (known after apply)
      + current_status            = (known after apply)
      + deletion_protection       = false
      + guest_accelerator         = (known after apply)
      + id                        = (known after apply)
      + instance_id               = (known after apply)
      + label_fingerprint         = (known after apply)
      + machine_type              = "e2-micro"
      + metadata                  = {
          + "enable-oslogin" = "true"
        }
      + metadata_fingerprint      = (known after apply)
      + min_cpu_platform          = (known after apply)
      + name                      = "etl-host-prod"
      + project                   = (known after apply)
      + self_link                 = (known after apply)
      + tags                      = [
          + "ssh-server",
        ]
      + tags_fingerprint          = (known after apply)
      + zone                      = "us-east1-d"

      + boot_disk {
          + auto_delete                = true
          + device_name                = (known after apply)
          + disk_encryption_key_sha256 = (known after apply)
          + kms_key_self_link          = (known after apply)
          + mode                       = "READ_WRITE"
          + source                     = (known after apply)

          + initialize_params {
              + image  = "ubuntu-os-cloud/ubuntu-2204-jammy-v20220420"
              + labels = (known after apply)
              + size   = (known after apply)
              + type   = (known after apply)
            }
        }

      + network_interface {
          + ipv6_access_type   = (known after apply)
          + name               = (known after apply)
          + network            = (known after apply)
          + network_ip         = (known after apply)
          + stack_type         = (known after apply)
          + subnetwork         = (known after apply)
          + subnetwork_project = (known after apply)

          + access_config {
              + nat_ip       = (known after apply)
              + network_tier = "STANDARD"
            }
        }
    }

  # google_compute_network.vpc_network will be created
  + resource "google_compute_network" "vpc_network" {
      + auto_create_subnetworks                   = true
      + delete_default_routes_on_create           = false
      + gateway_ipv4                              = (known after apply)
      + id                                        = (known after apply)
      + internal_ipv6_range                       = (known after apply)
      + mtu                                       = (known after apply)
      + name                                      = "vpc-network-prod"
      + network_firewall_policy_enforcement_order = "AFTER_CLASSIC_FIREWALL"
      + project                                   = (known after apply)
      + routing_mode                              = (known after apply)
      + self_link                                 = (known after apply)
    }

  # google_storage_bucket.data-lake-buckets["org-example-111111-prod-us-east1-data-lake"] will be created
  + resource "google_storage_bucket" "data-lake-buckets" {
      + force_destroy               = true
      + id                          = (known after apply)
      + labels                      = (known after apply)
      + location                    = "US-EAST1"
      + name                        = "org-example-111111-prod-us-east1-data-lake"
      + project                     = (known after apply)
      + public_access_prevention    = "enforced"
      + self_link                   = (known after apply)
      + storage_class               = "STANDARD"
      + uniform_bucket_level_access = true
      + url                         = (known after apply)

      + versioning {
          + enabled = false
        }
    }

  # google_storage_bucket.data-lake-buckets["org-example-111111-prod-us-east1-data-lake-anon"] will be created
  + resource "google_storage_bucket" "data-lake-buckets" {
      + force_destroy               = true
      + id                          = (known after apply)
      + labels                      = (known after apply)
      + location                    = "US-EAST1"
      + name                        = "org-example-111111-prod-us-east1-data-lake-anon"
      + project                     = (known after apply)
      + public_access_prevention    = "enforced"
      + self_link                   = (known after apply)
      + storage_class               = "STANDARD"
      + uniform_bucket_level_access = true
      + url                         = (known after apply)

      + versioning {
          + enabled = false
        }
    }

Plan: 5 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + ip = (known after apply)

─────────────────────────────────────────────────────────────────────────────

Note: You didn't use the -out option to save this plan, so Terraform can't
guarantee to take exactly these actions if you run "terraform apply" now.
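
The note above can be silenced by having the workflow save the plan to a file and apply that exact file; a minimal sketch of the two commands (the file name `tfplan` is an arbitrary choice, not taken from this repository's workflow):

```shell
# Write the reviewed plan to a file instead of only printing it
terraform plan -out=tfplan

# Apply exactly the saved plan; no re-planning happens at this step
terraform apply tfplan
```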

Pushed by: @LoHertel, Action: pull_request

@github-actions


💰 Infracost estimate: monthly cost will not change

@github-actions

Terraform Prod Config

🖌 Format and Style: success


🛠 Initialization: success

🔎 Validation: success

📖 Plan: success

Show Plan
Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # google_compute_firewall.ssh-server will be created
  + resource "google_compute_firewall" "ssh-server" {
      + creation_timestamp = (known after apply)
      + destination_ranges = (known after apply)
      + direction          = (known after apply)
      + enable_logging     = (known after apply)
      + id                 = (known after apply)
      + name               = "default-allow-ssh-prod"
      + network            = "vpc-network-prod"
      + priority           = 1000
      + project            = (known after apply)
      + self_link          = (known after apply)
      + source_ranges      = [
          + "0.0.0.0/0",
        ]
      + target_tags        = [
          + "ssh-server",
        ]

      + allow {
          + ports    = [
              + "22",
            ]
          + protocol = "tcp"
        }
    }

  # google_compute_instance.etl-host will be created
  + resource "google_compute_instance" "etl-host" {
      + allow_stopping_for_update = true
      + can_ip_forward            = false
      + cpu_platform              = (known after apply)
      + current_status            = (known after apply)
      + deletion_protection       = false
      + guest_accelerator         = (known after apply)
      + id                        = (known after apply)
      + instance_id               = (known after apply)
      + label_fingerprint         = (known after apply)
      + machine_type              = "e2-micro"
      + metadata                  = {
          + "enable-oslogin" = "true"
        }
      + metadata_fingerprint      = (known after apply)
      + min_cpu_platform          = (known after apply)
      + name                      = "etl-host-prod"
      + project                   = (known after apply)
      + self_link                 = (known after apply)
      + tags                      = [
          + "ssh-server",
        ]
      + tags_fingerprint          = (known after apply)
      + zone                      = "us-east1-d"

      + boot_disk {
          + auto_delete                = true
          + device_name                = (known after apply)
          + disk_encryption_key_sha256 = (known after apply)
          + kms_key_self_link          = (known after apply)
          + mode                       = "READ_WRITE"
          + source                     = (known after apply)

          + initialize_params {
              + image  = "ubuntu-os-cloud/ubuntu-2204-jammy-v20220420"
              + labels = (known after apply)
              + size   = (known after apply)
              + type   = (known after apply)
            }
        }

      + network_interface {
          + ipv6_access_type   = (known after apply)
          + name               = (known after apply)
          + network            = (known after apply)
          + network_ip         = (known after apply)
          + stack_type         = (known after apply)
          + subnetwork         = (known after apply)
          + subnetwork_project = (known after apply)

          + access_config {
              + nat_ip       = (known after apply)
              + network_tier = "STANDARD"
            }
        }
    }

  # google_compute_network.vpc_network will be created
  + resource "google_compute_network" "vpc_network" {
      + auto_create_subnetworks                   = true
      + delete_default_routes_on_create           = false
      + gateway_ipv4                              = (known after apply)
      + id                                        = (known after apply)
      + internal_ipv6_range                       = (known after apply)
      + mtu                                       = (known after apply)
      + name                                      = "vpc-network-prod"
      + network_firewall_policy_enforcement_order = "AFTER_CLASSIC_FIREWALL"
      + project                                   = (known after apply)
      + routing_mode                              = (known after apply)
      + self_link                                 = (known after apply)
    }

  # google_storage_bucket.data-lake-buckets["org-example-111111-prod-us-east1-data-lake"] will be created
  + resource "google_storage_bucket" "data-lake-buckets" {
      + force_destroy               = true
      + id                          = (known after apply)
      + labels                      = (known after apply)
      + location                    = "US-EAST1"
      + name                        = "org-example-111111-prod-us-east1-data-lake"
      + project                     = (known after apply)
      + public_access_prevention    = "enforced"
      + self_link                   = (known after apply)
      + storage_class               = "STANDARD"
      + uniform_bucket_level_access = true
      + url                         = (known after apply)

      + versioning {
          + enabled = false
        }
    }

  # google_storage_bucket.data-lake-buckets["org-example-111111-prod-us-east1-data-lake-anon"] will be created
  + resource "google_storage_bucket" "data-lake-buckets" {
      + force_destroy               = true
      + id                          = (known after apply)
      + labels                      = (known after apply)
      + location                    = "US-EAST1"
      + name                        = "org-example-111111-prod-us-east1-data-lake-anon"
      + project                     = (known after apply)
      + public_access_prevention    = "enforced"
      + self_link                   = (known after apply)
      + storage_class               = "STANDARD"
      + uniform_bucket_level_access = true
      + url                         = (known after apply)

      + versioning {
          + enabled = false
        }
    }

Plan: 5 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + ip = (known after apply)

─────────────────────────────────────────────────────────────────────────────

Note: You didn't use the -out option to save this plan, so Terraform can't
guarantee to take exactly these actions if you run "terraform apply" now.

@github-actions

💰 Infracost estimate: monthly cost will not change

Project Cost change New monthly cost
LoHertel/terraform-demo/stage $0 $7
Infracost output
──────────────────────────────────
Project: LoHertel/terraform-demo/stage

- google_compute_instance.etl-host
  -$7

    - Instance usage (Linux/UNIX, on-demand, e2-micro)
      -$6

    - Standard provisioned storage (pd-standard)
      -$0.40

+ google_compute_instance.etl_host
  +$7

    + Instance usage (Linux/UNIX, on-demand, e2-micro)
      +$6

    + Standard provisioned storage (pd-standard)
      +$0.40

- google_storage_bucket.data-lake-buckets["org-example-111111-stage-us-east1-data-lake"]
  Monthly cost depends on usage

    - Storage (standard)
      Monthly cost depends on usage
        -$0.02 per GiB

    - Object adds, bucket/object list (class A)
      Monthly cost depends on usage
        -$0.05 per 10k operations

    - Object gets, retrieve bucket/object metadata (class B)
      Monthly cost depends on usage
        -$0.004 per 10k operations

    - Network egress
    
        - Data transfer in same continent
          Monthly cost depends on usage
            -$0.02 per GB
    
        - Data transfer to worldwide excluding Asia, Australia (first 1TB)
          Monthly cost depends on usage
            -$0.12 per GB
    
        - Data transfer to Asia excluding China, but including Hong Kong (first 1TB)
          Monthly cost depends on usage
            -$0.12 per GB
    
        - Data transfer to China excluding Hong Kong (first 1TB)
          Monthly cost depends on usage
            -$0.23 per GB
    
        - Data transfer to Australia (first 1TB)
          Monthly cost depends on usage
            -$0.19 per GB

- google_storage_bucket.data-lake-buckets["org-example-111111-stage-us-east1-data-lake-anon"]
  Monthly cost depends on usage

    - Storage (standard)
      Monthly cost depends on usage
        -$0.02 per GiB

    - Object adds, bucket/object list (class A)
      Monthly cost depends on usage
        -$0.05 per 10k operations

    - Object gets, retrieve bucket/object metadata (class B)
      Monthly cost depends on usage
        -$0.004 per 10k operations

    - Network egress
    
        - Data transfer in same continent
          Monthly cost depends on usage
            -$0.02 per GB
    
        - Data transfer to worldwide excluding Asia, Australia (first 1TB)
          Monthly cost depends on usage
            -$0.12 per GB
    
        - Data transfer to Asia excluding China, but including Hong Kong (first 1TB)
          Monthly cost depends on usage
            -$0.12 per GB
    
        - Data transfer to China excluding Hong Kong (first 1TB)
          Monthly cost depends on usage
            -$0.23 per GB
    
        - Data transfer to Australia (first 1TB)
          Monthly cost depends on usage
            -$0.19 per GB

+ google_storage_bucket.data_lake_buckets["org-example-111111-stage-us-east1-data-lake"]
  Monthly cost depends on usage

    + Storage (standard)
      Monthly cost depends on usage
        +$0.02 per GiB

    + Object adds, bucket/object list (class A)
      Monthly cost depends on usage
        +$0.05 per 10k operations

    + Object gets, retrieve bucket/object metadata (class B)
      Monthly cost depends on usage
        +$0.004 per 10k operations

    + Network egress
    
        + Data transfer in same continent
          Monthly cost depends on usage
            +$0.02 per GB
    
        + Data transfer to worldwide excluding Asia, Australia (first 1TB)
          Monthly cost depends on usage
            +$0.12 per GB
    
        + Data transfer to Asia excluding China, but including Hong Kong (first 1TB)
          Monthly cost depends on usage
            +$0.12 per GB
    
        + Data transfer to China excluding Hong Kong (first 1TB)
          Monthly cost depends on usage
            +$0.23 per GB
    
        + Data transfer to Australia (first 1TB)
          Monthly cost depends on usage
            +$0.19 per GB

+ google_storage_bucket.data_lake_buckets["org-example-111111-stage-us-east1-data-lake-anonymized"]
  Monthly cost depends on usage

    + Storage (standard)
      Monthly cost depends on usage
        +$0.02 per GiB

    + Object adds, bucket/object list (class A)
      Monthly cost depends on usage
        +$0.05 per 10k operations

    + Object gets, retrieve bucket/object metadata (class B)
      Monthly cost depends on usage
        +$0.004 per 10k operations

    + Network egress
    
        + Data transfer in same continent
          Monthly cost depends on usage
            +$0.02 per GB
    
        + Data transfer to worldwide excluding Asia, Australia (first 1TB)
          Monthly cost depends on usage
            +$0.12 per GB
    
        + Data transfer to Asia excluding China, but including Hong Kong (first 1TB)
          Monthly cost depends on usage
            +$0.12 per GB
    
        + Data transfer to China excluding Hong Kong (first 1TB)
          Monthly cost depends on usage
            +$0.23 per GB
    
        + Data transfer to Australia (first 1TB)
          Monthly cost depends on usage
            +$0.19 per GB

Monthly cost change for LoHertel/terraform-demo/stage
Amount:  $0.00 ($7 → $7)
Percent: 0%

──────────────────────────────────
Key: ~ changed, + added, - removed

5 cloud resources were detected:
∙ 3 were estimated, all of which include usage-based costs, see https://infracost.io/usage-file
∙ 2 were free:
  ∙ 1 x google_compute_firewall
  ∙ 1 x google_compute_network

@github-actions

This comment was marked as outdated.

@github-actions

💰 Infracost estimate: monthly cost will not change

Project Cost change New monthly cost
LoHertel/terraform-demo/prod $0 $7
LoHertel/terraform-demo/stage $0 $7
Infracost output
──────────────────────────────────
Project: LoHertel/terraform-demo/stage

- google_compute_instance.etl-host
  -$7

    - Instance usage (Linux/UNIX, on-demand, e2-micro)
      -$6

    - Standard provisioned storage (pd-standard)
      -$0.40

+ google_compute_instance.etl_host
  +$7

    + Instance usage (Linux/UNIX, on-demand, e2-micro)
      +$6

    + Standard provisioned storage (pd-standard)
      +$0.40

- google_storage_bucket.data-lake-buckets["org-example-111111-stage-us-east1-data-lake"]
  Monthly cost depends on usage

    - Storage (standard)
      Monthly cost depends on usage
        -$0.02 per GiB

    - Object adds, bucket/object list (class A)
      Monthly cost depends on usage
        -$0.05 per 10k operations

    - Object gets, retrieve bucket/object metadata (class B)
      Monthly cost depends on usage
        -$0.004 per 10k operations

    - Network egress
    
        - Data transfer in same continent
          Monthly cost depends on usage
            -$0.02 per GB
    
        - Data transfer to worldwide excluding Asia, Australia (first 1TB)
          Monthly cost depends on usage
            -$0.12 per GB
    
        - Data transfer to Asia excluding China, but including Hong Kong (first 1TB)
          Monthly cost depends on usage
            -$0.12 per GB
    
        - Data transfer to China excluding Hong Kong (first 1TB)
          Monthly cost depends on usage
            -$0.23 per GB
    
        - Data transfer to Australia (first 1TB)
          Monthly cost depends on usage
            -$0.19 per GB

- google_storage_bucket.data-lake-buckets["org-example-111111-stage-us-east1-data-lake-anon"]
  Monthly cost depends on usage

    - Storage (standard)
      Monthly cost depends on usage
        -$0.02 per GiB

    - Object adds, bucket/object list (class A)
      Monthly cost depends on usage
        -$0.05 per 10k operations

    - Object gets, retrieve bucket/object metadata (class B)
      Monthly cost depends on usage
        -$0.004 per 10k operations

    - Network egress
    
        - Data transfer in same continent
          Monthly cost depends on usage
            -$0.02 per GB
    
        - Data transfer to worldwide excluding Asia, Australia (first 1TB)
          Monthly cost depends on usage
            -$0.12 per GB
    
        - Data transfer to Asia excluding China, but including Hong Kong (first 1TB)
          Monthly cost depends on usage
            -$0.12 per GB
    
        - Data transfer to China excluding Hong Kong (first 1TB)
          Monthly cost depends on usage
            -$0.23 per GB
    
        - Data transfer to Australia (first 1TB)
          Monthly cost depends on usage
            -$0.19 per GB

+ google_storage_bucket.data_lake_buckets["org-example-111111-stage-us-east1-data-lake"]
  Monthly cost depends on usage

    + Storage (standard)
      Monthly cost depends on usage
        +$0.02 per GiB

    + Object adds, bucket/object list (class A)
      Monthly cost depends on usage
        +$0.05 per 10k operations

    + Object gets, retrieve bucket/object metadata (class B)
      Monthly cost depends on usage
        +$0.004 per 10k operations

    + Network egress
    
        + Data transfer in same continent
          Monthly cost depends on usage
            +$0.02 per GB
    
        + Data transfer to worldwide excluding Asia, Australia (first 1TB)
          Monthly cost depends on usage
            +$0.12 per GB
    
        + Data transfer to Asia excluding China, but including Hong Kong (first 1TB)
          Monthly cost depends on usage
            +$0.12 per GB
    
        + Data transfer to China excluding Hong Kong (first 1TB)
          Monthly cost depends on usage
            +$0.23 per GB
    
        + Data transfer to Australia (first 1TB)
          Monthly cost depends on usage
            +$0.19 per GB

+ google_storage_bucket.data_lake_buckets["org-example-111111-stage-us-east1-data-lake-anonymized"]
  Monthly cost depends on usage

    + Storage (standard)
      Monthly cost depends on usage
        +$0.02 per GiB

    + Object adds, bucket/object list (class A)
      Monthly cost depends on usage
        +$0.05 per 10k operations

    + Object gets, retrieve bucket/object metadata (class B)
      Monthly cost depends on usage
        +$0.004 per 10k operations

    + Network egress
    
        + Data transfer in same continent
          Monthly cost depends on usage
            +$0.02 per GB
    
        + Data transfer to worldwide excluding Asia, Australia (first 1TB)
          Monthly cost depends on usage
            +$0.12 per GB
    
        + Data transfer to Asia excluding China, but including Hong Kong (first 1TB)
          Monthly cost depends on usage
            +$0.12 per GB
    
        + Data transfer to China excluding Hong Kong (first 1TB)
          Monthly cost depends on usage
            +$0.23 per GB
    
        + Data transfer to Australia (first 1TB)
          Monthly cost depends on usage
            +$0.19 per GB

Monthly cost change for LoHertel/terraform-demo/stage
Amount:  $0.00 ($7 → $7)
Percent: 0%

──────────────────────────────────
Key: ~ changed, + added, - removed
The following projects have no cost estimate changes: LoHertel/terraform-demo/prod
Run the following command to see their breakdown: infracost breakdown --path=/path/to/code

──────────────────────────────────
10 cloud resources were detected:
∙ 6 were estimated, all of which include usage-based costs, see https://infracost.io/usage-file
∙ 4 were free:
  ∙ 2 x google_compute_firewall
  ∙ 2 x google_compute_network

@github-actions

🏞 Prod - Terraform Config

🖌 Format and Style: success

🛠 Initialization: success

🔎 Validation: success

📖 Plan: success

Show Plan
Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # google_compute_firewall.ssh-server will be created
  + resource "google_compute_firewall" "ssh-server" {
      + creation_timestamp = (known after apply)
      + destination_ranges = (known after apply)
      + direction          = (known after apply)
      + enable_logging     = (known after apply)
      + id                 = (known after apply)
      + name               = "default-allow-ssh-prod"
      + network            = "vpc-network-prod"
      + priority           = 1000
      + project            = (known after apply)
      + self_link          = (known after apply)
      + source_ranges      = [
          + "0.0.0.0/0",
        ]
      + target_tags        = [
          + "ssh-server",
        ]

      + allow {
          + ports    = [
              + "22",
            ]
          + protocol = "tcp"
        }
    }

  # google_compute_instance.etl-host will be created
  + resource "google_compute_instance" "etl-host" {
      + allow_stopping_for_update = true
      + can_ip_forward            = false
      + cpu_platform              = (known after apply)
      + current_status            = (known after apply)
      + deletion_protection       = false
      + guest_accelerator         = (known after apply)
      + id                        = (known after apply)
      + instance_id               = (known after apply)
      + label_fingerprint         = (known after apply)
      + machine_type              = "e2-micro"
      + metadata                  = {
          + "enable-oslogin" = "true"
        }
      + metadata_fingerprint      = (known after apply)
      + min_cpu_platform          = (known after apply)
      + name                      = "etl-host-prod"
      + project                   = (known after apply)
      + self_link                 = (known after apply)
      + tags                      = [
          + "ssh-server",
        ]
      + tags_fingerprint          = (known after apply)
      + zone                      = "us-east1-d"

      + boot_disk {
          + auto_delete                = true
          + device_name                = (known after apply)
          + disk_encryption_key_sha256 = (known after apply)
          + kms_key_self_link          = (known after apply)
          + mode                       = "READ_WRITE"
          + source                     = (known after apply)

          + initialize_params {
              + image  = "ubuntu-os-cloud/ubuntu-2204-jammy-v20220420"
              + labels = (known after apply)
              + size   = (known after apply)
              + type   = (known after apply)
            }
        }

      + network_interface {
          + ipv6_access_type   = (known after apply)
          + name               = (known after apply)
          + network            = (known after apply)
          + network_ip         = (known after apply)
          + stack_type         = (known after apply)
          + subnetwork         = (known after apply)
          + subnetwork_project = (known after apply)

          + access_config {
              + nat_ip       = (known after apply)
              + network_tier = "STANDARD"
            }
        }
    }

  # google_compute_network.vpc_network will be created
  + resource "google_compute_network" "vpc_network" {
      + auto_create_subnetworks                   = true
      + delete_default_routes_on_create           = false
      + gateway_ipv4                              = (known after apply)
      + id                                        = (known after apply)
      + internal_ipv6_range                       = (known after apply)
      + mtu                                       = (known after apply)
      + name                                      = "vpc-network-prod"
      + network_firewall_policy_enforcement_order = "AFTER_CLASSIC_FIREWALL"
      + project                                   = (known after apply)
      + routing_mode                              = (known after apply)
      + self_link                                 = (known after apply)
    }

  # google_storage_bucket.data-lake-buckets["org-example-111111-prod-us-east1-data-lake"] will be created
  + resource "google_storage_bucket" "data-lake-buckets" {
      + force_destroy               = true
      + id                          = (known after apply)
      + labels                      = (known after apply)
      + location                    = "US-EAST1"
      + name                        = "org-example-111111-prod-us-east1-data-lake"
      + project                     = (known after apply)
      + public_access_prevention    = "enforced"
      + self_link                   = (known after apply)
      + storage_class               = "STANDARD"
      + uniform_bucket_level_access = true
      + url                         = (known after apply)

      + versioning {
          + enabled = false
        }
    }

  # google_storage_bucket.data-lake-buckets["org-example-111111-prod-us-east1-data-lake-anon"] will be created
  + resource "google_storage_bucket" "data-lake-buckets" {
      + force_destroy               = true
      + id                          = (known after apply)
      + labels                      = (known after apply)
      + location                    = "US-EAST1"
      + name                        = "org-example-111111-prod-us-east1-data-lake-anon"
      + project                     = (known after apply)
      + public_access_prevention    = "enforced"
      + self_link                   = (known after apply)
      + storage_class               = "STANDARD"
      + uniform_bucket_level_access = true
      + url                         = (known after apply)

      + versioning {
          + enabled = false
        }
    }

Plan: 5 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + ip = (known after apply)

─────────────────────────────────────────────────────────────────────────────

Note: You didn't use the -out option to save this plan, so Terraform can't
guarantee to take exactly these actions if you run "terraform apply" now.


@github-actions

github-actions bot commented Jul 29, 2023

🏞 Stage - Terraform Config

🖌 Format and Style: success

🛠 Initialization: success

🔎 Validation: success

📖 Plan: success

Show Plan
Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # google_compute_firewall.ssh_server will be created
  + resource "google_compute_firewall" "ssh_server" {
      + creation_timestamp = (known after apply)
      + destination_ranges = (known after apply)
      + direction          = (known after apply)
      + enable_logging     = (known after apply)
      + id                 = (known after apply)
      + name               = "default-allow-ssh-stage"
      + network            = "vpc-network-stage"
      + priority           = 1000
      + project            = (known after apply)
      + self_link          = (known after apply)
      + source_ranges      = [
          + "0.0.0.0/0",
        ]
      + target_tags        = [
          + "ssh-server",
        ]

      + allow {
          + ports    = [
              + "22",
            ]
          + protocol = "tcp"
        }
    }

  # google_compute_instance.etl_host will be created
  + resource "google_compute_instance" "etl_host" {
      + allow_stopping_for_update = true
      + can_ip_forward            = false
      + cpu_platform              = (known after apply)
      + current_status            = (known after apply)
      + deletion_protection       = false
      + guest_accelerator         = (known after apply)
      + id                        = (known after apply)
      + instance_id               = (known after apply)
      + label_fingerprint         = (known after apply)
      + machine_type              = "e2-micro"
      + metadata                  = {
          + "enable-oslogin" = "true"
        }
      + metadata_fingerprint      = (known after apply)
      + min_cpu_platform          = (known after apply)
      + name                      = "etl-host-stage"
      + project                   = (known after apply)
      + self_link                 = (known after apply)
      + tags                      = [
          + "ssh-server",
        ]
      + tags_fingerprint          = (known after apply)
      + zone                      = (known after apply)

      + boot_disk {
          + auto_delete                = true
          + device_name                = (known after apply)
          + disk_encryption_key_sha256 = (known after apply)
          + kms_key_self_link          = (known after apply)
          + mode                       = "READ_WRITE"
          + source                     = (known after apply)

          + initialize_params {
              + image  = "ubuntu-os-cloud/ubuntu-2204-jammy-v20220420"
              + labels = (known after apply)
              + size   = (known after apply)
              + type   = (known after apply)
            }
        }

      + network_interface {
          + ipv6_access_type   = (known after apply)
          + name               = (known after apply)
          + network            = (known after apply)
          + network_ip         = (known after apply)
          + stack_type         = (known after apply)
          + subnetwork         = (known after apply)
          + subnetwork_project = (known after apply)

          + access_config {
              + nat_ip       = (known after apply)
              + network_tier = "STANDARD"
            }
        }
    }

  # google_compute_network.vpc_network will be created
  + resource "google_compute_network" "vpc_network" {
      + auto_create_subnetworks                   = true
      + delete_default_routes_on_create           = false
      + gateway_ipv4                              = (known after apply)
      + id                                        = (known after apply)
      + internal_ipv6_range                       = (known after apply)
      + mtu                                       = (known after apply)
      + name                                      = "vpc-network-stage"
      + network_firewall_policy_enforcement_order = "AFTER_CLASSIC_FIREWALL"
      + project                                   = (known after apply)
      + routing_mode                              = (known after apply)
      + self_link                                 = (known after apply)
    }

  # google_storage_bucket.data_lake_buckets["org-example-111111-stage-us-east1-data-lake"] will be created
  + resource "google_storage_bucket" "data_lake_buckets" {
      + force_destroy               = true
      + id                          = (known after apply)
      + labels                      = (known after apply)
      + location                    = "US-EAST1"
      + name                        = "org-example-111111-stage-us-east1-data-lake"
      + project                     = (known after apply)
      + public_access_prevention    = "enforced"
      + self_link                   = (known after apply)
      + storage_class               = "STANDARD"
      + uniform_bucket_level_access = true
      + url                         = (known after apply)

      + versioning {
          + enabled = false
        }
    }

  # google_storage_bucket.data_lake_buckets["org-example-111111-stage-us-east1-data-lake-anonymized"] will be created
  + resource "google_storage_bucket" "data_lake_buckets" {
      + force_destroy               = true
      + id                          = (known after apply)
      + labels                      = (known after apply)
      + location                    = "US-EAST1"
      + name                        = "org-example-111111-stage-us-east1-data-lake-anonymized"
      + project                     = (known after apply)
      + public_access_prevention    = "enforced"
      + self_link                   = (known after apply)
      + storage_class               = "STANDARD"
      + uniform_bucket_level_access = true
      + url                         = (known after apply)

      + versioning {
          + enabled = false
        }
    }

Plan: 5 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + etl_host_ip = (known after apply)

─────────────────────────────────────────────────────────────────────────────

Note: You didn't use the -out option to save this plan, so Terraform can't
guarantee to take exactly these actions if you run "terraform apply" now.
 

@github-actions

💰 Infracost estimate: monthly cost will not change

Project                          Cost change   New monthly cost
LoHertel/terraform-demo/prod     $0            $7
LoHertel/terraform-demo/stage    $0            $7
Infracost output
──────────────────────────────────
Project: LoHertel/terraform-demo/stage

- google_compute_instance.etl-host
  -$7

    - Instance usage (Linux/UNIX, on-demand, e2-micro)
      -$6

    - Standard provisioned storage (pd-standard)
      -$0.40

+ google_compute_instance.etl_host
  +$7

    + Instance usage (Linux/UNIX, on-demand, e2-micro)
      +$6

    + Standard provisioned storage (pd-standard)
      +$0.40

- google_storage_bucket.data-lake-buckets["org-example-111111-stage-us-east1-data-lake"]
  Monthly cost depends on usage

    - Storage (standard)
      Monthly cost depends on usage
        -$0.02 per GiB

    - Object adds, bucket/object list (class A)
      Monthly cost depends on usage
        -$0.05 per 10k operations

    - Object gets, retrieve bucket/object metadata (class B)
      Monthly cost depends on usage
        -$0.004 per 10k operations

    - Network egress
    
        - Data transfer in same continent
          Monthly cost depends on usage
            -$0.02 per GB
    
        - Data transfer to worldwide excluding Asia, Australia (first 1TB)
          Monthly cost depends on usage
            -$0.12 per GB
    
        - Data transfer to Asia excluding China, but including Hong Kong (first 1TB)
          Monthly cost depends on usage
            -$0.12 per GB
    
        - Data transfer to China excluding Hong Kong (first 1TB)
          Monthly cost depends on usage
            -$0.23 per GB
    
        - Data transfer to Australia (first 1TB)
          Monthly cost depends on usage
            -$0.19 per GB

- google_storage_bucket.data-lake-buckets["org-example-111111-stage-us-east1-data-lake-anon"]
  Monthly cost depends on usage

    - Storage (standard)
      Monthly cost depends on usage
        -$0.02 per GiB

    - Object adds, bucket/object list (class A)
      Monthly cost depends on usage
        -$0.05 per 10k operations

    - Object gets, retrieve bucket/object metadata (class B)
      Monthly cost depends on usage
        -$0.004 per 10k operations

    - Network egress
    
        - Data transfer in same continent
          Monthly cost depends on usage
            -$0.02 per GB
    
        - Data transfer to worldwide excluding Asia, Australia (first 1TB)
          Monthly cost depends on usage
            -$0.12 per GB
    
        - Data transfer to Asia excluding China, but including Hong Kong (first 1TB)
          Monthly cost depends on usage
            -$0.12 per GB
    
        - Data transfer to China excluding Hong Kong (first 1TB)
          Monthly cost depends on usage
            -$0.23 per GB
    
        - Data transfer to Australia (first 1TB)
          Monthly cost depends on usage
            -$0.19 per GB

+ google_storage_bucket.data_lake_buckets["org-example-111111-stage-us-east1-data-lake"]
  Monthly cost depends on usage

    + Storage (standard)
      Monthly cost depends on usage
        +$0.02 per GiB

    + Object adds, bucket/object list (class A)
      Monthly cost depends on usage
        +$0.05 per 10k operations

    + Object gets, retrieve bucket/object metadata (class B)
      Monthly cost depends on usage
        +$0.004 per 10k operations

    + Network egress
    
        + Data transfer in same continent
          Monthly cost depends on usage
            +$0.02 per GB
    
        + Data transfer to worldwide excluding Asia, Australia (first 1TB)
          Monthly cost depends on usage
            +$0.12 per GB
    
        + Data transfer to Asia excluding China, but including Hong Kong (first 1TB)
          Monthly cost depends on usage
            +$0.12 per GB
    
        + Data transfer to China excluding Hong Kong (first 1TB)
          Monthly cost depends on usage
            +$0.23 per GB
    
        + Data transfer to Australia (first 1TB)
          Monthly cost depends on usage
            +$0.19 per GB

+ google_storage_bucket.data_lake_buckets["org-example-111111-stage-us-east1-data-lake-anonymized"]
  Monthly cost depends on usage

    + Storage (standard)
      Monthly cost depends on usage
        +$0.02 per GiB

    + Object adds, bucket/object list (class A)
      Monthly cost depends on usage
        +$0.05 per 10k operations

    + Object gets, retrieve bucket/object metadata (class B)
      Monthly cost depends on usage
        +$0.004 per 10k operations

    + Network egress
    
        + Data transfer in same continent
          Monthly cost depends on usage
            +$0.02 per GB
    
        + Data transfer to worldwide excluding Asia, Australia (first 1TB)
          Monthly cost depends on usage
            +$0.12 per GB
    
        + Data transfer to Asia excluding China, but including Hong Kong (first 1TB)
          Monthly cost depends on usage
            +$0.12 per GB
    
        + Data transfer to China excluding Hong Kong (first 1TB)
          Monthly cost depends on usage
            +$0.23 per GB
    
        + Data transfer to Australia (first 1TB)
          Monthly cost depends on usage
            +$0.19 per GB

Monthly cost change for LoHertel/terraform-demo/stage
Amount:  $0.00 ($7 → $7)
Percent: 0%

──────────────────────────────────
Key: ~ changed, + added, - removed
The following projects have no cost estimate changes: LoHertel/terraform-demo/prod
Run the following command to see their breakdown: infracost breakdown --path=/path/to/code

──────────────────────────────────
10 cloud resources were detected:
∙ 6 were estimated, all of which include usage-based costs, see https://infracost.io/usage-file
∙ 4 were free:
  ∙ 2 x google_compute_firewall
  ∙ 2 x google_compute_network

@github-actions

🏞 Stage - Terraform Config

🖌 Format and Style: success

🛠 Initialization: success

🔎 Validation: success

📖 Plan: success

Show Plan
Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # google_compute_firewall.ssh_server will be created
  + resource "google_compute_firewall" "ssh_server" {
      + creation_timestamp = (known after apply)
      + destination_ranges = (known after apply)
      + direction          = (known after apply)
      + enable_logging     = (known after apply)
      + id                 = (known after apply)
      + name               = "default-allow-ssh-stage"
      + network            = "vpc-network-stage"
      + priority           = 1000
      + project            = (known after apply)
      + self_link          = (known after apply)
      + source_ranges      = [
          + "0.0.0.0/0",
        ]
      + target_tags        = [
          + "ssh-server",
        ]

      + allow {
          + ports    = [
              + "22",
            ]
          + protocol = "tcp"
        }
    }

  # google_compute_instance.etl_host will be created
  + resource "google_compute_instance" "etl_host" {
      + allow_stopping_for_update = true
      + can_ip_forward            = false
      + cpu_platform              = (known after apply)
      + current_status            = (known after apply)
      + deletion_protection       = false
      + guest_accelerator         = (known after apply)
      + id                        = (known after apply)
      + instance_id               = (known after apply)
      + label_fingerprint         = (known after apply)
      + machine_type              = "e2-micro"
      + metadata                  = {
          + "enable-oslogin" = "true"
        }
      + metadata_fingerprint      = (known after apply)
      + min_cpu_platform          = (known after apply)
      + name                      = "etl-host-stage"
      + project                   = (known after apply)
      + self_link                 = (known after apply)
      + tags                      = [
          + "ssh-server",
        ]
      + tags_fingerprint          = (known after apply)
      + zone                      = (known after apply)

      + boot_disk {
          + auto_delete                = true
          + device_name                = (known after apply)
          + disk_encryption_key_sha256 = (known after apply)
          + kms_key_self_link          = (known after apply)
          + mode                       = "READ_WRITE"
          + source                     = (known after apply)

          + initialize_params {
              + image  = "ubuntu-os-cloud/ubuntu-2204-jammy-v20220420"
              + labels = (known after apply)
              + size   = (known after apply)
              + type   = (known after apply)
            }
        }

      + network_interface {
          + ipv6_access_type   = (known after apply)
          + name               = (known after apply)
          + network            = (known after apply)
          + network_ip         = (known after apply)
          + stack_type         = (known after apply)
          + subnetwork         = (known after apply)
          + subnetwork_project = (known after apply)

          + access_config {
              + nat_ip       = (known after apply)
              + network_tier = "STANDARD"
            }
        }
    }

  # google_compute_network.vpc_network will be created
  + resource "google_compute_network" "vpc_network" {
      + auto_create_subnetworks                   = true
      + delete_default_routes_on_create           = false
      + gateway_ipv4                              = (known after apply)
      + id                                        = (known after apply)
      + internal_ipv6_range                       = (known after apply)
      + mtu                                       = (known after apply)
      + name                                      = "vpc-network-stage"
      + network_firewall_policy_enforcement_order = "AFTER_CLASSIC_FIREWALL"
      + project                                   = (known after apply)
      + routing_mode                              = (known after apply)
      + self_link                                 = (known after apply)
    }

  # google_storage_bucket.data_lake_buckets["org-example-111111-stage-us-east1-data-lake"] will be created
  + resource "google_storage_bucket" "data_lake_buckets" {
      + force_destroy               = true
      + id                          = (known after apply)
      + labels                      = (known after apply)
      + location                    = "US-EAST1"
      + name                        = "org-example-111111-stage-us-east1-data-lake"
      + project                     = (known after apply)
      + public_access_prevention    = "enforced"
      + self_link                   = (known after apply)
      + storage_class               = "STANDARD"
      + uniform_bucket_level_access = true
      + url                         = (known after apply)

      + versioning {
          + enabled = false
        }
    }

  # google_storage_bucket.data_lake_buckets["org-example-111111-stage-us-east1-data-lake-anonymized"] will be created
  + resource "google_storage_bucket" "data_lake_buckets" {
      + force_destroy               = true
      + id                          = (known after apply)
      + labels                      = (known after apply)
      + location                    = "US-EAST1"
      + name                        = "org-example-111111-stage-us-east1-data-lake-anonymized"
      + project                     = (known after apply)
      + public_access_prevention    = "enforced"
      + self_link                   = (known after apply)
      + storage_class               = "STANDARD"
      + uniform_bucket_level_access = true
      + url                         = (known after apply)

      + versioning {
          + enabled = false
        }
    }

Plan: 5 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + etl_host_ip = (known after apply)

─────────────────────────────────────────────────────────────────────────────

Note: You didn't use the -out option to save this plan, so Terraform can't
guarantee to take exactly these actions if you run "terraform apply" now.
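The two `data_lake_buckets` instances in the plan above are keyed by full bucket name, which suggests a single resource iterating with `for_each` over a set of names. A hedged reconstruction of what the underlying config likely looks like (the variable name and default values are assumptions, taken from the planned bucket names):

```hcl
# Hypothetical variable; the real config may derive these names from
# project/environment/region inputs instead of listing them verbatim.
variable "data_lake_bucket_names" {
  type = set(string)
  default = [
    "org-example-111111-stage-us-east1-data-lake",
    "org-example-111111-stage-us-east1-data-lake-anonymized",
  ]
}

resource "google_storage_bucket" "data_lake_buckets" {
  for_each                    = var.data_lake_bucket_names
  name                        = each.value
  location                    = "US-EAST1"
  storage_class               = "STANDARD"
  force_destroy               = true
  public_access_prevention    = "enforced"
  uniform_bucket_level_access = true

  versioning {
    enabled = false
  }
}
```

With a `set(string)`, each instance's address key equals the bucket name, matching the `["org-example-…"]` keys shown in the plan.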
 

@github-actions

🏞 Prod - Terraform Config

🖌 Format and Style: success

🛠 Initialization: success

🔎 Validation: success

📖 Plan: success

Show Plan
Acquiring state lock. This may take a few moments...

Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # google_compute_firewall.ssh-server will be created
  + resource "google_compute_firewall" "ssh-server" {
      + creation_timestamp = (known after apply)
      + destination_ranges = (known after apply)
      + direction          = (known after apply)
      + enable_logging     = (known after apply)
      + id                 = (known after apply)
      + name               = "default-allow-ssh-prod"
      + network            = "vpc-network-prod"
      + priority           = 1000
      + project            = (known after apply)
      + self_link          = (known after apply)
      + source_ranges      = [
          + "0.0.0.0/0",
        ]
      + target_tags        = [
          + "ssh-server",
        ]

      + allow {
          + ports    = [
              + "22",
            ]
          + protocol = "tcp"
        }
    }

  # google_compute_instance.etl-host will be created
  + resource "google_compute_instance" "etl-host" {
      + allow_stopping_for_update = true
      + can_ip_forward            = false
      + cpu_platform              = (known after apply)
      + current_status            = (known after apply)
      + deletion_protection       = false
      + guest_accelerator         = (known after apply)
      + id                        = (known after apply)
      + instance_id               = (known after apply)
      + label_fingerprint         = (known after apply)
      + machine_type              = "e2-micro"
      + metadata                  = {
          + "enable-oslogin" = "true"
        }
      + metadata_fingerprint      = (known after apply)
      + min_cpu_platform          = (known after apply)
      + name                      = "etl-host-prod"
      + project                   = (known after apply)
      + self_link                 = (known after apply)
      + tags                      = [
          + "ssh-server",
        ]
      + tags_fingerprint          = (known after apply)
      + zone                      = "us-east1-d"

      + boot_disk {
          + auto_delete                = true
          + device_name                = (known after apply)
          + disk_encryption_key_sha256 = (known after apply)
          + kms_key_self_link          = (known after apply)
          + mode                       = "READ_WRITE"
          + source                     = (known after apply)

          + initialize_params {
              + image  = "ubuntu-os-cloud/ubuntu-2204-jammy-v20220420"
              + labels = (known after apply)
              + size   = (known after apply)
              + type   = (known after apply)
            }
        }

      + network_interface {
          + ipv6_access_type   = (known after apply)
          + name               = (known after apply)
          + network            = (known after apply)
          + network_ip         = (known after apply)
          + stack_type         = (known after apply)
          + subnetwork         = (known after apply)
          + subnetwork_project = (known after apply)

          + access_config {
              + nat_ip       = (known after apply)
              + network_tier = "STANDARD"
            }
        }
    }

  # google_compute_network.vpc_network will be created
  + resource "google_compute_network" "vpc_network" {
      + auto_create_subnetworks                   = true
      + delete_default_routes_on_create           = false
      + gateway_ipv4                              = (known after apply)
      + id                                        = (known after apply)
      + internal_ipv6_range                       = (known after apply)
      + mtu                                       = (known after apply)
      + name                                      = "vpc-network-prod"
      + network_firewall_policy_enforcement_order = "AFTER_CLASSIC_FIREWALL"
      + project                                   = (known after apply)
      + routing_mode                              = (known after apply)
      + self_link                                 = (known after apply)
    }

  # google_storage_bucket.data-lake-buckets["org-example-111111-prod-us-east1-data-lake"] will be created
  + resource "google_storage_bucket" "data-lake-buckets" {
      + force_destroy               = true
      + id                          = (known after apply)
      + labels                      = (known after apply)
      + location                    = "US-EAST1"
      + name                        = "org-example-111111-prod-us-east1-data-lake"
      + project                     = (known after apply)
      + public_access_prevention    = "enforced"
      + self_link                   = (known after apply)
      + storage_class               = "STANDARD"
      + uniform_bucket_level_access = true
      + url                         = (known after apply)

      + versioning {
          + enabled = false
        }
    }

  # google_storage_bucket.data-lake-buckets["org-example-111111-prod-us-east1-data-lake-anon"] will be created
  + resource "google_storage_bucket" "data-lake-buckets" {
      + force_destroy               = true
      + id                          = (known after apply)
      + labels                      = (known after apply)
      + location                    = "US-EAST1"
      + name                        = "org-example-111111-prod-us-east1-data-lake-anon"
      + project                     = (known after apply)
      + public_access_prevention    = "enforced"
      + self_link                   = (known after apply)
      + storage_class               = "STANDARD"
      + uniform_bucket_level_access = true
      + url                         = (known after apply)

      + versioning {
          + enabled = false
        }
    }

Plan: 5 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + ip = (known after apply)

─────────────────────────────────────────────────────────────────────────────

Note: You didn't use the -out option to save this plan, so Terraform can't
guarantee to take exactly these actions if you run "terraform apply" now.
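Both plan comments end with the warning that no `-out` file was saved. In a CI pipeline, saving the plan and then applying exactly that file guarantees the apply performs only the reviewed actions, and fails fast if state drifted between plan and apply. A minimal sketch of the two-step invocation (the `tfplan` filename is an arbitrary choice):

```sh
# Write the binary plan to a file for later, exact application...
terraform plan -out=tfplan
# ...then apply that saved plan; Terraform refuses to apply it if the
# state no longer matches what the plan was computed against.
terraform apply tfplan
```

In a GitHub Actions workflow this typically means uploading `tfplan` as an artifact from the plan job and downloading it in the apply job.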
 

