[ISSUE] Issue with databricks_workspace resource - replaced after global tag change #4399
The real problem is here: you can't change the virtual network for an existing workspace.
Thanks Alex, I will open an issue with them as well. It's strange, because only tags were changed, so I don't understand why it's trying to change the network when the module definition looks like this:

```hcl
#---------------------------------------------------------
# Read Virtual Network
#---------------------------------------------------------
module "data_virtual_network" {
  source             = "../data-virtual-network"
  virtual_network    = var.subscription == "dev" ? "" : ""
  virtual_network_rg = var.subscription == "dev" ? "" : ""
  location           = "westeurope"
}

#---------------------------------------------------------
# Configure Databricks Networking
#---------------------------------------------------------
module "databricks_networking" {
  source                         = "../databricks-networking"
  location                       = "westeurope"
  nsg_resource_group_name        = var.dbw_resource_group_name
  virtual_network                = module.data_virtual_network.virtual_network_name
  virtual_network_resource_group = module.data_virtual_network.virtual_network_rg
  public_subnet_name             = "adb-public-${var.workspace}-${var.environment}"
  private_subnet_name            = "adb-private-${var.workspace}-${var.environment}"
  public_subnet_address_prefix   = var.public_subnet_address_prefix
  private_subnet_address_prefix  = var.private_subnet_address_prefix
  nsg_name                       = "nsg-${var.workspace}-${var.environment}"
  tags                           = var.tags

  depends_on = [module.data_virtual_network]
}

#---------------------------------------------------------
# Create Databricks Workspace
#---------------------------------------------------------
resource "azurerm_databricks_workspace" "dbw" {
  # provider                            = azurerm
  name                                  = "dbw-${var.workspace}-${var.environment}"
  resource_group_name                   = var.dbw_resource_group_name
  sku                                   = "premium"
  location                              = "westeurope"
  managed_resource_group_name           = "rg-${var.workspace}-${var.environment}-managed"
  customer_managed_key_enabled          = true
  public_network_access_enabled         = true
  network_security_group_rules_required = true

  custom_parameters {
    no_public_ip                                          = true
    virtual_network_id                                    = module.data_virtual_network.virtual_network_id
    public_subnet_name                                    = module.databricks_networking.public_subnet
    private_subnet_name                                   = module.databricks_networking.private_subnet
    public_subnet_network_security_group_association_id   = module.databricks_networking.public_association_id
    private_subnet_network_security_group_association_id  = module.databricks_networking.private_association_id
  }

  tags = var.tags

  depends_on = [module.databricks_networking]
}
```
It may happen if the network information is retrieved using data sources, but it's hard to say without seeing the whole code.
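For context, a minimal sketch of the mechanism being described, with hypothetical names rather than the reporter's actual modules: Terraform defers a data source read to apply time when the data source, or a module wrapping it via module-level depends_on (as in the configuration above), depends on something with pending changes. A deferred read makes every attribute "(known after apply)", and an unknown value in a create-time argument such as custom_parameters.virtual_network_id is planned as a replacement.

```hcl
# Hypothetical standalone example of a deferred data source read.
data "azurerm_virtual_network" "shared" {
  name                = "vnet-shared"        # hypothetical name
  resource_group_name = "rg-network-shared"  # hypothetical name

  # If this read cannot happen during plan (e.g. the data source, or a
  # module wrapping it, has depends_on pointing at something with pending
  # changes), all attributes below become "(known after apply)".
}

resource "azurerm_databricks_workspace" "example" {
  name                        = "dbw-example"
  resource_group_name         = "rg-example"
  location                    = "westeurope"
  sku                         = "premium"
  managed_resource_group_name = "rg-example-managed"

  custom_parameters {
    # An unknown value here reads as a change to a create-time-only
    # argument, so Terraform plans destroy-and-recreate.
    virtual_network_id = data.azurerm_virtual_network.shared.id
  }
}
```

That would explain why a tags-only change inside the depended-on modules is enough to make the workspace's network IDs unknown for that plan.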
Is there any way to change how this works in such a setup? I hardcoded the VNET ID just for a test, and now all of a sudden I have a cycle coming from different modules, which doesn't make much sense to me since there are no direct dependencies between them. Honestly, I'm just looking for the best balance between a modular setup and dynamic provisioning of the infra. Any tips for resolving the issue locally? Now only the permissions and the private endpoint are replaced; the others are updated in place.
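One possible local mitigation, sketched here under the assumption that the network arguments should never change after creation (it is not a fix confirmed anywhere in this thread), is a lifecycle block on the workspace:

```hcl
resource "azurerm_databricks_workspace" "dbw" {
  # ... existing arguments as in the configuration above ...

  lifecycle {
    # Turn an accidental replacement into a hard plan error instead of
    # silently destroying the workspace.
    prevent_destroy = true

    # Optionally suppress post-creation diffs on the network block.
    # Caution: this also hides genuine changes to these arguments.
    ignore_changes = [custom_parameters]
  }
}
```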
Configuration
Expected Behavior
All resources are updated in-place; changes apply without issue.
Actual Behavior
Databricks workspace resource is replaced (destroyed and created).
Steps to Reproduce
Terraform and provider versions
Is it a regression?
No idea
Debug Output
Important Factoids
Parts with empty quotes ("") were redacted.
My project basically uses different modules and submodules to provision an 'environment'.
The workspace (created after the whole environment module is applied) is a separate module and consists of submodules.
The issue is that provisioning the infra works fine, but when I want to change the global tags (passed as tags = var.tags), the workspace resource is planned for replacement, even though all environment resources are modified in place.
If I don't modify the tags for the environment, only for the workspace, it works, even for the global RG created in the environment module.
The fact that the Databricks workspace is replaced breaks the passing of the databricks.workspace provider to other resources; this is where the problem comes from. Basically, databricks.workspace is configured from an output that will be changed (to the same value, but still), and it fails.
The plan goes through, but it fails as soon as the apply begins.
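To make that failure mode concrete, here is a sketch of the wiring being described; the module and output names are hypothetical, not taken from the reporter's code. The Databricks provider is configured from an output of the module that creates the workspace, so once the plan marks the workspace for replacement, that output is no longer known when the apply starts and the provider cannot be configured.

```hcl
# Hypothetical wiring: the provider is configured from a module output.
module "workspace" {
  source = "./modules/workspace"
  # ... creates azurerm_databricks_workspace and exposes its URL ...
}

provider "databricks" {
  alias = "workspace"

  # If the workspace is planned for destroy-and-recreate, this value is
  # "(known after apply)" during the run, even though the final URL would
  # end up identical, and configuring the provider fails.
  host = module.workspace.workspace_url
}
```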