
[datadog_logs_archives] Handle encryption field for S3 destinations for Logs Archives #2740

Merged: 16 commits, Feb 4, 2025
Changes from 2 commits
24 changes: 24 additions & 0 deletions datadog/resource_datadog_logs_archive.go
@@ -44,6 +44,8 @@
ValidateDiagFunc: validators.ValidateAWSAccountID,
},
"role_name": {Description: "Your AWS role name", Type: schema.TypeString, Required: true},
"encryption_type": {Description: "The type of encryption on your archive.", Type: schema.TypeString, Required: true},
EdwinSri:
If this new field is required, wouldn't users need to add it to their Terraform configuration? Say a user updates the Terraform provider version; wouldn't that break their setup until they add it?

If yes, then what can/should we do about it? 🤔

michelledeng30 (Contributor Author):
I switched the encryption type to be optional instead of required; it should now be implemented in the code.

EdwinSri:
@michelledeng30 Nice, although I wasn't necessarily saying to change it to be optional. I was wondering whether we had a good reason for it to be either required or optional, and whether we were aware of the consequences of choosing either option. These weren't rhetorical questions in any way.

michelledeng30 (Contributor Author):
I think it makes sense for the field to be optional.

If encryption type is required, then every time users interact with their S3 archives via Terraform, they must deliberately set encryption_type in their Terraform configuration. Most users don't need the added ability to encrypt their data at the object level, since they rely on the default bucket encryption set in S3. I don't think it's necessary for the field to be required, and making it required could be misleading or confusing to users who aren't familiar with, or don't need, the ability to set encryption on their PUT requests rather than on the bucket.

If the field is optional, users only set it when they need the extra level of encryption; otherwise it's automatically set to NO_OVERRIDE for most users (see the configuration sketch after this diff hunk).

Also, comparing the Terraform resource with the Logs Archives API reference, the same fields are required or optional in both. The public API supports creating/updating archives without any encryption type, so it makes sense to do the same in Terraform.

> if encryption-type is made to be required, wouldn't it be set to NO_OVERRIDE by default if users don't explicitly set a value for it in their configuration?

I believe that if a required parameter is not set in a Terraform configuration, terraform validate will fail and the user will be forced to define it to pass the check.

EdwinSri:
Thanks a lot for the explanations, I understand better now and I agree it makes sense this way 🙇

"encryption_key": {Description: "The AWS KMS encryption key.", Type: schema.TypeString, Required: false},
},
},
},
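To illustrate the behavior discussed in the thread above, here is a minimal Terraform configuration sketch (not part of this diff). It follows the `s3_archive` example in this PR's docs and assumes the resource's usual top-level `name` and `query` attributes; all values are placeholders.

```
# Sketch: an archive that relies on the bucket's own default encryption.
resource "datadog_logs_archive" "my_s3_archive_default" {
  name  = "my-s3-archive"      # placeholder
  query = "service:myservice"  # placeholder

  s3_archive {
    bucket     = "my-bucket"
    path       = "/path/foo"
    account_id = "001234567888"
    role_name  = "my-role-name"
    # encryption_type not set -> falls back to NO_OVERRIDE
  }
}
```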
@@ -253,10 +255,13 @@
func buildS3Map(destination datadogV2.LogsArchiveDestinationS3) map[string]interface{} {
result := make(map[string]interface{})
integration := destination.GetIntegration()
encryption := destination.GetEncryption()

Check failure on line 258 in datadog/resource_datadog_logs_archive.go (GitHub Actions: linter-checks, test-tofu): destination.GetEncryption undefined (type datadogV2.LogsArchiveDestinationS3 has no field or method GetEncryption)
result["account_id"] = integration.GetAccountId()
result["role_name"] = integration.GetRoleName()
result["bucket"] = destination.GetBucket()
result["path"] = destination.GetPath()
result["encryption_type"] = encryption.GetType();
result["encryption_key"] = encryption.GetKey();
return result
}

@@ -421,10 +426,29 @@
if !ok {
path = ""
}
encryptionType, ok := d["encryptionType"]
michelledeng30 marked this conversation as resolved.
if !ok {
return &datadogV2.LogsArchiveDestinationS3{}, fmt.Errorf("encryption type is not defined")
}
encryptionKey, ok := d["encryptionKey"]
michelledeng30 marked this conversation as resolved.
var LogsArchiveEncryptionS3 encryption

Check failure on line 434 in datadog/resource_datadog_logs_archive.go (GitHub Actions: linter-checks, test-tofu): declared and not used: LogsArchiveEncryptionS3
Check failure on line 434 in datadog/resource_datadog_logs_archive.go (GitHub Actions: linter-checks, test-tofu): undefined: encryption

if !ok {
Review comment:
I assume from reading the code that if the user sets SSE_KMS as the encryption type but doesn't provide an encryption key, we don't want to handle that error here and just pass the values along. Is that correct?

michelledeng30 (Contributor Author):
Yes, that's right. Technically, setting SSE_KMS with no encryption key is also a valid method (it's just not done very commonly).

Review comment:
I see, I didn't know that. Thanks for explaining!

encryption = datadogV2.NewLogsArchiveEncryptionS3(

Check failure on line 437 in datadog/resource_datadog_logs_archive.go (GitHub Actions: linter-checks, test-tofu): undefined: encryption
Check failure on line 437 in datadog/resource_datadog_logs_archive.go (GitHub Actions: linter-checks, test-tofu): undefined: datadogV2.NewLogsArchiveEncryptionS3
encryptionType.(string),
)
} else {
encryption = datadogV2.NewLogsArchiveEncryptionS3(

Check failure on line 441 in datadog/resource_datadog_logs_archive.go (GitHub Actions: linter-checks, test-tofu): undefined: encryption
Check failure on line 441 in datadog/resource_datadog_logs_archive.go (GitHub Actions: linter-checks, test-tofu): undefined: datadogV2.NewLogsArchiveEncryptionS3
encryptionType.(string),
encryptionKey.(string),
)
}

destination := datadogV2.NewLogsArchiveDestinationS3(
bucket.(string),
*integration,
*encryption,

Check failure on line 450 in datadog/resource_datadog_logs_archive.go (GitHub Actions: linter-checks, test-tofu): undefined: encryption
datadogV2.LOGSARCHIVEDESTINATIONS3TYPE_S3,

Check failure on line 451 in datadog/resource_datadog_logs_archive.go (GitHub Actions: linter-checks, test-tofu): too many arguments in call to datadogV2.NewLogsArchiveDestinationS3
)
destination.Path = datadog.PtrString(path.(string))
return destination, nil
3 changes: 3 additions & 0 deletions docs/resources/logs_archive.md
@@ -21,6 +21,7 @@ resource "datadog_logs_archive" "my_s3_archive" {
path = "/path/foo"
account_id = "001234567888"
role_name = "my-role-name"
encryption_type = "SSE_S3"
}
}
```
@@ -87,6 +88,8 @@ Required:
Optional:

- `path` (String) Path where the archive is stored.
- `encryption_type` (String) The type of server-side encryption to use when uploading data to your s3 bucket. `NO_OVERRIDE`, `SSE_S3`, and `SSE_KMS` are the possible types. `NO_OVERRIDE` is used most commonly to leave data unencrypted or leave encryption to the default encryption set on s3 bucket settings.
EdwinSri (Jan 6, 2025):
Is this NO_OVERRIDE value set by default for users who don't explicitly set encryption_type to either of the two other options?
Also, is it really true that it's "used most commonly to leave data unencrypted"? I thought AWS encrypted all S3 buckets by default?

michelledeng30 (Contributor Author):
Yes, the NO_OVERRIDE value is the default type if users don't choose another type.
It looks like SSE_S3 is the default encryption for S3 buckets, so I will revise this statement to make that clear.

EdwinSri (Jan 7, 2025):
@michelledeng30 Linking this message to my other comment I posted today: if NO_OVERRIDE is set by default for the encryption_type field, does it make sense for the field to be optional? I really don't know much about the Terraform provider code, so I'm only speaking hypothetically, but if encryption_type were made required, wouldn't it still be set to NO_OVERRIDE by default if users don't explicitly set a value for it in their configuration?

I'm trying to understand which way of doing it makes the most sense, but if we really don't know, maybe we can ask someone who is more knowledgeable about this.

michelledeng30 (Contributor Author, Jan 13, 2025):
The response is in this comment: #2740 (comment) @EdwinSri

- `encryption_key` (String) The key ARN used to identify your customer managed key for AWS KMS encryption. Only set this value if the `encryption_type` is set to `SSE_KMS`.
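For illustration only (not part of this PR's docs), a configuration sketch combining `encryption_type = "SSE_KMS"` with `encryption_key`; the key ARN and other values are placeholders. As noted in the review discussion above, setting SSE_KMS without an encryption key is also valid, just uncommon.

```
resource "datadog_logs_archive" "my_s3_archive_kms" {
  name  = "my-s3-archive-kms"  # placeholder
  query = "service:myservice"  # placeholder

  s3_archive {
    bucket          = "my-bucket"
    path            = "/path/foo"
    account_id      = "001234567888"
    role_name       = "my-role-name"
    encryption_type = "SSE_KMS"
    # Placeholder key ARN; omit encryption_key entirely to keep SSE_KMS
    # without a customer managed key (valid per the review discussion).
    encryption_key  = "arn:aws:kms:us-east-1:001234567888:key/11111111-2222-3333-4444-555555555555"
  }
}
```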

## Import
