
[datadog_logs_archives] Handle encryption field for S3 destinations for Logs Archives #2740

Open · wants to merge 16 commits into master
Changes from 13 commits
22 changes: 21 additions & 1 deletion datadog/resource_datadog_logs_archive.go
@@ -43,7 +43,9 @@ func resourceDatadogLogsArchive() *schema.Resource {
Required: true,
ValidateDiagFunc: validators.ValidateAWSAccountID,
},
"role_name": {Description: "Your AWS role name", Type: schema.TypeString, Required: true},
"role_name": {Description: "Your AWS role name", Type: schema.TypeString, Required: true},
"encryption_type": {Description: "The type of encryption on your archive.", Type: schema.TypeString, Optional: true, Default: "NO_OVERRIDE"},
Member:
Suggested change
"encryption_type": {Description: "The type of encryption on your archive.", Type: schema.TypeString, Optional: true, Default: "NO_OVERRIDE"},
"encryption_type": {Description: "The type of encryption on your archive.", Type: schema.TypeString, Optional: true, Default: datadogV2.LOGSARCHIVEENCRYPTIONS3TYPE_NO_OVERRIDE, ValidateDiagFunc: validators.ValidateEnumValue(datadogV2.NewLogsArchiveEncryptionS3TypeFromValue)},

Member:
Does sending a default value all the time without an encryption key change the behavior in any way?

Member:

I think this should go in its own encryption block, since it is an optional field. That would also avoid the issue of the default value always being sent:

"encryption": {
	Type:     schema.TypeList,
	MaxItems: 1,
	Optional: true,
	Elem: &schema.Resource{
		Schema: map[string]*schema.Schema{
			"encryption_type": {
				Description:      "The type of encryption on your archive.",
				Type:             schema.TypeString,
				Optional:         true,
				Default:          datadogV2.LOGSARCHIVEENCRYPTIONS3TYPE_NO_OVERRIDE,
				ValidateDiagFunc: validators.ValidateEnumValue(datadogV2.NewLogsArchiveEncryptionS3TypeFromValue)},
			"encryption_key": {Description: "The AWS KMS encryption key.", Type: schema.TypeString, Optional: true},
		},
	},
},
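For reference, a Terraform configuration exercising a block shaped like this might look as follows (a sketch; the resource name, query, bucket, account, and key ARN are hypothetical placeholders):

```hcl
resource "datadog_logs_archive" "example_s3_archive" {
  name  = "example s3 archive"
  query = "service:example"

  s3_archive {
    bucket     = "example-bucket"
    path       = "/path/foo"
    account_id = "001234567888"
    role_name  = "example-role-name"

    # Optional block; omit it entirely to keep the API default.
    encryption {
      encryption_type = "SSE_KMS"
      encryption_key  = "arn:aws:kms:us-east-1:001234567888:key/example"
    }
  }
}
```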

Author:

If the encryption object is optional, wouldn't this make encryption_type required? @skarimo

Member:

It should not conflict; encryption_type can still be optional. The main change is that we would have a new optional encryption block.

Author:

I created the optional encryption block. I was originally getting errors when testing a resource that had no encryption information, since providing it is not required (the first step in TestAccDatadogLogsArchiveS3Update_basic).

The Terraform error output displays encryption_type = "NO_OVERRIDE" -> null. If no encryption is provided in the s3 destination, the Datadog API returns an encryption object with its type set to NO_OVERRIDE. If I understand correctly, the mismatch arises because encryption is not in the Terraform plan. To resolve this, I added Computed: true to the resource; let me know if I'm using it correctly in this case.
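For clarity, a sketch of where Computed would sit on the block (field names follow the encryption block proposed earlier in this thread; validators and defaults are elided):

```go
"encryption": {
	Type:     schema.TypeList,
	MaxItems: 1,
	Optional: true,
	// Computed lets the provider accept the encryption object with type
	// NO_OVERRIDE that the API returns when none was configured, instead
	// of reporting `encryption_type = "NO_OVERRIDE" -> null` as a diff.
	Computed: true,
	Elem: &schema.Resource{
		Schema: map[string]*schema.Schema{
			"encryption_type": {Type: schema.TypeString, Optional: true},
			"encryption_key":  {Type: schema.TypeString, Optional: true},
		},
	},
},
```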

"encryption_key": {Description: "The AWS KMS encryption key.", Type: schema.TypeString, Optional: true},
},
},
},
@@ -253,8 +255,15 @@ func buildGCSMap(destination datadogV2.LogsArchiveDestinationGCS) map[string]int
func buildS3Map(destination datadogV2.LogsArchiveDestinationS3) map[string]interface{} {
result := make(map[string]interface{})
integration := destination.GetIntegration()
encryption := destination.GetEncryption()
result["account_id"] = integration.GetAccountId()
result["role_name"] = integration.GetRoleName()
if encryptionType, ok := encryption.GetTypeOk(); ok {
result["encryption_type"] = encryptionType
}
if encryptionKey, ok := encryption.GetKeyOk(); ok {
result["encryption_key"] = encryptionKey
}
result["bucket"] = destination.GetBucket()
result["path"] = destination.GetPath()
return result
@@ -426,6 +435,17 @@ func buildS3Destination(dest interface{}) (*datadogV2.LogsArchiveDestinationS3,
*integration,
datadogV2.LOGSARCHIVEDESTINATIONS3TYPE_S3,
)
encryptionType, ok := d["encryption_type"]
if ok && encryptionType != "" {
encryption := datadogV2.NewLogsArchiveEncryptionS3(
datadogV2.LogsArchiveEncryptionS3Type(encryptionType.(string)),
)
encryptionKey, ok := d["encryption_key"]
if ok && encryptionKey != "" {
encryption.SetKey(encryptionKey.(string))
}
destination.SetEncryption(*encryption)
}
destination.Path = datadog.PtrString(path.(string))
return destination, nil
}
@@ -1 +1 @@
2021-03-12T17:11:37.185942-05:00
2025-01-23T18:41:36.656668-05:00
267 changes: 179 additions & 88 deletions datadog/tests/cassettes/TestAccDatadogLogsArchiveOrder_basic.yaml
@@ -1,90 +1,181 @@
---
version: 2
interactions:
- request:
body: |
{"data":{"attributes":{"archive_ids":[]},"type":"archive_order"}}
form: {}
headers:
Accept:
- application/json
Content-Type:
- application/json
url: https://api.datadoghq.com/api/v2/logs/config/archive-order
method: PUT
id: 0
response:
body: '{"errors":["Missing order for archive FDhaAaBtQ9yXG41RPzHFzQ"]}'
headers:
Content-Type:
- application/json
status: 422 Unprocessable Entity
code: 422
duration: "0ms"
- request:
body: ""
form: {}
headers:
Accept:
- application/json
url: https://api.datadoghq.com/api/v2/logs/config/archive-order
method: GET
id: 1
response:
body: '{"data":{"type":"archive_order","attributes":{"archive_ids":["FDhaAaBtQ9yXG41RPzHFzQ"]}}}'
headers:
Content-Type:
- application/json
status: 200 OK
code: 200
duration: "0ms"
- request:
body: ""
form: {}
headers:
Accept:
- application/json
url: https://api.datadoghq.com/api/v2/logs/config/archive-order
method: GET
id: 2
response:
body: '{"data":{"type":"archive_order","attributes":{"archive_ids":["FDhaAaBtQ9yXG41RPzHFzQ"]}}}'
headers:
Content-Type:
- application/json
status: 200 OK
code: 200
duration: "0ms"
- request:
body: ""
form: {}
headers:
Accept:
- application/json
url: https://api.datadoghq.com/api/v2/logs/config/archive-order
method: GET
id: 3
response:
body: '{"data":{"type":"archive_order","attributes":{"archive_ids":["FDhaAaBtQ9yXG41RPzHFzQ"]}}}'
headers:
Content-Type:
- application/json
status: 200 OK
code: 200
duration: "0ms"
- request:
body: ""
form: {}
headers:
Accept:
- application/json
url: https://api.datadoghq.com/api/v2/logs/config/archive-order
method: GET
id: 4
response:
body: '{"data":{"type":"archive_order","attributes":{"archive_ids":["FDhaAaBtQ9yXG41RPzHFzQ"]}}}'
headers:
Content-Type:
- application/json
status: 200 OK
code: 200
duration: "0ms"
- id: 0
request:
proto: HTTP/1.1
proto_major: 1
proto_minor: 1
content_length: 66
transfer_encoding: []
trailer: {}
host: api.datadoghq.com
remote_addr: ""
request_uri: ""
body: |
{"data":{"attributes":{"archive_ids":[]},"type":"archive_order"}}
form: {}
headers:
Accept:
- application/json
Content-Type:
- application/json
url: https://api.datadoghq.com/api/v2/logs/config/archive-order
method: PUT
response:
proto: HTTP/1.1
proto_major: 1
proto_minor: 1
transfer_encoding:
- chunked
trailer: {}
content_length: -1
uncompressed: true
body: |
{"errors":["Missing order for archive BKmiim5bQXC9RDfsaq9bjQ"]}
headers:
Content-Type:
- application/json
status: 422 Unprocessable Entity
code: 422
duration: 147.416375ms
- id: 1
request:
proto: HTTP/1.1
proto_major: 1
proto_minor: 1
content_length: 0
transfer_encoding: []
trailer: {}
host: api.datadoghq.com
remote_addr: ""
request_uri: ""
body: ""
form: {}
headers:
Accept:
- application/json
url: https://api.datadoghq.com/api/v2/logs/config/archive-order
method: GET
response:
proto: HTTP/1.1
proto_major: 1
proto_minor: 1
transfer_encoding:
- chunked
trailer: {}
content_length: -1
uncompressed: true
body: |
{"data":{"type":"archive_order","attributes":{"archive_ids":["FDhaAaBtQ9yXG41RPzHFzQ","k97y1-gSTWKLXkJwfmNVbQ","s8blIHa5QmmFJZkyYK-MQg","CbNClyqUTbaVHga-YXGkIQ","BKmiim5bQXC9RDfsaq9bjQ","eEXtNWT_TamG48WUDiyGNQ"]}}}
headers:
Content-Type:
- application/json
status: 200 OK
code: 200
duration: 70.391916ms
- id: 2
request:
proto: HTTP/1.1
proto_major: 1
proto_minor: 1
content_length: 0
transfer_encoding: []
trailer: {}
host: api.datadoghq.com
remote_addr: ""
request_uri: ""
body: ""
form: {}
headers:
Accept:
- application/json
url: https://api.datadoghq.com/api/v2/logs/config/archive-order
method: GET
response:
proto: HTTP/1.1
proto_major: 1
proto_minor: 1
transfer_encoding:
- chunked
trailer: {}
content_length: -1
uncompressed: true
body: |
{"data":{"type":"archive_order","attributes":{"archive_ids":["FDhaAaBtQ9yXG41RPzHFzQ","k97y1-gSTWKLXkJwfmNVbQ","s8blIHa5QmmFJZkyYK-MQg","CbNClyqUTbaVHga-YXGkIQ","BKmiim5bQXC9RDfsaq9bjQ","eEXtNWT_TamG48WUDiyGNQ"]}}}
headers:
Content-Type:
- application/json
status: 200 OK
code: 200
duration: 77.23625ms
- id: 3
request:
proto: HTTP/1.1
proto_major: 1
proto_minor: 1
content_length: 0
transfer_encoding: []
trailer: {}
host: api.datadoghq.com
remote_addr: ""
request_uri: ""
body: ""
form: {}
headers:
Accept:
- application/json
url: https://api.datadoghq.com/api/v2/logs/config/archive-order
method: GET
response:
proto: HTTP/1.1
proto_major: 1
proto_minor: 1
transfer_encoding:
- chunked
trailer: {}
content_length: -1
uncompressed: true
body: |
{"data":{"type":"archive_order","attributes":{"archive_ids":["FDhaAaBtQ9yXG41RPzHFzQ","k97y1-gSTWKLXkJwfmNVbQ","s8blIHa5QmmFJZkyYK-MQg","CbNClyqUTbaVHga-YXGkIQ","BKmiim5bQXC9RDfsaq9bjQ","eEXtNWT_TamG48WUDiyGNQ"]}}}
headers:
Content-Type:
- application/json
status: 200 OK
code: 200
duration: 66.465667ms
- id: 4
request:
proto: HTTP/1.1
proto_major: 1
proto_minor: 1
content_length: 0
transfer_encoding: []
trailer: {}
host: api.datadoghq.com
remote_addr: ""
request_uri: ""
body: ""
form: {}
headers:
Accept:
- application/json
url: https://api.datadoghq.com/api/v2/logs/config/archive-order
method: GET
response:
proto: HTTP/1.1
proto_major: 1
proto_minor: 1
transfer_encoding:
- chunked
trailer: {}
content_length: -1
uncompressed: true
body: |
{"data":{"type":"archive_order","attributes":{"archive_ids":["FDhaAaBtQ9yXG41RPzHFzQ","k97y1-gSTWKLXkJwfmNVbQ","s8blIHa5QmmFJZkyYK-MQg","CbNClyqUTbaVHga-YXGkIQ","BKmiim5bQXC9RDfsaq9bjQ","eEXtNWT_TamG48WUDiyGNQ"]}}}
headers:
Content-Type:
- application/json
status: 200 OK
code: 200
duration: 69.449ms
@@ -1 +1 @@
2021-03-12T17:11:39.333811-05:00
2025-01-23T18:43:49.757164-05:00