diff --git a/samtranslator/schema/schema.json b/samtranslator/schema/schema.json index d1e313581..81c8e411a 100644 --- a/samtranslator/schema/schema.json +++ b/samtranslator/schema/schema.json @@ -55541,7 +55541,7 @@ "additionalProperties": false, "properties": { "CronExpression": { - "markdownDescription": "The schedule, as a Cron expression. The schedule interval must be between 1 hour and 1 year. For more information, see the [Cron expressions reference](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-cron-expressions.html) in the *Amazon EventBridge User Guide* .", + "markdownDescription": "The schedule, as a Cron expression. The schedule interval must be between 1 hour and 1 year. For more information, see the [Cron and rate expressions](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-scheduled-rule-pattern.html) in the *Amazon EventBridge User Guide* .", "title": "CronExpression", "type": "string" }, @@ -68065,7 +68065,7 @@ }, "ProvisionedThroughput": { "$ref": "#/definitions/AWS::DynamoDB::Table.ProvisionedThroughput", - "markdownDescription": "Represents the provisioned throughput settings for the specified global secondary index.\n\nFor current minimum and maximum provisioned throughput values, see [Service, Account, and Table Quotas](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Limits.html) in the *Amazon DynamoDB Developer Guide* .", + "markdownDescription": "Represents the provisioned throughput settings for the specified global secondary index. You must use either `OnDemandThroughput` or `ProvisionedThroughput` based on your table's capacity mode.\n\nFor current minimum and maximum provisioned throughput values, see [Service, Account, and Table Quotas](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Limits.html) in the *Amazon DynamoDB Developer Guide* .", "title": "ProvisionedThroughput" } }, @@ -69821,7 +69821,7 @@ "items": { "type": "string" }, - "markdownDescription": "The accelerator types that must be on the instance type.\n\n- For instance types with FPGA accelerators, specify `fpga` .\n- For instance types with GPU accelerators, specify `gpu` .\n\nDefault: Any accelerator type", + "markdownDescription": "The accelerator types that must be on the instance type.\n\n- For instance types with FPGA accelerators, specify `fpga` .\n- For instance types with GPU accelerators, specify `gpu` .\n- For instance types with Inference accelerators, specify `inference` .\n\nDefault: Any accelerator type", "title": "AcceleratorTypes", "type": "array" }, @@ -72922,7 +72922,7 @@ "items": { "type": "string" }, - "markdownDescription": "The accelerator types that must be on the instance type.\n\n- For instance types with FPGA accelerators, specify `fpga` .\n- For instance types with GPU accelerators, specify `gpu` .\n\nDefault: Any accelerator type", + "markdownDescription": "The accelerator types that must be on the instance type.\n\n- For instance types with FPGA accelerators, specify `fpga` .\n- For instance types with GPU accelerators, specify `gpu` .\n- For instance types with Inference accelerators, specify `inference` .\n\nDefault: Any accelerator type", "title": "AcceleratorTypes", "type": "array" }, @@ -77442,7 +77442,7 @@ "items": { "type": "string" }, - "markdownDescription": "The accelerator types that must be on the instance type.\n\n- For instance types with FPGA accelerators, specify `fpga` .\n- For instance types with GPU accelerators, specify `gpu` .\n\nDefault: Any accelerator type", + "markdownDescription": 
"The accelerator types that must be on the instance type.\n\n- For instance types with FPGA accelerators, specify `fpga` .\n- For instance types with GPU accelerators, specify `gpu` .\n- For instance types with Inference accelerators, specify `inference` .\n\nDefault: Any accelerator type", "title": "AcceleratorTypes", "type": "array" }, @@ -84228,7 +84228,7 @@ "type": "array" }, "Cpu": { - "markdownDescription": "The number of `cpu` units used by the task. If you use the EC2 launch type, this field is optional. Any value can be used. If you use the Fargate launch type, this field is required. You must use one of the following values. The value that you choose determines your range of valid values for the `memory` parameter.\n\nIf you're using the EC2 launch type or the external launch type, this field is optional. Supported values are between `128` CPU units ( `0.125` vCPUs) and `196608` CPU units ( `192` vCPUs). The CPU units cannot be less than 1 vCPU when you use Windows containers on Fargate.\n\n- 256 (.25 vCPU) - Available `memory` values: 512 (0.5 GB), 1024 (1 GB), 2048 (2 GB)\n- 512 (.5 vCPU) - Available `memory` values: 1024 (1 GB), 2048 (2 GB), 3072 (3 GB), 4096 (4 GB)\n- 1024 (1 vCPU) - Available `memory` values: 2048 (2 GB), 3072 (3 GB), 4096 (4 GB), 5120 (5 GB), 6144 (6 GB), 7168 (7 GB), 8192 (8 GB)\n- 2048 (2 vCPU) - Available `memory` values: 4096 (4 GB) and 16384 (16 GB) in increments of 1024 (1 GB)\n- 4096 (4 vCPU) - Available `memory` values: 8192 (8 GB) and 30720 (30 GB) in increments of 1024 (1 GB)\n- 8192 (8 vCPU) - Available `memory` values: 16 GB and 60 GB in 4 GB increments\n\nThis option requires Linux platform `1.4.0` or later.\n- 16384 (16vCPU) - Available `memory` values: 32GB and 120 GB in 8 GB increments\n\nThis option requires Linux platform `1.4.0` or later.", + "markdownDescription": "The number of `cpu` units used by the task. If you use the EC2 launch type, this field is optional. Any value can be used. If you use the Fargate launch type, this field is required. You must use one of the following values. The value that you choose determines your range of valid values for the `memory` parameter.\n\nIf you're using the EC2 launch type or the external launch type, this field is optional. Supported values are between `128` CPU units ( `0.125` vCPUs) and `196608` CPU units ( `192` vCPUs).\n\nThis field is required for Fargate. 
For information about the valid values, see [Task size](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definition_parameters.html#task_size) in the *Amazon Elastic Container Service Developer Guide* .", "title": "Cpu", "type": "string" }, @@ -95860,7 +95860,7 @@ "type": "string" }, "Type": { - "markdownDescription": "The type of the attribute, selected from a list of values.\n\n> Normalization is only supported for `NAME` , `ADDRESS` , `PHONE` , and `EMAIL_ADDRESS` .\n> \n> If you want to normalize `NAME_FIRST` , `NAME_MIDDLE` , and `NAME_LAST` , you must group them by assigning them to the `NAME` `groupName` .\n> \n> If you want to normalize `ADDRESS_STREET1` , `ADDRESS_STREET2` , `ADDRESS_STREET3` , `ADDRESS_CITY` , `ADDRESS_STATE` , `ADDRESS_COUNTRY` , and `ADDRESS_POSTALCODE` , you must group them by assigning them to the `ADDRESS` `groupName` .\n> \n> If you want to normalize `PHONE_NUMBER` and `PHONE_COUNTRYCODE` , you must group them by assigning them to the `PHONE` `groupName` .", + "markdownDescription": "The type of the attribute, selected from a list of values.\n\nLiveRamp supports: `NAME` | `NAME_FIRST` | `NAME_MIDDLE` | `NAME_LAST` | `ADDRESS` | `ADDRESS_STREET1` | `ADDRESS_STREET2` | `ADDRESS_STREET3` | `ADDRESS_CITY` | `ADDRESS_STATE` | `ADDRESS_COUNTRY` | `ADDRESS_POSTALCODE` | `PHONE` | `PHONE_NUMBER` | `EMAIL_ADDRESS` | `UNIQUE_ID` | `PROVIDER_ID`\n\nTransUnion supports: `NAME` | `NAME_FIRST` | `NAME_LAST` | `ADDRESS` | `ADDRESS_CITY` | `ADDRESS_STATE` | `ADDRESS_COUNTRY` | `ADDRESS_POSTALCODE` | `PHONE_NUMBER` | `EMAIL_ADDRESS` | `UNIQUE_ID` | `IPV4` | `IPV6` | `MAID`\n\nUnified ID 2.0 supports: `PHONE_NUMBER` | `EMAIL_ADDRESS` | `UNIQUE_ID`\n\n> Normalization is only supported for `NAME` , `ADDRESS` , `PHONE` , and `EMAIL_ADDRESS` .\n> \n> If you want to normalize `NAME_FIRST` , `NAME_MIDDLE` , and `NAME_LAST` , you must group them by assigning them to the `NAME` `groupName` .\n> \n> If you want to normalize `ADDRESS_STREET1` , `ADDRESS_STREET2` , `ADDRESS_STREET3` , `ADDRESS_CITY` , `ADDRESS_STATE` , `ADDRESS_COUNTRY` , and `ADDRESS_POSTALCODE` , you must group them by assigning them to the `ADDRESS` `groupName` .\n> \n> If you want to normalize `PHONE_NUMBER` and `PHONE_COUNTRYCODE` , you must group them by assigning them to the `PHONE` `groupName` .", "title": "Type", "type": "string" } @@ -170771,7 +170771,7 @@ "additionalProperties": false, "properties": { "LabelTemplate": { - "markdownDescription": "Specify a friendly human-readable name to use to identify this source account when you are viewing data from it in the monitoring account.\n\nYou can include the following variables in your template:\n\n- `$AccountName` is the name of the account\n- `$AccountEmail` is a globally-unique email address, which includes the email domain, such as `mariagarcia@example.com`\n- `$AccountEmailNoDomain` is an email address without the domain name, such as `mariagarcia`", + "markdownDescription": "Specify a friendly human-readable name to use to identify this source account when you are viewing data from it in the monitoring account.\n\nYou can include the following variables in your template:\n\n- `$AccountName` is the name of the account\n- `$AccountEmail` is a globally-unique email address, which includes the email domain, such as `mariagarcia@example.com`\n- `$AccountEmailNoDomain` is an email address without the domain name, such as `mariagarcia`\n\n> In the and Regions, the only supported option is to use custom labels, and the `$AccountName` , 
`$AccountEmail` , and `$AccountEmailNoDomain` variables all resolve as *account-id* instead of the specified variable.", "title": "LabelTemplate", "type": "string" }, @@ -170784,7 +170784,7 @@ "items": { "type": "string" }, - "markdownDescription": "An array of strings that define which types of data that the source account shares with the monitoring account. Valid values are `AWS::CloudWatch::Metric | AWS::Logs::LogGroup | AWS::XRay::Trace | AWS::ApplicationInsights::Application | AWS::InternetMonitor::Monitor | AWS::ApplicationSignals::Service | AWS::ApplicationSignals::ServiceLevelObjective` .", + "markdownDescription": "An array of strings that define which types of data that the source account shares with the monitoring account. Valid values are `AWS::CloudWatch::Metric | AWS::Logs::LogGroup | AWS::XRay::Trace | AWS::ApplicationInsights::Application | AWS::InternetMonitor::Monitor` .", "title": "ResourceTypes", "type": "array" }, @@ -170837,7 +170837,7 @@ "properties": { "LogGroupConfiguration": { "$ref": "#/definitions/AWS::Oam::Link.LinkFilter", - "markdownDescription": "Use this structure to filter which log groups are to send log events from the source account to the monitoring account.", + "markdownDescription": "Use this structure to filter which log groups are to share log events from this source account to the monitoring account.", "title": "LogGroupConfiguration" }, "MetricConfiguration": { @@ -170852,7 +170852,7 @@ "additionalProperties": false, "properties": { "Filter": { - "markdownDescription": "", + "markdownDescription": "When used in `MetricConfiguration` this field specifies which metric namespaces are to be shared with the monitoring account\n\nWhen used in `LogGroupConfiguration` this field specifies which log groups are to share their log events with the monitoring account. Use the term `LogGroupName` and one or more of the following operands.\n\nUse single quotation marks (') around log group names and metric namespaces.\n\nThe matching of log group names and metric namespaces is case sensitive. Each filter has a limit of five conditional operands. Conditional operands are `AND` and `OR` .\n\n- `=` and `!=`\n- `AND`\n- `OR`\n- `LIKE` and `NOT LIKE` . These can be used only as prefix searches. 
Include a `%` at the end of the string that you want to search for and include.\n- `IN` and `NOT IN` , using parentheses `( )`\n\nExamples:\n\n- `Namespace NOT LIKE 'AWS/%'` includes only namespaces that don't start with `AWS/` , such as custom namespaces.\n- `Namespace IN ('AWS/EC2', 'AWS/ELB', 'AWS/S3')` includes only the metrics in the EC2, Elastic Load Balancing , and Amazon S3 namespaces.\n- `Namespace = 'AWS/EC2' OR Namespace NOT LIKE 'AWS/%'` includes only the EC2 namespace and your custom namespaces.\n- `LogGroupName IN ('This-Log-Group', 'Other-Log-Group')` includes only the log groups with names `This-Log-Group` and `Other-Log-Group` .\n- `LogGroupName NOT IN ('Private-Log-Group', 'Private-Log-Group-2')` includes all log groups except the log groups with names `Private-Log-Group` and `Private-Log-Group-2` .\n- `LogGroupName LIKE 'aws/lambda/%' OR LogGroupName LIKE 'AWSLogs%'` includes all log groups that have names that start with `aws/lambda/` or `AWSLogs` .\n\n> If you are updating a link that uses filters, you can specify `*` as the only value for the `filter` parameter to delete the filter and share all log groups with the monitoring account.", "title": "Filter", "type": "string" } @@ -224478,7 +224478,7 @@ "type": "array" }, "AutoMinorVersionUpgrade": { - "markdownDescription": "Specifies whether minor engine upgrades are applied automatically to the DB cluster during the maintenance window. By default, minor engine upgrades are applied automatically.\n\nValid for Cluster Type: Aurora DB clusters and Multi-AZ DB cluster", + "markdownDescription": "Specifies whether minor engine upgrades are applied automatically to the DB cluster during the maintenance window. By default, minor engine upgrades are applied automatically.\n\nValid for Cluster Type: Aurora DB clusters and Multi-AZ DB cluster.\n\nFor more information about automatic minor version upgrades, see [Automatically upgrading the minor engine version](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_UpgradeDBInstance.Upgrading.html#USER_UpgradeDBInstance.Upgrading.AutoMinorVersionUpgrades) .", "title": "AutoMinorVersionUpgrade", "type": "boolean" }, @@ -264832,7 +264832,7 @@ "additionalProperties": false, "properties": { "StatusCode": { - "markdownDescription": "The HTTP response code.", + "markdownDescription": "The HTTP response code. Only `404` and `500` status codes are supported.", "title": "StatusCode", "type": "number" } @@ -265073,7 +265073,7 @@ "additionalProperties": false, "properties": { "StatusCode": { - "markdownDescription": "The HTTP response code.", + "markdownDescription": "The HTTP response code. Only `404` and `500` status codes are supported.", "title": "StatusCode", "type": "number" } diff --git a/schema_source/cloudformation-docs.json b/schema_source/cloudformation-docs.json index 8763b9e44..982e366f3 100644 --- a/schema_source/cloudformation-docs.json +++ b/schema_source/cloudformation-docs.json @@ -3170,6 +3170,7 @@ "ApiId": "The `Api` ID.", "CodeHandlers": "The event handler functions that run custom business logic to process published events and subscribe requests.", "CodeS3Location": "The Amazon S3 endpoint where the code is located.", + "HandlerConfigs": "", "Name": "The name of the channel namespace. This name must be unique within the `Api` .", "PublishAuthModes": "The authorization mode to use for publishing messages on the channel namespace. 
This configuration overrides the default `Api` authorization configuration.", "SubscribeAuthModes": "The authorization mode to use for subscribing to messages on the channel namespace. This configuration overrides the default `Api` authorization configuration.", @@ -3178,6 +3179,21 @@ "AWS::AppSync::ChannelNamespace AuthMode": { "AuthType": "The authorization type." }, + "AWS::AppSync::ChannelNamespace HandlerConfig": { + "Behavior": "", + "Integration": "" + }, + "AWS::AppSync::ChannelNamespace HandlerConfigs": { + "OnPublish": "", + "OnSubscribe": "" + }, + "AWS::AppSync::ChannelNamespace Integration": { + "DataSourceName": "", + "LambdaConfig": "" + }, + "AWS::AppSync::ChannelNamespace LambdaConfig": { + "InvokeType": "" + }, "AWS::AppSync::ChannelNamespace Tag": { "Key": "Describes the key of the tag.", "Value": "Describes the value of the tag." @@ -7554,6 +7570,17 @@ "AWS::CloudFront::CloudFrontOriginAccessIdentity CloudFrontOriginAccessIdentityConfig": { "Comment": "A comment to describe the origin access identity. The comment cannot be longer than 128 characters." }, + "AWS::CloudFront::ConnectionGroup": { + "AnycastIpListId": "The ID of the Anycast static IP list.", + "Enabled": "", + "Ipv6Enabled": "", + "Name": "", + "Tags": "A complex type that contains zero or more `Tag` elements." + }, + "AWS::CloudFront::ConnectionGroup Tag": { + "Key": "A string that contains `Tag` key.\n\nThe string length should be between 1 and 128 characters. Valid characters include `a-z` , `A-Z` , `0-9` , space, and the special characters `_ - . : / = + @` .", + "Value": "A string that contains an optional `Tag` value.\n\nThe string length should be between 0 and 256 characters. Valid characters include `a-z` , `A-Z` , `0-9` , space, and the special characters `_ - . : / = + @` ." + }, "AWS::CloudFront::ContinuousDeploymentPolicy": { "ContinuousDeploymentPolicyConfig": "Contains the configuration for a continuous deployment policy." }, @@ -7657,12 +7684,16 @@ "TrustedSigners": "> We recommend using `TrustedKeyGroups` instead of `TrustedSigners` . \n\nA list of AWS account IDs whose public keys CloudFront can use to validate signed URLs or signed cookies.\n\nWhen a cache behavior contains trusted signers, CloudFront requires signed URLs or signed cookies for all requests that match the cache behavior. The URLs or cookies must be signed with the private key of a CloudFront key pair in a trusted signer's AWS account . The signed URL or cookie contains information about which public key CloudFront should use to verify the signature. For more information, see [Serving private content](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/PrivateContent.html) in the *Amazon CloudFront Developer Guide* .", "ViewerProtocolPolicy": "The protocol that viewers can use to access the files in the origin specified by `TargetOriginId` when a request matches the path pattern in `PathPattern` . You can specify the following options:\n\n- `allow-all` : Viewers can use HTTP or HTTPS.\n- `redirect-to-https` : If a viewer submits an HTTP request, CloudFront returns an HTTP status code of 301 (Moved Permanently) to the viewer along with the HTTPS URL. 
The viewer then resubmits the request using the new URL.\n- `https-only` : If a viewer sends an HTTP request, CloudFront returns an HTTP status code of 403 (Forbidden).\n\nFor more information about requiring the HTTPS protocol, see [Requiring HTTPS Between Viewers and CloudFront](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/using-https-viewers-to-cloudfront.html) in the *Amazon CloudFront Developer Guide* .\n\n> The only way to guarantee that viewers retrieve an object that was fetched from the origin using HTTPS is never to use any other protocol to fetch the object. If you have recently changed from HTTP to HTTPS, we recommend that you clear your objects' cache because cached objects are protocol agnostic. That means that an edge location will return an object from the cache regardless of whether the current request protocol matches the protocol used previously. For more information, see [Managing Cache Expiration](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Expiration.html) in the *Amazon CloudFront Developer Guide* ." }, + "AWS::CloudFront::Distribution Definition": { + "StringSchema": "" + }, "AWS::CloudFront::Distribution DistributionConfig": { "Aliases": "A complex type that contains information about CNAMEs (alternate domain names), if any, for this distribution.", "AnycastIpListId": "ID of the Anycast static IP list that is associated with the distribution.", "CNAMEs": "An alias for the CloudFront distribution's domain name.\n\n> This property is legacy. We recommend that you use [Aliases](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-cloudfront-distribution-distributionconfig.html#cfn-cloudfront-distribution-distributionconfig-aliases) instead.", "CacheBehaviors": "A complex type that contains zero or more `CacheBehavior` elements.", "Comment": "A comment to describe the distribution. The comment cannot be longer than 128 characters.", + "ConnectionMode": "", "ContinuousDeploymentPolicyId": "The identifier of a continuous deployment policy. For more information, see `CreateContinuousDeploymentPolicy` .", "CustomErrorResponses": "A complex type that controls the following:\n\n- Whether CloudFront replaces HTTP status codes in the 4xx and 5xx range with custom error messages before returning the response to the viewer.\n- How long CloudFront caches HTTP status codes in the 4xx and 5xx range.\n\nFor more information about custom error pages, see [Customizing Error Responses](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/custom-error-pages.html) in the *Amazon CloudFront Developer Guide* .", "CustomOrigin": "The user-defined HTTP server that serves as the origin for content that CloudFront distributes.\n\n> This property is legacy. We recommend that you use [Origin](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-cloudfront-distribution-origin.html) instead.", @@ -7678,6 +7709,7 @@ "Restrictions": "A complex type that identifies ways in which you want to restrict distribution of your content.", "S3Origin": "The origin as an Amazon S3 bucket.\n\n> This property is legacy. We recommend that you use [Origin](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-cloudfront-distribution-origin.html) instead.", "Staging": "A Boolean that indicates whether this is a staging distribution. When this value is `true` , this is a staging distribution. 
When this value is `false` , this is not a staging distribution.", + "TenantConfig": "", "ViewerCertificate": "A complex type that determines the distribution's SSL/TLS configuration for communicating with viewers.", "WebACLId": "A unique identifier that specifies the AWS WAF web ACL, if any, to associate with this distribution. To specify a web ACL created using the latest version of AWS WAF , use the ACL ARN, for example `arn:aws:wafv2:us-east-1:123456789012:global/webacl/ExampleWebACL/a1b2c3d4-5678-90ab-cdef-EXAMPLE11111` . To specify a web ACL created using AWS WAF Classic, use the ACL ID, for example `a1b2c3d4-5678-90ab-cdef-EXAMPLE11111` .\n\nAWS WAF is a web application firewall that lets you monitor the HTTP and HTTPS requests that are forwarded to CloudFront, and lets you control access to your content. Based on conditions that you specify, such as the IP addresses that requests originate from or the values of query strings, CloudFront responds to requests either with the requested content or with an HTTP 403 status code (Forbidden). You can also configure CloudFront to return a custom error page when a request is blocked. For more information about AWS WAF , see the [AWS WAF Developer Guide](https://docs.aws.amazon.com/waf/latest/developerguide/what-is-aws-waf.html) ." }, @@ -7760,6 +7792,10 @@ "Enabled": "A flag that specifies whether Origin Shield is enabled.\n\nWhen it's enabled, CloudFront routes all requests through Origin Shield, which can help protect your origin. When it's disabled, CloudFront might send requests directly to your origin from multiple edge locations or regional edge caches.", "OriginShieldRegion": "The AWS Region for Origin Shield.\n\nSpecify the AWS Region that has the lowest latency to your origin. To specify a region, use the region code, not the region name. For example, specify the US East (Ohio) region as `us-east-2` .\n\nWhen you enable CloudFront Origin Shield, you must specify the AWS Region for Origin Shield. For the list of AWS Regions that you can specify, and for help choosing the best Region for your origin, see [Choosing the AWS Region for Origin Shield](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/origin-shield.html#choose-origin-shield-region) in the *Amazon CloudFront Developer Guide* ." }, + "AWS::CloudFront::Distribution ParameterDefinition": { + "Definition": "", + "Name": "" + }, "AWS::CloudFront::Distribution Restrictions": { "GeoRestriction": "A complex type that controls the countries in which your content is distributed. CloudFront determines the location of your users using `MaxMind` GeoIP databases. To disable geo restriction, remove the [Restrictions](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-cloudfront-distribution-distributionconfig.html#cfn-cloudfront-distribution-distributionconfig-restrictions) property from your stack template." }, @@ -7770,10 +7806,18 @@ "Items": "The items (status codes) for an origin group.", "Quantity": "The number of status codes." }, + "AWS::CloudFront::Distribution StringSchema": { + "Comment": "", + "DefaultValue": "", + "Required": "" + }, "AWS::CloudFront::Distribution Tag": { "Key": "A string that contains `Tag` key.\n\nThe string length should be between 1 and 128 characters. Valid characters include `a-z` , `A-Z` , `0-9` , space, and the special characters `_ - . : / = + @` .", "Value": "A string that contains an optional `Tag` value.\n\nThe string length should be between 0 and 256 characters. 
Valid characters include `a-z` , `A-Z` , `0-9` , space, and the special characters `_ - . : / = + @` ." }, + "AWS::CloudFront::Distribution TenantConfig": { + "ParameterDefinitions": "" + }, "AWS::CloudFront::Distribution ViewerCertificate": { "AcmCertificateArn": "> In CloudFormation, this field name is `AcmCertificateArn` . Note the different capitalization. \n\nIf the distribution uses `Aliases` (alternate domain names or CNAMEs) and the SSL/TLS certificate is stored in [AWS Certificate Manager (ACM)](https://docs.aws.amazon.com/acm/latest/userguide/acm-overview.html) , provide the Amazon Resource Name (ARN) of the ACM certificate. CloudFront only supports ACM certificates in the US East (N. Virginia) Region ( `us-east-1` ).\n\nIf you specify an ACM certificate ARN, you must also specify values for `MinimumProtocolVersion` and `SSLSupportMethod` . (In CloudFormation, the field name is `SslSupportMethod` . Note the different capitalization.)", "CloudFrontDefaultCertificate": "If the distribution uses the CloudFront domain name such as `d111111abcdef8.cloudfront.net` , set this field to `true` .\n\nIf the distribution uses `Aliases` (alternate domain names or CNAMEs), omit this field and specify values for the following fields:\n\n- `AcmCertificateArn` or `IamCertificateId` (specify a value for one, not both)\n- `MinimumProtocolVersion`\n- `SslSupportMethod`", @@ -7786,6 +7830,51 @@ "OriginReadTimeout": "Specifies how long, in seconds, CloudFront waits for a response from the origin. This is also known as the *origin response timeout* . The minimum timeout is 1 second, the maximum is 60 seconds, and the default (if you don't specify otherwise) is 30 seconds.\n\nFor more information, see [Response timeout (custom origins only)](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/distribution-web-values-specify.html#DownloadDistValuesOriginResponseTimeout) in the *Amazon CloudFront Developer Guide* .", "VpcOriginId": "The VPC origin ID." }, + "AWS::CloudFront::DistributionTenant": { + "ConnectionGroupId": "", + "Customizations": "", + "DistributionId": "The distribution's identifier. For example: `E1U5RQF7T870K0` .", + "Domains": "", + "Enabled": "", + "ManagedCertificateRequest": "", + "Name": "", + "Parameters": "", + "Tags": "A complex type that contains zero or more `Tag` elements." + }, + "AWS::CloudFront::DistributionTenant Certificate": { + "Arn": "" + }, + "AWS::CloudFront::DistributionTenant Customizations": { + "Certificate": "", + "GeoRestrictions": "", + "WebAcl": "" + }, + "AWS::CloudFront::DistributionTenant DomainResult": { + "Domain": "", + "Reason": "", + "Status": "" + }, + "AWS::CloudFront::DistributionTenant GeoRestrictionCustomization": { + "Locations": "", + "RestrictionType": "" + }, + "AWS::CloudFront::DistributionTenant ManagedCertificateRequest": { + "CertificateTransparencyLoggingPreference": "", + "PrimaryDomainName": "", + "ValidationTokenHost": "" + }, + "AWS::CloudFront::DistributionTenant Parameter": { + "Name": "", + "Value": "" + }, + "AWS::CloudFront::DistributionTenant Tag": { + "Key": "A string that contains `Tag` key.\n\nThe string length should be between 1 and 128 characters. Valid characters include `a-z` , `A-Z` , `0-9` , space, and the special characters `_ - . : / = + @` .", + "Value": "A string that contains an optional `Tag` value.\n\nThe string length should be between 0 and 256 characters. Valid characters include `a-z` , `A-Z` , `0-9` , space, and the special characters `_ - . : / = + @` ." 
+ }, + "AWS::CloudFront::DistributionTenant WebAclCustomization": { + "Action": "", + "Arn": "" + }, "AWS::CloudFront::Function": { "AutoPublish": "A flag that determines whether to automatically publish the function to the `LIVE` stage when it\u2019s created. To automatically publish to the `LIVE` stage, set this property to `true` .", "FunctionCode": "The function code. For more information about writing a CloudFront function, see [Writing function code for CloudFront Functions](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/writing-function-code.html) in the *Amazon CloudFront Developer Guide* .", @@ -8357,7 +8446,7 @@ }, "AWS::CodeBuild::Fleet": { "BaseCapacity": "The initial number of machines allocated to the compute \ufb02eet, which de\ufb01nes the number of builds that can run in parallel.", - "ComputeConfiguration": "The compute configuration of the compute fleet. This is only required if `computeType` is set to `ATTRIBUTE_BASED_COMPUTE` .", + "ComputeConfiguration": "The compute configuration of the compute fleet. This is only required if `computeType` is set to `ATTRIBUTE_BASED_COMPUTE` or `CUSTOM_INSTANCE_TYPE` .", "ComputeType": "Information about the compute resources the compute fleet uses. Available values include:\n\n- `ATTRIBUTE_BASED_COMPUTE` : Specify the amount of vCPUs, memory, disk space, and the type of machine.\n\n> If you use `ATTRIBUTE_BASED_COMPUTE` , you must define your attributes by using `computeConfiguration` . AWS CodeBuild will select the cheapest instance that satisfies your specified attributes. For more information, see [Reserved capacity environment types](https://docs.aws.amazon.com/codebuild/latest/userguide/build-env-ref-compute-types.html#environment-reserved-capacity.types) in the *AWS CodeBuild User Guide* .\n- `BUILD_GENERAL1_SMALL` : Use up to 4 GiB memory and 2 vCPUs for builds.\n- `BUILD_GENERAL1_MEDIUM` : Use up to 8 GiB memory and 4 vCPUs for builds.\n- `BUILD_GENERAL1_LARGE` : Use up to 16 GiB memory and 8 vCPUs for builds, depending on your environment type.\n- `BUILD_GENERAL1_XLARGE` : Use up to 72 GiB memory and 36 vCPUs for builds, depending on your environment type.\n- `BUILD_GENERAL1_2XLARGE` : Use up to 144 GiB memory, 72 vCPUs, and 824 GB of SSD storage for builds. This compute type supports Docker images up to 100 GB uncompressed.\n- `BUILD_LAMBDA_1GB` : Use up to 1 GiB memory for builds. Only available for environment type `LINUX_LAMBDA_CONTAINER` and `ARM_LAMBDA_CONTAINER` .\n- `BUILD_LAMBDA_2GB` : Use up to 2 GiB memory for builds. Only available for environment type `LINUX_LAMBDA_CONTAINER` and `ARM_LAMBDA_CONTAINER` .\n- `BUILD_LAMBDA_4GB` : Use up to 4 GiB memory for builds. Only available for environment type `LINUX_LAMBDA_CONTAINER` and `ARM_LAMBDA_CONTAINER` .\n- `BUILD_LAMBDA_8GB` : Use up to 8 GiB memory for builds. Only available for environment type `LINUX_LAMBDA_CONTAINER` and `ARM_LAMBDA_CONTAINER` .\n- `BUILD_LAMBDA_10GB` : Use up to 10 GiB memory for builds. 
Only available for environment type `LINUX_LAMBDA_CONTAINER` and `ARM_LAMBDA_CONTAINER` .\n\nIf you use `BUILD_GENERAL1_SMALL` :\n\n- For environment type `LINUX_CONTAINER` , you can use up to 4 GiB memory and 2 vCPUs for builds.\n- For environment type `LINUX_GPU_CONTAINER` , you can use up to 16 GiB memory, 4 vCPUs, and 1 NVIDIA A10G Tensor Core GPU for builds.\n- For environment type `ARM_CONTAINER` , you can use up to 4 GiB memory and 2 vCPUs on ARM-based processors for builds.\n\nIf you use `BUILD_GENERAL1_LARGE` :\n\n- For environment type `LINUX_CONTAINER` , you can use up to 16 GiB memory and 8 vCPUs for builds.\n- For environment type `LINUX_GPU_CONTAINER` , you can use up to 255 GiB memory, 32 vCPUs, and 4 NVIDIA Tesla V100 GPUs for builds.\n- For environment type `ARM_CONTAINER` , you can use up to 16 GiB memory and 8 vCPUs on ARM-based processors for builds.\n\nFor more information, see [On-demand environment types](https://docs.aws.amazon.com/codebuild/latest/userguide/build-env-ref-compute-types.html#environment.types) in the *AWS CodeBuild User Guide.*", "EnvironmentType": "The environment type of the compute fleet.\n\n- The environment type `ARM_CONTAINER` is available only in regions US East (N. Virginia), US East (Ohio), US West (Oregon), EU (Ireland), Asia Pacific (Mumbai), Asia Pacific (Tokyo), Asia Pacific (Singapore), Asia Pacific (Sydney), EU (Frankfurt), and South America (S\u00e3o Paulo).\n- The environment type `ARM_EC2` is available only in regions US East (N. Virginia), US East (Ohio), US West (Oregon), EU (Ireland), EU (Frankfurt), Asia Pacific (Tokyo), Asia Pacific (Singapore), Asia Pacific (Sydney), South America (S\u00e3o Paulo), and Asia Pacific (Mumbai).\n- The environment type `LINUX_CONTAINER` is available only in regions US East (N. Virginia), US East (Ohio), US West (Oregon), EU (Ireland), EU (Frankfurt), Asia Pacific (Tokyo), Asia Pacific (Singapore), Asia Pacific (Sydney), South America (S\u00e3o Paulo), and Asia Pacific (Mumbai).\n- The environment type `LINUX_EC2` is available only in regions US East (N. Virginia), US East (Ohio), US West (Oregon), EU (Ireland), EU (Frankfurt), Asia Pacific (Tokyo), Asia Pacific (Singapore), Asia Pacific (Sydney), South America (S\u00e3o Paulo), and Asia Pacific (Mumbai).\n- The environment type `LINUX_GPU_CONTAINER` is available only in regions US East (N. Virginia), US East (Ohio), US West (Oregon), EU (Ireland), EU (Frankfurt), Asia Pacific (Tokyo), and Asia Pacific (Sydney).\n- The environment type `MAC_ARM` is available only in regions US East (Ohio), US East (N. Virginia), US West (Oregon), Europe (Frankfurt), and Asia Pacific (Sydney).\n- The environment type `WINDOWS_EC2` is available only in regions US East (N. Virginia), US East (Ohio), US West (Oregon), EU (Ireland), EU (Frankfurt), Asia Pacific (Tokyo), Asia Pacific (Singapore), Asia Pacific (Sydney), South America (S\u00e3o Paulo), and Asia Pacific (Mumbai).\n- The environment type `WINDOWS_SERVER_2019_CONTAINER` is available only in regions US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Sydney), Asia Pacific (Tokyo), Asia Pacific (Mumbai) and EU (Ireland).\n- The environment type `WINDOWS_SERVER_2022_CONTAINER` is available only in regions US East (N. 
Virginia), US East (Ohio), US West (Oregon), EU (Ireland), EU (Frankfurt), Asia Pacific (Sydney), Asia Pacific (Singapore), Asia Pacific (Tokyo), South America (S\u00e3o Paulo) and Asia Pacific (Mumbai).\n\nFor more information, see [Build environment compute types](https://docs.aws.amazon.com//codebuild/latest/userguide/build-env-ref-compute-types.html) in the *AWS CodeBuild user guide* .", "FleetProxyConfiguration": "Information about the proxy configurations that apply network access control to your reserved capacity instances.", @@ -8483,6 +8572,7 @@ "TimeoutInMins": "Specifies the maximum amount of time, in minutes, that the batch build must be completed in." }, "AWS::CodeBuild::Project ProjectCache": { + "CacheNamespace": "Defines the scope of the cache. You can use this namespace to share a cache across multiple projects. For more information, see [Cache sharing between projects](https://docs.aws.amazon.com/codebuild/latest/userguide/caching-s3.html#caching-s3-sharing) in the *AWS CodeBuild User Guide* .", "Location": "Information about the cache location:\n\n- `NO_CACHE` or `LOCAL` : This value is ignored.\n- `S3` : This is the S3 bucket name/prefix.", "Modes": "An array of strings that specify the local cache modes. You can use one or more local cache modes at the same time. This is only used for `LOCAL` cache types.\n\nPossible values are:\n\n- **LOCAL_SOURCE_CACHE** - Caches Git metadata for primary and secondary sources. After the cache is created, subsequent builds pull only the change between commits. This mode is a good choice for projects with a clean working directory and a source that is a large Git repository. If you choose this option and your project does not use a Git repository (GitHub, GitHub Enterprise, or Bitbucket), the option is ignored.\n- **LOCAL_DOCKER_LAYER_CACHE** - Caches existing Docker layers. This mode is a good choice for projects that build or pull large Docker images. It can prevent the performance issues caused by pulling large Docker images down from the network.\n\n> - You can use a Docker layer cache in the Linux environment only.\n> - The `privileged` flag must be set so that your project has the required Docker permissions.\n> - You should consider the security implications before you use a Docker layer cache.\n- **LOCAL_CUSTOM_CACHE** - Caches directories you specify in the buildspec file. This mode is a good choice if your build scenario is not suited to one of the other three local cache modes. If you use a custom cache:\n\n- Only directories can be specified for caching. You cannot specify individual files.\n- Symlinks are used to reference cached directories.\n- Cached directories are linked to your build before it downloads its project sources. Cached items are overridden if a source item has the same name. Directories are specified using cache paths in the buildspec file.", "Type": "The type of cache used by the build project. Valid values include:\n\n- `NO_CACHE` : The build project does not use any cache.\n- `S3` : The build project reads and writes from and to S3.\n- `LOCAL` : The build project stores a cache locally on a build host that is only available to that build host." @@ -8517,7 +8607,9 @@ "Status": "The current status of the S3 build logs. Valid values are:\n\n- `ENABLED` : S3 build logs are enabled for this build project.\n- `DISABLED` : S3 build logs are not enabled for this build project." 
}, "AWS::CodeBuild::Project ScopeConfiguration": { - "Name": "The name of either the enterprise or organization that will send webhook events to CodeBuild , depending on if the webhook is a global or organization webhook respectively." + "Domain": "The domain of the GitHub Enterprise organization or the GitLab Self Managed group. Note that this parameter is only required if your project's source type is GITHUB_ENTERPRISE or GITLAB_SELF_MANAGED.", + "Name": "The name of either the enterprise or organization that will send webhook events to CodeBuild , depending on if the webhook is a global or organization webhook respectively.", + "Scope": "The type of scope for a GitHub or GitLab webhook. The scope default is GITHUB_ORGANIZATION." }, "AWS::CodeBuild::Project Source": { "Auth": "Information about the authorization settings for AWS CodeBuild to access the source code to be built.", @@ -9159,7 +9251,7 @@ "ClientId": "The app client that's assigned to the branding style that you want more information about.", "ReturnMergedResources": "When `true` , returns values for branding options that are unchanged from Amazon Cognito defaults. When `false` or when you omit this parameter, returns only values that you customized in your branding style.", "Settings": "A JSON file, encoded as a `Document` type, with the the settings that you want to apply to your style.", - "UseCognitoProvidedValues": "When true, applies the default branding style options. This option reverts to default style options that are managed by Amazon Cognito. You can modify them later in the branding designer.\n\nWhen you specify `true` for this option, you must also omit values for `Settings` and `Assets` in the request.", + "UseCognitoProvidedValues": "When true, applies the default branding style options. This option reverts to default style options that are managed by Amazon Cognito. You can modify them later in the branding editor.\n\nWhen you specify `true` for this option, you must also omit values for `Settings` and `Assets` in the request.", "UserPoolId": "The user pool where the branding style is assigned." }, "AWS::Cognito::ManagedLoginBranding AssetType": { @@ -9354,7 +9446,7 @@ "AWS::Cognito::UserPoolDomain": { "CustomDomainConfig": "The configuration for a custom domain that hosts the sign-up and sign-in pages for your application. Use this object to specify an SSL certificate that is managed by ACM.\n\nWhen you create a custom domain, the passkey RP ID defaults to the custom domain. If you had a prefix domain active, this will cause passkey integration for your prefix domain to stop working due to a mismatch in RP ID. To keep the prefix domain passkey integration working, you can explicitly set RP ID to the prefix domain.", "Domain": "The name of the domain that you want to update. For custom domains, this is the fully-qualified domain name, for example `auth.example.com` . For prefix domains, this is the prefix alone, such as `myprefix` .", - "ManagedLoginVersion": "A version number that indicates the state of managed login for your domain. Version `1` is hosted UI (classic). Version `2` is the newer managed login with the branding designer. For more information, see [Managed login](https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-user-pools-managed-login.html) .", + "ManagedLoginVersion": "A version number that indicates the state of managed login for your domain. Version `1` is hosted UI (classic). Version `2` is the newer managed login with the branding editor. 
For more information, see [Managed login](https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-user-pools-managed-login.html) .", "UserPoolId": "The ID of the user pool that is associated with the domain you're updating." }, "AWS::Cognito::UserPoolDomain CustomDomainConfigType": { @@ -10975,7 +11067,7 @@ "RetainRule": "Information about the retention period for the snapshot archiving rule." }, "AWS::DLM::LifecyclePolicy CreateRule": { - "CronExpression": "The schedule, as a Cron expression. The schedule interval must be between 1 hour and 1 year. For more information, see the [Cron expressions reference](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-cron-expressions.html) in the *Amazon EventBridge User Guide* .", + "CronExpression": "The schedule, as a Cron expression. The schedule interval must be between 1 hour and 1 year. For more information, see the [Cron and rate expressions](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-scheduled-rule-pattern.html) in the *Amazon EventBridge User Guide* .", "Interval": "The interval between snapshots. The supported values are 1, 2, 3, 4, 6, 8, 12, and 24.", "IntervalUnit": "The interval unit.", "Location": "*[Custom snapshot policies only]* Specifies the destination for snapshots created by the policy. The allowed destinations depend on the location of the targeted resources.\n\n- If the policy targets resources in a Region, then you must create snapshots in the same Region as the source resource.\n- If the policy targets resources in a Local Zone, you can create snapshots in the same Local Zone or in its parent Region.\n- If the policy targets resources on an Outpost, then you can create snapshots on the same Outpost or in its parent Region.\n\nSpecify one of the following values:\n\n- To create snapshots in the same Region as the source resource, specify `CLOUD` .\n- To create snapshots in the same Local Zone as the source resource, specify `LOCAL_ZONE` .\n- To create snapshots on the same Outpost as the source resource, specify `OUTPOST_LOCAL` .\n\nDefault: `CLOUD`", @@ -13387,9 +13479,9 @@ "ContributorInsightsSpecification": "The settings used to enable or disable CloudWatch Contributor Insights for the specified global secondary index.", "IndexName": "The name of the global secondary index. The name must be unique among all other indexes on this table.", "KeySchema": "The complete key schema for a global secondary index, which consists of one or more pairs of attribute names and key types:\n\n- `HASH` - partition key\n- `RANGE` - sort key\n\n> The partition key of an item is also known as its *hash attribute* . The term \"hash attribute\" derives from DynamoDB's usage of an internal hash function to evenly distribute data items across partitions, based on their partition key values.\n> \n> The sort key of an item is also known as its *range attribute* . The term \"range attribute\" derives from the way DynamoDB stores items with the same partition key physically close together, in sorted order by the sort key value.", - "OnDemandThroughput": "The maximum number of read and write units for the specified global secondary index. If you use this parameter, you must specify `MaxReadRequestUnits` , `MaxWriteRequestUnits` , or both.", + "OnDemandThroughput": "The maximum number of read and write units for the specified global secondary index. If you use this parameter, you must specify `MaxReadRequestUnits` , `MaxWriteRequestUnits` , or both. 
You must use either `OnDemandThroughput` or `ProvisionedThroughput` based on your table's capacity mode.", "Projection": "Represents attributes that are copied (projected) from the table into the global secondary index. These are in addition to the primary key attributes and index key attributes, which are automatically projected.", - "ProvisionedThroughput": "Represents the provisioned throughput settings for the specified global secondary index.\n\nFor current minimum and maximum provisioned throughput values, see [Service, Account, and Table Quotas](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Limits.html) in the *Amazon DynamoDB Developer Guide* .", + "ProvisionedThroughput": "Represents the provisioned throughput settings for the specified global secondary index. You must use either `OnDemandThroughput` or `ProvisionedThroughput` based on your table's capacity mode.\n\nFor current minimum and maximum provisioned throughput values, see [Service, Account, and Table Quotas](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Limits.html) in the *Amazon DynamoDB Developer Guide* .", "WarmThroughput": "Represents the warm throughput value (in read units per second and write units per second) for the specified secondary index. If you use this parameter, you must specify `ReadUnitsPerSecond` , `WriteUnitsPerSecond` , or both." }, "AWS::DynamoDB::Table ImportSourceSpecification": { @@ -13690,7 +13782,7 @@ "AcceleratorManufacturers": "Indicates whether instance types must have accelerators by specific manufacturers.\n\n- For instance types with AWS devices, specify `amazon-web-services` .\n- For instance types with AMD devices, specify `amd` .\n- For instance types with Habana devices, specify `habana` .\n- For instance types with NVIDIA devices, specify `nvidia` .\n- For instance types with Xilinx devices, specify `xilinx` .\n\nDefault: Any manufacturer", "AcceleratorNames": "The accelerators that must be on the instance type.\n\n- For instance types with NVIDIA A10G GPUs, specify `a10g` .\n- For instance types with NVIDIA A100 GPUs, specify `a100` .\n- For instance types with NVIDIA H100 GPUs, specify `h100` .\n- For instance types with AWS Inferentia chips, specify `inferentia` .\n- For instance types with NVIDIA GRID K520 GPUs, specify `k520` .\n- For instance types with NVIDIA K80 GPUs, specify `k80` .\n- For instance types with NVIDIA M60 GPUs, specify `m60` .\n- For instance types with AMD Radeon Pro V520 GPUs, specify `radeon-pro-v520` .\n- For instance types with NVIDIA T4 GPUs, specify `t4` .\n- For instance types with NVIDIA T4G GPUs, specify `t4g` .\n- For instance types with Xilinx VU9P FPGAs, specify `vu9p` .\n- For instance types with NVIDIA V100 GPUs, specify `v100` .\n\nDefault: Any accelerator", "AcceleratorTotalMemoryMiB": "The minimum and maximum amount of total accelerator memory, in MiB.\n\nDefault: No minimum or maximum limits", - "AcceleratorTypes": "The accelerator types that must be on the instance type.\n\n- For instance types with FPGA accelerators, specify `fpga` .\n- For instance types with GPU accelerators, specify `gpu` .\n\nDefault: Any accelerator type", + "AcceleratorTypes": "The accelerator types that must be on the instance type.\n\n- For instance types with FPGA accelerators, specify `fpga` .\n- For instance types with GPU accelerators, specify `gpu` .\n- For instance types with Inference accelerators, specify `inference` .\n\nDefault: Any accelerator type", "AllowedInstanceTypes": "The instance types to apply your 
specified attributes against. All other instance types are ignored, even if they match your specified attributes.\n\nYou can use strings with one or more wild cards, represented by an asterisk ( `*` ), to allow an instance type, size, or generation. The following are examples: `m5.8xlarge` , `c5*.*` , `m5a.*` , `r*` , `*3*` .\n\nFor example, if you specify `c5*` ,Amazon EC2 will allow the entire C5 instance family, which includes all C5a and C5n instance types. If you specify `m5a.*` , Amazon EC2 will allow all the M5a instance types, but not the M5n instance types.\n\n> If you specify `AllowedInstanceTypes` , you can't specify `ExcludedInstanceTypes` . \n\nDefault: All instance types", "BareMetal": "Indicates whether bare metal instance types must be included, excluded, or required.\n\n- To include bare metal instance types, specify `included` .\n- To require only bare metal instance types, specify `required` .\n- To exclude bare metal instance types, specify `excluded` .\n\nDefault: `excluded`", "BaselineEbsBandwidthMbps": "The minimum and maximum baseline bandwidth to Amazon EBS, in Mbps. For more information, see [Amazon EBS\u2013optimized instances](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-optimized.html) in the *Amazon EC2 User Guide* .\n\nDefault: No minimum or maximum limits", @@ -14202,7 +14294,7 @@ "AcceleratorManufacturers": "Indicates whether instance types must have accelerators by specific manufacturers.\n\n- For instance types with AWS devices, specify `amazon-web-services` .\n- For instance types with AMD devices, specify `amd` .\n- For instance types with Habana devices, specify `habana` .\n- For instance types with NVIDIA devices, specify `nvidia` .\n- For instance types with Xilinx devices, specify `xilinx` .\n\nDefault: Any manufacturer", "AcceleratorNames": "The accelerators that must be on the instance type.\n\n- For instance types with NVIDIA A10G GPUs, specify `a10g` .\n- For instance types with NVIDIA A100 GPUs, specify `a100` .\n- For instance types with NVIDIA H100 GPUs, specify `h100` .\n- For instance types with AWS Inferentia chips, specify `inferentia` .\n- For instance types with NVIDIA GRID K520 GPUs, specify `k520` .\n- For instance types with NVIDIA K80 GPUs, specify `k80` .\n- For instance types with NVIDIA M60 GPUs, specify `m60` .\n- For instance types with AMD Radeon Pro V520 GPUs, specify `radeon-pro-v520` .\n- For instance types with NVIDIA T4 GPUs, specify `t4` .\n- For instance types with NVIDIA T4G GPUs, specify `t4g` .\n- For instance types with Xilinx VU9P FPGAs, specify `vu9p` .\n- For instance types with NVIDIA V100 GPUs, specify `v100` .\n\nDefault: Any accelerator", "AcceleratorTotalMemoryMiB": "The minimum and maximum amount of total accelerator memory, in MiB.\n\nDefault: No minimum or maximum limits", - "AcceleratorTypes": "The accelerator types that must be on the instance type.\n\n- For instance types with FPGA accelerators, specify `fpga` .\n- For instance types with GPU accelerators, specify `gpu` .\n\nDefault: Any accelerator type", + "AcceleratorTypes": "The accelerator types that must be on the instance type.\n\n- For instance types with FPGA accelerators, specify `fpga` .\n- For instance types with GPU accelerators, specify `gpu` .\n- For instance types with Inference accelerators, specify `inference` .\n\nDefault: Any accelerator type", "AllowedInstanceTypes": "The instance types to apply your specified attributes against. 
All other instance types are ignored, even if they match your specified attributes.\n\nYou can use strings with one or more wild cards, represented by an asterisk ( `*` ), to allow an instance type, size, or generation. The following are examples: `m5.8xlarge` , `c5*.*` , `m5a.*` , `r*` , `*3*` .\n\nFor example, if you specify `c5*` ,Amazon EC2 will allow the entire C5 instance family, which includes all C5a and C5n instance types. If you specify `m5a.*` , Amazon EC2 will allow all the M5a instance types, but not the M5n instance types.\n\n> If you specify `AllowedInstanceTypes` , you can't specify `ExcludedInstanceTypes` . \n\nDefault: All instance types", "BareMetal": "Indicates whether bare metal instance types must be included, excluded, or required.\n\n- To include bare metal instance types, specify `included` .\n- To require only bare metal instance types, specify `required` .\n- To exclude bare metal instance types, specify `excluded` .\n\nDefault: `excluded`", "BaselineEbsBandwidthMbps": "The minimum and maximum baseline bandwidth to Amazon EBS, in Mbps. For more information, see [Amazon EBS\u2013optimized instances](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-optimized.html) in the *Amazon EC2 User Guide* .\n\nDefault: No minimum or maximum limits", @@ -14974,7 +15066,7 @@ "AcceleratorManufacturers": "Indicates whether instance types must have accelerators by specific manufacturers.\n\n- For instance types with AWS devices, specify `amazon-web-services` .\n- For instance types with AMD devices, specify `amd` .\n- For instance types with Habana devices, specify `habana` .\n- For instance types with NVIDIA devices, specify `nvidia` .\n- For instance types with Xilinx devices, specify `xilinx` .\n\nDefault: Any manufacturer", "AcceleratorNames": "The accelerators that must be on the instance type.\n\n- For instance types with NVIDIA A10G GPUs, specify `a10g` .\n- For instance types with NVIDIA A100 GPUs, specify `a100` .\n- For instance types with NVIDIA H100 GPUs, specify `h100` .\n- For instance types with AWS Inferentia chips, specify `inferentia` .\n- For instance types with NVIDIA GRID K520 GPUs, specify `k520` .\n- For instance types with NVIDIA K80 GPUs, specify `k80` .\n- For instance types with NVIDIA M60 GPUs, specify `m60` .\n- For instance types with AMD Radeon Pro V520 GPUs, specify `radeon-pro-v520` .\n- For instance types with NVIDIA T4 GPUs, specify `t4` .\n- For instance types with NVIDIA T4G GPUs, specify `t4g` .\n- For instance types with Xilinx VU9P FPGAs, specify `vu9p` .\n- For instance types with NVIDIA V100 GPUs, specify `v100` .\n\nDefault: Any accelerator", "AcceleratorTotalMemoryMiB": "The minimum and maximum amount of total accelerator memory, in MiB.\n\nDefault: No minimum or maximum limits", - "AcceleratorTypes": "The accelerator types that must be on the instance type.\n\n- For instance types with FPGA accelerators, specify `fpga` .\n- For instance types with GPU accelerators, specify `gpu` .\n\nDefault: Any accelerator type", + "AcceleratorTypes": "The accelerator types that must be on the instance type.\n\n- For instance types with FPGA accelerators, specify `fpga` .\n- For instance types with GPU accelerators, specify `gpu` .\n- For instance types with Inference accelerators, specify `inference` .\n\nDefault: Any accelerator type", "AllowedInstanceTypes": "The instance types to apply your specified attributes against. 
All other instance types are ignored, even if they match your specified attributes.\n\nYou can use strings with one or more wild cards, represented by an asterisk ( `*` ), to allow an instance type, size, or generation. The following are examples: `m5.8xlarge` , `c5*.*` , `m5a.*` , `r*` , `*3*` .\n\nFor example, if you specify `c5*` ,Amazon EC2 will allow the entire C5 instance family, which includes all C5a and C5n instance types. If you specify `m5a.*` , Amazon EC2 will allow all the M5a instance types, but not the M5n instance types.\n\n> If you specify `AllowedInstanceTypes` , you can't specify `ExcludedInstanceTypes` . \n\nDefault: All instance types", "BareMetal": "Indicates whether bare metal instance types must be included, excluded, or required.\n\n- To include bare metal instance types, specify `included` .\n- To require only bare metal instance types, specify `required` .\n- To exclude bare metal instance types, specify `excluded` .\n\nDefault: `excluded`", "BaselineEbsBandwidthMbps": "The minimum and maximum baseline bandwidth to Amazon EBS, in Mbps. For more information, see [Amazon EBS\u2013optimized instances](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-optimized.html) in the *Amazon EC2 User Guide* .\n\nDefault: No minimum or maximum limits", @@ -15753,6 +15845,18 @@ "AWS::ECR::RegistryPolicy": { "PolicyText": "The JSON policy text for your registry." }, + "AWS::ECR::RegistryScanningConfiguration": { + "Rules": "The scanning rules associated with the registry.", + "ScanType": "The type of scanning configured for the registry." + }, + "AWS::ECR::RegistryScanningConfiguration RepositoryFilter": { + "Filter": "The filter to use when scanning.", + "FilterType": "The type associated with the filter." + }, + "AWS::ECR::RegistryScanningConfiguration ScanningRule": { + "RepositoryFilters": "The details of a scanning repository filter. For more information on how to use filters, see [Using filters](https://docs.aws.amazon.com/AmazonECR/latest/userguide/image-scanning.html#image-scanning-filters) in the *Amazon Elastic Container Registry User Guide* .", + "ScanFrequency": "The frequency that scans are performed at for a private registry. When the `ENHANCED` scan type is specified, the supported scan frequencies are `CONTINUOUS_SCAN` and `SCAN_ON_PUSH` . When the `BASIC` scan type is specified, the `SCAN_ON_PUSH` scan frequency is supported. If scan on push is not specified, then the `MANUAL` scan frequency is set by default." + }, "AWS::ECR::ReplicationConfiguration": { "ReplicationConfiguration": "The replication configuration for a registry." }, @@ -16046,7 +16150,7 @@ }, "AWS::ECS::TaskDefinition": { "ContainerDefinitions": "A list of container definitions in JSON format that describe the different containers that make up your task. For more information about container definition parameters and defaults, see [Amazon ECS Task Definitions](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_defintions.html) in the *Amazon Elastic Container Service Developer Guide* .", - "Cpu": "The number of `cpu` units used by the task. If you use the EC2 launch type, this field is optional. Any value can be used. If you use the Fargate launch type, this field is required. You must use one of the following values. The value that you choose determines your range of valid values for the `memory` parameter.\n\nIf you're using the EC2 launch type or the external launch type, this field is optional. 
Supported values are between `128` CPU units ( `0.125` vCPUs) and `196608` CPU units ( `192` vCPUs). The CPU units cannot be less than 1 vCPU when you use Windows containers on Fargate.\n\n- 256 (.25 vCPU) - Available `memory` values: 512 (0.5 GB), 1024 (1 GB), 2048 (2 GB)\n- 512 (.5 vCPU) - Available `memory` values: 1024 (1 GB), 2048 (2 GB), 3072 (3 GB), 4096 (4 GB)\n- 1024 (1 vCPU) - Available `memory` values: 2048 (2 GB), 3072 (3 GB), 4096 (4 GB), 5120 (5 GB), 6144 (6 GB), 7168 (7 GB), 8192 (8 GB)\n- 2048 (2 vCPU) - Available `memory` values: 4096 (4 GB) and 16384 (16 GB) in increments of 1024 (1 GB)\n- 4096 (4 vCPU) - Available `memory` values: 8192 (8 GB) and 30720 (30 GB) in increments of 1024 (1 GB)\n- 8192 (8 vCPU) - Available `memory` values: 16 GB and 60 GB in 4 GB increments\n\nThis option requires Linux platform `1.4.0` or later.\n- 16384 (16vCPU) - Available `memory` values: 32GB and 120 GB in 8 GB increments\n\nThis option requires Linux platform `1.4.0` or later.", + "Cpu": "The number of `cpu` units used by the task. If you use the EC2 launch type, this field is optional. Any value can be used. If you use the Fargate launch type, this field is required. You must use one of the following values. The value that you choose determines your range of valid values for the `memory` parameter.\n\nIf you're using the EC2 launch type or the external launch type, this field is optional. Supported values are between `128` CPU units ( `0.125` vCPUs) and `196608` CPU units ( `192` vCPUs).\n\nThis field is required for Fargate. For information about the valid values, see [Task size](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definition_parameters.html#task_size) in the *Amazon Elastic Container Service Developer Guide* .", "EnableFaultInjection": "Enables fault injection and allows for fault injection requests to be accepted from the task's containers. The default value is `false` .", "EphemeralStorage": "The ephemeral storage settings to use for tasks run with the task definition.", "ExecutionRoleArn": "The Amazon Resource Name (ARN) of the task execution role that grants the Amazon ECS container agent permission to make AWS API calls on your behalf. For informationabout the required IAM roles for Amazon ECS, see [IAM roles for Amazon ECS](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/security-ecs-iam-role-overview.html) in the *Amazon Elastic Container Service Developer Guide* .", @@ -18090,7 +18194,7 @@ "Hashed": "Indicates if the column values are hashed in the schema input.\n\nIf the value is set to `TRUE` , the column values are hashed.\n\nIf the value is set to `FALSE` , the column values are cleartext.", "MatchKey": "A key that allows grouping of multiple input attributes into a unified matching group.\n\nFor example, consider a scenario where the source table contains various addresses, such as `business_address` and `shipping_address` . 
By assigning a `matchKey` called `address` to both attributes, AWS Entity Resolution will match records across these fields to create a consolidated matching group.\n\nIf no `matchKey` is specified for a column, it won't be utilized for matching purposes but will still be included in the output table.", "SubType": "The subtype of the attribute, selected from a list of values.", - "Type": "The type of the attribute, selected from a list of values.\n\n> Normalization is only supported for `NAME` , `ADDRESS` , `PHONE` , and `EMAIL_ADDRESS` .\n> \n> If you want to normalize `NAME_FIRST` , `NAME_MIDDLE` , and `NAME_LAST` , you must group them by assigning them to the `NAME` `groupName` .\n> \n> If you want to normalize `ADDRESS_STREET1` , `ADDRESS_STREET2` , `ADDRESS_STREET3` , `ADDRESS_CITY` , `ADDRESS_STATE` , `ADDRESS_COUNTRY` , and `ADDRESS_POSTALCODE` , you must group them by assigning them to the `ADDRESS` `groupName` .\n> \n> If you want to normalize `PHONE_NUMBER` and `PHONE_COUNTRYCODE` , you must group them by assigning them to the `PHONE` `groupName` ." + "Type": "The type of the attribute, selected from a list of values.\n\nLiveRamp supports: `NAME` | `NAME_FIRST` | `NAME_MIDDLE` | `NAME_LAST` | `ADDRESS` | `ADDRESS_STREET1` | `ADDRESS_STREET2` | `ADDRESS_STREET3` | `ADDRESS_CITY` | `ADDRESS_STATE` | `ADDRESS_COUNTRY` | `ADDRESS_POSTALCODE` | `PHONE` | `PHONE_NUMBER` | `EMAIL_ADDRESS` | `UNIQUE_ID` | `PROVIDER_ID`\n\nTransUnion supports: `NAME` | `NAME_FIRST` | `NAME_LAST` | `ADDRESS` | `ADDRESS_CITY` | `ADDRESS_STATE` | `ADDRESS_COUNTRY` | `ADDRESS_POSTALCODE` | `PHONE_NUMBER` | `EMAIL_ADDRESS` | `UNIQUE_ID` | `IPV4` | `IPV6` | `MAID`\n\nUnified ID 2.0 supports: `PHONE_NUMBER` | `EMAIL_ADDRESS` | `UNIQUE_ID`\n\n> Normalization is only supported for `NAME` , `ADDRESS` , `PHONE` , and `EMAIL_ADDRESS` .\n> \n> If you want to normalize `NAME_FIRST` , `NAME_MIDDLE` , and `NAME_LAST` , you must group them by assigning them to the `NAME` `groupName` .\n> \n> If you want to normalize `ADDRESS_STREET1` , `ADDRESS_STREET2` , `ADDRESS_STREET3` , `ADDRESS_CITY` , `ADDRESS_STATE` , `ADDRESS_COUNTRY` , and `ADDRESS_POSTALCODE` , you must group them by assigning them to the `ADDRESS` `groupName` .\n> \n> If you want to normalize `PHONE_NUMBER` and `PHONE_COUNTRYCODE` , you must group them by assigning them to the `PHONE` `groupName` ." }, "AWS::EntityResolution::SchemaMapping Tag": { "Key": "The key of the tag.", @@ -30196,7 +30300,8 @@ "ProgramDateTimeIntervalSeconds": "The `EXT-X-PROGRAM-DATE-TIME` interval, in seconds, associated with the HLS manifest configuration.", "ScteHls": "THE SCTE-35 HLS configuration associated with the HLS manifest configuration.", "StartTag": "", - "Url": "The URL of the HLS manifest configuration." + "Url": "The URL of the HLS manifest configuration.", + "UrlEncodeChildManifest": "" }, "AWS::MediaPackageV2::OriginEndpoint LowLatencyHlsManifestConfiguration": { "ChildManifestName": "The name of the child manifest associated with the low-latency HLS (LL-HLS) manifest configuration of the origin endpoint.", @@ -30206,7 +30311,8 @@ "ProgramDateTimeIntervalSeconds": "Inserts `EXT-X-PROGRAM-DATE-TIME` tags in the output manifest at the interval that you specify. If you don't enter an interval, `EXT-X-PROGRAM-DATE-TIME` tags aren't included in the manifest. 
The tags sync the stream to the wall clock so that viewers can seek to a specific time in the playback timeline on the player.\n\nIrrespective of this parameter, if any `ID3Timed` metadata is in the HLS input, MediaPackage passes through that metadata to the HLS output.", "ScteHls": "The SCTE-35 HLS configuration associated with the low-latency HLS (LL-HLS) manifest configuration of the origin endpoint.", "StartTag": "", - "Url": "The URL of the low-latency HLS (LL-HLS) manifest configuration of the origin endpoint." + "Url": "The URL of the low-latency HLS (LL-HLS) manifest configuration of the origin endpoint.", + "UrlEncodeChildManifest": "" }, "AWS::MediaPackageV2::OriginEndpoint Scte": { "ScteFilter": "The filter associated with the SCTE-35 configuration." @@ -31377,18 +31483,18 @@ "VpcEndpointManagement": "Defines whether you or Amazon OpenSearch Ingestion service create and manage the VPC endpoint configured for the pipeline." }, "AWS::Oam::Link": { - "LabelTemplate": "Specify a friendly human-readable name to use to identify this source account when you are viewing data from it in the monitoring account.\n\nYou can include the following variables in your template:\n\n- `$AccountName` is the name of the account\n- `$AccountEmail` is a globally-unique email address, which includes the email domain, such as `mariagarcia@example.com`\n- `$AccountEmailNoDomain` is an email address without the domain name, such as `mariagarcia`", + "LabelTemplate": "Specify a friendly human-readable name to use to identify this source account when you are viewing data from it in the monitoring account.\n\nYou can include the following variables in your template:\n\n- `$AccountName` is the name of the account\n- `$AccountEmail` is a globally-unique email address, which includes the email domain, such as `mariagarcia@example.com`\n- `$AccountEmailNoDomain` is an email address without the domain name, such as `mariagarcia`\n\n> In the and Regions, the only supported option is to use custom labels, and the `$AccountName` , `$AccountEmail` , and `$AccountEmailNoDomain` variables all resolve as *account-id* instead of the specified variable.", "LinkConfiguration": "Use this structure to optionally create filters that specify that only some metric namespaces or log groups are to be shared from the source account to the monitoring account.", - "ResourceTypes": "An array of strings that define which types of data that the source account shares with the monitoring account. Valid values are `AWS::CloudWatch::Metric | AWS::Logs::LogGroup | AWS::XRay::Trace | AWS::ApplicationInsights::Application | AWS::InternetMonitor::Monitor | AWS::ApplicationSignals::Service | AWS::ApplicationSignals::ServiceLevelObjective` .", + "ResourceTypes": "An array of strings that define which types of data that the source account shares with the monitoring account. Valid values are `AWS::CloudWatch::Metric | AWS::Logs::LogGroup | AWS::XRay::Trace | AWS::ApplicationInsights::Application | AWS::InternetMonitor::Monitor` .", "SinkIdentifier": "The ARN of the sink in the monitoring account that you want to link to. You can use [ListSinks](https://docs.aws.amazon.com/OAM/latest/APIReference/API_ListSinks.html) to find the ARNs of sinks.", "Tags": "An array of key-value pairs to apply to the link.\n\nFor more information, see [Tag](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-resource-tags.html) ." 
}, "AWS::Oam::Link LinkConfiguration": { - "LogGroupConfiguration": "Use this structure to filter which log groups are to send log events from the source account to the monitoring account.", + "LogGroupConfiguration": "Use this structure to filter which log groups are to share log events from this source account to the monitoring account.", "MetricConfiguration": "Use this structure to filter which metric namespaces are to be shared from the source account to the monitoring account." }, "AWS::Oam::Link LinkFilter": { - "Filter": "" + "Filter": "When used in `MetricConfiguration` this field specifies which metric namespaces are to be shared with the monitoring account\n\nWhen used in `LogGroupConfiguration` this field specifies which log groups are to share their log events with the monitoring account. Use the term `LogGroupName` and one or more of the following operands.\n\nUse single quotation marks (') around log group names and metric namespaces.\n\nThe matching of log group names and metric namespaces is case sensitive. Each filter has a limit of five conditional operands. Conditional operands are `AND` and `OR` .\n\n- `=` and `!=`\n- `AND`\n- `OR`\n- `LIKE` and `NOT LIKE` . These can be used only as prefix searches. Include a `%` at the end of the string that you want to search for and include.\n- `IN` and `NOT IN` , using parentheses `( )`\n\nExamples:\n\n- `Namespace NOT LIKE 'AWS/%'` includes only namespaces that don't start with `AWS/` , such as custom namespaces.\n- `Namespace IN ('AWS/EC2', 'AWS/ELB', 'AWS/S3')` includes only the metrics in the EC2, Elastic Load Balancing , and Amazon S3 namespaces.\n- `Namespace = 'AWS/EC2' OR Namespace NOT LIKE 'AWS/%'` includes only the EC2 namespace and your custom namespaces.\n- `LogGroupName IN ('This-Log-Group', 'Other-Log-Group')` includes only the log groups with names `This-Log-Group` and `Other-Log-Group` .\n- `LogGroupName NOT IN ('Private-Log-Group', 'Private-Log-Group-2')` includes all log groups except the log groups with names `Private-Log-Group` and `Private-Log-Group-2` .\n- `LogGroupName LIKE 'aws/lambda/%' OR LogGroupName LIKE 'AWSLogs%'` includes all log groups that have names that start with `aws/lambda/` or `AWSLogs` .\n\n> If you are updating a link that uses filters, you can specify `*` as the only value for the `filter` parameter to delete the filter and share all log groups with the monitoring account." }, "AWS::Oam::Sink": { "Name": "A name for the sink.", @@ -33550,7 +33656,7 @@ }, "AWS::QBusiness::DataSource HookConfiguration": { "InvocationCondition": "The condition used for when a Lambda function should be invoked.\n\nFor example, you can specify a condition that if there are empty date-time values, then Amazon Q Business should invoke a function that inserts the current date-time.", - "LambdaArn": "The Amazon Resource Name (ARN) of the Lambda function sduring ingestion. For more information, see [Using Lambda functions for Amazon Q Business document enrichment](https://docs.aws.amazon.com/amazonq/latest/qbusiness-ug/cde-lambda-operations.html) .", + "LambdaArn": "The Amazon Resource Name (ARN) of the Lambda function during ingestion. 
For more information, see [Using Lambda functions for Amazon Q Business document enrichment](https://docs.aws.amazon.com/amazonq/latest/qbusiness-ug/cde-lambda-operations.html) .", "RoleArn": "The Amazon Resource Name (ARN) of a role with permission to run `PreExtractionHookConfiguration` and `PostExtractionHookConfiguration` for altering document metadata and content during the document ingestion process.", "S3BucketName": "Stores the original, raw documents or the structured, parsed documents before and after altering them. For more information, see [Data contracts for Lambda functions](https://docs.aws.amazon.com/amazonq/latest/business-use-dg/cde-lambda-operations.html#cde-lambda-operations-data-contracts) ." }, @@ -43377,7 +43483,7 @@ "AWS::RDS::DBCluster": { "AllocatedStorage": "The amount of storage in gibibytes (GiB) to allocate to each DB instance in the Multi-AZ DB cluster.\n\nValid for Cluster Type: Multi-AZ DB clusters only\n\nThis setting is required to create a Multi-AZ DB cluster.", "AssociatedRoles": "Provides a list of the AWS Identity and Access Management (IAM) roles that are associated with the DB cluster. IAM roles that are associated with a DB cluster grant permission for the DB cluster to access other Amazon Web Services on your behalf.\n\nValid for: Aurora DB clusters and Multi-AZ DB clusters", - "AutoMinorVersionUpgrade": "Specifies whether minor engine upgrades are applied automatically to the DB cluster during the maintenance window. By default, minor engine upgrades are applied automatically.\n\nValid for Cluster Type: Aurora DB clusters and Multi-AZ DB cluster", + "AutoMinorVersionUpgrade": "Specifies whether minor engine upgrades are applied automatically to the DB cluster during the maintenance window. By default, minor engine upgrades are applied automatically.\n\nValid for Cluster Type: Aurora DB clusters and Multi-AZ DB cluster.\n\nFor more information about automatic minor version upgrades, see [Automatically upgrading the minor engine version](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_UpgradeDBInstance.Upgrading.html#USER_UpgradeDBInstance.Upgrading.AutoMinorVersionUpgrades) .", "AvailabilityZones": "A list of Availability Zones (AZs) where instances in the DB cluster can be created. For information on AWS Regions and Availability Zones, see [Choosing the Regions and Availability Zones](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Concepts.RegionsAndAvailabilityZones.html) in the *Amazon Aurora User Guide* .\n\nValid for: Aurora DB clusters only", "BacktrackWindow": "The target backtrack window, in seconds. To disable backtracking, set this value to `0` .\n\nValid for Cluster Type: Aurora MySQL DB clusters only\n\nDefault: `0`\n\nConstraints:\n\n- If specified, this value must be set to a number from 0 to 259,200 (72 hours).", "BackupRetentionPeriod": "The number of days for which automated backups are retained.\n\nDefault: 1\n\nConstraints:\n\n- Must be a value from 1 to 35\n\nValid for: Aurora DB clusters and Multi-AZ DB clusters", @@ -50232,7 +50338,7 @@ "Forward": "Describes a forward action. You can use forward actions to route requests to one or more target groups." }, "AWS::VpcLattice::Listener FixedResponse": { - "StatusCode": "The HTTP response code." + "StatusCode": "The HTTP response code. Only `404` and `500` status codes are supported." }, "AWS::VpcLattice::Listener Forward": { "TargetGroups": "The target groups. Traffic matching the rule is forwarded to the specified target groups. 
With forward actions, you can assign a weight that controls the prioritization and selection of each target group. This means that requests are distributed to individual target groups based on their weights. For example, if two target groups have the same weight, each target group receives half of the traffic.\n\nThe default value is 1. This means that if only one target group is provided, there is no need to set the weight; 100% of the traffic goes to that target group." @@ -50300,7 +50406,7 @@ "Forward": "The forward action. Traffic that matches the rule is forwarded to the specified target groups." }, "AWS::VpcLattice::Rule FixedResponse": { - "StatusCode": "The HTTP response code." + "StatusCode": "The HTTP response code. Only `404` and `500` status codes are supported." }, "AWS::VpcLattice::Rule Forward": { "TargetGroups": "The target groups. Traffic matching the rule is forwarded to the specified target groups. With forward actions, you can assign a weight that controls the prioritization and selection of each target group. This means that requests are distributed to individual target groups based on their weights. For example, if two target groups have the same weight, each target group receives half of the traffic.\n\nThe default value is 1. This means that if only one target group is provided, there is no need to set the weight; 100% of the traffic goes to that target group." diff --git a/schema_source/cloudformation.schema.json b/schema_source/cloudformation.schema.json index 2d19845b5..507ae8b45 100644 --- a/schema_source/cloudformation.schema.json +++ b/schema_source/cloudformation.schema.json @@ -55513,7 +55513,7 @@ "additionalProperties": false, "properties": { "CronExpression": { - "markdownDescription": "The schedule, as a Cron expression. The schedule interval must be between 1 hour and 1 year. For more information, see the [Cron expressions reference](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-cron-expressions.html) in the *Amazon EventBridge User Guide* .", + "markdownDescription": "The schedule, as a Cron expression. The schedule interval must be between 1 hour and 1 year. For more information, see the [Cron and rate expressions](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-scheduled-rule-pattern.html) in the *Amazon EventBridge User Guide* .", "title": "CronExpression", "type": "string" }, @@ -68030,7 +68030,7 @@ }, "ProvisionedThroughput": { "$ref": "#/definitions/AWS::DynamoDB::Table.ProvisionedThroughput", - "markdownDescription": "Represents the provisioned throughput settings for the specified global secondary index.\n\nFor current minimum and maximum provisioned throughput values, see [Service, Account, and Table Quotas](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Limits.html) in the *Amazon DynamoDB Developer Guide* .", + "markdownDescription": "Represents the provisioned throughput settings for the specified global secondary index. 
You must use either `OnDemandThroughput` or `ProvisionedThroughput` based on your table's capacity mode.\n\nFor current minimum and maximum provisioned throughput values, see [Service, Account, and Table Quotas](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Limits.html) in the *Amazon DynamoDB Developer Guide* .", "title": "ProvisionedThroughput" } }, @@ -69786,7 +69786,7 @@ "items": { "type": "string" }, - "markdownDescription": "The accelerator types that must be on the instance type.\n\n- For instance types with FPGA accelerators, specify `fpga` .\n- For instance types with GPU accelerators, specify `gpu` .\n\nDefault: Any accelerator type", + "markdownDescription": "The accelerator types that must be on the instance type.\n\n- For instance types with FPGA accelerators, specify `fpga` .\n- For instance types with GPU accelerators, specify `gpu` .\n- For instance types with Inference accelerators, specify `inference` .\n\nDefault: Any accelerator type", "title": "AcceleratorTypes", "type": "array" }, @@ -72887,7 +72887,7 @@ "items": { "type": "string" }, - "markdownDescription": "The accelerator types that must be on the instance type.\n\n- For instance types with FPGA accelerators, specify `fpga` .\n- For instance types with GPU accelerators, specify `gpu` .\n\nDefault: Any accelerator type", + "markdownDescription": "The accelerator types that must be on the instance type.\n\n- For instance types with FPGA accelerators, specify `fpga` .\n- For instance types with GPU accelerators, specify `gpu` .\n- For instance types with Inference accelerators, specify `inference` .\n\nDefault: Any accelerator type", "title": "AcceleratorTypes", "type": "array" }, @@ -77407,7 +77407,7 @@ "items": { "type": "string" }, - "markdownDescription": "The accelerator types that must be on the instance type.\n\n- For instance types with FPGA accelerators, specify `fpga` .\n- For instance types with GPU accelerators, specify `gpu` .\n\nDefault: Any accelerator type", + "markdownDescription": "The accelerator types that must be on the instance type.\n\n- For instance types with FPGA accelerators, specify `fpga` .\n- For instance types with GPU accelerators, specify `gpu` .\n- For instance types with Inference accelerators, specify `inference` .\n\nDefault: Any accelerator type", "title": "AcceleratorTypes", "type": "array" }, @@ -84193,7 +84193,7 @@ "type": "array" }, "Cpu": { - "markdownDescription": "The number of `cpu` units used by the task. If you use the EC2 launch type, this field is optional. Any value can be used. If you use the Fargate launch type, this field is required. You must use one of the following values. The value that you choose determines your range of valid values for the `memory` parameter.\n\nIf you're using the EC2 launch type or the external launch type, this field is optional. Supported values are between `128` CPU units ( `0.125` vCPUs) and `196608` CPU units ( `192` vCPUs). 
The CPU units cannot be less than 1 vCPU when you use Windows containers on Fargate.\n\n- 256 (.25 vCPU) - Available `memory` values: 512 (0.5 GB), 1024 (1 GB), 2048 (2 GB)\n- 512 (.5 vCPU) - Available `memory` values: 1024 (1 GB), 2048 (2 GB), 3072 (3 GB), 4096 (4 GB)\n- 1024 (1 vCPU) - Available `memory` values: 2048 (2 GB), 3072 (3 GB), 4096 (4 GB), 5120 (5 GB), 6144 (6 GB), 7168 (7 GB), 8192 (8 GB)\n- 2048 (2 vCPU) - Available `memory` values: 4096 (4 GB) and 16384 (16 GB) in increments of 1024 (1 GB)\n- 4096 (4 vCPU) - Available `memory` values: 8192 (8 GB) and 30720 (30 GB) in increments of 1024 (1 GB)\n- 8192 (8 vCPU) - Available `memory` values: 16 GB and 60 GB in 4 GB increments\n\nThis option requires Linux platform `1.4.0` or later.\n- 16384 (16vCPU) - Available `memory` values: 32GB and 120 GB in 8 GB increments\n\nThis option requires Linux platform `1.4.0` or later.", + "markdownDescription": "The number of `cpu` units used by the task. If you use the EC2 launch type, this field is optional. Any value can be used. If you use the Fargate launch type, this field is required. You must use one of the following values. The value that you choose determines your range of valid values for the `memory` parameter.\n\nIf you're using the EC2 launch type or the external launch type, this field is optional. Supported values are between `128` CPU units ( `0.125` vCPUs) and `196608` CPU units ( `192` vCPUs).\n\nThis field is required for Fargate. For information about the valid values, see [Task size](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definition_parameters.html#task_size) in the *Amazon Elastic Container Service Developer Guide* .", "title": "Cpu", "type": "string" }, @@ -95825,7 +95825,7 @@ "type": "string" }, "Type": { - "markdownDescription": "The type of the attribute, selected from a list of values.\n\n> Normalization is only supported for `NAME` , `ADDRESS` , `PHONE` , and `EMAIL_ADDRESS` .\n> \n> If you want to normalize `NAME_FIRST` , `NAME_MIDDLE` , and `NAME_LAST` , you must group them by assigning them to the `NAME` `groupName` .\n> \n> If you want to normalize `ADDRESS_STREET1` , `ADDRESS_STREET2` , `ADDRESS_STREET3` , `ADDRESS_CITY` , `ADDRESS_STATE` , `ADDRESS_COUNTRY` , and `ADDRESS_POSTALCODE` , you must group them by assigning them to the `ADDRESS` `groupName` .\n> \n> If you want to normalize `PHONE_NUMBER` and `PHONE_COUNTRYCODE` , you must group them by assigning them to the `PHONE` `groupName` .", + "markdownDescription": "The type of the attribute, selected from a list of values.\n\nLiveRamp supports: `NAME` | `NAME_FIRST` | `NAME_MIDDLE` | `NAME_LAST` | `ADDRESS` | `ADDRESS_STREET1` | `ADDRESS_STREET2` | `ADDRESS_STREET3` | `ADDRESS_CITY` | `ADDRESS_STATE` | `ADDRESS_COUNTRY` | `ADDRESS_POSTALCODE` | `PHONE` | `PHONE_NUMBER` | `EMAIL_ADDRESS` | `UNIQUE_ID` | `PROVIDER_ID`\n\nTransUnion supports: `NAME` | `NAME_FIRST` | `NAME_LAST` | `ADDRESS` | `ADDRESS_CITY` | `ADDRESS_STATE` | `ADDRESS_COUNTRY` | `ADDRESS_POSTALCODE` | `PHONE_NUMBER` | `EMAIL_ADDRESS` | `UNIQUE_ID` | `IPV4` | `IPV6` | `MAID`\n\nUnified ID 2.0 supports: `PHONE_NUMBER` | `EMAIL_ADDRESS` | `UNIQUE_ID`\n\n> Normalization is only supported for `NAME` , `ADDRESS` , `PHONE` , and `EMAIL_ADDRESS` .\n> \n> If you want to normalize `NAME_FIRST` , `NAME_MIDDLE` , and `NAME_LAST` , you must group them by assigning them to the `NAME` `groupName` .\n> \n> If you want to normalize `ADDRESS_STREET1` , `ADDRESS_STREET2` , `ADDRESS_STREET3` , `ADDRESS_CITY` , `ADDRESS_STATE` , 
`ADDRESS_COUNTRY` , and `ADDRESS_POSTALCODE` , you must group them by assigning them to the `ADDRESS` `groupName` .\n> \n> If you want to normalize `PHONE_NUMBER` and `PHONE_COUNTRYCODE` , you must group them by assigning them to the `PHONE` `groupName` .", "title": "Type", "type": "string" } @@ -170722,7 +170722,7 @@ "additionalProperties": false, "properties": { "LabelTemplate": { - "markdownDescription": "Specify a friendly human-readable name to use to identify this source account when you are viewing data from it in the monitoring account.\n\nYou can include the following variables in your template:\n\n- `$AccountName` is the name of the account\n- `$AccountEmail` is a globally-unique email address, which includes the email domain, such as `mariagarcia@example.com`\n- `$AccountEmailNoDomain` is an email address without the domain name, such as `mariagarcia`", + "markdownDescription": "Specify a friendly human-readable name to use to identify this source account when you are viewing data from it in the monitoring account.\n\nYou can include the following variables in your template:\n\n- `$AccountName` is the name of the account\n- `$AccountEmail` is a globally-unique email address, which includes the email domain, such as `mariagarcia@example.com`\n- `$AccountEmailNoDomain` is an email address without the domain name, such as `mariagarcia`\n\n> In the and Regions, the only supported option is to use custom labels, and the `$AccountName` , `$AccountEmail` , and `$AccountEmailNoDomain` variables all resolve as *account-id* instead of the specified variable.", "title": "LabelTemplate", "type": "string" }, @@ -170735,7 +170735,7 @@ "items": { "type": "string" }, - "markdownDescription": "An array of strings that define which types of data that the source account shares with the monitoring account. Valid values are `AWS::CloudWatch::Metric | AWS::Logs::LogGroup | AWS::XRay::Trace | AWS::ApplicationInsights::Application | AWS::InternetMonitor::Monitor | AWS::ApplicationSignals::Service | AWS::ApplicationSignals::ServiceLevelObjective` .", + "markdownDescription": "An array of strings that define which types of data that the source account shares with the monitoring account. Valid values are `AWS::CloudWatch::Metric | AWS::Logs::LogGroup | AWS::XRay::Trace | AWS::ApplicationInsights::Application | AWS::InternetMonitor::Monitor` .", "title": "ResourceTypes", "type": "array" }, @@ -170788,7 +170788,7 @@ "properties": { "LogGroupConfiguration": { "$ref": "#/definitions/AWS::Oam::Link.LinkFilter", - "markdownDescription": "Use this structure to filter which log groups are to send log events from the source account to the monitoring account.", + "markdownDescription": "Use this structure to filter which log groups are to share log events from this source account to the monitoring account.", "title": "LogGroupConfiguration" }, "MetricConfiguration": { @@ -170803,7 +170803,7 @@ "additionalProperties": false, "properties": { "Filter": { - "markdownDescription": "", + "markdownDescription": "When used in `MetricConfiguration` this field specifies which metric namespaces are to be shared with the monitoring account\n\nWhen used in `LogGroupConfiguration` this field specifies which log groups are to share their log events with the monitoring account. Use the term `LogGroupName` and one or more of the following operands.\n\nUse single quotation marks (') around log group names and metric namespaces.\n\nThe matching of log group names and metric namespaces is case sensitive. 
Each filter has a limit of five conditional operands. Conditional operands are `AND` and `OR` .\n\n- `=` and `!=`\n- `AND`\n- `OR`\n- `LIKE` and `NOT LIKE` . These can be used only as prefix searches. Include a `%` at the end of the string that you want to search for and include.\n- `IN` and `NOT IN` , using parentheses `( )`\n\nExamples:\n\n- `Namespace NOT LIKE 'AWS/%'` includes only namespaces that don't start with `AWS/` , such as custom namespaces.\n- `Namespace IN ('AWS/EC2', 'AWS/ELB', 'AWS/S3')` includes only the metrics in the EC2, Elastic Load Balancing , and Amazon S3 namespaces.\n- `Namespace = 'AWS/EC2' OR Namespace NOT LIKE 'AWS/%'` includes only the EC2 namespace and your custom namespaces.\n- `LogGroupName IN ('This-Log-Group', 'Other-Log-Group')` includes only the log groups with names `This-Log-Group` and `Other-Log-Group` .\n- `LogGroupName NOT IN ('Private-Log-Group', 'Private-Log-Group-2')` includes all log groups except the log groups with names `Private-Log-Group` and `Private-Log-Group-2` .\n- `LogGroupName LIKE 'aws/lambda/%' OR LogGroupName LIKE 'AWSLogs%'` includes all log groups that have names that start with `aws/lambda/` or `AWSLogs` .\n\n> If you are updating a link that uses filters, you can specify `*` as the only value for the `filter` parameter to delete the filter and share all log groups with the monitoring account.", "title": "Filter", "type": "string" } @@ -224429,7 +224429,7 @@ "type": "array" }, "AutoMinorVersionUpgrade": { - "markdownDescription": "Specifies whether minor engine upgrades are applied automatically to the DB cluster during the maintenance window. By default, minor engine upgrades are applied automatically.\n\nValid for Cluster Type: Aurora DB clusters and Multi-AZ DB cluster", + "markdownDescription": "Specifies whether minor engine upgrades are applied automatically to the DB cluster during the maintenance window. By default, minor engine upgrades are applied automatically.\n\nValid for Cluster Type: Aurora DB clusters and Multi-AZ DB cluster.\n\nFor more information about automatic minor version upgrades, see [Automatically upgrading the minor engine version](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_UpgradeDBInstance.Upgrading.html#USER_UpgradeDBInstance.Upgrading.AutoMinorVersionUpgrades) .", "title": "AutoMinorVersionUpgrade", "type": "boolean" }, @@ -264755,7 +264755,7 @@ "additionalProperties": false, "properties": { "StatusCode": { - "markdownDescription": "The HTTP response code.", + "markdownDescription": "The HTTP response code. Only `404` and `500` status codes are supported.", "title": "StatusCode", "type": "number" } @@ -264996,7 +264996,7 @@ "additionalProperties": false, "properties": { "StatusCode": { - "markdownDescription": "The HTTP response code.", + "markdownDescription": "The HTTP response code. Only `404` and `500` status codes are supported.", "title": "StatusCode", "type": "number" }
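For reference only (this is not part of the schema patch itself), the descriptions updated above map onto templates as follows. First, the Fargate `Cpu`/`Memory` pairing rules in the revised `AWS::ECS::TaskDefinition` `Cpu` text: a minimal sketch, assuming placeholder family name, image, and execution role ARN, where `1024` CPU units is paired with a `memory` value from its valid range (`2048`–`8192` MiB).

{
  "FargateTaskDefinition": {
    "Type": "AWS::ECS::TaskDefinition",
    "Properties": {
      "Family": "example-task",
      "RequiresCompatibilities": ["FARGATE"],
      "NetworkMode": "awsvpc",
      "Cpu": "1024",
      "Memory": "2048",
      "ExecutionRoleArn": "arn:aws:iam::111122223333:role/example-ecs-execution-role",
      "ContainerDefinitions": [
        { "Name": "app", "Image": "nginx:latest", "Essential": true }
      ]
    }
  }
}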
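Second, the filter grammar documented for `AWS::Oam::Link LinkFilter` above: a minimal sketch, assuming a placeholder sink ARN, with the `Filter` strings taken directly from the examples in the new description (`Namespace NOT LIKE 'AWS/%'` for metrics, a `LogGroupName LIKE` prefix search for log groups).

{
  "MonitoringAccountLink": {
    "Type": "AWS::Oam::Link",
    "Properties": {
      "SinkIdentifier": "arn:aws:oam:us-east-1:111122223333:sink/EXAMPLE-SINK-ID",
      "LabelTemplate": "$AccountName",
      "ResourceTypes": ["AWS::CloudWatch::Metric", "AWS::Logs::LogGroup"],
      "LinkConfiguration": {
        "MetricConfiguration": { "Filter": "Namespace NOT LIKE 'AWS/%'" },
        "LogGroupConfiguration": { "Filter": "LogGroupName LIKE 'aws/lambda/%'" }
      }
    }
  }
}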
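Finally, the `FixedResponse` constraint added above (only `404` and `500` status codes) in a listener default action: a minimal sketch, assuming a placeholder listener name, port, and service identifier.

{
  "NotFoundListener": {
    "Type": "AWS::VpcLattice::Listener",
    "Properties": {
      "Name": "example-listener",
      "Protocol": "HTTPS",
      "Port": 443,
      "ServiceIdentifier": "svc-0123456789abcdef0",
      "DefaultAction": {
        "FixedResponse": { "StatusCode": 404 }
      }
    }
  }
}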