Add testing components page #15458
base: master
Conversation
Your site preview for commit 4d93425 is ready! 🎉 http://www-testing-pulumi-docs-origin-pr-15458-4d934251.s3-website.us-west-2.amazonaws.com

Your site preview for commit ce3b8f9 is ready! 🎉 http://www-testing-pulumi-docs-origin-pr-15458-ce3b8f97.s3-website.us-west-2.amazonaws.com
### Prevention Strategy: Use Enums to Control Inputs
You can model allowed values directly in your component’s schema using enum types:
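A minimal sketch of what such an example might look like (the `StorageTier` and `SecureBucketArgs` names are hypothetical; in Python, component args classes can use standard `enum.Enum` members, which schema inference can surface as enum types):

```python
import enum
from dataclasses import dataclass

# Hypothetical enum of allowed values; invalid strings can't be expressed.
class StorageTier(str, enum.Enum):
    STANDARD = "Standard"
    PREMIUM = "Premium"

# Hypothetical args class for a component; the schema would expose
# `tier` as an enum rather than a free-form string.
@dataclass
class SecureBucketArgs:
    tier: StorageTier = StorageTier.STANDARD

args = SecureBucketArgs(tier=StorageTier.PREMIUM)
print(args.tier.value)  # "Premium"
```

Consumers then get completion and validation in their editor, and an out-of-range value fails before a deployment ever starts.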
@julienp which of our languages currently support enums? We'll want to call it out
I believe only Python does
### Prevention Strategy: Use Policies as Guardrails
Pulumi CrossGuard policies can enforce input constraints dynamically at deployment time. For example:
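A sketch of the shape such a check might take (the resource type and property names are hypothetical; in a real policy pack the validator function would be registered with `pulumi_policy.ResourceValidationPolicy`, which is omitted here so the check itself stays self-contained):

```python
# Hypothetical validator in the style of a CrossGuard resource validation
# policy. In a real policy pack this function would be registered via
# pulumi_policy.ResourceValidationPolicy and run during preview/update.
ALLOWED_TIERS = {"Standard", "Premium"}

def validate_tier(resource_type: str, props: dict, report_violation) -> None:
    # Only inspect the resource type this policy targets.
    if resource_type == "my:index:SecureBucket":
        tier = props.get("tier")
        if tier not in ALLOWED_TIERS:
            report_violation(f"tier {tier!r} is not one of {sorted(ALLOWED_TIERS)}")

violations: list[str] = []
validate_tier("my:index:SecureBucket", {"tier": "Basic"}, violations.append)
print(violations)  # one violation reported for the disallowed tier
```

Unlike schema enums, which are enforced per language SDK, a policy applies uniformly at deployment time regardless of the language the consuming program is written in.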
Should we link to CrossGuard?
Wanted to leave my feedback so far. I need to continue this review from:
#### Integration Testing via the Pulumi Go Provider SDK
weight: 1
---
When authoring Pulumi components, it's critical to ensure changes won't break Pulumi programs that use them, or violate organizational policies. This page outlines different testing strategies and tools you can use to confidently update and maintain components.
Suggested change:
- When authoring Pulumi components, it's critical to ensure changes won't break Pulumi programs that use them, or violate organizational policies. This page outlines different testing strategies and tools you can use to confidently update and maintain components.
+ When authoring Pulumi components, it's critical to ensure that changes won't unintentionally break Pulumi programs that consume your components, nor violate organizational policies. This page outlines different testing strategies and tools you can use to confidently update and maintain components.
I say "unintentionally" since a major version change would intentionally break consuming programs.
## Why Testing Matters: Blast Radius and Change Safety
When a component is updated, it's important to understand what other projects or teams might be affected. For example, if a platform engineering team maintains a shared component that encodes sensitive company security details, which then need to be updated in response to a security incident or policy change, before rolling that out, they will need to verify that the update won’t break downstream applications.
Suggested change:
- When a component is updated, it's important to understand what other projects or teams might be affected. For example, if a platform engineering team maintains a shared component that encodes sensitive company security details, which then need to be updated in response to a security incident or policy change, before rolling that out, they will need to verify that the update won’t break downstream applications.
+ When a component is updated, it's important to understand what other projects or teams might be affected. For example, if a platform engineering team updates a shared component that encapsulates sensitive company security details in response to a security incident or a new regulatory requirement, the platform team will need to verify that the update won’t unintentionally break downstream Pulumi programs.
To assess the impact of this change, two primary methods can be used:
### Testing Strategy: Use `pulumi preview`
Suggested change:
- ### Testing Strategy: Use `pulumi preview`
+ ### Testing Strategy: Link to a local copy and use `pulumi preview`
### Testing Strategy: Use `pulumi preview`
`pulumi preview` shows what changes will occur if a project consumes the updated version of the component. This helps identify if the changes are additive, destructive, or trigger replacements.
Suggested change:
- `pulumi preview` shows what changes will occur if a project consumes the updated version of the component. This helps identify if the changes are additive, destructive, or trigger replacements.
+ [The `pulumi preview` command](/docs/iac/cli/commands/pulumi_preview/) shows what changes will occur if a project consumes the updated version of the component. This helps identify if the changes are additive, destructive, or trigger replacements.
- You have access to consumer projects
- You can link the component locally
Running `pulumi preview` works great for existing projects to detect any unexpected behavior after an update. The best part is that it tests your new code in exactly the environments where it will be used, meaning they are very accurate and will show any issues, even ones you may not have considered ahead of time. Unfortunately, it may not scale well if you have a lot of projects or don't have direct access to the Pulumi project's code.
Suggested change:
- Running `pulumi preview` works great for existing projects to detect any unexpected behavior after an update. The best part is that it tests your new code in exactly the environments where it will be used, meaning they are very accurate and will show any issues, even ones you may not have considered ahead of time. Unfortunately, it may not scale well if you have a lot of projects or don't have direct access to the Pulumi project's code.
+ Running `pulumi preview` works great for existing projects to detect any unexpected behavior after an update. The upside of this approach is that it tests your new code in exactly the environments where it will be used, giving real-world feedback that will reveal many unintended consequences of the component update. The downside of this approach is that it may not scale well if your component has many downstream consumers or your team does not have direct access to the Pulumi programs that consume your component.
- Local test benches ([see below](#yaml-test-benches))
- CI/CD workflows (like GitHub Actions) that validate downstream usage
These tests can assert that the updated component produces expected outputs and maintains compatibility. This works well when you don't have access to the end-user programs. However, there are limits to what tests can detect. It's often very difficult to write enough tests to have 100% test coverage of all inputs. Often there are environment-specific problems related to configuration, secrets, or other factors that are not able to be recreated in the testing environment. So, while these approaches give you *some* security, they are not as comprehensive as simply running `pulumi preview` and seeing what breaks.
Suggested change:
- These tests can assert that the updated component produces expected outputs and maintains compatibility. This works well when you don't have access to the end-user programs. However, there are limits to what tests can detect. It's often very difficult to write enough tests to have 100% test coverage of all inputs. Often there are environment-specific problems related to configuration, secrets, or other factors that are not able to be recreated in the testing environment. So, while these approaches give you *some* security, they are not as comprehensive as simply running `pulumi preview` and seeing what breaks.
+ These tests can assert that the updated component produces expected outputs and maintains compatibility. This approach works well when you don't have access to the end-user programs. However, there are limits to what these types of tests can detect: It's often difficult to write enough tests to have 100% test coverage for all possible inputs. Often there are environment-specific problems related to configuration, secrets, or other factors that are not able to be recreated in the testing environment. So, while these approaches give you *some* security, they are not as comprehensive as simply running `pulumi preview` in a consuming program and seeing what breaks.
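As a sketch of the kind of assertion such tests make (the `bucket_name` helper is hypothetical; real component tests would typically also exercise resources through Pulumi's mocking support, which is omitted here so the example stays self-contained):

```python
# Hypothetical naming helper factored out of a component so its logic can be
# unit-tested without a Pulumi engine. Changing this format would rename
# resources and trigger replacements in consuming stacks, so tests pin it down.
def bucket_name(project: str, env: str) -> str:
    return f"{project}-{env}-bucket"

# Assert the outputs consumers depend on remain stable across updates.
assert bucket_name("payments", "prod") == "payments-prod-bucket"
assert "staging" in bucket_name("payments", "staging")
print("component output checks passed")
```

Tests like these run in CI on every change to the component, catching compatibility breaks before a new version is published.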
This adds a page on testing of components to the conceptual docs. Also moves the concepts/components doc to be a subcategory in the menu vs just a loose page.