
Add testing components page #15458


Open · wants to merge 2 commits into base: master
Conversation

@thoward (Contributor) commented Jul 1, 2025

This adds a page on testing components to the conceptual docs. It also moves the concepts/components doc to be a subcategory in the menu rather than a loose page.


### Prevention Strategy: Use Enums to Control Inputs

You can model allowed values directly in your component’s schema using enum types:
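As a rough sketch of how this can look in a Python component (the `Tier` enum, the type token, and the `WebService` component below are illustrative, not taken from the docs page):

```python
import enum
from typing import Optional, Union

import pulumi


class Tier(str, enum.Enum):
    """The only service tiers this component accepts."""
    SMALL = "small"
    LARGE = "large"


class WebService(pulumi.ComponentResource):
    def __init__(self, name: str, tier: Union[Tier, str] = Tier.SMALL,
                 opts: Optional[pulumi.ResourceOptions] = None):
        super().__init__("acmecorp:index:WebService", name, {}, opts)
        # Normalizing through the enum rejects anything outside the allowed
        # values with a ValueError before any resources are created.
        tier = Tier(tier)
        self.tier = pulumi.Output.from_input(tier.value)
        self.register_outputs({"tier": self.tier})
```

When a component like this is published as a package, enums can also be surfaced in the generated schema and SDKs, so consumers in other languages see the allowed values at type-check time rather than at deploy time.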
Contributor:

@julienp which of our languages currently support enums? We'll want to call it out

Contributor:

I believe only Python does


### Prevention Strategy: Use Policies as Guardrails

Pulumi CrossGuard policies can enforce input constraints dynamically at deployment time. For example:
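A minimal sketch of such a policy pack in Python (the resource type and property checked here are purely illustrative):

```python
from pulumi_policy import (
    EnforcementLevel,
    PolicyPack,
    ResourceValidationArgs,
    ResourceValidationPolicy,
)


def no_public_buckets(args: ResourceValidationArgs, report_violation):
    # Reject publicly readable S3 buckets, regardless of which component created them.
    if args.resource_type == "aws:s3/bucket:Bucket":
        acl = args.props.get("acl")
        if acl in ("public-read", "public-read-write"):
            report_violation("S3 buckets must not be publicly readable.")


PolicyPack(
    name="component-guardrails",
    enforcement_level=EnforcementLevel.MANDATORY,
    policies=[
        ResourceValidationPolicy(
            name="no-public-buckets",
            description="Prohibits publicly readable S3 buckets.",
            validate=no_public_buckets,
        ),
    ],
)
```

Because policy packs run during `pulumi preview` and `pulumi up`, a guardrail like this is enforced in every consuming program without changing the component itself.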
Contributor:

Should we link to CrossGuard?

@jkodroff (Member) left a comment:

Wanted to leave my feedback so far. I need to continue this review from:

#### Integration Testing via the Pulumi Go Provider SDK

weight: 1
---

When authoring Pulumi components, it's critical to ensure changes won't break Pulumi programs that use them, or violate organizational policies. This page outlines different testing strategies and tools you can use to confidently update and maintain components.
Member:

Suggested change
When authoring Pulumi components, it's critical to ensure changes won't break Pulumi programs that use them, or violate organizational policies. This page outlines different testing strategies and tools you can use to confidently update and maintain components.
When authoring Pulumi components, it's critical to ensure that changes won't unintentionally break Pulumi programs that consume your components, nor violate organizational policies. This page outlines different testing strategies and tools you can use to confidently update and maintain components.

Member:

I say "unintentionally" since a major version change would intentionally break consuming programs.


## Why Testing Matters: Blast Radius and Change Safety

When a component is updated, it's important to understand what other projects or teams might be affected. For example, if a platform engineering team maintains a shared component that encodes sensitive company security details, which then need to be updated in response to a security incident or policy change, before rolling that out, they will need to verify that the update won’t break downstream applications.
Member:

Suggested change
When a component is updated, it's important to understand what other projects or teams might be affected. For example, if a platform engineering team maintains a shared component that encodes sensitive company security details, which then need to be updated in response to a security incident or policy change, before rolling that out, they will need to verify that the update won’t break downstream applications.
When a component is updated, it's important to understand what other projects or teams might be affected. For example, if a platform engineering team updates a shared component that encapsulates sensitive company security details in response to a security incident or a new regulatory requirement, the platform team will need to verify that the update won’t unintentionally break downstream Pulumi programs.


To assess the impact of this change, two primary methods can be used:

### Testing Strategy: Use `pulumi preview`
Member:

Suggested change
### Testing Strategy: Use `pulumi preview`
### Testing Strategy: Link to a local copy and use `pulumi preview`


### Testing Strategy: Use `pulumi preview`

`pulumi preview` shows what changes will occur if a project consumes the updated version of the component. This helps identify if the changes are additive, destructive, or trigger replacements.
Member:

Suggested change
`pulumi preview` shows what changes will occur if a project consumes the updated version of the component. This helps identify if the changes are additive, destructive, or trigger replacements.
[The `pulumi preview` command](/docs/iac/cli/commands/pulumi_preview/) shows what changes will occur if a project consumes the updated version of the component. This helps identify if the changes are additive, destructive, or trigger replacements.

- You have access to consumer projects
- You can link the component locally

Running `pulumi preview` works great for existing projects to detect any unexpected behavior after an update. The best part is that it tests your new code in exactly the environments where it will be used, meaning they are very accurate and will show any issues, even ones you may not have considered ahead of time. Unfortunately, it may not scale well if you have a lot of projects or don't have direct access to the Pulumi project's code.
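As a rough illustration of that workflow for a Python-based consumer project (the paths, project, and stack names are hypothetical):

```bash
# Point the consuming project at your local checkout of the component...
cd ~/src/billing-service
pip install -e ../my-platform-component

# ...then preview against the project's real stack to see exactly what would change.
pulumi preview --diff --stack prod
```

How you link the local copy depends on the component's language and package manager; the preview step is the same either way.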
Member:

Suggested change
Running `pulumi preview` works great for existing projects to detect any unexpected behavior after an update. The best part is that it tests your new code in exactly the environments where it will be used, meaning they are very accurate and will show any issues, even ones you may not have considered ahead of time. Unfortunately, it may not scale well if you have a lot of projects or don't have direct access to the Pulumi project's code.
Running `pulumi preview` works great for existing projects to detect any unexpected behavior after an update. The upside of this approach is that it tests your new code in exactly the environments where it will be used, giving real-world feedback that will reveal many unintended consequences of the component update. The downside of this approach is that it may not scale well if your component has many downstream consumers or your team does not have direct access to the Pulumi programs that consume your component.

- Local test benches ([see below](#yaml-test-benches))
- CI/CD workflows (like GitHub Actions) that validate downstream usage

These tests can assert that the updated component produces expected outputs and maintains compatibility. This works well when you don't have access to the end-user programs. However, there are limits to what tests can detect. It's often very difficult to write enough tests to have 100% test coverage of all inputs. Often there are environment-specific problems related to configuration, secrets, or other factors that are not able to be recreated in the testing environment. So, while these approaches give you *some* security, they are not as comprehensive as simply running `pulumi preview` and seeing what breaks.
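For instance, a small test-bench sketch in Python using Pulumi's testing mocks (the `WebService` component and its `tier` output are hypothetical):

```python
import unittest

import pulumi


class Mocks(pulumi.runtime.Mocks):
    # Return a fake ID and echo the inputs back as the resource's state,
    # so no real cloud resources are created during the test.
    def new_resource(self, args: pulumi.runtime.MockResourceArgs):
        return [args.name + "_id", args.inputs]

    def call(self, args: pulumi.runtime.MockCallArgs):
        return {}


pulumi.runtime.set_mocks(Mocks(), preview=False)

# Import the component only after the mocks are installed.
from my_platform_component import Tier, WebService  # hypothetical module


class TestWebService(unittest.TestCase):
    @pulumi.runtime.test
    def test_tier_output_is_stable(self):
        svc = WebService("test", tier=Tier.SMALL)

        def check(tier):
            # Downstream programs depend on this output keeping its shape.
            self.assertEqual(tier, "small")

        return svc.tier.apply(check)
```

Tests like this run entirely against mocks, which is what makes them cheap to run in CI, but it is also why they can miss environment-specific issues, as noted above.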
Member:

Suggested change
These tests can assert that the updated component produces expected outputs and maintains compatibility. This works well when you don't have access to the end-user programs. However, there are limits to what tests can detect. It's often very difficult to write enough tests to have 100% test coverage of all inputs. Often there are environment-specific problems related to configuration, secrets, or other factors that are not able to be recreated in the testing environment. So, while these approaches give you *some* security, they are not as comprehensive as simply running `pulumi preview` and seeing what breaks.
These tests can assert that the updated component produces expected outputs and maintains compatibility. This approach works well when you don't have access to the end-user programs. However, there are limits to what these types of tests can detect: It's often difficult to write enough tests to have 100% test coverage for all possible inputs. Often there are environment-specific problems related to configuration, secrets, or other factors that are not able to be recreated in the testing environment. So, while these approaches give you *some* security, they are not as comprehensive as simply running `pulumi preview` in a consuming program and seeing what breaks.
