This repository provides a Kustomize base for HashiCorp's Vault.
This is an opinionated setup based on the following principles/assumptions:
- Automated bootstrapping and management, with a minimum of manual steps
- Expendable storage. The assumption is that Vault is only providing short-lived credentials sourced from cloud providers and other secret backends. Configuration data is stored outside of Vault and can be reapplied for disaster recovery.
Be careful using it for other use cases, as some of the design decisions taken here may carry security risks for different uses of Vault.
The Vault cluster uses Raft Integrated Storage as its storage backend.
This provides redundant storage without the operational overhead of maintaining a separate storage backend.
An `initializer` sidecar runs alongside each replica and is responsible for forming the cluster when it is first deployed. The first replica in the `vault` StatefulSet initializes itself as the leader; the second and third join the first.
The process of initialization generates an unseal key and a root token.
A Vault member starts in a 'sealed' state and must be unsealed by a master key. Typically it's considered best practice to split the key using Shamir's Secret Sharing so that multiple shards are required to unseal Vault. This means that Vault is not compromised if a single shard leaks.
Multiple keys complicate the process of automating unsealing, so this setup opts to generate a single unseal key during initialization, which is stored in a Secret called `vault` under the key `unseal-key`.
The `unsealer` sidecar uses this secret to unseal Vault automatically when the replica starts.
When Vault is initialized it generates an initial root token which has full access to Vault. The typical expectation is that you perform initial setup of an alternative authentication method and then delete the root token.
However, the setup provided by this base presumes that Vault will only be accessed and configured by automation systems (see: terraform-applier, vault-kube-cloud-credentials) running in the same Kubernetes namespace, and therefore the root token is persisted to the `vault` Secret under the key `root-token` for use by these systems.
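For illustration, the `vault` Secret produced by initialization would take roughly the following shape. The key names come from the description above; the values shown are placeholders, not real credentials:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: vault
type: Opaque
stringData:
  unseal-key: "<generated-unseal-key>"  # placeholder: written by the initializer
  root-token: "<generated-root-token>"  # placeholder: written by the initializer
```

Automation running in the same namespace can mount or read this Secret to authenticate to Vault with the root token.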
Vault uses TLS with a self-signed certificate. Clients communicating with Vault need to hold the corresponding self-signed CA certificate.
A deployment called `vault-pki-manager` performs the following functions:
- Generates/rotates the CA certificate, private key and server certificate every 24 hours
  - This frequent rotation mitigates the risk of the private key being compromised without our knowledge. The threat scenario is a malicious actor perpetrating a MITM attack.
  - A sidecar on the Vault server pods called `reloader` reloads Vault to pick up the new certificates when they change on disk.
- Ensures the CA certificate is copied into a ConfigMap called `vault-tls` in every namespace in the cluster
  - This allows the CA cert to be mounted into containers which communicate with Vault
- The base provides a ClusterRole that allows `vault-pki-manager` to write ConfigMaps cluster-wide
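A consuming workload can then mount the CA certificate from that ConfigMap. A minimal sketch, in which the pod name, image and mount path are illustrative (only the ConfigMap name `vault-tls` comes from the base):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: vault-client        # illustrative name
spec:
  containers:
    - name: app
      image: example/app    # illustrative image
      volumeMounts:
        - name: vault-tls
          mountPath: /etc/tls/vault  # illustrative path
  volumes:
    - name: vault-tls
      configMap:
        name: vault-tls     # ConfigMap maintained by vault-pki-manager
```

Because the ConfigMap is refreshed on every rotation, clients that re-read the mounted file will pick up the new CA without a redeploy.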
The Prometheus metrics provided by Vault leave a lot to be desired:
- Elements that would ideally be labels in Prometheus are part of the metric name (hashicorp/vault#9068)
- Metrics are translated naively from statsd, which is event-based, creating problems with metric retention (hashicorp/vault#7137)
To mitigate these issues, metrics are exported by `statsd_exporter` with custom mappings that create sane metric names and labels.
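For reference, statsd_exporter mappings are YAML documents of the following shape. This mapping is an illustrative sketch, not one copied from this base:

```yaml
mappings:
  # Turn a statsd metric like vault.route.read.secret- into a single
  # Prometheus metric, moving the variable path segments into labels.
  - match: "vault.route.*.*"
    name: "vault_route"
    labels:
      method: "$1"
      mount: "$2"
```

Each `*` in `match` captures one dot-separated segment, referenced as `$1`, `$2`, … in the metric name and labels.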
Each Vault replica also runs an instance of `vault-exporter`, which exports information about the state of the replica (i.e. leadership status, whether Vault is sealed or not).
Reference the bases in your `kustomization.yaml`:

In Vault's namespace:

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
  - github.com/utilitywarehouse/vault-manifests//base/vault-namespace
```

Somewhere with permission to apply cluster-wide resources:

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
  - github.com/utilitywarehouse/vault-manifests//base/cluster-wide
```
Build the examples:

```shell
kustomize build example/vault-namespace
kustomize build example/cluster-wide
```

Get kustomize:

```shell
go get -u sigs.k8s.io/kustomize
```
This Vault setup is intended to be used with other elements to provide an easy way for applications to access cloud resources.
Here is a complete step-by-step guide to providing a Kubernetes application with access to an AWS bucket.