The StorPool CSI driver enables CSI-compliant container orchestrators (e.g. Kubernetes) to
dynamically provision and attach StorPool volumes. Currently, the driver supports only the
StorPool native protocol, so all Kubernetes hosts must have the StorPool native client service
(`storpool_block`) configured and running.
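Before deploying the driver, you can confirm that the client service is up on each node. A minimal
check, assuming the client is managed as a systemd unit named `storpool_block` (adjust if your
installation differs):

```
# Assumption: the StorPool block client runs as the "storpool_block" systemd unit.
systemctl is-active storpool_block
```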
The StorPool CSI driver can be deployed using the precompiled manifests inside the `manifests/`
directory. The deployment procedure assumes that StorPool is already installed and configured.
The procedure looks like this:
- Apply the `ControllerPlugin` and `NodePlugin` manifests:

  ```
  kubectl apply -f manifests/
  ```

  Please note that the manifests will create all respective resources inside the `kube-system`
  namespace. Feel free to edit them if you want to create the resources elsewhere. Commands for
  verifying that the plugin pods are running are shown after this list.
- Create a `StorageClass` resource. Mapping a `StorageClass` to a StorPool template allows the
  Kubernetes operator to utilize multiple storage media if such are present in the StorPool
  cluster. More information on StorPool templates can be found here. Below you can find an
  example `StorageClass`:

  ```yaml
  apiVersion: storage.k8s.io/v1
  kind: StorageClass
  metadata:
    name: storpool-nvme
  provisioner: csi.storpool.com
  allowVolumeExpansion: true
  parameters:
    template: nvme
  volumeBindingMode: WaitForFirstConsumer
  ```
- Finally, one can create a PVC to test whether the CSI driver is configured properly. Please note
  that the StorPool CSI driver supports only the `ReadWriteOnce` access mode. Since the example
  `StorageClass` uses `volumeBindingMode: WaitForFirstConsumer`, the PVC will stay `Pending` until
  a pod that uses it is scheduled; see the example pod after this list.

  ```yaml
  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: test-storpool-pvc
  spec:
    storageClassName: storpool-nvme
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 10Gi
  ```
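After applying the manifests from the first step, you can check that the controller and node
plugin pods came up. The exact pod names depend on the manifests, so the filter below is only an
assumption about how they are named:

```
# The plugin pods are created in the kube-system namespace by the supplied manifests;
# filtering on "storpool" in the pod name is an assumption.
kubectl -n kube-system get pods | grep -i storpool

# If the manifests register a CSIDriver object, the driver should also be listed here.
kubectl get csidrivers
```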
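To complete the test from the last step, a pod has to consume the PVC so that the volume is
actually provisioned and attached. A minimal sketch, where the pod name and image are arbitrary
examples and not part of the driver's manifests:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-storpool-pod   # hypothetical name, used only for this test
spec:
  containers:
    - name: app
      image: busybox        # any image that can keep the container running will do
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - name: data
          mountPath: /data  # the StorPool volume will be mounted here
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: test-storpool-pvc
```

Once the pod is scheduled, `kubectl get pvc test-storpool-pvc` should report the claim as `Bound`
and the pod should be able to read and write files under `/data`.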