ZFS LocalPV e2e test cases
- To see the CI/E2E summary of the GitLab pipelines, [click here](https://gitlab.openebs.ci/openebs/e2e-nativek8s/pipelines/) and switch to the k8s tab under the stable releases category.
- To see the test results and the README files for the test cases, [click here] and navigate to the various stages.
Kubernetes version: v1.17.2 (cluster with 1 master + 3 worker nodes)
Application used for E2e test flow: Percona-mysql
ZFS version:
```
$ modinfo zfs | grep -iw version
version: 0.8.1-1ubuntu14.4
```
OS details:
```
$ cat /etc/os-release
NAME="Ubuntu"
VERSION="18.04.3 LTS (Bionic Beaver)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 18.04.3 LTS"
VERSION_ID="18.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=bionic
UBUNTU_CODENAME=bionic
```
Installation
- Validation of installing ZFS-LocalPV provisioner.
- Install the ZFSPV controller in High Availability (HA) mode and check the behaviour of the zvolume when one of the controller replicas is down (see the sketch below).
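A minimal sketch of how these checks can be driven, assuming the operator manifest published at openebs.github.io and the default kube-system namespace used by the driver at this time; the StatefulSet name `openebs-zfs-controller` and the label `role=openebs-zfs` are taken from the upstream deploy manifests and should be re-checked against the release under test.

```sh
# Install the ZFS-LocalPV provisioner (manifest URL assumed; verify for the release under test).
kubectl apply -f https://openebs.github.io/charts/zfs-operator.yaml

# Validate that the controller and node-agent pods come up.
kubectl get pods -n kube-system -l role=openebs-zfs

# HA check: run two controller replicas, then take one down and confirm that
# provisioning of a zvolume still succeeds.
kubectl scale statefulset openebs-zfs-controller -n kube-system --replicas=2
kubectl delete pod openebs-zfs-controller-0 -n kube-system
```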
Volume Provisioning
- E2e test for successful provisioning and deprovisioning of the volume (fsTypes: zfs, xfs, ext4 and btrfs); a storage-class and PVC sketch follows this list.
- E2e test of custom-topology support for ZFSPV set via storage-class.
- E2e test for Raw-block-volume support for ZFSPV.
- E2e test for shared mount support for zfs-localpv.
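A hedged sketch of the kind of storage-class and PVCs these cases use; the provisioner name `zfs.csi.openebs.io` and the `fstype`/`poolname`/`shared` parameters follow the upstream zfs-localpv documentation, while the pool name, node names and claim names below are placeholders.

```sh
# StorageClass with a chosen fstype, shared-mount support and custom topology
# pinned to the nodes that carry the ZFS pool (placeholder values throughout).
cat <<EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-zfspv
provisioner: zfs.csi.openebs.io
parameters:
  fstype: "zfs"            # also exercised with xfs, ext4 and btrfs
  poolname: "zfspv-pool"   # placeholder pool name
  shared: "yes"            # shared mount support case
allowedTopologies:
- matchLabelExpressions:
  - key: kubernetes.io/hostname
    values:
    - worker-node-1        # placeholder node names
    - worker-node-2
EOF

# Raw-block-volume case: the PVC requests volumeMode: Block instead of a filesystem.
cat <<EOF | kubectl apply -f -
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: zfspv-block-claim
spec:
  storageClassName: openebs-zfspv
  volumeMode: Block
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi
EOF
```

Deprovisioning is then verified by deleting the claim and checking on the node that the backing dataset is removed from the pool.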
Zvolume Properties
- Verification of zvolume properties set via the storage-class.
- Modification of zvolume properties after zvolume creation, i.e. at runtime (properties: compression, dedup and recordsize); see the sketch after this list.
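A sketch of both halves of this, assuming the `compression`, `dedup` and `recordsize` storage-class parameters documented upstream and the `openebs` namespace for the ZFSVolume (`zv`) custom resources; the pool and volume names are placeholders.

```sh
# Properties set at creation time via the storage-class.
cat <<EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: zfspv-properties
provisioner: zfs.csi.openebs.io
parameters:
  poolname: "zfspv-pool"   # placeholder pool name
  fstype: "zfs"
  compression: "on"
  dedup: "on"
  recordsize: "4k"
EOF

# Verify on the node that the dataset actually carries the requested properties.
zfs get compression,dedup,recordsize zfspv-pool/<pvc-uuid>

# Runtime modification: edit the ZFSVolume CR backing the PV, change the same
# properties there, then re-check with `zfs get` on the node.
kubectl edit zv <pvc-uuid> -n openebs
```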
Volume Resize
- ZFS volume resize test (file-systems: zfs, xfs and ext4); see the sketch below.
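A minimal resize sketch, assuming the storage-class allows expansion; the claim name and the Percona namespace used here are placeholders, and the driver is expected to grow the dataset and the filesystem after the patch.

```sh
# allowVolumeExpansion can be enabled on an existing StorageClass.
kubectl patch sc openebs-zfspv -p '{"allowVolumeExpansion": true}'

# Request a larger size on the claim (names and sizes are placeholders).
kubectl patch pvc csi-zfspv -n percona-ns \
  -p '{"spec": {"resources": {"requests": {"storage": "8Gi"}}}}'

# Confirm the new capacity from the PVC and from inside the application pod.
kubectl get pvc csi-zfspv -n percona-ns
```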
Snapshot & Clone
- E2e test case for ZFS-LocalPV snapshot and clone (file-systems: zfs, xfs, ext4 and btrfs); see the sketch below.
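A hedged sketch of the snapshot and clone objects, using the v1beta1 snapshot API that matches the Kubernetes v1.17.2 cluster above; the driver name follows the upstream docs, all object names are placeholders, and the snapshot and clone are created in the application's namespace.

```sh
cat <<EOF | kubectl apply -f -
kind: VolumeSnapshotClass
apiVersion: snapshot.storage.k8s.io/v1beta1
metadata:
  name: zfspv-snapclass
driver: zfs.csi.openebs.io
deletionPolicy: Delete
---
kind: VolumeSnapshot
apiVersion: snapshot.storage.k8s.io/v1beta1
metadata:
  name: zfspv-snap
spec:
  volumeSnapshotClassName: zfspv-snapclass
  source:
    persistentVolumeClaimName: csi-zfspv   # placeholder source claim
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: zfspv-clone
spec:
  storageClassName: openebs-zfspv
  dataSource:
    name: zfspv-snap
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi   # must be at least the size of the source volume
EOF
```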
Backup & Restore
- Create a backup of the namespace after dumping some data into the application, and check data consistency after the restore (a hedged command sketch follows this list).
- Create a backup of one namespace, restore it into another namespace, repeat this for 3-4 different namespaces, and check that zfs-localpv behaves correctly and every restore completes successfully.
- After taking a backup of multiple namespaces in a single backup, deprovision the base volumes, restart the zfs-localpv driver components, and after that restore the backup.
- If the backup is taken with volume-snapshots, then while restoring, the snapshot points to the whole data instead of only the snapshot data. #issue
- Create schedules while continuously dumping data into the application; a restore should consist of only the data captured by the particular scheduled backup.
- Create a backup of a namespace in which multiple applications are running on different nodes, and verify the successful backup and restore of this namespace.
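The page does not name the backup tooling, so the sketch below assumes Velero with the OpenEBS velero-plugin, which is the combination typically used with zfs-localpv; backup, schedule and namespace names are placeholders.

```sh
# Back up the application namespace after data has been dumped into Percona.
velero backup create percona-backup --include-namespaces=percona-ns

# Restore into a different namespace; repeated with several namespace mappings
# for the multi-namespace cases.
velero restore create --from-backup percona-backup \
  --namespace-mappings percona-ns:percona-ns-restore

# Scheduled backups while data is written continuously; a restore from a given
# schedule should contain only the data present at that backup.
velero schedule create percona-schedule \
  --schedule="*/5 * * * *" --include-namespaces=percona-ns
```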
Infra-chaos
- Test case for restart of the docker runtime on the node where the volume is provisioned (see the sketch below).
- Test case for restart of the kubelet service on the node where the volume is provisioned.
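A minimal sketch of the chaos injection, assuming SSH access to the node that hosts the volume and the placeholder application namespace used earlier; the application pod and the zfs-localpv node agent are expected to recover on their own.

```sh
# On the node where the volume is provisioned:
sudo systemctl restart docker
sudo systemctl restart kubelet

# From the control plane: the node should return to Ready and the application
# pod should keep (or regain) Running state with its data intact.
kubectl get nodes
kubectl get pods -n percona-ns -o wide
```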
Upgrade Testing
- Upgrade the ZFS-LocalPV components and verify that older volumes are not impacted by any issue (see the sketch after this list).
- Provision a new volume after upgrading the zfspv components.
- Check that the parent volume is not deleted while a volume snapshot is present.
- Test case for the scheduler to verify that it is doing volume-count-based scheduling.
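A rough upgrade sketch, assuming the operator manifest location used for installation above; the exact manifest or tag should come from the release being upgraded to, and the `zv` listing assumes the `openebs` namespace for the ZFSVolume CRs.

```sh
# Apply the newer operator manifest over the running deployment.
kubectl apply -f https://openebs.github.io/charts/zfs-operator.yaml

# Existing volumes should stay healthy and a fresh PVC should still bind.
kubectl get pods -n kube-system -l role=openebs-zfs
kubectl get zv -n openebs
kubectl get pvc --all-namespaces
```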