Test PowerFlex CSI Driver
This section provides multiple methods to test driver functionality in your environment.
Note: To run the tests for the CSI Driver for Dell PowerFlex, install Helm 3.
Test deploying a simple pod with PowerFlex storage
Test the deployment workflow of a simple pod on PowerFlex storage.
Prerequisites
In the source code, there is a directory that contains examples of how you can use the driver. To use these examples, you must create a helmtest-vxflexos namespace, using kubectl create namespace helmtest-vxflexos, before you can start testing. Helm 3 must be installed to perform the tests.
The starttest.sh script is located in the csi-vxflexos/test/helm directory. This script is used in the following procedure to deploy Helm charts that test the deployment of a simple pod.
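Before running any of the tests, a quick setup check (a sketch; it assumes kubectl and helm are in your PATH) looks like this:
kubectl create namespace helmtest-vxflexos   # namespace used by all Helm tests
helm version --short                         # should report v3.x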
Steps
- Navigate to the test/helm directory, which contains the starttest.sh script and the 2vols directory. This directory contains a simple Helm chart that deploys a pod that uses two PowerFlex volumes. NOTE: Helm tests assume you are using the default storageclass names (vxflexos and vxflexos-xfs). If your storageclass names differ from these values, update the templates in 2vols accordingly (located in the test/helm/2vols/templates directory). You can use kubectl get sc to check the storageclass names.
- Run sh starttest.sh 2vols to deploy the pod. You should see the following (a verification sketch follows this list):
Normal Pulled 38s kubelet, k8s113a-10-247-102-215.lss.emc.com Successfully pulled image "docker.io/centos:latest"
Normal Created 38s kubelet, k8s113a-10-247-102-215.lss.emc.com Created container
Normal Started 38s kubelet, k8s113a-10-247-102-215.lss.emc.com Started container
/dev/scinib 8125880 36852 7653216 1% /data0
/dev/scinia 16766976 32944 16734032 1% /data1
/dev/scinib on /data0 type ext4 (rw,relatime,data=ordered)
/dev/scinia on /data1 type xfs (rw,relatime,attr2,inode64,noquota)
- To stop the test, run sh stoptest.sh 2vols. This script deletes the pods and the volumes, depending on the retention setting you have configured.
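While the pod from the starttest.sh run is still up, you can verify the deployment outside the script with standard kubectl commands (a sketch; the chart deploys everything into the helmtest-vxflexos namespace):
kubectl get pods -n helmtest-vxflexos   # the test pod should be Running
kubectl get pvc -n helmtest-vxflexos    # both claims should be Bound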
Results
An outline of this workflow is described below:
- The 2vols Helm chart contains two PersistentVolumeClaim definitions, one in pvc0.yaml and the other in pvc1.yaml. They are referenced by test.yaml, which creates the pod. The contents of the pvc0.yaml file are shown below:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvol0
  namespace: helmtest-vxflexos
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 8Gi
  storageClassName: vxflexos
- The volumeMode: Filesystem requires a mounted file system, and the resources.requests.storage of 8Gi requests an 8 GiB volume. In this case, the storageClassName: vxflexos directs the system to use a storage class named vxflexos. This step yields a mounted ext4 file system. You can create the vxflexos and vxflexos-xfs storage classes by using the YAML files located in samples/storageclass.
- If you compare pvc0.yaml and pvc1.yaml, you will find that the latter uses a different storage class, vxflexos-xfs, which gives you an xfs file system (a sketch of pvc1.yaml follows this list).
- To see the volumes you created, run kubectl get persistentvolumeclaim -n helmtest-vxflexos and kubectl describe persistentvolumeclaim -n helmtest-vxflexos.
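For reference, a sketch of what pvc1.yaml might look like, assuming it mirrors pvc0.yaml except for the claim name (pvol1 here is an assumption) and the storage class; the 16Gi size is inferred from the df output above:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvol1              # assumed name; check test/helm/2vols/templates
  namespace: helmtest-vxflexos
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 16Gi        # inferred from the df output shown earlier
  storageClassName: vxflexos-xfs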
NOTE: For more information about Kubernetes objects like StatefulSet and PersistentVolumeClaim, see Kubernetes documentation: Concepts.
Test creating snapshots
Test the workflow for snapshot creation.
NOTE: Starting with version 2.0, the CSI Driver for PowerFlex Helm tests are designed to work exclusively with v1 snapshots.
Steps
- Start the 2vols container and leave it running.
  - Helm tests assume you are using the default storageclass names (vxflexos and vxflexos-xfs). If your storageclass names differ from these values, update the templates in 2vols accordingly (located in the test/helm/2vols/templates directory). You can use kubectl get sc to check the storageclass names.
  - Helm tests assume you are using the snapshotclass name vxflexos-snapclass. If your snapshotclass name differs from the default, update snap1.yaml and snap2.yaml accordingly.
- Run sh snaptest.sh to start the test.
This will create a snapshot of each of the volumes in the container using the VolumeSnapshot objects defined in snap1.yaml and snap2.yaml. The following are the contents of snap1.yaml:
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: pvol0-snap1
  namespace: helmtest-vxflexos
spec:
  volumeSnapshotClassName: vxflexos-snapclass
  source:
    persistentVolumeClaimName: pvol0
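If you want to create the snapshot by hand rather than through the script, a minimal sketch using the definition above is:
kubectl apply -f snap1.yaml
# readyToUse should report true once the snapshot is taken
kubectl get volumesnapshot pvol0-snap1 -n helmtest-vxflexos -o jsonpath='{.status.readyToUse}'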
Results
The snaptest.sh script creates a snapshot using the definitions in the snap1.yaml file. The spec.source section identifies the volume to be snapped. For example, if the volume to be snapped is pvol0, then the created snapshot is named pvol0-snap1.
NOTE: The snaptest.sh shell script creates the snapshots, describes them, and then deletes them. You can see your snapshots using kubectl get volumesnapshot -n helmtest-vxflexos.
Notice that this VolumeSnapshot references volumeSnapshotClassName: vxflexos-snapclass. The CSI Driver for Dell PowerFlex installation does not create this class. You will need to create an instance of VolumeSnapshotClass from one of the default samples in the samples/volumesnapshotclass directory.
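For illustration, a minimal VolumeSnapshotClass is sketched below; the driver name and deletionPolicy shown are assumptions, so use the actual file from samples/volumesnapshotclass as your source of truth:
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: vxflexos-snapclass
driver: csi-vxflexos.dellemc.com   # assumed driver name; verify against the sample
deletionPolicy: Delete             # assumed; the sample may differ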
Test restoring from a snapshot
Test the restore operation workflow to restore from a snapshot.
Prerequisites
Ensure that you have stopped any previous test instance before performing this procedure.
Steps
- Run sh snaprestoretest.sh to start the test.
This script deploys the 2vols example, creates a snapshot of pvol0, and then upgrades the deployed Helm chart from the updated directory 2vols+restore, which adds an additional volume created from the snapshot.
NOTE:
- Helm tests assume you are using the default storageclass names (vxflexos and vxflexos-xfs). If your storageclass names differ from these values, update the templates for the snap restore tests accordingly (located in the test/helm/2vols+restore/template directory). You can use kubectl get sc to check the storageclass names.
- Helm tests assume you are using the snapshotclass name vxflexos-snapclass. If your snapshotclass name differs from the default, update snap1.yaml and snap2.yaml accordingly.
Results
An outline of this workflow is described below:
- The snapshot is taken using snap1.yaml.
- Helm is called to upgrade the deployment with a new definition, which is found in the 2vols+restore directory. The csi-vxflexos/test/helm/2vols+restore/templates directory contains the newly created createFromSnap.yaml file. The script creates a PersistentVolumeClaim, which is a volume dynamically created from the snapshot, and the Helm deployment is then upgraded to contain this newly created third volume. In other words, when snaprestoretest.sh creates a new volume with data from the snapshot, the restore operation is tested. The contents of createFromSnap.yaml are shown below:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: restorepvc
  namespace: helmtest-vxflexos
spec:
  storageClassName: vxflexos
  dataSource:
    name: pvol0-snap1
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
NOTE: The spec.dataSource clause specifies a source VolumeSnapshot named pvol0-snap1, which matches the snapshot's name in snap1.yaml.
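To confirm the restore by hand, a quick check (a sketch; the names match the createFromSnap.yaml shown above) is:
kubectl get pvc restorepvc -n helmtest-vxflexos        # should reach Bound
kubectl describe pvc restorepvc -n helmtest-vxflexos   # shows the DataSource and events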
Test creating NFS volumes
Steps
- Navigate to the test/helm directory, which contains the starttest.sh script and the 1vol-nfs directory. This directory contains a simple Helm chart that deploys a pod that uses one PowerFlex volume with the NFS filesystem type.
NOTE:
- Helm tests assume you are using the storageclass name vxflexos-nfs. If your storageclass name differs from this value, update the templates in 1vol-nfs accordingly (located in the test/helm/1vol-nfs/templates directory). You can use kubectl get sc to check the storageclass names.
- Run sh starttest.sh 1vol-nfs to deploy the pod. You should see the following (a mount-check sketch follows this list):
Normal Scheduled default-scheduler, Successfully assigned helmtest-vxflexos/vxflextest-0 to worker-1-zwfjtd4eoblkg.domain
Normal SuccessfulAttachVolume attachdetach-controller, AttachVolume.Attach succeeded for volume "k8s-e279d47296"
Normal Pulled 13s kubelet, Successfully pulled image "docker.io/centos:latest" in 791.117427ms (791.125522ms including waiting)
Normal Created 13s kubelet, Created container test
Normal Started 13s kubelet, Started container test
10.x.x.x:/k8s-e279d47296 8388608 1582336 6806272 19% /data0
10.x.x.x:/k8s-e279d47296 on /data0 type nfs4 (rw,relatime,vers=4.2,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.x.x.x,local_lock=none,addr=10.x.x.x)
- To stop the test, run sh stoptest.sh 1vol-nfs. This script deletes the pods and the volumes, depending on the retention setting you have configured.
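To confirm the NFS mount from inside the pod, a quick check (a sketch; the pod name vxflextest-0 is taken from the events above, and the test must still be running) is:
kubectl exec vxflextest-0 -n helmtest-vxflexos -- mount | grep nfs4
kubectl exec vxflextest-0 -n helmtest-vxflexos -- df -h /data0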
Results
An outline of this workflow is described below:
- The 1vol-nfs Helm chart contains one PersistentVolumeClaim definition in pvc0.yaml. It is referenced by test.yaml, which creates the pod. The contents of the pvc0.yaml file are shown below:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvol0
  namespace: helmtest-vxflexos
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 8Gi
  storageClassName: vxflexos-nfs
- The volumeMode: Filesystem requires a mounted file system, and the resources.requests.storage of 8Gi requests an 8 GiB volume. In this case, the storageClassName: vxflexos-nfs directs the system to use a storage class named vxflexos-nfs. This step yields a mounted NFS file system. You can create the vxflexos-nfs storage class by using the YAML located in samples/storageclass.
- To see the volumes you created, run kubectl get persistentvolumeclaim -n helmtest-vxflexos and kubectl describe persistentvolumeclaim -n helmtest-vxflexos.
NOTE: For more information about Kubernetes objects like StatefulSet and PersistentVolumeClaim, see Kubernetes documentation: Concepts.
Test restoring NFS volume from snapshot
Test the restore operation workflow to restore NFS volume from a snapshot.
Prerequisites
Ensure that you have stopped any previous test instance before performing this procedure.
Steps
- Run sh snaprestoretest-nfs.sh to start the test.
This script deploys the 1vol-nfs example, creates a snapshot of pvol0, and then upgrades the deployed Helm chart from the updated directory 1vols+restore-nfs, which adds an additional volume created from the snapshot.
NOTE:
- Helm tests assume you are using the storageclass name vxflexos-nfs. If your storageclass name differs from this value, update the templates for 1vols+restore-nfs accordingly (located in the test/helm/1vols+restore-nfs/template directory). You can use kubectl get sc to check the storageclass names.
- Helm tests assume you are using the snapshotclass name vxflexos-snapclass. If your snapshotclass name differs from the default, update snap1.yaml accordingly.
Results
An outline of this workflow is described below:
- The snapshot is taken using snap1.yaml.
- Helm is called to upgrade the deployment with a new definition, which is found in the 1vols+restore-nfs directory. The csi-vxflexos/test/helm/1vols+restore-nfs/templates directory contains the newly created createFromSnap.yaml file. The script creates a PersistentVolumeClaim, which is a volume dynamically created from the snapshot, and the Helm deployment is then upgraded to contain this newly created second volume. In other words, when snaprestoretest-nfs.sh creates a new volume with data from the snapshot, the restore operation is tested. The contents of createFromSnap.yaml are shown below:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: restorepvc
  namespace: helmtest-vxflexos
spec:
  storageClassName: vxflexos-nfs
  dataSource:
    name: pvol0-snap1
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
NOTE: The spec.dataSource clause specifies a source VolumeSnapshot named pvol0-snap1, which matches the snapshot's name in snap1.yaml.