The CSI Driver for Dell PowerFlex can be deployed by using the provided Helm v3 charts and installation scripts on both Kubernetes and OpenShift platforms. For more detailed information on the installation scripts, review the script documentation.
The controller section of the Helm chart installs the following components in a Deployment in the specified namespace:
- CSI Driver for Dell PowerFlex
- Kubernetes External Provisioner, which provisions the volumes
- Kubernetes External Attacher, which attaches the volumes to the containers
- Kubernetes External Snapshotter, which provides snapshot support
- Kubernetes External Resizer, which resizes the volume
The node section of the Helm chart installs the following component in a DaemonSet in the specified namespace:
- CSI Driver for Dell PowerFlex
- Kubernetes Node Registrar, which handles the driver registration
The following are requirements that must be met before installing the CSI Driver for Dell PowerFlex:
- Install Kubernetes or OpenShift (see supported versions)
- Install Helm 3
- Enable Zero Padding on PowerFlex
- Ensure that mount propagation is enabled on the container runtime that is being used
- Install PowerFlex Storage Data Client
- If using Snapshot feature, satisfy all Volume Snapshot requirements
- A user must exist on the array with a role >= FrontEndConfigure
- If enabling CSM for Authorization, please refer to the Authorization deployment steps first
- If multipath is configured, ensure CSI-PowerFlex volumes are blacklisted by multipathd. See troubleshooting section for details
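For the multipath prerequisite, a minimal blacklist sketch is shown below. This assumes SDC volumes appear as `/dev/scini*` devices on your nodes; verify the device naming in your environment and follow the troubleshooting section for the authoritative guidance.

```
# /etc/multipath.conf (illustrative excerpt)
blacklist {
  devnode "^scini[a-z]+"
}
```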
Install Helm 3.0
Install Helm 3.0 on the master node before you install the CSI Driver for Dell PowerFlex.
Run `curl https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | bash` to install Helm 3.0.
Enable Zero Padding on PowerFlex
Verify that zero padding is enabled on the PowerFlex storage pools that will be used. Use the PowerFlex GUI or the PowerFlex CLI to check this setting. For more information on configuring this setting, see the Dell PowerFlex documentation.
Install PowerFlex Storage Data Client
The CSI Driver for PowerFlex requires the PowerFlex Storage Data Client (SDC) to be installed on all Kubernetes nodes that run the node portion of the CSI driver. The SDC can be installed automatically by the CSI driver install on Kubernetes nodes whose OS platform supports automatic SDC deployment: Red Hat CoreOS (RHCOS), RHEL 7.9, and RHEL 8.x. On Kubernetes nodes with an OS version not supported by automatic install, you must perform the Manual SDC Deployment steps below. Refer to https://hub.docker.com/r/dellemc/sdc for supported OS versions.
NOTE: To install the CSI driver for PowerFlex with automated SDC deployment, the following two packages are required on the worker nodes.
Optional: For a typical install, you will pull SDC kernel modules from the Dell FTP site, which is set up by default. Some users might want to mirror this repository to a local location. The PowerFlex KB article has instructions on how to do this.
Manual SDC Deployment
For detailed PowerFlex installation procedure, see the Dell PowerFlex Deployment Guide. Install the PowerFlex SDC as follows:
- Download the PowerFlex SDC from Dell Online support. The filename is EMC-ScaleIO-sdc-*.rpm, where * is the SDC name corresponding to the PowerFlex installation version.
- Export the shell variable MDM_IP as a comma-separated list using `export MDM_IP=xx.xxx.xx.xx,xx.xxx.xx.xx`, where xxx represents the actual IP addresses in your environment. This list contains the IP addresses of the MDMs.
- Install the SDC per the Dell PowerFlex Deployment Guide:
- For Red Hat Enterprise Linux and CentOS, run `rpm -iv ./EMC-ScaleIO-sdc-*.x86_64.rpm`, where * is the SDC name corresponding to the PowerFlex installation version.
- To add more MDM_IPs for multi-array support, run `/opt/emc/scaleio/sdc/bin/drv_cfg --add_mdm --ip 10.xx.xx.xx,10.xx.xx.xx`
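The manual steps above can be sketched as one script. This is a dry-run sketch (`DRY_RUN=1` by default prints the commands instead of executing them); the MDM addresses are placeholders for your environment, and the package filename varies with the PowerFlex version.

```shell
#!/bin/sh
# Dry-run sketch of the manual SDC deployment steps.
# Set DRY_RUN=0 and run as root on a node to actually execute.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = "1" ]; then echo "+ $*"; else "$@"; fi; }

# Comma-separated MDM IPs (placeholders for your environment)
export MDM_IP=10.0.0.1,10.0.0.2

# Install the SDC package downloaded from Dell Online support
run rpm -iv ./EMC-ScaleIO-sdc-*.x86_64.rpm

# Register the MDMs of an additional array for multi-array support
run /opt/emc/scaleio/sdc/bin/drv_cfg --add_mdm --ip 10.0.1.1,10.0.1.2
```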
(Optional) Volume Snapshot Requirements
Applicable only if you decided to enable the snapshot feature by setting `controller.snapshot.enabled` to `true` in your values file.
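The corresponding values-file fragment looks like this (an illustrative excerpt of `myvalues.yaml`):

```
controller:
  snapshot:
    enabled: true
```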
Volume Snapshot CRDs
The Kubernetes Volume Snapshot CRDs can be obtained and installed from the external-snapshotter project on GitHub. Manifests are available here: v6.1.x
Volume Snapshot Controller
The CSI external-snapshotter sidecar is split into two controllers:
- A common snapshot controller
- A CSI external-snapshotter sidecar
The common snapshot controller must be installed only once in the cluster, irrespective of the number of CSI drivers installed in the cluster. On OpenShift clusters 4.4 and later, the common snapshot-controller is pre-installed. In clusters where it is not present, it can be installed using `kubectl`; the manifests are available here: v6.1.x
- The manifests available on GitHub install the snapshotter image.
- The CSI external-snapshotter sidecar is still installed along with the driver and does not involve any extra configuration.
You can install the CRDs and the default snapshot controller by running the following commands:
`git clone https://github.com/kubernetes-csi/external-snapshotter/`
`cd ./external-snapshotter`
`git checkout release-<your-version>`
`kubectl kustomize client/config/crd | kubectl create -f -`
`kubectl -n kube-system kustomize deploy/kubernetes/snapshot-controller | kubectl create -f -`
- When using Kubernetes, it is recommended to use the 6.1.x version of the snapshotter/snapshot-controller.
Install the Driver
Run `git clone -b v2.5.0 https://github.com/dell/csi-powerflex.git` to clone the git repository.
Ensure that you have created a namespace where you want to install the driver. You can run `kubectl create namespace vxflexos` to create a new one.
Collect information from the PowerFlex SDC by executing the `get_vxflexos_info.sh` script located in the `scripts` directory. This script shows the VxFlex OS system ID and MDM IP addresses. Make a note of the values for these parameters as they must be entered into `samples/config.yaml` for driver configuration. The following table lists driver configuration parameters for multiple storage arrays.
| Parameter | Description | Required | Default |
| --- | --- | --- | --- |
| username | Username for accessing the PowerFlex system. If authorization is enabled, username will be ignored. | true | - |
| password | Password for accessing the PowerFlex system. If authorization is enabled, password will be ignored. | true | - |
| systemID | System name/ID of the PowerFlex system. | true | - |
| allSystemNames | List of previous names of the PowerFlex array, if used for PV creation. | false | - |
| endpoint | REST API gateway HTTPS endpoint/PowerFlex Manager public IP for the PowerFlex system. If authorization is enabled, the endpoint should be the HTTPS localhost endpoint that the authorization sidecar will listen on. | true | - |
| skipCertificateValidation | Determines if the driver is going to validate certs while connecting to the PowerFlex REST API interface. | true | true |
| isDefault | An array having isDefault=true is for backward compatibility. This parameter should occur once in the list. | false | false |
| mdm | Defines the MDM(s) that the SDC should register with on start. This should be a comma-separated list of MDM IP addresses or hostnames. | true | - |
    - username: "admin"
      password: "Password123"
      systemID: "ID2"
      endpoint: "https://127.0.0.2"
      skipCertificateValidation: true
      isDefault: true
      mdm: "10.0.0.3,10.0.0.4"
NOTE: To use multiple arrays, copy and paste the section above for each array. Make sure isDefault is set to true for only one array.
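For example, a hypothetical two-array `config.yaml` could look like the following. All credentials, IDs, and addresses are placeholders, and the file is written to `/tmp` here purely for illustration; note that exactly one array sets `isDefault: true`.

```shell
# Write an illustrative two-array configuration.
cat > /tmp/config.yaml <<'EOF'
- username: "admin"
  password: "Password123"
  systemID: "ID1"
  endpoint: "https://127.0.0.1"
  skipCertificateValidation: true
  isDefault: true
  mdm: "10.0.0.1,10.0.0.2"
- username: "admin"
  password: "Password123"
  systemID: "ID2"
  endpoint: "https://127.0.0.2"
  skipCertificateValidation: true
  isDefault: false
  mdm: "10.0.0.3,10.0.0.4"
EOF
```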
After editing the file, run the following command to create a secret called `vxflexos-config`:
`kubectl create secret generic vxflexos-config -n vxflexos --from-file=config=samples/config.yaml`
Use the following command to replace or update the secret:
`kubectl create secret generic vxflexos-config -n vxflexos --from-file=config=samples/config.yaml -o yaml --dry-run=client | kubectl replace -f -`
- The user needs to validate the YAML syntax and array-related key/values while replacing the vxflexos-config secret.
- If you want to create a new array or update the MDM values in the secret, you will need to reinstall the driver. If you change other details, such as login information, the secret will dynamically update – see dynamic-array-configuration for more details.
The `json` format of the array configuration file is still supported in this release. If you already have your configuration in `json` format, you may continue to maintain it, or you may transfer this configuration to `yaml` format and replace/update the secret.
- The “insecure” parameter has been changed to “skipCertificateValidation”, as “insecure” is deprecated and will be removed from config.yaml or secret.yaml in a future release. Users can continue to use either “insecure” or “skipCertificateValidation” for now. The driver returns an error if both parameters are used.
- Please note that log configuration parameters from v1.5 will no longer work in v2.0 and higher. Please refer to the Dynamic Logging Configuration section in Features for more information.
- If the user is using a complex Kubernetes version like “v1.21.3-mirantis-1”, use this kubeVersion check in the helm/csi-vxflexos/Chart.yaml file: kubeVersion: “>= 1.21.0-0 < 1.26.0-0”
Default logging options are set during Helm install. To see possible configuration options, see the Dynamic Logging Configuration section in Features.
If using automated SDC deployment:
- Check that the SDC container image is the correct version for your version of PowerFlex.
Copy the default values.yaml file:
`cd helm && cp csi-vxflexos/values.yaml myvalues.yaml`
If you are using a custom image, check `myvalues.yaml` to make sure that the image fields point to the correct image repository and driver version. These two fields are spliced together to form the image name, as shown here:
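For instance, with the values below (an illustrative excerpt; field names per the table that follows), the repository and version are combined into the image reference:

```
# myvalues.yaml (excerpt)
version: v2.5.0            # spliced into the image tag
driverRepository: dellemc  # spliced into the image repository
# resulting driver image: dellemc/csi-vxflexos:v2.5.0
```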
Look over all the other fields in `myvalues.yaml` and fill in/adjust any as needed. All the fields are described here:
| Parameter | Description | Required | Default |
| --- | --- | --- | --- |
| version | Set to verify the values file version matches the driver version; also used to pull the image as part of the image name. | Yes | 2.5.0 |
| driverRepository | Set to give the repository containing the driver image (used as part of the image name). | Yes | dellemc |
| powerflexSdc | Set to give the location of the SDC image used if automatic SDC deployment is being utilized. | No | dellemc/sdc:3.6 |
| certSecretCount | Represents the number of certificate secrets which the user is going to create for SSL authentication. | No | 0 |
| logLevel | CSI driver log level. Allowed values: “error”, “warn”/“warning”, “info”, “debug”. | Yes | “debug” |
| logFormat | CSI driver log format. Allowed values: “TEXT” or “JSON”. | Yes | “TEXT” |
| kubeletConfigDir | kubelet config directory path. Ensure that the config.yaml file is present at this path. | Yes | /var/lib/kubelet |
| defaultFsType | Used to set the default FS type which will be used for mount volumes if FsType is not specified in the storage class. Allowed values: ext4, xfs. | Yes | ext4 |
| fsGroupPolicy | Defines which FS Group policy mode to be used. Supported modes are None, File, and ReadWriteOnceWithFSType (the standard Kubernetes CSIDriver fsGroupPolicy values). | No | ReadWriteOnceWithFSType |
| imagePullPolicy | Policy to determine if the image should be pulled prior to starting the container. Allowed values: Always, IfNotPresent, Never. | Yes | IfNotPresent |
| enablesnapshotcgdelete | A boolean that, when enabled, will delete all snapshots in a consistency group every time a snap in the group is deleted. | Yes | false |
| enablelistvolumesnapshot | A boolean that, when enabled, will allow the list volume operation to include snapshots (since creating a volume from a snap actually results in a new snap). It is recommended this be false unless instructed otherwise. | Yes | false |
| allowRWOMultiPodAccess | Setting allowRWOMultiPodAccess to “true” will allow multiple pods on the same node to access the same RWO volume. This behavior conflicts with the CSI specification version 1.3 NodePublishVolume description, which requires an error to be returned in this case. However, some other CSI drivers support this behavior and some customers desire it. Customers use this option at their own risk. | Yes | false |
| controller | This section allows the configuration of controller-specific parameters. To maximize the number of available nodes for controller pods, see this section. For more details on the new controller pod configurations, see the Features section for PowerFlex specifics. | - | - |
| volumeNamePrefix | Set so that volumes created by the driver have a default prefix. If one PowerFlex/VxFlex OS system is servicing several different Kubernetes installations or users, these prefixes help you distinguish them. | Yes | “k8s” |
| controllerCount | Set to deploy multiple controller instances. If the controller count is greater than the number of available nodes, excess pods remain in a pending state. It should be greater than 0. You can increase the number of available nodes by configuring the “controller” section in your values.yaml. For more details on the new controller pod configurations, see the Features section for PowerFlex specifics. | Yes | 2 |
| snapshot.enabled | A boolean that enables/disables the volume snapshot feature. | No | true |
| resizer.enabled | A boolean that enables/disables the volume expansion feature. | No | true |
| nodeSelector | Defines which nodes would be selected for pods of the controller deployment. Leave as blank to use all nodes. Uncomment this section to deploy on master nodes exclusively. | Yes | " " |
| tolerations | Defines tolerations that would be applied to the controller deployment. Leave as blank to install the controller on worker nodes only. If deploying on master nodes is desired, uncomment this section. | Yes | " " |
| healthMonitor | This section configures the optional deployment of the external health monitor sidecar, for controller-side volume health monitoring. | - | - |
| enabled | Enable/Disable deployment of the external health monitor sidecar. | No | false |
| interval | Interval of monitoring volume health condition. Allowed values: number followed by unit (s, m, h). | No | 60s |
| node | This section allows the configuration of node-specific parameters. | - | - |
| healthMonitor.enabled | Enable/Disable health monitor of CSI volumes - volume usage, volume condition. | No | false |
| nodeSelector | Defines which nodes would be selected for pods of the node daemonset. Leave as blank to use all nodes. | Yes | " " |
| tolerations | Defines tolerations that would be applied to the node daemonset. Leave as blank to install the node driver only on worker nodes. | Yes | " " |
| monitor | This section allows the configuration of the SDC monitoring pod. | - | - |
| enabled | Set to enable the usage of the monitoring pod. | Yes | false |
| hostNetwork | Set whether the monitor pod should run on the host network or not. | Yes | true |
| hostPID | Set whether the monitor pod should run in the host namespace or not. | Yes | true |
| vgsnapshotter | This section allows the configuration of the volume group snapshotter (vgsnapshotter) pod. | - | - |
| enabled | A boolean that enables/disables the vg snapshotter feature. | No | false |
| image | Image for the vg snapshotter. | No | " " |
| podmon | Podmon is an optional feature to enable application pods to be resilient to node failure. | - | - |
| enabled | A boolean that enables/disables the podmon feature. | No | false |
| image | Image for podmon. | No | " " |
| authorization | Authorization is an optional feature to apply credential shielding of the backend PowerFlex. | - | - |
| enabled | A boolean that enables/disables the authorization feature. | No | false |
| sidecarProxyImage | Image for csm-authorization-sidecar. | No | " " |
| proxyHost | Hostname of the csm-authorization server. | No | Empty |
| skipCertificateValidation | A boolean that enables/disables certificate validation of the csm-authorization server. | No | true |
- Install the driver using the `csi-install.sh` bash script by running `cd dell-csi-helm-installer && ./csi-install.sh --namespace vxflexos --values ../helm/myvalues.yaml`. Alternatively, to do a Helm install solely with Helm charts (without the shell scripts), refer to the Helm chart documentation.
For detailed instructions on how to run the install scripts, refer to the README.md in the dell-csi-helm-installer folder.
The install script will validate the MDM IP(s) in the `vxflexos-config` secret and create a new field consumed by the init container and the sdc-monitor container.
This install script also runs the `verify.sh` script. You will be prompted to enter the credentials for each of the Kubernetes nodes. The `verify.sh` script needs the credentials to check if the SDC has been configured on all nodes.
It is mandatory to run the install script after changes to the MDM configuration in the `vxflexos-config` secret. Refer to dynamic-array-configuration for details.
If an extended Kubernetes version is being used (e.g. `v1.21.3-mirantis-1`) and is failing the version check in Helm even though it falls in the allowed range, then you must go into `helm/csi-vxflexos/Chart.yaml` and replace the standard `kubeVersion` check with the commented-out alternative. Please note that this will also allow the use of pre-release alpha and beta versions of Kubernetes, which is not supported.
(Optional) Enable additional Mount Options - A user is able to specify additional mount options as needed for the driver.
- Mount options are specified in the storage class YAML under `mkfsFormatOption`.
- WARNING: Before utilizing mount options, you must first be fully aware of the potential impact and understand your environment’s requirements for the specified option.
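As an illustration, a storage class excerpt passing an ext4 block-size option might look like the following. The `-b 4096` flag is only an example mkfs option, and `<STORAGE_POOL>` is a placeholder as in the sample manifests; verify both against your environment's requirements.

```
parameters:
  storagepool: <STORAGE_POOL>
  mkfsFormatOption: "-b 4096"
```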
Certificate validation for PowerFlex Gateway REST API calls
This topic provides details about setting up the certificate for the CSI Driver for Dell PowerFlex.
Before you begin
As part of the CSI driver installation, the CSI driver requires a secret with the name vxflexos-certs-0 to vxflexos-certs-n (based on the “.Values.certSecretCount” parameter) present in the namespace vxflexos.
This secret contains the X509 certificates of the CA that signed the PowerFlex gateway SSL certificate, in PEM format.
The CSI driver exposes an install parameter in config.yaml, `skipCertificateValidation`, which determines if the driver performs client-side verification of the gateway certificates.
The `skipCertificateValidation` parameter is set to true by default, and the driver does not verify the gateway certificates.
If `skipCertificateValidation` is set to false, then the secret vxflexos-certs-n must contain the CA certificate for the array gateway.
If this secret is an empty secret, then the validation of the certificate fails, and the driver fails to start.
If the gateway certificate is self-signed or if you are using an embedded gateway, then perform the following steps.
To fetch the certificate, run the following command.
`openssl s_client -showcerts -connect <Gateway IP:Port> </dev/null 2>/dev/null | openssl x509 -outform PEM > ca_cert_0.pem`
Example: `openssl s_client -showcerts -connect 22.214.171.124:443 </dev/null 2>/dev/null | openssl x509 -outform PEM > ca_cert_0.pem`
Run the following command to create the cert secret with index ‘0’:
`kubectl create secret generic vxflexos-certs-0 --from-file=cert-0=ca_cert_0.pem -n vxflexos`
Use the following command to replace the secret:
`kubectl create secret generic vxflexos-certs-0 -n vxflexos --from-file=cert-0=ca_cert_0.pem -o yaml --dry-run | kubectl replace -f -`
Repeat steps 1 and 2 to create multiple cert secrets with incremental indices (example: vxflexos-certs-1, vxflexos-certs-2, etc.)
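The repeat step can be sketched as a small loop. This is a dry-run sketch (`DRY_RUN=1` by default prints the kubectl commands instead of running them); it assumes the PEM files were fetched as shown above and that `certSecretCount` is set to match.

```shell
#!/bin/sh
# Dry-run sketch: create cert secrets with incremental indices for two
# arrays. Set DRY_RUN=0 with a working kubeconfig to apply for real.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = "1" ]; then echo "+ $*"; else "$@"; fi; }

i=0
for pem in ca_cert_0.pem ca_cert_1.pem; do
  run kubectl create secret generic "vxflexos-certs-$i" \
    --from-file="cert-$i=$pem" -n vxflexos
  i=$((i + 1))
done
```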
- “vxflexos” is the namespace for Helm-based installation but namespace can be user-defined in operator-based installation.
- Users can add multiple certificates in the same secret. The certificate file should not exceed 1 MiB due to the Kubernetes secret size limitation.
- Whenever the certSecretCount parameter changes in `myvalues.yaml`, the user needs to uninstall and reinstall the driver.
- Updating vxflexos-certs-n secrets is a manual process, unlike vxflexos-config. Users have to re-install the driver in case of updating/adding the SSL certificates or changing the certSecretCount parameter.
For CSI driver for PowerFlex version 1.4 and later, `dell-csi-helm-installer` does not create any storage classes as part of the driver installation. A wide set of annotated storage class manifests is provided in the `samples` folder. Use these samples to create new storage classes to provision storage.
What happens to my existing storage classes?
Upgrading from an older version of the driver: the storage classes will be deleted if you upgrade the driver. If you wish to continue using those storage classes, patch them with the annotation “helm.sh/resource-policy”: keep before performing an upgrade.
Note: If you continue to use the old storage classes, you may not be able to take advantage of any new storage class parameter supported by the driver.
Steps to create storage class:
There are sample storage class YAML files available under `samples/storageclass`. These can be copied and modified as needed.
- Use `storageclass.yaml` if you need an ext4 filesystem and `storageclass-xfs.yaml` if you want an xfs filesystem.
- Replace `<STORAGE_POOL>` with the storage pool you have.
- Replace `<SYSTEM_ID>` with the system ID you have. Note that there are two appearances in the file.
- Set `storageclass.kubernetes.io/is-default-class` to true if you want to set it as the default, otherwise false.
- Save the file and create it by using `kubectl create -f storageclass.yaml` or `kubectl create -f storageclass-xfs.yaml`.
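Putting the steps together, a filled-in storage class might look like the following sketch. It is adapted from the general shape of the sample manifests; verify the provisioner name and the topology key against `samples/storageclass` for your driver version. The topology key is where `<SYSTEM_ID>` appears for the second time.

```
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: vxflexos
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: csi-vxflexos.dellemc.com
reclaimPolicy: Delete
allowVolumeExpansion: true
parameters:
  storagepool: <STORAGE_POOL>
  systemID: <SYSTEM_ID>
allowedTopologies:
  - matchLabelExpressions:
      - key: csi-vxflexos.dellemc.com/<SYSTEM_ID>
        values:
          - csi-vxflexos.dellemc.com
```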
- At least one storage class is required for one array.
- If you uninstall the driver and reinstall it, you can still face errors if any update in the `values.yaml` file leads to an update of the storage class(es):
`Error: cannot patch "<sc-name>" with kind StorageClass: StorageClass.storage.k8s.io "<sc-name>" is invalid: parameters: Forbidden: updates to parameters are forbidden`
In case you want to make such updates, ensure that you delete the existing storage classes using the `kubectl delete storageclass` command.
Deleting a storage class has no impact on a running Pod with mounted PVCs. You cannot provision new PVCs until at least one storage class is newly created.
Volume Snapshot Class
Starting with CSI PowerFlex v1.5, `dell-csi-helm-installer` will not create any Volume Snapshot Class during the driver installation. A sample Volume Snapshot Class manifest is present in the samples/ folder. Use this sample to create a new Volume Snapshot Class to create Volume Snapshots.
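A minimal Volume Snapshot Class sketch is shown below. It is illustrative only; check the manifest in samples/ for the authoritative field values, including the driver name.

```
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: vxflexos-snapclass
driver: csi-vxflexos.dellemc.com
deletionPolicy: Delete
```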