The CSI Driver for Dell PowerStore can be deployed by using the provided Helm v3 charts and installation scripts on both Kubernetes and OpenShift platforms. For more detailed information on the installation scripts, review the script documentation.
The controller section of the Helm chart installs the following components in a Deployment in the specified namespace:
- CSI Driver for Dell PowerStore
- Kubernetes External Provisioner, which provisions the volumes
- Kubernetes External Attacher, which attaches the volumes to the containers
- (Optional) Kubernetes External Snapshotter, which provides snapshot support
- (Optional) Kubernetes External Resizer, which resizes the volume
The node section of the Helm chart installs the following components in a DaemonSet in the specified namespace:
- CSI Driver for Dell PowerStore
- Kubernetes Node Registrar, which handles the driver registration
The following requirements must be met before installing the CSI Driver for Dell PowerStore:
- Install Kubernetes or OpenShift (see supported versions)
- Install Helm 3
- If you plan to use the Fibre Channel, iSCSI, or NVMe/TCP protocol, refer to the Fibre Channel requirements, Set up the iSCSI Initiator, or Set up the NVMe Initiator sections below. You can use NFS volumes without FC, iSCSI, or NVMe/TCP configuration.
You can use the Fibre Channel, iSCSI, or NVMe/TCP protocol, but you do not need all three.
If you want to use preconfigured iSCSI/FC hosts, be sure to check that they are not part of any host group.
- Linux native multipathing requirements
- Mount propagation is enabled on container runtime that is being used
- If using Snapshot feature, satisfy all Volume Snapshot requirements
- Nonsecure registries are defined in Docker or other container runtimes, for CSI drivers that are hosted in a nonsecure location.
- You can access your cluster with kubectl and helm.
- Ensure that your nodes support mounting NFS volumes.
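For example, NFS mount support usually comes from the distribution's NFS client package; the package names below are the common ones and are an assumption, not a requirement from this guide:
```bash
# RHEL/CentOS
sudo yum install -y nfs-utils
# Ubuntu/Debian
sudo apt install -y nfs-common
```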
Install Helm 3.0
Install Helm 3.0 on the master node before you install the CSI Driver for Dell PowerStore.
Run the `curl https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | bash` command to install Helm 3.0.
Fibre Channel requirements
Dell PowerStore supports Fibre Channel communication. If you use the Fibre Channel protocol, ensure that the following requirement is met before you install the CSI Driver for Dell PowerStore:
- Zoning of the Host Bus Adapters (HBAs) to the Fibre Channel port must be done.
Set up the iSCSI Initiator
The CSI Driver for Dell PowerStore v1.4 and higher supports iSCSI connectivity.
If you use the iSCSI protocol, set up the iSCSI initiators as follows:
- Ensure that the iSCSI initiators are available on both Controller and Worker nodes.
- Kubernetes nodes must have access (network connectivity) to an iSCSI port on the Dell PowerStore array that has IP interfaces. Manually create IP routes for each node that connects to the Dell PowerStore.
- All Kubernetes nodes must have the iscsi-initiator-utils package for CentOS/RHEL or open-iscsi package for Ubuntu installed, and the iscsid service must be enabled and running.
To do this, run the `systemctl enable --now iscsid` command.
- Ensure that the unique initiator name is set in /etc/iscsi/initiatorname.iscsi.
For information about configuring iSCSI, see Dell PowerStore documentation on Dell Support.
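Putting the iSCSI items above together, node preparation typically looks like the following (a sketch using the packages and commands named in the list above):
```bash
# CentOS/RHEL
sudo yum install -y iscsi-initiator-utils
# Ubuntu: sudo apt install -y open-iscsi

# Enable and start the iSCSI daemon
sudo systemctl enable --now iscsid

# Confirm a unique initiator name is set
cat /etc/iscsi/initiatorname.iscsi
```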
Set up the NVMe Initiator
If you want to use the NVMe protocol, set up the NVMe initiators as follows:
- The driver requires the NVMe management command-line interface (nvme-cli) to configure, edit, view, or start the NVMe client and target. The nvme-cli utility provides a command-line and an interactive shell option. Install the NVMe CLI tool on the host using the below command:
```bash
sudo apt install nvme-cli
```
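To confirm the utility is available after installation, you can run:
```bash
nvme version
```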
Requirements for NVMeTCP
- The nvme, nvme_core, nvme_fabrics, and nvme_tcp modules are required for using NVMe over Fabrics with TCP. Load the NVMe and NVMe-oF modules using the below commands:
```bash
modprobe nvme
modprobe nvme_tcp
```
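If you want the modules to load automatically at boot, one common approach (an assumption, not a requirement from this guide) is a modules-load.d entry:
```bash
cat <<'EOF' | sudo tee /etc/modules-load.d/nvme-tcp.conf
nvme
nvme_tcp
EOF
```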
Requirements for NVMeFC
- For NVMeFC, zoning of the Host Bus Adapters (HBAs) to the Fibre Channel port must be done.
- Do not load the nvme_tcp module for NVMeFC.
Linux multipathing requirements
Dell PowerStore supports Linux multipathing. Configure Linux multipathing before installing the CSI Driver for Dell PowerStore.
Set up Linux multipathing as follows:
- Ensure that all nodes have the Device Mapper Multipathing package installed. You can install it by running `yum install device-mapper-multipath` on CentOS or `apt install multipath-tools` on Ubuntu. This package should create a multipath configuration file located in `/etc/multipath.conf`.
- Enable multipathing using the `mpathconf --enable --with_multipathd y` command.
- Ensure that the `multipath` command for `multipath.conf` is available on all Kubernetes nodes.
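As an illustration, a minimal `/etc/multipath.conf` for such setups often enables user-friendly names and automatic path discovery; treat the exact settings below as an assumption and consult the Dell host connectivity documentation for authoritative values:
```
defaults {
  user_friendly_names yes
  find_multipaths yes
}
```
After changing the configuration, restart the service, for example with `sudo systemctl enable --now multipathd`.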
(Optional) Volume Snapshot Requirements
Applicable only if you decided to enable the snapshot feature:
```yaml
snapshot:
  enabled: true
```
Volume Snapshot CRDs
The Kubernetes Volume Snapshot CRDs can be obtained and installed from the external-snapshotter project on GitHub. Use v5.0.x for the installation.
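Once the CRDs are applied (the commands are shown later in this section), a quick generic check confirms they are registered:
```bash
kubectl get crd | grep -i volumesnapshot
```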
Volume Snapshot Controller
The CSI external-snapshotter sidecar is split into two controllers:
- A common snapshot controller
- A CSI external-snapshotter sidecar
The common snapshot controller must be installed only once in the cluster, irrespective of the number of CSI drivers installed in the cluster. On OpenShift clusters 4.4 and later, the common snapshot controller is pre-installed. In clusters where it is not present, it can be installed using `kubectl` with the manifests from the external-snapshotter project on GitHub. Use v5.0.x for the installation.
You can install the CRDs and the default snapshot controller by running the following commands:
```bash
git clone https://github.com/kubernetes-csi/external-snapshotter/
cd ./external-snapshotter
git checkout release-<your-version>
kubectl kustomize client/config/crd | kubectl create -f -
kubectl -n kube-system kustomize deploy/kubernetes/snapshot-controller | kubectl create -f -
```
- The CSI external-snapshotter sidecar is still installed along with the driver and does not involve any extra configuration.
Volume Health Monitoring
The Volume Health Monitoring feature is optional, and by default it is disabled when the driver is installed via Helm. To enable this feature, add the below block to the driver manifest before installing the driver; this installs the external health monitor sidecar. To get the volume health state, `enabled` under `controller` should be set to true as seen below. To get the volume stats, `enabled` under `node` should be set to true.
```yaml
controller:
  healthMonitor:
    # enabled: Enable/Disable health monitor of CSI volumes
    # Allowed values:
    #   true: enable checking of health condition of CSI volumes
    #   false: disable checking of health condition of CSI volumes
    # Default value: None
    enabled: false
    # healthMonitorInterval: Interval of monitoring volume health condition
    # Allowed values: Number followed by unit (s,m,h)
    # Examples: 60s, 5m, 1h
    # Default value: 60s
    volumeHealthMonitorInterval: 60s
node:
  healthMonitor:
    # enabled: Enable/Disable health monitor of CSI volumes - volume usage, volume condition
    # Allowed values:
    #   true: enable checking of health condition of CSI volumes
    #   false: disable checking of health condition of CSI volumes
    # Default value: None
    enabled: false
```
(Optional) Replication feature Requirements
Applicable only if you decided to enable the Replication feature:
```yaml
replication:
  enabled: true
```
The CRDs for replication can be obtained and installed from the csm-replication project on GitHub. Use `csm-replication/deploy/replicationcrds.all.yaml`, located in the csm-replication Git repo, for the installation.
CRDs should be configured during the replication prepare stage with repctl, as described in install-repctl.
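For a manual installation, a minimal sketch (assuming you have cloned the csm-replication repository locally) is:
```bash
kubectl create -f csm-replication/deploy/replicationcrds.all.yaml
```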
Install the Driver
Run `git clone -b v2.3.0 https://github.com/dell/csi-powerstore.git` to clone the git repository.
Ensure that you have created the namespace where you want to install the driver. You can run `kubectl create namespace csi-powerstore` to create a new one. "csi-powerstore" is just an example; you can choose any name for the namespace, but make sure to use the same namespace during the whole installation.
Check `helm/csi-powerstore/driver-image.yaml` and confirm that the driver image points to the new image.
Edit the `samples/secret/secret.yaml` file and configure connection information for your PowerStore arrays, changing the following parameters:
- endpoint: defines the full URL path to the PowerStore API.
- globalID: specifies which storage cluster the driver should use.
- username, password: defines credentials for connecting to the array.
- skipCertificateValidation: defines whether the driver should use an insecure connection or not.
- isDefault: defines whether the driver should treat the current array as the default.
- blockProtocol: defines which transport protocol the driver should use (FC, ISCSI, NVMeTCP, NVMeFC, None, or auto).
- nasName: defines which NAS server should be used for NFS volumes.
- nfsAcls (Optional): defines the permissions, POSIX mode bits or NFSv4 ACLs, to be set on the NFS target mount directory. NFSv4 ACLs are supported for NFSv4 shares on NFSv4-enabled NAS servers only. POSIX ACLs are not supported; only POSIX mode bits are supported for NFSv3 shares.
Add more blocks similar to above for each PowerStore array if necessary.
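For illustration only, a single-array `secret.yaml` might look like the sketch below; all values are placeholders, and the top-level `arrays:` list key is an assumption based on the sample file layout:
```yaml
arrays:
  - endpoint: "https://10.0.0.1/api/rest"  # full URL path to the PowerStore API
    globalID: "unique"                     # ID of the storage cluster
    username: "user"                       # credentials for connecting to the array
    password: "password"
    skipCertificateValidation: true        # use an insecure connection
    isDefault: true                        # treat this array as the default
    blockProtocol: "auto"                  # FC, ISCSI, NVMeTCP, NVMeFC, None, or auto
    nasName: "nas-server"                  # NAS server used for NFS volumes
```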
Create storage classes using the ones from the `samples/storageclass` folder as an example and apply them to the Kubernetes cluster by running `kubectl create -f <path_to_storageclass_file>`.
If you do not specify the `arrayID` parameter in the storage class, then the array that was specified as the default will be used for provisioning volumes.
Create the secret by running `kubectl create secret generic powerstore-config -n csi-powerstore --from-file=config=secret.yaml`.
Copy the default values.yaml file:
```bash
cd dell-csi-helm-installer && cp ../helm/csi-powerstore/values.yaml ./my-powerstore-settings.yaml
```
Edit the newly created values file and provide values for the following parameters:

| Parameter | Description | Required | Default |
|-----------|-------------|----------|---------|
| logLevel | Defines the CSI driver log level | No | "debug" |
| logFormat | Defines the CSI driver log format | No | "JSON" |
| externalAccess | Defines additional entries for hostAccess of NFS volumes; a single IP address and a subnet are valid entries | No | " " |
| kubeletConfigDir | Defines the kubelet config path for the cluster | Yes | "/var/lib/kubelet" |
| imagePullPolicy | Policy to determine if the image should be pulled prior to starting the container | Yes | "IfNotPresent" |
| nfsAcls | Defines the permissions, POSIX mode bits or NFSv4 ACLs, to be set on the NFS target mount directory | No | "0777" |
| connection.enableCHAP | Defines whether the driver should use CHAP for iSCSI connections or not | No | False |
| controller.controllerCount | Defines the number of replicas of the controller deployment | Yes | 2 |
| controller.volumeNamePrefix | Defines the string added to each volume that the CSI driver creates | No | "csivol" |
| controller.snapshot.enabled | Enables/disables the snapshotter sidecar with driver installation for the snapshot feature | No | "true" |
| controller.snapshot.snapNamePrefix | Defines the prefix to apply to the names of created snapshots | No | "csisnap" |
| controller.resizer.enabled | Enables/disables the resizer sidecar with driver installation for the volume expansion feature | No | "true" |
| controller.healthMonitor.enabled | Enables/disables the volume health monitor | No | false |
| controller.healthMonitor.volumeHealthMonitorInterval | Interval of monitoring volume health condition | No | 60s |
| controller.nodeSelector | Defines which nodes would be selected for pods of the controller deployment | Yes | " " |
| controller.tolerations | Defines tolerations that would be applied to the controller deployment | Yes | " " |
| node.nodeNamePrefix | Defines the string added to each node that the CSI driver registers | No | "csi-node" |
| node.nodeIDPath | Defines a path to a file with a unique identifier identifying the node in the Kubernetes cluster | No | "/etc/machine-id" |
| node.healthMonitor.enabled | Enables/disables the volume health monitor | No | false |
| node.nodeSelector | Defines which nodes would be selected for pods of the node daemonset | Yes | " " |
| node.tolerations | Defines tolerations that would be applied to the node daemonset | Yes | " " |
| fsGroupPolicy | Defines which FS Group policy mode is to be used; supported modes are None, File, and ReadWriteOnceWithFSType | No | "ReadWriteOnceWithFSType" |
| controller.vgsnapshot.enabled | To enable or disable the volume group snapshot feature | No | "true" |
- Install the driver using the `csi-install.sh` bash script by running:
```bash
./csi-install.sh --namespace csi-powerstore --values ./my-powerstore-settings.yaml
```
- After the driver is installed, you can check the condition of the driver pods by running `kubectl get all -n csi-powerstore`.
- For detailed instructions on how to run the install scripts, refer to the readme document in the dell-csi-helm-installer folder.
- By default, the driver scans available SCSI adapters and tries to register them with the storage array under the SCSI hostname using `node.nodeNamePrefix` and the ID read from the file pointed to by `node.nodeIDPath`. If an adapter is already registered with the storage under a different hostname, the adapter is not used by the driver.
- The hostname the driver uses for registration of adapters is in the form `<nodeNamePrefix>-<nodeID>-<nodeIP>`. By default, these are csi-node and the machine ID read from the file `/etc/machine-id`.
- To customize the hostnames, for example to make them more user friendly, adjust `nodeIDPath` and `nodeNamePrefix` accordingly. For example, you can set `nodeIDPath` to `/etc/hostname` to produce hostnames based on each node's hostname.
- (Optional) Enable additional Mount Options - Users can specify additional mount options as needed for the driver.
- Mount options are specified in the storage class yaml under `mountOptions`.
- WARNING: Before using mount options, you must first be fully aware of the potential impact and understand your environment's requirements for the specified option.
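For example, a storage class excerpt with an additional mount option could look like this (the option shown is purely illustrative):
```yaml
# excerpt from a StorageClass manifest
mountOptions:
  - noatime
```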
For the CSI driver for Dell PowerStore version 1.3 and later, `dell-csi-helm-installer` does not create any storage classes as part of the driver installation. A wide set of annotated storage class manifests has been provided in the `samples/storageclass` folder. Use these samples to create new storage classes to provision storage.
What happens to my existing storage classes?
Upgrading from an older version of the driver: The storage classes will be deleted if you upgrade the driver. If you wish to continue using those storage classes, you can patch them and apply the annotation "helm.sh/resource-policy": keep before performing an upgrade, as shown below.
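One way to apply the annotation (a sketch; substitute your storage class name):
```bash
kubectl annotate storageclass <storage-class-name> helm.sh/resource-policy=keep
```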
Note: If you continue to use the old storage classes, you may not be able to take advantage of any new storage class parameter supported by the driver.
Steps to create storage class:
Sample storage class yaml files are available under `samples/storageclass`. These can be copied and modified as needed.
- Edit the sample storage class yaml file and update the following parameters:
- arrayID: specifies which storage cluster the driver should use; if not specified, the driver will use the storage cluster specified as the default (the `isDefault` parameter in the secret).
- FsType: specifies which filesystem type the driver should use; possible variants are `ext3`, `ext4`, `xfs`, and `nfs`; if not specified, the driver will use `ext4`.
- nfsAcls (Optional): defines permissions - POSIX mode bits or NFSv4 ACLs, to be set on NFS target mount directory.
- allowedTopologies (Optional): If you want, you can also add topology constraints:
```yaml
allowedTopologies:
  - matchLabelExpressions:
      - key: csi-powerstore.dellemc.com/12.34.56.78-iscsi
        # replace "-iscsi" with "-fc", "-nvmetcp", "-nvmefc" or "-nfs" at the end to use FC, NVMeTCP, NVMeFC or NFS enabled hosts
        # replace "12.34.56.78" with the PowerStore endpoint IP
        values:
          - "true"
```
- Create your storage class by using `kubectl create -f <path_to_storageclass_file>`.
NOTE: Deleting a storage class has no impact on a running Pod with mounted PVCs. You cannot provision new PVCs until at least one storage class is newly created.
Volume Snapshot Class
Starting with CSI PowerStore v1.4.0, `dell-csi-helm-installer` will not create any Volume Snapshot Class during the driver installation. There is a sample Volume Snapshot Class manifest in the `samples/volumesnapshotclass` folder. Use this sample to create a new Volume Snapshot Class for creating Volume Snapshots, as in the example below.
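A minimal example, patterned on the v1 snapshot API; the class name is a placeholder, and the driver name matches the one used in the topology keys above:
```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: powerstore-snapclass
driver: csi-powerstore.dellemc.com
deletionPolicy: Delete
```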
What happens to my existing Volume Snapshot Classes?
Upgrading from CSI PowerStore v2.1.0 driver: The existing volume snapshot class will be retained.
Upgrading from an older version of the driver: It is strongly recommended to upgrade the earlier versions of CSI PowerStore to 1.4.0 or higher, before upgrading to 2.3.0.
Dynamically update the powerstore secrets
Users can dynamically add or delete array information from the secret. Whenever an update happens, the driver updates the "Host" information in an array. You can update the secret using the following command:
```bash
kubectl create secret generic powerstore-config -n csi-powerstore --from-file=config=secret.yaml -o yaml --dry-run=client | kubectl replace -f -
```
Dynamic Logging Configuration
This feature is introduced in CSI Driver for PowerStore version 2.0.0.
Helm based installation
As part of driver installation, a ConfigMap with the name `powerstore-config-params` is created. It contains the attributes `CSI_LOG_LEVEL`, which specifies the current log level of the CSI driver, and `CSI_LOG_FORMAT`, which specifies the current log format of the CSI driver.
Users can set the default log level and log format by specifying the `logLevel` and `logFormat` attributes in `my-powerstore-settings.yaml` during driver installation.
To change the log level or log format dynamically to a different value, edit the same values file and run the following command:
```bash
cd dell-csi-helm-installer
./csi-install.sh --namespace csi-powerstore --values ./my-powerstore-settings.yaml --upgrade
```
Here, `my-powerstore-settings.yaml` is the `values.yaml` file that was used for driver installation.