PowerStore

Installing the CSI Driver for Dell PowerStore via Dell CSM Operator

Starting with CSM 1.12, all deployments will use images from quay.io by default. New release images will be available on Docker Hub until CSM 1.14 (May 2025), and existing releases will remain on Docker Hub.

The CSI Driver for Dell PowerStore can be installed via the Dell CSM Operator. To deploy the Operator, follow the instructions available here.

Note that the deployment of the driver using the operator does not use any Helm charts, and the installation and configuration parameters differ slightly from those specified via the Helm installer.

Listing installed drivers

To query for all Dell CSI drivers installed with the ContainerStorageModule CRD use the following command:

kubectl get csm --all-namespaces

Prerequisites

The following requirements must be met before installing the CSI Driver for Dell PowerStore:

  • A Kubernetes or OpenShift cluster (see supported versions).
  • Refer to the sections below for protocol specific requirements.
  • If you want to use pre-configured iSCSI/FC hosts be sure to check that they are not part of any host group.
  • Linux multipathing requirements (described later).
  • Mount propagation is enabled on the container runtime that is being used.
  • If using the Snapshot feature, satisfy all Volume Snapshot requirements.
  • Insecure registries are defined in Docker or other container runtime for CSI drivers that are hosted in a non-secure location.
  • Ensure that your nodes support mounting NFS volumes if using NFS (see the example after this list).
  • For NVMe support, the preferred multipath solution is NVMe native multipathing. The Dell Host Connectivity Guide describes the details of each configuration option.
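
If NFS volumes will be used, the worker nodes need an NFS client installed. A minimal sketch (the package names are an assumption; verify them for your distribution):

sudo dnf -y install nfs-utils       # RHEL / CentOS Stream / Rocky
sudo apt-get -y install nfs-common  # Debian / Ubuntu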

Fibre Channel Requirements

The following requirements must be fulfilled in order to successfully use the Fibre Channel protocol with the CSI PowerStore driver:

  • Zoning of the Host Bus Adapters (HBAs) to the Fibre Channel ports on the PowerStore arrays must be done.
  • If the number of volumes that will be published to nodes is high, then configure the maximum number of LUNs for your HBAs on each node. See the appropriate HBA document to configure the maximum number of LUNs.

iSCSI Requirements

The following requirements must be fulfilled in order to successfully use the iSCSI protocol with the CSI PowerStore driver:

  • All Kubernetes nodes must have the iscsi-initiator-utils package installed. On Debian based distributions the package name is open-iscsi.
  • The iscsid service must be enabled and running. You can enable the service by running the following command on all worker nodes: systemctl enable --now iscsid
  • To configure iSCSI in Red Hat OpenShift clusters, you can create a MachineConfig object using the console or oc to ensure that the iSCSI daemon starts on all the Red Hat CoreOS nodes. Here is an example of a MachineConfig object:
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 99-iscsid
  labels:
    machineconfiguration.openshift.io/role: worker
spec:
  config:
    ignition:
      version: 3.2.0
    systemd:
      units:
      - name: "iscsid.service"
        enabled: true

Once the MachineConfig object has been deployed, CoreOS will ensure that the iscsid.service starts automatically. You can check the status of the iSCSI service by entering the following command on each worker node in the cluster: sudo systemctl status iscsid.

  • Ensure that the iSCSI initiators are available on all the nodes where the driver node plugin will be installed.
  • Ensure that the unique initiator name is set in /etc/iscsi/initiatorname.iscsi (a quick check is sketched after this list).
  • Kubernetes nodes must have network connectivity to an iSCSI port on the Dell PowerStore array that has IP interfaces.
  • Ensure that the iSCSI initiators on the nodes are not part of any existing Host or Host Group on the Dell PowerStore arrays. The driver will create host entries for the iSCSI initiators, which adhere to the naming conventions required by the driver.
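
A quick way to confirm these iSCSI prerequisites on a worker node (a sketch; adjust for your distribution) is:

cat /etc/iscsi/initiatorname.iscsi
sudo systemctl is-active iscsid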

Refer to the Dell Host Connectivity Guide for more information.

NVMe Requirements

The following requirements must be fulfilled in order to successfully use the NVMe protocols with the CSI PowerStore driver:

  • All OpenShift or Kubernetes nodes connecting to Dell storage arrays must use unique host NQNs.
  • The driver requires the NVMe command-line interface (nvme-cli) to manage the NVMe clients and targets. The NVMe CLI tool can be installed on the host using the following command on RPM-based Linux distributions:
sudo dnf -y install nvme-cli
  • Support for NVMe requires native NVMe multipathing to be configured on each worker node in the cluster. Please refer to the Dell Host Connectivity Guide for more details on NVMe multipathing requirements. To determine if the worker nodes are configured for native NVMe multipathing run the following command on each worker node:
cat /sys/module/nvme_core/parameters/multipath

If the result of the command displays Y, then native NVMe multipathing is enabled in the kernel. If the output is N, then native NVMe multipathing is disabled. Consult the Dell Host Connectivity Guide for Linux to enable native NVMe multipathing.

Configure the IO policy

  • The default NVMeTCP native multipathing policy is “numa”. The preferred IO policy for NVMe devices used for PowerStore is round-robin. You can use udev rules to enable the round-robin policy on all worker nodes. To view the IO policy, you can use the following command:
nvme list-subsys
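
The current policy is also exposed through sysfs; as a quick check (assuming at least one NVMe subsystem is connected), you can read it per subsystem with:

cat /sys/class/nvme-subsystem/*/iopolicy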

To change the IO policy to round-robin you can add a udev rule on each worker node. Place a config file in /etc/udev/rules.d with the name 71-nvme-io-policy.rules with the following contents:

ACTION=="add|change", SUBSYSTEM=="nvme-subsystem", ATTR{iopolicy}="round-robin"

In order to change the rules on a running kernel you can run the following commands:

/sbin/udevadm control --reload-rules
/sbin/udevadm trigger --type=devices --action=change

On OCP clusters you can add a MachineConfig to enable this rule on all worker nodes:

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 99-workers-multipath-round-robin
  labels:
    machineconfiguration.openshift.io/role: worker
spec:
  config:
    ignition:
      version: 3.2.0
    storage:
      files:
      - contents:
          source: data:text/plain;charset=utf-8;base64,QUNUSU9OPT0iYWRkfGNoYW5nZSIsIFNVQlNZU1RFTT09Im52bWUtc3Vic3lzdGVtIiwgQVRUUntpb3BvbGljeX09InJvdW5kLXJvYmluIg==
          verification: {}
        filesystem: root
        mode: 420
        path: /etc/udev/rules.d/71-nvme-io-policy.rules
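
The base64 payload in the source field is simply the udev rule contents encoded; as a sketch, it can be regenerated with:

echo -n 'ACTION=="add|change", SUBSYSTEM=="nvme-subsystem", ATTR{iopolicy}="round-robin"' | base64 -w0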

Configure the control loss timeout

To reduce the impact of PowerStore non-disruptive software upgrades, you must set the control loss timeout. This can be done using udev rules on each worker node. More information can be found in the Dell Host Connectivity Guide. To configure the control loss timeout, place a config file in /etc/udev/rules.d with the name 72-nvmf-ctrl_loss_tmo.rules with the following contents:

ACTION=="add|change", SUBSYSTEM=="nvme", KERNEL=="nvme*", ATTR{ctrl_loss_tmo}="-1"

In order to change the rules on a running kernel you can run the following commands:

/sbin/udevadm control --reload-rules
/sbin/udevadm trigger --type=devices --action=change

On OCP clusters you can add a MachineConfig to enable this rule on all worker nodes:

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 99-nvmf-ctrl-loss-tmo
  labels:
    machineconfiguration.openshift.io/role: worker
spec:
  config:
    ignition:
      version: 3.2.0
    storage:
      files:
      - contents:
          source: data:text/plain;charset=utf-8;base64,QUNUSU9OPT0iYWRkfGNoYW5nZSIsIFNVQlNZU1RFTT09Im52bWUiLCBLRVJORUw9PSJudm1lKiIsIEFUVFJ7Y3RybF9sb3NzX3Rtb309Ii0xIgo=
          verification: {}
        filesystem: root
        mode: 420
        path: /etc/udev/rules.d/72-nvmf-ctrl_loss_tmo.rules

Requirements for NVMeTCP

Starting with OCP 4.14, NVMe/TCP is enabled by default on RHCOS nodes.

  • The nvme, nvme_core, nvme_fabrics, and nvme_tcp modules are required for using NVMe over Fabrics with TCP. Load the NVMe and NVMe-oF modules using the commands below:
modprobe nvme
modprobe nvme_tcp
  • The NVMe modules may not be loaded after a node reboot. Loading the modules at startup is recommended, for example via a modules-load.d drop-in as sketched below.
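
A minimal sketch of such a drop-in (the file name is an assumption; any *.conf file under /etc/modules-load.d is read at boot):

cat <<'EOF' | sudo tee /etc/modules-load.d/nvme-tcp.conf
nvme
nvme_tcp
EOF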

Requirements for NVMeFC

  • For NVMeFC, zoning of the Host Bus Adapters (HBAs) to the Fibre Channel ports must be done.

Do not load the nvme_tcp module for NVMeFC

Linux Multipathing Requirements

Dell PowerStore supports Linux multipathing (DM-MPIO) and NVMe native multipathing. Configure Linux multipathing before installing the CSI Driver.

For NVMe connectivity, native NVMe multipathing is used. The following sections apply only to iSCSI and Fibre Channel connectivity.

Configure Linux multipathing as follows:

  • Ensure that all nodes have the Device Mapper Multipathing package installed. You can install it by running dnf install device-mapper-multipath or apt install multipath-tools based on your Linux distribution.
  • Enable multipathing using the mpathconf --enable --with_multipathd y command. A default configuration file, /etc/multipath.conf, is created. (On distributions without mpathconf, see the note after this list.)
  • Enable user_friendly_names and find_multipaths in the multipath.conf file.
  • Ensure that the multipath command and the /etc/multipath.conf file are available on all Kubernetes nodes.
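
On distributions that do not provide mpathconf (an assumption; for example, Debian-based ones), you can create /etc/multipath.conf manually with the settings above and then enable the service:

sudo systemctl enable --now multipathd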

The following is a sample multipath.conf file:

defaults {
  user_friendly_names yes
  find_multipaths yes
}

blacklist {
}

On some distributions, the multipathd service watches for changes to the configuration file and dynamically reconfigures itself. If you need to manually trigger a reload, you can run the following command: sudo systemctl reload multipathd

On OCP clusters you can add a MachineConfig to configure multipathing on the worker nodes.

You will need to first base64 encode the multipath.conf and add it to the MachineConfig definition.

echo 'defaults {
user_friendly_names yes
find_multipaths yes
}

blacklist {
}' | base64 -w0

Use the base64-encoded string output in the source field of the following MachineConfig YAML file:

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: workers-multipath-conf-default
  labels:
    machineconfiguration.openshift.io/role: worker
spec:
  config:
    ignition:
      version: 3.2.0
    storage:
      files:
      - contents:
          source: data:text/plain;charset=utf-8;base64,ZGVmYXVsdHMgewp1c2VyX2ZyaWVuZGx5X25hbWVzIHllcwpmaW5kX211bHRpcGF0aHMgeWVzCn0KCmJsYWNrbGlzdCB7Cn0K
          verification: {}
        filesystem: root
        mode: 400
        path: /etc/multipath.conf

After deploying this MachineConfig object, CoreOS will start the multipath service automatically. You can check the status of multipathing by running the following command on each worker node: sudo multipath -ll

Refer to the Dell Host Connectivity Guide for more information.

Volume Snapshot Requirements (Optional)

For detailed snapshot setup procedure, click here.
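
As a rough sketch (assuming the upstream kubernetes-csi/external-snapshotter project; check the linked procedure for the release branch supported by your CSM version), the volume snapshot CRDs and the snapshot controller can be installed with:

git clone https://github.com/kubernetes-csi/external-snapshotter
cd external-snapshotter
kubectl kustomize client/config/crd | kubectl create -f -
kubectl -n kube-system kustomize deploy/kubernetes/snapshot-controller | kubectl create -f -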

Replication Requirements (Optional)

Applicable only if you decided to enable the Replication feature in sample.yaml

replication:
  enabled: true

Replication CRDs

The CRDs for replication can be obtained and installed from the csm-replication project on GitHub. Use csm-replication/deploy/replicationcrds.all.yaml from the csm-replication Git repository for the installation, for example as sketched below.
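
A minimal sketch, assuming you install the CRDs straight from a clone of the repository:

git clone https://github.com/dell/csm-replication
kubectl create -f csm-replication/deploy/replicationcrds.all.yaml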

The CRDs should be configured during the replication prepare stage with repctl, as described in install-repctl.

Namespace and PowerStore API Access Configuration

  1. Create the namespace. Execute kubectl create namespace powerstore to create the powerstore namespace (if not already present). Note that the namespace can be any user-defined name; in this example, we assume that the namespace is ‘powerstore’.

  2. Create a file called config.yaml that has the PowerStore array connection details, with the following content:

    arrays:
       - endpoint: "https://10.0.0.1/api/rest"     # full URL path to the PowerStore API
         globalID: "unique"                        # unique ID of the PowerStore array
         username: "user"                          # username for connecting to the API
         password: "password"                      # password for connecting to the API
         skipCertificateValidation: true           # indicates if client-side validation of the (management) server's certificate can be skipped
         isDefault: true                           # treat the current array as the default (used by storage classes without an arrayID parameter)
         blockProtocol: "auto"                     # which SCSI transport protocol to use on the node side (FC, ISCSI, NVMeTCP, NVMeFC, None, or auto)
         nasName: "nas-server"                     # which NAS server should be used for NFS volumes
         nfsAcls: "0777"                           # (Optional) defines permissions - POSIX mode bits or NFSv4 ACLs - to be set on the NFS target mount directory.
                                                   # NFSv4 ACLs are supported only for NFSv4 shares on NFSv4-enabled NAS servers. POSIX ACLs are not supported; only POSIX mode bits are supported for NFSv3 shares.
    

    Change the parameters to the relevant values for your PowerStore array. Add more blocks similar to the one above for each additional PowerStore array if necessary.

    If the replication feature is enabled, ensure the secret includes all the PowerStore arrays involved in replication.

    User Privileges

    The username specified in config.yaml must be from the authentication providers of PowerStore. The user must have the correct user role to perform the actions. The minimum requirement is Storage Operator.

  3. Create Kubernetes secret:

    Create a file called secret.yaml in the same folder as config.yaml with the following content:

    apiVersion: v1
    kind: Secret
    metadata:
       name: powerstore-config
       namespace: powerstore
    type: Opaque
    data:
       config: CONFIG_YAML
    

    Combine both files and create Kubernetes secret by running the following command:

    
    sed "s/CONFIG_YAML/`cat config.yaml | base64 -w0`/g" secret.yaml | kubectl apply -f -
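
    You can confirm that the secret was created in the target namespace (a quick sanity check, assuming the names used above):

    kubectl get secret powerstore-config -n powerstore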
    

Install Driver

  1. Follow all the prerequisites above

  2. Create a CR (Custom Resource) for PowerStore using the sample files provided

    a. Install the PowerStore driver with the default configuration using the sample file provided here. This file can be modified to use custom parameters if needed.

    b. Install the PowerStore driver with the detailed configuration using the sample file provided here.

  3. Configure the parameters in the CR. The following table lists the primary configurable parameters of the PowerStore driver and their default values:

| Parameter | Description | Required | Default |
|-----------|-------------|----------|---------|
| replicas | Controls the number of controller pods you deploy. If the number of controller pods is greater than the number of available nodes, the excess pods will remain in a Pending state until new nodes are available for scheduling. The default is 2, which allows for controller high availability. | Yes | 2 |
| namespace | Specifies the namespace where the driver will be installed | Yes | “powerstore” |
| fsGroupPolicy | Defines which FS Group policy mode to use. Supported modes are None, File, and ReadWriteOnceWithFSType. In OCP <= 4.16 and K8s <= 1.29, fsGroupPolicy is an immutable field. | No | “ReadWriteOnceWithFSType” |
| storageCapacity | Enable/Disable the storage capacity tracking feature | No | false |
| Common parameters for node and controller | | | |
| X_CSI_POWERSTORE_NODE_NAME_PREFIX | Prefix to add to each node registered by the CSI driver | Yes | “csi-node” |
| X_CSI_FC_PORTS_FILTER_FILE_PATH | Path to the file which provides the list of WWPNs that should be used by the driver for FC connections on this node | No | “/etc/fc-ports-filter” |
| Controller parameters | | | |
| X_CSI_POWERSTORE_EXTERNAL_ACCESS | Allows specifying additional entries for hostAccess of NFS volumes. Both a single IP address and a subnet are valid entries | No | empty |
| X_CSI_NFS_ACLS | Defines permissions - POSIX mode bits or NFSv4 ACLs - to be set on the NFS target mount directory | No | “0777” |
| Node parameters | | | |
| X_CSI_POWERSTORE_ENABLE_CHAP | Set to true if you want to enable the iSCSI CHAP feature | No | false |
  4. Execute the following command to create the PowerStore custom resource:
kubectl create -f <input_sample_file.yaml>

This command will deploy the CSI PowerStore driver in the namespace specified in the input YAML file.

  • Once the driver is installed, you can check the status of the driver pods by running:
    kubectl get all -n <driver-namespace>
    
  5. Verify the CSI Driver installation (a quick check is sketched below).

  6. Refer to https://github.com/dell/csi-powerstore/tree/main/samples for the sample files.
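
A minimal sketch of the verification, assuming the driver was deployed in the powerstore namespace (the ContainerStorageModule resource name depends on the CR you applied):

kubectl get csm -n powerstore
kubectl describe csm -n powerstore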

Note:

  1. “Kubelet config dir path” is not yet configurable in the case of operator-based driver installation.
  2. The snapshotter and resizer sidecars are not optional; they are installed by default with the driver.

Dynamic secret change detection

CSI PowerStore supports the ability to dynamically modify array information within the secret, allowing users to update credentials for the PowerStore arrays, in-flight, without restarting the driver.

Note: Updates to the secret that include adding a new array, or modifying the endpoint, globalID, or blockProtocol parameters require the driver to be restarted to properly pick up and process the changes.

To do so, change the configuration file config.yaml and apply the update using the following command:


sed "s/CONFIG_YAML/`cat config.yaml | base64 -w0`/g" secret.yaml | kubectl apply -f -