Prerequisites

The following requirements must be met before installing the CSI Driver for PowerStore:

  • A Kubernetes or OpenShift cluster (see supported versions)
  • Helm 3.x installed
  • Refer to the sections below for protocol-specific requirements.
  • If you want to use pre-configured iSCSI/FC hosts, be sure to check that they are not part of any host group.
  • Linux multipathing requirements (described later).
  • Mount propagation is enabled on the container runtime that is being used.
  • If using the Snapshot feature, satisfy all Volume Snapshot requirements.
  • Insecure registries are defined in Docker or other container runtime for CSI drivers that are hosted in a non-secure location.
  • Ensure that your nodes support mounting NFS volumes if using NFS.
  • For NVMe support, the preferred multipath solution is NVMe native multipathing. The Dell Host Connectivity Guide describes the details of each configuration option.

Fibre Channel requirements

The following requirements must be fulfilled in order to successfully use the Fibre Channel protocol with the CSI PowerStore driver:

  • Zoning of the Host Bus Adapters (HBAs) to the Fibre Channel ports on the PowerStore arrays must be done.
  • If the number of volumes that will be published to nodes is high, then configure the maximum number of LUNs for your HBAs on each node. See the appropriate HBA document to configure the maximum number of LUNs.
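
As a reference for the zoning step, the WWPNs of the node HBAs can be read from sysfs on each worker node (assuming Fibre Channel HBAs are present and the fc_host class is populated):

# List the WWPNs of the Fibre Channel HBAs on this node
cat /sys/class/fc_host/host*/port_name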

iSCSI Requirements

The following requirements must be fulfilled in order to successfully use the iSCSI protocol with the CSI PowerStore driver:

  • Ensure that the necessary iSCSI initiator utilities are installed on each Kubernetes worker node. This typically includes the iscsi-initiator-utils package for RHEL or open-iscsi package for Ubuntu.
  • Enable and start the iscsid service on each Kubernetes worker node. This service is responsible for managing the iSCSI initiator. You can enable the service by running the following command on all worker nodes:
systemctl enable --now iscsid
  • Ensure that the unique initiator name is set in /etc/iscsi/initiatorname.iscsi.
  • Ensure that the iSCSI initiators are available on all the nodes where the driver node plugin will be installed.
  • Kubernetes nodes must have network connectivity to an iSCSI port on the PowerStore array that has IP interfaces.
  • Ensure that the iSCSI initiators on the nodes are not a part of any existing Host or Host Group on the PowerStore arrays. The driver will create host entries for the iSCSI initiators that adhere to the naming conventions required by the driver.
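
As a quick sanity check of the requirements above (using the standard systemd and open-iscsi tools; adjust for your distribution), you can run the following on each worker node:

# Confirm the iSCSI daemon is enabled and running
systemctl status iscsid
# Confirm a unique InitiatorName is set on this node
cat /etc/iscsi/initiatorname.iscsi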

Refer to the Dell Host Connectivity Guide for more information.

NVMe Requirements

The following requirements must be fulfilled in order to successfully use the NVMe protocols with the CSI PowerStore driver:

  • All OpenShift or Kubernetes nodes connecting to Dell storage arrays must use unique host NVMe Qualified Names (NQNs).

The OpenShift deployment process for CoreOS will set the same host NQN for all nodes. The host NQN is stored in the file /etc/nvme/hostnqn. One possible solution to ensure unique host NQNs is to add the following machine config to your OCP cluster:

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 99-worker-custom-nvme-hostnqn
spec:
  config:
    ignition:
      version: 3.4.0
    systemd:
      units:
        - contents: |
            [Unit]
            Description=Custom CoreOS Generate NVMe Hostnqn
            [Service]
            Type=oneshot
            ExecStart=/usr/bin/sh -c '/usr/sbin/nvme gen-hostnqn > /etc/nvme/hostnqn'
            RemainAfterExit=yes
            [Install]
            WantedBy=multi-user.target
          enabled: true
          name: custom-coreos-generate-nvme-hostnqn.service
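
After the nodes reboot with this MachineConfig applied, you can confirm that each node reports a distinct host NQN:

# Each worker node should print a different NQN
cat /etc/nvme/hostnqn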
  • The driver requires the NVMe command-line interface (nvme-cli) to manage the NVMe clients and targets. The nvme-cli tool can be installed on the host using the following command on RPM-based Linux distributions:
sudo dnf -y install nvme-cli
  • Support for NVMe requires native NVMe multipathing to be configured on each worker node in the cluster. Please refer to the Dell Host Connectivity Guide for more details on NVMe multipathing requirements. To determine if the worker nodes are configured for native NVMe multipathing run the following command on each worker node:
cat /sys/module/nvme_core/parameters/multipath

If the result of the command is Y, then NVMe native multipathing is enabled in the kernel. If the output is N, then native NVMe multipathing is disabled. Consult the Dell Host Connectivity Guide for Linux to enable native NVMe multipathing.
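
As an illustration only (the Dell Host Connectivity Guide remains the authoritative reference, and on OpenShift this is normally delivered as a kernel argument through a MachineConfig), one common way to enable it on RHEL-family nodes is to add the nvme_core.multipath=Y kernel parameter and reboot:

# Add the kernel argument to all installed kernels, then reboot the node
sudo grubby --update-kernel=ALL --args="nvme_core.multipath=Y"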

Configure the IO policy

  • The default NVMeTCP native multipathing policy is “numa”. The preferred IO policy for NVMe devices used for PowerStore is round-robin. You can use udev rules to enable the round robin policy on all worker nodes. To view the IO policy you can use the following command:
nvme list-subsys

To change the IO policy to round-robin, you can add a udev rule on each worker node. Place a config file named 71-nvme-io-policy.rules in /etc/udev/rules.d with the following contents:

ACTION=="add|change", SUBSYSTEM=="nvme-subsystem", ATTR{iopolicy}="round-robin"

In order to change the rules on a running kernel you can run the following commands:

/sbin/udevadm control --reload-rules
/sbin/udevadm trigger --type=devices --action=change
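
To confirm the new policy took effect, in addition to nvme list-subsys you can read the iopolicy attribute directly from sysfs on nodes with connected NVMe subsystems:

# Should print "round-robin" for each connected NVMe subsystem
cat /sys/class/nvme-subsystem/nvme-subsys*/iopolicy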

On OCP clusters you can add a MachineConfig to enable this rule on all worker nodes:

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 99-workers-multipath-round-robin
  labels:
    machineconfiguration.openshift.io/role: worker
spec:
  config:
    ignition:
      version: 3.4.0
    storage:
      files:
      - contents:
          source: data:text/plain;charset=utf-8;base64,QUNUSU9OPT0iYWRkfGNoYW5nZSIsIFNVQlNZU1RFTT09Im52bWUtc3Vic3lzdGVtIiwgQVRUUntpb3BvbGljeX09InJvdW5kLXJvYmluIg==
          verification: {}
        filesystem: root
        mode: 420
        path: /etc/udev/rules.d/71-nvme-io-policy.rules
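
For reference, the base64 payload in the data URL above is simply the encoded contents of the rule file; you can regenerate or verify it with:

# Encode the udev rule for use in the MachineConfig data URL
base64 -w0 /etc/udev/rules.d/71-nvme-io-policy.rules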

Configure the control loss timeout

To reduce the impact of PowerStore non-disruptive software upgrades, you must set the control loss timeout. This can be done using udev rules on each worker node. More information can be found in the Dell Host Connectivity Guide. To configure the control loss timeout, place a config file named 72-nvmf-ctrl_loss_tmo.rules in /etc/udev/rules.d with the following contents:

ACTION=="add|change", SUBSYSTEM=="nvme", KERNEL=="nvme*", ATTR{ctrl_loss_tmo}="-1"

In order to change the rules on a running kernel you can run the following commands:

/sbin/udevadm control --reload-rules
/sbin/udevadm trigger --type=devices --action=change

On OCP clusters you can add a MachineConfig to enable this rule on all worker nodes:

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 99-nvmf-ctrl-loss-tmo
  labels:
    machineconfiguration.openshift.io/role: worker
spec:
  config:
    ignition:
      version: 3.4.0
    storage:
      files:
      - contents:
          source: data:text/plain;charset=utf-8;base64,QUNUSU9OPT0iYWRkfGNoYW5nZSIsIFNVQlNZU1RFTT09Im52bWUiLCBLRVJORUw9PSJudm1lKiIsIEFUVFJ7Y3RybF9sb3NzX3Rtb309Ii0xIgo=
          verification: {}
        filesystem: root
        mode: 420
        path: /etc/udev/rules.d/72-nvmf-ctrl_loss_tmo.rules
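
Once NVMe-oF controllers are connected, you can check that the timeout was applied; note that the ctrl_loss_tmo attribute is exposed only for fabrics controllers, so the command prints nothing on nodes without active NVMe-oF connections:

# -1 means the controller is never removed on connection loss
cat /sys/class/nvme/nvme*/ctrl_loss_tmo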

Requirements for NVMeTCP

Starting with OCP 4.14, NVMe/TCP is enabled by default on RHCOS nodes.

  • The nvme, nvme_core, nvme_fabrics, and nvme_tcp kernel modules are required for NVMe over Fabrics using TCP. Load the NVMe and NVMe-oF modules using the commands below:
modprobe nvme
modprobe nvme_tcp
  • The NVMe modules may not be available after a node reboot. Loading the modules at startup is recommended.
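
One way to load the modules at startup on systemd-based nodes is a modules-load.d drop-in, sketched below; on OpenShift this would typically be delivered through a MachineConfig instead:

# Load the NVMe/TCP modules automatically at boot
cat <<'EOF' | sudo tee /etc/modules-load.d/nvme-tcp.conf
nvme
nvme_tcp
EOF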

Requirements for NVMeFC

  • For NVMeFC, zoning of the Host Bus Adapters (HBAs) to the Fibre Channel ports on the PowerStore arrays must be done.

Do not load the nvme_tcp module for NVMeFC.

Linux multipathing requirements

Supported Multipathing

  • Dell PowerStore supports Linux multipathing (DM-MPIO) and NVMe native multipathing.
  • Configure Linux multipathing before installing the CSI Driver.
  • Configuration differs by protocol: NVMe uses native NVMe multipathing (see the NVMe requirements above), while FC and iSCSI use DM-MPIO. Refer to the Dell Host Connectivity Guide for the details of each configuration.
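
For FC/iSCSI with DM-MPIO, a minimal sketch of the node setup is shown below, assuming the commonly used user_friendly_names and find_multipaths defaults; the exact settings required for your environment are described in the Dell Host Connectivity Guide.

# /etc/multipath.conf - minimal example for DM-MPIO
defaults {
  user_friendly_names yes
  find_multipaths yes
}

After writing /etc/multipath.conf, enable the multipath daemon on each worker node:

sudo systemctl enable --now multipathd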

Replication feature Requirements (Optional)

Applicable only if you decide to enable the Replication feature in values.yaml:

replication:
  enabled: true

Replication CRDs

The CRDs for replication can be obtained and installed from the csm-replication project on GitHub. Use csm-replication/deploy/replicationcrds.all.yaml located in the csm-replication Git repository for the installation.
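
For example, from a local clone of the csm-replication repository (a manual alternative; the repctl-based flow described below can also be used to set up the CRDs):

# Install the replication CRDs from the csm-replication repository
kubectl create -f deploy/replicationcrds.all.yaml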

CRDs should be configured during the replication prepare stage with repctl, as described in install-repctl.