Prerequisites

The following requirements must be met before installing the CSI Driver for PowerMax:

  • A Kubernetes or OpenShift cluster (see supported versions).
  • If enabling CSM for Authorization, refer to the Authorization deployment steps first.
  • If enabling CSM Replication, both source and target storage systems must be locally managed by Unisphere.
    • Example: When using two Unisphere instances, the first Unisphere instance should be configured with the source storage system as locally managed and target storage system as remotely managed. The second Unisphere configuration should mirror the first — locally managing the target storage system and remotely managing the source storage system.
  • Refer to the sections below for protocol-specific requirements.
  • For NVMe support, the preferred multipath solution is native NVMe multipathing. The Dell Host Connectivity Guide describes the details of each configuration option.
  • Linux multipathing requirements (described later).
  • PowerPath for Linux requirements (described later).
  • Mount propagation is enabled on the container runtime that is being used.
  • If using the Snapshot feature, satisfy all Volume Snapshot requirements.
  • If the CSI driver images are hosted in an insecure registry, define that registry as insecure in Docker or whichever container runtime is in use.
  • Ensure that your nodes support mounting NFS volumes if using NFS.

CSI PowerMax Reverse Proxy

The CSI PowerMax Reverse Proxy is a component that will be installed with the CSI PowerMax driver. For more details on this feature, see the related documentation.

Create a TLS secret that holds an SSL certificate and a private key. This is required by the reverse proxy server. Use a tool such as openssl to generate this secret using the example below:

openssl genrsa -out tls.key 2048
openssl req -new -x509 -sha256 -key tls.key -out tls.crt -days 3650
kubectl create secret -n <namespace> tls csirevproxy-tls-secret --cert=tls.crt --key=tls.key
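
To confirm the secret was created in the intended namespace before proceeding, you can query it with kubectl (a quick verification step; the secret name matches the command above):

kubectl get secret -n <namespace> csirevproxy-tls-secret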

Fibre Channel Requirements

The following requirements must be fulfilled to successfully use the Fibre Channel protocol with the CSI PowerMax driver:

  • Zoning of the Host Bus Adapters (HBAs) to the Fibre Channel port director must be completed.
  • Ensure that the HBA WWNs (initiators) appear on the list of initiators that are logged into the array.
  • If the number of volumes that will be published to nodes is high, then configure the maximum number of LUNs for your HBAs on each node. See the appropriate HBA document to configure the maximum number of LUNs.
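
To confirm which initiators should appear as logged in on the array, the HBA WWPNs can be read from sysfs on each worker node; a quick check, assuming the HBAs are exposed through the standard Linux fc_host class:

cat /sys/class/fc_host/host*/port_name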

iSCSI Requirements

The following requirements must be fulfilled to successfully use the iSCSI protocol with the CSI PowerMax driver:

  • Ensure that the necessary iSCSI initiator utilities are installed on each Kubernetes worker node. This typically includes the iscsi-initiator-utils package for RHEL.
  • Ensure that the unique initiator name is set in /etc/iscsi/initiatorname.iscsi.
  • To configure iSCSI in Red Hat OpenShift clusters, you can create a MachineConfig object using the console or oc to ensure that the iSCSI daemon starts on all the Red Hat CoreOS nodes. Here is an example of a MachineConfig object:
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 99-iscsid
  labels:
    machineconfiguration.openshift.io/role: worker
spec:
  config:
    ignition:
      version: 3.4.0
    systemd:
      units:
      - name: "iscsid.service"
        enabled: true

Once the MachineConfig object has been deployed, CoreOS will ensure that the iscsid.service starts automatically. You can check the status of the iSCSI service by entering the following command on each worker node in the cluster: sudo systemctl status iscsid.

  • Ensure that the iSCSI initiators are available on all the nodes where the driver node plugin will be installed.
  • If your worker nodes are running Red Hat CoreOS, make sure that automatic iSCSI login at boot is configured. Contact Red Hat for more details.
  • Kubernetes nodes must have network connectivity to an iSCSI director on the Dell PowerMax array that has IP interfaces. Manually create IP routes for each node that connects to the Dell PowerMax if required.
  • Ensure that the iSCSI initiators on the nodes are not a part of any existing Host (Initiator Group) on the Dell PowerMax array.
  • The CSI Driver needs the port group name containing the required iSCSI director ports. These port groups must be set up on each Dell PowerMax array. All the port group names supplied to the driver must exist on each Dell PowerMax with the same name.
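
Before installing the driver, the iSCSI prerequisites above can be spot-checked on each worker node by confirming the initiator name, making sure iscsid is enabled, and testing discovery against an iSCSI director; a verification sketch, where <iscsi-director-ip> is a placeholder for an actual director IP interface:

cat /etc/iscsi/initiatorname.iscsi
sudo systemctl enable --now iscsid
sudo iscsiadm -m discovery -t sendtargets -p <iscsi-director-ip>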

Refer to the Dell Host Connectivity Guide for more information.

NVMe Requirements

The following requirements must be fulfilled in order to successfully use the NVMe/TCP protocols with the CSI PowerMax driver:

  • The nvme, nvme_core, nvme_fabrics, and nvme_tcp kernel modules are required to use NVMe over Fabrics with TCP. Load the NVMe and NVMe-oF modules using the commands below:
modprobe nvme
modprobe nvme_tcp
  • The NVMe modules may not be available after a node reboot. Loading the modules at startup is recommended.
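
On systemd-based distributions, one common way to load the modules at startup is a modules-load.d configuration; a minimal sketch (the file name is arbitrary):

cat <<EOF | sudo tee /etc/modules-load.d/nvme-tcp.conf
nvme
nvme_tcp
EOF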

Starting with OCP 4.14, NVMe/TCP is enabled by default on RHCOS nodes.

Cluster requirements

  • All OpenShift or Kubernetes nodes connecting to Dell storage arrays must use unique host NQNs.
  • The driver requires the NVMe command-line interface (nvme-cli) to manage the NVMe clients and targets. Install the NVMe CLI tool on each host using the following command on RPM-based Linux distributions:
sudo dnf -y install nvme-cli
  • Support for NVMe requires native NVMe multipathing to be configured on each worker node in the cluster. Please refer to the Dell Host Connectivity Guide for more details on NVMe multipathing requirements. To determine if the worker nodes are configured for native NVMe multipathing run the following command on each worker node:
cat /sys/module/nvme_core/parameters/multipath

If the command displays Y, then NVMe native multipathing is enabled in the kernel. If the output is N, then native NVMe multipathing is disabled. Consult the Dell Host Connectivity Guide for Linux to enable native NVMe multipathing.
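
If the output is N, one common way to enable native NVMe multipathing on RHEL-family nodes is to set the nvme_core.multipath kernel parameter and reboot; treat this as a sketch and confirm the exact procedure in the Dell Host Connectivity Guide:

sudo grubby --update-kernel=ALL --args="nvme_core.multipath=Y"
sudo reboot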

Configure the IO policy

  • The default NVMe/TCP native multipathing policy is “numa”. The preferred IO policy for NVMe devices used for PowerMax is round-robin. You can use udev rules to enable the round-robin policy on all worker nodes. To view the current IO policy, use the following command:
nvme list-subsys

To change the IO policy to round-robin, add a udev rule on each worker node. Place a configuration file named 71-nvme-io-policy.rules in /etc/udev/rules.d with the following contents:

ACTION=="add|change", SUBSYSTEM=="nvme-subsystem", ATTR{iopolicy}="round-robin"

To apply the new rule without a reboot, reload the udev rules and trigger a device change event:

/sbin/udevadm control --reload-rules
/sbin/udevadm trigger --type=devices --action=change
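
Because the udev rule sets the iopolicy attribute on the nvme-subsystem devices, the change can be verified directly from sysfs once at least one NVMe subsystem is connected:

cat /sys/class/nvme-subsystem/nvme-subsys*/iopolicy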

On OCP clusters you can add a MachineConfig to enable this rule on all worker nodes:

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 99-workers-multipath-round-robin
  labels:
    machineconfiguration.openshift.io/role: worker
spec:
  config:
    ignition:
      version: 3.4.0
    storage:
      files:
      - contents:
          source: data:text/plain;charset=utf-8;base64,QUNUSU9OPT0iYWRkfGNoYW5nZSIsIFNVQlNZU1RFTT09Im52bWUtc3Vic3lzdGVtIiwgQVRUUntpb3BvbGljeX09InJvdW5kLXJvYmluIg==
          verification: {}
        filesystem: root
        mode: 420
        path: /etc/udev/rules.d/71-nvme-io-policy.rules
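
The source field above is the same udev rule, base64-encoded into a data URL. If you modify the rule, you can regenerate the encoded contents with base64 (the -w0 flag keeps the output on a single line):

echo -n 'ACTION=="add|change", SUBSYSTEM=="nvme-subsystem", ATTR{iopolicy}="round-robin"' | base64 -w0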

Array requirements

Once the NVMe endpoint is created on the array, follow these steps to update the endpoint name to adhere to the CSI driver requirements:

  • Run nvme discover --transport=tcp --traddr=<InterfaceAdd> --trsvcid=4420, where <InterfaceAdd> is the placeholder for the actual IP address of the NVMe endpoint.
  • Fetch the subnqn, for example nqn.1988-11.com.dell:PowerMax_2500:00:000120001100; this value is used as the <subnqn> placeholder when updating the NVMe endpoint name.
  • Update the NVMe endpoint name as <subnqn>:<dir><port>. For example: nqn.1988-11.com.dell:PowerMax_2500:00:000120001100:OR1C000
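
For example, the subnqn value can be pulled out of the discovery output with a simple filter (the address is the same placeholder as above; output formatting may vary with the nvme-cli version):

nvme discover --transport=tcp --traddr=<InterfaceAdd> --trsvcid=4420 | grep subnqn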

NFS Requirements

CSI Driver for Dell PowerMax supports NFS communication. Ensure that the following requirements are met before you install the CSI Driver:

  • Configure the NFS network. Refer to the related documentation for more details.
  • A PowerMax Embedded Management guest must be available to access Unisphere for PowerMax.
  • Create the NAS server. Refer to the related documentation for more details.
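
Because the worker nodes mount the NFS exports directly, the NFS client utilities must be present on each node; on RPM-based distributions they can be installed with dnf (a sketch; package names may differ on other distributions):

sudo dnf -y install nfs-utils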

Choose your multipathing software: Linux native multipathing (DM-MPIO) or PowerPath.

Linux Multipathing Requirements

Dell PowerMax supports Linux multipathing (DM-MPIO) and native NVMe multipathing. Configure the appropriate multipathing solution before installing the CSI Driver: use native NVMe multipathing for NVMe/TCP volumes (see the NVMe Requirements above) and DM-MPIO for FC and iSCSI volumes.
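
For DM-MPIO, a minimal /etc/multipath.conf such as the sketch below is a common starting point (an illustration only; consult the Dell Host Connectivity Guide for the settings recommended for your environment). The multipathd service must then be enabled on every worker node:

cat <<EOF | sudo tee /etc/multipath.conf
defaults {
  user_friendly_names yes
  find_multipaths yes
}
EOF
sudo systemctl enable --now multipathd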

PowerPath for Linux requirements

The CSI Driver for Dell PowerMax supports PowerPath for Linux. Configure Linux PowerPath before installing the CSI Driver.

Follow this procedure to set up PowerPath for Linux:

  • All the nodes must have the PowerPath package installed. Download the PowerPath archive for the environment from Dell Online Support.
  • Untar the PowerPath archive, copy the RPM package into a temporary folder, and install PowerPath using rpm -ivh DellEMCPower.LINUX-<version>-<build>.<platform>.x86_64.rpm.
  • Start the PowerPath service using systemctl start PowerPath.

Note: Do not install Dell PowerPath if native multipathing software is already installed; PowerPath and native multipathing cannot coexist on the same host.
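
Once the service is running, you can confirm that PowerPath is managing the array paths; a quick check, assuming the powermt utility shipped with the PowerPath package:

sudo systemctl status PowerPath
sudo powermt display dev=all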

Replication Requirements (Optional)

Applicable only if you decided to enable the Replication feature in my-powermax-settings.yaml:

replication:
  enabled: true

Replication CRDs

The CRDs for replication can be obtained and installed from the csm-replication project on Github. Use csm-replication/deploy/replicationcrds.all.yaml located in the csm-replication git repo for the installation.
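
For a manual installation, the CRDs can be applied with kubectl; a minimal sketch, assuming the repository is cloned from GitHub (repository URL assumed):

git clone https://github.com/dell/csm-replication.git
kubectl create -f csm-replication/deploy/replicationcrds.all.yaml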

CRDs should be configured during the replication prepare stage with repctl, as described in install-repctl.