Prerequisite

The following requirements must be met before installing the CSI Driver for Dell PowerFlex:

  • Enable Zero Padding on PowerFlex (see details below)
  • Mount propagation is enabled on the container runtime that is being used
  • If using the Snapshot feature, satisfy all Volume Snapshot requirements
  • A user must exist on the array with a role >= FrontEndConfigure
  • If enabling CSM for Authorization, please refer to the Authorization deployment steps first
  • If multipath is configured, ensure that CSI-PowerFlex volumes are blacklisted by multipathd; a sample blacklist entry is shown after this list, and the troubleshooting section has full details
  • For NVMe support the preferred multipath solution is NVMe native multipathing. The Dell Host Connectivity Guide describes the details of each configuration option.
  • Secure boot is not supported; ensure that secure boot is disabled in the BIOS.
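
A minimal sketch of a multipathd blacklist entry for the SDC block devices (which appear as /dev/scini*); treat the exact pattern as an assumption and confirm it against the troubleshooting section before use. Restart or reconfigure multipathd after editing the file.

# /etc/multipath.conf -- exclude PowerFlex SDC devices from multipathd
blacklist {
  devnode "^scini[a-z]+"
}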

SDC Support

The CSI Driver for PowerFlex requires you to have installed the PowerFlex Storage Data Client (SDC) on all Kubernetes nodes which run the node portion of the CSI driver.

The SDC can be installed automatically by the CSI driver install on Kubernetes nodes whose OS platform supports automatic SDC deployment, currently Red Hat CoreOS (RHCOS) and RHEL. On Kubernetes nodes with an OS version not supported by automatic install, you must perform the Manual SDC Deployment steps below.

Refer to https://quay.io/repository/dell/storage/powerflex/sdc for supported OS versions.

Please visit E-Lab Navigator for specific Dell Storage platform host operating system level support matrices.

Note: For NVMe/TCP, SDC must be disabled. If SDC is enabled, it takes precedence over NVMe/TCP and the driver will treat the node as an SDC node.

SDC Deployment Options

Automatic Deployment

To install the CSI driver for PowerFlex with automated SDC deployment, the following two packages are required on the worker nodes:

  • libaio
  • numactl-libs

Optional: For a typical install, you will pull SDC kernel modules from the Dell FTP site, which is set up by default. Some users might want to mirror this repository to a local location. The PowerFlex KB article has instructions on how to do this.
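
For example, on RHEL-family worker nodes the two prerequisite packages can typically be installed with (package names assumed to match your distribution's repositories):

dnf install -y libaio numactl-libs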

For CSM Operator:

  • Enable/Disable SDC: Set the X_CSI_SDC_ENABLED value in the CR file to true (default) or false. A CR sketch is shown after the sidecar example below.

  • MDM Value: The operator sets the MDM value for initContainers in the driver CR from the mdm attributes in config.yaml. Do not set this manually.

  • SDC Monitor: Enable the SDC monitor by setting the enabled flag to true in the sidecar configuration.

    • With Sidecar: Edit the HOST_PID and MDM fields with the host PID and MDM IPs.
    • Without Sidecar: Leave the enabled field set to false.

    Example Sidecar config:

    sideCars:
      # sdc-monitor is disabled by default, due to high CPU usage
      - name: sdc-monitor
        enabled: false
        image: quay.io/dell/storage/powerflex/sdc:5.0
        envs:
          - name: HOST_PID
            value: "1"
          - name: MDM
            value: "10.xx.xx.xx,10.xx.xx.xx" # provide the same MDM value from the secret
    
Manual Deployment

For operating systems not supported by automatic installation, or if you prefer to manage SDC manually:

  • Refer to Quay for supported OS versions.
  • Manual SDC Deployment Steps:
    1. Download SDC: Download the PowerFlex SDC from Dell Online support. The filename is EMC-ScaleIO-sdc-*.rpm, where * is the SDC name corresponding to the PowerFlex installation version.
    2. Set MDM IPs: Export the shell variable MDM_IP in a comma-separated list:
      export MDM_IP=xx.xxx.xx.xx,xx.xxx.xx.xx
      
      where xx.xxx.xx.xx represents the actual MDM IP addresses in your environment.
    3. Install SDC: Install the SDC per the Dell PowerFlex Deployment Guide. For Red Hat Enterprise Linux, run:
      rpm -iv ./EMC-ScaleIO-sdc-*.x86_64.rpm
      
      Replace * with the SDC name corresponding to the PowerFlex installation version.
    4. Multi-Array Support: To add more MDM_IPs for multi-array support, run:
      /opt/emc/scaleio/sdc/bin/drv_cfg --add_mdm --ip 10.xx.xx.xx,10.xx.xx.xx
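
      To confirm that the SDC kernel module is loaded and that the expected MDMs are configured, you can run the following checks (a sketch; the drv_cfg output format varies by PowerFlex version):

      lsmod | grep scini
      /opt/emc/scaleio/sdc/bin/drv_cfg --query_mdms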
      

NVMe/TCP Requirements

  1. Complete the NVMe network configuration to connect the hosts with the PowerFlex storage array. Please refer to the Host Connectivity Guide for best practices for attaching hosts to a PowerFlex storage array.

  2. Verify that the initiators of each host are logged in to the PowerFlex storage array. CSM will perform the host registration of each host with the PowerFlex array.
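
    After the NVMe network configuration is in place, a quick sanity check from a host is to list the NVMe subsystems and paths it currently sees (a sketch; requires the nvme-cli package):

    nvme list-subsys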

  3. To ensure successful integration of NVMe protocols with the CSI Driver, the following conditions must be met:

    • Each OpenShift node that connects to Dell storage arrays must have a unique NVMe Qualified Name (NQN).
    • By default, the OpenShift deployment process for CoreOS assigns the same host NQN to all nodes. This value is stored in the file: /etc/nvme/hostnqn.
    • To resolve this and guarantee unique host NQNs across nodes, you can apply a machine configuration to your OpenShift Container Platform (OCP) cluster. One recommended approach is to add the following machine config:

    cat <<EOF > 99-worker-custom-nvme-hostnqn.yaml
    apiVersion: machineconfiguration.openshift.io/v1
    kind: MachineConfig
    metadata:
      labels:
        machineconfiguration.openshift.io/role: worker
      name: 99-worker-custom-nvme-hostnqn
    spec:
      config:
        ignition:
          version: 3.4.0
        systemd:
          units:
            - contents: |
                [Unit]
                Description=Custom CoreOS Generate NVMe Hostnqn
                [Service]
                Type=oneshot
                ExecStart=/usr/bin/sh -c '/usr/sbin/nvme gen-hostnqn > /etc/nvme/hostnqn'
                RemainAfterExit=yes
                [Install]
                WantedBy=multi-user.target
              enabled: true
              name: custom-coreos-generate-nvme-hostnqn.service
    EOF
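
    After creating the file, apply it to the cluster; the Machine Config Operator rolls out the change and reboots the worker nodes:

    oc apply -f 99-worker-custom-nvme-hostnqn.yaml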
    

  4. Configure IO policy for native NVMe multipathing

    Use this command to create the machine configuration that sets the native NVMe multipathing IO policy to round-robin:

    oc apply -f 99-workers-multipath-round-robin.yaml
    

    cat <<EOF> 71-nvmf-iopolicy-dell.rules
    ACTION=="add", SUBSYSTEM=="nvme-subsystem", ATTR{model}=="dellemc-PowerFlex",ATTR{iopolicy}="round-robin"
    EOF
    

    Example:

    cat <<EOF> 99-workers-multipath-round-robin.yaml
    apiVersion: machineconfiguration.openshift.io/v1
    kind: MachineConfig
    metadata:
      name: 99-workers-multipath-round-robin
      labels:
        machineconfiguration.openshift.io/role: worker
    spec:
      config:
        ignition:
          version: 3.4.0
        storage:
          files:
          - contents:
              source: data:text/plain;charset=utf-8;base64,$(cat 71-nvmf-iopolicy-dell.rules | base64 -w0)
              verification: {}
            filesystem: root
            mode: 420
            path: /etc/udev/rules.d/71-nvme-io-policy.rules 
    EOF
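
    Once the MachineConfig has rolled out and the nodes have rebooted, the effective IO policy can be checked on each node, for example (sysfs path assumed for kernels with native NVMe multipathing enabled):

    cat /sys/class/nvme-subsystem/nvme-subsys*/iopolicy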
    

  5. Configure NVMe to reconnect forever

    Use this command to create the machine configuration that configures NVMe controllers to keep retrying the connection indefinitely:

    oc apply -f 99-workers-nvmf-ctrl-loss-tmo.yaml 
    

    cat <<EOF> 72-nvmf-ctrl_loss_tmo.rules
    ACTION=="add|change", SUBSYSTEM=="nvme", KERNEL=="nvme*", ATTR{ctrl_loss_tmo}="-1"
    EOF
    

    cat <<EOF> 99-workers-nvmf-ctrl-loss-tmo.yaml
    apiVersion: machineconfiguration.openshift.io/v1
    kind: MachineConfig
    metadata:
      name: 99-workers-nvmf-ctrl-loss-tmo
      labels:
        machineconfiguration.openshift.io/role: worker
    spec:
      config:
        ignition:
          version: 3.4.0
        storage:
          files:
          - contents:
              source: data:text/plain;charset=utf-8;base64,$(cat 72-nvmf-ctrl_loss_tmo.rules | base64 -w0)
              verification: {}
            filesystem: root
            mode: 420
            path: /etc/udev/rules.d/72-nvmf-ctrl_loss_tmo.rules
    EOF
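
    Similarly, once the nodes are back up you can verify the controller loss timeout on the connected NVMe controllers; a value of -1 means reconnect forever (attribute path assumed to match the udev rule above):

    cat /sys/class/nvme/nvme*/ctrl_loss_tmo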
    

NFS Requirements

  • If using NFS, ensure that your nodes support mounting NFS volumes (see the example below).
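
On RHEL-family nodes this typically means installing the NFS client utilities, for example (package name assumed for your distribution):

dnf install -y nfs-utils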

Enable Zero Padding on PowerFlex

Verify that zero padding is enabled on the PowerFlex storage pools that will be used. Use the PowerFlex GUI or the PowerFlex CLI to check this setting. For more information on configuring this setting, see the Dell PowerFlex documentation.
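
A hedged sketch of checking and enabling the setting from the PowerFlex CLI is shown below; the scli option names are assumptions based on common scli usage and may differ by PowerFlex version, and the policy generally cannot be changed once the pool is in use, so confirm against the CLI reference for your release:

# Query the storage pool and look for the zero-padding state in the output
scli --query_storage_pool --protection_domain_name <pd_name> --storage_pool_name <sp_name>

# Enable zero padding on an empty storage pool (assumed option names)
scli --modify_zero_padding_policy --protection_domain_name <pd_name> --storage_pool_name <sp_name> --enable_zero_padding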

Volume Snapshot Requirements

For the detailed snapshot setup procedure, refer to the Volume Snapshot feature documentation.