Installation Guide

  1. Set up an OpenShift cluster following the official documentation.
  2. Complete the prerequisites listed below.
  3. Complete the base installation.
  4. Proceed with module installation.


Prerequisites

The following requirements must be met before installing the CSI Driver for Dell PowerFlex:

  • Enable Zero Padding on PowerFlex (see details below)
  • Mount propagation must be enabled on the container runtime in use
  • If using the Snapshot feature, satisfy all Volume Snapshot requirements
  • A user must exist on the array with a role >= FrontEndConfigure
  • If enabling CSM for Authorization, refer to the Authorization deployment steps first
  • If multipath is configured, ensure CSI-PowerFlex volumes are blacklisted by multipathd; see the troubleshooting section for details and the sketch after this list
  • For NVMe support, the preferred multipath solution is NVMe native multipathing. The Dell Host Connectivity Guide describes the details of each configuration option.
  • Secure Boot is not supported; ensure that Secure Boot is disabled in the BIOS.
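
A minimal sketch of the multipathd blacklist entry mentioned above, assuming SDC block devices are exposed through the scini driver (device nodes matching /dev/scini*), placed in /etc/multipath.conf:

    blacklist {
        devnode "^scini[a-z]+"
    }

Reload multipathd after editing the file, for example with systemctl reload multipathd.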

SDC Support

The CSI Driver for PowerFlex requires the PowerFlex Storage Data Client (SDC) to be installed on all Kubernetes nodes that run the node portion of the CSI driver.

The CSI driver install can deploy SDC automatically on Kubernetes nodes running an OS platform that supports automatic SDC deployment: Red Hat CoreOS (RHCOS) and RHEL. On Kubernetes nodes with an OS version not supported by automatic install, you must perform the Manual SDC Deployment steps below.

Refer to https://quay.io/repository/dell/storage/powerflex/sdc for supported OS versions.

Please visit E-Lab Navigator for specific Dell Storage platform host operating system level support matrices.

Note: For NVMe/TCP, SDC must be disabled. If SDC is enabled, it takes precedence over NVMe/TCP and the driver will treat the node as an SDC node.

SDC Deployment Options

Automatic Deployment

To install the CSI driver for PowerFlex with automated SDC deployment, the following two packages are required on the worker nodes:

  • libaio
  • numactl-libs

Optional: For a typical install, you will pull SDC kernel modules from the Dell FTP site, which is set up by default. Some users might want to mirror this repository to a local location. The PowerFlex KB article has instructions on how to do this.
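
On RHEL-family worker nodes, for example, these packages could be installed as follows (a sketch; package names can vary by distribution):

    dnf install -y libaio numactl-libs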

For CSM Operator:

  • Enable/Disable SDC: Set the X_CSI_SDC_ENABLED value in the CR file to true (default) or false.
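
    Example (a sketch, assuming the flag is set as a node environment variable as in the sample CR files):

    spec:
      driver:
        node:
          envs:
            # Set to "false" to disable automatic SDC deployment (for example, for NVMe/TCP)
            - name: X_CSI_SDC_ENABLED
              value: "true"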

  • MDM Value: The operator sets the MDM value for initContainers in the driver CR from the mdm attributes in config.yaml. Do not set this manually.

  • SDC Monitor: Enable the SDC monitor by setting the enabled flag to true in the sidecar configuration.

    • With Sidecar: Edit the HOST_PID and MDM fields with the host PID and MDM IPs.
    • Without Sidecar: Leave the enabled field set to false.

    Example Sidecar config:

    sideCars:
      # sdc-monitor is disabled by default, due to high CPU usage
      - name: sdc-monitor
        enabled: false
        image: quay.io/dell/storage/powerflex/sdc:5.0
        envs:
          - name: HOST_PID
            value: "1"
          - name: MDM
            value: "10.xx.xx.xx,10.xx.xx.xx" # provide the same MDM value as in the secret
    
Manual Deployment

For operating systems not supported by automatic installation, or if you prefer to manage SDC manually:

  • Refer to Quay for supported OS versions.
  • Manual SDC Deployment Steps:
    1. Download SDC: Download the PowerFlex SDC from Dell Online support. The filename is EMC-ScaleIO-sdc-*.rpm, where * is the SDC name corresponding to the PowerFlex installation version.
    2. Set MDM IPs: Export the shell variable MDM_IP in a comma-separated list:
      export MDM_IP=xx.xxx.xx.xx,xx.xxx.xx.xx
      
      where xx.xxx.xx.xx represents the actual IP addresses in your environment.
    3. Install SDC: Install the SDC per the Dell PowerFlex Deployment Guide. For Red Hat Enterprise Linux, run:
      rpm -iv ./EMC-ScaleIO-sdc-*.x86_64.rpm
      
      Replace * with the SDC name corresponding to the PowerFlex installation version.
    4. Multi-Array Support: To add more MDM_IP for multi-array support, run:
      /opt/emc/scaleio/sdc/bin/drv_cfg --add_mdm --ip 10.xx.xx.xx,10.xx.xx.xx
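
      To confirm the MDM entries, you can query the configured MDMs with the same drv_cfg tool:

      /opt/emc/scaleio/sdc/bin/drv_cfg --query_mdms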
      

NVMe/TCP Requirements

  1. Complete the NVMe network configuration to connect the hosts with the PowerFlex storage array. Refer to the Host Connectivity Guide for best practices for attaching hosts to a PowerFlex storage array.

  2. Verify that the initiators of each host are logged in to the PowerFlex storage array. CSM will perform the Host Registration of each host with the PowerFlex array.

  3. To ensure successful integration of NVMe protocols with the CSI Driver, the following conditions must be met:

    • Each OpenShift node that connects to Dell storage arrays must have a unique NVMe Qualified Name (NQN).
    • By default, the OpenShift deployment process for CoreOS assigns the same host NQN to all nodes. This value is stored in the file: /etc/nvme/hostnqn.
    • To resolve this and guarantee unique host NQNs across nodes, you can apply a machine configuration to your OpenShift Container Platform (OCP) cluster. One recommended approach is to add the following machine config:

    cat <<EOF > 99-worker-custom-nvme-hostnqn.yaml
    apiVersion: machineconfiguration.openshift.io/v1
    kind: MachineConfig
    metadata:
      labels:
        machineconfiguration.openshift.io/role: worker
      name: 99-worker-custom-nvme-hostnqn
    spec:
      config:
        ignition:
          version: 3.4.0
        systemd:
          units:
            - contents: |
                [Unit]
                Description=Custom CoreOS Generate NVMe Hostnqn
                [Service]
                Type=oneshot
                ExecStart=/usr/bin/sh -c '/usr/sbin/nvme gen-hostnqn > /etc/nvme/hostnqn'
                RemainAfterExit=yes
                [Install]
                WantedBy=multi-user.target
              enabled: true
              name: custom-coreos-generate-nvme-hostnqn.service
    EOF
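
    After the machine config is applied and the nodes have rebooted, you can spot-check that each node reports a distinct NQN (a sketch using oc debug; replace <node-name> with an actual node name):

    oc debug node/<node-name> -- chroot /host cat /etc/nvme/hostnqn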
    

  4. Configure IO policy for native NVMe multipathing

    Use this command to create the machine configuration that sets the native NVMe multipathing IO policy to round-robin:

    oc apply -f 99-workers-multipath-round-robin.yaml
    

    cat <<EOF> 71-nvmf-iopolicy-dell.rules
    ACTION=="add", SUBSYSTEM=="nvme-subsystem", ATTR{model}=="dellemc-PowerFlex",ATTR{iopolicy}="round-robin"
    EOF
    

    Example:

    cat <<EOF> 99-workers-multipath-round-robin.yaml
    apiVersion: machineconfiguration.openshift.io/v1
    kind: MachineConfig
    metadata:
      name: 99-workers-multipath-round-robin
      labels:
        machineconfiguration.openshift.io/role: worker
    spec:
      config:
        ignition:
          version: 3.4.0
        storage:
          files:
          - contents:
              source: data:text/plain;charset=utf-8;base64,$(cat 71-nvmf-iopolicy-dell.rules | base64 -w0)
              verification: {}
            filesystem: root
            mode: 420
            path: /etc/udev/rules.d/71-nvme-io-policy.rules 
    EOF
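
    After the nodes reboot with the new udev rule, the effective policy can be checked through sysfs (a sketch; the subsystem index varies per node):

    cat /sys/class/nvme-subsystem/nvme-subsys*/iopolicy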
    

  5. Configure NVMe reconnecting forever

    Use this command to create the machine configuration that sets the NVMe controller loss timeout (ctrl_loss_tmo) to -1, so the initiator keeps trying to reconnect indefinitely:

    oc apply -f 99-workers-nvmf-ctrl-loss-tmo.yaml 
    

    cat <<EOF> 72-nvmf-ctrl_loss_tmo.rules
    ACTION=="add|change", SUBSYSTEM=="nvme", KERNEL=="nvme*", ATTR{ctrl_loss_tmo}="-1"
    EOF
    

    cat <<EOF> 99-workers-nvmf-ctrl-loss-tmo.yaml
    apiVersion: machineconfiguration.openshift.io/v1
    kind: MachineConfig
    metadata:
      name: 99-workers-nvmf-ctrl-loss-tmo
      labels:
        machineconfiguration.openshift.io/role: worker
    spec:
      config:
        ignition:
          version: 3.4.0
        storage:
          files:
          - contents:
              source: data:text/plain;charset=utf-8;base64,$(cat 72-nvmf-ctrl_loss_tmo.rules | base64 -w0)
              verification: {}
            filesystem: root
            mode: 420
            path: /etc/udev/rules.d/72-nvmf-ctrl_loss_tmo.rules
    EOF
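
    After reboot, verify the setting on the NVMe controllers (a sketch; controller names vary):

    cat /sys/class/nvme/nvme*/ctrl_loss_tmo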
    


NFS Requirements

  • Ensure that your nodes support mounting NFS volumes if using NFS.
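
For example, on traditional RHEL worker nodes the NFS client utilities could be installed as follows (a sketch; RHCOS nodes manage packages differently, via rpm-ostree):

    dnf install -y nfs-utils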

Enable Zero Padding on PowerFlex

Verify that zero padding is enabled on the PowerFlex storage pools that will be used. Use the PowerFlex GUI or the PowerFlex CLI to check this setting. For more information on configuring this setting, see the Dell PowerFlex documentation.
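
As an illustrative sketch using the PowerFlex CLI, where <PD_NAME> and <POOL_NAME> are placeholders for your protection domain and storage pool, the setting could be queried, and enabled while the pool is still empty:

    scli --query_storage_pool --protection_domain_name <PD_NAME> --storage_pool_name <POOL_NAME>
    scli --modify_zero_padding_policy --protection_domain_name <PD_NAME> --storage_pool_name <POOL_NAME> --enable_zero_padding

Note that the zero padding policy can typically only be changed before devices are added to the pool.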

Volume Snapshot Requirements

For the detailed snapshot setup procedure, click here.



Operator Installation


  1. On the OpenShift console, navigate to OperatorHub and use the keyword filter to search for Dell Container Storage Modules.

  2. Click the Dell Container Storage Modules tile.

  3. Keep all default settings and click Install.


    Verify that the operator is deployed:

    oc get operators
    
    NAME                                                          AGE
    dell-csm-operator-certified.openshift-operators               2d21h
    
    oc get pod -n openshift-operators
    
    NAME                                                       READY   STATUS       RESTARTS      AGE
    dell-csm-operator-controller-manager-86dcdc8c48-6dkxm      2/2     Running      21 (19h ago)  2d21h
    

CSI Driver Installation

For details on enabling NVMe/TCP, refer to the NVMe/TCP Support section in the Features page.


  1. Create project:

    Use this command to create a new project. You can use any project name instead of vxflexos.

    oc new-project vxflexos
    
  2. Create config secret:

    Create a file called config.yaml, or use the sample below:

    Example:

    cat << EOF > config.yaml
    - username: "admin"
      password: "password"
      systemID: "2b11bb111111bb1b"
      endpoint: "https://127.0.0.2"
      skipCertificateValidation: true
      mdm: "10.0.0.3,10.0.0.4"
      blockProtocol: "auto"
    EOF
    

    Add a block for each PowerFlex array to config.yaml, and include both source and target arrays if replication is enabled; a two-array sketch is shown below.
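
    For illustration, a config.yaml covering two arrays might look like this (all values are placeholders):

    - username: "admin"
      password: "password"
      systemID: "2b11bb111111bb1b"
      endpoint: "https://127.0.0.2"
      skipCertificateValidation: true
      mdm: "10.0.0.3,10.0.0.4"
      blockProtocol: "auto"
    - username: "admin"
      password: "password"
      systemID: "1a22aa222222aa2a"
      endpoint: "https://127.0.0.3"
      skipCertificateValidation: true
      mdm: "10.0.1.3,10.0.1.4"
      blockProtocol: "auto"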


    Edit the file, then run this command to generate the vxflexos-config secret manifest:

    oc create secret generic vxflexos-config --from-file=config=config.yaml -n vxflexos --dry-run=client -oyaml > secret-vxflexos-config.yaml
    

    Use this command to create the config:

    oc apply -f secret-vxflexos-config.yaml
    

    Use this command to replace or update the config:

    oc replace -f secret-vxflexos-config.yaml --force
    

    Verify that the config secret is created:

    oc get secret -n vxflexos
    
    NAME                 TYPE        DATA   AGE
    vxflexos-config      Opaque      1      3h7m
    

  3. Create the Custom Resource ContainerStorageModule for PowerFlex.

    Use this command to create the ContainerStorageModule Custom Resource:

    oc create -f csm-vxflexos.yaml
    

    Starting with CSM version 1.16, you can use the spec.version parameter for automatic image management; no ConfigMap or custom registry configuration is needed.

    For more details, see the Advanced Image Configuration Options section.

    Example:

    cat << EOF > csm-vxflexos.yaml
    apiVersion: storage.dell.com/v1
    kind: ContainerStorageModule
    metadata:
      name: vxflexos
      namespace: vxflexos
    spec:
      version: v1.16.1
      driver:
        csiDriverType: "powerflex"
    EOF
    

    Detailed Configuration: Use the sample file for detailed settings.

    Note:

    • Configure SFTP settings as described in PowerFlex Concepts
    • Configure the OIDC secret as described in PowerFlex Concepts

    To set parameters in the CR, see the Parameters table, which lists the main settings of the PowerFlex driver and their defaults.

    Check that the ContainerStorageModule CR was created successfully:
    oc get csm vxflexos -n vxflexos
    
    NAME        CREATIONTIME   CSIDRIVERTYPE   CONFIGVERSION                                           STATE
    vxflexos    3h             powerflex       v2.16.0         Succeeded
    

    Check the status of the CR to verify if the driver installation is in the Succeeded state. If the status is not Succeeded, see the Troubleshooting guide for more information.
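
    To troubleshoot further, the CR's events can be inspected with a standard describe call:

    oc describe csm vxflexos -n vxflexos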


  4. Create Storage class:

    Use this command to create the Storage Class:

    oc apply -f sc-vxflexos.yaml
    

    Example:

    cat << EOF > sc-vxflexos.yaml
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: vxflexos
      annotations:
        storageclass.kubernetes.io/is-default-class: "true"
    provisioner: csi-vxflexos.dellemc.com
    reclaimPolicy: Delete
    allowVolumeExpansion: true
    parameters:
      storagepool: <STORAGE_POOL>
      systemID: <SYSTEM_ID>
      csi.storage.k8s.io/fstype: ext4
    volumeBindingMode: Immediate
    EOF
    

    Replace the placeholders with actual values for your PowerFlex array. For additional storage class samples, refer here.


    Verify Storage Class is created:

    oc get storageclass vxflexos
    
    NAME                    PROVISIONER                    RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
    vxflexos (default)      csi-vxflexos.dellemc.com       Delete          Immediate           true                   3h8m
    

  5. Create Volume Snapshot Class:

    Use this command to create the Volume Snapshot Class:

    oc apply -f vsclass-vxflexos.yaml
    

    Example:

    cat << EOF > vsclass-vxflexos.yaml
    apiVersion: snapshot.storage.k8s.io/v1
    kind: VolumeSnapshotClass
    metadata:
      name: vsclass-vxflexos
    driver: csi-vxflexos.dellemc.com
    deletionPolicy: Delete
    EOF
    

    Verify Volume Snapshot Class is created:

    oc get volumesnapshotclass
    
    NAME                 DRIVER                     DELETIONPOLICY   AGE
    vsclass-vxflexos     csi-vxflexos.dellemc.com   Delete           3h9m
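
    As a usage sketch, a VolumeSnapshot referencing this class might look like the following (assuming a PVC named pvol0 already exists in the namespace):

    cat << EOF > snap-vxflexos.yaml
    apiVersion: snapshot.storage.k8s.io/v1
    kind: VolumeSnapshot
    metadata:
      name: snap-vxflexos
      namespace: vxflexos
    spec:
      volumeSnapshotClassName: vsclass-vxflexos
      source:
        persistentVolumeClaimName: pvol0
    EOF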
    

Configurations


  • Persistent Volume Claim
  • Volume Snapshot
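
For example, a minimal PersistentVolumeClaim using the vxflexos storage class created above might look like this (a sketch; the name, size, and access mode are illustrative):

    cat << EOF > pvc-vxflexos.yaml
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: pvol0
      namespace: vxflexos
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 8Gi
      storageClassName: vxflexos
    EOF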