Installation Guide

  1. Set up an OpenShift cluster following the official documentation.
  2. Complete the prerequisites.
  3. Complete the base installation.
  4. Proceed with module installation.


CSI PowerMax Reverse Proxy

The CSI PowerMax Reverse Proxy is a component that will be installed with the CSI PowerMax driver. For more details on this feature, see the related documentation.

Create a TLS secret that holds an SSL certificate and a private key. This is required by the reverse proxy server.

Create the configuration file (openssl.cnf), which includes the subjectAltName:

[ req ]
default_bits       = 2048
distinguished_name = req_distinguished_name
req_extensions     = req_ext
prompt             = no

[ req_distinguished_name ]
C  = XX
L  = Default City
O  = Default Company Ltd

[ req_ext ]
subjectAltName = @alt_names

[ alt_names ]
DNS.1 = "csipowermax-reverseproxy"
IP.1 = "0.0.0.0"

Use a tool such as openssl to generate the certificate and key using the example below:

openssl genrsa -out tls.key 2048
openssl req -new -key tls.key -out tls.csr -config openssl.cnf
openssl x509 -req -in tls.csr -signkey tls.key -out tls.crt -days 3650 -extensions req_ext -extfile openssl.cnf

Make note of the newly created tls.crt and tls.key files as they will be referenced later to create Kubernetes Secret resources.
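
To confirm that the generated certificate carries the expected subjectAltName entries, an optional check such as the following can be used:

openssl x509 -in tls.crt -noout -text | grep -A1 "Subject Alternative Name"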

For protocol-specific prerequisites, check the relevant section below.

Fibre Channel Requirements

The following requirements must be fulfilled in order to successfully use the Fibre Channel protocol with the CSI PowerMax driver:

  • Zoning of the Host Bus Adapters (HBAs) to the Fibre Channel port director must be completed.
  • Ensure that the HBA WWNs (initiators) appear on the list of initiators that are logged into the array (an example command for listing the WWNs on a node is shown after this list).
  • If the number of volumes that will be published to nodes is high, then configure the maximum number of LUNs for your HBAs on each node. See the appropriate HBA document to configure the maximum number of LUNs.
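
  To list the HBA WWNs (initiators) on a worker node, a command like the following can be used (a sketch, assuming the FC HBAs are exposed through the fc_host sysfs class; <node-name> is a placeholder for an actual worker node name):

  oc debug node/<node-name> -- chroot /host sh -c 'cat /sys/class/fc_host/host*/port_name'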

  1. Complete the zoning of each host with the PowerMax Storage Array. Please refer to the Host Connectivity Guide for the guidelines when setting up a Fibre Channel SAN infrastructure.

  2. Verify the initiators of each host are logged in to the PowerMax Storage Array. CSM will perform the Host Registration of each host with the PowerMax Array.

  3. Multipathing software configuration

    a. Configure Device Mapper MPIO for PowerMax FC connectivity

    Use this command to create the machine configuration to configure the DM-MPIO service on all the worker hosts for FC connectivity.

    oc apply -f 99-workers-multipath-conf.yaml
    

    Example:

    cat <<EOF> multipath.conf
      defaults {
        polling_interval 5
        checker_timeout 15
        disable_changed_wwids yes
        find_multipaths no
      }
      devices {
        device {
          vendor                   DellEMC
          product                  PowerMax
          detect_prio              "yes"
          path_selector            "service-time 0"
          path_grouping_policy     "group_by_prio"
          path_checker             tur
          failback                 immediate
          fast_io_fail_tmo         5
          no_path_retry            3
          rr_min_io_rq             1
          max_sectors_kb           1024
          dev_loss_tmo             10
        }
      }  
    EOF
    


    
    cat <<EOF> 99-workers-multipath-conf.yaml
    apiVersion: machineconfiguration.openshift.io/v1
    kind: MachineConfig
    metadata:
      name: 99-workers-multipath-conf
      labels:
        machineconfiguration.openshift.io/role: worker
    spec:
      config:
        ignition:
          version: 3.4.0
        storage:
          files:
          - contents:
              source: data:text/plain;charset=utf-8;base64,$(cat multipath.conf | base64 -w0)
              verification: {}
            filesystem: root
            mode: 400
            path: /etc/multipath.conf   
    EOF  
    


    b. Enable Linux Device Mapper MPIO

    Use this command to create the machine configuration to enable the DM-MPIO service on all the worker hosts.

    oc apply -f 99-workers-enable-multipathd.yaml
    

    cat << EOF > 99-workers-enable-multipathd.yaml 
    apiVersion: machineconfiguration.openshift.io/v1
    kind: MachineConfig
    metadata:
      name: 99-workers-enable-multipathd
      labels:
        machineconfiguration.openshift.io/role: worker
    spec:
      config:
        ignition:
          version: 3.4.0  
        systemd:
          units:
          - name: "multipathd.service"
            enabled: true
    EOF
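
    After applying both MachineConfigs, the worker MachineConfigPool rolls the change out node by node; progress can be watched with:

    oc get machineconfigpool worker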
    

iSCSI Requirements

  1. Complete the iSCSI network configuration to connect the hosts with the PowerMax Storage array. Please refer to the Host Connectivity Guide for the best practices for attaching the hosts to a PowerMax storage array.

  2. Verify the initiators of each host are logged in to the PowerMax Storage Array. CSM will perform the Host Registration of each host with the PowerMax Array.

  3. Enable iSCSI service

    Use this command to create the machine configuration to enable the iscsid service.

    oc apply -f 99-workers-enable-iscsid.yaml
    

    Example:

cat <<EOF> 99-workers-enable-iscsid.yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 99-workers-enable-iscsid
  labels:
    machineconfiguration.openshift.io/role: worker
spec:
  config:
    ignition:
      version: 3.4.0  
    systemd:
      units:
      - name: "iscsid.service"
        enabled: true
EOF
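
    Once the MachineConfig has rolled out, the service state can be spot-checked on a node (optional; <node-name> is a placeholder for an actual worker node name):

    oc debug node/<node-name> -- chroot /host systemctl is-active iscsid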

  4. Multipathing software configuration

    a. Configure Device Mapper MPIO for PowerMax iSCSI connectivity

    Use this command to create the machine configuration to configure the DM-MPIO service on all the worker hosts for iSCSI connectivity.

    oc apply -f 99-workers-multipath-conf.yaml
    

    cat <<EOF> multipath.conf
    defaults {
      polling_interval 5
      checker_timeout 15
      disable_changed_wwids yes
      find_multipaths no
    }
    devices {
      device {
        vendor                   DellEMC
        product                  PowerMax
        detect_prio              "yes"
        path_selector            "service-time 0"
        path_grouping_policy     "group_by_prio"
        path_checker             tur
        failback                 immediate
        fast_io_fail_tmo         5
        no_path_retry            3
        rr_min_io_rq             1
        max_sectors_kb           1024
        dev_loss_tmo             10
      }
    }  
    EOF
    

    cat <<EOF> 99-workers-multipath-conf.yaml
    apiVersion: machineconfiguration.openshift.io/v1
    kind: MachineConfig
    metadata:
      name: 99-workers-multipath-conf
      labels:
        machineconfiguration.openshift.io/role: worker
    spec:
      config:
        ignition:
          version: 3.4.0
        storage:
          files:
          - contents:
              source: data:text/plain;charset=utf-8;base64,$(cat multipath.conf | base64 -w0)
              verification: {}
            filesystem: root
            mode: 400
            path: /etc/multipath.conf   
    EOF  
    


    b. Enable Linux Device Mapper MPIO

    Use this command to create the machine configuration to enable the DM-MPIO service on all the worker hosts.

    oc apply -f 99-workers-enable-multipathd.yaml
    

    cat << EOF > 99-workers-enable-multipathd.yaml 
    apiVersion: machineconfiguration.openshift.io/v1
    kind: MachineConfig
    metadata:
      name: 99-workers-enable-multipathd
      labels:
        machineconfiguration.openshift.io/role: worker
    spec:
      config:
        ignition:
          version: 3.4.0  
        systemd:
          units:
          - name: "multipathd.service"
            enabled: true
    EOF
    
NVMe FC Requirements

  1. Complete the zoning of each host with the PowerMax Storage Array. Please refer to the Host Connectivity Guide for the guidelines when setting up a Fibre Channel SAN infrastructure.

  2. Verify the initiators of each host are logged in to the PowerMax Storage Array. CSM will perform the Host Registration of each host with the PowerMax Array.

  3. Multipathing software configuration
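
    As in the NVMe TCP section below, once the manifests have been created the machine configuration can be applied with:

    oc apply -f 99-workers-multipath-round-robin.yaml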

    cat <<EOF> 71-nvmf-iopolicy-dell.rules
    ACTION=="add", SUBSYSTEM=="nvme-subsystem", ATTR{model}=="dellemc-powermax",ATTR{iopolicy}="round-robin"
    EOF
    


    cat <<EOF> 99-workers-multipath-round-robin.yaml
    apiVersion: machineconfiguration.openshift.io/v1
    kind: MachineConfig
    metadata:
      name: 99-workers-multipath-round-robin
      labels:
        machineconfiguration.openshift.io/role: worker
    spec:
      config:
        ignition:
          version: 3.4.0
        storage:
          files:
          - contents:
              source: data:text/plain;charset=utf-8;base64,$(cat 71-nvmf-iopolicy-dell.rules | base64 -w0)
              verification: {}
            filesystem: root
            mode: 420
            path: /etc/udev/rules.d/71-nvme-io-policy.rules 
    EOF
    

  4. Configure NVMe reconnecting forever
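
    Once the manifests below have been created, the machine configuration can be applied with:

    oc apply -f 99-nvmf-ctrl-loss-tmo.yaml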

    cat <<EOF> 72-nvmf-ctrl_loss_tmo.rules
    ACTION=="add|change", SUBSYSTEM=="nvme", KERNEL=="nvme*", ATTR{ctrl_loss_tmo}="-1"
    EOF
    

    cat <<EOF> 99-nvmf-ctrl-loss-tmo.yaml
    apiVersion: machineconfiguration.openshift.io/v1
    kind: MachineConfig
    metadata:
      name: 99-nvmf-ctrl-loss-tmo
      labels:
        machineconfiguration.openshift.io/role: worker
    spec:
      config:
        ignition:
          version: 3.4.0
        storage:
          files:
          - contents:
              source: data:text/plain;charset=utf-8;base64,$(cat 72-nvmf-ctrl_loss_tmo.rules | base64 -w0)
              verification: {}
            filesystem: root
            mode: 420
            path: /etc/udev/rules.d/72-nvmf-ctrl_loss_tmo.rules
    EOF
    

Cluster requirements

All OpenShift nodes connecting to Dell storage arrays must use unique host NVMe Qualified Names (NQNs).

The OpenShift deployment process for CoreOS will set the same host NQN for all nodes. The host NQN is stored in the file /etc/nvme/hostnqn. One possible solution to ensure unique host NQNs is to add the following machine config to your OCP cluster:

cat <<EOF> 99-worker-custom-nvme-hostnqn.yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 99-worker-custom-nvme-hostnqn
spec:
  config:
    ignition:
      version: 3.4.0
    systemd:
      units:
        - contents: |
            [Unit]
            Description=Custom CoreOS Generate NVMe Hostnqn

            [Service]
            Type=oneshot
            ExecStart=/usr/bin/sh -c '/usr/sbin/nvme gen-hostnqn > /etc/nvme/hostnqn'
            RemainAfterExit=yes

            [Install]
            WantedBy=multi-user.target
          enabled: true
          name: custom-coreos-generate-nvme-hostnqn.service
EOF
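
To confirm that each worker node now reports a unique host NQN, the generated value can be inspected on the nodes (optional check; <node-name> is a placeholder for an actual worker node name):

oc debug node/<node-name> -- chroot /host cat /etc/nvme/hostnqn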

NVMe TCP Requirements

  1. Complete the NVMe network configuration to connect the hosts with the PowerMax Storage array. Please refer to the Host Connectivity Guide for the best practices for attaching the hosts to a PowerMax storage array.

  2. Verify the initiators of each host are logged in to the PowerMax Storage Array. CSM will perform the Host Registration of each host with the PowerMax Array.

  3. Configure IO policy for native NVMe multipathing

    Use this command to create the machine configuration to configure the native NVMe multipathing IO Policy to round robin.

    oc apply -f 99-workers-multipath-round-robin.yaml
    

    cat <<EOF> 71-nvmf-iopolicy-dell.rules
    ACTION=="add", SUBSYSTEM=="nvme-subsystem", ATTR{model}=="dellemc-powermax",ATTR{iopolicy}="round-robin"
    EOF
    

    Example:

    cat <<EOF> 99-workers-multipath-round-robin.yaml
    apiVersion: machineconfiguration.openshift.io/v1
    kind: MachineConfig
    metadata:
      name: 99-workers-multipath-round-robin
      labels:
        machineconfiguration.openshift.io/role: worker
    spec:
      config:
        ignition:
          version: 3.4.0
        storage:
          files:
          - contents:
              source: data:text/plain;charset=utf-8;base64,$(cat 71-nvmf-iopolicy-dell.rules | base64 -w0)
              verification: {}
            filesystem: root
            mode: 420
            path: /etc/udev/rules.d/71-nvme-io-policy.rules 
    EOF
    

  4. Configure NVMe reconnecting forever

    Use this command to create the machine configuration to configure NVMe to reconnect forever.

    oc apply -f 99-workers-nvmf-ctrl-loss-tmo.yaml 
    

    cat <<EOF> 72-nvmf-ctrl_loss_tmo.rules
    ACTION=="add|change", SUBSYSTEM=="nvme", KERNEL=="nvme*", ATTR{ctrl_loss_tmo}="-1"
    EOF
    

    cat <<EOF> 99-workers-nvmf-ctrl-loss-tmo.yaml
    apiVersion: machineconfiguration.openshift.io/v1
    kind: MachineConfig
    metadata:
      name: 99-workers-nvmf-ctrl-loss-tmo
      labels:
        machineconfiguration.openshift.io/role: worker
    spec:
      config:
        ignition:
          version: 3.4.0
        storage:
          files:
          - contents:
              source: data:text/plain;charset=utf-8;base64,$(cat 72-nvmf-ctrl_loss_tmo.rules | base64 -w0)
              verification: {}
            filesystem: root
            mode: 420
            path: /etc/udev/rules.d/72-nvmf-ctrl_loss_tmo.rules
    EOF
    


NFS Requirements

  • Ensure that your nodes support mounting NFS volumes if using NFS.
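
As an optional check on RHCOS/RHEL worker nodes, you can confirm that the NFS client utilities are present on a node (a sketch; <node-name> is a placeholder for an actual worker node name):

# read-only rpm query; works on rpm-ostree based RHCOS nodes
oc debug node/<node-name> -- chroot /host rpm -q nfs-utils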

Replication Requirements (Optional)

Applicable only if you decided to enable the Replication feature in my-powermax-settings.yaml

replication:
  enabled: true

Replication CRDs

The CRDs for replication can be obtained and installed from the csm-replication project on GitHub. Use csm-replication/deploy/replicationcrds.all.yaml located in the csm-replication git repo for the installation.
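
For example, assuming the repository URL https://github.com/dell/csm-replication, the CRDs could be installed with commands along these lines:

# repository URL assumed; adjust to where you obtain csm-replication
git clone https://github.com/dell/csm-replication.git
oc apply -f csm-replication/deploy/replicationcrds.all.yaml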

CRDs should be configured during the replication prepare stage with repctl, as described in install-repctl.



Operator Installation


  1. On the OpenShift console, navigate to OperatorHub and use the keyword filter to search for Dell Container Storage Modules.

  2. Click the Dell Container Storage Modules tile.

  3. Keep all default settings and click Install.


    Verify that the operator is deployed

    oc get operators
    
    NAME                                                          AGE
    dell-csm-operator-certified.openshift-operators               2d21h
    
    oc get pod -n openshift-operators
    
    NAME                                                       READY   STATUS       RESTARTS      AGE
    dell-csm-operator-controller-manager-86dcdc8c48-6dkxm      2/2     Running      21 (19h ago)  2d21h
    

CSI Driver Installation


  1. Create namespace:

      oc create namespace powermax
    
  2. Create PowerMax credentials:

    Create a file called config.yaml or pick a sample.

    cat << EOF > config.yaml
    storageArrays:
      - storageArrayId: "000000000001"
        primaryEndpoint: https://primary-1.unisphe.re:8443
        backupEndpoint: https://backup-1.unisphe.re:8443
    managementServers:
      - endpoint: https://primary-1.unisphe.re:8443
        username: admin
        password: password
        skipCertificateValidation: true
      - endpoint: https://backup-1.unisphe.re:8443
        username: admin2
        password: password2
        skipCertificateValidation: false
        certSecret: primary-cert
    EOF
    

    Edit the file, then run this command to create the powermax-creds secret:

      oc create secret generic powermax-creds --from-file=config=config.yaml -n powermax --dry-run=client -oyaml > secret-powermax-config.yaml
    

    Use this command to create the config:

      oc apply -f secret-powermax-config.yaml
    

    Use this command to replace or update the config:

      oc replace -f secret-powermax-config.yaml --force
    

    Verify config secret is created.

      oc get secret -n powermax
    
      NAME                 TYPE        DATA   AGE
      powermax-creds       Opaque      1      3h7m
    
  3. Create PowerMax Array ConfigMap:

    Note: powermax-array-config is deprecated and remains for backward compatibility only. You can skip creating it and instead add values for X_CSI_MANAGED_ARRAYS, X_CSI_TRANSPORT_PROTOCOL, and X_CSI_POWERMAX_PORTGROUPS in the sample files.

    Create a configmap using the sample file here. Fill in the appropriate values for driver configuration.

    # To create this configmap use: kubectl create -f powermax-array-config.yaml
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: powermax-array-config
      namespace: powermax
    data:
      powermax-array-config.yaml: |
        # List of comma-separated port groups (required for iSCSI only). Example: PortGroup1, PortGroup2
        X_CSI_POWERMAX_PORTGROUPS: ""
        # Choose which transport protocol to use (ISCSI, FC, NVMETCP, auto); defaults to auto if nothing is specified
        X_CSI_TRANSPORT_PROTOCOL: ""
        # IP address of the Unisphere for PowerMax (required). Defaults to https://0.0.0.0:8443
        X_CSI_POWERMAX_ENDPOINT: "https://10.0.0.0:8443"
        # List of comma-separated array ID(s) which will be managed by the driver (Required)
        X_CSI_MANAGED_ARRAYS: "000000000000,000000000000,"
    
  4. Create the Reverse Proxy TLS Secret

    Referencing the TLS certificate and key created in the CSI PowerMax Reverse Proxy prerequisite, create the csirevproxy-tls-secret secret.

    oc create secret -n powermax tls csirevproxy-tls-secret --cert=tls.crt --key=tls.key
    
  5. Create a CR (Custom Resource) for PowerMax using the sample files provided

    i. Choose one of the following configurations:

    a. Minimal Configuration: Use the sample file for default settings. If using the secret above, ensure that the name of the secret created is powermax-creds.

    [OR]

    b. Detailed Configuration: Use the sample file for detailed settings, or use the Wizard to generate the sample file.

    • Users should configure the parameters in the CR. The following table lists the primary configurable parameters of the PowerMax driver and their default values:
    Parameters

    ii. Confirm that the value of X_CSI_REVPROXY_USE_SECRET is set to true.

    iii. Create PowerMax custom resource:

    oc create -f <input_sample_file.yaml>
    

    This command will deploy the CSI PowerMax driver in the namespace specified in the input YAML file.

    Check if ContainerStorageModule CR is created successfully:

    oc get csm powermax -n powermax
    
    NAME        CREATIONTIME   CSIDRIVERTYPE   CONFIGVERSION   STATE
    powermax    3h             powermax        v2.14.0         Succeeded      
    

    Check the status of the CR to verify if the driver installation is in the Succeeded state. If the status is not Succeeded, see the Troubleshooting guide for more information.
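
    As an additional check, the driver pods in the namespace can be listed; exact pod names and replica counts vary with the configuration:

    oc get pods -n powermax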

  6. Refer to Volume Snapshot Class and Storage Class for the sample files.

Other features to enable

Dynamic Logging Configuration

This feature is introduced in CSI Driver for PowerMax version 2.0.0.

As part of driver installation, a ConfigMap with the name powermax-config-params is created using the manifest located in the sample file. This ConfigMap contains an attribute CSI_LOG_LEVEL which specifies the current log level of the CSI driver. To set the default/initial log level, the user can set this field during driver installation.

To update the log level dynamically, the user has to edit the ConfigMap powermax-config-params and update CSI_LOG_LEVEL to the desired log level.

kubectl edit configmap -n powermax powermax-config-params
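
To review the current value before editing, the ConfigMap can first be inspected:

kubectl get configmap -n powermax powermax-config-params -o yaml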

Volume Health Monitoring

This feature is introduced in CSI Driver for PowerMax version 2.2.0.

The Volume Health Monitoring feature is optional and is disabled by default when the driver is installed via the CSM operator.

To enable this feature, set X_CSI_HEALTH_MONITOR_ENABLED to true in the driver manifest under the controller and node sections. Also, install the external-health-monitor sidecar from the sideCars section for the controller plugin (a sketch of such an entry follows the snippet below). To report the volume health state, the value under the controller section should be set to true as shown below. To report volume stats, the value under the node section should be set to true.

     # Install the 'external-health-monitor' sidecar accordingly.
     # Allowed values:
     #   true: enable checking of health condition of CSI volumes
     #   false: disable checking of health condition of CSI volumes
     # Default value: false
     controller:
       envs:
         - name: X_CSI_HEALTH_MONITOR_ENABLED
           value: "true"
     node:
       envs:
         # X_CSI_HEALTH_MONITOR_ENABLED: Enable/Disable health monitor of CSI volumes from node plugin - volume usage
         # Allowed values:
         #   true: enable checking of health condition of CSI volumes
         #   false: disable checking of health condition of CSI volumes
         # Default value: false
         - name: X_CSI_HEALTH_MONITOR_ENABLED
           value: "true"

Support for custom topology keys

This feature is introduced in CSI Driver for PowerMax version 2.3.0.

Support for custom topology keys is optional and by default this feature is disabled for drivers when installed via CSM operator.

X_CSI_TOPOLOGY_CONTROL_ENABLED provides a way to filter topology keys on a node based on array and transport protocol. If enabled, the user can create custom topology keys by editing the node-topology-config configmap.

  1. To enable this feature, set X_CSI_TOPOLOGY_CONTROL_ENABLED to true in the driver manifest under the node section.

            # X_CSI_TOPOLOGY_CONTROL_ENABLED provides a way to filter topology keys on a node based on array and transport protocol
            # if enabled, user can create custom topology keys by editing node-topology-config configmap.
            # Allowed values:
            #   true: enable the filtration based on config map
            #   false: disable the filtration based on config map
            # Default value: false
            - name: X_CSI_TOPOLOGY_CONTROL_ENABLED
              value: "false"
    
  2. Edit the sample config map “node-topology-config” as described here with appropriate values. Example:

            apiVersion: v1
            kind: ConfigMap
            metadata:
              name: node-topology-config
              namespace: powermax
            data:
              topologyConfig.yaml: |
                allowedConnections:
                  - nodeName: "node1"
                    rules:
                      - "000000000001:FC"
                      - "000000000002:FC"
                  - nodeName: "*"
                    rules:
                      - "000000000002:FC"
                deniedConnections:
                  - nodeName: "node2"
                    rules:
                      - "000000000002:*"
                  - nodeName: "node3"
                    rules:
                      - "*:*"
    
    Parameters

  3. Run the following command to create the configmap:
    kubectl create -f topologyConfig.yaml
    

Note: The name of the configmap should always be node-topology-config.
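
The topology keys advertised by each node after the configuration takes effect can be reviewed with a standard Kubernetes command (<node-name> is a placeholder for an actual node name):

oc get csinode <node-name> -o yaml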