Prerequisites

CSI PowerMax Reverse Proxy

The CSI PowerMax Reverse Proxy is a component that will be installed with the CSI PowerMax driver. For more details on this feature, see the related documentation.

Create a TLS secret that holds an SSL certificate and a private key. This is required by the reverse proxy server.

Create the configuration file (openssl.cnf), which includes the subjectAltName:

[ req ]
default_bits       = 2048
distinguished_name = req_distinguished_name
req_extensions     = req_ext
prompt             = no

[ req_distinguished_name ]
C  = XX
L  = Default City
O  = Default Company Ltd

[ req_ext ]
subjectAltName = @alt_names

[ alt_names ]
DNS.1 = "csipowermax-reverseproxy"
IP.1 = "0.0.0.0"

Use a tool such as openssl to generate the certificate and key for this secret, as shown in the example below:

openssl genrsa -out tls.key 2048
openssl req -new -key tls.key -out tls.csr -config openssl.cnf
openssl x509 -req -in tls.csr -signkey tls.key -out tls.crt -days 3650 -extensions req_ext -extfile openssl.cnf

Make note of the newly created tls.crt and tls.key files as they will be referenced later to create Kubernetes Secret resources.
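
For example, the certificate and key can be packaged into a Kubernetes TLS secret with kubectl. This is a minimal sketch: the secret name (csirevproxy-tls-secret) and namespace (powermax) are assumptions that must match your deployment:

kubectl create secret tls csirevproxy-tls-secret --cert=tls.crt --key=tls.key -n powermax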

For protocol-specific prerequisites, check the relevant section below.

Fibre Channel Requirements

The following requirements must be fulfilled in order to successfully use the Fibre Channel protocol with the CSI PowerMax driver:

  • Zoning of the Host Bus Adapters (HBAs) to the Fibre Channel port director must be completed.
  • Ensure that the HBA WWNs (initiators) appear on the list of initiators that are logged into the array; a quick way to read a node's WWPNs is shown after this list.
  • If the number of volumes that will be published to nodes is high, then configure the maximum number of LUNs for your HBAs on each node. See the appropriate HBA document to configure the maximum number of LUNs.
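
  As a quick check (a sketch assuming Linux worker nodes that expose the fc_host sysfs interface), a node's HBA WWPNs can be read directly from sysfs and compared against the initiators logged in to the array:

  # Print the WWPN of every FC HBA port on this node
  cat /sys/class/fc_host/host*/port_name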

  1. Complete the zoning of each host with the PowerMax Storage Array. Refer to the Host Connectivity Guide for guidelines when setting up a Fibre Channel SAN infrastructure.

  2. Verify that the initiators of each host are logged in to the PowerMax Storage Array. CSM will perform the Host Registration of each host with the PowerMax Array.

  3. Multipathing software configuration

    a. Configure Device Mapper MPIO for PowerMax FC connectivity

    After creating the multipath.conf and 99-workers-multipath-conf.yaml files shown in the example below, use this command to apply the machine configuration that configures the DM-MPIO service on all the worker hosts for FC connectivity.

    oc apply -f 99-workers-multipath-conf.yaml
    

    Example:

    cat <<EOF> multipath.conf
      defaults {
        polling_interval 5
        checker_timeout 15
        disable_changed_wwids yes
        find_multipaths no
      }
      devices {
        device {
          vendor                   EMC
          product                  SYMMETRIX
          detect_prio              "yes"
          path_selector            "service-time 0"
          path_grouping_policy     "group_by_prio"
          path_checker             tur
          failback                 immediate
          fast_io_fail_tmo         5
          no_path_retry            3
          rr_min_io_rq             1
          max_sectors_kb           1024
          dev_loss_tmo             10
        }
      }  
    EOF
    


    
    cat <<EOF> 99-workers-multipath-conf.yaml
    apiVersion: machineconfiguration.openshift.io/v1
    kind: MachineConfig
    metadata:
      name: 99-workers-multipath-conf
      labels:
        machineconfiguration.openshift.io/role: worker
    spec:
      config:
        ignition:
          version: 3.4.0
        storage:
          files:
          - contents:
              source: data:text/plain;charset=utf-8;base64,$(cat multipath.conf | base64 -w0)
              verification: {}
            filesystem: root
            mode: 0400
            path: /etc/multipath.conf
    EOF
    


    b. Enable Linux Device Mapper MPIO

    Use this command to create the machine configuration to enable the DM-MPIO service on all the worker hosts.

    oc apply -f 99-workers-enable-multipathd.yaml
    

    cat << EOF > 99-workers-enable-multipathd.yaml 
    apiVersion: machineconfiguration.openshift.io/v1
    kind: MachineConfig
    metadata:
      name: 99-workers-enable-multipathd
      labels:
        machineconfiguration.openshift.io/role: worker
    spec:
      config:
        ignition:
          version: 3.4.0  
        systemd:
          units:
          - name: "multipathd.service"
            enabled: true
    EOF
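
    After applying both machine configs, verify the rollout before proceeding. This is a sketch; <node-name> is a placeholder for one of your worker nodes.

    oc get mcp worker    # wait until the worker pool reports UPDATED=True
    oc debug node/<node-name> -- chroot /host systemctl is-active multipathd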
    

iSCSI Requirements

  1. Complete the iSCSI network configuration to connect the hosts with the PowerMax Storage array. Refer to the Host Connectivity Guide for best practices for attaching hosts to a PowerMax storage array.

  2. Verify that the initiators of each host are logged in to the PowerMax Storage Array. CSM will perform the Host Registration of each host with the PowerMax Array.

  3. Enable iSCSI service

    Use this command to create the machine configuration to enable the iscsid service.

    oc apply -f 99-workers-enable-iscsid.yaml
    

    Example:

    cat <<EOF> 99-workers-enable-iscsid.yaml
    apiVersion: machineconfiguration.openshift.io/v1
    kind: MachineConfig
    metadata:
      name: 99-workers-enable-iscsid
      labels:
        machineconfiguration.openshift.io/role: worker
    spec:
      config:
        ignition:
          version: 3.4.0
        systemd:
          units:
          - name: "iscsid.service"
            enabled: true
    EOF
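
    Once the config has rolled out, confirm on a node that iscsid is active and note the initiator IQN that should appear on the array (<node-name> is a placeholder):

    oc debug node/<node-name> -- chroot /host systemctl is-active iscsid
    oc debug node/<node-name> -- chroot /host cat /etc/iscsi/initiatorname.iscsi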

  4. Multipathing software configuration

    a. Configure Device Mapper MPIO for PowerMax iSCSI connectivity

    After creating the files shown below, use this command to apply the machine configuration that configures the DM-MPIO service on all the worker hosts for iSCSI connectivity.

    oc apply -f 99-workers-multipath-conf.yaml
    

    cat <<EOF> multipath.conf
    defaults {
      polling_interval 5
      checker_timeout 15
      disable_changed_wwids yes
      find_multipaths no
    }
    devices {
      device {
        vendor                   EMC
        product                  SYMMETRIX
        detect_prio              "yes"
        path_selector            "service-time 0"
        path_grouping_policy     "group_by_prio"
        path_checker             tur
        failback                 immediate
        fast_io_fail_tmo         5
        no_path_retry            3
        rr_min_io_rq             1
        max_sectors_kb           1024
        dev_loss_tmo             10
      }
    }  
    EOF
    

    cat <<EOF> 99-workers-multipath-conf.yaml
    apiVersion: machineconfiguration.openshift.io/v1
    kind: MachineConfig
    metadata:
      name: 99-workers-multipath-conf
      labels:
        machineconfiguration.openshift.io/role: worker
    spec:
      config:
        ignition:
          version: 3.4.0
        storage:
          files:
          - contents:
              source: data:text/plain;charset=utf-8;base64,$(cat multipath.conf | base64 -w0)
              verification: {}
            filesystem: root
            mode: 0400
            path: /etc/multipath.conf
    EOF
    


    b. Enable Linux Device Mapper MPIO

    Use this command to create the machine configuration to enable the DM-MPIO service on all the worker hosts.

    oc apply -f 99-workers-enable-multipathd.yaml
    

    cat << EOF > 99-workers-enable-multipathd.yaml 
    apiVersion: machineconfiguration.openshift.io/v1
    kind: MachineConfig
    metadata:
      name: 99-workers-enable-multipathd
      labels:
        machineconfiguration.openshift.io/role: worker
    spec:
      config:
        ignition:
          version: 3.4.0  
        systemd:
          units:
          - name: "multipathd.service"
            enabled: true
    EOF
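
    After volumes have been published to a node, the multipath topology for PowerMax devices can be inspected there (<node-name> is a placeholder):

    oc debug node/<node-name> -- chroot /host multipath -ll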
    

NVMe/FC Requirements

  1. Complete the zoning of each host with the PowerMax Storage Array. Refer to the Host Connectivity Guide for guidelines when setting up a Fibre Channel SAN infrastructure.

  2. Verify that the initiators of each host are logged in to the PowerMax Storage Array. CSM will perform the Host Registration of each host with the PowerMax Array.

  3. To ensure successful integration of NVMe protocols with the CSI Driver, the following conditions must be met:

    • Each OpenShift node that connects to Dell storage arrays must have a unique NVMe Qualified Name (NQN).
    • By default, the OpenShift deployment process for CoreOS assigns the same host NQN to all nodes. This value is stored in the file: /etc/nvme/hostnqn.
    • To resolve this and guarantee unique host NQNs across nodes, you can apply a machine configuration to your OpenShift Container Platform (OCP) cluster. One recommended approach is to add the following machine config:

    cat <<EOF > 99-worker-custom-nvme-hostnqn.yaml
    apiVersion: machineconfiguration.openshift.io/v1
    kind: MachineConfig
    metadata:
      labels:
        machineconfiguration.openshift.io/role: worker
      name: 99-worker-custom-nvme-hostnqn
    spec:
      config:
        ignition:
          version: 3.4.0
        systemd:
          units:
            - contents: |
                [Unit]
                Description=Custom CoreOS Generate NVMe Hostnqn
                [Service]
                Type=oneshot
                ExecStart=/usr/bin/sh -c '/usr/sbin/nvme gen-hostnqn > /etc/nvme/hostnqn'
                RemainAfterExit=yes
                [Install]
                WantedBy=multi-user.target
              enabled: true
              name: custom-coreos-generate-nvme-hostnqn.service
    EOF
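
    After the config rolls out, each node should report its own unique NQN (<node-name> is a placeholder for a worker node):

    oc debug node/<node-name> -- chroot /host cat /etc/nvme/hostnqn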

  4. Multipathing software configuration
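
    Use this command to create the machine configuration to configure the native NVMe multipathing IO policy to round robin (first create the files shown below).

    oc apply -f 99-workers-multipath-round-robin.yaml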

    cat <<EOF> 71-nvmf-iopolicy-dell.rules
    ACTION=="add", SUBSYSTEM=="nvme-subsystem", ATTR{model}=="dellemc-powermax",ATTR{iopolicy}="round-robin"
    EOF
    


    cat <<EOF> 99-workers-multipath-round-robin.yaml
    apiVersion: machineconfiguration.openshift.io/v1
    kind: MachineConfig
    metadata:
      name: 99-workers-multipath-round-robin
      labels:
        machineconfiguration.openshift.io/role: worker
    spec:
      config:
        ignition:
          version: 3.4.0
        storage:
          files:
          - contents:
              source: data:text/plain;charset=utf-8;base64,$(cat 71-nvmf-iopolicy-dell.rules | base64 -w0)
              verification: {}
            filesystem: root
            mode: 420
            path: /etc/udev/rules.d/71-nvme-io-policy.rules 
    EOF
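
    The effective policy can be checked per NVMe subsystem on a node. This is a sketch; it assumes the kernel's native NVMe multipath stack, and <node-name> is a placeholder:

    oc debug node/<node-name> -- chroot /host sh -c 'cat /sys/class/nvme-subsystem/nvme-subsys*/iopolicy'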
    

  5. Configure NVMe reconnecting forever
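
    Use this command to create the machine configuration that sets the NVMe controller loss timeout so the initiator keeps reconnecting indefinitely (first create the files shown below).

    oc apply -f 99-nvmf-ctrl-loss-tmo.yaml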

    cat <<EOF> 72-nvmf-ctrl_loss_tmo.rules
    ACTION=="add|change", SUBSYSTEM=="nvme", KERNEL=="nvme*", ATTR{ctrl_loss_tmo}="-1"
    EOF
    

    cat <<EOF> 99-nvmf-ctrl-loss-tmo.yaml
    apiVersion: machineconfiguration.openshift.io/v1
    kind: MachineConfig
    metadata:
      name: 99-nvmf-ctrl-loss-tmo
      labels:
        machineconfiguration.openshift.io/role: worker
    spec:
      config:
        ignition:
          version: 3.4.0
        storage:
          files:
          - contents:
              source: data:text/plain;charset=utf-8;base64,$(cat 72-nvmf-ctrl_loss_tmo.rules | base64 -w0)
              verification: {}
            filesystem: root
            mode: 420
            path: /etc/udev/rules.d/72-nvmf-ctrl_loss_tmo.rules
    EOF
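
    Whether the timeout took effect can be spot-checked per controller. This is a sketch: the ctrl_loss_tmo sysfs attribute is exposed by NVMe over Fabrics controllers, and <node-name> is a placeholder.

    oc debug node/<node-name> -- chroot /host sh -c 'cat /sys/class/nvme/nvme*/ctrl_loss_tmo'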
    

Cluster requirements

All OpenShift nodes connecting to Dell storage arrays must use unique host NVMe Qualified Names (NQNs).

The OpenShift deployment process for CoreOS will set the same host NQN for all nodes. The host NQN is stored in the file /etc/nvme/hostnqn. One possible solution to ensure unique host NQNs is to add the following machine config to your OCP cluster:

cat <<EOF> 99-worker-custom-nvme-hostnqn.yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 99-worker-custom-nvme-hostnqn
spec:
  config:
    ignition:
      version: 3.4.0
    systemd:
      units:
        - contents: |
            [Unit]
            Description=Custom CoreOS Generate NVMe Hostnqn

            [Service]
            Type=oneshot
            ExecStart=/usr/bin/sh -c '/usr/sbin/nvme gen-hostnqn > /etc/nvme/hostnqn'
            RemainAfterExit=yes

            [Install]
            WantedBy=multi-user.target
          enabled: true
          name: custom-coreos-generate-nvme-hostnqn.service
EOF
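
A sketch for spot-checking that host NQNs are now unique across workers (assumes oc debug access to the nodes); any line printed by uniq -d is a duplicate NQN that still needs attention:

for node in $(oc get nodes -l node-role.kubernetes.io/worker -o name); do
  oc debug "$node" -- chroot /host cat /etc/nvme/hostnqn 2>/dev/null
done | sort | uniq -d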

Array requirements

Once the NVMe endpoint is created on the array, follow these steps to update the endpoint name to adhere to the CSI driver requirements.

  • Run nvme discover --transport=tcp --traddr=<InterfaceAdd> --trsvcid=4420. <InterfaceAdd> is the placeholder for the actual IP address of the NVMe endpoint.
  • Fetch the subnqn from the output, for example nqn.1988-11.com.dell:PowerMax_2500:00:000120001100; this value is used as the <subnqn> placeholder when updating the NVMe endpoint name.
  • Update the NVMe endpoint name to <subnqn>:<dir><port>. For example: nqn.1988-11.com.dell:PowerMax_2500:00:000120001100:OR1C000

NVMe/TCP Requirements

  1. Complete the NVMe network configuration to connect the hosts with the PowerMax Storage array. Refer to the Host Connectivity Guide for best practices for attaching hosts to a PowerMax storage array.

  2. Verify that the initiators of each host are logged in to the PowerMax Storage Array. CSM will perform the Host Registration of each host with the PowerMax Array.

  3. To ensure successful integration of NVMe protocols with the CSI Driver, the following conditions must be met:

    • Each OpenShift node that connects to Dell storage arrays must have a unique NVMe Qualified Name (NQN).
    • By default, the OpenShift deployment process for CoreOS assigns the same host NQN to all nodes. This value is stored in the file: /etc/nvme/hostnqn.
    • To resolve this and guarantee unique host NQNs across nodes, you can apply a machine configuration to your OpenShift Container Platform (OCP) cluster. One recommended approach is to add the following machine config:

    cat <<EOF > 99-worker-custom-nvme-hostnqn.yaml
    apiVersion: machineconfiguration.openshift.io/v1
    kind: MachineConfig
    metadata:
      labels:
        machineconfiguration.openshift.io/role: worker
      name: 99-worker-custom-nvme-hostnqn
    spec:
      config:
        ignition:
          version: 3.4.0
        systemd:
          units:
            - contents: |
                [Unit]
                Description=Custom CoreOS Generate NVMe Hostnqn
                [Service]
                Type=oneshot
                ExecStart=/usr/bin/sh -c '/usr/sbin/nvme gen-hostnqn > /etc/nvme/hostnqn'
                RemainAfterExit=yes
                [Install]
                WantedBy=multi-user.target
              enabled: true
              name: custom-coreos-generate-nvme-hostnqn.service
    EOF
    

  4. Configure IO policy for native NVMe multipathing

    Use this command to create the machine configuration to configure the native NVMe multipathing IO policy to round robin.

    oc apply -f 99-workers-multipath-round-robin.yaml
    

    cat <<EOF> 71-nvmf-iopolicy-dell.rules
    ACTION=="add", SUBSYSTEM=="nvme-subsystem", ATTR{model}=="dellemc-powermax",ATTR{iopolicy}="round-robin"
    EOF
    

    Example:

    cat <<EOF> 99-workers-multipath-round-robin.yaml
    apiVersion: machineconfiguration.openshift.io/v1
    kind: MachineConfig
    metadata:
      name: 99-workers-multipath-round-robin
      labels:
        machineconfiguration.openshift.io/role: worker
    spec:
      config:
        ignition:
          version: 3.4.0
        storage:
          files:
          - contents:
              source: data:text/plain;charset=utf-8;base64,$(cat 71-nvmf-iopolicy-dell.rules | base64 -w0)
              verification: {}
            filesystem: root
            mode: 420
            path: /etc/udev/rules.d/71-nvme-io-policy.rules 
    EOF
    

  5. Configure NVMe reconnecting forever

    Use this command to create the machine configuration that sets the NVMe controller loss timeout so the initiator keeps reconnecting indefinitely.

    oc apply -f 99-workers-nvmf-ctrl-loss-tmo.yaml 
    

    cat <<EOF> 72-nvmf-ctrl_loss_tmo.rules
    ACTION=="add|change", SUBSYSTEM=="nvme", KERNEL=="nvme*", ATTR{ctrl_loss_tmo}="-1"
    EOF
    

    cat <<EOF> 99-workers-nvmf-ctrl-loss-tmo.yaml
    apiVersion: machineconfiguration.openshift.io/v1
    kind: MachineConfig
    metadata:
      name: 99-workers-nvmf-ctrl-loss-tmo
      labels:
        machineconfiguration.openshift.io/role: worker
    spec:
      config:
        ignition:
          version: 3.4.0
        storage:
          files:
          - contents:
              source: data:text/plain;charset=utf-8;base64,$(cat 72-nvmf-ctrl_loss_tmo.rules | base64 -w0)
              verification: {}
            filesystem: root
            mode: 420
            path: /etc/udev/rules.d/72-nvmf-ctrl_loss_tmo.rules
    EOF
    

Cluster requirements

All OpenShift nodes connecting to Dell storage arrays must use unique host NVMe Qualified Names (NQNs).

The OpenShift deployment process for CoreOS will set the same host NQN for all nodes. The host NQN is stored in the file /etc/nvme/hostnqn. One possible solution to ensure unique host NQNs is to add the following machine config to your OCP cluster:

cat <<EOF> 99-worker-custom-nvme-hostnqn.yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 99-worker-custom-nvme-hostnqn
spec:
  config:
    ignition:
      version: 3.4.0
    systemd:
      units:
        - contents: |
            [Unit]
            Description=Custom CoreOS Generate NVMe Hostnqn

            [Service]
            Type=oneshot
            ExecStart=/usr/bin/sh -c '/usr/sbin/nvme gen-hostnqn > /etc/nvme/hostnqn'
            RemainAfterExit=yes

            [Install]
            WantedBy=multi-user.target
          enabled: true
          name: custom-coreos-generate-nvme-hostnqn.service
EOF

Array requirements

Once the NVMe endpoint is created on the array, follow these steps to update the endpoint name to adhere to the CSI driver requirements.

  • Run nvme discover --transport=tcp --traddr=<InterfaceAdd> --trsvcid=4420. <InterfaceAdd> is the placeholder for the actual IP address of the NVMe endpoint.
  • Fetch the subnqn from the output, for example nqn.1988-11.com.dell:PowerMax_2500:00:000120001100; this value is used as the <subnqn> placeholder when updating the NVMe endpoint name.
  • Update the NVMe endpoint name to <subnqn>:<dir><port>. For example: nqn.1988-11.com.dell:PowerMax_2500:00:000120001100:OR1C000

NFS Requirements

  • Ensure that your nodes support mounting NFS volumes if using NFS.
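
  A quick sanity check on a worker node (a sketch; assumes RHCOS/RHEL nodes, where the NFS client ships in the nfs-utils package, and <node-name> is a placeholder):

  oc debug node/<node-name> -- chroot /host rpm -q nfs-utils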

Replication Requirements (Optional)

Applicable only if you decide to enable the Replication feature in my-powermax-settings.yaml:

replication:
  enabled: true

Replication CRDs

The CRDs for replication can be obtained and installed from the csm-replication project on GitHub. Use csm-replication/deploy/replicationcrds.all.yaml located in the csm-replication git repo for the installation.
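
For example (assuming kubectl access to the cluster; the URL below is the public csm-replication repository):

git clone https://github.com/dell/csm-replication.git
kubectl apply -f csm-replication/deploy/replicationcrds.all.yaml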

CRDs should be configured during the replication prepare stage with repctl, as described in install-repctl.