Installation Guide
- Set up an OpenShift cluster following the official documentation.
- Proceed to the Prerequisites.
- Complete the base installation.
- Proceed with module installation.
- Create a user in PowerStore: in PowerStore Manager, navigate to Settings -> Users -> Add.
  Username: csmadmin
  User Role: Storage Operator
- (Optional) Create a NAS server: in PowerStore Manager, navigate to Storage -> NAS Servers -> Create.
- For the protocol-specific prerequisites, see below.
- Complete the zoning of each host with the PowerStore Storage Array. Please refer to the Host Connectivity Guide for guidelines when setting up a Fibre Channel SAN infrastructure.
- Verify that the initiators of each host are logged in to the PowerStore Storage Array. CSM will perform the Host Registration of each host with the PowerStore Array.
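A quick way to list the FC initiator WWPNs from a worker node is via a debug pod (a sketch; <worker-node> is a placeholder, and the fc_host entries exist only on hosts with FC HBAs):
oc debug node/<worker-node> -- chroot /host sh -c 'cat /sys/class/fc_host/host*/port_name'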
- Multipathing software configuration
a. Configure Device Mapper MPIO for PowerStore FC connectivity
Use this command to create the machine configuration to configure the DM-MPIO service on all the worker hosts for FC connectivity.
oc apply -f 99-workers-multipath-conf.yaml
Example:
cat <<EOF> multipath.conf
defaults {
  polling_interval 5
  checker_timeout 15
  disable_changed_wwids yes
  find_multipaths no
}
devices {
  device {
    vendor DellEMC
    product PowerStore
    detect_prio "yes"
    path_selector "service-time 0"
    path_grouping_policy "group_by_prio"
    path_checker tur
    failback immediate
    fast_io_fail_tmo 5
    no_path_retry 3
    rr_min_io_rq 1
    max_sectors_kb 1024
    dev_loss_tmo 10
  }
}
EOF
cat <<EOF> 99-workers-multipath-conf.yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 99-workers-multipath-conf
  labels:
    machineconfiguration.openshift.io/role: worker
spec:
  config:
    ignition:
      version: 3.4.0
    storage:
      files:
        - contents:
            source: data:text/plain;charset=utf-8;base64,$(cat multipath.conf | base64 -w0)
            verification: {}
          filesystem: root
          mode: 400
          path: /etc/multipath.conf
EOF
b. Enable Linux Device Mapper MPIO
Use this command to create the machine configuration to enable the DM-MPIO service on all the worker hosts.
oc apply -f 99-workers-enable-multipathd.yaml
cat << EOF > 99-workers-enable-multipathd.yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 99-workers-enable-multipathd
  labels:
    machineconfiguration.openshift.io/role: worker
spec:
  config:
    ignition:
      version: 3.4.0
    systemd:
      units:
        - name: "multipathd.service"
          enabled: true
EOF
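Applying a MachineConfig triggers a rolling reboot of the worker nodes. A generic way to watch the rollout finish (not specific to this driver):
oc get mcp worker -w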
- Complete the iSCSI network configuration to connect the hosts with the PowerStore Storage array. Please refer to the Host Connectivity Guide for best practices for attaching the hosts to a PowerStore storage array.
- Verify that the initiators of each host are logged in to the PowerStore Storage Array. CSM will perform the Host Registration of each host with the PowerStore Array.
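To confirm the iSCSI initiator name (IQN) of a worker node from a debug pod (a sketch; <worker-node> is a placeholder):
oc debug node/<worker-node> -- chroot /host cat /etc/iscsi/initiatorname.iscsi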
- Enable iSCSI service
Use this command to create the machine configuration to enable the iscsid service.
oc apply -f 99-workers-enable-iscsid.yaml
Example:
cat <<EOF> 99-workers-enable-iscsid.yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 99-workers-enable-iscsid
  labels:
    machineconfiguration.openshift.io/role: worker
spec:
  config:
    ignition:
      version: 3.4.0
    systemd:
      units:
        - name: "iscsid.service"
          enabled: true
EOF
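Once the nodes have rebooted with the new MachineConfig, the service state can be spot-checked on a node (a sketch; <worker-node> is a placeholder):
oc debug node/<worker-node> -- chroot /host systemctl is-enabled iscsid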
- Multipathing software configuration
a. Configure Device Mapper MPIO for PowerStore iSCSI connectivity
Use this command to create the machine configuration to configure the DM-MPIO service on all the worker hosts for iSCSI connectivity.
oc apply -f 99-workers-multipath-conf.yaml
Example:
cat <<EOF> multipath.conf
defaults {
  polling_interval 5
  checker_timeout 15
  disable_changed_wwids yes
  find_multipaths no
}
devices {
  device {
    vendor DellEMC
    product PowerStore
    detect_prio "yes"
    path_selector "service-time 0"
    path_grouping_policy "group_by_prio"
    path_checker tur
    failback immediate
    fast_io_fail_tmo 5
    no_path_retry 3
    rr_min_io_rq 1
    max_sectors_kb 1024
    dev_loss_tmo 10
  }
}
EOF
cat <<EOF> 99-workers-multipath-conf.yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 99-workers-multipath-conf
  labels:
    machineconfiguration.openshift.io/role: worker
spec:
  config:
    ignition:
      version: 3.4.0
    storage:
      files:
        - contents:
            source: data:text/plain;charset=utf-8;base64,$(cat multipath.conf | base64 -w0)
            verification: {}
          filesystem: root
          mode: 400
          path: /etc/multipath.conf
EOF
b. Enable Linux Device Mapper MPIO
Use this command to create the machine configuration to enable the DM-MPIO service on all the worker hosts.
oc apply -f 99-workers-enable-multipathd.yaml
cat << EOF > 99-workers-enable-multipathd.yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 99-workers-enable-multipathd
  labels:
    machineconfiguration.openshift.io/role: worker
spec:
  config:
    ignition:
      version: 3.4.0
    systemd:
      units:
        - name: "multipathd.service"
          enabled: true
EOF
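After the nodes have rebooted and paths to the array are established, the DM-MPIO topology can be inspected on a node (a sketch; <worker-node> is a placeholder):
oc debug node/<worker-node> -- chroot /host multipath -ll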
- Complete the zoning of each host with the PowerStore Storage Array. Please refer to the Host Connectivity Guide for guidelines when setting up a Fibre Channel SAN infrastructure.
- Verify that the initiators of each host are logged in to the PowerStore Storage Array. CSM will perform the Host Registration of each host with the PowerStore Array.
- Configure IO policy for native NVMe multipathing
Use this command to create the machine configuration that sets the native NVMe multipathing IO policy to round-robin.
oc apply -f 99-workers-multipath-round-robin.yaml
cat <<EOF> 71-nvmf-iopolicy-dell.rules
ACTION=="add", SUBSYSTEM=="nvme-subsystem", ATTR{model}=="dellemc-powerstore", ATTR{iopolicy}="round-robin"
EOF
Example:
cat <<EOF> 99-workers-multipath-round-robin.yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 99-workers-multipath-round-robin
  labels:
    machineconfiguration.openshift.io/role: worker
spec:
  config:
    ignition:
      version: 3.4.0
    storage:
      files:
        - contents:
            source: data:text/plain;charset=utf-8;base64,$(cat 71-nvmf-iopolicy-dell.rules | base64 -w0)
            verification: {}
          filesystem: root
          mode: 420
          path: /etc/udev/rules.d/71-nvme-io-policy.rules
EOF
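Once the udev rule is in place and the PowerStore NVMe subsystem is connected, the active IO policy can be checked on a node (a sketch; the sysfs entry appears only after an NVMe subsystem exists, and <worker-node> is a placeholder):
oc debug node/<worker-node> -- chroot /host sh -c 'cat /sys/class/nvme-subsystem/nvme-subsys*/iopolicy'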
- Configure NVMe reconnecting forever
Use this command to create the machine configuration that configures NVMe to keep reconnecting indefinitely (ctrl_loss_tmo is set to -1).
oc apply -f 99-workers-nvmf-ctrl-loss-tmo.yaml
cat <<EOF> 72-nvmf-ctrl_loss_tmo.rules
ACTION=="add|change", SUBSYSTEM=="nvme", KERNEL=="nvme*", ATTR{ctrl_loss_tmo}="-1"
EOF
cat <<EOF> 99-workers-nvmf-ctrl-loss-tmo.yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 99-workers-nvmf-ctrl-loss-tmo
  labels:
    machineconfiguration.openshift.io/role: worker
spec:
  config:
    ignition:
      version: 3.4.0
    storage:
      files:
        - contents:
            source: data:text/plain;charset=utf-8;base64,$(cat 72-nvmf-ctrl_loss_tmo.rules | base64 -w0)
            verification: {}
          filesystem: root
          mode: 420
          path: /etc/udev/rules.d/72-nvmf-ctrl_loss_tmo.rules
EOF
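To spot-check the timeout on connected NVMe over Fabrics controllers (a sketch; the attribute is typically absent for local PCIe drives, and <worker-node> is a placeholder):
oc debug node/<worker-node> -- chroot /host sh -c 'cat /sys/class/nvme/nvme*/ctrl_loss_tmo'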
- Complete the NVMe network configuration to connect the hosts with the PowerStore Storage array. Please refer to the Host Connectivity Guide for best practices for attaching the hosts to a PowerStore storage array.
- Verify that the initiators of each host are logged in to the PowerStore Storage Array. CSM will perform the Host Registration of each host with the PowerStore Array.
- Configure IO policy for native NVMe multipathing
Use this command to create the machine configuration that sets the native NVMe multipathing IO policy to round-robin.
oc apply -f 99-workers-multipath-round-robin.yaml
cat <<EOF> 71-nvmf-iopolicy-dell.rules
ACTION=="add", SUBSYSTEM=="nvme-subsystem", ATTR{model}=="dellemc-powerstore", ATTR{iopolicy}="round-robin"
EOF
Example:
cat <<EOF> 99-workers-multipath-round-robin.yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 99-workers-multipath-round-robin
  labels:
    machineconfiguration.openshift.io/role: worker
spec:
  config:
    ignition:
      version: 3.4.0
    storage:
      files:
        - contents:
            source: data:text/plain;charset=utf-8;base64,$(cat 71-nvmf-iopolicy-dell.rules | base64 -w0)
            verification: {}
          filesystem: root
          mode: 420
          path: /etc/udev/rules.d/71-nvme-io-policy.rules
EOF
- Configure NVMe reconnecting forever
Use this command to create the machine configuration that configures NVMe to keep reconnecting indefinitely (ctrl_loss_tmo is set to -1).
oc apply -f 99-workers-nvmf-ctrl-loss-tmo.yaml
cat <<EOF> 72-nvmf-ctrl_loss_tmo.rules
ACTION=="add|change", SUBSYSTEM=="nvme", KERNEL=="nvme*", ATTR{ctrl_loss_tmo}="-1"
EOF
cat <<EOF> 99-workers-nvmf-ctrl-loss-tmo.yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 99-workers-nvmf-ctrl-loss-tmo
  labels:
    machineconfiguration.openshift.io/role: worker
spec:
  config:
    ignition:
      version: 3.4.0
    storage:
      files:
        - contents:
            source: data:text/plain;charset=utf-8;base64,$(cat 72-nvmf-ctrl_loss_tmo.rules | base64 -w0)
            verification: {}
          filesystem: root
          mode: 420
          path: /etc/udev/rules.d/72-nvmf-ctrl_loss_tmo.rules
EOF
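Once the NVMe/TCP sessions are established, connectivity and paths can be verified from a worker node (a sketch, assuming nvme-cli is available on the host; <worker-node> is a placeholder):
oc debug node/<worker-node> -- chroot /host nvme list-subsys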
Operator Installation
- On the OpenShift console, navigate to OperatorHub and use the keyword filter to search for Dell Container Storage Modules.
- Click the Dell Container Storage Modules tile.
- Keep all default settings and click Install.
Verify that the operator is deployed
oc get operators
NAME AGE
dell-csm-operator-certified.openshift-operators 2d21h
oc get pod -n openshift-operators
NAME READY STATUS RESTARTS AGE
dell-csm-operator-controller-manager-86dcdc8c48-6dkxm 2/2 Running 21 (19h ago) 2d21h
CSI Driver Installation
- Create namespace:
oc create namespace powermax
- Create PowerMax credentials:
Create a file called config.yaml or pick a sample.
cat << EOF > config.yaml
storageArrays:
  - storageArrayId: "000000000001"
    primaryEndpoint: https://primary-1.unisphe.re:8443
    backupEndpoint: https://backup-1.unisphe.re:8443
managementServers:
  - endpoint: https://primary-1.unisphe.re:8443
    username: admin
    password: password
    skipCertificateValidation: true
  - endpoint: https://backup-1.unisphe.re:8443
    username: admin2
    password: password2
    skipCertificateValidation: false
    certSecret: primary-cert
EOF
Edit the file, then run this command to create the powermax-creds secret:
oc create secret generic powermax-creds --from-file=config=config.yaml -n powermax --dry-run=client -oyaml > secret-powermax-config.yaml
Use this command to create the config:
oc apply -f secret-powermax-config.yaml
Use this command to replace or update the config:
oc replace -f secret-powermax-config.yaml --force
Verify that the config secret is created:
oc get secret -n powermax
NAME             TYPE     DATA   AGE
powermax-creds   Opaque   1      3h7m
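To double-check what was stored, the secret's config key (it was created with --from-file=config=config.yaml) can be decoded:
oc get secret powermax-creds -n powermax -o jsonpath='{.data.config}' | base64 -d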
- Create PowerMax Array ConfigMap:
Note: powermax-array-config is deprecated and remains for backward compatibility only. You can skip creating it and instead add values for X_CSI_MANAGED_ARRAYS, X_CSI_TRANSPORT_PROTOCOL, and X_CSI_POWERMAX_PORTGROUPS in the sample files.
Create a configmap using the sample file here. Fill in the appropriate values for driver configuration.
# To create this configmap use: kubectl create -f powermax-array-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: powermax-array-config
  namespace: powermax
data:
  powermax-array-config.yaml: |
    # List of comma-separated port groups (ISCSI only). Example: PortGroup1, portGroup2. Required for iSCSI only.
    X_CSI_POWERMAX_PORTGROUPS: ""
    # Choose which transport protocol to use (ISCSI, FC, NVMETCP, auto). Defaults to auto if nothing is specified.
    X_CSI_TRANSPORT_PROTOCOL: ""
    # IP address of the Unisphere for PowerMax (Required). Defaults to https://0.0.0.0:8443
    X_CSI_POWERMAX_ENDPOINT: "https://10.0.0.0:8443"
    # List of comma-separated array ID(s) which will be managed by the driver (Required)
    X_CSI_MANAGED_ARRAYS: "000000000000,000000000000,"
- Create a CR (Custom Resource) for PowerMax:
i. Create a CR (Custom Resource) for PowerMax using the sample files provided:
a. Minimal Configuration: Use the sample file for default settings. If using the secret above, ensure that the name of the secret created is powermax-creds.
[OR]
b. Detailed Configuration: Use the sample file for detailed settings, or use the Wizard to generate the sample file.
- Users should configure the parameters in the CR. The following table lists the primary configurable parameters of the PowerMax driver and their default values:
ii. Confirm that the value of X_CSI_REVPROXY_USE_SECRET is set to true.
iii. Create PowerMax custom resource:
oc create -f <input_sample_file.yaml>
This command will deploy the CSI PowerMax driver in the namespace specified in the input YAML file.
Check if ContainerStorageModule CR is created successfully:
oc get csm powermax -n powermax
NAME CREATIONTIME CSIDRIVERTYPE CONFIGVERSION STATE
powermax 3h powermax v2.14.0 Succeeded
Check the status of the CR to verify that the driver installation is in the Succeeded state. If the status is not Succeeded, see the Troubleshooting guide for more information.
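A couple of generic checks that often help when the CR does not reach the Succeeded state (a sketch):
oc describe csm powermax -n powermax
oc get pods -n powermax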
- Refer to Volume Snapshot Class and Storage Class for the sample files.
Other features to enable
Dynamic Logging Configuration
This feature is introduced in CSI Driver for PowerMax version 2.0.0.
As part of driver installation, a ConfigMap with the name powermax-config-params is created using the manifest located in the sample file. This ConfigMap contains an attribute CSI_LOG_LEVEL which specifies the current log level of the CSI driver. To set the default/initial log level, the user can set this field during driver installation.
To update the log level dynamically, the user has to edit the ConfigMap powermax-config-params and update CSI_LOG_LEVEL to the desired log level.
kubectl edit configmap -n powermax powermax-config-params
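As an illustration, the edited ConfigMap could look like the sketch below; the data key shown is an assumption, so keep whatever key the operator-created ConfigMap already uses and change only the CSI_LOG_LEVEL value:
apiVersion: v1
kind: ConfigMap
metadata:
  name: powermax-config-params
  namespace: powermax
data:
  # The data key below is illustrative; retain the key present in the existing ConfigMap.
  driver-config-params.yaml: |
    CSI_LOG_LEVEL: "debug"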
Volume Health Monitoring
This feature is introduced in CSI Driver for PowerMax version 2.2.0.
The Volume Health Monitoring feature is optional and is disabled by default when drivers are installed via the CSM operator.
To enable this feature, set X_CSI_HEALTH_MONITOR_ENABLED to true in the driver manifest under the controller and node sections. Also, install the external-health-monitor sidecar from the sideCars section for the controller plugin.
To get the volume health state, the value under controller should be set to true as seen below. To get the volume stats, the value under node should be set to true.
# Install the 'external-health-monitor' sidecar accordingly.
# Allowed values:
#   true: enable checking of health condition of CSI volumes
#   false: disable checking of health condition of CSI volumes
# Default value: false
controller:
  envs:
    - name: X_CSI_HEALTH_MONITOR_ENABLED
      value: "true"
node:
  envs:
    # X_CSI_HEALTH_MONITOR_ENABLED: Enable/Disable health monitor of CSI volumes from node plugin - volume usage
    # Allowed values:
    #   true: enable checking of health condition of CSI volumes
    #   false: disable checking of health condition of CSI volumes
    # Default value: false
    - name: X_CSI_HEALTH_MONITOR_ENABLED
      value: "true"
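Once enabled, the external-health-monitor reports abnormal volume conditions as events on the affected PVCs; a quick way to look for them (a sketch; <pvc-name> and <namespace> are placeholders):
oc describe pvc <pvc-name> -n <namespace>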
Support for custom topology keys
This feature is introduced in CSI Driver for PowerMax version 2.3.0.
Support for custom topology keys is optional and is disabled by default when drivers are installed via the CSM operator.
X_CSI_TOPOLOGY_CONTROL_ENABLED provides a way to filter topology keys on a node based on array and transport protocol. If enabled, the user can create custom topology keys by editing the node-topology-config configmap.
- To enable this feature, set X_CSI_TOPOLOGY_CONTROL_ENABLED to true in the driver manifest under the node section.
# X_CSI_TOPOLOGY_CONTROL_ENABLED provides a way to filter topology keys on a node based on array and transport protocol
# if enabled, user can create custom topology keys by editing node-topology-config configmap.
# Allowed values:
#   true: enable the filtration based on config map
#   false: disable the filtration based on config map
# Default value: false
- name: X_CSI_TOPOLOGY_CONTROL_ENABLED
  value: "false"
- Edit the sample config map "node-topology-config" as described here with appropriate values. Example:
apiVersion: v1
kind: ConfigMap
metadata:
  name: node-topology-config
  namespace: powermax
data:
  topologyConfig.yaml: |
    allowedConnections:
      - nodeName: "node1"
        rules:
          - "000000000001:FC"
          - "000000000002:FC"
      - nodeName: "*"
        rules:
          - "000000000002:FC"
    deniedConnections:
      - nodeName: "node2"
        rules:
          - "000000000002:*"
      - nodeName: "node3"
        rules:
          - "*:*"
- Run the following command to create the configmap:
kubectl create -f topologyConfig.yaml
Note: The name of the configmap should always be node-topology-config.
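After the driver node pods restart with the new configuration, the topology keys advertised by a node can be inspected on its CSINode object (a sketch; <node-name> is a placeholder):
kubectl get csinode <node-name> -o yaml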