Installation Guide
- Set up an OpenShift cluster following the official documentation.
- Proceed to the Prerequisites.
- Complete the base installation.
- Proceed with the module installation.
 
- To create a new user in PowerStore, navigate to PowerStore Manager → Settings → Users, and click 'Add' to initiate the user creation process.
  Username: csmadmin
  User Role: Storage Operator
- (Optional) To create a NAS server, navigate to PowerStore Manager → Storage → NAS Servers → Create.
 
- For protocol-specific prerequisites, see the sections below.
- Complete the zoning of each host with the PowerStore Storage Array. Refer to the Host Connectivity Guide for guidelines on setting up a Fibre Channel SAN infrastructure.
- Verify that the initiators of each host are logged in to the PowerStore Storage Array. CSM will perform the Host Registration of each host with the PowerStore Array.
 
- Multipathing software configuration
a. Configure Device Mapper MPIO for PowerStore FC connectivity
Use this command to create the machine configuration that configures the DM-MPIO service on all worker hosts for FC connectivity:
oc apply -f 99-workers-multipath-conf.yaml
Example:
cat <<EOF> multipath.conf
defaults {
  polling_interval 5
  checker_timeout 15
  disable_changed_wwids yes
  find_multipaths no
}
devices {
  device {
    vendor DellEMC
    product PowerStore
    detect_prio "yes"
    path_selector "queue-length 0"
    path_grouping_policy "group_by_prio"
    path_checker tur
    failback immediate
    fast_io_fail_tmo 5
    no_path_retry 3
    rr_min_io_rq 1
    max_sectors_kb 1024
    dev_loss_tmo 10
  }
}
EOF
cat <<EOF> 99-workers-multipath-conf.yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 99-workers-multipath-conf
  labels:
    machineconfiguration.openshift.io/role: worker
spec:
  config:
    ignition:
      version: 3.4.0
    storage:
      files:
        - contents:
            source: data:text/plain;charset=utf-8;base64,$(cat multipath.conf | base64 -w0)
            verification: {}
          filesystem: root
          mode: 400
          path: /etc/multipath.conf
EOF
b. Enable Linux Device Mapper MPIO
Use this command to create the machine configuration that enables the DM-MPIO service on all worker hosts (a verification sketch follows these configurations):
oc apply -f 99-workers-enable-multipathd.yaml
cat << EOF > 99-workers-enable-multipathd.yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 99-workers-enable-multipathd.yaml
  labels:
    machineconfiguration.openshift.io/role: worker
spec:
  config:
    ignition:
      version: 3.4.0
    systemd:
      units:
        - name: "multipathd.service"
          enabled: true
EOF
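After both MachineConfigs are applied, the Machine Config Operator reboots the worker nodes in a rolling fashion. A quick way to confirm the rollout and the multipath setup is sketched below; the node name is a placeholder, and the actual multipath state depends on your zoning and volumes.
# Wait until the worker MachineConfigPool reports UPDATED=True
oc get mcp worker

# Spot-check one worker node: multipathd should be enabled and
# /etc/multipath.conf should contain the PowerStore device section
oc debug node/<worker-node> -- chroot /host systemctl is-enabled multipathd
oc debug node/<worker-node> -- chroot /host cat /etc/multipath.conf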
 
- Complete the iSCSI network configuration to connect the hosts with the PowerStore Storage Array. Refer to the Host Connectivity Guide for best practices for attaching hosts to a PowerStore storage array.
- Verify that the initiators of each host are logged in to the PowerStore Storage Array. CSM will perform the Host Registration of each host with the PowerStore Array.
- Enable iSCSI service
Use this command to create the machine configuration to enable the iscsid service (a verification sketch follows the example):
oc apply -f 99-workers-enable-iscsid.yaml
Example:
cat <<EOF> 99-workers-enable-iscsid.yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 99-workers-enable-iscsid
  labels:
    machineconfiguration.openshift.io/role: worker
spec:
  config:
    ignition:
      version: 3.4.0
    systemd:
      units:
        - name: "iscsid.service"
          enabled: true
EOF
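Once this MachineConfig has rolled out, you can confirm the iscsid service on a worker node; the node name is a placeholder.
# iscsid should be enabled and active after the node reboots
oc debug node/<worker-node> -- chroot /host systemctl is-enabled iscsid
oc debug node/<worker-node> -- chroot /host systemctl is-active iscsid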
- Multipathing software configuration
a. Configure Device Mapper MPIO for PowerStore iSCSI connectivity
Use this command to create the machine configuration that configures the DM-MPIO service on all worker hosts for iSCSI connectivity:
oc apply -f 99-workers-multipath-conf.yaml
Example:
cat <<EOF> multipath.conf
defaults {
  polling_interval 5
  checker_timeout 15
  disable_changed_wwids yes
  find_multipaths no
}
devices {
  device {
    vendor DellEMC
    product PowerStore
    detect_prio "yes"
    path_selector "queue-length 0"
    path_grouping_policy "group_by_prio"
    path_checker tur
    failback immediate
    fast_io_fail_tmo 5
    no_path_retry 3
    rr_min_io_rq 1
    max_sectors_kb 1024
    dev_loss_tmo 10
  }
}
EOF
cat <<EOF> 99-workers-multipath-conf.yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 99-workers-multipath-conf
  labels:
    machineconfiguration.openshift.io/role: worker
spec:
  config:
    ignition:
      version: 3.4.0
    storage:
      files:
        - contents:
            source: data:text/plain;charset=utf-8;base64,$(cat multipath.conf | base64 -w0)
            verification: {}
          filesystem: root
          mode: 400
          path: /etc/multipath.conf
EOF
b. Enable Linux Device Mapper MPIO
Use this command to create the machine configuration that enables the DM-MPIO service on all worker hosts:
oc apply -f 99-workers-enable-multipathd.yaml
cat << EOF > 99-workers-enable-multipathd.yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 99-workers-enable-multipathd.yaml
  labels:
    machineconfiguration.openshift.io/role: worker
spec:
  config:
    ignition:
      version: 3.4.0
    systemd:
      units:
        - name: "multipathd.service"
          enabled: true
EOF
 
- Complete the zoning of each host with the PowerStore Storage Array. Refer to the Host Connectivity Guide for guidelines on setting up a Fibre Channel SAN infrastructure.

- Verify that the initiators of each host are logged in to the PowerStore Storage Array. CSM will perform the Host Registration of each host with the PowerStore Array.
 
- To ensure successful integration of NVMe protocols with the CSI Driver, the following conditions must be met:
  - Each OpenShift node that connects to Dell storage arrays must have a unique NVMe Qualified Name (NQN).
  - By default, the OpenShift deployment process for CoreOS assigns the same host NQN to all nodes. This value is stored in the file /etc/nvme/hostnqn.
  - To resolve this and guarantee unique host NQNs across nodes, you can apply a machine configuration to your OpenShift Container Platform (OCP) cluster (a uniqueness check is sketched after the machine config). One recommended approach is to add the following machine config:
 
cat <<EOF > 99-worker-custom-nvme-hostnqn.yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 99-worker-custom-nvme-hostnqn
spec:
  config:
    ignition:
      version: 3.4.0
    systemd:
      units:
        - contents: |
            [Unit]
            Description=Custom CoreOS Generate NVMe Hostnqn

            [Service]
            Type=oneshot
            ExecStart=/usr/bin/sh -c '/usr/sbin/nvme gen-hostnqn > /etc/nvme/hostnqn'
            RemainAfterExit=yes

            [Install]
            WantedBy=multi-user.target
          enabled: true
          name: custom-coreos-generate-nvme-hostnqn.service
EOF
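After the nodes reboot with this MachineConfig, each worker should report a distinct NQN. A quick uniqueness check over the worker nodes might look like this (it assumes the standard worker role label):
# Print the host NQN of every worker node; all values should differ
for node in $(oc get nodes -l node-role.kubernetes.io/worker -o name); do
  echo "== ${node} =="
  oc debug "${node}" -- chroot /host cat /etc/nvme/hostnqn
done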
- Configure IO policy for native NVMe multipathing
Use this command to create the machine configuration that sets the native NVMe multipathing IO policy to round-robin:
oc apply -f 99-workers-multipath-round-robin.yaml
cat <<EOF> 71-nvmf-iopolicy-dell.rules
ACTION=="add", SUBSYSTEM=="nvme-subsystem", ATTR{model}=="dellemc-powerstore", ATTR{iopolicy}="round-robin"
EOF
Example:
cat <<EOF> 99-workers-multipath-round-robin.yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 99-workers-multipath-round-robin
  labels:
    machineconfiguration.openshift.io/role: worker
spec:
  config:
    ignition:
      version: 3.4.0
    storage:
      files:
        - contents:
            source: data:text/plain;charset=utf-8;base64,$(cat 71-nvmf-iopolicy-dell.rules | base64 -w0)
            verification: {}
          filesystem: root
          mode: 420
          path: /etc/udev/rules.d/71-nvme-io-policy.rules
EOF
- Configure NVMe reconnecting forever
Use this command to create the machine configuration that configures the NVMe reconnect behavior (verification of both udev settings is sketched after the examples):
oc apply -f 99-workers-nvmf-ctrl-loss-tmo.yaml
cat <<EOF> 72-nvmf-ctrl_loss_tmo.rules
ACTION=="add|change", SUBSYSTEM=="nvme", KERNEL=="nvme*", ATTR{ctrl_loss_tmo}="-1"
EOF
cat <<EOF> 99-workers-nvmf-ctrl-loss-tmo.yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 99-workers-nvmf-ctrl-loss-tmo
  labels:
    machineconfiguration.openshift.io/role: worker
spec:
  config:
    ignition:
      version: 3.4.0
    storage:
      files:
        - contents:
            source: data:text/plain;charset=utf-8;base64,$(cat 72-nvmf-ctrl_loss_tmo.rules | base64 -w0)
            verification: {}
          filesystem: root
          mode: 420
          path: /etc/udev/rules.d/72-nvmf-ctrl_loss_tmo.rules
EOF
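With both udev rules in place, the IO policy and the reconnect timeout can be spot-checked on a worker node once an NVMe subsystem is connected; the node name is a placeholder, and the sysfs paths assume at least one PowerStore NVMe controller is present.
# IO policy of each NVMe subsystem should read "round-robin"
oc debug node/<worker-node> -- chroot /host sh -c 'cat /sys/class/nvme-subsystem/nvme-subsys*/iopolicy'

# ctrl_loss_tmo of each fabrics controller should read "-1" (reconnect forever)
oc debug node/<worker-node> -- chroot /host sh -c 'cat /sys/class/nvme/nvme*/ctrl_loss_tmo'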
- Complete the NVMe network configuration to connect the hosts with the PowerStore Storage Array. Refer to the Host Connectivity Guide for best practices for attaching hosts to a PowerStore storage array.

- Verify that the initiators of each host are logged in to the PowerStore Storage Array. CSM will perform the Host Registration of each host with the PowerStore Array.
 
- To ensure successful integration of NVMe protocols with the CSI Driver, the following conditions must be met:
  - Each OpenShift node that connects to Dell storage arrays must have a unique NVMe Qualified Name (NQN).
  - By default, the OpenShift deployment process for CoreOS assigns the same host NQN to all nodes. This value is stored in the file /etc/nvme/hostnqn.
  - To resolve this and guarantee unique host NQNs across nodes, you can apply a machine configuration to your OpenShift Container Platform (OCP) cluster. One recommended approach is to add the following machine config:
 
cat <<EOF > 99-worker-custom-nvme-hostnqn.yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 99-worker-custom-nvme-hostnqn
spec:
  config:
    ignition:
      version: 3.4.0
    systemd:
      units:
        - contents: |
            [Unit]
            Description=Custom CoreOS Generate NVMe Hostnqn

            [Service]
            Type=oneshot
            ExecStart=/usr/bin/sh -c '/usr/sbin/nvme gen-hostnqn > /etc/nvme/hostnqn'
            RemainAfterExit=yes

            [Install]
            WantedBy=multi-user.target
          enabled: true
          name: custom-coreos-generate-nvme-hostnqn.service
EOF
- Configure IO policy for native NVMe multipathing
Use this command to create the machine configuration that sets the native NVMe multipathing IO policy to round-robin:
oc apply -f 99-workers-multipath-round-robin.yaml
cat <<EOF> 71-nvmf-iopolicy-dell.rules
ACTION=="add", SUBSYSTEM=="nvme-subsystem", ATTR{model}=="dellemc-powerstore", ATTR{iopolicy}="round-robin"
EOF
Example:
cat <<EOF> 99-workers-multipath-round-robin.yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 99-workers-multipath-round-robin
  labels:
    machineconfiguration.openshift.io/role: worker
spec:
  config:
    ignition:
      version: 3.4.0
    storage:
      files:
        - contents:
            source: data:text/plain;charset=utf-8;base64,$(cat 71-nvmf-iopolicy-dell.rules | base64 -w0)
            verification: {}
          filesystem: root
          mode: 420
          path: /etc/udev/rules.d/71-nvme-io-policy.rules
EOF
- Configure NVMe reconnecting forever
Use this command to create the machine configuration that configures the NVMe reconnect behavior:
oc apply -f 99-workers-nvmf-ctrl-loss-tmo.yaml
cat <<EOF> 72-nvmf-ctrl_loss_tmo.rules
ACTION=="add|change", SUBSYSTEM=="nvme", KERNEL=="nvme*", ATTR{ctrl_loss_tmo}="-1"
EOF
cat <<EOF> 99-workers-nvmf-ctrl-loss-tmo.yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 99-workers-nvmf-ctrl-loss-tmo
  labels:
    machineconfiguration.openshift.io/role: worker
spec:
  config:
    ignition:
      version: 3.4.0
    storage:
      files:
        - contents:
            source: data:text/plain;charset=utf-8;base64,$(cat 72-nvmf-ctrl_loss_tmo.rules | base64 -w0)
            verification: {}
          filesystem: root
          mode: 420
          path: /etc/udev/rules.d/72-nvmf-ctrl_loss_tmo.rules
EOF
 
Operator Installation
- On the OpenShift console, navigate to OperatorHub and use the keyword filter to search for Dell Container Storage Modules.
- Click the Dell Container Storage Modules tile.
- Keep all default settings and click Install.
 
Verify that the operator is deployed
oc get operators
NAME                                                          AGE
dell-csm-operator-certified.openshift-operators               2d21h
oc get pod -n openshift-operators
NAME                                                       READY   STATUS       RESTARTS      AGE
dell-csm-operator-controller-manager-86dcdc8c48-6dkxm      2/2     Running      21 (19h ago)  2d21h
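Optionally, you can also confirm the operator's ClusterServiceVersion; the exact CSV name and version string depend on the installed release.
# The CSV for the operator should report PHASE: Succeeded
oc get csv -n openshift-operators | grep dell-csm-operator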
CSI Driver Installation
- Create project:
Use this command to create a new project. You can use any project name instead of powerstore.
oc new-project powerstore
- Create config secret:
Create a file called config.yaml or use the sample below.
Example:
cat << EOF > config.yaml
arrays:
  - endpoint: "https://powerstore.example.com/api/rest"
    globalID: "PSxxxxxxxxxxxx"
    username: "csmadmin"
    password: "P@ssw0rd123"
    skipCertificateValidation: true
    blockProtocol: "FC"
EOF
Add a block for each PowerStore array in config.yaml, and include both source and target arrays if replication is enabled (a two-array sketch follows this step).
The username in config.yaml must be from PowerStore's authentication providers and have at least the Storage Operator role.
Edit the file, then run this command to generate the powerstore-config secret manifest:
oc create secret generic powerstore-config --from-file=config=config.yaml -n powerstore --dry-run=client -oyaml > secret-powerstore-config.yaml
Use this command to create the config:
oc apply -f secret-powerstore-config.yaml
Use this command to replace or update the config:
oc replace -f secret-powerstore-config.yaml --force
Verify the config secret is created:
oc get secret -n powerstore

NAME                TYPE     DATA   AGE
powerstore-config   Opaque   1      3h7m
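As an illustration of the multi-array note above, a config.yaml with two array blocks might look like the sketch below; the endpoints, global IDs, and credentials are placeholders and should be replaced with your own values.
cat << EOF > config.yaml
arrays:
  # first array (for replication, typically the source) - placeholder values
  - endpoint: "https://powerstore-1.example.com/api/rest"
    globalID: "PSxxxxxxxxxxx1"
    username: "csmadmin"
    password: "P@ssw0rd123"
    skipCertificateValidation: true
    blockProtocol: "FC"
  # second array (for replication, typically the target) - placeholder values
  - endpoint: "https://powerstore-2.example.com/api/rest"
    globalID: "PSxxxxxxxxxxx2"
    username: "csmadmin"
    password: "P@ssw0rd123"
    skipCertificateValidation: true
    blockProtocol: "FC"
EOF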
- Create the Custom Resource ContainerStorageModule for powerstore.
Use this command to create the ContainerStorageModule Custom Resource:
oc create -f csm-powerstore.yaml
Example:
cat << EOF > csm-powerstore.yaml
apiVersion: storage.dell.com/v1
kind: ContainerStorageModule
metadata:
  name: powerstore
  namespace: powerstore
spec:
  driver:
    csiDriverType: "powerstore"
    configVersion: v2.15.1
    forceRemoveDriver: true
EOF
Detailed configuration: use the sample file for detailed settings, or use the Wizard to generate a sample file.
To set parameters in the CR, refer to the table of the main PowerStore driver settings and their defaults.
- Check if the ContainerStorageModule CR is created successfully:
 
oc get csm powerstore -n powerstore
NAME        CREATIONTIME   CSIDRIVERTYPE   CONFIGVERSION         STATE
powerstore  3h             powerstore      v2.15.1               Succeeded
Check the status of the CR to verify if the driver installation is in the Succeeded state. If the status is not Succeeded, see the Troubleshooting guide for more information.
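Beyond the CR state, you can check that the driver pods themselves came up; pod names and replica counts vary with the cluster size and the modules enabled, so this is only a sanity check.
# Controller and node pods for the PowerStore driver should be Running
oc get pods -n powerstore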
- Create Storage Class:
Use this command to create the Storage Class:
oc apply -f sc-powerstore.yaml
Example:
cat << EOF > sc-powerstore.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: "powerstore"
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: "csi-powerstore.dellemc.com"
parameters:
  arrayID: "Unique"
  csi.storage.k8s.io/fstype: "xfs"
reclaimPolicy: Delete
allowVolumeExpansion: true
volumeBindingMode: Immediate
EOF
Replace the placeholders with actual values for your PowerStore array; for various storage class samples, refer here. A smoke-test PVC sketch follows this step.
Verify the Storage Class is created:
oc get storageclass powerstore

NAME                   PROVISIONER                  RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
powerstore (default)   csi-powerstore.dellemc.com   Delete          Immediate           true                   3h8m
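As a quick smoke test of the new storage class (not part of the documented procedure), you can create a small PVC and check that it binds; the name demo-pvc and the 5Gi size are arbitrary placeholders.
cat << EOF | oc apply -n powerstore -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: powerstore
EOF

# With volumeBindingMode: Immediate, the claim should bind right away
oc get pvc demo-pvc -n powerstore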
- Create Volume Snapshot Class:
Use this command to create the Volume Snapshot Class:
oc apply -f vsclass-powerstore.yaml
Example:
cat << EOF > vsclass-powerstore.yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: powerstore-snapshot
driver: "csi-powerstore.dellemc.com"
deletionPolicy: Delete
EOF
Verify the Volume Snapshot Class is created (a usage sketch follows the output):
oc get volumesnapshotclass

NAME                  DRIVER                       DELETIONPOLICY   AGE
powerstore-snapshot   csi-powerstore.dellemc.com   Delete           3h9m
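To exercise the snapshot class, you can create a VolumeSnapshot of an existing PVC; demo-pvc below refers to the hypothetical claim from the earlier smoke test and is only an illustration.
cat << EOF | oc apply -n powerstore -f -
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: demo-snapshot
spec:
  volumeSnapshotClassName: powerstore-snapshot
  source:
    persistentVolumeClaimName: demo-pvc
EOF

# READYTOUSE should become true once the snapshot is taken on the array
oc get volumesnapshot demo-snapshot -n powerstore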