PowerMax
Starting with CSM 1.12, all deployments will use images from quay.io by default. New release images will be available on Docker Hub until CSM 1.14 (May 2025), and existing releases will remain on Docker Hub.
The CSI Driver for Dell PowerMax can be deployed by using the provided Helm v3 charts and installation scripts on both Kubernetes and OpenShift platforms. For more detailed information on the installation scripts, see the script documentation.
Prerequisites
The following requirements must be met before installing the CSI Driver for Dell PowerMax:
- A Kubernetes or OpenShift cluster (see supported versions)
- Install Helm 3
- If enabling CSM for Authorization, please refer to the Authorization deployment steps first
- If enabling CSM Replication, both source and target storage systems must be locally managed by Unisphere.
- Example: When using two Unisphere instances, the first Unisphere instance should be configured with the source storage system as locally managed and target storage system as remotely managed. The second Unisphere configuration should mirror the first — locally managing the target storage system and remotely managing the source storage system.
- Refer to the sections below for protocol specific requirements.
- For NVMe support the preferred multipath solution is NVMe native multipathing. The Dell Host Connectivity Guide describes the details of each configuration option.
- Linux multipathing requirements (described later).
- PowerPath for Linux requirements (described later).
- Mount propagation is enabled on the container runtime that is being used.
- If using Snapshot feature, satisfy all Volume Snapshot requirements.
- Insecure registries are defined in Docker or other container runtimes for CSI drivers that are hosted in a non-secure location.
- Ensure that your nodes support mounting NFS volumes if using NFS.
- Auto RDM for vSphere over FC requirements
CSI PowerMax Reverse Proxy
The CSI PowerMax Reverse Proxy is an HTTPS server and has to be configured with an SSL certificate and a private key.
The certificate and key are provided to the proxy via a Kubernetes TLS secret (in the same namespace). The SSL certificate must be an X.509 certificate encoded in PEM format. The certificates can be obtained via a Certificate Authority or can be self-signed and generated by a tool such as openssl.
Starting with v2.7.0, these secrets are created automatically using the tls.key and tls.crt contents provided in the my-powermax-settings.yaml file. For this to work, you must install cert-manager, which manages the certificates and secrets. Install cert-manager using the following command:
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.11.0/cert-manager.yaml
The following example shows how to generate a private key and how to use that key to sign an SSL certificate using the openssl tool:
openssl genrsa -out tls.key 2048
openssl req -new -x509 -sha256 -key tls.key -out tls.crt -days 3650
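If you are not using cert-manager, you can create the Kubernetes TLS secret directly from the generated files. A minimal example, assuming the driver is installed in the powermax namespace and uses the default secret name csirevproxy-tls-secret:
kubectl create secret tls csirevproxy-tls-secret --cert=tls.crt --key=tls.key --namespace powermax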
Install Helm 3
Install Helm 3 on the master node before you install CSI Driver for Dell PowerMax.
Run the following command to install Helm 3.
curl https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | bash
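You can confirm the installation by checking the reported client version:
helm version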
Fibre Channel Requirements
The following requirements must be fulfilled in order to successfully use the Fibre Channel protocol with the CSI PowerMax driver:
- Zoning of the Host Bus Adapters (HBAs) to the Fibre Channel port director must be completed.
- Ensure that the HBA WWNs (initiators) appear on the list of initiators that are logged into the array.
- If the number of volumes that will be published to nodes is high, then configure the maximum number of LUNs for your HBAs on each node. See the appropriate HBA document to configure the maximum number of LUNs.
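To cross-check zoning, you can list the WWPNs of the HBAs on a node and compare them against the initiators logged into the array. On most Linux distributions, the WWPNs are exposed through sysfs:
cat /sys/class/fc_host/host*/port_name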
iSCSI Requirements
The following requirements must be fulfilled in order to successfully use the iSCSI protocol with the CSI PowerMax driver.
- All Kubernetes nodes must have the iscsi-initiator-utils package installed. On Debian based distributions the package name is open-iscsi.
- The iscsid service must be enabled and running. You can enable the service by running the following command on all worker nodes:
systemctl enable --now iscsid
- To configure iSCSI in Red Hat OpenShift clusters, you can create a MachineConfig object using the console or oc to ensure that the iSCSI daemon starts on all the Red Hat CoreOS nodes. Here is an example of a MachineConfig object:
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 99-iscsid
  labels:
    machineconfiguration.openshift.io/role: worker
spec:
  config:
    ignition:
      version: 3.2.0
    systemd:
      units:
        - name: "iscsid.service"
          enabled: true
Once the MachineConfig object has been deployed, CoreOS will ensure that iscsid.service starts automatically. You can check the status of the iSCSI service by entering the following command on each worker node in the cluster: sudo systemctl status iscsid
- Ensure that the iSCSI initiators are available on all the nodes where the driver node plugin will be installed.
- Ensure that a unique initiator name is set in /etc/iscsi/initiatorname.iscsi (see the example after this list).
- If your worker nodes are running Red Hat CoreOS, make sure that automatic iSCSI login at boot is configured. Please contact RedHat for more details.
- Kubernetes nodes must have network connectivity to an iSCSI director on the Dell PowerMax array that has IP interfaces. Manually create IP routes for each node that connects to the Dell PowerMax if required.
- Ensure that the iSCSI initiators on the nodes are not a part of any existing Host (Initiator Group) on the Dell PowerMax array.
- The CSI Driver needs the port group name containing the required iSCSI director ports. These port groups must be set up on each Dell PowerMax array. All the port group names supplied to the driver must exist on each Dell PowerMax with the same name.
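For example, to view the initiator name configured on a node:
cat /etc/iscsi/initiatorname.iscsi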
Refer to the Dell Host Connectivity Guide for more information.
NVMe Requirements
The following requirements must be fulfilled in order to successfully use the NVMe/TCP protocol with the CSI PowerMax driver:
- The nvme, nvme_core, nvme_fabrics, and nvme_tcp kernel modules are required for NVMe over Fabrics using TCP. Load the NVMe and NVMe-oF modules using the following commands:
modprobe nvme
modprobe nvme_tcp
- The NVMe modules may not be available after a node reboot, so loading the modules at startup is recommended; see the example below.
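On systemd-based distributions, one way to load the modules at boot is a modules-load.d drop-in. A minimal sketch, assuming the file name nvme-tcp.conf:
cat <<EOF | sudo tee /etc/modules-load.d/nvme-tcp.conf
nvme
nvme_tcp
EOF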
Starting with OCP 4.14, NVMe/TCP is enabled by default on RHCOS nodes.
Cluster requirements
- All OpenShift or Kubernetes nodes connecting to Dell storage arrays must use unique host NQNs (a quick check is shown at the end of this list).
- The driver requires the NVMe command-line interface (nvme-cli) to manage the NVMe clients and targets. Install nvme-cli on the host using the following command on RPM-based Linux distributions:
sudo dnf -y install nvme-cli
- Support for NVMe requires native NVMe multipathing to be configured on each worker node in the cluster. Please refer to the Dell Host Connectivity Guide for more details on NVMe multipathing requirements. To determine if the worker nodes are configured for native NVMe multipathing run the following command on each worker node:
cat /sys/module/nvme_core/parameters/multipath
If the result of the command displays Y, then NVMe native multipathing is enabled in the kernel. If the output is N, then native NVMe multipathing is disabled; consult the Dell Host Connectivity Guide for Linux to enable it.
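To check the host NQN uniqueness required above, you can compare the value that nvme-cli stores on each node:
cat /etc/nvme/hostnqn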
Configure the IO policy
- The default NVMeTCP native multipathing policy is “numa”. The preferred IO policy for NVMe devices used with PowerMax is round-robin. You can use udev rules to enable the round-robin policy on all worker nodes. To view the IO policy, you can use the following command:
nvme list-subsys
To change the IO policy to round-robin you can add a udev rule on each worker node. Place a config file in /etc/udev/rules.d with the name 71-nvme-io-policy.rules with the following contents:
ACTION=="add|change", SUBSYSTEM=="nvme-subsystem", ATTR{iopolicy}="round-robin"
In order to change the rules on a running kernel you can run the following commands:
/sbin/udevadm control --reload-rules
/sbin/udevadm trigger --type=devices --action=change
On OCP clusters you can add a MachineConfig to enable this rule on all worker nodes:
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 99-workers-multipath-round-robin
  labels:
    machineconfiguration.openshift.io/role: worker
spec:
  config:
    ignition:
      version: 3.2.0
    storage:
      files:
        - contents:
            source: data:text/plain;charset=utf-8;base64,QUNUSU9OPT0iYWRkfGNoYW5nZSIsIFNVQlNZU1RFTT09Im52bWUtc3Vic3lzdGVtIiwgQVRUUntpb3BvbGljeX09InJvdW5kLXJvYmluIg==
            verification: {}
          filesystem: root
          mode: 420
          path: /etc/udev/rules.d/71-nvme-io-policy.rules
Array requirements
Once the NVMe endpoint is created on the array, follow these steps to update the endpoint name so that it adheres to the CSI driver requirements.
- Run nvme discover --transport=tcp --traddr=<InterfaceAdd> --trsvcid=4420, where <InterfaceAdd> is the placeholder for the actual IP address of the NVMe endpoint.
- Fetch the subnqn, for example nqn.1988-11.com.dell:PowerMax_2500:00:000120001100; this is used as the subnqn placeholder while updating the NVMe endpoint name.
- Update the NVMe endpoint name as <subnqn>:<dir><port>. For example: nqn.1988-11.com.dell:PowerMax_2500:00:000120001100:OR1C000
NFS Requirements
CSI Driver for Dell PowerMax supports NFS communication. Ensure that the following requirements are met before you install CSI Driver:
- Configure the NFS network. Please refer here for more details.
- A PowerMax Embedded Management guest is required to access Unisphere for PowerMax.
- Create the NAS server. Please refer here for more details.
Linux Multipathing Requirements
Dell PowerMax supports Linux multipathing (DM-MPIO) and NVMe native multipathing. Configure Linux multipathing before installing the CSI Driver.
For NVMe connectivity, native NVMe multipathing is used. The following sections apply only to iSCSI and Fibre Channel connectivity.
Configure Linux multipathing as follows:
- Ensure that all nodes have the Device Mapper Multipathing package installed. You can install it by running dnf install device-mapper-multipath or apt install multipath-tools, depending on your Linux distribution.
- Ensure that the multipath command mpathconf is available on all Kubernetes nodes.
- Enable multipathing using the mpathconf --enable --with_multipathd y command. A default configuration file, /etc/multipath.conf, is created.
- Enable user_friendly_names and find_multipaths in the multipath.conf file.
- As a best practice, use these options to help the operating system and the multipathing software detect path changes efficiently:
path_grouping_policy multibus
path_checker tur
features "1 queue_if_no_path"
path_selector "round-robin 0"
no_path_retry 10
The following is a sample multipath.conf file. You may have to adjust these values based on your environment.
defaults {
  user_friendly_names yes
  find_multipaths yes
  path_grouping_policy multibus
  path_checker tur
  features "1 queue_if_no_path"
  path_selector "round-robin 0"
  no_path_retry 10
}
blacklist {
}
On some distributions, the multipathd service watches for changes to the configuration and dynamically reconfigures itself. If you need to manually trigger a reload, you can run the following command:
sudo systemctl reload multipathd
To enable multipathd on Red Hat CoreOS nodes, you need to prepare a working configuration encoded in base64. For example, you can run the following command to encode the above multipath.conf file:
echo 'defaults {
  user_friendly_names yes
  find_multipaths yes
  path_grouping_policy multibus
  path_checker tur
  features "1 queue_if_no_path"
  path_selector "round-robin 0"
  no_path_retry 10
}
blacklist {
}' | base64 -w0
The output of the above command follows:
ZGVmYXVsdHMgewogIHVzZXJfZnJpZW5kbHlfbmFtZXMgeWVzCiAgZmluZF9tdWx0aXBhdGhzIHllcwogIHBhdGhfZ3JvdXBpbmdfcG9saWN5IG11bHRpYnVzCiAgcGF0aF9jaGVja2VyIHR1cgogIGZlYXR1cmVzICIxIHF1ZXVlX2lmX25vX3BhdGgiCiAgcGF0aF9zZWxlY3RvciAicm91bmQtcm9iaW4gMCIKICBub19wYXRoX3JldHJ5IDEwCn0KICBibGFja2xpc3Qgewp9Cg==
Use the base64-encoded string output in the following MachineConfig YAML file (under the source section):
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: workers-multipath-conf-default
  labels:
    machineconfiguration.openshift.io/role: worker
spec:
  config:
    ignition:
      version: 3.2.0
    storage:
      files:
        - contents:
            source: data:text/plain;charset=utf-8;base64,ZGVmYXVsdHMgewogIHVzZXJfZnJpZW5kbHlfbmFtZXMgeWVzCiAgZmluZF9tdWx0aXBhdGhzIHllcwogIHBhdGhfZ3JvdXBpbmdfcG9saWN5IG11bHRpYnVzCiAgcGF0aF9jaGVja2VyIHR1cgogIGZlYXR1cmVzICIxIHF1ZXVlX2lmX25vX3BhdGgiCiAgcGF0aF9zZWxlY3RvciAicm91bmQtcm9iaW4gMCIKICBub19wYXRoX3JldHJ5IDEwCn0KICBibGFja2xpc3Qgewp9Cg==
            verification: {}
          filesystem: root
          mode: 400
          path: /etc/multipath.conf
After deploying this MachineConfig object, CoreOS will start the multipath service automatically.
You can check the status of the multipath service by running the following command on each worker node:
sudo multipath -ll
Refer to the Dell Host Connectivity Guide for more information.
PowerPath for Linux requirements
The CSI Driver for Dell PowerMax supports PowerPath for Linux. Configure Linux PowerPath before installing the CSI Driver.
Follow this procedure to set up PowerPath for Linux:
- All the nodes must have the PowerPath package installed. Download the PowerPath archive for the environment from Dell Online Support. Untar the PowerPath archive, copy the RPM package into a temporary folder, and install PowerPath using rpm -ivh DellEMCPower.LINUX-<version>-<build>.<platform>.x86_64.rpm
- Start the PowerPath service using systemctl start PowerPath
Note: Do not install Dell PowerPath if native multipathing software is already installed; the two cannot coexist.
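To confirm that PowerPath is managing the array paths after installation, you can run the standard PowerPath path-display command:
sudo powermt display dev=all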
Volume Snapshot Requirements (Optional)
For detailed snapshot setup procedure, click here.
Replication Requirements (Optional)
Applicable only if you decided to enable the Replication feature in my-powermax-settings.yaml
replication:
enabled: true
Replication CRDs
The CRDs for replication can be obtained and installed from the csm-replication project on GitHub. Use csm-replication/deploy/replicationcrds.all.yaml, located in the csm-replication git repo, for the installation.
CRDs should be configured during replication prepare stage with repctl as described in install-repctl
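For example, a manual installation of the CRDs could look like the following sketch, assuming you clone the default branch of the csm-replication repository:
git clone https://github.com/dell/csm-replication.git
kubectl create -f csm-replication/deploy/replicationcrds.all.yaml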
Installation
Steps
- Run git clone -b v2.12.0 https://github.com/dell/csi-powermax.git to clone the git repository. This includes the Helm charts and the dell-csi-helm-installer scripts.
- Ensure that you have created a namespace where you want to install the driver. You can run kubectl create namespace powermax to create a new one.
- Edit the samples/secret/secret.yaml file to point to the correct namespace, and replace the values for the username and password parameters. These values can be obtained using base64 encoding as described in the following example, where myusername and mypassword are credentials for a user with PowerMax privileges:
echo -n "myusername" | base64
echo -n "mypassword" | base64
- Create the secret by running kubectl create -f samples/secret/secret.yaml
- Download the default values.yaml file
cd dell-csi-helm-installer && wget -O my-powermax-settings.yaml https://github.com/dell/helm-charts/raw/csi-powermax-2.12.0/charts/csi-powermax/values.yaml
- Ensure that Unisphere supports the 10.0 REST endpoint by navigating to Unisphere -> Help (?) -> About in the Unisphere for PowerMax GUI.
- Edit the newly created file and provide values for the following parameters
vi my-powermax-settings.yaml
Parameter | Description | Required | Default |
---|---|---|---|
global | This section refers to configuration options for both CSI PowerMax Driver and Reverse Proxy | - | - |
defaultCredentialsSecret | This secret name refers to: 1. The proxy credentials if the driver is installed with proxy in StandAlone mode. 2. The default Unisphere credentials if credentialsSecret is not specified for a management server. | Yes | powermax-creds |
storageArrays | This section refers to the list of arrays managed by the driver and Reverse Proxy in StandAlone mode. | - | - |
storageArrayId | This refers to PowerMax Symmetrix ID. | Yes | 000000000001 |
endpoint | This refers to the URL of the Unisphere server managing storageArrayId. If authorization is enabled, endpoint should be the HTTPS localhost endpoint that the authorization sidecar will listen on | Yes if Reverse Proxy mode is StandAlone | https://primary-1.unisphe.re:8443 |
backupEndpoint | This refers to the URL of the backup Unisphere server managing storageArrayId, if Reverse Proxy is installed in StandAlone mode. If authorization is enabled, backupEndpoint should be the HTTPS localhost endpoint that the authorization sidecar will listen on | Yes | https://backup-1.unisphe.re:8443 |
managementServers | This section refers to the list of configurations for Unisphere servers managing powermax arrays. | - | - |
endpoint | This refers to the URL of the Unisphere server. If authorization is enabled, endpoint should be the HTTPS localhost endpoint that the authorization sidecar will listen on | Yes | https://primary-1.unisphe.re:8443 |
credentialsSecret | This refers to the user credentials for endpoint | Yes | primary-1-secret |
skipCertificateValidation | This parameter should be set to false if you want to do client-side TLS verification of Unisphere for PowerMax SSL certificates. | No | “True” |
certSecret | The name of the secret in the same namespace containing the CA certificates of the Unisphere server | Yes, if skipCertificateValidation is set to false | Empty |
limits | This refers to various limits for Reverse Proxy | No | - |
maxActiveRead | This refers to the maximum number of concurrent READ requests handled by the reverse proxy. | No | 5 |
maxActiveWrite | This refers to the maximum number of concurrent WRITE requests handled by the reverse proxy. | No | 4 |
maxOutStandingRead | This refers to the maximum number of queued READ requests when the reverse proxy receives more than maxActiveRead requests. | No | 50 |
maxOutStandingWrite | This refers to the maximum number of queued WRITE requests when the reverse proxy receives more than maxActiveWrite requests. | No | 50 |
kubeletConfigDir | Specify kubelet config dir path | Yes | /var/lib/kubelet |
imagePullPolicy | The default pull policy is IfNotPresent which causes the Kubelet to skip pulling an image if it already exists. | Yes | IfNotPresent |
clusterPrefix | Prefix that is used during the creation of various masking-related entities (Storage Groups, Masking Views, Hosts, and Volume Identifiers) on the array. The value that you specify here must be unique. Ensure that no other CSI PowerMax driver is managing the same arrays that are configured with the same prefix. The maximum length for this prefix is three characters. | Yes | “ABC” |
logLevel | CSI driver log level. Allowed values: “error”, “warn”/“warning”, “info”, “debug”. | Yes | “debug” |
logFormat | CSI driver log format. Allowed values: “TEXT” or “JSON”. | Yes | “TEXT” |
kubeletConfigDir | kubelet config directory path. Ensure that the config.yaml file is present at this path. | Yes | /var/lib/kubelet |
defaultFsType | Used to set the default FS type for external provisioner | Yes | ext4 |
portGroups | List of comma-separated port group names. Any port group that is specified here must be present on all the arrays that the driver manages. | For iSCSI Only | “PortGroup1, PortGroup2, PortGroup3” |
skipCertificateValidation | Skip client-side TLS verification of Unisphere certificates | No | “True” |
transportProtocol | Set the preferred transport protocol for the Kubernetes cluster which helps the driver choose between FC, iSCSI and NVMeTCP, when a node has multiple protocol connectivity to a PowerMax array. | No | Empty |
nodeNameTemplate | Used to specify a template that will be used by the driver to create Host/IG names on the PowerMax array. To use the default naming convention, leave this value empty. | No | Empty |
modifyHostName | Change any existing host names. When nodeNameTemplate is set, it changes the name to the specified format; otherwise, it uses the driver's default host name format. | No | false |
powerMaxDebug | Enables low level and http traffic logging between the CSI driver and Unisphere. Don’t enable this unless asked to do so by the support team. | No | false |
enableCHAP | Determine if the driver is going to configure SCSI node databases on the nodes with the CHAP credentials. If enabled, the CHAP secret must be provided in the credentials secret and set to the key “chapsecret” | No | false |
fsGroupPolicy | Defines which FS Group policy mode to be used. Supported modes: None, File and ReadWriteOnceWithFSType | No | “ReadWriteOnceWithFSType” |
version | Current version of the driver. Don’t modify this value as this value will be used by the install script. | Yes | v2.12.0 |
images | List all the images used by the CSI driver and CSM. If you use a private repository, change the registries accordingly. | Yes | "" |
maxPowerMaxVolumesPerNode | Specifies the maximum number of volumes that can be created on a node. | Yes | 0 |
controller | Allows configuration of the controller-specific parameters. | - | - |
controllerCount | Defines the number of csi-powermax controller pods to deploy to the Kubernetes release | Yes | 2 |
volumeNamePrefix | Defines a string prefix for the names of PersistentVolumes created | Yes | “k8s” |
snapshot.enabled | Enable/Disable volume snapshot feature | Yes | true |
snapshot.snapNamePrefix | Defines a string prefix for the names of the Snapshots created | Yes | “snapshot” |
resizer.enabled | Enable/Disable volume expansion feature | Yes | true |
healthMonitor.enabled | Enables/disables the volume health monitor | No | false |
healthMonitor.interval | Interval of monitoring volume health condition | No | 60s |
nodeSelector | Define node selection constraints for pods of controller deployment | No | |
tolerations | Define tolerations for the controller deployment, if required | No | |
node | Allows configuration of the node-specific parameters. | - | - |
tolerations | Add tolerations as per requirement | No | - |
nodeSelector | Add node selectors as per requirement | No | - |
healthMonitor.enabled | Enables/disables the volume health monitor | No | false |
topologyControl.enabled | Enables/disables topology control to filter topology keys | No | false |
csireverseproxy | This section refers to the configuration options for CSI PowerMax Reverse Proxy | - | - |
tlsSecret | This refers to the TLS secret of the Reverse Proxy Server. | Yes | csirevproxy-tls-secret |
deployAsSidecar | If set to true, the Reverse Proxy is installed as a sidecar to the driver’s controller pod otherwise it is installed as a separate deployment. | Yes | “True” |
port | Specify the port number that is used by the NodePort service created by the CSI PowerMax Reverse Proxy installation | Yes | 2222 |
certManager | Auto-create TLS certificate for csi-reverseproxy | - | - |
selfSignedCert | Set selfSignedCert to use a self-signed certificate | No | true |
certificateFile | certificateFile has tls.crt content in encoded format | No | tls.crt.encoded64 |
privateKeyFile | privateKeyFile has tls.key content in encoded format | No | tls.key.encoded64 |
authorization | Authorization is an optional feature to apply credential shielding of the backend PowerMax. | - | - |
enabled | A boolean that enables/disables authorization feature. | No | false |
proxyHost | Hostname of the csm-authorization server. | No | Empty |
skipCertificateValidation | A boolean that enables/disables certificate validation of the csm-authorization proxy server. | No | true |
migration | Migration is an optional feature to enable migration between storage classes | - | - |
enabled | A boolean that enables/disables migration feature. | No | false |
image | Image for dell-csi-migrator sidecar. | No | " " |
nodeRescanSidecarImage | Image for node rescan sidecar which rescans nodes for identifying new paths. | No | " " |
migrationPrefix | Enables the migration sidecar to read required information from the storage class fields | No | migration.storage.dell.com |
replication | Replication is an optional feature to enable replication & disaster recovery capabilities of PowerMax to Kubernetes clusters. | - | - |
enabled | A boolean that enables/disables replication feature. | No | false |
replicationContextPrefix | Enables sidecars to read required information from the volume context | No | powermax |
replicationPrefix | Determines if replication is enabled | No | replication.storage.dell.com |
storageCapacity | An optional feature that enables storage capacity tracking and helps the scheduler check whether the requested capacity is available on the PowerMax array before allocating it to the nodes. | - | - |
enabled | A boolean that enables/disables storagecapacity feature. | - | true |
pollInterval | Configures how often the external-provisioner polls the driver to detect changed capacity | - | 5m |
vSphere | This section refers to the configuration options for VMware virtualized environment support via RDM | - | - |
enabled | A boolean that enables/disables VMware virtualized environment support. | No | false |
fcPortGroup | Existing portGroup that driver will use for vSphere. | Yes | "" |
fcHostGroup | Existing host(initiator group)/hostgroup(cascaded initiator group) that driver will use for vSphere. | Yes | "" |
vCenterHost | URL/endpoint of the vCenter where all the ESX are present | Yes | "" |
vCenterCredSecret | Secret name for the vCenter credentials. | Yes | "" |
- Install the driver using the csi-install.sh bash script by running cd ../dell-csi-helm-installer && ./csi-install.sh --namespace powermax --values ./my-powermax-settings.yaml --helm-charts-version <version>
- Alternatively, you can install the driver using the standalone Helm chart with the command helm install --values my-powermax-settings.yaml --namespace powermax powermax ./csi-powermax
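After either installation method completes, you can confirm that the driver pods are up:
kubectl get pods --namespace powermax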
Note:
- The --helm-charts-version parameter is optional; if you do not specify the flag, the csi-install.sh script will by default clone the version of the Helm chart that is specified in the driver’s csi-install.sh file. If you wish to install the driver using a different version of the Helm chart, you need to include this flag. Also, remember to delete the helm-charts repository present in the csi-powermax directory if it was cloned before.
- For detailed instructions on how to run the install scripts, see the readme document in the dell-csi-helm-installer folder.
- A set of samples is provided here to help you configure the driver with the reverse proxy.
- This script also runs the verify.sh script in the same directory. You will be prompted to enter the credentials for each of the Kubernetes nodes. The verify.sh script needs the credentials to check whether the iSCSI initiators have been configured on all nodes. You can skip the verification step by specifying the --skip-verify-node option.
- In order to enable authorization, an authorization proxy server must already be installed.
- The PowerMax array username must have the StorageAdmin role to be able to perform CRUD operations.
- If you are using a complex Kubernetes version like “v1.24.3-mirantis-1”, use this kubeVersion check in the Helm Chart file: kubeVersion: “>= 1.24.0-0 < 1.29.0-0”.
- Provide all boolean values with double quotes. This applies only to values.yaml. Example: “true”/“false”.
- The controllerCount parameter value must be less than or equal to the number of nodes in the Kubernetes cluster; otherwise, the install script fails.
- Endpoints must not have any trailing special characters after the port number.
Storage Classes
A wide set of annotated storage class manifests is provided in the samples/storageclass folder. Please use these samples to create new storage classes to provision storage.
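For illustration, a minimal storage class could look like the following sketch; the parameter values shown (Symmetrix ID, SRP, service level) are placeholders and should be taken from the annotated samples for your arrays:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: powermax-bronze
provisioner: csi-powermax.dellemc.com
reclaimPolicy: Delete
allowVolumeExpansion: true
parameters:
  SYMID: "000000000001"    # PowerMax Symmetrix ID (placeholder)
  SRP: "SRP_1"             # Storage Resource Pool on the array (placeholder)
  ServiceLevel: "Bronze"   # Service level for provisioned volumes (placeholder)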
Volume Snapshot Class
Starting with CSI PowerMax v1.7.0, dell-csi-helm-installer will not create any Volume Snapshot Class during the driver installation. There is a sample Volume Snapshot Class manifest in the samples/volumesnapshotclass folder. Please use this sample to create a new Volume Snapshot Class for creating volume snapshots.
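A minimal sketch of such a manifest, with an assumed class name:
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: powermax-snapclass   # assumed name; adjust as needed
driver: csi-powermax.dellemc.com
deletionPolicy: Delete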
Sample values file
The following sections contain useful snippets from the values.yaml file, which provide more information on how to configure the CSI PowerMax driver along with the CSI PowerMax ReverseProxy in StandAlone mode.
CSI PowerMax driver with Proxy in StandAlone mode
This is the most advanced configuration which provides you with the capability to connect to Multiple Unisphere servers. You can specify primary and backup Unisphere servers for each storage array. If you have different credentials for your Unisphere servers, you can also specify different credential secrets.
global:
  defaultCredentialsSecret: powermax-creds
  storageArrays:
    - storageArrayId: "000000000001"
      endpoint: https://primary-1.unisphe.re:8443
      backupEndpoint: https://backup-1.unisphe.re:8443
    - storageArrayId: "000000000002"
      endpoint: https://primary-2.unisphe.re:8443
      backupEndpoint: https://backup-2.unisphe.re:8443
  managementServers:
    - endpoint: https://primary-1.unisphe.re:8443
      credentialsSecret: primary-1-secret
      skipCertificateValidation: false
      certSecret: primary-cert
      limits:
        maxActiveRead: 5
        maxActiveWrite: 4
        maxOutStandingRead: 50
        maxOutStandingWrite: 50
    - endpoint: https://backup-1.unisphe.re:8443
      credentialsSecret: backup-1-secret
      skipCertificateValidation: true
    - endpoint: https://primary-2.unisphe.re:8443
      credentialsSecret: primary-2-secret
      skipCertificateValidation: true
    - endpoint: https://backup-2.unisphe.re:8443
      credentialsSecret: backup-2-secret
      skipCertificateValidation: true

# "csireverseproxy" refers to the subchart csireverseproxy
csireverseproxy:
  tlsSecret: csirevproxy-tls-secret
  deployAsSidecar: true
  port: 2222
  mode: StandAlone
Note: If the credential secret is missing from any management server details, the installer will try to use the defaultCredentialsSecret.
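For example, the per-server credential secrets referenced above could be created like this sketch, assuming the same username and password keys used by the driver's sample secret:
kubectl create secret generic primary-1-secret --namespace powermax --from-literal=username=myusername --from-literal=password=mypassword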