PowerMax
CSI Driver for Dell PowerMax can be deployed by using the provided Helm v3 charts and installation scripts on both Kubernetes and OpenShift platforms. For more detailed information on the installation scripts, see the script documentation.
Prerequisites
The following requirements must be met before installing CSI Driver for Dell PowerMax:
- Install Kubernetes or OpenShift (see supported versions)
- Install Helm 3
- Fibre Channel requirements
- iSCSI requirements
- NFS requirements
- NVMeTCP requirements
- Auto RDM for vSphere over FC requirements
- Certificate validation for Unisphere REST API calls
- Mount propagation is enabled on the container runtime that is being used
- Linux multipathing requirements
- If using the Snapshot feature, satisfy all Volume Snapshot requirements
- If enabling CSM for Authorization, please refer to the Authorization deployment steps first
- If enabling CSM Replication, both source and target storage systems must be locally managed by Unisphere.
- Example: When using two Unisphere instances, the first Unisphere instance should be configured with the source storage system as locally managed and target storage system as remotely managed. The second Unisphere configuration should mirror the first — locally managing the target storage system and remotely managing the source storage system.
- If using PowerPath, satisfy the PowerPath for Linux requirements
Prerequisite for CSI Reverse Proxy
CSI PowerMax Reverse Proxy is an HTTPS server and has to be configured with an SSL certificate and a private key.
The certificate and key are provided to the proxy via a Kubernetes TLS secret (in the same namespace). The SSL certificate must be an X.509 certificate encoded in PEM format. The certificates can be obtained via a Certificate Authority or can be self-signed and generated by a tool such as openssl.
Starting with v2.7.0, these secrets are created automatically from the tls.key and tls.crt contents provided in the my-powermax-settings.yaml file. This requires cert-manager, which manages the certificates and secrets; install it with the following command:
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.11.0/cert-manager.yaml
Here is an example showing how to generate a private key and use that to sign an SSL certificate using the openssl tool:
openssl genrsa -out tls.key 2048
openssl req -new -x509 -sha256 -key tls.key -out tls.crt -days 3650
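If you are managing this secret manually (for example, on driver versions earlier than v2.7.0 or without cert-manager), one way to create the Kubernetes TLS secret from these files is shown below; the secret name and namespace are assumptions and should match your tlsSecret value and driver namespace:
kubectl create secret tls csirevproxy-tls-secret --cert=tls.crt --key=tls.key -n powermax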
Install Helm 3
Install Helm 3 on the master node before you install CSI Driver for Dell PowerMax.
Steps
Run the following command to install Helm 3:
curl https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | bash
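As a quick sanity check that the installation succeeded, you can print the client version:
helm version --short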
Fibre Channel Requirements
CSI Driver for Dell PowerMax supports Fibre Channel communication. Ensure that the following requirements are met before you install CSI Driver:
- Zoning of the Host Bus Adapters (HBAs) to the Fibre Channel port director must be completed.
- Ensure that the HBA WWNs (initiators) appear on the list of initiators that are logged into the array.
- If the number of volumes that will be published to nodes is high, then configure the maximum number of LUNs for your HBAs on each node. See the appropriate HBA document to configure the maximum number of LUNs.
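If you need to confirm which HBA WWPNs (initiators) should appear on the array, one way to list them on a Linux node (assuming the standard sysfs layout) is:
# Print the WWPN of each local Fibre Channel HBA port
cat /sys/class/fc_host/host*/port_name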
iSCSI Requirements
The CSI Driver for Dell PowerMax supports iSCSI connectivity. These requirements are applicable for the nodes that use iSCSI initiator to connect to the PowerMax arrays.
Set up the iSCSI initiators as follows:
- All Kubernetes nodes must have the iscsi-initiator-utils package installed.
- Ensure that the iSCSI initiators are available on all the nodes where the driver node plugin will be installed.
- Kubernetes nodes should have access (network connectivity) to an iSCSI director on the Dell PowerMax array that has IP interfaces. Manually create IP routes for each node that connects to the Dell PowerMax if required.
- Ensure that the iSCSI initiators on the nodes are not a part of any existing Host (Initiator Group) on the Dell PowerMax array.
- The CSI Driver needs the port group names containing the required iSCSI director ports. These port groups must be set up on each Dell PowerMax array. All the port group names supplied to the driver must exist on each Dell PowerMax with the same name.
For more information about configuring iSCSI, see Dell Host Connectivity guide.
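If you need to verify the initiator setup on a node, the following commands may help (the package manager shown is an assumption; use the equivalent for your distribution):
# Install the initiator utilities (RHEL family shown)
sudo dnf install -y iscsi-initiator-utils
# Confirm the node's initiator IQN
cat /etc/iscsi/initiatorname.iscsi
# Ensure the iSCSI daemon is enabled and running
sudo systemctl enable --now iscsid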
NFS requirements
CSI Driver for Dell PowerMax supports NFS communication. Ensure that the following requirements are met before you install CSI Driver:
- Configure the NFS network. Please refer here for more details.
- Use the PowerMax Embedded Management guest to access Unisphere for PowerMax.
- Create the NAS server. Please refer here for more details.
NVMeTCP requirements
If you want to use the NVMe over TCP protocol, set up the NVMe initiators as follows:
- Setup on Array
Once the NVMe endpoint is created on the array, follow these steps to update the endpoint name so that it adheres to the CSI driver convention:
  - Perform a discovery:
nvme discover --transport=tcp --traddr=<InterfaceAdd> --trsvcid=4420
where <InterfaceAdd> is the IP address of the NVMe endpoint.
  - Fetch the subnqn, for example nqn.1988-11.com.dell:PowerMax_2500:00:000120001100; this is used as the subnqn portion when updating the NVMe endpoint name.
  - Update the NVMe endpoint name as <subnqn>:<dir><port>, for example nqn.1988-11.com.dell:PowerMax_2500:00:000120001100:OR1C000.
- Setup on Host
- The driver requires the NVMe management command-line interface (nvme-cli) to configure, edit, view, or start the NVMe client and target. The nvme-cli utility provides a command-line and interactive shell option. Install the NVMe CLI tool on the host using the following command:
sudo apt install nvme-cli
Requirements for NVMeTCP
- The nvme, nvme_core, nvme_fabrics, and nvme_tcp modules are required for NVMe over Fabrics using TCP. Load the NVMe and NVMe-oF modules using the following commands:
modprobe nvme
modprobe nvme_tcp
- The NVMe modules may not be available after a node reboot. Loading the modules at startup is recommended.
- Generate and update the /etc/nvme/hostnqn with hostNQN details.
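A minimal sketch of persisting the module loading across reboots and generating the host NQN follows; the modules-load.d file name is an assumption:
# Load the NVMe/TCP modules at boot (systemd modules-load.d)
printf 'nvme\nnvme_tcp\n' | sudo tee /etc/modules-load.d/nvme-tcp.conf
# Generate a host NQN and persist it for the NVMe initiator
sudo nvme gen-hostnqn | sudo tee /etc/nvme/hostnqn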
Auto RDM for vSphere over FC requirements
The CSI Driver for Dell PowerMax supports auto RDM for vSphere over FC. These requirements apply to clusters deployed on ESX/ESXi in a virtualized environment.
Set up the environment as follows:
- VMware vCenter management software is required to manage all ESX/ESXi hosts where the cluster is hosted.
- Add all FC array ports zoned to the ESX/ESXi hosts to a port group where the cluster is hosted.
- Add initiators from all ESX/ESXi hosts to a host (initiator group) where the cluster is hosted.
- Edit the samples/secret/vcenter-secret.yaml file to point to the correct namespace, and replace the values for the username and password parameters. These values can be obtained using base64 encoding as described in the following example:
echo -n "myusername" | base64
echo -n "mypassword" | base64
where myusername and mypassword are credentials for a user with vCenter privileges.
- Create the secret by running the following command:
kubectl create -f samples/secret/vcenter-secret.yaml
Certificate validation for Unisphere REST API calls
As part of the CSI driver installation, the CSI driver requires a secret with the name powermax-certs present in the namespace powermax. This secret contains the X.509 certificates of the CA which signed the Unisphere SSL certificate in PEM format. This secret is mounted as a volume in the driver container. In earlier releases, if the install script did not find the secret, it created an empty secret with the same name. From the 1.2.0 release, the secret volume has been made optional. The install script no longer attempts to create an empty secret.
The CSI driver exposes an install parameter skipCertificateValidation
which determines if the driver performs client-side verification of the Unisphere certificates. The skipCertificateValidation
parameter is set to true by default, and the driver does not verify the Unisphere certificates.
If the skipCertificateValidation
parameter is set to false and a previous installation attempt created an empty secret, then this secret must be deleted and re-created using the CA certs.
If the Unisphere certificate is self-signed or if you are using an embedded Unisphere, then perform the following steps:
- To fetch the certificate, run:
openssl s_client -showcerts -connect [Unisphere IP]:8443 </dev/null 2> /dev/null | openssl x509 -outform PEM > ca_cert.pem
NOTE: The IP address varies for each user.
- To create the secret, run:
kubectl create secret generic powermax-certs --from-file=ca_cert.pem -n powermax
Ports in the port group
There are no restrictions to how many ports can be present in the iSCSI port groups provided to the driver.
The same applies to Fibre Channel where there are no restrictions on the number of FA directors a host HBA can be zoned to. See the best practices for host connectivity to Dell PowerMax to ensure that you have multiple paths to your data volumes.
Linux multipathing requirements
CSI Driver for Dell PowerMax supports Linux multipathing. Configure Linux multipathing before installing the CSI Driver.
Set up Linux multipathing as follows:
- All the nodes must have the Device Mapper Multipathing package installed.
NOTE: When this package is installed, it creates a multipath configuration file located at /etc/multipath.conf. Please ensure that this file always exists.
- Enable multipathing using mpathconf --enable --with_multipathd y
- Enable user_friendly_names and find_multipaths in the multipath.conf file.
As a best practice, use the following options to help the operating system and the multipathing software detect path changes efficiently:
path_grouping_policy multibus
path_checker tur
features "1 queue_if_no_path"
path_selector "round-robin 0"
no_path_retry 10
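Putting these options together, a minimal /etc/multipath.conf might look like the sketch below; treat it as a starting point and tune it for your environment:
defaults {
  user_friendly_names yes
  find_multipaths yes
  path_grouping_policy multibus
  path_checker tur
  features "1 queue_if_no_path"
  path_selector "round-robin 0"
  no_path_retry 10
}
blacklist {
}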
multipathd MachineConfig
If you are installing a CSI Driver which requires the installation of the Linux native multipath software (multipathd), follow the instructions below.
To enable multipathd on Red Hat CoreOS nodes, you need to prepare a working configuration encoded in base64:
echo 'defaults {
user_friendly_names yes
find_multipaths yes
}

blacklist {
}' | base64 -w0
Use the base64-encoded string output in the following MachineConfig YAML file (under the source section):
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
name: workers-multipath-conf-default
labels:
machineconfiguration.openshift.io/role: worker
spec:
config:
ignition:
version: 3.2.0
storage:
files:
- contents:
source: data:text/plain;charset=utf-8;base64,ZGVmYXVsdHMgewp1c2VyX2ZyaWVuZGx5X25hbWVzIHllcwpmaW5kX211bHRpcGF0aHMgeWVzCn0KCmJsYWNrbGlzdCB7Cn0K
verification: {}
filesystem: root
mode: 400
path: /etc/multipath.conf
After deploying this MachineConfig object, CoreOS will start the multipath service automatically.
Alternatively, you can check the status of the multipath service by entering the following command on each worker node:
sudo multipath -ll
If the above command is not successful, ensure that the /etc/multipath.conf file is present and configured properly. Once the file has been configured correctly, enable the multipath service by running the following command:
sudo /sbin/mpathconf --enable --with_multipathd y
Finally, restart the service by running:
sudo systemctl restart multipathd
For additional information refer to official documentation of the multipath configuration.
PowerPath for Linux requirements
CSI Driver for Dell PowerMax supports PowerPath for Linux. Configure Linux PowerPath before installing the CSI Driver.
Set up the PowerPath for Linux as follows:
- All the nodes must have the PowerPath package installed. Download the PowerPath archive for the environment from Dell Online Support.
- Untar the PowerPath archive, copy the RPM package into a temporary folder, and install PowerPath using rpm -ivh DellEMCPower.LINUX-<version>-<build>.<platform>.x86_64.rpm
- Start the PowerPath service using systemctl start PowerPath
Note: Do not install Dell PowerPath if native multipath software is already installed, as the two cannot co-exist.
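To confirm that PowerPath is managing the array paths after the service starts, you can inspect the devices with the PowerPath CLI:
# List all PowerPath-managed devices and their paths
sudo powermt display dev=all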
(Optional) Volume Snapshot Requirements
For detailed snapshot setup procedure, click here.
(Optional) Replication feature Requirements
Applicable only if you decide to enable the Replication feature in my-powermax-settings.yaml:
replication:
enabled: true
Replication CRDs
The CRDs for replication can be obtained and installed from the csm-replication project on GitHub. Use csm-replication/deploy/replicationcrds.all.yaml located in the csm-replication git repo for the installation.
CRDs should be configured during the replication prepare stage with repctl, as described in install-repctl.
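A minimal manual installation of the CRDs, assuming you clone the csm-replication repository first, might look like:
# Clone the CSM Replication project and apply the replication CRDs
git clone https://github.com/dell/csm-replication.git
kubectl create -f csm-replication/deploy/replicationcrds.all.yaml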
Install the Driver
Steps
- Run git clone -b v2.11.0 https://github.com/dell/csi-powermax.git to clone the git repository. This includes the Helm charts and dell-csi-helm-installer scripts.
- Ensure that you have created a namespace where you want to install the driver. You can run kubectl create namespace powermax to create a new one.
- Edit the samples/secret/secret.yaml file to point to the correct namespace, and replace the values for the username and password parameters. These values can be obtained using base64 encoding as described in the following example:
echo -n "myusername" | base64
echo -n "mypassword" | base64
where myusername and mypassword are credentials for a user with PowerMax privileges.
- Create the secret by running kubectl create -f samples/secret/secret.yaml
- Download the default values.yaml file:
cd dell-csi-helm-installer && wget -O my-powermax-settings.yaml https://github.com/dell/helm-charts/raw/csi-powermax-2.11.0/charts/csi-powermax/values.yaml
- Ensure that Unisphere has 10.0 REST endpoint support by clicking Unisphere -> Help (?) -> About in the Unisphere for PowerMax GUI.
- Edit the newly created file and provide values for the following parameters:
vi my-powermax-settings.yaml
Parameter | Description | Required | Default |
---|---|---|---|
global | This section refers to configuration options for both CSI PowerMax Driver and Reverse Proxy | - | - |
defaultCredentialsSecret | This secret name refers to: 1. The proxy credentials if the driver is installed with proxy in StandAlone mode. 2. The default Unisphere credentials if credentialsSecret is not specified for a management server. | Yes | powermax-creds |
storageArrays | This section refers to the list of arrays managed by the driver and Reverse Proxy in StandAlone mode. | - | - |
storageArrayId | This refers to PowerMax Symmetrix ID. | Yes | 000000000001 |
endpoint | This refers to the URL of the Unisphere server managing storageArrayId. If authorization is enabled, endpoint should be the HTTPS localhost endpoint that the authorization sidecar will listen on | Yes if Reverse Proxy mode is StandAlone | https://primary-1.unisphe.re:8443 |
backupEndpoint | This refers to the URL of the backup Unisphere server managing storageArrayId, if Reverse Proxy is installed in StandAlone mode. If authorization is enabled, backupEndpoint should be the HTTPS localhost endpoint that the authorization sidecar will listen on | Yes | https://backup-1.unisphe.re:8443 |
managementServers | This section refers to the list of configurations for Unisphere servers managing powermax arrays. | - | - |
endpoint | This refers to the URL of the Unisphere server. If authorization is enabled, endpoint should be the HTTPS localhost endpoint that the authorization sidecar will listen on | Yes | https://primary-1.unisphe.re:8443 |
credentialsSecret | This refers to the user credentials for endpoint | Yes | primary-1-secret |
skipCertificateValidation | This parameter should be set to false if you want to do client-side TLS verification of Unisphere for PowerMax SSL certificates. | No | “True” |
certSecret | The name of the secret in the same namespace containing the CA certificates of the Unisphere server | Yes, if skipCertificateValidation is set to false | Empty |
limits | This refers to various limits for Reverse Proxy | No | - |
maxActiveRead | This refers to the maximum concurrent READ request handled by the reverse proxy. | No | 5 |
maxActiveWrite | This refers to the maximum concurrent WRITE request handled by the reverse proxy. | No | 4 |
maxOutStandingRead | This refers to maximum queued READ request when reverse proxy receives more than maxActiveRead requests. | No | 50 |
maxOutStandingWrite | This refers to maximum queued WRITE request when reverse proxy receives more than maxActiveWrite requests. | No | 50 |
kubeletConfigDir | Specify kubelet config dir path | Yes | /var/lib/kubelet |
imagePullPolicy | The default pull policy is IfNotPresent which causes the Kubelet to skip pulling an image if it already exists. | Yes | IfNotPresent |
clusterPrefix | Prefix that is used during the creation of various masking-related entities (Storage Groups, Masking Views, Hosts, and Volume Identifiers) on the array. The value that you specify here must be unique. Ensure that no other CSI PowerMax driver is managing the same arrays that are configured with the same prefix. The maximum length for this prefix is three characters. | Yes | “ABC” |
logLevel | CSI driver log level. Allowed values: “error”, “warn”/“warning”, “info”, “debug”. | Yes | “debug” |
logFormat | CSI driver log format. Allowed values: “TEXT” or “JSON”. | Yes | “TEXT” |
kubeletConfigDir | kubelet config directory path. Ensure that the config.yaml file is present at this path. | Yes | /var/lib/kubelet |
defaultFsType | Used to set the default FS type for external provisioner | Yes | ext4 |
portGroups | List of comma-separated port group names. Any port group that is specified here must be present on all the arrays that the driver manages. | For iSCSI Only | “PortGroup1, PortGroup2, PortGroup3” |
skipCertificateValidation | Skip client-side TLS verification of Unisphere certificates | No | “True” |
transportProtocol | Set the preferred transport protocol for the Kubernetes cluster which helps the driver choose between FC, iSCSI and NVMeTCP, when a node has multiple protocol connectivity to a PowerMax array. | No | Empty |
nodeNameTemplate | Used to specify a template that will be used by the driver to create Host/IG names on the PowerMax array. To use the default naming convention, leave this value empty. | No | Empty |
modifyHostName | Change any existing host names. When nodeNameTemplate is set, it changes the name to the specified format; otherwise it uses the driver's default host name format. | No | false |
powerMaxDebug | Enables low level and http traffic logging between the CSI driver and Unisphere. Don’t enable this unless asked to do so by the support team. | No | false |
enableCHAP | Determine if the driver is going to configure SCSI node databases on the nodes with the CHAP credentials. If enabled, the CHAP secret must be provided in the credentials secret and set to the key “chapsecret” | No | false |
fsGroupPolicy | Defines which FS Group policy mode to be used. Supported modes: None, File and ReadWriteOnceWithFSType | No | “ReadWriteOnceWithFSType” |
version | Current version of the driver. Don’t modify this value as this value will be used by the install script. | Yes | v2.10.0 |
images | List all the images used by the CSI driver and CSM. If you use a private repository, change the registries accordingly. | Yes | "" |
maxPowerMaxVolumesPerNode | Specifies the maximum number of volume that can be created on a node. | Yes | 0 |
controller | Allows configuration of the controller-specific parameters. | - | - |
controllerCount | Defines the number of csi-powermax controller pods to deploy to the Kubernetes release | Yes | 2 |
volumeNamePrefix | Defines a string prefix for the names of PersistentVolumes created | Yes | “k8s” |
snapshot.enabled | Enable/Disable volume snapshot feature | Yes | true |
snapshot.snapNamePrefix | Defines a string prefix for the names of the Snapshots created | Yes | “snapshot” |
resizer.enabled | Enable/Disable volume expansion feature | Yes | true |
healthMonitor.enabled | Allows to enable/disable volume health monitor | No | false |
healthMonitor.interval | Interval of monitoring volume health condition | No | 60s |
nodeSelector | Define node selection constraints for pods of controller deployment | No | |
tolerations | Define tolerations for the controller deployment, if required | No | |
node | Allows configuration of the node-specific parameters. | - | - |
tolerations | Add tolerations as per requirement | No | - |
nodeSelector | Add node selectors as per requirement | No | - |
healthMonitor.enabled | Allows to enable/disable volume health monitor | No | false |
topologyControl.enabled | Allows to enable/disable topology control to filter topology keys | No | false |
csireverseproxy | This section refers to the configuration options for CSI PowerMax Reverse Proxy | - | - |
tlsSecret | This refers to the TLS secret of the Reverse Proxy Server. | Yes | csirevproxy-tls-secret |
deployAsSidecar | If set to true, the Reverse Proxy is installed as a sidecar to the driver’s controller pod otherwise it is installed as a separate deployment. | Yes | “True” |
port | Specify the port number that is used by the NodePort service created by the CSI PowerMax Reverse Proxy installation | Yes | 2222 |
certManager | Auto-create TLS certificate for csi-reverseproxy | - | - |
selfSignedCert | Set selfSignedCert to use a self-signed certificate | No | true |
certificateFile | certificateFile has tls.key content in encoded format | No | tls.crt.encoded64 |
privateKeyFile | privateKeyFile has tls.key content in encoded format | No | tls.key.encoded64 |
authorization | Authorization is an optional feature to apply credential shielding of the backend PowerMax. | - | - |
enabled | A boolean that enables/disables authorization feature. | No | false |
proxyHost | Hostname of the csm-authorization server. | No | Empty |
skipCertificateValidation | A boolean that enables/disables certificate validation of the csm-authorization proxy server. | No | true |
migration | Migration is an optional feature to enable migration between storage classes | - | - |
enabled | A boolean that enables/disables migration feature. | No | false |
image | Image for dell-csi-migrator sidecar. | No | " " |
nodeRescanSidecarImage | Image for node rescan sidecar which rescans nodes for identifying new paths. | No | " " |
migrationPrefix | enables migration sidecar to read required information from the storage class fields | No | migration.storage.dell.com |
replication | Replication is an optional feature to enable replication & disaster recovery capabilities of PowerMax to Kubernetes clusters. | - | - |
enabled | A boolean that enables/disables replication feature. | No | false |
replicationContextPrefix | enables side cars to read required information from the volume context | No | powermax |
replicationPrefix | Determine if replication is enabled | No | replication.storage.dell.com |
storageCapacity | An optional feature that enables storage capacity tracking and helps the scheduler check whether the requested capacity is available on the PowerMax array and can be allocated to the nodes. | - | - |
enabled | A boolean that enables/disables storagecapacity feature. | - | true |
pollInterval | Configures how often the external-provisioner polls the driver to detect changed capacity | - | 5m |
vSphere | This section refers to the configuration options for VMware virtualized environment support via RDM | - | - |
enabled | A boolean that enables/disables VMware virtualized environment support. | No | false |
fcPortGroup | Existing portGroup that driver will use for vSphere. | Yes | "" |
fcHostGroup | Existing host(initiator group)/hostgroup(cascaded initiator group) that driver will use for vSphere. | Yes | "" |
vCenterHost | URL/endpoint of the vCenter where all the ESX are present | Yes | "" |
vCenterCredSecret | Secret name for the vCenter credentials. | Yes | "" |
- Install the driver using the csi-install.sh bash script by running:
cd ../dell-csi-helm-installer && ./csi-install.sh --namespace powermax --values ./my-powermax-settings.yaml --helm-charts-version <version>
- Alternatively, you can install the driver using the standalone Helm chart with the command:
helm install --values my-powermax-settings.yaml --namespace powermax powermax ./csi-powermax
Note:
- The parameter --helm-charts-version is optional; if you do not specify the flag, the csi-install.sh script will by default clone the version of the Helm chart that is specified in the driver's csi-install.sh file. If you wish to install the driver using a different version of the Helm chart, you need to include this flag. Also, remember to delete the helm-charts repository present in the csi-powermax directory if it was cloned before.
- For detailed instructions on how to run the install scripts, see the readme document in the dell-csi-helm-installer folder.
- A set of samples is provided here to help you configure the driver with the reverse proxy.
- This script also runs the verify.sh script in the same directory. You will be prompted to enter the credentials for each of the Kubernetes nodes. The verify.sh script needs the credentials to check if the iSCSI initiators have been configured on all nodes. You can also skip the verification step by specifying the --skip-verify-node option.
- In order to enable authorization, an authorization proxy server must already be installed.
- The PowerMax array username must have the StorageAdmin role to be able to perform CRUD operations.
- If you are using a complex Kubernetes version such as “v1.24.3-mirantis-1”, use this kubeVersion check in the Helm Chart file: kubeVersion: “>= 1.24.0-0 < 1.29.0-0”.
- Provide all boolean values with double quotes. This applies only to values.yaml. Example: “true”/“false”.
- The controllerCount parameter value should be <= the number of nodes in the Kubernetes cluster; otherwise the install script fails.
- The endpoint should not have any special character at the end apart from the port number.
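After the installation completes, a quick way to verify the deployment is to check that the driver pods are running and the CSI driver is registered:
kubectl get pods -n powermax
kubectl get csidrivers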
Storage Classes
A wide set of annotated storage class manifests has been provided in the samples/storageclass
folder. Please use these samples to create new storage classes to provision storage.
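For illustration only, a minimal StorageClass might look like the sketch below; the parameter values (SYMID, SRP, ServiceLevel) are placeholders, and the annotated manifests in the samples/storageclass folder remain the authoritative reference:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: powermax-example
provisioner: csi-powermax.dellemc.com
reclaimPolicy: Delete
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer
parameters:
  SYMID: "000000000001"   # PowerMax Symmetrix ID (placeholder)
  SRP: "SRP_1"            # Storage Resource Pool on the array (placeholder)
  ServiceLevel: "Bronze"  # Desired service level (placeholder)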
Volume Snapshot Class
Starting with CSI PowerMax v1.7.0, dell-csi-helm-installer
will not create any Volume Snapshot Class during the driver installation. There is a sample Volume Snapshot Class manifest present in the samples/volumesnapshotclass folder. Please use this sample to create a new Volume Snapshot Class to create Volume Snapshots.
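For reference, a minimal Volume Snapshot Class might look like the sketch below; the manifest in the samples/volumesnapshotclass folder is authoritative:
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: powermax-snapclass
driver: csi-powermax.dellemc.com
deletionPolicy: Delete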
Sample values file
The following sections have useful snippets from values.yaml
file which provides more information on how to configure the CSI PowerMax driver along with CSI PowerMax ReverseProxy in StandAlone mode.
CSI PowerMax driver with Proxy in StandAlone mode
This is the most advanced configuration which provides you with the capability to connect to Multiple Unisphere servers. You can specify primary and backup Unisphere servers for each storage array. If you have different credentials for your Unisphere servers, you can also specify different credential secrets.
global:
defaultCredentialsSecret: powermax-creds
storageArrays:
- storageArrayId: "000000000001"
endpoint: https://primary-1.unisphe.re:8443
backupEndpoint: https://backup-1.unisphe.re:8443
- storageArrayId: "000000000002"
endpoint: https://primary-2.unisphe.re:8443
backupEndpoint: https://backup-2.unisphe.re:8443
managementServers:
- endpoint: https://primary-1.unisphe.re:8443
credentialsSecret: primary-1-secret
skipCertificateValidation: false
certSecret: primary-cert
limits:
maxActiveRead: 5
maxActiveWrite: 4
maxOutStandingRead: 50
maxOutStandingWrite: 50
- endpoint: https://backup-1.unisphe.re:8443
credentialsSecret: backup-1-secret
skipCertificateValidation: true
- endpoint: https://primary-2.unisphe.re:8443
credentialsSecret: primary-2-secret
skipCertificateValidation: true
- endpoint: https://backup-2.unisphe.re:8443
credentialsSecret: backup-2-secret
skipCertificateValidation: true
# "csireverseproxy" refers to the subchart csireverseproxy
csireverseproxy:
tlsSecret: csirevproxy-tls-secret
deployAsSidecar: true
port: 2222
mode: StandAlone
Note: If the credential secret is missing from any management server details, the installer will try to use the defaultCredentialsSecret.