Helm

Install Helm 3.0

Install Helm 3.0 on the master node before you install the CSI Driver for PowerScale.

Steps

Run the command to install Helm 3.0.

curl https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | bash


Installation Wizard

Refer to the Installation Wizard Support Matrix for the supported drivers and modules.

The Container Storage Modules Installation Wizard is a webpage that helps you create a manifest file to install Dell CSI Drivers and CSM Modules. Users can enable or disable modules through the UI, and it generates a single manifest file, eliminating the need to download individual Helm charts for drivers and modules.

Note: Ensure Helm 3.x, the namespace, and secrets are set up before installing the Helm chart.

Generate Manifest File

  1. Open the CSM Installation Wizard.
  2. Select the Installation Type as Helm/Operator.
  3. Select the Array.
  4. Enter the Image Repository. The default value is dellemc.
  5. Select the CSM Version.
  6. Select the modules for installation. If there are module-specific inputs, enter their values.
  7. If needed, modify the Controller Pods Count.
  8. If needed, select Install Controller Pods on Control Plane and/or Install Node Pods on Control Plane.
  9. Enter the Namespace. The default value is csi-<array>.
  10. Click on Generate YAML.
  11. A manifest file, values.yaml will be generated and downloaded.
  12. A section Run the following commands to install will be displayed.
  13. Run the commands displayed to install Dell CSI Driver and Modules using the generated manifest file.

Installation Using Helm Chart

Steps

NOTE: Ensure Helm 3.x, namespace, and secrets are set up before installing the Helm chart.

  • Add the Dell Helm Charts repository.

    On your terminal, run each of the commands below:

     helm repo add dell https://dell.github.io/helm-charts
     helm repo update
    
  • Copy the downloaded values.yaml file.

  • Look over all the fields in the generated values.yaml and fill in/adjust any as needed.

NOTE: The CSM Installation Wizard generates values.yaml with the minimal inputs required to install the CSM. To configure additional parameters in values.yaml, follow the steps outlined in CSI Driver, Observability, Replication, and Resiliency.

  • When the PowerFlex driver is installed using values generated by the installation wizard and the MDM endpoints change, run the following commands to update the secret:

    echo -n '<MDM_IPS>' | base64
    kubectl create secret generic vxflexos-config -n vxflexos --from-file=config=samples/config.yaml --from-literal=MDM='xx.xx.xx.xx,yy.yy.yy.yy&zz.zz.zz.zz'
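Base64 encoding is reversible, so the encoded MDM string can be checked locally before the secret is updated. A minimal sketch, with placeholder IPs (',' separates MDMs within one array, '&' separates arrays, following the example above):

```shell
# Placeholder MDM IPs: ',' separates MDMs of one array, '&' separates arrays.
MDM_IPS='10.0.0.1,10.0.0.2&10.1.0.1,10.1.0.2'

# Encode without a trailing newline, exactly as `echo -n ... | base64` does.
ENCODED=$(printf '%s' "$MDM_IPS" | base64 | tr -d '\n')
echo "$ENCODED"

# Decoding recovers the original string, confirming the value is safe to use.
printf '%s' "$ENCODED" | base64 -d
```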
                
  • If Observability is checked in the wizard, refer to Observability to export metrics to Prometheus and load the Grafana dashboards.
  • If Authorization is checked in the wizard, only the sidecar is enabled. Refer to Authorization to install and configure the CSM Authorization Proxy Server.
  • If Replication is checked in the wizard, refer to Replication on configuring communication between Kubernetes clusters.
  • If your Kubernetes distribution doesn’t have the Volume Snapshot feature enabled, refer to this section to install the Volume Snapshot CRDs and the default snapshot controller.

  • Install the Helm chart.

    On your terminal, run this command:

    helm install <release-name> dell/container-storage-modules -n <namespace> --version <container-storage-module chart-version> -f <values.yaml location>
                

    Example: helm install powerscale dell/container-storage-modules -n csi-powerscale --version 1.4.0 -f values.yaml


Prerequisites

The following are requirements to be met before installing the CSI Driver for PowerScale:

  • Install Kubernetes or OpenShift (see supported versions)
  • Install Helm 3
  • Mount propagation is enabled on container runtime that is being used
  • nfs-utils package must be installed on nodes that will mount volumes
  • If using Snapshot feature, satisfy all Volume Snapshot requirements
  • If enabling CSM for Authorization, please refer to the Authorization deployment steps first
  • If enabling CSM for Replication, please refer to the Replication deployment steps first
  • If enabling CSM for Resiliency, please refer to the Resiliency deployment steps first

(Optional) Volume Snapshot Requirements

For detailed snapshot setup procedure, click here.

(Optional) Volume Health Monitoring

The Volume Health Monitoring feature is optional, and by default it is disabled for drivers installed via Helm.

If enabled, capacity metrics (used and free capacity, used and free inodes) for PowerScale persistent volumes are exposed in the Kubernetes metrics API.

To enable this feature, add the block below to the driver manifest before installing the driver; this installs the external health monitor sidecar. To report the volume health state, set enabled under controller to true. To report volume stats, set enabled under node to true.

controller:
  healthMonitor:
    # enabled: Enable/Disable health monitor of CSI volumes
    # Allowed values:
    #   true: enable checking of health condition of CSI volumes
    #   false: disable checking of health condition of CSI volumes
    # Default value: None
    enabled: false
    # interval: Interval of monitoring volume health condition
    # Allowed values: Number followed by unit (s,m,h)
    # Examples: 60s, 5m, 1h
    # Default value: 60s
    interval: 60s
node:
  healthMonitor:
    # enabled: Enable/Disable health monitor of CSI volumes- volume usage, volume condition
    # Allowed values:
    #   true: enable checking of health condition of CSI volumes
    #   false: disable checking of health condition of CSI volumes
    # Default value: None
    enabled: false

NOTE: To enable this feature on an existing driver, or while upgrading the driver version, use either of the following ways:

  1. Reinstall the driver
  2. Upgrade the driver with the --upgrade option
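As a sketch of the edit described above (assuming the default layout shown, where each enabled flag sits directly under healthMonitor:), both flags can be flipped in a scratch copy and verified before touching your real settings file:

```shell
# Write the healthMonitor snippet from above to a scratch file.
cat > hm-settings.yaml <<'EOF'
controller:
  healthMonitor:
    enabled: false
    interval: 60s
node:
  healthMonitor:
    enabled: false
EOF

# Flip "enabled: false" to true only inside healthMonitor blocks.
# GNU sed syntax; on BSD/macOS use `sed -i ''` instead of `sed -i`.
sed -i '/healthMonitor:/,/enabled:/ s/enabled: false/enabled: true/' hm-settings.yaml

# Both the controller and node flags should now be true.
grep -c 'enabled: true' hm-settings.yaml
```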

(Optional) Replication feature Requirements

Applicable only if you decided to enable the Replication feature in values.yaml

replication:
  enabled: true

Replication CRDs

The CRDs for replication can be obtained and installed from the csm-replication project on Github. Use csm-replication/deploy/replicationcrds.all.yaml located in the csm-replication git repo for the installation.

CRDs should be configured during the replication prepare stage with repctl, as described in install-repctl.

Install Driver

Steps

  1. Run git clone -b v2.13.0 https://github.com/dell/csi-powerscale.git to clone the git repository.

  2. Ensure that you have created the namespace where you want to install the driver. You can run kubectl create namespace isilon to create a new one. The use of “isilon” as the namespace is just an example. You can choose any name for the namespace.

  3. Collect information from the PowerScale Systems like IP address, IsiPath, username, and password. Make a note of the value for these parameters as they must be entered in the secret.yaml.

    Note: The ‘clusterName’ serves as a logical, unique identifier for the array that should remain unchanged once it is included in the volume handle. Altering this identifier is not advisable, as it would result in the failure of all operations associated with the volume that was created earlier.

  4. Change to the dell-csi-helm-installer directory and download the default values file to customize settings for your installation:

     cd dell-csi-helm-installer
     wget -O my-isilon-settings.yaml https://raw.githubusercontent.com/dell/helm-charts/csi-isilon-2.13.0/charts/csi-isilon/values.yaml

  5. Edit my-isilon-settings.yaml to set the parameters for your installation. The following table lists the primary configurable parameters of the PowerScale driver Helm chart and their default values. More detailed information can be found in the values.yaml file in this repository.

    Parameters
  6. Edit the following parameters in the samples/secret/secret.yaml file and update/add connection and authentication information for one or more PowerScale clusters. If the replication feature is enabled, ensure the secret includes all the PowerScale clusters involved in replication.
    Parameters

    User privileges

    The username specified in secret.yaml must be from the authentication providers of PowerScale. The user must have enough privileges to perform the actions. The suggested privileges are as follows:

    Privilege                   Type
    ISI_PRIV_LOGIN_PAPI         Read Only
    ISI_PRIV_NFS                Read Write
    ISI_PRIV_QUOTA              Read Write
    ISI_PRIV_SNAPSHOT           Read Write
    ISI_PRIV_IFS_RESTORE        Read Only
    ISI_PRIV_NS_IFS_ACCESS      Read Only
    ISI_PRIV_IFS_BACKUP         Read Only
    ISI_PRIV_AUTH_ZONES         Read Only
    ISI_PRIV_SYNCIQ             Read Write
    ISI_PRIV_STATISTICS         Read Only
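For reference, a minimal secret.yaml sketch with a single cluster entry; the field names follow the csi-powerscale sample secret, and all values here are placeholders:

```yaml
isilonClusters:
    # clusterName: logical, unique identifier; do not change it once volumes exist
  - clusterName: "cluster1"
    username: "csi-user"         # placeholder; must exist in a PowerScale authentication provider
    password: "password"         # placeholder
    endpoint: "1.2.3.4"          # IP address or FQDN, optionally prefixed with https://
    endpointPort: "8080"
    isDefault: true
    skipCertificateValidation: true
    isiPath: "/ifs/data/csi"
```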

    Create isilon-creds secret using the following command:
    kubectl create secret generic isilon-creds -n isilon --from-file=config=secret.yaml

    NOTE:

    • If any key/value is present in all of my-isilon-settings.yaml, the secret, and the storageClass, the values provided in the storageClass parameters take precedence.
    • The user has to validate the YAML syntax and array-related key/values while replacing or appending to the isilon-creds secret. The driver continues to use the previous values if an error is found in the YAML file.
    • For the key isiIP/endpoint, you can provide either an IP address or an FQDN, optionally prefixed with ‘https’ (for example, https://192.168.1.1).
    • The isilon-creds secret has a mountEndpoint parameter which should only be updated and used when Authorization is enabled.
  7. If you want to validate the OneFS API server’s certificates, install the OneFS CA certificates by following the instructions in the next section. If not, create an empty secret using the following command; an empty secret is required for a successful installation of the CSI Driver for PowerScale.

    kubectl create -f empty-secret.yaml
    

    This command will create a new secret called isilon-certs-0 in isilon namespace.

  8. Install the driver using the csi-install.sh bash script and the default yaml by running

    cd dell-csi-helm-installer && wget -O my-isilon-settings.yaml https://raw.githubusercontent.com/dell/helm-charts/csi-isilon-2.13.0/charts/csi-isilon/values.yaml &&
    ./csi-install.sh --namespace isilon --values  my-isilon-settings.yaml --helm-charts-version <version>
    

NOTE:

  • The parameter --helm-charts-version is optional; if you do not specify the flag, the csi-install.sh script by default clones the version of the helm chart specified in the driver’s csi-install.sh file. To install the driver using a different version of the helm chart, include this flag. Also remember to delete the helm-charts repository in the csi-powerscale directory if it was cloned before.

Certificate validation for OneFS REST API calls

The CSI driver exposes an install parameter ‘skipCertificateValidation’ which determines if the driver performs client-side verification of the OneFS certificates. The ‘skipCertificateValidation’ parameter is set to true by default and the driver does not verify the OneFS certificates.

If the ‘skipCertificateValidation’ is set to false, then the secret isilon-certs must contain the CA certificate for OneFS. If this secret is an empty secret, then the validation of the certificate fails, and the driver fails to start.

If the ‘skipCertificateValidation’ parameter is set to false and a previous installation created the empty secret, then this secret must be deleted and re-created using the CA certs. If the OneFS certificate is self-signed, perform the following steps:

Procedure

  1. To fetch the certificate, run
     openssl s_client -showcerts -connect [OneFS IP] </dev/null 2>/dev/null | openssl x509 -outform PEM > ca_cert_0.pem
  2. To create the certs secret, run
     kubectl create secret generic isilon-certs-0 --from-file=cert-0=ca_cert_0.pem -n isilon
  3. To replace the secret, run
     kubectl create secret generic isilon-certs-0 -n isilon --from-file=cert-0=ca_cert_0.pem -o yaml --dry-run | kubectl replace -f -
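To rehearse these steps without a OneFS array, you can generate a self-signed certificate locally and inspect it the same way; the hostname onefs.example.com is hypothetical:

```shell
# Create a throwaway self-signed certificate, standing in for the one
# fetched from OneFS in step 1 above.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=onefs.example.com" \
  -keyout onefs-key.pem -out ca_cert_0.pem

# Inspect the PEM file before loading it into the isilon-certs-0 secret.
openssl x509 -in ca_cert_0.pem -noout -subject
```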

NOTES:

  1. The OneFS IP can be specified with or without a port, depending on the configuration of the OneFS API server.
  2. The commands are based on the namespace ‘isilon’.
  3. It is highly recommended that the ca_cert.pem file(s) follow the naming convention ca_cert_number.pem (example: ca_cert_0, ca_cert_1), where the number starts from 0 and grows as the number of OneFS arrays grows.
  4. The cert secrets created from these pem files must follow the naming convention isilon-certs-number (example: isilon-certs-0, isilon-certs-1, and so on); the number must start from zero and grow incrementally. The number of secrets created from pem files should match the certSecretCount value in myvalues.yaml or my-isilon-settings.yaml.
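The naming convention from notes 3 and 4 can be sketched as a loop pairing each PEM file with its secret name; certSecretCount=2 is an assumed example value:

```shell
# Assumed example: certSecretCount=2 in my-isilon-settings.yaml.
CERT_SECRET_COUNT=2

# The index starts at 0 and grows with the number of OneFS arrays.
for i in $(seq 0 $((CERT_SECRET_COUNT - 1))); do
  echo "ca_cert_${i}.pem -> isilon-certs-${i}"
done
```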

Dynamic update of array details via secret.yaml

CSI Driver for PowerScale now provides support for multiple clusters. Users can link a single CSI driver to multiple OneFS clusters by updating secret.yaml. Update the isilon-creds secret by editing secret.yaml and executing the following command:

kubectl create secret generic isilon-creds -n isilon --from-file=config=secret.yaml -o yaml --dry-run=client | kubectl replace -f -

Note: Updating isilon-certs-x secrets is a manual process, unlike isilon-creds. Users have to re-install the driver when updating or adding SSL certificates, or when changing the certSecretCount parameter.

Storage Classes

With CSI Driver for PowerScale version 1.5 and later, dell-csi-helm-installer does not create any storage classes as part of the driver installation. A sample storage class manifest is available at samples/storageclass/isilon.yaml. Use this sample manifest to create a storageclass to provision storage; uncomment/update the manifest as per your requirements.

What happens to my existing storage classes?

Upgrading from the CSI PowerScale v2.3 driver: The storage classes created as part of the installation have the annotation “helm.sh/resource-policy”: keep set. This ensures that even after an uninstall or upgrade, the storage classes are not deleted. You can continue using these storage classes if you wish.

NOTE:

  • At least one storage class is required per array.
  • If you uninstall the driver and reinstall it, you can still face errors if any update in the values.yaml file leads to an update of the storage class(es):
    Error: cannot patch "<sc-name>" with kind StorageClass: StorageClass.storage.k8s.io "<sc-name>" is invalid: parameters: Forbidden: updates to parameters are forbidden

If you want to make such updates, ensure that you delete the existing storage classes using the kubectl delete storageclass command. Deleting a storage class has no impact on a running Pod with mounted PVCs, but you cannot provision new PVCs until at least one storage class is newly created.

Note: If you continue to use the old storage classes, you may not be able to take advantage of any new storage class parameter supported by the driver.

Steps to create secondary storage class:

There are samples storage class yaml files available under samples/storageclass. These can be copied and modified as needed.

  1. Copy storageclass.yaml to second_storageclass.yaml. (This is just an example; you can rename the file as you require.)
  2. Edit the second_storageclass.yaml file and update the following parameters:
  • Update the name parameter as you require:
     metadata:
       name: isilon-new
    
  • The cluster name of the second array looks like this in the secret file (under /samples/secret/secret.yaml):
    - clusterName: "cluster2"
      username: "user name"
      password: "Password"
      endpoint: "10.X.X.X"
      endpointPort: "8080"
    
  • Use the same clusterName in the second_storageclass.yaml:
     # Optional: true
     ClusterName: "cluster2"
    
  • Note: These are the two essential parameters that you need to change in the second_storageclass.yaml file; change other parameters as required.
  3. Save the second_storageclass.yaml file.

  4. Create your second storage class by using kubectl:

    kubectl create -f <path_to_second_storageclass_file>
    
  5. Use the newly created storage class isilon-new for volumes to spin up on cluster2

    PVC example

      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: test-pvc
      spec:
       accessModes:
       - ReadWriteOnce
       resources:
         requests:
           storage: 5Gi
       storageClassName: isilon-new
    

Volume Snapshot Class

Starting with CSI PowerScale v1.6, dell-csi-helm-installer does not create any Volume Snapshot Class during the driver installation. Sample volume snapshot class manifests are available at samples/volumesnapshotclass/. Use these sample manifests to create a volumesnapshotclass for creating volume snapshots; uncomment/update the manifests as per your requirements.

Silent Mount Re-tries (v2.6.0)

There is a race condition: the ControllerPublish call that adds a client to a volume’s export list can take longer than usual when a background NFS refresh process on OneFS has not yet completed. Initial mount attempts then fail with a “mount failed” error and may succeed after a few retries, so false-positive “mount failed” errors were logged. To avoid this, the driver silently retries the mount every two seconds (five attempts maximum) for every NodePublish call, allowing a successful mount within five retries without logging any mount error messages. “mount failed” is logged only once these five retry attempts are exhausted and the client has still not been added to the export list.

Mount retries handle the scenarios below:

  • Access denied by server while mounting (NFSv3)
  • No such file or directory (NFSv4)

Sample:

level=error clusterName=powerscale runid=10 msg="mount failed: exit status 32
mounting arguments: -t nfs -o rw XX.XX.XX.XX:/ifs/data/csi/k8s-ac7b91962d /var/lib/kubelet/pods/9f72096a-a7dc-4517-906c-20697f9d7375/volumes/kubernetes.io~csi/k8s-ac7b91962d/mount
output: mount.nfs: access denied by server while mounting XX.XX.XX.XX:/ifs/data/csi/k8s-ac7b91962d
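The retry behaviour described above can be sketched in shell. The real logic lives in the driver’s Go code; try_mount here is a stand-in that always fails, so all five attempts are exhausted and the error is finally logged:

```shell
# Stand-in for the real NFS mount call; always fails to show the exhausted case.
try_mount() { false; }

attempts=0
until try_mount; do
  attempts=$((attempts + 1))
  if [ "$attempts" -ge 5 ]; then
    # Only now is the error logged, after five silent retries.
    echo "mount failed after $attempts attempts"
    break
  fi
  sleep 2   # the driver waits two seconds between attempts
done
```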