PowerScale

Troubleshooting PowerScale Driver

Here are some installation failures that might be encountered and how to mitigate them.

**Symptom:** The `kubectl logs isilon-controller-0 -n isilon -c driver` output shows that the driver cannot authenticate.

**Resolution:** Check the username and password in your secret for the corresponding cluster.
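A quick way to confirm what the driver is actually reading is to decode the secret. A minimal sketch, assuming the secret is named `isilon-creds` in the `isilon` namespace (adjust both to your install); secret `data` fields are base64-encoded, so the decode step itself looks like this:

```shell
# On the cluster you would inspect the secret the driver mounts, e.g.:
#   kubectl get secret isilon-creds -n isilon -o jsonpath='{.data.config}' | base64 -d
# The decode step, shown locally on a sample value (not a real credential):
encoded="YWRtaW4="
printf '%s' "$encoded" | base64 -d; echo
```

If the decoded username or password does not match the cluster's credentials, recreate the secret and restart the controller pod.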
**Symptom:** The `kubectl logs isilon-controller-0 -n isilon -c driver` output shows that the driver failed to connect to the Isilon because it could not verify the certificates.

**Resolution:** Check the `isilon-certs-` secret and ensure it is not empty and contains valid certificates. Alternatively, set `isiInsecure: "true"` for an insecure connection; SSL validation is recommended in production environments.
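If you choose to skip SSL validation, the flag lives in the per-cluster connection configuration. A minimal sketch, in which the cluster name, endpoint, and field layout are illustrative assumptions (newer driver releases may name this field `skipCertificateValidation`; match your installed version):

```yaml
isilonClusters:
  - clusterName: "cluster1"   # assumption: your cluster's name
    username: "user"
    password: "password"
    endpoint: "1.2.3.4"
    isiInsecure: true         # skip SSL certificate validation (not for production)
```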
**Symptom:** The `kubectl logs isilon-controller-0 -n isilon -c driver` output shows the error `create volume failed, Access denied. create directory as requested`.

**Resolution:** This can happen when the user who created the base path differs from the user configured for the driver. Ensure the user account used to deploy the CSI driver has sufficient rights on the base path (i.e. `isiPath`) to perform all operations.
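The failure reduces to a directory-create permission check under the base path. A local sketch of the operation the driver performs during volume creation (the temp directory stands in for the configured `isiPath` on the array):

```shell
# Simulate the driver's "create directory under the base path" step.
# The driver's configured user needs write permission on $base for this
# to succeed; otherwise the array returns Access denied.
base=$(mktemp -d)
if mkdir "$base/k8s-volume-sample" 2>/dev/null; then
  echo "create ok"
else
  echo "Access denied"
fi
rm -rf "$base"
```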
**Symptom:** A volume/filesystem can be mounted by any host on the network, even though that host is not part of the export for that particular volume under the `/ifs` directory.

**Resolution:** From "Dell PowerScale: OneFS NFS Design Considerations and Best Practices": OneFS has a default shared directory (`ifs`) that lets clients running Windows, UNIX, Linux, or Mac OS X access the same directories and files. It is recommended to disable the `ifs` shared directory in a production environment and create dedicated NFS exports and SMB shares for your workload.
**Symptom:** Creating a snapshot fails when the `IsiPath` parameter in the volume snapshot class and the related storage class differ. Because of the inconsistency, the driver uses the wrong `IsiPath` and cannot locate the source volume.

**Resolution:** Ensure `IsiPath` in the VolumeSnapshotClass yaml and the related StorageClass yaml are the same.
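A minimal sketch of the two objects agreeing on the path (the object names and `/ifs/data/csi` are illustrative assumptions; only the matching `IsiPath` values matter):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: isilon
provisioner: csi-isilon.dellemc.com
parameters:
  IsiPath: "/ifs/data/csi"
---
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: isilon-snapclass
driver: csi-isilon.dellemc.com
deletionPolicy: Delete
parameters:
  IsiPath: "/ifs/data/csi"   # must match the StorageClass above
```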
**Symptom:** Deleting a volume fails when files or folders on the volume are owned by users other than the configured Isilon user, and the Isilon credentials belong to a nonprivileged user. This is due to a limitation in Linux permission control.

**Resolution:** To perform the delete volume action, the user account must be assigned a role that has the `ISI_PRIV_IFS_RESTORE` privilege. The user account must have the following set of privileges to ensure that all the CSI Isilon driver capabilities work properly:
* ISI_PRIV_LOGIN_PAPI
* ISI_PRIV_NFS
* ISI_PRIV_QUOTA
* ISI_PRIV_SNAPSHOT
* ISI_PRIV_IFS_RESTORE
* ISI_PRIV_NS_IFS_ACCESS
* ISI_PRIV_STATISTICS
In some cases, ISI_PRIV_BACKUP is also required, for example, when files owned by other users have mode bits set to 700.
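A quick way to audit a role is to compare its granted privileges (e.g. from `isi auth roles view <role>` on the cluster) against the required set above. A local sketch, with a sample `granted` list standing in for the real role's output:

```shell
# Compare a role's granted privileges against the set the driver requires.
required="ISI_PRIV_LOGIN_PAPI ISI_PRIV_NFS ISI_PRIV_QUOTA ISI_PRIV_SNAPSHOT ISI_PRIV_IFS_RESTORE ISI_PRIV_NS_IFS_ACCESS ISI_PRIV_STATISTICS"
granted="ISI_PRIV_LOGIN_PAPI ISI_PRIV_NFS ISI_PRIV_QUOTA"   # sample role output
for p in $required; do
  case " $granted " in
    *" $p "*) ;;                     # privilege present
    *) echo "missing: $p" ;;         # privilege must be added to the role
  esac
done
```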
**Symptom:** If the hostname is mapped to a loopback IP in the /etc/hosts file and pods were created with the 1.3.0.1 release, then after upgrading to driver version 1.4.0 or later a stale "localhost" entry may remain in the export.

**Resolution:** Recommended setup: do not map a hostname to a loopback IP in the /etc/hosts file.
**Symptom:** The driver node pod is in "CrashLoopBackOff" because the generated "Node ID" does not contain a proper FQDN. This may be caused by the "dnsPolicy" applied to the driver node pod, which can behave differently on different networks.

**Resolution:** This parameter is configurable in both the Helm and Operator installers; try different "dnsPolicy" values according to your environment.
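A sketch of the Helm values tweak (the exact key location depends on the chart version; `ClusterFirstWithHostNet` is a common choice when the node pod uses host networking but must still resolve cluster DNS names):

```yaml
# values.yaml excerpt (illustrative)
dnsPolicy: "ClusterFirstWithHostNet"
```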
**Symptom:** The `kubectl logs isilon-controller-0 -n isilon -c driver` output shows `Authentication failed. Trying to re-authenticate` when using session-based authentication.

**Resolution:** The issue is resolved from OneFS 9.3 onwards. For OneFS versions prior to 9.3 with session-based authentication, either create a SmartConnect zone against a single node of the Isilon, or install/point the CSI driver at a particular node of the Isilon; otherwise, use basic authentication by setting `isiAuthType` in values.yaml to 0.
**Symptom:** When more than one ReadOnly PVC is created from the same volume snapshot, the second and subsequent requests leave PVCs in the Pending state with the warning `another RO volume from this snapshot is already present`. The driver allows only one RO volume from a specific snapshot at any point in time; this enables fast creation (within a few seconds) of an RO PVC from a volume snapshot regardless of the snapshot's size.

**Resolution:** Wait for the deletion of the first RO PVC created from the same volume snapshot.
**Symptom:** Driver install or upgrade fails because of an incompatible Kubernetes version, even though the version seems to be within the range of compatibility. For example: `Error: UPGRADE FAILED: chart requires kubeVersion: >= 1.22.0 < 1.25.0 which is incompatible with Kubernetes V1.22.11-mirantis-1`

**Resolution:** If you are using an extended Kubernetes version, see the Helm chart and use the alternate `kubeVersion` check provided in its comments. Note that this is not meant to enable the use of pre-release alpha and beta versions, which are not supported.
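The alternate check typically widens the semver constraint with `-0` prerelease bounds so that suffixed builds such as `v1.22.11-mirantis-1` match. A sketch of what the Chart.yaml line can look like; this is an assumption about the pattern, so use the exact line given in the chart's own comments:

```yaml
# Chart.yaml excerpt (illustrative): the "-0" bounds admit suffixed versions
kubeVersion: ">= 1.22.0-0 < 1.25.0-0"
```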
**Symptom:** The standby controller pod is in a CrashLoopBackOff state.

**Resolution:** Scale down the replica count of the controller pod's deployment to 1 using `kubectl scale deployment <deployment_name> --replicas=1 -n <driver_namespace>`
**Symptom:** Driver install fails because an incompatible Helm values file was specified in dell-csi-helm-installer: `expected: v2.9.x, found: v2.8.0`.

**Resolution:** Change the driver version in each file in `dell/csi-powerscale/dell-csi-helm-installer` from 2.8.0 to 2.9.x.
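The substitution can be scripted across the installer files. Shown here on a temp file so the step is self-contained; in practice run the same `sed` over the files in `dell/csi-powerscale/dell-csi-helm-installer`, and note that `2.9.2` below is a placeholder for your actual 2.9.x target release:

```shell
# Illustrative version bump on a stand-in file.
f=$(mktemp)
echo 'VERSION="v2.8.0"' > "$f"
sed -i 's/2\.8\.0/2.9.2/' "$f"   # replace old driver version with the target
cat "$f"
rm -f "$f"
```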
**Symptom:** `fsGroupPolicy` may not work as expected without root privileges, for NFS only (see https://github.com/kubernetes/examples/issues/260).

**Resolution:** To get the desired behavior, set `"RootClientEnabled" = "true"` in the storage class parameters.
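A sketch of a storage class carrying the workaround parameter (the name and other fields are illustrative; only the `RootClientEnabled` entry comes from the workaround above):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: isilon-root
provisioner: csi-isilon.dellemc.com
parameters:
  RootClientEnabled: "true"   # add the node as a root client so fsGroup changes apply
```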