| Symptoms | Prevention, Resolution or Workaround |
| --- | --- |
| The installation fails with an error message indicating that listed nodes do not have the SDC installed. | Install the PowerFlex SDC on the listed nodes. The SDC must be installed on all of the nodes that need to pull an image of the driver (see the SDC check after this table). |
| When you run the command `kubectl describe pods vxflexos-controller-* -n vxflexos`, the system indicates that the driver image could not be loaded. | - If on Kubernetes, edit the `daemon.json` file and add the registry to `insecure-registries`.<br>- If on OpenShift, run the command `oc edit image.config.openshift.io/cluster` and add the registry to the YAML that is displayed (see the registry example after this table). |
| The driver logs show that the driver cannot connect to the PowerFlex gateway. | Check the username, password, and the gateway IP address for the PowerFlex system (see the gateway login check after this table). |
| `CreateVolume` fails with an error indicating that the system is not configured in the driver. | If the PowerFlex name is used for `systemID` in the StorageClass, ensure that the same name is also used for `systemID` in the array configuration (see the systemID check after this table). |
| The `defcontext` mount option seems to be ignored; volumes are still not labeled correctly. | Ensure SELinux is enabled on the worker node, and ensure your container runtime is properly configured to be used with SELinux (see the SELinux check after this table). |
| Mount options that interact with SELinux (such as `defcontext`) are not working. | Check that your container orchestrator is properly configured to work with SELinux. |
| Installation of the driver on Kubernetes v1.25/v1.26/v1.27 fails with an error about unrecognized snapshot resources. | These Kubernetes versions require the v1 version of the snapshot CRDs to be created in the cluster; see the Volume Snapshot Requirements (and the snapshot CRD check after this table). |
| The driver fails to connect to the PowerFlex array because of a certificate validation error. | A self-signed certificate is used for the PowerFlex array. See certificate validation for PowerFlex Gateway. |
| When you run the command to create the v1 snapshot class, the `VolumeSnapshotClass` kind is not recognized. | Check to make sure that the v1 snapshotter CRDs are installed, and not the v1beta1 CRDs, which are no longer supported. |
| The controller pod is stuck and produces errors indicating that v1 snapshot resources cannot be found. | Make sure that the v1 snapshotter CRDs and v1 snapshot class are installed, and not v1beta1, which is no longer supported. |
| Driver install or upgrade fails because of an incompatible Kubernetes version, even though the version seems to be within the range of compatibility (for example, a vendor-suffixed version string). | If you are using an extended Kubernetes version, please see the Helm chart at `helm/csi-vxflexos/Chart.yaml` and use the alternate `kubeVersion` check provided there (see the chart check after this table). |
| Volume metrics are missing. | Enable Volume Health Monitoring (see the health monitor example after this table). |
| When a node goes down, the block volumes attached to the node cannot be attached to another node. | This is a known issue and has been reported at https://github.com/kubernetes-csi/external-attacher/issues/215. Workaround:<br>1. Force delete the pod running on the node that went down.<br>2. Delete the volumeattachment to the node that went down.<br>Now the volume can be attached to the new node (see the workaround commands after this table). |
| CSI-PowerFlex volumes cannot mount and are being recognized as multipath devices. | CSI-PowerFlex does not support multipath. To fix:<br>1. Remove any multipath mapping involving a PowerFlex volume (for example, with `multipath -f <device>`).<br>2. Blacklist CSI-PowerFlex volumes in the multipath config file (see the multipath example after this table). |
| When attempting a driver upgrade, you see an error stating that the `fsGroupPolicy` field is immutable. | You cannot upgrade between drivers with different `fsGroupPolicy` values. See the upgrade documentation for more details (and the fsGroupPolicy check after this table). |
| When accessing a ROX mode PVC in OpenShift where the worker nodes run as a non-root user, access to the volume fails. | |
| After installing version v2.6.0 of the driver using the default SDC deployment, the node pods fail to initialize on nodes that already have the SDC. | The SDC is already installed. Change the SDC deployment setting in the driver configuration so that the driver does not attempt to reinstall it. |
| In version v2.6.0, the driver crashes because the External Health Monitor sidecar crashes when a persistent volume is not found. | This is a known issue reported at kubernetes-csi/external-health-monitor#100. |
| In version v2.6.0, when a cluster node goes down, the block volumes attached to the node cannot be attached to another node. | This is a known issue reported at kubernetes-csi/external-attacher#215. Workaround:<br>1. Force delete the pod running on the node that went down.<br>2. Delete the pod's persistent volume attachment on the node that went down.<br>Now the volume can be attached to the new node (see the workaround commands after this table). |
`vxflexos-controller-*` is the controller pod that acquires the leader lease.
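
The examples below sketch quick checks and commands for several rows in the table; hostnames, credentials, and resource names are placeholders. For the SDC row, one way to confirm the SDC is present on a node is to check for its kernel module and service; `scini` is the usual PowerFlex SDC module and service name, but verify it against your installation.

```bash
# Check that the PowerFlex SDC kernel module (scini) is loaded on the node.
lsmod | grep scini

# Check the SDC service on systemd-based hosts.
systemctl status scini
```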
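For the image-pull row, a minimal sketch of allowing an insecure registry, assuming a Docker runtime on the Kubernetes nodes; the registry hostname is a placeholder, and any existing `daemon.json` settings should be merged rather than overwritten.

```bash
# Kubernetes (Docker runtime): mark the registry as insecure, then restart Docker.
# Note: this overwrites /etc/docker/daemon.json; merge with existing settings.
cat <<'EOF' | sudo tee /etc/docker/daemon.json
{
  "insecure-registries": ["myregistry.example.com:5000"]
}
EOF
sudo systemctl restart docker

# OpenShift: edit the cluster image config and add the registry under
# spec.registrySources.insecureRegistries.
oc edit image.config.openshift.io/cluster
```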
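For the credentials row, the PowerFlex gateway exposes a REST login endpoint, so connectivity and credentials can be checked with curl; the address and credentials are placeholders, and `-k` skips validation of a self-signed certificate.

```bash
# A successful login returns a token; an HTTP 401 points at bad credentials,
# while a connection error points at the gateway address.
curl -k -u admin:password https://gateway.example.com/api/login
```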
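For the `CreateVolume`/systemID row, a sketch comparing the StorageClass parameter with the array connection secret; `vxflexos-config` in the `vxflexos` namespace is the usual default secret for this driver, but adjust the names to your deployment.

```bash
# systemID used by the StorageClass.
kubectl get storageclass <storageclass-name> \
  -o jsonpath='{.parameters.systemID}{"\n"}'

# systemID entries in the array connection secret.
kubectl get secret vxflexos-config -n vxflexos \
  -o jsonpath='{.data.config}' | base64 -d
```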
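For the SELinux rows, a sketch of checking enforcement on a worker node; the containerd path below assumes a default containerd installation.

```bash
# Show whether SELinux is enabled and in enforcing mode.
getenforce
sestatus

# For containerd, confirm SELinux support is turned on in the runtime config
# (look for enable_selinux = true in the CRI plugin section).
grep -i selinux /etc/containerd/config.toml
```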
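For the snapshot rows, verifying that the v1 snapshot CRDs (not v1beta1) are installed and served:

```bash
# List the snapshot CRDs.
kubectl get crd volumesnapshots.snapshot.storage.k8s.io \
  volumesnapshotclasses.snapshot.storage.k8s.io \
  volumesnapshotcontents.snapshot.storage.k8s.io

# Confirm the v1 API group version is served.
kubectl api-resources --api-group=snapshot.storage.k8s.io
```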
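For the extended-version row, inspecting the chart's version constraint from a checkout of the driver repository:

```bash
# Show the kubeVersion constraint declared by the chart.
grep kubeVersion helm/csi-vxflexos/Chart.yaml

# Compare with the version the cluster reports.
kubectl version
```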
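For the volume-metrics row, Volume Health Monitoring is typically enabled through the Helm values; the `controller.healthMonitor.enabled` key below is an assumption, so verify the exact key in your chart's `values.yaml` before using it.

```bash
# Find the health monitor switches exposed by the chart.
grep -n -A2 healthMonitor helm/csi-vxflexos/values.yaml

# Example upgrade flipping the switch (key name assumed; verify first).
helm upgrade vxflexos helm/csi-vxflexos -n vxflexos \
  --set controller.healthMonitor.enabled=true
```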
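For the node-down rows, the documented workaround expressed as commands; all names are placeholders.

```bash
# 1. Force delete the pod that was running on the failed node.
kubectl delete pod <pod-name> -n <namespace> --force --grace-period=0

# 2. Find and delete the VolumeAttachment that still binds the volume
#    to the failed node; the volume can then attach to the new node.
kubectl get volumeattachment | grep <failed-node-name>
kubectl delete volumeattachment <volumeattachment-name>
```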
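For the multipath row, a sketch of flushing a stray mapping and blacklisting PowerFlex devices; PowerFlex volumes usually surface as `/dev/scini*` block devices, but confirm the device pattern on your hosts before applying the blacklist.

```bash
# Flush an existing multipath mapping that grabbed a PowerFlex volume.
multipath -f <multipath-device-name>

# Append a blacklist stanza for scini devices; merge with any existing
# blacklist section in /etc/multipath.conf rather than duplicating it.
cat <<'EOF' >> /etc/multipath.conf
blacklist {
    devnode "^scini[a-z]+"
}
EOF
systemctl reload multipathd
```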
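For the fsGroupPolicy row, `fsGroupPolicy` is immutable on a CSIDriver object, which is why in-place upgrades between differing policies fail. You can inspect the installed value before upgrading; `csi-vxflexos.dellemc.com` is the usual PowerFlex CSIDriver name.

```bash
# Show the fsGroupPolicy of the installed CSIDriver object.
kubectl get csidriver csi-vxflexos.dellemc.com \
  -o jsonpath='{.spec.fsGroupPolicy}{"\n"}'
```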