Troubleshooting
- Can Container Storage Module Operator manage existing drivers installed using Helm charts or the CSI Operator?
- Why do some of the Custom Resource fields show up as invalid or unsupported in the OperatorHub GUI?
- How can I view detailed logs for the Container Storage Module Operator?
- My Dell CSI Driver install failed. How do I fix it?
- My CSM Replication install fails to validate replication prechecks with 'no such host'.
- How to update resource limits for Container Storage Module Operator when it is deployed using Operator Hub
Can Container Storage Module Operator manage existing drivers installed using Helm charts or the CSI Operator?
The Container Storage Module Operator cannot manage any existing driver installed using Helm charts or the CSI Operator. If you have already installed one of the Dell CSI drivers in your cluster and want to use the CSM Operator based deployment, uninstall the driver and then redeploy it via the Container Storage Module Operator.
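If the driver was originally installed with Helm, it can typically be removed with Helm before redeploying through the operator. A minimal sketch, assuming the release was named csi-isilon and installed in the isilon namespace (substitute your own release name and namespace):
helm list -n isilon                      # confirm the existing Helm release
helm uninstall csi-isilon -n isilon      # remove the Helm-based installation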
Why do some of the Custom Resource fields show up as invalid or unsupported in the OperatorHub GUI?
The Container Storage Module Operator is not fully compliant with the OperatorHub React UI elements. Because of this, some Custom Resource fields may show up as invalid or unsupported in the OperatorHub GUI. To work around this problem, use kubectl/oc commands to get details about the Custom Resource (CR). This issue will be fixed in upcoming releases of the Container Storage Module Operator.
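For example, the CR can be inspected directly from the command line instead of the GUI (the csm resource name and placeholders follow the commands used elsewhere in this guide):
kubectl get csm -n <namespace>
kubectl describe csm <custom-resource-name> -n <namespace>
oc get csm <custom-resource-name> -n <namespace> -o yaml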
How can I view detailed logs for the Container Storage Module Operator?
Detailed logs of the Container Storage Module Operator can be displayed using the following command:
kubectl logs <csm-operator-controller-podname> -n <namespace>
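If the controller pod name is not known, it can be listed first. A sketch, assuming the default dell-csm-operator namespace used later in this section (substitute yours if different):
kubectl get pods -n dell-csm-operator    # the controller pod name starts with dell-csm-operator-controller-manager
kubectl logs <csm-operator-controller-podname> -n dell-csm-operator --tail=100 -f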
My Dell CSI Driver install failed. How do I fix it?
Describe the current state by issuing:
kubectl describe csm <custom-resource-name> -n <namespace>
In the output, refer to the Status and Events sections. If the status shows pods in a failed state, refer to the CSI Driver Troubleshooting guide.
Example:
Status:
  Controller Status:
    Available: 0
    Desired: 2
    Failed: 2
  Node Status:
    Available: 0
    Desired: 2
    Failed: 2
  State: Failed
Events
Warning Updated 67s (x15 over 2m4s) csm (combined from similar events): at 1646848059520359167 Pod error details ControllerError: ErrImagePull= pull access denied for dellem/csi-isilon, repository does not exist or may require 'docker login': denied: requested access to the resource is denied, Daemonseterror: ErrImagePull= pull access denied for dellem/csi-isilon, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
The above event shows that the image dellem/csi-isilon does not exist. To resolve this, run kubectl edit on the csm custom resource and update it to the correct image.
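A sketch of that fix, using the same placeholders as above:
kubectl edit csm <custom-resource-name> -n <namespace>
# in the editor, correct the driver image reference, for example dellem/csi-isilon -> dellemc/csi-isilon, then save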
To get details of the driver installation: kubectl logs <dell-csm-operator-controller-manager-pod> -n dell-csm-operator
Typical reasons for errors:
- Incorrect driver version
- Incorrect driver type
- Incorrect driver spec (env or args for containers)
- Incorrect RBAC permissions
My CSM Replication install fails to validate replication prechecks with 'no such host'.
In replication environments that utilize more than one cluster, and utilize FQDNs to reference API endpoints, it is highly recommended that the DNS be configured to resolve requests involving the FQDN to the appropriate cluster.
If for some reason it is not possible to configure the DNS, the /etc/hosts file should be updated to map the FQDN to the appropriate IP. This change will need to be made to the /etc/hosts file on:
- The bastion node(s) (or wherever repctl is used).
- Either the CSM Operator Deployment or ClusterServiceVersion custom resource if using an Operator Lifecycle Manager (such as with an OperatorHub install).
- Both dell-replication-controller-manager deployments.
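For the bastion node(s) in the first item, a minimal sketch of the /etc/hosts entry, using the same placeholders as the commands that follow:
<remote-IP>   <remote-FQDN>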
To update the ClusterServiceVersion, execute the command below, replacing the fields for the remote cluster’s FQDN and IP.
kubectl patch clusterserviceversions.operators.coreos.com -n <operator-namespace> dell-csm-operator-certified.v1.3.0 \
--type=json -p='[{"op": "add", "path": "/spec/install/spec/deployments/0/spec/template/spec/hostAliases", "value": [{"ip":"<remote-IP>","hostnames":["<remote-FQDN>"]}]}]'
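To confirm that the hostAliases entry was added, the patched ClusterServiceVersion can be inspected (a sketch; the CSV name must match the installed operator version):
kubectl get clusterserviceversions.operators.coreos.com -n <operator-namespace> dell-csm-operator-certified.v1.3.0 \
  -o jsonpath='{.spec.install.spec.deployments[0].spec.template.spec.hostAliases}'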
To update the dell-replication-controller-manager deployment, execute the command below, replacing the fields for the remote cluster’s FQDN and IP. Make sure to update the deployment on both the primary and disaster recovery clusters.
kubectl patch deployment -n dell-replication-controller dell-replication-controller-manager \
-p '{"spec":{"template":{"spec":{"hostAliases":[{"hostnames":["<remote-FQDN>"],"ip":"<remote-IP>"}]}}}}'
How to update resource limits for Container Storage Module Operator when it is deployed using Operator Hub
In certain environments where users have deployed the Container Storage Module Operator using Operator Hub, they have encountered issues with the operator pods reporting 'OOM Killed'. This issue is attributed to the default resource requests and limits configured in the operator, which fail to meet the resource requirements of those environments. In this case, users can update the resource limits from the OpenShift web console by following the steps below:
- Log in to the OpenShift web console
- Navigate to the Operators section in the left pane, expand it, and click on 'Installed Operators'
- Select the Dell Container Storage Modules operator
- Click on the YAML tab under the operator; the ClusterServiceVersion (CSV) file opens in a YAML editor
- Update the resource limits in the opened YAML under the section spec.install.spec.deployments.spec.template.spec.containers.resources (see the sketch after these steps)
- Save the CSV and your changes should be applied
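A sketch of what the resources section under that path might look like after editing; the CPU and memory values below are illustrative assumptions and should be tuned to your environment:
resources:
  limits:
    cpu: 500m
    memory: 512Mi
  requests:
    cpu: 100m
    memory: 192Mi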
| Symptoms | Prevention, Resolution or Workaround |
| --- | --- |
| When you run the command kubectl describe pods unity-controller-<suffix> -n unity, the system indicates that the driver image could not be loaded. | You may need to put an insecure-registries entry in /etc/docker/daemon.json or log in to the docker registry (see the example after this table). |
| The kubectl logs -n unity unity-node-<suffix> driver logs show that the driver can't connect to Unity XT - Authentication failure. | Check if you have created a secret with the correct credentials. |
| fsGroup specified in the pod spec is not reflected in files or directories at the mounted path of the volume. | fsType of the PVC must be set for fsGroup to work. fsType can be specified while creating a storage class. For the NFS protocol, fsType can be specified as nfs. fsGroup does not work for ephemeral inline volumes. |
| Dynamic array detection will not work in a topology-based environment. | Whenever a new array is added or removed and topology-based storage classes are used, the driver controller and node pods should be restarted with the command kubectl get pods -n unity --no-headers=true \| awk '/unity-/{print $1}' \| xargs kubectl delete -n unity pod. For dynamic array addition without topology, the driver will detect the newly added or removed arrays automatically. |
| If the source PVC is deleted while a cloned PVC exists, the source PVC will be deleted in the cluster but will still be present on the array and marked for deletion. | All cloned PVCs should be deleted in order to delete the source PVC from the array. |
| PVC creation fails on a fresh cluster with only the iSCSI and NFS protocols enabled, with the error failed to provision volume with StorageClass "unity-iscsi": error generating accessibility requirements: no available topology found. | This is because iSCSI initiator login takes longer than the node pod startup time. This can be overcome by bouncing the node pods in the cluster with kubectl get pods -n unity --no-headers=true \| awk '/unity-/{print $1}' \| xargs kubectl delete -n unity pod. |
| Driver install or upgrade fails because of an incompatible Kubernetes version, even though the version seems to be within the range of compatibility. For example: Error: UPGRADE FAILED: chart requires kubeVersion: >= 1.24.0 < 1.29.0 which is incompatible with Kubernetes 1.24.6-mirantis-1 | If you are using an extended Kubernetes version, see the Helm chart at helm/csi-unity/Chart.yaml and use the alternate kubeVersion check that is provided in the comments. Note that this is not meant to enable the use of pre-release alpha and beta versions, which is not supported. |
| When a node goes down, the block volumes attached to the node cannot be attached to another node. | 1. Force delete the pod running on the node that went down. 2. Delete the VolumeAttachment to the node that went down. Now the volume can be attached to the new node. |
| The standby controller pod is in a CrashLoopBackOff state. | Scale down the replica count of the controller pod's deployment to 1 using kubectl scale deployment <deployment_name> --replicas=1 -n <driver_namespace>. |
| fsGroupPolicy may not work as expected without root privileges for NFS only (https://github.com/kubernetes/examples/issues/260). | To get the desired behavior, set "RootClientEnabled" = "true" in the storage class parameters. |
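For the insecure-registries workaround in the first row of the table, a minimal sketch of the /etc/docker/daemon.json entry (the registry host and port are placeholders; restart the docker daemon after changing this file):
{
  "insecure-registries": ["<registry-host>:<registry-port>"]
}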