Post Installation Dependencies
The following third-party components are required in the same Kubernetes cluster where Container Storage Module Observability has been deployed: Prometheus and Grafana.
There are various ways to deploy these components. We recommend following the Helm deployments according to the specifications defined below.
Tip: Container Storage Module Observability must be deployed first. Once the module has been deployed, you can proceed with deploying and configuring Prometheus and Grafana.
Prometheus
Prometheus and Container Storage Module Observability services run on the same Kubernetes cluster, with Container Storage Module sending metrics to the OpenTelemetry Collector, which Prometheus then scrapes for data.
Supported Version | Image | Helm Chart |
---|---|---|
2.34.0 | prom/prometheus:v2.34.0 | Prometheus Helm chart |
Note: It is the user’s responsibility to provide persistent storage for Prometheus if they want to preserve historical data.
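If you want Prometheus to retain historical data, the chart's persistent volume support can be enabled in the values file used in the Helm deployment below. A minimal sketch, assuming a storage class named [STORAGE_CLASS_NAME] exists in your cluster and an 8Gi volume is sufficient:

```yaml
# Optional override for prometheus-values.yaml: back the Prometheus server with a PVC.
# [STORAGE_CLASS_NAME] and the 8Gi size are placeholders; adjust them for your environment.
server:
  persistentVolume:
    enabled: true
    size: 8Gi
    storageClass: [STORAGE_CLASS_NAME]
```

If you use this, set `enabled: true` on the existing `server.persistentVolume` block in the sample values file below rather than adding a second `server` section.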
Prometheus Helm Deployment
Here’s a minimal Prometheus configuration using insecure skip verify; for proper TLS, add a ca_file signed by the same CA as the Container Storage Module Observability certificate. For more details, see the Prometheus configuration documentation.
-
Create a values file named `prometheus-values.yaml`:

```yaml
# prometheus-values.yaml
alertmanager:
  enabled: false
nodeExporter:
  enabled: false
pushgateway:
  enabled: false
kubeStateMetrics:
  enabled: false
configmapReload:
  prometheus:
    enabled: false
server:
  enabled: true
  image:
    repository: quay.io/prometheus/prometheus
    tag: v2.34.0
    pullPolicy: IfNotPresent
  persistentVolume:
    enabled: false
  service:
    type: NodePort
    servicePort: 9090
extraScrapeConfigs: |
  - job_name: 'karavi-metrics-[CSI-DRIVER]'
    scrape_interval: 5s
    scheme: https
    static_configs:
      - targets: ['otel-collector:8443']
    tls_config:
      insecure_skip_verify: true
```
-
If using Rancher, create a ServiceMonitor:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: otel-collector
  namespace: powerflex
spec:
  endpoints:
  - path: /metrics
    port: exporter-https
    scheme: https
    tlsConfig:
      insecureSkipVerify: true
  selector:
    matchLabels:
      app.kubernetes.io/instance: karavi-observability
      app.kubernetes.io/name: otel-collector
```
-
Add the Prometheus Helm chart repository.

On your terminal, run each of the commands below:

```console
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo add stable https://charts.helm.sh/stable
helm repo update
```
-
Install the Helm chart.

On your terminal, run the command below:

```console
helm install prometheus prometheus-community/prometheus -n [CSM_NAMESPACE] -f prometheus-values.yaml
```
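After the chart installs, you can optionally confirm that Prometheus is running and look up the NodePort it was assigned. This assumes the release name `prometheus` used above, which yields a service named `prometheus-server`:

```console
# Check that the Prometheus server pod reaches the Running state
kubectl get pods -n [CSM_NAMESPACE]

# Look up the NodePort mapped to the prometheus-server service (servicePort 9090)
kubectl get svc prometheus-server -n [CSM_NAMESPACE]
```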
Grafana
The Grafana dashboards require Grafana to be deployed in the same Kubernetes cluster as Container Storage Module Observability. Below are the configuration details required to properly set up Grafana to work with Container Storage Module Observability.
Supported Version | Helm Chart |
---|---|
10.x | Grafana Helm chart |
Note: From Grafana 10.x, deprecation warnings for Angular plugins will appear in the UI, but the dashboards still work. Grafana 11.x is not supported yet.
Grafana must be configured with the following data sources/plugins:
- Prometheus data source
- Data Table plugin
- Pie Chart plugin
- SimpleJson data source
Settings for the Grafana Prometheus data source:
Setting | Value | Additional Information |
---|---|---|
Name | Prometheus | |
Type | prometheus | |
URL | http://PROMETHEUS_IP:PORT | The IP/PORT of your running Prometheus instance |
Access | Proxy | |
Settings for the Grafana SimpleJson data source:
Setting | Value |
---|---|
Name | Karavi-Topology |
URL | `https://karavi-topology.[CSM_NAMESPACE].svc.cluster.local:8443` (the Container Storage Module Observability Topology service) |
Skip TLS Verify | Enabled (If not using CA certificate) |
With CA Cert | Enabled (If using CA certificate) |
Grafana Helm Deployment
Below are the steps to deploy a new Grafana instance into your Kubernetes cluster:
-
Create a ConfigMap.

When using a network that requires a decryption certificate, the Grafana server MUST be configured with the necessary certificate. If no certificate is required, skip to step 2.

Create a file named `grafana-configmap.yaml`. The file should look like this:

```yaml
# grafana-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: certs-configmap
  namespace: [CSM_NAMESPACE]
  labels:
    certs-configmap: "1"
data:
  ca-certificates.crt: |-
    -----BEGIN CERTIFICATE-----
    ReplaceMeWithActualCaCERT=
    -----END CERTIFICATE-----
```

NOTE: Replace the placeholder with an actual CA certificate; the ConfigMap will not work otherwise.

On your terminal, run the command below:

```console
kubectl create -f grafana-configmap.yaml
```
-
Create a values file named `grafana-values.yaml`. The file should look like this:

```yaml
# grafana-values.yaml
image:
  repository: grafana/grafana
  tag: 10.4.3
  sha: ""
  pullPolicy: IfNotPresent
service:
  type: NodePort
## Administrator credentials when not using an existing Secret
adminUser: admin
adminPassword: admin
## Pass the plugins you want to be installed as a list.
##
plugins:
  - grafana-simple-json-datasource
  - briangann-datatable-panel
  - grafana-piechart-panel
## Configure grafana datasources
## ref: http://docs.grafana.org/administration/provisioning/#datasources
##
datasources:
  datasources.yaml:
    apiVersion: 1
    datasources:
      - name: Karavi-Topology
        type: grafana-simple-json-datasource
        access: proxy
        url: 'https://karavi-topology:8443'
        isDefault: null
        version: 1
        editable: true
        jsonData:
          tlsSkipVerify: true
      - name: Prometheus
        type: prometheus
        access: proxy
        url: 'http://prometheus-server:9090'
        isDefault: null
        version: 1
        editable: true
testFramework:
  enabled: false
sidecar:
  datasources:
    enabled: true
  dashboards:
    enabled: true
## Additional grafana server ConfigMap mounts
## Defines additional mounts with ConfigMap. ConfigMap must be manually created in the namespace.
extraConfigmapMounts: []
# If you created a ConfigMap in the previous step, delete [] and uncomment the lines below
#  - name: certs-configmap
#    mountPath: /etc/ssl/certs/ca-certificates.crt
#    subPath: ca-certificates.crt
#    configMap: certs-configmap
#    readOnly: true
```
-
Add the Grafana Helm chart repository.

On your terminal, run each of the commands below:

```console
helm repo add grafana https://grafana.github.io/helm-charts
helm repo update
```
-
Install the Helm chart.

On your terminal, run the command below:

```console
helm install grafana grafana/grafana -n [CSM_NAMESPACE] -f grafana-values.yaml
```
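After the chart installs, you can optionally look up the NodePort assigned to Grafana. With the sample values file above the UI credentials are admin/admin; if you removed the `adminPassword` override, the chart stores a generated password in a Secret named after the release:

```console
# Look up the NodePort assigned to the Grafana service
kubectl get svc grafana -n [CSM_NAMESPACE]

# Only needed if adminPassword was not set in grafana-values.yaml
kubectl get secret grafana -n [CSM_NAMESPACE] -o jsonpath="{.data.admin-password}" | base64 --decode; echo
```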
Other Deployment Methods
Importing Container Storage Module for Observability Dashboards
Once Grafana is properly configured, you can import the pre-built observability dashboards. Log into Grafana and click the + icon in the side menu. Then click Import. From here you can upload the JSON files or paste the JSON text directly into the text area. Below are the locations of the dashboards that can be imported:
Dashboard | Description |
---|---|
PowerFlex: I/O Performance by Kubernetes Node | Provides visibility into the I/O performance metrics (IOPS, bandwidth, latency) by Kubernetes node |
PowerFlex: I/O Performance by Provisioned Volume | Provides visibility into the I/O performance metrics (IOPS, bandwidth, latency) by volume |
PowerFlex: Storage Pool Consumption By CSI Driver | Provides visibility into the total, used and available capacity for a storage class and associated underlying storage construct |
CSI Driver Provisioned Volume Topology | Provides visibility into Dell CSI (Container Storage Interface) driver provisioned volume characteristics in Kubernetes correlated with volumes on the storage system. |
Dashboard | Description |
---|---|
PowerStore: I/O Performance by Provisioned Volume | Provides visibility into the I/O performance metrics (IOPS, bandwidth, latency) by volume |
PowerStore: I/O Performance by File System | Provides visibility into the I/O performance metrics (IOPS, bandwidth, latency) by filesystem |
PowerStore: Array and Storage Class Consumption By CSI Driver | Provides visibility into the total, used and available capacity for a storage class and associated underlying storage construct |
CSI Driver Provisioned Volume Topology | Provides visibility into Dell CSI (Container Storage Interface) driver provisioned volume characteristics in Kubernetes correlated with volumes on the storage system. |
Dashboard | Description |
---|---|
PowerScale: I/O Performance by Cluster | Provides visibility into the I/O performance metrics (IOPS, bandwidth) by cluster |
PowerScale: Capacity by Cluster | Provides visibility into the total, used, available capacity and directory quota capacity by cluster |
PowerScale: Capacity by Quota | Provides visibility into the subscribed, remaining capacity and usage by quota |
CSI Driver Provisioned Volume Topology | Provides visibility into Dell CSI (Container Storage Interface) driver provisioned volume characteristics in Kubernetes correlated with volumes on the storage system. |
Dashboard | Description |
---|---|
PowerMax: PowerMax Capacity | Provides visibility into the subscribed, used, available capacity for a storage class and associated underlying storage construct |
PowerMax: PowerMax Performance | Provides visibility into the I/O performance metrics (IOPS, bandwidth) by storage group and volume |
CSI Driver Provisioned Volume Topology | Provides visibility into Dell CSI (Container Storage Interface) driver provisioned volume characteristics in Kubernetes correlated with volumes on the storage system. |
Dynamic Configuration
Some parameters can be configured/updated during runtime without restarting the Container Storage Module for Observability services. These parameters will be stored in ConfigMaps that can be updated on the Kubernetes cluster. This will automatically change the settings on the services.
PowerFlex:

ConfigMap | Observability Service | Parameters |
---|---|---|
karavi-metrics-powerflex-configmap | karavi-metrics-powerflex | |
karavi-topology-configmap | karavi-topology | |

PowerStore:

ConfigMap | Observability Service | Parameters |
---|---|---|
karavi-metrics-powerstore-configmap | karavi-metrics-powerstore | |
karavi-topology-configmap | karavi-topology | |

PowerScale:

ConfigMap | Observability Service | Parameters |
---|---|---|
karavi-metrics-powerscale-configmap | karavi-metrics-powerscale | |
karavi-topology-configmap | karavi-topology | |

PowerMax:

ConfigMap | Observability Service | Parameters |
---|---|---|
karavi-metrics-powermax-configmap | karavi-metrics-powermax | |
karavi-topology-configmap | karavi-topology | |
To update any of these settings, run the following command on the Kubernetes cluster, then save the updated ConfigMap data.

```console
kubectl edit configmap [CONFIG_MAP_NAME] -n [CSM_NAMESPACE]
```
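To review the current values before editing, you can first dump the ConfigMap data, for example:

```console
# Inspect the parameter values currently stored in the ConfigMap
kubectl get configmap [CONFIG_MAP_NAME] -n [CSM_NAMESPACE] -o yaml
```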
Tracing
Container Storage Module Observability is instrumented to report trace data to Zipkin. This helps gather timing data needed to troubleshoot latency problems with Container Storage Module Observability. Follow the instructions below to enable the reporting of trace data:
-
Deploy a Zipkin instance in the CSM namespace and expose the service as NodePort for external access. Save the manifest below to a file and apply it with kubectl.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: zipkin
  labels:
    app.kubernetes.io/name: zipkin
    app.kubernetes.io/instance: zipkin-instance
    app.kubernetes.io/managed-by: zipkin-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: zipkin
      app.kubernetes.io/instance: zipkin-instance
  template:
    metadata:
      labels:
        app.kubernetes.io/name: zipkin
        app.kubernetes.io/instance: zipkin-instance
    spec:
      containers:
        - name: zipkin
          image: "openzipkin/zipkin"
          imagePullPolicy: IfNotPresent
          env:
            - name: "STORAGE_TYPE"
              value: "mem"
            - name: "TRANSPORT_TYPE"
              value: "http"
---
apiVersion: v1
kind: Service
metadata:
  name: zipkin
  labels:
    app.kubernetes.io/name: zipkin
    app.kubernetes.io/instance: zipkin-instance
    app.kubernetes.io/managed-by: zipkin-service
spec:
  ports:
    - port: 9411
      targetPort: 9411
      protocol: TCP
  type: "NodePort"
  selector:
    app.kubernetes.io/name: zipkin
    app.kubernetes.io/instance: zipkin-instance
```
-
Add the Zipkin URI to the Container Storage Module Observability ConfigMaps. Based on the manifest above, Zipkin will be running on port 9411.
Update the ConfigMaps from the [table above](#dynamic-configuration). Here is an example updating the karavi-topology-configmap based on the deployment manifest above.
```console
kubectl edit configmap/karavi-topology-configmap -n [CSM_NAMESPACE]
```
Update the ZIPKIN_URI and ZIPKIN_PROBABILITY values and save the ConfigMap.
```console
ZIPKIN_URI: "http://zipkin:9411/api/v2/spans"
ZIPKIN_SERVICE_NAME: "karavi-topology"
ZIPKIN_PROBABILITY: "1.0"
```
Once the ConfigMaps are updated, the changes will automatically be applied and tracing can be seen by accessing Zipkin on the exposed port.
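To find the exposed port, you can query the Zipkin service created in step 1 (a sketch, assuming the manifest above was applied unchanged):

```console
# Look up the NodePort mapped to Zipkin's port 9411
kubectl get svc zipkin -n [CSM_NAMESPACE]

# The Zipkin UI is then reachable at http://[NODE_IP]:[NODE_PORT]
```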
Updating Storage System Credentials
If storage system credentials are updated in the CSI Driver, update Container Storage Module Observability with the new credentials.
When Container Storage Module for Observability uses the Authorization module
All storage system requests by Container Storage Module Observability will go through the Authorization module. Perform the following steps:
Update the Authorization Module Token
CSI Driver for PowerFlex
-
Delete the current `proxy-authz-tokens` Secret from the CSM namespace.

```console
kubectl delete secret proxy-authz-tokens -n [CSM_NAMESPACE]
```
-
Copy the `proxy-authz-tokens` Secret from the CSI Driver for Dell PowerFlex to the CSM namespace.

```console
kubectl get secret proxy-authz-tokens -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/namespace: [CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | kubectl create -f -
```
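You can optionally confirm that the Secret was recreated in the CSM namespace:

```console
kubectl get secret proxy-authz-tokens -n [CSM_NAMESPACE]
```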
CSI Driver for PowerScale
-
Delete the current `isilon-proxy-authz-tokens` Secret from the CSM namespace.

```console
kubectl delete secret isilon-proxy-authz-tokens -n [CSM_NAMESPACE]
```
-
Copy the `proxy-authz-tokens` Secret from the CSI Driver for PowerScale namespace to the CSM namespace, renaming it to `isilon-proxy-authz-tokens`.

```console
kubectl get secret proxy-authz-tokens -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/namespace: [CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | sed 's/name: proxy-authz-tokens/name: isilon-proxy-authz-tokens/' | kubectl create -f -
```
CSI Driver for PowerMax
-
Delete the current `powermax-proxy-authz-tokens` Secret from the CSM namespace.

```console
kubectl delete secret powermax-proxy-authz-tokens -n [CSM_NAMESPACE]
```
-
Copy the `proxy-authz-tokens` Secret from the CSI Driver for PowerMax namespace to the CSM namespace, renaming it to `powermax-proxy-authz-tokens`.

```console
kubectl get secret proxy-authz-tokens -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/namespace: [CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | sed 's/name: proxy-authz-tokens/name: powermax-proxy-authz-tokens/' | kubectl create -f -
```
Update Storage Systems
If the list of storage systems managed by a Dell CSI Driver has changed, the following steps can be performed to update Container Storage Module Observability to reference the updated systems:
CSI Driver for PowerFlex
-
Delete the current `karavi-authorization-config` Secret from the CSM namespace.

```console
kubectl delete secret karavi-authorization-config -n [CSM_NAMESPACE]
```
-
Copy the `karavi-authorization-config` Secret from the CSI Driver for PowerFlex namespace to the Container Storage Module Observability namespace.

```console
kubectl get secret karavi-authorization-config -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/namespace: [CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | kubectl create -f -
```
CSI Driver for PowerScale
-
Delete the current `isilon-karavi-authorization-config` Secret from the CSM namespace.

```console
kubectl delete secret isilon-karavi-authorization-config -n [CSM_NAMESPACE]
```
-
Copy the `karavi-authorization-config` Secret from the CSI Driver for PowerScale namespace to the Container Storage Module Observability namespace, renaming it to `isilon-karavi-authorization-config`.

```console
kubectl get secret karavi-authorization-config -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/namespace: [CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | sed 's/name: karavi-authorization-config/name: isilon-karavi-authorization-config/' | kubectl create -f -
```
CSI Driver for PowerMax
-
Delete the current `powermax-karavi-authorization-config` Secret from the CSM namespace.

```console
kubectl delete secret powermax-karavi-authorization-config -n [CSM_NAMESPACE]
```
-
Copy the `karavi-authorization-config` and `proxy-server-root-certificate` Secrets from the CSI Driver for PowerMax namespace to the CSM namespace, renaming `karavi-authorization-config` to `powermax-karavi-authorization-config`.

```console
kubectl get secret karavi-authorization-config proxy-server-root-certificate -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/namespace: [CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | sed 's/name: karavi-authorization-config/name: powermax-karavi-authorization-config/' | kubectl create -f -
```
When Container Storage Module for Observability does not use the Authorization module
In this case, storage system requests made by Container Storage Module Observability are not routed through the Authorization module. Perform the following steps:
CSI Driver for PowerFlex
-
Delete the current `vxflexos-config` Secret from the CSM namespace.

```console
kubectl delete secret vxflexos-config -n [CSM_NAMESPACE]
```
-
Copy the `vxflexos-config` Secret from the CSI Driver for PowerFlex namespace to the CSM namespace.

```console
kubectl get secret vxflexos-config -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/namespace: [CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | kubectl create -f -
```
If the CSI driver Secret name is not the default `vxflexos-config`, use the following command to copy the Secret:

```console
kubectl get secret [VXFLEXOS-CONFIG] -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/name: [VXFLEXOS-CONFIG]/name: vxflexos-config/' | sed 's/namespace: [CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | kubectl create -f -
```
CSI Driver for PowerStore
-
Delete the current `powerstore-config` Secret from the CSM namespace.

```console
kubectl delete secret powerstore-config -n [CSM_NAMESPACE]
```
-
Copy the `powerstore-config` Secret from the CSI Driver for PowerStore namespace to the CSM namespace.

```console
kubectl get secret powerstore-config -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/namespace: [CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | kubectl create -f -
```
If the CSI driver Secret name is not the default `powerstore-config`, use the following command to copy the Secret:

```console
kubectl get secret [POWERSTORE-CONFIG] -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/name: [POWERSTORE-CONFIG]/name: powerstore-config/' | sed 's/namespace: [CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | kubectl create -f -
```
CSI Driver for PowerScale
-
Delete the current `isilon-creds` Secret from the CSM namespace.

```console
kubectl delete secret isilon-creds -n [CSM_NAMESPACE]
```
-
Copy the `isilon-creds` Secret from the CSI Driver for PowerScale namespace to the CSM namespace.

```console
kubectl get secret isilon-creds -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/namespace: [CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | kubectl create -f -
```
If the CSI driver Secret name is not the default `isilon-creds`, use the following command to copy the Secret:

```console
kubectl get secret [ISILON-CREDS] -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/name: [ISILON-CREDS]/name: isilon-creds/' | sed 's/namespace: [CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | kubectl create -f -
```
CSI Driver for PowerMax
-
Delete the Secrets referenced in the `powermax-reverseproxy-config` ConfigMap from the CSM namespace.

```console
for secret in $(kubectl get configmap powermax-reverseproxy-config -n [CSM_NAMESPACE] -o jsonpath="{.data.config\.yaml}" | grep arrayCredentialSecret | awk 'BEGIN{FS=":"}{print $2}' | uniq)
do
  kubectl delete secret $secret -n [CSM_NAMESPACE]
done
```
-
Delete the current `powermax-reverseproxy-config` ConfigMap from the CSM namespace.

```console
kubectl delete configmap powermax-reverseproxy-config -n [CSM_NAMESPACE]
```
-
Copy the `powermax-reverseproxy-config` ConfigMap from the CSI Driver for PowerMax namespace to the CSM namespace.

```console
kubectl get configmap powermax-reverseproxy-config -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/namespace: [CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | kubectl create -f -
```
If the CSI driver ConfigMap name is not the default `powermax-reverseproxy-config`, use the following command to copy the ConfigMap:

```console
kubectl get configmap [POWERMAX-REVERSEPROXY-CONFIG] -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/name: [POWERMAX-REVERSEPROXY-CONFIG]/name: powermax-reverseproxy-config/' | sed 's/namespace: [CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | kubectl create -f -
```
-
Copy the Secrets referenced in `powermax-reverseproxy-config` from the CSI Driver for Dell PowerMax namespace to the CSM namespace.

```console
for secret in $(kubectl get configmap powermax-reverseproxy-config -n [CSI_DRIVER_NAMESPACE] -o jsonpath="{.data.config\.yaml}" | grep arrayCredentialSecret | awk 'BEGIN{FS=":"}{print $2}' | uniq)
do
  kubectl get secret $secret -n [CSI_DRIVER_NAMESPACE] -o yaml | sed "s/namespace: [CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/" | kubectl create -f -
done
```
If the CSI driver ConfigMap name is not the default `powermax-reverseproxy-config`, use the following command to copy the Secrets:

```console
for secret in $(kubectl get configmap [POWERMAX-REVERSEPROXY-CONFIG] -n [CSI_DRIVER_NAMESPACE] -o jsonpath="{.data.config\.yaml}" | grep arrayCredentialSecret | awk 'BEGIN{FS=":"}{print $2}' | uniq)
do
  kubectl get secret $secret -n [CSI_DRIVER_NAMESPACE] -o yaml | sed "s/namespace: [CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/" | kubectl create -f -
done
```
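As a final check, you can confirm that the ConfigMap and the referenced credential Secrets now exist in the CSM namespace (the Secret names come from the `arrayCredentialSecret` entries in the reverse proxy configuration):

```console
# Verify the reverse proxy ConfigMap was copied
kubectl get configmap powermax-reverseproxy-config -n [CSM_NAMESPACE]

# List Secrets in the CSM namespace and confirm the array credential Secrets are present
kubectl get secrets -n [CSM_NAMESPACE]
```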