Enabling Replication in CSI PowerScale
The Container Storage Modules (CSM) Replication sidecar is a helper container that is installed alongside a CSI driver to facilitate replication functionality. Such CSI drivers must implement dell-csi-extensions calls.
The CSI driver for Dell PowerScale supports the necessary extension calls from dell-csi-extensions. To be able to provision replicated volumes, complete the steps described in the following sections.
On Storage Array
Ensure that the SyncIQ service is enabled on both arrays. You can verify this by navigating to the SyncIQ section under the Data protection tab.
The current implementation supports one-to-one replication, so you need to ensure that each array can reach the other.
If you wish to use SyncIQ encryption, you should first add a server certificate by navigating to Data protection->SyncIQ->Settings and adding it in the Server Certificates section.
After adding the certificate, you can choose to use it by selecting Encrypt SyncIQ connection from the dropdown.
After that, you can add the corresponding certificates of the other arrays under SyncIQ->Certificates, ensuring you've added the certificate of the array you want to replicate to.
The same steps should be done in the reverse direction, so that array-1 has the array-2 certificate visible in its SyncIQ->Certificates tab and array-2 has the array-1 certificate visible in its own SyncIQ->Certificates tab.
In Kubernetes
Ensure you have installed the CRDs and the replication controller in your clusters.
To verify that everything is in order, you can execute the following commands:
- Check the controller pods:

```shell
kubectl get pods -n dell-replication-controller
```

Pods should be in the Running state.
- Check that the controller config map is properly populated:

```shell
kubectl get cm -n dell-replication-controller dell-replication-controller-config -o yaml
```

The data field should be populated with the cluster ID of your choosing and, if using a multi-cluster installation, the targets: parameter should contain a list of target cluster IDs.
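As a sketch of what a properly populated config map might look like (the cluster IDs source and target are illustrative, and the exact layout may vary by CSM Replication version):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: dell-replication-controller-config
  namespace: dell-replication-controller
data:
  config.yaml: |
    clusterId: source      # ID of the cluster this controller runs in
    targets:
      - target             # IDs of the remote clusters (multi-cluster only)
```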
Installing Driver With Replication Module
To install the driver with replication enabled, you need to ensure you have set controller.replication.enabled to true in your copy of the example values file.
Here is an example of what that would look like:
```yaml
...
# controller: configure controller specific parameters
controller:
  ...
  # replication: allows to configure replication
  replication:
    enabled: true
    image: dellemc/dell-csi-replicator:v1.2.0
    replicationContextPrefix: "powerscale"
    replicationPrefix: "replication.storage.dell.com"
...
```
You can leave the other parameters, such as replicationPrefix, as they are.
After enabling the replication module, you can continue to install the CSI driver for PowerScale following the usual installation procedure. Just ensure you've added the necessary array connection information to the secret.
If you plan to use encryption, you need to set replicationCertificateID in the array connection secret. To check the ID of the certificate for the cluster, navigate to Data protection->SyncIQ->Settings, find your certificate in the Server Certificates section, and then push the View/Edit button. It will open a dialog that contains the Id field. Use the value of that field to set replicationCertificateID.
NOTE: You need to install the driver on ALL clusters where you want to use replication. Both arrays must be accessible from each cluster.
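As a sketch, an array entry in the connection secret with replicationCertificateID set might look like the following. The field names reflect the common CSI PowerScale secret layout, but verify them against the sample secret shipped with your driver version; all values here are illustrative:

```yaml
isilonClusters:
  - clusterName: "cluster-1"        # must match ClusterName in the storage class
    username: "admin"
    password: "password"
    endpoint: "1.2.3.4"
    endpointPort: "8080"
    isDefault: true
    replicationCertificateID: "a9f3..."   # Id value from Data protection->SyncIQ->Settings
```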
Creating Storage Classes
To provision replicated volumes, you need to create adequately configured storage classes on both the source and target clusters.
A pair of storage classes on the source, and target clusters would be essentially
mirrored copies of one another.
You can create them manually or with the help of repctl.
Manual Storage Class Creation
You can find a sample replication enabled storage class in the driver repository here.
It will look like this:
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: isilon-replication
provisioner: csi-isilon.dellemc.com
reclaimPolicy: Delete
allowVolumeExpansion: true
volumeBindingMode: Immediate
parameters:
  replication.storage.dell.com/isReplicationEnabled: "true"
  replication.storage.dell.com/remoteStorageClassName: "isilon-replication"
  replication.storage.dell.com/remoteClusterID: "target"
  replication.storage.dell.com/remoteSystem: "cluster-2"
  replication.storage.dell.com/rpo: Five_Minutes
  replication.storage.dell.com/ignoreNamespaces: "false"
  replication.storage.dell.com/volumeGroupPrefix: "csi"
  AccessZone: System
  IsiPath: /ifs/data/csi
  RootClientEnabled: "false"
  ClusterName: cluster-1
```
Let’s go through each parameter and what it means:
- replication.storage.dell.com/isReplicationEnabled: if set to true, marks this storage class as replication enabled; just leave it as true.
- replication.storage.dell.com/remoteStorageClassName: points to the name of the remote storage class. If you are using replication with a multi-cluster configuration, you can make it the same as the current storage class name.
- replication.storage.dell.com/remoteClusterID: represents the ID of the remote cluster. It is the same ID you put in the replication controller config map.
- replication.storage.dell.com/remoteSystem: the name of the remote system; it should match the clusterName of the remote array as listed in the array connection secret.
- replication.storage.dell.com/rpo: an acceptable amount of data, measured in units of time, that may be lost due to a failure.
NOTE: Available RPO values: "Five_Minutes", "Fifteen_Minutes", "Thirty_Minutes", "One_Hour", "Six_Hours", "Twelve_Hours", "One_Day".
- replication.storage.dell.com/ignoreNamespaces: if set to true, the PowerScale driver will ignore which namespace volumes are created in and put every volume created using this storage class into a single volume group.
- replication.storage.dell.com/volumeGroupPrefix: the string added to the volume group name to differentiate volume groups.
- AccessZone: the name of the access zone a volume can be created in.
- IsiPath: the base path for the volumes to be created on the PowerScale cluster.
- RootClientEnabled: determines whether the driver should enable root squashing or not.
- ClusterName: the name of the PowerScale cluster where the PV will be provisioned, specified as it was listed in the array connection secret.
After figuring out how your storage classes should look, you just need to apply them to your Kubernetes clusters with kubectl.
Storage Class Creation With repctl
repctl can simplify storage class creation by creating a pair of mirrored storage classes in both clusters (using a single storage class configuration) in one command.
To create storage classes with repctl, you need to fill in the config file with the necessary information.
You can find an example here, copy it, and modify it to your needs.
If you open this example, you will see many of the same fields and parameters you can modify in the storage class.
Let's use the same example from the manual installation and see what the config would look like:
```yaml
sourceClusterID: "source"
targetClusterID: "target"
name: "isilon-replication"
driver: "isilon"
reclaimPolicy: "Delete"
replicationPrefix: "replication.storage.dell.com"
parameters:
  rpo: "Five_Minutes"
  ignoreNamespaces: "false"
  volumeGroupPrefix: "csi"
  accessZone: "System"
  isiPath: "/ifs/data/csi"
  rootClientEnabled: "false"
  clusterName:
    source: "cluster-1"
    target: "cluster-2"
```
NOTE: Both storage classes are expected to use an access zone with the same name.
After preparing the config, you can apply it to both clusters with repctl. Before you do this, ensure you've added your clusters to repctl via the add command.
To create the storage classes, just run:

```shell
./repctl create sc --from-config <config-file>
```

The storage classes will be applied to both clusters. After creating them, you can make sure they are in place by using the ./repctl get storageclasses command.
Provisioning Replicated Volumes
After installing the driver and creating the storage classes, you are ready to create volumes using the newly created storage classes.
On your source cluster, create a PersistentVolumeClaim using one of the replication-enabled Storage Classes. The CSI PowerScale driver will create a volume on the array, add it to a VolumeGroup and configure replication using the parameters provided in the replication enabled Storage Class.
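As a sketch, a PersistentVolumeClaim on the source cluster might look like the following; the claim name, namespace, access mode, and size are illustrative, and the storage class refers to the isilon-replication example above:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: replicated-pvc          # illustrative name
  namespace: default
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
  storageClassName: isilon-replication   # replication-enabled storage class
```

Once such a PVC is bound, the replication sidecar groups the backing volume into a DellCSIReplicationGroup custom resource; depending on your CSM Replication version, you can typically inspect these with kubectl get replicationgroups or ./repctl get rg.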
Supported Replication Actions
The CSI PowerScale driver supports the following list of replication actions: