Chapter 5. Deployment and Administration – IBM Spectrum Scale CSI Driver for Container Persistent Storage

The IBM Spectrum Scale CSI driver provides multiple provisioning models: static provisioning, lightweight dynamic provisioning, and file set-based dynamic provisioning. Users can choose a provisioning model depending on their use cases and application workloads.
This chapter describes how to configure and deploy the IBM Spectrum Scale CSI driver on the Kubernetes and Red Hat OpenShift platforms to consume persistent storage for containers. It also describes how to provision volumes to ingest data on the IBM Spectrum Scale file system.
5.1 IBM Spectrum Scale CSI Driver configuration
The Container Storage Interface (CSI) driver for IBM Spectrum Scale storage systems enables container orchestrators such as Kubernetes to manage the life cycle of persistent storage. An official Operator is available to deploy and manage the IBM Spectrum Scale CSI driver.
When the IBM Spectrum Scale file system is used as the back-end storage for persistent volumes, the CSI driver must run on specific nodes of the IBM Spectrum Scale cluster. The node types and the required conditions for IBM Spectrum Scale and Kubernetes are described in Table 5-1.
Table 5-1 Node type required conditions of IBM Spectrum Scale and Kubernetes

Node type                          IBM Spectrum Scale   Kubernetes
Master node(s)                     Not required         Required
Worker node(s)                     Required             Required
IBM Spectrum Scale GUI node(s)     Required             Not required
IBM Spectrum Scale NSD node(s)     Required             Not required
The following environments are supported:
Supported OS: Red Hat Enterprise Linux 7.5, 7.6, and 7.7
IBM Spectrum Scale version: 5.0.4.1 or later
Kubernetes version: 1.13 to 1.17
OpenShift version: 4.2
5.2 Deployment of IBM Spectrum Scale CSI Driver
There are two methods to deploy and initialize IBM Spectrum Scale CSI driver:
Using Operator Lifecycle Manager (OLM)
Using Operator with command-line interface
5.2.1 Using Operator Lifecycle Manager
OLM runs by default on Red Hat OpenShift Container Platform, so it is the preferred deployment method on that platform. For more information, see Understand OLM. The following steps describe how to deploy the IBM Spectrum Scale CSI driver by using Operator Lifecycle Manager (OLM).
1. For Kubernetes, because OLM is not installed by default, install OLM: https://github.com/operator-framework/operator-lifecycle-manager/blob/master/doc/install/install.md
2. Create the Operator from the Red Hat OpenShift console, as follows:
a. In the Red Hat OpenShift Container Platform management portal, click Operators → OperatorHub in the left panel.
b. From the Project drop-down list, select the project or create a new project by clicking Create Project.
c. Select IBM Spectrum Scale CSI Plugin Operator from the Storage section of the operator catalog and click Install.
d. On the Operator install page, select a namespace from the available options, select the approval strategy (automatic or manual), and click Subscribe. The Installed Operators page appears, where IBM Spectrum Scale CSI Plugin Operator is listed as successfully installed.
e. On the Installed Operators page, click IBM Spectrum Scale CSI Plugin Operator, select the IBM CSI Spectrum Scale Driver tab, and click Create CSIScaleOperator. The Create CSIScaleOperator page displays.
f. On this page, an editor displays, where you can update the manifest file according to your environment.
5.2.2 Using Operator with command line interface (CLI)
Installing IBM Spectrum Scale Container Storage Interface Driver using Operator involves the following phases:
1. Deploy the Operator on your cluster
2. Use the Operator for deploying IBM Spectrum Scale Container Storage Interface Driver
Phase 1: Deploying the Operator
To deploy Operator on your cluster, complete the following steps:
1. Create a namespace as shown in Example 5-1.
Example 5-1 Create a namespace
kubectl create namespace ibm-spectrum-scale-csi-driver
2. Create the Operator as shown in Example 5-2.
Example 5-2 Create the Operator
kubectl create -f https://raw.githubusercontent.com/IBM/ibm-spectrum-scale-csi/v1.0.1/generated/installer/ibm-spectrum-scale-csi-operator.yaml
3. Verify that the Operator is deployed, and the Operator pod is in the Running state, as shown in Example 5-3.
Example 5-3 Verify the Operator is deployed
# kubectl get pods -n ibm-spectrum-scale-csi-driver
NAME                                               READY   STATUS    RESTARTS   AGE
ibm-spectrum-scale-csi-operator-6d4bd865f6-m297v   2/2     Running   0          25s
Phase 2: Deploying IBM Spectrum Scale Container Storage Interface Driver
Now that the Operator is up and running, you must access the Operator’s API and request a deployment using CSIScaleOperator Custom Resource. Follow these steps:
1. Create a secret with the IBM Spectrum Scale GUI server's credentials in the ibm-spectrum-scale-csi-driver namespace. The secret stores the credentials that are used to connect to the IBM Spectrum Scale REST API server. Secrets are defined in a data field with base64-encoded values in the manifest file. The GUI user must have the CsiAdmin role.
A sample manifest file for the GUI secret is shown in Example 5-4.
Example 5-4 Sample manifest file for GUI Secret
apiVersion: v1
kind: Secret
metadata:
  name: [secret_name]
  labels:
    product: ibm-spectrum-scale-csi
data:
  username: [base64_username]
  password: [base64_password]
To create the secret, issue the following command:
kubectl apply -f secrets.yaml -n ibm-spectrum-scale-csi-driver
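The base64-encoded values for the username and password fields can be generated with the standard base64 utility. The credentials in this sketch are placeholders; substitute your own GUI credentials.

```shell
# Encode GUI credentials for the Secret manifest.
# "csiadmin" and "password" are placeholder values; use your own.
echo -n "csiadmin" | base64    # prints Y3NpYWRtaW4=
echo -n "password" | base64    # prints cGFzc3dvcmQ=
```

Note that echo -n (no trailing newline) matters: encoding the credential with a trailing newline produces a different value and the GUI login fails.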
 
Note: If secureSslMode=true is specified, a ConfigMap must also be created in the ibm-spectrum-scale-csi-driver namespace with the certificate resource name that is mentioned in the ibm-spectrum-scale-csi-operator-cr.yaml file. See IBM Knowledge Center for the IBM Spectrum Scale Container Storage Interface driver configurations for certificates.
2. Download the CSIScaleOperator custom resource file (ibm-spectrum-scale-csi-operator-cr.yaml) from GitHub.
3. To deploy IBM Spectrum Scale Container Storage Interface Driver, configure the ibm-spectrum-scale-csi-operator-cr.yaml file to suit your requirements and issue this command:
kubectl apply -f ibm-spectrum-scale-csi-operator-cr.yaml
For more information, see the IBM Knowledge Center topic on IBM Spectrum Scale Container Storage Interface driver configurations for the Operator.
4. Verify that the IBM Spectrum Scale Container Storage Interface driver is installed, the Operator and driver resources are ready, and the pods are in the Running state, as shown in Example 5-5.
Example 5-5 Verify that the IBM Spectrum Scale Container Storage Interface Driver is installed
# kubectl get pods -n ibm-spectrum-scale-csi-driver
NAME                                               READY   STATUS    RESTARTS   AGE
ibm-spectrum-scale-csi-attacher-0                  1/1     Running   0          5m54s
ibm-spectrum-scale-csi-fpf7w                       2/2     Running   0          5m52s
ibm-spectrum-scale-csi-operator-6d4bd865f6-m297v   2/2     Running   0          34m
ibm-spectrum-scale-csi-pfl2k                       2/2     Running   0          5m52s
ibm-spectrum-scale-csi-provisioner-0               1/1     Running   0          5m53s
ibm-spectrum-scale-csi-vqpk6                       2/2     Running   0          5m52s
5.3 Volume provisioning
To create and deploy the Persistent Volume (PV) on IBM Spectrum Scale, IBM Spectrum Scale Container Storage Interface (CSI) driver supports the following features:
Static provisioning: Ability to use existing directories as persistent volumes
Dynamic provisioning:
 – Lightweight dynamic provisioning: Ability to create directory-based volumes dynamically
 – File Set-based dynamic provisioning: Ability to create file set-based volumes dynamically
5.3.1 Static provisioning
Users might need to share data between traditional and containerized applications. Static provisioning enables users to make the data in existing directories available to applications running on different platforms. In static provisioning, users define and create the PV for existing directories manually. A sample persistent volume definition for static provisioning is shown in Example 5-6.
Example 5-6 Sample PV definition of static provisioning
# cat static-pv-data.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: static-pv-data
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  csi:
    driver: spectrumscale.csi.ibm.com
    volumeHandle: <clusterID>;<filesystem_uuid>;path=<path_to_dir>
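The volumeHandle string encodes the IBM Spectrum Scale cluster ID, the file system UUID, and the directory path, separated by semicolons. A minimal shell sketch of assembling it follows; the cluster ID, UUID, and path values are hypothetical placeholders (on a real cluster, obtain them from command output such as mmlscluster and mmlsfs).

```shell
# Hypothetical values; substitute the real cluster ID, file system
# UUID, and directory path from your IBM Spectrum Scale cluster.
CLUSTER_ID="15635445795430606940"
FS_UUID="0A1B2C3D:5E6F7A8B"
DIR_PATH="/mnt/scalefs1/staticdata"

# Assemble the volumeHandle in the format the driver expects:
# <clusterID>;<filesystem_uuid>;path=<path_to_dir>
VOLUME_HANDLE="${CLUSTER_ID};${FS_UUID};path=${DIR_PATH}"
echo "${VOLUME_HANDLE}"
# 15635445795430606940;0A1B2C3D:5E6F7A8B;path=/mnt/scalefs1/staticdata
```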
Create a PVC (Example 5-7) to bind to the PV created in Example 5-6.
Example 5-7 Sample PVC definition for claiming an existing PV
# cat pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: scale-static-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
This PVC will be bound to an available PV with storage equal to or greater than what is specified in the pvc.yaml file.
 
5.3.2 Dynamic provisioning
Dynamic provisioning is used to dynamically provision the storage back-end volume based on the Storage Class. The Storage Class defines what type of back-end volume should be created by dynamic provisioning. IBM Spectrum Scale CSI Driver supports creation of directory based (also known as lightweight volumes) and file set based (independent as well as dependent) volumes.
The following parameters are supported by IBM Spectrum Scale CSI Driver storageclass:
volBackendFs File system on which the volume should be created. This is a mandatory parameter.
clusterId Cluster ID on which the volume should be created.
volDirBasePath Base directory path relative to the file system mount point under which directory based volumes should be created. If specified, the storageclass is used for directory based (Lightweight) volume creation. If not specified, storageclass creates file set based volumes.
uid UID with which the volume should be created. Optional.
gid GID with which the volume should be created. Optional.
filesetType Type of file set. Valid values are independent or dependent. Default is independent.
parentFileset Specifies the parent file set under which dependent file set should be created. Required only if filesetType is specified as dependent.
inodeLimit Inode limit for file set based volumes. If not specified, default IBM Spectrum Scale inode limit of 1 million is used.
Lightweight dynamic provisioning
When using lightweight dynamic provisioning, users must define a storage class with the volDirBasePath option. The PV is created under volDirBasePath dynamically when the PVC is applied. A sample storage class definition for lightweight dynamic provisioning is shown in Example 5-8.
Example 5-8 Sample storageclass definition of lightweight dynamic provisioning
# cat storageclasslw.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-spectrum-scale-lt
provisioner: spectrumscale.csi.ibm.com
parameters:
  volBackendFs: "scalefs1"
  volDirBasePath: "pvfileset/lwdir"
reclaimPolicy: Delete
File set-based dynamic provisioning
File set-based dynamic provisioning enables users to manage volumes with finer granularity than lightweight (directory-based) dynamic provisioning, as shown in Example 5-9.
Example 5-9 Sample storageclass definition of independent file set-based dynamic provisioning
# cat storageclassfileset.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-spectrum-scale-fileset
provisioner: spectrumscale.csi.ibm.com
parameters:
  volBackendFs: "scalefs1"
  clusterId: "15635445795430606940"
reclaimPolicy: Delete
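For dependent file set-based volumes, the filesetType and parentFileset parameters described in 5.3.2 can be added to the storage class. The following sketch assumes a pre-existing independent file set named pvfileset on scalefs1; the storage class name and the uid and gid values are illustrative.

```yaml
# Hypothetical storage class for dependent file set-based volumes.
# "pvfileset" must already exist as an independent file set on scalefs1.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-spectrum-scale-depfset
provisioner: spectrumscale.csi.ibm.com
parameters:
  volBackendFs: "scalefs1"
  clusterId: "15635445795430606940"
  filesetType: "dependent"
  parentFileset: "pvfileset"
  uid: "1000"
  gid: "1000"
reclaimPolicy: Delete
```

The inodeLimit parameter is omitted here because dependent file sets share the inode space of their parent independent file set.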
Creating a PVC
Create a persistent volume claim (PVC) using the defined storageclass. A sample manifest file is shown in Example 5-10.
Example 5-10 Sample PVC definition for using a storageclass for dynamic provisioning
# cat pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: scale-fset-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: [name_of_your_storageclass]
5.4 Using the volume in a container
After the PVC is created in the Kubernetes or OpenShift cluster, it can be attached to a container to read and write files in the volume. This section describes the steps to attach the PVC to a Pod and use the volume in a container.
 
Note: The following steps describe the Pod creation with the PVC created by either Static or Dynamic provisioning.
1. Define the Pod with the PVC to be attached. The PVC information must be specified in the volumes section, as shown in Example 5-11.
Example 5-11 Define the Pod with PVC to be attached
# cat static_pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: csi-scale-demo-pod
  labels:
    app: nginx
spec:
  containers:
    - name: web-server
      image: nginx
      volumeMounts:
        - name: mypvc
          mountPath: /usr/share/nginx/html/scale
      ports:
        - containerPort: 80
  volumes:
    - name: mypvc
      persistentVolumeClaim:
        claimName: scale-static-pvc
        readOnly: false
2. After the Pod status becomes Running, log in to csi-scale-demo-pod and check the attached volume.
# kubectl exec -it csi-scale-demo-pod -- /bin/bash
3. Now, it is possible to see the files stored in an existing directory, as shown in Example 5-12.
Example 5-12 List files from an existing directory on IBM Spectrum Scale inside the container
root@csi-scale-fsetdemo-pod:/# ls -l /usr/share/nginx/html/scale
total 102400
-rw-r--r-- 1 root root 10485760 Oct 14 11:12 testfile1
-rw-r--r-- 1 root root 10485760 Oct 14 11:12 testfile10
-rw-r--r-- 1 root root 10485760 Oct 14 11:12 testfile2
-rw-r--r-- 1 root root 10485760 Oct 14 11:12 testfile3
-rw-r--r-- 1 root root 10485760 Oct 14 11:12 testfile4
5.5 Releasing the volume on pod deletion
Storage resources can be released when users are done with the pod and the volume. Releasing storage resources is managed by the reclaim policy that is defined in the persistent volume (for dynamically provisioned volumes, the policy is inherited from the storage class). IBM Spectrum Scale supports the following reclaim policies:
Delete When the PVC is deleted, the PV is deleted along with the underlying storage resource, such as a file set or directory. Note that the Delete reclaim policy is not supported for statically provisioned PVCs.
Retain When the PVC is deleted, the PV is retained. Users can release the storage resource by deleting the PV manually, if required.
5.6 Node Selector
The Node Selector feature of Kubernetes controls the nodes on which pods are scheduled. By default, the IBM Spectrum Scale Container Storage Interface driver is deployed on all worker nodes. Node Selector controls on which Kubernetes worker nodes the IBM Spectrum Scale Container Storage Interface driver components are installed. It helps in cases where new worker nodes that do not have IBM Spectrum Scale installed are added to a Kubernetes cluster. It also helps to ensure that StatefulSets run on the desired nodes.
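As a sketch, worker nodes can be labeled (for example, with kubectl label node worker1 scale=true) and the label referenced in the Operator custom resource. The field names below follow the node-selector sections of the documented CSIScaleOperator custom resource, but they should be verified against IBM Knowledge Center for your driver version.

```yaml
# Fragment of ibm-spectrum-scale-csi-operator-cr.yaml (spec section).
# Assumes worker nodes carry the label scale=true, for example:
#   kubectl label node worker1 scale=true
# Verify the field names against IBM Knowledge Center for your version.
spec:
  attacherNodeSelector:
    - key: "scale"
      value: "true"
  provisionerNodeSelector:
    - key: "scale"
      value: "true"
  pluginNodeSelector:
    - key: "scale"
      value: "true"
```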
For more information, see the following site:
5.7 Kubernetes to IBM Spectrum Scale Cluster node mapping
In some environments, Kubernetes node names might differ from the IBM Spectrum Scale node names, which causes failures when pods mount volumes. To address this condition, a Kubernetes node to IBM Spectrum Scale node mapping must be configured. The mapping can be defined in the Operator configuration.
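A mapping entry pairs a Kubernetes node name with its IBM Spectrum Scale node name in the Operator custom resource. The node names below are illustrative, and the field names should be verified against IBM Knowledge Center for your driver version.

```yaml
# Fragment of ibm-spectrum-scale-csi-operator-cr.yaml (spec section).
# "kubernetesNode1" and "scaleNode1" are hypothetical node names;
# verify the field names against IBM Knowledge Center for your version.
spec:
  nodeMapping:
    - k8sNode: "kubernetesNode1"
      spectrumscaleNode: "scaleNode1"
```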
For more information, see the following site:
5.8 Upgrading IBM Spectrum Scale Container Storage Interface driver
To upgrade the IBM Spectrum Scale Container Storage Interface driver, follow the instructions on the following site: