About this document – IBM Storage for Red Hat OpenShift Blueprint Version 1 Release 4

About this document
This Blueprint is intended to facilitate the deployment of IBM Storage for Red Hat OpenShift Container Platform by using detailed hardware specifications to build a system. It describes the associated parameters for configuring persistent storage within a Red Hat OpenShift Container Platform environment. To complete the tasks, you should understand Red Hat OpenShift, IBM Storage, the IBM block storage Container Storage Interface (CSI) driver and the IBM Spectrum Scale CSI driver.
The information in this document is distributed on an “as is” basis without any warranty that is either expressed or implied. Support assistance for the use of this material is limited to situations where IBM Storwize® or IBM FlashSystem® storage devices, ESS and Spectrum Scale are supported and entitled, and where the issues are not specific to a blueprint implementation.
IBM Storage Suite for IBM Cloud™ Paks is an offering bundle that includes software-defined storage from both IBM and Red Hat. Use this document for details on how to deploy IBM Storage product licenses obtained through Storage Suite for Cloud Paks (IBM Spectrum® Virtualize and IBM Spectrum Scale).
Detailed instructions on how to deploy Red Hat Storage can be found using the following links:
Executive summary
Most organizations will soon be operating in a hybrid multicloud environment. Container technology will help drive this rapid evolution from applications and data anchored on-premises in siloed systems, to applications and data easily moving when and where needed to gain the most insight and advantage.
IBM Storage unifies traditional and container-ready storage, and provides cloud-native agility with the reliability, availability, and security to manage enterprise containers in production. As clients scale containerized applications beyond experimental or departmental use, IBM’s award-winning storage solutions enable mission-critical infrastructure that delivers shared-storage operational efficiency, price-performance leadership, and container data protection.
Through integration with the automation capabilities of Kubernetes and IBM Cloud Paks, IBM enables IT infrastructure and operations to improve developer speed and productivity, while delivering data reduction, disaster recovery, and data availability with enterprise storage. IBM Storage for Red Hat OpenShift is a comprehensive, container-ready solution that includes all of the elements and expertise needed for implementing the technologies that are driving businesses in the 21st century.
Scope
This document shows the proof-of-concept environment that was created in a lab, taking into account the Red Hat OpenShift Container Platform 4.3 cluster prerequisites and requirements. It describes the setup of the various Linux components used as prerequisites for the installation in the lab environment. This document does not cover the subscription components, such as Telemetry, which is used for monitoring cluster health. The setup instructions provided here are not a replacement for any official documentation released by Red Hat OpenShift or the Linux operating system providers.
Prerequisites
The lab setup of OpenShift was created as user-provisioned infrastructure using VMware vSphere 6.5 update 2 (6.5U2). Users who want to deploy a Red Hat OpenShift Container Platform cluster with VMware NSX-T or VMware vSAN must use VMware vSphere version 6.7 update 2 (6.7U2). You need to have knowledge of Spectrum Scale, ESS, IBM® Storwize, and/or IBM FlashSystem all-flash storage arrays.
The following sections give an overview of the cluster resources (such as number of virtual machines, configuration chosen, and network requirements) and infrastructure services (such as domain name server (DNS) and dynamic host configuration protocol (DHCP)).
Virtual machine resources
Deployment of the OpenShift Container Platform cluster requires several virtual machines:
One bootstrap/boot node
Three control plane/master nodes
Two compute/worker nodes
The bootstrap node and master nodes must use Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system. The compute nodes can be installed with either RHCOS or Red Hat Enterprise Linux 7.6 (RHEL 7.6) release.
Minimum resource per virtual machine
Each virtual machine or cluster node configuration is listed in Table 1.
Table 1 Hardware configuration for cluster nodes
Machine         Operating system     vCPU   RAM     Storage
Bootstrap       RHCOS                4      16 GB   120 GB
Control plane   RHCOS                4      16 GB   120 GB
Compute         RHCOS or RHEL 7.6    2      8 GB    120 GB
rhel-host       RHEL 7.5+            2      8 GB    50 GB
Network connectivity requirements
All of the RHCOS machines require network in initramfs during boot to fetch Ignition configuration files from the Machine Config Server. The required IP addresses can be provisioned using a DHCP server. After the initial startup, the machines can be configured to use static IP addresses. The DNS and DHCP configuration used in the lab are listed in “Appendix A: DNS file” on page 42 and “Appendix B: DHCP file” on page 44.
The required bootstrap Ignition configuration files and the raw installation images were hosted on a locally configured web server used as the Machine Config Server. The web server configuration is listed in “Appendix C: HTTP file” on page 45.
The intra cluster communication must be allowed on several network ports. All of the required ports are listed in “Appendix E: node communication” on page 49.
Internet access
Internet access is required for the installation and the eventual update of the cluster environment. During installation, Internet access is used to complete the following actions:
Download the installation program itself
Obtain the packages required to install and update the cluster
Perform subscription management
Load balancer
Layer-4 load balancers are used to load balance the traffic to the cluster. In the lab environment, haproxy was configured to load balance the traffic from and to the cluster. The haproxy configuration is listed in “Appendix D: haproxy file” on page 47.
Passwordless SSH configuration
For intra-cluster communication and for logging onto the cluster nodes, ssh-keys are used. These were generated on rhel-host using the ssh-keygen command. The contents of the id_rsa.pub file are later incorporated in the sshKey section of the install-config.yaml file used for cluster creation.
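The key pair can be generated as follows (a minimal sketch; the exact key type and options used in the lab are not recorded):
ssh-keygen -t rsa -b 4096 -N '' -f ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub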
Lab topology
The lab topology is shown in Figure 1. The host’s configuration, usage, and IP addresses are shown in Table 2 on page 4. The bootstrap machine is first configured to boot using the Ignition configuration files.
Figure 1 Lab Topology
Table 2 Lab host information and usage
Hostname              IP                            Comment
oc-boot-410           192.168.3.155                 Bootstrap node
oc-master1-410        192.168.3.156                 Master node
oc-master2-410        192.168.3.157                 Master node
oc-master3-410        192.168.3.158                 Master node
oc-worker1-410        192.168.3.159                 Worker node
oc-worker2-410        192.168.3.160                 Worker node
oc-worker3-410        192.168.3.161                 Worker node
rh-oc-410             192.168.3.162                 RHEL host
isv-dns / rhel-host   192.168.3.80                  DNS/HTTP/DHCP
vm-01                 9.118.46.116 / 192.168.3.11   Internet gateway
In order to reach the Internet, all cluster nodes were configured with appropriate firewall network address translation (NAT) rules, except isv-dns. Along with the named service, the HTTP and DHCP services were also configured on the isv-dns host. The rhel-host is used to run the openshift-installer program to create and encode the required Ignition configuration files.
Table 3 provides the details of the cluster name, domain name, and subdomain name used in the lab setup.
Table 3 Cluster details
Entity                    Description
Domain name               isvcluster.net
Subdomain name            openshift.isvcluster.net
OpenShift cluster name    isvsol
 
Important: The OpenShift cluster name is configured in Example 1 on page 7 as metadata: name.
Red Hat OpenShift Container Platform 4.3 installation overview
This section describes the installation sequence and configuration procedure flow for the various systems used for the lab setup.
Installing and configuring a DNS server (named)
The domain name server (named) available with the RHEL distribution was used to create a DNS server. The configuration files used for creating the DNS server are listed in “Appendix A: DNS file” on page 42.
Use the following steps to install and configure the DNS server:
1. Install the bind operating system package using yum or rpm command:
yum install bind
2. Update the configuration files as listed in “Appendix A: DNS file” on page 42.
3. Start or Restart the server and enable the named service:
systemctl restart named
systemctl enable named
Subdomain configuration
A subdomain configuration is optional for the OpenShift installation. For the lab setup, OpenShift was chosen as the subdomain. The subdomain configuration is listed in “Appendix A: DNS file” on page 42.
The named $ORIGIN directive is used to populate the zone file with the OpenShift cluster name and *.apps domains in a single file.
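After named is running, resolution of the cluster records can be spot-checked with dig (an optional verification, using the lab DNS server and names from Appendix A; both queries should return the load balancer address 192.168.3.162):
dig @192.168.3.80 api.isvsol.openshift.isvcluster.net +short
dig @192.168.3.80 console-openshift-console.apps.isvsol.openshift.isvcluster.net +short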
Installing and configuring a DHCP server (dhcp)
The dynamic host configuration protocol server (dhcp) available with the RHEL distribution was used to create a DHCP server. The configuration files used for creating the DHCP server are listed in “Appendix B: DHCP file” on page 44. Static DHCP simplifies the installation because DNS entries can be pre-populated before booting the environment.
Use the following commands:
1. Install the dhcp operating system package using yum/rpm command:
yum install dhcp
2. Update the configuration files as listed in “Appendix B: DHCP file” on page 44.
3. Start or Restart the server and enable the dhcpd service:
systemctl restart dhcpd
systemctl enable dhcpd
Installing and configuring a web server (httpd)
The web server (httpd) available with the RHEL distribution was used to create an HTTP server. The configuration files used for creating the HTTP server are listed in “Appendix C: HTTP file” on page 45.
Use the following commands:
1. Install the httpd operating system package using the yum or rpm command:
yum install httpd
2. Update the configuration files as listed in “Appendix C: HTTP file” on page 45.
3. Start or Restart the server and enable the httpd service:
systemctl restart httpd
systemctl enable httpd
Installing and configuring the load balancer (haproxy)
The load balancer (haproxy) available with the RHEL distribution was used to create the haproxy server. The configuration files used for creating the haproxy server are listed in “Appendix D: haproxy file” on page 47.
Use the following commands:
1. Install the haproxy operating system package using the yum or rpm command:
yum install haproxy
2. Update the configuration files (/etc/haproxy/haproxy.cfg), as listed in “Appendix D: haproxy file” on page 47.
3. Start or Restart the haproxy service:
systemctl restart haproxy
systemctl enable haproxy
 
Note: If your haproxy service does not start and SELinux is enabled, run the following command to allow haproxy:
setsebool -P haproxy_connect_any=1
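After haproxy starts, an optional check (not part of the original procedure) is to confirm that it is listening on the expected front-end ports (6443, 22623, 443, and 80):
ss -tlnp | grep haproxy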
Installing and creating the ignition configuration files on rhel-host
The minimal install option of RHEL 7.6 was chosen for the rhel-host. The openshift-installer program, obtained from OpenShift Infrastructure Providers, was run to create the Ignition configuration files. The openshift-installer expects a YAML-formatted file called install-config.yaml to generate the cluster configuration information.
Example 1 shows the install-config.yaml file used in the lab setup.
Example 1 Sample install-config.yaml file
apiVersion: v1
baseDomain: openshift.isvcluster.net
compute:
- hyperthreading: Enabled
  name: worker
  replicas: 0
controlPlane:
  hyperthreading: Enabled
  name: master
  replicas: 3
metadata:
  name: isvsol
platform:
  vsphere:
    vcenter: 192.168.40.100
    username: Administrator@vsphere.local
    password: <password-for-vcenter>
    datacenter: DC-OpenShift
    defaultDatastore: DS-OpenShift
pullSecret: <pull-secret-obtained-from-RedHat-OpenShift-Infrastructure-Page>
sshKey: <ssh-public-key-generated-on-rhel-host>
 
Important: compute.replicas must be set to 0 when installing Red Hat OpenShift Container Platform with user-provisioned infrastructure for vSphere. This parameter controls the number of workers that the cluster creates and manages for you, which are functions that the cluster does not perform when you use user-provisioned infrastructure. You must manually deploy worker machines for the cluster to use before you finish installing OpenShift Container Platform.
The install-config.yaml file must be present in the /stage directory before the installer is run to create Ignition config files.
Generate the Kubernetes manifests for the cluster:
./openshift-install create manifests --dir=/stage
Modify the manifests/cluster-scheduler-02-config.yml Kubernetes manifest file to prevent Pods from being scheduled on the control plane machines:
1. Open the manifests/cluster-scheduler-02-config.yml file.
2. Locate the mastersSchedulable parameter and set its value to False.
3. Save and exit the file.
 
Note: Running openshift-install consumes the install-config.yaml file. A backup of this file is recommended as a reference for the installation.
Create the Ignition configuration files:
./openshift-install create ignition-configs --dir=/stage
Upon completion, the following files are created in the /stage directory:
master.ign
worker.ign
bootstrap.ign
metadata.json
auth <directory>
The auth/kubeconfig file is later used to set the cluster context for logging into the cluster using the command-line interface. In addition, the auth/kubeadmin-password file has password information for logging in from a browser. The *.ign files that are obtained are used to boot each type of cluster node. The master.ign and worker.ign files must be base64 encoded. The base64 encoded files were created in the following way:
base64 -w0 /stage/master.ign > /stage/master.ign.64
base64 -w0 /stage/worker.ign > /stage/worker.ign.64
The bootstrap.ign file is too large to be passed directly to the vSphere server during OVA deployment. Instead, a smaller text file, append-bootstrap.ign, is created and provided to vSphere. In our environment, we copied the bootstrap.ign to the root of the HTTP server, /var/www/html/. The append-bootstrap.ign file contains the information shown in Example 2, pointing to the location of bootstrap.ign on the HTTP server.
Example 2 The append-bootstrap.ign file
{
  "ignition": {
    "config": {
      "append": [
        {
          "source": "http://192.168.3.80/bootstrap.ign",
          "verification": {}
        }
      ]
    },
    "timeouts": {},
    "version": "2.1.0"
  },
  "networkd": {},
  "passwd": {},
  "storage": {},
  "systemd": {}
}
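Before the bootstrap node is booted, it is worth confirming that bootstrap.ign is reachable at the URL referenced in the source field (a simple optional check):
curl -sI http://192.168.3.80/bootstrap.ign | head -n 1
A 200 OK response line indicates that the file is being served.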
Creating Red Hat Enterprise Linux CoreOS (RHCOS) machines in VMware vSphere
Of the multiple installation methods available for OpenShift, deployment of an OVA template file using vSphere was chosen.
 
Note: When using a VMware vSphere environment, group all of the VMs for OpenShift under a folder created in ESX. The folder name must be the same as the OpenShift cluster name. For the lab setup, isvsol was chosen as both the cluster name and the folder name.
Figure 2 on page 9 shows the sample OVA template deployment screen. Notice the Properties section, which contains the responses for Ignition config data and Ignition config data encoding. For all nodes except the bootstrap node, the Ignition config data contains the contents of the respective base64-encoded file (for example, master.ign.64 for the master nodes). The Ignition config data encoding field contains base64 as its value.
During the OVA deployment of the bootstrap node, the contents of append-bootstrap.ign are entered for the Ignition config data property. The contents of the append-bootstrap.ign are listed in “Appendix F: bootstrap file” on page 50.
Figure 2 OVA deployment (bootstrap)
After all of the systems were created from the OVA template, the virtual machine configurations were adapted to the minimum hardware requirements listed in Table 1 on page 2.
From each virtual machine’s properties, the network adapter’s hardware address was noted and added to the /etc/dhcp/dhcpd.conf file on the DHCP server to provide persistent IP addresses across reboots.
The IP addresses and fully qualified domain and subdomain name entries were configured in the named database files stored under /var/named/ on the DNS server.
The /etc/haproxy/haproxy.cfg file was configured to provide load balancing for the API server, Machine Config Server, and HTTP/HTTPS traffic.
After the configuration files were modified, all of the respective daemons, such as named, dhcpd, httpd, and haproxy, were restarted.
During the bootstrapping process and cluster formation, the boot and master nodes are required to connect to various systems on different ports. See “Appendix E: node communication” on page 49 to see the set of firewall commands that allow communication between different systems.
Installing the oc command-line interface
To monitor the cluster configuration creation, the oc command-line tool is used. Before running the oc tool, the cluster configuration is set in the environment using the export command as follows:
export KUBECONFIG=/stage/auth/kubeconfig
oc whoami or oc get nodes
The login to any of the cluster nodes requires the core user and private ssh-key. In the lab setup, the ssh-key was generated on the rhel-host.
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/id_rsa
ssh core@oc-boot-410
When on the systems, you can use the journalctl command to follow the messages generated during installation, or later during troubleshooting of the cluster.
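For example, on the bootstrap node the bootstrap progress can be followed with the bootkube unit, and on other nodes with the kubelet unit (standard OpenShift troubleshooting commands, shown here as a sketch):
journalctl -b -f -u bootkube.service
journalctl -u kubelet -f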
Creating the Red Hat OpenShift Container Platform 4.3 cluster
After the configuration steps were completed, the bootstrap, master, and worker virtual machines were started.
Based on the configuration information provided in the bootstrap.ign file, which is referenced by the append-bootstrap.ign file, the bootstrap progress is checked as follows:
./openshift-install --dir=/stage wait-for bootstrap-complete --log-level debug
The command succeeds when the Kubernetes API server signals that it has been bootstrapped on the control plane machines. Upon completion of the bootstrap process, the oc-boot-410 system was removed from the haproxy configuration.
 
Note: The certificate signing requests (CSR) are generated when a node is added. These must be approved so that the newly added nodes become usable as cluster resources.
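For example, pending requests can be listed and approved individually (the jq one-liner shown later for RHEL workers also works here; <csr_name> is a placeholder):
oc get csr
oc adm certificate approve <csr_name>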
After all CSRs have been approved, if necessary, wait for ClusterOperators to become Available:
$ watch -n5 oc get clusteroperators
 
Note: The image-registry operator must be configured with RWX storage before it is marked as Available. A minimum of one worker node must be online before ingress and authentication operators are marked as Available.
The following command waits for the installation to complete, and outputs the console URL and kubeadmin password when finished:
./openshift-install --dir=/stage wait-for install-complete
Adding Red Hat Enterprise Linux 7.6 worker nodes
When the cluster has come online and is operational, Red Hat Enterprise Linux 7.6 worker nodes can be added to the system. In our lab environment, we added one RHEL node (oc-worker3-410).
We followed the instructions provided in the Red Hat documentation to prepare the rhel-host and oc-worker3-410 nodes with subscriptions for Red Hat OpenShift:
1. On the rhel-host, we installed the following packages that are required to add a RHEL worker node:
yum install openshift-ansible openshift-clients jq
The openshift-ansible package provides installation program utilities and pulls in other packages that you require to add a RHEL compute node to your cluster, such as Ansible, playbooks, and related configuration files. The openshift-clients package provides the oc CLI, and the jq package improves the display of JSON output on your command line.
2. Create an inventory directory in /stage:
mkdir /stage/inventory
3. Create the hosts file that will be used to add the RHEL worker node, as shown in Example 3.
Example 3 The /stage/inventory/hosts file
[all:vars]
ansible_user=root
#ansible_become=True
openshift_kubeconfig_path="/stage/auth/kubeconfig"
 
[new_workers]
oc-worker3-410
4. Run the playbook to add the new RHEL node:
cd /usr/share/ansible/openshift-ansible
ansible-playbook -i /stage/inventory/hosts playbooks/scaleup.yml
5. Approve any CSRs that are Pending for the new machine:
oc get csr
6. If all the CSRs are valid, approve them all by running the following command:
oc get csr -ojson | jq -r '.items[] | select(.status == {} ) | .metadata.name' | xargs oc adm certificate approve
7. Verify that the new node has been added successfully:
oc get node
 
NAME             STATUS   ROLES    AGE    VERSION
oc-master1-410   Ready    master   104d   v1.14.6+7e13ab9a7
oc-master2-410   Ready    master   104d   v1.14.6+7e13ab9a7
oc-master3-410   Ready    master   104d   v1.14.6+7e13ab9a7
oc-worker1-410   Ready    worker   104d   v1.14.6+7e13ab9a7
oc-worker2-410   Ready    worker   104d   v1.14.6+7e13ab9a7
oc-worker3-410   Ready    worker   92d    v1.14.6+7e13ab9a7
Configuring iSCSI/Fibre Channel for worker nodes
Red Hat OpenShift Container Platform 4.3 worker nodes must be configured to access the IBM storage system.
This section describes how to configure the storage system with the Red Hat OpenShift Container Platform worker nodes.
The test team performed the following steps in the solution lab environment to install and configure iSCSI or Fibre Channel:
1. Install the required packages on Red Hat Enterprise Linux.
2. For iSCSI configuration, install:
a. The sg3_utils utilities (which send SCSI commands)
b. The iSCSI initiator server daemon
c. The device mapper multipathing tool, to configure multiple I/O paths between worker nodes and the storage array:
yum install -y sg3_utils iscsi-initiator-utils device-mapper-multipath
3. For Fibre Channel configuration, install:
a. The sg3_utils utilities (which send SCSI commands)
b. The device mapper multipathing tool, to configure multiple I/O paths between worker nodes and the storage array:
yum install -y sg3_utils device-mapper-multipath
Manual configuration of worker nodes running Red Hat Enterprise Linux
1. Multipath settings:
The settings shown in Example 4 are the preferred multipath settings for RHEL 7 and IBM Storwize V7000. The multipath.conf file is copied to /etc/multipath.conf.
Example 4 The /etc/multipath.conf file
devices {
    device {
        vendor "IBM"
        product "2145"
        path_grouping_policy "group_by_prio"
        path_selector "service-time 0"
        prio "alua"
        path_checker "tur"
        failback "immediate"
        no_path_retry 5
        rr_weight uniform
        rr_min_io_rq "1"
        dev_loss_tmo 120
    }
}
Further detailed information relating to your particular storage system can be found in the IBM Knowledge Center.
2. Load the multipath module, then start, verify, and enable the multipath daemon service. Make sure that the multipathd service is in the active (running) state:
sudo modprobe dm-multipath
systemctl start multipathd
systemctl status multipathd
systemctl enable multipathd
Configuration of worker nodes running Red Hat Enterprise Linux and/or Red Hat Enterprise Linux CoreOS
In this section, a MachineConfig is created to deploy /etc/multipath.conf and /etc/udev/rules.d rules that support connections to IBM Storage systems. In addition, iSCSI connectivity can be configured, as needed.
Configuring for OpenShift Container Platform users (RHEL and RHCOS) with multipathing
For this process, a 99-ibm-attach.yaml file is provided in this section, if needed.
 
IMPORTANT: The 99-ibm-attach.yaml configuration file overrides any multipath.conf file that already exists on your system. Only use this file if one is not already created!
If a file has been created, edit the existing multipath.conf, as necessary.
Use the following steps if you need the 99-ibm-attach.yaml file:
1. Save the 99-ibm-attach.yaml file provided in Example 5.
Example 5 99-ibm-attach.yaml file
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
labels:
machineconfiguration.openshift.io/role: worker
name: 99-ibm-attach
spec:
config:
ignition:
version: 2.2.0
storage:
files:
- path: /etc/multipath.conf
mode: 384
filesystem: root
contents:
source: data:,defaults%20%7B%0A%20%20%20%20path_checker%20tur%0A%20%20%20%20path_selector%20%22round-robin%200%22%0A%20%20%20%20rr_weight%20uniform%0A%20%20%20%20prio%20const%0A%20%20%20%20rr_min_io_rq%201%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%0A%20%20%20%20polling_interval%2030%0A%20%20%20%20path_grouping_policy%20multibus%0A%20%20%20%20find_multipaths%20yes%0A%20%20%20%20no_path_retry%20fail%0A%20%20%20%20user_friendly_names%20yes%0A%20%20%20%20failback%20immediate%0A%20%20%20%20checker_timeout%2010%0A%20%20%20%20fast_io_fail_tmo%20off%0A%7D%0A%0Adevices%20%7B%0A%20%20%20%20device%20%7B%0A%20%20%20%20%20%20%20%20path_checker%20tur%0A%20%20%20%20%20%20%20%20product%20%22FlashSystem%22%0A%20%20%20%20%20%20%20%20vendor%20%22IBM%22%0A%20%20%20%20%20%20%20%20rr_weight%20uniform%0A%20%20%20%20%20%20%20%20rr_min_io_rq%204%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%0A%20%20%20%20%20%20%20%20path_grouping_policy%20multibus%0A%20%20%20%20%20%20%20%20path_selector%20%22round-robin%200%22%0A%20%20%20%20%20%20%20%20no_path_retry%20fail%0A%20%20%20%20%20%20%20%20failback%20immediate%0A%20%20%20%20%7D%0A%20%20%20%20device%20%7B%0A%20%20%20%20%20%20%20%20path_checker%20tur%0A%20%20%20%20%20%20%20%20product%20%22FlashSystem-9840%22%0A%20%20%20%20%20%20%20%20vendor%20%22IBM%22%0A%20%20%20%20%20%20%20%20fast_io_fail_tmo%20off%0A%20%20%20%20%20%20%20%20rr_weight%20uniform%0A%20%20%20%20%20%20%20%20rr_min_io_rq%201000%20%20%20%20%20%20%20%20%20%20%20%20%0A%20%20%20%20%20%20%20%20path_grouping_policy%20multibus%0A%20%20%20%20%20%20%20%20path_selector%20%22round-robin%200%22%0A%20%20%20%20%20%20%20%20no_path_retry%20fail%0A%20%20%20%20%20%20%20%20failback%20immediate%0A%20%20%20%20%7D%0A%20%20%20%20device%20%7B%0A%20%20%20%20%20%20%20%20vendor%20%22IBM%22%0A%20%20%20%20%20%20%20%20product%20%222145%22%0A%20%20%20%20%20%20%20%20path_checker%20tur%0A%20%20%20%20%20%20%20%20features%20%221%20queue_if_no_path%22%0A%20%20%20%20%20%20%20%20path_grouping_policy%20group_by_prio%0A%20%20%20%20%20%20%20%20path_selector%20%22service-time%200%22%20%23%20Used%20by%20Red%20Hat%207.x%0A%20%20%20%20%20%20%20%20prio%20alua%0A%20%20%20%20%20%20%20%20rr_min_io_rq%201%0A%20%20%20%20%20%20%20%20rr_weight%20uniform%20%0A%20%20%20%20%20%20%20%20no_path_retry%20%225%22%0A%20%20%20%20%20%20%20%20dev_loss_tmo%20120%0A%20%20%20%20%20%20%20%20failback%20immediate%0A%20%20%20%7D%0A%7D%0A
verification: {}
- path: /etc/udev/rules.d/99-ibm-2145.rules
mode: 420
filesystem: root
contents:
source: data:,%23%20Set%20SCSI%20command%20timeout%20to%20120s%20%28default%20%3D%3D%2030%20or%2060%29%20for%20IBM%202145%20devices%0ASUBSYSTEM%3D%3D%22block%22%2C%20ACTION%3D%3D%22add%22%2C%20ENV%7BID_VENDOR%7D%3D%3D%22IBM%22%2CENV%7BID_MODEL%7D%3D%3D%222145%22%2C%20RUN%2B%3D%22/bin/sh%20-c%20%27echo%20120%20%3E/sys/block/%25k/device/timeout%27%22%0A
verification: {}
systemd:
units:
- name: multipathd.service
enabled: true
# Uncomment the following lines if this MachineConfig will be used with
# iSCSI connectivity
#- name: iscsid.service
#  enabled: true
 
Note: To enable iSCSI connectivity to the storage device, uncomment the last two lines included in Example 5. This enables the iscsid.service on boot.
2. Apply the yaml file using the following command:
oc apply -f 99-ibm-attach.yaml
3. RHEL users should verify that the systemctl status multipathd output indicates that the multipath status is active and error-free. Run the following commands to see if multipath is correctly configured:
systemctl status multipathd
multipath -ll
 
iSCSI connectivity configuration
Complete the following steps to configure iSCSI connectivity:
1. Update the iSCSI initiator name in the /etc/iscsi/initiatorname.iscsi file with the worker node <hostname> inserted after the InitiatorName:
InitiatorName=iqn.1994-05.com.redhat:<hostname>-<random generated number>
For example:
InitiatorName=iqn.1994-05.com.redhat:oc-worker1-410-74b436a728b6
2. Add host definitions to the IBM Storwize storage array by selecting Hosts from the GUI console, and provide the iSCSI initiator name from the /etc/iscsi/initiatorname.iscsi file, as shown in step 1.
3. Click Add to add the host definition (as shown in Figure 3).
Figure 3 Add iSCSI host
For the iSCSI service startup, set the session startup to automatic in /etc/iscsi/iscsid.conf:
node.startup = automatic
4. Discover the iSCSI targets by using the iscsiadm CLI:
iscsiadm -m discoverydb -t st -p <IP Address configured for iSCSI @ Storwize Storage Array>:3260 --discover
5. Log in to iSCSI targets using the iscsiadm CLI tool:
iscsiadm -m node -p <IP Address configured for iSCSI @ Storwize Storage Array>:3260 --login
6. Verify the host using the Storwize GUI console (as shown in Figure 4).
Figure 4 iSCSI Host status
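The iSCSI sessions and the resulting multipath devices can also be confirmed from the worker node itself (an optional CLI check in addition to the GUI):
iscsiadm -m session
multipath -ll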
Fibre Channel connectivity configuration
Use systool to get the Fibre Channel WWPNs that will be associated with the host definition on the storage device on RHEL nodes:
1. Install sysfsutils to simplify getting the FC WWPN:
yum install sysfsutils
2. Run systool against the fc_host to get the WWPN for each installed FC adapter.
systool -c fc_host -v | grep port_name
port_name = "0x10008c7cffb01b00"
port_name = "0x10008c7cffb01b01"
3. Zone your host ports to the storage array storage ports on your Fibre Channel switches.
4. Add host definitions to the Storwize storage array by selecting Hosts from the GUI console. If zoning has been completed, you can select the host WWPNs from the drop-down list. Click Add to add the host definition (as shown in Figure 5).
Figure 5 Add FC host
Install IBM block storage CSI driver on Red Hat OpenShift Container Platform
The following section describes how to install the IBM block storage CSI driver to work with OpenShift Container Platform 4.3. The source code and additional information can be found on GitHub at https://github.com/IBM/ibm-block-csi-driver/.
 
Installing from the OpenShift web console
When using the Red Hat OpenShift Container Platform, the Operator for IBM block storage CSI driver can be installed directly from the OpenShift web console, through the OperatorHub. Installing the Container Storage Interface (CSI) driver is part of the Operator installation process. The source code and additional information can be found on GitHub, as noted above.
Procedure
1. From Red Hat OpenShift Container Platform Home → Projects, select Create Project and enter the following information:
Name: ibm-block-csi
Display Name: ibm-block-csi
Description: IBM Block CSI
2. From Red Hat OpenShift Container Platform Operators → OperatorHub, select Project: ibm-block-csi.
3. Search for IBM block storage CSI driver, as shown in Figure 6.
Figure 6 Search for IBM block storage CSI driver in catalog
4. Select the Operator for IBM block storage CSI driver and click Install, as shown in Figure 7. The Operator Subscription form displays.
Figure 7 Install the Operator
5. Set the Installation Mode to kube-system, under A specific namespace on the cluster, as shown in Figure 8.
Figure 8 Select the ibm-block-csi Namespace
6. Click Subscribe, as shown in Figure 9.
Figure 9 Subscribe to operator
7. From Operators → Installed Operators, check the status of the Operator for IBM block storage CSI driver, as shown in Figure 10.
Figure 10 Operator is installed
Wait until the Status is Up to date and then InstallSucceeded.
 
Note: While waiting for the Status to change from Up to date to InstallSucceeded, you can check the pod progress and readiness status from Workloads → Pods.
8. When the operator installation progress has completed, click the installed Operator for IBM block storage CSI driver.
9. Click Create Instance to create IBMBlock CSI, as shown in Figure 11.
Figure 11 Overview of operator
10. Edit the yaml file in the web console as follows, if needed, as shown in Figure 12 on page 22:
apiVersion: csi.ibm.com/v1
kind: IBMBlockCSI
metadata:
  name: ibm-block-csi
  namespace: kube-system
  labels:
    app.kubernetes.io/name: ibm-block-csi-operator
    app.kubernetes.io/instance: ibm-block-csi-operator
    app.kubernetes.io/managed-by: ibm-block-csi-operator
spec:
  # controller is a statefulSet with ibm-block-csi-driver-controller
  # container and csi-provisioner, csi-attacher and livenessprobe sidecars.
  controller:
    repository: ibmcom/ibm-block-csi-driver-controller
    tag: "1.1.0"
    imagePullPolicy: IfNotPresent
    affinity:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchExpressions:
            - key: kubernetes.io/arch
              operator: In
              values:
              - amd64

  # node is a daemonSet with ibm-block-csi-driver-node container
  # and csi-node-driver-registrar and livenessprobe sidecars.
  node:
    repository: ibmcom/ibm-block-csi-driver-node
    tag: "1.1.0"
    imagePullPolicy: IfNotPresent
    affinity:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchExpressions:
            - key: kubernetes.io/arch
              operator: In
              values:
              - amd64

#    tolerations:
#    - effect: NoSchedule
#      key: node-role.kubernetes.io/master
#      operator: Exists

  sidecars:
  - name: csi-node-driver-registrar
    repository: quay.io/k8scsi/csi-node-driver-registrar
    tag: "v1.2.0"
    imagePullPolicy: IfNotPresent
  - name: csi-provisioner
    repository: quay.io/k8scsi/csi-provisioner
    tag: "v1.3.0"
    imagePullPolicy: IfNotPresent
  - name: csi-attacher
    repository: quay.io/k8scsi/csi-attacher
    tag: "v1.2.1"
    imagePullPolicy: IfNotPresent
  - name: livenessprobe
    repository: quay.io/k8scsi/livenessprobe
    tag: "v1.1.0"
    imagePullPolicy: IfNotPresent

#  imagePullSecrets:
#  - "secretName"
Figure 12 Creating the IBM Block CSI
11. Click Create.
Wait until the Status is Running, as shown in Figure 13.
Figure 13 IBM CSI driver installed
Create a secret for authentication to your storage devices
There are two ways to create a secret.
Option 1: Create a storage device secret from a YAML file
 
Important: Data values need to be encoded as base64 for entry into the YAML file. The output from base64 is entered in the data.password field.
Example: echo -n superuser | base64
Output: c3VwZXJ1c2Vy
File: storwize-secret.yml
apiVersion: v1
kind: Secret
metadata:
  name: storwize
  namespace: ibm-block-csi
  labels:
    product: ibm-block-csi-driver
type: Opaque
stringData:
  # Array username.
  username: superuser
  # Array management addresses
  management_address: flashv7k-1.isvcluster.net
data:
  # Base64-encoded password to authenticate with the storage system.
  password: "cGFzc3cwcmQ="
Apply the new secret:
oc create -n ibm-block-csi -f storwize-secret.yml
Option 2: Create a secret with the oc command line
Run the following command:
oc create secret generic storwize --from-literal=management_address=flashv7k-1.isvcluster.net --from-literal=username=superuser --from-literal=password=passw0rd -n ibm-block-csi
Create a block.csi.ibm.com StorageClass
To create a StorageClass, run the following code:
File: ibmc-block-gold-SC.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: ibmc-block-gold
  namespace: ibm-block-csi
provisioner: block.csi.ibm.com
parameters:
  # SpaceEfficiency: <VALUE>
  # SpaceEfficiency values for Virtualize products are: thin,
  # compressed or deduplicated
  pool: ibmc-block-gold
  csi.storage.k8s.io/provisioner-secret-name: storwize
  csi.storage.k8s.io/provisioner-secret-namespace: ibm-block-csi
  csi.storage.k8s.io/controller-publish-secret-name: storwize
  csi.storage.k8s.io/controller-publish-secret-namespace: ibm-block-csi
  # csi.storage.k8s.io/fstype: <VALUE_FSTYPE>
  # Optional. Values: ext4 or xfs. The default is ext4.
Apply the new StorageClass:
oc create -f ibmc-block-gold-SC.yaml
The IBM block storage CSI driver is now ready for use by your users to dynamically provision IBM Storage.
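As a quick test (a sketch only; the claim name, namespace, and size are hypothetical), a PersistentVolumeClaim that references the new class should bind to a dynamically provisioned volume:
File: demo-block-pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: demo-block-pvc
  namespace: default
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: ibmc-block-gold
Create the PVC and check that it reaches the Bound state:
oc create -f demo-block-pvc.yaml
oc get pvc demo-block-pvc -n default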
Install IBM Spectrum Scale CSI Driver on Red Hat OpenShift Container Platform
The following section describes how to install the IBM Spectrum Scale CSI Driver on Red Hat OpenShift Container Platform.
IBM Spectrum Scale
IBM Spectrum Scale is a parallel, scale-out, high-performance solution consolidating traditional file-based and new-era workloads to support artificial intelligence, data lake and object storage, Hadoop, Spark, and analytics use cases. IBM Spectrum Scale helps clients optimize for cost and performance using intelligent data management that automates movement of data to the optimal storage tier without end-user impact. IBM Spectrum Scale is known for performance and reliability, providing data storage for some of the largest compute clusters in the world.
IBM Spectrum Scale v5.4 is required on all IBM Spectrum Scale nodes in the Kubernetes cluster (all nodes that run the IBM Spectrum Scale CSI Driver code). See the IBM Spectrum Scale Software Version Recommendation Preventive Service Planning for recommendations on the exact IBM Spectrum Scale v5.4 level to use.
Table 4 shows the software requirements for the solution lab test environment.
Table 4 Software requirements for the solution lab
Software solution requirements       Version
IBM Spectrum Scale                   V5.4.0.1+
IBM Elastic Storage® Server (1)      V5.3.4.2+

(1) IBM Elastic Storage Server is shown to illustrate storage compatibility with IBM Spectrum Scale and IBM Spectrum Scale CSI Driver. However, note that IBM Spectrum Scale CSI Driver, Kubernetes, or Red Hat OpenShift code cannot be installed directly on the IBM Elastic Storage Server Elastic Management Server or IBM Elastic Storage Server I/O nodes. IBM Elastic Storage Server will be managed by the larger IBM Spectrum Scale cluster, and IBM Spectrum Scale CSI Driver, Kubernetes, or Red Hat OpenShift will be installed on IBM Spectrum Scale nodes in the larger cluster.
Installing IBM Spectrum Scale
Use the following steps to install IBM Spectrum Scale:
1. Set up your IBM Spectrum Scale cluster.
For more information, see “IBM Spectrum Scale cluster configurations” and “Steps for establishing and starting your IBM Spectrum Scale cluster”.
2. You must add all Red Hat OpenShift Container Platform 4.3 worker nodes as IBM Spectrum Scale client nodes.
For more information, see “Creating an IBM Spectrum Scale cluster”.
3. Next, you must create a file system in your IBM Spectrum Scale cluster.
For more information about creating a file system, see “File system creation considerations”.
For more information about the command to create a file system, see “mmcrfs command”.
4. Finally, mount the file system on all worker nodes in the Red Hat OpenShift Container Platform cluster.
For more information, see “Mounting a file system”.
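As an illustration only (the device name fs1, the stanza file, and the mount point are hypothetical; refer to the IBM Spectrum Scale documentation for the options appropriate to your environment), file system creation and cluster-wide mounting follow this general pattern:
mmcrnsd -F nsd.stanza
mmcrfs fs1 -F nsd.stanza -T /gpfs/fs1
mmmount fs1 -a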
 
Before using IBM Spectrum Scale CSI Driver with IBM Spectrum Scale, make note of the conditions described in “Appendix G: IBM Spectrum Scale usage restrictions” on page 50.
Installing IBM Spectrum Scale CSI Driver from the OpenShift web console
When using the Red Hat OpenShift Container Platform, the Operator for IBM Spectrum Scale storage CSI driver can be installed directly from the OpenShift web console, through the OperatorHub. Installing the Container Storage Interface (CSI) driver is part of the Operator installation process. The source code and additional information can be found on GitHub at:
https://github.com/IBM/ibm-spectrum-scale-csi-driver
https://github.com/IBM/ibm-spectrum-scale-csi-operator
Create a namespace for the Spectrum Scale CSI Driver
From Red Hat OpenShift Container Platform Home → Projects, select Create Project and enter the following information:
Name: ibm-spectrum-scale-csi-driver
Display Name: ibm-spectrum-scale-csi-driver
Description: IBM Spectrum Scale CSI
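The same project can be created from the command line, if preferred (equivalent to the console entries above):
oc new-project ibm-spectrum-scale-csi-driver --display-name="ibm-spectrum-scale-csi-driver" --description="IBM Spectrum Scale CSI"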
Create a secret for authentication to your storage device(s)
There are two ways to create a secret for authentication.
 
Note: The secret created during this step will be used during the IBM Spectrum Scale Operator installation, as shown in Figure 20 on page 30.
Option 1: Create a storage device secret from a YAML file
 
Important: Data values need to be encoded as base64 for entry into the YAML file. The output from base64 is entered in the data.username and data.password fields.
Example: echo -n superuser | base64
Output: c3VwZXJ1c2Vy
File: scalegui-secret.yml
apiVersion: v1
kind: Secret
metadata:
  name: scalegui
  labels:
    product: ibm-spectrum-scale-csi
type: Opaque
data:
  # Base64-encoded username to be set for the Spectrum Scale GUI.
  username: "Y3NpYWRtaW4="
  # Base64-encoded password to be set for the Spectrum Scale GUI.
  password: "c3VwZXJ1c2Vy"
Apply new secret:
oc create -n ibm-spectrum-scale-csi-driver -f scalegui-secret.yml
Option 2: Create a storage device secret with the oc command line
oc create secret generic scalegui --from-literal=username=csi-admin --from-literal=password=passw0rd -n ibm-spectrum-scale-csi-driver
Procedure
1. From Red Hat OpenShift Container Platform Operators → OperatorHub, select Project: ibm-spectrum-scale-csi-driver.
2. Search for IBM Spectrum Scale CSI Plugin Operator, as shown in Figure 14 on page 27.
Figure 14 Operator in Operator Hub
3. Select the IBM Spectrum Scale CSI Plugin Operator and click Install, as shown in Figure 15.
The Operator Subscription form displays.
Figure 15 Install screen
4. Set the Installation Mode to ibm-spectrum-scale-csi-driver, under A specific namespace on the cluster, as shown in Figure 16.
Figure 16 Select namespace
5. Click Subscribe, as shown in Figure 17.
Figure 17 Add Subscription
6. From Operators → Installed Operators, check the status of the IBM Spectrum Scale CSI Plugin Operator, as shown in Figure 18 on page 29.
Figure 18 Operator Subscription added
Wait until the Status is Up to date and then InstallSucceeded.
Note: While waiting for the Status to change from Up to date to InstallSucceeded, you can check the pod progress and readiness status from Workloads → Pods.
7. When the operator installation progress has completed, click the installed IBM Spectrum Scale CSI Plugin Operator.
8. Click Create Instance to create IBM CSI Spectrum Scale, as shown in Figure 19.
Figure 19 Create a new CR IBM Spectrum Scale Application
9. Edit the yaml file in the web console (Figure 20) as follows:
Figure 20 Edit CR
 
Note: More options are shown here than appear in the Red Hat OpenShift UI, because "alm-examples" cannot store comments. For additional information see:
apiVersion: csi.ibm.com/v1
kind: 'CSIScaleOperator'
metadata:
  name: 'ibm-spectrum-scale-csi'
  labels:
    app.kubernetes.io/name: ibm-spectrum-scale-csi-operator
    app.kubernetes.io/instance: ibm-spectrum-scale-csi-operator
    app.kubernetes.io/managed-by: ibm-spectrum-scale-csi-operator
    release: ibm-spectrum-scale-csi-operator
status: {}
spec:
  # The path to the GPFS file system mounted on the host machine.
  # ==================================================================================
  scaleHostpath: "< GPFS FileSystem Path >"

  # Below specifies the details of a SpectrumScale cluster configuration used by the
  # plugin. It can have multiple values. For more details, refer to the cluster
  # configuration for the plugin. https://github.com/IBM/ibm-spectrum-scale-csi-driver
  # ==================================================================================
  clusters:
  - id: "< Primary Cluster ID - WARNING: THIS IS A STRING NEEDS YAML QUOTES!>"
    secrets: "scalegui"
    secureSslMode: false
    primary:
      primaryFs: "< Primary Filesystem >"
      primaryFset: "< Fileset in Primary Filesystem >"
      # inodeLimit: "< node limit for Primary Fileset >" # Optional
      # remoteCluster: "< Remote ClusterID >" # Optional - This ID should have a separate entry in the Clusters map.
      # remoteFs: "< Remote Filesystem >" # Optional
      # cacert: "< CA cert configmap for GUI >" # Optional
    restApi:
    - guiHost: "< Primary cluster GUI IP/Hostname >"
#
# In the case we have multiple clusters, specify their configuration below.
# ==================================================================================
# - id: "< Cluster ID >"
# secrets: "< Secret for Cluster >"
# secureSslMode: false
# restApi:
# - guiHost: "< Cluster GUI IP/Hostname >"
 
# Attacher image name, in case we do not want to use default image.
# ==================================================================================
# attacher: "quay.io/k8scsi/csi-attacher:v2.1.1"
 
# Provisioner image name, in case we do not want to use default image.
# ==================================================================================
# provisioner: "quay.io/k8scsi/csi-provisioner:v1.5.0"
 
# Driver Registrar image name, in case we do not want to use default image.
# ==================================================================================
# driverRegistrar: "quay.io/k8scsi/csi-node-driver-registrar:v1.2.0"
 
# SpectrumScale CSI Plugin image name, in case we do not want to use default image.
# ==================================================================================
# spectrumScale: "quay.io/ibm-spectrum-scale/ibm-spectrum-scale-csi-driver:v1.1.0"
 
# attacherNodeSelector specifies on which nodes we want to run attacher sidecar
# In below example attacher will run on nodes which have label as "scale=true"
# and "infranode=2". Can have multiple entries.
# ==================================================================================
# attacherNodeSelector:
# - key: "scale"
# value: "true"
# - key: "infranode"
# value: "2"
 
# provisionerNodeSelector specifies on which nodes we want to run provisioner
# sidecar. In below example provisioner will run on nodes which have label as
# "scale=true" and "infranode=1". Can have multiple entries.
# ==================================================================================
# provisionerNodeSelector:
# - key: "scale"
# value: "true"
# - key: "infranode"
# value: "1"
 
# pluginNodeSelector specifies nodes on which we want to run plugin daemoset
# In below example plugin daemonset will run on nodes which have label as
# "scale=true". Can have multiple entries.
# ==================================================================================
# pluginNodeSelector:
# - key: "scale"
# value: "true"
 
# In case K8s nodes name differs from SpectrumScale nodes name, we can provide
# node mapping using nodeMapping attribute. Can have multiple entries.
# ==================================================================================
# nodeMapping:
# - k8sNode: "< K8s Node Name >"
# spectrumscaleNode: "< SpectrumScale Node Name >"
 
 
Note: If a node selector is needed to support a mixed RHEL/RHCOS cluster or to limit which nodes are attached to Spectrum Scale, see the following IBM Knowledge Center article:
Note: If the Kubernetes node name differs from the Spectrum Scale node name, see the following IBM Knowledge Center article:
10. Click Create, as shown in Figure 21 on page 33.
Wait until the Status is Running.
Figure 21 Install completed
IBM Spectrum Scale storage class definitions
This section describes how to configure Kubernetes storage classes. Storage Classes are used for creating lightweight volumes or fileset-based volumes, as shown in Example 6 and Example 7 on page 34.
Example 6 Fileset-based Storage class template
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: "<NAME>"
  labels:
    product: ibm-spectrum-scale-csi
  # annotations:
  #   storageclass.beta.kubernetes.io/is-default-class: "true"
# reclaimPolicy: "Retain"            # Optional. Values: Delete [default] or Retain
provisioner: spectrumscale.csi.ibm.com
parameters:
  volBackendFs: "<filesystem name>"
  type: "fileset"
  # fileset-type: "<fileset type>"   # Optional. Values: independent [default] or dependent
  # dependantFileset: "<fileset>"    # Optional
  # uid: "<uid number>"              # Optional
  # gid: "<gid number>"              # Optional
  # inode-limit: "<no of inodes to be preallocated>"  # Optional
Example 7 Lightweight Storage class template
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: "<NAME>"
  labels:
    product: ibm-spectrum-scale-csi
  # annotations:
  #   storageclass.beta.kubernetes.io/is-default-class: "true"
# reclaimPolicy: "Retain"            # Optional. Values: Delete [default] or Retain
provisioner: spectrumscale.csi.ibm.com
parameters:
  volBackendFs: "<filesystem name>"
  volDirBasePath: "<fileset name>"
  # uid: "<uid number>"              # Optional
  # gid: "<gid number>"              # Optional
Table 5 lists the parameters that you can configure in the file.
Table 5 Configuration parameters in storage-class-template.yml
Parameter         Description
name              Storage class name.
volBackendFs      IBM Spectrum Scale file system name for creating new volumes.
fileset-type      Optional. Type of fileset to be created for the volume. Permitted values: independent [default], dependent.
dependantFileset  Parent fileset name, in the case of a dependent fileset-type.
volDirBasePath    Base path under which all volumes with this storage class are created. This path must exist.
uid               Optional. Owner to be set on the fileset for the newly created volume. A user with the specified uid/name must exist on IBM Spectrum Scale.
gid               Optional. Group owner to be set on the fileset for the newly created volume. Must be specified along with uid. A group with the specified gid/group must exist on IBM Spectrum Scale.
inode-limit       Optional. Number of inodes to be pre-allocated for the newly created fileset.
isPreexisting     Optional. Indicates whether to use an existing fileset or create a new fileset for the volume. Permitted values: false [default], true. If true is specified, the user must set the pv-name parameter when creating the PVC.
type              Permanently set to fileset.
product           Permanently set to ibm-spectrum-scale-csi.
provisioner       Permanently set to spectrumscale.csi.ibm.com.
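For reference, a filled-in fileset-based class might look like the following (the class name and file system name fs1 are hypothetical; substitute the values for your cluster):
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: ibm-spectrum-scale-fileset
  labels:
    product: ibm-spectrum-scale-csi
provisioner: spectrumscale.csi.ibm.com
parameters:
  volBackendFs: "fs1"
  type: "fileset"
Apply it with oc create -f <file>.yaml, after which PVCs can request storage from the class by name.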
Deploying CockroachDB Operator and Database instance
This section provides detailed steps for deploying the CockroachDB Operator using the Red Hat OpenShift Container Platform OperatorHub, and for properly provisioning persistent storage for the CockroachDB StatefulSet.
Create a Subscription to the CockroachDB Operator
To create a subscription, complete the following steps:
1. In the OpenShift Container Platform console select Console → OperatorHub.
2. Select CockroachDB from the available Database Operators (shown in Figure 22).
Figure 22 CockroachDB OperatorHub
3. Accept the warning about Community provided Operators.
4. Select Install (shown in Figure 23).
Figure 23 Install CockroachDB
5. Select Subscribe to create a subscription to the CockroachDB Operator (shown in Figure 24).
Figure 24 Create subscription for CockroachDB Operator
6. Wait for Upgrade status to show 1 installed (shown in Figure 25).
Figure 25 Install of Operator complete
Use the CockroachDB operator to deploy a StatefulSet
To deploy, complete the following steps:
1. Select Installed Operators from the side bar.
2. From the drop-down menu, select the namespace that you want to deploy the application to. In the lab environment, we used the cockroachdb namespace.
3. Select CockroachDB to start (shown in Figure 26).
Figure 26 Installed Operators
4. Select Create New to enter options for the application (shown in Figure 27).
Figure 27 CockroachDB Operator
5. In our lab, we updated the spec.StorageClass from null to ibmc-block-gold.
6. Select Create to deploy the application.
Verify successful deployment of CockroachDB
To verify deployment, complete the following steps:
1. Select the newly deployed Cockroachdb. The default name is example (shown in Figure 28).
Figure 28 Installed example CockroachDB
2. Select Resources to see that 2 Services and a StatefulSet were created (shown in Figure 29).
Figure 29 CockroachDB Resources
3. Select the Statefulset and then navigate to the pods (shown in Figure 30).
Figure 30 CockroachDB Statefulset Pods
4. Wait for the 3 pods to become Ready.
5. Select Storage → Persistent Volume Claims.
6. Select one of the PVCs and see that it is bound to a PersistentVolume (shown in Figure 31).
Figure 31 PVC information
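The same state can be confirmed from the command line (assuming the cockroachdb namespace and the default example name used above):
oc get statefulset,pods,pvc -n cockroachdb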
Appendix A: DNS file
Contents of the DNS server configuration file to support Red Hat OpenShift Container Platform 4.3:
[root@dns named]# cat /etc/named.conf
options{
directory "/var/named";
forwarders {
9.0.146.50;
9.0.148.60;
};
};
zone "46.118.9.IN-ADDR.ARPA." IN {
type master;
file "9.118.46.db";
};
zone "3.168.192.IN-ADDR.ARPA." IN {
type master;
file "192.168.3.db";
};
zone "isvcluster.net." {
type master;
file "db.isvcluster.net";
};
zone "openshift.isvcluster.net." {
type master;
file "db.openshift.isvcluster.net";
};
[root@sha-dns named]# cat db.isvcluster.net
$TTL 1H
@ SOA isvcluster.net. root.isvcluster.net. (
2
3H
1H
1H
1H )
NS dns.isvcluster.net.
dns IN 1H A 192.168.3.80
rhel-host IN 1H A 192.168.3.80
rd-icp-vcenter IN 1H A 9.118.46.100
vm-01 IN 1H A 192.168.3.11
[root@dns named]# cat db.openshift.isvcluster.net
$TTL 1H
@ SOA openshift.isvcluster.net. root.openshift.isvcluster.net. (
2
3H
1H
1H
1H )
NS dns.isvcluster.net.
$ORIGIN isvsol.openshift.isvcluster.net.
; oc-boot-410 expands to oc-boot-410.isvsol.openshift.isvcluster.net
oc-boot-410 IN 1H A 192.168.3.155
oc-master1-410 IN 1H A 192.168.3.156
oc-master2-410 IN 1H A 192.168.3.157
oc-master3-410 IN 1H A 192.168.3.158
oc-worker1-410 IN 1H A 192.168.3.159
oc-worker2-410 IN 1H A 192.168.3.160
oc-worker3-410 IN 1H A 192.168.3.161
rh-oc-410 IN 1H A 192.168.3.162
; The next 4 lines are used to point api, api-int and ingress to the load balancer
api IN 1H A 192.168.3.162
api-int IN 1H A 192.168.3.162
ingress-https IN 1H A 192.168.3.162
ingress IN 1H A 192.168.3.162
; The next 3 lines point to the control plane/master nodes
etcd-0 IN 1H A 192.168.3.156
etcd-1 IN 1H A 192.168.3.157
etcd-2 IN 1H A 192.168.3.158
$ORIGIN apps.isvsol.openshift.isvcluster.net.
; console-openshift-console expands to
; console-openshift-console.apps.isvsol.openshift.isvcluster.net
* IN 1H A 192.168.3.162
; SRV records are needed for ETCD discovery
_etcd-server-ssl._tcp.isvsol.openshift.isvcluster.net. 86400 IN SRV 0 10 2380 etcd-0.isvsol.openshift.isvcluster.net.
_etcd-server-ssl._tcp.isvsol.openshift.isvcluster.net. 86400 IN SRV 0 10 2380 etcd-1.isvsol.openshift.isvcluster.net.
_etcd-server-ssl._tcp.isvsol.openshift.isvcluster.net. 86400 IN SRV 0 10 2380 etcd-2.isvsol.openshift.isvcluster.net.
[root@dns named]# cat 192.168.3.db
$TTL 1H
@ SOA isvcluster.net. root.isvcluster.net. (
2
3H
1H
1W
1H )
NS dns.isvcluster.net.
155 PTR oc-boot-410.isvsol.openshift.isvcluster.net.
156 PTR oc-master1-410.isvsol.openshift.isvcluster.net.
157 PTR oc-master2-410.isvsol.openshift.isvcluster.net.
158 PTR oc-master3-410.isvsol.openshift.isvcluster.net.
159 PTR oc-worker1-410.isvsol.openshift.isvcluster.net.
160 PTR oc-worker2-410.isvsol.openshift.isvcluster.net.
161 PTR oc-worker3-410.isvsol.openshift.isvcluster.net.
162 PTR rh-oc-410.isvsol.openshift.isvcluster.net.
162 PTR api.isvsol.openshift.isvcluster.net.
162 PTR api-int.isvsol.openshift.isvcluster.net.
162 PTR ingress-https.isvsol.openshift.isvcluster.net.
162 PTR ingress.isvsol.openshift.isvcluster.net.
Appendix B: DHCP file
Contents of the DHCP server config file to support Red Hat OpenShift Container Platform 4.3:
[root@dns named]# cat /etc/dhcp/dhcpd.conf
option domain-name "isvcluster.net";
option domain-name-servers sha-dns.isvcluster.net;
default-lease-time 86400;
max-lease-time 604800;
authoritative;
log-facility local7;
subnet 10.152.187.0 netmask 255.255.255.0 {
}
subnet 192.168.3.0 netmask 255.255.255.0 {
option routers 192.168.3.10;
option subnet-mask 255.255.255.0;
option domain-search "openshift.isvcluster.net";
option domain-name-servers 192.168.3.80;
option time-offset -18000; # Eastern Standard Time
range 192.168.3.155 192.168.3.170;
}
host oc-base-410{
option host-name "oc-base-410.openshift.isvcluster.net";
hardware ethernet 00:50:56:AF:06:09;
fixed-address 192.168.3.35;
}
host oc-boot-410{
option host-name "oc-boot-410.isvsol.openshift.isvcluster.net";
hardware ethernet 00:50:56:AF:8D:02;
fixed-address 192.168.3.155;
}
host oc-master1-410{
option host-name "oc-master1-410.isvsol.openshift.isvcluster.net";
hardware ethernet 00:50:56:AF:D4:2A;
fixed-address 192.168.3.156;
}
host oc-master2-410{
option host-name "oc-master2-410.isvsol.openshift.isvcluster.net";
hardware ethernet 00:50:56:AF:68:b9;
fixed-address 192.168.3.157;
}
host oc-master3-410{
option host-name "oc-master3-410.isvsol.openshift.isvcluster.net";
hardware ethernet 00:50:56:AF:09:34;
fixed-address 192.168.3.158;
}
host oc-worker1-410{
option host-name "oc-worker1-410.isvsol.openshift.isvcluster.net";
hardware ethernet 00:50:56:AF:C8:C0;
fixed-address 192.168.3.159;
}
host oc-worker2-410{
option host-name "oc-worker2-410.isvsol.openshift.isvcluster.net";
hardware ethernet 00:50:56:AF:C9:A5;
fixed-address 192.168.3.160;
}
host oc-worker3-410{
option host-name "oc-worker3-410.isvsol.openshift.isvcluster.net";
hardware ethernet 00:50:56:AF:18:B5;
fixed-address 192.168.3.161;
}
host rh-oc-410{
option host-name "rh-oc-410.isvsol.openshift.isvcluster.net";
hardware ethernet 00:50:56:AF:DA:CC;
fixed-address 192.168.3.162;
}
Appendix C: HTTP file
Contents of the HTTP server configuration file to support Red Hat OpenShift Container Platform 4.3:
[root@dns named]# cat /etc/httpd/conf/httpd.conf
ServerRoot "/etc/httpd"
Listen 192.168.3.80:80
Include conf.modules.d/*.conf
User apache
Group apache
ServerAdmin root@localhost
<Directory />
AllowOverride none
Require all denied
</Directory>
DocumentRoot "/var/www/html"
<Directory "/var/www">
AllowOverride None
Require all granted
</Directory>
<Directory "/var/www/html">
Options Indexes FollowSymLinks
AllowOverride None
Require all granted
</Directory>
<IfModule dir_module>
DirectoryIndex index.html
</IfModule>
<Files ".ht*">
Require all denied
</Files>
ErrorLog "logs/error_log"
LogLevel warn
<IfModule log_config_module>
LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
LogFormat "%h %l %u %t \"%r\" %>s %b" common
<IfModule logio_module>
LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" %I %O" combinedio
</IfModule>
CustomLog "logs/access_log" combined
</IfModule>
<IfModule alias_module>
ScriptAlias /cgi-bin/ "/var/www/cgi-bin/"
</IfModule>
<Directory "/var/www/cgi-bin">
AllowOverride None
Options None
Require all granted
</Directory>
<IfModule mime_module>
TypesConfig /etc/mime.types
AddType application/x-compress .Z
AddType application/x-gzip .gz .tgz
AddType text/html .shtml
AddOutputFilter INCLUDES .shtml
</IfModule>
AddDefaultCharset UTF-8
<IfModule mime_magic_module>
MIMEMagicFile conf/magic
</IfModule>
EnableSendfile on
IncludeOptional conf.d/*.conf
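Once the ignition files have been copied under the document root (/var/www/html), the web server configuration can be validated and the bootstrap file confirmed to be reachable. A minimal check, assuming the standard Apache httpd tooling and the curl client:
httpd -t
systemctl enable --now httpd
curl -I http://192.168.3.80/bootstrap.ign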
Appendix D: haproxy file
Contents of the haproxy configuration file to support Red Hat OpenShift Container Platform 4.3:
[root@rh-oc-410 ~]# cat /etc/haproxy/haproxy.cfg
#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
log /dev/log local0
log /dev/log local1 notice
chroot /var/lib/haproxy
stats timeout 30s
pidfile /var/run/haproxy.pid
maxconn 4000
user haproxy
group haproxy
daemon
# turn on stats unix socket
stats socket /var/lib/haproxy/stats
defaults
mode tcp
log global
option tcplog
option dontlognull
#option http-server-close
#option forwardfor except 127.0.0.0/8
option redispatch
retries 3
timeout http-request 10s
timeout queue 1m
timeout connect 10s
timeout client 1m
timeout server 1m
timeout http-keep-alive 10s
timeout check 10s
maxconn 3000
frontend api.isvsol.openshift.isvcluster.net
mode tcp
bind 192.168.3.162:6443
default_backend api
backend api
mode tcp
balance roundrobin
option ssl-hello-chk
#server oc-boot-410 192.168.3.155:6443
server oc-master1-410 192.168.3.156:6443
server oc-master2-410 192.168.3.157:6443
server oc-master3-410 192.168.3.158:6443
frontend api-int.isvsol.openshift.isvcluster.net
mode tcp
bind 192.168.3.162:22623
default_backend api-int
backend api-int
mode tcp
balance roundrobin
option ssl-hello-chk
#server oc-boot-410 192.168.3.155:22623
server oc-master1-410 192.168.3.156:22623
server oc-master2-410 192.168.3.157:22623
server oc-master3-410 192.168.3.158:22623
frontend ingress-https.isvsol.openshift.isvcluster.net
mode tcp
bind 192.168.3.162:443
default_backend ingress-https
backend ingress-https
mode tcp
balance roundrobin
option ssl-hello-chk
server oc-worker1-410 192.168.3.159:443 check
server oc-worker2-410 192.168.3.160:443 check
server oc-worker3-410 192.168.3.161:443 check
frontend ingress.isvsol.openshift.isvcluster.net
#mode http
bind 192.168.3.162:80
default_backend ingress
backend ingress
#mode http
balance roundrobin
server oc-worker1-410 192.168.3.159:80
server oc-worker2-410 192.168.3.160:80
server oc-worker3-410 192.168.3.161:80
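The load balancer configuration can be validated before the service is started or reloaded. A minimal check, assuming the haproxy binary is installed in its default location:
haproxy -c -f /etc/haproxy/haproxy.cfg
systemctl enable --now haproxy
On SELinux-enforcing hosts, binding haproxy to the non-standard ports 6443 and 22623 may additionally require the haproxy_connect_any boolean (setsebool -P haproxy_connect_any on).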
Appendix E: node communication
Enable communication between the bootstrap and master nodes and the haproxy, HTTP, DNS, and DHCP systems. In the lab setup, the three daemons (HTTP, DNS, and DHCP) ran on the same system. The following firewall rules allowed connections to each of these daemons (a reload step that activates the permanent rules follows the rule list).
RH-DNS-410
firewall-cmd --permanent --add-service=dns
firewall-cmd --permanent --add-service=http
firewall-cmd --permanent --add-service=dhcp
RH-HOST-410
firewall-cmd --permanent --add-port=80/tcp
firewall-cmd --permanent --add-port=443/tcp
firewall-cmd --permanent --add-port=6443/tcp
firewall-cmd --permanent --add-port=10256/tcp
firewall-cmd --permanent --add-port=22623/tcp
firewall-cmd --permanent --add-port=2379-2380/tcp
firewall-cmd --permanent --add-port=9000-9999/tcp
firewall-cmd --permanent --add-port=10249-10259/tcp
firewall-cmd --permanent --add-port=30000-32767/tcp
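Because the rules above are added with --permanent, they take effect only after the firewall is reloaded. The following commands reload firewalld and display the resulting configuration for verification:
firewall-cmd --reload
firewall-cmd --list-all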
Appendix F: bootstrap file
Contents of the append-bootstrap.ign file in support of the RHCOS machines in VMware vSphere:
"ignition": {
"config": {
"append": [
{
"source": "http://192.168.3.80/bootstrap.ign",
"verification": {}
}
]
},
"timeouts": {},
"version": "2.1.0"
},
"networkd": {},
"passwd": {},
"storage": {},
"systemd": {}
}
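Before attaching the bootstrap ignition data to the RHCOS virtual machine, it is worth confirming that the JSON is well formed and, where the vSphere guestinfo.ignition.config.data property is used, base64-encoding the file. A minimal sketch, assuming a Python interpreter is available as python on the helper host:
python -m json.tool append-bootstrap.ign
base64 -w0 append-bootstrap.ign > append-bootstrap.64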
Appendix G: IBM Spectrum Scale usage restrictions
Make note of the following conditions before using IBM Spectrum Scale CSI Driver with IBM Spectrum Scale:
IBM Spectrum Scale storage tiering/pooling is not surfaced at the PVC/pod level. To gain a look and feel similar to IBM Spectrum Scale storage tiering, it is advised to create a separate IBM Spectrum Scale file system on each type of desired storage (such as flash, SSD, and SAS). After each file system is created, associate the filesystem and fileset type with a storage class in the storage-class-template.yml and pvc-template.yml files. This enables multiple types of storage to be presented to the pods through multiple PVCs.
IBM Spectrum Scale must be preinstalled with the IBM Spectrum Scale GUI.
IBM Spectrum Scale CSI Driver is only supported on IBM Spectrum Scale V5.0.4.1 and later.
At least one filesystem must exist and must be mounted on all of the worker nodes.
IBM Spectrum Scale Container Storage Interface Driver does not use the size specified in the PersistentVolumeClaim for lightweight volumes and dependent fileset volumes.
Quota must be enabled for all of the filesystems being used for creating persistent volumes.
All Red Hat OpenShift worker nodes must have the IBM Spectrum Scale client installed on them.
The maximum number of supported volumes that can be created using the independent fileset storage class is 998 (excluding the root fileset and the primary fileset reserved for the IBM Spectrum Scale Container Storage Interface Driver). This limit is documented in the IBM Knowledge Center.
IBM Spectrum Scale Container Storage Interface Driver relies on the GUI server for performing IBM Spectrum Scale operations. If the GUI password or CA certificate expires, then manual intervention is needed by the admin to reset the password on the GUI or generate a new certificate and update the configuration in IBM Spectrum Scale Container Storage Interface Driver.
IBM Spectrum Scale Container Storage Interface Driver does not support mounting of volumes in read-only mode.
Red Hat OpenShift nodes should be configured to schedule pods only after the IBM Spectrum Scale filesystem is mounted on the worker node(s). The mount state can be monitored with the mmlsmount all command, and it is recommended to script Red Hat OpenShift startup based on its return code or on systemd results. When scripting against this command, use the -Y parameter, because its output is parseable and its formatting is consistent from release to release.
If the IBM Spectrum Scale filesystem is unmounted, or if there is an issue with IBM Spectrum Scale on a particular node, applications in containers that use a PVC backed by IBM Spectrum Scale receive I/O errors. Neither the IBM Spectrum Scale CSI Driver nor Kubernetes monitors IBM Spectrum Scale, so failures in the I/O path go undetected by them. It is therefore recommended to monitor IBM Spectrum Scale directly, for example with an IBM General Parallel File System (GPFS) callback that takes action when the filesystem is unmounted or shut down, such as cordoning or draining the affected node (a sketch of this approach appears at the end of this appendix).
If a single PVC is used by multiple pods, the application must maintain data consistency.
Creating a large number of PVCs in a single batch, or deleting all of them simultaneously, is not recommended. Such actions might result in overloading the IBM Spectrum Scale GUI node, which in turn might lead to the failure of creation and deletion of filesets on IBM Spectrum Scale.
The uid, gid, inode-limit, and fileset-type parameters in the storage classes are honored only when a new fileset is created.
For each uid-gid combination, a separate storage class must be defined (an illustrative storage class manifest appears at the end of this appendix).
Advanced IBM Spectrum Scale functionality, such as Active File Management, remote mount, encryption, and compression, is not supported by IBM Spectrum Scale CSI Driver.
The persistent volumes created using IBM Spectrum Scale CSI Driver with IBM Spectrum Scale as the back end use the IBM Spectrum Scale quota to make sure that the users cannot use more storage space than the amount specified in the PVC. However, this does not guarantee that the storage specified in the PVC is actually available. The storage administrator must ensure that the required storage is available on the IBM Spectrum Scale filesystem.
IBM Spectrum Scale CSI Driver does not check the storage space available on the IBM Spectrum Scale filesystem before creating a PVC. You can use a Kubernetes storage resource quota to limit the number of PVCs or the total requested storage (an example appears at the end of this appendix).
A fileset created by the IBM Spectrum Scale CSI Driver should not be unlinked or deleted through any other interface.
The filesystem used for the persistent volume must be mounted on all the worker nodes at all times.
Both Red Hat OpenShift and the IBM Spectrum Scale GUI use port 443.
IBM Spectrum Scale CSI Driver does not support volume expansion for any storage class.
The df command inside a container shows the full size of the IBM Spectrum Scale filesystem, not the capacity requested in the PVC.
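As noted for the scheduling and monitoring items above, worker nodes can be cordoned automatically when the filesystem becomes unavailable. The fragment below is only a sketch of that approach, not a tested procedure: the callback name and script path are placeholders, the name reported by hostname must match the OpenShift node name, and oc must be able to authenticate from the node on which the callback fires.
# register a cluster-wide GPFS callback that fires when a filesystem is unmounted or GPFS shuts down
mmaddcallback csiCordonNode --command /usr/local/bin/cordon-node.sh --event unmount,shutdown
# contents of /usr/local/bin/cordon-node.sh (sketch): cordon the local node so no new pods are scheduled on it
#!/bin/bash
oc adm cordon "$(hostname)"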
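The per-uid/gid storage class requirement and the fileset parameters described above can be expressed as in the following manifest. This is only an illustrative sketch: the provisioner name corresponds to the IBM Spectrum Scale CSI driver, but the filesystem name, cluster ID, and uid/gid values are placeholders for this lab, and the exact parameter keys should be confirmed against the storage-class-template.yml shipped with the installed driver release.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ibm-spectrum-scale-uid1001
provisioner: spectrumscale.csi.ibm.com
parameters:
  volBackendFs: "gpfs01"          # filesystem backing the volumes (placeholder name)
  clusterId: "1234567890123456"   # Spectrum Scale cluster ID (placeholder)
  filesetType: "independent"      # independent or dependent fileset
  uid: "1001"                     # one storage class per uid-gid combination
  gid: "1001"
  inodeLimit: "1024000"           # honored only at new fileset creation
reclaimPolicy: Delete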
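Because the driver does not verify free space before provisioning, a namespace-level ResourceQuota can cap both the number of PVCs and the total capacity that applications may request. A minimal example (the namespace name and limits are arbitrary values for illustration):
apiVersion: v1
kind: ResourceQuota
metadata:
  name: scale-pvc-quota
  namespace: my-app                  # placeholder namespace
spec:
  hard:
    persistentvolumeclaims: "20"     # maximum number of PVCs in the namespace
    requests.storage: "500Gi"        # total storage that can be requested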