Install IBM Storage Scale Container Native Storage Access 5.2.1.1 on Red Hat OpenShift Container Platform 4.14 on IBM Power Virtual Servers
IBM Storage Scale in containers (IBM Storage Scale Container Native Storage Access) allows the deployment of the cluster file system in a Red Hat OpenShift cluster. Using a remote mount-attached IBM Storage Scale file system, the IBM Storage Scale solution provides a persistent data store to be accessed by the applications via the IBM Storage Scale Container Storage Interface (CSI) driver using persistent volumes (PVs).
This tutorial shows how to create a two-node Storage Scale 5.2.1.1 storage cluster on Red Hat Enterprise Linux (RHEL) 9.4 virtual machines (VMs) and shared disks on IBM Power Virtual Servers and then connect the Storage Scale storage cluster to your Red Hat OpenShift Container Platform 4.14 cluster running on IBM Power Virtual Servers via Storage Scale Container Native Storage Access 5.2.1.1.
Prerequisites
This tutorial assumes that you are familiar with the Red Hat OpenShift Container Platform 4.14 environment on IBM Power Virtual Server. It is assumed that you already have it installed, that you have access to it, and that you have the credentials of an OpenShift cluster administrator (also known as kubeadmin).
Furthermore, you need to have access to the IBM Cloud console to provision the RHEL VMs and the storage on IBM Power Virtual Server for the Storage Scale cluster that is created in this tutorial.
You must be familiar with the Linux command line and have at least a basic understanding of Red Hat OpenShift.
Estimated time
It is expected to take around 2 to 3 hours to complete the installation of IBM Storage Scale 5.2.1.1 on IBM Power Virtual Server and to set up IBM Storage Scale Container Native Storage Access 5.2.1.1 on the Red Hat OpenShift 4.14 cluster. This lengthy duration is because we need to provision VMs on Power Virtual Server, install software from internet repositories, and reboot the worker nodes of the Red Hat OpenShift cluster.
Step 1 – Provision RHEL 9.4 VMs and shared disks on IBM Power Virtual Server
We need to create two basic RHEL 9.4 VMs on Power Virtual Server with firewalld installed. These VMs need to be on the same private network as the OpenShift nodes. Make sure that the /etc/hosts files have an entry for each VM's long and short hostnames pointing to the IP address of this interface, so that when the cluster is built, the Storage Scale daemon runs on the IP of the private network interface. Add at least two shareable disks to these VMs so that they can be used for the file system creation process.
Detailed steps:
Provision two RHEL 9.4 VMs on IBM Power Virtual Servers, each with at least:
2 physical cores with SMT8 (resulting in 16 vCPUs at the operating system level)
16 GB RAM
50 GB disk (for the operating system)
1 public IP address for Secure Shell (SSH) access. The IP address must be in the same network as the workers from the OpenShift cluster.
Provision at least two shared disks, each with a size of at least 100 GB.
Attach the shared disks to each of the VMs.
Log in as root user into each of the VMs.
Using your Red Hat account, register the VM with Red Hat in order to receive updates and packages. Make sure the system stays on the RHEL 9.4 Extended Update Support (EUS) release for the kernel by specifying the release parameter.
subscription-manager register --release=9.4
Run the following command and verify if the RHEL release is 9.4.
subscription-manager release
Run a system update for each RHEL VM and reboot afterward.
yum update -y
reboot now
Verify that the system has been updated to the latest kernel. After running the following command, check whether the output displays a kernel version of 5.14.0-427.37.1.el9_4.ppc64le or later.
uname -r
Step 2 – Change the MTU size of the private network interfaces of each RHEL 9.4 VM to 1450
Change the MTU size of the private network interface that is used to connect to the OpenShift worker nodes to 1450.
Caution: When changing the MTU size of a network interface, the network adapter is disabled. This will cause the SSH session to the VM to be closed. To enable the network interface again, make sure that you can log in to the VM using the IBM Cloud web interface.
Log in as a root user to the RHEL VM.
Run the nmtui command.
nmtui
Select Edit a connection and press Enter.
Choose the private network interface for which we want to set the MTU size to 1450, for example System env3, and press Enter.
Select <Show> using the Down Arrow key and press Enter.
Go to the MTU field using the arrow keys and enter the value 1450.
Scroll down using the Down Arrow key, select <OK> at the bottom of the screen, and press Enter.
On the Ethernet screen using the arrow keys, select <Back> and press Enter.
On the NetworkManager TUI screen, select Activate a connection and press Enter.
Using the arrow keys, select the * System env entry of the private network interface you want to deactivate (in this example, * System env3). Press the Right Arrow key and then press Enter. This will deactivate the network interface.
Caution: This will close your current SSH connection to the VM assuming you have connected to the VM via this network interface.
From the console to the VM in the IBM Cloud Administration GUI, log in to the VM as root user.
Run the nmtui command.
Activate the connection again.
Repeat step 2 to step 14 to change the MTU size to 1450 for the other RHEL 9.4 VM.
Step 3 – Prepare the RHEL 9.4 nodes for Storage Scale 5.2.1.1
On each node, create a /etc/hosts file that has the IP addresses, the fully qualified hostnames, and the short hostnames for the two RHEL VMs that you just created. The following shows an example.
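A minimal sketch of such an /etc/hosts file, assuming the hostnames and private IP addresses used later in this tutorial (adapt the values to your own VMs):

# /etc/hosts entries for the two Storage Scale nodes
192.168.167.234 218018-linux-1.power-iaas.cloud.ibm.com 218018-linux-1
192.168.167.238 218018-linux-2.power-iaas.cloud.ibm.com 218018-linux-2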
Log in to the first node as root user. In this example, the first node is the host 218018-linux-1 and the second node is the host 218018-linux-2.
Configure password-less SSH by performing the following steps (that is, step 4 to step 8). The nodes in the cluster must be able to communicate with each other without the use of a password for the root user and without the remote shell displaying any extraneous output.
Generate an SSH key for the system by issuing the following command:
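A typical way to do this is shown below, assuming a passphrase-less RSA key for the root user that is also appended to the local authorized_keys file (adapt the key type and path as needed):

# generate the key and allow root to log in to itself and, after copying, to the other node
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys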
Change the permissions of the ~/.ssh/authorized_keys file by issuing the following command:
chmod 600 ~/.ssh/authorized_keys
Copy the content of the ~/.ssh directory to the other node in the cluster.
scp ~/.ssh/* 218018-linux-2:/root/.ssh
Test the SSH setup to ensure that all nodes can communicate with all other nodes. Test by using short hostnames, fully qualified host names, and IP addresses. Assume that the environment has the two nodes:
a) 218018-linux-1.power-iaas.cloud.ibm.com:192.168.167.234 , and
b) 218018-linux-2.power-iaas.cloud.ibm.com:192.168.167.238.
Repeat the following test using the short names (218018-linux-1 and 218018-linux-2), the fully qualified names (218018-linux-1.power-iaas.cloud.ibm.com and 218018-linux-2.power-iaas.cloud.ibm.com ), and the IP addresses (192.168.167.234 and 192.168.167.238):
#!/bin/bash
# Edit the nodes list and re-run the script for IP addresses,
# short hostnames and long hostnames.
#nodes="192.168.167.234 192.168.167.238"
#nodes="218018-linux-1.power-iaas.cloud.ibm.com 218018-linux-2.power-iaas.cloud.ibm.com"
nodes="218018-linux-1 218018-linux-2"

# Test ssh configuration
for i in $nodes; do
  for j in $nodes; do
    echo -n "Testing ${i} to ${j}: "
    ssh ${i} "ssh ${j} date"
  done
done
Sample output:
Testing 218018-linux-1 to 218018-linux-1: Sat Sep 28 14:12:36 EDT 2024
Testing 218018-linux-1 to 218018-linux-2: Sat Sep 28 14:12:36 EDT 2024
Testing 218018-linux-2 to 218018-linux-1: Sat Sep 28 14:12:37 EDT 2024
Testing 218018-linux-2 to 218018-linux-2: Sat Sep 28 14:12:38 EDT 2024
Install the Linux wget and screen utilities on the first node. The screen utility helps you maintain your session in case your internet connection drops and makes it recoverable when you reconnect.
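A sketch of the install command, assuming the standard RHEL package names:

yum -y install wget screen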
Step 6 – Create a two-node Storage Scale 5.2.1.1 cluster on the RHEL 9.4 VMs
In this step, we will create a two-node Storage Scale cluster on the two RHEL VMs. We first set up the installer node on the first node, then create the Storage Scale cluster on both nodes, and finally create a Storage Scale file system on that cluster.
Log in as the root user into the first node, for example, 218018-linux-1.
Run the screen command.
screen
Set up the installer node.
cd /usr/lpp/mmfs/5.2.1.1/ansible-toolkit
./spectrumscale setup -s <IP of private network interface of first node>
Sample output:
Use the multipath command to find out the device IDs of your shared disks. In our test system, the VM has five disks:
disk1 is the rootvg disk for the RHEL operating system, with a size of 100 GB, and is bound to the device dm-0.
disk2 is the first shared disk of 100 GB size and is bound to device dm-1.
disk3 is the second shared disk of 100 GB size and is bound to device dm-6.
disk4 is the third shared disk of 100 GB size and is bound to device dm-5. We are not going to use this device.
disk5 is the fourth shared disk of 10 GB size and is bound to device dm-7. We are not going to use this device.
Caution: Make sure that you choose the right dm-nnn IDs for the following step in order to not accidentally overwrite the rootvg disk and thus the partition for the RHEL operating system on that VM.
multipath -ll 2>/dev/null | grep"dm-\|size"
Sample output:
List the /dev/mapper directory to find out the name of the device that is being used for the rootvg.
ls /dev/mapper
Sample output:
In our sample, the device 36005076813810219680000000000041e has three partitions on it (e1, e2, and e3), while the other disks 360050768138102196800000000000421, 360050768138102196800000000000422, 360050768138102196800000000000423, and 360050768138102196800000000000424 have no partitions.
Run the df command to confirm by another means that the device 36005076813810219680000000000041e is being used for the rootvg.
df
Sample output:
Here we see that the partition 36005076813810219680000000000041e3 is being used for the root file system of the RHEL VM.
Use the define_cluster.sh script to define the topology of the two-node Storage Scale cluster. Edit the script with your preferred editor and adapt the contents of the variables NODE_1, NODE_2, DISK_1, DISK_2, and CLUSTER_NAME to your environment.
# short hostnames of the two nodes in the cluster
NODE_1="218018-linux-1"
NODE_2="218018-linux-2"
# device names dm-x of the shared disks as printed by: "multipath -ll 2>/dev/null | grep 'dm-\|size'"
DISK_1="dm-1"
DISK_2="dm-6"
# cluster name
CLUSTER_NAME="gpfs-tz-p9-cluster"
Run the modified define_cluster.sh script to define the topology of the Storage Scale cluster.
chmod u+x define_cluster.sh
./define_cluster.sh | tee define_cluster.out
Run the following commands to disable callhome and to perform an installation precheck for Storage Scale. Before continuing to the next step, verify that the installation precheck command reports a Pre-check successful for install message.
cd /usr/lpp/mmfs/5.2.1.1/ansible-toolkit
# disable call home
./spectrumscale callhome disable
# list node configuration
./spectrumscale node list
# run install precheck
./spectrumscale install --precheck
Run the spectrumscale install command. This will create the Storage Scale cluster together with a Storage Scale gpfs0 file system on the two nodes and the two shared disks. Also, include the date and time commands to measure the duration of the installation of the cluster. The command will take up to 10 minutes to complete.
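A sketch of the command sequence, assuming the install toolkit directory used above and running date before and after the installation to measure its duration:

cd /usr/lpp/mmfs/5.2.1.1/ansible-toolkit
date
./spectrumscale install
date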
Add the tiebreaker disk. This step is necessary to achieve quorum on a two-node Storage Scale cluster.
/usr/lpp/mmfs/bin/mmchconfig tiebreakerDisks=nsd1
Edit the ~/.bash_profile file and append the entry /usr/lpp/mmfs/bin to the PATH variable.
PATH=$PATH:$HOME/bin:/usr/lpp/mmfs/bin
Source the ~/.bash_profile file using the following command:
source ~/.bash_profile
Run the mmlscluster, mmgetstate -a, and df -h commands to list the Storage Scale cluster definition, to verify that all cluster nodes are active, and to validate that the gpfs0 file system has been successfully mounted under /ibm/gpfs0.
mmlscluster; echo; mmgetstate -a; echo; df -h
Sample output:
Step 7 – Prepare the Storage Scale 5.2.1.1 cluster for Storage Scale Container Native 5.2.1.1
Create a new GUI user for Storage Scale container native with username as cnss_storage_gui_user and password as cnss_storage_gui_password.
Create a new GUI group CSIadmin and a new GUI user for Storage Scale CSI with username as csi-storage-gui-user and password as csi-storage-gui-password (a sketch of the corresponding commands follows).
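A sketch of these two steps using the Storage Scale GUI CLI on the first RHEL VM; the mkuser and mkusergrp commands, their flags, and the ContainerOperator group and csiadmin role names are assumptions, so verify them against your Storage Scale documentation:

# create the GUI user for container native (ContainerOperator group name is an assumption)
/usr/lpp/mmfs/gui/cli/mkuser cnss_storage_gui_user -p cnss_storage_gui_password -g ContainerOperator
# create the CSIadmin GUI group and the CSI GUI user (role name is an assumption)
/usr/lpp/mmfs/gui/cli/mkusergrp CSIadmin --role csiadmin
/usr/lpp/mmfs/gui/cli/mkuser csi-storage-gui-user -p csi-storage-gui-password -g CSIadmin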
Run the following commands to enable quota on the gpfs0 file system, change the SELinux setting, and enable the filesetdf option:
# enable quota on filesystem used by csi
mmchfs gpfs0 -Q yes
# enable quota for root user
mmchconfig enforceFilesetQuotaOnRoot=yes -i
# ensure selinux parameter is set to yes
mmchconfig controlSetxattrImmutableSELinux=yes -i
# enable filesetdf
mmchfs gpfs0 --filesetdf
Sample output:
Step 8 – Prepare the OpenShift 4.14 cluster for Storage Scale Container Native 5.2.1.1
Log in as the root user to the bastion host of your OpenShift 4.14 cluster on IBM Power Virtual Server.
Install the wget and unzip utilities.
yum -y install wget unzip
Download the Storage Scale Container Native 5.2.1.1 deployment code from GitHub.
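A sketch of the download, assuming the archive is fetched from the IBM/ibm-spectrum-scale-container-native GitHub repository; the URL, tag name, and extracted directory name below are assumptions, so verify them against the repository's releases page:

# hypothetical release archive URL for the 5.2.1.x deployment code
wget https://github.com/IBM/ibm-spectrum-scale-container-native/archive/refs/tags/v5.2.1.1.zip
unzip v5.2.1.1.zip -d /root
cd /root/ibm-spectrum-scale-container-native-5.2.1.1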
Apply the Machine Config Operator (MCO) settings for Storage Scale Container Native 5.2.1.1 for OpenShift 4.14 on IBM Power. Note that applying the MCO to update the configuration will trigger a reboot of all the worker nodes in your OpenShift cluster.
Check the status of the update. Verify if the oc get mcp command shows UPDATED=True, UPDATING=False, and DEGRADED=False for the workers.
oc get mcp
Validate that the kernel-devel package has been successfully installed on all worker nodes. The number of lines in the command output should match the number of worker nodes in your cluster. You might need to rerun the command to get the correct output.
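A sketch of such a check, assuming the standard oc debug pattern for running a command on each worker node (the label selector and loop are assumptions):

# query kernel-devel on every worker node via a debug pod
for node in $(oc get nodes -l node-role.kubernetes.io/worker= -o name); do
  oc debug $node -- chroot /host rpm -q kernel-devel 2>/dev/null
done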
Step 9 – Install Storage Scale Container Native 5.2.1.1 on the OpenShift 4.14 cluster
Make sure that you are logged in as the root user on the bastion host of the OpenShift cluster and that you are logged in as the kubeadmin user at the OpenShift cluster.
oc whoami
Change to the directory where you have extracted the Spectrum Scale Container Native 5.2.1.1 deployment files from Step 8.
cd /root/ibm-spectrum-scale-container-native-5.2.1.x
Make a note of the fully qualified domain name (FQDN) of the GUI node of your Spectrum Scale cluster on the RHEL VMs that you created earlier. This is the FQDN of the first host we created, for example, 218018-linux-1.power-iaas.cloud.ibm.com.
Make a backup copy of the /var/named/zonefile.db file on the bastion host, then edit it and add an additional line at the end of the file just before the line that contains the EOF string. The line should contain the short hostname and the IP address of the private network interface of your Storage Scale GUI node, using the following format:
<short hostname of GUI node> IN A <IP address of private interface of GUI node>
;EOF
Sample:
; Create an entry for the GUI server of the external Storage Scale cluster
218018-linux-1 IN A 192.168.167.234
;
;EOF
Make a backup copy of the /var/named/reverse.db file on the bastion host, then edit it and add an additional line at the end of the file just before the line that contains the EOF string. The new line should include the last octet of the IP address of the GUI node's private network interface and the FQDN hostname of the GUI node, following this format. Note that there is an additional character (".") at the end of the new line.
<last octet of IP address of private interface of GUI node> IN PTR <FQDN hostname of GUI node>.
;
; EOF
Sample:
234 IN PTR 218018-linux-1.power-iaas.cloud.ibm.com.
;
; EOF
Restart the named service and verify that it is running.
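A sketch of the restart and check, assuming named is managed by systemd on the bastion host:

systemctl restart named
systemctl status named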
Create the secrets for Storage Scale container native and CSI. Replace REMOTE_SSCALE_GUI_NODE with the FQDN hostname of the GUI node of your Spectrum Scale cluster on the RHEL VMs. Then run the following commands.
REMOTE_SSCALE_GUI_NODE="<replace with FQDN of the GUI node of your remote Spectrum Scale storage cluster>"
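A sketch of the secret creation, assuming the user names and passwords from Step 7, the secret names referenced later in the custom resources, and that the ibm-spectrum-scale and ibm-spectrum-scale-csi namespaces already exist:

# secret for the container native operator (ContainerOperator GUI user)
oc create secret generic cnsa-remote-mount-storage-cluster-1 \
  --from-literal=username='cnss_storage_gui_user' \
  --from-literal=password='cnss_storage_gui_password' \
  -n ibm-spectrum-scale
# secret for the CSI driver (CSIadmin GUI user)
oc create secret generic csi-remote-mount-storage-cluster-1 \
  --from-literal=username='csi-storage-gui-user' \
  --from-literal=password='csi-storage-gui-password' \
  -n ibm-spectrum-scale-csi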
We now need to edit the configuration file that creates the Storage Scale container native custom resource to adapt the configuration to our environment. Edit the generated/scale/cr/cluster/cluster.yaml file using your preferred editor.
Modify the hostAliases: section. The hostAliases section contains the list of hostnames of the remote Storage Scale cluster nodes.
Note: Make sure to indent the entries properly as this is a YAML file:
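A sketch of the resulting section, assuming the hostnames and private IP addresses of the two RHEL VMs from this tutorial; the hostname and ip field names follow the sample cluster.yaml and are assumptions:

  hostAliases:
    # nodes of the remote Storage Scale storage cluster
    - hostname: 218018-linux-1.power-iaas.cloud.ibm.com
      ip: 192.168.167.234
    - hostname: 218018-linux-2.power-iaas.cloud.ibm.com
      ip: 192.168.167.238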
Verify that the Storage Scale container native pods are up and running. You should see an output as shown in the following example. Note that for each of your OpenShift worker nodes, you will see one worker-nnn pod.
oc get pods -n ibm-spectrum-scale
Sample output:
NAME                               READY   STATUS    RESTARTS   AGE
ibm-spectrum-scale-gui-0           4/4     Running   0          6m44s
ibm-spectrum-scale-gui-1           4/4     Running   0          98s
ibm-spectrum-scale-pmcollector-0   2/2     Running   0          6m14s
ibm-spectrum-scale-pmcollector-1   2/2     Running   0          4m6s
worker-0                           2/2     Running   0          6m39s
worker-1                           2/2     Running   0          6m39s
worker-2                           2/2     Running   0          6m39s
Verify that the cluster CR has been created successfully.
oc get cluster ibm-spectrum-scale -o yaml
Sample output:
We now need to edit the configuration file that creates the Storage Scale remote cluster custom resource to adapt the configuration to our environment. Edit the generated/scale/cr/remotecluster/remotecluster.yaml file using your preferred editor.
Comment out the contactNodes: section.
Find the following lines:
Modify the gui section. In the gui section, the FQDN of the GUI node of the remote Storage Scale cluster is specified. The GUI node will be contacted by Storage Scale Container Native when provisioning, for example, new persistent volumes (PVs).
Find the following lines:
gui:
  cacert: cacert-storage-cluster-1
  # This is the secret that contains the CSIAdmin user
  # credentials in the ibm-spectrum-scale-csi namespace.
  csiSecretName: csi-remote-mount-storage-cluster-1
  # hosts are the GUI endpoints from the storage cluster. Multiple
  # hosts (up to 3) can be specified to ensure high availability of GUI.
  hosts:
    - guihost1.example.com
    # - guihost2.example.com
    # - guihost3.example.com
  insecureSkipVerify: false
  # This is the secret that contains the ContainerOperator user
  # credentials in the ibm-spectrum-scale namespace.
  secretName: cnsa-remote-mount-storage-cluster-1
Replace with the following lines:
gui:
  #cacert: cacert-storage-cluster-1
  # This is the secret that contains the CSIAdmin user
  # credentials in the ibm-spectrum-scale-csi namespace.
  csiSecretName: csi-remote-mount-storage-cluster-1
  # hosts are the GUI endpoints from the storage cluster. Multiple
  # hosts (up to 3) can be specified to ensure high availability of GUI.
  hosts:
    - <FQDN of the GUI node of the remote Storage Scale storage cluster>
    # - guihost2.example.com
    # - guihost3.example.com
  insecureSkipVerify: true
  # This is the secret that contains the ContainerOperator user
  # credentials in the ibm-spectrum-scale namespace.
  secretName: cnsa-remote-mount-storage-cluster-1
Verify that the remote cluster CR has been successfully created.
oc get remotecluster -n ibm-spectrum-scale
Sample output:
We now need to edit the configuration file that creates the Storage Scale remote filesystem custom resource to adapt the configuration to our environment. Edit the generated/scale/cr/filesystem/filesystem.remote.yaml file using your preferred editor.
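A sketch of what the edited remote filesystem CR typically contains, assuming the remote-sample name referenced by the storage class later in this tutorial and the gpfs0 file system created in Step 6; the apiVersion and the remote cluster reference are assumptions taken from the sample file, so verify them against your copy of filesystem.remote.yaml:

apiVersion: scale.spectrum.ibm.com/v1beta1
kind: Filesystem
metadata:
  name: remote-sample
  namespace: ibm-spectrum-scale
spec:
  remote:
    # name of the RemoteCluster CR created in the previous step
    cluster: <name of the RemoteCluster CR>
    # file system on the remote Storage Scale storage cluster
    fs: gpfs0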
Verify that the Spectrum Scale CSI pods are up and running. You should see an output as shown in the following example. Note that for each worker node of your OpenShift cluster, you will see one ibm-spectrum-scale-csi-zzzzz pod.
oc get pods -n ibm-spectrum-scale-csi
Sample output:
On the first RHEL VM from your external Storage Scale storage cluster, run the mmlscluster command to find out the GPFS cluster ID.
On the bastion node, create a file storage_class_fileset.yaml that defines a new storage class ibm-spectrum-scale-csi-fileset for Storage Scale container native. Replace the placeholder with your cluster ID obtained in the previous step.
cat <<EOF > storage_class_fileset.yaml
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ibm-spectrum-scale-csi-fileset
provisioner: spectrumscale.csi.ibm.com
parameters:
  permissions: "777"
  volBackendFs: remote-sample
  clusterId: "<replace with your cluster ID>"
reclaimPolicy: Delete
EOF
Sample content of a storage_class_fileset.yaml file:
Create a new persistent volume claim (PVC) named ibm-spectrum-scale-pvc that uses the storage class ibm-spectrum-scale-csi-fileset by entering the following command.
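A sketch of the PVC creation, assuming a ReadWriteMany access mode and a 1 Gi request (both are assumptions, adjust them to your needs):

cat <<EOF | oc apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ibm-spectrum-scale-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: ibm-spectrum-scale-csi-fileset
EOF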
In this tutorial, you have learned how to set up Storage Scale Container Native 5.2.1.1 on OpenShift 4.14 on IBM Power Virtual Servers and how to provision new PVCs on a remote Storage Scale 5.2.1.1 cluster running on two RHEL 9.4 VMs on IBM Power Virtual Servers.
Acknowledgments
The authors would like to thank Paulina Acevedo, Tara Astigarraga, Isreal Andres Vizcarra Gondinez, Todd Tosseth, Alexander Saupp and Harald Seipp for their guidance and insights on how to set up and verify Storage Scale Container Native Storage Access 5.2.1.1 on Red Hat OpenShift Container Platform 4.14.
Take the next step
Join the Power Developer eXchange Community (PDeX). PDeX is a place for anyone interested in developing open source apps on IBM Power. Whether you're new to Power or a seasoned expert, we invite you to join and begin exchanging ideas, sharing experiences, and collaborating with other members today!