Tutorial

Install IBM Storage Scale Container Native 6.0.0.0 on Red Hat OpenShift Container Platform 4.20 on IBM Power Virtual Servers

Take advantage of a cloud-native storage solution for your OpenShift cluster using IBM Storage Scale Container Native

By

Claus Huempel,

Daniel Casali

Introduction

IBM Storage Scale in containers (IBM Storage Scale Container Native) allows the deployment of the cluster file system in a Red Hat OpenShift cluster. Using a remote mount-attached IBM Storage Scale file system, the IBM Storage Scale solution provides a persistent data store to be accessed by the applications via the IBM Storage Scale Container Storage Interface (CSI) driver using persistent volumes (PVs).

This tutorial shows how to create a two-node Storage Scale 6.0.0.0 storage cluster on Red Hat Enterprise Linux (RHEL) 9.6 virtual machines (VMs) and shared disks on IBM Power Virtual Servers and then connect the Storage Scale storage cluster to your Red Hat OpenShift Container Platform 4.20 cluster running on IBM Power Virtual Servers via Storage Scale Container Native Storage Access 6.0.0.0.

Prerequisites

This tutorial assumes that you are familiar with the Red Hat OpenShift Container Platform 4.20 environment on IBM Power Virtual Server, that you already have it installed, that you have access to it, and that you have the credentials of an OpenShift cluster administrator (also known as kubeadmin).

Furthermore, you need to have access to the IBM Cloud console to provision the RHEL VMs and the storage on IBM Power Virtual Server for the Storage Scale cluster that is created in this tutorial.

You must be familiar with the Linux command line and have at least a basic understanding of Red Hat OpenShift.

Estimated time

Expect to spend around 2 to 3 hours to complete the installation of IBM Storage Scale 6.0.0.0 on IBM Power Virtual Server and to set up IBM Storage Scale Container Native 6.0.0.0 on the Red Hat OpenShift 4.20 cluster. Most of this time is spent provisioning VMs on Power Virtual Server, installing software from internet repositories, and rebooting the worker nodes of the Red Hat OpenShift cluster.

Steps

This tutorial includes the following steps:

  1. Provision RHEL 9.6 VMs on IBM Power Systems Virtual Server.
  2. Change the MTU size of the private network interface of each RHEL 9.6 VM to 1450.
  3. Prepare the RHEL 9.6 nodes for Storage Scale 6.0.0.0.
  4. Download the Storage Scale 6.0.0.0 installer from IBM Fix Central.
  5. Install the Storage Scale 6.0.0.0 binary files.
  6. Create a two-node Storage Scale cluster on the RHEL 9.6 VMs and the shared disks.
  7. Prepare the Storage Scale 6.0.0.0 cluster for Storage Scale Container Native 6.0.0.0.
  8. Prepare the OpenShift 4.20 cluster for Storage Scale Container Native 6.0.0.0.
  9. Install Storage Scale Container Native 6.0.0.0 on the OpenShift 4.20 cluster.

Step 1 – Provision RHEL 9.6 VMs and shared disks on IBM Power Virtual Server

We need to create two basic RHEL 9.6 VMs on Power Virtual Server with firewalld installed. These VMs need to be on the private network, the same one that the OpenShift nodes are on. Make sure that the /etc/hosts files have an entry for each VM's long and short hostnames pointing to the IP address of this interface, so that when the cluster is built, the Storage Scale daemon runs on the IP of the private network interface. Add at least two shareable disks to these VMs so we can use them for the file system creation process. (A quick check to confirm that both VMs see the shared disks is shown at the end of this step.)

Detailed steps:

  1. Provision two RHEL 9.6 VMs on IBM Power Virtual Servers, each with at least:
    • 2 physical cores with SMT8 (resulting in 16 vCPUs at the operating system level)
    • 16 GB RAM
    • 50 GB disk (for the operating system)
    • 1 public IP address for Secure Shell (SSH) access and 1 private IP address in the same network as the worker nodes of the OpenShift cluster
  2. Provision at least two shared disks each with at least 100 GB size.
  3. Attach the shared disks to each of the VMs.
  4. Log in as root user into each of the VMs.
  5. Using your Red Hat account, register each VM with Red Hat in order to receive updates and packages. Make sure that the system stays on the RHEL 9.6 Extended Update Support (EUS) release for the kernel by setting the release.

    subscription-manager register
    subscription-manager release --set=9.6
  6. Run the following command and verify that the RHEL release is 9.6.

    subscription-manager release
  7. Configure to use RHEL 9.6 EUS repos for each VM.

    subscription-manager repos \
    --disable=rhel-9-for-ppc64le-baseos-rpms
    subscription-manager repos \
    --disable=rhel-9-for-ppc64le-appstream-rpms
    subscription-manager repos \
    --disable=rhel-9-for-ppc64le-supplementary-rpms
    subscription-manager repos \
    --disable=codeready-builder-for-rhel-9-ppc64le-rpms
    subscription-manager repos \
    --enable=rhel-9-for-ppc64le-baseos-eus-rpms
    subscription-manager repos \
    --enable=rhel-9-for-ppc64le-appstream-eus-rpms
    subscription-manager repos \
    --enable=rhel-9-for-ppc64le-supplementary-eus-rpms
    subscription-manager repos --list-enabled
  8. Run a system update for each RHEL VM and reboot afterwards.

    yum update -y
    reboot now
  9. Verify that the system has been updated to the latest kernel. After running the following command, check that the output displays a kernel version of 5.14.0-570.64.1.el9_6.ppc64le or later.

    uname -r
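
Before you continue, it can help to confirm that both VMs see the two shared disks that were attached in steps 2 and 3. The following optional check uses standard RHEL tools; the device names in your environment will differ from the examples used later in this tutorial.

    # Run on each VM as root: the two shared 100 GB disks should be visible
    # on both VMs (in addition to the 50 GB operating system disk).
    lsblk -o NAME,SIZE,TYPE,MOUNTPOINT
    multipath -ll 2>/dev/null | grep "dm-\|size"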

Step 2 – Change the MTU size of the private network interfaces of each RHEL 9.6 VM to 1450

Change the MTU size of the private network interface that is used to connect to the OpenShift worker nodes to 1450. If you prefer a non-interactive approach, a sketch using nmcli is shown at the end of this step.

Caution: When you change the MTU size of a network interface, the network adapter is deactivated. This will cause the SSH session to the VM to be closed. To enable the network interface again, make sure that you can log in to the VM using the IBM Cloud web interface.

  1. Log in as a root user to the RHEL VM.
  2. Run the nmtui command.

    nmtui
  3. Select Edit a connection and press Enter.

    Figure 1

  4. Choose the private network interface for which we want to set the MTU size to 1450, for example System env3, and press Enter.

    Figure 2

  5. Select <Show> using the Down Arrow key and press Enter.

    Figure 3

  6. Go to the MTU field using the arrow keys and enter the value as 1450.

    Figure 4

  7. Scroll down using the Down Arrow key, select <OK> at the bottom of the screen, and press Enter.

    Figure 5

  8. On the Ethernet screen using the arrow keys, select <Back> and press Enter.

    Figure 6

  9. On the NetworkManager TUI screen, select Activate a connection and press Enter.

    Figure 7

  10. Using the arrow keys, select the * System env entry of the private network interface you want to deactivate (in this example, * System env3). Press the Right Arrow key and then press Enter. This will deactivate the network interface.

    Caution: This will close your current SSH connection to the VM assuming you have connected to the VM via this network interface.

    Figure 8

  11. From the console of the VM in the IBM Cloud administration GUI, log in to the VM as the root user.

  12. Run the nmtui command.
  13. Activate the connection again.
  14. Repeat steps 1 to 13 to change the MTU size to 1450 for the other RHEL 9.6 VM.
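
If you prefer not to step through the interactive nmtui screens, the same MTU change can be made with nmcli. The following is a minimal sketch that assumes the private interface belongs to the connection profile named System env3, as in the example above; run it from the IBM Cloud console session for the VM, because restarting the connection briefly drops network access.

    # Show the connection profiles to identify the private network interface
    nmcli connection show
    # Set the MTU of the private interface profile to 1450
    nmcli connection modify "System env3" 802-3-ethernet.mtu 1450
    # Restart the connection so that the new MTU takes effect
    nmcli connection down "System env3" && nmcli connection up "System env3"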

Step 3 – Prepare the RHEL 9.6 nodes for Storage Scale 6.0.0.0

  1. On each node, add two lines to the /etc/hosts file that have the IP addresses, the fully qualified hostnames, and the short hostnames for the two RHEL VMs that you just created. The following shows an example.

    # cat /etc/hosts
    127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
    ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
    129.40.59.118 p1224-storage1.p1224.cecc.ihost.com p1224-storage1
    129.40.59.119 p1224-storage2.p1224.cecc.ihost.com p1224-storage2
  2. Open a terminal and log in to the first node as the root user. In this example, the first node is the host p1224-storage1.

  3. Open a second terminal and log in to the second node as the root user. In this example, the second node is the host p1224-storage2.

  4. In the first terminal, generate an SSH key for the first node by issuing the following command:

    ssh-keygen -b 2048 -t rsa -f ~/.ssh/id_rsa -q -N ""
  5. Display the content of the generated public key by running the command:

    cat ~/.ssh/id_rsa.pub
  6. In the second terminal, open the file ~/.ssh/authorized_keys in an editor, for example:

    vi ~/.ssh/authorized_keys
  7. Copy the content of the public key from the first terminal and paste it to the end of the authorized_keys file in the second terminal. Save the file.

  8. Repeat steps 4 to 7, but this time you will generate an SSH key in the second terminal and copy and paste the content from the public key to the authorized_keys file in the first terminal.

  9. Test the SSH setup to ensure that all nodes can communicate with all other nodes. Test by using short hostnames, fully qualified host names, and IP addresses. Assume that the environment has the two nodes:

    a) p1224-storage1.p1224.cecc.ihost.com with IP address 129.40.59.118, and

    b) p1224-storage2.p1224.cecc.ihost.com with IP address 129.40.59.119.

    Repeat the following test using the short names (p1224-storage1 and p1224-storage2), the fully qualified names (p1224-storage1.p1224.cecc.ihost.com and p1224-storage2.p1224.cecc.ihost.com), and the IP addresses (129.40.59.118 and 129.40.59.119):

    #!/bin/bash
    # Edit nodes list and re-run the script for IP addresses,
    # short hostnames and long hostnames.
    #nodes="129.40.59.118 129.40.59.119"
    #nodes="p1224-storage1.p1224.cecc.ihost.com p1224-storage2.p1224.cecc.ihost.com"
    nodes="p1224-storage1 p1224-storage2"
    # Test ssh configuration
    for i in $nodes; do
     for j in $nodes; do
       echo -n "Testing ${i} to ${j}: "
       ssh ${i} "ssh ${j} date"
     done
    done

    Sample output:

    Testing p1224-storage1 to p1224-storage1: Mon Nov 24 08:07:25 AM EST 2025
    Testing p1224-storage1 to p1224-storage2: Mon Nov 24 08:07:26 AM EST 2025
    Testing p1224-storage2 to p1224-storage1: Mon Nov 24 08:07:26 AM EST 2025
    Testing p1224-storage2 to p1224-storage2: Mon Nov 24 08:07:27 AM EST 2025
  10. Install the Linux wget and screen utilities on the first node. The screen utility helps you maintain your session in case your internet connection drops and makes it recoverable when you reconnect.

    yum -y install wget
    yum -y install https://dl.fedoraproject.org/pub/epel/9/Everything/ppc64le/Packages/s/screen-4.8.0-6.el9.ppc64le.rpm
  11. Open a new screen session. You can resume a screen session via the screen -r command.

    screen
  12. Create a /nodes file that contains the short hostnames of the nodes of your cluster.

    cat > /nodes << EOF
    p1224-storage1
    p1224-storage2
    EOF
  13. Set up chrony (a time synchronization service) on all nodes. Chrony is an implementation of the Network Time Protocol (NTP).

    for node in `cat /nodes`
    do echo ""
    echo "===== $node ====="
    ssh $node "yum install -y chrony; systemctl enable chronyd; systemctl start chronyd"
    done
  14. Verify that the time is synchronized on all nodes.

    for node in `cat /nodes`
    do echo ""
    echo "===== $node ====="
    ssh $node "date"
    done
  15. Install the prerequisites to build the Storage Scale portability layer on each node.

    for node in `cat /nodes`
    do echo ""
    echo "===== $node ====="
    ssh $node "yum -y install 'kernel-devel-uname-r == $(uname -r)'"
    ssh $node "yum -y install cpp gcc gcc-c++ binutils"
    ssh $node "yum -y install 'kernel-headers-$(uname -r)' elfutils elfutils-devel make"
    done
  16. Install the python3, ksh, m4, boost-regex, postgresql, openssl-devel, cyrus-sasl-devel, nftables and python3.12 packages on each node.

    for node in `cat /nodes`
    do echo ""
    echo "===== $node ====="
    ssh $node "yum -y install python3 ksh m4 boost-regex"
    ssh $node "yum -y install postgresql-server postgresql-contrib"
    ssh $node "yum -y install openssl-devel cyrus-sasl-devel"
    ssh $node "yum -y install nftables"
    ssh $node "yum -y install python3.12"
    done
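
To confirm that the prerequisites from steps 15 and 16 were installed cleanly on every node, you can run an optional query loop over the /nodes file. This sketch simply checks a subset of the packages installed above.

    for node in `cat /nodes`
    do echo ""
    echo "===== $node ====="
    ssh $node "rpm -q gcc gcc-c++ make elfutils-devel ksh m4 boost-regex nftables python3.12"
    done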

Step 4 – Download the Storage Scale 6.0.0.0 installer from IBM Fix Central

Download the Storage Scale Data Management Edition 6.0.0.0 for Power LE Linux binaries from the IBM Fix Central website by using the following steps.

  1. Using your favourite web browser, open the following URL:

    IBM Support: Fix Central

  2. Click the "Data Management" link.
  3. Click the "Storage_Scale_Data_Management-6.0.0.0-ppc64LE-Linux" link.
  4. Log in to IBM using your IBM ID.
  5. Select the Download using your browser (HTTPS) option.
  6. Clear the Include prerequisites and co-requisite fixes checkbox.
  7. Log in with your IBM ID.
  8. Select the Download using your browser (HTTPS) option and click Continue.
  9. In the View and accept terms pop-up window, scroll to the end and click I agree.
  10. In the main browser window, scroll down, right-click the Storage_Scale_Data_Management-6.0.0.0-ppc64LE-Linux-install hyperlink, and click Copy Link. Figure 9
  11. Verify that you get a link similar to the following:

    https://delivery04-mul.dhe.ibm.com/sdfdl/v2/sar/CM/SS/0dfzd/0/Xa.2/Xb.jusyLTSp44S0BvmHVnfeaiSQc1ZrnydR37PjgK-ibn7STx3kEaLiNUkqoso/Xc.CM/SS/0dfzd/0/Storage_Scale_Data_Management-6.0.0.0-ppc64LE-Linux-install/Xd./Xf.Lpr./Xg.13626089/Xi.habanero/XY.habanero/XZ.PWf8zzcC0fM3HaL-KTRcrwLb8T3scvmn/Storage_Scale_Data_Management-6.0.0.0-ppc64LE-Linux-install

  12. In your command line window on the first RHEL node, use the wget command with the link to download the Storage Scale binary file.

    wget <put your download URL here>
  13. Note that the download can take up to 5 minutes as the file is approximately 1.2 GB in size.

    Figure 10
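
Before running the installer in the next step, you can confirm that the download completed and that the file size looks plausible (roughly 1.2 GB). Computing a checksum is optional; compare it against the checksum published on Fix Central if one is available.

    ls -lh Storage_Scale_Data_Management-6.0.0.0-ppc64LE-Linux-install
    sha256sum Storage_Scale_Data_Management-6.0.0.0-ppc64LE-Linux-install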

Step 5 – Install the Storage Scale 6.0.0.0 binary files

  1. Log in as the root user to the first node, for example, p1224-storage1.
  2. Run the following command to install the Storage Scale binary files on the node. Enter “1” to accept the license agreement when asked.

    chmod u+x Storage_Scale_Data_Management-6.0.0.0-ppc64LE-Linux-install
    ./Storage_Scale_Data_Management-6.0.0.0-ppc64LE-Linux-install
  3. Verify that the Storage Scale binary files have been installed on the node.

    rpm -qip /usr/lpp/mmfs/6.0.0.0/gpfs_rpms/gpfs.base*.rpm

    Sample output from the command: Figure 11
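
The self-extracting installer places its content under /usr/lpp/mmfs/6.0.0.0/. As an additional check, list that directory; it should contain, among others, the gpfs_rpms and ansible-toolkit directories that are used in the following steps.

    ls /usr/lpp/mmfs/6.0.0.0/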

Step 6 – Create a two-node Storage Scale 6.0.0.0 cluster on the RHEL 9.6 VMs

In this step, we will create a two-node Storage Scale cluster on the two RHEL VMs. We first set up the installer node on the first node, then create the Storage Scale cluster on both nodes, and finally create a Storage Scale file system on that cluster.

  1. Log in as the root user into the first node, for example, p1224-storage1.
  2. Run the screen command.

    screen
  3. Set up the installer node.

    cd /usr/lpp/mmfs/6.0.0.0/ansible-toolkit
    ./spectrumscale setup -s <IP of private network interface of first node>

    Sample output: Figure 6.3

  4. Use the multipath command to find out the device IDs of your shared disks. In our test system, the VM has the following disks:

    • disk1 is the rootvg disk for the RHEL operating system, has a size of 50 GB, and is bound to the device dm-0.
    • disk2 is the first shared disk of 100 GB size and is bound to the device dm-1.
    • disk3 is the second shared disk of 100 GB size and is bound to the device dm-5.

    Caution: Make sure that you choose the right dm-nnn IDs for the following step in order to not accidentally overwrite the rootvg disk and thus the partition for the RHEL operating system on that VM.

    multipath -ll 2>/dev/null | grep "dm-\|size"

    Sample output:

    Figure 12

  5. List the /dev/mapper directory to find out the name of the device that is being used for the rootvg.

    ls /dev/mapper

    Sample output:

    Figure 12

    In our sample, the device 3600507680c818022f00000000001596b (mpathc) has three partitions on it (c1, c2, and c3), while the other disks 3600507680c818022f000000000015970 (mpathd) and 3600507680c818022f00000000001596f (mpathe) have no partitions.

  6. Run the df command to confirm by another means that the device 3600507680c818022f00000000001596b is the one used for the rootvg.

    df

    Sample output: Figure 12

    Here we see that the partition rhel-root is used for the root file system of the RHEL VM.

  7. Use the define_cluster.sh script to define the topology of the two-node Storage Scale cluster. Edit the script with your preferred editor and adapt the contents of the variables NODE_1, NODE_2, DISK_1, DISK_2, and CLUSTER_NAME to your environment. (A sketch of what such a script might contain is shown at the end of this step.)

    # short hostnames of the two nodes in the cluster
    NODE_1="p1224-storage1"
    NODE_2="p1224-storage2"
    # device names dm-x of the shared disks as printed by: "multipath -ll 2>/dev/null | grep 'dm-\|size'"
    
    DISK_1="dm-1"
    DISK_2="dm-6"
    # cluster name
    CLUSTER_NAME="gpfs-tz-p10-cluster"
  8. Run the modified define_cluster.sh script to define the topology of the Storage Scale cluster.

    chmod u+x define_cluster.sh
    ./define_cluster.sh | tee define_cluster.out

    Sample output:

    Figure 13


  9. Run the following commands to disable callhome and to perform an installation precheck for Storage Scale. Before continuing to the next step, verify that the installation precheck command reports a Pre-check successful for install message.

    cd /usr/lpp/mmfs/6.0.0.0/ansible-toolkit
    # disable call home
    ./spectrumscale callhome disable
    # list node configuration
    ./spectrumscale node list
    # run install precheck
    ./spectrumscale install --precheck

    Sample output: Figure 14

  10. Run the spectrumscale install command. This will create the Storage Scale cluster together with a Storage Scale gpfs0 file system on the two nodes and the two shared disks. Also, include the date and time commands to measure the duration of the installation of the cluster. The command will take up to 10 minutes to complete.

    cd /usr/lpp/mmfs/6.0.0.0/ansible-toolkit
    date
    time ./spectrumscale install
    date

    Sample output:

    Figure 15


  11. Add the tiebreaker disk. This step is necessary to achieve quorum on a two-node Storage Scale cluster.

    /usr/lpp/mmfs/bin/mmchconfig tiebreakerDisks=nsd1
  12. Edit the ~/.bash_profile file and append the entry /usr/lpp/mmfs/bin to the PATH variable.

    PATH=$PATH:$HOME/bin:/usr/lpp/mmfs/bin
  13. Source the ~/.bash_profile file using the following command:

    source ~/.bash_profile

    Run the mmlscluster, mmgetstate -a, and df -h commands to list the Storage Scale cluster definition, to verify that all cluster nodes are active, and to validate that the gpfs0 file system has been successfully mounted under /ibm/gpfs0.

    mmlscluster; echo; mmgetstate -a; echo; df -h

    Sample output: Figure 16
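
For reference, the define_cluster.sh script used in step 7 is not reproduced in full in this tutorial. The following is a minimal sketch of what such a script might contain, built from the spectrumscale install toolkit commands and the variables shown in step 7; treat it as an illustration and verify the options against the install toolkit documentation for your release before using it.

    #!/bin/bash
    # Variables as edited in step 7
    NODE_1="p1224-storage1"
    NODE_2="p1224-storage2"
    DISK_1="dm-1"
    DISK_2="dm-6"
    CLUSTER_NAME="gpfs-tz-p10-cluster"

    cd /usr/lpp/mmfs/6.0.0.0/ansible-toolkit
    # Name the cluster
    ./spectrumscale config gpfs -c ${CLUSTER_NAME}
    # Add both nodes as admin, quorum, manager, and NSD server nodes;
    # make the first node the GUI node
    ./spectrumscale node add ${NODE_1} -a -g -q -m -n
    ./spectrumscale node add ${NODE_2} -a -q -m -n
    # Define the two shared disks as NSDs backing the gpfs0 file system
    ./spectrumscale nsd add -p ${NODE_1} -s ${NODE_2} -fs gpfs0 /dev/${DISK_1}
    ./spectrumscale nsd add -p ${NODE_2} -s ${NODE_1} -fs gpfs0 /dev/${DISK_2}
    # Show the resulting node and NSD configuration
    ./spectrumscale node list
    ./spectrumscale nsd list

With a definition like this in place, the ./spectrumscale install command in step 10 creates the cluster and the gpfs0 file system, which the toolkit mounts under /ibm/gpfs0 by default.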

Step 7 – Prepare the Storage Scale 6.0.0.0 cluster for Storage Scale Container Native 6.0.0.0

  1. Create a new GUI user for Storage Scale container native with the username cnss_storage_gui_user and the password cnss_storage_gui_password.

    /usr/lpp/mmfs/gui/cli/mkuser cnss_storage_gui_user -p cnss_storage_gui_password -g ContainerOperator --disablePasswordExpiry 1

    Sample output:

    Figure 17

  2. Create a new GUI user group CsiAdmin and a new GUI user for Storage Scale CSI with the username csi-storage-gui-user and the password csi-storage-gui-password.

    /usr/lpp/mmfs/gui/cli/mkusergrp CsiAdmin --role csiadmin
    /usr/lpp/mmfs/gui/cli/mkuser csi-storage-gui-user -p csi-storage-gui-password -g CsiAdmin --disablePasswordExpiry 1

    Sample output:

    Figure 18

  3. Run the following commands to enable quota on the gpfs0 file system, change the SELinux setting, and enable the filesetdf option (an optional check to verify these settings is shown at the end of this step):

    # enable quota on filesystem used by csi
    mmchfs gpfs0 -Q yes
    # enable quota for root user
    mmchconfig enforcefilesetQuotaOnRoot=yes -i
    # ensure selinux parameter is set to yes
    mmchconfig controlSetxattrImmutableSELinux=yes -i
    # enable filesetdf
    mmchfs gpfs0 --filesetdf

    Sample output:

    Figure 18
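
To double-check that the settings above took effect, you can query the file system and cluster configuration on the storage cluster. This is an optional verification using standard Storage Scale commands.

    # Show the quota and filesetdf settings of the gpfs0 file system
    mmlsfs gpfs0 | grep -i -e quota -e filesetdf
    # Show the changed cluster-wide configuration values
    mmlsconfig | grep -i -e enforcefilesetQuotaOnRoot -e controlSetxattrImmutableSELinux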

Step 8 – Prepare the OpenShift 4.20 cluster for Storage Scale Container Native 6.0.0.0

  1. Log in as the root user to the bastion host of your OpenShift 4.20 cluster on IBM Power Virtual Server.
  2. Install the wget and unzip utilities.

    yum -y install wget unzip
  3. Download the Storage Scale Container Native 6.0.0.0 deployment code from GitHub.

    wget https://github.com/IBM/ibm-spectrum-scale-container-native/archive/refs/heads/v6.0.0.x.zip
  4. Extract the archive.

    unzip v6.0.0.x.zip
  5. Change to the directory ibm-spectrum-scale-container-native-6.0.0.x.

    cd ibm-spectrum-scale-container-native-6.0.0.x
  6. Log in to the OpenShift cluster as the kubeadmin user. Replace ClusterName and Domain with the values from your OpenShift cluster.

    oc login https://api.ClusterName.Domain:6443 -u kubeadmin
  7. Apply the Machine Config Operator (MCO) settings for Storage Scale Container Native 6.0.0.0 for OpenShift 4.20 on IBM Power. Note that applying the MCO to update the configuration will trigger a reboot of all the worker nodes in your OpenShift cluster.

    oc apply -f generated/scale/mco/mco.yaml
  8. Check the status of the update. Verify that the oc get mcp command shows UPDATED=True, UPDATING=False, and DEGRADED=False for the worker pool. (Alternatively, you can wait for the update to complete by using the oc wait sketch at the end of this step.)

    oc get mcp
  9. Validate that the kernel-devel package has been successfully installed on all worker nodes. The number of lines in the command output should match the number of worker nodes in your cluster. You might need to rerun the command after some time to get the complete output.

    oc get nodes -lnode-role.kubernetes.io/worker= \
    -ojsonpath="{range .items[*]}{.metadata.name}{'\n'}{end}" |\
    xargs -I{} oc debug node/{} -T -- chroot /host sh -c "rpm -q kernel-devel" 2>/dev/null

    Sample output:

    kernel-devel-5.14.0-570.62.1.el9_6.ppc64le
    kernel-devel-5.14.0-570.62.1.el9_6.ppc64le
    kernel-devel-5.14.0-570.62.1.el9_6.ppc64le
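
Instead of polling oc get mcp manually in step 8, you can block until the worker pool reports that it has finished updating. The following is a sketch; the 90-minute timeout is an assumption, because worker reboots on IBM Power Virtual Server can take a while, and only rely on it after the update has actually started (UPDATING=True).

    # Wait until the worker MachineConfigPool reports Updated=True
    oc wait mcp/worker --for=condition=Updated=True --timeout=90m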

Step 9 – Install Storage Scale Container Native 6.0.0.0 on the OpenShift 4.20 cluster

  1. Make sure that you are logged in as the root user on the bastion host of the OpenShift cluster and that you are logged in as the kubeadmin user at the OpenShift cluster.

    oc whoami
  2. Change to the directory where you have extracted the Spectrum Scale Container Native 6.0.0.0 deployment files from Step 8.

    cd /root/ibm-spectrum-scale-container-native-6.0.0.x
  3. Create namespaces for Spectrum Scale container native, Spectrum Scale container native CSI, Spectrum Scale container native operator, and Spectrum Scale DNS.

    oc create namespace ibm-spectrum-scale
    oc create namespace ibm-spectrum-scale-csi
    oc create namespace ibm-spectrum-scale-operator
    oc create namespace ibm-spectrum-scale-dns
  4. Make a note of the fully qualified domain name (FQDN) of the GUI node of your Spectrum Scale cluster on the RHEL VMs that you created earlier. This is the FQDN of the first node we created, for example p1224-storage1.p1224.cecc.ihost.com.
  5. Make a backup copy of the file /var/named/zonefile.db on the bastion host, edit it, and add an additional line at the end of the file just before the line that contains the EOF string. The line should contain the short hostname and the IP address of the private network interface of your Storage Scale GUI node using the following format:

    <short hostname of GUI node>    IN      A       <IP address of private interface of GUI node>
    ;EOF

    Sample:

    ; Create an entry for the GUI server of the external Storage Scale cluster
    p1224-storage1          IN      A       129.40.59.118
    ;
    ;EOF
  6. Make a backup copy of the /var/named/reverse.db file on the bastion host, edit it, and add an additional line at the end of the file just before the line that contains the EOF string. The new line should include the last octet of the IP address of the GUI node's private network interface and the FQDN hostname of the GUI node, following this format. Note that there is a trailing period (“.”) at the end of the new line.

    <last octet of IP address of private interface of GUI node>      IN      PTR     <FQDN hostname of GUI node>.
    ;
    ;EOF

    Sample:

    118     IN      PTR     p1224-storage1.p1224.cecc.ihost.com.
    ;
    ;EOF
  7. Restart the named service and verify that it is running.

    systemctl restart named
    systemctl status named

    Sample output:

    Figure 19


  8. Create the secrets for Storage Scale container native and CSI. Replace REMOTE_SSCALE_GUI_NODE with the FQDN hostname of the GUI node of your Spectrum Scale cluster on the RHEL VMs. Then run the following commands:

    REMOTE_SSCALE_GUI_NODE="<replace with FQDN of the GUI node of your remote Spectrum Scale storage cluster>"
    oc create secret generic cnsa-remote-mount-storage-cluster-1 --from-literal=username='cnss_storage_gui_user' \
     --from-literal=password='cnss_storage_gui_password' -n ibm-spectrum-scale
    oc create configmap cacert-storage-cluster-1 \
     --from-literal=storage-cluster-1.crt="$(openssl s_client -showcerts -connect ${REMOTE_SSCALE_GUI_NODE}:443 </dev/null 2>/dev/null | openssl x509 -outform PEM)" \
     -n ibm-spectrum-scale
    
    oc create secret generic csi-remote-mount-storage-cluster-1 --from-literal=username=csi-storage-gui-user \
     --from-literal=password=csi-storage-gui-password -n ibm-spectrum-scale-csi
    
    oc label secret csi-remote-mount-storage-cluster-1 product=ibm-spectrum-scale-csi -n ibm-spectrum-scale-csi
  9. Add the pull secret for the Storage Scale container native software. The IBM Container Registry (ICR) entitlement key can be obtained from this URL: https://myibm.ibm.com/products-services/containerlibrary
    #!/bin/bash
    ENTITLEMENT_KEY=<REPLACE WITH YOUR IBM ENTITLEMENT KEY>
    oc create secret docker-registry ibm-entitlement-key \
    --docker-server=cp.icr.io \
    --docker-username cp \
    --docker-password ${ENTITLEMENT_KEY} -n ibm-spectrum-scale
    oc create secret docker-registry ibm-entitlement-key \
    --docker-server=cp.icr.io \
    --docker-username cp \
    --docker-password ${ENTITLEMENT_KEY} -n ibm-spectrum-scale-dns
    oc create secret docker-registry ibm-entitlement-key \
    --docker-server=cp.icr.io \
    --docker-username cp \
    --docker-password ${ENTITLEMENT_KEY} -n ibm-spectrum-scale-csi
    oc create secret docker-registry ibm-entitlement-key \
    --docker-server=cp.icr.io \
    --docker-username cp \
    --docker-password ${ENTITLEMENT_KEY} -n ibm-spectrum-scale-operator
  10. Deploy the Storage Scale container native operator.

    oc project default
    oc apply -f generated/scale/install.yaml
  11. Verify that the operator pod is running. You should see one ibm-spectrum-scale-controller-manager-zzz pod with READY 1/1 and STATUS Running.

    oc get pods -n ibm-spectrum-scale-operator
  12. Monitor the log file output from the operator pod by running the following command:

    oc logs $(oc get pods -n ibm-spectrum-scale-operator -o name) -n ibm-spectrum-scale-operator -f
  13. Label the worker nodes by running the following command:

    oc label nodes -lnode-role.kubernetes.io/worker= scale.spectrum.ibm.com/daemon-selector=

    Sample output:

    Figure 19

  14. We now need to edit the configuration file that creates the Storage Scale container native custom resource to adapt the configuration to our environment. Edit the generated/scale/cr/cluster/cluster.yaml file using your preferred editor.

  15. Modify the hostAliases: section. The hostAliases section contains the list of hostnames of the remote Storage Scale cluster nodes.

    Note: Make sure to indent the entries properly as this is a YAML file:

    Find the following lines:

    # hostAliases:
    #   - hostname: example.com
    #     ip: 10.0.0.1

    Replace with the following lines:

    hostAliases:
    - hostname: <FQDN of first node of remote Spectrum Scale storage cluster>
      ip: <IP address of private network of first node>
    - hostname: <FQDN of second node of remote Spectrum Scale storage cluster>
      ip: <IP address of private network of second node>

    For example:

    hostAliases:
    - hostname: p1224-storage1.p1224.cecc.ihost.com
      ip: 129.40.59.118
    - hostname: p1224-storage2.p1224.cecc.ihost.com
      ip: 129.40.59.119
  16. Modify the license: section to accept the license for Storage Scale Container Native.

    Find the following lines:

    license:
     accept: false
     license: data-access

    Replace with the following lines:

    license:
      accept: true
      license: data-management
  17. Save all the changes to the file. Refer to a sample file.
  18. Create the Storage Scale container native custom resource. This will deploy Storage Scale container native on your OpenShift cluster.
    oc apply -f generated/scale/cr/cluster/cluster.yaml
  19. Verify that the Storage Scale container native pods are up and running. You should see an output as shown in the following example. Note that for each of your OpenShift worker nodes, you will see one worker-x pod.
    oc get pods -n ibm-spectrum-scale
    Sample output:
    NAME                               READY   STATUS    RESTARTS   AGE
    ibm-spectrum-scale-gui-0           4/4     Running   0          6m44s
    ibm-spectrum-scale-gui-1           4/4     Running   0          98s
    ibm-spectrum-scale-pmcollector-0   2/2     Running   0          6m14s
    ibm-spectrum-scale-pmcollector-1   2/2     Running   0          4m6s
    worker-0                           2/2     Running   0          6m39s
    worker-1                           2/2     Running   0          6m39s
    worker-2                           2/2     Running   0          6m39s
  20. Verify that the cluster CR has been created successfully.

    oc get cluster ibm-spectrum-scale -o yaml

    Sample output:

    Figure 21

  21. We now need to edit the configuration file that creates the Storage Scale remote cluster custom resource to adapt the configuration to our environment. Edit the generated/scale/cr/remotecluster/remotecluster.yaml file using your preferred editor.
  22. Comment out the contactNodes: section. Find the following lines:
    contactNodes:
    - storagecluster1node1
    - storagecluster1node2
    Replace with the following lines:
    # contactNodes:
    # - storagecluster1node1
    # - storagecluster1node2
  23. Modify the gui section. In the gui section, the FQDN of the GUI node of the remote Storage Scale cluster is specified. The GUI node is contacted by Storage Scale Container Native when provisioning, for example, new persistent volumes (PVs).

    Find the following lines:

    gui:
      cacert: cacert-storage-cluster-1
      # This is the secret that contains the CSIAdmin user
      # credentials in the ibm-spectrum-scale-csi namespace.
      csiSecretName: csi-remote-mount-storage-cluster-1
      # hosts are the GUI endpoints from the storage cluster. Multiple
      # hosts (up to 3) can be specified to ensure high availability of GUI.
      hosts:
      - guihost1.example.com
      # - guihost2.example.com
      # - guihost3.example.com
      insecureSkipVerify: false
      # This is the secret that contains the ContainerOperator user
      # credentials in the ibm-spectrum-scale namespace.
      secretName: cnsa-remote-mount-storage-cluster-1

    Replace with the following lines:

    gui:
      #cacert: cacert-storage-cluster-1
      # This is the secret that contains the CSIAdmin user
      # credentials in the ibm-spectrum-scale-csi namespace.
      csiSecretName: csi-remote-mount-storage-cluster-1
      # hosts are the GUI endpoints from the storage cluster. Multiple
      # hosts (up to 3) can be specified to ensure high availability of GUI.
      hosts:
      - <FQDN of the GUI node of the remote Storage Scale storage cluster>
      # - guihost2.example.com
      # - guihost3.example.com
      insecureSkipVerify: true
      # This is the secret that contains the ContainerOperator user
      # credentials in the ibm-spectrum-scale namespace.
      secretName: cnsa-remote-mount-storage-cluster-1

    For example:

    gui:
      #cacert: cacert-storage-cluster-1
      csiSecretName: csi-remote-mount-storage-cluster-1
      hosts:
      - p1224-storage1.p1224.cecc.ihost.com
      insecureSkipVerify: true
      secretName: cnsa-remote-mount-storage-cluster-1
  24. Create the remote cluster custom resource.
    oc apply -f generated/scale/cr/remotecluster/remotecluster.yaml
  25. Verify that the remote cluster CR has been successfully created.

    oc get remotecluster -n ibm-spectrum-scale

    Sample output:

    Figure 9.26

  26. We now need to edit the configuration file that creates the Storage Scale remote filesystem custom resource to adapt the configuration to our environment. Edit the generated/scale/cr/filesystem/filesystem.remote.yaml file using your preferred editor.

    Find the following lines:

    remote:
      cluster: remotecluster-sample
      fs: fs1

    Replace with the following lines:

    remote:
      cluster: remotecluster-sample
      fs: gpfs0
  27. Create the remote filesystem custom resource.

    oc apply -f generated/scale/cr/filesystem/filesystem.remote.yaml
  28. Verify that the remote filesystem CR has been successfully created.

    oc get filesystem -n ibm-spectrum-scale

    Sample output:

    Figure 9.29

  29. Label the worker nodes for CSI integration.

    oc label nodes -l node-role.kubernetes.io/worker= scale=true

    Sample output:

    Figure 9.30

  30. Verify that the Spectrum Scale CSI pods are up and running. You should see an output as shown in the following example. Note that for each worker node of your OpenShift cluster, you will see one ibm-spectrum-scale-csi-zzzzz pod.

    oc get pods -n ibm-spectrum-scale-csi

    Sample output:

    Figure 9.31

  31. On the first RHEL VM from your external Storage Scale storage cluster, run the mmlscluster command to find out the GPFS cluster ID.

    mmlscluster

    Sample output: Figure 9.32

    Enter the following command to filter out the cluster ID:

    mmlscluster | grep 'cluster id' | awk '{print $4}'

    Sample output: 9788464596605235060

  32. On the bastion node, create a file storage_class_fileset.yaml that defines the new storage class ibm-spectrum-scale-csi-fileset for Storage Scale container native. Replace the clusterId value with the cluster ID obtained in the previous step.

    cat <<EOF > storage_class_fileset.yaml
    ---
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
     name: ibm-spectrum-scale-csi-fileset
    provisioner: spectrumscale.csi.ibm.com
    parameters:
     permissions: "777"
     volBackendFs: remote-sample
     clusterId: "<replace with your cluster ID>"
    reclaimPolicy: Delete
    EOF

    Sample content of a storage_class_fileset.yaml file:

    ---
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
     name: ibm-spectrum-scale-csi-fileset
    provisioner: spectrumscale.csi.ibm.com
    parameters:
     permissions: "777"
     volBackendFs: remote-sample
     clusterId: "9788464596605235060"
    reclaimPolicy: Delete
  33. Create the new storage class on your OpenShift cluster.

    oc apply -f storage_class_fileset.yaml
  34. Verify that the new storage class has been created.
    oc get sc
    Sample output: Figure 9.35
  35. Create a new persistent volume claim (PVC) named ibm-spectrum-scale-pvc that uses the storage class ibm-spectrum-scale-csi-fileset by entering the following command.

    oc project default
    cat << EOF | oc apply -f -
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: ibm-spectrum-scale-pvc
    spec:
      storageClassName: ibm-spectrum-scale-csi-fileset
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 1Gi
    EOF
  36. Verify that the new PVC has been created and that its status is Bound. (As a final check, a test pod that mounts the PVC is sketched at the end of this step.)

    oc get pvc

    Sample output: Figure 9.37
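
As a final smoke test, you can mount the new PVC into a small pod and write a file to it. The following sketch uses an arbitrary pod name and the Red Hat UBI minimal image, which are not part of the tutorial; any image that provides a shell works.

    cat << EOF | oc apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: scale-pvc-test
      namespace: default
    spec:
      containers:
      - name: writer
        image: registry.access.redhat.com/ubi9/ubi-minimal:latest
        command: ["sh", "-c", "echo hello from Storage Scale > /data/hello.txt && sleep 3600"]
        volumeMounts:
        - name: scale-vol
          mountPath: /data
      volumes:
      - name: scale-vol
        persistentVolumeClaim:
          claimName: ibm-spectrum-scale-pvc
    EOF

    # Verify that the pod starts and that the file was written to the Storage Scale file system
    oc get pod scale-pvc-test
    oc exec scale-pvc-test -- cat /data/hello.txt

When you are done, clean up the test pod with oc delete pod scale-pvc-test.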

Summary

In this tutorial, you have learned how to set up Storage Scale Container Native 6.0.0.0 on OpenShift 4.20 on IBM Power Virtual Servers and how to provision new PVCs on a remote Storage Scale 6.0.0.0 cluster running on two RHEL 9.6 VMs on IBM Power Virtual Servers.

Acknowledgments

The authors would like to thank Jean Midot for providing the infrastructure to set up and verify Storage Scale Container Native Storage Access 6.0.0.0 on Red Hat OpenShift Container Platform 4.20.

Take the next step

Join the Power Developer eXchange Community (PDeX). PDeX is a place for anyone interested in developing open source apps on IBM Power. Whether you're new to Power or a seasoned expert, we invite you to join and begin exchanging ideas, sharing experiences, and collaborating with other members today!