Install IBM Cloud Pak for Data 5.2 on Red Hat OpenShift Container Platform 4.18 on IBM Power Virtual Server - IBM Developer
IBM Cloud Pak for Data unifies and simplifies the collection, organization, and analysis of data. Enterprises can turn data into insights through an integrated cloud-native architecture. IBM Cloud Pak for Data is extensible and can be customized to a client's unique data and AI landscapes through an integrated catalogue of IBM, open source, and third-party microservices add-ons.
This tutorial shows how to perform an online installation of IBM Cloud Pak for Data 5.2 on Red Hat OpenShift Container Platform 4.18 running on IBM Power Virtual Server and some of the services that are needed to use the IBM Industry Accelerators.
Prerequisites
This tutorial assumes that you are familiar with the Red Hat OpenShift Container Platform 4.18 environment on IBM Power Virtual Server. It is assumed that you have it already installed, you have access to it, and have the credentials of an OpenShift cluster administrator (kubeadmin). You must be familiar with Linux command line and have at least a basic understanding of Red Hat OpenShift.
For this tutorial, we assume that the OpenShift 4.18 environment on IBM Power Virtual Server consists of three worker nodes, each with two physical IBM Power cores (that is, 16 vCPUs at the Kubernetes level) and 64 GB RAM.
Also, you must have created a local repository on persistent storage and have a Network File System (NFS) storage class where the NFS export has the no_root_squash property set.
You need to have the wget and oc clients already installed and available in your PATH.
Estimated time
It is expected to take around 2 to 3 hours to complete the installation of IBM Cloud Pak for Data 5.2 on IBM Power Virtual Server. This lengthy duration is because we need to install the software from internet repositories.
Steps
Installation of IBM Cloud Pak for Data 5.2 on IBM Power Virtual Server includes the following steps:
Install the Linux screen utility that will maintain your session in case your internet connection drops and will make it recoverable when you reconnect.
# on a RHEL 8 system run:
yum -y install https://dl.fedoraproject.org/pub/epel/8/Everything/ppc64le/Packages/s/screen-4.6.2-12.el8.ppc64le.rpm

# on a RHEL 9 system run:
yum -y install https://dl.fedoraproject.org/pub/epel/9/Everything/ppc64le/Packages/s/screen-4.8.0-6.el9.ppc64le.rpm
Install the Linux podman package, which allows you to run containers on the bastion host. The Cloud Pak for Data CLI (cpd-cli), which we will use later in this tutorial to install the IBM Cloud Pak for Data software, runs inside a container on the bastion host.
yum -y install podman
Verify that the no_root_squash parameter is set for the NFS share. Change the parameter in the file /etc/exports to no_root_squash if needed.
sed -i 's/,root_squash/,no_root_squash/g' /etc/exports
exportfs -a
exportfs -v
Create a new user cp4d on the bastion host. We will use this user for the rest of the installation process.
useradd cp4d
Change to the cp4d user.
su - cp4d
Download the cpd-cli utility from GitHub:
wget https://github.com/IBM/cpd-cli/releases/download/v14.2.0/cpd-cli-ppc64le-EE-14.2.0.tgz
tar -xzvf cpd-cli-ppc64le-EE-14.2.0.tgz
mv cpd-cli-ppc64le-EE-14.2.0*/* .
rm -rf cpd-cli-ppc64le-EE-14.2.0*
rm -f cpd-cli-ppc64le-EE-14.2.0.tgz
./cpd-cli version
After running the commands, you should get an output similar to the following sample:
Retrieve the OpenShift Container Platform API URL. Log in to the OpenShift web console as kubeadmin user. Then click the Copy login command under the “kube:admin” widget at the upper-right corner. Click Display token. Under “Log in with this token”, you should see an entry similar to this:
Make a note of the OpenShift Container Platform API URL value (the value after the --server= parameter) and store it in the PVS_API_URL environment variable:
export PVS_API_URL=<replace with the value of the OCP API URL>

# for example
export PVS_API_URL=https://api.itzpvs-218018.cecc.ihost.com:6443
Retrieve the kubeadmin password and store it in the PVS_CLUSTER_ADMIN_PWD environment variable.
export PVS_CLUSTER_ADMIN_PWD=<replace with your cluster admin password>

# for example
export PVS_CLUSTER_ADMIN_PWD=qD8nz-aDQxj-rxeVB-D8S3f

Store your IBM entitlement key in the PVS_IBM_ENTITLEMENT_KEY environment variable.

export PVS_IBM_ENTITLEMENT_KEY=<replace with the value of your IBM entitlement API key>

# for example:
export PVS_IBM_ENTITLEMENT_KEY=eyJ0eXAiOiJKV1QiLCJxxx
Define the PVS_API_HOST environment variable:
export PVS_API_HOST=<replace with the hostname of the API server>

# for example
export PVS_API_HOST=api.itzpvs-218018.cecc.ihost.com
Verify all four environment variables.
set | grep PVS
# you should see an output similar to the following:
PVS_API_HOST=api.itzpvs-218018.cecc.ihost.com
PVS_API_URL=https://api.itzpvs-218018.cecc.ihost.com:6443
PVS_CLUSTER_ADMIN_PWD=qD8nz-aDQxj-rxeVB-D8S3f
PVS_IBM_ENTITLEMENT_KEY=eyJ0eXAiOiJKV1QiLCJxxx
Validate that you can successfully log in to your OpenShift cluster using the oc command.
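The login command itself is not preserved in this snapshot. A typical invocation using the environment variables defined above would look like the following sketch (the exact flags depend on your cluster setup):

```shell
# log in to the OpenShift cluster as kubeadmin,
# using the variables set in the previous steps
oc login ${PVS_API_URL} -u kubeadmin -p ${PVS_CLUSTER_ADMIN_PWD}
```

If your cluster presents a self-signed certificate, you may additionally need the --insecure-skip-tls-verify=true flag.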
# you should see an output like the following:
Login successful.
You have access to 71 projects, the list has been suppressed. You can list all projects with 'oc projects'
Using project "default".
Welcome! See 'oc help' to get started.
Verify that the default storage class is nfs-storage-provisioner.
oc get sc
# you should see an output like the following:
NAME                                PROVISIONER   RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
nfs-storage-provisioner (default)   nfs-storage   Delete          Immediate           false                  2d1h
Generate a cpd_vars.sh environment variables file by running the following command:
tee cpd_vars.sh <<EOF
#===========================================================
# Cloud Pak for Data installation variables
#===========================================================

# ----------------------------------------------------------
# Cluster
# ----------------------------------------------------------
export OCP_URL="$PVS_API_HOST:6443"
export OPENSHIFT_TYPE="self-managed"
export OCP_USERNAME="kubeadmin"
export OCP_PASSWORD="$PVS_CLUSTER_ADMIN_PWD"
export OCP_TOKEN="$(oc whoami -t)"

# ----------------------------------------------------------
# Projects
# ----------------------------------------------------------
export PROJECT_CERT_MANAGER="ibm-cert-manager"
export PROJECT_LICENSE_SERVICE="ibm-licensing"
export PROJECT_SCHEDULING_SERVICE="cpd-scheduler"
export PROJECT_CPD_INST_OPERATORS="cpd-operators"
export PROJECT_CPD_INST_OPERANDS="cpd-instance"

# ----------------------------------------------------------
# Storage
# ----------------------------------------------------------
export STG_CLASS_BLOCK=nfs-storage-provisioner
export STG_CLASS_FILE=nfs-storage-provisioner

# ----------------------------------------------------------
# IBM Entitled Registry
# ----------------------------------------------------------
export IBM_ENTITLEMENT_KEY=$PVS_IBM_ENTITLEMENT_KEY

# ----------------------------------------------------------
# Cloud Pak for Data version
# ----------------------------------------------------------
export VERSION=5.2.0

# ----------------------------------------------------------
# Components
# ----------------------------------------------------------
export COMPONENTS=ws,wml
EOF
Verify your cpd_vars.sh file. The file should look similar to the following example:
cat cpd_vars.sh
# for example:
#===========================================================
# Cloud Pak for Data installation variables
#===========================================================

# ----------------------------------------------------------
# Cluster
# ----------------------------------------------------------
export OCP_URL="api.itzpvs-218018.cecc.ihost.com:6443"
export OPENSHIFT_TYPE="self-managed"
export OCP_USERNAME="kubeadmin"
export OCP_PASSWORD="qD8nz-aDQxj-rxeVB-D8S3f"
export OCP_TOKEN="sha256~m7yyLOJtsgK9ogXRI4Fwxiaw5r4Z7VrHprRUotp8piQ"

# ----------------------------------------------------------
# Projects
# ----------------------------------------------------------
export PROJECT_CERT_MANAGER="ibm-cert-manager"
export PROJECT_LICENSE_SERVICE="ibm-licensing"
export PROJECT_SCHEDULING_SERVICE="cpd-scheduler"
export PROJECT_CPD_INST_OPERATORS="cpd-operators"
export PROJECT_CPD_INST_OPERANDS="cpd-instance"

# ----------------------------------------------------------
# Storage
# ----------------------------------------------------------
export STG_CLASS_BLOCK=nfs-storage-provisioner
export STG_CLASS_FILE=nfs-storage-provisioner

# ----------------------------------------------------------
# IBM Entitled Registry
# ----------------------------------------------------------
export IBM_ENTITLEMENT_KEY=eyJ0eXAiOiJKV1QiLCJxxx

# ----------------------------------------------------------
# Cloud Pak for Data version
# ----------------------------------------------------------
export VERSION=5.2.0

# ----------------------------------------------------------
# Components
# ----------------------------------------------------------
export COMPONENTS=ws,wml
Step 2 – Prepare the OpenShift cluster
As cp4d user, source the cpd_vars.sh file.
source cpd_vars.sh
Run the cpd-cli manage login command. Note that running the command for the first time takes some time to complete, as the container image that is used by the cpd-cli manage login command will be downloaded from the internet.
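The command itself is not shown in this snapshot. Per the cpd-cli documentation, a typical login using the variables from cpd_vars.sh looks like the following; verify the flags against the documentation for your cpd-cli version:

```shell
# log in to the OpenShift cluster through the cpd-cli olm-utils container
cpd-cli manage login-to-ocp \
  --username=${OCP_USERNAME} \
  --password=${OCP_PASSWORD} \
  --server=${OCP_URL}
```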
Successful completion of the cpd-cli manage login command would look like the following sample:
[…]
Using project "default" on server "https://api.itzpvs-218018.cecc.ihost.com:6443".
[SUCCESS] 2025-05-22T13:08:01.093265Z You may find output and logs in the /home/cp4d/cpd-cli-workspace/olm-utils-workspace/work directory.
[SUCCESS] 2025-05-22T13:08:01.093327Z The login-to-ocp command ran successfully.
Add your IBM entitlement key to the global pull secret of your OpenShift cluster.
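The command for this step is not preserved in this snapshot. Per the cpd-cli documentation, it typically looks like this:

```shell
# add the IBM entitlement key to the cluster's global pull secret,
# so that worker nodes can pull images from the IBM Entitled Registry
cpd-cli manage add-icr-cred-to-global-pull-secret \
  --entitled_registry_key=${IBM_ENTITLEMENT_KEY}
```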
[…]
Now using project "cpd-instance" on server "https://api.itzpvs-218018.cecc.ihost.com:6443".
[…]
Step 3 – Install IBM Software Hub and IBM Cloud Pak for Data control plane
As the cp4d user, open a screen session. This will allow you to reconnect to your terminal using the screen -r command if you lose the SSH connection to the bastion host.
screen
Install the shared cluster components. This will install the IBM certificate manager and the IBM licensing service on your cluster.
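The commands for this step and the subsequent authorization step are not preserved in this snapshot. Per the cpd-cli documentation, they typically look like the following; verify the flags against your cpd-cli version:

```shell
# install the shared cluster components
# (IBM certificate manager and IBM licensing service)
cpd-cli manage apply-cluster-components \
  --release=${VERSION} \
  --license_acceptance=true \
  --cert_manager_ns=${PROJECT_CERT_MANAGER} \
  --licensing_ns=${PROJECT_LICENSE_SERVICE}

# authorize the operators project to watch the instance project
cpd-cli manage authorize-instance-topology \
  --cpd_operator_ns=${PROJECT_CPD_INST_OPERATORS} \
  --cpd_instance_ns=${PROJECT_CPD_INST_OPERANDS}
```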
[SUCCESS] 2025-06-17T02:57:48.847340Z The authorize-instance-topology command ran successfully.
Set up the instance, and include running a storage validation test. Note that this step can take 30-60 minutes as container images will be downloaded from the IBM container registry and containerized software for the IBM Cloud Pak for Data instance will be deployed into the PROJECT_CPD_INST_OPERANDS namespace.
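The setup-instance invocation is not preserved in this snapshot. Per the cpd-cli documentation, it typically looks like this, using the storage classes and namespaces from cpd_vars.sh:

```shell
# deploy the Cloud Pak for Data control plane into the instance
# namespace and run the storage validation test
cpd-cli manage setup-instance \
  --release=${VERSION} \
  --license_acceptance=true \
  --cpd_operator_ns=${PROJECT_CPD_INST_OPERATORS} \
  --cpd_instance_ns=${PROJECT_CPD_INST_OPERANDS} \
  --block_storage_class=${STG_CLASS_BLOCK} \
  --file_storage_class=${STG_CLASS_FILE} \
  --run_storage_tests=true
```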
[SUCCESS] 2025-06-17T03:40:37.037781Z The setup-instance command ran successfully.
Step 4 – Install Watson Studio and Watson Machine Learning services
Install the operators for the IBM Cloud Pak for Data services that you have specified in the COMPONENTS environment variable. In our case the operators for the Watson Studio (ws) and Watson Machine Learning (wml) services will be installed.
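The apply-olm invocation is not preserved in this snapshot. Per the cpd-cli documentation, it typically looks like this:

```shell
# install the operators for the components listed in
# the COMPONENTS variable (here: ws,wml)
cpd-cli manage apply-olm \
  --release=${VERSION} \
  --cpd_operator_ns=${PROJECT_CPD_INST_OPERATORS} \
  --components=${COMPONENTS}
```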
[SUCCESS] 2025-06-17T03:52:44.106196Z The apply-olm command ran successfully.
Install the Watson Studio and Watson Machine Learning services by running the apply-cr command. Note that this command can take around 30 minutes to complete due to the number of services installed as part of Watson Studio and Watson Machine Learning.
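The apply-cr invocation is not preserved in this snapshot. Per the cpd-cli documentation, it typically looks like this:

```shell
# create the custom resources for the components listed in COMPONENTS
cpd-cli manage apply-cr \
  --release=${VERSION} \
  --cpd_instance_ns=${PROJECT_CPD_INST_OPERANDS} \
  --components=${COMPONENTS} \
  --block_storage_class=${STG_CLASS_BLOCK} \
  --file_storage_class=${STG_CLASS_FILE} \
  --license_acceptance=true
```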
[…]
TASK [utils : check if CR status indicates completion for ws-cr in cpd-instance, max retry 150 times 60s delay]
************************************************************
Not ready yet - Retrying: check if CR status indicates completion for ws-cr in cpd-instance, max retry 150 times 60s delay (150 Retries left)
Not ready yet - Retrying: check if CR status indicates completion for ws-cr in cpd-instance, max retry 150 times 60s delay (149 Retries left)
[…]
Successful completion will look as follows:
[SUCCESS] 2025-05-22T09:31:09.350954Z The apply-cr command ran successfully.
Watch the installation progress. You can monitor the installation by running the following command in a second terminal on your bastion host. Make sure to run source cpd_vars.sh in that terminal first, so that the PROJECT_CPD_INST_OPERANDS environment variable is defined. The command lists the pods in the IBM Cloud Pak for Data instance namespace, with the most recently created pods shown first:
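The monitoring command is not preserved in this snapshot. One way to approximate it with standard oc flags is the following sketch (tac reverses the sorted list, so the newest pods appear first and the header line ends up at the bottom):

```shell
# list pods in the instance namespace, newest first
oc get pods -n ${PROJECT_CPD_INST_OPERANDS} \
  --sort-by=.metadata.creationTimestamp | tac
```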
Open a browser and enter the IBM Cloud Pak for Data web console URL you retrieved from the previous step. Accept the warnings of a potential security risk. Then enter cpadmin as the username and the default IBM Cloud Pak for Data admin password retrieved from step 3. Click Log in to proceed.
Click the Switch Location icon at the upper-right corner.
Watch the progress of the AutoAI experiment. The first time you run it, the experiment might remain in the Pending state for a while, as a new container image for the AutoAI experiment is downloaded from the internet to your OpenShift Container Platform cluster.
After the experiment is complete, notice that the state now shows “Experiment completed” indicating that you have successfully run the AutoAI experiment on your IBM Cloud Pak for Data 5.2 cluster running on OpenShift 4.18 on Power Virtual Server.
Step 6 – Optional: Install Analytics Engine, RStudio, Decision Optimization, Db2 services, DataStage, and IBM Knowledge Catalog
Modify the cpd_vars.sh file in order to specify the additional IBM Cloud Pak for Data services you want to install on the cluster.
cp cpd_vars.sh cpd_vars.sh.ORG
sed -i \
's/COMPONENTS=ws,wml/COMPONENTS=analyticsengine,rstudio,dods,db2oltp,db2wh,dmc,datastage_ent_plus,wkc/g' \
cpd_vars.sh
In our case, for IBM Cloud Pak for Data on Power Virtual Server, we are going to install these additional services: Analytics Engine, RStudio, Decision Optimization, Db2 Services (Db2 OLTP, Db2 Warehouse, Db2 Management Console), DataStage, and IBM Knowledge Catalog.
Source the modified cpd_vars.sh file.
source cpd_vars.sh
Verify that the COMPONENTS environment variable has been updated, and then run the apply-olm command again to install the operators for the additional services.
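The commands for this step are not preserved in this snapshot; a typical sequence, with the apply-olm flags taken from the cpd-cli documentation, would be:

```shell
# confirm the new component list
echo $COMPONENTS

# install the operators for the additional components
cpd-cli manage apply-olm \
  --release=${VERSION} \
  --cpd_operator_ns=${PROJECT_CPD_INST_OPERATORS} \
  --components=${COMPONENTS}
```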
[SUCCESS] 2025-06-17T05:23:11.944486Z The apply-olm command ran successfully.
Run the apply-cr command to install the additional IBM Cloud Pak for Data services on your cluster. Note that this step can take 2 to 3 hours to complete, due to the large number of services being installed.
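As before, the exact invocation is not preserved in this snapshot; per the cpd-cli documentation it typically looks like this, now picking up the expanded COMPONENTS list:

```shell
# create the custom resources for the additional components
cpd-cli manage apply-cr \
  --release=${VERSION} \
  --cpd_instance_ns=${PROJECT_CPD_INST_OPERANDS} \
  --components=${COMPONENTS} \
  --block_storage_class=${STG_CLASS_BLOCK} \
  --file_storage_class=${STG_CLASS_FILE} \
  --license_acceptance=true
```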
[SUCCESS] 2025-06-17T06:45:22.843084Z The apply-cr command ran successfully.
Next, you need to create a new Db2 database instance on the IBM Cloud Pak for Data cluster to verify that the Db2 service on IBM Cloud Pak for Data is working fine.
Log in to the IBM Cloud Pak for Data web console.
Click the main menu icon and then click Data --> Databases.
The new Db2 database is now being provisioned. Note that the provisioning process can take 10 to 15 minutes to complete.
Summary
This tutorial helped you to install a comprehensive AI and machine learning environment using IBM Cloud Pak for Data 5.2 on your IBM Power Virtual Server environment and to run a simple AutoAI experiment.
Note that this Tekton pipeline is not part of the official IBM Cloud Pak for Data product, but is an asset that has been developed by IBM Client Engineering. So, use it at your own risk!