
Tutorial

Deploying IBM Order Management system and Order Service containers on Minikube

Learn to deploy IBM Order Management System, Order Service Containers, and dependencies on Minikube using an operator

By

Sameer Saeed,

Chiranjeevi Dasegowda

This tutorial demonstrates how you can deploy IBM Order Management System and Order Service Containers, along with their dependent stack such as Elasticsearch and Cassandra, through the operator on a desktop-sized machine by using Minikube.

The sample configuration in this tutorial demonstrates how to deploy IBM Order Management System and Order Service Containers as a standalone application for proof of concept (POC) purposes.

Introduction to IBM Sterling Order Management System and IBM Order Service

IBM Sterling Order Management System (OMS) is the backbone of supply chain and commerce initiatives for large enterprises around the world. The product provides a robust platform designed to give B2C and B2B organizations the power to innovate, differentiate, and drive their omnichannel businesses with less overhead. This rapid pace of innovation drives a lot of effort in deployment and automation practices. Tools such as Docker and Kubernetes bring exceptional speed and value to these enterprises to achieve their engineering excellence.

IBM Order Service is a new feature of Sterling Order Management System software. Order Service advances IBM's modular business service vision for the Sterling Order Management System software platform by building more robust and scalable order search and archival capabilities on a modernized technology stack and architecture. Order Service is deployed alongside Sterling Order Management System software to provide enhanced functionality as part of an expanded solution footprint and comprises two components: Order Search and Archive Service.

  • Order Search provides faster access to order data with a more robust query language and reduces the workload on core Sterling Order Management System software application servers by moving it to a scalable and highly available repository. Order Search uses Elasticsearch to store key-order data and makes it available through a set of GraphQL APIs.

  • Archive Service enables customers to retain a greater amount of historical order data by offloading it to an optimized storage repository, reducing the Sterling Order Management System software database resource requirements while still providing seamless access to the data. Archive Service uses Cassandra to efficiently store large amounts of order data and makes it available through a set of GraphQL APIs.

Order Service is available for Sterling Order Management System Software Containers, and is distributed only to users who are entitled to Sterling Order Management System Software Containers. You can install, configure, and deploy the Order Service images in Sterling Order Management System Software Professional or Enterprise edition.

Development and testing with Minikube

Minikube provides a minimal Kubernetes (K8s) cluster with a Docker container runtime, ideal for local development and testing. It is specifically designed for deployment on developers’ desktops.

Note: This guide is intended for development and testing purposes only. For production deployment, consult the official product documentation.

Estimated time

This tutorial should take a few hours to complete.

Prerequisites

Hardware requirements

  • 100 GB+ of storage
  • 24 GB+ of memory (preferably 32+)
  • 8 available virtual CPUs (preferably 16)

Stack used for demonstration purposes

  • OS version: Red Hat Enterprise Linux release 8.9 (Ootpa)
  • minikube version: v1.32.0

For production deployments

  • Use compatible databases and other supported software as specified in the product documentation.

  • Refer to the compatibility report for OMS operator and container image tags.

Deployment steps

Step 1. Installing Minikube

  1. Create a non-root user

    a. Create a non-root user and grant sudo permissions

    sudo useradd -m -s /bin/bash supportuser
     sudo passwd supportuser
     sudo usermod -aG wheel supportuser

    b. Switch to the non-root user

    su - supportuser
     sudo groupadd docker
     sudo usermod -aG docker $USER && newgrp docker
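
    To confirm that the wheel and docker group memberships took effect, you can quickly check the user's groups (a simple sanity check; the user name matches the commands above):

    id supportuser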
  2. Install dependent packages

    a. Install kubectl: Kubectl is an essential command-line tool used for interacting with Kubernetes clusters. Here's how to install it:

    Update your package manager's repository information

    sudo yum update

    Download and install kubectl

    curl -LO "https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubectl"
     chmod +x ./kubectl
     sudo mv ./kubectl /usr/local/bin/kubectl

    Verify the installation by checking the version

    kubectl version --client

    b. Install Minikube - Minikube is a tool that allows you to run a Kubernetes cluster locally. Here's how to install it:

    Download and install Minikube

    curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
     sudo install minikube-linux-amd64 /usr/local/bin/minikube

    c. Install Docker and dependent packages

    Install conntrack: Conntrack is a utility used to view and manipulate the network connection tracking table in the Linux kernel, which is essential for Kubernetes. Install it with the following command:

    sudo yum install conntrack

    Install crictl: Crictl is a command-line interface for the Container Runtime Interface (CRI). To install it, follow these steps:

    • Determine the latest version of crictl on the GitHub releases page.
    • Download and install crictl (replace $VERSION with the latest version):

      export VERSION="v1.26.0"
        curl -L https://github.com/kubernetes-sigs/cri-tools/releases/download/$VERSION/crictl-${VERSION}-linux-amd64.tar.gz --output crictl-${VERSION}-linux-amd64.tar.gz
        sudo tar zxvf crictl-$VERSION-linux-amd64.tar.gz -C /usr/local/bin
        rm -f crictl-$VERSION-linux-amd64.tar.gz

      Note: Remove any conflicting versions of runc and re-install:

      rpm -qa | grep runc
      sudo yum remove <output of above>

      For example:

      sudo yum remove runc-1.1.12-1.module+el8.9.0+21243+a586538b.x86_64
      sudo yum install runc

    Install socat, a utility for multiplexing network connections:

    sudo yum install socat

    Install cri-dockerd by downloading the latest RPM for your OS and installing it (see the example after the libcgroup note below):

    Note: Run the following command to install libcgroup only if you are on RHEL version 8.x. You can skip it if you are on 9.x.

    sudo yum install libcgroup
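
    The cri-dockerd RPM is published on the Mirantis/cri-dockerd GitHub releases page. The following is only a sketch of downloading and installing it on RHEL 8; the version number and RPM file name are placeholders and should be replaced with the values shown on the releases page:

    # Example only: check https://github.com/Mirantis/cri-dockerd/releases for the
    # latest release and the exact RPM file name for your OS (an el8 build is shown here)
    export CRI_DOCKERD_VERSION="0.3.4"
    export CRI_DOCKERD_RPM="cri-dockerd-${CRI_DOCKERD_VERSION}-3.el8.x86_64.rpm"
    curl -LO "https://github.com/Mirantis/cri-dockerd/releases/download/v${CRI_DOCKERD_VERSION}/${CRI_DOCKERD_RPM}"
    sudo yum install ./${CRI_DOCKERD_RPM}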

    Install the Container Networking Interface (CNI) plugins:

    Find the latest version at https://github.com/containernetworking/plugins/releases

    CNI_PLUGIN_VERSION="v1.3.0"
    CNI_PLUGIN_TAR="cni-plugins-linux-amd64-$CNI_PLUGIN_VERSION.tgz"
    CNI_PLUGIN_INSTALL_DIR="/opt/cni/bin"
    curl -LO "https://github.com/containernetworking/plugins/releases/download/$CNI_PLUGIN_VERSION/$CNI_PLUGIN_TAR"
    sudo mkdir -p "$CNI_PLUGIN_INSTALL_DIR"
    sudo tar -xf "$CNI_PLUGIN_TAR" -C "$CNI_PLUGIN_INSTALL_DIR"
    rm "$CNI_PLUGIN_TAR"

    Install Docker: Docker is required for container runtime support. Use the following commands to install Docker on your system:

    Install required utilities and add the Docker repository:

    sudo yum install -y yum-utils
    sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

    Install Docker and related packages:

    sudo yum install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

    Start Docker:

    sudo systemctl start docker
    sudo systemctl status docker

    d. Start Minikube

    Now that you have installed all the necessary components, you can start Minikube:

    minikube start --driver=docker --cpus=<cores> --memory=<size> --disk-size=<size> --addons=metrics-server,dashboard,ingress

    For example:

    minikube start --driver=docker --cpus=14 --memory=56000 --disk-size=50g --addons=metrics-server,dashboard,ingress

    Validate the installation:

    minikube status
    [...]
    minikube
    type: Control Plane
    host: Running
    kubelet: Running
    apiserver: Running
    kubeconfig: Configured
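
    As an additional check, you can confirm that kubectl is pointing at the new Minikube cluster and that the system pods are healthy:

    kubectl get nodes
    kubectl get pods -A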

Step 2. Accessing Minikube dashboard remotely

The Minikube dashboard is a powerful web-based interface that provides insights into the state of your Minikube cluster. As a user-friendly graphical user interface (GUI), it offers various functionalities for managing Kubernetes resources. Here's what you can do using the Minikube dashboard:

  • Overview of Cluster Resources: The dashboard provides an at-a-glance overview of your Minikube cluster's nodes, pods, services, and more. This makes it easy to monitor the overall health of your cluster and quickly identify any issues.

  • Managing Deployments: You can create, scale, and manage deployments directly from the dashboard. This simplifies the process of launching applications and ensures they are running optimally.

  • Inspecting Pods and Containers: The dashboard lets you explore the details of pods, containers, and their associated logs. This is particularly valuable for debugging issues and analyzing application behavior.

  • Services and Ingress Management: Manage services and expose them via LoadBalancer, NodePort, or ClusterIP. Additionally, you can configure and manage Ingress resources to control external access to services.

  • ConfigMaps and Secrets: Create and manage ConfigMaps and Secrets, which store configuration data and sensitive information separately from application code.

  • Event Tracking: Stay informed about events in your cluster. The dashboard displays events related to pods, deployments, services, and other resources, aiding in identifying problems.

  • Cluster and Namespace Switching: If you're working with multiple clusters or namespaces, the dashboard allows you to seamlessly switch between them, streamlining management tasks.

  • Pod Terminal Access: With a single click, you can access a terminal directly within a pod's container. This is invaluable for debugging and troubleshooting.

Let's explore how to access the Minikube dashboard remotely and manage Kubernetes resources with ease:

  1. Install the NetworkManager service:

    sudo yum install NetworkManager
  2. Start the NetworkManager service to manage network connections:

    sudo systemctl start NetworkManager
  3. Allow access to the Minikube dashboard port (8001/tcp) through the firewall:

    sudo systemctl start firewalld
     sudo firewall-cmd --add-port=8001/tcp --zone=public --permanent
     sudo firewall-cmd --reload
  4. From the Minikube server, get the URL to access the Minikube dashboard:

    minikube dashboard --url
  5. Access the Minikube dashboard remotely:

    Establish another terminal connection to the Minikube server and start a proxy server that listens on all network interfaces:

    minikube kubectl -- proxy --address='0.0.0.0' --disable-filter=true

    Access the dashboard using the URL provided earlier but replace the IP address with the public IP of the Minikube host.

    The URL should resemble http://<your_instance_ip>:8001/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/.

    Additional troubleshooting for Minikube dashboard access: If you encounter an inaccessible Minikube Dashboard URL and notice that the dashboard pods are in a crash loop backoff (which you can check using the command kubectl get pods -n kubernetes-dashboard), consider the following step to resolve the issue:

    Restart Docker: If Docker-related errors such as networking or iptables issues are observed, restarting the Docker service can help. Use the command sudo systemctl restart docker. This action can reset Docker's networking components and often resolves connectivity and configuration issues impacting pod operations in Minikube.

Step 3. Installing the Operator SDK CLI and OLM

Overview of the OMS Standard Operator

The OMS Standard Operator simplifies containerized deployments by adhering to Kubernetes best practices. It manages applications and components through custom resources, particularly the OMEnvironment resource. This resource allows you to configure:

  • Application images
  • Storage options
  • PostgreSQL and ActiveMQ dependencies
  • Network policies
  • Other essential settings

With these configurations, the operator facilitates the deployment of a fully functional OMS environment. As part of this guide, we will install the Operator SDK so that we can use the Operator framework to deploy the OMS operator.

Operator SDK installation steps

  1. Download and install the Operator SDK CLI:

    RELEASE_VERSION=$(curl -s https://api.github.com/repos/operator-framework/operator-sdk/releases/latest | grep tag_name | cut -d '"' -f 4)
     sudo curl -LO "https://github.com/operator-framework/operator-sdk/releases/download/${RELEASE_VERSION}/operator-sdk_linux_amd64"
     sudo chmod +x operator-sdk_linux_amd64
     sudo mv operator-sdk_linux_amd64 /usr/local/bin/operator-sdk
  2. Install OLM:

    operator-sdk olm install --version=latest

    The output should be similar to the following, indicating a successful installation:

    operator-sdk olm install --version=latest
    
     INFO[0000] Fetching CRDs for version "latest"
     INFO[0000] Fetching resources for resolved version "latest"
     INFO[0001] Checking for existing OLM CRDs
     INFO[0001] Checking for existing OLM resources
     INFO[0001] Installing OLM CRDs...
     INFO[0001]   Creating CustomResourceDefinition "catalogsources.operators.coreos.com"
     INFO[0002]   CustomResourceDefinition "catalogsources.operators.coreos.com" created 
     INFO[0002]   Creating CustomResourceDefinition "clusterserviceversions.operators.coreos.com" 
     INFO[0003]   CustomResourceDefinition "clusterserviceversions.operators.coreos.com" created 
     INFO[0003]   Creating CustomResourceDefinition "installplans.operators.coreos.com" 
     [...]
     INFO[0011] Creating OLM resources...
     INFO[0011]   Creating Namespace "olm"
     INFO[0012]   Namespace "olm" created
     INFO[0012]   Creating Namespace "operators"
     INFO[0013]   Namespace "operators" created
     INFO[0013]   Creating ServiceAccount "olm/olm-operator-serviceaccount"
     INFO[0014]   ServiceAccount "olm/olm-operator-serviceaccount" created 
     INFO[0014]   Creating ClusterRole "system:controller:operator-lifecycle-manager" 
     [...]
     INFO[0025] Waiting for deployment/olm-operator rollout to complete
     INFO[0026]   Waiting for Deployment "olm/olm-operator" to rollout: 0 of 1 updated replicas are available
     INFO[0033]   Deployment "olm/olm-operator" successfully rolled out
     INFO[0033] Waiting for deployment/catalog-operator rollout to complete
     INFO[0034]   Deployment "olm/catalog-operator" successfully rolled out
     INFO[0034] Waiting for deployment/packageserver rollout to complete
     INFO[0035]   Waiting for Deployment "olm/packageserver" to rollout: 0 of 2 updated replicas are available
     INFO[0038]   Deployment "olm/packageserver" successfully rolled out
     INFO[0038] Successfully installed OLM version "latest"
    
     NAME                                            NAMESPACE    KIND                        STATUS
     catalogsources.operators.coreos.com                          CustomResourceDefinition    Installed
     clusterserviceversions.operators.coreos.com                  CustomResourceDefinition    Installed
     installplans.operators.coreos.com                            CustomResourceDefinition    Installed
     olmconfigs.operators.coreos.com                              CustomResourceDefinition    Installed
     operatorconditions.operators.coreos.com                      CustomResourceDefinition    Installed
     operatorgroups.operators.coreos.com                          CustomResourceDefinition    Installed
     operators.operators.coreos.com                               CustomResourceDefinition    Installed
     subscriptions.operators.coreos.com                           CustomResourceDefinition    Installed
     olm                                                          Namespace                   Installed
     operators                                                    Namespace                   Installed
     olm-operator-serviceaccount                     olm          ServiceAccount              Installed
     system:controller:operator-lifecycle-manager                 ClusterRole                 Installed
     olm-operator-binding-olm                                     ClusterRoleBinding          Installed
     cluster                                                      OLMConfig                   Installed
     olm-operator                                    olm          Deployment                  Installed
     catalog-operator                                olm          Deployment                  Installed
     aggregate-olm-edit                                           ClusterRole                 Installed
     aggregate-olm-view                                           ClusterRole                 Installed
     global-operators                                operators    OperatorGroup               Installed
     olm-operators                                   olm          OperatorGroup               Installed
     packageserver                                   olm          ClusterServiceVersion       Installed
     operatorhubio-catalog                           olm          CatalogSource               Installed

    Note: If the OLM install fails for some reason, uninstall the previous version and then re-install.

    To resolve this issue and perform a clean installation of OLM, you can follow these steps:

    i. You need to uninstall the existing OLM resources from your Kubernetes cluster. To do this, you can use the kubectl command. Here is a general approach to uninstall OLM:

    operator-sdk olm uninstall --version=latest
     kubectl delete crd olmconfigs.operators.coreos.com
     kubectl delete clusterrole aggregate-olm-edit
     kubectl delete clusterrole aggregate-olm-view
     kubectl delete clusterrolebinding olm-operator-binding-olm
     kubectl delete clusterrole system:controller:operator-lifecycle-manager
     kubectl delete -n kube-system rolebinding packageserver-service-auth-reader
     kubectl delete -n operators serviceaccount default

    The preceding commands delete the OLM-related resources that the earlier installation created. If any of the delete commands reports that a resource is not found, that resource was already removed and the message can be ignored.

    ii. After running the commands to delete OLM resources, verify that there are no remaining OLM resources in your cluster:

    kubectl get subscriptions.operators.coreos.com
     kubectl get catalogsources.operators.coreos.com
     kubectl get operatorgroups.operators.coreos.com
     kubectl get clusterserviceversions.operators.coreos.com

    If these commands return empty lists, it means that OLM has been successfully uninstalled.

    iii. After ensuring that OLM is uninstalled, you can proceed with the installation of the desired OLM version. Refer to step 2 above to re-install OLM.

    After installing OLM, you can verify the installation by checking its resources with kubectl get crd -n olm:

    NAME                                             CREATED AT
     catalogsources.operators.coreos.com              2023-10-25T00:55:49Z
     clusterserviceversions.operators.coreos.com      2023-10-25T00:55:49Z
     installplans.operators.coreos.com                2023-10-25T00:55:49Z
     olmconfigs.operators.coreos.com                  2023-10-25T00:55:49Z
     operatorconditions.operators.coreos.com          2023-10-25T00:55:49Z
     operatorgroups.operators.coreos.com              2023-10-25T00:55:49Z
     operators.operators.coreos.com                   2023-10-25T00:55:49Z
     subscriptions.operators.coreos.com               2023-10-25T00:55:49Z

    You should see the new OLM resources related to the version you installed.

    By following these steps, you should be able to uninstall existing OLM resources and perform a clean installation of the desired OLM version in your Kubernetes cluster. Be sure to refer to the specific documentation or instructions for the OLM version you are working with for any version-specific installation steps or considerations.

  3. Overwriting PodSecurityStandards (PSS):

    Kubernetes has an equivalent of SecurityContextConstraints (from OpenShift) called PodSecurityStandards (PSS) that enforces different profiles (privileged, baseline, and restricted) at a namespace level. When a restricted profile is defaulted on a namespace, pod spec is enforced to contain the securityContext.seccompProfile.type field with a valid value. In this case, the Operator installation fails because the namespace (olm) has restricted PSS, but the Operator controller deployment does not have the field.

    To overcome this, switch to the baseline PSS that does not enforce the securityContext.seccompProfile.type field, by using the following command:

    kubectl label --overwrite ns olm pod-security.kubernetes.io/enforce=baseline

    Delete the out-of-box OLM CatalogSource:

    kubectl delete catalogsource operatorhubio-catalog -n olm

    The output should be similar to the following, indicating that the OLM CatalogSource was successfully deleted:

    catalogsource.operators.coreos.com "operatorhubio-catalog" deleted
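
    Before moving on, it can also help to confirm that the OLM deployments are running and that only the catalog sources you expect remain (pod names vary by OLM version):

    kubectl get pods -n olm
    kubectl get catalogsource -n olm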

Step 4. Creating IBM entitlement key secret

An image pull secret named ibm-entitlement-key must be created with the IBM entitlement registry credentials in the namespace (project) where you are configuring OMEnvironment. For more information, see the corresponding documentation.

  1. Go to https://myibm.ibm.com/products-services/containerlibrary and copy your entitlement key.

    Export the entitlement key and namespace variables.

    export ENTITLEDKEY="<Entitlement key from MyIBM>"
     export NAMESPACE="<Project or namespace name for OMS deployment>"
  2. Create ibm-entitlement-key under the namespace where you will be deploying OMS and Order Service by running the following command:

    kubectl create secret docker-registry ibm-entitlement-key \
     --docker-server=cp.icr.io \
     --docker-username=cp \
     --docker-password=${ENTITLEDKEY} \
     --namespace=${NAMESPACE}

     Note: The operator catalog is pulled from the open IBM registry (icr.io/cpopen). However, most container images are commercial. Contact your IT or enterprise administrator to get access to the entitlement key.
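
     To confirm that the pull secret was created in the target namespace, you can list it; the TYPE column should show kubernetes.io/dockerconfigjson:

     kubectl get secret ibm-entitlement-key -n ${NAMESPACE}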

Step 5. Installing and deploying IBM Sterling Order Management System (OMS) and IBM Order Service

  1. Create a namespace for OMS. This namespace will also be used for OMS sub-applications such as Call Center, Order Hub, and Order Service.

    kubectl create namespace oms
  2. Configure PostgreSQL and ActiveMQ:

    The IBM Sterling OMS operator can automatically install the required middleware, such as PostgreSQL and ActiveMQ, as shown in the following devInstances snippet. Note that these middleware instances are intended for development purposes only.

    devInstances:
      profile: ProfileColossal
      postgresql:
        repository: docker.io
        tag: '16.1'
        name: postgres
        user: postgres
        password: postgres
        database: postgres
        schema: postgres
        wipeData: true
        # storage:
        #   name: <omsenvironment-operator-pv-oms-test>
        profile: ProfileColossal
        # timezone: <Timezone>

      activemq:
        repository: docker.io
        tag: 6.1.0
        name: apache/activemq-classic
        # storage:
        #   name: <omsenvironment-operator-pv-oms-test>
        profile: ProfileColossal
        # timezone: <Timezone>
  3. Preparing and configuring IBM Order Service:

    i. Pull the Cassandra Docker image:

    docker pull cassandra:4.0.10

    ii. Run the Cassandra Docker container:

    docker run --name cassandra -d \
         -e CASSANDRA_BROADCAST_ADDRESS=<your-instance-ip> \
         -p 7000:7000 \
         -p 9042:9042 \
         --restart always \
         cassandra:4.0.10

    iii. Enable port 9042 on RHEL:

    sudo systemctl start firewalld
     sudo systemctl enable firewalld
     sudo firewall-cmd --zone=public --add-port=9042/tcp --permanent
     sudo firewall-cmd --reload
     sudo firewall-cmd --zone=public --list-ports

    iv. Verify that Cassandra is running in a Docker container:

    docker ps

    v. Check Cassandra logs:

    docker logs -f cassandra

    vi. Configure Cassandra keyspaces: Exec into the Cassandra container and open a cqlsh session.

    docker exec -it cassandra cqlsh
    
     Connected to Test Cluster at 127.0.0.1:9042
     [cqlsh 6.0.0 | Cassandra 4.0.10 | CQL spec 3.4.5 | Native protocol v5]
     Use HELP for help.
     cqlsh>

    Then, from the cqlsh prompt, run the following command to create the required keyspace for Cassandra:

    CREATE KEYSPACE orderservice WITH replication = {'class': 'SimpleStrategy', 'replication_factor': '1'};
     exit

    After following the preceding steps, you have a Cassandra container running on port 9042 with the necessary keyspace configured.
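
    To double-check the keyspace without opening an interactive cqlsh session, you can run the statement directly; orderservice should appear in the output:

    docker exec -it cassandra cqlsh -e "DESCRIBE KEYSPACES;"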

  4. Deploying IBM Sterling Order Management System:

    i. Create catalog-source.yaml to create your deployment's catalog source, subscription.yaml to manage your OMS operator subscription, and operator-group.yaml to create the operator groups necessary to deploy OMS CRDs:

    kubectl create -f catalog-source.yaml
     kubectl create -f subscription.yaml
     kubectl create -f operator-group.yaml

    Where catalog-source.yaml contains the following:

     apiVersion: operators.coreos.com/v1alpha1
     kind: CatalogSource
     metadata:
       name: ibm-oms-catalog
       namespace: olm
     spec:
       displayName: IBM OMS Operator Catalog
       # update to 'ibm-oms-pro-case-catalog' if using OMS Professional Edition
       image: icr.io/cpopen/ibm-oms-ent-case-catalog:v1.0
       publisher: IBM
       sourceType: grpc
       updateStrategy:
         registryPoll:
           interval: 10m0s

    subscription.yaml contains the following:

    apiVersion: operators.coreos.com/v1alpha1
     kind: Subscription
     metadata:
       name: oms-operator
       namespace: oms
     spec:
       channel: v1.0
       installPlanApproval: Automatic
       name: ibm-oms-ent
       source: ibm-oms-catalog
       sourceNamespace: olm

    and operator-group.yaml contains the following:

     apiVersion: operators.coreos.com/v1
     kind: OperatorGroup
     metadata:
       name: ibm-oms-ent-group
       namespace: oms
     spec: {}

    You can validate whether your OMS CRDs are created by checking your Custom Resource Definitions on the Minikube dashboard. For any issues, you can check the logs of your olm-operator pod within your olm namespace.
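
     If you prefer the command line over the dashboard, you can run the equivalent checks with kubectl; the grep pattern below is just an example filter:

     kubectl get crd | grep -i oms
     kubectl get subscriptions,installplans,csv -n oms
     kubectl logs deployment/olm-operator -n olm --tail=50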

    ii. You will also need to create persistent volume claims (PVCs) to request storage for your deployment:

    Required storage:

    PVC               Recommended size   Purpose
     oms-pvc           10GB               OMS shared storage for logs and configuration files
     oms-pvc-ordserv   20GB               Order Service shared storage

    To create the PVCs for OMS and Order Service, run:

    kubectl apply -f oms-pvc.yaml -n oms

    Where your oms-pvc.yaml file contains the following:

    apiVersion: v1
     kind: PersistentVolumeClaim
     metadata:
       name: oms-pvc
     spec:
       accessModes:
         - ReadWriteMany
         - ReadWriteOnce
         - ReadOnlyMany
       resources:
         requests:
           storage: 10Gi
       volumeName:
       storageClassName: "standard"
    
     ---
    
     apiVersion: v1
     kind: PersistentVolumeClaim
     metadata:
       name: oms-pvc-ordserv
     spec:
       accessModes:
         - ReadWriteMany
       resources:
         requests:
           storage: 20Gi
       volumeName:
       storageClassName: "standard"
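
     You can then verify that both claims were created and bound; with Minikube's default standard storage class, they are typically Bound right away:

     kubectl get pvc -n oms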

    iii. Update the hostname and namespace fields in the following cert.sh file to match your deployment:

    #!/bin/bash
    
     # Variables
     HOSTNAME="<domain-of-your-instance>"
     EXPIRY_DAYS=365
     CERT_NAME="ingress-cert"
     NAMESPACE="oms"
     PKCS12_NAME="tls.p12"
     JKS_NAME="keystore.jks"
     ALIAS="myapp"
     STOREPASS="password"
     KEYPASS="password"
     DNAME="CN=$HOSTNAME, OU=Example, O=Example, L=City, S=State, C=US"
    
     # Generate self-signed certificate
     openssl req -x509 -nodes -days $EXPIRY_DAYS -newkey rsa:2048 -keyout tls.key -out tls.crt -subj "/CN=$HOSTNAME"
    
      # Generate PKCS#12 certificate, protected with $STOREPASS
      openssl pkcs12 -export -in tls.crt -inkey tls.key -out $PKCS12_NAME -name "$HOSTNAME" -passout pass:$STOREPASS

      # Add certificate to Java Keystore using the passwords defined above
      keytool -importkeystore -srckeystore $PKCS12_NAME -srcstoretype pkcs12 -srcstorepass $STOREPASS -destkeystore $JKS_NAME -deststoretype JKS -deststorepass $STOREPASS -destkeypass $KEYPASS -alias "$HOSTNAME"

      # Create Kubernetes TLS secret
      kubectl create secret tls $CERT_NAME --cert=tls.crt --key=tls.key --namespace $NAMESPACE

     After doing so, make the script executable (chmod +x cert.sh) and run ./cert.sh to generate the TLS secret.

     Note: Make sure to remember the passwords you set for your truststore and keystore (the script's STOREPASS and KEYPASS variables default to password), as you will need them in the next step when creating your secret file.

    This script automates the creation of a self-signed certificate and integrates it into your Kubernetes environment.

    It starts by defining variables for configuration, such as the hostname, certificate name, and passwords.

    The script then generates a self-signed certificate using OpenSSL and creates a PKCS#12 certificate from the generated key and certificate.

    This PKCS#12 certificate is then imported into a Java Keystore (JKS) using the keytool utility.

    Finally, the script creates a Kubernetes TLS secret with the generated certificate and key within the specified namespace.
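
     Once the script completes, you can confirm that the TLS secret referenced later by the ingress configuration (ingress-cert, as set in the script) exists:

     kubectl get secret ingress-cert -n oms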

    iv. Set the trustStorePassword and keyStorePassword values in the following oms-secret.yaml file to the passwords you used for your truststore and keystore:

    apiVersion: v1
     kind: Secret
     metadata:
        name: 'oms-secret'
        namespace: 'oms'
     type: Opaque
     stringData:
       consoleAdminPassword: 'password'
       consoleNonAdminPassword: 'password'
       dbPassword: 'postgres'
       trustStorePassword: 'password'
       keyStorePassword: 'password'
       es_username: 'esadmin'
       es_password: 'password'
       cassandra_username: 'csadmin'
       cassandra_password: 'password'

     Note: We will apply the above OMS secret yaml in step vii, as we will also be setting up JWT for OMS and Order Service prior to deploying OMS.

    v. Create a configmap containing your truststore (.p12) file generated from the previous step.

    kubectl create configmap truststoreconfigmap --from-file=/home/supportuser/path/to/your/tls.p12 -n oms

    Note: If you choose to use a different name for your configmap, ensure that you modify the additionalMounts parameter in your om-environment.yaml file accordingly:

    additionalMounts:
       configMaps:
         - mountPath: '/shared/tls.p12'
           name: truststoreconfigmap
           subPath: tls.p12

    vi. MQ bindings: If you have an existing MQ bindings file from another deployment, you can create a configmap using the contents of your .bindings file. Otherwise, you can use an empty file for your configmap for testing and update it later if needed:

    touch .bindings
     kubectl create configmap oms-bindings --from-file=/home/supportuser/path/to/your/.bindings -n oms

    If you choose to use the same configmap name, your om-environment.yaml should be the following:

    kind: OMEnvironment
     apiVersion: apps.oms.ibm.com/v1beta1
     metadata:
       name: oms
       namespace: oms
       annotations:
         apps.oms.ibm.com/activemq-install-driver: 'yes'
         apps.oms.ibm.com/dbvendor-auto-transform: 'yes'
         apps.oms.ibm.com/dbvendor-install-driver: 'yes'
         apps.oms.ibm.com/refimpl-install: 'yes'
         apps.oms.ibm.com/refimpl-type: 'oms'
         kubernetes.io/ingress.class: 'nginx'
     spec:
       networkPolicy:
         podSelector:
           matchLabels:
             none: none
         policyTypes:
           - Ingress
       security:
         ssl:
           trust:
             storeLocation: '/shared/tls.p12'
             storeType: PKCS12
       license:
         accept: true
         acceptCallCenterStore: true
       common:
         jwt:
          algorithm: RS256
          audience: service
          issuer: oms
         appServer:
           ports:
             http: 9080
             https: 9443
         ingress:
           host: <your-instance-domain-name>
           ssl:
            enabled: true
            identitySecretName: ingress-cert
       serverProfiles:
         - name: small
           resources:
             requests:
               cpu: 200m
               memory: 512Mi
             limits:
               cpu: 1000m
               memory: 1Gi
         - name: medium
           resources:
             requests:
               cpu: 500m
               memory: 1Gi
             limits:
               cpu: 2000m
               memory: 2Gi
         - name: large
           resources:
             requests:
               cpu: 500m
               memory: 2Gi
             limits:
               cpu: 4000m
               memory: 4Gi
         - name: huge
           resources:
             requests:
               cpu: 500m
               memory: 4Gi
             limits:
               cpu: 4000m
               memory: 8Gi
         - name: colossal
           resources:
             requests:
               cpu: 500m
               memory: 4Gi
             limits:
               cpu: 4000m
               memory: 16Gi
       upgradeStrategy: RollingUpdate
       servers:
         - appServer:
             libertyServerXml: default-server-xml
             livenessCheckBeginAfterSeconds: 900
             livenessFailRestartAfterMinutes: 10
             serverName: DefaultAppServer
             terminationGracePeriodSeconds: 60
             vendor: websphere
             vendorFile: servers.properties
           image: {}
           name: server1
           profile: large
           property:
             customerOverrides: AppServerProperties
             jvmArgs: JVMArguments
           replicaCount: 1
       orderHub:
         adminURL: 'server1-oms.<your-instance-domain-name>'
         base:
           replicaCount: 1
           profile: 'medium'
       healthMonitor:
         profile: small
         replicaCount: 1
         upgradeStrategy: RollingUpdate
       orderService:
         cassandra:
           keyspace: orderservice
           contactPoints: '<your-instance-ip>:9042'
         configuration:
           additionalConfig:
             log_level: DEBUG
             order_archive_additional_part_name: ordRel
             service_auth_disable: 'true'
             enable_graphql_introspection: 'true'
             ssl_vertx_disable: 'false'
             ssl_cassandra_disable: 'true'
    
             # note: if you would like to use self-generated keys,
             # you can comment out the following JWT properties
             jwt_algorithm: RS256
             jwt_audience: service
             jwt_ignore_expiration: false
             jwt_issuer: oms
         elasticsearch:
           createDevInstance:
             profile: large
             storage:
               capacity: 20Gi
               name: oms-pvc-ordserv
               storageClassName: 'standard'
         orderServiceVersion: '10.0.2409.2'
         profile: large
         replicaCount: 1
       secret: oms-secret
       jms:
         mq:
           bindingConfigName: oms-bindings
           bindingMountPath: /opt/ssfs/.bindings
       serverProperties:
         customerOverrides:
           - groupName: BaseProperties
             propertyList:
               yfs.yfs.logall: N
               yfs.yfs.searchIndex.rootDirectory: /shared
             derivatives:
               - groupName: AppServerProperties
                 propertyList:
                   yfs.api.security.enabled: Y
                   yfs.interopservlet.security.enabled: false
                   yfs.userauthfilter.enabled: false
    
                   xapirest.servlet.jwt.auth.enabled: true
                   xapirest.servlet.cors.enabled: true
                   xapirest.servlet.cors.allow.credentials: true
                   yfs.yfs.searchIndex.rootDirectory: /shared
    
                   # note: if you would like to use self-generated keys,
                   # you can uncomment the following JWT properties
                   #yfs.yfs.jwt.create.issuer: oms
                   #yfs.yfs.jwt.create.audience: osrv
                   #yfs.yfs.jwt.create.pk.alias: '1'
                   #yfs.yfs.jwt.create.algorithm: RS256
                   #yfs.yfs.jwt.create.expiration: 3600
                   #yfs.yfs.jwt.oms.verify.keyloader: jkstruststore
         jvmArgs:
           - groupName: JVMArguments
       serviceAccount: default
       image:
         imagePullSecrets:
           - name: ibm-entitlement-key
         oms:
           agentDefaultName: om-agent
           appDefaultName: om-app
           pullPolicy: IfNotPresent
           repository: cp.icr.io/cp/ibm-oms-enterprise
           tag: 10.0.2409.2-amd64
         orderHub:
           base:
            imageName: om-orderhub-base
            pullPolicy: IfNotPresent
            repository: cp.icr.io/cp/ibm-oms-enterprise
            tag: 10.0.2409.2-amd64
         orderService:
           imageName: orderservice
           pullPolicy: IfNotPresent
           repository: cp.icr.io/cp/ibm-oms-enterprise
           tag: 10.0.2409.2-amd64
         pullPolicy: IfNotPresent
       database:
         postgresql:
           name: postgres
           host: oms-postgresql.oms.svc.cluster.local
           port: 5432
           user: postgres
           schema: postgres
           secure: false
           dataSourceName: jdbc/OMDS
       devInstances:
         profile: ProfileColossal
         postgresql:
           repository: docker.io
           tag: '16.1'
           name: postgres
           user: postgres
           password: postgres
           database: postgres
           schema: postgres
           wipeData: true
           profile: ProfileColossal
         activemq:
           repository: docker.io
           tag: 6.1.0
           name: apache/activemq-classic
           profile: ProfileColossal
       storage:
         accessMode: ReadWriteMany
         capacity: 10Gi
         name: oms-pvc
         securityContext:
           supplementalGroups:
             - 0
             - 1000
             - 1001
         storageClassName: 'standard'
       additionalMounts:
         configMaps:
           - mountPath: /shared/tls.p12
             name: truststoreconfigmap
             subPath: tls.p12
    
           # note: if you would like to use self-generated keys,
           # you can uncomment the following JWT keystore mount
           #- mountPath: /shared/jwtauth/jwt.jks
           #  name: jwt-jks-keystoreconfigmap
           #  subPath: jwt.jks
    
       # you can comment this property out after your first deployment
       dataManagement:
         mode: create

    Note the following for the above OMEnvironment yaml file:

    • Ensure that you have internet access before starting the k8s operator deployment for the OMS application. This deployment requires downloading a list of images. If the images are not downloaded, the deployment will fail. Alternatively, you can download these images in advance, push them to your local registry, and then perform the deployment by referring to your local registry. The required images are:

      • docker.io/postgres:16.1
      • docker.io/apache/activemq-classic:6.1.0
    • You will need to substitute your instance's domain name under spec.orderHub.adminURL and spec.common.ingress.host.

    • Under spec.orderService.cassandra.contactPoints, you will need to include your instance's IP.

    • The mode property of spec.dataManagement is set to create. The create mode is only required when an empty schema is being set up, so you can comment out this property after the first deployment of your OMS pods.

    • For upgrading fix packs in the future, you can uncomment this property, set the mode to upgrade, and then re-apply the yaml to install the latest fix packs.

    • You can find more information on this property within the IBM documentation on configuring the dataManagement parameter.

      vii. Generate JWT keys:

      Note: By default, OMS will generate its own keypair using the jwtkeygen job. The public key and the keystore containing the private key are saved to the /shared/jwtauth directory within your OMS PVC. If you would like to use the OMS-generated keystore and public key, you do not need to follow the instructions below to generate your own keypair; instead, modify the OMEnvironment yaml provided above to use the JWT tokens generated by OMS. In that case, store the contents of the public key generated by the jwtkeygen job (found under /shared/jwtauth/<jwt-alias-name>.pub) in the jwt_oms_public_key value of your Order Service secret.

      a) If you choose to self-generate your keys, you can use the following commands to do so:

    • Generating a 2048-bit RSA private key

      openssl genpkey -algorithm rsa -pkeyopt rsa_keygen_bits:2048 -out private-key.pem
    • Extracting public key from the private key

      openssl pkey -pubout -inform PEM -outform PEM -in private-key.pem -out public-key.pem
    • Creating a Certificate Signing Request (CSR) using the private key

      openssl req -new -key private-key.pem -out certificate.csr -subj "/CN=<your-OMS-hostname>"
    • Signing the CSR with the private key to generate a self-signed certificate that is valid for a year

      openssl x509 -req -days 365 -in certificate.csr -signkey private-key.pem > certificate.pem
    • Bundling the private key and certificate into a PKCS#12 file (certificate.p12)

      openssl pkcs12 -export -out certificate.p12 -inkey private-key.pem -in certificate.pem
    • Import the PKCS#12 file into a Java KeyStore (JKS) file

      keytool -importkeystore -destkeystore jwt.jks -srckeystore certificate.p12 -srcstoretype pkcs12 -alias 1

      The keystore is named jwt.jks and is mounted to the OMS PVC directory /shared/jwtauth to override the keystore that OMS generates by default.

      If you name your keystore something else, such as keystore.jks, OMS will try to read the keystore that it generates itself, which can lead to an Unauthorized error when you make your Order Service calls.

      You can find more information on setting up a self-generated JWT keypair within the IBM documentation.

      As mentioned in the above documentation, when providing your own keystore to OMS you will need to copy it over to your OMS PVC to override the default generated keystore from the jwtkeygen job:

      Delete existing OMS-generated keystore file (if applicable):

      minikube ssh
      cd /var/hostpath-provisioner/oms/oms-pvc
      sudo rm -rf jwtauth
      sudo mkdir jwtauth
      logout

      Copy self-generated keystore to PVC

      minikube cp /path/to/your/jwt.jks /var/hostpath-provisioner/oms/oms-pvc/jwtauth/jwt.jks

      To validate the above, you can re-deploy your OMEnvironment yaml and check the jwtkeygen job logs. You should not see any messages about it creating a new keystore within the /shared/jwtauth directory. You can also use minikube ssh and validate that your self-generated keystore file is mounted under /var/hostpath-provisioner/oms/oms-pvc as expected. You should only see your self-generated keystore file (that is, jwt.jks), and not an OMS-generated public key file (<alias-name>.pub).

      If you followed the above steps to generate your own keystore, you can also validate that the private key within your keystore has an alias of 1 and not operator, to ensure that your PVC doesn't still contain the OMS-generated keystore:

      Checking keystore contents – note the private key alias for the next steps:

      keytool -v -list -keystore jwt.jks -storepass password

      If you followed the above steps, your PK alias should be 1.

      Once OMS has access to your private key from the keystore located within /shared/jwtauth/jwt.jks, Order Service will also require the contents of your self-generated public key (public-key.pem) to be added to the secret used in your OMEnvironment yaml as jwt_oms_public_key to validate the token.

      In step iv above, we created a secret file called oms-secret.yaml. To add your JWT public key to the secret, append the following property to your secret file:

      apiVersion: v1
      kind: Secret
      metadata:
        name: 'oms-secret'
      type: Opaque
      stringData:
        [...]
        jwt_oms_public_key: <your-public-key>

      To get your public key value, you can run:

      cat public-key.pem

      Note: For your jwt_oms_public_key value, make sure to exclude the -----BEGIN PUBLIC KEY----- and -----END PUBLIC KEY----- parts from your public-key.pem file and store your public key value in one line without any line breaks.
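
      One way to produce that single-line value from the file generated earlier is shown below; it is a small shell sketch that simply strips the BEGIN/END marker lines and removes the line breaks:

      # Print the key body as a single line, without the BEGIN/END markers
      grep -v "PUBLIC KEY" public-key.pem | tr -d '\n'; echo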

      For additional details, refer to JWT properties for Order Service.

      After doing the above, you can run the following command to add the OMS secret to your deployment:

      kubectl apply -f oms-secret.yaml -n oms

      viii. Deployment: Run the following command to deploy OMS:

      kubectl apply -f om-environment.yaml -n oms

      If you are deploying OMS for the first time, expect it to take around 45-60 minutes for your pods to come up, because OMS needs extra time to perform the first-time setup triggered by setting dataManagement.mode to create.

      Subsequent deployments will be much quicker, as you can leave this property commented out or set it to upgrade.
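
      While the first deployment runs, you can monitor progress from another terminal. The commands below are generic kubectl checks, and <job-name> is a placeholder for whichever job you want to inspect:

      # Watch the OMS pods as they are created (Ctrl+C to stop)
      kubectl get pods -n oms -w

      # List the jobs created by the operator (for example, the jwtkeygen job mentioned earlier)
      kubectl get jobs -n oms

      # Follow the logs of a specific job; replace <job-name> with one from the list above
      kubectl logs job/<job-name> -n oms -f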

      Accessing Applications Post Deployment

    • Start the firewall (if not running):

      sudo systemctl start firewalld

      Add port 443 to the public zone by running the following commands:

      sudo firewall-cmd --add-port=443/tcp --zone=public --permanent
      sudo firewall-cmd --reload

      Set the capability on the kubectl binary so that it can bind to privileged port 443:

      sudo setcap 'cap_net_bind_service=+ep' $(which kubectl)

      Port-forward the NGINX ingress controller service so that the applications are reachable over HTTPS on port 443:

      kubectl port-forward --address 0.0.0.0 svc/ingress-nginx-controller -n ingress-nginx 443:443

      Revert kubectl capabilities (if needed):

      sudo setcap 'cap_net_bind_service=-ep' $(which kubectl)

      You should then be able to access the OMS applications once all of your pods come up:

    • OMS: https://server1-oms.<your-instance-domain>/smcfs/console/login.jsp

    • Order Hub: https://server1-oms.<your-instance-domain>/order-management/workspace-home/workspace/welcome
    • Order Service: https://orderservice-oms.<your-instance-domain>/default/v1/orderservice

      Validating Order Service:

      i. Request JWT token from OMS:

      curl -k --location --request GET 'https://server1-oms.<your-instance-domain>/smcfs/restapi/jwt' --header 'Authorization: Basic <base64-encoded-credentials>' --header 'Content-Type: application/xml'

      Example (using admin / password credentials):

      curl -k --location --request GET 'https://server1-oms.xxxxx.xxxxx.com/smcfs/restapi/jwt' --header 'Authorization: Basic YWRtaW46cGFzc3dvcmQ=' --header 'Content-Type: application/xml'
      
      eyJraWQiOiJvcGVyYXRvciIsImFsZyI6IlJTMjU2In0.eyJpc3MiOiJvbXMiLCJhdWQiOiJzZXJ2aWNlIiwiZXhwIjoxNzc3MDMwOTc5LCJuYmYiOjE3NDEwMzA5NzksImp0aSI6Ikk3OEVZMlpMcnVGNGFfUjg5ekJiS2ciLCJpYXQiOjE3NDEwMzA5NzksInN1YiI6ImFkbWluIiwidXNlcklEIjoiYWRtaW4iLCJncm91cHMiOlsiQ0FUQUxPR19VU0VSUyIsIkNVU1RPTUVSX1VTRVJTIiwiSU5WRU5UT1JZX1VTRVJTIiwiT0NfQURNSU5fR1JPVVAiLCJPQ19CVVNVU0VSX0dST1VQIiwiUFJJQ0lOR19VU0VSUyIsIlNZU1RFTSIsIlNZU1RFTV9TRVRVUF9VU0VSUyJdfQ.evu7PYshvLQ2L6ApHYITNyzT66ri1DMaM9V4wWSzp1rZStOa7BdA9ruj3y69VhaT4oDHUZ23Ce5yMBxjN-pacQbjisyEecsWeYaz8FAGiWXh97lwZsZMXThm6SJreuCI8xRzNzfNnkauwIryUXMcUcT5pTbVMT1ePzPP5kayFlrdcGYBQIzwuHSJ4mnpUmxm1QVbRQ2aHFVqPQxf3bYXgf5r7giPipOykEeV0sWObBesL354btl09pRAhzzmIWkN8i5-OaUbEOg0YcuQc8tg2_B2L44RKu-aBFaHUcSwKClPyKba7kZJKbL9RJPmRhxd-f4ZHLCOOsWXD6sCGTm3lLwMw4A93vvD9kJAktyBCoWcZbdUXaX6EPYV5UGL5fmOnsoFBhTzWZsIUfu9ykaeTaaOOcDaUnymFzR04J3vKHYglO73uTtYcPERlKVxQXaWylD5XWii1_yAvSkHb70EQSrPUUxmNL21DSVn9Tj__rsOvqQYM3i-5qJXArEWJoAJtrb42vGrFLOZKU6n1GBFITjrnYK1NHcFLmLVxbtpjNXUi-sL-boKuIqoaTbirbH3rmbeVJpGll8Xwlqlz4m128rHkY9pMj55d6xsVmL_I9FhM7ZGmhuNnA_QyassWigLTt0Z5NjqPPhjv6-Cdif5pejBfF8_JzzkCzOTm9zgM18

      ii. Use JWT token in Order Service Call:

      curl -k --location --request POST 'https://orderservice-oms.xxxxx.xxxxx.com/default/v1/orderservice' \
        --header 'Authorization: Bearer <your-bearer-token>' \
        --header 'Content-Type: application/json' \
        -d '{ "query": "mutation createSearchIndex ($index: SearchIndexInput) { createSearchIndex (index : $index) { id, name } }", "variables": { [...] } }'

      Example:

      curl -k --location --request POST 'https://orderservice-oms.xxxxx.xxxxx.com/default/v1/orderservice' \
        --header 'Authorization: Bearer eyJraWQiOiJvcGVyYXRvciIsImFsZyI6IlJTMjU2In0.eyJpc3MiOiJvbXMiLCJhdWQiOiJzZXJ2aWNlIiwiZXhwIjoxNzc3MDMwOTc5LCJuYmYiOjE3NDEwMzA5NzksImp0aSI6Ikk3OEVZMlpMcnVGNGFfUjg5ekJiS2ciLCJpYXQiOjE3NDEwMzA5NzksInN1YiI6ImFkbWluIiwidXNlcklEIjoiYWRtaW4iLCJncm91cHMiOlsiQ0FUQUxPR19VU0VSUyIsIkNVU1RPTUVSX1VTRVJTIiwiSU5WRU5UT1JZX1VTRVJTIiwiT0NfQURNSU5fR1JPVVAiLCJPQ19CVVNVU0VSX0dST1VQIiwiUFJJQ0lOR19VU0VSUyIsIlNZU1RFTSIsIlNZU1RFTV9TRVRVUF9VU0VSUyJdfQ.evu7PYshvLQ2L6ApHYITNyzT66ri1DMaM9V4wWSzp1rZStOa7BdA9ruj3y69VhaT4oDHUZ23Ce5yMBxjN-pacQbjisyEecsWeYaz8FAGiWXh97lwZsZMXThm6SJreuCI8xRzNzfNnkauwIryUXMcUcT5pTbVMT1ePzPP5kayFlrdcGYBQIzwuHSJ4mnpUmxm1QVbRQ2aHFVqPQxf3bYXgf5r7giPipOykEeV0sWObBesL354btl09pRAhzzmIWkN8i5-OaUbEOg0YcuQc8tg2_B2L44RKu-aBFaHUcSwKClPyKba7kZJKbL9RJPmRhxd-f4ZHLCOOsWXD6sCGTm3lLwMw4A93vvD9kJAktyBCoWcZbdUXaX6EPYV5UGL5fmOnsoFBhTzWZsIUfu9ykaeTaaOOcDaUnymFzR04J3vKHYglO73uTtYcPERlKVxQXaWylD5XWii1_yAvSkHb70EQSrPUUxmNL21DSVn9Tj__rsOvqQYM3i-5qJXArEWJoAJtrb42vGrFLOZKU6n1GBFITjrnYK1NHcFLmLVxbtpjNXUi-sL-boKuIqoaTbirbH3rmbeVJpGll8Xwlqlz4m128rHkY9pMj55d6xsVmL_I9FhM7ZGmhuNnA_QyassWigLTt0Z5NjqPPhjv6-Cdif5pejBfF8_JzzkCzOTm9zgM18' \
        --header 'Content-Type: application/json' \
        -d '{
      "query": "mutation createSearchIndex ($index: SearchIndexInput) { createSearchIndex (index : $index) { id, name } }",
      "variables": {
        "index": {
          "id": "order1",
          "mappings": {
            "key1": {
              "type": "keyword",
              "index": true,
              "store": false
            },
            "key2": {
              "type": "keyword",
              "index": true,
              "store": true
            },
            "text1": {
              "type": "text",
              "index": true,
              "store": true
            },
            "text2": {
              "type": "text",
              "index": true,
              "store": true
            },
            "value": {
              "type": "double",
              "index": true,
              "store": true
            }
          }
        }
      }
      }'
      
      {"data":{"createSearchIndex":{"id":"order","name":"order-202503"}}}

Summary

This tutorial provided step-by-step instructions on deploying IBM Order Management System and Order Service Containers by using Minikube.