
Integrating IBM Sterling OMS and SIP Containers on Minikube

Learn how to integrate IBM Sterling OMS with SIP on Minikube for development and testing

By Chiranjeevi Dasegowda, Pratap H S

This tutorial guides you through deploying the IBM Sterling Order Management System Software (OMS) and its dependent stack, PostgreSQL and ActiveMQ, using a Kubernetes operator on a local, desktop-sized machine with Minikube. It also covers integrating OMS with IBM Sterling Intelligent Promising (SIP) Certified Containers, enabling you to explore and test the system in a controlled, development-friendly environment.

Objectives

By completing this tutorial, you will:

  • Deploy a standalone OMS application using Minikube.
  • Integrate OMS with SIP to utilize its inventory visibility and promising capabilities for efficient order fulfillment.

Note: This guide is intended for development and testing purposes only. For production deployment, consult the official product documentation.

Introduction to IBM Sterling OMS

IBM Sterling Order Management System (OMS) plays a vital role in supply chain and commerce operations for large enterprises globally. It empowers B2C and B2B organizations by providing a robust platform designed for innovation, differentiation, and omni-channel management.

In today’s fast-paced environment, automation and rapid deployment are essential. Technologies such as Docker and Kubernetes bring significant value by simplifying and accelerating deployment processes.

The OMS Certified Containers, available in Professional and Enterprise editions on the Operator Catalog, further enhance this experience. Using the OMS Operator, organizations can streamline enterprise application management across diverse cloud platforms with ease and efficiency.

Introduction to Intelligent Promising (SIP)

IBM Sterling Intelligent Promising (SIP) is a solution designed to help retailers efficiently manage inventory and delivery commitments. By tracking stock and intelligently managing order fulfillment, SIP ensures streamlined operations and enhanced customer satisfaction.

SIP offers the following modular services:

  • Inventory visibility: Tracks stock across multiple locations, ensuring accurate and up-to-date availability.

  • Promising service: Provides realistic delivery dates based on shipping methods, costs, and inventory.

  • Catalog Service: Manages product catalog data within the SIP configuration for seamless integration.

  • Carrier Service: Optimizes shipping options using advanced algorithms for faster and cost-effective fulfillment.

  • Optimizer Service: Identifies cost-efficient fulfillment strategies in coordination with OMS, aligning with business priorities.

These core modules are further supported by microservices such as rules and common services, making SIP highly adaptable to various business requirements and workflows.

Deployment strategy: Integrating OMS with SIP using the OMS Operator

To integrate IBM Sterling OMS with SIP, ensure that you have set up a SIP Container instance as outlined in the Deploying IBM Sterling Intelligent Promising Containers on Minikube tutorial on IBM Developer. This setup is a prerequisite for the integration.

Integration phases

This tutorial focuses on Phase 2 of the OMS-SIP integration. For details on Phase 1, refer to the product documentation.

OMS Standard Operator

The OMS Standard Operator simplifies containerized deployments by adhering to Kubernetes best practices. It manages applications and components through custom resources, particularly the OMEnvironment resource. This resource allows you to configure:

  • Application images
  • Storage options
  • PostgreSQL and ActiveMQ dependencies
  • Network policies
  • Other essential settings

With these configurations, the operator facilitates the deployment of a fully functional OMS environment.

Post-integration setup

Once OMS is integrated with SIP:

  • Inventory management: SIP becomes the primary inventory management system.
  • Data handling: Supply, demand, and reservation data are stored directly in SIP instead of OMS.

Key points for Intelligent Promising (SIP) integration

  1. Data management: Demand and supply data are stored directly in SIP, bypassing OMS.
  2. Sourcing functionality: Activating SIP disables OMS’s native smart sourcing functionality.
  3. Master of inventory and capacity:
    • SIP acts as the inventory master.
    • OMS retains its role as the master of capacity. Any availability calls requiring capacity calculations must be routed through OMS APIs.
  4. Security: Ensure secure communication by enabling TLS version 1.2 for outbound data from OMS to SIP.

Note: For detailed configuration and production guidance, refer to the product documentation.

Development and testing with Minikube

Minikube provides a minimal Kubernetes (K8s) cluster with a Docker container runtime, ideal for local development and testing. It is specifically designed for deployment on developers’ desktops.

The sample configuration in this tutorial demonstrates how to deploy OMS as a standalone application for Proof of Concept (POC) purposes.

For production deployment, refer to the official product documentation.

Prerequisites

Hardware requirements

  • 100 GB+ of storage
  • 24 GB+ of memory (preferably 32+)
  • 8 available virtual CPUs (preferably 16)

Stack used for demonstration purposes

  • OS version: Red Hat Enterprise Linux release 8.9 (Ootpa)
  • minikube version: v1.32.0
  • OMS Operator version: 1.0.19; OMS image tag: 10.0.2409.0-amd64

Estimated time

This tutorial should take a few hours to complete.

Deployment Steps

Step 1. Installing Minikube

  1. Create a non-root user

    a. Create a non-root user and grant Sudo permissions

    sudo useradd -m -s /bin/bash supportuser
     sudo passwd supportuser
     sudo usermod -aG wheel supportuser

    b. Switch to the non-root user

    su - supportuser
     sudo groupadd docker
     sudo usermod -aG docker $USER && newgrp docker
  2. Install dependent packages

    a. Install kubectl - Kubectl is the essential command-line tool used for interacting with Kubernetes clusters. Here's how to install it:

    Update your package manager's repository information

    sudo yum update

    Download and install kubectl

    curl -LO "https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubectl"
     chmod +x ./kubectl
     sudo mv ./kubectl /usr/local/bin/kubectl

    Verify the installation by checking the version

    kubectl version --client

    b. Install Minikube - Minikube is a tool that allows you to run a Kubernetes cluster locally. Here's how to install it:

    Download and install Minikube

    curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
         sudo install minikube-linux-amd64 /usr/local/bin/minikube
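     Optionally, confirm that the Minikube binary is installed and on your PATH by checking its version:

     minikube version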

    c. Install Docker and dependent packages

    Install conntrack

    Conntrack is a utility used to view and manipulate the network connection tracking table in the Linux kernel, which is essential for Kubernetes. Install it with the following command:

    sudo yum install conntrack

    Install crictl

    Crictl is a command-line interface for the Container Runtime Interface (CRI). To install it, follow these steps:

    • Determine the latest version of crictl on the GitHub releases page.
    • Download and install crictl (replace $VERSION with the latest version):

      export VERSION="v1.26.0"
        curl -L https://github.com/kubernetes-sigs/cri-tools/releases/download/$VERSION/crictl-${VERSION}-linux-amd64.tar.gz --output crictl-${VERSION}-linux-amd64.tar.gz
        sudo tar zxvf crictl-$VERSION-linux-amd64.tar.gz -C /usr/local/bin
        rm -f crictl-$VERSION-linux-amd64.tar.gz

      Note: Remove any conflicting versions of runc and re-install:

      rpm -qa | grep runc
      sudo yum remove <output of above>

      For example: sudo yum remove runc-1.1.12-1.module+el8.9.0+21243+a586538b.x86_64 and then sudo yum install runc

      Install socat, a utility for multiplexing network connections

      sudo yum install socat

      Install cri-dockerd by downloading the latest RPM for your OS and installing it:

      Note: Run this command to install libcgroup only if you are on RHEL version 8.x. You can skip if you are on 9.x.

      sudo yum install libcgroup
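      With libcgroup in place, download and install the cri-dockerd RPM. The following is only a sketch that assumes RHEL 8 on x86_64 and uses an example version; check the Mirantis cri-dockerd releases page on GitHub for the latest version and the exact RPM file name for your OS:

      CRI_DOCKERD_VERSION="0.3.4"
      curl -LO "https://github.com/Mirantis/cri-dockerd/releases/download/v${CRI_DOCKERD_VERSION}/cri-dockerd-${CRI_DOCKERD_VERSION}-3.el8.x86_64.rpm"
      sudo yum install -y ./cri-dockerd-${CRI_DOCKERD_VERSION}-3.el8.x86_64.rpm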

      Install the Container Networking Interface (CNI) plugins:

      Find the latest version at https://github.com/containernetworking/plugins/releases

      CNI_PLUGIN_VERSION="v1.3.0"
        CNI_PLUGIN_TAR="cni-plugins-linux-amd64-$CNI_PLUGIN_VERSION.tgz"
        CNI_PLUGIN_INSTALL_DIR="/opt/cni/bin"
        curl -LO "https://github.com/containernetworking/plugins/releases/download/$CNI_PLUGIN_VERSION/$CNI_PLUGIN_TAR"
        sudo mkdir -p "$CNI_PLUGIN_INSTALL_DIR"
        sudo tar -xf "$CNI_PLUGIN_TAR" -C "$CNI_PLUGIN_INSTALL_DIR"
        rm "$CNI_PLUGIN_TAR"

      Install Docker

      Docker is required for container runtime support. Use the following commands to install Docker on your system:

      Install required utilities and add the Docker repository:

      sudo yum install -y yum-utils
        sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

      Install Docker and related packages:

      sudo yum install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

      Start Docker

      sudo systemctl start docker
        sudo systemctl status docker
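      Optionally, enable Docker to start on boot and confirm that your non-root user can run containers:

      sudo systemctl enable docker
      docker run hello-world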

      d. Start Minikube

      Now that you have installed all the necessary components, you can start Minikube:

      minikube start --driver=docker --cpus=<> --memory=<> --disk-size=<> --addons=metrics-server,dashboard,ingress

      For example:

      minikube start --driver=docker --cpus=14 --memory=56000 --disk-size=50g --addons=metrics-server,dashboard,ingress

      Validate the installation:

      `minikube status`
      > minikube
            type: Control Plane
            host: Running
            kubelet: Running
            apiserver: Running
            kubeconfig: Configured
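      You can also confirm that the single-node cluster is ready:

      kubectl get nodes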

Step 2. Accessing Minikube dashboard remotely

The Minikube dashboard is a powerful web-based interface that provides insights into the state of your Minikube cluster. As a user-friendly graphical user interface (GUI), it offers various functionalities for managing Kubernetes resources. Here's what you can do using the Minikube dashboard:

  • Overview of Cluster Resources: The dashboard provides an at-a-glance overview of your Minikube cluster's nodes, pods, services, and more. This makes it easy to monitor the overall health of your cluster and quickly identify any issues.

  • Managing Deployments: You can create, scale, and manage deployments directly from the dashboard. This simplifies the process of launching applications and ensures they are running optimally.

  • Inspecting Pods and Containers: The dashboard lets you explore the details of pods, containers, and their associated logs. This is particularly valuable for debugging issues and analyzing application behavior.

  • Services and Ingress Management: Manage services and expose them via LoadBalancer, NodePort, or ClusterIP. Additionally, you can configure and manage Ingress resources to control external access to services.

  • ConfigMaps and Secrets: Create and manage ConfigMaps and Secrets, which store configuration data and sensitive information separately from application code.

  • Event Tracking: Stay informed about events in your cluster. The dashboard displays events related to pods, deployments, services, and other resources, aiding in identifying problems.

  • Cluster and Namespace Switching: If you're working with multiple clusters or namespaces, the dashboard allows you to seamlessly switch between them, streamlining management tasks.

  • Pod Terminal Access: With a single click, you can access a terminal directly within a pod's container. This is invaluable for debugging and troubleshooting.

Let's explore how to access the Minikube dashboard remotely and manage Kubernetes resources with ease:

  1. Install the NetworkManager service

    sudo yum install NetworkManager
  2. Start the NetworkManager service to manage network connections:

    sudo systemctl start NetworkManager
  3. Allow access to the Minikube dashboard port (8001/tcp) through the firewall:

    sudo systemctl start firewalld
     sudo firewall-cmd --add-port=8001/tcp --zone=public --permanent
     sudo firewall-cmd --reload
  4. From the Minikube server, get the URL to access the Minikube dashboard:

    minikube dashboard --url
  5. Access Minikube dashboard remotely:

    Establish another terminal connection to the minikube server and start a proxy server that listens on all network interfaces:

    minikube kubectl -- proxy --address='0.0.0.0' --disable-filter=true

    Access the dashboard using the URL provided earlier but replace the IP address with the public IP of the Minikube host.

    The URL should resemble http://<Minikube_Public_IP>:8001/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/

    Additional troubleshooting for Minikube dashboard access:

    If you encounter an inaccessible Minikube Dashboard URL and notice that the dashboard pods are in a crash loop backoff (kubectl get pods -n kubernetes-dashboard), consider the following step to resolve the issue:

    Restart Docker: If Docker-related errors such as networking or iptables issues are observed, restarting the Docker service can help. Use the command sudo systemctl restart docker. This action can reset Docker's networking components and often resolves connectivity and configuration issues impacting pod operations in Minikube.

Step 3. Installing the Operator SDK CLI and OLM

  1. Download and install the Operator SDK CLI

    RELEASE_VERSION=$(curl -s https://api.github.com/repos/operator-framework/operator-sdk/releases/latest | grep tag_name | cut -d '"' -f 4)
     sudo curl -LO "https://github.com/operator-framework/operator-sdk/releases/download/${RELEASE_VERSION}/operator-sdk_linux_amd64"
     sudo chmod +x operator-sdk_linux_amd64
     sudo mv operator-sdk_linux_amd64 /usr/local/bin/operator-sdk
  2. Install OLM

    Run: operator-sdk olm install --version=latest

    Output should be similar to the following indicating successful install:

    > INFO[0160]   Deployment `olm/packageserver` successfully rolled out
    > INFO[0161] Successfully installed OLM version `latest`
    NAME                                            NAMESPACE    KIND                        STATUS
     catalogsources.operators.coreos.com                          CustomResourceDefinition    Installed
     clusterserviceversions.operators.coreos.com                  CustomResourceDefinition    Installed
     installplans.operators.coreos.com                            CustomResourceDefinition    Installed
     olmconfigs.operators.coreos.com                              CustomResourceDefinition    Installed
     operatorconditions.operators.coreos.com                      CustomResourceDefinition    Installed
     operatorgroups.operators.coreos.com                          CustomResourceDefinition    Installed
     operators.operators.coreos.com                               CustomResourceDefinition    Installed
     subscriptions.operators.coreos.com                           CustomResourceDefinition    Installed
     olm                                                          Namespace                   Installed
     operators                                                    Namespace                   Installed
     olm-operator-serviceaccount                     olm          ServiceAccount              Installed
     system:controller:operator-lifecycle-manager                 ClusterRole                 Installed
     olm-operator-binding-olm                                     ClusterRoleBinding          Installed
     cluster                                                      OLMConfig                   Installed
     olm-operator                                    olm          Deployment                  Installed
     catalog-operator                                olm          Deployment                  Installed
     aggregate-olm-edit                                           ClusterRole                 Installed
     aggregate-olm-view                                           ClusterRole                 Installed
     global-operators                                operators    OperatorGroup               Installed
     olm-operators                                   olm          OperatorGroup               Installed
     packageserver                                   olm          ClusterServiceVersion       Installed
     operatorhubio-catalog                           olm          CatalogSource               Installed

     Note: If the OLM install fails for any reason, uninstall the previous version and then re-install.

    To resolve this issue and perform a clean installation of OLM, you can follow these steps:

    i. You need to uninstall the existing OLM resources from your Kubernetes cluster. To do this, you can use the kubectl command. Here is a general approach to uninstall OLM:

    operator-sdk olm uninstall --version=latest
     kubectl delete crd olmconfigs.operators.coreos.com
     kubectl delete clusterrole aggregate-olm-edit
     kubectl delete clusterrole aggregate-olm-view
     kubectl delete clusterrolebinding olm-operator-binding-olm
     kubectl delete clusterrole system:controller:operator-lifecycle-manager
     kubectl delete -n kube-system rolebinding packageserver-service-auth-reader
     kubectl delete -n operators serviceaccount default

     The preceding commands delete the OLM-related cluster-scoped and namespaced resources that the uninstall can leave behind.

    ii. After running the commands to delete OLM resources, verify that there are no remaining OLM resources in your cluster:

    kubectl get subscriptions.operators.coreos.com
     kubectl get catalogsources.operators.coreos.com
     kubectl get operatorgroups.operators.coreos.com
     kubectl get clusterserviceversions.operators.coreos.com

    If these commands return empty lists, it means that OLM has been successfully uninstalled.

     iii. After ensuring that OLM is uninstalled, you can proceed with the installation of the desired OLM version. Refer to step 2 above to re-install OLM.

  3. After installing OLM, you can verify its installation by checking its resources: kubectl get crd -n olm

    NAME                                             CREATED AT
     catalogsources.operators.coreos.com              2023-10-25T00:55:49Z
     clusterserviceversions.operators.coreos.com      2023-10-25T00:55:49Z
     installplans.operators.coreos.com                2023-10-25T00:55:49Z
     olmconfigs.operators.coreos.com                  2023-10-25T00:55:49Z
     operatorconditions.operators.coreos.com          2023-10-25T00:55:49Z
     operatorgroups.operators.coreos.com              2023-10-25T00:55:49Z
     operators.operators.coreos.com                   2023-10-25T00:55:49Z
     subscriptions.operators.coreos.com               2023-10-25T00:55:49Z

    You should see the new OLM resources related to the version you installed.

    By following these steps, you should be able to uninstall existing OLM resources and perform a clean installation of the desired OLM version in your Kubernetes cluster. Be sure to refer to the specific documentation or instructions for the OLM version you are working with for any version-specific installation steps or considerations.

  4. Overwriting PodSecurityStandards (PSS)

    Kubernetes has an equivalent of SecurityContextConstraints (of OpenShift) called PodSecurityStandards (PSS) that enforces different profiles (privileged, baseline, and restricted) at a namespace level. When a restricted profile is defaulted on a namespace, pod spec is enforced to contain the securityContext.seccompProfile.type field with a valid value. In this case, the Operator installation fails because the namespace (olm) has restricted PSS, but the Operator controller deployment does not have the field.

    To overcome this, switch to baseline PSS that does not enforce the securityContext.seccompProfile.type field, by using the following command:

    kubectl label --overwrite ns olm pod-security.kubernetes.io/enforce=baseline
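     You can confirm that the label was applied by viewing the namespace labels:

     kubectl get ns olm --show-labels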
  5. Delete the out-of-the-box (OOB) OLM CatalogSource:

    kubectl delete catalogsource operatorhubio-catalog -n olm
     > catalogsource.operators.coreos.com "operatorhubio-catalog" deleted

Step 4. Creating IBM Entitlement Key Secret

An image pull secret named ibm-entitlement-key must be created with the IBM entitlement registry credentials in the namespace (project) where you are configuring OMEnvironment. For more information, see the corresponding documentation.

  1. Go to https://myibm.ibm.com/products-services/containerlibrary and copy your entitlement key.

  2. Export the entitlement key and namespace variables.

    export ENTITLEDKEY="<Entitlement Key from MyIBM>"
     export NAMESPACE="<project or namespace name for OMS deployment>"
  3. Create ibm-entitlement-key under the namespace where you will be deploying OMS by running the following command.

    kubectl create secret docker-registry ibm-entitlement-key   \
         --docker-server=cp.icr.io                   \
         --docker-username=cp                        \
         --docker-password=${ENTITLEDKEY}            \
         --namespace=${NAMESPACE}

    Note: The Operator is from open registry. However, most container images are commercial. Contact your IT or Enterprise Administrator to get access to the entitlement key.
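     Optionally, verify the entitlement key and the secret. You can test the key by logging in to the IBM entitled registry with Docker, and confirm that the secret exists in the target namespace:

     docker login cp.icr.io --username cp --password ${ENTITLEDKEY}
     kubectl get secret ibm-entitlement-key -n ${NAMESPACE}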

Step 5. Installing OMS for SIP Integration

  1. Preparing and configuring the IBM Sterling Order Management Operator.

    a. Create a CatalogSource YAML named catalog_source_oms.yaml. A CatalogSource is a repository in Kubernetes that houses information about available Operators, which can be installed on the cluster. It acts as a marketplace for Operators, allowing users to browse and select from a variety of packaged Kubernetes applications.

    apiVersion: operators.coreos.com/v1alpha1
     kind: CatalogSource
     metadata:
       name: ibm-oms-catalog
       namespace: olm
     spec:
       displayName: IBM OMS Operator Catalog
       # For the image name, see the following catalog source image names table and use the appropriate value.
       image: 'icr.io/cpopen/ibm-oms-ent-case-catalog:v1.0.19-10.0.2409.0'
       publisher: IBM
       sourceType: grpc
       updateStrategy:
         registryPoll:
           interval: 10m

    b. Run the following command to create the CatalogSource:

    `kubectl create -f catalog_source_oms.yaml -n olm`

    Confirm that the CatalogSource is successfully created by running:

    `kubectl get catalogsource,pods -n olm`
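     Once the catalog pod is running, you can also check that the OMS Operator package is visible to OLM (the package name ibm-oms-ent matches the Subscription created later in this step):

     kubectl get packagemanifests -n olm | grep ibm-oms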

    Before you proceed:

    Switch to the desired namespace where you want to deploy the resources. Use the following command to set the namespace:

    `kubectl config set-context --current --namespace=<your-namespace>`

    In the preceding command, replace <your-namespace> with the name of your namespace.

    Note: For demonstration purposes, this tutorial uses the oms namespace. You can deploy resources in the oms namespace or modify the YAML files to target a different namespace of your choice.

    Recommendation: If deploying both IBM Sterling Order Management (OMS) containers and SIP on the same Kubernetes cluster, consider using separate namespaces to better organize and group the resources.

  2. An OperatorGroup in Kubernetes defines the namespaces where a specific Operator will be active. It determines the scope of the Operator’s capabilities, specifying which namespaces it can observe and manage. This feature supports multi-tenancy and access control within the cluster.

    Key considerations for Operator Groups

    • Ensure that only one OperatorGroup exists per namespace to avoid conflicts.
    • Multiple OperatorGroups in the same namespace can lead to deployment issues and hinder proper functioning of the Operators.
    • This constraint helps maintain clear and controlled access to resources within the namespace.

      a. Create an Operator Group YAML file named oms-operator-group.yaml.

      b. Run the following command to create the OperatorGroup: kubectl create -f oms-operator-group.yaml

       apiVersion: operators.coreos.com/v1
       kind: OperatorGroup
       metadata:
         name: oms-operator-global
         namespace: oms
       spec: {}
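       You can confirm that exactly one OperatorGroup exists in the target namespace:

       kubectl get operatorgroups -n oms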
  3. Create Subscription. In Kubernetes, a Subscription is used to track desired Operators from a CatalogSource and manage their updates. It ensures that an Operator is kept within a specified version range, automating the process of installation and upgrades as new versions become available.

    a. Create a Subscription YAML file named oms_subscription.yaml.

    b. Run kubectl create -f oms_subscription.yaml

    apiVersion: operators.coreos.com/v1alpha1
     kind: Subscription
     metadata:
       name: ibm-oms-ent-sub
       namespace: oms
     spec:
       channel: v1.0
       installPlanApproval: Automatic
       name: ibm-oms-ent
       source: ibm-oms-catalog
       sourceNamespace: olm
  4. Validate the configuration by running the following commands:

    • kubectl get sub ibm-oms-ent-sub -o jsonpath="{.status.installedCSV}" && echo


    • kubectl get pods -n oms


  5. Installing OMS application

    a. Create a Kubernetes Persistent Volume Claim (PVC) with the ReadWriteMany access mode and a minimum storage requirement of 10 GB. This PVC should use the standard storage class provided by Minikube, which dynamically provisions a Persistent Volume (PV) during deployment. Ensure that this storage is accessible by all containers across the cluster. The owner group of the directory where the PV is mounted must have write access, and the owner group ID should be specified in the spec.storage.securityContext.fsGroup parameter of the OMEnvironment custom resource. This PV is crucial for storing data from middleware services such as PostgreSQL and ActiveMQ when deploying OMS in development mode. Additionally, the PV stores the JWT secret that OMS generates and also stores the truststore, allowing you to add trusted certificates.

    kind: PersistentVolumeClaim
     apiVersion: v1
     metadata:
       name: oms-pvc
     spec:
       accessModes:
         - ReadWriteMany
       resources:
         requests:
           storage: 40Gi
       volumeName: oms-pv
       storageClassName: "standard"

    b. Save the preceding PVC definition as oms-pvc.yaml and create the PVC by running the following command:

    `kubectl create -f oms-pvc.yaml -n oms`
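     Confirm that the claim is created and bound before continuing:

     kubectl get pvc oms-pvc -n oms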

    c. Configure PostgreSQL and ActiveMQ. The IBM Sterling OMS Operator can automatically install the required middleware, such as PostgreSQL and ActiveMQ, for development purposes. Note that these middleware instances are for development purposes only.

     devInstances:
       profile: ProfileColossal
       postgresql:
         repository: docker.io
         tag: '16.1'
         name: postgres
         user: postgres
         password: postgres
         database: postgres
         schema: postgres
         wipeData: true
         profile: ProfileColossal
         # storage:
         #   name: <omsenvironment-operator-pv-oms-test>
         # timezone: <Timezone>

       activemq:
         repository: docker.io
         tag: 6.1.0
         name: apache/activemq-classic
         profile: ProfileColossal
         # storage:
         #   name: <omsenvironment-operator-pv-oms-test>
         # timezone: <Timezone>

    d. Configure customer_overrides properties:

    • Set the following properties in the customer_overrides properties file to enable the integration with SIP:

      iv_integration.tenantId: default
       iv_integration.clientId: DEFAULT
       iv_integration.secret: DEFAULT
       iv_integration.baseUrl: https://<SIP_HOSTNAME>/inventory
       iv_integration.authentication.mode: JWT
       iv_integration.IVApiVersion: v2
       iv_integration.nodeAvailability.apiUrl: /v2/availability/node/
       iv_integration.networkAvailability.cached.apiUrl: /v2/availability/network/
       iv_integration.nodeAvailability.cached.apiUrl: /v2/availability/node/
       iv_integration.reservations.apiUrl: /v2/reservations/
    • Set the following properties in the customer_overrides properties file to facilitate agent and integration server communication with ActiveMQ:

      yfs.iv.integration.icf: org.apache.activemq.jndi.ActiveMQInitialContextFactory
       yfs.iv.integration.supply.providerurl: tcp://<replace_active_mq_host>:61616?jms.prefetchPolicy.all=0
       yfs.iv.integration.sendsupplyupdates.event.queue: dynamicQueues/DEV.QUEUE.1
       yfs.iv.integration.supply.qcf: ConnectionFactory
       yfs.iv.integration.demand.providerurl: tcp://<replace_active_mq_host>:61616?jms.prefetchPolicy.all=0
       yfs.iv.integration.senddemandupdates.event.queue: dynamicQueues/DEV.QUEUE.2
       yfs.iv.integration.demand.qcf: ConnectionFactory

      Note: To find the ActiveMQ hostname, list the services in the namespace where you are installing OMS, locate the oms-activemq service, and replace <replace_active_mq_host> with its hostname.
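       For example, with the dev ActiveMQ instance running in the oms namespace, you can list the service as shown below; its in-cluster hostname takes the form oms-activemq.oms.svc.cluster.local, which is also used in the full OMEnvironment example later in this tutorial:

       kubectl get svc -n oms | grep activemq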

       e. Configure OMS to generate the JWT token by using its own private-public key pair. IBM OMS can generate the JWT token by using its own private-public certificate key pair for development use cases.

       common:
         jwt:
           algorithm: RS256
           audience: service
           issuer: oms

      Notes:

      • The private key is imported to the keystore and public key is copied to sharedCertificates in Persistent Volume. For example, <sharedDirectory/jwtauth/operator.pub>. Users can configure this public key in OMS Gateway as explained in the Creating a JWT issuer secret by using a public key topic.

       f. Create a secret that holds sensitive information used when creating the OMEnvironment through the Sterling Order Management System Software Operator.

       i. Create a secret definition named oms-secret.yaml.

       ii. Run kubectl create -f oms-secret.yaml -n oms.

      apiVersion: v1
       kind: Secret
       metadata:
         name: 'oms-secret'
       type: Opaque
       stringData:
         consoleAdminPassword: 'password'
         consoleNonAdminPassword: 'password'
         dbPassword: 'postgres'
         trustStorePassword: 'changeit'
         keyStorePassword: 'changeit'
         ivSecret: 'DEFAULT'

      g. Configure the integration servers in OMEnvironment as shown so that supply, demand, and reservation data is directly updated to Sterling Intelligent Promising, and not stored in Sterling Order Management System Software.

      - name: "integration"
      replicaCount: 1
      profile: ProfileHuge
      property:
       customerOverrides: IV_props
       jvmArgs: IVJVMArgs
      integration:
       names: [ IV_ADJUST_IS, IV_ADJUST_ID ]
       readinessFailRestartAfterMinutes: 10
       terminationGracePeriodSeconds: 60

       h. Configure the truststore of OMS to include the certificate of SIP. This ensures that OMS can establish a trusted and secure connection when initiating calls to SIP. To facilitate this, add the SIP certificate to the trustedCerts directory in the shared path configured in the PVC in step a of this section. Copy SIP's tls.crt, which is part of the sip-operator-ca secret, into that path on the PVC, and point OMS at the directory as shown in the following configuration:

      security:
       ssl:
         trust:
           trustedCertDir: /shared/certs/trustedCerts
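       The following is a minimal sketch of copying the SIP certificate into the shared volume. It assumes SIP is deployed in a namespace named sip (as in the SIP tutorial) and that <pod-with-oms-pvc> is any pod that mounts the oms-pvc volume at /shared; adjust the names and paths to your environment:

       kubectl get secret sip-operator-ca -n sip -o jsonpath='{.data.tls\.crt}' | base64 -d > sip-tls.crt
       kubectl exec -n oms <pod-with-oms-pvc> -- mkdir -p /shared/certs/trustedCerts
       kubectl cp sip-tls.crt oms/<pod-with-oms-pvc>:/shared/certs/trustedCerts/sip-tls.crt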

       i. Configure the following annotations in OMEnvironment to facilitate the use of middleware services such as PostgreSQL and ActiveMQ, enable SIP integration, utilize the reference implementation data, and control JWT token regeneration.

  • apps.oms.ibm.com/activate-iv-integration = yes: Setting this annotation to 'yes' enables OMS and SIP integration.
  • apps.oms.ibm.com/dbvendor-auto-transform = yes: Setting this annotation to 'yes' enables OMS to automatically transform the pod properties to connect to the PostgreSQL database specified in the parameters.
  • apps.oms.ibm.com/dbvendor-install-driver = yes: Setting this annotation to 'yes' enables OMS to install the database driver for PostgreSQL.
  • apps.oms.ibm.com/refimpl-install = yes: Setting this annotation to 'yes' installs the reference implementation data, for development purposes only.
  • apps.oms.ibm.com/refimpl-type = oms: Setting this annotation to 'oms' installs the OMS reference implementation data.
  • apps.oms.ibm.com/regenerate-jwt-privatekey = yes: Setting this annotation to 'yes' generates the JWT private key in the shared volume.

Step 6. Deploy OMS using Operators

  1. Create an OMEnvironment YAML named oms-env.yaml.

    Note: Ensure that you have internet access before starting the K8s operator deployment for the OMS application. This deployment requires downloading a list of images. If the images are not downloaded, the deployment will fail. Alternatively, you can download these images in advance, push them to your local registry, and then perform the deployment by referring to your local registry. Required images follow:

    • docker.io/postgres:16.1
    • docker.io/apache/activemq-classic:6.1.0
  2. Run kubectl create -f oms-env.yaml

    apiVersion: apps.oms.ibm.com/v1beta1
     kind: OMEnvironment
     metadata:
       name: oms
       namespace: oms
       annotations:
         apps.oms.ibm.com/activate-iv-integration: 'yes'
         apps.oms.ibm.com/activemq-install-driver: 'yes'
         apps.oms.ibm.com/dbvendor-auto-transform: 'yes'
         apps.oms.ibm.com/dbvendor-install-driver: 'yes'
         apps.oms.ibm.com/refimpl-install: 'yes'
         apps.oms.ibm.com/refimpl-type: 'oms'
         apps.oms.ibm.com/regenerate-jwt-privatekey: 'yes'
     spec:
       license:
         accept: true
         acceptCallCenterStore: true
       secret: oms-secret
       image:
         imagePullSecrets:
           - name: ibm-entitlement-key
         oms:
           repository: cp.icr.io/cp/ibm-oms-professional
           tag: 10.0.2409.0-amd64
         orderHub:
           base:
             repository: cp.icr.io/cp/ibm-oms-professional
             tag: 10.0.2409.0-amd64
           extn:
             repository: cp.icr.io/cp/ibm-oms-professional
             tag: 10.0.2409.0-amd64
         callCenter:
           base:
             repository: cp.icr.io/cp/ibm-oms-professional
             tag: 10.0.2409.0-amd64
           extn:
              repository: cp.icr.io/cp/ibm-oms-professional
             tag: 10.0.2409.0-amd64
       dataManagement:
         mode: create
         property:
           customerOverrides: IV_props
       storage:
         name: oms-pvc
         storageClassName: standard
    
       database:
         postgresql:
           name: postgres
           host: oms-postgresql.oms.svc.cluster.local
           port: 5432
           user: postgres
           schema: postgres
           secure: false
           dataSourceName: jdbc/OMDS
       devInstances:
         profile: ProfileColossal
         postgresql:
           repository: docker.io
           tag: '16.1'
           name: postgres
           user: postgres
           password: postgres
           database: postgres
           schema: postgres
           wipeData: true
           profile: ProfileColossal
         activemq:
           repository: docker.io
           tag: 6.1.0
           name: apache/activemq-classic
           profile: ProfileColossal
       networkPolicy:
         podSelector:
            matchLabels:
              none: none
         policyTypes:
           - Ingress
         ingress: []
       common:
         ingress:
           host: replace_with_hostname_of_the_linux_vm
         jwt:
           algorithm: RS256
           audience: service
           issuer: oms
       servers:
         - name: "smcfs"
           replicaCount: 1
           profile: ProfileHuge
           property:
             customerOverrides: IV_props
             jvmArgs: IVJVMArgs
           appServer:
             serverName: DefaultAppServer
             vendor: websphere
             vendorFile: servers.properties
             libertyServerXml: default-server-xml
             ingress:
               labels: {}
               annotations: {}
               contextRoots: [ smcfs, sbc, sma, isccs, wsc, adminCenter, icc, isf ]
             dataSource:
               maxPoolSize: 50
               minPoolSize: 10
             threads:
               max: 100
               min: 20
         - name: "integration"
           replicaCount: 1
           profile: ProfileHuge
           property:
             customerOverrides: IV_props
             jvmArgs: IVJVMArgs
            integration:
              names: [ IV_ADJUST_IS, IV_ADJUST_ID ]
              readinessFailRestartAfterMinutes: 10
              terminationGracePeriodSeconds: 60
       serverProfiles:
         - name: ProfileHuge
           resources:
             requests:
               cpu: '1'
               memory: 4Gi
             limits:
               cpu: '3'
               memory: 10Gi
       orderHub:
         bindingAppServerName: 'smcfs'
         base:
           replicaCount: 1
         extn:
           replicaCount: 1
       callCenter:
         bindingAppServerName: 'smcfs'
         base:
           replicaCount: 1
         extn:
           replicaCount: 1
       serverProperties:
         customerOverrides:
         - groupName: IV_props
           propertyList:
             iv_integration.tenantId: default
             iv_integration.clientId: DEFAULT
             iv_integration.secret: DEFAULT
             iv_integration.baseUrl: https://replace_SIP_Instance_URL/inventory
             iv_integration.authentication.mode: JWT
             iv_integration.IVApiVersion: v2
             iv_integration.nodeAvailability.apiUrl: /v2/availability/node/
             iv_integration.networkAvailability.cached.apiUrl: /v2/availability/network/
             iv_integration.nodeAvailability.cached.apiUrl: /v2/availability/node/
             iv_integration.reservations.apiUrl: /v2/reservations/
             yfs.iv.integration.icf: org.apache.activemq.jndi.ActiveMQInitialContextFactory
             yfs.iv.integration.supply.providerurl: tcp://oms-activemq.oms.svc.cluster.local:61616?jms.prefetchPolicy.all=0
             yfs.iv.integration.sendsupplyupdates.event.queue: dynamicQueues/DEV.QUEUE.1
             yfs.iv.integration.supply.qcf: ConnectionFactory
             yfs.iv.integration.demand.providerurl: tcp://oms-activemq.oms.svc.cluster.local:61616?jms.prefetchPolicy.all=0
             yfs.iv.integration.senddemandupdates.event.queue: dynamicQueues/DEV.QUEUE.2
             yfs.iv.integration.demand.qcf: ConnectionFactory
         jvmArgs:
         - groupName: IVJVMArgs
           propertyList:
             - -Dhttps.protocols=TLSv1.2
             - -Dcom.ibm.jsse2.overrideDefaultTLS=true
       healthMonitor:
         replicaCount: 1
         profile: "balanced"
         property: {}
       security:
         ssl:
            trust:
              trustedCertDir: /shared/certs/trustedCerts
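     After creating the OMEnvironment, you can monitor the rollout while the Operator pulls images and starts the pods; the initial deployment can take a while:

     kubectl get omenvironments.apps.oms.ibm.com -n oms
     kubectl get pods -n oms -w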

    Note: If you encounter rate-limit issues while the OMS deployment attempts to pull images for external services such as PostgreSQL and ActiveMQ, refer to the technical note for strategies to address and bypass these issues.

Step 7. Validating the instance post deployment

  1. Deployment validation:

    a. Run command kubectl describe omenvironments.apps.oms.ibm.com to validate the deployment status.

    b. Status of the preceding command should show: OMEnvironmentAvailable.

    Notes:

    • If the status remains as InProgress for an extended period after deployment, there may be issues with the deployment process. It is advisable to review the OMS controller manager PODs for any errors.
    • All PODs should be up and running when you execute the command kubectl get pods. If you encounter any errors or notice PODs in an error state, use the command kubectl logs -f <podname> or kubectl describe pod <podname> to view the error details.
  2. Access Application URLs through port forwarding:

    a. Start the firewall (if not running)

    `sudo systemctl start firewalld`

    b. Add a couple of ports to the public zone. Run these commands:

    sudo firewall-cmd --add-port=9443/tcp --zone=public --permanent
    sudo firewall-cmd --add-port=8080/tcp --zone=public --permanent
    sudo firewall-cmd --reload

    c. Use Kubernetes to forward the required ports. The command format follows:

    kubectl port-forward --address 0.0.0.0 svc/ingress-nginx-controller -n ingress-nginx 9443:443
    • You can now access the SMCFS application via the RHEL server URL:

      https://smcfs-<namespace>.<replace_with_hostname_of_the_linux_vm>:9443/smcfs/console/login.jsp
    • You can also access the Order hub application via the RHEL server URL:

      https://smcfs-<namespace>.<replace_with_hostname_of_the_linux_vm>:9443/smcfs/order-management
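     Note: The ingress host names above (for example, smcfs-oms.<replace_with_hostname_of_the_linux_vm> when deploying in the oms namespace) must resolve to the IP address of the Linux VM from the machine where you open the browser. If they do not resolve through DNS, a common workaround is to add an entry to the hosts file on your client machine, for example:

     <Linux_VM_IP>   smcfs-oms.<replace_with_hostname_of_the_linux_vm>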

Step 8. Validating the integration of OMS with SIP

The Order Management System (OMS) communicates with Sterling Intelligent Promising (SIP) using a JWT (JSON Web Token) for secure authentication. In this process, OMS is responsible for generating the JWT as explained in the Step 5. Installing OMS for SIP Integration section of this tutorial. The JWT is sent along with each request. The SIP system is configured to validate this token upon receiving a request as explained in Step 5 of the Deploying IBM Sterling Intelligent Promising Containers on Minikube tutorial.

If the token is successfully validated, the call proceeds as expected. However, if the token validation fails, the call is rejected, resulting in an authentication failure. This ensures that only authenticated requests are processed by SIP.

Before proceeding with validation of integration of OMS with SIP, configure the JWT token at OMS gateway as explained in Step 5 of the Deploying IBM Sterling Intelligent Promising Containers on Minikube tutorial.

OMS can communicate with SIP in two modes:

  • Asynchronous mode using integration servers
  • Synchronous mode using APIs

  • Validating the Asynchronous mode using integration servers.

    a. Ensure that the integration servers IV_ADJUST_IS and IV_ADJUST_ID that we configured in Step 5. Installing OMS for SIP Integration of this tutorial are running without errors.

    b. Create an order with the following API input using HTTP API tester:

    Note: Since we have installed ReferenceImplementation (RI) data when installing OMS in this tutorial for demo/testing purposes, we are using the sample configuration data from RI as shown in the following example.

    <Order ApplyDefaultTemplate="Y" BuyerOrganizationCode="Bolton_MTRXB" BillToID="Bolton"
                DocumentType="0001" DraftOrderFlag="N" EnteredBy="bjones"
                EnterpriseCode="Matrix-B" IgnoreOrdering="Y" OrderHeaderKey="TestOrder_01_T1"
                OrderName="" OrderNo="TestOrder_01_T1" SellerOrganizationCode="Matrix-B" ShipToID="Bolton">
             <PersonInfoBillTo  ZipCode="83703"/>
             <OrderLines>
                 <OrderLine Action="CREATE"  DeliveryMethod="SHP" ItemGroupCode="PROD" OrderLineKey="TestOrder_01_Line1"
                            ShipToID="Bolton" ShipNode="Mtrx_Store_1" Quantity="1" >
                 <OrderLineTranQuantity OrderedQty="1" TransactionalUOM="EACH"/>
                 <Item ItemID="100001" ItemShortDesc="Tierra 42" Plasma Television/ HDTV"
                       UnitOfMeasure="EACH"/>
                 </OrderLine>
             </OrderLines>
     </Order>
  • When we create an order with the preceding input XML, an order gets created in OMS and a demand for the order line gets published in SIP.

    a. Verify that the demand is created in SIP by calling the following getDemands REST API:

    GET : https://{{hostname}}/inventory/default/v1/demands?itemId=100001&unitOfMeasure=EACH&shipNode=Mtrx_Store_1

    Output for the preceding API call should look like the following example:

    [
         {
             "itemId": "100001",
             "unitOfMeasure": "EACH",
             "type": "OPEN_ORDER",
             "shipNode": "Mtrx_Store_1",
             "quantity": 5.0,
             "shipDate": "2023-10-28T00:00:00.000Z",
             "cancelDate": "2500-01-01T00:00:00.000Z",
             "minShipByDate": "1900-01-01T00:00:00.000Z"
         }
     ]

    With this, we have validated that OMS communicates with SIP in asynchronous mode using integration servers.

  • Validating the Synchronous mode using API:

    a. Adjust the supply for an item, say 100002 in SIP using the following adjustSupply REST API as shown:

    POST : https://{{hostname}}/inventory/default/v1/supplies

    {
             "supplies": [
               {
                 "itemId": "100002",
                 "unitOfMeasure": "EACH",
                 "shipNode": "Mtrx_Store_1",
                 "type": "ONHAND",
                 "changedQuantity": 5
               }
             ]
           }

    b. Create an order in OMS for the same item for which we adjusted the supply in SIP, using the following API input in the API tester:

    <Order ApplyDefaultTemplate="Y" BuyerOrganizationCode="Bolton_MTRXB" BillToID="Bolton"
                DocumentType="0001" DraftOrderFlag="N" EnteredBy="bjones"
                    EnterpriseCode="Matrix-B" IgnoreOrdering="Y" OrderHeaderKey="TestOrder_02_T1"
                OrderName="" OrderNo="TestOrder_02_T1" SellerOrganizationCode="Matrix-B" ShipToID="Bolton">
             <PersonInfoBillTo  ZipCode="83703"/>
             <OrderLines>
                 <OrderLine Action="CREATE"  DeliveryMethod="SHP" ItemGroupCode="PROD" OrderLineKey="TestOrder_02_Line1"
                            ShipToID="Bolton" ShipNode="Mtrx_Store_1" Quantity="1" >
                 <OrderLineTranQuantity OrderedQty="1" TransactionalUOM="EACH"/>
                 <Item ItemID="100002" ItemShortDesc="Tierra 42" Plasma Television/ HDTV"
                       UnitOfMeasure="EACH"/>
                 </OrderLine>
             </OrderLines>
         </Order>

    c. Invoke the Schedule order API in OMS with the following input using API tester:

    <ScheduleOrder AllocationRuleID="SYSTEM" IgnoreOrdering="Y" IgnoreReleaseDate="Y" OrderHeaderKey="TestOrder_02_T1" ScheduleAndRelease="N" />

    d. After successfully invoking the schedule order in OMS, a reservation gets created in SIP for the same item. Validate the reservation in SIP by calling the search API for reservation.

    POST: https://{{hostname}}/inventory/default/v2/reservations/search-requests?pageSize=1
    {
             "data": {
                 "itemId": "100002",
                 "unitOfMeasure": {
                     "operator": "equals",
                     "values": [
                         "EACH"
                     ]
                 },
                 "shipNode": { //Either distributionGroup or ShipNode is allowed
                     "operator": "equals",
                     "values": [
                         "Mtrx_Store_1"
                     ]
                 }
             }
         }

    Output of the preceding API call should look like the following example:

    {
             "data": [
                 {
                     "expirationTs": "2024-10-29T17:00:00Z",
                     "unitOfMeasure": "EACH",
                     "reservedQuantity": 1.0,
                     "reference": "TestOrder_02_T1",
                     "itemId": "100002",
                     "availabilityType": "SCHEDULE",
                     "shipNode": "Mtrx_Store_1",
                     "reservationTs": "2024-10-28T16:06:00Z",
                     "tenantId": "default",
                     "id": "57bd70dd-b6a7-40ac-81e9-c1c827a61d6f"
                 }
             ],
             "meta": {
                 "pagination": {
                     "pageSize": 1
                 }
             }
         }

We have now validated that OMS communicates with SIP in synchronous mode using APIs.

Summary

This tutorial provided step-by-step instructions on deploying OMS Containers with SIP Integration and validating the integration of OMS with SIP.
