Tutorial
Integrating IBM Sterling OMS and SIP Containers on Minikube
Learn how to integrate IBM Sterling OMS with SIP on Minikube for development and testing
This tutorial guides you through deploying the IBM Sterling Order Management System Software (OMS) and its dependent stack, PostgreSQL and ActiveMQ, using a Kubernetes operator on a local, desktop-sized machine with Minikube. It also covers integrating OMS with IBM Sterling Intelligent Promising (SIP) Certified Containers, enabling you to explore and test the system in a controlled, development-friendly environment.
Objectives
By completing this tutorial, you will:
- Deploy a standalone OMS application using Minikube.
- Integrate OMS with SIP to utilize its inventory visibility and promising capabilities for efficient order fulfillment.
Note: This guide is intended for development and testing purposes only. For production deployment, consult the official product documentation.
Introduction to IBM Sterling OMS
IBM Sterling Order Management System (OMS) plays a vital role in supply chain and commerce operations for large enterprises globally. It empowers B2C and B2B organizations by providing a robust platform designed for innovation, differentiation, and omni-channel management.
In today’s fast-paced environment, automation and rapid deployment are essential. Technologies such as Docker and Kubernetes bring significant value by simplifying and accelerating deployment processes.
The OMS Certified Containers, available in Professional and Enterprise editions on the Operator Catalog, further enhance this experience. Using the OMS Operator, organizations can streamline enterprise application management across diverse cloud platforms with ease and efficiency.
Introduction to Intelligent Promising (SIP)
IBM Sterling Intelligent Promising (SIP) is a solution designed to help retailers efficiently manage inventory and delivery commitments. By tracking stock and intelligently managing order fulfillment, SIP ensures streamlined operations and enhanced customer satisfaction.
SIP's functionality offers the following modular services:
- Inventory visibility: Tracks stock across multiple locations, ensuring accurate and up-to-date availability.
- Promising service: Provides realistic delivery dates based on shipping methods, costs, and inventory.
- Catalog service: Manages product catalog data within the SIP configuration for seamless integration.
- Carrier service: Optimizes shipping options using advanced algorithms for faster and cost-effective fulfillment.
- Optimizer service: Identifies cost-efficient fulfillment strategies in coordination with OMS, aligning with business priorities.
These core modules are further supported by microservices such as rules and common services, making SIP highly adaptable to various business requirements and workflows.
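For a flavor of how these services are consumed, SIP exposes them as plain REST APIs. As an illustrative sketch, the inventory visibility demands query used for validation later in this tutorial can be called with curl; the hostname and JWT bearer token are placeholders, and the token setup is covered in the integration steps:

```
# Illustrative call to the SIP inventory visibility API.
# <SIP_HOSTNAME> and $TOKEN are placeholders for your SIP instance and JWT.
curl -k -H "Authorization: Bearer $TOKEN" \
  "https://<SIP_HOSTNAME>/inventory/default/v1/demands?itemId=100001&unitOfMeasure=EACH&shipNode=Mtrx_Store_1"
```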
Deployment strategy: Integrating OMS with SIP using the OMS Operator
To integrate IBM Sterling OMS with SIP, ensure you have set up a SIP Container instance as outlined in the Deploying IBM Sterling Intelligent Promising Containers on Minikube tutorial on IBM Developer. This setup is a prerequisite for the integration.
Integration phases
This tutorial focuses on Phase 2 of the OMS-SIP integration. For details on Phase 1, refer to the product documentation.
OMS Standard Operator
The OMS Standard Operator simplifies containerized deployments by adhering to Kubernetes best practices. It manages applications and components through custom resources, particularly the OMEnvironment resource. This resource allows you to configure:
- Application images
- Storage options
- PostgreSQL and ActiveMQ dependencies
- Network policies
- Other essential settings
With these configurations, the operator facilitates the deployment of a fully functional OMS environment.
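To give a feel for the shape of this custom resource before the complete example in Step 6, here is a minimal, illustrative OMEnvironment skeleton; the values are placeholders drawn from the working specification used later in this tutorial:

```
apiVersion: apps.oms.ibm.com/v1beta1
kind: OMEnvironment
metadata:
  name: oms
  namespace: oms
spec:
  license:
    accept: true        # license acceptance is mandatory
  secret: oms-secret    # sensitive settings (passwords, keys)
  storage:
    name: oms-pvc       # shared storage options
  devInstances: {}      # dev-only PostgreSQL and ActiveMQ dependencies
  networkPolicy: {}     # network policies
```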
Post-integration setup
Once OMS is integrated with SIP:
- Inventory management: SIP becomes the primary inventory management system.
- Data handling: Supply, demand, and reservation data are stored directly in SIP instead of OMS.
Key points for Intelligent Promising (SIP) integration
- Data management: Demand and supply data are stored directly in SIP, bypassing OMS.
- Sourcing functionality: Activating SIP disables OMS’s native smart sourcing functionality.
- Master of inventory and capacity:
- SIP acts as the inventory master.
- OMS retains its role as the master of capacity. Any availability calls requiring capacity calculations must be routed through OMS APIs.
- Security: Ensure secure communication by enabling TLS version 1.2 for outbound data from OMS to SIP.
Note: For detailed configuration and production guidance, refer to the product documentation.
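For the security point above, the OMEnvironment used later in this tutorial pins outbound OMS-to-SIP calls to TLS 1.2 through a JVM arguments group; the relevant fragment, shown here in isolation:

```
jvmArgs:
  - groupName: IVJVMArgs
    propertyList:
      - -Dhttps.protocols=TLSv1.2             # restrict outbound calls to TLS 1.2
      - -Dcom.ibm.jsse2.overrideDefaultTLS=true
```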
Development and testing with Minikube
Minikube provides a minimal Kubernetes (K8s) cluster with a Docker container runtime, ideal for local development and testing. It is specifically designed for deployment on developers’ desktops.
The sample configuration in this tutorial demonstrates how to deploy OMS as a standalone application for Proof of Concept (POC) purposes.
For Production Deployment
- Use compatible databases and other supported software as specified in the product documentation.
- Refer to the compatibility report for OMS operator and container image tags.
Prerequisites
Hardware requirements
- 100 GB+ of storage
- 24 GB+ of memory (preferably 32+)
- 8 available virtual CPUs (preferably 16)
Stack used for demonstration purposes
- OS version: Red Hat Enterprise Linux release 8.9 (Ootpa)
- minikube version: v1.32.0
- OMS operator version: 1.0.19; OMS image tag: 10.0.2409.0-amd64
Estimated time
This tutorial should take a few hours to complete.
Deployment Steps
Step 1. Installing Minikube
Create a non-root user
a. Create a non-root user and grant sudo permissions:

```
sudo useradd -m -s /bin/bash supportuser
sudo passwd supportuser
sudo usermod -aG wheel supportuser
```

b. Switch to the non-root user and add it to the docker group:

```
su - supportuser
sudo groupadd docker
sudo usermod -aG docker $USER && newgrp docker
```

Install dependent packages

a. Install kubectl. Kubectl is the essential command-line tool used for interacting with Kubernetes clusters. Here's how to install it:

Update your package manager's repository information:

```
sudo yum update
```

Download and install kubectl:

```
curl -LO "https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubectl"
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl
```

Verify the installation by checking the version:

```
kubectl version --client
```

b. Install Minikube. Minikube is a tool that allows you to run a Kubernetes cluster locally. Here's how to install it:

Download and install Minikube:

```
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube
```

c. Install Docker and dependent packages

Install conntrack. Conntrack is a utility used to view and manipulate the network connection tracking table in the Linux kernel, which is essential for Kubernetes. Install it with the following command:

```
sudo yum install conntrack
```

Install crictl. Crictl is a command-line interface for the Container Runtime Interface (CRI). Determine the latest version of crictl on the GitHub releases page, then download and install it (replace the VERSION value with the latest version):

```
export VERSION="v1.26.0"
curl -L https://github.com/kubernetes-sigs/cri-tools/releases/download/$VERSION/crictl-${VERSION}-linux-amd64.tar.gz --output crictl-${VERSION}-linux-amd64.tar.gz
sudo tar zxvf crictl-$VERSION-linux-amd64.tar.gz -C /usr/local/bin
rm -f crictl-$VERSION-linux-amd64.tar.gz
```

Note: Remove any conflicting versions of runc and re-install:

```
rpm -qa | grep runc
sudo yum remove <output of above>
```

For example:

```
sudo yum remove runc-1.1.12-1.module+el8.9.0+21243+a586538b.x86_64
sudo yum install runc
```

Install socat, a utility for multiplexing network connections:

```
sudo yum install socat
```

Install cri-dockerd by downloading the latest RPM for your OS and installing it.

Note: Run this command to install libcgroup only if you are on RHEL version 8.x. You can skip it if you are on 9.x:

```
sudo yum install libcgroup
```

Install the Container Networking Interface (CNI) plugins. Find the latest version at https://github.com/containernetworking/plugins/releases:

```
CNI_PLUGIN_VERSION="v1.3.0"
CNI_PLUGIN_TAR="cni-plugins-linux-amd64-$CNI_PLUGIN_VERSION.tgz"
CNI_PLUGIN_INSTALL_DIR="/opt/cni/bin"
curl -LO "https://github.com/containernetworking/plugins/releases/download/$CNI_PLUGIN_VERSION/$CNI_PLUGIN_TAR"
sudo mkdir -p "$CNI_PLUGIN_INSTALL_DIR"
sudo tar -xf "$CNI_PLUGIN_TAR" -C "$CNI_PLUGIN_INSTALL_DIR"
rm "$CNI_PLUGIN_TAR"
```

Install Docker. Docker is required for container runtime support. Use the following commands to install Docker on your system:

Install required utilities and add the Docker repository:

```
sudo yum install -y yum-utils
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
```

Install Docker and related packages:

```
sudo yum install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
```

Start Docker:

```
sudo systemctl start docker
sudo systemctl status docker
```

d. Start Minikube

Now that you have installed all the necessary components, you can start Minikube:

```
minikube start --driver=docker --cpus=<> --memory=<> --disk-size=<> --addons=metrics-server,dashboard,ingress
```

For example:

```
minikube start --driver=docker --cpus=14 --memory=56000 --disk-size=50g --addons=metrics-server,dashboard,ingress
```

Validate the installation:

```
minikube status
```

```
minikube
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured
```
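As an additional sanity check, you can confirm that the node is Ready and that the addons requested at startup are enabled:

```
kubectl get nodes
minikube addons list | grep -E 'metrics-server|dashboard|ingress'
```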
Step 2. Accessing Minikube dashboard remotely
The Minikube dashboard is a powerful web-based interface that provides insights into the state of your Minikube cluster. As a user-friendly graphical user interface (GUI), it offers various functionalities for managing Kubernetes resources. Here's what you can do using the Minikube dashboard:
Overview of Cluster Resources: The dashboard provides an at-a-glance overview of your Minikube cluster's nodes, pods, services, and more. This makes it easy to monitor the overall health of your cluster and quickly identify any issues.
Managing Deployments: You can create, scale, and manage deployments directly from the dashboard. This simplifies the process of launching applications and ensures they are running optimally.
Inspecting Pods and Containers: The dashboard lets you explore the details of pods, containers, and their associated logs. This is particularly valuable for debugging issues and analyzing application behavior.
Services and Ingress Management: Manage services and expose them via LoadBalancer, NodePort, or ClusterIP. Additionally, you can configure and manage Ingress resources to control external access to services.
ConfigMaps and Secrets: Create and manage ConfigMaps and Secrets, which store configuration data and sensitive information separately from application code.
Event Tracking: Stay informed about events in your cluster. The dashboard displays events related to pods, deployments, services, and other resources, aiding in identifying problems.
Cluster and Namespace Switching: If you're working with multiple clusters or namespaces, the dashboard allows you to seamlessly switch between them, streamlining management tasks.
Pod Terminal Access: With a single click, you can access a terminal directly within a pod's container. This is invaluable for debugging and troubleshooting.
Let's explore how to access the Minikube dashboard remotely and manage Kubernetes resources with ease:
Install the NetworkManager service:

```
sudo yum install NetworkManager
```

Start the NetworkManager service to manage network connections:

```
sudo systemctl start NetworkManager
```

Allow access to the Minikube dashboard port (8001/tcp) through the firewall:

```
sudo systemctl start firewalld
sudo firewall-cmd --add-port=8001/tcp --zone=public --permanent
sudo firewall-cmd --reload
```

From the Minikube server, get the URL to access the Minikube dashboard:

```
minikube dashboard --url
```

Access the Minikube dashboard remotely:

Establish another terminal connection to the Minikube server and start a proxy server that listens on all network interfaces:

```
minikube kubectl -- proxy --address='0.0.0.0' --disable-filter=true
```

Access the dashboard using the URL provided earlier, but replace the IP address with the public IP of the Minikube host. The URL should resemble:

```
http://<Minikube_Public_IP>:8001/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
```

Additional troubleshooting for Minikube dashboard access:

If you encounter an inaccessible Minikube dashboard URL and notice that the dashboard pods are in a crash loop backoff (`kubectl get pods -n kubernetes-dashboard`), consider the following step to resolve the issue:

Restart Docker: If Docker-related errors such as networking or iptables issues are observed, restarting the Docker service can help:

```
sudo systemctl restart docker
```

This action can reset Docker's networking components and often resolves connectivity and configuration issues impacting pod operations in Minikube.
Step 3. Installing the Operator SDK CLI and OLM
Download and install the Operator SDK CLI
```
RELEASE_VERSION=$(curl -s https://api.github.com/repos/operator-framework/operator-sdk/releases/latest | grep tag_name | cut -d '"' -f 4)
sudo curl -LO "https://github.com/operator-framework/operator-sdk/releases/download/${RELEASE_VERSION}/operator-sdk_linux_amd64"
sudo chmod +x operator-sdk_linux_amd64
sudo mv operator-sdk_linux_amd64 /usr/local/bin/operator-sdk
```

Install OLM

Run:

```
operator-sdk olm install --version=latest
```

Output should be similar to the following, indicating a successful install:

```
INFO[0160] Deployment "olm/packageserver" successfully rolled out
INFO[0161] Successfully installed OLM version "latest"

NAME                                            NAMESPACE    KIND                        STATUS
catalogsources.operators.coreos.com                          CustomResourceDefinition    Installed
clusterserviceversions.operators.coreos.com                  CustomResourceDefinition    Installed
installplans.operators.coreos.com                            CustomResourceDefinition    Installed
olmconfigs.operators.coreos.com                              CustomResourceDefinition    Installed
operatorconditions.operators.coreos.com                      CustomResourceDefinition    Installed
operatorgroups.operators.coreos.com                          CustomResourceDefinition    Installed
operators.operators.coreos.com                               CustomResourceDefinition    Installed
subscriptions.operators.coreos.com                           CustomResourceDefinition    Installed
olm                                                          Namespace                   Installed
operators                                                    Namespace                   Installed
olm-operator-serviceaccount                     olm          ServiceAccount              Installed
system:controller:operator-lifecycle-manager                 ClusterRole                 Installed
olm-operator-binding-olm                                     ClusterRoleBinding          Installed
cluster                                                      OLMConfig                   Installed
olm-operator                                    olm          Deployment                  Installed
catalog-operator                                olm          Deployment                  Installed
aggregate-olm-edit                                           ClusterRole                 Installed
aggregate-olm-view                                           ClusterRole                 Installed
global-operators                                operators    OperatorGroup               Installed
olm-operators                                   olm          OperatorGroup               Installed
packageserver                                   olm          ClusterServiceVersion       Installed
operatorhubio-catalog                           olm          CatalogSource               Installed
```

Note: If the OLM install fails for some reason, uninstall the previous version and then re-install.
To resolve this issue and perform a clean installation of OLM, follow these steps:

i. Uninstall the existing OLM resources from your Kubernetes cluster using the operator-sdk and kubectl commands:

```
operator-sdk olm uninstall --version=latest
kubectl delete crd olmconfigs.operators.coreos.com
kubectl delete clusterrole aggregate-olm-edit
kubectl delete clusterrole aggregate-olm-view
kubectl delete clusterrolebinding olm-operator-binding-olm
kubectl delete clusterrole system:controller:operator-lifecycle-manager
kubectl delete -n kube-system rolebinding packageserver-service-auth-reader
kubectl delete -n operators serviceaccount default
```

The preceding commands delete the cluster-scoped OLM resources along with the namespaced resources shown.

ii. After running the commands to delete OLM resources, verify that there are no remaining OLM resources in your cluster:

```
kubectl get subscriptions.operators.coreos.com
kubectl get catalogsources.operators.coreos.com
kubectl get operatorgroups.operators.coreos.com
kubectl get clusterserviceversions.operators.coreos.com
```

If these commands return empty lists, OLM has been successfully uninstalled.

iii. After ensuring that OLM is uninstalled, proceed with the installation of the desired OLM version by repeating the Install OLM instructions above.

After installing OLM, you can verify its installation by checking its resources:

```
kubectl get crd -n olm
```

```
NAME                                          CREATED AT
catalogsources.operators.coreos.com           2023-10-25T00:55:49Z
clusterserviceversions.operators.coreos.com   2023-10-25T00:55:49Z
installplans.operators.coreos.com             2023-10-25T00:55:49Z
olmconfigs.operators.coreos.com               2023-10-25T00:55:49Z
operatorconditions.operators.coreos.com       2023-10-25T00:55:49Z
operatorgroups.operators.coreos.com           2023-10-25T00:55:49Z
operators.operators.coreos.com                2023-10-25T00:55:49Z
subscriptions.operators.coreos.com            2023-10-25T00:55:49Z
```

You should see the new OLM resources related to the version you installed.
By following these steps, you should be able to uninstall existing OLM resources and perform a clean installation of the desired OLM version in your Kubernetes cluster. Be sure to refer to the specific documentation or instructions for the OLM version you are working with for any version-specific installation steps or considerations.
Overwriting PodSecurityStandards (PSS)
Kubernetes has an equivalent of OpenShift's SecurityContextConstraints called PodSecurityStandards (PSS), which enforces different profiles (privileged, baseline, and restricted) at a namespace level. When a restricted profile is the default on a namespace, the pod spec is required to contain the securityContext.seccompProfile.type field with a valid value. In this case, the Operator installation fails because the namespace (olm) has restricted PSS, but the Operator controller deployment does not have the field.

To overcome this, switch to the baseline PSS, which does not enforce the securityContext.seccompProfile.type field, by using the following command:

```
kubectl label --overwrite ns olm pod-security.kubernetes.io/enforce=baseline
```

Delete the out-of-the-box olm CatalogSource:

```
kubectl delete catalogsource operatorhubio-catalog -n olm
```

```
catalogsource.operators.coreos.com "operatorhubio-catalog" deleted
```
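You can confirm that the label took effect before retrying the Operator installation:

```
kubectl get ns olm --show-labels
```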
Step 4. Creating IBM Entitlement Key Secret
An image pull secret named ibm-entitlement-key must be created with the IBM entitlement registry credentials in the namespace (project) where you are configuring the OMEnvironment. For more information, see the corresponding documentation.

Go to https://myibm.ibm.com/products-services/containerlibrary and copy your entitlement key.

Export the entitlement key and namespace variables:

```
export ENTITLEDKEY="<Entitlement Key from MyIBM>"
export NAMESPACE="<project or namespace name for the OMS deployment>"
```

Create ibm-entitlement-key under the namespace where you will be deploying OMS by running the following command:

```
kubectl create secret docker-registry ibm-entitlement-key \
  --docker-server=cp.icr.io \
  --docker-username=cp \
  --docker-password=${ENTITLEDKEY} \
  --namespace=${NAMESPACE}
```

Note: The Operator is from the open registry; however, most container images are commercial. Contact your IT or enterprise administrator to get access to the entitlement key.
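As a quick sanity check, confirm that the image pull secret exists in the target namespace:

```
kubectl get secret ibm-entitlement-key -n ${NAMESPACE}
```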
Step 5. Installing OMS for SIP Integration
Preparing and configuring the IBM Sterling Order Management System Operator
a. Create a CatalogSource YAML named catalog_source_oms.yaml. A CatalogSource is a repository in Kubernetes that houses information about available Operators, which can be installed on the cluster. It acts as a marketplace for Operators, allowing users to browse and select from a variety of packaged Kubernetes applications.

```
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: ibm-oms-catalog
  namespace: olm
spec:
  displayName: IBM OMS Operator Catalog
  # For the image name, see the catalog source image names table in the product documentation and use the appropriate value.
  image: 'icr.io/cpopen/ibm-oms-ent-case-catalog:v1.0.19-10.0.2409.0'
  publisher: IBM
  sourceType: grpc
  updateStrategy:
    registryPoll:
      interval: 10m
```

b. Run the following command to create the CatalogSource:

```
kubectl create -f catalog_source_oms.yaml -n olm
```

Confirm that the CatalogSource is successfully created by running:

```
kubectl get catalogsource,pods -n olm
```

Before you proceed:
Switch to the desired namespace where you want to deploy the resources. Use the following command to set the namespace:

```
kubectl config set-context --current --namespace=<your-namespace>
```

In the preceding command, replace <your-namespace> with the name of your namespace.

Note: For demonstration purposes, this tutorial uses the oms namespace. You can deploy resources in the oms namespace or modify the YAML files to target a different namespace of your choice.

Recommendation: If deploying both IBM Sterling Order Management (OMS) containers and SIP on the same Kubernetes cluster, consider using separate namespaces to better organize and group the resources.
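If the target namespace does not exist yet, create it before switching the context (this tutorial assumes the name oms):

```
kubectl create namespace oms
```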
Create an OperatorGroup. An OperatorGroup in Kubernetes defines the namespaces where a specific Operator will be active. It determines the scope of the Operator's capabilities, specifying which namespaces it can observe and manage. This feature supports multi-tenancy and access control within the cluster.

Key considerations for Operator Groups:

- Ensure that only one OperatorGroup exists per namespace to avoid conflicts.
- Multiple OperatorGroups in the same namespace can lead to deployment issues and hinder proper functioning of the Operators.

This constraint helps maintain clear and controlled access to resources within the namespace.

a. Create an OperatorGroup YAML file named oms-operator-group.yaml:

```
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: oms-operator-global
  namespace: oms
spec: {}
```

b. Run the following command to create the OperatorGroup:

```
kubectl create -f oms-operator-group.yaml
```
Create a Subscription. In Kubernetes, a Subscription is used to track desired Operators from a CatalogSource and manage their updates. It ensures that an Operator is kept within a specified version range, automating the process of installation and upgrades as new versions become available.

a. Create a Subscription YAML file named oms_subscription.yaml:

```
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: ibm-oms-ent-sub
  namespace: oms
spec:
  channel: v1.0
  installPlanApproval: Automatic
  name: ibm-oms-ent
  source: ibm-oms-catalog
  sourceNamespace: olm
```

b. Run the following command to create the Subscription:

```
kubectl create -f oms_subscription.yaml
```

Validate the configuration by running the following commands:

```
kubectl get sub ibm-oms-ent-sub -o jsonpath="{.status.installedCSV}" && echo
kubectl get pods -n oms
```
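It can take a few minutes for OLM to resolve the Subscription and roll out the Operator. If the installed CSV is not reported immediately, you can watch the ClusterServiceVersion until its phase shows Succeeded:

```
kubectl get csv -n oms -w
```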
Installing OMS application
a. Create a Kubernetes Persistent Volume Claim (PVC) with the ReadWriteMany access mode and a minimum storage requirement of 10 GB. This PVC should use the standard storage class provided by Minikube, which dynamically provisions a Persistent Volume (PV) during deployment. Ensure that this storage is accessible by all containers across the cluster. The owner group of the directory where the PV is mounted must have write access, and the owner group ID should be specified in the spec.storage.securityContext.fsGroup parameter of the OMEnvironment custom resource. This PV is crucial for storing data from middleware services such as PostgreSQL and ActiveMQ when deploying OMS in development mode. Additionally, the PV stores the JWT secret that OMS generates, as well as the truststore, allowing you to add trusted certificates.

Create a PVC YAML named oms-pvc.yaml:

```
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: oms-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 40Gi
  storageClassName: "standard"
```

b. Create the PVC by running the following command:

```
kubectl create -f oms-pvc.yaml -n oms
```

c. Configure PostgreSQL and ActiveMQ. The IBM Sterling OMS Operator can automatically install the required middleware, such as PostgreSQL and ActiveMQ, for development purposes. Note that these middleware instances are for development purposes only.
```
devInstances:
  profile: ProfileColossal
  postgresql:
    repository: docker.io
    tag: '16.1'
    name: postgres
    user: postgres
    password: postgres
    database: postgres
    schema: postgres
    wipeData: true
    # storage:
    #   name: <omsenvironment-operator-pv-oms-test>
    profile: ProfileColossal
    # timezone: <Timezone>
  activemq:
    repository: docker.io
    tag: 6.1.0
    name: apache/activemq-classic
    # storage:
    #   name: <omsenvironment-operator-pv-oms-test>
    profile: ProfileColossal
    # timezone: <Timezone>
```
d. Configure customer_overrides properties.

Set the following properties in the customer_overrides properties file to enable the integration with SIP:

```
iv_integration.tenantId: default
iv_integration.clientId: DEFAULT
iv_integration.secret: DEFAULT
iv_integration.baseUrl: https://<SIP_HOSTNAME>/inventory
iv_integration.authentication.mode: JWT
iv_integration.IVApiVersion: v2
iv_integration.nodeAvailability.apiUrl: /v2/availability/node/
iv_integration.networkAvailability.cached.apiUrl: /v2/availability/network/
iv_integration.nodeAvailability.cached.apiUrl: /v2/availability/node/
iv_integration.reservations.apiUrl: /v2/reservations/
```

Set the following properties in the customer_overrides properties file to facilitate agent and integration server communication with ActiveMQ:

```
yfs.iv.integration.icf: org.apache.activemq.jndi.ActiveMQInitialContextFactory
yfs.iv.integration.supply.providerurl: tcp://<replace_active_mq_host>:61616?jms.prefetchPolicy.all=0
yfs.iv.integration.sendsupplyupdates.event.queue: dynamicQueues/DEV.QUEUE.1
yfs.iv.integration.supply.qcf: ConnectionFactory
yfs.iv.integration.demand.providerurl: tcp://<replace_active_mq_host>:61616?jms.prefetchPolicy.all=0
yfs.iv.integration.senddemandupdates.event.queue: dynamicQueues/DEV.QUEUE.2
yfs.iv.integration.demand.qcf: ConnectionFactory
```

Note: Navigate to the services in the same namespace where you are installing OMS and find the oms-activemq service to get the ActiveMQ hostname; use it to replace <replace_active_mq_host>.
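To find the value for <replace_active_mq_host>, list the services in the OMS namespace. For an in-cluster deployment, the fully qualified service name takes the form used in the final OMEnvironment example (oms-activemq.oms.svc.cluster.local when deploying in the oms namespace):

```
kubectl get svc -n oms | grep activemq
```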
e. Configure OMS to generate the JWT token by using its own private-public key pair. For development use cases, IBM OMS can generate the JWT token by using its own private-public certificate key pair:

```
common:
  jwt:
    algorithm: RS256
    audience: service
    issuer: oms
```

Notes:

- You can also use your own private-public key pair to generate the JWT token for production use cases. For more details, refer to Step 5.4 of the Deploying IBM Sterling Intelligent Promising Containers on Minikube tutorial.
- The private key is imported to the keystore, and the public key is copied to sharedCertificates in the Persistent Volume, for example, <sharedDirectory>/jwtauth/operator.pub. Users can configure this public key in OMS Gateway as explained in the Creating a JWT issuer secret by using a public key topic.
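If you want to confirm after deployment that the key pair was generated, you can list the jwtauth directory on the shared volume from any pod that mounts the PVC. The pod name below is a placeholder, and the /shared mount path is an assumption based on the truststore path used later in this tutorial:

```
# Hypothetical check; replace <oms-pod-name> with a running OMS pod.
kubectl exec -n oms <oms-pod-name> -- ls -l /shared/jwtauth
```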
f. Create a secret that is used for setting sensitive information when creating the OMEnvironment through the Sterling Order Management System Software Operator.

i. Create a secret YAML named oms-secret.yaml:

```
apiVersion: v1
kind: Secret
metadata:
  name: 'oms-secret'
type: Opaque
stringData:
  consoleAdminPassword: 'password'
  consoleNonAdminPassword: 'password'
  dbPassword: 'postgres'
  trustStorePassword: 'changeit'
  keyStorePassword: 'changeit'
  ivSecret: 'DEFAULT'
```

ii. Run the following command:

```
kubectl create -f oms-secret.yaml -n oms
```
OMEnvironmentas shown so that supply, demand, and reservation data is directly updated to Sterling Intelligent Promising, and not stored in Sterling Order Management System Software.- name: "integration" replicaCount: 1 profile: ProfileHuge property: customerOverrides: IV_props jvmArgs: IVJVMArgs integration: names: [ IV_ADJUST_IS, IV_ADJUST_ID ] readinessFailRestartAfterMinutes: 10 terminationGracePeriodSeconds: 60h.Configure the truststore of OMS to include the certificate of SIP.This ensures that OMS can establish a trusted and secure connection when initiating calls to SIP. In order to fecilitate this add the SIP certificate to trustedCerts directory in shared path as configued in PVC in step a of this article. Copy SIP's tls.crt which is a part of sip-operator-ca secret inside the path in PVC like as shown below
security: ssl: trust: trustedCertDir: /shared/certs/trustedCertsi. Configure the following annotations in
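The following is a sketch of extracting SIP's certificate from the sip-operator-ca secret and placing it in the trustedCerts directory. It assumes SIP runs in a namespace named sip and that the shared PVC is mounted at /shared in the OMS pod; both names are placeholders for your environment:

```
# Extract the CA certificate from the SIP operator secret (namespace is an assumption).
kubectl get secret sip-operator-ca -n sip -o jsonpath='{.data.tls\.crt}' | base64 -d > sip-tls.crt

# Copy it into the trustedCerts directory via any pod that mounts the shared PVC.
kubectl cp sip-tls.crt oms/<oms-pod-name>:/shared/certs/trustedCerts/sip-tls.crt
```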
i. Configure the following annotations in OMEnvironment to facilitate the use of middleware services such as PostgreSQL and ActiveMQ, enable SIP integration, utilize the reference implementation data, and control JWT token regeneration.

| Annotation | Value | Description |
|---|---|---|
| apps.oms.ibm.com/activate-iv-integration | yes | Setting this annotation to 'yes' enables OMS and SIP integration. |
| apps.oms.ibm.com/dbvendor-auto-transform | yes | Setting this annotation to 'yes' enables OMS to automatically transform the pod properties to connect to the PostgreSQL database specified in the parameters. |
| apps.oms.ibm.com/dbvendor-install-driver | yes | Setting this annotation to 'yes' enables OMS to install the database driver for PostgreSQL. |
| apps.oms.ibm.com/refimpl-install | yes | Setting this annotation to 'yes' installs the reference implementation data, for development purposes only. |
| apps.oms.ibm.com/refimpl-type | oms | Setting this annotation to 'oms' installs the OMS reference implementation data. |
| apps.oms.ibm.com/regenerate-jwt-privatekey | yes | Setting this annotation to 'yes' generates the private key in the shared volume. |
Step 6. Deploy OMS using Operators
Create an OMEnvironment YAML named oms-env.yaml.

Note: Ensure that you have internet access before starting the Kubernetes operator deployment for the OMS application. This deployment requires downloading a list of images; if the images cannot be downloaded, the deployment will fail. Alternatively, you can download these images in advance, push them to your local registry, and then perform the deployment by referring to your local registry. The required images are:

- docker.io/postgres:16.1
- docker.io/apache/activemq-classic:6.1.0

Create the file with the following content:

```
apiVersion: apps.oms.ibm.com/v1beta1
kind: OMEnvironment
metadata:
  name: oms
  namespace: oms
  annotations:
    apps.oms.ibm.com/activate-iv-integration: 'yes'
    apps.oms.ibm.com/activemq-install-driver: 'yes'
    apps.oms.ibm.com/dbvendor-auto-transform: 'yes'
    apps.oms.ibm.com/dbvendor-install-driver: 'yes'
    apps.oms.ibm.com/refimpl-install: 'yes'
    apps.oms.ibm.com/refimpl-type: 'oms'
    apps.oms.ibm.com/regenerate-jwt-privatekey: 'yes'
spec:
  license:
    accept: true
    acceptCallCenterStore: true
  secret: oms-secret
  image:
    imagePullSecrets:
      - name: ibm-entitlement-key
    oms:
      repository: cp.icr.io/cp/ibm-oms-professional
      tag: 10.0.2409.0-amd64
    orderHub:
      base:
        repository: cp.icr.io/cp/ibm-oms-professional
        tag: 10.0.2409.0-amd64
      extn:
        repository: cp.icr.io/cp/ibm-oms-professional
        tag: 10.0.2409.0-amd64
    callCenter:
      base:
        repository: cp.icr.io/cp/ibm-oms-professional
        tag: 10.0.2409.0-amd64
      extn:
        repository: cp.icr.io/cp/ibm-oms-professional
        tag: 10.0.2409.0-amd64
  dataManagement:
    mode: create
  property:
    customerOverrides: IV_props
  storage:
    name: oms-pvc
    storageClassName: standard
  database:
    postgresql:
      name: postgres
      host: oms-postgresql.oms.svc.cluster.local
      port: 5432
      user: postgres
      schema: postgres
      secure: false
      dataSourceName: jdbc/OMDS
  devInstances:
    profile: ProfileColossal
    postgresql:
      repository: docker.io
      tag: '16.1'
      name: postgres
      user: postgres
      password: postgres
      database: postgres
      schema: postgres
      wipeData: true
      profile: ProfileColossal
    activemq:
      repository: docker.io
      tag: 6.1.0
      name: apache/activemq-classic
      profile: ProfileColossal
  networkPolicy:
    podSelector:
      matchLabels:
        none: none
    policyTypes:
      - Ingress
    ingress: []
  common:
    ingress:
      host: replace_with_hostname_of_the_linux_vm
    jwt:
      algorithm: RS256
      audience: service
      issuer: oms
  servers:
    - name: "smcfs"
      replicaCount: 1
      profile: ProfileHuge
      property:
        customerOverrides: IV_props
        jvmArgs: IVJVMArgs
      appServer:
        serverName: DefaultAppServer
        vendor: websphere
        vendorFile: servers.properties
        libertyServerXml: default-server-xml
        ingress:
          labels: {}
          annotations: {}
          contextRoots: [ smcfs, sbc, sma, isccs, wsc, adminCenter, icc, isf ]
        dataSource:
          maxPoolSize: 50
          minPoolSize: 10
        threads:
          max: 100
          min: 20
    - name: "integration"
      replicaCount: 1
      profile: ProfileHuge
      property:
        customerOverrides: IV_props
        jvmArgs: IVJVMArgs
      integration:
        names: [ IV_ADJUST_IS, IV_ADJUST_ID ]
      readinessFailRestartAfterMinutes: 10
      terminationGracePeriodSeconds: 60
  serverProfiles:
    - name: ProfileHuge
      resources:
        requests:
          cpu: '1'
          memory: 4Gi
        limits:
          cpu: '3'
          memory: 10Gi
  orderHub:
    bindingAppServerName: 'smcfs'
    base:
      replicaCount: 1
    extn:
      replicaCount: 1
  callCenter:
    bindingAppServerName: 'smcfs'
    base:
      replicaCount: 1
    extn:
      replicaCount: 1
  serverProperties:
    customerOverrides:
      - groupName: IV_props
        propertyList:
          iv_integration.tenantId: default
          iv_integration.clientId: DEFAULT
          iv_integration.secret: DEFAULT
          iv_integration.baseUrl: https://replace_SIP_Instance_URL/inventory
          iv_integration.authentication.mode: JWT
          iv_integration.IVApiVersion: v2
          iv_integration.nodeAvailability.apiUrl: /v2/availability/node/
          iv_integration.networkAvailability.cached.apiUrl: /v2/availability/network/
          iv_integration.nodeAvailability.cached.apiUrl: /v2/availability/node/
          iv_integration.reservations.apiUrl: /v2/reservations/
          yfs.iv.integration.icf: org.apache.activemq.jndi.ActiveMQInitialContextFactory
          yfs.iv.integration.supply.providerurl: tcp://oms-activemq.oms.svc.cluster.local:61616?jms.prefetchPolicy.all=0
          yfs.iv.integration.sendsupplyupdates.event.queue: dynamicQueues/DEV.QUEUE.1
          yfs.iv.integration.supply.qcf: ConnectionFactory
          yfs.iv.integration.demand.providerurl: tcp://oms-activemq.oms.svc.cluster.local:61616?jms.prefetchPolicy.all=0
          yfs.iv.integration.senddemandupdates.event.queue: dynamicQueues/DEV.QUEUE.2
          yfs.iv.integration.demand.qcf: ConnectionFactory
    jvmArgs:
      - groupName: IVJVMArgs
        propertyList:
          - -Dhttps.protocols=TLSv1.2
          - -Dcom.ibm.jsse2.overrideDefaultTLS=true
  healthMonitor:
    replicaCount: 1
    profile: "balanced"
    property: {}
  security:
    ssl:
      trust:
        trustedCertDir: /shared/certs/trustedCerts
```

Then run:

```
kubectl create -f oms-env.yaml
```

Note: If you encounter rate-limit issues while the OMS deployment attempts to pull images for external services such as PostgreSQL and ActiveMQ, refer to the technical note for strategies to address and bypass these issues.
Step 7. Validating the instance post deployment
Deployment validation:

a. Run the following command to validate the deployment status:

```
kubectl describe omenvironments.apps.oms.ibm.com
```

b. The status in the output of the preceding command should show OMEnvironment Available.

Notes:

- If the status remains InProgress for an extended period after deployment, there may be issues with the deployment process. It is advisable to review the OMS controller manager pods for any errors.
- All pods should be up and running when you execute the command kubectl get pods. If you encounter any errors or notice pods in an error state, use kubectl logs -f <podname> or kubectl describe pod <podname> to view the error details.
Access application URLs through port forwarding:

a. Start the firewall (if not running):

```
sudo systemctl start firewalld
```

b. Add a couple of ports to the public zone. Run these commands:

```
sudo firewall-cmd --add-port=9443/tcp --zone=public --permanent
sudo firewall-cmd --add-port=8080/tcp --zone=public --permanent
sudo firewall-cmd --reload
```

c. Use kubectl to forward the required ports:

```
kubectl port-forward --address 0.0.0.0 svc/ingress-nginx-controller -n ingress-nginx 9443:443
```

You can now access the SMCFS application via the RHEL server URL:

```
https://smcfs-<namespace>.<replace_with_hostname_of_the_linux_vm>:9443/smcfs/console/login.jsp
```

You can also access the Order Hub application via the RHEL server URL:

```
https://smcfs-<namespace>.<replace_with_hostname_of_the_linux_vm>:9443/smcfs/order-management
```
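If the smcfs-<namespace>.<hostname> name does not resolve from your workstation, a simple option for this development setup is to map it to the VM's public IP in your local hosts file; the values below are placeholders:

```
# /etc/hosts on the machine running the browser (assumes the oms namespace)
<Minikube_Host_Public_IP>   smcfs-oms.<replace_with_hostname_of_the_linux_vm>
```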
Step 8. Validating the integration of OMS with SIP
The Order Management System (OMS) communicates with Sterling Intelligent Promising (SIP) using a JWT (JSON Web Token) for secure authentication. In this process, OMS is responsible for generating the JWT as explained in the Step 5. Installing OMS for SIP Integration section of this tutorial. The JWT is sent along with each request. The SIP system is configured to validate this token upon receiving a request as explained in Step 5 of the Deploying IBM Sterling Intelligent Promising Containers on Minikube tutorial.
If the token is successfully validated, the call proceeds as expected. However, if the token validation fails, the call is rejected, resulting in an authentication failure. This ensures that only authenticated requests are processed by SIP.
Before proceeding with validation of integration of OMS with SIP, configure the JWT token at OMS gateway as explained in Step 5 of the Deploying IBM Sterling Intelligent Promising Containers on Minikube tutorial.
OMS can communicate with SIP in two modes:
- Asynchronous mode using integration servers
- Synchronous mode using APIs
Validating the asynchronous mode using integration servers:

a. Ensure that the integration servers IV_ADJUST_IS and IV_ADJUST_ID, which were configured in Step 5. Installing OMS for SIP Integration of this tutorial, are running without errors.

b. Create an order with the following API input using an HTTP API tester.

Note: Because we installed the Reference Implementation (RI) data when installing OMS in this tutorial for demo and testing purposes, we are using the sample configuration data from RI as shown in the following example.

```
<Order ApplyDefaultTemplate="Y" BuyerOrganizationCode="Bolton_MTRXB" BillToID="Bolton" DocumentType="0001" DraftOrderFlag="N" EnteredBy="bjones" EnterpriseCode="Matrix-B"
       IgnoreOrdering="Y" OrderHeaderKey="TestOrder_01_T1" OrderName="" OrderNo="TestOrder_01_T1" SellerOrganizationCode="Matrix-B" ShipToID="Bolton">
    <PersonInfoBillTo ZipCode="83703"/>
    <OrderLines>
        <OrderLine Action="CREATE" DeliveryMethod="SHP" ItemGroupCode="PROD" OrderLineKey="TestOrder_01_Line1" ShipToID="Bolton" ShipNode="Mtrx_Store_1" Quantity="1">
            <OrderLineTranQuantity OrderedQty="1" TransactionalUOM="EACH"/>
            <Item ItemID="100001" ItemShortDesc="Tierra 42&quot; Plasma Television/HDTV" UnitOfMeasure="EACH"/>
        </OrderLine>
    </OrderLines>
</Order>
```

When we create an order with the preceding input XML, an order gets created in OMS and a demand for the order line gets published to SIP.

c. Verify the demand created in SIP by calling the getDemands REST API:

```
GET https://{{hostname}}/inventory/default/v1/demands?itemId=100001&unitOfMeasure=EACH&shipNode=Mtrx_Store_1
```

Output for the preceding API call should look like the following example:

```
[
    {
        "itemId": "100001",
        "unitOfMeasure": "EACH",
        "type": "OPEN_ORDER",
        "shipNode": "Mtrx_Store_1",
        "quantity": 5.0,
        "shipDate": "2023-10-28T00:00:00.000Z",
        "cancelDate": "2500-01-01T00:00:00.000Z",
        "minShipByDate": "1900-01-01T00:00:00.000Z"
    }
]
```

With this, we have validated that OMS communicates with SIP in asynchronous mode using integration servers.
Validating the synchronous mode using APIs:

a. Adjust the supply for an item, say 100002, in SIP using the adjustSupply REST API as shown:

```
POST https://{{hostname}}/inventory/default/v1/supplies

{
    "supplies": [
        {
            "itemId": "100002",
            "unitOfMeasure": "EACH",
            "shipNode": "Mtrx_Store_1",
            "type": "ONHAND",
            "changedQuantity": 5
        }
    ]
}
```

b. Create an order in OMS for the same item for which we adjusted the supply in SIP, using the following API input in the API tester:

```
<Order ApplyDefaultTemplate="Y" BuyerOrganizationCode="Bolton_MTRXB" BillToID="Bolton" DocumentType="0001" DraftOrderFlag="N" EnteredBy="bjones" EnterpriseCode="Matrix-B"
       IgnoreOrdering="Y" OrderHeaderKey="TestOrder_02_T1" OrderName="" OrderNo="TestOrder_02_T1" SellerOrganizationCode="Matrix-B" ShipToID="Bolton">
    <PersonInfoBillTo ZipCode="83703"/>
    <OrderLines>
        <OrderLine Action="CREATE" DeliveryMethod="SHP" ItemGroupCode="PROD" OrderLineKey="TestOrder_02_Line1" ShipToID="Bolton" ShipNode="Mtrx_Store_1" Quantity="1">
            <OrderLineTranQuantity OrderedQty="1" TransactionalUOM="EACH"/>
            <Item ItemID="100002" ItemShortDesc="Tierra 42&quot; Plasma Television/HDTV" UnitOfMeasure="EACH"/>
        </OrderLine>
    </OrderLines>
</Order>
```

c. Invoke the scheduleOrder API in OMS with the following input using the API tester:

```
<ScheduleOrder AllocationRuleID="SYSTEM" IgnoreOrdering="Y" IgnoreReleaseDate="Y" OrderHeaderKey="TestOrder_02_T1" ScheduleAndRelease="N" />
```

d. After successfully invoking scheduleOrder in OMS, a reservation gets created in SIP for the same item. Validate the reservation in SIP by calling the reservation search API (either distributionGroup or shipNode is allowed in the search criteria):

```
POST https://{{hostname}}/inventory/default/v2/reservations/search-requests?pageSize=1

{
    "data": {
        "itemId": "100002",
        "unitOfMeasure": {
            "operator": "equals",
            "values": [ "EACH" ]
        },
        "shipNode": {
            "operator": "equals",
            "values": [ "Mtrx_Store_1" ]
        }
    }
}
```

Output of the preceding API call should look like the following example:

```
{
    "data": [
        {
            "expirationTs": "2024-10-29T17:00:00Z",
            "unitOfMeasure": "EACH",
            "reservedQuantity": 1.0,
            "reference": "TestOrder_02_T1",
            "itemId": "100002",
            "availabilityType": "SCHEDULE",
            "shipNode": "Mtrx_Store_1",
            "reservationTs": "2024-10-28T16:06:00Z",
            "tenantId": "default",
            "id": "57bd70dd-b6a7-40ac-81e9-c1c827a61d6f"
        }
    ],
    "meta": {
        "pagination": {
            "pageSize": 1
        }
    }
}
```

We have now validated that OMS communicates with SIP in synchronous mode using APIs.
Summary
This tutorial provided step-by-step instructions on deploying OMS containers with SIP integration and validating the integration of OMS with SIP.