Deploying IBM Order Management System and Order Service containers on Minikube
This tutorial demonstrates how you can deploy IBM Order Management System and Order Service Containers, along with dependent components such as Elasticsearch and Cassandra, through the operator on a desktop-sized machine by using Minikube.
The sample configuration in this tutorial demonstrates how to deploy IBM Order Management System and Order Service Containers as a standalone application for proof of concept (POC) purposes.
Introduction to IBM Sterling Order Management System and IBM Order Service
IBM Sterling Order Management System (OMS) is the backbone of supply chain and commerce initiatives for large enterprises around the world. The product provides a robust platform that is designed to provide B2C and B2B organizations the power to innovate, differentiate, and drive their omnichannel businesses with less overhead. This rapid pace of innovation drives a lot of effort in deployment and automation practices. Tools such as Docker and Kubernetes bring exceptional speed and value to these enterprises to achieve their engineering excellence.
IBM Order Service is a new feature for Sterling Order Management System software. Order Service advances IBM’s modular business service vision for the Sterling Order Management System software platform by building more robust and scalable order search and archival capabilities on a modernized technology stack and architecture. Order Service is deployed alongside Sterling Order Management System software to provide enhanced functionality as part of an expanded solution footprint, and comprises two components: Order Search and Archive Service.
Order Search provides faster access to order data with a more robust query language and reduces the workload on core Sterling Order Management System software application servers by moving it to a scalable and highly available repository. Order Search uses Elasticsearch to store key-order data and makes it available through a set of GraphQL APIs.
Archive Service enables customers to retain a greater amount of historical order data by offloading it to an optimized storage repository, reducing the Sterling Order Management System software database resource requirements while still providing seamless access to the data. Archive Service uses Cassandra to efficiently store large amounts of order data and makes it available through a set of GraphQL APIs.
Order Service is available for Sterling Order Management System Software Containers, and is distributed only to users who are entitled to Sterling Order Management System Software Containers. You can install, configure, and deploy the Order Service images in Sterling Order Management System Software Professional or Enterprise edition.
Development and testing with Minikube
Minikube provides a minimal Kubernetes (K8s) cluster with a Docker container runtime, ideal for local development and testing. It is specifically designed for deployment on developers’ desktops.
Note: This guide is intended for development and testing purposes only. For production deployment, consult the official product documentation.
Estimated time
This tutorial should take a few hours to complete.
Prerequisites
Hardware requirements
100 GB+ of storage
24 GB+ of memory (preferably 32+)
8 available virtual CPUs (preferably 16)
Stack used for demonstration purposes
OS version: Red Hat Enterprise Linux release 8.9 (Ootpa)
minikube version: v1.32.0
For production deployments
Use compatible databases and other supported software as specified in the product documentation.
Refer to the compatibility report for OMS operator and container image tags.
Deployment steps
Step 1. Installing Minikube
Create a non-root user
a. Create a non-root user and grant it sudo permissions.
Install conntrack: Conntrack is a utility used to view and manipulate the network connection tracking table in the Linux kernel, which is essential for Kubernetes. Install it with the following command:
sudo yum install conntrack
Install crictl: Crictl is a command-line interface for the Container Runtime Interface (CRI). To install it, follow these steps:
Determine the latest version of crictl on the GitHub releases page.
Download and install crictl (replace $VERSION with the latest version):
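The download commands are not reproduced above; a typical sequence for Linux x86_64, assuming the crictl release archives published on the kubernetes-sigs/cri-tools GitHub releases page (the version shown is only an example), is:
VERSION="v1.30.0"   # replace with the latest version from the releases page
curl -L https://github.com/kubernetes-sigs/cri-tools/releases/download/$VERSION/crictl-$VERSION-linux-amd64.tar.gz -o crictl-$VERSION-linux-amd64.tar.gz
sudo tar zxvf crictl-$VERSION-linux-amd64.tar.gz -C /usr/local/bin
rm -f crictl-$VERSION-linux-amd64.tar.gz
After Minikube itself is installed and the cluster is started, you can confirm that everything is running: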
minikube status
[...]
minikube
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured
Step 2. Accessing Minikube dashboard remotely
The Minikube dashboard is a powerful web-based interface that provides insights into the state of your Minikube cluster. As a user-friendly graphical user interface (GUI), it offers various functionalities for managing Kubernetes resources. Here's what you can do using the Minikube dashboard:
Overview of Cluster Resources: The dashboard provides an at-a-glance overview of your Minikube cluster's nodes, pods, services, and more. This makes it easy to monitor the overall health of your cluster and quickly identify any issues.
Managing Deployments: You can create, scale, and manage deployments directly from the dashboard. This simplifies the process of launching applications and ensures they are running optimally.
Inspecting Pods and Containers: The dashboard lets you explore the details of pods, containers, and their associated logs. This is particularly valuable for debugging issues and analyzing application behavior.
Services and Ingress Management: Manage services and expose them via LoadBalancer, NodePort, or ClusterIP. Additionally, you can configure and manage Ingress resources to control external access to services.
ConfigMaps and Secrets: Create and manage ConfigMaps and Secrets, which store configuration data and sensitive information separately from application code.
Event Tracking: Stay informed about events in your cluster. The dashboard displays events related to pods, deployments, services, and other resources, aiding in identifying problems.
Cluster and Namespace Switching: If you're working with multiple clusters or namespaces, the dashboard allows you to seamlessly switch between them, streamlining management tasks.
Pod Terminal Access: With a single click, you can access a terminal directly within a pod's container. This is invaluable for debugging and troubleshooting.
Let's explore how to access the Minikube dashboard remotely and manage Kubernetes resources with ease:
Install the NetworkManager service:
sudo yum install NetworkManager
Start the NetworkManager service to manage network connections:
sudo systemctl start NetworkManager
Allow access to the Minikube dashboard port (8001/tcp) through the firewall:
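The exact commands are not shown above; with firewalld, opening the port typically looks like the following. To make the dashboard reachable from other machines, you also need a kubectl proxy listening on all interfaces (the proxy flags shown are assumptions for this setup):
sudo firewall-cmd --zone=public --add-port=8001/tcp --permanent
sudo firewall-cmd --reload
# expose the dashboard on all interfaces so that it can be reached remotely
kubectl proxy --address='0.0.0.0' --port=8001 --accept-hosts='.*' &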
Access the dashboard using the URL provided earlier but replace the IP address with the public IP of the Minikube host.
The URL should resemble http://<your_instance_ip>:8001/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/.
Additional troubleshooting for Minikube dashboard access:
If you encounter an inaccessible Minikube dashboard URL and notice that the dashboard pods are in a CrashLoopBackOff state (which you can check using the command kubectl get pods -n kubernetes-dashboard), consider the following step to resolve the issue:
Restart Docker: If Docker-related errors such as networking or iptables issues are observed, restarting the Docker service can help. Use the command sudo systemctl restart docker. This action can reset Docker's networking components and often resolves connectivity and configuration issues impacting pod operations in Minikube.
Step 3. Installing the Operator SDK CLI and OLM
Overview of the OMS Standard Operator
The OMS Standard Operator simplifies containerized deployments by adhering to Kubernetes best practices. It manages applications and components through custom resources, particularly the OMEnvironment resource. This resource allows you to configure:
Application images
Storage options
PostgreSQL and ActiveMQ dependencies
Network policies
Other essential settings
With these configurations, the operator facilitates the deployment of a fully functional OMS environment. As part of this guide, we will install the Operator SDK CLI so that we can use the Operator framework to deploy the OMS operator.
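The Operator SDK CLI installation commands are not reproduced here; a minimal sketch for Linux x86_64, assuming a binary downloaded from the operator-framework GitHub releases page (the version shown is only an example), is:
curl -LO https://github.com/operator-framework/operator-sdk/releases/download/v1.33.0/operator-sdk_linux_amd64
chmod +x operator-sdk_linux_amd64
sudo mv operator-sdk_linux_amd64 /usr/local/bin/operator-sdk
operator-sdk version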
Install OLM on your Minikube cluster by running the following command. The output should be similar to the following, indicating a successful installation:
operator-sdk olm install --version=latest
INFO[0000] Fetching CRDs for version "latest"
INFO[0000] Fetching resources for resolved version "latest"
INFO[0001] Checking for existing OLM CRDs
INFO[0001] Checking for existing OLM resources
INFO[0001] Installing OLM CRDs...
INFO[0001] Creating CustomResourceDefinition "catalogsources.operators.coreos.com"
INFO[0002] CustomResourceDefinition "catalogsources.operators.coreos.com" created
INFO[0002] Creating CustomResourceDefinition "clusterserviceversions.operators.coreos.com"
INFO[0003] CustomResourceDefinition "clusterserviceversions.operators.coreos.com" created
INFO[0003] Creating CustomResourceDefinition "installplans.operators.coreos.com"
[...]
INFO[0011] Creating OLM resources...
INFO[0011] Creating Namespace "olm"
INFO[0012] Namespace "olm" created
INFO[0012] Creating Namespace "operators"
INFO[0013] Namespace "operators" created
INFO[0013] Creating ServiceAccount "olm/olm-operator-serviceaccount"
INFO[0014] ServiceAccount "olm/olm-operator-serviceaccount" created
INFO[0014] Creating ClusterRole "system:controller:operator-lifecycle-manager"
[...]
INFO[0025] Waiting for deployment/olm-operator rollout to complete
INFO[0026] Waiting for Deployment "olm/olm-operator" to rollout: 0 of 1 updated replicas are available
INFO[0033] Deployment "olm/olm-operator" successfully rolled out
INFO[0033] Waiting for deployment/catalog-operator rollout to complete
INFO[0034] Deployment "olm/catalog-operator" successfully rolled out
INFO[0034] Waiting for deployment/packageserver rollout to complete
INFO[0035] Waiting for Deployment "olm/packageserver" to rollout: 0 of 2 updated replicas are available
INFO[0038] Deployment "olm/packageserver" successfully rolled out
INFO[0038] Successfully installed OLM version "latest"
NAME NAMESPACE KIND STATUS
catalogsources.operators.coreos.com CustomResourceDefinition Installed
clusterserviceversions.operators.coreos.com CustomResourceDefinition Installed
installplans.operators.coreos.com CustomResourceDefinition Installed
olmconfigs.operators.coreos.com CustomResourceDefinition Installed
operatorconditions.operators.coreos.com CustomResourceDefinition Installed
operatorgroups.operators.coreos.com CustomResourceDefinition Installed
operators.operators.coreos.com CustomResourceDefinition Installed
subscriptions.operators.coreos.com CustomResourceDefinition Installed
olm Namespace Installed
operators Namespace Installed
olm-operator-serviceaccount olm ServiceAccount Installed
system:controller:operator-lifecycle-manager ClusterRole Installed
olm-operator-binding-olm ClusterRoleBinding Installed
cluster OLMConfig Installed
olm-operator olm Deployment Installed
catalog-operator olm Deployment Installed
aggregate-olm-edit ClusterRole Installed
aggregate-olm-view ClusterRole Installed
global-operators operators OperatorGroup Installed
olm-operators olm OperatorGroup Installed
packageserver olm ClusterServiceVersion Installed
operatorhubio-catalog olm CatalogSource Installed
Note: If the OLM install fails for some reason, uninstall the previous version and then re-install.
To resolve this issue and perform a clean installation of OLM, you can follow these steps:
i. You need to uninstall the existing OLM resources from your Kubernetes cluster. To do this, you can use the kubectl command. Here is a general approach to uninstall OLM:
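A sketch of removing OLM with kubectl (an alternative is the operator-sdk olm uninstall command); review and adjust these to your environment before running them:
kubectl delete clusterserviceversions.operators.coreos.com --all --all-namespaces
kubectl delete subscriptions.operators.coreos.com --all --all-namespaces
kubectl delete catalogsources.operators.coreos.com --all --all-namespaces
kubectl delete operatorgroups.operators.coreos.com --all --all-namespaces
kubectl delete namespace olm operators
# if re-installation still reports existing OLM CRDs, also delete the
# operators.coreos.com CRDs listed in the installation output above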
The above commands delete OLM-related resources in all namespaces. If you want to target a specific namespace instead, replace the --all-namespaces flag with -n <namespace>.
ii. After running the commands to delete OLM resources, verify that there are no remaining OLM resources in your cluster:
kubectl get subscriptions.operators.coreos.com
kubectl get catalogsources.operators.coreos.com
kubectl get operatorgroups.operators.coreos.com
kubectl get clusterserviceversions.operators.coreos.com
If these commands return empty lists, it means that OLM has been successfully uninstalled.
iii. After ensuring that OLM is uninstalled, you can proceed with the installation of the desired OLM version. Refer to the OLM installation command above to re-install OLM.
After installing OLM, you can verify the installation by listing its custom resource definitions (CRDs):
kubectl get crd -n olm
NAME CREATED AT
catalogsources.operators.coreos.com 2023-10-25T00:55:49Z
clusterserviceversions.operators.coreos.com 2023-10-25T00:55:49Z
installplans.operators.coreos.com 2023-10-25T00:55:49Z
olmconfigs.operators.coreos.com 2023-10-25T00:55:49Z
operatorconditions.operators.coreos.com 2023-10-25T00:55:49Z
operatorgroups.operators.coreos.com 2023-10-25T00:55:49Z
operators.operators.coreos.com 2023-10-25T00:55:49Z
subscriptions.operators.coreos.com 2023-10-25T00:55:49Z
You should see the new OLM resources related to the version you installed.
By following these steps, you should be able to uninstall existing OLM resources and perform a clean installation of the desired OLM version in your Kubernetes cluster. Be sure to refer to the specific documentation or instructions for the OLM version you are working with for any version-specific installation steps or considerations.
Overwriting PodSecurityStandards (PSS):
Kubernetes has an equivalent of SecurityContextConstraints (from OpenShift) called PodSecurityStandards (PSS) that enforces different profiles (privileged, baseline, and restricted) at a namespace level. When a restricted profile is defaulted on a namespace, pod spec is enforced to contain the securityContext.seccompProfile.type field with a valid value. In this case, the Operator installation fails because the namespace (olm) has restricted PSS, but the Operator controller deployment does not have the field.
To overcome this, switch to the baseline PSS that does not enforce the securityContext.seccompProfile.type field, by using the following command:
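The command itself is not shown above; labeling the namespace is the standard way to change its Pod Security Standard, for example:
kubectl label --overwrite namespace olm pod-security.kubernetes.io/enforce=baseline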
An image pull secret named ibm-entitlement-key must be created with the IBM entitlement registry credentials in the namespace (project) where you are configuring OMEnvironment. For more information, see the corresponding documentation.
Note: The operator itself is distributed from the open registry; however, most container images are commercial. Contact your IT or enterprise administrator to get access to the entitlement key.
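The command for creating the pull secret is not shown here; a sketch, assuming the cp.icr.io entitled registry used by the image repositories later in this tutorial (create the secret in the oms namespace once that namespace exists, as described in the next step):
kubectl create secret docker-registry ibm-entitlement-key \
  --docker-server=cp.icr.io \
  --docker-username=cp \
  --docker-password=<your-entitlement-key> \
  -n oms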
Step 5. Installing and deploying IBM Sterling Order Management System (OMS) and IBM Order Service
Create a namespace for OMS. This namespace will also be used for OMS sub-applications such as Call Center, Order Hub, Order Service, etc.
kubectl create namespace oms
Configure PostgreSQL and ActiveMQ:
The IBM Sterling OMS Operator can automatically install required middleware such as PostgreSQL and ActiveMQ for development purposes. Note that these middleware instances are intended for development use only.
After following the preceding steps, you will have a Cassandra container running on port 9042 with the necessary keyspaces configured.
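If you still need such a container, the following is a minimal sketch of starting one with Docker; it is not part of the original steps, and the keyspace name is taken from the OMEnvironment yaml later in this tutorial:
# run Cassandra on the host and expose the CQL port
docker run -d --name oms-cassandra -p 9042:9042 cassandra:4.1
# wait for Cassandra to finish starting, then create the keyspace that Order Service points at
docker exec oms-cassandra cqlsh -e "CREATE KEYSPACE IF NOT EXISTS orderservice WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1};"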
Deploying IBM Sterling Order Management System:
i. Create catalog-source.yaml to create your deployment's catalog source, subscription.yaml to manage your OMS operator subscription, and operator-group.yaml to create the operator groups necessary to deploy OMS CRDs:
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: ibm-oms-catalog
  namespace: olm
spec:
  displayName: IBM OMS Operator Catalog
  # update to 'ibm-oms-pro-case-catalog' if using OMS Professional Edition
  image: icr.io/cpopen/ibm-oms-ent-case-catalog:v1.0
  publisher: IBM
  sourceType: grpc
  updateStrategy:
    registryPoll:
      interval: 10m0s
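The subscription and operator group files are not reproduced above; the following is a sketch of what they might contain. The operator package name and channel are assumptions -- check what the ibm-oms-catalog source exposes (for example, with kubectl get packagemanifests -n olm) before applying:
# operator-group.yaml -- lets the operator manage resources in the oms namespace
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: oms-operator-group
  namespace: oms
spec:
  targetNamespaces:
    - oms
# subscription.yaml -- subscribes to the OMS operator from the catalog source above
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: oms-operator
  namespace: oms
spec:
  channel: <channel-name>            # assumption: use the channel exposed by the catalog
  name: <oms-operator-package-name>  # assumption: the package name exposed by ibm-oms-catalog
  source: ibm-oms-catalog
  sourceNamespace: olm
Apply the three files:
kubectl apply -f catalog-source.yaml
kubectl apply -f operator-group.yaml
kubectl apply -f subscription.yaml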
You can validate whether your OMS CRDs are created by checking your Custom Resource Definitions on the Minikube dashboard. For any issues, you can check the logs of your olm-operator pod within your olm namespace.
ii. You will also need to create a persistent volume claim (PVC) to request for storage for your deployment:
Required storage:
PVC                Recommended size   Purpose
oms-pvc            10 GB              OMS shared storage for logs and configuration files
oms-pvc-ordserv    20 GB              Order Service shared storage
To create the PVCs for OMS and Order Service, run:
kubectl apply -f oms-pvc.yaml -n oms
Where your oms-pvc.yaml file contains the following:
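The file contents are not reproduced above; a sketch of oms-pvc.yaml, assuming the sizes from the table and the standard storage class and ReadWriteMany access mode used by the OMEnvironment yaml later in this tutorial:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: oms-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: standard
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: oms-pvc-ordserv
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: standard
  resources:
    requests:
      storage: 20Gi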
iii. After creating the PVCs, run ./cert.sh to generate a TLS secret.
Note: Make sure to remember the passwords that you enter for your truststore and keystore if you do not use the default of mypassword, as you will need them in the next step when creating your secret file.
This script automates the creation of a self-signed certificate and integrates it into your Kubernetes environment. It starts by defining configuration variables such as the hostname, certificate name, and passwords. The script then generates a self-signed certificate using OpenSSL and creates a PKCS#12 certificate from the generated key and certificate. This PKCS#12 certificate is imported into a Java keystore (JKS) using the keytool utility. Finally, the script creates a Kubernetes TLS secret with the generated certificate and key within the specified namespace.
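Based on that description, the following is a minimal sketch of what cert.sh might contain; the hostname, file names, secret name, and default passwords are assumptions, and the actual script may differ:
#!/bin/bash
HOSTNAME=<your-instance-domain-name>
CERT_NAME=tls
STORE_PASSWORD=mypassword
NAMESPACE=oms

# 1. generate a self-signed certificate and private key with OpenSSL
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout ${CERT_NAME}.key -out ${CERT_NAME}.crt -subj "/CN=${HOSTNAME}"

# 2. create a PKCS#12 certificate from the generated key and certificate
openssl pkcs12 -export -in ${CERT_NAME}.crt -inkey ${CERT_NAME}.key \
  -out ${CERT_NAME}.p12 -name ${CERT_NAME} -password pass:${STORE_PASSWORD}

# 3. import the PKCS#12 certificate into a Java keystore (JKS) with keytool
keytool -importkeystore -srckeystore ${CERT_NAME}.p12 -srcstoretype PKCS12 \
  -srcstorepass ${STORE_PASSWORD} -destkeystore ${CERT_NAME}.jks \
  -deststorepass ${STORE_PASSWORD} -noprompt

# 4. create the Kubernetes TLS secret (name assumed from the OMEnvironment yaml below)
kubectl create secret tls ingress-cert --cert=${CERT_NAME}.crt --key=${CERT_NAME}.key -n ${NAMESPACE}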
iv. Pass the passwords you entered for your keystore and truststore within the following oms-secret.yaml file, under the trustStorePassword and keyStorePassword values:
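The secret file itself is not reproduced above; a minimal sketch of oms-secret.yaml, assuming only the two password keys named here (your deployment may require additional entries -- see the product documentation):
apiVersion: v1
kind: Secret
metadata:
  name: oms-secret
type: Opaque
stringData:
  # passwords you chose when running cert.sh (default: mypassword)
  trustStorePassword: mypassword
  keyStorePassword: mypassword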
Note: If you choose to use a different name for your configmap, ensure that you modify the additionalMounts parameter in your om-environment.yaml file accordingly:
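The default configmap name used by the OMEnvironment yaml below is truststoreconfigmap, mounted at /shared/tls.p12 through additionalMounts. A sketch of creating it from the PKCS#12 file produced by cert.sh:
kubectl create configmap truststoreconfigmap --from-file=tls.p12 -n oms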
vi. MQ bindings: If you have an existing MQ bindings file from another deployment, you can create a configmap using the contents of your .bindings file. Otherwise, you can use an empty file for your configmap for testing and update it later if needed:
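A sketch of creating this configmap, assuming the oms-bindings name referenced in the OMEnvironment yaml below:
# use an existing .bindings file, or create an empty one for testing
touch .bindings
kubectl create configmap oms-bindings --from-file=.bindings -n oms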
If you choose to use the same configmap name, your om-environment.yaml should be the following:
kind: OMEnvironment
apiVersion: apps.oms.ibm.com/v1beta1
metadata:
  name: oms
  namespace: oms
  annotations:
    apps.oms.ibm.com/activemq-install-driver: 'yes'
    apps.oms.ibm.com/dbvendor-auto-transform: 'yes'
    apps.oms.ibm.com/dbvendor-install-driver: 'yes'
    apps.oms.ibm.com/refimpl-install: 'yes'
    apps.oms.ibm.com/refimpl-type: 'oms'
    kubernetes.io/ingress.class: 'nginx'
spec:
  networkPolicy:
    podSelector:
      matchLabels:
        none: none
    policyTypes:
      - Ingress
  security:
    ssl:
      trust:
        storeLocation: '/shared/tls.p12'
        storeType: PKCS12
  license:
    accept: true
    acceptCallCenterStore: true
  common:
    jwt:
      algorithm: RS256
      audience: service
      issuer: oms
    appServer:
      ports:
        http: 9080
        https: 9443
    ingress:
      host: <your-instance-domain-name>
      ssl:
        enabled: true
        identitySecretName: ingress-cert
  serverProfiles:
    - name: small
      resources:
        requests:
          cpu: 200m
          memory: 512Mi
        limits:
          cpu: 1000m
          memory: 1Gi
    - name: medium
      resources:
        requests:
          cpu: 500m
          memory: 1Gi
        limits:
          cpu: 2000m
          memory: 2Gi
    - name: large
      resources:
        requests:
          cpu: 500m
          memory: 2Gi
        limits:
          cpu: 4000m
          memory: 4Gi
    - name: huge
      resources:
        requests:
          cpu: 500m
          memory: 4Gi
        limits:
          cpu: 4000m
          memory: 8Gi
    - name: colossal
      resources:
        requests:
          cpu: 500m
          memory: 4Gi
        limits:
          cpu: 4000m
          memory: 16Gi
  upgradeStrategy: RollingUpdate
  servers:
    - appServer:
        libertyServerXml: default-server-xml
        livenessCheckBeginAfterSeconds: 900
        livenessFailRestartAfterMinutes: 10
        serverName: DefaultAppServer
        terminationGracePeriodSeconds: 60
        vendor: websphere
        vendorFile: servers.properties
      image: {}
      name: server1
      profile: large
      property:
        customerOverrides: AppServerProperties
        jvmArgs: JVMArguments
      replicaCount: 1
  orderHub:
    adminURL: 'server1-oms.<your-instance-domain-name>'
    base:
      replicaCount: 1
      profile: 'medium'
  healthMonitor:
    profile: small
    replicaCount: 1
    upgradeStrategy: RollingUpdate
  orderService:
    cassandra:
      keyspace: orderservice
      contactPoints: '<your-instance-ip>:9042'
    configuration:
      additionalConfig:
        log_level: DEBUG
        order_archive_additional_part_name: ordRel
        service_auth_disable: 'true'
        enable_graphql_introspection: 'true'
        ssl_vertx_disable: 'false'
        ssl_cassandra_disable: 'true'
        # note: if you would like to use self-generated keys,
        # you can comment out the following JWT properties
        jwt_algorithm: RS256
        jwt_audience: service
        jwt_ignore_expiration: false
        jwt_issuer: oms
    elasticsearch:
      createDevInstance:
        profile: large
        storage:
          capacity: 20Gi
          name: oms-pvc-ordserv
          storageClassName: 'standard'
    orderServiceVersion: '10.0.2409.2'
    profile: large
    replicaCount: 1
    secret: oms-secret
  jms:
    mq:
      bindingConfigName: oms-bindings
      bindingMountPath: /opt/ssfs/.bindings
  serverProperties:
    customerOverrides:
      - groupName: BaseProperties
        propertyList:
          yfs.yfs.logall: N
          yfs.yfs.searchIndex.rootDirectory: /shared
        derivatives:
          - groupName: AppServerProperties
            propertyList:
              yfs.api.security.enabled: Y
              yfs.interopservlet.security.enabled: false
              yfs.userauthfilter.enabled: false
              xapirest.servlet.jwt.auth.enabled: true
              xapirest.servlet.cors.enabled: true
              xapirest.servlet.cors.allow.credentials: true
              yfs.yfs.searchIndex.rootDirectory: /shared
              # note: if you would like to use self-generated keys,
              # you can uncomment the following JWT properties
              #yfs.yfs.jwt.create.issuer: oms
              #yfs.yfs.jwt.create.audience: osrv
              #yfs.yfs.jwt.create.pk.alias: '1'
              #yfs.yfs.jwt.create.algorithm: RS256
              #yfs.yfs.jwt.create.expiration: 3600
              #yfs.yfs.jwt.oms.verify.keyloader: jkstruststore
    jvmArgs:
      - groupName: JVMArguments
  serviceAccount: default
  image:
    imagePullSecrets:
      - name: ibm-entitlement-key
    oms:
      agentDefaultName: om-agent
      appDefaultName: om-app
      pullPolicy: IfNotPresent
      repository: cp.icr.io/cp/ibm-oms-enterprise
      tag: 10.0.2409.2-amd64
    orderHub:
      base:
        imageName: om-orderhub-base
        pullPolicy: IfNotPresent
        repository: cp.icr.io/cp/ibm-oms-enterprise
        tag: 10.0.2409.2-amd64
    orderService:
      imageName: orderservice
      pullPolicy: IfNotPresent
      repository: cp.icr.io/cp/ibm-oms-enterprise
      tag: 10.0.2409.2-amd64
    pullPolicy: IfNotPresent
  database:
    postgresql:
      name: postgres
      host: oms-postgresql.oms.svc.cluster.local
      port: 5432
      user: postgres
      schema: postgres
      secure: false
      dataSourceName: jdbc/OMDS
  devInstances:
    profile: ProfileColossal
    postgresql:
      repository: docker.io
      tag: '16.1'
      name: postgres
      user: postgres
      password: postgres
      database: postgres
      schema: postgres
      wipeData: true
      profile: ProfileColossal
    activemq:
      repository: docker.io
      tag: 6.1.0
      name: apache/activemq-classic
      profile: ProfileColossal
  storage:
    accessMode: ReadWriteMany
    capacity: 10Gi
    name: oms-pvc
    securityContext:
      supplementalGroups:
        - 0
        - 1000
        - 1001
    storageClassName: 'standard'
  additionalMounts:
    configMaps:
      - mountPath: /shared/tls.p12
        name: truststoreconfigmap
        subPath: tls.p12
      # note: if you would like to use self-generated keys,
      # you can uncomment the following JWT keystore mount
      #- mountPath: /shared/jwtauth/jwt.jks
      #  name: jwt-jks-keystoreconfigmap
      #  subPath: jwt.jks
  # you can comment this property out after your first deployment
  dataManagement:
    mode: create
Note the following for the above OMEnvironment yaml file:
Ensure that you have internet access before starting the k8s operator deployment for the OMS application. This deployment requires downloading a list of images. If the images are not downloaded, the deployment will fail. Alternatively, you can download these images in advance, push them to your local registry, and then perform the deployment by referring to your local registry. The required images are:
docker.io/postgres:16.1
docker.io/apache/activemq-classic:6.1.0
You will need to substitute in your instance's domain name under spec.orderHub.adminURL and spec.common.ingress.host.
Under spec.orderService.cassandra.contactPoints, you will need to include your instance's IP.
The mode property of spec.dataManagement is set to create. The create mode is only required when an empty schema is being set up, so you can comment out this property after your first deployment of your OMS pods.
For upgrading fix packs in the future, you can uncomment this property, set the mode to upgrade, and then re-apply the yaml to install the latest fix packs.
Note: By default, OMS will generate its own keypair using the jwtkeygen job. The public key and the keystore containing the private key will be saved to the /shared/jwtauth directory within your OMS PVC. If you would like to use the OMS-generated keystore and public key, you do not need to follow the instructions below to generate your own keypair; instead, you can modify the OMEnvironment yaml provided above to use JWT tokens generated by OMS. You can then store the public key generated by the jwtkeygen job under /shared/jwtauth/<jwt-alias-name>.pub within your jwt_oms_public_key value in your Order Service secret.
a) If you choose to self-generate your keys, you can use the following commands to do so:
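The commands themselves are not reproduced above; a minimal sketch, assuming the jwt.jks keystore name, the key alias 1, and the public-key.pem file referenced later in this tutorial (the password is a placeholder):
# generate an RSA keypair in a new keystore named jwt.jks, under alias '1'
keytool -genkeypair -alias 1 -keyalg RSA -keysize 2048 -validity 365 \
  -dname "CN=oms" -keystore jwt.jks -storepass <keystore-password> -keypass <keystore-password>

# export the certificate and extract the public key for Order Service
keytool -exportcert -alias 1 -keystore jwt.jks -storepass <keystore-password> -rfc -file jwt-cert.pem
openssl x509 -in jwt-cert.pem -pubkey -noout > public-key.pem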
The keystore is named jwt.jks and is mounted to the OMS PVC directory /shared/jwtauth to override the keystore that OMS would otherwise generate by default.
If you name your keystore something else, such as keystore.jks, OMS will try to read the keystore it generates by itself, which can lead to an Unauthorized error when you try to make your Order Service calls.
You can find more information on setting up a self-generated JWT keypair in the IBM documentation.
As mentioned in the above documentation, when providing your own keystore to OMS you will need to copy it over to your OMS PVC to override the default generated keystore from the jwtkeygen job:
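The copy commands are not reproduced above; one way to do this on Minikube (a sketch, assuming the hostpath-provisioner path mentioned below and a jwtauth subdirectory on the PVC) is:
# copy the keystore into the Minikube node, then place it on the PVC's backing directory
minikube cp jwt.jks /tmp/jwt.jks
minikube ssh "sudo mkdir -p /var/hostpath-provisioner/default/oms-pvc/jwtauth && sudo cp /tmp/jwt.jks /var/hostpath-provisioner/default/oms-pvc/jwtauth/jwt.jks"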
To validate the above, you can re-deploy your OMEnvironment yaml and check the jwtkeygen job logs. You should not see any messages about it creating a new keystore within the /shared/jwtauth directory. You can also use minikube ssh and validate that your self-generated keystore file is present under /var/hostpath-provisioner/default/oms-pvc as expected. You should only see your self-generated keystore file (i.e., jwt.jks), and not an OMS-generated public key file (<alias-name>.pub).
If you followed the above steps to generate your own keystore then you can also validate that your private key within your keystore has an alias of 1 and not operator to ensure that your PVC doesn’t still contain the OMS-generated keystore:
Checking keystore contents – note the private key alias for the next steps:
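The command is not reproduced above; a sketch of listing the keystore entries with keytool (the password is the one you used when generating the keystore):
keytool -list -v -keystore jwt.jks -storepass <keystore-password> | grep -i "alias name"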
If you followed the above steps, your PK alias should be 1.
Once OMS has access to your private key from the keystore located within /shared/jwtauth/jwt.jks, Order Service will also require the contents of your self-generated public key (public-key.pem) to be added to the secret used in your OMEnvironment yaml as jwt_oms_public_key to validate the token.
In step iv above, we created a secret file called oms-secret.yaml. To add your JWT public key to your secret, append the following property to your secret file:
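For example, the following entry would be appended under stringData in oms-secret.yaml (the value shown is a placeholder for your own key):
  jwt_oms_public_key: <single-line-contents-of-public-key.pem>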
Note: For your jwt_oms_public_key value, make sure to exclude the -----BEGIN PUBLIC KEY----- and -----END PUBLIC KEY----- parts from your public-key.pem file and store your public key value in one line without any line breaks.
After doing the above, you can run the following command to add the OMS secret to your deployment:
kubectl apply -f oms-secret.yaml -n oms
viii. Deployment: Run the following command to deploy OMS:
kubectl apply -f om-environment.yaml -n oms
If you are deploying OMS for the first time, expect it to take around 45-60 minutes for your pods to come up, because OMS needs to perform the first-time setup triggered by setting dataManagement.mode to create.
Subsequent deployments will be much quicker, as you can leave this property commented out or set it to upgrade.
Accessing applications after deployment
Start the firewall (if not running):
sudo systemctl start firewalld
Add port 443 to the public zone. Run the following commands to do so:
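For example, with firewalld:
sudo firewall-cmd --zone=public --add-port=443/tcp --permanent
sudo firewall-cmd --reload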