Deploy Guardium Data Security Center on a single-node VMware vSphere cluster
In today’s data-driven landscape, securing sensitive data across hybrid environments is a top priority. IBM Guardium Data Security Center helps organizations monitor, audit, and protect structured and unstructured data in real time. This tutorial walks you through the steps to install and configure the Guardium Quantum Safe component of Guardium Data Security Center on a single-node OpenShift cluster running on VMware vSphere. These steps are ideal for proof-of-concept, development, or light production use cases.
This tutorial is especially helpful for:
Security engineers and architects evaluating Guardium’s capabilities
DevSecOps teams integrating security into CI/CD pipelines
Administrators tasked with standing up compliant environments quickly
Unlike traditional product documentation, this hands-on tutorial offers a real-world walkthrough that includes:
Manual configuration of networking, DNS, and storage in a vSphere environment
Insights into OpenShift’s single-node deployment for constrained or isolated environments
Custom Resource creation and operator-based installation of Guardium components
Prerequisites
Before completing this tutorial, be sure that you have:
Familiarity with the Red Hat OpenShift Container Platform installation and update processes
Familiarity with the different cluster installation methods available
An environment suitable for cluster installation
All necessary hardware and software requirements for Red Hat OpenShift and IBM Guardium Data Security Center
In this tutorial, we use:
Red Hat OpenShift Container Platform, version 4.16
IBM Guardium Data Security Center – IBM Guardium Quantum Safe, version 3.6.2
Part 1. Install a single-node Red Hat OpenShift cluster manually
A single-node Red Hat OpenShift cluster can be deployed using standard installation methods. This setup is particularly suitable for lightweight application deployment workloads.
Note: A single-node cluster does not provide high availability (HA), which means it is vulnerable to failures that could impact availability.
To ensure a successful installation, review the Red Hat OpenShift documentation for single-node deployments and the VMware vSphere installation requirements.
Step 1: Install required tools on the installer node
Download the Red Hat OpenShift Container Platform tools from the Red Hat OpenShift mirror site.
Select the appropriate architecture and Red Hat OpenShift version.
Download and extract the oc CLI and openshift-install binaries, as shown in the example that follows. After the cluster is available, you can log in to the Red Hat OpenShift environment with oc login <OCP endpoint>.
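For example, on a Linux installer node the download and extraction might look like the following. The mirror paths and file names are assumptions based on the 4.16 stable channel; adjust them to the architecture and version you selected.
# Download the OpenShift client and installer archives (example paths for 4.16)
curl -LO https://mirror.openshift.com/pub/openshift-v4/clients/ocp/stable-4.16/openshift-client-linux.tar.gz
curl -LO https://mirror.openshift.com/pub/openshift-v4/clients/ocp/stable-4.16/openshift-install-linux.tar.gz

# Extract the oc, kubectl, and openshift-install binaries to a directory on your PATH
tar -xzf openshift-client-linux.tar.gz -C /usr/local/bin oc kubectl
tar -xzf openshift-install-linux.tar.gz -C /usr/local/bin openshift-install

# Confirm the client version
oc version --client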
Step 2: Configure DHCP and DNS for OpenShift
Correct DHCP and DNS configuration is essential to ensure proper cluster functionality.
Configure your DHCP server to assign persistent IP addresses to nodes.
Set up DNS resolution and reverse DNS resolution.
Ensure the required DNS records are configured:
Usage | Fully Qualified Domain Name | Description
Kubernetes API | api.[cluster_name].[base_domain] | DNS A/AAAA or CNAME record resolvable externally and internally
Internal API | api-int.[cluster_name].[base_domain] | DNS A/AAAA or CNAME record resolvable internally
Ingress route | *.apps.[cluster_name].[base_domain] | Wildcard DNS A/AAAA or CNAME record targeting the node
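Before you start the installation, you can confirm that these records resolve from the installer node. The checks below assume the cluster name ocptest and base domain example.com used in the example install-config.yaml later in this tutorial; the node IP address is a placeholder for your own.
# Forward resolution for the API, internal API, and a sample ingress hostname
dig +short api.ocptest.example.com
dig +short api-int.ocptest.example.com
dig +short console-openshift-console.apps.ocptest.example.com

# Reverse resolution for the node IP address (replace with your node's address)
dig +short -x 10.0.0.10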
Step 3: Create the installation configuration file manually
Note: To create the installation configuration file, you will need:
An SSH public key for accessing the cluster nodes
The OpenShift installation program and the pull secret that you obtained previously
To create the installation configuration file:
Create an installation directory: mkdir <installation_directory>
Generate and customize the install-config.yaml file within the directory.
Save and back up the file for future installations.
Example install-config.yaml for VMware vSphere
additionalTrustBundlePolicy: Proxyonly
apiVersion: v1
baseDomain: example.com
compute:
- architecture: amd64
  hyperthreading: Enabled
  name: worker
  platform: {}
  replicas: 0
controlPlane:
  architecture: amd64
  hyperthreading: Enabled
  name: master
  replicas: 1
  platform:
    vsphere:
      cpus: 24
      coresPerSocket: 2
      memoryMB: 307200
      osDisk:
        diskSizeGB: 500
metadata:
  creationTimestamp: null
  name: ocptest
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  networkType: OVNKubernetes
  serviceNetwork:
  - 172.30.0.0/16
platform:
  vsphere:
    failureDomains:
    - name: generated-failure-domain
      region: generated-region
      server: <vsphere server ip or FQDN>
      topology:
        computeCluster: </Datacenter/host/NewCluster>
        datacenter: <Datacenter>
        datastore: </Datacenter/datastore/datastore1>
        networks:
        - <LAN>
        resourcePool: </Datacenter/host/NewCluster//Resources>
        folder: </Datacenter/vm/ocp-infra>
      zone: generated-zone
    vcenters:
    - datacenters:
      - Datacenter
      password: <"password">
      port: 443
      server: <vsphere server ip or FQDN>
      user: <vsphere user name>
    diskType: thin
fips: false
pullSecret: ''
sshKey: ''
Step 4: Deploy the OpenShift cluster
Generate the Kubernetes manifests. For installation_directory, specify the installation directory that contains the install-config.yaml file you created.
Note: Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. The kubeadmin-password and kubeconfig files are created in the ./<installation_directory>/auth directory.
Deploy the cluster using the installation program. For installation_directory, specify the directory name to store the files that the installation program creates.
Note: To view different installation details, specify warn, debug, or error instead of info.
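A minimal sketch of these two commands, assuming the openshift-install binary from Step 1 is on your PATH:
# Generate the Kubernetes manifests from install-config.yaml
openshift-install create manifests --dir <installation_directory>

# Deploy the cluster; replace info with warn, debug, or error for different detail
openshift-install create cluster --dir <installation_directory> --log-level=info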
Step 5: Verify the OpenShift cluster deployment
When the cluster deployment completes successfully, the terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user.
Credential information also outputs to the installation log. To check the installation log:
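The installer writes this log to the installation directory, so a simple check looks like the following:
# Review the installation log, including the credential summary
cat <installation_directory>/.openshift_install.log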
Step 6: Log in to the cluster by using the CLI
You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation.
Note: To complete this step, you must have successfully deployed the OpenShift Container Platform and installed the oc CLI.
For installation_directory, specify the path to the directory where you stored the installation files.
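For example, using the auth directory mentioned in the earlier note:
# Export the kubeconfig created by the installer and confirm the logged-in identity
export KUBECONFIG=<installation_directory>/auth/kubeconfig
oc whoami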
Confirm that the cluster recognizes the machines:
[root@localhost tamil]# oc get node
NAME STATUS ROLES AGE VERSION
tamil-mxpv5-master-0 Ready control-plane,master,worker 6d14h v1.29.10+67d3387
Step 7: Review the OpenShift dashboard
Review the Red Hat OpenShift dashboard and confirm your cluster details, such as the cluster version, node status, and operator health.
Step 8: Configure the image registry storage
The image registry operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the registry operator is made available.
After installation, you must edit the image registry operator configuration to switch the managementState from Removed to Managed. When this is complete, you must configure storage.
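One way to make this change without opening an editor is an oc patch command; the following is a sketch against the standard image registry configuration resource (verify it against your cluster before running):
# Switch the Image Registry Operator managementState from Removed to Managed
oc patch config.imageregistry.operator.openshift.io/cluster --type merge --patch '{"spec":{"managementState":"Managed"}}'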
To allow the image registry to use block storage types such as vSphere Virtual Machine Disk (VMDK) during upgrades as a cluster administrator, you can use the Recreate rollout strategy.
Note: Block storage volumes are supported but not recommended for use with the image registry on production clusters. An installation where the registry is configured on block storage is not highly available because the registry cannot have more than one replica.
Enter the following command to set the image registry storage as a block storage type, patch the registry so that it uses the Recreate rollout strategy, and run it with only one replica:
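A sketch of that command, using the same configuration resource as above:
# Use the Recreate rollout strategy and a single replica so the registry can run on block storage
oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{"spec":{"rolloutStrategy":"Recreate","replicas":1}}'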
Modify the registry configuration to use the new storage:
oc edit config.imageregistry.operator.openshift.io -o yaml
Example output
storage:
  pvc:
    claim:
Note: Leave the claim field blank to allow the automatic creation of an image-registry-storage PVC.
Example image registry PVC
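A minimal sketch of such a claim, assuming the default openshift-image-registry namespace and the image-registry-storage claim name that the operator expects; adjust the requested size and add a storageClassName for your environment:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: image-registry-storage
  namespace: openshift-image-registry
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi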
Part 2. Deploy IBM Guardium Data Security Center
The deployment described in this tutorial includes the installation of IBM Guardium Quantum Safe, which consists of three components. The detailed deployment process for Guardium Quantum Safe is similar to the Guardium Data Security Center deployment; you can find more details in the IBM Guardium Data Security Center documentation.
Step 3: Create a Guardium Data Security Center instance using a custom resource
For reference, see this Guardium Data Security Center Cookbook custom resource YAML file.
Apply the example custom resource YAML file to create the Guardium Data Security Center instance:
oc apply -f <guardium-data-security-center-custom-resource-example.yaml>
Verify the status of the deployed instance:
oc get guardiumdatasecuritycenter
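To find the exposed route for the UI, you can list the routes in the namespace where you created the instance; the namespace placeholder below is an assumption, so use the one from your custom resource:
# List the routes exposed for the Guardium Data Security Center instance
oc get routes -n <guardium-namespace>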
Guardium Data Security Center login page
You can now navigate to the exposed route for the Guardium Data Security Center UI. You should see the following login screen:
When you have successfully logged in to Guardium Data Security Center, you’ll see the main dashboard. This is your control center for data activity monitoring, policy management, and compliance reporting.
Summary and next steps
In this tutorial, you have:
Deployed a Red Hat OpenShift 4.16 single-node cluster on VMware vSphere
Installed necessary tools including the OpenShift CLI and created install configs
Configured DHCP, DNS, and storage (including image registry PVC)
Deployed and verified the IBM Guardium Data Security Center v3.6.2 components using Operator Lifecycle Manager and custom resources
Optionally integrated LDAP authentication for centralized access control
Your environment is now ready to begin data security operations using Guardium. You can use the Guardium Data Security Center dashboard to:
Monitor data access and behavior
Define policies for sensitive data protection
Trigger alerts and generate compliance reports
To continue to expand your knowledge and skills, check out these additional IBM Developer resources: