How servers become part of the Cloud
Understanding how the cloud works is key to building stable and reliable cloud applications. Should you keep all your servers in one zone or spread them out? What causes capacity errors, and how do you fix them? Where is your workload actually running and what happens if that server fails? How do you ensure it's running on something stable?
To design secure and resilient applications, you need to answer these questions. Knowing how a server becomes part of the cloud and how IBM monitors, detects issues, and fixes them is essential to trusting your workloads with us.
If you’ve ever wondered what goes on behind the scenes to bring a new server into the cloud, this article is for you.
Cloud resiliency and scale come from more than just stacking servers in a room. It takes careful planning, smart design, and coordination of both physical and logical components.
The IBM Cloud is always evolving. We’re constantly adding capacity, launching new features, fixing servers, and upgrading infrastructure to give our customers the best experience possible.
Bringing a new server into the cloud involves a lot of moving parts—from configuration and power to networking and placement. In this article, we’ll walk you through how IBM adds new servers to the VPC cloud: where they live, how they’re set up, how they’re connected, and how they become a trusted part of the IBM Cloud.
Building blocks of a Cloud
To understand how a cloud works, we need to look at the two main components that make it all possible:
Physical components: The actual hardware and infrastructure.
Logical components: The software and systems that organize and manage that hardware.
The physical building blocks of the Cloud
Clouds are made up of real, physical machines. It all starts with servers.
A server (also called a node or host) looks like a big gray metal box. You've probably seen photos of data centers filled with these boxes, blinking LEDs, and endless cables.
Servers are mounted in racks, which look a bit like filing cabinets without drawers. These racks live in server rooms, which are part of data centers. Multiple data centers in a location such as Dallas or London make up a region. These five layers (server, rack, server room, data center, and region) define a server's physical location, but they don't make it part of the cloud yet.
Racks include more than servers. They also hold:
Network switches
Power distribution units (PDUs)
Cables and cable paths
Open space for ventilation
Other specialized components
The arrangement of all these components is called a rack elevation. It ensures consistency across all racks of the same type.
An elevation is a high-level blueprint for how components are arranged in a rack. It defines the placement of servers, switches, power units, and cables, and most importantly, it balances power and heat.
Elevations are carefully designed to manage two key challenges in cloud data centers:
Power consumption: Servers and other devices draw a lot of electricity.
Heat: All that power generates significant heat, which needs to be managed efficiently.
Interestingly, in a cloud data center, nearly everything is a kind of server. The actual servers are just one part of the puzzle — power distribution units (PDUs), network switches, and even smart outlets are all specialized servers that play a role in keeping things running.
Sample H200 server rack elevation
The following sample elevation shows a rack containing H200 GPU servers.
This elevation is designed to:
Spread power usage across different phases and PDUs.
Provide clear server slot mapping.
Include space for high-power GPU servers, such as the gx3d-h200.
Allow room for network switches (for example, TOR and MTOR).
Include fillers and pass-throughs for airflow and cabling.
Powerful servers such as the gx3d-h200 consume more energy and generate more heat, so they’re spaced out. Smaller server classes might fit up to 44 servers in a single rack.
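To make the power-balancing idea concrete, here is a minimal sketch of how an elevation might be modeled in code. The class, slot, and wattage figures are illustrative assumptions, not IBM's actual tooling or numbers.

```python
# Minimal sketch: model a rack elevation and check per-phase power balance.
# All names and numbers are illustrative assumptions, not IBM tooling.
from dataclasses import dataclass

@dataclass
class Component:
    slot: int    # rack unit position
    kind: str    # "server", "switch", "pdu", "filler"
    model: str
    watts: int   # estimated power draw
    phase: str   # which PDU/power phase feeds it

def phase_loads(elevation: list[Component]) -> dict[str, int]:
    """Sum the estimated draw on each power phase."""
    loads: dict[str, int] = {}
    for c in elevation:
        loads[c.phase] = loads.get(c.phase, 0) + c.watts
    return loads

# A toy H200-style elevation: high-power GPU servers spaced out across phases.
elevation = [
    Component(slot=2,  kind="server", model="gx3d-h200", watts=10200, phase="A"),
    Component(slot=10, kind="server", model="gx3d-h200", watts=10200, phase="B"),
    Component(slot=18, kind="server", model="gx3d-h200", watts=10200, phase="C"),
    Component(slot=40, kind="switch", model="TOR",       watts=350,   phase="A"),
    Component(slot=41, kind="switch", model="MTOR",      watts=350,   phase="B"),
]

print(phase_loads(elevation))  # {'A': 10550, 'B': 10550, 'C': 10200}
```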
Each server has a class — a label that tells us about its configuration. For example:
gx3d: Third-generation server with local storage.
h200: A specific GPU server class.
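As a rough illustration, a profile name like gx3d-h200 can be split into its class components. The parsing rules below are a simplified assumption based only on the two examples above, not a documented naming scheme.

```python
# Sketch: split a profile name like "gx3d-h200" into its class parts.
# The interpretation rules are simplified assumptions from the examples above.
def parse_profile(profile: str) -> dict:
    family, _, suffix = profile.partition("-")
    return {
        "family": family,                      # e.g. "gx3d"
        "generation": family[2],               # "3" -> third generation
        "local_storage": family.endswith("d"), # trailing "d" -> local storage
        "variant": suffix or None,             # e.g. "h200" GPU class
    }

print(parse_profile("gx3d-h200"))
# {'family': 'gx3d', 'generation': '3', 'local_storage': True, 'variant': 'h200'}
```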
IBM Cloud supports over 50 server classes, and every VPC cloud offering uses its own elevation, carefully designed to manage power and heat while making the most of available space. These layouts also account for the cabling that connects servers to network devices and power distribution units (PDUs).
If you were to combine all the elevations, you’d have a complete physical map of the VPC cloud.
Each server has a unique physical address that tells you exactly where it sits in a data center.
For example:
dal3-qz1-sr3-rk095-s08
This breaks down as:
dal3: Dallas data center 3
qz1: Quality zone 1 (production)
sr3: Server room 3
rk095: Rack 95
s08: Slot 8
This address tells us where a server is physically located but not its place in the cloud architecture. Interestingly, this same address also serves as the server’s hostname.
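Since the address doubles as the hostname, it is easy to decode programmatically. The following sketch assumes the five-field, dash-separated format shown above.

```python
# Sketch: decode a physical address/hostname like "dal3-qz1-sr3-rk095-s08".
# Assumes the five-field, dash-separated format shown above.
def parse_location(hostname: str) -> dict:
    dc, zone, room, rack, slot = hostname.split("-")
    return {
        "data_center": dc,     # "dal3"  -> Dallas data center 3
        "quality_zone": zone,  # "qz1"   -> quality zone 1 (production)
        "server_room": room,   # "sr3"   -> server room 3
        "rack": rack,          # "rk095" -> rack 95
        "slot": slot,          # "s08"   -> slot 8
    }

print(parse_location("dal3-qz1-sr3-rk095-s08"))
```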
Logical building blocks of the Cloud
The logical pieces help us understand where each server fits within the cloud infrastructure. Every server plays a specific role. Most provide the compute power that hosts the Virtual Server Instances (VSIs) our customers run. Others support management functions, storage, and essential backend services the cloud relies on.
Each server is assigned to a logical MZone. These MZones align with the zones customers see when creating a VSI, such as Dallas Zone 1, 2, or 3.
All of this is tracked in a central system called Platform Inventory. This system brings together rack data, server allocations, and MZone mappings to build a complete logical view of the VPC cloud across all regions and data centers. It powers the services our customers use and is updated hundreds of times every day.
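A hypothetical platform-inventory record might join the physical and logical views of one server like this. The field names and values are illustrative, not the actual schema:

```python
# Sketch: a hypothetical platform-inventory record joining the physical
# and logical views of one server. Field names are illustrative only.
record = {
    "hostname": "dal3-qz1-sr3-rk095-s08",  # physical address (see above)
    "class": "gx3d-h200",                  # server class
    "role": "compute",                     # compute, storage, management, ...
    "mzone": "us-south-1",                 # logical zone customers select
    "state": "production",                 # lifecycle state (covered below)
}
```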
These are the foundational elements that make the VPC cloud possible. Next, let’s explore why and how we add new servers.
Buying a new server
The process of buying a new server begins with a plan, and the first step is securing rack space. Each server needs to be placed in a rack within a server room, and that rack must have enough space, power, and network connectivity.
Once the rack is identified, it needs to be wired up with fiber, optics, and other components to connect it to the network. This requires knowing the network topology of the rack. Will the rack hold 22 small servers or just 2 large GPU servers? The answer affects how the rack is built and what network resources are needed.
The rack elevation defines everything that goes into the rack—from servers and switches down to cable ties. Every component is documented in a Bill of Materials (BOM).
The BOM lists:
Every part needed
Quantity required
Vendors for each part
Details our procurement team uses to place orders
After the BOM is ready, we reach out to vendors, negotiate pricing, and place the orders. Then comes the waiting.
Ordering cloud hardware isn’t like same-day shipping. Some parts can take weeks or even months to arrive. That’s why we carefully manage inventory and forecast capacity, making sure we always have long-lead-time components ready to go when needed.
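A BOM can be represented as simple structured data, which also makes lead-time forecasting straightforward. In the sketch below, part names, vendors, quantities, and lead times are all made up for illustration:

```python
# Sketch: a BOM as structured data, with a hypothetical long-lead-time check.
# Part names, vendors, quantities, and lead times are made up for illustration.
bom = [
    {"part": "gx3d-h200 server", "qty": 4,  "vendor": "vendor-a", "lead_weeks": 16},
    {"part": "TOR switch",       "qty": 2,  "vendor": "vendor-b", "lead_weeks": 8},
    {"part": "PDU",              "qty": 4,  "vendor": "vendor-c", "lead_weeks": 6},
    {"part": "fiber cable",      "qty": 96, "vendor": "vendor-d", "lead_weeks": 2},
]

# Flag anything that should be ordered from forecast rather than on demand.
long_lead = [item for item in bom if item["lead_weeks"] >= 8]
for item in long_lead:
    print(f'order ahead: {item["qty"]}x {item["part"]} ({item["lead_weeks"]} weeks)')
```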
From delivery to production: How we expand the Cloud
When a new part arrives at the data center, we begin by unboxing it, adding an asset tag, and scanning it into our system. From that moment, the part is considered active. Once all required parts for a rack are scanned in, we start the physical build.
Our Data Center Operations team follows a detailed build guide to install, power, and cable everything in the rack. When the build is complete, we add the servers to our platform inventory system.
IBM Cloud uses two inventory systems:
Physical inventory tracks hardware such as servers and components.
Platform inventory tracks the logical structure of the cloud such as zones and roles.
Here’s how we define the two layers:
Physical cloud = Racks and hosts
Logical cloud = Servers and zones
Each server hosts one or more Virtual Server Instances (VSIs), and zones group servers into logical areas. Every production region has three zones.
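This three-zone layout is what lets you spread workloads for resilience, answering the opening question about keeping servers in one zone versus spreading them out. As a toy illustration (not the actual placement logic), round-robin placement distributes instances evenly:

```python
# Toy illustration: round-robin VSI placement across a region's three zones.
# Not the actual IBM scheduler; just shows how three zones enable spreading.
from itertools import cycle

zones = cycle(["us-south-1", "us-south-2", "us-south-3"])
placement = {f"vsi-{i}": next(zones) for i in range(6)}
print(placement)
# {'vsi-0': 'us-south-1', 'vsi-1': 'us-south-2', ..., 'vsi-5': 'us-south-3'}
```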
Server lifecycle in platform inventory
Planned: The server is on the roadmap but not yet installed
Racking: It’s being physically added to a rack
Configuration: Software is installed and settings are applied
Production: The server is ready for customer workloads
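These states form a simple forward-moving state machine. Here is a minimal sketch of how such transitions could be enforced; the transition table mirrors the list above, but the code itself is an illustration, not IBM's inventory system:

```python
# Sketch: enforce the lifecycle states above as a simple state machine.
# The transition table mirrors the article; the code is illustrative only.
TRANSITIONS = {
    "planned": {"racking"},
    "racking": {"configuration"},
    "configuration": {"production"},
    "production": set(),  # terminal for this sketch
}

def advance(current: str, target: str) -> str:
    if target not in TRANSITIONS.get(current, set()):
        raise ValueError(f"illegal transition: {current} -> {target}")
    return target

state = "planned"
for nxt in ("racking", "configuration", "production"):
    state = advance(state, nxt)
    print("server is now", state)
```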
After racking, the server moves into the configuration phase. This is when we decide its role. Most servers become compute nodes, but others may support internal cloud operations.
We then configure the switches and begin the bring-up process. During bring-up, the platform inventory system tracks each step:
Firmware updates
Installing the Release Bundle (our custom software package)
Running validation tests
This phase also helps us catch and fix any physical setup issues.
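As a rough sketch, bring-up tracking can be thought of as running an ordered checklist and recording each result. The step names come from the list above; everything else here is an assumption:

```python
# Sketch: track bring-up as an ordered checklist with recorded results.
# Step names come from the article; the runner itself is an assumption.
import datetime

BRING_UP_STEPS = ["firmware-update", "install-release-bundle", "validation-tests"]

def run_bring_up(hostname: str, run_step) -> list[dict]:
    """Run each step in order, stopping at the first failure."""
    results = []
    for step in BRING_UP_STEPS:
        ok = run_step(hostname, step)
        results.append({
            "step": step,
            "ok": ok,
            "at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        })
        if not ok:
            break  # a failure often points to a physical setup issue
    return results

# Toy run where every step "succeeds".
print(run_bring_up("dal3-qz1-sr3-rk095-s08", lambda host, step: True))
```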
Once all software is installed and all tests pass, the server is ready for the final step: expansion. At this point, it's assigned to a zone and becomes available for use in production.
What could go wrong?
The short answer: anything and everything.
Servers are physical machines. They can be dropped, damaged during shipping, or arrive with missing parts. All of these issues have happened before.
Mistakes and accidents are part of the process. What matters most is how quickly we catch and fix them. The resilience of our cloud depends on detecting problems early, recovering fast, and preventing impact to our customers.
Summary and next steps
Now you know the basic building blocks of how a server becomes part of the IBM Cloud. This understanding helps you:
Plan server placement
Design for high availability
Build stable, cloud-native applications
It also shows the care and structure behind running your critical workloads on a reliable platform.