Overview of edge computing - IBM Developer


Overview of edge computing

An edge computing environment distributes and manages workloads beyond the data center or cloud, in or near the locations where an enterprise conducts business.

By Glen Darling

Edge computing is about placing computer workloads (both hardware and software) as close as possible to the edge—to where the data is being created and where actions are occurring. Edge computing environments give customers faster response times, greater data privacy, and reduced data transfer costs.

As small, inexpensive, and networking-capable computers become more powerful, edge computing will become more pervasive in all sectors of the economy.

  • For telecommunications providers, edge computing environments allow customers to run workloads in their multi-access edge computing (MEC) facilities that are embedded in local communities.
  • For farmers, edge computing means monitoring growing conditions at a much finer scale so they can act more precisely and increase yields.
  • For retailers, edge computing means they can manage the lifecycles of their broad-ranging workloads on devices of many types and architectures across multiple store locations.
  • For manufacturers, edge computing environments allow them to use a modern cloud-native style of development and deployment while remaining "air-gapped" (completely disconnected) from the internet.

Processing data at the edge

One significant difference with edge computing is what happens to data at the edge of the network. Where does all this data come from? It's common to think of data as centralized in the cloud, where most data processing probably occurs today. But the cloud is not where data comes from. Data comes from people, from our environment, and from our machinery (cars, factory machines, mobile devices, and so on). When we or our machines act out in the real world, we produce data. The world itself is a vast source of data as well.

In the Internet of Things (IoT) era, computers at the edge were usually just mindless data pumps. Their job was to periodically sample their environment at a particular resolution. Then, they would send corresponding data records to, or passively await commands from, a centralized server in a data center somewhere. Those centralized servers could be in a public cloud, private cloud, or any data center. It really didn't matter. The significant difference is that those IoT devices were not able to make local decisions. They were essentially dumb peripherals for those central servers. Decisions were made centrally. This architecture does not scale well.

Small and inexpensive computers are evolving rapidly. They are much more powerful now and capable of significant local processing, including sophisticated analytics and even artificial intelligence (AI) applications. Their numbers have grown exponentially to the point where simply pumping all their data to the cloud is prohibitively expensive or even impossible. These factors helped to enable the rise of edge computing.

While IoT devices send unfiltered data to the cloud, edge devices enable local processing of the data. With edge computers, you can sample your data at higher resolutions and at greater sampling frequencies. After local analysis, they can still send their data to the cloud, but it can be a much smaller volume of much higher value data. Your central servers can then concentrate on aggregation of data over large numbers of edge computers, covering larger areas, and for longer periods, which means you can more easily observe important trends.
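This local filter-and-forward pattern can be sketched in a few lines. The sensor read and the 1,000-sample window below are illustrative stand-ins, not details from the article; the point is that the device samples at high frequency locally but forwards only a compact, high-value summary upstream.

```python
import random
import statistics

def sample_sensor() -> float:
    """Stand-in for a real sensor read (hypothetical; replace with your device driver)."""
    return 20.0 + random.random() * 5.0

def summarize(window: list[float]) -> dict:
    """Reduce a high-frequency sample window to one compact, high-value record."""
    return {
        "count": len(window),
        "mean": statistics.mean(window),
        "min": min(window),
        "max": max(window),
    }

# Sample locally at high resolution and frequency...
window = [sample_sensor() for _ in range(1000)]

# ...but send only the small summary to the central servers,
# not 1000 raw readings.
summary = summarize(window)
print(summary["count"])  # 1000 samples reduced to one record
```

The central servers then aggregate these summaries across many edge computers, which is where the trends over larger areas and longer periods become visible.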

Performing analytics locally at the edge also produces extremely low stimulus-response times. When both a sensor and an actuator are attached to the same edge device, or to edge servers that are near each other in the network, response times can be just a few milliseconds, or even less than a millisecond in some cases. This is much faster than a typical round-trip time for communications with any remote cloud server. These extremely low latencies enable much faster reactions, so decisions and actions happen when and where they are needed.

Figure: Defining the edges in edge computing, from edge devices to edge servers to the network edge and public or private clouds.

What is the edge?

To telecommunications providers, edge computing is about the edge of their provider networks, which is called the network edge. As these providers roll out their 5G infrastructures, they are able to bring very low latency compute capacity to their customers. By hosting Kubernetes clusters in their MEC facilities at their network edges, they enable businesses to deploy cloud-native applications within a few milliseconds of their customers. In turn, this new edge computing environment enables entirely new business applications.

Highly secure 5G networks also enable direct connectivity in many new locations. Now, individual machines in a retail distribution warehouse, store, bank, or hotel, or on the floor of a factory, can safely communicate with remote services. With 5G, computers can be securely deployed into these new locations, which creates new solutions and business opportunities.

Computers that are deployed at the edge can vary widely. They can be data-center-class machines, which are called edge servers. You might find edge servers in a factory, hospital, distribution warehouse, shipping port, or stadium. These machines are usually x86 hardware, and are mostly fungible assets. That is, a workload that is destined for any of these edge servers can be placed on any of them since they provide a generic compute service. Edge servers are often deployed in clusters that run Kubernetes, thereby enabling cloud-native features such as autonomous workload monitoring, automatic scaling up and down to manage load, and automatic workload migration on failures.

The other type of edge machine is called an edge device. An edge device is typically built for a particular purpose, or is installed in a particular context such that it serves some specific purpose. An example is a smart camera that is installed in a grocery store and pointed at a set of store shelves where various brands of pasta are offered for sale. Another example is a machine that monitors the humidity in an area and turns on a humidifier or dehumidifier as necessary to maintain an appropriate environment.
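The humidity example above can be sketched as a local sense-decide-act loop. The target and threshold values here are hypothetical, but they show why the decision can stay on the device: no round trip to a cloud server is needed before the actuator responds.

```python
def control(humidity: float, target: float = 45.0, band: float = 5.0) -> str:
    """Decide locally which actuator to drive.

    Target and dead-band values are illustrative, not from the article.
    """
    if humidity < target - band:
        return "humidifier_on"    # too dry: add moisture
    if humidity > target + band:
        return "dehumidifier_on"  # too humid: remove moisture
    return "idle"                 # within the acceptable band

# The decision is made on the edge device itself, with no network latency.
print(control(30.0))  # humidifier_on
print(control(60.0))  # dehumidifier_on
print(control(45.0))  # idle
```

The dead band (`band`) keeps the two actuators from rapidly toggling when the reading hovers near the target, a common design choice in simple control loops.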

Therefore, computing workloads on edge devices are not as interchangeable as they are on edge servers. For example, it would make no sense to run smart camera software for shelf monitoring on the humidity manager machine that has no camera and is nowhere near those shelves. Unlike edge servers, edge devices can be extremely diverse in hardware architecture, power consumption, compute capacity, memory, storage, networking ability, attached peripherals, and more. They are also often deployed in environments quite unlike data centers. Basic services such as network connections or even power supplies might be intermittent or otherwise unreliable.

Often, edge computing architectures consist of edge devices that aggregate to edge servers, which might be deployed at the network edge. In turn, those edge servers often aggregate to more centralized hybrid cloud environments. These architectures enable intelligent analysis at very low latency right where the data is created, but they also enable central aggregation, control, and management.

The sheer number of edge computing devices in the world is astonishing: the number of devices in the field has grown into the tens of billions. This proliferation might cause major changes in the ways that people work, play, learn, and live. At this scale, a single large enterprise can have tens of thousands, hundreds of thousands, or even millions of edge machines to manage!

How will humans manage all of these edge machines? How will we make sure that the right software workloads are deployed in all the right places in a timely manner? How will we ensure that urgent security patches are delivered to all of those workloads that require them? How will we prevent malevolent actors from commandeering these devices into a botnet and wreaking havoc with them?

From cloud-native to edge-native

To manage edge servers and devices, you can use lessons learned in the hybrid cloud environment. Decades ago, virtual machines (VMs) replaced dedicated hardware in these environments. More recently, containers revolutionized data centers. Containers are orders of magnitude more efficient than VMs, and they can be run in most places where there is a Linux kernel. As a result, the most common way to deploy software in the cloud today is by using containers. Cloud-native computing mostly refers to the use of containers to deploy scalable microservices. Edge-native architectures move these containerized microservices out to the edge.

With most clouds, you can manage containerized cloud workloads at scale by using Kubernetes clusters. Edge data centers, such as 5G MECs, also often provide Kubernetes for containerized software deployment. Kubernetes can automatically deploy new instances when load increases and remove them when load decreases. It can also keep your applications robust by migrating instances to other machines in the cluster when a machine fails. Kubernetes is a great choice when edge servers are as uniform and fungible as possible.
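As an illustration of the Kubernetes objects involved, a Deployment paired with a HorizontalPodAutoscaler provides exactly this scale-up, scale-down behavior on a cluster of fungible edge servers. The workload name, container image, and thresholds below are placeholders, not details from the article.

```yaml
# Illustrative sketch only: a generic containerized workload plus an autoscaler.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: shelf-analytics            # hypothetical workload name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: shelf-analytics
  template:
    metadata:
      labels:
        app: shelf-analytics
    spec:
      containers:
        - name: analytics
          image: registry.example.com/shelf-analytics:1.0  # placeholder image
          resources:
            requests:
              cpu: "250m"
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: shelf-analytics
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: shelf-analytics
  minReplicas: 1
  maxReplicas: 5
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add instances above 70% CPU, remove below
```

Because the Deployment does not care which node its Pods land on, a node failure simply causes Kubernetes to reschedule the instances elsewhere in the cluster, which is the workload-migration behavior described above.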

Edge devices tend to be more diverse in architecture and capabilities. They have less reliable network connections, and even unreliable power; they are not interchangeable in purpose. They are also less physically secure, often being deployed outdoors or in other unsupervised locations. Edge device environments are very unlike data center environments. Kubernetes, which was designed for managing containers in cloud data centers, is therefore not always the best choice for edge device management.

The diversity of edge computing, from the network edge to edge servers and devices, works best with an open ecosystem. Open ecosystems are enabled by open source software and an array of vendors. The LF Edge organization within the Linux Foundation hosts the Open Horizon project for exactly this purpose. Open Horizon enables edge devices, edge servers (running only Linux and Docker), and entire Kubernetes clusters to be managed simultaneously and securely, at scale, by the same tools. IBM Edge Application Manager is IBM's commercial distribution of Open Horizon. On top of Open Horizon, it adds a graphical web UI to simplify the management of your containers across your entire fleet of edge devices and servers.

Summary

The purpose of edge computing is to bring your applications closer to where the data is created and action must happen. When you do this, you can achieve much faster response times (very low latency from when an event happens until a response occurs). You also get the benefit of less data in transit through the internet, which reduces backhaul costs, and keeps the data more local, secure, and private. With edge computing, you can also analyze more data at higher resolutions and frequencies.

Some of the primary enablers of edge computing include 5G networks and the prevalence of smaller, less expensive, more powerful, and networked computers that support containers on many different hardware architectures.

On one end of the spectrum, edge computing is about providing services at the telco provider edge, enabling mobile and cell phone applications to have extremely fast response times. On the other end of the spectrum, edge computing is about using the greater computing capacity of today's small devices out in the field. There are also many other edge applications between these extremes and edge computing is forecast to grow exponentially throughout the rest of this decade at least.

The edge computing revolution is comparable to the previous mobile computing revolution. It is poised to have a similar, or an even greater, impact on our lives and businesses. Edge computing presents both a huge challenge to manage all that data safely and a huge opportunity to create new value for customers.

IBM Edge Application Manager, which is based on Open Horizon, is designed to help you navigate through this huge paradigm shift. With IBM support, you can be ready to manage the challenges of edge computing and take your business to the next level. A whole new world is coming. Let's create it together.
