Most people talk about edge computing as a single type of architecture. In some respects, though, it makes sense to think of edge computing as two fundamentally distinct types of architectures: the device edge and the cloud edge.
Although a device edge and a cloud edge serve the same basic goal, moving workloads closer to users, they cater to different types of use cases and pose different challenges.
Here’s a breakdown of how device edge and cloud edge compare.
Edge computing, defined
First, let’s briefly define edge computing itself.
Edge computing is any type of architecture in which workloads are hosted closer to the “edge” of the network — which typically means closer to end-users — than they would be in conventional architectures that centralize processing and data storage inside large data centers.
By moving workloads closer to the users who need to access them, edge computing can improve performance significantly, especially in contexts (like self-driving cars or automated manufacturing lines) where even slight delays caused by network latency or bandwidth constraints would be unacceptable.
Two ways to build an edge architecture
There are two main ways to implement an architecture that brings workloads closer to users.
The first is a device edge. In a device edge, workloads are offloaded to individual devices located on the edge of the network.
For example, a device edge could be powered by IoT sensors that store and/or process the data they collect. The advantage is that the data can be analyzed without first traveling to a central data center, which takes time.
A device edge could also consist of end-user devices, such as smartphones. If you offload processing to those devices, users will typically see faster results than they would if processing happened in a central data center and they had to wait for results to arrive over the network.
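The idea of processing data on the device itself can be sketched in a few lines. This is a minimal illustration, not a real IoT framework: the `EdgeSensor` class and its threshold are hypothetical, and the point is simply that the device aggregates readings locally and only a small summary (here, flagged anomalies) would ever need to cross the network.

```python
from statistics import mean

class EdgeSensor:
    """Hypothetical device-edge sketch: aggregate readings on-device
    so raw data never has to travel to a central data center."""

    def __init__(self, threshold: float):
        self.threshold = threshold  # readings above this are flagged
        self.buffer: list[float] = []

    def record(self, reading: float) -> None:
        self.buffer.append(reading)

    def summarize(self) -> dict:
        """Local processing: compute a summary and flag anomalies on-device."""
        anomalies = [r for r in self.buffer if r > self.threshold]
        summary = {
            "count": len(self.buffer),
            "mean": mean(self.buffer) if self.buffer else None,
            "anomalies": anomalies,  # only these need to leave the device
        }
        self.buffer.clear()
        return summary

sensor = EdgeSensor(threshold=75.0)
for value in [70.1, 71.3, 80.2, 69.8]:
    sensor.record(value)

report = sensor.summarize()
print(report["count"], report["anomalies"])
```

Only the summary dictionary would be uploaded; the four raw readings stay on the device, which is where the latency and bandwidth savings come from.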
The second is a cloud edge. In a cloud edge, data storage or processing takes place on conventional servers. The servers are closer to end-users than centralized data centers are, but they are still servers.
A cloud edge could consist of small data centers that are strategically located close to the users they serve. A Content Delivery Network, or CDN, is a classic example of this type of setup (and one that, incidentally, was widely used long before edge computing came into vogue).
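The routing logic behind a CDN-style cloud edge can be illustrated with a toy example. Everything here is assumed for illustration: the region names and latency figures are placeholders, and a real CDN resolves users to a point of presence via DNS or anycast rather than a lookup table. The sketch only shows the core idea of serving each user from the nearest edge location.

```python
# Assumed, illustrative round-trip latencies (ms) from one user
# to a handful of hypothetical edge locations.
EDGE_LOCATIONS = {
    "us-east": 12,
    "eu-west": 85,
    "ap-south": 140,
}

def nearest_edge(latencies_ms: dict[str, int]) -> str:
    """CDN-style routing sketch: pick the edge location with the
    lowest measured latency for this user."""
    return min(latencies_ms, key=latencies_ms.get)

print(nearest_edge(EDGE_LOCATIONS))  # prints "us-east"
```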
Differences between cloud and device edge
Both a device edge and a cloud edge bring workloads closer to users to enhance performance. But they are different in certain key respects:
- Capacity: A device edge is likely to have more limited compute, memory, and storage resources than a cloud edge. You can only store and process data within the constraints of small-scale devices rather than having conventional servers at your disposal.
- Security: There are security challenges to address with both device and cloud edges, but they are somewhat different. Arguably, it is easier to secure data on a cloud edge, where it is more centralized, than on a device edge, where it may be hard to keep track of and secure each individual device.
- Latency: In a cloud edge, a network still separates the servers from end-users. A device edge that places workloads directly on end-user devices can take the network out of the picture entirely, provided the data both originates and is consumed on the device, which effectively eliminates network latency.
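The latency point above can be made concrete with back-of-the-envelope arithmetic. All the numbers here are assumed placeholders, not measurements; the model is simply end-to-end time = network round trip + processing time, with processing cost assumed equal everywhere.

```python
PROCESSING_MS = 5  # assumed processing cost, taken as equal in all cases

# Assumed round-trip network latencies (ms) for each architecture.
round_trips_ms = {
    "central data center": 120,  # long haul to a remote region
    "cloud edge": 15,            # nearby point of presence
    "device edge": 0,            # data never leaves the device
}

# End-to-end time = round trip + processing.
totals_ms = {name: rtt + PROCESSING_MS for name, rtt in round_trips_ms.items()}

for name, total in totals_ms.items():
    print(f"{name}: {total} ms end-to-end")
```

Under these assumptions the device edge is bounded only by processing time, while the cloud edge pays a small network cost and the central data center a large one.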
Because of these differences, it sometimes makes sense to treat edge computing not as a generic category but as two distinct types of architectures. Rather than asking, “Should we use edge computing?” ask, “Should we use a device edge, a cloud edge, or no edge at all?”