Cutting Through the Fog: The Speediness of Edge, Hybrid, and All-Cloud


When trying to determine where to carry out analysis, application characteristics and edge-cloud transfer bandwidth are the key factors affecting performance.

In recent years, there’s been a push toward the edge with the formulation of “fog computing,” in which assets and processing are deployed wherever is most suitable for the application at hand along the spectrum between the centralized cloud and the edge environment.

As enterprises move into the real-time economy, there’s been debate about which point on this spectrum delivers the best performance or fastest responsiveness. Cloud services provide capacity and processing on demand, where and when needed, but are often slowed by latency as data and commands traverse networks of networks. Hybrid arrangements offer some degree of local processing, and data movement may be accelerated through in-memory systems, but speeds can be inconsistent. Moving processing entirely to edge devices may deliver rapid on-site analytics, but the results cannot be readily shared across enterprises.

See also: Why Edge Computing Can Help IoT Reach Full Potential

That’s the gist of a presentation and paper delivered at the recent IEEE Edge Computing conference by a team of researchers led by Dumitrel Loghin of the National University of Singapore. The team put these three major modes of computing to the test and concluded that, when it comes to speed, it’s a draw among them. Their measurement-driven analysis “reveals a diverse performance landscape where there is no clear winner among cloud-only, edge-only and hybrid processing. However, application characteristics and edge-cloud transfer bandwidth are the key factors affecting performance.”

The team conducted their measurements across seven different MapReduce applications on two low-power edge devices and on the AWS cloud. While not all MapReduce applications are suitable for hybrid edge-cloud processing, for those that were, the researchers analyzed speeds on both separate edge and cloud clusters and on a single combined edge-cloud cluster.

A number of factors affected performance, the researchers found, including application characteristics, such as selectivity, and edge-cloud bandwidth. For example, the two primary Hadoop applications measured showed varying speeds based on intra- and inter-cluster networking links. The single hybrid MapReduce cluster was 41% and 63% slower than separate edge and cloud clusters for some selected processes, but 85% and 100% faster for others.
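To see why selectivity and edge-cloud bandwidth can flip the winner, here is a back-of-envelope model, not taken from the paper: it estimates completion time for edge-only, cloud-only, and hybrid processing, where a selective application filters most of its input at the edge before shipping it to the cloud. All hardware and workload numbers are hypothetical values chosen only for illustration.

```python
# Illustrative model (assumption: map and reduce each account for half the
# total compute; all parameter values below are hypothetical).

def completion_time(data_gb, selectivity, edge_gflops, cloud_gflops,
                    bandwidth_gbps, work_gflop_per_gb, mode):
    """Rough completion time in seconds for one processing mode.

    selectivity: fraction of input data that survives the map stage and
    must cross the edge-cloud link (lower = more filtering at the edge).
    """
    work = data_gb * work_gflop_per_gb          # total compute, in GFLOP
    if mode == "edge":
        return work / edge_gflops               # no transfer needed
    if mode == "cloud":
        transfer = data_gb * 8 / bandwidth_gbps # ship the raw input (Gb)
        return transfer + work / cloud_gflops
    if mode == "hybrid":
        # map at the edge, ship only the selected data, reduce in the cloud
        map_t = 0.5 * work / edge_gflops
        transfer = data_gb * selectivity * 8 / bandwidth_gbps
        reduce_t = 0.5 * work / cloud_gflops
        return map_t + transfer + reduce_t
    raise ValueError(mode)

# Hypothetical setup: 10 GB of input, a slow edge (20 GFLOPS), a fast
# cloud (200 GFLOPS), and a thin 100 Mb/s edge-cloud link.
for sel in (0.01, 0.5):
    for mode in ("edge", "cloud", "hybrid"):
        t = completion_time(10, sel, 20, 200, 0.1, 50, mode)
        print(f"selectivity={sel:<5} {mode:>6}: {t:7.1f} s")
```

Under these made-up numbers, the hybrid mode beats edge-only when the application is highly selective (little data crosses the thin link), but loses badly when half the data must be transferred, echoing the paper’s finding that there is no single winner.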

The researchers also note that the setups were created “using three AWS cloud regions, such that two of them simulate the edge and the third one represents the cloud. We choose to simulate the edge using cloud instances because we want to analyze the influence of transfer time and bandwidth rather than the effect of hardware on cloud speedup over the edge. By using the same type of nodes for the edge and cloud, we minimize the effect of cloud speedup. Moreover, we use two regions for the edge because in real-world scenarios organizations have more than one edge cluster to aggregate the data from.”


About Joe McKendrick

Joe McKendrick is RTInsights Industry Editor. He is a regular contributor to Forbes on digital, cloud and Big Data topics. He served on the organizing committee for the recent IEEE International Conference on Edge Computing (full bio). Follow him on Twitter @joemckendrick.
