Top Considerations When Building an Edge Management Solution


Here are the top ten considerations to take into account when building (or selecting) and deploying your edge management solution.

When it comes time to deploy, orchestrate, and monitor your solution at the edge, have you put as much thought into edge management as into your edge application? Developers may have gone so far as to containerize their apps, but is that enough?

Unfortunately, today’s edge environment is not like your mobile phone or your laptop. The edge is often heterogeneous in nature and very remote in location, with resource constraints, security concerns, and all sorts of connectivity variations and challenges. There are many more considerations in managing the edge than in managing a fleet of laptops or phones.

Your edge management solution should:

1) Help manage/monitor the edge nodes as well as the applications/workloads on the nodes. Edge nodes are the host compute platforms that run your edge applications. An edge solution that only manages and monitors the applications isn’t enough. You also need to understand the state and status of the “box” running those applications.
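As a sketch of what a “node plus workloads” status report might look like, here is a minimal, hypothetical check-in record. The field names and values are illustrative assumptions, not any particular product’s schema:

```python
import json
from dataclasses import dataclass, field, asdict
from typing import List

# Hypothetical status record covering both the "box" and the workloads on it.
@dataclass
class WorkloadStatus:
    name: str
    state: str          # e.g. "running", "stopped", "crashed"
    restarts: int = 0

@dataclass
class NodeStatus:
    node_id: str
    uptime_s: int
    disk_free_mb: int
    mem_free_mb: int
    workloads: List[WorkloadStatus] = field(default_factory=list)

    def to_json(self) -> str:
        # One document carries node health and per-workload status together.
        return json.dumps(asdict(self))

status = NodeStatus("edge-042", uptime_s=86400, disk_free_mb=512, mem_free_mb=128,
                    workloads=[WorkloadStatus("sensor-reader", "running", restarts=1)])
print(status.to_json())
```

Reporting both layers in one record lets the back end correlate an application crash with, say, a full disk on the host.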

2) Deploy containerized and native binary workloads. Many have tried to use cloud-native technologies to deploy and orchestrate application workloads at the edge. Containers are great, but today’s edge is heterogeneous, resource-constrained, and potentially unable to run some container runtimes and the enterprise-grade solutions typically used in cloud management. When the edge gets thin, sometimes you have to deploy, orchestrate, and monitor a simple binary package on your edge node.
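A thin agent might dispatch on a workload descriptor to support both cases. The sketch below is a hypothetical illustration; the `kind`, `image`, and `path` fields and the `docker run` invocation are assumptions, not a real agent’s API:

```python
# Hypothetical workload descriptor dispatch: run a container image when a
# runtime is available, or a plain native binary when the edge is "thin".
def launch_command(spec: dict) -> list:
    if spec["kind"] == "container":
        # Assumes a Docker-compatible runtime is present on this node.
        return ["docker", "run", "-d", "--name", spec["name"], spec["image"]]
    if spec["kind"] == "binary":
        # No runtime needed: just exec the packaged binary directly.
        return [spec["path"]] + spec.get("args", [])
    raise ValueError("unknown workload kind: " + spec["kind"])
```

The agent builds the command but the deployment format stays a back-end decision, so one management plane can target fat and thin nodes alike.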


3) Avoid having edge nodes that must always be connected. In some edge deployments, the edge node may only be connected back to the enterprise for brief periods (think of ocean vessels or rail cars). Connectivity, when it is available, can be expensive (in both money and resources such as power). The edge management solution must operate within this constraint. Communications should be initiated by the edge nodes, and only as needed: nodes phone home to provide telemetry, signal that they are having issues, or check for updates. Don’t have the enterprise call down to the nodes to check in or poll for status.
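The node-initiated, phone-home pattern might be sketched as follows. The check-in interval, jitter, and request fields here are illustrative assumptions:

```python
import random

# Hypothetical node-initiated check-in. The node decides when to call home;
# the enterprise never polls down to the node.
def next_checkin_delay(base_s: float = 3600.0, jitter: float = 0.1) -> float:
    # Jitter spreads thousands of nodes' check-ins so the back end
    # isn't hit by every node at the same instant.
    return base_s * (1 + random.uniform(-jitter, jitter))

def phone_home(send, telemetry: dict, has_issue: bool) -> dict:
    # One node-initiated call batches everything: telemetry goes up,
    # alerts are flagged, and pending updates are pulled down.
    request = {"telemetry": telemetry, "alert": has_issue, "check_updates": True}
    return send(request)
```

Batching telemetry, alerting, and update checks into a single call matters when each connection costs money or battery.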

4) Be resilient and fault tolerant. Things happen at the edge. Edge nodes lose connectivity, have peripherals disconnected and reconnected, and suffer power outages and other unexpected reboots all the time. The edge node is not like an environmentally controlled, physically secured data center. The edge management solution must help detect these types of issues, but it must also automatically help get things back in good working order when connectivity, power, or other resources are restored.
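One common way to ride out outages is a disk-backed outbox: telemetry queued while the node is offline survives reboots and is flushed once connectivity returns. A minimal, hypothetical sketch:

```python
import json
import os

# Hypothetical disk-backed outbox: records queued while offline survive
# power loss and are replayed when the link comes back.
def enqueue(path: str, record: dict) -> None:
    # Append one JSON record per line; appends are cheap and crash-tolerant.
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

def flush(path: str, send) -> int:
    # Once connectivity returns, replay everything queued while offline.
    if not os.path.exists(path):
        return 0
    sent = 0
    with open(path) as f:
        for line in f:
            send(json.loads(line))
            sent += 1
    os.remove(path)  # queue drained; start fresh
    return sent
```

Because the queue lives on disk rather than in memory, an unexpected reboot mid-outage loses nothing already recorded.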


5) Include tools to dig into edge issues. When something goes wrong at the edge, how do you diagnose and fix the issue? Again, the edge node is typically resource constrained. It probably doesn’t have the OS tools and analytics data that you might find on your enterprise server or desktop. And because of its remoteness, you are not often going to be sitting in front of an edge node’s monitor – if it even has a monitor at all. Make sure your edge management solution has the tools and data streams available to help diagnose issues – some of these must be provisioned on demand so they don’t take away, during normal operation, the resources the node needs to perform its edge duties.
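On-demand diagnostics can be as simple as switching verbose logging on only while the back end is investigating. A hypothetical sketch using Python’s standard logging module:

```python
import logging

# Hypothetical on-demand diagnostics: verbose output (and any costly
# collectors hung off it) is provisioned only when the back end asks,
# so normal operation stays lightweight.
def set_diagnostics(logger: logging.Logger, enabled: bool) -> None:
    logger.setLevel(logging.DEBUG if enabled else logging.WARNING)

log = logging.getLogger("edge-agent")
set_diagnostics(log, False)  # quiet by default during normal edge duties
```

The same toggle pattern extends to heavier instruments – packet captures, metric streams – that should never run continuously on a constrained node.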

6) Operate on-premises (aka on-prem) or from the cloud. The edge nodes must typically connect to some back-end “controller.” It is from this back-end controller that the “single-pane-of-glass” human interface to the edge is typically offered. Due to the nature of the edge and customer needs, management solutions must provide flexibility to operate the back end completely disconnected from the Internet or other systems. Running on-prem versus running in the cloud is a pretty standard need in edge deployments. Think about a factory where nothing can be connected to the outside world. The back-end controller needs to have connectivity to edge nodes but can’t always be connected to the Internet and/or run in the cloud.

7) Have a small/lightweight agent that minimizes resource utilization at the edge. Edge nodes are often resource constrained. Even when they are not, the valuable edge node resources (compute, memory, storage, network) need to support the job of collecting edge data and computing decisions to take action at the edge. The edge node’s precious resources can’t all be spent running the edge management solution’s agent. When the management agent takes more resources than the edge applications themselves, there’s a problem.
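An agent could even police its own footprint against a budget. The sketch below uses Python’s Unix-only `resource` module, and the budget value is an assumption for illustration, not a recommended limit:

```python
import resource

# Hypothetical self-check: the agent samples its own peak memory and flags
# when it exceeds a budget, so management never starves the edge workloads.
def over_budget(max_rss_mb: float) -> bool:
    usage = resource.getrusage(resource.RUSAGE_SELF)
    # On Linux ru_maxrss is reported in KiB (on macOS it is bytes).
    rss_mb = usage.ru_maxrss / 1024
    return rss_mb > max_rss_mb
```

A self-reporting agent lets the back end spot management-plane bloat across the fleet before it crowds out the applications.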

8) Offer a user-friendly interface that simplifies the edge for humans. The edge can be large. In some cases, the edge management solution must manage hundreds or thousands of edge nodes and the applications on those nodes. The edge management solution has to simplify the edge picture for the operators in the loop. For example, don’t force a person to drill into each node to get a sense of the health and status of the nodes under management. Instead, the management solution needs to provide a means to alert the operator to issues (current or potential) and to behavior that deviates from the norm.

Of course, this also means having a user interface that helps combine and filter issues when they relate to the same source (being overwhelmed by alarms such that the real issue is hidden is a common problem in operational environments). Allow operators to focus on monitoring and fixing issues, not on figuring out where an issue is and locating all the data associated with it. Also, recognize that some operators will build their own tools or have their own preferred ways to “see what’s going on.” So, the edge management solution should also provide APIs and CLIs for scripting or building alternate tools.
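Rolling related alarms up by their common source is one way to keep the real issue visible. A minimal, hypothetical sketch (the alert fields are assumptions):

```python
from collections import defaultdict

# Hypothetical alarm roll-up: alerts sharing a root source are grouped so
# one failing switch doesn't surface as hundreds of separate node alarms.
def group_alerts(alerts: list) -> list:
    grouped = defaultdict(list)
    for alert in alerts:
        grouped[alert["source"]].append(alert["message"])
    # One summary entry per source, noisiest source first.
    return sorted(((src, len(msgs)) for src, msgs in grouped.items()),
                  key=lambda kv: -kv[1])
```

An operator scanning the summary sees “switch-7: 2 alarms” instead of two unrelated-looking node failures, and the same grouped data can feed an API or CLI.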

9) Scale to meet edge needs. Deploying and managing a few edge nodes in a lab works well for a demonstration of the technology, but ensure your edge management works at your edge scale and distance. Think about worst-case scenarios. If your edge is deployed across the globe and consists of thousands of edge nodes, what happens if the edge management solution struggles at that scale? When edge management fails, and rolling trucks to edge nodes is the only option, then your edge management solution has not properly addressed edge scale.

10) Help before zero-day. Zero-day, the day your edge solution first goes into operation, is a big day for your edge applications and your edge management solution. Monitoring it all and addressing issues is an important part of the edge management solution. But before zero-day, how does your edge management solution help?

As the edge node boxes arrive at their deployment location, how does the edge management solution know of their existence? How did the operating system and firmware get on those boxes? For that matter, how did infrastructure software (like Docker or virtual machine infrastructure) and the edge management agent get to the edge nodes? Does the rollout of your edge solution require lots of technicians to touch the box first? Does your edge management solution get you pretty close to zero-touch provisioning of your edge systems (a situation whereby the edge nodes get plugged into power and network and provision themselves)? Good edge management should help reduce the burdens of solution operation even before zero-day.
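Zero-touch provisioning might look like the following first-boot sketch, where a node identifies itself by serial number and pulls what it needs. The `fetch` callback and the `/provision` endpoint are assumptions for illustration, not a real API:

```python
# Hypothetical zero-touch first boot: a node that has only power and network
# announces its serial number and pulls everything else it needs.
def bootstrap(node_serial: str, fetch) -> list:
    # `fetch` is an assumed callback that retrieves this node's manifest
    # from a well-known provisioning endpoint.
    manifest = fetch("/provision/" + node_serial)
    plan = []
    # Install in a fixed, safe order: OS config, then infrastructure
    # (e.g. a container runtime), then the agent, then workloads.
    for step in ("os_config", "infrastructure", "agent", "workloads"):
        if step in manifest:
            plan.append(step)
    return plan
```

With a flow like this, the technician’s job shrinks to plugging in power and network; the management solution handles the rest before zero-day.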

A solution that takes these points into consideration should help you meet your edge management needs.

Jim White

About Jim White

Jim White is the CTO at IOTech. Jim has over 25 years of experience in software development for IoT Edge systems, enterprise application integration and mobile applications. Most recently, Jim was a Distinguished Engineer and Director of the IoT Platform Development Team within the IoT Solutions Division of Dell Technologies, where he was the chief architect for Dell's largest open-source effort to date, EdgeX Foundry. EdgeX is an open framework for building industrial IoT Edge computing systems and is now a Linux Foundation (LF) Edge project. Jim will continue to serve as Vice Chair of the EdgeX Technical Steering Committee. Prior to Dell, Jim was a partner at Intertech, specializing in Java and .NET application development. Jim is co-author of 'Java2 Micro Edition: Java in Small Things', a Lynda.com author, and a frequent conference speaker.
