The edge is where data gets generated, events occur, and things and people interact. The key is putting intelligence there.
The Internet of Things (IoT) holds great promise for improving operational efficiencies and vastly reducing costly downtime. But for IoT to realize its potential, computational challenges must be overcome. Even with the immense power of the cloud to run machine learning and artificial intelligence algorithms, vast amounts of data must be quickly moved and analyzed before actions can be taken. That’s where edge computing can help.
To get a better understanding of the biggest computing hindrances for IoT and where edge computing can be effectively applied, we sat down with Joe Speed, CTO of IoT Solutions and Technology at ADLINK, a provider of leading-edge computing solutions that support the transition to connected Industrial IoT systems across industries.
The computational challenges of IoT projects
RTInsights: What computational challenges do companies typically face when deploying a major IoT project?
Speed: With the Internet of Things, you’re talking about working with the things themselves. So people, places, equipment, and devices. That’s where events occur. That’s where data is generated that gives you information about the current state of these entities and what they are doing. The question is, how do you make sense of all the data?
See also: Why Practical Business Applications for IoT are Still Lagging
When I started in this area, IoT to us largely meant connecting all the things to the cloud. And so with every event that occurs in, on, or near the things, the idea was to get all that data up to the cloud so we could do some Big Data analysis, we could put the data into data lakes, and we could try to make sense of the data.
There are a few challenges with that. One is that things have to be connected, and they have to be connected all the time. If you have a high rate of telemetry, if there’s a lot of data being created, you have to be able to get all of the data to the cloud. That may not be economically or physically feasible.
If I have a very high rate of very “voluminous” data that I’m generating in these things, and it’s in a copper mine somewhere in the Outback, that could be a challenge. It could be prohibitively expensive, even physically impossible to get all of that data to the cloud.
There are other issues to consider. What if I bring the data to the cloud, make sense of it in the cloud, and then realize a machine’s about to break, a building’s going to burn down, or a person’s going to be injured? If I send back the appropriate actions, it’s probably too late. Too much time has elapsed, so the machine has broken, the building’s burned down, a person’s been injured.
One of the things that you start to realize is that you really need to put the computation, the analysis, and the intelligence at the edge.
What is edge?
RTInsights: What do you mean by “edge” here?
Speed: It means putting the intelligence on, in, or near the thing so that you can take this high volume of information, the data, and all the events, and make sense of it quickly, and take action quickly.
The cloud definitely still has a role. Imagine you’re doing fleet management. You have a large number of things like power generators, robots, and trucks. Instead of streaming raw data to the cloud, what if you could actually make sense of it at the edge? You still stream to the cloud, but you stream information – the results of the analytics.
Instead of streaming raw engine telemetry to the cloud, you actually send to the cloud things like “the engine is healthy,” “the engine is operating at 98% efficiency,” “the thermal profile is good.”
This changes the volume of data by orders of magnitude. This helps with the economics since the mobile operators charge by the byte. It also helps with the physics as to what’s possible.
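The reduction Speed describes can be sketched in a few lines. This is an illustrative example only, not ADLINK's actual stack; the telemetry fields and the health thresholds are hypothetical.

```python
import json

def summarize_engine_telemetry(samples):
    """Collapse a window of raw telemetry samples into one small status message.

    `samples` is a list of dicts with hypothetical fields: rpm, temp_c, vibration.
    The thresholds below are invented for illustration.
    """
    avg_temp = sum(s["temp_c"] for s in samples) / len(samples)
    max_vib = max(s["vibration"] for s in samples)
    healthy = avg_temp < 110 and max_vib < 4.0
    return {"engine": "healthy" if healthy else "check", "avg_temp_c": round(avg_temp, 1)}

# One second of raw telemetry at 1 kHz vs. the one-line summary sent to the cloud
raw = [{"rpm": 3000 + i % 5, "temp_c": 92.0, "vibration": 1.2} for i in range(1000)]
raw_bytes = len(json.dumps(raw).encode())
summary_bytes = len(json.dumps(summarize_engine_telemetry(raw)).encode())
print(raw_bytes, summary_bytes)  # the summary is orders of magnitude smaller
```

Streaming the summary instead of the raw window is what changes the economics: the per-byte cost applies to a few dozen bytes per interval rather than the full sensor stream.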
See also: Digital Experiments Will Supercharge IoT Innovation
This still gives you real-time visibility and monitoring from the cloud. And you still get things like the low-latency interventions.
You’re talking about detection and intervention in milliseconds, not seconds, at the edge. This is something I’ve been acutely aware of for some time, and I have been doing a big push around IoT and edge intelligence for several years now.
Video has crystallized the importance of the edge with IoT
RTInsights: Can you provide a sample use case?
Speed: The one use case that has crystallized the importance of the edge with IoT in everyone’s eyes is video. A camera module is just another sensor, right? In some ways, no different than vibration, pressure, voltage, any of these other sensors that you think about. However, a video camera is a particularly useful general-purpose sensor that can be used for so many things.
One of the great advantages of video is also one of its greatest challenges: video is a particularly high-bandwidth sensor. So cameras, vision systems, computer vision, whatever the use case, video generates data at an extremely high rate and in extremely large volumes.
If I stream all that video to the cloud at, pick a number, 30, 60, 120, or 240 frames per second, in HD or 4K, it becomes a huge problem, and prohibitively expensive to transmit the data to the cloud. And there are delays. If I’m using video for industrial controls or worker safety, the latency of getting the video to the cloud, making sense of it, and then bringing those interventions, those decisions back, is too slow.
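A quick back-of-envelope calculation shows why video is different in kind from other sensors. The numbers below are rough and assume uncompressed frames; real pipelines compress heavily, but even a 100:1 codec leaves a stream that dwarfs event-sized messages.

```python
# Back-of-envelope: raw (uncompressed) 4K video vs. a short event message.
width, height, bytes_per_pixel, fps = 3840, 2160, 3, 30
raw_bytes_per_sec = width * height * bytes_per_pixel * fps  # ~746 MB/s uncompressed

# A hypothetical edge-detected event, as it might be published to the cloud
event_bytes = len(b'{"event":"flare_detected","confidence":0.97}')

print(raw_bytes_per_sec / 1e6)              # hundreds of megabytes per second
print(raw_bytes_per_sec // event_bytes)     # millions of event messages per second of raw video
```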
That’s one case that’s really made the edge come to the fore. Where do I put my computation? We believe that the correct answer is you put your intelligence at the edge, so it is in, on, or near the thing. It can be in the equipment, near the equipment, or on premises, in the facility, or on the factory floor. Even with this approach, the cloud still can play a role, because to make sense of it, there’s a couple of things that need to happen.
Making sense of your intelligence at the edge
RTInsights: Can you outline the main steps?
Speed: You need to develop your machine learning models. You’re using TensorFlow, Caffe, or some other framework for creating these things. Creating these models is computationally expensive, but then running those models, what’s called inference, ML inference, to actually use the model, to watch a live video stream, is, in comparison, computationally inexpensive.
Developing the model is something you do occasionally; running the inference is something that happens constantly.
You collect your data, whether that be vibration, telemetry from industrial equipment, or the video images. You bring those to the cloud, you use tools like AWS SageMaker to develop the machine learning models, and then you publish those models to run in computers that are, as I said, on, in, or near the thing.
Either it’s actually in a controller that’s running in the robot or in the equipment. Or it’s in some industrial controllers, in PLCs near the equipment. Or it’s in a camera, in a smart camera that is pointed at the equipment, or people, or venue.
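The split Speed outlines, expensive training done occasionally in the cloud and cheap inference run constantly on the device, shapes how the edge code is structured: load the published model once, then apply it to every sample. The sketch below uses stand-in functions rather than a real framework; `load_model`, the model path, and the vibration threshold are all hypothetical.

```python
import random

def load_model(path):
    """Stand-in for loading a trained model published from the cloud
    (in practice, e.g., a TensorFlow SavedModel). Here, just a threshold."""
    return {"vibration_limit": 4.0}

def infer(model, sample):
    """Stand-in for ML inference: cheap per-sample work, run constantly."""
    return "alert" if sample > model["vibration_limit"] else "ok"

model = load_model("/models/engine-health")  # hypothetical path; loaded once
stream = (random.uniform(0.0, 6.0) for _ in range(5))  # stand-in sensor stream
for sample in stream:
    print(infer(model, sample))
```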
See also: Center for Edge and Fog Computing
The same split goes for the monitoring. When the camera’s pointed at something, it might determine that the fuel gauge reads 50%, that a bear just walked into view, or that a stack flame has flared. Instead of pushing video to the cloud for analysis, the device makes the interpretation and sends information about the event to the cloud. In other words, instead of publishing video to the cloud, I’m just sending: “Fuel level, 50%. I saw a bear. It just flared.”
The volume of IoT data that requires analysis
RTInsights: Can we put into perspective the magnitude of IoT data that requires analysis?
Speed: To talk about it as gigabytes, terabytes, petabytes, is probably a little boring. When you talk about the magnitude, there’s a couple of things to look at here.
One is volume of data. How many bytes for each thing and how many sensors do I have?
Look at work we did in Puerto Rico after Hurricane Maria on emergency power generators, so that we could get hospitals reopened, schools reopened, all these things. Just the number of tags, the amount of data that you can generate is kind of amazing. You look at the volume of data, and then when you bring video into the mix, it takes it to a whole other level, and especially if you’re doing sensor fusion.
Sensors have a lot of value, but typically, for these applications, whether it be an autonomous vehicle, industrial automation, or machine condition monitoring, there’s no one sensor that you treat as the absolute truth. Instead, you look at the fused output of many sensors.
It’s not, “What’s the pressure? What’s the temperature? What’s the RPM?” It’s looking at all these things in aggregate. That’s the only way to understand what’s going on with the equipment.
Look at the work we’ve done with Hendrick Racing. In the past, a person would sit at the test dyno, this blast-proof chamber, watching the gauges and the engine, listening with his ears, and doing the sensor fusion in his head so that he knew which decisions to make quickly. We’ve done a very analogous task with these systems.
The reason for the IoT data analysis problem
RTInsights: So is the IoT data analysis problem due to data quantity, speed at which it’s generated, the richness of the data? All the above? Other factors?
Speed: Certainly, one problem is quantity; it’s a lot of data. You also must look at the speed of it. What’s the rate? What’s the rate at which the data’s generated? What is the latency at which the data must be understood? And, what is the speed at which it must be acted on?
If you’re running control systems for a 300-megawatt gas turbine, in many situations in the past, you might have collected the telemetry, brought it to the cloud, analyzed it, made a decision, and then sent back an intervention. However, if I detect that we’re about to have a failure, this process is too slow. You need these things to happen in milliseconds, single-digit kind of stuff.
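The “single-digit milliseconds” constraint can be made concrete with a rough latency budget. The figures below are illustrative orders of magnitude, not measurements of any particular system or network.

```python
# Illustrative latency budget for a protective intervention.
# All numbers are rough, assumed values for the sake of the comparison.
budget_ms = 10                    # "single-digit kind of stuff"
edge_path_ms = 1 + 3 + 1          # sense + local inference + actuate
cloud_path_ms = 50 + 3 + 50       # uplink + inference + downlink (typical WAN round trip)

print(edge_path_ms <= budget_ms)   # the edge path fits the budget
print(cloud_path_ms <= budget_ms)  # the cloud round trip does not
```

Under these assumptions, the cloud round trip alone consumes ten times the budget before any inference happens, which is why the detection and the intervention both have to live at the edge.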
See also: Edge Computing Unlocks the Business Value of IoT Data
Speed and latency are critical. You talk about data in motion, data at rest. For a while, there was great interest in putting the data in data lakes and then using something like Hadoop to perform Big Data analytics on that data.
I’m sure there are data scientists and others who care a lot about data at rest. But much of what we do from a product and engineering perspective focuses on data in motion. It’s the streams, it’s the rivers, it’s data in motion. Once it’s landed, once it’s in a database, I don’t care about it anymore. That’s someone else’s challenge. Acting on the data in flight, understanding it while it’s in flight, taking action on it while it’s in flight, is critical to all of the things that we care about.
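Acting on data in flight, as opposed to querying data at rest, can be sketched as a streaming pipeline: keep only a small window, compute as values arrive, and emit an action the moment a condition trips. The window size and the limit here are hypothetical.

```python
from collections import deque

def rolling_alerts(stream, window=5, limit=100.0):
    """Act on data in flight: keep a small rolling window, yield an alert the
    moment the rolling mean crosses a (hypothetical) limit, and never store
    the full stream."""
    buf = deque(maxlen=window)
    for value in stream:
        buf.append(value)
        if len(buf) == window and sum(buf) / window > limit:
            yield f"alert: rolling mean {sum(buf) / window:.1f}"

readings = [90, 95, 98, 102, 120, 130, 85, 80, 70, 60]
for alert in rolling_alerts(readings):
    print(alert)
```

Nothing is ever "landed" here: the generator consumes the stream value by value and the alert fires in the same pass, which is the data-in-motion posture Speed describes.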
Traditional computer architectures are limited
RTInsights: What are the limitations of traditional computer architectures when it comes to realizing the full value of IoT data?
Speed: Traditional computer architectures break down with IoT. With IoT you must take into account the speed of the data, the data in motion, the real-time aspect, and making sense of it. You also need to consider the kinds of things you do with that data. This leads to a new set of approaches for computer infrastructure.
Today’s new technologies require a different approach. If I’m doing machine learning, recognizing patterns, and doing AI computer-vision kinds of things, I need to develop with frameworks like TensorFlow, for example.
We bring the data to AWS SageMaker, develop in TensorFlow, and then deploy those models to the edge. Now, those models can run on any CPU.
You can take a TensorFlow application and just drop it on any Intel or ARM CPU, and it runs. The challenge is, does it run fast enough? If I have a particularly complex or heavy model, can it run fast enough on these architectures?
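“Fast enough” has a concrete meaning here: per-frame inference latency must fit within the frame budget, about 33 ms at 30 frames per second. A minimal timing harness for that check is sketched below, with trivial stand-in workloads in place of a real model.

```python
import time

def meets_frame_rate(run_inference, fps=30, trials=20):
    """Check whether per-frame latency fits the frame budget (1/fps seconds).
    `run_inference` stands in for invoking a real model on one frame."""
    budget_s = 1.0 / fps
    start = time.perf_counter()
    for _ in range(trials):
        run_inference()
    per_frame = (time.perf_counter() - start) / trials
    return per_frame <= budget_s

# Stand-in workloads: a trivially cheap "model" and a deliberately heavier one.
light = lambda: sum(range(1000))
heavy = lambda: sum(range(3_000_000))
print(meets_frame_rate(light))
print(meets_frame_rate(heavy))
```

The same harness run against a real, heavy model on a general-purpose CPU is where the “no, it cannot” answer tends to show up, motivating the specialized architectures discussed next.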
RTInsights: Can it?
Speed: For a lot of use cases, maybe even most use cases, the answer is no, they cannot. So there are a few things that need to happen. One is to use architectures that are specific to these kinds of things, that have some specific adaptation.
You look at things like NVIDIA GPUs: taking my TensorFlow model, turning it into TensorRT, and running it on NVIDIA GPUs.
You look at things like the work that Intel has done on OpenVINO, of being able to take these frameworks and make them run, basically, optimized for Intel architecture, and we’ve seen some dramatic performance improvements from that.
You see rather surprisingly strong results. You look at things like vision processing units, such as the Intel Movidius Myriad 2 or Myriad X. These are custom silicon, chips designed specifically for video processing.
A typical approach in the past might have been to get a bunch of NVIDIA GPU cards and put them in a gamer PC. That doesn’t really work on a train, on a factory floor, or in a plane, so we develop hardware and software specifically for these kinds of rugged environments. Working with these new elements, you can do extensive analysis in a very small form factor.
Another factor to consider is that there’s some really interesting work that’s being done with the software. Even before it gets to the hardware, to the edge, what could I do with my models to make them smaller, to make them more efficient, to make them be able to run at faster rates, higher throughput on less hardware?
Some developments here include the work that AWS has been doing with SageMaker and something called SageMaker Neo that they’ve open-sourced as Neo-AI. There also is the work that Intel’s been doing on OpenVINO. This actually has enabled us to create some pretty potent packages, some pretty great performance in a small footprint, lower power budget, in our vision systems, cameras, all these kinds of things running at the edge.
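One common technique behind this kind of model shrinking is quantization: storing weights as 8-bit integers instead of 32-bit floats, trading a small amount of precision for a 4x size reduction and cheaper arithmetic. The toy sketch below illustrates the trade-off with a handful of invented weights; it is not how OpenVINO or Neo-AI are implemented internally.

```python
import struct

# Toy illustration of post-training quantization: map float32 weights onto int8.
weights = [0.92, -0.37, 0.05, 0.71, -0.88]            # pretend model weights
scale = max(abs(w) for w in weights) / 127.0          # symmetric linear scale
quantized = [round(w / scale) for w in weights]       # one int8 per weight
restored = [q * scale for q in quantized]             # dequantized approximation

float32_size = len(struct.pack(f"{len(weights)}f", *weights))   # 4 bytes each
int8_size = len(struct.pack(f"{len(quantized)}b", *quantized))  # 1 byte each
print(float32_size // int8_size)                                 # 4x smaller
print(max(abs(w - r) for w, r in zip(weights, restored)))        # small error
```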
The role of edge computing
RTInsights: How does edge computing help?
Speed: Edge computing, by putting the computation, the intelligence, on, in, or near the things, solves the issues of: How do I get low-latency understanding? How do I get low-latency decisions? How do I get low-latency interventions?
It’s absolutely safety-critical. For example, you wouldn’t dream of remotely driving vehicles on highways from the cloud, right? The latency is too much.
What if I lose my cell connection or the car enters a tunnel? You obviously understand the safety implications. Well, that’s true of so many other situations, but that’s probably just the easiest one to understand.
Think about quality inspection. Suppose I’m making very expensive airplane parts, and my inspection of this manufacturing process, while it is in flight, is not done at the edge. Then I end up with damaged parts, because by the time the team sees that “Oh, we’re making a mistake here,” it’s too late. The part’s damaged. It’s ruined.
Using the edge also helps from an economic perspective by not having to back-haul all that data to the cloud. Suppose I want to track a subject as he moves from camera to camera. To take that workload and move it to the cloud, you’d be paying for that by the hour. It can be expensive.
In contrast, if I have a thousand cameras, and each camera can make sense of what’s happening itself, that makes more sense. If a camera can talk to its peers for things like tracking a subject as they move from camera A to camera B to camera C’s field of view, that’s computationally much better.
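That peer-to-peer handoff can be sketched as cameras passing a small track record directly to a neighbor, with no cloud in the loop. The camera names and the track metadata below are hypothetical.

```python
# Sketch of peer-to-peer track handoff between edge cameras (no cloud hop).
class Camera:
    def __init__(self, name):
        self.name = name
        self.peers = {}   # name -> Camera, e.g., the next camera along a corridor
        self.tracks = {}  # track_id -> metadata for subjects currently in view

    def hand_off(self, track_id, peer_name):
        """Pass a subject's track straight to a neighboring camera."""
        track = self.tracks.pop(track_id)
        self.peers[peer_name].tracks[track_id] = track

a, b = Camera("cam-a"), Camera("cam-b")
a.peers["cam-b"] = b
a.tracks["subject-17"] = {"last_seen": "doorway", "confidence": 0.91}
a.hand_off("subject-17", "cam-b")
print("subject-17" in b.tracks)  # True: camera B now owns the track
```

Only a few dozen bytes of metadata cross the local network per handoff, while the video itself never leaves either camera.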
Edge computing makes possible things you’re seeing now in self-driving vehicles, robotics, industrial controls, quality inspection. These things would simply not be feasible without edge.