Sponsored by Dell Technologies

Using AI and ML to Extract Actionable Insights in Edge Applications


If data starts at the Edge, why can’t we do as much as possible right there from an AI point of view?

The explosive growth in Edge devices and applications requires new thinking about where and how data is analyzed and insights are derived. New Edge computing options, coupled with more demanding speed-to-insight requirements in many use cases, are driving up the use of artificial intelligence (AI) and machine learning (ML) in Edge applications.

Where AI and ML are applied (at the Edge or in a data center or cloud facility) is a complex matter. To get some insights into current strategies and best practices, we recently sat down with Said Tabet, Chief Architect, AI/ML & Edge; and Calvin Smith, CTO, Emerging Technology Solutions; both in the Office of the Global CTO at Dell Technologies.

We discussed the growing need for AI and ML to make sense of the large amounts of Edge data generated today, the compute requirements for AI/ML in Edge applications, and whether such computations should be done at the Edge or in a data center or cloud facility.

Emerging Trends

RTInsights: What are today’s emerging trends, and how do AI and ML fit into the Edge discussion?

Tabet: Today, when people talk about emerging trends, they often mention many things like Edge, IoT, AI/ML, augmented reality, virtual reality, blockchain, and 5G. We position Edge as the next thing in terms of where we’re going with these technologies—not just trends, but true adoption. I think from the perspective of the data and the user experience, there is a need for insight and, thanks to our impatience as human beings coupled with real-world latency issues, a need to get that insight as quickly as possible. There is also the idea that if the data starts at the Edge, why can’t we do as much as possible right there from an AI point of view?

Obviously, AI, and particularly ML, is greedy in terms of the amount of data it needs. It needs to learn quickly. What is it that we can really do at the Edge? I think that’s where this discussion starts. Blockchain or distributed ledgers are other areas of consideration here. Typically, you’re going to see a need for a great deal of trust, particularly from a data point of view. There also needs to be trust in the insights we’re generating, how we react, and the actionable items that come from these findings. That brings in additional needs from the overall security, privacy, and governance perspective. You need to take all of this into account within that experience, whether you’re a business person, an individual, or a fleet of vehicles.

Smith: Take a general estimate of there being, let’s say, between 20 and 30 billion IoT devices connected today. Going back, Said, I think it was around 2014 that the number of connected mobile devices outpaced the number of people in the world, right? Then in 2017, IoT devices outpaced the global human population, too.

Tabet: Yes. That’s right.

Smith: It’s been a big jump, and it’s going to keep jumping. Are you going to hire 27 billion people to go out and maintain these Edge devices? Then, going up through the stack of the infrastructure, obviously, it’s not a 1:1 mapping, but it’s physically impossible to hire enough database administrators, data scientists, architects, and engineers. Instead, it’s all about driving automation and optimization at the Edge. There’s the sheer volume of data and, as Dr. Tabet mentioned, the greediness of applications and functions, specifically in AI. You need to be able to process a great deal of information for multiple reasons, one being cost. You want to analyze what’s actually valuable data at the Edge before you start sending it out to the data center or cloud.

The Role of AI/ML with Edge

RTInsights: Why do we need AI/ML when we talk about Edge?

Tabet: There are several reasons. First, there is the why. From the perspective of automation, AI and ML are a way to automate more and be a little more disciplined about it, and you do that at the Edge. You see that today. It needs to be reinforced by this view of building a continuum from the cloud to the Edge, including the data plane, the control plane, the development kits, et cetera, so that developers feel that writing for the Edge is the same as writing for the cloud.

From the AI perspective, automation is a big thing. At the Edge, you usually won’t do the truly deep processing (i.e., Deep Learning – DL) there. The other aspect I think is really important is that the insights you’re going to get very quickly at the Edge are often going to be different from those you’re going to derive in your data centers or cloud, where you connect that data with other types of information. At that point, you would be losing a great deal of intelligence momentum in terms of the insights you’re getting, or the decision-making you could be doing, if you did that at the Edge. Still, the Edge does need AI. They go hand in hand.

Smith: I think the other reason is a business one. All of these “things” (sensors, actuators, devices) can be as big as a connected home, a connected cruise ship, or a connected car. Or they can be in or on a factory, a tractor, or a pump. Long story short, all industries are getting commoditized now, right? There’s the option to build and buy things all around the globe. The way that companies are trying to differentiate is through services in association with the products and assets they sell. The more value they can derive that way, the more companies are switching from a product to a service. It’s called product-to-service transformation.

They’re trying to sell their assets as services. Sometimes it’s even a switch in the business model, going from CapEx to OpEx. Sometimes they’re still only going to sell as CapEx, but they may be able to sell an additional set of services or simply use smart, connected products as a differentiator. Again, as Said mentioned, optimizing, automating, and being able to pull in the right data at the right time and place is what enables companies to compete. Simply manufacturing an item doesn’t always deliver value anymore.

Tabet: One more thing I want to add is that at the Edge, we’re looking at tens or hundreds of thousands of devices from a single enterprise or organization. In autonomous vehicles, for example, you’re looking at the cars as instances, and each one of those cars may behave differently in different environments. Learning from that is very important when you bring it all together. It’s the same in industrial automation. You can look at wind turbines, engines in planes, or healthcare. In many of these different environments, you get much more precise, much more efficient, better-performing AI modules or algorithms when you bring that information back to the data center or the cloud. In other words, while there is definitely value in data from one asset, you start to glean true insights from connected fleets of assets and their interactions in different environments.

Deciding Where Edge AI/ML Should Be Done

RTInsights: Where is the AI/ML compute work done for Edge applications?

Tabet: This goes back to the previous point that, in many of these cases, the AI algorithms need a great deal of data to train. You do that (and there is disagreement) in the data center or in the cloud, in a centralized environment where you have high-powered compute capabilities. At the Edge, you would deploy these algorithms, where they can be used much more efficiently for inferencing purposes. There is obviously talk that, at some point, we’ll be able to do some level of training at the Edge as well. This will be limited at first due to the heavy constraints of most Edge environments.
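
As a rough illustration of that split, a model might be trained centrally on aggregated data and then exported as a compact artifact for low-latency inference on a constrained Edge device. The sketch below is a minimal, hypothetical example using PyTorch and TorchScript; the model, data, and file path are placeholders, not part of any Dell reference architecture.

```python
import torch
import torch.nn as nn

# --- In the data center / cloud: train on aggregated historical data (placeholder model) ---
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for _ in range(100):                       # stand-in for a real training loop
    x = torch.randn(64, 16)                # placeholder for historical sensor data
    y = torch.randint(0, 2, (64,))         # placeholder labels
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()

# Export a compact, dependency-light artifact to ship to the Edge device.
torch.jit.script(model).save("edge_model.pt")

# --- On the Edge device: load once, run low-latency inference on local readings ---
edge_model = torch.jit.load("edge_model.pt")
edge_model.eval()
with torch.no_grad():
    reading = torch.randn(1, 16)           # one live sensor reading
    prediction = edge_model(reading).argmax(dim=1)
```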

Considerations for Edge Computing

RTInsights: What are the requirements for compute solutions at the Edge?

Tabet: Well, this is very tricky, because there are different definitions of what Edge is. You talk to a car manufacturer, and they say my car is the Edge. When you talk to a turbine manufacturer, the wind turbines are their Edge. Manufacturing devices in a factory are Edge, as well. They’re going to have different environments. Some of them will be very harsh. At Dell, we’ve had a great deal of experience with compute in environments that include those harsh conditions, with strong vibrations and extremely high or extremely low temperatures. The one thing I would say is that the number-one requirement [for compute solutions at the Edge] is power consumption. It’s got to be low power. This goes against everything that we know, particularly in HPC, right? You’re using lots of GPUs, you’ve got heat, and you need power. That’s not possible in these [Edge] environments. We’re going to have to bring that [the compute components] to another form factor, or we may even think in terms of different kinds of accelerators, such as the new generation of AI-specific accelerators coming in the next few years.

Smith: The cool and interesting thing is we’re continually able to ruggedize for harsh environments and operate in, say, negative five, sometimes negative 10 degrees Celsius, and then up to 55-plus degrees Celsius. As Dr. Tabet mentioned, for hazardous or harsh environments, you need to be able to prevent issues associated with vibration, shock, and all that kind of jazz. Yet, we’re continually able to make the form factors smaller and smaller. Clearly, we do this with the help of our partners and chip developers.

What’s interesting is this notion of making the form factors smaller and more ruggedized while, at the same time, making them as simple to operate and use as possible. From an application standpoint, it’s not about the cloud being executed at the Edge, per se, although that can happen, too. It’s more about cloud-native principles being brought to the Edge: the simplicity and ease with which you can port workloads, whether they be containers or VMs [virtual machines], to different types of infrastructure and different types of environments, and have a single-pane-of-glass view. That can then also enable a multi-cloud environment. The Edge can be your new control point, your new pane-of-glass visibility into what’s happening, bridging the gap between the OT, or operational technology, side and the IT side. It’s fascinating. It’s a new frontier for exploration, and it’s driving a great deal of the product roadmaps for the future, I would say.

Edge AI/ML Use Cases

RTInsights: Can you give some examples of AI/ML Edge applications?

Tabet: One I’ve been working on for a few years now is around the challenges related to mobility applications [like autonomous vehicles]. Working with several organizations and with our customers directly, we are looking at how we can bring different capabilities to this market. I’ll abstract that a little bit and give you examples that could facilitate this kind of Edge deployment. There are use cases for Edge in what we call the RSUs, the roadside units, in the vehicles themselves, or in the sensing that they’re doing. Some of these examples are extended versions of what we call HD maps, high-definition maps, where the maps are semantically rich, context-driven, and get updated in near real-time.

That’s one example where AI is used to reduce the amount and the cost of the data that gets transferred. You only deal with what you need for those specific services. For example, video can be trimmed so you send less data, and you can focus on the very specific objects that you want to detect. Those are the kinds of examples at that level that can help.
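
A minimal sketch of that kind of data reduction, under stated assumptions: a hypothetical detect_objects() routine stands in for an on-device detector, frames are analyzed locally, and only frames containing objects of interest (plus lightweight metadata) are queued for upload instead of streaming raw video. The labels, threshold, and helper names are illustrative, not from the interview.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Detection:
    label: str        # e.g., "pedestrian", "stop_sign"
    confidence: float

def detect_objects(frame) -> List[Detection]:
    """Placeholder: wire up a real on-device detector (e.g., a quantized CNN) here."""
    return []

INTERESTING = {"pedestrian", "stop_sign", "debris"}

def filter_frames(frames, upload_queue, threshold=0.6):
    """Keep only the frames that matter; drop the rest at the Edge."""
    for frame in frames:
        hits = [d for d in detect_objects(frame)
                if d.label in INTERESTING and d.confidence >= threshold]
        if hits:
            # Send the frame plus compact metadata instead of the full video stream.
            upload_queue.append({"frame": frame,
                                 "labels": [d.label for d in hits]})
```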

Other ones are related to the health of those Edge devices, where you’re monitoring a specific device, an engine, a full car, et cetera, and you’re trying to do as much analysis as you can at the vehicle or device level, particularly for safety reasons (i.e., condition-based monitoring in many IoT use cases).
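
A simple, hypothetical illustration of that on-device health monitoring: a rolling z-score over a vibration or temperature reading flags anomalies locally, so a safety action or alert can be raised without a round trip to the cloud. The window size and thresholds below are arbitrary placeholders.

```python
from collections import deque
import math

class ConditionMonitor:
    """Flags readings that deviate sharply from the device's recent baseline."""

    def __init__(self, window: int = 200, z_threshold: float = 4.0):
        self.readings = deque(maxlen=window)
        self.z_threshold = z_threshold

    def update(self, value: float) -> bool:
        """Return True if this reading looks anomalous."""
        anomalous = False
        if len(self.readings) >= 30:                     # wait for a baseline first
            mean = sum(self.readings) / len(self.readings)
            var = sum((x - mean) ** 2 for x in self.readings) / len(self.readings)
            std = math.sqrt(var) or 1e-9
            anomalous = abs(value - mean) / std > self.z_threshold
        self.readings.append(value)
        return anomalous

# Usage on the device: raise a local alert immediately, log to the cloud later.
monitor = ConditionMonitor()
for reading in [0.9, 1.1, 1.0, 0.95, 1.05] * 20 + [9.7]:
    if monitor.update(reading):
        print("Anomaly detected; trigger local safety action")
```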

There are other examples in the retail domain, as well, where you’ll see a lot more Edge deployment, but in a different way, in the sense that you have that connection from the Edge to the cloud to the data center, in the Edge cloud, as we call it, where you do as much as you can at the Edge. [The Edge is where] you’re gathering the data and doing all the analysis that is needed. You’re providing a better experience to the end-user in the case of retail, for example. You’re trying to personalize that experience so that you can minimize the cost but also optimize the services.

Right now, particularly in the situation we’re in, there are many cases related to healthcare. How much data can we gather and react to as quickly as possible at the Edge? Normally we’re talking about a distributed environment on the scale of hundreds of thousands or millions of devices, as Calvin was saying. This is an area where AI and ML can play a much bigger role. The data changes all the time, and with AI capabilities, some of these applications can adapt themselves. The learning continues, and the training continues at that level.

In all these areas—healthcare, retail, autonomous vehicles, mobility in general, and many others—you’re lowering the cost through predictive or condition-based maintenance. Edge also gives you the ability to do remote control of devices, so if your experts can’t safely go to the place where the data is being collected, they can provide that capability remotely, even including things like AR or VR. But you also do most of the work ahead of time at the Edge, so you can minimize their need to be present in person. Those are just some examples.

Smith: I would add two more that are important. One is that we have a large business in and around safety and security. Like Said mentioned, you may be running very large algorithms and processing the data for things like … well, let me give you an example. Imagine you have a scenario where there’s a gunshot in a public place, say outside a gas station. You need to make a lot of automated and immediate decisions to figure out what course of action to take. One thing is, think about gunshot recognition from an audio standpoint, but correlate it with object recognition from a computer vision standpoint to actually show that it was a weapon and not just a car backfiring.

Then, if you have a perpetrator, you can also apply what are now fairly simple algorithms that can be executed at the Edge, though they probably originated in the data center, for things like license plate recognition. Then, you can identify the license plate of the fleeing suspect. It’s all automated and executed at the Edge. There are a lot of use cases in that area involving cameras, surveillance, security, and general safety for citizens.
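
A sketch of that kind of event correlation at the Edge, with hypothetical classifier outputs: an acoustic "gunshot" event is only escalated if a vision model confirms a weapon (rather than, say, a car backfiring) within a short time window. The event format, labels, and thresholds are illustrative assumptions, not the deployed system described above.

```python
from dataclasses import dataclass

@dataclass
class Event:
    source: str       # "audio" or "vision"
    label: str        # e.g., "gunshot", "weapon", "car_backfire"
    confidence: float
    timestamp: float  # seconds since some shared clock

def correlate(audio_events, vision_events, window_s=2.0, min_conf=0.7):
    """Escalate only when audio and vision agree within a short time window."""
    alerts = []
    for a in audio_events:
        if a.label != "gunshot" or a.confidence < min_conf:
            continue
        for v in vision_events:
            if (v.label == "weapon" and v.confidence >= min_conf
                    and abs(v.timestamp - a.timestamp) <= window_s):
                alerts.append((a, v))   # e.g., redirect cameras, notify responders
    return alerts

# Example: a backfire alone would not be escalated; a confirmed pairing would.
audio = [Event("audio", "gunshot", 0.91, 100.2)]
vision = [Event("vision", "weapon", 0.84, 101.1)]
print(correlate(audio, vision))
```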

The other big use case area, which I think we would be remiss not to mention, is Deep Learning for Smart Facilities, which is a testbed in the Industrial Internet Consortium that we started quite a while ago with Toshiba. Since its inception, we also have added in SAS and Wipro, with different companies bringing different values to the table.

The original notion was to build an enormous facility designed as state-of-the-art. If I’m not mistaken, I believe it was built in 2011, and it already had, how many sensors, Said? Like 20,000 sensors or something like that?

Tabet: More than that, I would say. It was probably 35,000 back when it was built in 2011.

Smith: 35,000, right. It was a brand-new, state-of-the-art facility, but the designers wanted to push the envelope and learn and do more. A neural network was implemented through a series of servers that connected to a parameter server, letting the building essentially self-learn in association with its critical systems. We are talking about things like elevators, and escalators, and of course, high-cost things like HVAC systems. The idea was to do, at least initially, anomaly detection and look for correlations between things that a person (without AI) would be hard-pressed to find.

For example, there were some very fascinating findings about things happening in the kitchen. The data determined that those happenings actually raised costs and that, because of those actions, specific sections of ventilation were being closed down. It’s incredible what you can start to find when it’s the data rather than humans investigating things. We’re talking about a true, deep neural net that is self-learning – teaching itself what to find, looking for cross-correlations that humans wouldn’t normally determine on their own. When you really think about it, all this was at the “Edge.” This is all being executed within the building. Then, some of the core processing, of course, was back in the data center.
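
The testbed’s actual implementation isn’t detailed here, but the general pattern Calvin describes, an unsupervised network that learns "normal" building behavior and flags deviations, can be sketched roughly as an autoencoder over a vector of sensor readings, where high reconstruction error marks a correlation worth investigating. Everything below (sensor count, architecture, threshold) is an illustrative assumption.

```python
import torch
import torch.nn as nn

N_SENSORS = 64   # placeholder; the real facility had tens of thousands of sensors

# A small autoencoder: it learns to reconstruct "normal" sensor snapshots.
autoencoder = nn.Sequential(
    nn.Linear(N_SENSORS, 16), nn.ReLU(),
    nn.Linear(16, N_SENSORS),
)
optimizer = torch.optim.Adam(autoencoder.parameters(), lr=1e-3)

def train_on_normal_data(snapshots: torch.Tensor, epochs: int = 50):
    """snapshots: (num_samples, N_SENSORS) of historical, healthy readings."""
    for _ in range(epochs):
        optimizer.zero_grad()
        loss = nn.functional.mse_loss(autoencoder(snapshots), snapshots)
        loss.backward()
        optimizer.step()

def is_anomalous(snapshot: torch.Tensor, threshold: float = 0.1) -> bool:
    """High reconstruction error => a pattern the building has not seen as normal."""
    with torch.no_grad():
        error = nn.functional.mse_loss(autoencoder(snapshot), snapshot).item()
    return error > threshold

# Usage sketch: train on healthy history, then score live snapshots at the Edge.
train_on_normal_data(torch.randn(1000, N_SENSORS) * 0.1 + 0.5)
print(is_anomalous(torch.randn(1, N_SENSORS)))
```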

Tabet: In a more recent use case for a project, we added a number of devices, similar to all these assets that deal with HVAC and other things within the building. Each one of them was equipped with its own machine learning algorithms, or AI algorithms in some cases, and that allowed them to be self-sustaining but, at the same time, learn from each other. Back to Calvin’s story, this is done in such a way that we’re going to see more and more of this kind of autonomous AI, if I can use that term. Really, the idea is that we don’t feed it direction, but over time it will self-level and self-learn in terms of its parameters and the optimization of the productivity that is required.
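
One common way to realize devices that are self-sustaining yet still learn from each other is a federated-averaging style scheme: each device trains locally on its own data, and only model parameters, not raw data, are periodically merged. The sketch below is a generic illustration of that idea, not a description of the specific project Dr. Tabet mentions.

```python
import copy
import torch
import torch.nn as nn

def make_model() -> nn.Module:
    return nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))

def local_update(model: nn.Module, x: torch.Tensor, y: torch.Tensor, steps: int = 5):
    """Each device refines its own copy on locally observed data."""
    opt = torch.optim.SGD(model.parameters(), lr=0.01)
    for _ in range(steps):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(x), y)
        loss.backward()
        opt.step()

def average_models(models):
    """Share learning across devices by averaging parameters, not raw data."""
    merged = copy.deepcopy(models[0])
    with torch.no_grad():
        for name, param in merged.named_parameters():
            stacked = torch.stack([dict(m.named_parameters())[name] for m in models])
            param.copy_(stacked.mean(dim=0))
    return merged

# Usage sketch: three devices train locally, then adopt the averaged model.
devices = [make_model() for _ in range(3)]
for m in devices:
    local_update(m, torch.randn(32, 8), torch.randn(32, 1))
shared = average_models(devices)
```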


About Salvatore Salamone

Salvatore Salamone is a physicist by training who has been writing about science and information technology for more than 30 years. During that time, he has been a senior or executive editor at many industry-leading publications including High Technology, Network World, Byte Magazine, Data Communications, LAN Times, InternetWeek, Bio-IT World, and Lightwave, The Journal of Fiber Optics. He also is the author of three business technology books.
