Sponsored by Dell Technologies
Center for Edge Computing and 5G

Thoughts on Emerging Technologies, Edge, and IoT


IoT, machine learning, artificial intelligence, 5G, augmented reality, and virtual reality all benefit from increased edge compute power.

The edge is much more than just a collection of connected devices or sensors. As intelligent devices and sensors create massive amounts of data, businesses must make technology decisions about getting the most value out of that data. Is the analysis done on the device, at the edge, in the cloud, or in a data center? Is the data retained for regulatory reasons or to find the root cause of problems? Will new services like 5G play a role?

These issues are on the minds of many companies today. To help identify key strategies and sort through the different approaches to edge computing and the Internet of Things (IoT), we recently sat down with Krish Iyer, Strategy Lead for Edge and ICV, and Calvin Smith, CTO, Emerging Technology Solutions; both in the Office of the Global CTO at Dell Technologies.

We explored the role of emerging technologies, why edge and IoT are so important today, factors to consider when deciding when to process at the edge, compute considerations when using the edge, and what the future holds. Here is a summary of our conversation.


Emerging Technologies

RTInsights: What are today’s emerging technologies?

Iyer: We see a market inflection in several new technologies. Edge is clearly a front runner, along with IoT, AI [artificial intelligence], and ML [machine learning]. These technologies represent a market that is starting to gain a great deal of traction. Most importantly, many customers are starting to see edge as an extension of their cloud, and they are starting to look at edge as a way to distribute workloads that cannot be handled by their cloud infrastructure. That, in my opinion, is probably the most important driver pushing organizations to consider edge.

Smith: I think what’s interesting is this notion of intersections. It’s not like edge emerged out of nowhere. Many of the early use cases for IoT are now becoming more readily addressable because of the combination of technologies. Krish mentioned IoT, Edge, ML, and AI. There’s also 5G, Augmented Reality, and Virtual Reality. More broadly, as the costs come down for compute and things like GPUs are used at the edge, the technology capability goes up in terms of the amount of automation you can run.

It’s just amazing, the proliferation of different technologies that are being executed where the data is created, which is at the edge. Ultimately, even if it [the created edge data] does go back to a cloud or a data center, much of the actuation back to those devices is going to happen at the edge, too. It’s the net-new center of the universe. It’s funny how things go in ebbs and flows, right? I mean, we went from the advent of the PC and a more decentralized model to a monolithic data center approach, then to a pseudo-monolithic cloud approach, and now we’re going back to a distributed architecture. It’s really interesting to see how things have evolved over time.

Iyer: Yes. I guess one way to look at edge is that it is a combination of heterogeneous systems. Edge is not monolithic, it’s not homogeneous, and it is a set of different functions. These functions are typically used to collect, process, and store data. They also need to transfer data to other functions, or perform some action based on that data to enable other functions like data processing, data analytics, and so on. These functions may also need to be performed in challenging environments, like high-temperature settings or rugged terrain. That’s why heterogeneity is going to be so critical at the edge.

Why All the Attention on Edge & IoT?

RTInsights: You touched on this a little bit, but why all the attention on edge and IoT, and why now?

Iyer: It’s interesting. As Calvin said, edge is not new. Edge has always been there. And distributed systems have always been there. But we are coming full circle again. The way the market is shifting, companies are recognizing problems with some of the functions or applications that typically ran in centralized data centers or on core infrastructure. They find some of these applications have bandwidth or latency requirements that are not achievable with a centralized approach. That necessitates moving those applications closer to where data originates.

As Calvin said, you move data processing closer to where data is created because the laws of physics don’t permit running some of these applications at the core. It’s a speed-of-light problem you’re dealing with. That natural limit is what’s causing the shift from the core to the edge.


Deciding Where to Process Data

RTInsights: That dovetails into the next question, and that’s what factors should businesses consider when they try to decide where to process the data? In particular, what considerations should be taken into account when you’re trying to figure out: Do I process at the edge or not?

Iyer: Many of the factors are technical, but others are business-related or driven by government and regulatory requirements. From a technical perspective, again, the speed of light is a big factor. Even if the cloud operators promise that they can process data at the core to satisfy this requirement, the cost of doing so is going to be so high that customers are going to say, “There’s no way I’m going to be able to pay those costs.” Customers are going to look at this as a no-brainer: move the processing to the edge.

The second thing to consider is latency. For use cases like autonomous vehicles (or AR [augmented reality] and VR [virtual reality]), it’s a fact that a few milliseconds of lag is essentially the difference between safe driving and an accident. For applications that require results in milliseconds, or even faster, latency matters. To deliver the required results, the processing has to be done at the edge.
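To put rough numbers on that claim, here is a back-of-the-envelope sketch; the distances are illustrative and not from the interview. Light in optical fiber travels at roughly 200,000 km per second, so physical distance alone puts a hard floor under response time, before any routing, queueing, or processing overhead.

```python
# Back-of-the-envelope propagation delay (illustrative numbers only).
# Light in fiber travels at roughly 2/3 the vacuum speed of light,
# or about 200 km per millisecond.
FIBER_KM_PER_MS = 200.0

def round_trip_ms(distance_km: float) -> float:
    """Best-case round-trip time over fiber, ignoring routing,
    queueing, and processing overhead."""
    return 2 * distance_km / FIBER_KM_PER_MS

for km in (10, 100, 1_000):
    print(f"{km:>5} km away: at least {round_trip_ms(km):.1f} ms round trip")

# 10 km (an edge site) costs ~0.1 ms; 1,000 km (a distant cloud region)
# costs ~10 ms, already a large share of a millisecond-scale safety budget.
```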

The third thing I can think of is: how do you manage bandwidth? Sending the huge amounts of data that Calvin mentioned to the cloud and back is going to be expensive and inefficient. It drives up costs phenomenally, and that’s going to be a huge deterrent for most customers.
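Some quick arithmetic shows how fast this adds up. The fleet size, sampling rate, and payload size below are invented for illustration; only the order of magnitude matters.

```python
# Back-of-the-envelope bandwidth math (all numbers illustrative).
sensors = 10_000
readings_per_sec = 10
bytes_per_reading = 500            # a small JSON payload with metadata

bytes_per_day = sensors * readings_per_sec * bytes_per_reading * 86_400
print(f"~{bytes_per_day / 1e12:.1f} TB per day upstream")      # ~4.3 TB/day

# If edge filtering forwards only ~1% of readings:
print(f"~{bytes_per_day * 0.01 / 1e9:.0f} GB per day after filtering")
```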

Another important factor is security, especially when it comes to edge. You can isolate potential security problems at the edge before attacks permeate into the core data center. Organizations can detect intrusion attempts or denial-of-service attacks right at the edge, track them, and close the affected systems off before the problem ever reaches the central, core infrastructure.

Then, there is the ability to scale. You’re looking at environments where you can add sites or capacity as your needs arise. If it’s a seasonal situation, or you just need to add more functionality, edge provides high levels of scalability.

These are some of the high-level technical requirements, but there are also things like regulatory requirements, in the case of healthcare applications or GDPR [General Data Protection Regulation] compliance. In most cases, there are mandates that data be collected at the location [where it is generated] and not transmitted back to a central data center. Many organizations must follow these mandates.

Smith: Krish is spot-on. I would also add that it’s not a dichotomy of edge versus cloud or edge versus core. It’s a spectrum, a continuum. We know that there are going to be workloads that run at the edge, others in the core or cloud. It’s just a matter of placing the right workloads in the right places and executing against each at the right time. The notion a decade ago was to collect and store everything, regardless of cost. Today, data is still key, but it’s the analysis that adds the value. It’s been said that data is the new bacon. Data is the new gold. Data is the new oil. That’s true, but not if it’s static information that doesn’t add any value. What’s interesting is when you start to do very basic filtering and machine learning at the edge. You don’t have to send every instance of device data saying: “I’m alive, it’s 72 degrees, it’s still 72 degrees,” back to the data center or cloud.

You don’t have to send those kinds of messages on a sub-millisecond basis. If you do, it’s going to get very expensive very quickly when you look at the sheer volume of devices in the world. You want to be able to parse that data and make some sense of it at the edge, in situ. Some perishable, ephemeral data only has value for a short period of time. What you really want to do is anomaly detection to figure out what’s the important information we really have to send back or keep. Back at the core or cloud, you can do your deeper analysis, figuring out: has this happened before? Is this anomaly happening to other parts of the fleet of assets that we have in the field? That’s where the value starts coming in. You need the whole stack and a singular view of the entirety of your dataset. The important thing is, in addition to all the key parameters Krish mentioned, there is also the logic of your entire distributed architecture to consider. You’ve got to figure out what makes sense to store, forward, analyze, and process, and where, when, and why. There’s a different logic for pretty much any architecture. It’s all highly contingent on the use case and the infrastructure itself.
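As a minimal sketch of the edge-side filtering Smith describes, here is one way it might look in Python. The send_to_cloud callback, the deadband width, and the anomaly threshold are all hypothetical; a real deployment would tune them per use case.

```python
import statistics
from collections import deque

window = deque(maxlen=60)   # recent readings kept in memory at the edge
DEADBAND = 0.5              # suppress changes smaller than this (illustrative)
ANOMALY_SIGMAS = 3.0        # forward readings this far from the recent mean

last_sent = None

def handle_reading(value: float, send_to_cloud) -> None:
    """Decide in situ whether a sensor reading is worth forwarding.
    send_to_cloud is a placeholder for whatever transport is in use."""
    global last_sent
    window.append(value)
    # Anomaly detection: a reading far outside the recent distribution
    # is exactly the "important information" to send back or keep.
    if len(window) >= 10:
        mean = statistics.fmean(window)
        spread = statistics.pstdev(window)
        if spread > 0 and abs(value - mean) > ANOMALY_SIGMAS * spread:
            send_to_cloud({"type": "anomaly", "value": value})
            last_sent = value
            return
    # Deadband filtering: drop the "it's still 72 degrees" messages.
    if last_sent is None or abs(value - last_sent) >= DEADBAND:
        send_to_cloud({"type": "update", "value": value})
        last_sent = value
```

Anything the filter drops can still be summarized locally and rolled up periodically, so the core or cloud keeps its singular view without paying for every raw message.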

Considerations for Moving Data to the Edge

RTInsights: Along those lines, what are the factors to consider for shifting data and compute to the edge?

Iyer: The speed-of-light problem, costs, and security are important factors. And we talked about bandwidth, high availability, and scalability as other factors. The ability to reduce the data significantly, doing data and metadata processing at the edge and sending only the most relevant data back to the core, is going to be another key factor. Much of this depends on the vertical and the use case.

For example, telecom operators and content delivery networks might have specific requirements for edge. They might need to leverage edge more deeply than many of the other verticals. These industries need to figure out what kind of services to provide to users in a specific geographic location. To do that, they might have to gather the context of the geographic edge and provide specific services for specific localities. The situation might be different for, let’s say, operational technology use cases that require doing predictive analytics at the edge for IoT devices and machinery. Again, it all comes down to what the specific vertical demands.

For example, on the retail side, how do I make my customer’s user experience really positive? How can I provide an AR or VR experience that is seamless, with no buffering involved? How do I make the overall user experience positive and interactive, so the customer is able to make buying decisions right there? Healthcare providers will have a completely different set of requirements for applications like telehealth and other remote diagnosis applications. There are also many regulatory requirements that come into play for such verticals. Edge is so critical; it is something that must work.

Smith: We also need to expand people’s horizons on how we define the edge. In an industrial context, the edge can be the factory floor itself, and the edge can be that car we’ve described as the moving data center of the future. The car itself is essentially the edge. It could be an offshore oil rig, the entire rig or a section of it. It’s multiple things, large and small, and wholly defined by the use case and what you are attempting to do. Edge computing is the interesting part.

What’s also interesting is the form factor in terms of what you actually use for edge compute; this is highly varied, too. Without being too product-centric, Dell has gateways (which are very simple compared to converged or hyper-converged appliances) that do some of the protocol normalization and some analysis, and that can be used for some IoT platforms and smaller-form-factor deployments. They have very finite and specific objectives and map to a number of devices.

On the larger form factor side of things, we have solutions called modular micro data centers. We recently announced one, the Dell EMC Modular Data Center Micro 415, a small edge data center with built-in power and cooling and remote management features. And we also offer the Dell EMC Modular Data Center Micro 815, essentially a full rack. These solutions are flexible and scalable. As named, they are modular, and they scale up to let you build out your data center in a use-case-defined, composable way at the edge. We can literally airlift and drop in data centers at your edge, regardless of the environment.

Think about that from a military context, for people in the field. Think about that for the top of a building, where historically you’d want to do the processing in a basement because there’s better cooling. Well, these solutions have cooling built in. Part of the innovation is the chassis, the enclosure, and the way it’s cooled and powered. We’re walking into these new worlds where, in addition to all the points Krish made about bandwidth, cost, and latency, there are different constraints in those environments with regard to vibration, dust, shock, and hazardous conditions. We can literally drop in ruggedized, enclosed micro data centers with storage, compute, and networking that can solve problems in near real-time at the edge. It’s the beginning of a very interesting change in the way people do business.

Edge and IoT Future

RTInsights: What do you see for the future with edge and IoT?

Iyer: The enhanced need for all of the key points we spoke about earlier will drive investment in the edge. Applications will drive the future. It all depends on the type of apps and the developers that create them. Applications are getting smarter every day, and the infrastructure and environments that support these applications need to get smarter as well. They need to grow at the same speed as the applications. Enhancements are happening, and the industry is adopting a disaggregated approach, moving away from monolithic infrastructure and right-sizing to support these applications. Yet we must recognize that the pace of infrastructure growth is not always there to support the growing demands of some of these applications.

Besides being application-driven, the future of edge also will draw large value from the cloud. Cloud is not going anywhere. I think cloud is still going to be an integral control point for the edge. Cloud is still going to serve as the key operating model, the environment that provides a great deal of data processing, data handling, and data management support.

Having said that, I think the future of edge is also going to be defined by how vendors come together. One thing that we’ve learned is there’s no single organization, or vendor, that has a monopoly over the edge. It’s a combination of multiple players that need to come together to provide services, hosting, operations, data, security, and so on, to a multitude of other vendors to form an ecosystem. This includes an ecosystem of proprietary vendors and an ecosystem of open-source vendors all coming together to provide end-to-end solutions. This cooperation is needed from application development to application support, to developer support, to security, to compliance, and more.

For the most part, edge infrastructure will be horizontal, and vertical solutions will require systems integrators to carry them the last mile for verticals like healthcare, manufacturing, and the military. That again is going to require an ecosystem to come together to provide the needed functionality.

Smith: You mentioned the autonomous use case earlier. I think that’s a really good one. The futurists are going to make provocative statements and go, “Oh, edge is going to eat the world!” I don’t know if you remember, but 10 or 15 years ago, everyone was saying, “Is cloud going to eat the world?” Well, yes, to a certain extent, it did, but data centers didn’t go away. At the same time, edge is not going to eat the world either. I mean, it’s going to be big, and it is already growing, but cloud’s not going to go away.

If you look at an autonomous vehicle, it is essentially the car as the moving data center of the future. I like this analogy because people understand it. The car itself is the edge, but then there are other edges it may be connecting to. There may be vehicle-to-vehicle (V2V) communications. There may be vehicle-to-base-station or other infrastructure (V2X) communications, where you’re connecting to an LTE cellular service or, in the future (or in specific metros today), a 5G service. Then, generally, there’s almost certainly going to be cloud connectivity for things like fleet management and cross-connecting all these cars.

Again, it’s not cost-effective to send all the data from every vehicle to the cloud, and it’s not fast enough for safety-related features if you’re looking at things like smart airbag deployment, which is increasingly getting smarter based on weight and other parameters of the passenger. Or object recognition for the multitude of cameras that enable an autonomous or semi-autonomous vehicle, or ADAS [advanced driver-assistance systems]. All of these impose constraints and limitations. Basically, you have to run some of this locally, but you also might want to dig deeper into the data in the cloud or at the core to get insights into anomalies, and that information is going to influence other cars.

For instance, suppose I want to predict an airbag malfunction or something that’s going to end up being a warranty recall. All that important information needs to go up to the cloud, but again, you don’t have to send all of the static data, just the anomalous or error-recognition data. Then there is tremendous value when you analyze this data across multiple cars. As you mentioned earlier, Krish, an autonomous car is just a nice way of tying it all together, but it’s not the only example. There are multiple industries where you start to understand the harmony and rationale for placing workloads in different areas across edge, core, and cloud. It’s an exciting time. If we could do this same interview a year from now, and then ten years from now, I would love to get back together and see if our edge proliferation prophecy is accurate.
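A toy version of that triage logic might look like the sketch below. The event names and routing rules are invented for illustration; the idea is simply that safety-critical signals are acted on in the vehicle, routine telemetry is dropped, and only anomalous or error-recognition events go up for fleet-wide analysis.

```python
from enum import Enum, auto

class Route(Enum):
    HANDLE_LOCALLY = auto()   # safety-critical: act in the vehicle, now
    DROP = auto()             # routine "I'm alive" data: no value upstream
    SEND_TO_CLOUD = auto()    # anomalies worth cross-fleet analysis

# Illustrative event types; real telemetry schemas vary by manufacturer.
SAFETY_CRITICAL = {"obstacle_detected", "airbag_fault"}
ROUTINE = {"heartbeat", "cabin_temp_steady"}

def route_event(event_type: str, is_anomalous: bool) -> Route:
    """Place each piece of work where it belongs across edge and cloud."""
    if event_type in SAFETY_CRITICAL:
        # Milliseconds matter; never wait on a round trip to the core.
        return Route.HANDLE_LOCALLY
    if is_anomalous:
        # E.g., a sensor pattern that might precede a warranty recall.
        return Route.SEND_TO_CLOUD
    if event_type in ROUTINE:
        return Route.DROP
    return Route.SEND_TO_CLOUD  # unknown events: err toward analysis
```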


About Salvatore Salamone

Salvatore Salamone is a physicist by training who has been writing about science and information technology for more than 30 years. During that time, he has been a senior or executive editor at many industry-leading publications including High Technology, Network World, Byte Magazine, Data Communications, LAN Times, InternetWeek, Bio-IT World, and Lightwave, The Journal of Fiber Optics. He also is the author of three business technology books.
