This article is sponsored by ADLINK and was produced independently of RTInsights.

Industrial IoT at Scale: What’s Really Needed (Part II)

Continued from Industrial IoT at Scale: What’s Really Needed (Part I)

Fog Computing

Fog computing is emerging as the main paradigm to address the connectivity, bandwidth, latency, cost, and security challenges imposed by cloud-centric architectures. The main idea behind fog computing is to provide elastic compute, storage, and communication close to the “things” so that (1) data does not need to be sent all the way to the cloud, or at least not all data and not all the time, and (2) the infrastructure is designed from the ground up to deal with cyber-physical systems (CPS) as opposed to IT systems. With fog computing, the infrastructure takes into account the constraints that interactions with the physical world impose: latency, determinism, load balancing, and fault tolerance.
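
To make the first point concrete, here is a minimal Python sketch of edge-side aggregation, with invented sensor values and thresholds: raw readings are processed next to the machine, and only summaries or alarm events ever travel to the cloud.

import random
import statistics
import time

# Hypothetical vibration sensor attached to a machine on the factory floor.
def read_sensor():
    return random.gauss(mu=4.0, sigma=0.5)  # mm/s RMS vibration, simulated

def send_to_cloud(message):
    # Stand-in for an uplink (e.g., MQTT or HTTPS); here we just print.
    print(f"uplink -> {message}")

WINDOW = 10            # readings aggregated per uplink message
ALARM_THRESHOLD = 5.5  # readings above this are forwarded immediately

buffer = []
for _ in range(30):                      # three aggregation windows
    value = read_sensor()
    if value > ALARM_THRESHOLD:          # latency-sensitive event: report right away
        send_to_cloud({"type": "alarm", "value": round(value, 2)})
    buffer.append(value)
    if len(buffer) == WINDOW:            # otherwise only a summary leaves the edge
        send_to_cloud({
            "type": "summary",
            "mean": round(statistics.mean(buffer), 2),
            "max": round(max(buffer), 2),
        })
        buffer.clear()
    time.sleep(0.01)                     # stand-in for the sensor sampling period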

Software-Defined Automation, Digitalization and Fog Computing
As discussed earlier, cloud-centric architectures fall short in addressing a large class of IoT applications. These limitations have motivated fog computing as a way to overcome the connectivity, bandwidth, latency, cost, and security challenges that cloud-centric architectures impose.

Now let’s consider some additional industry trends that are further motivating this paradigm shift and formulate a more precise definition of fog computing.

Two trends that are in some way at the core of the Industrial Internet of Things revolution are Software-Defined Automation, or Software-Defined Machines, and Digital Twins.

Software-Defined Automation, a trend that is disrupting several industries, has as its raison d’être the replacement of specialized hardware implementations, such as a Programmable Logic Controller (PLC) on an industrial floor, with software running in a virtualized environment.

Digital Twins, as the name hints, are digital representations (computerized models) of a physical entity such as a compressor or a turbine, “animated” by the live data coming from their physical counterpart. Digital Twins have several applications, including monitoring, diagnostics, and prognostics. Additionally, Digital Twins provide useful insights to R&D teams for improving next-generation designs, as well as for continuously improving the fidelity of their models.
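
As a rough illustration (not a description of any particular product), the Python sketch below models a compressor twin as a small class that is animated by live readings and compares them against its design model; the asset name, expected pressure, and tolerance are all invented for the example.

from dataclasses import dataclass, field
from typing import List

@dataclass
class CompressorTwin:
    """Minimal digital twin of a compressor: mirrors live state and flags drift."""
    asset_id: str
    expected_pressure: float            # bar, from the design model
    tolerance: float = 0.5              # bar, acceptable deviation
    history: List[float] = field(default_factory=list)

    def ingest(self, measured_pressure: float) -> None:
        """Animate the twin with a live reading from the physical asset."""
        self.history.append(measured_pressure)

    def diagnose(self) -> str:
        """Compare observed behavior against the model (monitoring/diagnostics)."""
        if not self.history:
            return "no data"
        drift = abs(self.history[-1] - self.expected_pressure)
        return "nominal" if drift <= self.tolerance else f"deviation of {drift:.2f} bar"

# Feed the twin with (simulated) live data from its physical counterpart.
twin = CompressorTwin(asset_id="compressor-07", expected_pressure=8.0)
for reading in (8.1, 8.3, 9.2):
    twin.ingest(reading)
    print(twin.asset_id, twin.diagnose())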

As Software-Defined Automation transforms specialized hardware into software, it creates an opportunity for convergence and consolidation. Transform PLCs into software-defined PLCs, for instance, and suddenly they can be deployed on commodity hardware in a virtualized environment and decoupled from the I/O logic, which can remain closer to the source of data, as sketched below.
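
A minimal sketch of that decoupling, assuming an in-process queue stands in for the real fieldbus or messaging middleware: the I/O logic stays near the data source and exchanges messages with a soft PLC scan cycle that could run anywhere on commodity hardware. All names and values are illustrative.

import queue
import threading
import time

# Hypothetical I/O layer: stays close to the sensors/actuators and exchanges
# messages with the soft PLC, which can run on commodity hardware elsewhere.
inputs = queue.Queue()
outputs = queue.Queue()

def io_module():
    """Publishes inputs and applies outputs; stands in for remote I/O terminals."""
    for temperature in (58, 61, 73, 66):
        inputs.put({"temperature": temperature})
        try:
            command = outputs.get(timeout=0.1)
            print(f"I/O: fan set to {command['fan']}")
        except queue.Empty:
            pass
        time.sleep(0.05)

def soft_plc():
    """Software-defined PLC scan cycle: read inputs, evaluate logic, write outputs."""
    for _ in range(4):
        image = inputs.get()                # read the input image
        fan_on = image["temperature"] > 70  # ladder-logic equivalent
        outputs.put({"fan": "on" if fan_on else "off"})

threading.Thread(target=io_module).start()
soft_plc()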

As a result of Software-Defined Automation and Digital Twins, there is an opportunity to modernize the factory floor, consolidate its hardware, and increase availability and productivity, while also improving manageability, resilience to failure, and innovation agility. Software-Defined Automation affords the opportunity to manage these systems as a data center. As this trend influences a large class of industries, it is worth highlighting that the transformations described above, along with their benefits, are not limited to industrial automation.

But there is a catch! The catch is that the large majority of these systems, whether in the industrial, transportation, or medical domains, are subject to the performance constraints already described. These systems interact with the physical world, so they must react at the pace the physical device imposes.

As a consequence, while traditional cloud infrastructures would be functionally perfect to address these use cases, they turn out to be inadequate as (1) they were not designed with these non-functional requirements in mind, and (2) they are often too heavyweight. Cloud infrastructures were designed for IT systems in which a delay in the response time may create a bored or upset customer, but will not cause a robot arm to smash against a wall or other machinery, or worse, hurt a human operator.

Fog computing is not just about applying distributed computing to the edge. Fog computing is about providing an infrastructure that—while virtualizing elastic compute, storage, and communication—also addresses the non-functional properties characteristic of these domains.

Fog computing makes it possible to provision and manage software-defined hardware (e.g., a soft PLC), Digital Twins, analytics, and anything else that might need to run on the system, while ensuring the proper non-functional requirements and delivering convergence, manageability, availability, agility, and efficiency improvements.
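
Purely as a hypothetical sketch, one way to picture this is a deployment descriptor that carries non-functional requirements alongside the usual resource requests, together with the kind of placement check a fog infrastructure could apply; none of the field names below come from a real platform.

# Hypothetical deployment descriptor: alongside the usual resource requests,
# it captures the non-functional requirements a fog infrastructure must honor.
soft_plc_deployment = {
    "name": "packaging-line-soft-plc",
    "image": "registry.local/soft-plc:1.4",      # illustrative image name
    "resources": {"cpu_cores": 2, "memory_mb": 512},
    "non_functional": {
        "max_cycle_latency_ms": 5,       # reaction time the physical process imposes
        "deterministic_scheduling": True,
        "failover_replicas": 1,          # availability requirement
    },
}

def placement_ok(node: dict, deployment: dict) -> bool:
    """Sketch of a placement check: a node qualifies only if it can honor
    both the resource requests and the non-functional constraints."""
    nfr = deployment["non_functional"]
    return (
        node["free_cpu_cores"] >= deployment["resources"]["cpu_cores"]
        and node["free_memory_mb"] >= deployment["resources"]["memory_mb"]
        and node["worst_case_latency_ms"] <= nfr["max_cycle_latency_ms"]
        and (node["rt_scheduler"] or not nfr["deterministic_scheduling"])
    )

edge_node = {"free_cpu_cores": 4, "free_memory_mb": 2048,
             "worst_case_latency_ms": 2, "rt_scheduler": True}
print(placement_ok(edge_node, soft_plc_deployment))   # True for this node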

Fog computing’s flexible infrastructure makes it possible to deploy, monitor, and manage software at the edge. Simply deploying some logic on an edge gateway isn’t fog computing, nor is fog computing just traditional distributed computing.

Angelo Corsaro

About Angelo Corsaro

Angelo Corsaro, Ph.D., is Chief Technology Officer (CTO) at ADLINK Technology Inc. As CTO he leads the Advanced Technology Office and looks after corporate technology strategy and innovation. Earlier, Corsaro served as CTO of PrismTech (an ADLINK company), where he directed the technology strategy and innovation for the Vortex IIoT platform. He also served as a Scientist at the SELEX-SI and FINMECCANICA Technology Directorate, where he was responsible for the corporate middleware strategy, strategic standardization, and R&D collaborations with top universities. Corsaro is a well-known and widely cited expert in high-performance and large-scale distributed systems, with hundreds of publications in refereed journals, conferences, workshops, and magazines.
