
How Asset-Heavy Industries Can Bridge the Gap Between Data Collection and PdM


Asset-heavy industries such as utilities know they need to collect real-time operational data for maintenance. But how do you do it efficiently?

In industries such as automotive, manufacturing, and utilities, a high-performing, reliable asset base is critical. That means keeping machines running by using the available data to repair them before they break down, following a predictive maintenance (PdM) strategy, rather than rushing to perform emergency repairs and suffering downtime. The way forward for these industries lies in developing and implementing an effective maintenance analytics solution, but in the past the solution has often been as complicated as the problem, if not more so.

See also: Offering post-sale maintenance packages boosts bottom lines

While creating a relevant data analytics model can allow you to predict system failures before they occur, doing so successfully first requires the ability to access and analyze both historical and current data. Traditionally, this has meant heavy-handed and inefficient implementations of data gathering techniques. Modern cloud-based and edge-enabled software, however, can help industries to implement effective maintenance analytics solutions.

Let’s take a look at some of the potential complications before examining how these new technologies can solve them.

Potential Complications in Utilizing Data

Asset-heavy industries have an immense, data-rich environment on their hands, but are often restricted from leveraging that data for a few reasons. Here are some common setbacks faced by industries with innumerable assets but few ways to harness the data those assets provide:

Accessing and cleaning machine data is easier said than done – First, some data is simply not accessible. It can be stuck in an outdated format on an older machine or siloed in a disconnected asset. Other data may not be labeled or structured in a way that enables useful analytics, or it may be sensitive or proprietary. As a result, a disproportionate amount of time can be spent finding, accessing, and preparing data for use. Having data is different from having usable data.
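To make the preparation step concrete, here is a minimal sketch of typical sensor-data clean-up, assuming a hypothetical long-format CSV export from a plant historian (the file name and column names are illustrative only):

```python
# A minimal sketch of typical sensor-data preparation, assuming a CSV export
# from a plant historian with hypothetical columns "timestamp", "tag", "value".
import pandas as pd

raw = pd.read_csv("historian_export.csv", parse_dates=["timestamp"])

# Pivot the long-format export so each sensor tag becomes a column.
wide = raw.pivot_table(index="timestamp", columns="tag", values="value")

# Resample to a regular 1-minute grid and fill short gaps by interpolation.
clean = wide.resample("1min").mean().interpolate(limit=5)

# Drop sensors that are effectively flat-lined (stuck or disconnected).
clean = clean.loc[:, clean.std() > 1e-6]

print(clean.describe())
```

Even this simple pass (regularizing timestamps, bridging short gaps, discarding dead sensors) is often where a surprising share of a project's effort goes.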

Building predictive models can be a catch-22 – Machines are built to last, and instances of critical equipment failure are rare. While data may be offered as a magic bullet, you may find that you either don't have the right data for accurate predictions or have little to no recorded degradation in performance variables leading up to a failure event. In essence, the data you have is not applicable to your problem, and the data you need cannot be found.

Putting models into operational production takes time and organizational commitment – Custom applications don't appear out of nowhere and solve all your problems. An IT portfolio must be reprioritized to build an application with an integrated, usable interface inside an existing IT environment, a process that can itself take months and requires resources. Exposing a model to live data on a remote, disconnected asset without the appropriate interface is difficult: data architectures on assets built for on-site production are meant for monitoring and control, not data science. The result? Models that get 'stuck in PowerPoint mode,' sitting on laptops at headquarters and running on cold, historical data.

Models are only as good as their use – Maintaining and re-training models requires a lot of work and can interrupt normal operations. In some cases, an application may not be integrated with how work is done at the asset level. It might also be insufficiently automated or designed in a way that prevents decision-makers from adopting it. Expensive technology needs a runway, but it also has to deliver a return in the form of improved performance.

Ground-up fixes are costly, time-consuming, and rarely deliver ROI – Many companies launch risky programs to clean up data, build data science capabilities, modernize IT platforms, and run operational excellence improvements. These transformation programs have the potential to lock up the organization for years, cost hundreds of millions and are unlikely to deliver on expectations in the end.

Technologies and Solutions

Now that we have explored some of the main complications, we can look at solutions. A variety of technologies form the core of the maintenance analytics category, and they can be used effectively to maintain a high-performing, reliable asset base.

Anomaly detection – Since failures are rare, companies may be better off building models that report, with a given certainty, whether an asset is operating in a normal state, rather than models that try to predict failure directly.
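A minimal sketch of that idea, assuming scikit-learn and synthetic "normal operation" readings (a hypothetical temperature and RPM pair), might look like this:

```python
# A minimal anomaly-detection sketch: fit only on known-normal operation,
# then flag readings that deviate from that normal state.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Synthetic stand-in for readings captured during normal operation: temp, rpm.
normal_ops = rng.normal(loc=[60.0, 1500.0], scale=[2.0, 30.0], size=(5000, 2))

# No failure examples are required to train this kind of model.
detector = IsolationForest(contamination=0.01, random_state=0).fit(normal_ops)

# Score incoming readings: -1 flags a reading that deviates from normal.
new_readings = np.array([[61.0, 1510.0], [75.0, 900.0]])
print(detector.predict(new_readings))        # e.g. [ 1 -1]
print(detector.score_samples(new_readings))  # lower = more anomalous
```

The point is that the model answers "is this normal?" rather than "when will this fail?", which is a question the available data can actually support.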

Audio/video – Current efforts focus on discovering patterns in data from existing machine sensors, such as point-based temperatures, velocities, and positions. At the same time, seasoned technicians already use the patterns they see, feel, and hear to judge whether a piece of equipment is behaving as it should. You could argue that the industry is looking for new patterns where sensor data already exists, but not looking for new sensor data (audio and video) where technicians already know patterns exist.
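As a rough illustration of turning part of what a technician hears into numbers a model could monitor, the sketch below computes two simple audio features from a synthetic clip; the clip, sample rate, and choice of features are assumptions for illustration only:

```python
# Turning machine sound into trackable features: overall loudness (RMS) and
# the dominant frequency. A synthetic clip stands in for a real recording.
import numpy as np

sample_rate = 16_000
t = np.arange(sample_rate) / sample_rate
clip = np.sin(2 * np.pi * 120 * t) + 0.05 * np.random.default_rng(0).normal(size=t.size)

rms = np.sqrt(np.mean(clip ** 2))
spectrum = np.abs(np.fft.rfft(clip))
dominant_hz = np.fft.rfftfreq(clip.size, d=1 / sample_rate)[spectrum.argmax()]

# Tracked over time, drift in these values away from the machine's usual
# "sound signature" is the kind of pattern a technician notices by ear.
print(f"RMS={rms:.3f}, dominant frequency={dominant_hz:.1f} Hz")
```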

Cloud-based, open technologies – Leverage the Silicon Valley way of working to accelerate deployment. Existing legacy IT environments are riddled with complexity, and the associated governance processes are consequently rigid and time-consuming. Modern cloud-based technologies make it possible to connect live streaming asset data to data models in real time while circumventing legacy IT restrictions, which also helps protect the operational stability of existing systems.
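As a concrete illustration, here is a minimal sketch of streaming live readings to a cloud-hosted scoring service over HTTPS; the endpoint URL, payload schema, and sensor-reading helper are hypothetical:

```python
# Push live sensor readings to a cloud-hosted scoring service over HTTPS.
import time
import requests

SCORING_URL = "https://example.com/api/v1/score"  # placeholder endpoint

def read_sensors() -> dict:
    """Stand-in for reading live values from the machine's control system."""
    return {"asset_id": "pump-07", "temperature_c": 61.2, "vibration_mm_s": 3.4}

# In production this would loop indefinitely; three iterations for the sketch.
for _ in range(3):
    reading = read_sensors()
    try:
        response = requests.post(SCORING_URL, json=reading, timeout=5)
        print("model score:", response.json())
    except requests.RequestException as exc:
        print("transient network error, will retry:", exc)
    time.sleep(10)  # stream one reading every ten seconds
```

Because the scoring service lives outside the legacy environment, the only footprint on the existing systems is this lightweight outbound connection.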

Edge computing – Take advantage of edge computing capabilities. Models are typically trained and demonstrated on historical data pulled from historians, enterprise resource planning systems, or other production data repositories. For predictive models to work in practice, however, they need to be fed live data. There are two complementary approaches for achieving this. First, you can stream the live data to the model, for example by hosting the model in the cloud and connecting the machine to it through an interface running on the machine or on a server connected to it. Second, you can deploy the model close to the data by creating an executable from the model and deploying that executable on or near the machine itself. Both approaches will likely be necessary to manage the bandwidth constraints of remote assets and to meet data-protection requirements that keep certain data on-premises.
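The second approach, deploying the model close to the data, can be sketched as follows. This assumes a scikit-learn model plus the skl2onnx and onnxruntime packages, and is one possible pattern rather than a prescribed one:

```python
# One pattern for edge deployment: convert a trained scikit-learn model to a
# portable ONNX file, then score it locally on the edge device with onnxruntime.
import numpy as np
from sklearn.ensemble import IsolationForest
from skl2onnx import convert_sklearn
from skl2onnx.common.data_types import FloatTensorType
import onnxruntime as ort

# Train on (synthetic) normal operating data, then export to a portable file.
normal_ops = np.random.default_rng(0).normal(size=(1000, 2)).astype(np.float32)
model = IsolationForest(random_state=0).fit(normal_ops)
onnx_model = convert_sklearn(model, initial_types=[("input", FloatTensorType([None, 2]))])
with open("detector.onnx", "wb") as f:
    f.write(onnx_model.SerializeToString())

# On the edge device: load the exported model and score live readings locally,
# so raw data never has to leave the site.
session = ort.InferenceSession("detector.onnx")
live = np.array([[0.1, -0.2]], dtype=np.float32)
outputs = session.run(None, {"input": live})
print(outputs)  # e.g. predicted label and anomaly score for the reading
```

Only model outputs (or exceptions) need to travel over the network, which addresses both the bandwidth and the on-premises concerns at once.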

Model lifecycle management – A greater number of models will be created, deployed, and actively used to make decisions, based on an increasing amount of data from more assets than ever before. As such, we will need to think carefully about model lifecycle management. That means not only making sure the latest, best-performing models are deployed everywhere, but also ensuring traceability and auditability. What model was used to make which decision? On what data was that particular model trained? What new or additional data must we store (e.g., anomaly data) to maintain model integrity? This calls for new capabilities and technologies.
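As a sketch of what such traceability might record, the following ties each decision back to the model version and training-data fingerprint that produced it; all names, fields, and values are illustrative assumptions:

```python
# Minimal traceability records: every scoring decision is logged against the
# exact model version and training-data fingerprint that produced it.
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class ModelRecord:
    model_id: str
    version: str
    training_data_sha256: str   # fingerprint of the training extract
    trained_at: str

@dataclass
class DecisionRecord:
    asset_id: str
    model_id: str
    model_version: str
    input_snapshot: dict        # the data the decision was made on
    decision: str
    decided_at: str

training_bytes = b"...contents of the training extract..."
model = ModelRecord("pump-anomaly", "1.4.2",
                    hashlib.sha256(training_bytes).hexdigest(),
                    datetime.now(timezone.utc).isoformat())

decision = DecisionRecord("pump-07", model.model_id, model.version,
                          {"temperature_c": 75.0, "vibration_mm_s": 9.1},
                          "raise maintenance work order",
                          datetime.now(timezone.utc).isoformat())

# An append-only log of such records answers "which model made which decision,
# and on what data was it trained?" during an audit.
print(json.dumps({"model": asdict(model), "decision": asdict(decision)}, indent=2))
```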

Although the future may belong equally to humans and machines, we need to take a human-centric approach to designing solutions and data products. The most effective way to achieve this is to create a new operational environment (call it Industry 4.0 or Maintenance 4.0, if you will) in which both can work together: cross-functional teams where technicians, data scientists, and business leaders design easy-to-use solutions to specific business problems.

Competitive advantage comes from the ability to turn data into value, less so from the data or models themselves. The accuracy of a model only delivers a competitive advantage if you can put it into production faster than your competitors. It therefore makes sense to focus internal effort on reducing time-to-production and the cost of learning, and to activate an ecosystem to figure out which models are the best ones.

Martin Lundqvist

About Martin Lundqvist

Martin Lundqvist is the VP of Government Solutions at Arundo Analytics. Based in Stockholm, he initially focused on SICPA and was Sales Executive for Sweden. In addition to being a closet Python coder on the weekends, Martin brings rich experience in the government sector across multiple continents. Most recently, he dedicated five years to establishing McKinsey & Company's global Digital Government service line, working with government leaders and institutions around the world. Martin holds an M.Sc. in Industrial Management and Engineering from Linköping (Sweden) and the Swiss Federal Institute of Technology, with a major in Computer Science.
