A discussion on the importance of having trust in industrial data, the challenges that have prevented past technologies from working, and how Generative AI helps.
Process industries have some of the most complex operations that require constant monitoring and fast actions when things go wrong. Those responsible for taking action spend hours manually looking for critical data in manuals, spec sheets, logs, and more that will help them make the right decisions.
Increasingly, companies are looking to Generative AI to help their front-line workers get fast access to the reliable and trusted data they need. RTInsights recently sat down with Ben Skal, Senior Director of Product Marketing at Cognite, to talk about the importance of having trust in data in the industry today, the challenges that have prevented past technologies from working, and how Generative AI helps.
Here is a summary of our conversation.
RTInsights: What makes it difficult for front-line SMEs, engineers, and other users to trust data in operational processes today?
Skal: The best place to start is thinking about when operators, field engineers, and subject matter experts need data to make decisions about a plant operation, anomalies, and disruptions to the processes that happen daily.
For a process engineer, that means trying to perform some level of root cause analysis. For a field engineer, it probably means troubleshooting issues out in the field. And for maintenance, it probably means navigating around an unplanned downtime event.
When these types of disruptions or anomalies happen, it’s common for these different personas to look at three, four, or five different systems to collect all the information they need in a timely manner. Take the example of a fluctuating tank level and why a tank level is going up and down. You may want to look at engineering design data, historian data, and work order data to be able to understand the full context of what is happening with this piece of equipment.
We’ve heard from our customers it could take 30 to 50% of their day just pulling this information together. And so, part of this is about the time it takes to find and liberate data and information.
The next part is about being able to trust the data you do find, given how much effort it took to get to it. For example, how do you verify that the drawing you pulled is the most up-to-date version, or that the process data you pulled belongs to the right piece of equipment?
And the last piece to this is that when you have this issue once, you troubleshoot it, and you get back to normal operations. But if you later have a similar issue, you start this whole manual process all over again. So, the difficulty really lies in the challenge of liberating the data and then understanding how all the different data is connected and being able to trust that it is correct and complete. Once you do that, you can quickly navigate and engage with the information that you need in a very confident manner.
RTInsights: What challenges does this create for digital transformation efforts? Why hasn’t existing technology been able to solve the problem?
Skal: When you think about operational environments in industrial operations, such as those in the energy and manufacturing industries, there are critical processes with little to no tolerance for disruptions. But disruptions still happen, and high confidence in decision-making is required. It's really difficult to trust a solution if you have to spend a lot of additional time verifying that the information you used to make the decision is correct. So if you don't trust the information, you end up making an intuition-based decision, and organizations end up relying heavily on how much experience the person making that decision has.
There’s still an opportunity to improve, especially given how much change is happening with mergers and acquisitions, and skilled workforce attrition. Within the chemical industry, it is super common to see different companies end up trading assets. When that happens, all of this foundation that you’ve built and standardized within your existing company must be merged with the new company. So, you have a whole new technology stack.
While there are many manual ways to map, understand, and connect all these different data sources and types, most of this has been done with point solutions. With the amount of change that you see in the industry, even from site-to-site within an existing organization, it’s so diverse that unless you automate the process of liberating data and connecting it, it’s going to be very, very difficult to manually build out a way to trust industrial data.
RTInsights: What are the implications for organizations that want to tap into their unstructured data with generative AI-based solutions?
Skal: I’ll talk about three. The first, which is gaining a lot of popularity, is around the semantic search of unstructured data like documents, images, videos, and more. You’ve got OEM specification manuals, safety data sheets, and many things like this that contain super specific and important pieces of information. Some of these documents can be 100 to 200 pages long.
Generative AI can be used to search that document and take you exactly where you need to go, with transparency. So, you ask a question of the document. Not only do you get the answer, but it also tells you that it looks like the answer is on page 97.
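The retrieval step described above can be sketched in a few lines. A production system would use embedding similarity from an LLM provider; here simple keyword overlap stands in for vector search so the example is self-contained, and all names and page contents are hypothetical.

```python
# Minimal sketch of semantic search over a paged manual, returning the
# answer along with the page it came from. Keyword overlap is a stand-in
# for embedding similarity; the manual contents are made up.

def tokenize(text: str) -> set[str]:
    return {w.strip(".,?:") for w in text.lower().split()}

def search_manual(pages: dict[int, str], question: str) -> tuple[int, str]:
    """Return the page number and text of the best-matching page."""
    q = tokenize(question)
    best_page = max(pages, key=lambda p: len(q & tokenize(pages[p])))
    return best_page, pages[best_page]

manual = {
    96: "Routine lubrication schedule for the compressor bearings.",
    97: "Maximum allowable operating pressure: 250 psi at 120 F.",
    98: "Spare parts list and supplier contact details.",
}

page, text = search_manual(manual, "What is the maximum operating pressure?")
print(f"Answer found on page {page}: {text}")
```

The point of returning the page number alongside the text is the transparency mentioned above: the user can go verify the source rather than taking the answer on faith.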
So that’s the first opportunity. The second is around summarization. Maybe I’ve got a series of operator logs, and I want to know what’s happened over the past week, and I want a summary of any critical events. Generative AI can do that. Generative AI can also handle the language barrier. So, while those logs may be written in English, you can ask generative AI the question in any language you want, Chinese, Spanish, Italian, or French, and you get back an answer that’s been translated from the English document into the language that you searched in.
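The summarization-and-translation workflow amounts to assembling the logs into a prompt that asks the model to answer in the reader's language. This is only a sketch of the prompt-building step; the log entries, function names, and the downstream chat-completion call (not shown) are all illustrative assumptions.

```python
# Sketch: build an LLM prompt that summarizes a week of operator logs and
# responds in whatever language the user asked in. The resulting string
# would be sent to any chat-completion API; that call is omitted here.

def build_summary_prompt(logs: list[str], target_language: str) -> str:
    joined = "\n".join(f"- {entry}" for entry in logs)
    return (
        f"Summarize the critical events in these operator logs. "
        f"Respond in {target_language}.\n\nLogs:\n{joined}"
    )

logs = [
    "06:10 Tank T-101 level fluctuating, operator adjusted setpoint.",
    "14:35 Pump P-7 tripped on high vibration; restarted after inspection.",
    "22:02 Routine shift handover, no incidents.",
]

prompt = build_summary_prompt(logs, "Spanish")
print(prompt)
```

Because the instruction to respond in the target language travels with the prompt, the English logs never need to be translated separately.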
These two uses are about interacting with information. The third use of generative AI is a little bit different. It is about how to use generative AI to provide more context. We’ve been working on and are planning on releasing a document extractor using generative AI in the next couple of months.
It works by uploading a document. Let’s stick with the OEM specification sheet. That document may have min and max pressures, model numbers, the supplier, and all sorts of information about a piece of equipment. You can ask generative AI to extract those specific parameters that you’re looking for and actually tag that as metadata and digitize that information.
So now, if you want to understand why a piece of equipment is malfunctioning and you want to know the engineering min and max pressures, you have access to that through search, in context, versus having to scan through a document.
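The extraction step described above, pulling named parameters out of a spec sheet and attaching them as metadata, can be sketched as follows. A generative model would handle messy real-world layouts; regular expressions stand in here so the example runs on its own, and the field names and sample sheet are hypothetical.

```python
import re

# Sketch of spec-sheet parameter extraction: pull model number, pressure
# limits, and supplier out of free text and return them as metadata fields.
# Regex is a stand-in for an LLM-based extractor; the patterns and the
# sample sheet are illustrative only.

SPEC_PATTERNS = {
    "model_number": r"Model(?:\s+No\.?)?:\s*(\S+)",
    "min_pressure_psi": r"Min(?:imum)?\s+pressure:\s*([\d.]+)\s*psi",
    "max_pressure_psi": r"Max(?:imum)?\s+pressure:\s*([\d.]+)\s*psi",
    "supplier": r"Supplier:\s*(.+)",
}

def extract_metadata(spec_text: str) -> dict[str, str]:
    """Return whichever spec parameters are found in the text."""
    metadata = {}
    for field, pattern in SPEC_PATTERNS.items():
        m = re.search(pattern, spec_text, re.IGNORECASE)
        if m:
            metadata[field] = m.group(1).strip()
    return metadata

sheet = """Model No: XJ-200
Supplier: Acme Pumps Ltd
Min pressure: 15.0 psi
Max pressure: 250.0 psi"""

metadata = extract_metadata(sheet)
print(metadata)
```

Once digitized this way, the min and max pressures become searchable metadata on the equipment record instead of text buried in a 200-page document.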
RTInsights: What does it take to solve the industrial data trust issue, and where should organizations start?
Skal: Cognite has always taken the perspective that it’s important to start with an expected end-user value gain. If I can make this data trusted, who’s going to benefit, and what are they going to use it for?
Earlier, I talked about some of the opportunities there are across operations, process engineers, and maintenance. Specifically, we’re talking about allowing operators to access documentation when they’re out in the field. Or allowing process engineers to have a single workspace to visualize all of their drawings, process data, and work orders and be able to work through troubleshooting or root cause analysis faster. Maintenance can benefit by being able to better optimize and prioritize their work orders just by layering some analytics on top of all the work orders that are currently being collected.
All of this comes back to the concept of being able to liberate data. You’d be able to address all of those opportunities that I talked about previously if you could connect live historian readings with an asset hierarchy from an ERP system and work orders from your CMMS system. And then, in some way, visualize that in a process and instrumentation diagram, or 3D model while layering on an AI-powered search.
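Joining those three sources amounts to keying everything on a shared equipment tag. This is a toy sketch of that idea; the source shapes, field names, and sample values are all assumptions for illustration, not any particular system's schema.

```python
# Sketch of "liberating" data by joining three sources on an equipment tag:
# historian readings, an ERP asset hierarchy, and CMMS work orders.
# All data shapes and values here are made up for illustration.

historian = {"T-101": [12.4, 13.1, 18.9]}              # recent level readings
asset_hierarchy = {"T-101": "Site A / Unit 2 / Tank farm"}
work_orders = [
    {"tag": "T-101", "status": "open", "summary": "Inspect level transmitter"},
    {"tag": "P-7", "status": "closed", "summary": "Replace seal"},
]

def equipment_context(tag: str) -> dict:
    """Collect everything known about one piece of equipment in one view."""
    return {
        "tag": tag,
        "location": asset_hierarchy.get(tag),
        "recent_readings": historian.get(tag, []),
        "open_work_orders": [w for w in work_orders
                             if w["tag"] == tag and w["status"] == "open"],
    }

ctx = equipment_context("T-101")
print(ctx["location"], len(ctx["open_work_orders"]))
```

A single view like this is what lets a fluctuating tank level be investigated from one place instead of four or five systems.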
RTInsights: What is Cognite’s role, and what have you learned about data trust from your customers? Where do they see the benefits of using your solutions?
Skal: The biggest thing that we’ve learned is how much of a pain it still is to find the information you need to make decisions. As I noted, 30 to 50% of the time is spent looking for data. Some of our customers tell us that the first one to three hours of their shift are spent figuring out what happened in the previous shift and then collecting all the information they need to plan what they’re going to do that day.
From Cognite’s perspective, we are really focused on that data liberation aspect. So, how do we unlock data so you’re no longer spending time in four, five, or six systems? How do we make that information easy to access so that no matter what piece of information you start from, you can get to where you need to go?
You might be looking at a drawing, and from that, you can see all the live process data, see what’s happening out there in the field, and then be able to do some basic no-code analysis to start troubleshooting the problem. You would also be able to tag other users and start collaborating. And then, of course, because this is done in a more automated way, you now have a permanent record that you can continue to revisit and get that updated information.
From a value delivery standpoint, this has helped our customers make proactive adjustments to their processes. They are able to tweak certain parameters, measure how those tweaks impact the processes, and link that to an increase in production. They’ve been able to cut field inspection times by upwards of 30%, and one of our customers has cut root cause analysis time by 50%. All of these different use cases come back to being able to access information in the context I need to make the decisions that are important for my role.