How to Make Generative AI Work for Industry


Risk-averse industries must address concerns about hallucinations, security, and privacy before they can confidently embrace generative AI. Here are some tips on how to do that.

Industry has pursued predictive operations for two decades, aiming to prevent equipment failures and optimize performance. Challenges in data availability and accessibility have limited that pursuit, but technologies like generative AI are making these challenges a thing of the past.

We began addressing data availability challenges with the rise of industrial IoT and machine learning/artificial intelligence (ML/AI). With the advent of Hybrid AI, we addressed accessibility by making data more understandable. However, the real accelerator of industrial transformation lies in generative AI. Capable of generating novel solutions based on existing data patterns, generative AI will make industrial data more usable across the organization than ever before.

While pure AI systems can analyze vast amounts of sensor data, historical records, and unstructured data to predict equipment failures and recommend optimization actions, there is distrust of their “black box” nature. Hybrid AI evolved to solve this problem by harnessing both human expertise and the analytical power of AI algorithms. It addresses the “explainability” issue of pure AI systems, enhancing transparency and trust in decision-making. Human feedback helps refine AI models, leading to more accurate and adaptable solutions for complex industrial processes. Until recently, Hybrid AI has been the best solution for complex industrial process problems. It has enabled industry professionals to make more informed decisions, identify hidden patterns or anomalies, and optimize complex processes.

Generative AI models can learn from data without explicit guidance, creating new data, content, or solutions based on existing patterns and insights. By ingesting diverse datasets, generative AI models can optimize turnaround plans, assist decision-making, and drive operational excellence. Generative AI’s accessibility through language interfaces and copilots facilitates real-time collaboration between humans and AI, empowering operators to make faster, more informed decisions.

But how do we make generative AI work for risk-averse industries where there are valid concerns about hallucinations, security, and privacy?

Solving the Generative AI Hallucinations Challenge

Large Language Models (LLMs) can produce inaccurate or misleading information, which is a concern in critical decision-making processes. Hallucinations happen because the knowledge base of LLMs, while vast, is often outdated and based solely on publicly available data. To make LLMs work for industry, LLMs need to be “trained” on curated, contextualized industrial data—your specific industrial data. Further, this contextualized data must include explicit semantic relationships, ensuring the text consumed is relevant to particular industrial assets or tasks.

This is where an industrial knowledge graph comes in. An industrial knowledge graph processes data to improve quality through normalization, scaling, and augmentation with calculated or aggregated attributes. When an LLM is prompted for information about operating industrial assets, the industrial knowledge graph supplies the data and documents related to those assets, along with the explicit and implicit semantic relationships across different data types and sources.
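To make the idea concrete, here is a minimal sketch of such a graph in Python using networkx. The asset, sensor, work order, and document nodes and the relationship names are illustrative assumptions, not a real schema or product API.

```python
# A minimal sketch of an industrial knowledge graph using networkx.
# Node names, attributes, and relationship types are illustrative assumptions.
import networkx as nx

graph = nx.DiGraph()

# Nodes: an asset, a related time series, a work order, and a document
graph.add_node("pump-101", type="asset", site="plant-a")
graph.add_node("ts-vibration-101", type="timeseries", unit="mm/s")
graph.add_node("wo-5542", type="work_order", status="open")
graph.add_node("doc-pump-manual", type="document", title="Centrifugal Pump Manual")

# Edges: explicit semantic relationships across different data types and sources
graph.add_edge("pump-101", "ts-vibration-101", relation="measured_by")
graph.add_edge("pump-101", "wo-5542", relation="has_work_order")
graph.add_edge("pump-101", "doc-pump-manual", relation="documented_by")

def related_context(asset_id: str) -> list[dict]:
    """Collect everything directly related to an asset, with the relationship type."""
    return [
        {"node": target,
         "relation": graph.edges[asset_id, target]["relation"],
         **graph.nodes[target]}
        for target in graph.successors(asset_id)
    ]

print(related_context("pump-101"))
```

Deterministic traversal like this is what lets the relevant, asset-specific slice of data be selected before any of it is handed to a language model.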

Codifying this context as an industrial knowledge graph is critical to leveraging your vast industrial data within context window limitations and enabling consistent, deterministic navigation of these meaningful relationships. It is also a crucial first step toward the subsequent, equally important Retrieval Augmented Generation implementation.

Retrieval Augmented Generation, also known as RAG, is a design pattern that allows LLMs to incorporate specific industrial data as context when responding to prompts. A form of in-context learning, RAG lets us use off-the-shelf LLMs and control their behavior by conditioning them on private, contextualized data. This approach allows us to use the reasoning engine of LLMs to produce answers grounded in the specific inputs we provide rather than relying on the generative engine to create a probabilistic response based on existing public information.
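As an illustration, the following Python sketch shows the basic RAG flow described above. The retrieval and LLM-call functions are hypothetical placeholders: the first would query the industrial knowledge graph or a vector index, and the second would call an off-the-shelf model.

```python
# A minimal sketch of Retrieval Augmented Generation (RAG) over industrial data.
# `retrieve_asset_context` and `call_llm` are hypothetical stand-ins, not real APIs.

def retrieve_asset_context(asset_id: str) -> list[str]:
    # In practice: traverse the knowledge graph or a vector store for documents,
    # time series summaries, and work orders tied to this asset.
    return [
        "pump-101: vibration trending 20% above baseline over the last 7 days",
        "wo-5542 (open): bearing inspection scheduled for pump-101",
        "manual: vibration above 7.1 mm/s requires shutdown and inspection",
    ]

def build_prompt(question: str, context: list[str]) -> str:
    # Condition the model on retrieved, private context instead of public training data.
    joined = "\n".join(f"- {c}" for c in context)
    return (
        "Answer using only the context below. If the context is insufficient, say so.\n"
        f"Context:\n{joined}\n\nQuestion: {question}"
    )

def call_llm(prompt: str) -> str:
    # Placeholder for a call to an off-the-shelf LLM (e.g., a chat-completions API).
    raise NotImplementedError

question = "Should pump-101 be taken offline before the next shift?"
prompt = build_prompt(question, retrieve_asset_context("pump-101"))
# answer = call_llm(prompt)
```

The essential point is that the model only sees curated, asset-specific context at prompt time, so no retraining of the base model is required.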

See also: Data Privacy Concerns Deter Enterprises From Commercial LLMs

Addressing Generative AI Security and Privacy Issues

Like any advanced technology, LLMs can be vulnerable to adversarial attacks and data leakage. In an industrial setting, these concerns are magnified by the sensitivity of the data involved, such as proprietary designs and customer information. Ensuring proper anonymization, safeguarding LLM infrastructure, securing data transfers, and implementing robust authentication mechanisms are vital steps to mitigate cybersecurity risks and protect sensitive information. RAG makes it possible to maintain access controls and keep industrial data proprietary and secure within the corporate tenant, building trust with large enterprises and meeting stringent security and audit requirements.
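The sketch below illustrates two of those safeguards in Python: an access-control check before retrieval, and simple anonymization of identifiers before text reaches a prompt. The roles, site names, and identifier patterns are assumptions for illustration only.

```python
# A minimal sketch of pre-prompt safeguards: access control and anonymization.
# Roles, sites, and the CUST-###### identifier format are illustrative assumptions.
import re

AUTHORIZED_ROLES = {"reliability_engineer", "operations_manager"}

def check_access(user_role: str, asset_site: str, user_sites: set[str]) -> bool:
    # Enforce the same access controls on retrieval that apply to the source systems.
    return user_role in AUTHORIZED_ROLES and asset_site in user_sites

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
CUSTOMER_ID = re.compile(r"\bCUST-\d{6}\b")  # assumed internal identifier format

def anonymize(text: str) -> str:
    # Strip personal and customer identifiers before they enter a prompt.
    text = EMAIL.sub("[REDACTED_EMAIL]", text)
    return CUSTOMER_ID.sub("[REDACTED_CUSTOMER]", text)

snippet = "Escalated by jane.doe@example.com for CUST-004217, pump-101 bearing failure."
if check_access("reliability_engineer", "plant-a", {"plant-a"}):
    print(anonymize(snippet))
```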

Generative AI is a game-changer in industry. Integrating data contextualization, industrial knowledge graphs, and RAG technology lays the groundwork for successful and transformative AI implementations in industry. Embracing generative AI’s potential while addressing its challenges will drive the growth of an intelligent and resilient industrial future.


About Moe Tanabian

Moe Tanabian is the Chief Product Officer at Cognite. In that role, he spearheads product strategy, execution, and management, driving innovation at scale in the realms of software products, AI, ML, and IoT. Most recently, the AI innovators at Cognite published The Definitive Guide to Generative AI for Industry, a free resource to aid in successful digital transformation.  Before joining Cognite, Moe served as Vice President and General Manager of Azure Light Edge at Microsoft, where he led a $2 billion P&L for Microsoft's IoT, OT, and embedded industrial products. Prior to that, he held influential positions at Samsung, where he excelled as Vice President of Smart Products and IoT, and Amazon, where he played a key role in building and delivering the Amazon Android Appstore for Kindle Fire and Amazon Phone devices.
