5 Keys to Unlocking Business Value with MCP and Agentic AI
Agentic AI promises to unlock massive productivity and efficiency gains for organizations, but those results are not being realized yet. Many see MCP (Model Context Protocol) as the key, but it will only provide the desired business impact if implemented in the right way. This article explores critical areas that will help organizations achieve profound business results through agentic AI and MCP.
zeit·geist /ˈzītˌɡīst/ (noun): the defining spirit or mood of a particular period of history as shown by the ideas and beliefs of the time. “The MCP craze captured the zeitgeist of the AI boom in the early 21st century.”
The tech industry is a curious place. Its progress is staggering, its innovations remarkable, its impact profound. Yet our industry has a tendency to overhype new trends, predicting that each will act as a silver bullet that solves all of our unsolved problems. Blockchain, serverless computing, and the metaverse are a few examples that didn’t quite live up to their promise. MCP (Model Context Protocol) is the technology capturing the zeitgeist of these times, the would-be silver bullet of agentic AI. However, MCP itself is not the answer to every obstacle sitting between organizations and their AI-enabled goals. Still, you can use MCP to raise important questions about your organization’s digital foundation. This article explains how MCP can be used to unlock business value through agentic AI, but only when you address the questions it raises.
Since the “ChatGPT moment” of late 2022, the tech industry has been treating it as an inevitability that generative AI will transform every aspect of business, and to some degree society. As technologists around the world digested the capabilities of large language models, they began hypothesizing how LLMs could be combined with data and software functions to provide revolutionary process automation and human augmentation, driving unprecedented levels of efficiency and productivity in every industry. The vehicle for this value would be an exploding population of “AI agents” that would perform tasks autonomously or in support of human workers. Thus began the age of “agentic AI.”
The problem was, the more possibilities people imagined for agentic AI, the more they struggled to get started with implementing it. Along with reservations about security implications and concerns about skill gaps, a big obstacle for organizations was figuring out how to connect LLMs to the core data and applications they used to run their businesses. Each agent would need “tools” to make those connections, but there was no single way of finding and using those tools. MCP hit the mainstream in the fall of 2024 and was immediately and widely seen as an ideal protocol for discovering and invoking agent tools.
MCP was created by David Soria Parra and Justin Spahr-Summers, two engineers at Anthropic who needed a way to connect the Claude Desktop application to external resources. This origin story is important for two reasons. On one hand, MCP’s origin as a practical solution to a real problem explains its promise. On the other hand, the context of connecting a desktop app to outside services explains the risks of extending MCP into all areas of distributed computing, risks we will explore later. Regardless, MCP has been embraced emphatically by the tech industry, with companies rushing to provide their data and application functions through MCP servers and a whole MCP ecosystem forming virtually overnight.
MCP is merely a new application programming interface (API) protocol, albeit one created with LLMs in mind. APIs have been at the heart of every major digital innovation, from the proliferation of the world wide web itself to mobile apps, social networks, and cloud computing. As Stephen Fishman and I explore in our recent book, Unbundling the Enterprise, it is not APIs themselves but the way they are used that led to those technical innovations, as well as the rise of web giants like Amazon and Google. With that background, let’s consider the questions that MCP raises, whose answers will pave the path for organizations to overcome the obstacles and start reaping the rewards of agentic AI.
How can you make sure your MCP tools are the ones your AI agents need?
The common pattern emerging for rapidly exposing data and functions as MCP servers is to simply layer an MCP facade over an existing web API. The problem with this approach is that most APIs are designed for human users, not machines. In the name of flexibility, many web APIs are created as a set of “CRUD” (Create, Read, Update, Delete) operations on top of a set of data objects. This design approach can support many use cases, but it is not optimal even for human users. Humans, however, have the capacity to independently find or figure out the information they need to use the APIs. Machines, even AI agents, are not as adept.
AI agents have LLM-based reasoning, which means they can plan and execute a task as a series of subtasks. However, they are not deterministic, and LLM-based planning is inefficient and expensive. Tools, on the other hand, are deterministic and repeatable. When it comes to agentic processing, the more deterministic and repeatable tasks can be made, the better. That means the MCP interface should be at the coarsest granularity that fits a particular task. As an example, imagine an agent discovering an MCP server that is a facade over the Salesforce API, which includes hundreds of data objects with multiple operations each. Performing a task as simple as “create customer” would require dozens of tool calls, with repeated rounds of planning in between. It is a much better approach to craft MCP tools that are aligned with the tasks and subtasks the agent is expected to execute.
Ironically, the API design community has been promoting the idea of “APIs as products” for years. This approach centers API design on anticipated consumption patterns and provides an optimal developer experience (for humans and machines alike). Treating “tools as products” in the same way perfectly matches what AI agents need. To make sure your MCP tools are right for your AI agents, apply product thinking and design them at the task level.
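To make the “tools as products” idea concrete, here is a minimal sketch using the FastMCP helper from the MCP Python SDK (assuming the `mcp` package is installed). The `create_salesforce_customer` helper is a hypothetical placeholder for the bundled Salesforce calls; the point is that the agent sees one task-level tool rather than a pile of low-level CRUD operations.

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("customer-tools")

def create_salesforce_customer(name: str, email: str, phone: str) -> str:
    # Hypothetical placeholder: in practice this would call the Salesforce API
    # to create the Account and Contact objects and link them together.
    return "001XX0000000001"

@mcp.tool()
def create_customer(name: str, email: str, phone: str) -> str:
    """Create a new customer, including its account and primary contact.

    Use this tool for the task "create a customer". It performs all of the
    underlying object writes in one call, so the agent does not need to plan
    a sequence of low-level CRUD operations.
    """
    customer_id = create_salesforce_customer(name, email, phone)
    return f"Created customer {customer_id}"

if __name__ == "__main__":
    mcp.run()  # defaults to the local (stdio) transport
```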
How can you help AI agents hear more signal than noise from your MCP tools?
To AI agents, context is everything. Although LLM training sets are unfathomably massive, AI agents are most effective when dealing with precise data that applies to the particular task they are trying to accomplish. Because of the common CRUD style of API design discussed above, web API responses can often contain a boatload of data. Once again, simply translating such an API into an MCP tool would be problematic, potentially triggering cognitive overload in the agent.
Precision rules when it comes to AI agents. Take a surgical approach to MCP tool responses, making sure the data aligns with the agent’s scope and intent. If the agent needs current contact information, it might get that data by using an MCP tool that calls a “Get Customer” API. Rather than returning the whole customer payload, it would be better to only return the needed contact information. No address histories, no preferred locations, no superfluous information. In cases like this, less is more when it comes to MCP tool responses.
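As an illustration, a tool like the hypothetical `get_customer_contact` below projects the full “Get Customer” payload down to just the contact fields before handing anything back to the agent. This is a sketch, again assuming the MCP Python SDK; `fetch_customer_record` stands in for the real API call.

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("customer-contact-tools")

def fetch_customer_record(customer_id: str) -> dict:
    # Hypothetical stand-in for a "Get Customer" API call returning the full payload.
    return {
        "id": customer_id,
        "name": "Ada Lovelace",
        "email": "ada@example.com",
        "phone": "+1-555-0100",
        "address_history": ["..."],      # superfluous for this task
        "preferred_locations": ["..."],  # superfluous for this task
    }

@mcp.tool()
def get_customer_contact(customer_id: str) -> dict:
    """Return only the customer's current contact details (name, email, phone)."""
    record = fetch_customer_record(customer_id)
    # Project the full payload down to the fields the agent actually needs,
    # keeping the signal high and the context small.
    return {key: record[key] for key in ("name", "email", "phone")}
```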
How can you help your AI agents understand what your MCP tools do?
One of the most important elements of MCP is the facility it provides for describing tool semantics. If you want to let your agent know what a tool does, you can include a tool description that the agent can access by listing the tools on an MCP server. Once again, this is an area of risk if you are turning existing APIs into MCP tools. Semantic information for APIs might be included in API specs, but too often those description fields are given short shrift. The rich metadata that would be useful to an AI agent is more likely to be housed in the API documentation site.
Tool selection is one of the most important things an AI agent does, and the more tools an agent has to choose from, the harder that choice becomes. The best way to improve tool selection is to provide thorough, accurate, and precise semantic information about the tools. If you are turning APIs into MCP tools, make sure you take all the metadata a human developer might use, which may come from disparate sources, and package it up using MCP's description facilities.
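In the MCP Python SDK's FastMCP helper, for example, the function's docstring is typically used as the tool description the agent sees when it lists tools, which makes it a natural place to consolidate that metadata. The sketch below is illustrative; the billing call is a hypothetical stub.

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("billing-tools")

@mcp.tool()
def get_open_invoices(customer_id: str, max_results: int = 10) -> list[dict]:
    """List a customer's unpaid invoices, most recent first.

    When to use: the user asks about outstanding balances, overdue payments,
    or "what does this customer owe?".
    Inputs: customer_id is the CRM identifier (e.g. "CUST-1042"), not an email.
    Output: a list of {invoice_id, amount, currency, due_date} objects.
    Limits: returns at most max_results invoices.
    """
    # Hypothetical stub for the real billing-system call. The semantic details
    # above are consolidated from the API spec *and* the documentation site,
    # so the agent sees them when it lists tools on this server.
    return []
```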
How can you make your MCP connections more secure?
Having an LLM desktop application access local resources, the original use case for MCP, was fairly benign from a security perspective. When MCP is extended onto the distributed systems landscape, the security requirements become much more complex. As organizations started experimenting with MCP in this broader context, it became clear that the protocol had many gaps. Thankfully, the MCP authors and the growing community have been working tirelessly to document the many MCP security use cases and address them in the specification.
Still, there is a long way to go. Although the MCP authorization protocol is now well-defined, the complexity of seemingly simple agentic workflows can exceed what the spec currently provides. For example, imagine an AI agent whose purpose is to help Human Resources workers with employee onboarding. That agent might need tools to add employee contact information in Workday, add payroll details in ADP, and open tickets in ServiceNow to issue new IT equipment. This use case alone would need to consider the identity of the HR worker, the scope of the agent, and the permission level required to access each back-end application. Given the dynamic nature of agentic processing, ensuring appropriate access control is even more critical than in more deterministic settings.
Luckily, this is an area where the legacy of web APIs helps out. Multi-actor integrations featuring deep call chains and layered permissions are well known to the API world. Specifications like OAuth and OpenID Connect provide a foundation that can be, and is being, applied in MCP environments. To make MCP connections more secure and support agentic use cases like the employee onboarding example, leverage API security best practices.
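As a simplified illustration of applying those practices, the sketch below checks OAuth-style scopes before a tool call is executed. The scope names and the `verify_token` stub are hypothetical; in a real deployment the token would come from the MCP authorization flow and be validated against your OAuth/OpenID Connect provider.

```python
# Required scopes per tool for the hypothetical onboarding agent.
REQUIRED_SCOPES = {
    "add_employee_contact": {"workday:write"},
    "add_payroll_details": {"adp:payroll.write"},
    "open_equipment_ticket": {"servicenow:ticket.create"},
}

def verify_token(bearer_token: str) -> dict:
    # Stub: a real implementation would validate the signature, issuer,
    # audience, and expiry, then return the token's claims (including scopes).
    return {"sub": "hr-onboarding-agent", "scopes": {"workday:write", "adp:payroll.write"}}

def authorize_tool_call(tool_name: str, bearer_token: str) -> None:
    """Raise if the caller's granted scopes do not cover the tool's requirements."""
    claims = verify_token(bearer_token)
    missing = REQUIRED_SCOPES.get(tool_name, set()) - set(claims["scopes"])
    if missing:
        raise PermissionError(f"{tool_name} requires scopes: {sorted(missing)}")

# The onboarding agent can update Workday and ADP, but not open ServiceNow tickets.
authorize_tool_call("add_employee_contact", "example-token")     # allowed
# authorize_tool_call("open_equipment_ticket", "example-token")  # would raise PermissionError
```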
How can you make your MCP tools scalable?
Again owing to its original desktop context, MCP offers two modes: local (inter-process) and remote (network-based). Regardless of how it is accessed, though, in most enterprise use cases MCP will trigger network hops somewhere along the way. When the MCP client is an AI agent that requires scale, this has big implications. To avoid the traps set by the fallacies of distributed computing, it is vital to consider the full context of how your MCP tools are being deployed. With the cloud and edge computing options available today, almost any hybrid deployment model is possible; when it comes to AI agents using MCP tools, understanding the context of tool execution is what determines the right mode.
There are dangers in using the local mode of MCP tool calling. It may be tempting to try this mode for efficiency reasons, but it can bypass functions that are necessary for security and reliability. If you do entertain local mode, it is best to keep as much of the MCP tool call stack as possible co-located with the MCP client. The bottom line is that you have to be conscious of the overall distributed landscape in order to make your MCP-empowered agentic solutions scalable and reliable.
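For example, with the MCP Python SDK the same FastMCP server can be run over the local stdio transport or an HTTP-based transport, depending on where the tool stack is deployed relative to the agent. This is a sketch; the exact transport options and names vary by SDK version.

```python
import sys
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("scalable-tools")

@mcp.tool()
def ping() -> str:
    """Trivial health-check tool."""
    return "pong"

if __name__ == "__main__":
    if "--remote" in sys.argv:
        # Remote mode: HTTP-based transport, for agents calling over the network.
        mcp.run(transport="sse")
    else:
        # Local mode: stdio transport, for a co-located client and tool stack.
        mcp.run(transport="stdio")
```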
A Final Word
These are exciting times in our industry! It’s easy to get swept up in the zeitgeist and yearn for silver bullets. Personally, I find it reassuring that the time-worn principles that underpin this article still apply, even in the age of AI. So keep going, stay curious, and let MCP be the question that leads you to the next question. Keep answering, keep digging, and you’ll find you are on a road paved with business value.
Matt McLarty is Chief Technology Officer at Boomi, which helps organizations around the world to thrive in the digital age. He is an active member of the worldwide API community and has led global technical teams at Salesforce, IBM, and CA Technologies after starting his career in financial technology. Matt has co-authored books for O'Reilly, co-hosts the API Experience podcast, and is co-author of the new book Unbundling the Enterprise from IT Revolution.