MCP: The USB-C Port for AI, Yes or No? - RTInsights


Graph technology expert Dominik Tomicevic on why Anthropic’s model context protocol offers a much stronger way for LLMs to connect with the outside world than previous approaches, but it needs the helping hand of graph technology and GraphRAG.

Apr 21, 2026
4 minute read

Model context protocol (MCP), an open-source standard introduced by Anthropic in late 2024, is designed to standardize how large language models (LLMs) connect to external tools, databases, and data sources. It enables AI applications such as Claude and Cursor to interact with files, APIs, and databases without requiring bespoke, hard-coded integrations for each tool, hence its description as a “USB-C port for AI.”
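To make the "USB-C port" analogy concrete, here is a minimal sketch of MCP-style tool exposure. This uses plain Python stand-ins rather than the real MCP SDK; the registry, tool names, and schema format are all illustrative assumptions, not the protocol's actual wire format.

```python
# Illustrative stand-in for MCP-style tool exposure (not the real SDK).
# Each tool is registered once with a name, description, and input schema,
# so a model can discover and call it without bespoke glue code per tool.

TOOLS = {}

def tool(name, description, schema):
    """Register a callable as a discoverable tool."""
    def wrap(fn):
        TOOLS[name] = {"description": description, "schema": schema, "fn": fn}
        return fn
    return wrap

@tool("read_file", "Read a UTF-8 text file from disk.", {"path": "string"})
def read_file(path):
    with open(path, encoding="utf-8") as f:
        return f.read()

@tool("add", "Add two numbers.", {"a": "number", "b": "number"})
def add(a, b):
    return a + b

def list_tools():
    """What a host would advertise to the model as available tools."""
    return [{"name": n, "description": t["description"], "schema": t["schema"]}
            for n, t in TOOLS.items()]

def call_tool(name, **kwargs):
    """Dispatch a model-chosen tool call to the registered function."""
    return TOOLS[name]["fn"](**kwargs)

advertised = list_tools()  # sent to the model instead of hard-coded integrations
```

The point of the pattern is that each new capability is one registration, not one bespoke integration.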

There is certainly some truth in that comparison, but useful as MCP is, it still needs some help to fully deliver. Let's explore how.

See also: MCP: Enabling the Next Phase of Enterprise AI

Give LLMs tool access

One point in its favor is that MCP gives LLMs a far better way to interact with the outside world than earlier approaches did. Acting as a bridge between models and external tools, APIs, and data sources, it removes the need for teams to manually orchestrate multi-step pipelines: retrieving data, formatting it, injecting it into prompts, and parsing outputs. Instead, AI teams can expose capabilities directly to the model.

The LLM can decide which tools to use, when to use them, and how to combine their outputs. That moves AI systems toward more dynamic, agent-like behavior. In turn, developer teams save significant time by defining a set of tools—each representing an action or data source—and letting the model orchestrate them, from querying databases to triggering workflows such as sending emails or updating systems.
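The orchestration loop described above can be sketched as follows. The model's decision-making is simulated here as a pre-built plan of tool calls, and both tools (`search_db`, `send_email`) are hypothetical stand-ins invented for illustration.

```python
# Sketch of the orchestration loop MCP enables: the model emits tool calls,
# the host executes them and collects results. The "model" is simulated by a
# fixed plan; both tools are illustrative stand-ins.

def search_db(term):
    """Hypothetical database query tool."""
    return {"hits": [f"row matching {term!r}"]}

def send_email(to, body):
    """Hypothetical workflow-trigger tool."""
    return {"status": "sent", "to": to}

TOOLS = {"search_db": search_db, "send_email": send_email}

def run_agent(tool_calls):
    """Execute a model-chosen sequence of tool calls, collecting results."""
    results = []
    for call in tool_calls:
        fn = TOOLS[call["tool"]]
        results.append(fn(**call["args"]))
    return results

# A plan the model might emit: query the database, then act on the result.
plan = [
    {"tool": "search_db", "args": {"term": "overdue invoices"}},
    {"tool": "send_email", "args": {"to": "finance@example.com",
                                    "body": "See attached overdue invoices."}},
]
```

The developer's job shrinks to defining the tools; sequencing them becomes the model's job.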

Letting engineers hand the model descriptions of these tools, and letting the model select the most appropriate one for a given task, saves time and makes systems more flexible. As a result, model providers, frameworks, and platforms are increasingly supporting MCP-style interactions, and many developer tools now assume some form of tool-based orchestration.

However, despite these benefits, MCP introduces new challenges that cannot be ignored. The biggest gaps are around context: connecting too many servers clogs the model's context window with tool definitions, leaving fewer tokens for actual user data and slowing down reasoning.

What this boils down to is that context-window overload works much like giving an LLM too many options, which encourages hallucination. As the number of available tools grows, the model's ability to reliably select the correct one decreases, tempting it to use a tool incorrectly, or to try to be helpful and combine tools in ways that don't achieve the intended result.

This means you still need to curate the tools the LLM can see, and best practice is to keep that set as minimal as possible. For example, if the model needs up-to-date information, give it a web search tool; if it needs reliable calculations, give it a calculator tool; and so on.
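A rough back-of-the-envelope sketch makes the context cost visible. The 8,000-token window and whitespace-based token counting are simplifying assumptions for illustration; real tokenizers and window sizes differ.

```python
# Rough sketch of why over-registering tools erodes the context window:
# every tool definition costs tokens before any user data arrives.
# Token counting is approximated by whitespace splitting (illustrative only).

def approx_tokens(text):
    return len(text.split())

def context_budget(tool_descriptions, window=8000):
    """Tokens left for user data after tool definitions are injected."""
    used = sum(approx_tokens(d) for d in tool_descriptions)
    return window - used

# A curated, minimal set versus an uncurated dump of every available server:
minimal = ["web_search: fetch current information from the web",
           "calculator: evaluate arithmetic expressions reliably"]
bloated = minimal + [f"tool_{i}: does something vaguely useful"
                     for i in range(200)]
```

Every definition in the bloated set is paid for on every request, whether or not the tool is ever called.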

At the same time, an LLM might know how to use a tool, but not what to do with it. A model might generate syntactically correct queries, but without understanding the underlying schema or data relationships, those queries may be meaningless or incomplete.

The only way to fix this is by providing the AI with the structured knowledge it needs to interpret data correctly and decide which tool is appropriate in the first place. Without that grounding, even the most well-designed MCP system can behave unpredictably.

There are also important security implications to consider, such as unauthorized data access, prompt injection that triggers unintended actions, and more. Organizations need to understand which tools an AI app uses, why it chose them, and what actions were executed as a result.

See also: The Growing Importance of Securing MCP Servers for AI Agents


Why GraphRAG could help

So what serves as the bridge to accessing structured knowledge, providing reliable context, or guiding decision-making? This is where architectural choices become critical to improving both the quality and structure of the context provided to the model. Retrieval-augmented generation (RAG) is key, and increasingly, graph-based approaches in the shape of GraphRAG, first proposed by Microsoft's AI research team, are being adopted.

What’s going on under the hood? Traditional RAG systems rely on vector search to retrieve relevant information. This helps reduce hallucinations, but the model can still struggle when it comes to complex relationships or implicit structures in the data. GraphRAG provides a solution to this by adding a knowledge graph layer; rather than relying purely on similarity, it encodes entities, relationships, and rules in a structured form.
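The difference can be sketched with a toy example. The entities, relationships, and the keyword-match stand-in for vector search below are all invented for illustration; a real GraphRAG system would use embeddings and a proper graph database.

```python
# Toy GraphRAG retrieval sketch: a similarity search finds a seed entity, then
# the knowledge graph contributes explicitly connected facts that similarity
# over isolated chunks would miss. All data here is illustrative.

GRAPH = {  # entity -> list of (relation, neighbor)
    "Acme Corp": [("subsidiary_of", "Globex"), ("supplies", "Initech")],
    "Globex":    [("headquartered_in", "Berlin")],
    "Initech":   [],
}

def vector_seed(query):
    """Stand-in for vector search: naive keyword match over entity names."""
    for entity in GRAPH:
        if entity.lower() in query.lower():
            return entity
    return None

def graph_context(entity, depth=2):
    """Walk relationships outward to collect structured context."""
    facts, frontier = [], [entity]
    for _ in range(depth):
        nxt = []
        for node in frontier:
            for rel, neighbor in GRAPH.get(node, []):
                facts.append(f"{node} --{rel}--> {neighbor}")
                nxt.append(neighbor)
        frontier = nxt
    return facts

seed = vector_seed("Who ultimately controls Acme Corp?")
context = graph_context(seed)
```

The two-hop fact that Globex is headquartered in Berlin reaches the model only because the relationship is encoded explicitly; no chunk about Acme Corp alone would contain it.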

This provides the model with a clearer understanding of how data is connected and what it represents. In the context of MCP, this has two important effects. First, it improves tool selection; when the LLM has access to structured knowledge, it can more accurately determine which tools are relevant to a given task. Second, it enables more controlled execution; by embedding rules and relationships directly into the data layer, the system can guide the model toward valid actions and away from risky or nonsensical ones.

For example, a graph can encode constraints such as permissions, dependencies, or business logic, which can provide guardrails that complement MCP’s action layer. The result, we are arguing, is a more balanced system where an MCP component handles interaction and execution, RAG provides relevant context, and the knowledge graph adds clarity, constraints, and reasoning support.
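The guardrail idea can be sketched as a permission check consulted before any model-chosen action runs. The roles, tool names, and flat edge set below are hypothetical; in practice these constraints would live as edges in the knowledge graph itself.

```python
# Sketch of graph-encoded guardrails: before executing a model-chosen action,
# the host checks for a permission edge. Roles and tools are illustrative;
# a real system would query these edges from the knowledge graph.

PERMISSIONS = {  # (role, tool) edges in the graph's constraint layer
    ("analyst", "query_sales_db"),
    ("admin", "query_sales_db"),
    ("admin", "delete_records"),
}

def allowed(role, tool):
    """A tool call is valid only if a permission edge exists."""
    return (role, tool) in PERMISSIONS

def guarded_call(role, tool, execute):
    """Run the action only after the constraint layer approves it."""
    if not allowed(role, tool):
        raise PermissionError(f"{role} may not call {tool}")
    return execute()

result = guarded_call("analyst", "query_sales_db", lambda: "42 rows")
```

Because the check sits in the data layer rather than in the prompt, a prompt-injected request to delete records fails the same way a confused model's request would.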

MCP is great, but I am concerned that a lot of teams are rushing ahead with it and not really seeing the context forest for the MCP trees. Graphs and GraphRAG are critically important to supporting and securing AI's reasoning. I'd go so far as to say that an AI should be able to dynamically enable or disable tools based on what's being asked, giving the model only what it needs.
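Dynamically scoping the tool set per request might look something like this. The keyword routing, tool names, and topic tags are illustrative assumptions; a production system could drive the same decision from the knowledge graph or a lightweight classifier.

```python
# Sketch of per-request tool scoping: route the query to a small relevant
# subset instead of exposing every registered tool. Routing here is naive
# keyword matching; tool names and topic tags are illustrative.

ALL_TOOLS = {
    "web_search": {"topics": {"news", "current", "latest"}},
    "calculator": {"topics": {"sum", "calculate", "total"}},
    "crm_lookup": {"topics": {"customer", "account"}},
}

def tools_for(query):
    """Return only the tools whose topics overlap the query's words."""
    words = set(query.lower().split())
    chosen = [name for name, meta in ALL_TOOLS.items()
              if meta["topics"] & words]
    return chosen or list(ALL_TOOLS)  # fall back to everything if unsure

# Only the calculator is exposed for an arithmetic request:
scoped = tools_for("calculate the total of these invoices")
```

Each request then pays the context cost of a handful of tool definitions rather than the whole catalog.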

I don't think most teams are there yet, but after MCP adoption and context constraining, GraphRAG has to be the next step on the road to practical, enterprise-ready artificial intelligence.

Dominik Tomicevic

Dominik Tomicevic is the CEO of Memgraph, a software company addressing the shortcomings of existing market solutions with an open-source in-memory graph database purpose-built for dynamic, real-time enterprise applications. In 2011, Dominik was one of only four people worldwide to receive the Microsoft Imagine Cup Grant, personally awarded by Bill Gates. In 2016, he founded Memgraph, a venture-backed graph database company specializing in high-performance, real-time connected data processing. In 2017, Forbes recognized Dominik as one of the top 10 Technology CEOs to watch. Today, Memgraph boasts an open-source community of 150,000 members and customers including NASA, Cedars-Sinai, and Capitec Bank.
