Why Your AI Pilot Is Stuck in Purgatory, and What to Do About It


The organizations winning in AI are making the hard decisions early, before touching a model, before signing a contract, before announcing a transformation initiative.

Here’s what few enterprise leaders are willing to say out loud: most AI pilots aren’t failing because of bad data. They’re failing because organizations rush the wrong steps and systematically avoid the unglamorous foundation work that actually determines success. Executives want transformation but dodge the hard calls on architecture, governance, and accountability. They skip the boring parts—and pay for it later.

Executive enthusiasm – and then the slow death. That pilot everyone loved in the boardroom? It’s still stuck in staging. Recent research from MIT confirms the gut feeling: roughly 95 percent of enterprise generative AI pilots fail to deliver measurable financial returns. Other analyses indicate that 42 percent of companies abandoned most of their AI initiatives in 2025—more than double the rate from just a year earlier.

That’s billions in wasted investment. The question isn’t whether you can afford AI. It’s whether you can afford another failed pilot.

See also: Studies Find Scaling Enterprise AI Proves Challenging

The Decisions Nobody Wants to Make

When AI stalls, the blame lands on regulation, the models, or “our data isn’t ready.” Safe targets, all of them. Nobody gets fired for bad data. But these explanations let everyone off the hook for the actual problem.

What actually kills these projects is the conversations nobody wants to have. Should we build this ourselves or partner with someone? Who decides what happens to the data? Who takes the blame if it fails? Leaders love the idea of AI transformation. They’re less enthusiastic about the meetings where those questions get answered.

These aren’t technical problems. They’re leadership problems disguised as technical ones.

Consider the build-versus-buy decision. The instinct is to build in-house. The reality? Companies that buy or partner succeed at roughly double the rate of those that build. Your team knows the business. But they haven’t done this 200 times. Vendors have. And in AI, speed to production matters.

But that means admitting your team—however talented—might not be the right fit for this one. Most leadership teams would rather not have that conversation.

The Infrastructure Reality Check

There’s another failure mode: infrastructure. Leaders consistently underestimate what AI actually demands. Traditional capacity planning doesn’t work here.

Financial services organizations that project modest infrastructure cost increases often find actual costs exceed estimates by a factor of three or four. Manufacturers roll out predictive maintenance and watch storage needs double every six months. Healthcare systems go live with diagnostic AI and hit network bottlenecks nobody saw in testing.

Why the miss? AI workloads don’t behave like traditional apps. One successful use case spreads fast—and every new instance needs more compute. The hardware doesn’t follow normal price curves. And what worked fine in dev often falls apart at scale.

Agentic AI further complicates things. A single user query can trigger dozens of internal AI calls, each burning tokens and compute. Traditional planning has no model for this.
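To see why traditional planning breaks down, it helps to put rough numbers on the fan-out. The sketch below compares one direct model call against an agentic chain for the same user query; every figure in it (call counts, token sizes, pricing) is an illustrative assumption, not a benchmark.

```python
# Sketch: estimating how agentic fan-out multiplies per-query cost.
# All numbers here are illustrative assumptions, not benchmarks.

def agentic_query_cost(
    direct_tokens: int = 2_000,      # tokens for a single direct LLM call
    internal_calls: int = 24,        # sub-calls an agent chain might trigger
    tokens_per_call: int = 1_500,    # average tokens per internal call
    price_per_1k_tokens: float = 0.01,
) -> dict:
    """Compare a direct call against an agentic chain for one user query."""
    direct = direct_tokens / 1000 * price_per_1k_tokens
    agentic_tokens = direct_tokens + internal_calls * tokens_per_call
    agentic = agentic_tokens / 1000 * price_per_1k_tokens
    return {
        "direct_cost": round(direct, 4),
        "agentic_cost": round(agentic, 4),
        "amplification": round(agentic / direct, 1),
    }

print(agentic_query_cost())
# With these assumed numbers, one agentic query costs ~19x a direct call.
```

The point isn't the specific multiplier; it's that any linear per-user cost model silently breaks the moment queries start spawning internal calls.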

Then there’s obsolescence. Companies that spent months building custom RAG implementations are watching that work get commoditized by off-the-shelf solutions. What took six months to build can become irrelevant in six weeks.

The Three Questions That Actually Matter

Before you touch a model, write code, or draft an RFP, get clear on three things.

1) What problem are we actually solving?

This sounds obvious, but the number of AI initiatives launched without a clear, measurable business problem is staggering. “Implement AI” is not a business objective. “Reduce customer service response time by 40 percent” is. “Use generative AI” is a technology choice. “Cut contract review cycles from two weeks to two days” is a business outcome.

Here’s what’s counterintuitive: the biggest returns usually come from back-office work—procurement, finance, operations—not the customer-facing stuff that gets all the attention. Companies keep funding flashy projects while ignoring where the money actually is.

Forcing specificity about the problem also helps identify whether AI is even the right solution. Sometimes, traditional automation, workflow optimization, or simply better data management will achieve the objective more quickly and reliably than an AI implementation.

2) Where does the data actually live?

AI runs on data. Bring in third-party tools, and you face a real question: how do you use them without losing control of proprietary information?

This challenge operates on multiple levels. Many AI services require sending corporate data to external systems for processing. The use of external large language models may grant service providers certain rights to uploaded data. Even on-premises AI solutions require careful consideration of how data flows through systems and what information might be inadvertently exposed.

You need clear classification: what’s too sensitive to leave your walls, what can go external under tight controls, and what’s fair game. That classification drives every implementation decision.
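One way to make that classification operational is to encode it as an explicit gate that every integration must pass. A minimal sketch, assuming a three-tier scheme; the tier names and the routing rules are hypothetical, not a prescribed standard:

```python
# Sketch: a three-tier data classification gate for AI tooling.
# Tier names and routing rules are illustrative assumptions.

from enum import Enum

class Tier(Enum):
    RESTRICTED = "restricted"   # too sensitive to leave your walls
    CONTROLLED = "controlled"   # external only under tight controls
    PUBLIC = "public"           # fair game

def may_send_external(tier: Tier, vendor_has_dpa: bool) -> bool:
    """Decide whether data at this tier may go to an external AI service."""
    if tier is Tier.RESTRICTED:
        return False
    if tier is Tier.CONTROLLED:
        # e.g. signed data processing agreement with a no-training clause
        return vendor_has_dpa
    return True

assert not may_send_external(Tier.RESTRICTED, vendor_has_dpa=True)
assert may_send_external(Tier.CONTROLLED, vendor_has_dpa=True)
assert not may_send_external(Tier.CONTROLLED, vendor_has_dpa=False)
```

The value of writing it down this plainly is that the policy becomes testable and auditable, rather than living in a slide deck nobody checks against production.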

In healthcare and financial services, the stakes are higher. HIPAA and GDPR don’t always play nicely with how AI uses data. But companies that sort out governance upfront move faster than those trying to fix it mid-project.

3) Who owns outcomes—not experiments?

AI projects usually start as experiments—data scientists exploring in a sandbox, with no clear owner. Fine at first. But getting to production means someone has to own the outcome.

Success means line managers and front-line teams driving adoption—not just the AI lab. When it stays in the hands of specialists, it stays in pilot purgatory. Nobody with operational authority has skin in the game.

The organizational changes required are often more challenging than the technical implementation. Organizations need cross-functional teams that bring together expertise from traditional IT domains, data science, and business units to address implementation challenges collaboratively. Success requires an integrated perspective that bridges infrastructure capabilities with application requirements.

How the Winners Are Doing It Differently

Organizations achieving AI success are taking a fundamentally different approach: measured build-up, then rapid scale, instead of “launch first, figure it out later.”

They invest in the unglamorous stuff. Successful implementations follow a counterintuitive split: 10 percent on algorithms, 20 percent on infrastructure, 70 percent on people and process. Most failures invert that—obsessing over the model while ignoring everything that makes it work.

Successful organizations also approach AI deployment with security testing built in from the start: large language models and conversational AI present unique vulnerabilities that traditional penetration testing cannot adequately address. Forward-thinking organizations implement continuous red teaming specifically for AI implementations, systematically testing attack vectors and identifying weaknesses before malicious actors discover them.

Most importantly, they build for flexibility. AI moves fast—today’s cutting edge is tomorrow’s legacy. Winners design with abstraction layers that insulate them from tech shifts. They don’t bet everything on one approach.
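In code, that abstraction layer can be as simple as a shared interface that application logic depends on, with each provider behind it. A minimal sketch; the class and provider names here are hypothetical, not any vendor's actual API:

```python
# Sketch: a thin abstraction layer between application code and any one
# model provider. Provider names and methods are hypothetical.

from typing import Protocol

class TextModel(Protocol):
    def complete(self, prompt: str) -> str: ...

class InHouseModel:
    def complete(self, prompt: str) -> str:
        return f"[in-house] {prompt[:20]}"   # stand-in for a local model call

class VendorModel:
    def complete(self, prompt: str) -> str:
        return f"[vendor] {prompt[:20]}"     # stand-in for a vendor API call

def summarize(model: TextModel, document: str) -> str:
    # Application code depends only on the interface, so swapping
    # providers is a one-line change at the call site.
    return model.complete(f"Summarize: {document}")

print(summarize(InHouseModel(), "Q3 procurement report"))
print(summarize(VendorModel(), "Q3 procurement report"))
```

When today's cutting-edge model becomes tomorrow's legacy, the swap happens behind the interface instead of rippling through every caller.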

From Pilot Purgatory to Production

The real unlock for enterprise AI isn’t faster models or more GPUs. It’s forcing the strategic clarity that most organizations avoid. Before your next AI initiative launches, pressure-test these elements:

Start with measurable business outcomes. Define success in terms that tie directly to revenue, cost reduction, or risk mitigation. If you can’t articulate the cost of the non-AI alternative, you haven’t defined the problem clearly enough.

Map your data reality honestly. Document where sensitive data lives, how it needs to move, and what governance structures must be in place before—not after—implementation begins.

Assign outcome ownership. Identify the business leader accountable for production results, not just the technical team responsible for building the system.

Plan for infrastructure reality. Develop capacity projections that account for the exponential nature of successful AI adoption, not the linear models that work for traditional applications.

Build in flexibility. Design architectures that can adapt to rapid technological change rather than optimizing for today’s specific solutions.
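The gap between linear and exponential projections is easy to underestimate until you run the numbers. A quick sketch, using a doubling period in the spirit of the storage-doubles-every-six-months pattern described earlier; the baseline and growth figures are illustrative assumptions:

```python
# Sketch: linear vs. exponential capacity projections over two years.
# Baseline, growth rate, and doubling period are illustrative assumptions.

def linear_projection(base_tb: float, growth_per_quarter: float, quarters: int) -> float:
    """Traditional plan: add a fixed amount of capacity each quarter."""
    return base_tb + growth_per_quarter * quarters

def exponential_projection(base_tb: float, doubling_quarters: int, quarters: int) -> float:
    """Successful-adoption plan: capacity doubles every N quarters."""
    return base_tb * 2 ** (quarters / doubling_quarters)

base = 100.0  # TB today
for q in (4, 8):
    lin = linear_projection(base, growth_per_quarter=10, quarters=q)
    exp = exponential_projection(base, doubling_quarters=2, quarters=q)
    print(f"after {q} quarters: linear {lin:.0f} TB vs exponential {exp:.0f} TB")
```

With these assumptions, the linear plan reaches 180 TB after two years while the exponential path reaches 1,600 TB; a budget built on the first model is off by nearly an order of magnitude.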

The gap between organizations that experiment with AI and those that transform through it comes down to the willingness to do the boring work. Strategic clarity about problems, data, and accountability isn’t exciting. It doesn’t make good headlines. But it’s the foundation that determines whether AI investments deliver lasting value or end up in the growing pile of abandoned pilots.

The organizations winning in AI aren’t necessarily the ones with the biggest budgets or the most sophisticated technical teams. They’re the ones willing to make the hard decisions early—before touching a model, before signing a contract, before announcing a transformation initiative. They skip the hype and do the homework.

In a landscape where 95 percent of AI pilots fail to deliver measurable returns, that discipline is the real competitive advantage.


About Daniel Clydesdale-Cotter

Daniel Clydesdale-Cotter is the CIO at EchoStor, where he helps organizations navigate complex infrastructure decisions and modernization initiatives. He brings extensive experience in enterprise IT strategy and infrastructure optimization.
