You've run some pilots. Maybe a chatbot that showed promise, some automation that saved a few hours here and there, an experiment that got people excited. Now what?
This is where most organizations get stuck. The experiments were interesting. Scaling them to enterprise impact? That's a different challenge entirely.
McKinsey's November 2025 research confirms what we're seeing with clients: 88% of organizations are using AI, but only 38% have moved beyond experimentation to scale AI across their enterprise. The majority—about 62%—are still in the piloting or experimenting phase. The gap between "doing AI" and "capturing measurable impact" is where good intentions go to die.
If you're in that gap, here's what we've learned about getting to the other side.
Pilots are designed to answer "can this work?" Scaling is designed to answer "can this work reliably, securely, and at volume across our organization?" Those are very different questions with very different requirements.
This is why "just expand the pilot" rarely works. Scaling usually means rebuilding with production requirements in mind from the start.
Here's a reality check from the data: company size correlates strongly with scaling success.
Nearly half of respondents from companies with more than $5 billion in revenue have reached the scaling phase, compared with just 29% of those with less than $100 million.
This isn't because smaller organizations are less capable—it's because scaling requires dedicated resources. Larger companies can invest more in infrastructure, talent, and the organizational change management that scaling demands. Mid-market companies need to be more strategic about where they focus, which makes getting the foundation right even more critical.
The latest research shows clear patterns in where organizations are seeing measurable impact from AI:
Cost reduction: the reported decreases cluster in functions with clear, measurable tasks, where AI can demonstrably reduce time and effort.
Revenue increases: the reported gains show up where AI enhances decision-making and customer-facing activities.
Emerging hotspot: Knowledge management is now one of the top functions for AI use, on par with IT and marketing. This reflects the rapid adoption of AI assistants for research, document analysis, and information synthesis.
You've probably heard a lot about AI agents—systems that can autonomously plan and execute multi-step workflows. The reality is more measured than the hype suggests.
The numbers: While 62% of organizations are experimenting with AI agents, only 23% are scaling them anywhere in their enterprise. And in any given business function, no more than 10% report scaling AI agents.
Where agents are gaining traction:
For most organizations, agents remain in the exploratory stage. The foundations you're building now—clean data, documented processes, flexible infrastructure—will determine how quickly you can take advantage of agentic capabilities as they mature.
As you move from experiments to enterprise, tool selection becomes more consequential. A pattern we see frequently: organizations try to make one tool do everything, or they end up with a confusing patchwork of point solutions.
A more practical approach is a portfolio:
If your current pilots are generating enthusiasm but not data, they're not setting you up for scale. Successful pilots are structured experiments with clear hypotheses and measurement.
For pilots that lead somewhere:
✅ Choose 2-4 departments and concrete workflows (e.g., "RFP drafting in Sales," "policy Q&A for HR")
✅ Connect real data sources those teams actually rely on—not test data
✅ Measure what matters:
✅ Watch for risks: security incidents, governance gaps, access to content people shouldn't see
The goal isn't just "did this work?" It's "what would it take to roll this out to 500 people?" and "what did we learn that applies to other use cases?"
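To make that concrete, here's a minimal sketch of the arithmetic a well-instrumented pilot enables. Everything in it is hypothetical (the workflow, the field names, the numbers); the point is the shape of the measurement, not the specific values:

```python
from dataclasses import dataclass

@dataclass
class PilotMetrics:
    """Per-workflow measurements collected during a pilot (hypothetical fields)."""
    workflow: str
    baseline_minutes: float        # average time per task before AI
    assisted_minutes: float        # average time per task with AI
    tasks_per_user_per_week: float
    adoption_rate: float           # fraction of pilot users who kept using the tool

def projected_hours_saved(m: PilotMetrics, users: int, weeks: int = 52) -> float:
    """Project annual hours saved if this workflow scaled to `users` people."""
    saved_per_task = (m.baseline_minutes - m.assisted_minutes) / 60.0
    return saved_per_task * m.tasks_per_user_per_week * m.adoption_rate * users * weeks

# Illustrative pilot result for one workflow (made-up numbers):
rfp = PilotMetrics("RFP drafting in Sales", baseline_minutes=90, assisted_minutes=55,
                   tasks_per_user_per_week=3, adoption_rate=0.6)

print(f"{rfp.workflow}: ~{projected_hours_saved(rfp, users=500):,.0f} hours/year at 500 users")
```

A pilot that can't populate a structure like this is generating enthusiasm, not evidence.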
Pilots often fly under the radar of security and compliance teams. Enterprise deployment can't, and shouldn't try to.
The data shows this is a real issue: 51% of organizations using AI have experienced at least one negative consequence in the past year.
Organizations further along in their AI journey are more likely to be actively mitigating these risks.
What you need to think through:
Building this operating model early—before you scale—saves enormous headaches later.
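To illustrate just one of those concerns, the "access to content people shouldn't see" risk flagged in the pilot checklist above, here's a minimal sketch of enforcing source-level permissions before an assistant is allowed to use a document. The document store, group model, and all names are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class Document:
    doc_id: str
    text: str
    allowed_groups: set[str] = field(default_factory=set)  # who may see this source

@dataclass
class User:
    user_id: str
    groups: set[str]

def permitted_documents(user: User, candidates: list[Document]) -> list[Document]:
    """Filter retrieved documents down to what this user may see.
    The assistant is only ever handed documents that pass this check,
    so restricted content can't leak into an answer."""
    return [d for d in candidates if d.allowed_groups & user.groups]

# Hypothetical example: an HR policy bot answering a sales rep's question.
docs = [
    Document("hr-policy-pto", "PTO accrues at...", {"all-employees"}),
    Document("exec-comp-2025", "Executive compensation...", {"hr-comp-team"}),
]
rep = User("jdoe", {"all-employees", "sales"})
print([d.doc_id for d in permitted_documents(rep, docs)])  # ['hr-policy-pto']
```

The specifics will vary, but the principle scales: permissions enforced once in the platform layer, not re-implemented in every pilot's glue code.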
If your pilots have shown promise, you've already answered the easy question: "Can AI work here?"
The harder question is next: "Are we willing to do what scaling actually requires?"
Because scaling isn't a technology project with an end date. It's a decision to permanently change how your organization operates. It means:
That's not a criticism—it's a reality check. There's nothing wrong with staying in pilot mode if you're not ready for that commitment. Pilots generate learning, build internal capability, and keep options open.
But if you've been piloting for 18 months and you're still piloting, it's worth asking why. Is it because you're learning and iterating? Or because nobody is willing to make the call one way or the other?
The path forward isn't more pilots. It's a decision. Either commit to the operational work that turns experiments into infrastructure, or be intentional about staying small. Both are valid choices. Limbo isn't.