In our previous blog on the AI agent transition, we looked at where this shift stands, the debate it's generating, and why the historical pattern favors opportunity over displacement at the macro level. But macro trends don't run companies. The organizations that capture disproportionate value from AI agents will be the ones that do the hard internal work of building a coherent AI agent strategy. Most haven't started.
Harvard Business School researchers call the critical factor "change fitness." The framing matters because it shifts attention away from the technology department and onto your entire operating model. Change fitness is how quickly you can redesign workflows, retrain teams, update governance, and help your people absorb a genuinely new way of working. Most organizations have significant ground to cover, and buying more AI licenses won't fix that.
The following five shifts are where that ground gets covered.
For decades, knowledge workers have built their professional identity around being good at doing the work. The researching, the analyzing, the writing, the coding. Moving to Level 3 and beyond asks them to shift from doing to orchestrating. From writing the analysis to designing the framework that AI agents use to write it. From managing people doing tasks to managing systems that coordinate agents doing tasks.
This is a change in what it means to be good at your job. For a lot of people, that's going to feel like being told the thing they're best at no longer matters.
Even at the frontier, this isn't about handing everything over. Anthropic's research shows that professionals use AI in roughly 60% of their work but fully delegate only 0–20% of tasks. The working model is collaboration, not abdication.
Organizations that don't help their people navigate this identity shift will face resistance, burnout, and talent loss. The ones that invest in it will find their best people become even more valuable. A great financial analyst becomes a great architect of analytical frameworks, a great evaluator of quality, and a great translator of insight to decision-makers. That's a different role, and arguably a more creative and higher-leverage one. People who can combine domain expertise with the ability to orchestrate AI systems will be extraordinarily valuable. But people need help seeing it that way.
Anthropic CEO Dario Amodei has warned that AI could eliminate a large share of entry-level white-collar jobs within several years. The reason is straightforward: the foundational tasks through which juniors have always built expertise (research summaries, first-draft analyses, routine reporting) are exactly what AI handles most naturally.
This creates a structural problem that goes well beyond displacement. If your junior employees never do the foundational work, how do they develop the judgment your organization needs in its senior leaders five years from now? The talent pipeline that has powered knowledge-intensive organizations for decades could erode silently while everyone focuses on productivity gains.
Keeping juniors on busywork for nostalgia's sake isn't the answer. The answer is deliberately redesigning how they develop. That means structured training tasks that build core skills AI can't shortcut, paired human-AI review sessions where juniors learn to evaluate outputs critically, simulated environments for practicing complex judgment calls, and protected mentorship time that doesn't get sacrificed to "agent throughput" pressure. Organizations that solve this will have a durable talent advantage.
In our previous blog we described the "hypertail," the explosion of customized agents and applications that AI makes economically viable for the first time. That's the opportunity. The challenge is that without governance, this creative explosion becomes agent sprawl.
Machine identities already outnumber humans 82-to-1 in many enterprises. Agents accumulate credentials that never get revoked. Different teams deploy agents doing overlapping work, or worse, agents that make conflicting decisions. Imagine procurement and finance deploying agents that work at cross purposes. One stockpiling inventory for supply chain resilience. The other freezing capital spending to protect quarterly earnings. Without coordination, your own systems end up fighting each other.
The instinct will be to restrict deployment. That's the wrong move. What you need is governance that scales alongside autonomy, what leading practitioners call an "Agent Control Plane." At minimum, that means an inventory of every agent, clear access controls, performance evaluation mechanisms, real-time visibility, and incident procedures. Gartner predicts a 40%+ cancellation rate for agentic AI projects, driven primarily by inadequate governance. Deloitte's data shows that organizations with mature governance deploy agents more frequently and with better outcomes. The organizations deploying fastest are the ones with the strongest guardrails.
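To make the "at minimum" list concrete, here is a minimal sketch of what the inventory piece of such a control plane might look like. All names (`AgentRecord`, `AgentInventory`, the scope strings) are hypothetical illustrations, not any vendor's actual API; the point is that every agent is registered with an owner, explicit scopes, and a credential that expires rather than living forever.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class AgentRecord:
    """One entry in a hypothetical agent inventory."""
    agent_id: str
    owner_team: str                # someone is accountable for this agent
    scopes: list[str]              # explicit access, not ambient credentials
    credential_issued: datetime
    credential_ttl: timedelta      # credentials expire instead of accumulating
    last_reviewed: datetime

    def credential_expired(self, now: datetime) -> bool:
        return now > self.credential_issued + self.credential_ttl

class AgentInventory:
    """Minimal registry: an agent must be listed here before it runs."""

    def __init__(self) -> None:
        self._agents: dict[str, AgentRecord] = {}

    def register(self, record: AgentRecord) -> None:
        self._agents[record.agent_id] = record

    def stale_credentials(self, now: datetime) -> list[str]:
        """Agents whose credentials should have been rotated or revoked."""
        return [a.agent_id for a in self._agents.values()
                if a.credential_expired(now)]
```

Even this toy version addresses the failure mode above: a periodic sweep of `stale_credentials` is what prevents the never-revoked credentials that drive the 82-to-1 identity sprawl.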
Training people to use AI tools while leaving their job descriptions, incentives, and evaluation criteria unchanged wastes most of the potential. Your job architecture needs to evolve to reflect the shift from execution to orchestration. That includes how you evaluate performance, what you promote for, and what you reward.
Consider what this looks like in practice. If a marketing manager's performance review still measures "campaigns launched" rather than "campaign outcomes generated through AI-assisted workflows," you're incentivizing the old model. The same applies to engineering teams measured on lines of code rather than problems solved, or analysts evaluated on reports produced rather than decisions influenced. Every role that touches AI agents needs updated success criteria.
As agents make more decisions, the system for capturing errors, diagnosing root causes, and improving performance becomes a core competitive advantage. Think of it as DevOps applied to knowledge work. You'll also need visibility into whether agents are delivering value: success rates, cost per outcome, and what percentage of workflows resolve without human intervention. Most organizations aren't tracking any of this yet.
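The three metrics named above are simple to compute once you log workflow outcomes. A minimal sketch, assuming a hypothetical `WorkflowRun` log record (the field names are illustrative, not from any particular platform):

```python
from dataclasses import dataclass

@dataclass
class WorkflowRun:
    """One logged agent workflow: did it succeed, what did it cost,
    and did a human have to step in?"""
    succeeded: bool
    cost_usd: float
    needed_human: bool

def agent_metrics(runs: list[WorkflowRun]) -> dict[str, float]:
    """Compute the three headline metrics from a log of workflow runs."""
    total = len(runs)
    successes = sum(r.succeeded for r in runs)
    return {
        "success_rate": successes / total,
        # total spend divided by successful outcomes, not by attempts
        "cost_per_success": sum(r.cost_usd for r in runs) / max(successes, 1),
        # share of workflows that resolved without human intervention
        "autonomous_rate": sum(not r.needed_human for r in runs) / total,
    }
```

Dividing cost by successes rather than attempts is the deliberate choice here: an agent that is cheap per run but rarely succeeds should look expensive, because it is.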
McKinsey's analysis shows that just a few functions (sales, marketing, software engineering, customer operations) account for roughly three-quarters of generative AI's potential corporate impact. Those are the functions where job redesign is most urgent.
Workers need more than training on new tools. They need honest engagement with how their roles are evolving and why the orchestration skills they're building matter more than the execution skills they're leaving behind.
How you frame this shift internally will determine how much resistance you face. With over 4 million Baby Boomers exiting the US workforce annually and birth rates declining globally, AI is increasingly necessary to maintain output with a shrinking labor force. Companies that frame agent deployment as "maintaining capability in a changing talent landscape" will find less resistance than those that let the narrative default to "replacing workers."
This framing works because it's true. The demographic math doesn't support the workforce levels most companies need over the next decade. AI agents aren't arriving to take occupied seats. In many cases, they're filling seats that would otherwise stay empty. Leading with that narrative, backed by data, gives your people a reason to engage with the transition rather than resist it.
The technology will advance regardless of what any individual company does. What you can control is your preparedness: your governance foundation, your job architecture, your talent pipeline, and your leadership's willingness to treat this as the operating model shift it is.
The companies that get this right will build things that weren't possible before. The window for laying that foundation is now.