Strategic Confidence Without Operational Honesty Is a Liability
In Part 1 of this series, I walked through what I consider the most underreported finding in Deloitte’s 2026 State of AI in the Enterprise report: that while strategic and governance preparedness are rising, the operational dimensions — technology infrastructure, data management, and talent — are all declining year over year. Organizations are getting more confident about their AI plans while becoming less equipped to execute them.
That post laid out the data. This one takes up the leadership question the data raises.
If the gaps are visible in the numbers, and the numbers are published in a widely circulated report that most senior leaders have at least glanced at, why are the operational gaps widening rather than closing? The answer is not that leaders don’t care. Most do. The answer is something more structural and more uncomfortable: strategic confidence, left unchecked, has a way of substituting for operational honesty. And in AI, that substitution is becoming one of the most consequential leadership blind spots of this technology cycle.
What “Prepared for a Different Future” Actually Means
The Deloitte report includes an observation I keep returning to, from the head of AI strategy at a major European bank. He described how many organizations prepared for an AI future by building infrastructure and governance for traditional AI models, only to find those investments largely obsolete when large language models arrived. Roughly 80 to 90 percent of new use cases became generative AI, and the foundations organizations had built were designed for a different kind of system entirely.
His point was not that preparation is futile. It was that preparation designed around a fixed target is inherently fragile in an environment where the target keeps moving.
The most common approach to AI readiness is to identify a current capability gap, invest in closing it, and move forward. That approach made sense when enterprise technology evolved slowly enough that closing last year’s gap still left you reasonably prepared for next year. It does not make sense in an environment where agentic AI is projected to go from 23% to 74% adoption within two years, the same two-year window in which most organizations expect to close their current preparedness gaps.
The organizations that will be genuinely prepared are not necessarily the ones working hardest to close today’s specific gaps. They are the ones building organizational capacity for continuous adaptation. The specific capability matters less than the infrastructure for developing capabilities: talent development processes that work fast enough to keep pace with tool evolution, data governance practices built to accommodate new models and new use cases rather than locked to specific deployments, and technical infrastructure designed to be modular and updatable rather than purpose-built for the current moment.
MIT’s NANDA initiative, led by researcher Aditya Challapally (“The GenAI Divide: State of AI in Business 2025”), found that 95% of generative AI pilots fail to reach production, despite record investment. McKinsey similarly found that while 90% of companies now use AI, only one-third have scaled it across functions. The common thread in those failure rates is not technology. It is organizational readiness of the kind that strategy documents and vendor announcements cannot produce.
The Confidence Trap
Here is the risk I want to name directly: executive confidence in AI strategy can become a liability if it is not grounded in an honest assessment of operational readiness.
When leaders feel strategically prepared, they tend to accelerate. They greenlight more projects, set more ambitious timelines, and communicate more expansive commitments to boards and shareholders. All of that activity increases pressure on the operational infrastructure, which is, by the numbers, becoming less ready relative to the ambitions being set.
The organizations that manage this well are the ones with leaders who distinguish clearly between strategic conviction (which they should have) and operational confidence (which should be proportional to what the data actually shows). They use strategic clarity to set direction. They use honest operational assessment to set realistic timelines.
The organizations that manage this poorly are the ones where confidence in the strategy bleeds into an assumption of readiness, where the absence of visible problems is mistaken for the presence of genuine capabilities, and where the measurement systems are designed to report progress rather than surface gaps.
I have watched this pattern play out across every major technology wave of the past three decades. The organizations that emerged from those waves with the most durable advantage were rarely the fastest movers. They were the ones that moved with accurate self-knowledge — honest about what they could actually deliver, deliberate about building the foundations that made scale sustainable.
The AI version of this trap has a particular texture that I want to describe precisely, because it is slightly different from previous cycles. In earlier waves (cloud adoption, mobile, social), the confidence trap was mostly a resource allocation problem. Organizations over-invested in the strategic layer and under-invested in the operational layer, but the consequences were generally bounded. Pilots failed. Rollouts stalled. Budgets were wasted.
In AI, the confidence trap carries a higher risk because the technology is not waiting for organizational readiness to catch up. Agents are deploying. Automations are running. Decisions are being made at scale by systems that were stood up during the optimistic phase, before the operational foundations were solid. The gap between strategic confidence and operational readiness is no longer just a planning problem. It is an active risk in production environments — right now, in organizations that believe they are further along than they are.
The Leadership Behavior That Closes the Gap
This is not a technology problem that can be solved by a better platform or a smarter vendor. It is a leadership behavior problem. And it has a specific shape.
The leaders who close this gap consistently share a few observable characteristics. They distinguish between the enthusiasm they communicate externally and the assessment they conduct internally. They create reporting structures that surface bad news fast, rather than filtering it before it reaches the top. They measure outcomes — behavior change, production deployment rates, actual business impact — rather than inputs like training completion, license deployment, and pilot count.
They also resist what I’d call the announcement substitution effect: the tendency to treat a well-communicated AI strategy as evidence of AI readiness. Announcing a strategy is not the same as executing one. Publishing a responsible AI framework is not the same as governing AI in practice. Deploying Copilot to 10,000 employees is not the same as 10,000 employees working differently because of it.
The leaders I’ve observed get this right are the ones who stay curious about the distance between their stated position and their actual position. They ask the uncomfortable questions: What percentage of our pilots have actually reached production? How many employees have meaningfully changed how they work? Can we demonstrate that our data governance practices are actually governing our AI deployments? If they can’t answer those questions confidently, they treat the inability to answer as the signal: not a communications gap to be managed, but an operational gap to be closed.
A Practical Diagnostic
The Deloitte preparedness framework — five dimensions, each assessed on a spectrum from not prepared to highly prepared — is more useful as a diagnostic than as a scorecard.
The scorecard question is “how prepared are we?” It is designed to produce a number that can be compared to a benchmark and reported upward.
The diagnostic question is different: “where is the gap between what we are saying about our AI strategy and what our organization can actually execute?” That question is designed to surface the distance between intention and capability — and it is considerably more useful, and considerably more uncomfortable to ask.
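To make the distinction concrete, here is a minimal sketch of what that diagnostic could look like in practice, assuming you score each of the five Deloitte dimensions on a 1-to-5 scale. The grouping of dimensions, the example scores, and the warning threshold are my own illustrative assumptions, not Deloitte’s methodology:

```python
# Hypothetical diagnostic sketch: compare strategic confidence to operational
# readiness across Deloitte's five preparedness dimensions. The scores and the
# warning threshold below are illustrative assumptions, not Deloitte's data.

scores = {
    "strategy": 4.5,  # 1 = not prepared, 5 = highly prepared
    "governance": 4.0,
    "technology_infrastructure": 2.5,
    "data_management": 2.0,
    "talent": 2.5,
}

# Average the "what we say" dimensions and the "what we can execute" dimensions.
strategic = (scores["strategy"] + scores["governance"]) / 2
operational = (
    scores["technology_infrastructure"]
    + scores["data_management"]
    + scores["talent"]
) / 3

gap = strategic - operational
print(f"Strategic confidence:  {strategic:.1f}")
print(f"Operational readiness: {operational:.1f}")
print(f"Confidence gap:        {gap:.1f}")

# Assumed threshold: a gap above 1.0 suggests strategy is outrunning execution.
if gap > 1.0:
    print("Warning: strategic confidence is outrunning operational readiness.")
```

The point is not the number itself but the comparison: the scorecard reports an average that can hide the imbalance, while the diagnostic reports the spread between what you are claiming and what you can execute.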
If your organization’s strategic confidence is significantly higher than your operational readiness across infrastructure, data, and talent, that gap is not a communications problem to be managed. It is an execution risk to be addressed. The most productive thing leaders can do with that information is stop treating strategic preparedness as a proxy for overall readiness and start building the operational foundations that make the strategy executable.
That means investing in data quality and governance as AI infrastructure rather than IT hygiene. It means building talent development programs that produce behavior change rather than completion certificates. It means treating technical infrastructure modernization as a strategic priority rather than a cost center. It means measuring actual operational readiness with the same rigor applied to measuring strategic ambition.
Where This Leads
The gap between planning confidence and execution readiness is not new. It has shown up in every major technology wave I have observed over 35 years. But in AI, it is currently wider than it has been in any previous cycle, and it is widening at precisely the moment when the technology is moving fastest.
The organizations that close this gap deliberately and honestly will have a meaningful advantage. Not because they moved faster, but because they moved with accurate knowledge of where they actually were. The ones that allow strategic confidence to substitute for operational readiness will find out the hard way that the two are not the same thing.
This dynamic — the leadership behavior that creates and sustains the gap between strategic confidence and operational reality — is one of the core themes I’m exploring in my forthcoming book, The AI Leadership Gap. It turns out that the most consequential decisions in AI adoption are not technical decisions. They are leadership decisions. And the organizations that get those right will have an advantage that no tool deployment can replicate.
More on that soon. In the meantime: act serious, treat it seriously, get serious results — starting with an honest look at the distance between where you say you are and where you actually are.