Why Most AI Projects Fail Before They Start

[This is the first in a 3-article series that draws on a conversation with Andrew Amann, CEO of NineTwoThree AI Studio, on the CollabTalk Podcast.]

We’ve all seen it: a company announces an AI initiative with great fanfare, deploys a chatbot or internal tool, and six months later nobody’s using it. The technology wasn’t the problem. The philosophy was.

Despite billions poured into AI adoption, a striking number of projects never deliver measurable value. And the reasons are remarkably consistent: not technical failures, but strategic and cultural ones that were baked in from day one.

The Wrong Starting Question

Most AI projects begin with some version of “how do we use AI?” That question is almost guaranteed to produce a bad outcome. It centers the technology, not the problem. It invites solutions in search of a use case. And it puts enormous pressure on teams to justify the investment in ways that often lead to theater, such as dashboards nobody looks at, chatbots that answer three questions, and “AI-powered” branding slapped on tools that were already working fine.

The right starting question is: “where does our work actually get stuck?” Where is data being entered manually that could be captured automatically? Where is someone’s entire job to gather information from five places and consolidate it into one? Where are decisions being delayed because the right data isn’t accessible at the right moment? Those are the seams — and that’s where AI has genuine leverage.

The Chatbot Cautionary Tale

The early wave of enterprise chatbots is a useful case study in what happens when AI is deployed without architectural discipline. Language models are probabilistic. They predict the most likely next output based on what they’ve learned. That’s brilliant for many tasks. It’s a disaster when you need a deterministic answer: the price of a product, the status of a claim, whether a specific policy applies.

When you ask a stochastic system to reliably handle deterministic questions without guardrails and proper routing, you get hallucinations. You get off-brand responses. You get the kind of headline-making failures that made a lot of executives declare AI “not ready” — when really, the implementation wasn’t ready.
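To make that concrete, here is a minimal sketch of what “guardrails and proper routing” can mean at the architectural level: deterministic questions are answered from a system of record, and only open-ended questions ever reach the model. Everything in it is hypothetical: the product and claim data, the keyword matching (a production system would use a real intent classifier), and the ask_llm stub standing in for an actual model API.

```python
# A minimal routing sketch. All data and names here are hypothetical;
# the point is the split between deterministic and probabilistic paths.

# Systems of record -- in practice, a product database or a claims API.
PRODUCT_PRICES = {"widget-a": "$49.00", "widget-b": "$129.00"}
CLAIM_STATUS = {"CLM-1001": "approved", "CLM-1002": "under review"}

def ask_llm(question: str) -> str:
    """Stand-in for a real model call (behind its own guardrails)."""
    return f"[model response to: {question}]"

def route(question: str) -> str:
    """Send exact lookups to structured data; everything else to the model."""
    q = question.lower()

    # Deterministic path: questions with exactly one right answer
    # never touch the model.
    if "price" in q:
        for sku, price in PRODUCT_PRICES.items():
            if sku in q:
                return f"The price of {sku} is {price}."
    for claim_id, status in CLAIM_STATUS.items():
        if claim_id.lower() in q:
            return f"Claim {claim_id} is {status}."

    # Probabilistic path: open-ended questions are where the
    # model's strengths actually apply.
    return ask_llm(question)

print(route("What is the price of widget-a?"))    # answered from data
print(route("Which widget fits a small team?"))   # routed to the model
```

The specifics don’t matter; the separation does. The stochastic system never gets the chance to improvise an answer to a question that has exactly one correct one.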

“When large language models first started getting into enterprises, companies that had a lot of big pockets but also a lot of innovation labs were able to spend the $300,000 to build a chatbot that had guardrails — but a lot of them had a lot of flaws.” — Andrew Amann, CEO of NineTwoThree AI Studio

The Mandate Problem

There’s a second failure pattern that’s just as damaging: the top-down AI mandate. A CEO or board tells a department to “find an AI use case by Q3.” The team, under pressure, selects something visible rather than something valuable. They deploy. Employees, sensing their jobs may be threatened and having had no input in the process, don’t engage. Adoption is poor. The project is quietly shelved.

The organizations that avoid this pattern almost always have the same starting point: someone inside the business who is genuinely frustrated by a workflow and curious whether AI can fix it. That intrinsic motivation — solving a real, felt problem — produces more successful implementations than any top-down initiative I’ve seen. The technology follows the problem. Not the other way around.

What This Means for Your Organization

Before you commit budget to an AI project, pressure-test the starting point. Is there a specific workflow with a measurable bottleneck? Is there a team that’s pushing for a solution, or a team that’s being pulled toward one? Can you define what success looks like in 90 days: a cost reduced, a process time cut, a volume handled without adding headcount? If you can’t answer those questions clearly, the project isn’t ready. And that’s actually good news, because it means you still have time to get the foundation right.


Questions to consider:

  • Where in your organization does work get stuck, duplicated, or lost in handoff, and has anyone actually mapped that end-to-end?
  • Are your AI initiatives being pushed from real pain inside the business, or pulled from a mandate above it?
  • Could you define what a successful AI project looks like for your organization in three specific, measurable terms?

▶  Next in the series → Part 2: The Architecture of AI That Actually Works — what separates the 20% of projects that deliver from the 80% that don’t.

Christian Buckley

Christian is a Microsoft Regional Director and M365 MVP (focused on SharePoint, Teams, and Copilot), and an award-winning product marketer and technology evangelist, based in Dallas, Texas. He is a startup advisor and investor, and an independent consultant providing fractional marketing and channel development services for Microsoft partners. He hosts the #CollabTalk Podcast, #ProjectFailureFiles series, Guardians of M365 Governance (#GoM365gov) series, and the Microsoft 365 Ask-Me-Anything (#M365AMA) series.