Copilot Readiness: Foundational vs. Basic
I got home from the M365 Community Conference in Orlando late on Saturday, after spending an extra day with friends. Even now, I’m still in that post-session headspace where your mind is half-reviewing what you said and half-considering the questions from people who lingered after. It was a good session with a full and engaged crowd. The kind of room where people are actually leaning forward.
An attendee came up while I was packing up my laptop and told me that while he really enjoyed the session, his key takeaway wasn’t the governance framework I had outlined, wasn’t the feature roadmap, and wasn’t even my walkthrough. It was a distinction I had drawn almost in passing: the difference between “basic” steps and “foundational” steps.
That conversation stuck with me on the flight home. So here’s the post I’ve been turning over since I landed:
Basic and Foundational Are Not the Same Thing
Basic things are easy. That’s almost the definition. Turning on a feature is basic. Buying a license is basic. Clicking through a setup wizard, attending a training, putting Copilot on everyone’s desktop: basic, basic, basic. None of that is wrong. Basic things need to happen. But they are not the foundation.
Foundational things are harder because they take real time and effort, and because they establish the standards and architecture that make more difficult scenarios possible later. They are the load-bearing work. Skipping them doesn’t just create problems; it sets a ceiling on what you can actually achieve with everything you build on top.
Take Copilot as a concrete example. You can turn it on and it will work. It will generate summaries, answer questions, draft content. But if your SharePoint environment lacks consistent naming conventions, structured metadata, and clean content boundaries, the results will be lackluster at best. At worst, Copilot will hallucinate, surfacing outdated documents, conflating similar-sounding files, or confidently summarizing content that no longer reflects current policy. The tool is not broken. The foundation is.
The trap most organizations fall into is completing the basic things and concluding they’ve done the work. They’ve stood up the tool, they’ve run the rollout, they’ve got adoption numbers to show the steering committee. But the standards and structure that would make the tool genuinely useful were never established. And because Copilot technically functions, it can take months before anyone realizes the outputs aren’t trustworthy.
This is the core problem I was addressing in Orlando: the gap between “Copilot is installed” and “Copilot is useful” lives almost entirely in the foundational work that most organizations skip.
The Three Foundational Scenarios Most Organizations Skip
The meat of my session was about SharePoint content readiness. Specifically, the unglamorous, non-headline-grabbing work that determines whether AI can actually find, understand, and surface your content correctly. I walked through three areas where the gap between basic and foundational shows up most clearly.
All three are things organizations routinely skip. All three are things they’ll eventually regret.
Foundational #1: The Content Audit
The basic version of a content audit is scrolling through a library, recognizing some familiar folder names, and deciding it looks fine. The foundational version is actually exporting that library to Excel, sorting by modified date, flagging files with no owner, and triaging what you find into a structured backlog. Same goal, completely different level of rigor, and a completely different outcome for anything that runs on top of it.
Copilot does not know that a policy document from 2019 is outdated. It will surface it anyway, with the same confidence it surfaces something from last month. The audit is what gives you visibility into what AI is actually working with. Without it, you are not deploying AI on your knowledge base. You are deploying it on whatever happened to accumulate over the past decade, and hoping for the best.
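To make the "export and triage" step concrete, here is a minimal sketch in Python. It assumes you have already exported the library to rows with `Name`, `Modified`, and `Owner` fields; those column names are illustrative, so map them to whatever your actual export produces. The three-year staleness cutoff is also just a placeholder for a policy decision your organization has to make.

```python
from datetime import datetime, timedelta

# Placeholder policy: anything untouched for ~3 years needs review.
STALE_AFTER = timedelta(days=3 * 365)

def triage(rows, today=None):
    """Sort exported library rows into a structured review backlog.

    Each row is a dict with 'Name', 'Modified' (ISO date string), and
    'Owner' keys -- hypothetical column names; match your real export.
    Ownerless files take priority over merely stale ones.
    """
    today = today or datetime.now()
    backlog = {"no_owner": [], "stale": [], "ok": []}
    for row in rows:
        modified = datetime.fromisoformat(row["Modified"])
        if not row.get("Owner"):
            backlog["no_owner"].append(row["Name"])
        elif today - modified > STALE_AFTER:
            backlog["stale"].append(row["Name"])
        else:
            backlog["ok"].append(row["Name"])
    return backlog
```

Running it against a couple of sample rows shows the point of the exercise: the output is not a verdict, it is a worklist with owners and dates attached, which is exactly what the eyeball-and-scroll version never produces.

```python
rows = [
    {"Name": "Travel Policy.docx", "Modified": "2019-03-01", "Owner": ""},
    {"Name": "Q3 Forecast.xlsx", "Modified": "2025-04-12", "Owner": "Dana"},
]
triage(rows, today=datetime(2025, 5, 10))
```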
Foundational #2: Naming Conventions and Essential Columns
I got a (minor) laugh in Orlando when I put up an example filename: FinalFinal_v2_revised.docx. Everyone recognized it. The basic approach to file naming is letting people name things however they want because “everyone knows where things are.” That may be true for the person who created the file. It is not true for AI.
The foundational version is a naming convention applied consistently so a file name tells a stranger what is inside without opening it, paired with structured metadata columns that give Copilot the context it needs to distinguish between content types, identify owners, and surface the right document for the right query. Without that structure, everything looks the same to the model. You are handing it an undifferentiated pile of text and hoping it figures out what matters. Sometimes it will. Often it won’t. And you won’t always be able to tell the difference.
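One way to make a convention enforceable rather than aspirational is to encode it as a pattern you can actually run against a library export. The convention below (`<Dept>-<DocType>-<Topic>-<YYYYMMDD>.<ext>`) is purely an example, not a recommendation; the point is that whatever convention you adopt, a stranger, or a model, should be able to parse it, and you should be able to find violations mechanically.

```python
import re

# Hypothetical convention: <Dept>-<DocType>-<Topic>-<YYYYMMDD>.<ext>
# e.g. HR-Policy-RemoteWork-20240115.docx
PATTERN = re.compile(
    r"^[A-Z]{2,5}"      # department code, e.g. HR, FIN
    r"-[A-Za-z]+"       # document type: Policy, Memo, Report...
    r"-[A-Za-z0-9]+"    # topic, no spaces or underscores
    r"-\d{8}"           # effective date, YYYYMMDD
    r"\.\w+$"           # file extension
)

def check_names(filenames):
    """Return the filenames that break the convention."""
    return [name for name in filenames if not PATTERN.match(name)]
```

Run `check_names` over an exported file list and `FinalFinal_v2_revised.docx` lands straight in the violations bucket, which is the feedback loop that "everyone knows where things are" never gives you.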
Foundational #3: Archiving Stale Content
The basic instinct with old content is to either delete it or ignore it. Both create problems. Deletion can remove context, break audit trails, and run into retention policy issues. Ignoring stale content means Copilot keeps surfacing it alongside current material, with no mechanism for understanding which is which.
The foundational approach is a deliberate archiving strategy that removes old content from active circulation while preserving it where it needs to be preserved. It is not complicated in concept, but it requires actual decisions: what gets archived, who owns that call, and how you keep it from drifting back into the mix. Most organizations skip this entirely. Then they wonder why Copilot is surfacing contradictory information and users have stopped trusting the results.
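As a sketch of what "remove from circulation without deleting" can look like mechanically, here is a local-filesystem version of the idea: files untouched past a cutoff get moved to an archive folder rather than deleted. In a real SharePoint environment you would do this with retention and archiving tooling rather than file moves, and the two-year cutoff is a stand-in for the ownership decision the post describes; the sketch just shows that the logic itself is small once someone has made those calls.

```python
from datetime import datetime, timedelta
from pathlib import Path
import shutil

# Placeholder policy decision: archive after 2 years untouched.
ARCHIVE_AFTER = timedelta(days=2 * 365)

def archive_stale(library: Path, archive: Path, now=None):
    """Move files whose last-modified time is past the cutoff into an
    archive folder, preserving them instead of deleting them."""
    now = now or datetime.now()
    archive.mkdir(parents=True, exist_ok=True)
    moved = []
    for path in library.iterdir():
        if not path.is_file():
            continue
        modified = datetime.fromtimestamp(path.stat().st_mtime)
        if now - modified > ARCHIVE_AFTER:
            shutil.move(str(path), archive / path.name)
            moved.append(path.name)
    return moved
```

The hard part is not the loop; it is agreeing on `ARCHIVE_AFTER`, on who owns the exception list, and on running it on a schedule so archived content does not drift back into the mix.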
He Was Right
The distinction matters because it changes how you prioritize. If you think you are doing foundational work when you are actually doing basic work, you will keep expecting outcomes that your foundation cannot support. You will blame the tool. You will wonder why adoption is flat. You will commission another training.
The honest answer, in most cases, is that the content environment was never ready. The audit was never done. The naming convention was never enforced. The stale content was never archived. The foundation was never built.
That is not a technology problem. It is a discipline problem. And it is solvable, but only if you stop confusing basic with foundational.
Pick one library. Do the audit. Fix the names. Archive what is stale. None of that will make it into a conference keynote. But it is what everything else you are trying to build is standing on.