Summary of Guardians of M365 Governance Ep. 28
In Episode 28 of the Guardians of M365 Governance monthly webcast, my co-hosts Ragnar Heil (@RagnarH), Joy Apple (@JoyOfSharePoint), and I explore the growing gap between the rapid evolution of AI capabilities and the slower, more complex reality of governing them inside organizations. What begins as simple experimentation—building agents to summarize meetings, tasks, and data—quickly evolves into more autonomous systems that operate across multiple tools and datasets. This shift introduces a fundamental tension: the more powerful and independent these agents become, the more risk they introduce around data access, security, and compliance.
A central theme is the tradeoff between control and usability. As organizations attempt to lock down AI tools with stricter governance—through sandboxing, endpoint restrictions, and limited integrations—they often reduce the effectiveness of those tools. This friction can unintentionally drive users toward “shadow AI,” where they seek out more capable solutions outside approved environments. The discussion highlights that this is not a new problem, but rather an evolution of long-standing governance challenges in Microsoft 365 and beyond: balancing openness for productivity with safeguards for security, all while ensuring that governance doesn’t become a bottleneck to innovation.
The episode ultimately emphasizes that effective AI governance is less about technology and more about process, communication, and shared responsibility. Organizations must establish clear policies around supported vs. allowed tools, define how data can be accessed and used by autonomous agents, and introduce governance frameworks—such as approval workflows, lifecycle management, and risk-based controls for agents treated almost like users. At the same time, success depends on creating a culture where business users and IT collaborate openly, ensuring that innovation is guided rather than suppressed. Moving from shadow AI to managed AI isn’t about eliminating experimentation—it’s about channeling it into a governed, scalable, and trusted model.
If you missed the live stream, you can watch the recording here:
If you enjoy this content, please be sure to follow my YouTube channel at https://www.youtube.com/@buckleyplanet, where you can find all past episodes and get notified when new livestreams are scheduled.