The AI You Didn't Approve — Cyber Governance in the AI Era
By Dr. Inshan Meahjohn | CEO, Digital Alliance Global (DAG) Group Inc.
Every organisation I advise asks the same question about AI: "How do we adopt it safely?" The better question, and the one most are not asking, is: "How much AI adoption has already happened without our knowledge?"
The answer, based on recent data, is a lot.
LayerX's 2025 Enterprise AI & SaaS Data Security Report found that 77% of employees paste company data into generative AI prompts, and that 82% of those interactions happen through unmanaged personal accounts. IBM's 2025 Cost of a Data Breach Report, conducted by the Ponemon Institute across 600 organisations, found that 63% of them have no AI governance policies in place.
This is what I call shadow AI. And after years of working in digital transformation and institutional governance, I believe it is the most underestimated risk facing organisations today.
Why I frame this as a control problem
Most conversations about AI risk focus on the technology itself: model bias, hallucination, misuse. Those are real concerns. But shadow AI is a different kind of problem. It is about institutional control.
When an analyst uses a personal AI account to summarise board papers, that data leaves the organisation and may be retained, exposed, or processed in ways no one approved. When a developer connects an unvetted AI coding assistant to internal repositories, source code is processed by a model the organisation did not select or review. When a team lead builds a low-code AI workflow that connects to enterprise systems, operational judgment shifts outside the control environment before anyone has sanctioned the shift.
This is not merely unauthorised tool use. It is what I describe as unmanaged delegation of institutional judgment. The distinction matters because it changes how leaders should respond.
The numbers tell a clear story
Cyberhaven's 2026 AI Adoption and Risk Report, based on actual usage data from three million workers across 222 companies, found that 39.7% of all AI interactions involve sensitive data. In organisations with high adoption, 71.4% of employees now use generative AI. Personal account usage is significant: 32.3% for ChatGPT and 24.9% for Gemini, all bypassing SSO, enterprise logging, and data retention controls.
IBM's 2025 report found that shadow AI added an average of USD 670,000 to breach costs. One in five studied organisations experienced breaches directly linked to shadow AI. Among those that experienced AI-related breaches, 97% lacked proper AI access controls.
CrowdStrike's Falcon sensors detect more than 1,800 distinct AI applications running across enterprise endpoints. The scale of unmanaged AI use is no longer anecdotal.
A personal perspective from small states
My career has been built at the intersection of technology governance and institutional capacity, first in public-sector digital transformation across the Caribbean, and now through DAG's advisory work and my research on institutional development in Small Island Developing States.
In these contexts, shadow AI carries consequences that go beyond the financial. When a government ministry uses consumer AI to draft policy advice, it raises a sovereignty question: who, and what, is informing state decisions? Where institutional capacity is already constrained and public trust is hard-won, the stakes of losing control over how institutional judgments are formed are severe.
This perspective informs how I think about shadow AI governance globally. The problem is not unique to small states, but the consequences are sharper, and the margin for error is smaller.
RSAC 2026: shadow AI becomes a product category
At RSAC 2026 in late March, shadow AI moved from theoretical concern to product category. Microsoft introduced shadow AI protection in Edge for Business, detecting sensitive data in AI prompts in real time. CrowdStrike expanded its Falcon platform with shadow AI discovery covering Copilot Studio agents, Salesforce Agentforce, and ChatGPT Enterprise.
The detection and response tools now exist. Boards can no longer claim the problem is that the technology to address shadow AI is immature. The gap is organisational, not technological.
Five things every organisation should do
Banning AI is not the answer. People route around restrictions when productivity tools are easier to access outside the enterprise than within it.
1. Build visibility. Know which AI services employees are using, how they access them, and what categories of data flow into them. (A minimal sketch of this step follows the list.)
2. Provide a sanctioned enterprise AI pathway. Employees default to consumer tools when the organisation offers nothing better with approved controls.
3. Classify data before it leaves. Brainstorming prompts, customer records, board papers, source code, and regulated datasets carry very different levels of risk and should not be treated as equivalent. (A minimal pre-submission check is sketched after the list.)
4. Govern AI tools as privileged actors. Any AI capability with access to enterprise systems, repositories, or decision workflows should sit inside the identity and access model.
5. Review AI exposure quarterly. AI adoption evolves faster than annual policy cycles. Leadership should expect periodic reporting on usage patterns, exposure metrics, new tools, and control effectiveness.
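To make the first recommendation concrete, here is a minimal sketch of the visibility step in Python. It assumes a web-proxy log exported as CSV with user, destination_host, and timestamp columns; the file name proxy_log.csv is hypothetical, and the domain list is illustrative rather than a vetted catalogue. In practice this telemetry would come from a CASB, secure web gateway, or endpoint agent rather than an ad hoc script.

```python
"""Minimal AI-visibility sketch. Assumes a proxy log exported as CSV
with 'user' and 'destination_host' columns; both the log format and
the domain list below are illustrative assumptions."""
import csv
from collections import Counter, defaultdict

# Illustrative sample of generative AI endpoints; a real deployment
# would maintain a vetted, regularly updated catalogue.
AI_DOMAINS = {
    "chat.openai.com",
    "chatgpt.com",
    "gemini.google.com",
    "claude.ai",
    "copilot.microsoft.com",
}

def summarise_ai_traffic(log_path: str) -> None:
    hits_per_service = Counter()
    users_per_service = defaultdict(set)

    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            host = row["destination_host"].lower()
            # Match the host itself or any subdomain of a known AI service.
            for domain in AI_DOMAINS:
                if host == domain or host.endswith("." + domain):
                    hits_per_service[domain] += 1
                    users_per_service[domain].add(row["user"])

    for domain, hits in hits_per_service.most_common():
        print(f"{domain}: {hits} requests from "
              f"{len(users_per_service[domain])} distinct users")

if __name__ == "__main__":
    summarise_ai_traffic("proxy_log.csv")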
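The third recommendation can also be expressed in code. The sketch below is a hypothetical pre-submission gate: it classifies a prompt into a sensitivity tier using a few illustrative regular expressions and blocks the highest-risk tiers. Real data loss prevention is far richer than a handful of patterns, and which tiers to block is a policy decision, not a technical one.

```python
"""Minimal sketch of a pre-submission classification gate for AI
prompts. The patterns, tiers, and blocking policy are illustrative
assumptions, not a production DLP design."""
import re

# Each rule maps a pattern to a sensitivity tier.
RULES = [
    (re.compile(r"\b\d{13,16}\b"), "regulated"),             # card-number-like digit runs
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "personal"),    # email addresses
    (re.compile(r"\b(board paper|confidential|source code)\b", re.I), "internal"),
]

# Which tiers to block is a governance choice; this is one example policy.
BLOCKED_TIERS = {"regulated", "personal"}

def classify(prompt: str) -> str:
    """Return the highest-risk tier matched, or 'public' if none match."""
    severity = {"regulated": 3, "personal": 2, "internal": 1}
    matched = [tier for pattern, tier in RULES if pattern.search(prompt)]
    return max(matched, key=severity.get, default="public")

def gate(prompt: str) -> bool:
    """Allow the prompt only if its tier is not blocked."""
    tier = classify(prompt)
    allowed = tier not in BLOCKED_TIERS
    print(f"tier={tier} allowed={allowed}")
    return allowed

if __name__ == "__main__":
    gate("Summarise this board paper before Thursday.")         # internal -> allowed under this policy
    gate("Customer card 4111111111111111 disputed a charge.")   # regulated -> blocked
```

In practice a gate like this would sit in a browser extension, proxy, or API middleware at the point where the prompt leaves the managed environment, not in the application itself.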
The question I leave with every board
If AI usage is already happening through unmanaged accounts, what is your organisation's actual data perimeter?
Most organisations do not have an AI problem. They have a control problem disguised as innovation.
Dr. Inshan Meahjohn is the CEO of Digital Alliance Global (DAG) Group Inc. and a researcher in digital governance, AI policy, and institutional development in Small Island Developing States. This article is adapted from his LinkedIn newsletter, "Cyber Governance in the AI Era."
Read more: inshanmeahjohn.com | LinkedIn | ORCID | Medium | Substack | WordPress
Sources:
- LayerX, Enterprise AI & SaaS Data Security Report, October 2025
- Cyberhaven Labs, 2026 AI Adoption & Risk Report (3 million workers, 222 companies)
- IBM / Ponemon Institute, 2025 Cost of a Data Breach Report (600 organisations)
- Microsoft Edge Blog, "Protect your enterprise from shadow AI," RSAC 2026, March 2026
- CrowdStrike, "CrowdStrike Establishes the Endpoint as the Epicenter for AI Security," March 2026