The first time we actually measured it, the number stopped the meeting. We thought maybe 15 percent of employees were using unsanctioned AI tools at work. The proxy logs said 78 percent. Of those, about half were pasting content that touched data we classified as restricted.
That was not a policy failure. That was a policy that had been written as if the internet did not exist.
How to actually count it
Surveys undercount by roughly 3x in my experience. People do not want to say they are using ChatGPT to write performance reviews. You need data the employees do not self-report.
The sources that work:
- Web proxy or SASE logs for requests to known AI inference endpoints. Start with the obvious ones (OpenAI, Anthropic, Google AI Studio, Perplexity, Mistral's hosted offering) and expand from there.
- Browser extension telemetry if you have a managed extension story. Extensions often talk to inference endpoints directly.
- CASB data for sanctioned SaaS apps that have AI features now enabled by default. Your sanctioned CRM probably grew a generative feature this quarter.
- Expense report patterns. Individual ChatGPT Plus and Claude Pro subscriptions show up if you look for them.
Combine those sources. Build the number. Present it to leadership without editorializing. The number will do the persuading.
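The proxy-log piece of that measurement can be sketched in a few lines. This is a minimal illustration, not a real pipeline: it assumes a CSV proxy export with `user` and `host` columns (adjust the field names to whatever your proxy or SASE product actually emits), and the endpoint list is a starter set you would expand over time.

```python
import csv
from collections import defaultdict

# Starter set of known AI inference endpoints -- expand as you discover more.
AI_DOMAINS = {
    "api.openai.com", "chatgpt.com",
    "api.anthropic.com", "claude.ai",
    "aistudio.google.com", "generativelanguage.googleapis.com",
    "www.perplexity.ai", "api.mistral.ai",
}

def count_ai_users(proxy_log_path):
    """Count distinct users with requests to known AI endpoints.

    Assumes a CSV export with 'user' and 'host' columns; swap in
    your proxy's actual schema.
    """
    users_by_domain = defaultdict(set)
    all_users = set()
    with open(proxy_log_path, newline="") as f:
        for row in csv.DictReader(f):
            all_users.add(row["user"])
            host = row["host"].lower()
            if host in AI_DOMAINS:
                users_by_domain[host].add(row["user"])
    ai_users = set().union(*users_by_domain.values()) if users_by_domain else set()
    pct = 100 * len(ai_users) / len(all_users) if all_users else 0.0
    return len(ai_users), pct, {d: len(u) for d, u in users_by_domain.items()}
```

Deduplicate by user, not by request: one power user generating ten thousand requests should not inflate the headline percentage you take to leadership.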
Why banning does not work
Every org that has tried a blanket ban has watched the same movie. Usage goes down on the corporate network for two weeks. Then it moves to personal phones connected to personal hotspots, personal laptops at home, and browser extensions that proxy through services you have never heard of. The data exposure does not decrease. Your visibility into it does.
The employees using these tools are not malicious. They are productive people reaching for tools that make them more productive. A policy that tells them to be less productive loses. It loses against market forces, against peer pressure, against the CEO who uses ChatGPT herself.
The sanctioned-alternative strategy
What works is providing an enterprise alternative that is at least as good as the consumer tool for the majority of use cases. We rolled out an enterprise Claude deployment with a clear data handling posture (no training, short retention, contractual protections) and a thin UI that works with SSO. Within three months, 70 percent of the Shadow AI traffic we had been seeing routed to the sanctioned tool. Not because of policy. Because of convenience: SSO, no personal credit card, full context length, no rate limits.
The remaining 30 percent is a mix of specialized tools (image generation, coding assistants) and a long tail. Those we triaged: we sanctioned the ones with an acceptable data posture and offered guidance on the rest.
Training over prohibition
Most people are not trying to leak data. They do not know that pasting a customer list into ChatGPT is different from pasting it into a Google Doc. Twenty minutes of training on what is and is not safe to paste changes behavior more than any policy document. Make the training concrete: "here are examples of data that is fine to put into the sanctioned tool, here are examples that should never leave this perimeter." People respond to examples, not categories.
Conditional access for AI tools
For the stuff you have sanctioned, put real controls around it. Conditional access policies that require SSO, device compliance, and trusted network for high-risk tools. DLP integration at the egress that inspects content going to AI endpoints for sensitive patterns. Logging that is comparable to what you have for other SaaS.
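The egress inspection step can be sketched as a pattern scan over outbound content. This is a toy illustration of the decision logic, not a DLP engine: the patterns, the `destination_is_sanctioned` flag, and the function names are all placeholders for whatever your DLP product and policy actually provide.

```python
import re

# Illustrative detectors only -- a real DLP engine has far richer ones.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "internal_marker": re.compile(r"\bCONFIDENTIAL\b", re.IGNORECASE),
}

def scan_outbound(text):
    """Return the names of sensitive patterns found in outbound text."""
    return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(text)]

def allow_egress(text, destination_is_sanctioned):
    """Block sensitive content to unsanctioned AI endpoints; flag it otherwise."""
    hits = scan_outbound(text)
    if hits and not destination_is_sanctioned:
        return False, hits   # block, and log the attempt
    return True, hits        # allow, but record hits for review
```

Note the asymmetry: sensitive content headed to the sanctioned tool is allowed but flagged, while the same content headed to a consumer endpoint is blocked. That matches the posture of steering rather than prohibiting.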
You cannot apply the same controls to consumer ChatGPT on a personal device. You can make it less convenient to use than the sanctioned alternative. That is usually enough.
The posture to aim for
Accept that AI tools are now part of how knowledge work gets done. Provide a sanctioned path that is genuinely good. Measure what slips past it. Train people to know the difference. Ban only the things that are genuinely unacceptable (consumer inference with restricted data) and enforce those bans where you can. Revisit quarterly, because the tool landscape changes faster than your policies will.
The worst outcome is not Shadow AI. The worst outcome is a policy that pretends it does not exist and leaves you blind.