How are you handling shadow AI and random SaaS tools?
Posted by shangheigh@reddit | sysadmin | 28 comments
At this stage I am just curious to know how you all manage all the unsanctioned AI tools and SaaS apps employees are using behind the scenes (ChatGPT, Midjourney, random AI copilots in the browser, niche SaaS plugins, etc.). I am talking specifically about shadow AI / shadow SaaS here (please do not mention traditional EDR, AV, FW or email security, I know they all work hand in hand, but I am interested in this specific area of risk and governance).
As a systems admin managing a mixed team (IT, security, a bit of platform), I keep seeing new AI tools pop up in browser histories, OAuth grants, and expense reports. People are pasting internal docs into web UIs and connecting personal Google Drives to AI note-takers.
Any ideas? Would love to hear how you guys do this.
Loose-Profile-3938@reddit
You probably cannot eliminate shadow AI, so I’d go for visibility and guardrails instead of pure blocking. Tighten OAuth and app approvals where you can, but also give people an approved option fast, otherwise they just move the work onto personal accounts and you lose any control. A simple intake flow for new tools, plus a monthly sweep of SSO grants and expense reports, catches most of the random sprawl. When we were cleaning this up, Setyl was useful as a central place to log the apps we discovered and assign an owner, so the follow-ups were properly recorded.
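For anyone curious, the Graph side of that monthly sweep is scriptable. A minimal sketch, assuming an Entra ID tenant and an access token with Directory.Read.All already in hand (the risky-scope list is just an example to tune for your tenant):

```python
# Minimal sketch of a monthly OAuth grant sweep via Microsoft Graph.
# Assumes you already acquired a token with Directory.Read.All;
# RISKY_SCOPES is an illustrative list, not a recommendation.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "..."  # acquire via MSAL client-credentials flow in real use
RISKY_SCOPES = {"Mail.Read", "Files.ReadWrite.All", "offline_access"}

def oauth_grants(token):
    """Yield delegated OAuth2 permission grants, following paging."""
    url = f"{GRAPH}/oauth2PermissionGrants"
    headers = {"Authorization": f"Bearer {token}"}
    while url:
        resp = requests.get(url, headers=headers, timeout=30)
        resp.raise_for_status()
        data = resp.json()
        yield from data.get("value", [])
        url = data.get("@odata.nextLink")

for grant in oauth_grants(TOKEN):
    scopes = set((grant.get("scope") or "").split())
    if scopes & RISKY_SCOPES:
        print(grant["clientId"], grant.get("consentType"), sorted(scopes))
```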
CortexVortex1@reddit
Been wrestling with this mess. Started with visibility first: you can't govern what you can't see. We deployed LayerX to get real-time tracking of which GenAI tools people use, then built policies around the data. Blocking everything just drives people to mobile hotspots. The key insight I've seen is that most shadow AI usage isn't malicious, it's productivity driven. Give them sanctioned alternatives that don't suck, then DLP the risky stuff.
Frequent-Contract925@reddit
We looked at this problem pretty deeply and found that most orgs cobble together 3-4 partial solutions — DNS log analysis catches some AI domains, email receipt mining surfaces SaaS signups, and cloud billing APIs show what's actually costing money. But none of them alone gives you a complete picture. The real gap is correlation: knowing that the same person who signed up for an AI tool via email is also hitting that domain in DNS and has it showing up in cloud spend.
We're building a tool that fuses these signals together to give you a single AI inventory. Happy to share what we've learned if anyone's interested.
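The correlation itself is easy to prototype before buying anything. A toy sketch, assuming you've already exported the three signal sources to CSV (file names and column names here are hypothetical; adapt to your own exports):

```python
# Toy correlation of shadow-AI signals from pre-exported CSVs.
import csv
from collections import defaultdict

signals = defaultdict(set)  # (user, vendor domain) -> set of signal sources

with open("dns_hits.csv") as f:          # columns: user,domain
    for row in csv.DictReader(f):
        signals[(row["user"], row["domain"])].add("dns")

with open("expense_lines.csv") as f:     # columns: user,vendor_domain
    for row in csv.DictReader(f):
        signals[(row["user"], row["vendor_domain"])].add("expense")

with open("email_signups.csv") as f:     # columns: user,vendor_domain
    for row in csv.DictReader(f):
        signals[(row["user"], row["vendor_domain"])].add("email")

# Two or more independent signals makes a much stronger inventory entry
for (user, vendor), sources in sorted(signals.items()):
    if len(sources) >= 2:
        print(f"{user} -> {vendor}: {', '.join(sorted(sources))}")
```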
Dramatic-Month4269@reddit
I would love to talk to more people having this problem. I feel employees are just going to use the best tools to alleviate as much work as possible. If we make those tools less risky to use (e.g., by removing PII, obfuscating info, etc.), this could be a viable path, no? Speaking about traditional ChatGPT use, for example -- would be interested to hear your thoughts!
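Even a naive scrubber shows the shape of that idea. A rough sketch (the regexes are illustrative only and US-centric; a real deployment would use proper DLP/NER with context):

```python
# Naive PII scrubber to illustrate "sanitize before paste".
# Patterns are examples only; real DLP uses NER plus context.
import re

PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b(?:\+?1[ .-]?)?\(?\d{3}\)?[ .-]?\d{3}[ .-]?\d{4}\b"),
}

def scrub(text: str) -> str:
    """Replace matches with a labeled placeholder before text leaves the org."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(scrub("Contact jane.doe@corp.com or 555-867-5309 re: SSN 123-45-6789"))
# -> Contact [EMAIL] or [PHONE] re: SSN [SSN]
```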
CryptographerOld9631@reddit
This is actually one of the major problems at the company I'm working in too.
A couple of months ago someone exposed deployed secrets and API keys and it caused a $35,000 loss.
We fixed it (not completely) by enforcing strict policies, but still no real solution.
Previous_Piano9488@reddit
For SaaS AI apps: you need a browser extension that can detect AI app activity.
For MCPs and AI agents: you need a desktop agent that can detect MCP servers in IDEs, or agents/MCPs running on the desktop itself.
Traditional EDRs don't work here.
We use Akto for both browser-based and desktop-level detection - https://www.akto.io/
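If you want to prototype the desktop-side discovery yourself, a rough sketch that just inventories known MCP client config files (the paths and config keys below are examples and will drift between client versions; this is discovery, not enforcement):

```python
# Rough sketch: inventory MCP servers configured on a workstation by
# scanning known client config locations. Paths are illustrative.
import json
from pathlib import Path

HOME = Path.home()
CANDIDATES = [
    HOME / ".cursor" / "mcp.json",                       # Cursor
    HOME / "Library" / "Application Support" / "Claude"
         / "claude_desktop_config.json",                 # Claude Desktop (macOS)
    HOME / "AppData" / "Roaming" / "Claude"
         / "claude_desktop_config.json",                 # Claude Desktop (Windows)
]

for path in CANDIDATES:
    if not path.is_file():
        continue
    try:
        config = json.loads(path.read_text())
    except (json.JSONDecodeError, OSError):
        continue
    for name, spec in config.get("mcpServers", {}).items():
        target = spec.get("command") or spec.get("url", "")
        args = " ".join(spec.get("args", []))
        print(f"{path}: {name} -> {target} {args}".strip())
```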
localkinegrind@reddit
Firewall blocking creates user revolt and shadow workarounds. I don't recommend that route. There are smarter approaches, like using the LayerX extension to see what's being used first, then setting smart policies.
error40mgnr@reddit
We’ve been using Fendr (fendr.tech), which is great. It’s purely an AI controls and visibility tool and works via a lightweight browser extension.
Pricing is very reasonable too vs some of the clunkier tools
Zatetics@reddit
We don't have shadow AI because we pay for self-managed AI tooling via Azure AI Foundry.
The way to handle it is to offer the tooling in an environment you control.
Drea_Analyzer@reddit
I noticed your concerns about managing unsanctioned AI tools and SaaS applications. It can be challenging to keep everything organized and compliant. AI Navigator offers insights to help streamline your software management, and I’d love to invite you to try our free scanner to see how we can assist.
Only-Sandwich1854@reddit
We use Harmonic Security - a browser plug-in that opens up full visibility into AI usage as well as into sensitive data and file uploads. They can also block, etc. I’d recommend getting a demo.
EVIL5@reddit
I’m just going to say that “good” and “cheap” are rarely found together. Playing music has never been cheap in the history of mankind and you’ll get what you pay for. Hope this reminder helps.
TrueBoxOfPain@reddit
Start by searching r/sysadmin for the word "AI".
winter_roth@reddit
You're right that browser-native detection is key. Traditional DLP won't catch the semantic stuff: someone pasting a customer acquisition strategy into Claude doesn't trigger regex rules, but it's still a leak. Lately we’ve been looking at browser-native solutions like LayerX that catch GenAI uploads before they happen. Most deploy as a browser extension, so no network changes are needed.
iamMRmiagi@reddit
It's a real challenge when the people you're battling against are the execs and IT has to manage upwards.
I'm using:
- Chrome Admin to block extensions and monitor browser activity
- Admin consent approval to limit unapproved sign-ins
- Cloud app governance/app discovery (?) to monitor SaaS adoption
- Sign-in analysis to understand SaaS usage
- Firewall logs to track traffic to unsanctioned apps (rough sketch below)
- DLP to monitor and alert on data exfiltration/sensitive file uploads (we need more work to actually block it)
and I've still probably missed a few
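The firewall-log piece can be as simple as grepping exports against a watchlist. A rough sketch, with a made-up log format and an example domain list (adapt both to whatever your firewall actually exports):

```python
# Rough sketch of mining firewall/DNS exports for AI-tool traffic.
# Log columns and the domain watchlist are assumptions.
import csv
from collections import Counter

AI_DOMAINS = {
    "chat.openai.com", "chatgpt.com", "claude.ai",
    "gemini.google.com", "midjourney.com", "perplexity.ai",
}

hits = Counter()
with open("fw_export.csv") as f:   # columns: user,dest_host (hypothetical)
    for row in csv.DictReader(f):
        host = row["dest_host"].lower()
        if any(host == d or host.endswith("." + d) for d in AI_DOMAINS):
            hits[(row["user"], host)] += 1

for (user, host), count in hits.most_common(25):
    print(f"{count:6d}  {user:<20} {host}")
```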
shangheigh@reddit (OP)
Thanks for the breakdown. Will look into those.
osh-rang5D@reddit
Create policies, then move on with my life. It's not worth the squeeze if the owners don't care.
shangheigh@reddit (OP)
Honestly sometimes I feel that way
shangheigh@reddit (OP)
Yeah, it’s a mess. Figured the only way out is to have some browser-native detection tooling. Lately I’ve been into browser extensions like LayerX for real-time visibility; it catches everything without the endless game of blocking domains. Another thing I've noted: pure blocking never works and will bite back hard.
VoltageOnTheLow@reddit
This question gets asked all the time. If you are a Microsoft shop, block all except Copilot.
Ensure that the human policies are up to date as well.
Yuptodat@reddit
I feel like it's every day at this point.
troubledtravel@reddit
First you need a corporate policy on AI usage that the CEO endorses. Then you need to make everyone aware of this policy. Then you need to sanction allowable apps and block the others. At least it's a starting point.
Aegisnir@reddit
Network filtering on the firewall, agent-based DNS filtering on workstations, awareness training, strong written policies that employees sign acknowledging it's a fireable offense with zero tolerance, and rewards for those who report actual offenses. Everyone will keep using these tools if you restrict them without explaining why. Sure, some won’t give a fuck anyway, but some will.
Round-Classic-7746@reddit
A lot of folks end up in the same boat where people just grab tools to get work done and IT only finds out after someone pasted prod DB creds into some random AI prompt tool.
For us it’s been a mix of things:
Blocking everything isn’t really a thing anymore because there are so many tools floating around, especially the AI ones that run in browsers.
pvatokahu@reddit
The shadow AI thing is getting crazy. At my last company we tried locking down browser extensions and monitoring OAuth grants but people just started using their phones or personal laptops. Found one engineer who'd been copying entire product specs into Claude for "better formatting" - had no idea what data retention policies were on the other end.
We ended up building an internal catalog of approved AI tools with pre-negotiated enterprise agreements. ChatGPT Enterprise, GitHub Copilot for the devs, couple others. The trick was making them easier to access than the consumer versions - single sign-on, no credit card needed, that kind of thing. Still had people sneaking around, but at least we could point to alternatives and say "use this instead." The expense report angle is smart though... never thought to check there for subscriptions.
The scariest part is the browser-based stuff. Those AI writing assistants that just sit there watching everything you type? We found one that was literally sending keystrokes to some random server in Eastern Europe. No way to block them all without breaking half the legitimate web apps people need. At Okahu we're actually seeing a ton of interest in monitoring AI API usage patterns - like catching when someone's sending way more data to an LLM than they should be. But for all the random SaaS tools... I think you're fighting a losing battle unless you can offer something better internally.
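That volume-anomaly idea is easy to sketch: aggregate per-user egress to LLM API hosts and flag outliers. A toy version, with made-up numbers and an arbitrary threshold (real baselining should be per-user and time-aware):

```python
# Toy outlier check on per-user daily egress to LLM API endpoints.
# Records and the 10x-median threshold are illustrative assumptions.
import statistics
from collections import defaultdict

# (user, bytes_sent) for one day, pre-filtered to LLM API hosts
records = [("alice", 12_000), ("bob", 9_500), ("carol", 4_800_000),
           ("dave", 15_200), ("erin", 11_700)]

totals = defaultdict(int)
for user, sent in records:
    totals[user] += sent

# Median is robust to the very outliers we're hunting; 10x is arbitrary.
baseline = statistics.median(totals.values())
for user, total in sorted(totals.items()):
    if total > 10 * baseline:
        print(f"flag {user}: {total:,} bytes vs median {baseline:,.0f}")
```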
JonesTownJamboree@reddit
>people just started using their phones or personal laptops.
This right here is the worst bit.
At this point, our (IT's) policy for this is to report it to management citing the policy against it if we find out. Best we can do.
JonesTownJamboree@reddit
Two pronged:
First, find out what users want and give them something acceptable. We're a full MS shop, so they have access to MS products and we have hypothetical control over all that stuff. Is CoPilot or M365 "the best"? Dunno and don't care. That's who we have agreements with and can hypothetically configure/control. On the off chance someone shows a legit need for something outside that, e.g. because the MS equivalent doesn't have the functionality they need, we're more than happy to work to get that thing legitimately on-boarded.
Second, good old fashioned blocking and control. Our firewall has never been happier to show the "you can't access this per administration" page than in the age of AI. I'm more than happy to tell anyone that they can't use free ChatGPT or whatever since all they ever do is try to shove work data into it. Beyond that, we've tightened down things within the environment like all Teams addons require approval, same with trying to OAuth random bullshit online.
Users get mad, but we can't have them shoving PHI or confidential business data into LLM bots we have no agreement or control over. And trying to explain it to them just gets either glassy eyed looks or "I don't care! I need Claude because my son who's good at Fortnite said it's better than CoPilot!"
NoyzMaker@reddit
That's something leadership needs to decide if they want it locked down or they accept the risks. Then just implement that policy based on their direction.