How are companies actually handling employees using ChatGPT/Claude with internal data?
Posted by lazyintruder@reddit | sysadmin | View on Reddit | 14 comments
At this point it seems pretty normal that most teams are using tools like ChatGPT, Claude, Gemini, etc. in their day to day work.
Things like summarizing docs, debugging code, analyzing data, writing content.
Which also means people are pasting in:
- internal docs
- customer data
- codebases (Claude Code)
- financial info
I’m curious how companies are actually handling this in practice.
Do you:
- have any internal policies around AI usage?
- rely on employee judgment?
- restrict certain types of data?
- not really care as long as it helps productivity?
Also curious how teams are tackling this from a tooling perspective.
Are people standardizing around something like Microsoft Copilot or other enterprise tools?
Or is it still a mix of individual tools depending on preference?
I have also heard some companies say enterprise tools do not use your data for training, but I am not sure how much that actually changes behavior internally.
Also wondering if this varies by industry like fintech vs SaaS vs agencies, and by company size.
Trying to understand whether this is something companies actively manage, or if it is still mostly informal.
Top-Perspective-4069@reddit
This question comes up 3-4 times per week; there's a lot of collected information in the sub already.
xendr0me@reddit
And if you look at OP's history you can confirm intentions. It seems like 75% of the questions on here lately are marketing surveys, or people hiding their product marketing and sneaking it in later.
Kumorigoe@reddit
And now he won't be posting in here again.
Sam_DevOps@reddit
The practical approach I've seen work best for SMBs (and that scales to mid-size):
Self-hosting is the cleanest solution if data sovereignty is non-negotiable. You can run a local LLM (Qwen 3.5, Llama 4, or similar) on standard GPU hardware with something like vLLM for multi-user serving. Same chat experience, zero data leaves the building. Cost is hardware upfront plus electricity, but no per-token API fees and no compliance headaches.
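For illustration, here is a minimal sketch of how an internal client might talk to such a server, assuming vLLM's OpenAI-compatible API is listening on localhost:8000 (the URL and model name are placeholders, not a real deployment):

```python
import json
import urllib.request

# Assumed local vLLM endpoint -- adjust host/port for your deployment.
VLLM_URL = "http://localhost:8000/v1/chat/completions"

def build_chat_request(prompt: str, model: str = "local-model") -> urllib.request.Request:
    """Build an OpenAI-compatible chat request aimed at a self-hosted vLLM server."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 512,
    }
    return urllib.request.Request(
        VLLM_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# With a server actually running, you would send it like this:
# req = build_chat_request("Summarize this doc: ...")
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["choices"][0]["message"]["content"])
```

Because vLLM speaks the OpenAI wire format, any existing OpenAI-client tooling can usually be pointed at it just by swapping the base URL.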
For most internal use cases (summarizing docs, drafting emails, code assistance, data analysis), a 30B-parameter model running locally is more than enough. You don't need GPT-4 level for 90% of what employees actually do with these tools.
When you genuinely need the best reasoning (complex code review, architecture decisions, legal analysis), use the API with a Data Processing Agreement - not the consumer chat. Both OpenAI and Anthropic offer API terms where your data isn't used for training. This covers the 10% of tasks where local models fall short.
Block consumer AI URLs (chat.openai.com, claude.ai, gemini.google.com) at the proxy/firewall level. But simultaneously provide an approved internal alternative - either self-hosted or API-backed with your own frontend (LibreChat, Open WebUI, etc.).
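The block itself normally lives in firewall or proxy config rather than code, but the matching logic amounts to something like the following sketch (hostnames taken from the list above; this illustrates the rule, it is not a drop-in proxy config):

```python
# Consumer AI domains to deny at the egress proxy (illustrative list).
BLOCKED = {"chat.openai.com", "claude.ai", "gemini.google.com"}

def is_blocked(hostname: str) -> bool:
    """True if hostname is a blocked domain or any subdomain of one."""
    h = hostname.lower().rstrip(".")
    return any(h == b or h.endswith("." + b) for b in BLOCKED)
```

The subdomain check matters: blocking only the exact hostname leaves `www.` and regional variants open. The approved internal frontend's hostname simply never appears in the deny set.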
The "ban everything" approach never works. People will just use it on their phones. Better to give them a safe, monitored channel where you control what data goes where.
If you want to get fancy, you can put a lightweight classifier in front of the LLM that flags when someone tries to paste PII, credentials, or data matching certain patterns. Works like a DLP but for AI prompts. Not strictly necessary if you're fully self-hosted, but useful if you're routing some traffic to external APIs.
The companies doing this well aren't fighting the AI wave - they're channeling it.
ChiefBroady@reddit
We basically block access to all non-approved AI tools and offer Copilot Business to users with a business case.
OneSeaworthiness7768@reddit
This question gets posted here repeatedly always by someone looking to make a solution to sell for it. I can’t believe people still answer these genuinely.
Ztoffels@reddit
lol they don't, that's why it's not fucking working for them.
For example, my company wants me to use AI, but they make you jump through several fire-lit hoops before you can get that to happen.
And if even one person in the whole approval chain decides "no", that's it, you're fucked.
So it's like "Use AI, but figure out how without our data"…
Worth-Paper-360@reddit
This is not an IT issue.
For a startup it's a policy issue: ask the CTO/founder etc. to make these decisions.
Large companies have policies.
BastettCheetah@reddit
You have a commercial relationship with one provider, with NDAs and contracts, and you ban all others.
If you don't have an AI tool and people are going all shadow IT on you, that's on your management.
CaesarOfSalads@reddit
This is the way. It doesn't stop people from going around and using their phone or personal devices, but if you provide a solution and make the alternatives inconvenient, a lot of the risk goes away.
lordsiriusDE@reddit
This
che-che-chester@reddit
Step one is creating a formal policy telling users what is and isn’t allowed.
Then you need to block any AI you don’t allow.
Another option is a DLP tool for AI to block users from uploading company data.
Kuipyr@reddit
Offer them something, because unless you work in a SCIF they're going to find a way. Copilot with Purview is pretty good.
itskdog@reddit
Block all of them with the web filter.