How are you handling employees using personal ChatGPT accounts at work? We had an incident last week.
Posted by fxs38@reddit | sysadmin | View on Reddit | 382 comments
One of our devs was debugging a nasty production issue at 11pm. Stress, time pressure, wanted to move fast. Pasted a chunk of our internal API code into ChatGPT Free — his personal account — to get help. Got the fix. Shipped it. Told his manager the next day like it was nothing.
We only found out because he mentioned it in standup.
We have no idea how many times this has happened. We have no logs. No policy that was ever actually enforced. Just a vague "don't put company data in AI tools" in the employee handbook that nobody read.
So now I'm sitting here wondering: what are other people actually doing about this?
Not looking for "block ChatGPT at the firewall" answers — we've been down that road, it just makes people use their phones or hotspots. I mean actually tracking and managing it.
Are people running anything to get visibility into which AI tools employees are using? Or is everyone just hoping for the best?
legrenabeach@reddit
I don't know the technical aspect, but I know how others handle it. A friend of mine is a developer for a major UK Bank. He works hybrid, two days from home and three days in the office. He can only use the company laptop to access anything work related, as it uses a VPN to access the bank's network and allow access to all internal tools etc, no matter what broadband or mobile data connection he uses. They use two different LLMs, one internally developed and a mainstream one with a company account. The rest is blocked, I believe both at company firewall and at local laptop level.
Regardless of that, their rules are very strict: if any company data whatsoever ends up in a system not approved by the bank, it's a major violation and grounds for dismissal on account of serious misconduct. So it's not only an IT matter but also an HR, management, and "culture" one too.
fxs38@reddit (OP)
Banking is indeed a highly regulated environment. I wish more companies could afford an internal LLM deployment; it's not easy for a small business.
GroteGlon@reddit
You honestly don't need that much to run some LLMs.
Superb_Raccoon@reddit
A cluster of Dells with AI cards in them is a lot of horsepower.
bingblangblong@reddit
Yeah and nowhere near as good as Claude etc
Superb_Raccoon@reddit
It's the exact same model, genius. Nothing magical.
ElonTaco@reddit
It's not...
Superb_Raccoon@reddit
It is.
https://huggingface.co/Jackrong/Qwen3.5-35B-A3B-Claude-4.6-Opus-Reasoning-Distilled
ElonTaco@reddit
You obviously do not understand what distilling a model is.
youplaymenot@reddit
It's not the same exact model, genius. You think Anthropic is just letting anyone download their exact cloud model? Let alone that you'd have the massive amount of RAM needed to run it.
Superb_Raccoon@reddit
https://huggingface.co/Jackrong/Qwen3.5-35B-A3B-Claude-4.6-Opus-Reasoning-Distilled
Download it for yourself.
Hammock-of-Cake@reddit
You don’t appear to understand what Claude is. This distilled model is not Claude.
bingblangblong@reddit
He's wrong, I looked into it and there's zero mention of IBM running Claude's model anywhere. I don't think he understands what an inference engine is.
Superb_Raccoon@reddit
Yes. We have it in the lab at IBM.
Did you not read the deal we have with them?
That model, along with hundreds of others, is available on premise via WatsonX.AI (stupid marketing) and Orchestrate.
Your company? Maybe not. But Big Tech is Big.
Namaha@reddit
They literally leaked their entire source code yesterday lol
jasmeralia@reddit
No, they didn't. They leaked the code for the Claude client, not the actual models. That's vastly different, although there were certainly some tidbits of info inside it that they didn't want public yet.
Hammock-of-Cake@reddit
Wait, how exactly are they the same?
Superb_Raccoon@reddit
Same inference engine.
https://techcrunch.com/2025/10/07/anthropic-and-ibm-announce-strategic-partnership/
You probably missed this. IBM runs Claude on-prem, and provides it to clients on-prem as part of Orchestrate.
bingblangblong@reddit
Decided to look into this.
https://newsroom.ibm.com/2025-10-07-2025-ibm-and-anthropic-partner-to-advance-enterprise-software-development-with-proven-security-and-governance
There's no mention here of IBM running their own Claude, and the same goes for the TechCrunch article.
So what it probably means is that they're using Anthropic's API to access Claude. Not the same as IBM running it themselves.
I could not find any mention anywhere of Anthropic publicly releasing a model. What that huggingface.co page says is:
"imitation of Claude-4.6-Opus reasoning chains" so it is... not at all Opus 4.6. It is trying to be like Opus 4.6.
The "inference engine" refers to the software that runs the model, like vLLM, llama.cpp etc. That's completely separate from the model weights themselves. I think you are misusing the phrase to imply the model is the same, which it is not.
Superb_Raccoon@reddit
I work there. I know.
bingblangblong@reddit
OK, I believe that you work there, but you misused "inference engine" to imply it's the same model, and the link you provided is demonstrably not Claude. You also posted the TechCrunch article to validate your claim that IBM runs Claude locally, but it doesn't say that, so I'm not sure why you posted it.
MaNbEaRpIgSlAyA@reddit
Opus, Sonnet, and Haiku have never been open sourced.
Superb_Raccoon@reddit
So?
Who said they were?
I've explained it in other threads; I won't bother repeating it for people who don't check before they comment.
UMustBeNooHere@reddit
What brand would you recommend? Chevy, Nissan, BMW?
Spong_Durnflungle@reddit
He's trolling you. You wouldn't download a car.
VegasRudeboy@reddit
You wouldn't steal a handbag. You wouldn't steal a car. You wouldn't steal a baby. You wouldn't shoot a policeman and then steal his helmet. You wouldn't go to the toilet in his helmet and then send it to the policeman's grieving widow. And then steal it again!
Ur-Best-Friend@reddit
Well... not on a Thursday, I suppose.
cyber_r0nin@reddit
What about in a box?
cyber_r0nin@reddit
Not on a train?
Superb_Raccoon@reddit
I do not like green eggs and ham.
tuvar_hiede@reddit
I downloaded RAM, does that count? I don't think it was made by Dodge though...
UMustBeNooHere@reddit
Arrrrr ya sure?!?
Superb_Raccoon@reddit
Yugo.
wolfdukex@reddit
Gary Numan. That's all the cars you need.
Dr_Movado@reddit
I mean, there are the nvidia spark machines and oem versions (I have a dell pro max with a gb10) for like 5K that you can start out with and test use cases with before committing to more infrastructure and moving to production.
Superb_Raccoon@reddit
Or rent the infrastructure. I work for a fortune 50, so money is a small object.
Any fortune 1000 should be able to afford a few big machines if they need them, or buy cloud services if that makes more sense.
A good storage system is $500k at a minimum, and a serious purchase of hardware, software and storage is in the $100 million range.
I've worked on more than one deal at that level.
krilltazz@reddit
I made a small custom LLM because I wanted to chat with NPCs in Fallout 4. Works pretty well, it's lightweight and runs perfectly on my GTX 970.
Superb_Raccoon@reddit
Got a link?
Ever seen COVAS?
https://github.com/lucaelin/covas-next-aiserver/
It provides an AI co-pilot for Elite Dangerous. I run a lightweight model in LM Studio to drive it.
krilltazz@reddit
No link yet. It's a combo of Python and Llama. I added cool shit like a follower reaction and awareness system based on where you are, threat level, and health. Custom voices for all 61 named NPCs and random voices for unnamed NPCs. A "speech" system where, in combat, you can negotiate with the leader based on inspiration, threat caps, or promises. If they feel cheated or you fail to move them, combat ensues. I'm just having fun with it. I convinced 2 raiders to fight for me. 2 just ran away lol. I will check out your link.
Superb_Raccoon@reddit
Which model did you use?
krilltazz@reddit
I don't even remember anymore. Which are you referring to? I've got this, this, and this running on duct tape. The end result is almost exactly like I envisioned.
FendaIton@reddit
Training a model from scratch is too much for most companies, and data governance generally prevents people running inference on a 7B model.
GroteGlon@reddit
True, but most companies also really don't need a privately run LLM in the first place.
Used_Gear8871@reddit
Could they also just create an SLM (small language model)?
GroteGlon@reddit
You can run smaller DeepSeek models with 8B parameters on 12GB of VRAM, and bigger 32B-parameter models on 24GB of VRAM, i.e. an RTX 4090. Most use cases really don't need a 670B-parameter model.
If you're a smaller company that for some reason really needs a private LLM, you could be done with about a $5k USD investment. And if you really want to run bigger models you can look at quantization etc., or just invest more money.
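For rough sizing, the rule of thumb behind those numbers is about 1 GB of weights per billion parameters per byte of precision, plus headroom for KV cache and activations. A back-of-envelope sketch (the 20% overhead factor is an assumption; real usage varies with context length, batch size, and runtime):

```python
def vram_estimate_gb(n_params_billion: float, bits_per_param: int,
                     overhead: float = 1.2) -> float:
    """Rough VRAM needed to hold model weights, padded ~20% for KV cache
    and activations. A ballpark figure only, not a guarantee."""
    bytes_per_param = bits_per_param / 8
    weights_gb = n_params_billion * bytes_per_param  # 1B params * 1 byte ~= 1 GB
    return weights_gb * overhead

# An 8B model quantized to 8-bit comes out around 9-10 GB, consistent
# with a 12 GB card; a 32B model at 4-bit lands around 19 GB, i.e. a
# 24 GB card like an RTX 4090.
```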
420GB@reddit
Most companies can afford a couple hundred bucks of up-front cost for an NPU or GPU
sofixa11@reddit
NPUs are mostly useless, and GPUs big enough to run usefully large models do not cost a "couple hundred bucks". Nor does the rest of the machine they have to go in.
Bazzatron@reddit
Well, Claude just inadvertently went open source. The cost of hardware to run it and make it available internally is going to be lower than the cost of a major data breach.
But I broadly agree that this is a management issue and not a tech one. People need to be told that sending code to an external AI is no different than sending it to a coder on Fiverr, and both constitute a binnable offense.
MAM_Reddit_@reddit
In all honesty, it's probably not a custom local LLM built for the job; it's probably a ringfenced version of Copilot, which is absolutely useless 90% of the time.
jib_reddit@reddit
Copilot now runs on (up to) ChatGPT 5.4, so it should be the same.
BevvyTime@reddit
You can do it for a few thousand
NowWithExtraSauce@reddit
Can you afford $20/mo per user for Claude Pro?
tuvar_hiede@reddit
It's all about the ROI: what was that shipped code worth vs the cost of a company account or local AI? Personally I don't care for AI. It's going to really bite us in the ass once the experienced people of today are replaced by vibe coders whose only skill is AI verbiage to get the best results.
fergy80@reddit
If you pay for Gemini, then they don't use your data to train. What's wrong with that? Are you dealing with data that legally can't be exported or is healthcare data?
creamersrealm@reddit
We're running Anthropic models on Bedrock and the deployment was stupid simple. We only pay for what we use.
MathmoKiwi@reddit
Can't even give them Microsoft's Copilot?
VernapatorCur@reddit
That doesn't prevent them from signing into the external one with a personal account
hoax1337@reddit
If the ones that are available to use are sufficient, there won't be a need to use your personal account, right?
cyber_r0nin@reddit
It's called having integrity. Which a percentage of this sub don't have...
stephendt@reddit
Or snapping a photo of the screen and dumping it into the mobile app
xzer@reddit
Handled the same way as if you took a photo of sensitive data on your phone and shared it: instant dismissal lol
improbablyatthegame@reddit
The point being, how do you enforce this when it's being done on a BYOD device that's allowed applications outside the scope of MDM policies?
You’re relying on snitching, whether it’s self or otherwise.
roboticfoxdeer@reddit
Honestly when I interned at Boeing the scary military man warning us not to use our phone camera ever scared me straight. Maybe a scary guy is needed just this once lol
cyber_r0nin@reddit
It's called prison in a supermax
legrenabeach@reddit
The bank I am describing isn't BYOD.
As for the photo-on-phone scenario, that is exactly the same question as "how do I prevent people taking photos of confidential data and uploading it elsewhere", i.e. a management/training/HR problem, not an IT one - unless you are in an extreme regulated environment where you can only work on-prem and no smartphones are allowed in the workplace.
420GB@reddit
If you allow BYOD you don't care about anything anyways
YourWorstFear53@reddit
Facts
xzer@reddit
I mean, isn't this no different from the age-old problem of employees who can see payroll, sensitive trade data, insider secrets?
You start with the new-hire training suite pushed through HR. It is stuffed down your throat (at my bank) repeatedly at the start of any role: YOU DO NOT EMAIL YOURSELF WORK DOCUMENTS. Alongside it being fed to you at the start, there is an annual course you have to do touching on this subject. It's the same here.
We do it for calling from your personal phone number to clients, texting coworkers company related conversations, etc.
improbablyatthegame@reddit
It's definitely within this context, but supercharged, and the ease of breach is only going to make these incidents more dangerous and accessible to end users.
The torrent of shitstorms that will come from these systems is going to be massive. Absolutely blows my mind that companies are allowing everything into them. Seemingly no one has learned the lesson of data correlation being more dangerous than a singular data point.
binaryhextechdude@reddit
What amazes me is these people will be on a call with IT and they will admit to it. Statements like "I didn't have access to xyz drive last week so I just got Sally to email me the documents I needed so I could work on them"
So you circumvented restricted access and now you're boasting to IT about it?
I wish I could block their accounts.
binaryhextechdude@reddit
When they log in there is a fair use agreement they have to accept. An email went out stating what is and is not allowed regarding company data. If they continue to skirt the rules they can discuss their future employment with HR. Not my problem.
Humpaaa@reddit
Managed proxy that blocks all external LLMs, it's done in any regulated environment.
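For illustration, the core check such a proxy performs is just hostname-suffix matching against a deny list. The domains below are examples only; real deployments rely on the proxy vendor's continuously updated URL-category feeds rather than a hand-maintained set:

```python
# Hypothetical deny list for illustration; a real proxy would pull an
# "AI / chatbot" category feed maintained by the vendor.
BLOCKED_SUFFIXES = {"chatgpt.com", "openai.com", "claude.ai", "gemini.google.com"}

def is_blocked(host: str) -> bool:
    """True if host equals, or is a subdomain of, a blocked domain."""
    host = host.lower().rstrip(".")
    parts = host.split(".")
    # Test every suffix of the hostname, so "api.openai.com"
    # matches the "openai.com" entry.
    return any(".".join(parts[i:]) in BLOCKED_SUFFIXES
               for i in range(len(parts)))
```

As the thread notes, this only governs traffic that actually traverses the proxy; phones and hotspots walk straight around it.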
jamesyt666@reddit
I just took a photo on my phone and sent the image to Claude... You can't stop AI use, only educate people on how to use it effectively and safely. If you are not using it you will fall behind, as a company or as an individual!
CommanderSpleen@reddit
It all depends on the risk scenario. In extremely sensitive environments, personal phones wouldn't be allowed around confidential info.
PrestigiousShift134@reddit
lol or you could just use the MDM controls ChatGPT ships with …
sofixa11@reddit
Will that block Claude? Perplexity? Mistral? Kimi? DeepSeek? Whatever company shows up tomorrow?
xamboozi@reddit
You don't let employees copy code to anything personal. USB, email, and messengers should all be blocked and banned. Employees should all be thinking of "that one guy that got fired for copying/leaking code".
424f42_424f42@reddit
Can't sign into what you can't get to at all.
binaryhextechdude@reddit
Well actually... My office blocked every AI other than Copilot, but in my personal life I pay for Google Workspace, as I use my personal domain for email. I can't go directly to https://gemini.google.com/ as it's blocked, but I'm signed into my personal email and I found a way to get into Gemini, and it's fully functional.
I like being employed though so I only use it for non work related searches.
dagbrown@reddit
So you know that those tools are blocked, but you found a way to get past the block anyway? Do you expect HR to congratulate you for your cleverness and let you go ahead and exploit the loophole you think you've found?
"I oNlY uSe iT fOr nOn wOrK rElAtEd sEaRcHeS" is probably not going to be holding much water when they're looking for an excuse to get rid of you for being a hazard to company confidential information.
424f42_424f42@reddit
Well that's just a whole bunch of fireable actions.
PrestigiousShift134@reddit
Nobody gives a fuck. I paste company secrets into my Claude max account all day every day.
amcco1@reddit
You can use browser extensions that are automatically installed in the browser to block users from signing in with personal accounts or exfiltrating certain company data.
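As a sketch of the exfiltration side, such tooling typically pattern-matches outbound text for credential shapes before it leaves the machine. The patterns here are illustrative examples, not any real product's rule set:

```python
import re

# Example patterns only; commercial DLP tools ship far larger rule sets.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                    # AWS access key ID shape
    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),  # PEM private key header
    re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"),        # generic "api_key = ..."
]

def contains_secret(text: str) -> bool:
    """Screen a paste buffer or request body before it leaves the machine."""
    return any(p.search(text) for p in SECRET_PATTERNS)
```

A check like this catches the careless 11pm paste from the OP's story; it does nothing against a photo of the screen, which is why the policy layer still matters.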
xzer@reddit
It's not 2011; most mainstream vendors who offer corporate solutions can block personal accounts, brother.
TehZiiM@reddit
And yet, he could use his phone to take a picture of company data and let it run through his personal gpt account
legrenabeach@reddit
Same as with any other company data, he could use his phone to take pictures and sell them to competitors. That's not an LLM-specific problem, that's a sackable offence whatever data he does that with.
kyob@reddit
No security controls will protect you if someone sitting at home takes a photo of the code and uploads it to an LLM.
Recent_Carpenter8644@reddit
How would they know this has happened?
PM_ME_YOUR_GLIMMER@reddit
You don't need to know every time it happens. That would be ideal, yes, but if your coworker got fired for using an unauthorized tool with company data, you may be less likely to use one yourself.
ride_whenever@reddit
They won't use personal GPT if you've given them an enterprise account for Claude/Cursor etc.
Especially for devs, the Claude CLI or some other IDE/terminal-based AI assistant is so powerful for troubleshooting complex codebases.
cyber_r0nin@reddit
You realize the more devs use these tools, the sooner these devs will be out of a job. The whole point of using the AI is to train the AI at the same time. Eventually the AI will get good enough at development that a developer may not be needed. Instead of training your human replacement (some new intern), you're training a non-human AI. Granted, this will take a while...
ride_whenever@reddit
The point this will happen has very little to do with the ai tooling as such.
It's all about the surfaceable content, and being able to give it context. Most companies just don't have that available, but the AI is relatively good at producing it as it goes.
Once the tools are widely enough adopted, we’ll see companies coming up that have entirely grown with this tooling, and the associated documentation/contextual layers - that will be the point that you’ll see the end of current software dev.
Same with sysadmin work, once companies come up with their infrastructure established with contextual cues, that’ll be the end of it.
However in the meantime, I’m loving being able to work at the speed that I’m thinking, rather than being slowed down by the tedious act of execution. It’s exhilarating, and I’m massively enthused with my job again.
cyber_r0nin@reddit
I understand that, just...change is annoying...
Substantial-Fruit447@reddit
My org set up CoPilot as our approved internal AI service.
We can link to our Outlook, OneDrive, feed it company data, within reason.
We were told "nothing confidential" and "no API keys or client secrets, or credentials otherwise."
Obviously it's not going to stop anyone from feeding stuff into another LLM on another device, or taking a picture of something on their personal phone and sending it into Claude.
You could go the route of blocking it, but that's a lot of work.
Instead, this is something to be handled more at the policy and management level.
Clearly define which services you can and cannot use and what data you can and cannot put into it. Anyone that violates that policy will be dealt with administratively.
ansibleloop@reddit
I love the Copilot comedy clown show in VS Code
The auto-prediction is enabled by default, meaning whatever you type is sent to Copilot
So if you paste an API key or password or whatever into VS Code, off it goes to Copilot
lotekjunky@reddit
And Copilot abides by the Microsoft data privacy agreement. They also will not train models on your data.
ansibleloop@reddit
Yes these companies who stole all the data to train the models are definitely trustworthy
They definitely won't train the models on your human input now there's no LLM-untouched internet data they can scrape
We just saw Claude code sending telemetry based on using certain words too
If you want truly private AI then you need to run the model yourself
lotekjunky@reddit
Read, please. Microsoft is not OpenAI. Microsoft is not Google. Microsoft is not Anthropic. Microsoft is an enterprise B2B provider certified for highly regulated industries. If you have anything other than conjecture, share it. https://www.microsoft.com/licensing/docs/view/Microsoft-Products-and-Services-Data-Protection-Addendum-DPA
Substantial-Fruit447@reddit
Wtf no way
AggravatingAmount438@reddit
This is the problem: The people responsible for punishing this behavior (HR) don't understand the problem or risks.
And because it makes them look anti-AI to a bunch of CEOs that just regurgitate everything AI related, they don't punish for it.
And until there's a real consequence and punishment for constantly giving your company's and customer's information up freely to the all-consuming AI wheel, people aren't going to stop.
It's easy for them, and convenient. And they get to take all the credit for using AI.
SHANE523@reddit
One of our departments calls our CEO, "ChatGPT", that is how much he uses it.
electrobento@reddit
Outright blocking access to non-approved AI is technically simple and any legit company can do it. The politics are hard though.
juggy_11@reddit
It's simple, but I wouldn't recommend it. That just creates opportunities for shadow IT. What's stopping someone from going on their personal computer and copying company data there? I'm with the OP on this one: this is more of a policy-level issue than a technical one. You violate the policy and we catch you, you get fired.
TheCyFi@reddit
What’s stopping someone from copying company data to work from personal devices? Having an approved corporate LLM model that works well, DLP, and Conditional Access Policies go a long way. Obviously, it won’t prevent someone from manually recreating the data on their personal device, but that takes some pretty blatant malicious intent. At the very least, it certainly makes it much more difficult than just adopting and using an approved corporate LLM model.
juggy_11@reddit
Yes, have an approved corporate AI and put controls around it. Completely blocking non-approved AI on corporate machines though will make it more likely for employees to just use their personal machines to help them do their work. So how do you monitor it? You can’t. So instead of blocking it, put a policy around it. Better yet, offer ChatGPT enterprise or Gemini Enterprise.
TheCyFi@reddit
You’re missing the point. If you have sufficient Conditional Access Policies, while not impossible, it is very difficult to access work data from a personal device. And yes, you can monitor that.
Further, EDR vendors (including SentinelOne and CrowdStrike) now offer prompt security add-ons that can detect, sanitize, and even block unauthorized or improper use of LLMs.
Also, you say “better yet… offer ChatGPT or Gemini enterprise”. That would be the “approved corporate LLM” that I was referring to; so your “better yet” is one piece of what I’m suggesting and is part of the same approach that you’re saying isn’t recommended.
sublime81@reddit
It’s really not. Pull out phone, point it at the screen with a document on it, click text, and it copies the text. No need to recreate.
ZweiNor@reddit
Well yeah, there are always ways around. But at that point it clearly is an intentional data leak and should be treated as such if they are caught.
juggy_11@reddit
You also missed my point and we’re completely misunderstanding each other. The issue isn’t just direct access to company data from a personal device. It’s employees manually taking company information and inputting it into personal machines, which is much harder to monitor or control. That’s why simply blocking AI on corporate machines can backfire if you don’t also provide an approved enterprise alternative.
TheCyFi@reddit
I didn’t miss that point. I explicitly addressed it.
fresh-dork@reddit
Fire them. They exfiltrated data deliberately rather than use the tools provided. Can't trust them; out the door.
CtrlAltDust@reddit
It's both. Defense in depth.
Jhamin1@reddit
The thing we are running into is that because everyone is chasing AI, AI tools are showing up in every random website you can think of.
We are using endpoint protection to shut down calls to non-approved AI sites... and it's breaking a lot of stuff you wouldn't expect. Someone couldn't get to the power company site because it auto-loads an AI-powered customer service widget, and the site won't load if the widget is blocked. This is a problem as we do business with them.
We are still trying to fine-tune the filters, but it's messy, and we are losing any hope of this *not* becoming a big political thing with the wider company.
dynalisia2@reddit
I’m curious: are you using Microsoft cloud services?
I hear about companies that have all their stuff in Exchange Online, SharePoint Online, OneDrive, Teams, but then suddenly don't trust Microsoft anymore when it comes to Copilot. Makes no sense to me.
Substantial-Fruit447@reddit
Hybrid
Shotokant@reddit
If you have Copilot at work, it can use Claude. It just needs the 365 admin to enable it. It's better than the OpenAI LLM imho
dynalisia2@reddit
I think that for average users, the model quality really isn't going to make much of a difference. It's the application layer that matters, and there Copilot is unfortunately much worse as a chat assistant than the others.
MrChampionship@reddit
Where do you do this as an admin?
GreyHasHobbies@reddit
O365 Admin panel. I forget the exact location but look for the Copilot features labeled "Frontier". This will allow users to switch to Claude Sonnet.
gumbrilla@reddit
oooh
MyThinkerThoughts@reddit
Check Azure Foundry too
craigoth@reddit
Yes, this is the answer. This is a policy issue, not a technical one. Ensure everyone reads and indicates that they have read the policies, and have real consequences for not following them.
jhme207@reddit
We did the same. Copilot for general use, but people can get exceptions, and we have a handful of GPT and Claude licenses.
Enforcement is entirely done by policy.
Superb_Raccoon@reddit
Very large companies use copilot for "individual" use cases. Even IBM does, as for commodity AI it is hard to beat.
fxs38@reddit (OP)
Thanks for sharing your experience! Very useful
Biglig@reddit
It’s an ancient anti-pattern. Policy based security controls don’t work if you don’t also have an effective disciplinary process for when someone breaks policy. Fire this guy noisily and it won’t happen again.
Ballbag94@reddit
They should have a properly defined policy along with what is acceptable, not just fire the dude to make an example of him, otherwise it will happen again, just not the exact same way.
"Don't put company data in AI tools" is way too broad without even defining what is classed as data and what isn't. They need both halves to be there, not just the disciplinary half.
Biglig@reddit
Oh yes, it has to be “we fired him because he didn’t follow our clearly written policy which we’ve made sure you all read”.
KJatWork@reddit
IT has to get out in front of new tech and drive it or someone else in the company will.
Many years ago, wireless access points were still new and our company's IT was concerned about security, so they refused to deploy them to offices, but they also did nothing to physically secure the IT closets in the offices either... the 90s were kinda wild. Office managers would head over to Best Buy, pick one up off the shelf, plug it into the hub in their IT closet, and pass the default password around to their office staff and any customers/clients that needed/wanted internet.
You may not like something and it may be a risk, but if you are driving it, you control it. If you don't, someone will and that someone is right now copying company data into public AI tools because you didn't give them a company option supported by policy and leadership to use instead.
Beneficial-Gift5330@reddit
Given that WiFi didn’t effectively exist until mid to late 1999, I am finding this comment mildly amusing, if inaccurate
HesSoZazzy@reddit
PCMCIA card in my laptop connected to a Linksys WRT54G. Blew my fucking mind the first time I was able to get a page to load from my chair in the living room. :D
Beneficial-Gift5330@reddit
Right, I don't doubt it happened, just not in the way he described. Both because he's violently wrong about the decade, and because laptops were pretty rare at that point, and installing a PCI card in a desktop was not as easy and clean a process as he described an office manager doing. IT guy? Sure. Office manager? Absolutely not.
gp24249@reddit
This - I remember!
My first PCMCIA "Air Card" too! We installed one in our president's laptop and he could get internet over 3G (or whatever it was back then). Except we forgot to show him how to turn it off, and he went to England (from Canada) and roamed the whole time... ended up with a $7,000+ bill...
KJatWork@reddit
No, fair point, time plays tricks as I get older. 😅 Thinking back I started that role in early 2001 and that would have put this around 2002/3 time frame.
d00ber@reddit
We block it on the firewall and have a written policy. If you put company data into chatgpt, it's auto termination on first offense.
lotekjunky@reddit
so not on first offense?
d00ber@reddit
Here's an example of a first offense fire. During a meeting a developer literally pasted some private data from one of our customers into chatgpt while presenting to try and show us how valuable it is. The meeting was interrupted, and the developer was fired by end of day. The policy had been in place for a long time prior.
xored-specialist@reddit
Sounds like work needs to pay for AI tools, or enforce policies that block AI and fire employees who put company information into it. That's a per-company call.
dont_ama_73@reddit
Block it. Send out an email that only corporate AI is allowed. Fire people that break the rule. Be aggressive.
nickram81@reddit
How bad is it, though, to expose an API key to ChatGPT? Like, what are the potential consequences?
viking_linuxbrother@reddit
Not a great thing, but it's literally super common. Until these services are blocked, they are a leak for data.
strongest_nerd@reddit
Funny, you used ChatGPT to post this.
You follow the use policy created by HR.
Vektor0@reddit
I am flabbergasted by the number of wannabe IT professionals in here responding to the most obvious AI post ever. None of these people have any business having IT credentials or anything to do with cybersecurity.
upq700hp@reddit
seriously...what the fuck. not the first time either
upq700hp@reddit
I get so fucking pissed every time I see someone complain about AI using AI-written text. Is it engagement farming? Are they actually just stupid?
So many plausible explanations, not one of them reasonable in the slightest.
iamLisppy@reddit
— is the dead giveaway. Nobody actually uses this in the real world.
danekan@reddit
It’s sad for us who do
Windows95GOAT@reddit
You should know better than to assume, as IT personnel :)
Hexadecimald@reddit
I feel like the use of the actual em dash character is the callout -- in my experience humans use the double dash (which IIRC markup converts to an em dash?)
So I mean technically you can use markup for Reddit posts
godspeedfx@reddit
I've used em dashes for years before LLMs became mainstream and now I feel like I can't anymore, lol. Had to break the habit =(
rangers_87@reddit
Same I use the em dashes all the time - but the difference is it’s not that elongated one (—) which is usually the giveaway.
bingblangblong@reddit
That's called a hyphen
PartyPoison98@reddit
It is a hyphen, but on a keyboard most people will use it in place of both an en dash and an em dash. The ChatGPT giveaway is using an actual em dash.
rangers_87@reddit
Ah, whoops! My bad. I use hyphens all the time.
rangers_87@reddit
I’m probably using them wrong too now that I think about it more.
LachlantehGreat@reddit
If you write in word it auto corrects them, but I’ve also been using them forever. I started adding them to my papers in school and it’s so much easier to maintain good flow with them
immaculatelawn@reddit
I put in hyphens, Word makes them em dashes when it's supposed to.
I'm not giving up a grammatical tool because some ungrammatical tool will accuse me of using AI.
franxfluids@reddit
Sorry man, but you've just been mildly illiterate until now.
(-) hyphen: connects words (–) en dash: connects numbers to represent a range (—) em dash: emphatic pause
Hot-Meat-11@reddit
I went through 13 years of Catholic school, scored 98th percentile on the college boards, went to college, took composition classes, etc., even majored in journalism for a semester...and nobody...ABSOLUTELY NOBODY...ever differentiated between these dashes.
So, to call someone "illiterate" because they don't know the difference, is kind of a d*ck move.
rangers_87@reddit
I wouldn’t say not knowing three different types of dashes is mildly illiterate but sure — I can take my lumps.
civbat@reddit
Harumph. Wholly ignorant of Dashes. Utterly disgusting...
stephenph@reddit
I have just found out about them and — started — using them.
Ur-Best-Friend@reddit
Incorrectly—apparently.
PAXICHEN@reddit
Samsies.
chriscrowder@reddit
I use semicolons
8inches_inside_daddy@reddit
Excuse me - I’ve been using hyphens religiously before ChatGPT was around.
The same for the thumbs up emoji before Gen Z said it was rude and passive aggressive.
BillyBumpkin@reddit
A hyphen is different than an em dash. You have to really go out of your way to use an em dash - I’m wagering you’ve actually been using hyphens (like the one I just used)
lotekjunky@reddit
Word (and probably other apps) automatically replaces hyphens with em dashes
iamLisppy@reddit
I didn’t put a hyphen, though, but I can see how it looks like that.
Vektor0@reddit
Not anymore. The more convincing chatbot posts are now using all lowercase and omitting periods at the end of paragraphs.
Newmillstream@reddit
Some smartphone keyboards will automatically change - to — if you tap - twice.
AcidBuuurn@reddit
I used to use two of these: -, but then text editors started smooshing them into a big long one. And in some places it will look right until you send.
So I used to accidentally use them.
D-Alembert@reddit
Plenty of us write in the well-structured style that the machines were trained to emulate. People are where it came from
Bruenor80@reddit
Plenty of people do. Or did. I've been actively breaking the habit. Also, Grammarly, Word, Outlook, etc. have been auto-correcting multiple hyphens to em dashes for at least a decade.
heavySeals@reddit
Yes they do which is why LLMs use them. They're used all the time in research papers and technical writing.
typo180@reddit
It's not a dead giveaway (plenty of real people actually use em dashes), but given the style and cadence of the post as a whole, it's a good sign.
chrisnetcom@reddit
And used a dormant 10 year-old account to post it. Definitely trying to promote something, most likely from other accounts in the comments.
danekan@reddit
What tools have you provided to them? I feel like that’s a better starting point in this conversation
PixelSpy@reddit
We created a company policy a few months back: any personal AI solutions are prohibited and entering company data into them isn't allowed. We then blocked all of the different providers from our network.
We only allow copilot, and give out copilot licenses to those who request them.
rainer_d@reddit
Our policy: „Don’t ask, don’t tell.“
All our mails are in Exchange Online, the Clients have Falcon agent installed, most servers too.
If management was worried about any secrets spilling out, they would have to stop doing a lot of shit before they can worry about Anthropic , OpenAi and all the other companies.
gbell76@reddit
If you are running an M365 environment, you may want to consider Copilot licensing with OpenAI or Anthropic as a subprocessor. This way GDPR/HIPAA/ITAR/etc. compliance will flow through, given that you have a BAA with Microsoft. It will also keep it logically siloed and you can somewhat manage it. Just a way to CYA and at least give the air of doing SOMETHING to maintain compliance and best effort. The alternative is the Wild West.
butterbuts@reddit
Screenshots of data get around a lot of the tools used to block AI tools (that aren’t Copilot).
kennymac6969@reddit
My department in the gov recently rolled out Netskope. I'm not in any need-to-know situation, so all I know is it's installed on most machines now. Based on the website description, it might be helpful.
Windows95GOAT@reddit
HR issue. We currently block Copilot and Deepseek while we are officially onboarding AI use. But we do already give training, which comes down to: "Don't be dumb."
But there is zero we can do to prevent someone from "asking chat" at home or on their personal device, outside of HR repercussions.
ResidentKernel@reddit
This means your company's controls are terrible. They shouldn't be able to access LLMs outside of approved corporate-controlled models. And you need to back things like that up with consequences. If it's against your corporate LLM or IT policy, you exit them and make an example.
Jacmac_@reddit
I really don't think putting data into ChatGPT is as big a deal as people try to spin it into. Security and IP people tend to go hyperbolic on the subject, and their arguments are pretty much bunk and border on paranoia. A snippet of code is not going to leak the family jewels for everyone to make use of. Pasting an entire codebase repo into a context might be used by OpenAI or Anthropic in some way for training that could eventually lead to someone else making use of the knowledge.
You can't shut it down, you can only make it difficult. Our company blocked everything, people just used screenshots and asked chatGPT on a personal device. Our company finally went so far as to buy their own AI company to the tune of $1.5 billion so that the code base could be put into an AI that is locked to only company use.
Jakeliving@reddit
Thankfully our company decided that giving everyone a $20 Copilot account would save them the headache down the line; with enterprise protection, people can go nuts sharing customer information without any worry from the higher-ups.
No-comments-buddy@reddit
You can allow only corporate instance and block personal instances through netskope
cholointheskies@reddit
Ai slop
BonezOz@reddit
Intune/EDR to block any AI except for Copilot.
zipcad@reddit
If you don’t have an enterprise LLM on a private plan right now you are behind. People are throwing all sorts of shit into ChatGPT. Have to mitigate the risk even if you hate it.
Claude is the right answer, btw.
Depending on your industry, the consequences range from a slap on the wrist if they fixed the fart sound button, all the way through being litigated for negligence if the code they leaked was sensitive.
Fatality@reddit
Why are you not giving people the tools they need?
bezerker03@reddit
This is a policy issue. Start officially writing people up and or firing for it. They are basically sharing company code or private info with the internet.
Total-Assumption-494@reddit
Data loss prevention to monitor endpoints for uploads to cloud services
Ok-Reply-8447@reddit
Managers should take the time to raise awareness and provide proper training so everyone understands the risks and consequences of sharing sensitive or critical information with GPT tools.
At the same time, policies need to be updated to reflect this clearly.
For those who repeatedly ignore the guidelines, HR should step in and handle the situation appropriately.
lotekjunky@reddit
they should also understand their API code is not special
JoeVisualStoryteller@reddit
Better get enterprise seats for ChatGPT and Claude at the minimum. No real way to do it otherwise.
lotekjunky@reddit
github copilot
B0797S458W@reddit
We block all AI tools as soon as we become aware of them. Only copilot is allowed.
RCTID1975@reddit
So you have a team of 5 whose only job is to comb for new AI things?
This doesn't seem like a sustainable process, or even reliably protecting you
lotekjunky@reddit
zscaler ai/ml classification is fast enough.
cometwrench@reddit
I mean, if you’re just giving them standard Copilot then you may as well block AI outright, since it’s an active hindrance as opposed to tangentially useful every once in a while.
matjam@reddit
Get agreements for the tools and provide access through the self service portal.
Most of them are able to do a “we won’t use your data” agreement.
They probably lie of course but it makes the lawyers happy.
DopamineSavant@reddit
Well you all don't have a policy, so it was nothing.
cyber_r0nin@reddit
OP stated there is a CYA 'line' in the manual - that no one reads.
Not reading the company handbook isn't an excuse, and most businesses require you to read the manual for this very reason. If there is an update, then an all-hands e-mail should be sent out indicating there has been a change. If that isn't enough, then require EVERYONE to read the manual as a yearly training. If they sign off that they read it (whether they did or not), the business is covered.
Orgs need to stop giving passes to this nonsense. There are *plenty* of workers waiting for a job. Using fear in this instance might work just enough, but you have to be willing to back it up. Someone breaking the AI rule will have to be made an example of at some point; otherwise there are no teeth to the no-AI rule. In this instance, since the fix has already shipped, some sort of measure needs to be taken, though probably not an outright firing. (Emotional decision - maybe they have a family?)
SandmanPC@reddit
I implemented cloud access security broker (CASB) policies with Data Loss Prevention (DLP) using a Secure Access Service Edge (SASE) product called Netskope.
We blocked requests that contained specific information: keywords, health data, or personally identifiable information. It allowed fingerprinting data and matching against that for DLP rules.
Powerful, yet expensive, product.
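For anyone wondering what the keyword/PII matching looks like under the hood, it's conceptually just pattern rules applied to the request body before it leaves the network. A toy sketch of the idea (nothing like Netskope's real engine; the patterns and keyword list here are illustrative only):

```python
import re

# Toy sketch of the kind of pattern matching a DLP rule engine does.
# The patterns and the keyword list are illustrative, not a real policy.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),          # US SSN shape
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),  # email address
}
KEYWORDS = {"confidential", "internal only", "patient"}

def dlp_violations(prompt: str) -> list[str]:
    """Return the names of the rules a prompt would trip."""
    hits = [name for name, pat in PATTERNS.items() if pat.search(prompt)]
    lowered = prompt.lower()
    hits += [f"keyword:{kw}" for kw in KEYWORDS if kw in lowered]
    return hits

print(dlp_violations("patient SSN is 123-45-6789"))  # ['ssn', 'keyword:patient']
```

A real product adds exact-data matching and document fingerprinting on top of this, which is what catches pastes that simple regexes miss.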
lotekjunky@reddit
zscaler too
fxs38@reddit (OP)
That’s a heavy setup! But yes, it seems to do the job.
junon@reddit
Most secure web gateways offer this functionality now. We had it with Cisco Umbrella before and we have it with Zscaler now.
SandmanPC@reddit
I find Netskope much more intuitive than Zscaler. I'd heard rave reviews about Zscaler, but when I finally got my hands on it, I didn't enjoy the experience as much as I did Netskope.
gumbrilla@reddit
What the hell are your management doing???
Go and buy Anthropic Enterprise
Give claude to developers.
Usual users, you could give them Copilot, but it's shit. Probably give Claude to them also, tbh; that's what we do, Claude as well as Copilot. But Copilot is shit. Shit shit shit.
Put in a policy and technical controls (purview, network monitoring)
You have unmet demand in your organization, if you don't meet that demand.. like water, it will find a way.
cyber_r0nin@reddit
Unmet demand?
First, what would have happened if there was no AI?
Second, why is their well paid dev not able to figure out the issue?
Third, if 1 guy can't debug why isn't there a team or a backup dev also able to help?
AI IS NOT THE FIX.
There is a glut of labor supply who are more than eager to do this stuff. People like easy fixes versus the right fix. The person who did it should be reprimanded. Now that code which is proprietary is out on the web somewhere. And your organization has 0 control over what is done with it.
ResortPuzzled551@reddit
It can be really tricky navigating employee use of personal AI accounts, especially with incidents that go under the radar. People often face similar issues and explore tools that provide visibility into employee actions on these platforms. You might want to look into solutions that focus on tracking app usage and managing data security, like BreachLock, which integrates continuous monitoring with security assessments. It's all about creating a balance between enabling productivity and ensuring compliance, so some clear policies about what's acceptable are essential as well.
tejanaqkilica@reddit
I've informed management that using ChatGPT (or frankly, any other tool) with personal accounts is a stupid idea. Management is fine with it because the person managing ChatGPT is inviting users with their work email addresses. Those are still considered personal accounts, but hey, management knows, they don't care, and I have to bite my tongue and try to let it go.
remuliini@reddit
Provide them the company's selection of AI tools you have vetted and set up yourselves. We are provided a selection of ChatGPT, Claude, GitHub Copilot, Cursor and M365 Copilot.
lotekjunky@reddit
why are you making them use their own accounts? that's the failure here, not the developer making things work. If they aren't allowed to use ChatGPT, what do they have approved access to instead? Why is ChatGPT not blocked and why haven't the developers been properly enabled for AI use?
_nathata@reddit
Let's be 100% honest here, this is not the big deal you think it is.
Intelligent-Pause260@reddit
Who cares. 95% of these corporations are soulless and will lay you off in a seconds flat just to bump their stocks.
_nathata@reddit
Yeah, absolutely. Bro is concerned about pasting 1k lines of a probably 500k-LOC codebase into ChatGPT like it's a major deal.
Faaak@reddit
Haha, yeah totally. And OP is like: how can I make my org less efficient at what it does, so that it earns less money and has one more reason to can me?
Nobody cares about your code. Anybody can write code, that's not where the secret sauce is
CrustyMFr@reddit
THANK YOU!! Your AI tool isn't going to take a little piece of API code to market and run your company out of business. PII is different of course, and you don't want people pasting stuff like that into it, but there is no reason to get so worked up over this kind of thing. It solved a problem. Move on.
ishboo3002@reddit
Yes but it's all the same problem you have to protect your company data from the public models that can use it to train. If the dev is doing it with code, you know sales is doing it with prospects
aaiceman@reddit
This right here. I expect sales to upload anything and everything that is a client list into anything they can find and put no thought into how it's being harvested.
CrustyMFr@reddit
Locking devs out of something useful because someone in sales might misuse it makes it sound like you don't really have a handle on RBAC controls. Salespeople's entire job requires working with proprietary information. Devs often don't work with anything proprietary and should understand that when they do they should not feed it to an AI. Safe enablement is the answer. Blanket lock down policies throw the baby out with the bath water.
ishboo3002@reddit
That's literally what I was saying: it's the same problem. Get a tool that solves it for everyone, enforce DLP, and provide a golden path.
CrustyMFr@reddit
I guess I didn't get that from your reply. We agree, then.
DistantFlea90909@reddit
We have a Google chrome extension that warns you if you go on ChatGPT
The company pays for Gemini and encourages users to use it.
ImpossibleLoss1148@reddit
Places with a coherent security strategy are leveraging vendor tools to scan for rogue AI in the enterprise. Places that just fall into the next crisis, are not.
nmsguru@reddit
We created an AI policy document a year ago with specific tools allowed. Employees must sign it before getting access to AI; all the rest are blocked by firewall and proxy. Employees who want to use AI need to submit a formal request, approved by their manager, and get a licensed Enterprise account for the AI tool with their corporate creds. An additional level of DLP is enforced through dedicated software that scans AI prompts. The cyber team reviews violations and alerts the managers. Employees who paste PCI and other secret stuff are called and warned. No layoffs yet, but they behave after a few got caught and warned.
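For the PCI piece specifically, prompt scanners typically pair a digit-run regex with a Luhn checksum so random number strings don't trip the rule. A toy sketch of that check (not any vendor's actual implementation; 4111 1111 1111 1111 is the well-known Visa test number):

```python
import re

def luhn_ok(digits: str) -> bool:
    """Luhn checksum: passes for real card numbers, filters random digit runs."""
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:  # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def find_pans(prompt: str) -> list[str]:
    """Candidate card numbers: 13-19 digits allowing spaces/dashes, Luhn-valid."""
    pans = []
    for m in re.finditer(r"\b(?:\d[ -]?){13,19}\b", prompt):
        digits = re.sub(r"[ -]", "", m.group())
        if 13 <= len(digits) <= 19 and luhn_ok(digits):
            pans.append(digits)
    return pans

print(find_pans("charge card 4111 1111 1111 1111 please"))  # ['4111111111111111']
```

Real DLP engines layer BIN-range checks and context keywords on top, but Luhn alone already kills most of the false positives that a bare `\d{16}` regex would flag.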
ProtectAllTheThings@reddit
Give people enterprise accounts. You can’t stop this otherwise.
jmachee@reddit
This is a people problem.
You solve those with policies and punishments.
dynalisia2@reddit
You can’t block the river, so you have to channel it somewhere you want. So another LLM that you can sufficiently trust/control. Don’t build/use something bad (it has to be at least copilot level, which is basically the worst of the big ones imo), otherwise people will still go shadow IT.
Also, many security suites are coming with traffic inspection that includes prompt detection. Our security vendor (Trend) is chomping at the bit to sell it to us. We're going to look at it next week.
Steamwells@reddit
A couple of things we're thinking about at my tiny company:
- MDM policies to only allow company-approved AI tooling, which is basically Gemini via Google Workspace, and Claude Code/Cowork for engineering.
- We're thinking about the guardrails and sandboxing options in Claude, and how we can roll out recommended security config.
- Tracking which MCPs are enabled across which SaaS platforms.
These were recommendations made to our IT team, as I am more of an engineering security guy, not a sysadmin.
This GenAI era is just highlighting how much of an issue shadow IT is, not just to manage but even to have visibility of. We really need an Eye of Sauron on all of it. Not sure if Vanta/Drata etc. have made any progress in these areas, but I guess we're always limited by what these AI/SaaS providers allow you to check from their APIs.
Bonne chance, my brothers and sisters in tech.
gogetit57@reddit
We use Lightspeed filtering (education). They are bringing in an AI Monitoring tool that gives exactly what you state - which users are using which AI platform and what prompts are they entering. Whilst you may not have the same filtering requirements, this module may help you in your quest.
TommyTheOneAndOnly@reddit
We have Copilot as the approved AI platform; ChatGPT and some other tools are blocked so the site isn’t reachable. Works for the majority of users.
Popal24@reddit
You have to give some leeway and provide corporate AI dev tools. It can be corporate Microsoft Copilot or something else. This will destroy the need to use private tools.
Look at what Microsoft did with their Xbox One. By providing the Developer Mode, they offered a tool for anyone to build homebrews or emulators therefore killing the need to hack the system. It kept it locked for more than a decade which is extremely long. On the other hand, Sony killed the PS3 Linux support which sparked anger and the quick hack of their system
ScoobyGDSTi@reddit
We use Microsoft Purview to audit all interactions with Copilot and ChatGPT Enterprise: everything users input, all responses from the agent, and then we audit it for exposure of sensitive information.
Gaddness@reddit
One system we use is a group-policy-enforced extension called "Push Security"; it only works on Chromium and Firefox browsers, and we're currently trying to find a solution for Macs. We use this to give them a popup every time they go to an AI website on their work computer that reminds them of our security policy surrounding AI. Which they then click through to continue using it.
Theinfrawolf@reddit
The option to just hammer home with HR punishment ("Don't use this or else...") will just negatively impact the culture in your workplace and will bite you back somehow. Go the route of a managed alternative. In our case, we went with an enterprise browser and set up conditional access policies so anyone who is not in the browser can't access the tooling. What did this do?
1. We can manage and log absolutely everything people do inside the browser.
2. The browser can be used on personal devices, no need for device enrollment, so it's as flexible as can be for the worker.
3. If you're not using the browser, you don't have access to the resources, simple as that. Meaning they may have access to ChatGPT outside, but not the codebase inside. And some enterprise browsers have a strict "no copy/pasting outside the browser" policy, or at least log every copy/paste, so you'll know when someone is doing it.
By far a win-win for everyone involved.
BillyBumpkin@reddit
Gosh I wish there was a product I could use to prevent this
Thunar13@reddit
Anyone here saying "control of these things is standard, get with it" seriously thinks this industry is in a different place than it is. I recommend looking at others' experiences. It's the straight-up Wild West unless you work DoD or for a bank, and the bank is just gonna use 3rd parties to take on the responsibility, which they can.
False-Lawfulness-778@reddit
More security vendors are coming out with tools to prevent this form of data exfiltration. We had a demo for an extension by CS, but I forgot what it was called, and it was limited to certain browsers.
young_wendell@reddit
Netskope, policy, and prayers.
Charlie_Parker__@reddit
Hi.
scrotumseam@reddit
We have a firewall. With url filtering.
Negative_Click3214@reddit
Your problem is that even if you have a company sanctioned AI, if that AI is MS Copilot, then nobody will want to use it. Copilot is widely regarded as the least useful AI tool. Give them Claude Code and they'll actually try it.
the_doughboy@reddit
You supply alternatives that are controlled and then block the others with enterprise content filtering.
ReasonablePriority@reddit
This, plus education and policies on why it's bad and with appropriate sanctions if it's breached.
I have access to enterprise licenses options the company has negotiated or internal options. Everything else is not allowed
critler_17@reddit
Best answer for sure. I have my own chatbot my employees use that's just a ChatGPT skin, and it will chew them out and let me know if they've done something naughty (not really, it's just like "hey, don't do that" and I get an email).
GeoSystemsDeveloper@reddit
Block it from their laptops ...
Benvolix@reddit
The most effective solution here isn't surveillance, it's substitution.
People will always reach for the best tool available when they're under pressure at 11pm. If you don't give them a managed alternative, they'll use their personal accounts. Block it on the company device, they'll use their phone. It's not malice, it's just human nature.
The fix is to make the managed option the path of least resistance. Get them GitHub Copilot, ChatGPT Enterprise, Claude for Enterprise, or whatever fits your stack. These give you the data protection guarantees you need (no training on your data, tenant isolation, audit logs) and your devs get the tool they already want to use anyway.
Once a sanctioned option exists: - You have actual audit trails - Data doesn't leave your controlled environment - You can enforce acceptable use through the platform itself - Employees stop hiding it because there's nothing to hide
The dev in your story wasn't being reckless - he was trying to do his job. He just had no good option available. That's a tooling gap, not a culture problem.
Policy alone never solved shadow IT. You need to close the gap that creates the incentive in the first place.
Inn0centSinner@reddit
Company owners: "Let's use more AI to cut staff and save money."
Also company owners: "Don't use AI to work faster."
NickAppleese@reddit
Oh yikes. Vibe coding proprietary information from a bank into a LLM?
That's a problem.
Majik_Sheff@reddit
You are looking for a technical solution to an administrative problem.
elyveen@reddit
With MS Purview and Defender you can set DLP policies and such to prevent such things. It's really nice, but a bit expensive.
extremetempz@reddit
I've been through this, we (Security team and CIO) drafted a document specifically for Developers and this sort of thing and make it a sackable thing.
This is to an extent a technical problem, and it can be solved (i.e. lock down ChatGPT to the Enterprise tenancy via cookies). However, you'd be surprised how much people's behaviour changes once there is a piece of paper.
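For the curious: the usual mechanism for that kind of tenancy lockdown is the SSL-inspecting web proxy injecting a vendor-specified HTTP header on requests to the AI vendor's domains, so only the corporate workspace is reachable. A dependency-free sketch of the idea (the header name and workspace ID below are placeholders, check your AI vendor's enterprise admin docs for the real ones):

```python
# Sketch: proxy-side header injection to pin ChatGPT to a company workspace.
# "X-Example-Allowed-Workspace" is a PLACEHOLDER header name; the real one
# comes from the AI vendor's enterprise admin documentation.
ALLOWED_WORKSPACE = "acme-workspace-id"  # hypothetical workspace ID

def add_workspace_header(host: str, headers: dict) -> dict:
    """Return a copy of headers with the restriction header added for AI hosts."""
    if host.endswith("chatgpt.com") or host.endswith("openai.com"):
        headers = dict(headers)  # copy so we don't mutate the caller's dict
        headers["X-Example-Allowed-Workspace"] = ALLOWED_WORKSPACE
    return headers

print(add_workspace_header("chatgpt.com", {})["X-Example-Allowed-Workspace"])
```

In practice you'd hang this logic off your existing gateway (Zscaler, Netskope, mitmproxy, etc.) rather than hand-roll it, and note the control only works where the proxy can actually decrypt the traffic.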
moose1882@reddit
I encourage my clients to add something to their Acceptable Use Policy (or a standalone AI policy):
"Only use company domains (@company.com) when signing into any application from a company-managed laptop. Failure to do so, where it leads to unauthorized leaking of company data to an unauthorized application, may lead to..."
(I call this a butt-covering policy. The onus is on the employee; they've been told.)
Definitely not foolproof, but it covers a wider set of use cases, like personal Gmail accounts etc.
Your mileage may vary.....
YourSydneyITsider@reddit
I work at a Big 4 and we have a ChatGPT Enterprise account. Anyone not using AI in 2026 is living under a rock.
You cannot stop users from using it. Either give them Copilot or ChatGPT Enterprise and set up monitoring. Block the rest of the AI solutions.
lastdeadmouse@reddit
We just rolled out Zscaler with prompt capture and policies in addition to company provided copilot.
qzjul@reddit
My company just pays to upgrade my personal ChatGPT to Pro to avoid this issue, since I work from home. Easier to just pay a few $ a month than do anything else.
sublimeprince32@reddit
WAT
grimacester@reddit
Make sure to offer internal tools that are as powerful as publicly available ones. Then people will just use those.
Jassokissa@reddit
You are going to have people use AI, so you better just get a company subscription and guidelines. Yes, it will cost but at least then you can try to have some control on your data. Otherwise people will use personal subscriptions.
GreyBeardEng@reddit
Management folded like a jellyfish, "please unblock the AI URL category".
It's not my job to take on risk, it's my job to communicate proper security posture and it's management's job to decide if that's what we're going to do. I'm sure some of our PI is floating around in several LLMs by now.
dllhell79@reddit
I have seen 2 promising solutions for the general problem. One is Palo Alto Prisma: a managed custom Chrome browser that lets you directly apply firewall policy and DLP rules to traffic. Since it sits inside the browser, it sees traffic before it's encrypted, so visibility is pretty much total. You can do things like prevent flagged sensitive data from being pasted into an AI engine, etc. A very nice solution... but with a very nice price tag.
The other is Prompt Security. They were purchased by SentinelOne last year and are being integrated into the Sentinel product. I believe it's going to do a similar type of blocking to the Prisma blocking, but just at the AV level instead of directly in the browser.
I'd definitely be open to additional suggestions myself. We are drafting a use policy now as well and looking for an enforcement tool.
theHonkiforium@reddit
How have you kept them from pasting it into Google for the last decade+?
snarlywino@reddit
It starts with an AI use policy. If you don’t have one of those, you should be blocking access to all AI tools until you do.
randomshazbot@reddit
hello market research man
AceVenturaIsMyHero@reddit
If you run CrowdStrike, we’re looking at this and it’s pretty solid so far: https://www.crowdstrike.com/en-us/platform/falcon-aidr-ai-detection-and-response/
cjcox4@reddit
This is going to become the "expected norm", not the exception.
In fact, if you're not running everything (emphasis) through some sort of model for summary, enhancement, design, etc. you might just lose your job.
Risk? I like to think of it like WiFi. Risk is very high, and we do not care at all.
fxs38@reddit (OP)
The norm has changed indeed. We need more visibility, there are good ideas being shared so far!
BoringLime@reddit
My opinion is that IT has a few avenues to block stuff on company devices, but we are hardly fully in control. At our office we block media streaming and other bandwidth hogs, but I see people using them all the time. Why? Because everyone has unlimited cell plans; why join the work network when you never have to? The same is true here. At a low level, you can use your phone to convert a picture to text and paste it into ChatGPT or something else on your phone/tablet. It might be hard to get the answer back, but you can always type it out. Sure, it's no normal Claude Code session, but the point is that it's not possible to completely close the doors on a personal LLM account: you can do the whole exchange out of band and out of sight of any normal IT tools. You are fighting the same fight universities are fighting to stop people from using AI to cheat. They appear to be struggling too, and they have the advantage of endpoint visibility. I know some of us work at defense contractor sites where they can restrict what a person can bring in, but that's hardly the norm.
AistoB@reddit
What’s exactly is the concern here? Precisely? That some of your API code would end up in training data? Do we know this happens? Or is it just a vibe that feels like it could be bad maybe?
SilentOperative@reddit
Exactly. As if this isn’t already happening in such a massive scale. Nobody including OpenAI cares about your shit code you copy and paste. The chances of it ever coming back to you or even the company finding out are slim to none.
No_Resolution_9252@reddit
It is on the organization for not only not having a policy but doing absolutely fuck all to manage it.
>Not looking for "block ChatGPT at the firewall" answers — we've been down that road, it just makes people use their phones or hotspots. I mean actually tracking and managing it.
Then you manage their computers. Pretty cool thing that MS centralized in the early 90s.
SirLoremIpsum@reddit
You can't.
You literally can't.
If someone has a phone you can't stop them from typing it in right?
You can only block, restrict and deal with it at a policy level "any usage will be subject to disciplinary action".
If someone can see the code or the data, they can type it in to chat gpt on their personal device.
EngineersOfAscension@reddit
Any company that is legitimately paranoid about this deserves to lose its best employees.
joedzekic@reddit
I have a user who requested paid Copilot and manager approved it. Now he feels comfortable to challenge IT in everything. He's even asked to be notified if there is an opening in the IT department even tho he has zero knowledge or background in IT...
wtathfulburrito@reddit
We are seeing stuff like this constantly. People with zero business having an opinion giving one because they watched a TikTok or entered a poorly worded prompt into ChatGPT. It’s just part of the rodeo now.
Entire_Train7307@reddit
¯\_(ツ)_/¯
TheG0AT0fAllTime@reddit
You dropped this: —
Get it, emdash, because OP used AI to write their damn post.
hellobeforecrypto@reddit
Ironic.
CookedNoods@reddit
This is not a new thing. It's the same as any policy that has disallowed sharing proprietary code. Someone could plug a flash drive in and walk away with everything for decades now. There's only so much you can do to stop it. You can only rely on employment policy whether or not people read it.
wrangler12@reddit
Speaking as a still very hands-on engineering executive, I'm amazed at most of the replies here. Give your developers a corporate-approved and paid-for AI solution if you don't want them using personal accounts. AI boosts productivity for developers even if they aren't using Claude Code or Codex. They used to have Stack Overflow to turn to for questions; now it's AI. Block their access and productivity will tank, and your best developers will leave if they are given an opportunity.
TuntheFish@reddit
How do you prevent employees from taking pictures of sensitive data with their phone?
This is not a technology problem.
dgillz@reddit
Please explain like I am 5 why this is inherently bad. Code is not data. You might need help with one little section of code which, without more context, would be meaningless to anyone looking at it.
ITquestionsAccount40@reddit
Feels like gate keeping. Like when calculators first came out
dgillz@reddit
Agreed
tabris-angelus@reddit
Policy issue not IT issue.
Monitor who visits chatgpt personal via firewall/web filtering rules.
Raise it with HR and/or their manager
wtjones@reddit
Everyone needs access to a safe LLM for them to put whatever they want into. This is the dumbest shit in the world. I work for a huge backward company with 12,000,000 stupid security policies meant to keep anyone from getting any work done and everyone has access to a safe LLM they can put anything into. This is like telling someone they can’t use a computer or a cellphone at work. These are the tools of modern work.
og-golfknar@reddit
I believe it's best to communicate why it doesn't work for your company first. Get their buy-in as you build a company-based AI which can be tracked.
But I assume you don't have the power to make anything like that happen, nor, it feels like, the ability to conceive next steps without direction. But I hope I'm wrong and you take my advice.
PrestigiousShift134@reddit
ChatGPT ships with MDM controls
m0ntanoid@reddit
don't worry, nobody needs your shitty code
tedious58@reddit
My company just deployed a tool called Acuvity that monitors all interactions with selected AI tools through domain devices and accounts.
Maybe you could call for a demo?
Ltforge@reddit
Unsure of your tech stack but we use Google Workspace/Chrome with Okta.
I block all public LLMs at the managed browser level. Once someone has been approved for AI use and added to our account, it moves them to an allow group in the managed browser. This Okta group also pushes the local app install for the associated app. This has worked fairly well for us. Obviously, I have zero control over what people do on their personal devices. That being said, for all of our development and anything involving anything remotely sensitive, we use Okta auth policies requiring the user to be logged in from a managed device.
alan14225@reddit
We have approved AI programs such as ChatGPT and Copilot that don't train on or share company information with the public model. Our DLP solution blocks company information from being shared on other AI programs. If they try to share company information on any other AI model it will be blocked.
Loop_Within_A_Loop@reddit
we have a ChatGPT Enterprise subscription that any user can install and use, and rely on OpenAI's data governance to protect our classified data
banning a program as easy to access as ChatGPT for security reasons is just asking for users to find a way to circumvent it, make your execs pay for it
qzjul@reddit
This is the way. Give the people the tools they want, but the safe way.
People keep leaving for half an hour for coffee because the nearest coffee shop is a fifteen minute walk away? Make coffee for them in the office...
Some "perks" cost the company less than the alternative.
Thirsty_Comment88@reddit
Just stop giving a fuck.
That's the only way AI will stop. We let it implode on itself.
mjbmitch@reddit
This is an AI-generated post!
mrhobbeys@reddit
DLP (data loss prevention) software/monitoring and company-issued devices. Deployment of an LLM you think you have control over, such as AWS or M365; it's more of a CYA and I don't actually trust them. Then, like I saw others say, local LLMs. We have built some very nice setups that support 10-50 people for under 10k. Policies. Training. There isn't a one-size-fits-all, just mitigation tactics.
elatllat@reddit
your internal API code is not a security leak because obscurity is not security. Only credentials are security.
Same-Platform-9793@reddit
It's a lost battle, and those organizations that are trying to impose some sort of regulation are just doing it for the sake of the process itself.
7yphon@reddit
Rather than blocking, what about a warning page? The user needs to click "I accept", which logs in the firewall and creates a day token that expires the next day. This will let you see how much it's being used, and the user will be reminded not to dump company data.
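For what it's worth, the day-token part is simple to sketch. A hypothetical Python sketch of just the expiry logic (names made up; in practice this would live in your proxy/firewall, not a script):

```python
import datetime

# Hypothetical "accept once per day" token logic. issue_token and
# token_valid are illustrative names, not from any real product.

def issue_token(user: str, now: datetime.datetime) -> dict:
    """User clicked 'I accept' on the warning page: mint a token
    that expires at midnight, start of the next day."""
    expiry = datetime.datetime.combine(
        now.date() + datetime.timedelta(days=1), datetime.time.min
    )
    return {"user": user, "issued": now, "expires": expiry}

def token_valid(token: dict, now: datetime.datetime) -> bool:
    # Valid strictly before midnight; next visit re-shows the warning.
    return now < token["expires"]
```

The firewall log line would be written at issue time, which gives you the per-user, per-day usage count OP is after.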
xixi2@reddit
So, what was the incident?
sweetrobna@reddit
Ai;dr
gruntbuggly@reddit
It would be nice if the AI vendors would provide vanity domains for business accounts so that we could block everything except that URL. But, like you said, if you block it, people will just use their phones or some other device.
The only thing you can realistically do is provide company AI subscriptions that give employees the tools they want to use. And immediately fire people who use personal accounts and get caught. It has to be a zero tolerance policy, and you might even lose a good person or two. But most people will choose the path of least resistance, which is to log into their company-provided AI tools, not their personal free accounts.
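If you do go the "block everything except the company tenant" route, the allowlist check itself is trivial; the hard part is exactly what I said, the vendors not giving you a distinct hostname. A sketch with placeholder hostnames (these are made up, real vendors mostly serve personal and business traffic from the same domain):

```python
from urllib.parse import urlparse

# Placeholder hostnames for illustration only. If vendors offered
# vanity domains per business account, this is all a proxy would need.
ALLOWED_HOSTS = {
    "chat.example-corp.openai.example",
    "copilot.example-corp.example",
}

def is_allowed(url: str) -> bool:
    """Return True only for requests to the company AI tenant."""
    host = urlparse(url).hostname or ""
    return host in ALLOWED_HOSTS
```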
YungMotch@reddit
Look into ZScaler. You can configure data loss prevention policies that will detect if anything sensitive is being sent over any AI/ML tools
AndyceeIT@reddit
It's a policy issue, don't try to solve it like a technical problem.
fin
FoxtrotOscarBravo@reddit
As they say, if you can't fight it, join it. My organization has a subscription to ChatGPT Enterprise. They highly encourage people to request a license and join the enterprise workspace.
CtrlAltDust@reddit
Block everything on enterprise endpoints and use HIDS to monitor data exfiltration.
Perform yearly training and RoB (Rules of Behavior) for the business. If someone slips, it's training or termination.
thesysadmn@reddit
You’re kinda a moron really.
ai_toolbox@reddit
Every company should have an acceptable use policy for AI as the foundation. Then go from there
lazydaymagician@reddit
The pearl clutching, the hand wringing. You probably aren't shipping anything world changing. It's not like someone can ask ChatGPT for recent pushes regarding whatever you guys make. If you reread your post, you already admit the real problems: stress and time pressure, likely a lot of other issues. You should probably think a little bit more about the culture that creates this dynamic, especially if you have any ability to better it. It's super common and nearly always much more toxic than revealed. The developer isn't the problem. It's the echo chamber of dickheads in here talking about firing the guy and xyz policy. The real problem is that poor schmuck cannot win no matter what. If he followed the rules and didn't ship, he would hear about that. Then you'd be up at the lectern talking about time management and these jerkoffs would be muttering about productivity. Every single day I regret working in this industry. It's this shit. Fuck shareholder value.
justlikeike57@reddit
The CEO/owner needs to determine what the company-wide policy is going to be, based on informed opinions on the risks and benefits. Once that’s established, it will be a straightforward IT implementation.
Acceptable_Mood_7590@reddit
Browser isolation- Zscaler
rockysworld@reddit
Also, if you can, link your ChatGPT Enterprise tenant to Zscaler and do SSL inspection. Allow access only to the ChatGPT workspace with your enterprise ChatGPT tenant while blocking the ChatGPT cloud app.
Put the allow for your ChatGPT tenant above the app block policy and it works wonders. You can assign an authorized ChatGPT security group to control exactly who can use it.
Acceptable_Mood_7590@reddit
You can block copy but allow paste, which is useful and won't cripple your staff.
Electronic-Jury-3579@reddit
Get the enterprise plan for one or more of the chat gpts and trust the encryption they offer. Disallow other uses.
apriliarider@reddit
1 - policy and enforcement
2 - communication and training for said policy
3 - ai proxy w/query DLP capabilities, and AI access policy limits.
4 - block all other AI
Sorry for being short. On Mobile.
BankingAnon@reddit
We use an app that strips sensitive data before it hits the LLM. Prior to implementing this, it was very limited on who even got access, we also have a CASB in place to cover accidental leakage / DLP stuff.
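Won't name the app, but the core idea is a redaction pass before the prompt leaves the network. A toy sketch; real products use trained classifiers and many more detectors, so these three regexes are purely illustrative:

```python
import re

# Illustrative detectors only: email addresses, AWS-style access key
# IDs, and US SSNs. A real DLP pass covers far more than this.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "AWS_KEY": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(prompt: str) -> str:
    """Replace anything sensitive-looking before the prompt hits the LLM."""
    for label, pat in PATTERNS.items():
        prompt = pat.sub(f"[REDACTED-{label}]", prompt)
    return prompt
```

The nice part versus blocking is that the request still goes through, so users don't route around you; they just lose the sensitive bits.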
pantherghast@reddit
Use Copilot, which is manageable via Intune and auditable via Microsoft Purview. Block all other AI.
idknemoar@reddit
Microsoft Purview DSPM for AI, blocking any sensitive data from being placed in the various ones. May not be perfect, but it’s included in our E5 licensing.
MReprogle@reddit
Literally the only thing I know of right now, but like you said, it’s still easy to get around. Best approach if you have the money would be a cloud gateway or internet proxy and block everything going to the domains of your choosing, or redirecting them automatically to an approved tool.
The bad part is that Purview takes quite a bit of time to properly roll out without breaking everything, and needs to be reviewed by more than just IT. It's not like one DLP expert is going to know everything about every bit of data to even know what to set classifications to, and auto-labeling data would likely end up wrecking life if they tried. I honestly don't think most orgs know how complex Purview can get.
And even with that all being said, you still need Defender for Cloud Apps or something similar to block at the endpoint and hope users aren’t smart enough to bypass it.
TransmuteSlug@reddit
We blocked the AI category entirely. We use DNSFilter and it’s actually done a great job from what we have tested. Different policies for different user groups, so anyone in my IT dept can use ChatGPT because we have a business license. Every other group, anything AI related is blocked (except for executive management, but that speaks for itself.)
SAL10000@reddit
We have a corporate mandate that Copilot is the go-to; if you want ChatGPT, you have to apply for an enterprise paid license with justification for why.
w4itey@reddit
Any company that has not engaged an AI platform for usage at their business is not only leaving efficiency on the table, but also allowing their company data to train AI. Employees will use it; as a business all you can do is provide the tools and mechanisms to protect your organization, and if you have your head in a hole you will fail.
IgotTHEginger@reddit
You could block the unapproved URLs with Cisco Umbrella and require that access to company resources be done on approved devices. This will only block access on those devices; one could email themselves data to circumvent it, but at least at that point you have some data to point fingers at.
noctrex@reddit
Time for some malicious compliance.
Clone the sites of some known LLM agents like GPT and Claude, and recreate them with an AI tool to host them locally, just basic functionality.
Redirect the DNS requests from your company's internal DNS to these internal sites.
Run an ollama instance with a very small model like qwen3:0.6b to serve them on these sites, and watch and laugh as everyone says that their GPT has gone stupid.
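If anyone actually wants to try this, the request translation is the easy part. A toy sketch assuming Ollama's documented /api/chat body; the serving, DNS spoofing, and the TLS cert warnings browsers will throw at you for a spoofed chatgpt.com are all left out:

```python
# Map an OpenAI-style /v1/chat/completions request body to the body
# Ollama's /api/chat expects. Field names follow both public APIs;
# everything else about the prank is left as an exercise.

def to_ollama(openai_body: dict, model: str = "qwen3:0.6b") -> dict:
    return {
        "model": model,  # ignore whatever model the client asked for
        "messages": [
            {"role": m["role"], "content": m["content"]}
            for m in openai_body.get("messages", [])
        ],
        "stream": False,
    }
```

Funny as it is, HTTPS makes this mostly a LAN party trick unless you also control the device trust store.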
CrashPan@reddit
Block it all... copilot unfortunately is included everywhere in a 365 Tenant, so restrict all and only allow copilot...
This way you retain some amount of control and address enough data sovereignty concerns that NIST is chill with it
(We renewed our CSF attestation and my bare minimum purview policies were enough for 'AI risk management')
Kinda bs though.
lanekosrm@reddit
There is no technical solution to a behavioral problem like you describe. The “solution” is to get with HR and come down like an avalanche with consequences for the violation. And then do it again the next time it happens. Because there will be a next time.
DarthJarJar242@reddit
The scary reality is that most companies aren't handling this at all and don't have the slightest concept of how to even start handling it.
The best solutions I have seen though is simply blocking the LLMs at the network level and only allowing company approved ones. For instance we allow copilot since it comes with our 365 licenses so that is allowed but you have to be logged in with your 365 account. We have forced redirects in place for copilot that take you to the corporate login page so you have to use your employee account to use it.
Both on VPN and internal network. Obviously this doesn't stop someone from accessing the content from their email on a non work device and then throwing it into an external LLM but I've yet to see a solution for that. At that point it's more of an Acceptable Use policy violation anyway.
Rude_Woodpecker3117@reddit
He put company code into a personal ChatGPT account. That code is now sitting on OpenAI’s servers. Even if they say they don’t train on it anymore for business users, personal/free accounts have fewer protections. Once it’s out, you can’t fully get it back — and it could help competitors or show up in someone else’s AI output later.
Zombie-ie-ie@reddit
Pass your audits and shake hands. Robot will be making this decision in a year anyways.
schnozberry@reddit
Cyberhaven
TheBigBeardedGeek@reddit
We actually just blocked it via our security software
FloiDW@reddit
Just don’t. I see your point about people avoiding it. But we’ve had a two-step approach.
First step: Every common AI tool gets a Proxy Warning the user has to accept prior to proceeding, informing them that sharing confidential company data might lead to trouble. After setting up our own internal GenAI Tools, like a Company-GPT, Rovo, CoPilot, we now really block publicly available tools on proxy end.
Users are not allowed to surf to 80/443, only to connect through our proxy farm and cannot circumvent this by hotspotting or similar. In combination with locking down, like making it really difficult to get files / text / data from their mobiles to the working device we do have a pretty good setup I think.
Recent_Carpenter8644@reddit
OP said people will use their phones. I guess it's a pain getting the data onto the phone, but unless you don't allow phones, how can you stop that?
FloiDW@reddit
Like make it really painful. Users don’t have sync options. Teams / OneDrive is somewhat containerized so that no data can leave company managed apps, but still can be used. Other than that users are not allowed to use usb sticks or such. Of course, if you like really really want to and make use of stuff like pastebin or such, you could trick. But there comes the maths: Make your internal tools so good, that the users don’t want to take the pain to use the external tools. For sure internal tools are worse than state of the art externals. But being allowed to clear text enter company data, or get connections to Jira / Whatever backend you use makes up for that.
Recent_Carpenter8644@reddit
Sounds good. Users just want results, so why go elsewhere if they can get their answers the approved way?
binaryhextechdude@reddit
If you block a site and then staff use their personal devices to upload company data to that blocked site that sounds like a HR issue to me.
Open-Ad2625@reddit
If your code was important, you would have a corporate plan or your teams are inefficient AND dangerous.
Lottabitch@reddit
Companies are developing “firewalls” for AI endpoints. Cloudflare, for example. Went to a conference; they showed off a product that would filter the user's inputs (regardless of which AI they chose) and catch it if they tried to send confidential information.
Will I ever use this? No. Should companies consider this rather than blocking the domain entirely? Some should yea.
mrfoxman@reddit
Look into hatz or the probably dozens of other services doing something similar. Where they supposedly don’t train on your data. But take that for what you will.
Thrashtah_Blastah@reddit
Cisco Umbrella was a good starting point for us. We have the agent installed on all company workstations with a global policy blocking all AI. Copilot is the only exception globally (personal site blocked still). Departments wanting access to unapproved AI sites must submit a request. After review it must have signoff from their management and IT management. Then it gets added to the appropriate allow policy to supercede blocks.
That put a dent in the problem for us. There is always going to be an issue with folks skirting around it with personal phones. That's a policy violation though and becomes an HR issue.
snottyz@reddit
Policy is that staff can only use enterprise accounts. When we blocked access to other tools (including ChatGPT), the revolt was IMMEDIATE and FIERCE, so it was allowed again. Soooooo not handling it well, overall.
Crowdh1985@reddit
You can enroll ChatGPT in Intune… and buy licences. Otherwise Copilot with licences can use ChatGPT in settings. I have both, and even if Copilot can work with GPT, GPT is way snappier than Copilot (cloud or local)…
With Intune/Purview you can manage how data is handled, especially in Office apps. No more copy/paste on personal devices, which will force them to use office devices. No more GPT; well, Intune will block it… no need to go into firewalls!
Intune and purview is the key!
audixe@reddit
Zscaler can apparently be used to detect this and block it. My company uses Zscaler and I can see AI sites being used but we have no formal policy blocking it.
If this is a concern to you it’s definitely worth looking in to.
Creative-Type9411@reddit
there is a certain point where you make something policy and if you discover they break policy, they get the axe
It is exhausting trying to child proof everything
kiddj1@reddit
We have a subscription for GitHub copilot and we access that through a custom URL
Every other Ai domain is blocked full stop
mixduptransistor@reddit
There will be a lot of technical answers and solutions that eventually hit the market but a stern “do this and you’re fired” policy will go a long way. At some point you can’t prevent someone from picking up a personal device and doing something they shouldn’t and that’s where HR takes over
fxs38@reddit (OP)
Good point!
sobeitharry@reddit
Yeah, first you need a policy in writing. Then, if leadership approves the budget you implement any of the multitude of potential technical controls and tools available.
If someone intentionally violates policy and/or controls, they are written up or fired depending on the level of the offense.
If leadership doesn't want to create a policy or implement controls, you document that in your risk register.
iamkilo@reddit
Our organization has rolled out https://prompt.security, sounds like exactly what you’re looking for.
imnotsurewhattoput@reddit
Unless you block all of ChatGPT there isn’t much you can do. Not to sound mean, but was this employee written up for this? He broke company policy per the handbook, and if nothing happened to the person who did it, why would they care?
fxs38@reddit (OP)
The answer in such a case isn’t straightforward. There was a policy breach, but also a lack of due care to ensure employees know the policy. In our case the employee was honest about it, so honest it made us realize we missed something on our side too
VernapatorCur@reddit
If the policy is in the employee handbook, then it's the employees responsibility to have read it before signing it saying they agree to the policy. Whatever consequences the handbook says are applied in these situations needs to actually be applied, and immediately. That employee just made that data public. Your network security has been compromised as surely as if they had published it on LinkedIn or Facebook. At most companies that's an immediate termination, and it should be.
ADifferentMachine@reddit
Found the corpo bootlicker.
Superb_Raccoon@reddit
Start your own business if you don't like rules.
60 million Americans can't be wrong.
ADifferentMachine@reddit
I don't fire employees for honest mistakes they own up to, over vague and unclear policies.
You can continue to enjoy the taste of leather, though.
VernapatorCur@reddit
The policy was described by OP as neither vague nor unclear. What OP said was that the employee hadn't actually read the employee handbook.
Superb_Raccoon@reddit
I see you gainfully employ strawmen.
Recent_Carpenter8644@reddit
Half the world could get sacked tomorrow by those criteria.
ishboo3002@reddit
Sign an enterprise agreement with the LLM provider of your choice. Use your SASE solution of choice to block the others including the personal tenant version of your provider.
nitetrain8601@reddit
To curb that, I think you just need to grant access to one of the ai tools and tell them, that’s all you can use. Ensure it’s managed by the org.
xftwitch@reddit
we are in the process of creating policy around this right now. We will allow 1 or 2 and not allow all the others. The real trick will be enforcement as we're beholden to a university for our network and thus cannot completely block all non approved LLMs. :-(
KennySuska@reddit
Blocked all ai sites based on classification. Then only allowed 365 Copilot. With enterprise licenses the data is kept in your tenant.
You can then use Purview to do auditing on it.
tfn105@reddit
This is exactly what we've done too. Completely achievable
OneSeaworthiness7768@reddit
This post feels a lot like research for someone who’s working on a tool to do exactly that.
But the answer is no. My firm gives employees enterprise plans to use so they don’t feel the need to upload company data to personal accounts to do their work. Not sure why a company would pay for something to give visibility into what employees are using on their own but not pay for the tools they actually need.
Superb_Raccoon@reddit
Fire the fucker if you had a clear policy in place.
If you didn't, make it clear it is a firing offense and fire the next fucker to abuse the trust of the company.
It's a people problem, not a tech problem... because you seem to have decided not to provide AI in house, or at least a private tenant.
thepeopleshero@reddit
And you fired him right?
CVMASheepdog@reddit
Sometimes things like this are a managerial issue vs a technical one. If the policy says no personal ChatGPT, what does it say happens if you violate it? There is your answer.
CharcoalGreyWolf@reddit
We use Fortinet firewalls to block the free versions; they direct to different clouds than paid. This means if you need AI, it only goes through our approved methods, which are paid and have privacy agreements.
Small_Editor_3693@reddit
Web proxy and block chatGPT sites. Or pay for an enterprise solution for your employees
hobovalentine@reddit
Your company needs to define policies regarding AI usage and if it’s allowed you should purchase corporate accounts so that data stays confidential
frosty3140@reddit
because we use DNS Filter as one of our layers of protection for our endpoints, my manager mandated a complete block on *.AI domain names, plus a handful of other well known AI domains as well. We're apparently heading down the Microsoft CoPilot and Purview path at present. Not my project, so I don't know how that's progressing.
blondasek1993@reddit
We are using BigFix with software management for that, as we already have it as our main UEM.
electrobento@reddit
Island browser blocking paste access into non-corporate AI interfaces (I’m sure there are other next-gen enterprise browsers out there) + firewall rules to block non-approved AI endpoints.
fxs38@reddit (OP)
Will look into that, thanks!
BlotchyBaboon@reddit
We've been digging into this quite a bit, and on the front end it's policy. You can basically divide that into 3 areas:
We separated note takers out because there's a slew of things related to them (consent, retention) that are different than the other areas.
I think the key thing right now is to approach this from a policy perspective and lay out guidelines on the use of those items. As far as being able to automatically stop someone from using it, the security tools just haven't caught up.
fxs38@reddit (OP)
Interesting share, thanks. Policy is key, and education of users too. Haven’t seen a solution that stops it; perhaps alerting would be interesting as a start, with reminders to users when they visit such sites
Bourbonneuxb@reddit
The way my company deals with this is by using Copilot, then blocking all other AI/gen AI sites, with policy exceptions for specific use cases where Copilot does not work for the requirement. Out of a few thousand employees there are 4 exceptions.