Does anyone else feel that companies want to implement AI so badly that they share sensitive customer information with it, with no privacy layer?
Posted by johnypita@reddit | ExperiencedDevs | 27 comments
I see this so much, and it's kind of scary to think about.
Our data as customers is being shared with these models, which are clearly using it.
Please tell me I'm not the only one feeling this.
Confident-Corner3987@reddit
You’re not alone here! Unclear policies, no approved tools, and people trying to solve work problems on their own. Sensitive data + no guardrails is where things get messy real fast.
juliarmg@reddit
You're not alone. The pressure to ship AI features has massively outpaced most teams' thinking about what data actually leaves their perimeter. A lot of places treat LLM APIs like an internal tool and forget the payload is going to a third party. Usually nothing gets redacted before it hits the model, it just goes.
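Even a thin redaction pass in front of the API call would be better than nothing. A minimal sketch, assuming Python and plain regexes (the patterns and placeholder labels here are illustrative, and real DLP tooling goes far beyond this):

```python
# Minimal sketch: scrub obvious PII patterns from a payload before it
# ever reaches a third-party LLM API. Patterns are illustrative, not
# exhaustive -- real redaction needs NER, not just regexes.
import re

PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace anything matching a known PII pattern with a labeled token."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Summarize this ticket from jane.doe@example.com, SSN 123-45-6789."
print(redact(prompt))
# Summarize this ticket from [EMAIL], SSN [SSN].
```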
Infamous_Horse@reddit
Nah, you're not alone on this. Companies are basically copy-pasting customer data into ChatGPT without thinking twice. We had to deploy LayerX just to see wtf our devs were actually feeding these models; turns out 40% of uploads had PII in them. Most orgs have zero visibility into AI usage, which is terrifying from a compliance standpoint.
ninetofivedev@reddit
Some of you have a complex where you think you're the only one who has thought of these things.
No, your company with 1000+ engineers and an entire org dedicated to security has thought of this. They probably have been talking about it for at least the last 12-18 months.
What is really happening is companies like Microsoft, AWS, Anthropic, OpenAI are agreeing to accept liability if/when a data breach happens.
TLDR: The decision makers went with a billable hours solution instead of a technical one.
Wonderful-Habit-139@reddit
It feels like they probably meant the average dev that they encounter, not companies as a whole.
originalchronoguy@reddit
Yep, 100% agree. I am working with teams on this, so it is interesting to read people's takes, as if it hasn't been thought out. Lawyers are heavily involved at every step of the way.
Trawling_@reddit
LayerX? You mean monitoring their use of external services and what content they are sending? Or more what is being ingested and processed by models hosted with a cloud provider or within your org's VPC?
Not familiar with LayerX.
engineered_academic@reddit
Data Loss Prevention teams are losing their minds over this. We are going to see huge data breaches at some point.
throwaway_0x90@reddit
There's a legal framework for doing so. While I do see a rush, I do not see major companies ignoring their obligations around PII.
iamabadliar_@reddit
My team at a well-known SaaS enterprise fed customer data directly to LLMs. It came from the director level. Company policy says not to do it, but teams are doing it and no one cares. I protested and said we shouldn't do it without legal approval, but it fell on deaf ears. The idea itself came from a director, btw.
One_Caterpillar3396@reddit
I do. The main issue I see is that employees are using their own AI tools on company data without any supervision.
throwaway_0x90@reddit
Then you should report it anonymously
One_Caterpillar3396@reddit
It's absolutely illegal and exposes the company. Fortunately it's not at my company, and I did report incidents I knew of outside my company.
Ok-Chair-7320@reddit
It's also very hard for customers to individually identify that their data was leaked and where the leak came from.
Adept_Carpet@reddit
They're all doing it. The penalties are toothless and essentially rely on self reporting.
Even if it's not company policy to share PII/PHI with AI companies, all their employees are copy-pasting everything into chatbots or running tools that can hoover up all the data lying around on their devices.
I was working with Claude to write code to process a well-known non-public dataset (documentation on the dataset is public), and it kept saying "if you upload the dataset itself I could do more of this automatically." I added at least an extra hour to my day by not uploading the data, doing things in the chat instead of having a tool installed locally, etc. I'm sure others aren't doing the same.
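To make that workaround concrete, here's a hypothetical sketch of the slower-but-safer path: compute a structural summary locally and paste only that into the chat, never the rows themselves. The filename and the pandas dependency are assumptions, not from the comment above:

```python
# Hypothetical sketch: describe a sensitive dataset's shape for an LLM
# without uploading a single row. The filename here is made up.
import pandas as pd

df = pd.read_csv("restricted_dataset.csv")  # never leaves the machine

# Share structure only: column names, dtypes, null counts -- no values.
summary = pd.DataFrame({
    "dtype": df.dtypes.astype(str),
    "non_null": df.notna().sum(),
})
print(f"{len(df)} rows, {len(df.columns)} columns")
print(summary.to_string())
```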
MentalMatricies@reddit
I’m leading a team that is working on healthcare and AI integration. Ironically, I think the pressure to adopt AI quickly has forced a massive backlog of compliance concerns to be addressed faster than they would have been otherwise. That doesn’t mean we sidestep compliance or any sort of review: if you’ve worked with a large corporate entity, or with compliance firms that require external statistical validation, you know exactly how rigorous they are, and how little flexibility they offer (as they should).
It’s one of the only noticeable benefits in my day-to-day work life that I have found in this gold rush.
ninetofivedev@reddit
Absolutely. They sign agreements with Anthropic or OpenAI or any middleman broker/partner like AWS or Microsoft.
And those companies basically have agreed to accept liability for any data breaches.
This is a known risk and most companies have just deemed it acceptable.
scoot2006@reddit
Isn’t that exactly what Meta, Google, Amazon, Apple, and every other large company have been doing for a while?
PomegranateBasic7388@reddit
Dude, it's a space race, it makes sense. If you're not making AI stuff, investors aren't going to put money into you.
beefz0r@reddit
Even before the age of AI, I saw plenty of colleagues using free, shady websites for things like base64 encoding or OCR.
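(Which is the absurd part: base64 never needs a website. A one-minute local version in Python, with a made-up payload:)

```python
# Base64 round-trip entirely on the local machine -- nothing is sent anywhere.
import base64

payload = b"customer-api-key-12345"  # example secret, made up
encoded = base64.b64encode(payload).decode("ascii")
assert base64.b64decode(encoded) == payload
print(encoded)
```

Most systems also ship a `base64` CLI that does the same thing.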
Trawling_@reddit
Yea, a lot of devs are not privacy focused at all.
imLissy@reddit
My company takes data privacy and security very seriously, to the point of absurdity sometimes, but better that than the other way. We are also very serious about protecting our own, very valuable data. It’s expensive though.
Uhgley@reddit
Yeah this whole thing is pretty gray, everyone jumps on the “AI AI” train but the data side is super unclear. Companies aren’t exactly transparent about what they’re using. Feels like it’s gonna blow up at some point.
Firm_Bit@reddit
The thing is, it's always relative. If you leak customer information but so does everyone else, then you're fine. If you don't leak but you fall behind, then you're not fine.
MonochromeDinosaur@reddit
Most companies do it via B2B agreements with CYA security clauses to shift liability, the same way they've always done it.
Mosk549@reddit
Yes, and I don’t mind. It’s not my plan to make my job harder because of some regulations. I am the “put the fries in the bag” guy.
Mosk549@reddit
Hahahah so you don’t?