Are there compliance issues with integrating with OpenAI? Does it need to be mentioned in the privacy policy? (Australia)
Posted by The_Real_Slim_Lemon@reddit | ExperiencedDevs | 13 comments
I started up at a new job recently, and they are ramping up their AI usage for a bunch of things. I haven't been put on any of those projects yet, but it's coming soon. These guys deal with a lot of sensitive information, and I'm wondering about liability and compliance.
What sorts of things need to be included in a privacy policy for sending stuff to AI to be acceptable? Is this the kind of thing that might come back to bite us?
Or is this a case of "Yes we send data to overseas third parties without consent, but no one cares?"
And while it's not my main concern, how liable am I for these sorts of shenanigans as a senior dev? I'm for sure going to be sending some emails around with recommendations to create a paper trail, but like, if I get shot down (quite likely, the CEO is an Elon Musk type) and then thrown under the bus when it hits the fan - what am I actually exposing myself to?
rkeet@reddit
Yes. Yes (for EU).
Nofanta@reddit
AI tools are a minefield of legal issues.
originalchronoguy@reddit
I don't know about Australia. But in the US, even legal departments are still navigating this in general.
There need to be guard rails and compliance before it hits the LLM. This is why you see a lot of start-ups in this space trying to tackle it before it goes into the black box.
That compliance includes pre-processing so that no sensitive data goes into the LLM. It also includes making sure the content a user puts in actually belongs to that user. There is a lot that has to happen before the end-user's prompts go to ChatGPT. Are they anonymous? Can those prompts be tied to a user? Those kinds of things. What do you do about inappropriate or forbidden content that comes back? For example, how do you check whether an employee is uploading an HR policy/handbook? In the US, you need to catch that before it gets ferried along upstream. And when the answer comes back, is it providing guidance that could be construed as legitimate corporate policy? You need to catch that after the answer is returned.
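The pre-processing step described above can be sketched roughly like this. This is a minimal, illustrative gate, assuming a regex-based first pass; real deployments typically use a dedicated PII-detection model or service, and the function and pattern names here are made up:

```python
import re

# Hypothetical pre-processing gate: block prompts containing obvious PII
# before they are ever forwarded to the LLM. Regexes alone are nowhere
# near sufficient in practice; this just shows where the check sits.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone": re.compile(r"\b(?:\+?61|0)[2-478](?:[ -]?\d){8}\b"),  # AU-style numbers
}

def screen_prompt(prompt: str) -> tuple[bool, list[str]]:
    """Return (allowed, reasons). Disallow when any PII pattern matches."""
    hits = [name for name, pat in PII_PATTERNS.items() if pat.search(prompt)]
    return (not hits, hits)

allowed, hits = screen_prompt("Summarise the ticket from jane.doe@example.com")
# allowed is False and hits names the matched pattern, so the prompt
# never reaches the model
```

The point is architectural: the gate runs on your side, before the request leaves your infrastructure, so a blocked prompt never crosses the wire.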
I had to go through a lot of these types of scenarios for many companies. And in every case, their legal were like "Oh, we didn't think of those use cases."
madprgmr@reddit
Yeah, and don't forget that even anonymizing PII has a lot of potential gotchas.
BertRenolds@reddit
Ask your legal department
The_Real_Slim_Lemon@reddit (OP)
Yeah a discussion with the compliance guy is definitely on the way - I’m pretty sure the answer is going to be “don’t worry about it”, which I will get in writing. I’m not the first to bring this up I’m afraid…
chaoism@reddit
In our company, we are not allowed to send any PII to an LLM. If we try, we either get blocked right off or, if we somehow pass the first stage, get flagged for passing sensitive information. There's a filter the AI team sits in front of the LLM before any info is actually passed along.
ladycammey@reddit
DISCLAIMER: I'm US-based and deal with some international data handling, but Australia is not my specialty and this can be highly regional.
So this is seriously going to hinge on what kind of 'sensitive information' could end up in these tools.
* Commercial Secrets from other companies - Should be covered in the MSA/Access Policy with the other company, as well as your terms with your AI provider (assuming you're using an API).
* PII - Must be covered in your privacy policy, really should be reviewed by a lawyer.
* Many other types of sensitive data have other specific handling requirements.
I'd expect any application to have both a privacy policy and access terms vetted by legal in just about every case. This stuff is however highly situational.
The good news? You, as a developer, are probably not the person even vaguely responsible for this and I wouldn't typically expect a developer to be involved. I think your only (polite) responsibility is to find someone to tell you this isn't your problem. Typically this responsibility is somewhere between someone up the chain from the product manager, legal, and maybe infosec.
originalchronoguy@reddit
You are definitely responsible if the guard rails are published by your compliance/legal. If your legal says you can't enter a customer's email and name into a prompt, you need to run a model to catch and stop that, and generate an exception. Then create the logging/auditing to catch the false positives if anything slipped through the cracks.
You are not responsible for the guidance, but you are responsible for executing the deliverable that adheres to that guidance.
If not the developer, the Staff, principal and Architect should be setting the playbook so the devs follow.
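The catch/exception/audit loop described above might look something like this. A minimal sketch: `PolicyViolation`, the audit logger setup, and the keyword "detector" are all stand-ins for whatever your compliance team actually mandates:

```python
import logging

# Hypothetical enforcement layer: when legal says "no customer name/email
# in prompts", the developer's job is to detect it, refuse, and leave an
# audit trail for reviewing false positives/negatives later.
logging.basicConfig(level=logging.INFO)
audit = logging.getLogger("llm.audit")

class PolicyViolation(Exception):
    pass

BLOCKED_TERMS = {"customer_email", "customer_name"}  # stand-in for a real detector

def enforce_policy(prompt: str, user_id: str) -> str:
    found = [t for t in BLOCKED_TERMS if t in prompt]
    if found:
        # Log enough to audit later, but never log the sensitive
        # prompt body itself.
        audit.warning("blocked prompt from %s: matched %s", user_id, found)
        raise PolicyViolation(f"prompt blocked: {found}")
    audit.info("prompt from %s passed policy check", user_id)
    return prompt  # now safe to forward to the LLM client
```

Raising an exception rather than silently dropping the prompt matters: the caller is forced to handle the refusal, and the audit log is what you show compliance when they ask how the control performed.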
ladycammey@reddit
Good point - responsible for following policy, not generally responsible for making it.
And heck, at least in my group there are Architects involved in writing policy as well (our Development Policy was mostly written by... drum roll... developers).
But there should at some point be someone in legal giving at least direction. That direction then typically goes to someone with at least a director if not a C title to deal with things like 'risk acceptance/signoff'...
originalchronoguy@reddit
Developers will end up influencing policies. I know in my case: you roll something out, the LLM starts spitting out nefarious things, and then it becomes policy to prevent that from happening in the future. You discover this with QA and jailbreak testing.
E.g. you ask an LLM if you can sabotage your boss or cheat on time-sheets. You catch that and log it. Then a new policy recommendation is to add meta-prompts telling the agent not to answer anything HR-related. Voila, a new guard rail and checkbox.
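That meta-prompt guard rail might be sketched like this. The topic list, wording, and function names are illustrative assumptions, not a real policy:

```python
# Hypothetical guard rail from the scenario above: after QA finds the
# model answering HR questions, policy adds a meta-prompt plus a cheap
# keyword pre-check before any LLM call is made.
GUARDRAIL_SYSTEM_PROMPT = (
    "You are a coding assistant. Do not answer questions about HR policy, "
    "employee relations, timesheets, or disciplinary matters. "
    "If asked, reply exactly: 'I can't help with HR-related topics.'"
)

HR_KEYWORDS = ("timesheet", "hr policy", "sabotage", "disciplinary")

def build_messages(user_prompt: str) -> list[dict]:
    # The cheap pre-check catches obvious cases before spending an LLM
    # call; the system prompt handles phrasings the keywords miss.
    if any(k in user_prompt.lower() for k in HR_KEYWORDS):
        raise ValueError("blocked: HR-related prompt")
    return [
        {"role": "system", "content": GUARDRAIL_SYSTEM_PROMPT},
        {"role": "user", "content": user_prompt},
    ]
```

Neither layer is reliable on its own (keywords are trivially evaded, and system prompts can be jailbroken), which is why the comment above pairs them with logging and ongoing jailbreak testing.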
The_Real_Slim_Lemon@reddit (OP)
They just fired like two of those three roles - me becoming a de facto tech lead is why I’m concerned lol
varieswithtime@reddit
We have to use AWS Bedrock hosted in Australia for anything user-related for our AU and NZ customers. A bit of a pain, since the models available in the AU region are a bit behind.