MCP Endpoint Security Controls - blatant avenue for data loss!
Posted by cananyonehelpmoi@reddit | sysadmin | 18 comments
So, we have recently started using Claude AI with a group of test users and have found a pretty glaring security hole in how the MCP connector works: it allows users unfettered access to their company M365 data from personal devices.
We have CA policies in place to grant access only from hybrid/compliant devices.
At the moment, our group of test users can sign in to their personal Claude account on their work laptops, then set up and authenticate their M365 connector.
They can then log in to their personal Claude account on a personal device and access the M365 connector/data from that device.
From what I can gather, the only way to prevent this happening is to block access to Claude personal accounts on the company devices.
Anyone got other ideas?
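For context on where the CA policy sits in this flow, here is a sketch of the kind of device-based Conditional Access policy described above, expressed as a Microsoft Graph request body (all IDs are placeholders, not a drop-in policy). The limitation the post identifies is that this check fires at sign-in/consent, not when the connector later replays its token:

```python
# Sketch of a device-based Conditional Access policy as a Microsoft
# Graph API request body. App/user IDs are placeholders; illustrative
# only, not a production policy.

def build_device_ca_policy(app_id: str) -> dict:
    """Require a compliant or hybrid-joined device for the given app."""
    return {
        "displayName": "Require compliant/hybrid device",
        "state": "enabled",
        "conditions": {
            "applications": {"includeApplications": [app_id]},
            "users": {"includeUsers": ["All"]},
        },
        "grantControls": {
            # OR: either Intune-compliant or hybrid Entra joined suffices
            "operator": "OR",
            "builtInControls": ["compliantDevice", "domainJoinedDevice"],
        },
    }

policy = build_device_ca_policy("00000000-0000-0000-0000-000000000000")
# POST this body to /identity/conditionalAccess/policies (Graph v1.0).
# Note: the check runs at the interactive sign-in/consent on the work
# laptop; once the connector holds a refresh token, later access from a
# personal device never re-evaluates this policy.
print(policy["grantControls"]["builtInControls"])
```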
InstructionDirect773@reddit
Yeah that's definitely concerning. The device access piece is tricky because you're essentially creating a bridge between personal devices and corporate data without the usual guardrails, and it sounds like your CA policies aren't catching it. Before you lock things down though, have you looked at what logging you have visibility into when these connections happen? Like are you seeing what data's actually being accessed or moved, or is it more of a black box situation right now?
mixduptransistor@reddit
Why would you be OK with them logging in to their personal claude account on their company device?
cananyonehelpmoi@reddit (OP)
We allow the use of personal accounts. Usage is logged and monitored with uploads blocked.
mixduptransistor@reddit
Then why are you confused or upset that they are expropriating company data through the personal accounts?
cananyonehelpmoi@reddit (OP)
From personal "devices".
mixduptransistor@reddit
But the personal account is the conduit they are able to exploit on their personal device. And I don't see the difference between personal AI account and personal device, both are ways to get data out of your control. I fail to see how it's OK to login to personal Claude and connect to work on a work device but you don't want that same flow to happen on a personal device
cananyonehelpmoi@reddit (OP)
We do not have monitoring or controls on the personal devices. We want to allow our users to access gen AI services, and we discourage sharing of sensitive information via popup reminders, DLP controls and blocking of uploads. This has already been in place for some time. And until now our users have been unable to connect any gen AI services to their MS data, which would require admin consent.
We now have company based Claude Teams accounts for a small group of users, and have enabled the MCP connector for Claude, hence introducing this new problem. I am surprised there is no way to control which Claude accounts can access the MCP service at MS, this seems like a fairly simple solution to implement.
Big-Floppy@reddit
From what I've read you capture the domain in the Claude team/admin console and force SSO. Then only the corporate accounts can be logged into when using your company domain. Personal Claude accounts can't use SSO. We are just now working through this with our test group, so I haven't tested this yet.
cananyonehelpmoi@reddit (OP)
Yeah, we have that already in place, that prevents free/personal accounts being created using your domain but the issue I am referring to is something else entirely.
mixduptransistor@reddit
The root of your problem is with Anthropic/Claude. It presents itself to Entra/365 as a single app, so there's no way to differentiate or tell what the username behind the Claude account is.
To solve your problem, Claude needs to present information to Entra/365 from the Claude user account, probably email address, and then you'd need to have a mechanism in Entra to only allow login through the Claude MCP connector app registration if the presented email address belongs to your domain
I honestly don't know if that is a capability Entra has. If not, it's certainly a hole.
On the other side, Anthropic could let you capture the domain on their side in terms of the 365 connector but I highly doubt they'd be motivated to add that
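To make the "single app" point concrete: the claims Entra sees when the connector requests a token identify the tenant, the consenting M365 user, and the one Claude app registration — nothing about which Claude account sits behind the request. An illustration with made-up claim values:

```python
# Illustrative (made-up) claims from an M365 token issued to the Claude
# connector app. Real tokens carry many more claims; the point is what
# is absent: any identifier for the Claude-side account.

claims = {
    "aud": "https://graph.microsoft.com",   # resource being accessed
    "appid": "claude-connector-app-id",     # one app ID for ALL Claude users
    "tid": "your-tenant-guid",              # your Entra tenant
    "upn": "alice@contoso.com",             # the M365 user who consented
    # Nothing here says whether the Claude account is Alice's corporate
    # Teams seat or her free personal account -- Entra cannot tell.
}

def claude_account_visible(token_claims: dict) -> bool:
    """Check whether any claim identifies the Claude-side account."""
    return any(k.startswith("claude_") for k in token_claims)

print(claude_account_visible(claims))  # -> False
```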
Frothyleet@reddit
If you permit people to connect personal accounts to M365, you are effectively jettisoning all of the data they have access to into the wild. This is expected behavior. It doesn't matter if they can use their Claude account from a personal device, your exposure is effectively the same.
If you care about this issue, you need to enforce rules permitting only company managed AI access to your tenant.
Josh_Fabsoft@reddit
Full disclosure: I work at FabSoft, which makes AI File Pro.
That's a really concerning security gap you've identified with the MCP connector. Bypassing conditional access policies is exactly the kind of vulnerability that keeps IT teams up at night.
The core issue you're describing - personal devices gaining unfettered access to corporate M365 data through AI connectors - is precisely why we built AI File Pro with on-premises deployment as a foundational feature. When you process documents with AI File Pro, everything happens locally within your network infrastructure. Your files never leave your premises, never get uploaded to cloud services, and never pass through third-party servers.
This means your existing conditional access policies remain fully intact and effective. There's no external connector that can be exploited from personal devices, no cloud service that could potentially be accessed outside your security perimeter.
We see this a lot with organizations in healthcare and finance who need AI document processing but can't risk the security exposure that comes with cloud-based AI services. The on-premises approach ensures complete data sovereignty while still giving you intelligent document processing and organization.
You can test this yourself with our free 1GB trial - it'll show you exactly how the local processing works and how it integrates with your existing security infrastructure without creating new attack vectors.
Would be curious to hear if you find other security gaps as you expand that pilot program.
datec@reddit
If you allow personal Claude accounts to be used on corporate devices to access corporate data this will happen.
If you want your users to be able to use AI on their corporate devices to access corporate data with the desired DLP protections in place then you need to provide them with corporate AI accounts and block their ability to use their personal AI accounts.
This is not a difficult concept. If you are worried about your corporate data then you should be blocking access to all personal accounts for everything including AI, Gmail, Google drive, dropbox, etc...
SquizzOC@reddit
I’m just beginning to dive into this as we are considering Claude CoWork for our users, but it would be an outright no to use a personal AI account on a corporate machine for us. Seems like an easy fix if we are giving our users access to the corporate setup.
After-Vacation-2146@reddit
Block direct access to Claude's API endpoints. Route all connections through an LLM gateway. Make the approved path the path of least resistance.
cananyonehelpmoi@reddit (OP)
Can you elaborate on how this helps in this situation?
After-Vacation-2146@reddit
You’d prevent people from being able to access Claude directly. The only access to Claude’s API endpoints would be from your LLM gateway. This lets you plug in an enterprise key, and all the features and access are tied to your business, not personal subscriptions. This also breaks the method of remote access.
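The core of the key swap a gateway performs can be sketched as below, assuming a hypothetical enterprise key and the standard Anthropic `x-api-key` header. A real gateway would also authenticate the caller, log requests, and enforce DLP/rate limits before forwarding:

```python
# Minimal sketch of the credential swap an LLM gateway performs.
# The enterprise key below is a placeholder stored server-side; users
# never see it, so access is tied to the business subscription.

ENTERPRISE_API_KEY = "sk-ant-enterprise-placeholder"

def rewrite_headers(client_headers: dict) -> dict:
    """Drop any caller-supplied credential and inject the company key."""
    fwd = {k: v for k, v in client_headers.items()
           if k.lower() not in ("x-api-key", "authorization")}
    fwd["x-api-key"] = ENTERPRISE_API_KEY  # business account, not personal
    fwd.setdefault("anthropic-version", "2023-06-01")
    return fwd

# A user trying to smuggle in a personal key gets it stripped:
out = rewrite_headers({"x-api-key": "sk-ant-personal-key",
                       "content-type": "application/json"})
print(out["x-api-key"])  # -> sk-ant-enterprise-placeholder
```

With egress to the Claude API blocked at the firewall, the gateway becomes the only working route, so personal subscriptions simply stop functioning on the corporate network.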
tensorfish@reddit
Your CA only protected the sign-in/consent moment on the managed device. Once the user's personal Claude account is holding the M365 connector token, you have turned device-bound access into delegated cloud access that follows them anywhere. So yes, the boring fix is usually blocking personal Claude accounts or unsanctioned connector consent on work devices unless you have a sanctioned enterprise path.
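The token lifecycle described above can be shown as a toy model: device posture is evaluated once, at grant time, and the resulting connector token then works from anywhere because the calls originate from Anthropic's infrastructure, not the user's device.

```python
# Toy model (not real SDK code) of the failure mode: Conditional Access
# checks the device at consent time, but the connector token lives with
# the Claude account and is replayed server-side afterwards.

class ConnectorGrant:
    def __init__(self, device_compliant: bool):
        if not device_compliant:
            raise PermissionError("CA blocks consent from this device")
        # Token is now stored with the Claude account, off the device.
        self.refresh_token = "opaque-delegated-token"

    def access_m365(self, from_device: str) -> str:
        # No device check here: the request comes from Anthropic's
        # servers, so the originating device is invisible to Entra.
        return f"M365 data fetched (requested via {from_device})"

grant = ConnectorGrant(device_compliant=True)  # once, on the work laptop
print(grant.access_m365("personal-laptop"))    # works anywhere thereafter
```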