Claude now connects with Microsoft 365. Would you allow it in your tenant?
Posted by KavyaJune@reddit | sysadmin | View on Reddit | 177 comments
Anthropic recently introduced a native connector between Claude and Microsoft 365, allowing users to analyze data from Outlook, SharePoint, OneDrive, and Teams.
From a security and access perspective, here’s what I’ve observed so far:
- It’s read-only (can’t send emails, create/edit files, etc.)
- Uses delegated permissions: it only sees what the signed-in user already has access to. If a user can’t access a SharePoint site, Claude can’t either
- On data handling: In lower-tier plans, training can be disabled manually. In enterprise plans, training is disabled by default
While Microsoft Copilot is ~$30/user/month, Claude is free to ~$20/user/month (basic to higher tiers)
So naturally, users are going to ask for it.
As an admin, would you allow this integration?
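To make the delegated-permissions point concrete, here's a toy sketch (resource names and scope sets are illustrative, not Anthropic's or Microsoft's actual API): a delegated app's effective access is the intersection of the read-only scopes it was granted and what the signed-in user can already reach.

```python
# Illustrative model of delegated permissions: the connector never widens access;
# it can only read the intersection of what it was granted and what the user has.

GRANTED_SCOPES = {"Mail.Read", "Files.Read.All", "Sites.Read.All"}  # read-only scopes

def effective_access(user_can_see: set, requested: set) -> set:
    """A delegated app sees only resources the signed-in user can already see."""
    return user_can_see & requested

alice = {"sp:finance", "od:alice-docs", "mail:alice"}
asked = {"sp:finance", "sp:hr-restricted", "od:alice-docs"}
print(sorted(effective_access(alice, asked)))  # 'sp:hr-restricted' is filtered out
```

In other words, the connector doesn't create new access paths; it surfaces whatever over-permissioning already exists.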
CMed67@reddit
I'm trying to figure out where this "connector" in the tenant is!?!?
KavyaJune@reddit (OP)
You need to connect it from your Claude account.
Search for “Microsoft 365” under connectors, click Connect, and then authenticate using your Microsoft 365 credentials. Finally, grant the required permissions to complete the setup.
You can follow this guide for step-by-step guidance: https://blog.admindroid.com/connect-claude-ai-to-microsoft-365-using-built-in-connectors/
sydtrakked@reddit
Stick with Copilot, it has Claude models built in now in addition to ChatGPT. Plus their Frontier push is adding Cowork-ish features.
We are running into an issue now since our execs were so gung-ho on diving into AI that they paid for a year contract of Enterprise ChatGPT before exploring all their options. But now all of a sudden they're starry-eyed and FOMO-ing about Claude. Now we have to tell them "too bad, you chose your lane".
We got approved for a few Copilot licenses to compare things and since we're a M365 shop, we're REALLY pushing for them to switch to it since it integrates with our entire environment.
ketorin23@reddit
Claude is a subprocessor for M365 Copilot already and falls under Microsoft EDP as of January… Just get Copilot, plus you get all the OpenAI models as well
Claude is also available in Copilot studio as a standalone model you could deploy
CPAtech@reddit
It falls under EDP but is still outside of the tenant boundary, so it does not have quite the same protections as Copilot.
Chownio@reddit
I'm in the behavioral healthcare field. Absolutely not.
Fuzzy_Paul@reddit
Nope: https://www.theregister.com/2026/04/06/anthropic_claude_code_dumber_lazier_amd_ai_director/
brauersuzuki@reddit
My boss has rolled it out without asking (yes, he is an admin...). A couple of weeks later, I broke it unknowingly by tweaking the geo-block policy. Turns out the Claude MS365 connector requires an OAuth flow through US servers. Somewhat creepy.
Unlikely_Tie1172@reddit
My analysis of the situation:
Using the Microsoft 365 Connector for Claude
The Microsoft 365 Connector for Claude allows Claude to access SharePoint and OneDrive files, emails, and Teams chats and meetings. The connector is now available to all users, including the free tier for Claude. Installing the connector creates two Entra ID enterprise apps (MCP server and client) and channels Graph requests to Microsoft 365 to fetch information for processing by Claude. Is that a good thing?
https://office365itpros.com/2026/04/08/microsoft-365-connector-for-claude/
ToastieCPU@reddit
Is there really any difference between giving Copilot this access and not Claude?
I understand that Microsoft's Copilot has data protection; Claude does too.
I would advocate that your org should invest in Claude Teams; that way you have better controls when it comes to your company's data.
CPAtech@reddit
Yes, while Anthropic is covered by Microsoft's EDP like Copilot, it does not have the same protections when it comes to tenant boundary.
imgettingnerdchills@reddit
eh
edaddyo@reddit
Seriously. Now I get to spend a few hours determining the security risks of this.
Thecrawsome@reddit
AI management is SaaS management and IT Security planning at the same time. This sucks.
TheFluffiestRedditor@reddit
all led by marketing.
Cyhawk@reddit
The biggest security risk is that the users actually had access to stuff they weren't supposed to in the first place. This will amplify any issues in your infrastructure.
Provided it really does only have read-only/user-only access.
Disgruntled_Smitty@reddit
Only a few hours, must be nice.
MadCybertist@reddit
You just ask it if it had any security risks. It’s pretty simple.
Centimane@reddit
I guess the plus side to this blind push for AI - you probably don't need to. They'll probably just ask you to skip the review or push forward regardless of the review.
That's the one silver lining to this AI hype - c suite want it so bad they don't want due diligence. Whatever man it's not my company on the line, just send that in writing.
toxcicity@reddit
I feel this and I'm 27
russlar@reddit
Just wait until you hit 40, you'll wish you had gone into botany
flatulentpigeon@reddit
Can confirm, getting close to 50 and I wish I had gone into the trades as either an electrician or a plumber.
dcv5@reddit
You joke, but since the push to cloud I have a near empty server room with 6 racks of climate control, free power, out of sight. Maybe a few hydroponic towers could work as a work hobby.
russlar@reddit
could even allocate 10.4.20.0/24 to the new farm and make it look legit
wrincewind@reddit
You'll wish you'd gone to Botany Bay...!
Chemical-Example-783@reddit
I'm not sure I follow the point you're making? Seems your core suggestion is that the Anthropic model is better than Copilot, and now that it has access to M365 apps, Copilot is redundant.
The problem with this, though, is that for months now M365 Copilot users have been able to use the Anthropic model. MS has already given Copilot users the ability to choose between the OpenAI and Anthropic models on a prompt-by-prompt basis without breaching any of their internal security or compliance controls.
Nathanielsan@reddit
If your users are asking for Claude then it's probably this, or have them shadow-use Claude some other way and expose company data anyway. At least this way you have some governance over your data.
seawaxc@reddit
turn off sharepoint access in the api permissions
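For context on what that revocation actually touches (a hedged sketch, not a full script): in Entra ID, a delegated grant's scopes live as a single space-separated string on the `oauth2PermissionGrant` object, so removing SharePoint access amounts to PATCHing that grant with `Sites.Read.All` stripped out. The string edit itself looks like this:

```python
# Hedged sketch: edit a delegated grant's space-separated scope string.
# The real change is a PATCH to /oauth2PermissionGrants/{id} in Microsoft Graph;
# only the local string manipulation is shown here.

def drop_scope(scope: str, unwanted: str) -> str:
    """Return the scope string with one permission removed."""
    return " ".join(s for s in scope.split() if s != unwanted)

current = "User.Read Mail.Read Files.Read.All Sites.Read.All"
print(drop_scope(current, "Sites.Read.All"))  # User.Read Mail.Read Files.Read.All
```

The same edit can be done in the Entra portal under the enterprise app's permissions blade.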
Rouxls__Kaard@reddit
Hell yeah bring it on babbbyyyyy
RainStormLou@reddit
from a security standpoint, no what the fuck are you guys even thinking about? why is everybody exfiltrating data to companies that we cannot trust under any circumstances? did you guys just get out of school like 2 years ago? how is this shit even happening
in practice they sent the connector request in last week along with something else they want to implement so it's part of this week's plans to set up.
data security hasn't been about keeping your data secure in probably 15 years now. it's just about who's taking liability when it eventually gets compromised.
unprovoked33@reddit
This thread, like many others recently, is full of insane people. Slap the “AI” label on any piece of garbage software, and the carefully crafted security protocols we’ve built over the past decades become deprioritized immediately.
I don’t care if higher ups are demanding it. Part of our job is to protect these people from themselves. Data leaks that cost the company millions will come down on our heads, no matter their demands.
And I don’t want to hear about how mind boggling and productive AI is. Straight up, it isn’t. If it was, we would have software coming out of our ears right now. It’s had 3+ years to cook, with practically no regulation. Where are the results? For instance, large game studios take 5-10 years to develop games, if AI speeds up production even 2x, shouldn’t we be seeing a spike in AAA game releases? And why aren’t feature release schedules from SaaS companies tightening? These timetables haven’t budged an inch. Wasn’t AI supposed to make these companies more agile?
iamkilo@reddit
If Anthropic has a data leak, and you have written demands from executives to enable those features, why would it come down on your head? That doesn't make any sense. We are talking about connecting it to Office 365, what if that has a data breach? Why would you at all trust Microsoft, who is employing Anthropic heavily to write their software?
Your final paragraph is just littered with "I've never seen someone use this technology effectively." I work for a software company, I assure you, we are heavily employing AI and it has transformed our business. In the hands of an average developer, it's a small help, but in the hands of our most forward thinkers, and our best devs, it has accelerated timetables dramatically and we've put out AI features in our platform WRITTEN by AI that are attracting customers and generating revenue. You can't just say "why aren't feature release schedules from SaaS companies tightening", there's no way you went out there and looked at the release schedule of every SaaS product (most of which aren't published online) and determined that AI wasn't working for them. Just because you haven't personally seen it, doesn't mean it isn't happening. Blanket statements like that are terrible arguments.
Like it or not, it IS the future, I'd recommend learning the tools to the best of your ability, and it sounds like you have a way to go. They're getting more powerful every day. If you're the naysayer at your organization, you will be replaced or become a dinosaur (along with the rest of your business).
unprovoked33@reddit
Have you had your fingers in your ears for the past few decades? People with money and power offload the responsibility for their terrible decisions all the time. Sure, getting their request in writing is probably going to save you when they're looking for heads to chop, but given how slippery AI companies have been with regards to responsibility, I wouldn't count on it. When data breaches occur, heads have to roll for the stakeholders, and the people with money and power aren't likely to place themselves on the chopping block.
My man, the paragraph you just wrote was littered with vague and blanket statements. Here, I'll go the other way around. Show me one, just one product release that shipped early and was credited to AI. Not a vague blog talking about a theoretical "We improved our speed 3x," but an actual, planned product release timetable or roadmap that was pushed early thanks to the tremendous benefits of AI. I agree that not every SaaS company publicly posts their feature releases, but plenty do, and if AI is truly the hallmark of efficiency that people and companies claim it is, these timetables should be absolutely smashed. This should happen so often that we would be unable not to see the difference.
Why should the burden of proof be on me to look at the release schedule of every SaaS product to find out if AI is working? You're making the claim that AI is working, you should show me more than vague assurances of improvement. You say your software company is so transformed, great. So your company is pushing their feature releases faster? Are your customers seeing the benefit? Is that public? If you're doing so much more, so much faster for your clients, surely you're publishing that and marketing it, right? Show me.
You won't. It's all theoretical. I'm calling bullshit on the whole thing. The pipeline isn't speeding up, and it shouldn't be speeding up. Good decisions take time. Timelines are useful for avoiding mistakes, which AI makes all the time. All this speed does nothing for anyone.
AI is designed to be simple to use. If I don't pick it up today, I can pick it up tomorrow. The rush is artificial. The first person there is barely more skilled than the person who spent 2 hours toying around with it, because that's how the tooling is designed. And it's not as if I'm completely avoiding AI, it's part of my job to keep up with trends, read blog posts, security newsletters, etc. I'll be fine. I just don't buy the BS. I've seen the hype train come and go for more technologies than this one. It'll eventually settle down and level out just like all the others.
iamkilo@reddit
Cool - so you have no idea what you're talking about. Not going to waste any more time than I already have on you.
unprovoked33@reddit
Exactly the type of response I figured I'd get here. No substance, no actual discussion. Sounds just like AI.
iamkilo@reddit
We could just spend all day going back and forth, quoting each other and arguing. I don't know you, I don't need to convince you of my argument. I told you how AI has improved feature releases for my organization and sales due to said features. You called me a liar (and called my personal anecdote blanket statements, which doesn't make sense).
Crawling the internet to look for specific examples, or posting my internal organizational information on the internet is just not worth the hassle.
Your post history is clearly a war against this technology. I'm obviously experiencing something entirely different. We don't have anything to learn from each other here. I don't feel it would be great discussion in this setting. If you live in the Austin, TX area and want to sit down and have a beer and discuss any of this, I'd be open to it, just shoot me a DM. Happy to bring my notebook and show you what my company has done with AI.
Was not my intention for this to become personal attack slinging, and I feel we were both poor citizens in that regard, my apologies but I'm politely moving on.
unprovoked33@reddit
Fair enough, I appreciate the olive branch. For what it's worth, I come from an argumentative culture where tempers run hot but are forgotten quickly. No personal attack was intended on my end, I apologize for it coming off that way. I especially don't believe that you are lying about the successes you've seen; rather, I've noticed that the discussions around AI have been staged by many in such a way that successes are attributed to AI, failures attributed to humans. We may disagree on which belongs to which, is what it likely boils down to. Either way, no personal insult was intended.
You're correct that my post history has plenty of anti-AI sentiment. I deeply feel as though this technology is causing great harm to the industry as a whole, as well as negatively impacting other industries. I'm fine being the curmudgeon in the room. I don't believe that everything AI is bad and wrong, I just think we're all putting an awful lot of blind trust in gigantic tech companies, and a lot of people are forgetting lessons that they learned decades ago. For what it's worth, my more technophobic friends see the other side when they spout their conspiracy theories and refuse to address the realities of change.
Srirachachacha@reddit
This thread warms my shriveled little heart
thortgot@reddit
Largely, the AI efficiency push has been a cost-cutting move rather than an agility boost.
I've seen companies make real, substantive gains using properly architected AI tooling with Copilot Studio custom agents.
The same groups that were already using automation and workflows see the most gains.
SaaS platform upscaling in my area has been largely at the MCP layer over the past 2 years. It's a huge lift, but once you can get agent-to-agent communication working well it's a game changer.
unprovoked33@reddit
If AI was able to provide an agility boost, why wouldn't those companies go for the agility boost? Do you seriously think that companies are so allergic to growth that they'd prefer to cut costs when growth is available? No chance. It's because AI doesn't actually provide any of that, AI is just the convenient scapegoat. It sounds a whole lot better to shareholders to say, "Man AI is so great, we were able to replace all these workers," than it sounds to say, "We overhired and wasted all this money due to my bad decisions and inefficiency."
This is what I'm calling BS on. I'm a consumer of tons of SaaS products. I see the patches come through, I see their agility. Nothing has changed over the past 3 years. Product improvement schedules aren't speeding up. I have no evidence of this supposed change. I remain unimpressed, and when I call out their BS, people come out of the woodwork to praise theoretical change and things "getting better and more powerful every day."
thortgot@reddit
You presume growth is available. Salesforce et al. have largely maximized the spend in their industry. They posture over market share, which is why it largely leads to a relative pause in investment across the board.
AI isn't so great that it replaces all workers, it replaces crap work. The same way every iterative move in tooling has.
The changes I've seen and led are legitimate direct savings. It's paid for my team 3 times over in the past 2 years. ERP work, CRM work, business process work all do the same thing. AI is just the platform we use to do that work today.
Smart_Dumb@reddit
What are those? :(
unprovoked33@reddit
I get that you're being cheeky, but they're there. Code reviews, security reviews, change controls, compliance audits, they're all being bypassed to make way for AI.
RabidBlackSquirrel@reddit
Not really, though. Our job is to make sure their bad decisions are duly informed. Document the risks and their likelihood, costs, etc., and propose controls where available (or document when none exist or aren't practical within the ask). It's their job to weigh all that against the (alleged) business gain.
Just because we can all see through the productivity argument and think it's all bullshit and see they're brainwashed by the marketing hype doesn't suddenly put us in the business' driver's seat. Document your concerns, use as strong language as you have to to get the points across. But ultimately, it's not our decision, nor should it be.
unprovoked33@reddit
You aren't entirely wrong, but part of our job is to clearly articulate our concerns and cut through the marketing hype so execs can make informed decisions. The marketing departments of these AI companies have found out how to speak the language our execs want to hear, so we have to speak the same language in order to counter the push. Sure, we could wash our hands of the whole thing and say "not my decision", but that leaves us open to execs claiming we didn't educate them enough.
Darkace911@reddit
It's coming directly from the top of the company and security concerns are being vetoed by upper management. It's go along with the flow or find somewhere else to work. This just kicked into high gear at the end of 2025.
unprovoked33@reddit
I feel bad for IT departments and Security teams where things are getting pushed like this. I understand that you can only push back so much and still get vetoed. But understand that there are plenty of companies out here where that doesn't happen. Each day another security failure or pipeline mishap occurs due to AI, and I have more ammunition to use against these pushes.
Nu-Hir@reddit
AAA Games are largely rushed and pushed out before they're completely polished. We'd be likely to see an increase of higher quality AAA games on release, not necessarily more games on the whole.
unprovoked33@reddit
And yet, AAA gaming is having the worst few years ever. Indie gaming is crushing it, more and more every year, while big studios shut down. Sure, that isn't AI's fault, but if AI is so great, why aren't they improving?
sabre31@reddit
We've had it connected to our Office 365 for a while now.
KavyaJune@reddit (OP)
Do you allow access to all users or restrict access to specific groups?
sabre31@reddit
We allow all users, and we connected Gemini Enterprise also. Everybody on here posting about being afraid is why many enterprise customers are so far behind when it comes to AI. We are a large global company, and we have companies like Google and xAI throwing their best engineers at us because we move quick and are so far ahead of everybody else. These companies want to do this stuff but find it difficult to do much with enterprise companies because of slowness and fear of adapting.
eastamerica@reddit
Bingo.
I_SNORT_KITTENS@reddit
Yes, but with caveats. I would require some sort of a tool that can detect what AI apps, plug-ins, etc. are currently installed in the environment. More important than that is to have a way to see what settings are applied to these AI tools to ensure that they are hardened correctly. I know Remedio can do this (and can also bulk apply restrictions on AI tools or uninstall them entirely). I'm not a shill for this company, and there may be other tools that do the same.
Ill-Detective-7454@reddit
Never. Azure is just too fragile.
acidburn1672@reddit
Nope, would not
chesser45@reddit
Whatever the business and IT leadership agree on. At the end of the day it’s not up to us. Can recommend but cannot force.
M365 shop by default, but I could see a benefit to having options with how MS has already clawed back features
KavyaJune@reddit (OP)
Makes sense, it’s a business call.
"M365 shop by default" -- are you already using Copilot?
chesser45@reddit
Mmmmm kinda… getting bogged down in data classification. Rather than making it an actual project, it's run off the side of a team's responsibility, and thus it's limited only to the upper echelons who can be trusted to see sensitive info like payroll documents or other T1 data.
Once that’s done it’s supposed to go wider but we’ve been at it for almost a year now so who knows..
RikiWardOG@reddit
God, wish we were taking this approach. Instead we're paying out the ass for people to replicate other people's work in other departments because nobody knows how to fucking communicate.
chesser45@reddit
I dunno, paying would be less painful than DIY. Plus we are going ahead without huge business investment so I feel we are doomed to have no business familiarity with the planned data classification process.
19610taw3@reddit
Copilot seems like it's only good for searching emails.
It's actually really good at that. Outside of searching for email ... not very useful.
FullExchange7233@reddit
I mean, I've never used PowerBI and I'm making it teach me how to pull CrowdStrike data into a visual exec summary. Granted, it's definitely just aggregating all the publicly visible posts on the same topic, and I have to watch for hallucinations, and I have to tell it "Bad AI, no water" when it makes mistakes.
19610taw3@reddit
That's all AI when asking it for help doing things. I hate that I have to do it now and then, but it does help me a ton.
It's never 100% right, but a few times now I have been really stuck on an issue, 90% of the way there, and I just couldn't get to the answer. It at least pointed me in the right direction and got me where I needed to be.
Tymanthius@reddit
I find AI is great for those things I know how to do, but I don't recall the exact syntax/structure.
PowerShell commands I seldom use, for instance. It spits out the command quicker than I can google it, and if there's a flag I don't recognize I can figure that out fast.
The other place I've started using it is as GM notes for sandbox TTRPGs. I'm terrible at remembering the 9000 threads that are going on, but I dump the PDFs into an AI, along with the players' session notes, then tell it 'players are headed here next, what is likely to be learned/encountered' and it reminds me of that thing on page 205 even though the place we are going is on page 157. :)
Jaereth@reddit
I find it's ok asking a question about a Microsoft product. Other products I use ChatGPT.
Frothyleet@reddit
I absolutely avoid asking Copilot for MS product advice; in my experience it's no better than any other LLM, but it makes me unreasonably angry when Copilot hallucinates shit about MS products.
Like, that's the one thing you'd expect them to polish! Their product offerings are psychotic!
Frothyleet@reddit
It's basically what Outlook search should just have been in the first place, if I may besmirch Microsoft's legacy of being crappy at searching.
That said, the fact that it can search across everything at once is super useful, it's very helpful when you can't remember whether "thing X" was brought up in email or teams or elsewhere in the MS estate.
dfsna@reddit
What do you mean by how MS has clawed back features? I haven't been following the Copilot space in a couple months.
chesser45@reddit
Copilot free was integrated with limited functionality into M365 apps for frontline workers. This has been disabled.
Mr_ToDo@reddit
I don't disagree
What I'd like to see in Microsoft products (well, any product really) is a good framework for AI access. That shit is going to overstep at some point, and I'd like to have that locked down before then (and preferably without a year-long project just to get sane settings). I think Windows could really hit it out of the park if, instead of trying to write a framework that's only good for them, they made a flexible one that anyone could use. Best of both worlds. You wouldn't get Copilot by default, and if people want to use AI they don't have to worry as much about it going out of control.
discosoc@reddit
I don’t care. What’s important is that people understand they are responsible for their data, no matter what tool they use. It’s a policy issue.
wtjones@reddit
This is like asking if you’re going to have WiFi in the office in 2003. If you don’t have it this year, if your company is still around next year, you’ll have it then.
MDParagon@reddit
Lol no
DoctorSlipalot@reddit
Absolutely not.
steveoderocker@reddit
Why, absolutely not? Can you elaborate on the risks you foresee?
dcampthechamp@reddit
Claude is waaaaaay too aggressive to feel comfortable with being in our tenant. When Claude can't do something outright, it will try to find holes and ways around whatever the deterrent is.
An example: a user was trying to pull data from our production platform website using Claude. He didn't have enough backend permissions to do it the right way, so Claude completed the task by scraping our website and ultimately DDoSing us, knocking the site offline for a little while.
This site has DDoS protections and plenty of AWS load balancers to help with traffic, but that didn't stop Claude from killing it.
alluran@reddit
Sounds like the user needs a raise if they're able to demonstrate defects in your infrastructure so easily.
You think he's the only one that's going to be using AI to try to get data?
awful_at_internet@reddit
There are right ways and wrong ways to alert an org to a vulnerability. Breaking prod is firmly a wrong way.
alluran@reddit
It's not the user's fault if prod is set up so shit that a commonly used piece of software can take it down accidentally.
If anything, the IT dept is the one that needs performance reviews, not the end user :P
awful_at_internet@reddit
"Help am just a user" and "i found a vulnerability for you, give me a raise" are mutually exclusive. Pick a lane.
alluran@reddit
Only if your company sucks - lots of companies have bug bounty programs.
Though in this case, nothing in the original story says that the user was even aware that they DDoSed the server - only that IT discovered that he had. So again - "halp am user"
steveoderocker@reddit
So, firstly, you’re telling me, a single user using Claude was able to DDOS your production website, running in AWS with DDOS protection + ALB? I don’t see that being the case to be honest with you.
Secondly, this post is talking about the specific, first class integration built specifically to call the graph APIs to pull data as the logged in user, with the ability to turn off training on your data. Does your position change or differ, if we remove the issue of Claude somehow ddosing your own website?
dcampthechamp@reddit
Long story short, the user did a batch API call with Claude: ~100k calls at once.
While the integration does seem to limit the scope that Claude can work through, I would not feel comfortable. While our RBAC is good, it's not perfect, and I would not be surprised if Claude could find workarounds for things that the user it's impersonating should not be able to do.
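As a back-of-the-envelope on why a big batch plus blind retries hammers a site (illustrative numbers only, not Claude's actual retry behavior):

```python
# Rough retry-amplification arithmetic: each failed call gets retried, so
# against an already-overloaded backend the total request count balloons
# instead of shrinking. Purely a sketch with assumed numbers.

def total_requests(batch: int, retries: int, failure_rate: float) -> int:
    total, in_flight = 0, batch
    for _ in range(retries + 1):
        total += in_flight                    # this wave of requests hits the site
        in_flight = int(in_flight * failure_rate)  # only the failures are retried
    return total

# 100k batched calls at a 90% failure rate with 3 blind retries:
print(total_requests(100_000, 3, 0.9))  # 343900 requests hit the site
```

Capped, jittered backoff (or honoring Retry-After) is what keeps that curve from stacking waves on top of each other.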
steveoderocker@reddit
How, how can that be? It’s literally using the logged in users credentials to hit the graph api. Claude can only see what the user can see - that’s precisely how delegated permissions work on the Entra app. There’s no “Claude suddenly impersonated a different user” or magically gained access to things the user doesn’t have access to - this is just not how delegated apps work.
brokerceej@reddit
Because that person is either an OpenAI shill bot or an idiot.
Iconically_Lost@reddit
The question is, was it the retries, or did it intentionally try to DDoS you in an attempt to make you leak?
dcampthechamp@reddit
it was the retries. Unfortunately I work at a startup that is still in the "go fast and break things" phase. So we get a bit of shadow IT and things that aren't configured with security in mind.
KavyaJune@reddit (OP)
That’s fair, I expected a lot of “no” responses here.
Are you using Copilot, or holding off on that as well?
Inanesysadmin@reddit
Copilot is junk and Microsoft has had data breaches from said service
Darkace911@reddit
Can confirm Copilot is junk; we had a trial and the users didn't like it, so we are moving to Claude. The email reading is the next piece that I am expecting.
_DoogieLion@reddit
What data breaches?
Inanesysadmin@reddit
https://www.bbc.com/news/articles/c8jxevd8mdyo
_DoogieLion@reddit
Why do you mistakenly think this was a data breach?
From your article: “did not provide anyone access to information they weren't already authorised to see”
Inanesysadmin@reddit
If it's accessing data it's not supposed to, it's a breach. Not that information was exposed, but it's doing things it's not supposed to. And depending on your type of work, it's a reportable event to certain agencies.
_DoogieLion@reddit
Where would this be reportable and for what reason?
Inanesysadmin@reddit
If Copilot inadvertently accessed any PII, CUI, or other information it's not allowed to, it could easily have violated many states' data breach laws, HIPAA, etc. Those events are by nature reportable, depending on state and local laws.
thortgot@reddit
Not remotely. Do you have a CUI breach if your data is in OneDrive?
It being able to access data it shouldn't have is an issue which was fixed. However it was only accessible to users with authorized graph permissions to the file. Explicitly not extending permissions.
_DoogieLion@reddit
Copilot runs in user context, so this wouldn’t apply here. Why do you think it would?
Inanesysadmin@reddit
And to add to Microsoft's generally shit posture around security with Azure:
https://www.propublica.org/article/microsoft-cloud-fedramp-cybersecurity-government
karokajoka@reddit
Is this not a fucking ad?
popegonzo@reddit
Affirmative, valued human user. This promotional communication was, with 99.97% confidence, not generated by artificial intelligence technologies. It was crafted with authentic organic human intentionality for your optimal engagement experience.
Thank you for your cooperation, fellow human. Please buy Claude and not Copilot.
digitaltransmutation@reddit
its formatted exactly like all the other stealth ads in this subreddit.
At this point I'd like a filter that blocks anyone who posts 3 bullet points followed by a question.
HateSucksen@reddit
Gotta ask Jimmy for that.
Sharp_Animal_2708@reddit
delegated access means every user who consents is handing over visibility into whatever they can see in sharepoint and outlook. if you don't already have solid data classification and DLP policies, this is basically giving an AI full read access to your least controlled data.
Ziegelphilie@reddit
Statement, bulletpoint list, opinion, question
Fuck off bot
tapwater86@reddit
They can’t even keep their own proprietary data safe, what makes you think they’ll keep yours safe after they’re done reading it?
OkEmployment4437@reddit
Read-only delegated still means every user who consents is handing over access to whatever they can see, so your data classification better already be solid or you're just hoping nobody has a mailbox full of stuff that shouldn't leave the tenant. The real headache is that even if you block the connector in Entra, people will just paste the same content into Claude's browser chat anyway, so you need DLP policies that actually cover that path too. I'd probably do a scoped pilot with a handful of users whose mailboxes you've already audited, and only if you can restrict consent grants to admin-approved apps. Otherwise you're just adding another exfil vector with extra steps.
ConnectionAmazing110@reddit
How do you handle the users who take a photo of data and use the native OCR to paste into the version of the app on their phone?
At some point the focus needs to be on training around confidential data, too.
Alaknar@reddit
In a perfect world? Just shoot them on the spot.
steveatari@reddit
Straight to jail.
magataga@reddit
More realistically, "How are you handling employees who violate security policies and willfully share confidential data?" You fire them.
W1ULH@reddit
Meat-space issues require meat-space solutions.
If you treat your workspace like a SCIF with air-gapped machines and no outside tech allowed, you can {maybe} secure against this kind of exfiltration. Understand that in .gov/.mil land it is still accepted that there may need to be FBI/USMC personnel involved in security.
In reality? Publish serious penalties for doing this, and stick to them when it happens. Put The Fear™ into your users.
skippy2k@reddit
My company has Claude and every AI tool imaginable integrated into everything. We're not a Microsoft shop (typical Silicon Valley tech stack: Google Workspace, Slack, Jira, etc). People literally have Claude update Jira tickets because they don't want to go to the UI.
Even had someone get phished with a fake Claude app. Gonna be fun lol.
Infninfn@reddit
If you had M365 Copilot you'd have known that Claude has been available as a subprocessor for Copilot since December. Enabling it gives you a toggle for GPT and Claude in chat and researcher. Claude was my go to until gpt-5.4 showed up, now I'm split between the two. Only downside is that Microsoft explicitly says that Claude is outside of the tenant boundary.
On the Frontier program you get to preview Copilot Cowork, which is actually just Claude Cowork with full integration into M365, and I have to say that really is the Copilot that Microsoft wished they started with. I can get it to write up properly usable documentation, runbooks, and decks off of stuff I have across email, Teams, and OneDrive. Problem is, when it goes live you'll only get it with the new E7 plan.
VeryRareHuman@reddit
If it is enterprise account, yes.
phlatlinebeta@reddit
Yes, but only with an Enterprise account, which also ties your entire domain to your account, meaning people can't connect unless you license them.
pinkycatcher@reddit
Nope, we do federal work and Anthropic has issues with them now.
When it's resolved? Maybe.
TransporterError@reddit
Nope.
woemoejack@reddit
No.
YSFKJDGS@reddit
The amount of people just saying "it's read-only so I approved it" is insane. Even the free version of Copilot has enterprise data protection applied to it, and if you use GPT or Claude models in something like Researcher, it still falls under that protection.
If you are just looking at the read-only part of it and have no idea whether your company's data is being ingested to be used in their model, you are ENTIRELY missing the point of where the risk of this stuff comes from.
magataga@reddit
Large-provider AI privacy controls do not work. The incentives are all aligned towards ingesting and using customer data -> AI companies are literally built on taking in as much unique human data as possible while ignoring any and all restrictions on doing so, be they contractual, legal, ethical, or moral. For structural reasons -> there's no real advocate for customer privacy.
Technically, privacy controls are cost centers, and eliminating them, either explicitly or implicitly, makes doing the work at an AI company easier.
Historically, all six major AI companies have failed on or outright ignored privacy issues: MS, Google, Meta, OpenAI, Anthropic, and Hangzhou.
I would not allow this integration if given a choice, and I would advocate strongly against it. Although I am also probably the only person who's read the chrome browser TOS and EULA.
FirefoxMetzger@reddit
Would you run a locally hosted AI that runs inference on-device or a resident server in your own cloud? (all else being equal, ofc)
dynalisia2@reddit
I like Anthropic, but they recently accidentally leaked the source code/system prompts for Claude. I don't trust them (or honestly, any startup, no matter how big or well funded) enough with access to any data at rest. We do some things with their API, though, but those are mainly experiment- or prototype-class work and not with higher data classifications.
bitslammer@reddit
Even long established companies like Cisco and Microsoft have had source code leaks.
If that's your bar for what you will or won't use you're going to be building with rocks and sticks.
MathmoKiwi@reddit
I hear even sticks have had their source code leaked...
hoagie_tech@reddit
This is a misconception. Rocks and sticks went open source millennia ago. Certain forks like Engineered Wood and Lab Diamonds are proprietary and are protected, but the OG is available for all to play with.
Nu-Hir@reddit
And who doesn't love a good stick?
TheBros35@reddit
It’s one thing allowing a company access to some of your code repo’s or a set of documents that you feed it. It’s a totally different thing to allow them access to all email and file share data that an employee has access to. I like to limit how many vendors have that much access.
bitslammer@reddit
Which is great, but if you're an MS shop then you're not really going to be able to limit them much at all if you look across all of Azure and O365.
03263@reddit
Rock server... very reliable. No crash for 100,000 year.
brokerceej@reddit
The system prompts for Claude have always been public. They publish those.
Claude Code specifically had a source leak. And it's just a CLI wrapper that should be open source anyway.
Commercial_Growth343@reddit
We just had a change control submitted to start a pilot, so I guess so lol
b1jan@reddit
we have already. it is phenomenal.
looking forward to the ability to edit/send with Claude so we can dedicate real tasks to it.
Wild_Swimmingpool@reddit
We actually do allow it, there was a lot of testing and vetting done first. We also had to work with our compliance / legal team to outline use policies as well. Training is disabled of course. Permissions are very granular at the group level. Management wanted this and cowork badly so I made it happen with as many guardrails as possible.
It's actually really good at searching emails, SaaS, and your storage services and combining those results with read only permissions.
devonnull@reddit
Sure, it's way more helpful than CoPilot which pretty much just responds like a negative vortex of apathy teenage girl with "whatever".
owlbynight@reddit
Sure, why not? It's not like users routinely share plaintext secrets with one another with these products or anything.
ccsrpsw@reddit
Is this full Claude integration or the Sonnet integration that's been inside Copilot for a while? (The latter is what you need if you have any form of data protection and people are asking for Claude-like capabilities, btw.)
But the former way of integrating has been available for a number of 3rd parties for a while (pre-'AI', probably since Office 2003 era) and has always been an organizational risk analysis point (e.g. pretty much any Outlook or Office 'plug-in' has the ability to send data out for processing and many do).
But either way, this is the usual "3rd party integration" checklist, and even if this didn't read like ad copy, it would be the same as any other Office plugin being introduced to the mix: if the legal/data governance team approve, you implement as requested after the discussion phase and move it into maintenance mode. No different from any other tool adoption despite the new buzzwords.
drinkwineandscrew@reddit
We're testing it out with a pilot group rn, CISO has signed it off and at least it's read only. I'm not a fan but we don't have enough clout to say no to having it in the tenant outright.
Biggest challenge so far has been users wanting it to write and send emails etc. 'the connector can't do that' "ok but Claude told me it could do that on web outlook through the browser but I can't enable that" Yeah no shit you can't enable that buddy that's a disastrous idea.
Jaereth@reddit
It's amazing that a lot of people who's entire job is just sending Emails are salivating at the idea of having AI write them.
One manager here was just amazed he can ask copilot in the morning "Summarize my emails for me" and just reads the summary and goes about his day. I said that's a very high level of trust you have in a product still in it's infancy and left it at that...
edisc0@reddit
In the same boat here. But truthfully, I’m more worried about all of these users who are now suddenly full stack devs writing undocumented business applications overnight that we will inevitably need to support.
drinkwineandscrew@reddit
Yeah, that's the bigger headache for sure
TheFumingatzor@reddit
No
SylvainLafrance@reddit
A big NO for me !
neferteeti@reddit
No. If users need it, get the enterprise plan, after doing a data security audit of how they handle data to ensure it meets your org's needs. Only allow that.
Also, look into using sensitivity labels to encrypt sensitive data, and set up policies such as Endpoint DLP to prevent those files from being uploaded to places you don't want them.
thortgot@reddit
Claude inside your tenant is available at a marginal difference. Why would you breach your tenant security boundary for $10/month/user?
Fatality@reddit
Because it's a better experience
thortgot@reddit
In what way?
MathmoKiwi@reddit
So basically it seems Claude is no more of a security risk than Microsoft Copilot, and might in fact even be more restricted in what it can do?
thortgot@reddit
You can literally use the same reasoning models in Copilot.
Claude is taking the data in your tenant and shipping it off to Anthropic through Graph, without the native auditing functions.
Gunny2862@reddit
You're at the whims of IT leaders above your head on this, unfortunately.
Valdaraak@reddit
Not unless the CEO/legal signs off on the risks. Which is unlikely since we're a Copilot shop and give access to pretty much anyone who asks nicely.
Ok_Interaction_7267@reddit
Depends on your data more than anything.
“read-only” + delegated sounds safe, but it still means anything the user can access could get pulled into a third-party model.
I wouldn't blanket-allow it. Maybe a small pilot with the right users to see how it behaves first. Feels very similar to the early Copilot discussions.
jcpham@reddit
Hellllll naw
MetalEnthusiast83@reddit
No, but most of my clients are asking for it, so yes.
I do think people are moving too fast with some of this shit and it's going to bite someone in the ass, but it's not my problem.
KavyaJune@reddit (OP)
Instead of providing access to all users, you could restrict access to a specific set of users or groups. That helps keep usage contained.
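In Entra, scoping an enterprise app to a pilot group means flipping the service principal to require assignment, then assigning only that group. A sketch of the Microsoft Graph requests involved; the IDs are placeholders, and this only builds the requests rather than sending them:

```python
# Sketch of the Graph calls for restricting an enterprise app to
# assigned users/groups. IDs are placeholders; nothing is sent here.

GRAPH = "https://graph.microsoft.com/v1.0"

def require_assignment(sp_id):
    """PATCH that makes the app usable only by assigned principals."""
    return ("PATCH", f"{GRAPH}/servicePrincipals/{sp_id}",
            {"appRoleAssignmentRequired": True})

def assign_group(sp_id, group_id):
    """POST that assigns a group to the app."""
    return ("POST", f"{GRAPH}/servicePrincipals/{sp_id}/appRoleAssignedTo",
            {"principalId": group_id,
             "resourceId": sp_id,
             # all-zero GUID = default access when the app defines no roles
             "appRoleId": "00000000-0000-0000-0000-000000000000"})

method, url, body = require_assignment("SP-ID-PLACEHOLDER")
print(method, url, body)
```

The same two steps can be done in the Entra portal under the app's Properties ("Assignment required?") and "Users and groups."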
MetalEnthusiast83@reddit
It's up to the client, ultimately. I advise them to do that. The only real leverage we have is forcing them to at least use SSO for all this shit.
twatcrusher9000@reddit
Claude can't even keep their source code secure, why would I trust them with my data
gumbrilla@reddit
Enterprise account? In fact, we do. Sure. Claude Free? Absolutely fucking not.
cgreentx@reddit
Unless it is paid for and managed by the org, no. Reading data out of 365 into an unmanaged personal AI tool is data leakage by definition.
aquila421@reddit
Yes, assuming we control the rights, just like any other connector. If I can limit to read, and limit confidential from being read, then there’s no additional risk exposure.
Note: I personally use Claude Code with the m365 cli for administrative tasks and have been for a couple months now. Makes managing 400 SharePoints a bit easier.
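The CLI for Microsoft 365 emits JSON, so managing hundreds of sites mostly comes down to list-and-filter scripting. A sketch of that pattern; the site entries below are invented for illustration, where in practice the payload would come from something like `m365 spo site list --output json`:

```python
import json

# Sample payload shaped like `m365 spo site list --output json` output;
# the site entries here are made up for the example.
raw = json.dumps([
    {"Title": "HR Portal", "Url": "https://contoso.sharepoint.com/sites/hr"},
    {"Title": "Finance", "Url": "https://contoso.sharepoint.com/sites/finance"},
    {"Title": "HR Archive", "Url": "https://contoso.sharepoint.com/sites/hr-archive"},
])

def sites_matching(payload, keyword):
    """Filter a site list by a case-insensitive title keyword."""
    return [s["Url"] for s in json.loads(payload)
            if keyword.lower() in s["Title"].lower()]

print(sites_matching(raw, "hr"))
# ['https://contoso.sharepoint.com/sites/hr',
#  'https://contoso.sharepoint.com/sites/hr-archive']
```

Pointing an agent like Claude Code at this kind of read-only inventory step is lower-risk than letting it run mutating commands directly.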
KavyaJune@reddit (OP)
That’s interesting.
I’ve been exploring similar approaches on the Active Directory side as well. Tools like AdminDroid AI assistant are starting to bring in natural language querying for M365 and AD tasks, which feels like the same direction, just packaged differently.
Write-Error@reddit
Depends on industry. In mine, we deal with a lot of regulated data and classification is inconsistent. Handing over mailbox and site collection scopes, even delegated, is a non-starter.
WiskeyUniformTango@reddit
CEO demanded it so I said sir yes sir.
CrestronwithTechron@reddit
Hell no. I begrudgingly gave Co-Pilot, and that is supposed to have a digital enclave.
AI_Strategist1098@reddit
Before enabling, I’d want clear internal guidelines on what data is okay to query. the tool is only as safe as how people use it.
lesusisjord@reddit
Sure, why not? My SME input and advice is well respected, but if they want it, then why wouldn’t I “allow” it‽
Crafty_Dog_4226@reddit
We can't. Held back by Claude being out of scope for DoD CUI handling in our org. But, in reality this is crazy because the DoD did use it. I would have to assume, before the SecDef declared it a supply chain risk that it was cleared to handle ITAR CUI and much more sensitive info. Ahh, I should just go back and prep for my CMMC audit instead of wasting time on Reddit.
Vistaer@reddit
Lemme think…
Academic-Proof3700@reddit
Why not? The moment I open some larger text file, Claude will stop doing anything and ask for more money to bump up the limits.
apple_tech_admin@reddit
My department too could’ve been getting in on the fun (I love Claude). Alas, some drunk, unqualified bastard said that Anthropic is a supply chain risk. cries in GCC restrictions
2wheels_up@reddit
Acting like you guys get a choice. You will do whatever leadership says.
KavyaJune@reddit (OP)
Do you guys have a choice?
no1bullshitguy@reddit
Yeah in my org, we are allowed to connect Claude to 365, Slack, Atlassian stuff , Google Drive, Github etc.
( one of the Mag7 )
Downtown-Sell5949@reddit
Nah. We use Copilot. I also trust Microsoft a bit more than Anthropic regarding following GDPR and data protection. Not a lot, since, you know, they're both American companies. But still more.
illicITparameters@reddit
See above
kieppie@reddit
It's far more competent than most users I've encountered, so... maybe?