Which vendors can secure AI data centers?
Posted by FudgeAgile7958@reddit | sysadmin | 15 comments
We’re starting to look more seriously at security for AI data center environments and I’m realizing this might not be as straightforward as applying the same tools we use for traditional infrastructure.
With GPU clusters, huge amounts of data moving around, hybrid cloud connections, and teams trying to protect training data and models, it feels like the requirements are shifting pretty quickly. For anyone already dealing with AI-heavy environments: how are you thinking about this?
VviFMCgY@reddit
This is an odd post
WorkLurkerThrowaway@reddit
I'm not sure I even understand what OP is talking about? Physical security? Network Security?
ukulele87@reddit
What makes you think it's any different? I don't get it.
Sounds like you are just saying things because you read them somewhere or heard them somewhere.
Is this AI posting, or are you not from an IT background? What do GPU clusters and "huge amounts of data" moving around have to do with anything?
Trust_8067@reddit
It sounds like you might be confusing data centers with managed cloud/MSP environments.
In terms of datacenters, most are going to have the same security setups. To get site access, someone on an approved list has to open a ticket stating the person's name and info for access to the building during specific hours. Their ID will be checked, and they will be escorted the entire time. They will only have access to your specific company's locked cage and locked racks. From a DC perspective, it's your environment; you're responsible for your own network and infrastructure security outside of physical access.
From an MSP perspective, everyone's going to be different, and the smaller the MSP, the more likely they don't have the full end-to-end talent, knowledge, or expertise to meet strict security requirements. This is one of those things where you have to know what you want, ask the MSP if they can meet your requirements, and then run yearly audits against it.
Gullible-Surround486@reddit
pretty much, half the time people mean MSP stuff when they say datacenter security anyway.
Character-Hour-1797@reddit
The part I’d be watching is whether the security team can actually keep policies understandable once the environment starts scaling. A lot of vendors can say “AI data center security,” but day two operations are where things usually get ugly. Check Point is one of the more interesting names here to me because they’ve been pushing the single-management / prevention-first angle for a while. For AI infra, that may matter more than some brand-new AI-specific feature, since the hard part is often keeping the whole estate governed without creating a policy nightmare.
kombiwombi@reddit
Looking at it from the 'datacentre' angle of hosting your own AI servers: the huge amount of east-west traffic is the issue, along with the sheer number of racks.
This will depend on site policy a little. Do they allow you to build your own interrack cables, or do you need to go to a cross-connect? What if your run is longer than a row? At what point is all this made easier by buying a cage, room, or floor?
There is also the question of how much security is needed versus the requirements of the machinery. If a 'safe'-style rack and interrack encryption is needed, can you actually get enough airflow into those racks as needed by the GPUs? Can you get MACsec interfaces at the rates needed to drive the interrack connections?
GallowWho@reddit
It's all just data and programs running on a server, and you protect it the same as any other data or programs running on a server.
You also make sure the model you are running has appropriate guardrails in place.
Kitchen-Pea6382@reddit
I mostly agree, but I think the tricky part is less the theory and more how messy these environments get in practice. Once you have a lot of east-west traffic, hybrid cloud links, different teams moving training data around, and policy needing to stay consistent across all of it, “just secure it like any other data” can get harder pretty quickly.
I’ve seen Check Point come up in this context because their angle is more consolidated prevention and policy management across network, cloud, and workloads rather than treating everything as a separate point tool. Not saying they’re the only answer, but I do think this is where the vendor question becomes more interesting than just "put guardrails on the model."
HighRelevancy@reddit
That's a somewhat different problem. The AI model itself is just words streamed in and out, not very interesting as system security goes. Guardrails on what those words can be aren't really a system security problem.
If you're making assistant applications that run tools for the user, you need to write them to protect against arbitrary input, enforce the bounds of intended file paths, and so forth. Those "guardrails" are the same security problems as any other web app doing similar operations.
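To make the file-path point concrete, here's a minimal sketch of confining tool access to a sandbox directory. The root path and function name are hypothetical; the key idea is resolving the path first and denying anything that escapes the allowed root (requires Python 3.9+ for `Path.is_relative_to`):

```python
from pathlib import Path

# Hypothetical sandbox root for the assistant's file tools
ALLOWED_ROOT = Path("/srv/assistant/workspace").resolve()

def resolve_within_root(user_supplied: str) -> Path:
    """Resolve a user-supplied path, refusing anything outside the sandbox."""
    candidate = (ALLOWED_ROOT / user_supplied).resolve()
    if not candidate.is_relative_to(ALLOWED_ROOT):
        raise PermissionError(f"path escapes sandbox: {user_supplied}")
    return candidate
```

Same check you'd write for any web app that serves user-named files; nothing AI-specific about it.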
But yeah it's all really just the same problem as any other multi-tenant web app.
Beginning_Ad1239@reddit
As a concurrence here, think about what you would do to protect your data if the finance team was poorly controlled and had a habit of oversharing data. The principle of least privilege is your guide here: start with really low permissions, so low that you make the team mad at you, and make them justify anything more.
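The "start low and make them justify more" approach boils down to a deny-by-default check. A tiny sketch, with made-up role and action names rather than any real product's model:

```python
# Explicit grants only; the roles and actions here are hypothetical examples.
GRANTS = {
    "data-scientist": {"read:training-data"},
    "ml-platform": {"read:training-data", "write:model-registry"},
}

def is_allowed(role: str, action: str) -> bool:
    """Anything not explicitly granted is denied, including unknown roles."""
    return action in GRANTS.get(role, set())
```

The useful property is that a new team or a new action starts with nothing; access only appears when someone adds a grant and can justify it.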
This is also an exercise in DLP. If you don't have solid data labeling and permissions around it in place you're going to be in trouble.
wrosecrans@reddit
People have always wanted to protect data in offsite services. There's nothing particularly interesting here about GPU clusters, or the data being related to AI. No matter how much middle management tells you that everything has suddenly changed, this is still just running software on computers (with a bunch of Perl in the glue under the hood), and you still need to mainly just pay attention to the same old boring stuff, like making sure the ACLs are sane and remembering to close accounts when somebody leaves the company.
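The "boring stuff" above is automatable. A sketch of the account-hygiene part, assuming you can export a username-to-last-login map from your directory and a current-employee roster from HR (both inputs hypothetical):

```python
from datetime import datetime, timedelta

def audit_accounts(accounts, roster, max_idle_days=90):
    """Flag accounts for departed users, plus accounts idle past a threshold.

    accounts: dict of username -> last-login datetime (e.g. from an LDAP export)
    roster: set of usernames currently employed (e.g. from an HR export)
    """
    cutoff = datetime.now() - timedelta(days=max_idle_days)
    orphaned = sorted(u for u in accounts if u not in roster)
    idle = sorted(u for u, last in accounts.items() if last < cutoff)
    return orphaned, idle
```

Run it on a schedule and the "forgot to close the account" class of problem mostly goes away, AI workloads or not.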
And really, none of your post seems to actually be about securing the data centers. Securing "AI data centers," as you put it in the subject, is about stuff like sturdy doors, security cameras, and hiring some guards. Call ADT; they'll be happy to set up alarm service for your data center, or house, or retail shop, or whatever.
Helpjuice@reddit
This is not a vendor problem; this is your company's problem. You have to do this! You can outsource the hardware problem to major cloud providers, but the actual virtual access controls, governance, risk management, and compliance you will still be responsible for as a company.
M3tus@reddit
I can say this much: most orgs...99%...didn't consider for one second that they needed to change their security approach with AI infrastructure.
This is going to get ugly as people 'learn on the job'.
M3tus@reddit
Lol...AI fan boys coming for you OP. You're not supposed to talk about this obvious failure to assess risk.
"The AI is perfect and security is for losers and libs." /s