What are you using for multi-petabyte backup targets?

Posted by cezaryd@reddit | sysadmin | View on Reddit | 15 comments

We’ve been working on backup storage for large environments for years, and one problem keeps coming back:

Traditional backup targets don’t scale well beyond a certain point — performance drops and cost per TB goes up fast.

I’ve seen deployments with 100+ nodes and ~100 PB of logical backup data in a single grid that avoid many of those bottlenecks, which made me curious how others are approaching this at scale.
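To make the "cost per TB" point concrete, here's a toy calculation (all figures are hypothetical, not from any real deployment): with deduplication, 100 PB of logical backup data can map to much less physical capacity, and the effective cost per logical TB depends heavily on the dedup ratio you actually achieve.

```python
# Toy model of backup-target economics. All numbers are made up
# for illustration; real ratios and prices vary widely.

def physical_tb(logical_pb: float, dedup_ratio: float) -> float:
    """Physical capacity (TB) needed for a given logical backup size (PB)."""
    return logical_pb * 1000 / dedup_ratio

def cost_per_logical_tb(logical_pb: float, dedup_ratio: float,
                        usd_per_physical_tb: float) -> float:
    """Effective USD cost per logical TB stored."""
    phys = physical_tb(logical_pb, dedup_ratio)
    return phys * usd_per_physical_tb / (logical_pb * 1000)

# Example: 100 PB logical, 10:1 dedup, $20 per physical TB.
print(cost_per_logical_tb(100, 10, 20))  # -> 2.0 USD per logical TB
```

The interesting failure mode at scale is when the dedup ratio degrades (or rehydration gets slow) as the grid grows, which is exactly when the cost-per-TB curve bends the wrong way.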

Curious how others here are handling this at scale.

Would really appreciate feedback (or criticism) from people running large environments.