Running equipment past end of life - what's the oldest in your environment?
Posted by pinghome@reddit | sysadmin | View on Reddit | 181 comments
Due to AI-driven cost increases, our edge device refresh was cancelled. The $12.6k server is now $76k. These were set to replace an aging fleet of G8/G9 HPE boxes. How is the rising price of gear impacting your orgs, and what's the oldest gear you're being forced to run?
nodiaque@reddit
We actually have many Pentiums running Win98 and Win95 for the subway system. Those systems cannot run on anything else (we tried everything). They talk to old robots used in the subway that would cost thousands of millions to replace. So we have around 100 P1 133MHz machines, never used, ready to take over when one gives up the ghost.
kissmyash933@reddit
Is this the NYC subway system? I heard somewhere once upon a time that they were very specifically IBM PS/2 Model 90/95’s that the MTA had stashed away. Bulletproof systems those, the pentium versions are mad money on the open market these days though.
nodiaque@reddit
Nope, Montréal
RyanMeray@reddit
No way to virtualize them?
nodiaque@reddit
No, since they need to talk to robots on rails deep in the tunnels, with no network, through an old dedicated connector that we've never successfully emulated in VMware, VirtualBox, or Hyper-V.
RyanMeray@reddit
Damn
abr2195@reddit
Our oldest servers are Dell PowerEdge R730s. If you can tolerate running really old equipment in your environment, just refresh with equipment that’s newer than what you have, but that isn’t the newest. We’ve saved tons of money doing this with servers.
kissmyash933@reddit
I'm running 4 R730's at home still. They have been awesome machines and I don't currently have any plans to replace them, but I'm curious if you are seeing what I saw the other day. I was in there repasting anything with a heatsink, blowing them out, and replacing all the iDRAC batteries, and I swear, every piece of black plastic inside those systems that I touched turned into black plastic dust. Just me, or is that a thing with these systems?
MitochondrianHouse@reddit
That's the rub, and a perfect storm with our environment with Dell right now. The R730 is 13th generation, so there is no extending service for hardware support on them. The 14th gen models (which we have a LOT of) are out of support next year, so we can't just extend the support contracts a year or two to hopefully get past this AI hardware bubble.
So, all new triple-priced servers I guess.
VG30ET@reddit
We love it xD
porksandwich9113@reddit
Hello my intel dualcore friend.
Rxorcistt@reddit
For the love of God, don't reboot that thing
porksandwich9113@reddit
No plans to.
2 Electrical grids, ~1 million in batteries for our 48v DC plant, and 3 generators will make sure it doesn't lose power either.
MrChicken_69@reddit
And yet, I've seen such CO's lose all power. :-)
porksandwich9113@reddit
It happened once a long time ago I was told. Apparently there was a fire. (This was in the early 2000s when I was a teenager).
And good catch on the compile date, I didn't think about that.
I think to lose power at this point, it would need to be a pretty big disaster though. Like the entire building picked up by an EF5 tornado or some shit. We have a fire suppression system, and the generators are tested every two weeks now.
MrChicken_69@reddit
Never underestimate the damage of a single squirrel.
RIPenemie@reddit
r/uptimeporn
CeldonShooper@reddit
At this point it is considered mainframe hardware for honorary reasons.
sertxudev@reddit
May I introduce you to one of my Proxmox nodes:
porksandwich9113@reddit
You know what, I went hunting and I found something disgusting.
Own_Error_007@reddit
That's eligible for the pension in some countries.
Monomette@reddit
Had a Cisco MDF like that at my last job, wasn't quite at 16 years, but it certainly hadn't been shut down since it was originally installed. If I remember correctly it was stacked 2960s, with the redundant stack power unit and dual UPS.
We had replaced switches in that stack once or twice, but the overall stack uptime was over 10 years. UPSes had been changed.
Only noticed it when I was reviewing the infrastructure to make sure things had been getting regular updates. Guess that hadn't.
LosLeprechaun@reddit
I hope you patch Windows more than you patch your Cisco devices.
porksandwich9113@reddit
I'm not in charge of any Windows infrastructure, thank god.
ender-_@reddit
I replaced a Core 2 Duo running pfSense for a client in January. One of the NICs was a 3c905.
Brilliant-Advisor958@reddit
It's only DHCP?
porksandwich9113@reddit
Yep. Literally just running ISC for a bunch of our managed voice customers.
Angelworks42@reddit
I'm kinda surprised. Aren't you worried about security, or physical hardware failure taking down your net?
porksandwich9113@reddit
We are an ISP, and this is one of ~80 routers in a MPLS mesh. The only things that would go down is the handful of DIA circuits directly served off of it, which we could easily migrate the interfaces over to a new transport node (we have 6 in this particular CO) and run a few jumpers. Might have to move a few pseudowires too.
In terms of the phone DHCP server, that is one of two servers, and we are standing up a new pair of them in a proxmox cluster that is still being built.
Morkai@reddit
Goddamn, even that "windows experience index" brings back memories.
drozenski@reddit
Try lenovo servers. The price was 1/2 of what Dell and HP wanted for the exact same specs.
MrChicken_69@reddit
So long as you're a NEW customer to Lenovo. Otherwise, their shit is just as expensive. I warned my bosses of this when they got a low quote from Lenovo: fill those blade centers now, as those blades will be 10x more next year (zero discounts). They ignored the warning and paid 10x as much two years later. (Rather surprised they didn't just rip it all out for the next vendor-of-the-week.)
imstaceysdad@reddit
That's interesting - here in Australia we had the opposite. Dell were coming in so much lower than Lenovo and HPE.
Morkai@reddit
They're catching up. We have three ThinkServers as ESX hosts with warranties expiring next year, so we're sussing out replacements, and we're easily looking at 150-200k AUD to replace them (last time I looked).
Darkk_Knight@reddit
Yep. All the big boys use the same suppliers for their RAM and hard drives. They usually have contracts on the pricing.
phillymjs@reddit
You never want to be prowling the consignment area at a Vintage Computer Festival and see something on sale that you still have in prod, but that happened at my last job: one group I supported finally migrated some critical databases off an ancient Apple Xserve in like 2022 or 2023. I was at VCF East in 2021 and there were a few on a table for like $30 each.
cfmdobbie@reddit
Got a 23-year old PowerEdge 650 running the door access system. It is air-gapped though...
Excellent_Search_270@reddit
If anyone wants to know the market value of their aging hardware let me know. I work in the ITAD space and am happy to quote. I will say, most DDR3 systems are worth next to nothing but older DDR4 systems still have solid resale value.
Glue_Filled_Balloons@reddit
I ditched our last '08R2 box a little less than a year ago.
We have some ~12-14 year old Cisco switches kicking around in some places. It's a good time.
YellowOnline@reddit
I still have one NT4 box at a customer, because there's a company-essential app running on it. It uses Outlook 97 for sending mail, too. The developer died 25 years ago or so, and no one has the source.
pdp10@reddit
25 years is long enough for someone to have half-attentively reverse engineered the work of one programmer, during the boring parts of the workday.
If they started when nobody could find the source code, they would have long since been done migrating by now.
SomeWhereInSC@reddit
if it isn't on fire most companies let it ride....
pdp10@reddit
More often than we'd like to be the case, the simplest explanation turns out to be the correct explanation.
Yet still, organizations have a lot of different staff, each with different concerns, skillsets, and time-horizons. I guess I'm just accustomed to having at least one person who both knows and cares.
miltonthecat@reddit
This is, unironically, a task made for AI.
Darkk_Knight@reddit
That's the thing: they're afraid to.
reilogix@reddit
NT4?? Holy crap, that is incredible. I remember Back In The Day (TM), I used to occasionally have to use an NT 3.51 box and that thing was BULLETPROOF. NEVER crashed, never died.
NotMedicine420@reddit
IBM did good.
MajStealth@reddit
At my last megacorp they had an NT 3.5 machine, because that one particular lab device only talks to that one handmade interface card using ISA...
drozenski@reddit
Similar. I know of an old client still running NT4 because it's the only thing 8 of their ancient CNC machines will connect to, as it's running some proprietary software. They have to manually move files to it with a USB drive.
They have a ton of spare hardware for it bought off ebay. Pretty sure the owner has 5-7 more years till he's going to sell and retire and make it someone else's problem.
ZexGr@reddit
DOS machines on 5k$ HW
ender-_@reddit
You sure you're not missing a zero there?
Morkai@reddit
Oh man, my last workplace had teams that worked on rail lines, and had track levelling equipment with embedded Win7 computers on board. Due to it running Win7 it was prohibited from being on any wireless network we operated, so the files from the machine had to be moved by USB as well. Rather annoying when the rest of the fleet has USB access blocked and we have to create a whitelist policy to allow a subset of users this function.
kg7qin@reddit
That's when you make an image, take the source files and on a fully loaded Linux workstation tell your favorite LLM agent to decompile and reconstitute the source into something readable.
It should work better than you realize.
Beatusnox@reddit
Couple production machines on OS2 Warp.
SomeWhereInSC@reddit
Wow! I'm not sure I would have remembered it existed had it not been for your post.
paqmann@reddit
Love it.
Lukage@reddit
We got rid of most of our old stuff that our team manages, so this is fortunately the worst we have to deal with.
SystemHateministrate@reddit
NinjaOne fucks
Lukage@reddit
It's got its ups and downs. I've submitted just under 200 support tickets over the last 2 years. And every time, it involves me sending them ~10 screenshots of our interface.
dmoisan@reddit
We still have a Drobo SAN at my work. I repurposed it for storing VM images. Drobo's been out of business a long time!
Hyperx1313@reddit
I have a Dell PowerEdge server from 1997 running Windows NT 4.0, completely disconnected from the network, connected just to the old-ass monitor that is still running.
byrontheconqueror@reddit
Most of our network switches, including our core are 19 years old. Long live the HP 5400zl!
Lonely-Abalone-5104@reddit
Dell 2950s and r710s are about the oldest we have right now. Pretty old but not completely ancient. Things run like tanks
AlaskanDruid@reddit
Literally zero equipment is end of life until it stops working.
Mainframe, as400 from the 80s is the oldest here. Zero end of life as everything works.
Timberwolf_88@reddit
.... Mainframe
Fuzzmiester@reddit
Umm, I have an HP microserver? The one before the G2. 😉
it works?
rimjob_steve@reddit
My company is so cheap we have 30ish computers that can’t install win 11
AcidBuuurn@reddit
You can force them if you want. Not really recommended for business but possible.
whippy_grep@reddit
I had to force 20-plus Dell Optiplex 3020s to 11. It took a while, but all are running like a charm. The MS upgrade tool will put 11 on a cinder block.
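For reference, the force-upgrade can also be done without the MS tool. Microsoft's published workaround is a single registry value; this sketch assumes the machine still has at least TPM 1.2 and that you accept running Windows 11 in an unsupported state:

```reg
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\Setup\MoSetup]
"AllowUpgradesWithUnsupportedTPMOrCPU"=dword:00000001
```

With that set, running setup.exe from mounted Windows 11 installation media offers the in-place upgrade despite failing the CPU/TPM 2.0 check.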
secondhandoak@reddit
We're running laptops longer than normal now and I'm starting to see more NVMe failures. I expected more complaints about battery life, but the NVMe failures after 4-5 years are surprising me. People here refuse to use new Outlook, and I wonder if the OST files of Outlook classic cause increased wear.
reilogix@reddit
What do you use to monitor SSD health? I use HD Sentinel but I'm not sure how reliable it is, in terms of prediction, and/or when to replace an SSD...
secondhandoak@reddit
I wait for users to complain their system is super slow or no longer boots. I use PDQ but not sure if it can monitor/alert for SSD issues.
pdp10@reddit
And what happens if you zeroize them and reformat them? Performance restored?
bubblegumpuma@reddit
My instincts say that there's too many layers of mediation and translation between the data sent by the OS and the actual physical flash cells inside of modern SSDs for zeroing and reformatting to actually meaningfully do anything that the SSD controller, etc. aren't already doing in the background. It'd be worth a shot if you have the time, since slowness might be something like a screwed up but not 'invalid' filesystem, but if the SSD isn't playing ball with that or it takes too long for your tastes to test for functionality, I'd just can it.
andrewpiroli@reddit
Writing zeros doesn't help anything if your OS already properly supports TRIM. If it doesn't, you can do a format using the ATA or NVMe Secure Erase feature. That's actually handled by the controller and will do the proper flash maintenance on the entire drive and restore performance. This isn't really needed now that TRIM works on most OSes, but if you are pulling an SSD out of like a Win XP machine it can help.
secondhandoak@reddit
I haven't tried. the SOP here is to replace. I'm not given much time to dig into things due to the number of tickets and the SLA.
PDQ_Brockstar@reddit
Take this with a grain of salt since I haven't done any testing with SSD reporting, but with the new PowerShell scanner in PDQ Connect you should be able to scan for stuff like that and then build collections and reports off of it. The same should be possible in PDQ Deploy & Inventory with the PowerShell scanner. How reliable the SSD reporting in Windows is, is another question altogether.
MrYiff@reddit
It's even simpler than that I think as PDQ Connect records Disk Drives natively including reported Health status so presumably you could just build a report of that.
secondhandoak@reddit
I checked just now and PDQ Connect shows disk info and says Healthy on a few I spot checked. Wish it displayed more details like CrystalDiskInfo would but I don't think that's free for commercial use so I'd never test it out.
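If you want CrystalDiskInfo-style detail without the licensing question, here's a minimal sketch of pulling NVMe wear data with smartmontools (the `-j` JSON flag requires smartmontools 7.0+; the field names follow smartctl's NVMe JSON output, and the replacement threshold is an illustrative assumption):

```python
import json
import subprocess

def nvme_wear(report: dict) -> dict:
    """Pick the wear indicators out of a parsed `smartctl -j` report."""
    log = report.get("nvme_smart_health_information_log", {})
    return {
        # NVMe spec: 100 means the rated write endurance is fully consumed
        "percentage_used": log.get("percentage_used"),
        # Remaining spare capacity as a percentage; worry when it drops
        "available_spare": log.get("available_spare"),
        "media_errors": log.get("media_errors"),
    }

def drive_needs_replacing(report: dict, used_threshold: int = 90) -> bool:
    """Flag drives near end of rated endurance (threshold is arbitrary)."""
    used = nvme_wear(report)["percentage_used"]
    return used is not None and used >= used_threshold

def query_drive(dev: str = "/dev/nvme0") -> dict:
    """Run smartctl against a device; needs root and smartmontools >= 7.0."""
    out = subprocess.run(
        ["smartctl", "-j", "-a", dev], capture_output=True, text=True
    ).stdout
    return nvme_wear(json.loads(out))
```

Fed into an RMM's script scanner, the same idea works; the hard part is picking a threshold, since drives often die without politely hitting 100% first.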
Greed_Sucks@reddit
My dude, New Outlook is a disaster for people that have been working out of Outlook for most of their careers. I wish I could nuke it forever.
Tap-Dat-Ash@reddit
I tried so hard to use New Outlook for 3 months. I couldn't stand the limitations; so many basic things it just did plain wrong. Attach a file? Nope, I'm gonna link it via OneDrive/SharePoint. Do a search for a previous email? Nope, I'm gonna bring up the email you're drafting. It's near unusable for any power user.
secondhandoak@reddit
I like new Outlook because I don't have Sync issues when users refuse to stick to one device. Their laptop is too heavy so they have a laptop at home and one at work. I can't wait for old Outlook to be unsupported and gone.
rome_vang@reddit
You must be lucky. I work at an accounting software company and we heavily rely on outlook classic for our CRM and Microsoft OLE integrations.
There’s many companies like us that have integrations that go back many years. Our company has the money to get away from the legacy integrations but not the time/resources to do so.
So when these integrations break, it’s up to our team to figure out why. Most of us have a Computer Science background but we’re not full fledged software developers.
KC-Slider@reddit
It won't work with my add-ons for encrypted email, so I'm stuck on classic for now.
Ferretau@reddit
The nvme failures don't surprise me, they weren't designed for longevity. Hopefully for you they aren't soldered to the mainboard like a lot of laptops are moving to.
ColXanders@reddit
Me. I'm old.
bliveng1@reddit
DL380 Gen 1. They are Compaq-branded, from before HP bought them. Must be 26 years old.
DiamondLatter1842@reddit
Atera helps us keep tabs on health so nothing falls through the cracks.
ThomasPaineWon@reddit
I do Third Party Hardware maintenance and I have a customer running Sun Fire E2900. Released in 2004. Unfortunately we still have parts for them and I still work on them.
scottjl@reddit
Me.
techretort@reddit
What's a few CVEs between nation states?
DeathRabbit679@reddit
We decommissioned an Ubuntu 8.04 machine last year. Had some test automation stuff that had been ownerless for about 8 years or so.
UnexpectedAnomaly@reddit
Currently my workloads are just office and web apps, so we don't need the latest and greatest for that. Frankly, the only things failing on my fleet of 5+ year old laptops are fans and batteries, which are pretty cheap to source and replace. We don't get a lot of hard drive failures, and the ones I have had usually throw a SMART warning before they die. One of the comments here saying NVMe drives are not as reliable as SATA SSDs is making me worry. The oldest thing I have is a Ricoh printer that was discontinued in 2013.
Tatermen@reddit
We have a Cisco 2950 that's been running almost continuously since sometime around 2003. Sadly we had a full power cut a couple of years ago and we lost the ridiculously impressive uptime count on it.
tech_is______@reddit
I've got a handful of G9 HPE's still running strong
BitRunner64@reddit
This old bastard refuses to die.
BitRunner64@reddit
One advantage of no longer receiving Windows updates.
Angrymilks@reddit
Inb4 all the mainframe maintainers with COBOL expertise come in.
Humpaaa@reddit
AS400 chugging along
drozenski@reddit
Hey, just because we still run and code in COBOL does not mean the hardware is not DDR5 and SSDs. :D
Angrymilks@reddit
I can vouch at least for Target. They're still running AS400 in the basement of their HQ.
pmormr@reddit
I don't know about COBOL, but I work for a bank. Basically everyone wants to be the company that replaces the old ACH system. Because obviously that's a big deal to a bank if everyone's using your clearing house. What that means in practice is that JPM wants PNC to use their new system, but PNC won't use JPMs system because PNC wants JPM to use their system. So.... mainframes it is!
Schrankmaier@reddit
Old but gold
Moses_Horwitz@reddit
processor : 28
vendor_id : AuthenticAMD
cpu family : 21
model : 2
model name : AMD Opteron(tm) Processor 6376
AnonEMoussie@reddit
We almost had a 21 year old server. Luckily it got decommissioned last year.
Jealentuss@reddit
*sadly
fk067@reddit
Once saw a Cisco PIX up and running for over 9 years, no reboot or patch whatsoever. Don't recall how far past EOL/EOS the HW or SW was.
sboone2642@reddit
I know somebody that is still running a Pix 515. That thing is about 25 years old now
sboone2642@reddit
My company is running a critical server that is running RedHat Linux 5.2. Note, this is not RHEL 5, but RedHat 5. The hardware doesn't even have a PCI bus. You know how hard it is to find a working IDE drive at 2am on a Saturday?
pepod09@reddit
We have 20-25 year old Cisco switches running in most of our PoPs across the states. Oops
Evening_Link4360@reddit
We have 16 HPE BL460C Gen 10’s that should have been replaced four years ago. Now we really can’t replace em.
TinderSubThrowAway@reddit
I have an HP DL360 that was bought in 2010 and is still running 2008R2.
We only have it still because it runs our ERP that can’t run on a newer OS.
We are currently in the process of upgrading said ERP and can hopefully decom that one at the end of the year, if all goes according to plan anyway.
Or turn it into a beast of a TrueNAS because at this point it may never die.
ZathrasNotTheOne@reddit
at my old job, we had 3 XP desktops that were used every year to run a single program that could only run on an XP machine...
Morkai@reddit
We recently had a few staff leave (the contract they were on ended) and had a Lenovo T470 returned. Not sure if it was in active duty or not.
The oldest one I know was active was a Lenovo laptop whose warranty had expired 7 years ago.
johnd126@reddit
My personal laptop is a refurbished T470 I bought on Amazon a few years ago. I love it.
Annh1234@reddit
Had a Pentium 3 server decommissioned after the pandemic; everyone forgot about it. It was running the DNS for 20b racks in 3 datacenters... It's always DNS, huh.
Slawcpu@reddit
Server 2003. Old IBM xServer.
It’s an SNA gateway for a mainframe running legacy hardware.
WoWMiri@reddit
I just threw out a PC that was manufactured in 2012. It was working until 2025… the 2012 printer is also joining it…
I inherited A LOT of ancient gear when I took this job…
steeldraco@reddit
I'm at an MSP. I think the oldest stuff we still have around at a client is some older 2012 R2 servers that were walled off via VLAN. They are old EMR servers that had to be kept around for various reasons. Might also be manufacturing servers kept around for similar reasons, usually to run old machines that can't be down but cost too much to replace.
We got a new client recently that has some Win7 machines and a SBS 2011 server. Haven't seen one of those in a while, but they've already approved getting rid of that old crap.
Glass_Call982@reddit
Last SBS 2011 I saw was in 2024. Client was sold a new server and licensing but the old msp just moved the SBS and left it on the esxi host. Total scammers.
steeldraco@reddit
Huh. Wonder if that was the same as the outgoing MSP here. The discovery for this place was wild. They've got two domains in place; one running on the old SBS box and the other running on a VMWare host that just connects up to Entra. Users are on the old domain for drive shares and folder redirection and the new one for their Entra users. Workstations are registered to the new domain but not in Intune ('cause they're not licensed for it).
That was a wild discovery.
fkick@reddit
We just retired our fleet of Westmere Mac Pros (2009-2010). Television post production facility kept them going as Avid bays for almost two decades. Damn workhorses those machines.
goatsinhats@reddit
G8? That’s modern
We had a G6 until recently
Honestly, if you're just replacing 12-13 year old hardware now, you're not being forced to run anything. The company just doesn't care, and good, let them pay extra.
Pub1ius@reddit
Roughly 15 years old Precision workstation with Server 2008 (non-R2) running essential HVAC control & automation software for a commercial office tower. Requires a physical serial connection to a proprietary Siemens box that I haven't been able to virtualize.
Known_Experience_794@reddit
We have one that’s 16 years old. In fact, I think our newest server is from 2016. Our C-suite does not believe in investing in IT infrastructure and has been pushing the envelope for over 10 years. And “if” they get forced into replacing something now, with prices the way they are now, they are going to really regret that shit. It’s been a fucking joke with these ass clowns. But that’s ok, the piper is eventually coming and he is gonna get paid one way or another.
Imhereforthechips@reddit
Oldest server is 2019, our SAN (MSA 1040) is 12 yrs old.
WantDebianThanks@reddit
Place I was at in 2019 was running a business suite on OpenVMS that was set up in the mid-80s. We stopped being able to update it in the mid-90s. It was running on i386 or some such (so we couldn't cloud host it or virtualize it). The backups were "complete" but I was never told what was actually being backed up, and we hadn't tested backups in a decade. We had a spare server, but no idea if it worked, if we had an installer for OpenVMS, or any way to test.
Not sure how old the hardware was, but I think it was from the 90s.
AcidBuuurn@reddit
Why couldn’t you fire up the spare and restore a backup to it?
WantDebianThanks@reddit
Director didn't have the time, that wasn't my job, and I wasn't trusted to do that.
Kitchen-Back-1271@reddit
A couple of months ago we decommissioned an old Dell PE2950: 4GB RAM, 80GB HDD, Windows Server 2003, with an application written in VB6 and SQL Server 2005. Some guy developed it in 2004 and was fired in 2007.
Glass_Call982@reddit
I've still got one of these in the rack, powered off. It was our first real server running SBS 2003.
TheGreatNico@reddit
We have some lab equipment made in Yugoslavia with a text-only interface on the back end written in a language I don't recognize, maybe Serbian, that would cost about half the yearly budget of the whole hospital to replace. No idea how it's running, no idea what's inside it; it might be magnetic core memory for all I know, but it still runs whatever tests it runs, decades after the country that made it ceased to exist.
If we're talking about non-specialized equipment, we've got a few HP G7s and 11th-gen PowerEdge servers still kicking around, plus some old Cisco switches from the same era. Starting to see parts failures on them though. It's a ticking clock before Father Time forces an upgrade or migration on those.
YouShitMyPants@reddit
Hmmm, 2960XRs, 12 year old servers still running OG drives, WLAN 3650s. Thankfully those will be gone soon. I got Office Space on my mind.
artekau@reddit
SharePoint 2010 (only for old searches)
SAP that hasn't been upgraded in 15+ years
fuzzylogic_y2k@reddit
An avaya partner phone system. I think it's older than my youngest IT staff member. Estimated at near 30 years old.
kissmyash933@reddit
This one is actually surprising to me. Maybe I have had really bad luck with them or something but I have always found Avaya Partners to be insanely unreliable.
fuzzylogic_y2k@reddit
I have worked here for almost 20 years and just found it. Never a single ticket. And I did verify it was actually in use and not just hanging around. It's possible they are calling a local vendor for it, but yeah, still just working away.
AmiDeplorabilis@reddit
I don't like it any better, but I'm about to enter the fray by using some older hardware... also thanks to AI.
shimoheihei2@reddit
HP EliteDesk G4 mini-PCs
LowIndividual6625@reddit
End-User.... we have a couple of 2022-ish ProBooks and few 2020-ish AMD-based Optiplex but they are running Win11 fully patched.
Back-End.... my Nimble is ~2019 and not budgeted for replacement until next year.
ExceptionEX@reddit
I've got a sco box running informix that has a companies general ledger, billing, reports, and HR on it.
It's older than nearly everyone in the company.
theoriginalharbinger@reddit
I just had a bunch of flashbacks to lawsuits originating in the 90s and then trying to explain in the mid-2000s who Novell, SCO, and WordPerfect were to some interns.
And then getting bought by OpenText a few years later, thus finally arriving at the official home of Unix(TM).
ExceptionEX@reddit
Yeah, SCO was nothing but a lawsuit to me when I was coming up in tech, I hadn't actually seen it in production for the first decade of my career.
Though everyone is nervous about it, I have to say, the system and the software handle an amazing workload and are nearly bug free, and every time anyone has scoped it for replacement, the cost has exceeded yearly earnings.
msalerno1965@reddit
For an extreme: Got some Dell R610's, and a few R320's in service. Third-party hardware support. In reality if one died today, I'd have it back up virtualized in about an hour. Some are Solaris 10, some CentOS 6.
Mostly decommissioned, we're getting there!
In reality: Just reupped support on a Dell MX7000 chassis for another 3 years, total blade ram: 5TB. It's not going anywhere.
I wonder about your hike in server costs, though. I think I'm seeing about double on RAM prices from Dell, versus what I would expect to pay 1-2 years ago.
A Dell R470 with some goodies and 512GB of RAM was <$40K each in a batch of ten, just last month. And I had to "suffer" and take 64-core CPUs instead of 48-core, because they were out of 'em by the time we got it quoted.
Higher Ed, btw, so pricing from Dell is ... different.
But yeah, hold on to what you have, if you can. We've had Dell in the datacenter now for the 15 years I've been here. Never once did a machine go "down" completely. Disk dies? Sure. We mirror or raid6 everything. RAM ECC errors? Once in a blue moon. Literally half the time we got ECC errors, it was a BIOS bug. Ignore the run of bad 450GB SAS drives in the MD3000's, and Dell has done well by us.
The two machines that actually died ... like ever? Got wet.
illicITparameters@reddit
A couple servers and a firewall from 2022. Servers are getting ditched next year; the firewall is getting replaced in 6 months for something with more WAN and VPN throughput.
Immediate_Bison3308@reddit
Not ours but a client's
Secret_Account07@reddit
We got super lucky and bought hundreds of Cisco hosts in anticipation of the big price hike. Saved a few hundred thousand.
I feel for the folks that didn’t. Luckily we have enough customers that our rates now reflect the price change so when we do refresh we will be basically even.
But none of this, and I mean none of it, was worth what we got. I’d hand back over AI and go back to the way things were in a heartbeat.
spikederailed@reddit
Itanium hardware running OpenVMS, nuff said.
gamblodar@reddit
Misread this and thought you were thinking different with big iron.
rthonpm@reddit
I've got an R710 with an MD1200 running Hyper-V Server 2019 at a client; it's running a few Windows 10 VMs on a protected network segment and a Linux server for Docker containers. To replace the hardware with the same specs would be around $125,000 that they don't have.
Ziegelphilie@reddit
I retired a Dell server from 2011 last week
silkee5521@reddit
17 year old Dell server repurposed as a file server for items not meant to be sent to the cloud.
hardingd@reddit
9 year old Lenovo blade chassis. 12 blades being replaced with 8 pizza box servers. Quote went from 240k to 670k.
csjc2023@reddit
10+ year old HDDs. SSDs that are 500%+ used. Double-digits of PIIX4 south bridges that are 29 years old.
SolidKnight@reddit
With the current trajectory of hardware costs, it's time for 10-20 year life spans.
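The longer-lifespan argument pencils out even with the thread's own numbers. A back-of-the-envelope sketch using the OP's $12.6k-then-$76k box (lifespans here are illustrative assumptions, and support costs are ignored):

```python
def annual_cost(purchase: float, years: float) -> float:
    """Straight-line annualized hardware cost over a service life."""
    return purchase / years

# Pre-hike price on a typical 5-year refresh cycle
old_cycle = annual_cost(12_600, 5)   # 2520.0 per year

# Post-hike price stretched to a 15-year life
stretched = annual_cost(76_000, 15)  # ~5066.67 per year
```

Even tripling the service life doesn't absorb a ~6x price jump, which is why so many people here are sweating their G8s and R710s instead of replacing them.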
cederian@reddit
Solaris Cluster with 13 years of uptime, IIRC. I was told that the day it gets rebooted/shut down will be the day it gets decommissioned. Nobody is allowed to be even near that ancient thing.
mr_data_lore@reddit
Oldest? An Avaya IP Office 500, but it's still supported. I've spent the last 3 years replacing all the old equipment, but haven't gotten to the PBX yet.
CeldonShooper@reddit
Reminds me of a project around 2007 that had to run on an enterprise Avaya cluster. Just querying a logfile took like 5 consecutive web forms to be filled out and sacrificing your first born until you could see any logs.
onboarderror@reddit
VMS
AFlyingGideon@reddit
TOPS-10
pdp10@reddit
At least RAM cost won't be a problem.
vi-shift-zz@reddit
Decommissioned a bunch of 20 year old Sun SPARC Solaris servers this year. They were solid as a rock but should have been out of here 10+ years ago.
Wolfram_And_Hart@reddit
Just retired a 29 year old switch today.
archiekane@reddit
I have a 16yr old Dell r210 running as a router.
It's scheduled for death this coming Saturday. It's served me well.
Sintarsintar@reddit
An R710 running an ancient ESXi version, running an ancient Linux version that only does one thing: ingest netflows. Nothing can talk to it but the netflow senders and a jump box or two.
eufemiapiccio77@reddit
Firmware-update every component in it and it'll last a few more years. Unless it's physically broken, you can extend it for a bit longer.
Due_Ear9637@reddit
We date some of our servers by what grade our newer hires were in when they were originally deployed. I think the oldest was 6th grade.
whatsforsupa@reddit
We did a pretty good job of changing out most of our hardware to at least Win 11 compatible machines.
We do have a really gross VM on Server 2012 R2, funnily enough on a Server 25 host. We are out of activations and they stopped selling perpetual licenses, so our boss just said support it until we can't anymore.
TerrificVixen5693@reddit
We’re still running Windows 7 on some systems here.
SmasherOfDaButtons@reddit
Damn son, those are amateur numbers. You need to bump those up. My last employer still has an xp box used for proprietary equipment testing, and a rhel 6 box floating around for some old cad models.
TheFluffiestRedditor@reddit
Still a better OS than Win11. Just wish it could be supported
RestartRebootRetire@reddit
The six Dell 1.8tb SATA drives I just bought used (from 2018) are working great in our PowerEdge T630 (2014) and cost me about $200. Another $200 gets me eight more and then I just hope the RAID10 fails gracefully.
touristh8r@reddit
Dell PE2650s are our oldest, running SQL 2000 that was never a candidate for virtualization. We have an entire Windows domain running just for that to continue working, since it's all voodoo magic to our young devs.
bschmidt25@reddit
I still have a few ProLiant Gen9s out there that are still on hardware support with a third party. They're not my direct responsibility, and getting the people who "own" them to write a check to replace them has been a challenge, so all I do is tell them they could die at any moment and they'd be SOL. Fortunately, nothing too critical is running on them.
azspeedbullet@reddit
Desktops and laptops 10+ years old, barely meeting the win 11 compatibility
MetalSufficient9522@reddit
Wow! G8/G9 is old. You waited too long and now it's expensive. Tech debt hurts!
derfmcdoogal@reddit
So glad I bought my SSD shared storage array before everyone decided it was beneficial to burn electricity to make computer generated stupid cat videos.
HankMardukasNY@reddit
I have four Cisco 2951s acting as gateways for our Cisco UCM environment, one in each building, that are 17 years old with one single non-removable power supply, and no spares. I have raised many concerns about this throughout the years to no avail. Luckily we are moving to Webex Calling this summer and can get rid of them
fubes2000@reddit
Me
gamebrigada@reddit
I've been managing to find prices that are not crazy more expensive. RAM and SSDs are insane, so we've just had to contract a bit and deal with it.