Supporting other IT people is usually better than supporting the general populace. Usually.
Posted by DefNotBlitzMain@reddit | talesfromtechsupport | 84 comments
I work support for a specific piece of software that runs exclusively on customer servers, so 99.9% of my calls are directly with IT people from other companies. The other .1% have to transfer me to their IT people because they don't have access to servers.
That usually means I'm excluded from tickets that get solved by reboots, but it doesn't exclude me from week-long finger-pointing contests.
"You are totally correct in saying that the other server can't talk to our service on this server... But that server can't ping us at all. It's something on your network, not our service."
"Yes. We checked everything on our service just to be sure. It's ready to go and working fine, it just doesn't have an internet connection at all. That's on your network, not us."
"Yes, you've mentioned that this is the only server affected and all your other stuff has an internet connection, but we don't manage your network or even this server. It's all your stuff. Please troubleshoot the network connection."
"Logs are showing a bunch of errors because the server doesn't have an internet connection. No other customer is complaining about being unable to connect to the internet. Between the network errors, the service reports that it's running fine and ready to go, it just doesn't have internet."
After no less than 10 days of 3-5 emails a day like those... I get this gem: "Issue caused by faulty ethernet cable has been resolved. You may close your ticket."
10 days of downtime... 1 cable.
LogicBalm@reddit
What I hate the most is when we have multiple meetings, bridges, and screen shares, and then eventually it turns out it was just a typo that was overlooked by everyone because we are all collectively rolling our eyes too hard at the finger-pointing nonsense.
I had one issue for over a year with multiple techs and engineers reviewing it before we finally got on a call with someone, shared the screen and she said "I think you should use https instead of http" and that fixed it.
Yes, it was a stupid mistake but it's a stupid mistake that was overlooked by at least ten "experts" who had already reviewed it from multiple angles.
syntaxerror53@reddit
Once did an all-nighter with a team back when I was studying, wondering why a program would not work, only to find at dawn that it was the dreaded "typo" in one variable. Missed by everyone multiple times.
Eraevn@reddit
I've been on both sides of that situation, but I usually try to at least throw out a "my bad" when I'm the problem. But I have also had issues dealing with a telecom provider where I had to go "I know the issue is on our end, but I need to know where the return traffic is trying to go, because it isn't getting to us in any form that lets me see what to fix." Once they finally sent a packet capture I got to go "as I mentioned previously, we upgraded the PBX and it now utilizes pjsip on the standard port of 5060; this is still trying to use 5160, and you assured us that it would just work. Please direct the traffic to use port 5060."
Really irked me cause I am a barely functioning idiot and it took me all of 2 minutes in Wireshark to find where the traffic was jammed up on their end lol
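The check described above is easy to reproduce. Below is a rough sketch (not anything from the thread) that uses scapy to tally the UDP destination ports in a capture, so you can see at a glance whether the far end is still aiming SIP at 5160 instead of 5060; the capture filename is illustrative and scapy is assumed to be installed.

    # Sketch only: count UDP destination ports in a packet capture to confirm
    # which port the return SIP traffic is actually aimed at.
    from collections import Counter

    from scapy.all import UDP, rdpcap

    packets = rdpcap("return_traffic.pcap")  # illustrative capture filename
    ports = Counter(pkt[UDP].dport for pkt in packets if pkt.haslayer(UDP))

    for port, count in ports.most_common():
        note = "  <-- standard SIP port" if port == 5060 else ""
        print(f"udp/{port}: {count} packets{note}")

The same check in Wireshark is just a display filter on udp.port, which is presumably the two minutes of work mentioned above.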
DefNotBlitzMain@reddit (OP)
In our case, we don't have anything to do with their network. It's 100% software, so we just throw the service on their server and tell them what to allow in the firewall and dip. There's a bunch of stuff that it does that we can troubleshoot and work on... If we have an internet connection.
Literally the only thing I can do in this case is try to send traffic through it and see if it comes back. If it doesn't, that's on them. They just refused to even pull in their network guys to try to troubleshoot anything outside of the server itself.
ahazred8vt@reddit
You kind of need to modify your service so that it automatically tells them "I can only ping this many hops" "traceroute can only reach these ips" "tell BlitzCo these errors" in plain language.
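A rough sketch of what that kind of built-in self-check could look like, assuming a Linux host with the usual ping and traceroute binaries on PATH; the probe address and the wording of the messages are illustrative, not anything the vendor actually ships.

    # Sketch: run a basic reachability probe and report the result in plain
    # language. Probe target and messages are illustrative only.
    import subprocess

    PROBE = "8.8.8.8"  # any well-known address outside the customer network

    def run(cmd):
        return subprocess.run(cmd, capture_output=True, text=True, timeout=60)

    def self_check():
        ping = run(["ping", "-c", "3", "-W", "2", PROBE])
        if ping.returncode == 0:
            print(f"Outbound connectivity OK: {PROBE} answers ping.")
            return
        print(f"I cannot ping {PROBE} at all.")
        trace = run(["traceroute", "-n", "-m", "10", PROBE])
        hops = [ln for ln in trace.stdout.splitlines()[1:] if "* * *" not in ln]
        print(f"traceroute dies after {len(hops)} hop(s):")
        for ln in hops:
            print("  " + ln)
        print("This points at the local network, not the service - "
              "please check cabling/switching before opening a vendor ticket.")

    if __name__ == "__main__":
        self_check()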
Eraevn@reddit
Yeah, that's annoying. Why delay bringing in the network techs when it's obviously a network issue? That would make too much sense lol
DefNotBlitzMain@reddit (OP)
It was a bit weird because they were able to remote into it and the server itself thought it had a network connection, but the rest of their network didn't agree as nothing could ping it or talk to our service on a port that was definitely listening and unblocked.
Not my place to know what cable they replaced, but it was DEFINITELY network related from step 1!
Eraevn@reddit
Ahh, could probably remote in because the in-house network was fine, so obviously if everything on the same network functions, it couldn't be an issue reaching out over the public internet lol. It was likely DNS, always DNS lol
DefNotBlitzMain@reddit (OP)
They said it was an Ethernet cable. Maybe whatever connects that particular branch of the network to whatever goes out? They very specifically said it was an Ethernet cable that got replaced. It also might just be one of those broken cables that's just damaged enough to stop some traffic but allow others. I've seen that happen back when I did on-site support.
Neuro-Sysadmin@reddit
Out-of-band management. Meaning they had one port/cable/switch dedicated to management and one (or more) for normal traffic, on a totally separate path. Covers both a security and a redundancy angle.
Eraevn@reddit
Entirely possible. Potentially a dual-NIC setup, one strictly for the company network and the other for external access, and the cable for external access got damaged or unplugged and the scream test was delayed lol
MisfitHula@reddit
My experience of being the IT Support for IT users boils down to:
1) They never Google the problem/query/question themselves first or even reboot.
2) Assume that if something doesn't work we have "blocked" it or "done a change" - like yeah, I just go around updating Enterprise app secrets and permissions for the lols
DefNotBlitzMain@reddit (OP)
Not googling is a 50/50, but I feel like everyone I talk to has restarted the server or service at least once before calling us...
But yeah, everything that breaks is because we changed something... Even during code freezes where we haven't changed stuff in weeks.
blind_ninja_guy@reddit
To be fair, I may reboot it and then want to know why the issue occurred. Rebooting is sometimes like replacing a fuse in an electrical system. Sure, replacing the fuse or rebooting fixes the issue for now, but why do I need to do that? Something downstream is broken, and if I actually want to solve the issue I really should figure out what is causing it instead of just rebooting every time. That fuse is there to protect the system at large; in this case, the reboot is only necessary because the component is fragile, or in some cases failed in order to prevent actual harm to the system.
Logical_Number6675@reddit
On the flip side as an IT person who has received support from other IT people, I can't tell you how much I dislike working with 50% of those IT people.
I try to be concise and thorough when providing details to cut through the BS, and they just ignore it and pretend they don't know anything, or act like you are an inconvenience to them. You end up explaining the same thing over again to 15 people. Or they hyperfocus on one thing you mentioned (not the whole) which has nothing to do with the task at hand (e.g. "We got this error and it seems we need to upgrade our server, can you help me upgrade? I can't find any documentation." "Sorry, I'm going to focus on the error and why it's happening instead of helping you upgrade."). Worst is when they don't share solutions and just say something like "try now" or "it works for me"... it's like, that's great, why does it work for you? Can you help get it working for me? The whole reason I reached out is that I've exhausted my knowledge about your product.
One time I spent 4 months with Sage 50 support because the software was painfully slow on any EntraAD-joined machine, and this was further exacerbated with Win 11. A computer which worked fantastically 2 minutes earlier (pre-domain) would come to a grinding halt as soon as it was EntraAD joined, even on a device with no policies or apps pushed to it. I was escalated to their "top engineer" and all they could do was blame our "Microsoft environment" and tell us to give the end user local admin (which did not fix it). Cool... I guess no one else is using Microsoft... Can you help us diagnose, or guide us in the right direction for investigating what your software doesn't like about our environment so we can fix it? NO.
jmjedi923@reddit
For the not offering solutions part, I get that because IT people are used to working with people who Don't Care/Don't Know. I'm more worried about confusing people with long emails and if they're curious I figure they can ask me what I did. They almost never ask.
DefNotBlitzMain@reddit (OP)
I'm on the other end of that, where you would be calling us for help with our product, so I don't directly feel your pain... But boy do people complain about other vendors when they talk to us!
We're pretty good so it's a lot of "well, I need you to pull in this other vendor so they can fix it on their side." "Oh no... Can't you fix it from your end? Is there any way we can change something on your end to make it work? I HATE working with them..."
I don't get how companies let it get so bad with their support sometimes.
curtludwig@reddit
"Of course I already changed the cable, I'm not an idiot!"
SkidsOToole@reddit
"We need a P2 ticket to create a bridge!"
Sir, your password is expired.
PanTran420@reddit
I don't miss my HellDesk days in healthcare.
Radiologist: "I need an urgent ticket to the PACS support team, PACS is broken."
Me: "I'm sorry to hear that, can you tell me what's going on with PACS?"
Them: "It's broken. I need a PACS Admin NOW!!!!"
Me (internal): "JFC I hate radiologists"
Me: "That's fine, I'm happy to page one for you, however I need to include the issue in the ticket. Is there an error message?"
Them: "No, there's no error message, just forward this to a PACS Admin."
Me: mutes call to bang head against desk
Me: "Again, I'm happy to page one, but I need to be able to tell them the problem so they can assist you better when they get the ticket."
Them: "It says my account is locked, but it's not, so PACS is broken."
Me: "I actually see a lock on your AD account. I've removed that for you, can you log in now?"
Them: "Yes, bye...."
I filed so many rude customer complaints against Radiologists. More than any other single job code.
problemlow@reddit
I think after the second round of that I would end the call. Regardless of what company policy is. When they call back they get told exactly why the call was ended. And if they're not civil from the get-go they get the call ended again on loop until they are. I am however unemployed for unrelated reasons.
PanTran420@reddit
I was one of the senior people on the desk, so I was the one problem customers got escalated to. I just dealt with them and reported them to my manager.
Langager90@reddit
Calling all units, calling all units: We have a code 16-11, irate radiologist in need of a "fix it" operation. Bring the big hammer. Over.
7ranquilcitizen@reddit
Kaiser?
PanTran420@reddit
Nah, not Kaiser. It's a regional system, but I'm sure we use a lot of the same software.
Hebrewhammer8d8@reddit
Or the vendor supplies the switches and firewall. The terminal can't communicate with the printers. Turns out whoever set up the terminals was using a small switch, and the switch's power plug was unplugged. Plug in the switch, and the terminal can communicate with the printers.
curtludwig@reddit
Years ago I was at an install where everything worked great the first day and literally nothing could talk to anything the next day. The customer swore up and down nothing had changed. I finally asked them to email me that. I got a really snarky "I don't know why you can't understand..."
Then I asked "So when you changed from Dell to Cisco switches last night, that was nothing?"
When they replaced the switches they forgot to turn on our VLAN.
Our install went a day late and the customer had the gall to demand that we do the extra day for free. We offered to take all our toys and go home. They paid for the extra day.
eddietwang@reddit
I remember calling Nintendo support as a child because my brand new Wii sensor wasn't working.
"Yes, of course I plugged it in."
It was, in fact, not plugged in.
PrisonerV@reddit
I didn't call support on it, but once I couldn't get my optical cable to work. Turns out that little bulb on the end is the protective cover for the fiber optic connector.
curtludwig@reddit
I was once working on a big drive array where the drives were mounted on sliding trays. Got into a state where one of the trays wouldn't close. I messed with it for far too long and finally somebody in support said "Did a drive pop up in the tray below it?"
I thought "Thats the stupidest... I better go check."
When I checked a drive in the tray below had indeed popped up. I am still very glad I took that time to go check.
Mr_ToDo@reddit
-man who fished a cable from the trash and managed to put a faulty cable in place of another faulty cable
aj4000@reddit
Similar but different. One of the parts in a system I support is a bucket that opens for a customer to remove a ticket, then closes once it's removed. The mechanism uses a belt driven by a motor to open the bucket. When these belts wear out the bucket can't close, so we replace them and throw the old ones in the bin.
"The bucket won't close. I already swapped the belt but it's still not closing so you have to replace the bucket."
"Ok, where did you get the spare belt from? We don't give them out."
"Oh. Well. Last time a tech replaced a belt on another bucket, I saw the one he took off in the bin and it looked perfectly fine, so I took it out and kept it just in case. The bucket... (insert totally unrelated thing that isn't a fault) ...so swapped the belt with the spare one."
This same dialogue has occurred so often that I've actually had to start cutting them before I toss them. It's in the bin for a reason, lady.
jeffrey_f@reddit
The real cause: the USER had no idea what you were talking about. The USER kept asking their peers, who, like the user, had just about the same knowledge: NONE! Two of those days were Saturday and Sunday. The user got frustrated, finally asked his friend about it, and that finally got IT involved. On the 10th day at 09:04 am, 4 minutes after the IT guy got involved, the IT guy plugged it in, because he was never told that the server was ready to be used.
DefNotBlitzMain@reddit (OP)
I don't work with users. This was a sysadmin who called to troubleshoot the service and by golly, he was going to troubleshoot the service! And just the service! Regardless of what the vendor for the service tells him is the issue, it has to be with the service!
jeffrey_f@reddit
a USER could be anyone, but I have worked with and around these people too
standish_@reddit
For want of a cable, the server was lost.
For want of a server, the work-weeks were lost.
For want of work-weeks, the contract was lost.
For want of a contract, the customer was lost.
For want of a customer, the business was lost.
All for want of a cable.
DefNotBlitzMain@reddit (OP)
Wasn't THAT serious! They were using our cloud stuff while their on-prem was down, but it's clunkier and slower than the local traffic one-click solutions they get from the on-prem.
standish_@reddit
That's good, sometimes it really can be. Good poem: https://en.wikipedia.org/wiki/For_want_of_a_nail
6890@reddit
I continue to believe that Network Admins in particular are sadistic assholes who relish watching me squirm, then double-dip in the pleasure by unceremoniously resolving problems in a way that avoids all blame.
I'll report that a client system is unable to reach servers and they'll proudly CC a whole chain of directors and VPs with drivel about checking ping, traceroutes, or firewall settings. And after I comply with their every whim for hours at a time, once they finally get off their ass to actually do anything, they'll only include those same VPs in a meek "the issue has been resolved" message. Anything even coming within the hemisphere of admitting fault will only include me on the message.
I work as a systems integrator and have dozens of clients. It isn't just one network team who manages to pull this shit. It's all of them, like it's part of the CCNA course or some shit: how to deflect blame and avoid any actual effort until proven necessary.
EDIT: I will say that one client fired their managed service provider and brought back all their network in house and it is a pleasure actually having people who are interested in resolving issues rather than filing tickets and passing blame. If I say system X can't reach service Y they're immediately logging into routers and pulling logs and asking me relevant questions. My only fear at this point is they're gearing up to drop us and take our work in house as well.
monji_cat@reddit
It's part of CCNA sadly
DefNotBlitzMain@reddit (OP)
You know, maybe that's why the person I was working with refused to pull in their network resources... Maybe he was just too used to that exact scenario you described and didn't want to deal with it... Or he was just stuck thinking it had to be our fault. Oh well.
JoshuaPearce@reddit
I think there's a default cause for every little niche in IT.
For android, it's the manifest file. For apple, it's the certificates. For C++, it's uninitialized values. If there's a cord involved, it's probably the cord.
monji_cat@reddit
Omg I just dealt with a certs issue on a Mac mini in the last week or so.
Ephemeral-Comments@reddit
As a network engineer I'll tell you: "it's the network's fault until proven otherwise".
At the company I work for, we have a phrase: "mean time to innocence". We use that to highlight the benefits of our telemetry product which allows you to go back in time and look at control-plane data. Very useful.
DefNotBlitzMain@reddit (OP)
I'm usually pretty slow to blame the network for weird issues that end up being caused by the network, but if you can't ping...
Ephemeral-Comments@reddit
If you can't ping then it can be any of the following:
- the host's own firewall dropping ICMP
- a wrong IP address, subnet mask, or default gateway on the box
- a NIC or driver problem
- a duplicate IP address
- security software quietly eating the traffic
You just proved my point.
"I can't ping so it must be the network".
No, that doesn't mean anything.
Sporkmancer@reddit
That seems like a lot of ways to say it's just the network!
/s (mostly, as a software dev myself I usually assume it's the program first)
Ephemeral-Comments@reddit
Exactly. And none of these issues have anything to do with the network. Yet it's the network engineer who, usually at 2AM, has to prove that it's just a host-owner who assumes it's the network because "he can't ping".
And then they wonder why network engineers don't want to interact with them.
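For anyone on the host side of that argument, here is a rough sketch of the kind of sanity checks worth running before escalating to the network team, assuming a Linux box with iproute2; the interface name is a placeholder. A local firewall or endpoint-security agent dropping ICMP is worth checking too, but that is harder to script generically.

    # Sketch: quick host-side checks before declaring "it must be the network".
    # IFACE is a placeholder for the real NIC name.
    import subprocess

    IFACE = "eth0"

    def out(cmd):
        return subprocess.run(cmd, capture_output=True, text=True).stdout

    link = out(["ip", "link", "show", IFACE])
    print("link: UP" if "state UP" in link else "link: DOWN - check cable/NIC first")

    addr = out(["ip", "-4", "addr", "show", IFACE])
    print("IPv4 address: present" if "inet " in addr
          else "IPv4 address: MISSING - check DHCP/static config")

    route = out(["ip", "route", "show", "default"])
    print("default route: " + (route.strip() or "MISSING - no gateway configured"))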
JoshuaPearce@reddit
I think there's a default cause for every little niche in IT.
For android, it's the manifest file. For apple, it's the certificates. For C++, it's uninitialized values. If there's a cord involved, it's probably the cord.
espositorpedo@reddit
Yeah, I learned that lesson the hard way on my home network, about 15 years ago. I still had connectivity to the Internet, and the gateway was only about 6 feet from my computer, so it couldn't be the cable, right?
I called the ISP and the very patient technician asked me to replace the ethernet cable. Sure enough….
Later, I had a job supporting Xbox 360 through a third-party company. There were two or three times when I solved connectivity problems by asking the customer to humor me and replace an ethernet cable. Sure enough…
6890@reddit
A failed ethernet cable feels sorta like a unicorn to me. I've never seen an actual failed cable that worked and then didn't. I have seen cables with broken retention clips that back out of the port... the cable is fine but won't stay seated. I have seen bad cables because sparkies are blind and can't wire them properly. I've seen NICs die. I've seen cables fail because a rat got hungry. I've seen fibre fail because it got bent or pinched. But never a normal cable just...stop working.
DefNotBlitzMain@reddit (OP)
Definitely a super rare unicorn. Over the last 10 years I've seen 2 cables fail. They've got a pretty good track record considering how many cables I used to check when I was on-site IT.
TrickyAd8349@reddit
Happens way more than you think possible.
HammerOfTheHeretics@reddit
In the 11 years I worked at Cisco I saw many cases of bad ethernet cables that just... stopped working for no obvious reason. We had a specific disposal policy for them so they didn't keep causing problems in the lab over time. It involved scissors.
espositorpedo@reddit
I can understand that. I haven't had a cable fail since then. Nor do I remember what the CAT standard was. I suspect newer cables and better standards don't fail as much. I don't even remember the exact symptoms. It was something like getting to the Internet, and then websites not opening properly. I mean, I was getting to the 'net, and getting to sites, but things were wonky. Replaced physical cable. Problems went away. (This was well after all the tried-and-trues of resetting / restarting the gateway and the computer and so forth. Replacing the cable was the last thing I did.)
DefNotBlitzMain@reddit (OP)
That reminds me of the time I wiped my entire homelab trying to figure out why half my lab wasn't getting internet but was able to talk to some stuff on the network... Turns out the cable that connects two switches together wasn't plugged in. Oops!
NaoPb@reddit
This reminds me of calling my ISP and them saying it was probably an outage happening a couple of towns over. Turns out it was my own fault. I had set my previous router up to give my Pi-Hole a static IP. And I had in fact just switched ISPs and received a new router. The Pi-Hole had worked on that new router, but a day later the IP reservation ran out and then my computers could not find the DNS server (the Pi-Hole) I had pointed them to.
It was only a small fix to set it to a static IP, the same one I had pointed my computers to. That way it shouldn't happen the next time I switch ISPs.
Zonnebloempje@reddit
This is kind of what I, as a user, have experienced. Fibre optic internet was installed almost 2 years ago. Last autumn, the guys who were "upgrading" our streets hit a cable on a Tuesday afternoon (around 15:00) and all fibre connections were lost, including ours. We called our provider, they told us what the problem was and that it was being worked on. At 16:30, the guys from the company that owns the cables came around, telling everyone that a cable had been hit and that they were working on it.
On Wednesday afternoon, everyone who had a black TV on Tuesday was happily watching TV again. Except for us. We had nothing. My husband works from home; he had gone to the office on Wednesday, but it is quite a bit of a commute, so he wanted to work from home.
I called the provider again: I had to wait, they were working on it. Thursday, same thing: no internet, TV, anything. Friday morning I called them again: nothing new, they were still working on it, and I still needed to wait. So I decided to call the cable-owning company, and they told me that the problem had in fact been resolved since Wednesday morning. Called our provider again and said they needed to call the cable-owning company, because the issue had been resolved and we still did not have anything. They'd check and get back to me.
Friday afternoon I still had not heard anything, so I called once more. I was getting pissed off at the lack of progress, so they finally transferred me to a manager. Told him everything, and told him to PLEASE contact the cable owners and do something about it! They did. He called me back. And there must be a problem with our network, because everything was resolved and working fine on the cable owners' end. (Duh)
Monday, they sent a technician, but he could not fix it. So on Tuesday they sent a ground worker (the ones who actually lay the cables and connect them), and it turned out there was hardly any cable in the box that should have held a couple of spare loops. There was nothing.
So not exactly 10 days, but 7 days of downtime due to 1 cable not being long enough to be properly connected.
DefNotBlitzMain@reddit (OP)
Ouch. Not clearing the outage on their end is the worst. Especially when it locks you behind an automated system telling you there's an outage. "No, there's not an outage anymore, please fix my stuff." Doesn't work too well on the robots programmed to repeat the outage message...
Pioneer1111@reddit
It really depends. Supporting the IT of other companies? Can be great if they are good at their part of the triaging/troubleshooting.
Supporting other tech-related departments as the on-site IT? Not necessarily. I've never had a general user feel like messing with sysprep is a good idea. I've had 3 people do so while supporting the various departments that are under the IT umbrella: one Exchange admin, one developer, and one director.
IT folks often have the rope to hang themselves with. And the self-assurance to not believe that the wobbly chair can and will fall out from under them.
I would still go back to that over supporting a primarily finance institution or a hospital though.
DefNotBlitzMain@reddit (OP)
Everything we do is based in the server and the industry is pretty tight on security, so it's almost exclusively sysadmins or whatever equivalent they have.
They definitely do have the rope to hang themselves with, but most of them know what they're doing. It's a pretty comfy gig, even if it is a bit repetitive sometimes.
Refusing to check their own network for a week definitely stuck out.
AintNobody-@reddit
Definitely! I've been that guy. But the other side of that coin is that you can get your head so far deep into your niche that you kind of forget the basics can apply to everything else. I've been that guy, too. Like, "you mean you can reboot...a firewall?" lol.
Pioneer1111@reddit
I've absolutely caused my own pain. Sometimes it can be swept away as a learning experience, sometimes that learning experience comes with downtime on prod.
Also I've actually had a wobbly chair fall out from under me at work - changing a ceiling mounted projector's bulb.
rhoduhhh@reddit
Like...one of the early troubleshooting steps when a device has no internet connection but everything else around it is fine is replacing the cable with a known good cable. This physically hurts me. I am sorry you dealt with that.
DefNotBlitzMain@reddit (OP)
To be fair, this is a server in a rack in who knows where. Both the guy I was working with and I were remote so it's not like either of us could go check it... I'm sure once someone physically there to troubleshoot looked at it, it was one of the first things they did. It just took forever to get someone to troubleshoot.
RememberCitadel@reddit
That's why they invented port channels. Of course, the type to not check the network cable are also the type to ignore a downed member of the port channel.
570250@reddit
I worked at a major NY hospital (you would know the name). After an all-night network upgrade, the pharmacy robot was the last to be tested and.... it failed. The robot was responsible for loading the pill trays for all patients in that building. At 6:45am the head of nursing was in my face - "THIS MACHINE HAS TO BE RUNNING IN THE NEXT 10 MINUTES OR PEOPLE WILL DIE WITHOUT THEIR MEDICATIONS!" - full volume.
I'm literally sweating and tearing at my hair, checking and re-checking network and system configurations. It was getting CLOSE.
Finally, in exasperation, I switched out the Cat5 cable. Bingo- started right up.
True story.
Ok_Pomelo_2685@reddit
I feel your pain! We have a major piece of software hosted in Azure and managed by the software developer's tech team. A 3rd-party app on their Citrix VDA is supposed to authenticate with our on-prem ADFS server. We spent three months on finger-pointing, trouble-shooting, and phone calls.
"It's your firewall!"
"No, it's our firewall!"
"You need to trust the site in your GPO."
"We don't use that GPO."
Just endless back and forths! Then one day, the hosting network engineer says "Let me get my colleague on the phone."
This guy jumps on the phone and resolves the issue in 5 minutes. The routes to our ADFS server were neither added to their on-prem firewalls nor the Azure firewall.
Three months of pain and this guy fixes the issue in 5 minutes!
itenginerd@reddit
I've been that guy so, so many times. I wonder sometimes how businesses survive without those guys, cuz there's not an infinite supply of us.
deeseearr@reddit
Fun story: If you work in a regulated industry like finance or telecommunication then you may be legally required to take two weeks of consecutive vacation every year. If the industry is any good you may even be required to not take calls or "just help out with a little issue" for the whole time you're away.
The reason for this has nothing to do with making sure that you take a break from work or have a chance to relax. It's to help identify parts of the business which rely on you and only you.[*]
Sure, you'll still come back from those two weeks to find a pile of stuff sitting on your desk that nobody else could even begin to understand let alone fix but at least you can use that pile to make a list of things that somebody else needs to learn how to do.
[*] Okay, that's not the whole reason. It's also so that your coworkers will have time to find out if you're running a money laundering operation under the table. How will they do that? Beats me, but if you're the sole person in the company who handles a lot of accounts all year long, it's going to be difficult to prove that you aren't doing that. Having someone else take over is supposed to reduce risk.
FeatherlyFly@reddit
"Oh, I account all payments every Friday. Why yes, there is a small discrepancy in accounts every week, but we handle cash, it's too be expected. I always handle the cash myself, I don't believe anyone here is dishonest but why offer temptation? No, I've never, ever taken a Friday off, I'm far too dedicated. "
Is that person skimming cash off the top? No idea, but I'd the discrepancy dissappears every time they're forced to take a Friday off it's worth asking the question. Lots of processes in that realm, where one person can easily hide part of their work flow basically forever if no one else ever has to take over. Enough processes involve money for it to be a temptation at the very least.
kai58@reddit
I have heard a couple of stories of people having some kind of scheme set up (money laundering or otherwise) that got exposed when they took a week off.
itenginerd@reddit
Yeah, we had to do similar during the 2008 financial crisis. 4 weeks off in the year, two unpaid, so ZERO contact allowed. Not consecutive like the financial sector requires, but same idea. Was the best thing that ever happened to us. Vacation rates skyrocketed the next three years after we learned to hand our stuff off effectively.
krennvonsalzburg@reddit
God, you just know the colleague who actually knew his shit was SO done with his know-nothing cow-orker as well.
3LostArrows@reddit
As the guy who installs, tests, and fixes the cables, I feel this pain. Usually it's because the engineer will show me a link status of up and insist the cable has to be OK, when changing the cable is a 30-second job that would at least eliminate a layer 1 issue.
smcbride27@reddit
"When was the last time you rebooted your local?"
Distracted-User@reddit
I swear every time I ask a user to reboot they think I'm just messing with them.
Like, I know we joke about it, but it solves a surprising number of issues. I'm not asking to be funny, I'm asking because it'll probably fix your issue.
Ok_Pomelo_2685@reddit
"I just rebooted my computer and it immediately loaded back to my Outlook screen."
__wildwing__@reddit
I work in manufacturing, not IT. The other day, a tech from the company that makes some of our measuring equipment came out to calibrate said equipment. One of the pieces was one we'd been having issues with: we kept having to power it on when no one had turned it off, and it was way out of permitted variance.
The tech is working on it and tells me to just reboot it. I ask him to show me, as this is expensive equipment and I don't want to fry anything. Even the computer that runs it was built by the equipment company. He hits the power button, unplugs the measuring machine, plugs the machine back in, hits the power button.
Only, the power button he is hitting is on one of those all in one monitors. Yeah, that’s the “computer” but that’s how the display turns on and off.
Oh, and the issue with having to boot up repeatedly? Someone wired a motion sensor into the main circuit. It was bringing down the automatic robot, along with the measuring machines. Sheer genius there.
Ok_Pomelo_2685@reddit
Ok, I laughed out loud on that one!
"Only, the power button he is hitting is on one of those all in one monitors. Yeah, that’s the “computer” but that’s how the display turns on and off. "
K_Boloney@reddit
I had one a few days ago where a device wasn't working but I was told it was plugged in. Unfortunately, the extension cord it was plugged into was not plugged in.
NDaveT@reddit
Users aren't always up to date with 140-year-old technology.
ChrisCopp@reddit
That's my go-to for connectivity issues. Check the cable!!!
NewUserWhoDisAgain@reddit
Tell me about it.
1 week of fighting the 365 team about the end user's license.
"License issue. Please look at their account."
"Reinstall office."
"Did that. Same issue."
"Reinstall office again."
"Did that. Same issue."
"Well I dont know what you want us to do?"
... Look at their account?