Use of LLMs for daily work: Good, or Bad?
Posted by Aldar_CZ@reddit | sysadmin | View on Reddit | 62 comments
Hello everyone,
By now, I've been a professional Linux admin for close to 8 years -- not too long, not too short.
Lately, I've been kinda struggling with this feeling of "shame" about relying on LLMs for my daily work -- be it brainstorming, or coming up with automation scripts instead of writing them on my own, something I've been doing for most of my career. And it makes me feel... ashamed.
On one hand, it is much faster, and of a higher quality than if I had written it by hand, but on the other, it feels like cheating. Like I lack the know-how or ability to do the same, only with more time required.
I don't believe that an LLM could _replace_ me per se -- I still go through the scripts and make sure they do exactly as I asked, but still...
What do you all think?
Master-IT-All@reddit
If you're not focusing your work through an AI agent now, someone else is and they will take your job.
HWKII@reddit
Truth.
florence_pug@reddit
It's a tool like any other. You can circlejerk all day about it being bad or good, but at the end of the day if you can use a tool to improve your job performance, I don't see what the problem is.
Commercial-Fun2767@reddit
You could say: It’s a tool like any other, and if performance is the only thing that matters, then using it makes sense.
The problem is: performance is NOT the only thing that matters.
frankztn@reddit
Man, this is it. Blindly trusting it makes me question if that person is even a sysadmin. Also, there's literally no excuse for not having documentation anymore with AI. I have it create deployment and migration plans with the information I provide, and I ask it to leave areas blank for me to fill in. It's like having an assistant, it's so nice. The extra two hours it would take me to create and format documents takes like 5 minutes now.
Sinister-Mephisto@reddit
Unless you work with toxic motherfuckers that say shit like "the code is self-documenting! If you wanna know more about the purpose of this app/tool and how to use it, just ask ChatGPT. ChatGPT just always one-shots a solution to any problem, no need to document."
-TheDoctor@reddit
There's a whole plot-point in season 2 of The Pitt about AI and how the output it gives you needs to be thoroughly reviewed.
If you use it like the tool it is, and self-verify the information it gives you where needed, it can be quite a helpful tool.
Real example for me just last night: I was comparing two Spotify playlists in Excel. I had the songs from one playlist in column A and the other playlist in column B. I asked Copilot from within Excel to create a comparison table between the two columns and leave blanks in cells where the data existed in one column but not the other. It spit out the table in like 2 minutes and even did extra work, giving me statistics on how many matches there were and how many unique entries there were between the two columns.
Could I have done that work myself? Probably. But as someone not very familiar with Excel, it was nice to just have this well-formatted sheet with extra statistics and info in a couple minutes with no real effort on my part.
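That same comparison can be sketched outside Excel in a few lines of Python (the playlist entries here are made up for illustration):

```python
# Compare two playlists: pair up matches, leave a blank in the other
# column for unique entries, and report match statistics -- roughly
# what Copilot produced as an Excel table.
playlist_a = ["Song 1", "Song 2", "Song 3"]
playlist_b = ["Song 2", "Song 3", "Song 4"]

matches = sorted(set(playlist_a) & set(playlist_b))
only_a = sorted(set(playlist_a) - set(playlist_b))
only_b = sorted(set(playlist_b) - set(playlist_a))

# Rows of the comparison table: matched songs side by side, then
# unique songs with a blank in the opposite column.
rows = [(s, s) for s in matches]
rows += [(s, "") for s in only_a]
rows += [("", s) for s in only_b]

print(f"{len(matches)} matches, {len(only_a)} unique to A, {len(only_b)} unique to B")
```

The point stands either way: the task is simple set logic, but having the tool do the formatting and stats for free is where the time savings come from.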
Commercial-Fun2767@reddit
I believe no one will argue with this use case. But there are different use cases.
florence_pug@reddit
For sure. I have it give me a change management report and fill out the pertinent information. It's so fast and easy.
Sillylilguyenjoyer@reddit
Exactly, as long as you aren't wholly reliant on it to function, I think it's fine.
FI_gure_It_Out@reddit
The problem is this is exactly what everyone is doing...
jhuseby@reddit
That’s been my take too. There’s lots of ways to use LLMs; if it’s benefiting you and your role, then it’s likely a good tool to have.
Commercial-Fun2767@reddit
I don't want to use it very much, for 2 main reasons, and I really try to restrain myself from using it. It's really awesome and powerful, though.
First, the cost. The way RAM prices have changed is striking. It's strange how we complain about the state of the world while blindly using every tool thrown at us, as if it were just a tool.
And the second reason is learning. I really learn less when I simply rely on AI, on several levels.
It's clearly not just a tool.
hva_vet@reddit
I use it to parse noisy log files, and it's usually very good at finding errors in the logs that help me resolve otherwise annoying problems in a shorter period of time. I would have found them myself eventually, but an LLM can find them in seconds and usually suggest a proper fix, so why not. The LLM just replaces endless Google searches and useless forum posts, and it's often very close to correct. I'm not using it to replace my knowledge, but instead to speed things up that otherwise would consume a lot of time.
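The filtering half of that triage is easy to do (and verify) by hand; a minimal Python sketch, with invented log lines and error keywords:

```python
import re

# Pull likely-relevant lines out of a noisy log. The keyword list is a
# hypothetical starting point -- tune it to the service you're debugging.
ERROR_PATTERN = re.compile(r"\b(ERROR|CRIT|FATAL|segfault|denied)\b", re.IGNORECASE)

def interesting_lines(log_text):
    """Return only the lines likely to explain a failure."""
    return [line for line in log_text.splitlines() if ERROR_PATTERN.search(line)]

sample = """\
Jan 01 10:00:01 host sshd[123]: Accepted publickey for alice
Jan 01 10:00:02 host app[456]: ERROR: connection to db refused
Jan 01 10:00:03 host kernel: app[456]: segfault at 0 ip 0000
"""
for line in interesting_lines(sample):
    print(line)
```

Where the LLM earns its keep is the next step: connecting the surviving lines to a probable cause and fix, which is the part worth double-checking.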
iamMRmiagi@reddit
it's not using it that's the issue, it's blindly following it or relying on it exclusively that's a problem. Make sure you understand roughly what it's doing, don't 1-shot everything and all's good.
There's also the argument that some things can be as easily google'd as they can be written by an LLM.
There's a difference between 'vibe coding' and 'AI-Assisted' coding.
-TheDoctor@reddit
Thank you for saying this.
tacticalpotatopeeler@reddit
I’m pretty much forced to if I want to advance.
I feel like I’m getting a lot dumber, but I’ve also been able to do things very quickly that are well beyond my current skillset.
I hate that I don’t feel like I’m learning much (I may be, at least a little). I do try to use it to learn but there’s a lot of pressure to just get shit done.
Kinda saps my energy for learning on my own after work hours too, so that sucks.
cubic_sq@reddit
I do lazyweb for stuff I haven't used in a long time, as this saves time.
deGrubs@reddit
Were you ashamed about using Google instead of just knowing, or reading the manual, in the past? I'm old enough to remember when internet outages first started to have an impact on my ability to do the work. Just like Google was a tool to be more efficient in finding the information you need, AI is a tool to make you more efficient in what you do.
mahlalie@reddit
I very much think of using an LLM as outsourcing my googling. At least, in the way I use it for work.
I do some vibecoding for personal projects, but even for that, you have to know when it's BSing you and troubleshooting skills still matter even if it's just to tell the LLM what it needs to help you inspect.
formerscooter@reddit
Why feel shame? I save myself hours of time just with documentation. Claude can take my rough scribbled notes when I build a process for my team and format them like the rest of our documents. I still review and edit, but I'm bad at writing out full steps. Just as a test, I uploaded screenshots of each step, and it wrote a solid document out.
-TheDoctor@reddit
I use M365 Copilot relatively frequently since I have a license for the full version. It's just another tool in my toolbox. I treat it like what it really is: a fancy search engine results aggregator. I trust its interpretation of those search results to an extent, but always self-verify anything that seems remotely questionable.
I tend to prefer Copilot for Microsoft-specific information/guides/tutorials. For the most part, it tends to know what it's doing/talking about when it comes to that side of things, which makes sense considering it has direct access to Microsoft's entire internal knowledgebase.
pm_me_domme_pics@reddit
Honestly if it's helping you it's all good. I know if I used it to try troubleshooting anything complex I'd be log diving on barely related stuff that llm pointed me towards rather than finding the actual source of the problem.
worm_dude@reddit
Don’t sweat the downvotes. It’s just people blindly lashing out.
My workflow before was heavier on Google and Stack Overflow, and that bit has shifted over to the LLMs. In fact, Google and Stack Overflow have been so enshittified now, it's not possible to go back; the LLMs are the primary way you have to search them now. Sure, I have to read through and verify everything the LLM writes, but I don't see how that's different than verifying code I pulled from Stack Overflow. And the whiteboarding phase is over in 5 minutes now, when that used to be the majority of the time sink.
Most of the sysadmins complaining about AI are just grey beards doing their usual complaining about new tech they haven’t even tried. Ignore them, as always.
sTaCKs9011@reddit
Just think about resource consumption and ask yourself: is this juice worth squeezing all these fruits (amount of water polluted / global temperature increase)? Lots of people are using AI for BS; at least you're being productive with it. But if you're asking it to write a script to clear Chrome cookies/cache/history, you're probably misallocating your resources.
Normal_Choice9322@reddit
I have zero shame using a force multiplier to get my work done
But I do think it's going to have a huge disruptive impact on the human experience in good and bad ways
CharlieTheK@reddit
My company (and most others) is spending a small fortune on AI solutions. We're expected to use it, and now and in the future it's something employers of all kinds will expect their people to be skilled in using.
It's just not a moral thing for me. I'm using it every day to enhance my work because that's what they want and it's not going anywhere.
I guess that might sound a little fatalist but I've watched enough coworkers turn over their entire job to CoPilot and be praised for it that I don't think less of myself because I saved a few hours having an LLM frame out a script that I needed.
TantumCouto@reddit
I usually use LLMs to develop any scripts I need. Any outbound emails, I have it write them (unless it’s an EHR vendor request, otherwise fuck them).
I can sit around all day worrying about the ethical use of LLMs (rightfully so), but there are much bigger corporations polluting the planet with their bullshit and facing no repercussions, so why tf do I have to worry if my 1 script every other week is going to kill the ocean. If I’m not going to use it, then my boss will certainly find someone who costs less and will use it to replace me.
I also never use LLM’s to generate art or anything like that because that’s just ultra lame lol
thetokendistributer@reddit
I don't feel bad. If I can delegate busywork and information gathering, and it allows me to shut off at 5 more consistently, then that is what matters.
missed_sla@reddit
If it's one tool of many, used alongside a working knowledge of your topic, it's fine.
If it's the only tool you use and/or a replacement for knowing your job, you could also use it to write your CV.
GullibleDetective@reddit
An LLM could have prevented this repeat topic.
Kindly_Revert@reddit
It's a tool like any other. I try to avoid outsourcing all of my thinking to LLMs though, because there are aspects of our business that it won't add to a project plan, for example.
My biggest concern is the next generation of admins who will primarily rely on them for everything, even when they are wrong. You are already seeing major fumbles with it in production, like at AWS where someone caused an outage with Kiro.
It helps, but you still need a human in the loop who understands.
meatballwrangler@reddit
my team is all in on AI and I'm already so fucking tired of seeing 3,000 word copilot novels whenever an easily googleable error is thrown. I actively refuse to use it
yumdumpster@reddit
Why? I use Gemini for GAM commands all of the time because I literally cannot be bothered to memorize every command and the accompanying syntax. This is something I could do, but the LLM just saves me time. There are too many things for the average sysadmin to keep track of all of them now; LLMs just allow you to increase your coverage, so to speak.
DavWanna@reddit
Recently started doing this with Claude and it's really neat how it can just immediately combine and analyze the output files to get straight to where I needed to get to.
derango@reddit
AI has a place.
The problem is when people don't understand what it is and what it does and try to use it for everything and blind trust in it.
It's good at things you can find in documentation especially if you give it the documentation. It's good at taking a lot of data and organizing it. It's fairly good at coding up scripts, especially simple ones.
It's bad at trying to take a bunch of symptoms and troubleshoot a complex problem. It doesn't know anything. It predicts. It doesn't think through a problem. It's not intelligent. It's real good at looking like it is.
R-tardGPT@reddit
This
Extra-Organization-6@reddit
nobody felt shame about using stack overflow. or tab completion. or config management tools that write configs for you. the only difference is LLMs are newer, so the guilt hasn't worn off yet. if you understand what the output does before you run it, you are still doing your job.
redyellowblue5031@reddit
It’s another layer of abstraction. The trick with this one is that it’s never truly rooted in fact. You must verify.
There’s ways to engineer the prompts to get closer to fact or bubble up uncertainty, but you still need to verify.
So unless you also feel bad about not programming in binary because you don’t really know what the compiler is doing, I wouldn’t sweat it too much.
Man-e-questions@reddit
Depends on how you leave the output IMO, or what you present. Proofreading it and adjusting so it doesn’t look/sound so robotic. Or just telling people the truth. Like I will tell my boss “hey I used Copilot to make a plan for ___” and present it as kind of a starting point etc
Zromaus@reddit
Some admins thought Powershell was cheating back when it was first introduced.
"You won't really know what's happening"
"It's doing too much for you!"
I imagine a sense of shame in some who made the switch in the early days was a very real thing.
Quattuor@reddit
Lol, coming from the Unix/Linux world, I was so stoked when PowerShell was introduced, and when they started introducing more and more cmdlets for OS configuration. Finally I could script most of the OS configuration for repeatable lab builds.
uptimefordays@reddit
LLMs are increasingly useful tools that, when used effectively, enable much higher individual productivity. There are certainly downsides, like reduced engineering knowledge of code/systems, but that's always an issue with increased output.
At the end of the day, these systems still benefit from expert humans in the loop to review and optimize their output.
Dontkillmejay@reddit
Speeds up a lot of menial tasks. I don't see an issue.
WALL-G@reddit
Don't feel bad, use the tools you have in front of you. Concerning LLMs, my personal vibe is if I can't read or troubleshoot the output, I have zero business using it in production.
My workplace has been on a major "AI" drive for a while, the direction seems to be "ensure you're using it, we'll figure out the vision later".
They've given us all Copilot licenses and nerfed it so hard that when I ask it questions for my job (think security groups, routing, firewalls) it either tells me the topic is forbidden or it asks me if I'm suicidal.
-GenlyAI-@reddit
I use copilot for a lot of things daily. It is a very useful tool to augment my job.
I also double check the random bullshit it spits out sometimes.
Bert__is__evil@reddit
If it runs and does the job, good.
But when your colleagues start using LLMs for daily e-mails between colleagues, the fun is over.
reol7x@reddit
I see it as a tool.
Do I have it write scripts for me? Absolutely.
I could write a script myself. It would definitely take me longer than 30 seconds to throw one out though.
What makes it ok in my mind is being able to understand enough of the script it's giving you to know it's doing what you intend it to.
My coworker also had it write scripts for him.
His task was to write a script that iterated and deleted files under profiles created in a weird line of business app...think program files/app/profile.
The AI wrote him a script that deleted everything out of C:\ProgramData. EVERYTHING.
He presented that as a ready-for-production script.
AI is no better than the person using it.
PS: I've never understood it, but I think my coworker is an example of one of those people that weren't ever able to "Google" something.
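The missing ingredient in that anecdote is a scope guard on the delete path. A hedged sketch of the idea in Python (the directory layout is hypothetical; the point is refusing to act outside the expected root):

```python
from pathlib import Path

def safe_delete(target: Path, root: Path, dry_run: bool = True) -> list:
    """Delete files under target, but refuse to act outside root.

    This is the check the AI-written script was missing: resolve both
    paths and verify target actually lives under the profile root
    before touching anything.
    """
    target, root = target.resolve(), root.resolve()
    if root != target and root not in target.parents:
        raise ValueError(f"refusing to touch {target}: outside {root}")
    deleted = []
    for f in sorted(target.rglob("*")):
        if f.is_file():
            deleted.append(f)
            if not dry_run:  # default to a dry run; opt in to real deletion
                f.unlink()
    return deleted
```

A dry-run default plus an explicit root check won't make a generated script correct, but it turns "deleted everything out of C:\ProgramData" into a loud error during review instead.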
Outside-After@reddit
We're too small and under-resourced at times here. It has its uses to get an initial leg up and complete the task or project sooner.
phunky_1@reddit
I think ultimately it will make people retain less knowledge and truly understand less.
However, I think it is an inevitable tool at this point and you are putting yourself at a disadvantage for not using it.
Could I review the documentation for powershell commands, review all the available parameters and write my own script?
Sure, that takes time to read through it all. Or an LLM can do it for me in about 3 seconds, I can review the code it generated to confirm it looks right and isn't doing anything it shouldn't be.
Is it always right? No, but it gets you 95% of the way there and you can finish it on your own in a fraction of the time.
playahate@reddit
It is just a tool. You are not someone who is just starting out trying to learn something relying on AI for everything.
spazmo_warrior@reddit
Dude, I feel the same way, but look at it this way. You still "quality" check the code, right? Look at it as any other tool in your arsenal.
kosta880@reddit
Excellent. Just don’t forget to keep your brain turned on and know what it’s doing. It’s a tool.
ivaneleven@reddit
It's all well and good if you treat it as an extension of your brain and a multiplier to your output: help script faster, brainstorm ideas based on your input, look up and summarize information that would take much longer to go through otherwise. But if you use it to replace your higher brain function entirely, then I would have a hard time trusting you.
Lando_uk@reddit
There's a guy in our place who uses Claude for everything. He does some amazing work and it takes minutes to produce results; he's really the golden boy. He gets all the interesting, cutting-edge projects compared to the old farts who get lumbered with the old fart technology.
Unless you're 55+ and looking for retirement, be like him.
prematurepost@reddit
I felt the same at first, but then I started seeing it like a calculator. If the result is right and you understand what you’re doing, using tools is totally fine.
OddAttention9557@reddit
At this point it would be a dereliction of duty for personal gratification to write admin scripts manually.
simulation07@reddit
Depends on if you automate yourself out of a job. I say it’s good - because I can now reach higher than I used to before. It’s a double edge sword for sure - but who cares? It’s all a game anyways. Let’s play.
TheDevauto@reddit
Why would you feel bad? Do you search Google for answers? Do you read man pages or ask someone next to you?
Dive deeper into it. Learn Claude Code and use it to maintain your scripts and tools. Figure out how you can use local models to perform tedious tasks.
A tool is just a tool. A sysadmin who is not lazy is no sysadmin at all. Lean into it and use it.
tarvijron@reddit
This is the equivalent of “Lately I’ve been using spellcheck instead of just knowing how all words are spelled or looking them up in a physical dictionary I keep at my desk. Am I still an author?!”
The business context and content of your work matters more than the memorization of some syntax or obsessively following some tech blogs for every framework/os/cloud provider announcement. Like with the author: the story is the story, not the spelling.
BadSausageFactory@reddit
no more musicians, now we're all conductors. unfortunately we don't need ALL of you to be conductors, so..
yeah no there's no way AI could replace your job even if you keep showing it what your job is and helping it learn. I have mine write wrong stuff and then I'm like, you so smart AI, yes skibidi yes. Been gaslighting that thing, it's my own murphy's private war.
JBD_IT@reddit
As long as you're using it to enhance your work instead of relying on it to do your work you'll be fine. I think AI used wrong makes you dumber.