Using LLMs for simple tasks?
Posted by hoppyboy193216@reddit | ExperiencedDevs | 89 comments
Has anybody noticed a huge uptick in engineers misusing generative AI for tasks that are both simple to accomplish using existing tools, and require the level of precision that deterministic tools offer?
Over the last week, I’ve seen engineers using ChatGPT to sort large amounts of columnar data, join a file containing strings on commas, merge 2 large files on the first column, and even to concatenate two files. All of these tasks can be accomplished in a fraction of the time using shell, without the risk of the LLM hallucinating and returning bad data.
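For reference, here is roughly what each of those looks like in the shell (a sketch; filenames are made up, and it assumes comma-separated files with the join key in the first column):

    # sort columnar data on the second column, numerically
    sort -t',' -k2,2n data.csv > sorted.csv

    # join a file of strings on commas (one line out)
    paste -sd',' strings.txt > joined.txt

    # merge two files on their first column (join wants both inputs sorted on the key)
    join -t',' -1 1 -2 1 <(sort file_a.csv) <(sort file_b.csv) > merged.csv

    # concatenate two files
    cat part1.csv part2.csv > combined.csv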
I understand that shell commands can be difficult for people unfamiliar with them, but it’s trivial to ask ChatGPT to write a command, validate how it works, then use it to make changes.
I see this practice so much that I wonder whether I’m missing something obvious.
blazingsparky@reddit
In seniors I’m starting to see it as a symptom of burnout where they just straight don’t want to do anything
ShartSqueeze@reddit
This also coincides with the behavior of 1) putting up massive PRs they didn't review and 2) getting frustrated/annoyed when being asked questions or for changes.
I'm not sure if it's burnout, but these tools are definitely exposing some laziness in some devs, rather than the proclaimed productivity boost.
DorphinPack@reddit
I’m sure there are lazy devs, but it doesn’t seem like a very useful generalization, in case that’s how you meant it or how others are reading it.
The idea of individual laziness in this current working environment (surrounding climate of death/fear, job market, wildly different and disruptive predictions of the future) is pretty fraught.
People I thought of as rock solid are zombies right now. Numb.
Is it lazy to reach for a tool like AI when the alternative is completely locking up? How can you tell if that’s what’s going on with someone in a competitive environment where vulnerability puts a target on your back?
This industry is a giant prisoner’s dilemma. I hope if the dev labor market does crash (doubt) it at least crashes in a way that we start looking out for each other.
thekwoka@reddit
Most are lazy and only borderline competent.
That is a useful generalization, since the most common outcomes of new processes will be dictated by the lazy and incompetent.
DorphinPack@reddit
I am curious: do you have examples of lazy devs setting poor process on such a systemic level? Are they managing up?
thekwoka@reddit
Why wouldn't it be?
What?
I said "most devs are lazy and only borderline competent".
idk what that has to do with setting poor process.
DorphinPack@reddit
This part?
I’m afraid this response has moved me further from clarity.
thekwoka@reddit
OUTCOMES of processes.
OUTCOMES.
DorphinPack@reddit
Isn’t that a bit of a nitpick though? Feel free to explain how they’re different; I’m sure they are from your POV, but it doesn’t solve the larger disconnect.
Whatever the artifact of so-called “lazy devs” is, I disagree with your broad generalization.
thekwoka@reddit
Process: Don't Speed
Outcome: People still speed
DorphinPack@reddit
Not gonna argue but not because I don’t disagree. I really, really do.
I just don’t find it worth it to try with someone who can have that much certainty about this. Doesn’t seem rational.
thekwoka@reddit
It's pretty easy to be pretty certain when you spend time with developers and see the code they think is acceptable.
met0xff@reddit
Yeah on days when I'm tired or not feeling well I can absolutely feel the allure of just feeding that stupid bug to Claude. On other days I feel like doing everything manually. Sometimes I guess it's neither burnout nor laziness but just "actually I have a million other things to do"
WizardSleeveLoverr@reddit
Yeah here lately mine is almost always “I have a million other things to do”
budding_gardener_1@reddit
Hmm - I have a colleague who behaves like this - I just assumed he was a jerk...maybe he's burned out.
donjulioanejo@reddit
And the irony is, they're having this burnout because executive leadership fired 50% of devs with the justification of "we have AI now!"
So they're pretty much doing exactly what senior leadership wants them to do.
biosc1@reddit
Hi, that's me. Our team got cut from 6 to 2. Guess who vibe codes the crap out of their day now while interviewing elsewhere?
CodacyKPC@reddit
The Pope?
hooahest@reddit
wonder what position he's looking for
Ch3t@reddit
Phrasing BOOM!
hooahest@reddit
nice
AntiqueTip7618@reddit
Oh hey, it me
thekwoka@reddit
or incompetence and laziness even more so
pl487@reddit
LLMs won't generally hallucinate in this situation. They will write a Python program to process the data and return the output. It's just writing code with fewer steps.
HosseinKakavand@reddit
Totally. Deterministic tools beat probabilistic ones for data transforms. My rule: use AI to draft the command, then execute with native tools. Guardrails help. Keep a read-only copy, sample before and after with head and tail, check counts with wc and checksums with md5sum, diff small subsets. Prefer csvkit or xsv for CSV, join and awk for merges, jq for JSONL, and sort with LC_ALL=C for speed.
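A sketch of those guardrails, with made-up filenames:

    # keep a read-only reference copy before touching anything
    cp data.csv data.orig.csv && chmod a-w data.orig.csv

    # sample before and after
    head -n 5 data.csv && tail -n 5 data.csv
    head -n 5 output.csv && tail -n 5 output.csv

    # row counts should reconcile (a merge should not invent rows)
    wc -l data.orig.csv output.csv

    # checksum proves the source was never modified
    md5sum data.orig.csv

    # spot-check a small subset against a hand-checked sample
    diff <(head -n 100 output.csv) expected_sample.csv

    # byte-wise sort: faster and locale-independent
    LC_ALL=C sort -t',' -k1,1 data.csv > sorted.csv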
We’re experimenting with a backend infra builder. In the prototype, you can describe your app and get a recommended stack, Terraform, and managed infra. Would appreciate feedback (even the harsh stuff): https://reliable.luthersystemsapp.com
larsmaehlum@reddit
The LLM will most likely just generate a python snippet and run the data through that, so I just ask for the snippet itself.
kagato87@reddit
I've been doing this.
Not "convert this model to this structure" but "give me a script to convert this model to this structure."
Then I went through it, asked it why it was changing a certain property. Stupid thing made an assumption about my data structure.
nullpotato@reddit
This week I struggled to get the LLM to stop renaming fields named tpm, mangling them into tmp. Like, yeah, I know tmp vars are common, but that's not what the class members are called.
indigo945@reddit
Frankly, if you have class members by the descriptive name of tpm, your problems started elsewhere.
thekwoka@reddit
They mostly don't. They just try to do it with normal gen AI.
Organic_Battle_597@reddit
Exactly. And for something repetitive, it's expensive and slow to let something like Claude Code churn through it. Better to just give me a script, I can check it out and tune it as I see fit, then run it on my own machine when I'm ready.
Trio_tawern_i_tkwisz@reddit
That is also a symptom of not knowing how to use AI tools efficiently.
Instead of asking AI just to merge data, they should ask it for a shell script or regex that does it. This way, a non-deterministic tool gives a deterministic output, which one should understand first and only then run on their own.
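For example, instead of pasting the data in, ask for a reviewable artifact (a sketch; the date rewrite is just an illustration):

    # a regex the AI drafted and you reviewed: DD/MM/YYYY -> YYYY-MM-DD
    sed -E 's#([0-9]{2})/([0-9]{2})/([0-9]{4})#\3-\2-\1#g' input.txt > output.txt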
lordnacho666@reddit
Any one of them is simple to do with a shell, but there's a heck of a lot of shell commands you could use. What's wrong with asking the LLM to tell you the command?
hoppyboy193216@reddit (OP)
Please actually read the post…
BootyMcStuffins@reddit
I’ve also seen a lot of engineers say “I could have done this in 2 minutes and Claude took 10, Claude sucks”
It’s like using a CNC to cut a 2x4
EmbarrassedSeason420@reddit
I usually start by asking an AI tool for suggestions.
Then choose what I think is the best one.
If it's just a few simple shell commands, I'll just run them.
Otherwise I ask the tool to make an implementation plan.
WrennReddit@reddit
That's engineers working through those stupid token consumption quotas set by the C suite.
Western_Objective209@reddit
I haven't seen this. An LLM can write the script for you trivially, so having the LLM do it manually is quite stupid
DorphinPack@reddit
This is too simple a statement; it doesn’t acknowledge any of the drawbacks beyond incorrect output.
There are plenty of cases where an LLM can but shouldn’t be used. Free money is drying up and costs for useful models aren’t going down at all. Cognitive atrophy is a real concern even just beyond the idea of keeping your skills sharp.
I’ve had a lot of good ideas while doing “busywork”.
Western_Objective209@reddit
sure, you can also learn a lot working on a system disconnected from the internet with just documentation files, a compiler, and vi. that's kind of outside of the scope of what we're talking about though
DorphinPack@reddit
That’s a strawman.
Western_Objective209@reddit
Feel like you're the kind of guy to call anything a strawman
DorphinPack@reddit
Were you trying to do a bit by responding with a textbook ad hominem? I’m having a slow day lol
Western_Objective209@reddit
Sure we can play these games if you want.
Off-topic: OP mentions people using a tool incorrectly, and you are derailing the conversation to soapbox.
Incorrect: cost per token has dropped dramatically in a short period of time.
Generational bias: using tools/abstractions you are familiar with is "keeping your skills sharp"; using new tools/abstractions is "cognitive atrophy".
Mischaracterization: I gave an example of pre-internet technology, not creating a weaker argument from whole cloth.
Again a mischaracterization: I called out a pattern in your line of argument, not a personal attack.
DorphinPack@reddit
Yeah I am soapboxing. Good callout!
If you're watching the $/MT prices, I get that perspective. I see the same numbers. I have yet to find someone in my network using AI in anger for something complex without complaining that it takes more tokens than it should and the cost hasn't dropped low enough. They reach for larger models because repeatedly trying to get it working with the smallest possible model can take even longer and thinking gets short term. That's what I mean by it's not that simple and I really should have just explained that POV instead of getting on my soapbox. Thanks again.
Now. Take a gander. https://www.media.mit.edu/publications/your-brain-on-chatgpt/
It was a strawman because I am not saying we should shun new tools. I am saying they should be used with care.
And then finally, you said I "seemed like the kind of guy" and we're going to have to disagree on your semantic assessment, friend.
Cheers!
Western_Objective209@reddit
Moving the goal posts. "I want it to be cheaper faster" is not really a valid criticism. The agentic workflows use more tokens, but it's doing more of the work for you.
Everybody is aware of the paper. Yes if you use chatgpt by copy/pasting a question into the prompt, then copy/paste the output as your answer, you'll retain very little information about the task.
Railroading someone's response and then throwing out-of-context strawman accusations at them is kind of rude?
If someone wants to maximize their understanding, they should be disconnected from the internet and just read documentation, the way people did pre-internet or early internet. That's how I learned to program, and I've had to work in air-gapped environments like that; surprisingly, you'll end up being faster after a few weeks than if you just googled Stack Overflow/blog posts about topics and let that info pass through your brain.
Now that we've reached a higher level abstraction where a machine learning model sits on top of internet data to make querying even easier, people have picked a new brain-rot boogeyman
DorphinPack@reddit
Yeah okay, dawg. You do you. 100% not trying to railroad anyone :)
DorphinPack@reddit
Ad hominem!
I will say, I’m not usually that girl but I’ll do it again if it means we get the hat trick. Let’s do it!
HaMMeReD@reddit
It's fair to say that LLMs can't process tabular data, as tokens, in a deterministic way.
What they can do, however, is identify that data, extract it directly from your prompt with tools (not process it), write scripts around it, and hand you the outputs or let you run them in your browser/IDE.
It's also worth noting that if your transformation isn't part of some pipeline branch and is just a leaf node, there's no harm in handing it to an LLM to analyze, as long as you expect to verify it after. Not every use case requires 100% accuracy.
DorphinPack@reddit
What do you think I was trying to say? LLM bad?
I don’t mean to dismiss, I just think there’s a disconnect. There’s a lot of money being thrown at borked LLM-backed solutions just to pad resumes. Optimists will find plenty to reassure themselves, but realists are all about understanding tradeoffs.
farte3745328@reddit
GPT will write a script and run it on your data for you and return the output. They've gotten smart enough to not try to process all the data in the LLM itself.
serious-catzor@reddit
"I understand that shell commands can be difficult and asking AI is trivial"
I think you answered yourself here😁
Bash, sed, awk and whatnot are all extremely powerful... they're also extremely incomprehensible to a novice or even an intermediate.
AI can give it to you in any scripting language you want.
My opinion about hallucinations is: how many lines of code can you write while being confident you didn't introduce a bug? And how do you ensure, despite that, that your code is correct and works?
Whatever you did to ensure your code is good... why can't you apply the same thing to AI code?
familyknewmyusername@reddit
You misread the post - they're not asking the AI to write a script. They're giving the AI the data and asking it to output the data but sorted
serious-catzor@reddit
Yes, I most certainly did.
Wow, that is so stupid it didn't even occur to me😅
hoppyboy193216@reddit (OP)
Did you read anything after the line you quoted?
itsgreater9000@reddit
I've had to deal with this a lot. I've made previous comments in this sub about it, but it feels like developers are using ChatGPT and others without understanding their tools. I've had to keep an engineer away from some of the dumb stuff that ChatGPT generates, though. I can't get over that devs don't want to learn to script something in bash or python, or find tools that do these things well - they'd rather let some LLM do it (a good example: someone used an OpenAPI spec to generate an HTTP client in Java via ChatGPT... ignoring the many tools that could introduce it as part of the build process, or a tool that would do it... correctly).
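For the record, the build-time route is a one-liner (openapi-generator is just one of several tools that do this; the spec path and output dir are made up):

    # generate the Java client from the spec instead of asking ChatGPT to write one
    openapi-generator-cli generate -i api.yaml -g java -o ./client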
I don't know what to do. Devs are lazy, offer them a one stop shop, and they'll get creative. There's nothing to be done. Maybe devs will learn. But tbh, if they didn't before... I'm not sure they'll do it now.
failsafe-author@reddit
I absolutely use LLMs to write simple programs to do simple jobs. Then make sure I go over everything it outputs.
I will NOT trust LLMs to manage data for me. Yikes!
garfvynneve@reddit
We’ve actually just started advocating its use for simple tasks that otherwise would sit on a backlog because they never become priority.
The reason being the agent chugs away in the background and the resulting PR is either objectively right or wrong.
It’s quicker to just prompt the AI and then approve the PR or discard the failures than to sit in a backlog grooming session debating them.
dragonowl2025@reddit
Yes, the successes have saved me so much time, and the failures are often obvious. Sometimes you just can’t give it the right context, or it’s too much and you’re just going to get worthless results, but used well it’s very obviously a productivity increase.
I almost feel like it’s an ego thing; nobody got this mad when Stack Overflow didn’t have the answers.
Therabidmonkey@reddit
I get the most use out of LLMs for simple stuff like this. I haven't had much issue with hallucination when I provide input data and explicitly call out what I need.
Yeah, of course I know how to write a file parser and create DB insert statements for whatever batch processing I'm doing. But I'm not doing it in the 30 seconds of a prompt.
ComputerOwl@reddit
To be fair, there are enough configuration files that almost no one understands, with or without AI. There are just too many tools with so many levels of indirection that everything essentially becomes magic strings, or whose config files contain an actual string variable with the magic commands once you hit some level of complexity.
Therabidmonkey@reddit
Sure, but a lot of these values have defaults, and when they're defaults they're the opinionated defaults of the tool/framework/library. When the LLM makes that shit up, you have no idea what philosophy influenced that description.
ComputerOwl@reddit
For those almost undocumented and/or overly complex trial-and-error games that many tool vendors call their config files, I am not interested in the design philosophy behind them. First and foremost, I need something that works. If it doesn't work well enough, I can still optimize it later.
StriderKeni@reddit
What I’ve noticed is engineers are reading less, not checking the existing documentation and just relying on what LLMs say.
It’s annoying when they claim that X function takes some parameter (an LLM hallucination) when it’s not even mentioned in the official documentation.
NightSp4rk@reddit
People are struggling to keep LLMs relevant at this point.
nullpotato@reddit
I use them extensively at work, they can be a powerful tool if you use them as such and understand the limitations.
carllacan@reddit
Someone at my job asked chatgpt what the population of a certain city was, because searching for "name of city" and clicking wikipedia was apparently too much effort.
So yeah, people are using it for everything even when a faster and more accurate method exists, simply because the perceived effort is less and they are insanely lazy.
DigThatData@reddit
this is IMHO the biggest problem wrt LLMs and energy usage. The whole thing about how much water goes into training and running the models, that's all mostly out-of-context rabble rousing. But if it becomes common to involve LLMs in procedures that were previously done by shell commands or simple IDE features like find-replace, that's a significant additional energy expenditure for no reason.
ButThatsMyRamSlot@reddit
I wouldn't sleep on structured output.
You can constrain the model's output to adhere to a structure, e.g. a JSON schema, and provide input data to be formatted into that structure. It's very useful for formatting datasets for supervised machine learning.
It's important to chunk the contents and process them in batches, as the accuracy of an LLM decreases as the context window grows.
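As a rough sketch of what that looks like against OpenAI's chat completions endpoint (the model name, prompt, and schema here are placeholders):

    curl -s https://api.openai.com/v1/chat/completions \
      -H "Authorization: Bearer $OPENAI_API_KEY" \
      -H "Content-Type: application/json" \
      -d '{
        "model": "gpt-4o-mini",
        "messages": [
          {"role": "system", "content": "Extract each record into the schema."},
          {"role": "user", "content": "Ada Lovelace, born 1815"}
        ],
        "response_format": {
          "type": "json_schema",
          "json_schema": {
            "name": "person",
            "strict": true,
            "schema": {
              "type": "object",
              "properties": {
                "name": {"type": "string"},
                "birth_year": {"type": "integer"}
              },
              "required": ["name", "birth_year"],
              "additionalProperties": false
            }
          }
        }
      }'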
Hot-Profession4091@reddit
“We’re going to build an MCP server to help devs create new endpoints/models!”
…My dude, we’ve had boilerplate generators for decades.
donjulioanejo@reddit
rails g model foobar
GrogRedLub4242@reddit
BINGO. 20+ years ago I could write shell scripts to generate boilerplate for common patterns needed in new projects. Nothing new here, kids. Templates have been a thing for a looooooong time.
bante@reddit
Not to mention those boilerplate generators don’t use more energy than a microwave and then screw it up 80% of the time anyway.
forgottenHedgehog@reddit
If you think inference takes several hundred watts, then you have no idea what you are talking about.
AverageFoxNewsViewer@reddit
Relevant
GrogRedLub4242@reddit
My rule of thumb at the moment is that the more someone uses an LLM in software development, the worse they are, and their code. It sends a signal, in broad strokes anyway. And when one uses an LLM too much or on the wrong thing, it causes the user's own skills to atrophy, becoming less able to add value without training wheels. They become more of a commodity. Racing faster towards a global bottom of only the most desperate or dishonest.
Good luck with that strategy, kids!
blacksmithforlife@reddit
Because there are organizations (like the one I work at) that are forcing devs to use AI tools. And if you don't use AI tools, then you will get bad performance ratings (or fired). So, it is natural then, that people will use AI for terrible things, just so that management is happy that we are checking a box.
itzmanu1989@reddit
It is supposed to be the next step in the evolution. The Romans replaced part of the gold content of their coins with copper; as time went by the proportion kept increasing, and ultimately the gold/silver coins just had a coating of them.
What happened next is the lesson we have to learn.
polacy_do_pracy@reddit
I use it to generate the commands to do these tasks. I really can't remember how jq works, for some reason. Or awk.
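The kind of one-liners I mean (filenames made up):

    # jq: pull one field out of every object in a JSON array
    jq -r '.[].name' users.json

    # awk: sum the third comma-separated column
    awk -F',' '{ total += $3 } END { print total }' costs.csv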
Significant_Mouse_25@reddit
Had a dude use an LLM to create some mock JSON from a DTO. He spent thirty minutes with it, because it kept screwing up, before he asked me for help. I asked him why he didn't, like, use an API tester or check the browser network tab. You know, the old school way. Took him ten seconds and he got what he needed. The application was already running locally.
thephotoman@reddit
That’s basically all use of LLMs, if you’ve been around the block enough.
That said, I’m using it a lot for helping me fix CSS things. I’ve always kinda regarded stylesheets as a kind of black magic, as I grew up in the world of table-based web design.
dbxp@reddit
I've noticed some people on my team doing the same; it's far better to get the LLM to write a program to do the work for you, as that's far more precise and predictable.
timmyctc@reddit
I notice this a lot when people wholesale copy and paste code. It will truncate method param names, etc. If people were pouring actual tabular data through it I would go insane. There's absolutely no guarantee your data will be preserved.
kagato87@reddit
Almost a guarantee the data won't be preserved. The hallucinations I've seen in just the past week...
Potential4752@reddit
ChatGPT is less likely to hallucinate when processing data than you think, and the consequences if it does are lower than you think.
I used to refuse to use it for stuff like that but then I asked myself “if it hallucinates will I be fired? Or am I taking this data too seriously”. Now I use ChatGPT for minor data work all the time.
Ok_Tone6393@reddit
Fully agree on this. I once asked AI to convert a text file to another format just to see what it could do. The results were terrible; it was much better to just ask it to write me a simple script or program to do it.
LossPreventionGuy@reddit
I use LLMs for all sorts of dumb stuff. Alphabetize this list, what's four percent of 87, all that stuff.
pydry@reddit
My rule of thumb is that if it seems like you are working at a company filled with a bunch of devs who aren't very smart, you are probably leaving a bunch of money on the table.
I noticed this a few times when I worked in lower paid jobs for various reasons.
forgottenHedgehog@reddit
No, I have not noticed that.