The Mythical 10x Developer? More like 0.8x: Why We're Moving Away From LLMs
Posted by gametorch@reddit | programming | View on Reddit | 55 comments
phillip-haydon@reddit
“We couldn’t figure out how to use AI and so we threw in the towel and are prepared to be left behind”
chat-lu@reddit
Isn’t the point of those tools that you don’t have to learn shit? How can someone be left behind? You just have to join when the tools start working.
phillip-haydon@reddit
No? That hasn’t been the case at all since day 1. The tools are only as good as the user.
gametorch@reddit (OP)
The tools work today. That's the reason the site even exists. They wrote the entire thing.
chat-lu@reddit
Yes, the tool can produce low quality work today. I’d rather not push it to prod.
gametorch@reddit (OP)
Low quality work that has 50 paying users, 250 sign ups, 10 million visits and never once fell over, never once broke 10% cpu usage! HAHAHAHAHAHAAHA I am so glad I did this social experiment
gametorch@reddit (OP)
I completely agree with you! This article was a social experiment. Please see this comment: https://www.reddit.com/r/programming/comments/1lr680x/comment/n18p1d4/
sleeping-in-crypto@reddit
I’m convinced a lot of early adopters of LLM-based coding just used them wrong or set the wrong expectations. I saw it happen in my own teams, guided them toward saner usage, and I’d argue that, at least for us, productivity has actually significantly increased. But that’s because we stopped trying to use them where they aren’t appropriate.
These assistants need clear prompting. That means clear instructions, a clear understanding of where to find things, and explicit expectations for the output.
We tried Devin and it was miserable and resulted in exactly the experience described in the article. We tried Copilot with similar results.
Agents will happily run off and do 1000 things you didn’t ask them to do if you aren’t careful.
We reduced our expectations, discussed as a team how to prompt to get the desired output, set up tooling such as rules files and per-project MCPs, and are now able to move much faster than before - but sure, it wasn't straightforward. And we are all senior or better engineers, most of us staff or principal level.
I think as a whole we appreciate that you can’t remove the engineer from the process. Once you accept that, you can get far better results out of the LLMs, because you stop expecting them to finish that last 5% of detail that LLMs seem to struggle to tie together.
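As an illustrative sketch only, a per-project rules file of the kind mentioned above might look something like this (the file name and every rule in it are hypothetical, not this team's actual setup):

```
# .cursorrules (hypothetical example of a per-project rules file)
- Read docs/ARCHITECTURE.md before proposing any change.
- Only touch the files named in the prompt; do not refactor unrelated code.
- Follow the existing error-handling pattern; no new dependencies without asking.
- End every response with a list of files changed and a one-line reason for each.
```

The point is not the specific rules but that the constraints live in the repo, so every prompt starts from the same expectations.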
granadesnhorseshoes@reddit
Based on limited playing around, I get the impression they are great time savers in the narrow context of producing decently well-understood boilerplate and generic implementations, then leaving you to adapt the output to your specifics. E.g. the 80/20 rule still gets you a 20% productivity boost by doing 80% of the boring/repetitious coding.
I'm sure some book pushing a "like unit testing, but for vibe coding" methodology will be all the rage in the next couple of years. It will codify, in the ugliest and most bass-ackwards way, steps to produce generalized/self-contained chunks of code you can then manually graft into your Frankenstein's Monster.
Bleyo@reddit
Same. I shake my head at these articles.
LLMs aren't replacing developers yet. However, developers who use LLMs correctly will be replacing developers who don't very soon.
gametorch@reddit (OP)
I completely agree and whoever upvoted my social experiment blog post is a FOOL lmao: https://www.reddit.com/r/programming/comments/1lr680x/comment/n18p1d4/
DukeNukus@reddit
Soon? I replaced myself with an LLM. Finished probably 100 hours of work in about 20 hours and did little actual coding myself on a JavaScript tool.
theboston@reddit
The problem I've had is that even when I do have clear prompting and instructions to feed the LLM, it feels like I could have done the work just as fast coding it up myself, and I'd also be more familiar with the code. In a lot of cases I just know my tools, and it would be faster for me to zip around and code it up rather than crafting the perfect prompt and still needing to go fix things. That's just my experience though, and there are situations where they can really help.
I do agree that expectations for LLMs are way too high and that it leads developers to use them wrong. I'm working with an eng right now who is using it for everything, and the code is a mess. If I try to discuss architecture changes he made in a PR, he has to go look through the code to even know what "he" decided to do. It's horrible.
AReallyGoodName@reddit
If you're working in a well known codebase everyday with no new libraries then perhaps.
AI is your replacement for the "search online for how to work with a new framework/library -> find relevant code samples -> work them into what you want to do" cycle. It shines at this, honestly. To the extent that I'm learning things about certain obscure libraries I've had to integrate ("oh, it has an option to do that!"), since LLMs are amazing repositories of code samples for achieving some goal.
They work really well for that sort of thing.
DukeNukus@reddit
I just coded up a JavaScript component for a client via ChatGPT in about 20 hours that would have taken 100+ hours otherwise, probably 2,000 lines of JS. A lot of it was: "Here is the current code, review and summarize."
"Alright, now I want to do X."
I was able to tell it at one point "seems the X functionality broke at some point" and it reviewed the code, found the issue, and gave me code to handle the issue.
The issue with GPT is it's a balancing act between being too specific and not specific enough.
If you are too specific you often run into the XY problem and may be solving the wrong problem, or solving it inefficiently.
If you are too general, ChatGPT has to "fill in the gaps" likely in ways you might not like.
not_a_novel_account@reddit
If it takes you 2 weeks to write 2k LOC, on the one hand I understand why you think LLMs are a miracle, on the other hand like what?
LOC is not an awesome metric but even for complicated C++ metaprogramming templates and polyfills I have an MR that size about every 2 or 3 days.
reazura@reddit
I'm conflicted about this as well. I needed to just remove a few lines of a repeating pattern to insert somewhere, and I couldn't be arsed to do the whole "You are a helpful assistant" shit and just wanted it fixed with simple prompts. The first time I did it, it did it well enough that I was happy to wait, but then copying the exact same simple prompt later on for a different result set had me retrying over and over just to get better results. And at that point I was paranoid, because it would often hallucinate other changes, so I had to double-check each line again, and lo and behold it did delete some "duplicates" that never even existed.
shanem2ms@reddit
Same experience here. It still requires the critical thinking that is at the heart of good engineering, just applied at a different level. My workflow has been to first have the LLM plan out the task from my "back of napkin" prompting and write that plan out to a text file. We then iterate on that file until I am happy with the plan. I may then have the LLM break that plan up into smaller steps, each measurable. Rinse and repeat. Once done, it's a matter of having the LLM follow our plan, which is written in clear steps. Each step is more digestible by the LLM and less likely to cause it to lose context. It honestly has been more disciplined than how I worked before, and even though it sounds like a lot of steps, I've seen my productivity jump much higher than before when working this way.
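Purely as an illustrative sketch (the task and step wording below are invented, not this commenter's actual plan), the intermediate plan file might read:

```
# plan.md (hypothetical)
Goal: add CSV export to the reports page.
1. Add an "Export CSV" button to the reports view - done when the button renders.
2. Generate RFC 4180 CSV from the current report rows - done when unit tests pass.
3. Wire the button to trigger a file download - done when a sample report downloads in the browser.
```

Each step carries its own measurable "done" condition, which is what keeps the model from losing context mid-task.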
spiderpig_spiderpig_@reddit
My observation with this approach is that it's great for some kinds of work - new project scaffolds, technology porting, even to an extent language ports. But it falls apart in really painful ways when working on novel, significantly sized projects.
No shade to you personally, but I wonder how many projects fit this scenario? Maybe that explains the gap.
chat-lu@reddit
What if instead of English, we used a specialized and much more precise way to tell the machine exactly what we want. Some kind of code. We could call it coding.
muntoo@reddit
I don't understand some of the complaining.
How on earth are y'all (ab)using LLMs?!
savage_slurpie@reddit
LLMs are extremely useful for speeding up development and I don’t know how any serious dev says they aren’t.
As long as you can provide good context they will speed your process up. Gemini 2.5 pro feels like a god damn cheat code for me.
Professional-Trick14@reddit
It's almost like the author has no fucking clue how to use LLMs to program. You tell it what to do and how to do it, not the result you want. And then you have to review the code like it's a junior. I've been a dev for 12 years now and I'm working faster than ever before. I can't complain about it.
gametorch@reddit (OP)
I am the OP and the "author" and I completely agree with you:
Now this begs the question: are these upvotes driven by insecurities or a paid bot army? Who knows! 😂
chicametipo@reddit
I’m pretty dumb and even I can clearly see the truth. You wrote a bad article and now you’re trying to cover your ass.
theboston@reddit
Don't bother arguing with gametorch, he's a junior dev and all he does is make simp posts about AI.
gametorch@reddit (OP)
HAHAHAHA so now which one is it? Did the LLMs write the whole website or not? Are you trying to PROVE to ME that I wrote all this code by hand? LMFAOOOO
gametorch@reddit (OP)
Is the tide finally turning? Am I allowed to be pro AI on r/programming now?
robotmayo@reddit
gametorch@reddit (OP)
I mean what, are you claiming that I wrote this entire startup by hand now??? LMAOOO
Which is it? Did the robot write it?? Or me???
SanityInAnarchy@reddit
A frustrating counterpoint: For most of us, it doesn't matter if this article is 100% correct. Money claps for Tinkerbell, and so must you:
gametorch@reddit (OP)
I am the OP and the "author".
This entire website, including this anti-LLM article, was written by an LLM!
Before posting this article, we got tens of millions of visits, nearly 250 signups, 50 paying users, and never once did the site fall over! I wrote the whole thing in Cursor with o3 MAX and gpt-4.1 MAX.
I pay about $200/month on LLMs to write code for me and it's had a very, very positive ROI. I use a really solid technological stack and have a lot of engineering experience, so that certainly helps. If you want to read about our tech stack, here's our blog post about it: https://gametorch.app/blog/how-we-serve-millions-of-requests
Thank you for participating in this social experiment.
If you don't believe me, refresh the blog post and scroll to the bottom! LMAOOOOOOO got you guys so good
dark_mode_everything@reddit
But I like probabilistic code that may or may not work as expected some of the time. Reviewing AI slop is super pRodUctIvE.
gametorch@reddit (OP)
HAHAHAHA I am the OP and the entire fucking website was written by an LLM. Tens of millions of visits, 250 signups, 50 paying users. ONLY WRITTEN BY LLMs. You are so wrong, brother. Good luck! The fact that I hit the front page with this shit proves my point! lmaoooo
Mysterious-Rent7233@reddit
I'm always annoyed when articles or comments like this don't tell us what model and what agent they were using.
deadwisdom@reddit
I mean that doesn’t even really help. To understand you’d have to know the prompts, the rules file, the process. You’d have to really understand the whole context. And that’s kinda the point, it’s hard to do that right and takes a lot of investment.
gametorch@reddit (OP)
No dude, you are so wrong. The parent commenter is 100% correct. This entire blog post that I "wrote" was a social experiment. The whole fucking thing was written by an LLM including the entire website, which is cashflow positive and never once fell over. See my comment lmaooooo: https://www.reddit.com/r/programming/comments/1lr680x/comment/n18p1d4/
Flimsy-Printer@reddit
OP here. I was using GPT1.5. It sucks.
gc3@reddit
I get a lot of use out of Cursor. Today I had to make a new tool and I asked Cursor to make the build file.
Did it fine.
gametorch@reddit (OP)
I completely agree with you. This post was a social experiment. See my other, heavily downvoted comment 😂: https://www.reddit.com/r/programming/comments/1lr680x/comment/n18p1d4/
AReallyGoodName@reddit
Before LLMs we often stackoverflowed and Googled our way to solutions. We had to. Part of our job is constantly going into new codebases and using new libraries and frameworks that we've never seen before. So we searched online, found code samples, and then made it work. 20+ years of experience, much of it in a senior role at FAANGs, and here I am being honest and acknowledging that this is what we programmers do. Or did, I should say.
LLMs cut out a few steps for us. Today I had to create an animation using the Manim library. Never used it before. Asked Claude to "create a programmatic animation using manim that shows X changing into Y with an appropriate graph." One-shot results. That would have been over a day of work just to learn how to use Manim, but the AI had seen code with it before, so the step where I Googled code samples and worked them into what we needed is gone.
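As a minimal sketch of the kind of one-shot Manim (Community edition) script being described, something like the following is plausible; the concrete shapes and curve are assumptions for illustration, not the code Claude actually produced:

```python
# Hypothetical illustration of "X changing into Y with an appropriate graph";
# not the commenter's actual output. Render with: manim -pql scene.py XToY
from manim import Scene, Circle, Square, Axes, Create, Transform, BLUE, GREEN, LEFT, RIGHT


class XToY(Scene):
    def construct(self):
        # "X changing into Y": morph one placeholder shape into another.
        x_shape = Circle(color=BLUE).to_edge(LEFT)
        y_shape = Square(color=GREEN).to_edge(LEFT)
        self.play(Create(x_shape))
        self.play(Transform(x_shape, y_shape))

        # "...with an appropriate graph": axes plus a simple curve on the right.
        axes = Axes(x_range=[0, 5], y_range=[0, 25]).scale(0.5).to_edge(RIGHT)
        curve = axes.plot(lambda t: t ** 2, color=GREEN)
        self.play(Create(axes), Create(curve))
        self.wait()
```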
Yet I keep seeing these high-and-mighty "we're too good for AI" type of articles. I bet they never had to Google a solution and only code in vim on a keyboard with pure binary inputs, huh. Like, it's seriously tiring at this point, and when I see this I just think "another dev pretending". Chances are pretty good they haven't got their AI+IDE integration set up well, nor do they know how to prompt the AI.
The "I don't use AI, give me upvotes" posts need to stop.
oadephon@reddit
It comes across as disingenuous when you don't even mention the models you tried.
gametorch@reddit (OP)
I completely agree! This article was a social experiment. Please see this comment: https://www.reddit.com/r/programming/comments/1lr680x/comment/n18p1d4/
gametorch@reddit (OP)
Hey guys, just kidding!
This entire website, including this anti-LLM article, was written by an LLM!
Before posting this article, we got tens of millions of visits, nearly 250 signups, 50 paying users, and never once did the site fall over! I wrote the whole thing in Cursor with o3 MAX and gpt-4.1 MAX.
I pay about $200/month on LLMs to write code for me and it's had a very, very positive ROI. I use a really solid technological stack and have a lot of engineering experience, so that certainly helps. If you want to read about our tech stack, here's our blog post about it: https://gametorch.app/blog/how-we-serve-millions-of-requests
Thank you for participating in this social experiment.
pa_dvg@reddit
Super funny that these articles always feel the need to include ChatGPT's soulless aesthetic artwork.
maxinstuff@reddit
All it’s done is replace the free stock imagery and random copy-pasted images everyone used to use.
It’s not as if anyone was paying for or specially commissioning images for their blogs before AI generation.
bastardpants@reddit
I found public domain content and appropriately licensed Creative Commons images to put together in a fun, mildly relevant way when I made banner images for my old posts, but I also wasn't selling anything.
no_brains101@reddit
That is exactly what maxinstuff said, no? Free stock imagery and random copy-pasted images?
SanityInAnarchy@reddit
I guess there's a difference in that it's stuff you're explicitly allowed to use, and properly-attributed, it could lead to more exposure for the artist.
But every working artist knows how little 'exposure' is usually worth, and it's not like people used to be paid for bespoke artwork for each article. Some of these might be crimes against aesthetics, but this seems like one of the more harmless uses of genAI.
bastardpants@reddit
At a high level, yes, but I was specifically seeking public domain content as a fun challenge instead of paying Getty or pretending a regular Google image search is sufficient. Google Images has an option to filter by usage rights.
maxinstuff@reddit
That’s the thing though - people who didn’t care about ethically sourcing images before… or just didn’t think about the issue at all… still don’t 🤷♂️
Like, this image of a man who is extremely distressed at the sight of his monitor being sideways is also not attributed in any way.
bastardpants@reddit
I wanted a banner image for a reverse engineering post, so I found a public domain old server room picture and a creative commons non-attrib that allowed commercial use picture of a magnifying glass. It was a kid's toy froggy magnifying glass with eyes on the plastic circle part.
Not sure if it was as descriptive as it could have been, but I had fun putting it together and it felt like it matched my writing style
Kept_@reddit
I was going to comment the same thing.
With that being said, I'd rather see a bunch of doodles made by hand than AI art; at least it gives a touch of personality.
grady_vuckovic@reddit
"Why We're Moving Away From LLMs"
Directly below it: an AI-generated image of a man looking at a monitor that sits at a 90-degree angle to him and the desk, in a way that makes no sense.
zxyzyxz@reddit
Well they did say large language models and not all transformer models