Writing Code Was Never The Bottleneck
Posted by creaturefeature16@reddit | ExperiencedDevs | View on Reddit | 172 comments
caffeinated_wizard@reddit
With the tweets going around of a founder barfing 10k lines of code a day with AI like it’s some sort of metric this is only one part of the appropriate response. Not only is writing code not the hard part, counting lines of code as a metric is absurd.
time-lord@reddit
https://www.folklore.org/Negative_2000_Lines_Of_Code.html
ZombieDiscoSquad@reddit
Over the past 12 months I've taken a ~2.5m-line codebase of C++/C#/C/Python and hit feature parity in 35k lines of pure C#. It's gone from taking 40 minutes just to do a basic build of the library down to 7 minutes, which includes a full test suite run. Might be the proudest thing I've achieved in >25 years of career dev.
fuckoholic@reddit
Sounds great! What was the motivation to rewrite it in C#? And why C#?
chalk_tuah@reddit
Having picked up C# recently for my job it’s actually a lovely language to work with. Definitely a top pick for me now.
ZombieDiscoSquad@reddit
It's been my daily driver for most of my career, early days was still plenty of C++ but I'm pretty glad to be leaving MFC and ATL behind. I've still had to keep my hand in a bit of legacy Java too but it's rare that I run into a problem that a .net project can't solve cleanly and elegantly. It's such a generally healthy ecosystem too, plenty of well maintained and mature packages along with a steady flow of new ideas and concepts being brought in with each release while still maintaining adequate backwards compatibility or a reasonable upgrade path.
ZombieDiscoSquad@reddit
The consuming apps for the library are all C# and there was already a C# shim wrapping the native C++ and bits of C. The existing C++ was riddled with memory safety issues and the components are much more security sensitive than performance sensitive although given the massively reduced complexity we've also gotten a performance boost for all the major operations with the move anyway. This isn't to say that you'll get a similar benefit doing this type of conversion on any C++ project, this one just happened to be in a particularly bad state.
We now have a codebase that the other developers in the wider team actually want to work their way through and investigate which was something everyone would avoid at almost any cost with the old C++ monster.
mkdz@reddit
My favorite PRs are the ones with way more negative lines than positive lines.
fried_green_baloney@reddit
An actual 10X or maybe even 100X programmer was praised that he "added functionality while removing code".
And I say that as someone who has trouble writing really compact code myself. I envy the people who do.
Princess_Azula_@reddit
Don't be envious. After a certain point, you sacrifice readability to write compact, brief code. Making your code clear and understandable is more important than making it clean, concise, or performant. These things can all be easy to do once you're working with code you can understand, but are much more difficult if you can't understand what you're reading.
Scotthorn@reddit
As with all things, it is a balance. If the most experienced and smartest engineer uses his most creative, complex, and thoughtful solution to implement something, then the mid-level or junior engineer has no chance of debugging it. Heck, the person who wrote it is probably going to struggle a little.
Compact code is good, but if you sacrifice some of that elegance for readability, it helps everyone
2cars1rik@reddit
Compact implies simplicity, simplicity implies readability. No one is suggesting cramming 5 operations into a one-liner.
codingwithcoffee@reddit
This is a key difference that separates good developers from the rest - they are thinking about the developer who in future will need to read and debug their code.
Which might be themselves - and I’ve way too often found myself cursing “whichever idiot wrote this” only to sheepishly realize… oh, that would be me!
Sure it’s more compact to use a ternary operator or assign a Boolean to the result of an expression - but it is often more readable for a future developer to spell it out in an if/then block. And the compiler will optimize it anyway.
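A made-up Python sketch of that tradeoff (the names and numbers are hypothetical):

def shipping_cost(total: float, is_member: bool) -> float:
    # Compact: a one-line conditional expression; the reader unpacks
    # the condition and both outcomes in a single pass.
    return 0.0 if total > 50 and is_member else 4.99

def shipping_cost_spelled_out(total: float, is_member: bool) -> float:
    # The same logic as an if/else block: more lines, but each branch
    # reads on its own and it's trivial to breakpoint either one.
    if total > 50 and is_member:
        return 0.0
    return 4.99

The two do the same thing; the only question is which one the next reader debugs faster.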
Coding is only a part of software development. The hard part is solving the problem. Code is just how the solution is expressed - and ideally that code highlights rather than obfuscates the key elements of the solution.
thefightforgood@reddit
If your smartest engineer is writing code that others can't understand... Are they really your smartest engineer?
Scotthorn@reddit
sure, but that wasn’t really my point. My point was that engineer shouldn’t write the most compact and efficient solution if they’re the only one on the team with experience or awareness of it
fried_green_baloney@reddit
The very best seem able to be elegant and readable. There aren't many like that.
pm_me_ur_happy_traiI@reddit
Removal of complexity should always be the goal. Too concise adds overhead just as much as not concise enough. Maybe more.
Bobby-McBobster@reddit
After 8 years at Amazon I have negative 35,000 lines of code.
connorvanelswyk@reddit
Love Hertzfeld
Sensitive-Ear-3896@reddit
Makes you realize how far our profession has fallen
creaturefeature16@reddit (OP)
that was fantastic, thanks for sharing!
ares623@reddit
lol I'm sure Paul Graham has written some navel-gazing blog post in the past about how number of lines is not a useful metric for software productivity and quality.
forbiddenknowledg3@reddit
From Paul Graham, no less. Thought that guy was smart.
chipper33@reddit
lol the whole thing with execs giving a shit about how many lines of code are written comes straight from Elon.
Elon is a nepo baby who got lucky during the .com boom. He’s really not smart, more of a bully than anything… but he has the most money and all execs do these days is parrot one another.
cbusmatty@reddit
LoC has been a thing since lines of code existed
ccricers@reddit
Idea: Release a second version of the software, the "manager's cut" AKA the manager's preferred version of the work, which re-introduces all the removed code and has added bloat. It could do well with certain audiences.
alinroc@reddit
That has nothing to do with Elon. I had a project manager and mid-level manager badgering me for LoC counts on his pet project multiple times a week twenty-five years ago, and even then everyone knew it was a BS metric. I can only surmise that this guy was trying to impress his Princeton MBA classmates with how much code "his" developer was churning out.
It really bothered him for the couple weeks that I was reporting zero because we were still trying to nail down requirements and start architecting instead of just throwing code around.
creaturefeature16@reddit (OP)
was Elon an exec in 1982?
https://www.folklore.org/Negative_2000_Lines_Of_Code.html
BeerPoweredNonsense@reddit
Nah he was in primary school. But this is Reddit, so when it comes to Musk or Trump, facts don't matter.
See for example the post you're replying to: a claim that's clearly bullshit, standing currently at +15. That's 15 "experienced devs" with zero ability to think critically.
shifty_lifty_doodah@reddit
At the same time, complicated software does require a lot of code.
So for a given programmer, how much code they wrote can be a pretty good heuristic for how much functionality they developed.
It requires some taste to evaluate. But it does mean something, for a good programmer who’s not gaming the metric. A 100k line system is probably more interesting and complete than a 3k line prototype.
Built4dominance@reddit
What WAS the bottleneck?
G_Morgan@reddit
The real bottleneck has always been terrible requirements. 90% of developers get reqs that aren't fit for purpose. Normally you spend an obscene amount of time re-engineering the requirements in order to actually write the code.
This is why "CEOs vibecoding" is nonsense. The CEO doesn't know what he wants. He cannot describe to the engineers what he wants, never mind to an AI.
UseEnvironmental1186@reddit
Preach
rnicoll@reddit
Translating what you want to achieve into things a computer can do.
No-one wakes up and thinks "I really want a password prompt". They think "I want to be sure only I can use my computer"
The part that takes the time is deciding whether that's a password, a physical device they plug in, or biometrics; and having decided that, how to store the credentials, what the user workflow looks like, and what to do when they forget their password.
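To make just one slice of that concrete, here's a minimal sketch of the "how to store the credentials" decision, assuming the password route (Python's standard hashlib; the function names are made up):

import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    # Never store the password itself: store a random salt plus a
    # slow key-derivation digest of the password.
    salt = os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    # Constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(candidate, digest)

And even this tiny slice hides more decisions: which KDF, what parameters, where the salt lives, what the reset flow does. None of that is "writing the code".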
Built4dominance@reddit
Gotcha. Thanks for this.
rnicoll@reddit
No worries.
Basically I see Product Managers and Engineering becoming specializations of broadly similar roles, but one side focusing on the people and one on the technology. PMs aren't going to be sitting there going "Well if I use this asymmetric encryption scheme then in 10 years time we need to think about xyz so I'll use this other scheme instead", and engineers aren't going to be as good at "What is the most common requirement from this pool of users and which of them will actually pay for the product at the end and then lets prioritize them"
AI is another paradigm shift, but in the same way no-one writes a password prompt in assembly, and the leap to C, then C++, then Java/Go, then Javascript/Typescript didn't put engineers out of a job, things will settle down in time.
pemungkah@reddit
Understanding the problem. Considering edge cases. Preparing for failure. Preparing for attackers. Making it pretty. Making it usable. Things like that.
creaturefeature16@reddit (OP)
I consolidated 800 LoC yesterday to about 200. It was one of my most productive days with this particular project.
fuckoholic@reddit
If you keep going you may reach 0 LoC some day! That'd be nice!
creaturefeature16@reddit (OP)
That would be the best day of coding I'd ever have.
fuckoholic@reddit
let us know if it can go into negative
Fidodo@reddit
We taught this lesson already and now a new generation of dumbasses need to learn it again
yourgirl696969@reddit
It’s the most regarded metric I’ve ever heard. Like it’s actually so god damn useless
PoopsCodeAllTheTime@reddit
Paul Graham says something that benefits his own accelerator and everyone acts like it's the most honest to God truth...
rnicoll@reddit
I've found that absolutely hilarious, because at 10k lines a day they have no reason not to have shipped in a month or so.
I suspect most MVPs are under 50k LOC. So... A week at that rate.
Western_Objective209@reddit
even with the most expensive claude plan, it caps out at a few thousand lines a day. people saying they are getting it to generate 10k+ LoC a day for them are very likely lying
positivcheg@reddit
That just shows that AI advocates know nothing about programming if the only metric they use is lines of code.
dashingThroughSnow12@reddit
Once I wrote a major piece of functionality in 20 minutes. It was a few hundred lines. This was either after or before a few aggregate hours of meetings to define precisely what was wanted and get consent from all stakeholders.
Wait, did I say once? I meant every few weeks.
Writing code is the funnest and easiest part of my job.
No-Extent8143@reddit
Yeah, I'm currently maintaining a total sh_t show of a codebase, most of it written by a co-founder. Nice guy generally, but when it comes to software engineering he's somewhere between a moron and an idiot. No real system design, no real thought behind any decision, just sh_t out as many lines as possible as quickly as possible. And the real fun part is I can't even be honest with anyone about just how sh_tty this dude is at coding...
ConnaitLesRisques@reddit
Writing code was never the bottleneck, but I guess LLMs allow you to throw more shit at the wall to see what sticks.
It’s the ultimate Agile PO wet dream.
PM_ME_SOME_ANY_THING@reddit
My team recently integrated a chat bot into our project. At first I thought it would be pretty cool to integrate an AI bot for the first time.
Just a couple of issues. We limit the bot to only be able to respond with material from the official user guide that the business people sign off on. Also, we’re using an AWS bot that required all kinds of configuration and tweaking to get right.
By the time we were done I was wondering why we didn’t just use some sort of search function for the official user guide. Seems like we used this super complicated tool to lookup info in a document. Did we really gain anything from AI? Or are we just paying for more overhead and more crap we don’t need?
creaturefeature16@reddit (OP)
While they can be useful, many implementations of the tech right now are a solution in search of a problem.
quentech@reddit
My boss on Friday literally posted, "we need to be 80% focused on AI if not higher".
Just "AI". No actual ideas for our product. Just do AI. 80% of the time. Or more.
creaturefeature16@reddit (OP)
Mind-numbingly infuriating. Replace "AI" with "crypto" or "web3" and it reads the same.
quentech@reddit
10 years ago he almost killed the entire company by insisting on and rushing a move to cloud that brought our whole system down spectacularly.
Surprise, surprise, 2ms of latency is a big deal when it's two orders of magnitude more than what you're used to on bare metal boxes in the same rack. If only someone could've warned you that we needed extensive mitigation for that before moving.
Or warned you that it would double+ your infrastructure costs forever, and that with a steady load profile there wasn't much advantage to moving to cloud. And when serving hundreds of terabytes of data every month the bandwidth costs were going to be brutal.
This jackass watched OpenAI's hour long marketing video the minute it was dropped and as soon as he had time to finish it, declared to the entire company, "This is a huge step forward!"
My dude - I guarantee you have not put a single fucking inquiry to ChatGPT 5 yet - AND WE DON'T EVEN USE ChatGPT.
commonsearchterm@reddit
pretty sure this is just built into video meeting apps now
loptr@reddit
I honestly think semantic documentation parsing/querying is one of those problems though.
It is cumbersome and lackluster to set up today, but it's still a great use case. We keep internal knowledge bases of our GitHub Actions best practices and all our internal actions, and attach that to a chat with the Copilot Agent Platform, and it's been very helpful in the organization when it comes to assisting in what internal actions to use, how to structure the workflow, what permissions are needed, how to implement OIDC/credential federation etc.
We've also had some success in taking a changeset/PR and have an LLM determine if there are parts of the documentation, or adjacent products' documentation, that need to change.
It's not really a substitute for anything, it's not good enough to replace other sources of information or guidance. But it's a great augmentation and catches things that would have otherwise been missed.
Perfect is the enemy of good, especially in early adoption.
death_in_the_ocean@reddit
Does this agent of yours hallucinate at all?
loptr@reddit
I'm sorry if this is a little long-winded, but I want to give an earnest answer and add some nuance. Let's just acknowledge from the start that it is based on GitHub Copilot, and in the end it's just a GPT and certainly gets things wrong.
However, when it comes to the workflows/actions, we haven't encountered many problems with hallucinations, but we also have a rigid set of instructions to steer it (a common problem otherwise is things like outdated versions of actions, like using actions/checkout@v3).
Our main challenge there has been to have it prioritize our internal in-house actions/understand when they're a substitute since we require explicit whitelisting for third-party actions.
But it's also a bit of a black box: you have little control over what the user asks (which might have a company-specific intent/context not known to the LLM or covered in our instructions), and you don't see what the recommended solution was until it doesn't work or breaks and they reach out to us.
One of the things it does well is directing people to the right actions and config for on-prem vs cloud solutions, and it has read access to non-protected internal repos and has generally done a good job answering questions/using it for reference if you provide one.
I think it also happens to be a good match because it's GitHub's platform, so actions and workflows are very native to its training data. But then I think about when I ask Copilot Chat about REST endpoints for something like "list all pending installations in an org?" and it just flat out makes up something like "GET /orgs/{org}/pending-installations" with zero connection to reality, so the jury's still out on that one..
It does help however that the organization has strong guidelines and guard rails for most common needs for internal products.
We have no illusions about its capabilities, in many ways it's as much an experiment for us in seeing what can be done with LLMs (and our internal data) as it is a solution to a problem.
It was conceived when we analyzed our ServiceNow history to get an idea of what kind of queries we were receiving and saw how much of it was answerable with boilerplate replies/internal links. It has made a marked difference in offloading those incoming questions.
We're cautiously optimistic about this specific use case, while our attempts to build something like a coherent modules/template scaffolder have failed to produce good/consistent results once you start adding a couple of requirements.
We also tried using it for automating updates non-intrusively, kind of like dependabot but for our IaC, but it made a mess of things and having it change only a specific line and return a file intact was more than a challenge. (This was a while ago, I believe it might have improved in that specific area today.)
I think it's a bit sad that the AI hype has resulted in a kind of contrarian reaction from many developers who would under different circumstances embrace new tech.
As a closing note I'm convinced that building these things keeps everyone in the company grounded and the hype/expectations at bay. It's difficult to tout AI as a magic bullet when you have a few LLM based solutions up and running and everyone can experience them (and their shortcomings) for themselves.
death_in_the_ocean@reddit
Thanks for the writeup!
Knock0nWood@reddit
One thing I've noticed with retail customer service LLM tools is that they are very good at summarizing the issue you are having and what you want done about it, but they're not empowered to do much more than "ok I can initiate a return process for you". In which case it's like what's the point
zacker150@reddit
Users are too incompetent to read documentation.
GargamelTakesAll@reddit
We created an AI powered search but we had to format our data in such a way that by the time we were done we just replaced the AI with a SQL query.
transhuman-trans-hoe@reddit
"oh so you need a way to search through data. query it basically. and you want to do that using language. but you need to be specific and correct, so the language should follow some standards. hmm, if only there was a standardized language for queries. a standard query language, so to speak. but alas."
PM_ME_SOME_ANY_THING@reddit
Yeah ours still isn’t good. A few of the other guys built it so I don’t know all the details, but it sometimes takes 30 seconds to respond… if it responds.
Qinistral@reddit
Sounds like it was built wrong TBH. It should not take 30 seconds. Are they feeding the entire document to an LLM as context? They should be indexing the document into a vector search and using RAG
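For anyone unfamiliar with the shape of that fix, a minimal sketch, where embed() is a stand-in for whatever embedding model you'd actually use (nothing here is a specific library's API):

import numpy as np

def embed(text: str) -> np.ndarray:
    # Stand-in for a real embedding model; returns a unit-length vector.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(128)
    return v / np.linalg.norm(v)

def build_index(chunks: list[str]) -> np.ndarray:
    # Index time: split the guide into chunks, embed each once, keep the matrix.
    return np.stack([embed(c) for c in chunks])

def retrieve(query: str, chunks: list[str], index: np.ndarray, k: int = 3) -> list[str]:
    # Query time: score every chunk by cosine similarity, keep the top k.
    scores = index @ embed(query)
    top = np.argsort(scores)[::-1][:k]
    return [chunks[i] for i in top]

Only the retrieved chunks go to the LLM as context, which is why a RAG setup answers in a couple of seconds instead of rereading the whole document for every question.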
PM_ME_SOME_ANY_THING@reddit
Likely feeding the entire document as context. I’ve only been in the room when they complain about it, so I really don’t know all the details, but this is the first I’m hearing of any indexing, so I doubt it.
TheTrueXenose@reddit
Sounds like an expert system would have been better...
light-triad@reddit
A good perspective to have is that LLMs are a possible improvement on traditional search functionality. A traditional search pipeline will consist of retrieving documents relevant to a user's query and ranking those documents based on their relevance. LLMs enable the newer capability of summarization, which can replace or augment ranking.
LLMs can help with all stages of this, but it's really important to understand your search problem well and make sure you're inserting LLMs wisely. What you guys did is pretty common: replacing the ranking stage with a summarization stage. But that's not always a good idea. Completely replacing ranking with summarization is appropriate when you're working on something like ChatGPT, with a really large set of documents and an unconstrained set of user queries. When you're working with a smaller set of documents and queries, ranking may often be more appropriate. Summarization may be a good addition to ranking for these use cases, but you probably don't want to completely replace it.
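To make the stages concrete, a toy sketch of that pipeline shape (term overlap stands in for a real retriever and ranker; summarize is whatever LLM call you'd plug in):

def retrieve(query: str, docs: list[str]) -> list[str]:
    # Recall stage: keep any document sharing a term with the query.
    terms = set(query.lower().split())
    return [d for d in docs if terms & set(d.lower().split())]

def rank(query: str, docs: list[str]) -> list[str]:
    # Precision stage: order the candidates by term overlap.
    terms = set(query.lower().split())
    return sorted(docs, key=lambda d: len(terms & set(d.lower().split())), reverse=True)

def search(query: str, docs: list[str], summarize=None):
    ranked = rank(query, retrieve(query, docs))
    if summarize is None:
        return ranked  # classic search: ranked sources only
    # Augmenting rather than replacing: summary on top, ranked sources below.
    return summarize(query, ranked[:5]), ranked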
WJMazepas@reddit
A lot of people just don't know how to search well. The AI is there to do the search for them
lazydictionary@reddit
If users prefer a chatbot rather than doing the search themselves... then the answer is a strong maybe
istarisaints@reddit
People will prefer searching themselves since they need to see the documentation / reference / primary source / whatever it is as opposed to citing an AI bot.
Honestly though, these things sort of serve different purposes. Chatbots are like conversations with coworkers … they can give you insight into things you wouldn’t know otherwise / give you grounding in an entirely new subject you’re just grasping. And then you go, equipped with this new knowledge / search terms, to find the real documentation / primary sources / … etc.
lazydictionary@reddit
I think user preference depends entirely on the userbase.
istarisaints@reddit
Even an average person doing something for their non-technical job shouldn’t rely on a chatbot and should do their due diligence.
lazydictionary@reddit
Of course. But we are talking about user preferences.
jer1uc@reddit
I completely agree that the more you look at the solutions being "enabled" by AI, the more you realize that it's effectively search with a worse UX (natural language in both directions).
I will say though that text and image embeddings are very valuable outcomes of the current wave of AI developments. We've had them since like 2013ish, but today's embeddings models are quite good. Ultimately these mostly just make sense to use as a search metric or as input into some downstream model like a classifier.
TangerineSorry8463@reddit
Search with a worse UX? I'd argue that for some users who don't know the exact term for what they're looking for, but can describe it, it's a better UX.
lulzbot@reddit
I respectfully disagree about agile. Code is 1 of 6 steps in the learning loop: https://www.scrum.org/resources/blog/maximize-value-learning-loop and it explicitly says to ship the minimum amount of code/product to increase the velocity of learning.
Agile has been perverted and buzzworded to hell and back; the initial principles are long gone
No-Extent8143@reddit
But isn't the actual bottleneck working out what to throw at the wall?
tmax8908@reddit
Now it is. But it’s just brainstorming. Not that technical or demanding a task.
No-Extent8143@reddit
I mean... if generating billion-dollar ideas was not demanding, probably everyone would be a billionaire by now, no?
tmax8908@reddit
Didn’t read the article, is that what it’s about? I wasn’t talking about ideas for startups. I meant trying out different ideas for implementing whatever project I’m currently working on.
Fidodo@reddit
Over developing MVPs is a real problem, but when you develop a prototype you need to be extremely careful to sequester it and rebuild it once it's validated.
ConnaitLesRisques@reddit
Agreed, but I’ve learned to never trust promises that MVPs will be rebuilt. They become v1 after some spit-polishing at best.
Fidodo@reddit
In my experience that happens more when the MVP is over-developed. Out of habit, we try to do a good job and write to at least a base level of quality even for a prototype. An unexpectedly good thing about AI coding is that the output is so bad it's pretty much impossible to upgrade to a v1.
If your org promotes AI MVPs to v1s then I'd be planning my escape now, because that org is going to implode with that level of bad hygiene.
ConnaitLesRisques@reddit
Yeah, I left that place behind and I now make those calls.
I just got burned enough times that I'm now skeptical that stakeholders are truly on board with throwing away the MVPs and prototypes. I find there's always some disappointment when the "real thing" has to be built.
You’re right though that the low quality and low cost of prototyping with AI can help do away with the "heartache" of throwing code away.
remimorin@reddit
It's the "start-up fail fast" wet dream and I do agree in this regard. Mature products should think carefully where they unleash LLMs.
Ibuprofen-Headgear@reddit
Yes and now I get to wade through oceans of incorrect comments/documentation, code that lacks apparent intent because the prompter never read it, etc. it’s awesome.
delventhalz@reddit
"You'll spend more time reading code than writing it" is one of the most important things I learned when I was coming up, and it does seem like LLM coders have forgotten that. I write code for humans. LLMs are crap at that.
BigLoveForNoodles@reddit
My experience is that people think they’re excited about LLMs for writing code, but it’s clearly more that they are excited about them because the LLMs read the code.
I work on a codebase which has some massively hairy, super tightly coupled functions that are hundreds of lines long. Folks are terrified to touch it. I guarantee you that some of the people most excited about LLMs are thinking, “thank god, now I don’t have to understand that bullshit.”
It’s hard to get people to take code maintainability seriously when they’re gung ho to hand off the maintenance to a bot.
aj0413@reddit
Unironically, one of my teams most anticipated uses of LLMs is pointing it at our code base so it can explain it to people lol
Literally “help me read the code better” is one of the most powerful use cases. I’ve also used it to help me understand weird-ass JS stuff and other bits and bobs.
Qinistral@reddit
Along with the other comments, another way AI can help is generating unit tests to cover those nasty-ass functions. Then once you have code coverage behind an interface you can start refactoring it and cleaning it up.
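That workflow has a name: characterization testing. You pin down what the code does today, oddities included, before touching it. A hypothetical Python sketch with a stand-in legacy function:

import pytest

def legacy_price(qty: int, member: bool) -> float:
    # Stand-in for the nasty function; imagine 300 more lines of this.
    price = qty * 9.99
    if member:
        price *= 0.9
    if qty > 10:
        price -= 5
    return price

# The expected values below are captured from current behavior, not from a
# spec, so any refactor that changes behavior fails loudly.
def test_characterize_small_order():
    assert legacy_price(2, member=False) == pytest.approx(19.98)

def test_characterize_member_bulk_order():
    assert legacy_price(12, member=True) == pytest.approx(102.892)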
ALAS_POOR_YORICK_LOL@reddit
It's actually not half bad at that kind of thing. I like to get the LLM writeup before diving into something gnarly
jk_tx@reddit
An LLM can actually be useful for providing an overview/explanation of unfamiliar code; IMHO that's one of its strengths. It's a great Q&A answer bot; it's the agentic/generation functionality that can't be trusted. And I don't think they really have a fix for this with the current models aside from better training, which can only take you so far.
There is no "reasoning" in these models; they only recognize word patterns, not concepts and ideas. So many supposedly technical people can't seem to understand that.
pinkjello@reddit
Yeah, but plenty of people are using LLMs to explain the code to them. The blog post and your comment don’t seem to acknowledge this.
Beautiful-Parsley-24@reddit
/*
 * If the new process paused because it was
 * swapped out, set the stack level to the last call
 * to savu(u_ssav). This means that the return
 * which is executed immediately after the call to aretu
 * actually returns from the last routine which did
 * the savu.
 *
 * You are not expected to understand this.
 */
if(rp->p_flag&SSWAP) {
        rp->p_flag =& ~SSWAP;
        aretu(u.u_ssav);
}
Spider_pig448@reddit
Except LLMs are also doing that for us now, and reading code is something that LLMs are actually much better at than writing code. There are countless articles about the benefits of throwing an entire codebase to an LLM as context and seeing what kind of improvements they can make to it.
Western_Objective209@reddit
LLMs can speed up reading code as well
No-Extent8143@reddit
I don't think they have. I've met many, many coders that literally tell me "I don't know what's causing this bug". These are senior people that call themselves "software engineers".
Another one I keep getting is "oh we don't have logs for this, so can't work out what went wrong". And obviously they haven't improved logging at all, so I guess we now have a bug that will never get fixed.
BadLuckProphet@reddit
Not having logs drives me crazy. I wouldn't call software complete without logs. And when I get pushback I simply ask, "Would you drive your car without a speedometer? A gas gauge? Turn signals that don't give any feedback that they are on or off?" You obviously can, but you become a danger to yourself and others. I know we want to keep duct-taping the bumpers back onto our "enterprise software", but traffic laws allow you to drive without a bumper; they don't allow you to drive without working turn signals, despite the insane number of people who refuse to use them. In the same way, and despite the danger, I'd rather work on software without exception handling than without logs.
CloudStrife25@reddit
Interviewers seem to have never learned this truism, either.
Embarrassed_Camel422@reddit
My favorite is when they forget this AND try to simultaneously act like they’re the most responsible coders and code stewards ever.
Bullshit they are. I’d like to say they’re not fooling anyone with that crap, but they absolutely are.
Bakoro@reddit
When you start talking in absolutes, you're usually going to make an ass out of yourself, because a bunch of people are going to come out of the woodwork with exceptions.
Writing code was absolutely a bottleneck for a long time, and for some projects is still a bottleneck. If writing code was never a bottleneck, then software development wouldn't have gained as much prestige and the high salaries. There was a time where businesses wanted programmers and literally couldn't find people, so if you could compile "Hello world!", you could get a job.
If writing code was never the bottleneck, then why don't more codebases have higher unit test coverage? It's because businesses have to choose whether they want developer time spent on unit tests vs new features, and they almost always choose new features over tests.
If writing code wasn't a bottleneck then refactoring codebases wouldn't be a big deal, but with a large code base, even a conceptually simple refactoring can be a pain in the butt, so people made tools to assist in that.
Writing code is obviously not the only thing, but it is part of it.
caseyanthonyftw@reddit
Absolutely agree, but looking at all the comments on the rest of this thread it seems like everyone likes to oversimplify. If anything, the "code is the easy part" attitude just seems to justify the idea that all programmers / developers are easily replaceable and should be offshored for cheaper. I don't understand how, at large, the developer community can say both "We're skilled valuable workers and you can't replace us!" and "writing code is the easiest part of a project!".
If writing code was never the bottleneck, then by all accounts an average / shit offshore team that's well-managed by local leads and managers would generally produce stellar results. Obviously that's not the case.
In all my career, some of my biggest bottlenecks have to do with fixing issues involving third party software / libraries being used in frameworks they were never designed for, their code written by developers long gone. That's very much a technical issue that doesn't fall under the responsibility of anyone else but the developers, and I fail to see how stuff like this should be considered easy / shouldn't be taken into consideration.
RiverRoll@reddit
I feel this is one of those things people want to believe, but it doesn't really make sense. Testing is writing code. Time spent in developer meetings and collaboration is the overhead of having more people writing code.
QuroInJapan@reddit
It absolutely makes sense to me. Understanding the business problem and designing a technical solution that fits your constraints is always the most time consuming part, in my experience. Actually writing the implementation is trivial by comparison.
pacman2081@reddit
Writing "good" code was always the bottleneck
Embarrassed_Camel422@reddit
Also, over time the system is self-limiting, in that the AI WON'T really get better if it doesn't ever get enough high-quality examples to train on to outweigh the poorly written ones.
If the new code it's trained on is largely a mess, it's going to learn to write messy code. If companies jump the gun on developer layoffs, and that code is largely from vibecoding by people who don't understand it well enough to check it, it can't improve without explicit directions that would take longer than just writing well-formed code in the first place.
Qinistral@reddit
I don't think lack of good training is the cause of AI's shortcomings. Maybe that's true in an absolute sense, but I think the current deficiencies are context quality, context size, and speed. I've never really looked at AI-generated code and thought "this is bad code"; more often it's "this is wrong code". In fact, sometimes it's "good code" that I don't want, like making something overly extensible. That's not a training problem but a matter of craft value judgments based on context and personal vision.
When users use AI as if the AI can read their minds, then it guesses and they have a bad time.
When AI can only hold a fraction of the code base in context, it guesses and it’s gonna have a bad time.
When I have to wait for AI to “think” for minutes like it’s the 90s and my code is compiling, only for it to go in the wrong direction, I’m having a bad time.
Embarrassed_Camel422@reddit
In my experience, I've seen it put out code that is wrong, some that is just plain bad in terms of structure, and some 'what the heck was that?' stuff where it completely mistook one concept for another and sort of blended them together, particularly with testing.
That being said— it has strengths. It’ll get there someday, I just really don’t think the tires have been kicked enough yet- I personally wouldn’t take any major financial risks based on it working as I need it to yet. More applied research would be immensely helpful, especially into security and privacy.
darkapplepolisher@reddit
As I see it, the only way forward for AI code is to embrace the unmaintainable mess and focus on guardrails that ultimately bring it to a viable end product. Test driven development on steroids?
Embarrassed_Camel422@reddit
The point of TDD is to make it so the code is easily understandable with self-documenting examples to check if changes run or not.
AI’s really not good at that yet unless people guide it through it, and even then… eh.
Perhaps if it was trained ONLY on really good process and sequence, it would be better. But right now, it’s trained on a lot more bad code than good if taking entire swaths of all that’s available, so it gravitates back toward the mean.
bartosaq@reddit
I find AI coding to be really nice for productivity, but it always ends up with me having to do all the adjustments myself once the prompt results go sideways.
DrIcePhD@reddit
I'm struggling to see how something can be productive when you have to then redo its output anyway.
csthrowawayguy1@reddit
See, people will read “prompt results go sideways” and say “but AI will only get better”, but that's totally missing the point. I'm trying to think of a better way to phrase this.
It's more like: once an application requires even the slightest design decisions or complexity considerations (which is basically anything beyond a trivial example project), vibe coding falls apart.
bluetrust@reddit
I feel like people reading "prompt results go sideways" immediately jump to "skill problem, bro." Which is easy enough to ignore, except when it's coming from company leadership, and then it's an implicit threat even when presented nicely.
creaturefeature16@reddit (OP)
Indeed, because those millions of micro-decisions, which are done by a cogitating human who is considering many, many, many different aspects to a project (not just the "context" that is written down) all begin to add up. Seemingly innocuous and minor decisions that don't seem to matter in the moment could have huge impacts downstream and laterally in the project.
Many of those decisions aren't even something you learn about from a book or a class or an LLM, but simply having experience in the trade for decades.
The idea that you can abstract away all this experience and technical understanding is absurd, as many are finding out once they reach that point very quickly as they try to build without it.
hctiwte@reddit
IMO writing code IS the bottleneck. If you know what you're doing, and can review well and think about the high-level problems well, then AI tools will give you a productivity boost, and a measurable one at that.
Not having to bother with the details of how exactly the code that fulfills your high-level requirement should look is a welcome improvement, and is what makes you more productive.
nates1984@reddit
I think you're missing the point. It isn't that code isn't the tangible output of our productivity; it's that the hard problems typically aren't about getting the code to work. The hard problems include things like root cause analysis, designing for maintainability and flexibility, etc etc.
hctiwte@reddit
I don't think I am missing the point. AIs also help with system design and with debugging. My point is that using AIs well will likely result in a net increase of programmer productivity. Is it going to be a 50x increase? Probably not, but my feeling is that those who use it well will see a measurable increase.
I think of it more like the productivity that the internet and search engines brought vs books.
dbgtboi@reddit
AI is very good at that
If AI is very fast at writing code and engineers don't need to read the code much anymore, then this doesn't even matter
hw_2018@reddit
> understanding code is still the hard part
is it? i find understanding code with ai a lot easier
gymell@reddit
Q: If writing code is the bottleneck, then why is offshore development always a complete failure?
A: Because writing code isn't the bottleneck. It's the decision-making about what to do, prioritizing it, then figuring out how to do it. Actually doing it is only the last step in a process involving mainly leadership, analysis and communication.
dbgtboi@reddit
AI is better at all that than it is at writing code
Qinistral@reddit
Explain
dbgtboi@reddit
Explain to the llm what you want to do and it'll spell out the options for you, you can then ask it to do all the design and everything from there
It's very good at that stuff
If you don't like what it says then ask for additional options
hkr@reddit
The answer did not address the "offshore" part of the question though.
Whatever4M@reddit
This seems overly simplistic. You can (and should) review generated code as it's being generated, and you should understand why it takes a specific approach and force it to change course if that approach doesn't make sense. I'm personally happy that I never again have to write another line of boilerplate code.
PastaGoodGnocchiBad@reddit
Scripting can be used to generate boilerplate code while being sure of the correctness of the output.
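For instance, a throwaway generator like this (a hypothetical sketch; the endpoints and template are made up) is itself a dozen reviewable lines, and its output is correct by construction:

# The template is reviewed once, so every generated stub is exactly
# as trustworthy as the template itself.
TEMPLATE = '''@router.{method}("{path}")
async def {name}(payload: dict) -> dict:
    raise NotImplementedError
'''

ENDPOINTS = [
    {"method": "post", "path": "/users", "name": "create_user"},
    {"method": "get", "path": "/users/{id}", "name": "get_user"},
]

for ep in ENDPOINTS:
    print(TEMPLATE.format(**ep))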
Whatever4M@reddit
And scripting takes significantly longer than AI, plus I don't really agree that it's easier to be sure of the correctness of script-generated code than AI-generated code.
PastaGoodGnocchiBad@reddit
When you write a script you know what it does and you know its output. You don't need to review every single line that comes out of it as long as you review the script itself and make sure of its correctness (well, one probably should still test its output). You do need to review every single line of AI output. Reviewing script code seems much less boring / intellectually degrading than reviewing AI-generated boilerplate.
Whatever4M@reddit
If the script handles any level of complexity then understanding it and its outputs is not straightforward. You need to review every single line of AI-generated code, but the reality is that it's easy to review most boilerplate. For example, the basic skeleton of a new API endpoint (creating the route, creating the controller, and configuring auth or whatever is required) doesn't require much time or care to review. On the other hand, you can review more complex stuff more thoroughly. I don't see why that would be boring or intellectually degrading.
PastaGoodGnocchiBad@reddit
Thank you for your explanation.
Reviewing tens of lines of boilerplate is fine (at this level AI gen is just smarter autocomplete). Reviewing thousands of lines of boilerplate makes me want to run out and scream; I would probably write a script to generate the boilerplate anyway and check the diff against the to-be-reviewed code rather than play "spot the thing that's not actually the usual boilerplate" a thousand times.
bwainfweeze@reddit
One of the things I hate about 30 or 60 minute interviews is that real code is a conversation that goes on over weeks or months. There’s time to remember all of the corner cases you forgot in the original proposal and initial PRs.
So if you collapse the entire interaction to AI or an interview, it won’t be representative.
And nobody actually reviews large PRs. They get rubber stamped so often that they almost always make it to production with new bugs or regressions. Most of the time the RCA leads back to a large PR inadequately reviewed. And it takes a long time to convince a team to summarily reject large PRs, so the farce continues until the evidence is overwhelming.
Do you know how to get an AI to break large changes into organic sub tasks? How many people do?
Whatever4M@reddit
That seems like an issue with the processes where you work. I flat out refuse to even review large PRs, and I always flag it as a problem.
bwainfweeze@reddit
It’s an issue I’ve had to convince people of many times. And the thing is the tune often changes when the author discovers they have a large PR. It’s not universal, but it’s a common failure mode that most of us have to address.
creaturefeature16@reddit (OP)
I agree that LLMs have all but solved the boilerplate issue, although I also find a lot of value in boilerplate. Often that is where I think to myself "Wait...can this be done better?" A lot of my workflow efficiencies and reusable components/hooks/functions/classes came from the pain of the redundancy and overhead that writing boilerplate created. If that is automated, there's less opportunity for that, which has the potential to create a lot more verbose and repetitive codebases that are harder to maintain.
Whatever4M@reddit
I agree, but then that's just part of your review process moving forward, but I do understand that you need to be more actively cognizant of it considering that you aren't directly feeling the pain anymore.
Huge_Negotiation_390@reddit
As a Junior I invested so much time learning all the vim shortcuts and tricks to speed up typing and editing of text... what a waste of time and mental health.
Due_Ad_2994@reddit
Also wrong. The bottleneck is what code to write and the real TCO is ongoing maintenance of it and the infra it runs on.
Puggravy@reddit
What code to write? More like what code to kill. That's where the real institutional friction is.
NoIncrease299@reddit
I mean ... no shit?
CheeseOnFries@reddit
100% - requirements gathering, knowledge transfer, prototyping, scrum, etc. are the bottlenecks.
papillon-and-on@reddit
No. But doing it well and at speed was. AI can help with one of those. The other, well that’s debatable.
bwainfweeze@reddit
What’s to debate? Who is successfully shipping AI code in production? Not a few functions, an entire app? Where’s the successful test case?
There isn’t one. So what we know is that you can use AI to coast on a project built by real devs, but you can’t make something that stands up to real traffic and adversaries with AI from front to back. And there’s been enough time now for that to happen and it hasn’t.
dbgtboi@reddit
The real devs wrote that project by copy pasting 90% of it off of stack overflow, AI just does that faster for you
bwainfweeze@reddit
I use SO for brain storming. I count(ed?) on the nattering in the comments to point out the corner cases I’ve forgotten about the API in question. In some languages the docs are better than others, but they’re never 100%. I always get a couple design modifications and unit tests for boundary conditions out of the comments.
creaturefeature16@reddit (OP)
This is a really important point. After GPT4 dropped, I remember the Twitterverse saying devs (especially front-end dominant developers) had "six months left". I kept hearing "these are the worst the models will ever be", which was a way to shortcut any criticism or skepticism about the upper limit of their capabilities and hand-wave away all the possible future issues.
Well, here we are now, 2.5 years later. Claude4 is truly phenomenal and I love using it. GPT5 has now officially been rolled out (with highly variable reception). These models are REALLY good now... surely we should have some amazing examples of how they've impacted the industry. We should see endless examples of more features shipping faster with the exact same level of quality. Yet, we haven't. The largest study done, of 100k devs over a couple of years, which also measured some of the best models, is showing a 15 to 20% (max) increase in productivity. Shit, that's like... trimming off a few unnecessary meetings a week.
bwainfweeze@reddit
I can’t claim credit for this observation. I stole it from someone who is clearly right.
creaturefeature16@reddit (OP)
You didn't claim credit, you just brought it up, and it's still a great point that needs to be highlighted! So you're still going to get credit in my book just for bringing it up. 🙂
ImaginaryEconomist@reddit
Big Tech is posting record profits amidst layoffs, with money directed towards capex for AI and data centres.
This convinces the leadership enough that they could do with fewer people and smaller teams. Add AI or some productivity tool to the mix to make people 15-20% more productive and they'll double down on reducing headcount. As long as this situation exists, no limitation of AI is going to convince them otherwise. This is the reality we'll all live in for a few years.
PublicFurryAccount@reddit
That's how layoffs work. They trade future growth for immediate profits. Currently, they need to post much higher profits because interest rates are high, so the risk-free return is higher than ever in my life.
DarthCaine@reddit
If those CEOs could read, they'd be very upset
midasgoldentouch@reddit
Alright now DW
Vladimir_crame@reddit
I lol'd.
And then I cried
_dky@reddit
Writing maintainable code in the long run has always been a bottleneck.
grizzlybair2@reddit
Well yea. The bottleneck has always been that the business can't tell me what they want or need, or keeps changing it. The only technical bottleneck is when my org's lead engineer decides to force us to use a new tech that they can't do anything more than hello world with, but I have to build a real feature using this new tech in a week.
North_Resolution_450@reddit
But then this is not programming anymore. 99% of people are regular programmers not managers
gizmo_5th_cat@reddit
While downsizing their QA departments, lol.
wayruner@reddit
I 100% agree, and I have actually gotten the most use out of it since I acknowledged that. Writing code is easy; reading and understanding code is hard.
I find AI the most useful when I ask it to explain a bit of code to me or summarise some API documentation. It's especially useful when working with languages or frameworks you have not worked with before. Or just an overly complex regex...
Pavel_Tchitchikov@reddit
I just don’t buy that this is the common narrative. Maybe I’ve missed it, but the narrative I’ve seen is that LLMs are there to cut costs and remove a lot of the context acquisition that is needed by programmers. The whole rest of the article rings true, sure, but they’re fighting against a strawman. And worse, they’re not even addressing the main reason people do go for LLM-generated code: it’s (at first glance) way cheaper than hiring more devs, and it “empowers” newer devs with less experience. We all know how code like this fares in the long run, sure, but lots of companies won’t care and will still throw everything they can into AI just because they think it’ll cut costs.
AmpaMicakane@reddit
The bottleneck is figuring out what to build and how to build it for senior developers + tech leads.
ottwebdev@reddit
IMO
Always has been, always will be: 80% spec/planning, 20% dev/qa/deploy
Regal_Kiwi@reddit
In my experience it's been more 5% planning spec, 85% dev, 5% QA, 5% deploy. If you want to work on software that doesn't have a market and doesn't make money, that's how you do it.
midnitewarrior@reddit
The thing the author is missing is that the ultimate TechBro goal is to make it so that no person has to understand the code. CEOs will get on board with this if they can prove it works.
jdlyga@reddit
It's kind of like switching from bikes to ebikes. You go faster but you're still on a bike.
saposapot@reddit
Absolutely correct. The only way it could really help is if it reduces the need to have other devs, so no coordination is needed. Hell, no devs, just the product owner doing everything.
If not, the true bottlenecks are always still there.
PositiveUse@reddit
Awesome article. Read it before, but it’s good that this is making its rounds
frankandsteinatlaw@reddit
For me, quite often, code is the bottleneck!
creaturefeature16@reddit (OP)
Then in your case, it sounds like there are multiple bottlenecks.
WrennReddit@reddit
Now with forcing devs to use AI and go fast, it will be the bottleneck. Just not in the way they're expecting.