Writing code was never the hard part -- Except for some of us, it was
Posted by ninetofivedev@reddit | ExperiencedDevs | 116 comments
It's another reddit thread. Go ahead and get your pitchforks and head to the comment section. I'll see you all down there.
Disclaimer: No tokens were consumed while writing this post.
This expression "Writing code was never the hard part" is something that I've preached. Because to an extent, it's true. I've been a software engineer at various levels for 15 years and the hardest part of the job is always dealing with everything that is ancillary.
- Meticulously defining requirements so that there is absolutely zero confusion from stakeholders when you deliver what they asked for.
- Convincing stakeholders that what they are describing is not actually what they want. No, they actually want this instead.
- Finding out how to dodge the hype traps. No, we don't need blockchain. No, we don't need to go serverless. I know it's webscale, but SQL is what we need for our use-case. I know you're reading a lot about kubernetes... Actually, I like kubernetes.
- Remember those meticulous requirements? Well stakeholders are back and they're ready to gaslight you about what you discussed.
- Manager just sent you a PM. They want you to know that you shouldn't work overtime, but that thing that would probably take you a week needs to be done on Monday. It's Friday.
- Oh. The bikeshedding. Don't even get me started on the bikeshedding. We need to build a simple app so that our customers can fill out 4 forms and submit them. We then map it to a number of different downstream providers and the work is done. But Scotty Staff Engineer, who has something to prove, wants to talk about why our story points need to represent complexity instead of time.
There are a number of things you deal with day in and day out as a software engineer that have nothing to do with actually sitting down and writing the code.
Time for personal anecdote.
So I would say it's hard to really pinpoint where success comes from. I like to attribute mine to being pragmatic, having good intuition when it comes to design, being able to effectively communicate with management and being able to give good direction to software engineers.
I'm good at convincing our team that we can go with the simpler approach. I'm good at developing an understanding of operations and the cloud, which has led me to design pretty reliable and scalable systems. I've also just been around the block enough times to know where the footguns are hiding, because I've solved these problems before.
I can pass interviews. Leetcode comes easy to me, even though I hate it.
But I also have ADD. And sitting down and writing code without getting distracted is something I find very difficult. I can decompose the problem. I can be forward thinking when it comes to design. I recognize when I need to rewrite something. I can research with the best of them, and syntax is something that naturally sticks in my head for some reason.
But locking in to do that? Feels impossible sometimes. Sorry, I have to respond to this slack thread. Someone wants to build their own Kafka and I just need to teach them how to use Kafka.
Ok, back to writing the code. Wait. Our pipelines quit working. Our Atlassian admin revoked all of our keys in his effort to implement our company's new "least privilege" OKR. Sure would have been nice to get a heads up. Ok. Back to writing the code, what was I doing again?
I say all this because I notice this trend. This trend that people think that only bad engineers are using AI. I'd say it differently. Developers are using AI badly.
If you're not reviewing, and actually reviewing, the output of the AI, you're doing it wrong. Ok, no, you don't need to fully understand that regex in that script that it wrote to parse that API. The data that comes back looks good and it's not that important that it's 100% accurate.
Claude code has become my daily driver. I have significantly shifted how I write code, and I've also spent a lot of time learning in depth how these products work and all of the different techniques, the ins and outs of the settings and configurations. It solves the ADD I have. Because I can sit down and write out a PRD. I can design something ahead of time and be very specific about what I'm asking for. I can give it specifics to validate the implementation. And I can get distracted while I'm writing it and come back to it and finish it.
I know that AI is a contentious topic. I know that it feels so good to dunk on the vibe coder. Because they're not real programmers. The "no true Scotsman" is practically baked into the software engineer's DNA.
But I will stand by one bold prediction. The future is not going to be a dichotomy of good and bad programmers. People who figure out how to effectively leverage AI in their job (emphasis on effectively..) are going to excel. People who refuse to adopt the technology are going to fall behind.
Because as a reminder: This is what engineering is. It's very easy to go back a year and point out all of the bad things AI used to do. And a year later, it still does a bunch of bad shit... but it has also dramatically improved. Remember when it used to just straight up hallucinate APIs and websites that didn't exist? Now it does that sometimes, but it typically does it in the background and iterates enough to get to the real API.
And it's only going to get better. It might get worse before it gets better, but eventually, this technology will be better than it is today.
Or it won't be, it'll be abandoned to the graveyard of other tech that never lived up to the hype. But I don't think that's the case here.
DebateRealistic4840@reddit
Honestly, I relate to this a lot. The 'ancillary' stuff is where the real grind is. For me, the ADD aspect makes focused coding tough too. I found that using an AI assistant like Claude code to help draft initial PRDs, generate boilerplate, and even suggest refactors has been a huge help in staying on track. It lets me externalize some of that initial setup and validation, which frees up my focus for the actual problem-solving. It's not about replacing thinking, but augmenting it.
fletku_mato@reddit
You have ADD, but you have more patience to meticulously review LLM output than to write code?
CodelinesNL@reddit
I have ADHD and Claude is great. It takes away a lot of the tedium of having to go from a problem description to good code.
ADHD isn't a patience issue, it's a motivation issue. Common misconception.
fletku_mato@reddit
But are you highly motivated to prompt Claude and read its output?
Chwasst@reddit
Yes. I like talking to people, explaining ideas, processes, patterns, because interaction gives me a quick feedback loop. It's engaging. When I'm coding by myself it's tedious; it's just me and a shit ton of abstract thoughts running into each other constantly.
Talking to LLM is similar to talking to people in this aspect. Although I’d argue the best kind of workflow is pair programming for me - that way I get both accountability and motivation in real time.
Cute_Activity7527@reddit
This. I care more about delivering value than what's in between.
If AI can help me deliver it faster -> bonus points.
Also I don't see us working the same way we did in the past 20 years. I prefer to work like Iron Man talking to an AI assistant, solving complex problems on the fly. I don't need to know how to weld to tell a robot to weld two pieces of metal. And my goal is two pieces of metal welded.
Chwasst@reddit
I don’t agree on the last part - I actually think that we still need fundamental knowledge. We should know how to weld, the robot should simply act as multiplier of said knowledge - but we still need to be able to control the process and verify the results.
herbacious-jagular@reddit
I'm the opposite: I really enjoy the quiet and the slowing of the mind that occurs with intense focus.
Thus LLMs aren't something I enjoy using regularly.
Chwasst@reddit
I don’t get that kind of focus in my job. Utterly boring projects with annoying workflows simply don’t do that for me. And if I find something that actually puts me in such focus, then oh boy, I end up in a hyper-focus session that can last up to 20 hrs and makes me forget that sleep, food and other physiological needs exist. Then afterwards I need up to 3 days of rest to recover, so it’s not really a viable strategy in everyday life for me.
ninetofivedev@reddit (OP)
Correct. Engaging with ai is… engaging. Sitting down and pounding out code is a lot of me and my own thoughts.
herbacious-jagular@reddit
And guiding the AI is sometimes slower than me simply writing the code myself, even when I make mistakes.
Cute_Activity7527@reddit
Tools issue. For ppl like you it would be faster to let AI go line by line, where you can fix arising issues right away. I don't believe a human can write "words" as fast as a computer.
The interface for pouring your thoughts onto paper has changed and can be faster, while still keeping experts in the loop to fix occasional bugs.
CodelinesNL@reddit
There's a pretty big difference between having to use that corporate mandated Copilot crap and using the right tooling though. I was on the "ugh I'll do it myself" side as well half a year ago.
Expert-Reaction-7472@reddit
this is the thing... some people like their own thoughts a bit too much. they're the ones that struggle to delegate and collaborate, and therefore are unproductive with an LLM-based workflow.
CodelinesNL@reddit
Absolutely. But your idea of its "output" is probably wrong. You've probably seen some vibe code BS where it spits out an entire application based on a vague prompt.
I use it in steps:
It's pretty much the same design > spec > red-green-refactor cycle I always use, it just takes WAY less time and is way less tedious. It's especially good at the boring shit I dislike.
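A minimal sketch of that red-green-refactor loop, as a hypothetical example (the `slugify` function and plain asserts stand in for a real feature and test runner):

```python
import re

# Red: write a failing test first, for behavior that doesn't exist yet.
def test_slugify():
    assert slugify("Hello, World!") == "hello-world"

# Green: write the simplest implementation that makes the test pass.
def slugify(title: str) -> str:
    # Lowercase, drop punctuation, join the remaining words with hyphens.
    words = re.findall(r"[a-z0-9]+", title.lower())
    return "-".join(words)

# Refactor: with the test green, the code can be reshaped safely,
# because the test catches any regression.
test_slugify()
```

The point of the cycle is that the spec (the test) exists before the implementation, so reviewing the AI's output reduces to checking the spec plus watching the tests go green.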
People here keep pretending that AI writes 'bad' code, but that's simply untrue. It makes very few mistakes, and it almost always self-corrects its mistakes faster than I could.
Like I said before: people here seem to mostly have experience with ChatGPT or Copilot. These are complete trash. The model and harness you use make all the difference.
SplendidPunkinButter@reddit
Nobody who uses an LLM to generate their code is carefully reviewing it. That’s just not human nature.
Wide-Pop6050@reddit
If I use it I use it for small sections of code - not the whole thing. So I'm certainly reviewing it as I integrate
ninetofivedev@reddit (OP)
That’s like saying nobody actually reads your PR.
Some of us do.
serious-catzor@reddit
Well.. they don't. It's a generalization but it encapsulates a few different truths about us and our cognition:
Ten-line PR -> Finds 12 faults and proposes a refactor.
500-line PR -> LGTM
CodelinesNL@reddit
But how is that an AI problem? The same standards towards PRs apply.
serious-catzor@reddit
AI? This is about humans.
More specifically, it is about how likely a human is to review, and yes, the same rules apply. That is why the example works.
fletku_mato@reddit
It's not an AI problem. It's a human context window and motivation problem. At least when the massive PR was written by a person, someone in the team must've read it (while writing it).
CodelinesNL@reddit
But it is still the same size PR that needs to be reviewed by someone else. Nothing fundamentally changed in that aspect.
It's also a dangerous assumption that just because the original author wrote it, it's "more correct". There's a reason we do reviews after all :)
fletku_mato@reddit
I wasn't making such an assumption, but if I ask about the details of something my coworker "wrote" they'd better not be reaching for Claude to explain "their own" code.
CodelinesNL@reddit
No, absolutely. I've made a very explicit friendly note in our Slack this week that "But Claude said..." is not an acceptable argument in any discussion 😉
yad76@reddit
"lgtm" was the standard at the last place I worked. Always a huge disappointment, as why do we even have the PR process blocking me from pushing my code forward if we aren't taking it seriously? Then at the next standup, that same person would invariably say they spent all of yesterday afternoon or this morning or whatever reviewing PRs.
Appropriate-Wing6607@reddit
There are Dozens of us!
-Django@reddit
Strongly disagree. I'd be mortified if I put my name behind a vibe-coded PR, and super embarrassed for my co-workers if they did that
gsxdsm@reddit
Why
-Django@reddit
Because it's unprofessional and lazy to make your coworkers review something you couldn't even bother to read. It's almost like if a coworker never tested their code before shipping it. I'd be pissed if I had teammates that did this routinely, and embarrassed for a dev who thinks it's an acceptable way of doing business
Spiritual-Theory@reddit
I review the code.
I try to keep PRs small, breaking them into smaller tasks if needed, and I have a strong sense of what the architecture and code will look like at the end. Reviewing it and asking Claude questions along the way leads to me completely understanding every line. I often find 5-10 things to change, and Claude updates, commits, and pushes for each. At the end, if Claude and I agree, I'm confident it's good. I'll do 5-10 PRs a day this way. We require a second review, but I have yet to receive a change request from another dev in a PR review.
Traditional_Fox7091@reddit
I literally go through it line by line. If you're not using AI, I'm not sure what to say. It's here to stay.
CodelinesNL@reddit
"No true scottsman"
coderstephen@reddit
You mean "No true Scotsman".
ninetofivedev@reddit (OP)
Yes. I find reading and reviewing code less distracting than digging into different sectors of my brain as I context switch between problem solving and writing a solution.
fletku_mato@reddit
As someone who also has the attention span of a 5yo, I find that very hard to believe. I find it hard to get through even human-made pull requests if they're big. I'd hate reviewing LLM outputs all day a lot more. When you are actually interested in what you are doing, time flies and shit gets done. I can't imagine being very interested in how Claude solves something.
CodelinesNL@reddit
ADHD isn't a lack of attention, it's a lack of motivation. The name is a massive misnomer.
CreativeGPX@reddit
While I would agree that it's more complex than most people understand, your own link summarizes symptoms as "fidgeting, difficulty paying attention and losing things." It's completely reasonable to summarize a lack of attention as a symptom. Rather than reject that, if you want to push back I think it makes sense to talk about the nuance of how that lack of attention presents itself.
For example, the lack of attention is because of an inability to "aim" attention. In some cases that means being hyperfocused on the wrong thing. In other cases it means having attention divided between 10 things, only one or two of which need your attention at that moment.
Another example: I remember one doctor said that for a person with ADHD it's like everything happens now or infinitely in the future. It made a lot of my wife's behaviors (she has ADHD) make sense, even seemingly unrelated ones like estimating how much time has passed or how many words/questions I said. But that distorted view of time (perceived by many as impatience and ease of frustration) can obviously have much more complicated effects when it comes to the ADHD person assessing their own subjective productivity or assessing things like the amount of waiting, working, etc.
CodelinesNL@reddit
For the record, I am pushing back because I have ADHD, officially diagnosed.
If someone with ADHD tells you that you misunderstand ADHD, just read the damn link. It's not that deep; ADHD does not mean you don't have the attention span to read through a review.
That's like saying everyone with autism is good at counting matches. It's a pretty gross oversimplification. Was this a GP? Because he sounds like he barely understands ADHD himself.
ADHD brains work fundamentally differently from neurotypical brains because our reward system is fundamentally different. In many cases this works against us; in many cases it works in our favor. ADHD can be debilitating in circumstances that are 'made for' neurotypical people, but a superpower in others.
Someone who has ADHD but finds code boring will not be a developer. In fact; most people with ADHD I know (and it's pretty easy to spot if you have ADHD) are pretty fucking good developers, especially when the 'chaos' gets managed properly.
My wife also has ADHD. She would never be a dev. Because her reward system will not trigger on the work we do.
CreativeGPX@reddit
I was just saying that your own pushback was an oversimplification as well so you should have humility when you're combating oversimplification with oversimplification and redirect to the actual point of disagreement or misunderstanding. Telling a person it is not a lack of attention when it is and your own link says so doesn't help. What helps is what I said: nuanced discussion that helps you come together about what that lack of attention does and may actually mean with respect to the context of this conversation. It's a vague phrase so discussion can help clarify what it means so you're on the same page.
Fwiw, I did read your link and pointed out how it agreed the lack of attention is a reasonable summary of the symptom. I very explicitly said it's fine to get into the nuance of what that means, but that you can do so without pretending the person you were responding to was making a wrong claim about ADHD. Fwiw, I don't have ADHD and my wife does, but my knowledge about it comes from reading several medical books about it, so yes, I see the brief internet article you linked as oversimplifying a lot compared to what I've learned, and think it's valid to explain some aspects in more detail.
Also given how defensive your comment sounds, note that my comment didn't comment on what tasks people with ADHD can or cannot do. It was simply about the way you were communicating about ADHD.
CodelinesNL@reddit
Again, it's a common misconception, also due to the name ADHD itself, that people with ADHD have a lack of attention. This is a very simple misconception that I simply pointed out, with a simple explanation suitable for laymen, to support this.
Yeah, well try mansplaining this to your wife, see how that works.
And don't come at someone with ADHD with "I read some books". We all hyperfocus on ADHD research after getting diagnosed. I read all the fucking books.
Now go "well AkSuAllY" to someone else, pretty please?
unconceivables@reddit
That link literally says ADHD is difficulty paying attention in the first paragraph?
CodelinesNL@reddit
Why do I even bother citing sources?
Read a few more lines would you? "Despite its name, ADHD doesn’t mean that you lack attention."
ToastyyPanda@reddit
Since you're getting downvoted, I'll chime in and agree with you here as well.
For whatever reason, reviewing others' code comes incredibly easy to me, and I'm less distracted when doing so.
ninetofivedev@reddit (OP)
Reddit's a little moody today
ToastyyPanda@reddit
Haha all good. Just wanted to put another perspective out there. I find it a lot easier vetting/reviewing other people's code (and even somewhat interesting to see others coding styles), but when it comes to my own code it's like my brain starts working half as fast lol.
CanIhazCooKIenOw@reddit
Reviewing PRs is a skill that most engineers don’t have. That’s why reviews are always the team bottleneck: only a very few actually review, and even fewer do it properly.
Now, because LLMs simplified writing code, people are panicking because they don’t have the skill
CodelinesNL@reddit
I have ADHD and you're right. People want to get their biases confirmed here it seems.
Looking at who is still posting here (I recognize your username), it seems most of the "OGs" here gave up on this sub. I'm starting to understand why.
rupayanc@reddit
The top comment kind of proves your point by accident. Reviewing LLM output requires the same attention loop as writing code, except now you're auditing someone else's decisions instead of making your own. For a lot of people that's actually harder. Writing code is generative. You're building something forward. Reviewing AI output is investigative. You're checking for what went wrong and why. Different cognitive demand entirely.
pydry@reddit
I genuinely can't tell if there is a tsunami of bots dying to give us their latest hot take about how precisely LLMs will replace them, or if there are just that many human NPCs who are basic enough to write this slop and basic enough to be anxious that nobody can distinguish their slop from LLM slop.
annoying_cyclist@reddit
OP's been a regular participant here for quite a while, since before bot posts and slop were something we really saw. It is regrettably helpful to be skeptical of bots and astroturfers these days, but I don't think OP is one of them.
ninetofivedev@reddit (OP)
You’re certainly free to click that button that hides the post. You can also block me and never see my post again!
CreativeGPX@reddit
Those are both selfish routes that don't help the overall community. IMO ideally people only block/hide personal harassment, but solve what they see as poor content by engaging with it as a community (votes, discussion, rules, moderation) so that everybody can benefit.
ninetofivedev@reddit (OP)
There are 100k+ daily visitors to this sub. Feel free to block some people.
Eliarece@reddit
Judging by the fact that in 16 days, OP made 5 controversial posts on r/ExperiencedDevs, I think they're someone that enjoys creating discourse
ninetofivedev@reddit (OP)
Having a day, love?
BigJimKen@reddit
I just fundamentally disagree with this. Inference costs are going to skyrocket when the arse falls out of the AI sector. The people who are reliant on LLMs to do their jobs properly are going to be fucked because management are not going to pay for Claude Code when tokens are being sold at or above inference cost price.
Even outside of that every argument I hear about how these tools are going to be essential going forward relies 100% on how things could be in the future instead of how things are now. The reality is that right now the best programmers are the ones not using LLMs to generate code and I can't see that realistically changing what with the scaling and data walls the LLM industry is currently nestled against.
The past was a dichotomy between good and bad programmers, the present is a dichotomy between good and bad programmers, and the future will be a dichotomy between good and bad programmers.
Smallpaul@reddit
I keep hearing about how the price of tokens is going to skyrocket, but I’m curious how persistent people think this trend is going to be?
Do you think that the inference costs for 2026 level Claude Code performance will be higher or lower in 2056? In 2046? In 2036? In 2030?
How long is this inference-pocalypse going to defy the long term trends of dramatically more efficient LLM algorithms and chips?
Do you honestly think that a technology that is accessible to current programmers is going to be priced out of reach forever??? And if not, then for how long?
BigJimKen@reddit
Absolutely no idea. Trying to predict things outside the scope of the obvious compute problem and financial bubble is probably a pointless exercise since this sector is currently hitting walls in lots of different places, technological and financial.
Hopefully we do reach a point where cheap, financially sustainable inference on massive, smart models is possible but I don't think we will reach that point soon.
Smallpaul@reddit
You think it is impossible to infer ANYTHING from the history of computers starting with vacuum tubes and proceeding through to smartphones? Or GPUs from when they were FPUs through to H100? Or compilers from outputting straight assembly through LLVM with multiple optimization passes?
This is the one time where things will get more expensive and stay that way for the long term, because it is somehow unique at a technical level?
BigJimKen@reddit
Yes, that’s exactly what I said. There definitely wasn’t added subtext in my comment that explains why I think predictions about this sector are difficult. Definitely.
-Django@reddit
This is a good question. People keep talking about a rug-pull and skyrocketing token prices, but forget two things:
- Smarter LLMs generally need fewer tokens for the same result
- Open-weight LLMs are increasingly closing the gap with closed-weight LLMs
CodelinesNL@reddit
But the good developers who are now using it extensively don't magically become "bad developers" when they pull the plug. We're back where we started, which is just a year ago.
Source? Because this is something I see being repeated on Reddit a lot. Just like I see on r/java the constant "No one uses Kotlin" for example. Are you sure this is not just your bubble?
Sure a whole bunch of annoyingly loud nobodies are very vocal about it. But most actually really good devs I know use it extensively. They just don't spam LinkedIn about it.
The future is a dichotomy between good and unemployed programmers. Infosys' business model just evaporated.
If I need to spec a change in detail, Claude Code is going to do a better job at it than the typical dev who needs this input.
BigJimKen@reddit
I think a lot of those people are going to have major issues re-adapting and are going to be at a major competitive disadvantage.
There isn't a study that plots skill against tokens used that I know of - at least not yet - but I don't think it's a coincidence that most of the people I have worked with in my career who I would hire without much thought are generally skeptical about LLM workflows. There are plenty of examples of bad LLM driven software development though. Have you seen the leaked CC codebase? It's pretty grim.
I use these tools, btw, probably more than the average programmer. I'm not a luddite. I have an extremely robust stack and workflow for it that is (reading your other comments here) nearly identical to yours - but the outputs of that are still worse than the code I write unassisted. I work in a domain where it's very important we are right-first-time so even if I am gaining time, it's not worth it. If I want to bring that gap to parity I can, but then I start sacrificing the productivity gain and at that point it's a pointless expense.
Agreed in almost all circumstances, but the average developer is not good lol
hachface@reddit
Studies show cognitive skills do actually decline in correlation with LLM use. It’s not a magical process that turns good developers into bad developers but a natural, gradual decline in independent problem-solving ability.
CodelinesNL@reddit
Oh wow, this is the exact level of nuance as the "computers make you dumber" stance a few decades ago. And yes I'm actually old enough to remember these claims.
You can absolutely use LLMs that way; we see it here and on LinkedIn all the time. People clearly using LLMs to write entire posts for them.
But you fell for the correlation does not imply causation trap. It CAN be used to achieve the reverse. As research shows: "They used a generative AI instructional chatbot called ChatTutor to prompt students to connect ideas and explain their reasoning. They discovered that students who used the chatbot performed better on later assessments compared to those who learned to analyze in traditional ways. "
It's a tool. Shoot yourself in the foot, or use it to get smarter. Your choice.
hachface@reddit
Some people smoke meth once a year to give them the motivation to do a deep clean of their house. It's just a tool!
CodelinesNL@reddit
Reductio ad absurdum. What fallacy are you going to reach for next? Or wait, don't bother; I'll just write a bot for it. It'll do it with more intelligence and humor than you, I'm sure.
simfgames@reddit
Studies in this area are so broad that they're useless, because there's giant variability in effectiveness: some people know how to use it, and others don't. Many studies are set up very poorly. And they're also out of date immediately.
You know what else isn't well studied? That using version control improves productivity. So I guess you should stop using that, until the studies come out and confirm.
hachface@reddit
This technology came out yesterday practically. The idea that there is such a thing as a right way or safe way to use it is not yet clear. Version control is a proven technology that addresses an obvious need.
simfgames@reddit
My point about git was that studies aren't a magic bullet. Studies have many limits. I often see them brought up here as the final authoritative source, but they often aren't worth shit.
It's also impossible for studies to be up to date with a tech that just came out and is evolving so fast.
hachface@reddit
I'm aware of the limitations of studies. They are still a smidge above writing whatever bullshit springs into your mind, which is what most reddit comments are (this one included, i'm not attacking you)
CodelinesNL@reddit
What's even worse: cherry-picking information that supports your stance while ignoring the information, from the same studies, that doesn't.
That might be ignorance. But lying is malice.
CodelinesNL@reddit
Cherry picking studies (and not even sourcing them) and ignoring the same studies also saying it can work in the opposite way, that is the Reddit way.
Really I started posting here again today after a hiatus of a few years, and I already remember why I quit :D
Chwasst@reddit
Studies also show it’s much easier and faster to recover from such decline than to start fresh and build them from 0. So guy above has the point - it’s just a matter of adapting again to previous workflow.
Expert-Reaction-7472@reddit
you're getting down voted but what you're saying is the truth whether people like it or not.
It's like a tree surgeon refusing to use a chainsaw. Sorry, but nobody is going to pay you a week's wages for something another person will do in a day because you refuse to use modern tooling.
fletku_mato@reddit
A chainsaw is pretty predictable and reliable.
Expert-Reaction-7472@reddit
enjoy being unemployed
fletku_mato@reddit
If I had the same beliefs as you, I'd be pretty worried too.
How much value do you actually think your unique prompting skills have?
Expert-Reaction-7472@reddit
you can choose a staff engineer that can write code by hand and by AI, or one that can just write code by hand. Why choose the one with a more limited skill set?
fletku_mato@reddit
Idk, you could just choose a PM that can command their own fleet of AI agents, or even better, do it yourself. Why pay for anyone's expertise? I'm obviously not being completely serious here, but maybe you get my point. Enjoy your unemployment.
Obviously, if my employers want me to use Claude or whatever, I can do it. Commanding Claude isn't exactly something that requires years of training.
FreeYogurtcloset6959@reddit
I can build a house out of paperboard in a day. Regular real estate builders need a year for that. Is my solution better because it is faster?
Expert-Reaction-7472@reddit
that's a tenuous analogy.
People want the tree cut down, they dont care what tool you used to do it.
The material stays the same.
At this point if you can't get a good enough cut out of an LLM that it is 90% to do with how you hold it as opposed to how sharp a tool it is.
FreeYogurtcloset6959@reddit
Making software is much more complex than cutting down a tree. If you cut a tree, you can immediately see if the results are good or not. In software you can't.
Expert-Reaction-7472@reddit
if you write a unit test you can.
CreativeGPX@reddit
No. People and AI both routinely fail to capture all requirements of software through written tests. In practice, unit tests rarely guarantee that you completed the job correctly.
FreeYogurtcloset6959@reddit
Would you write unit tests manually or with AI?
If you write them manually, it means that you still have to write code and understand it 100%.
If AI writes the tests, how could you know if the tests are good?
CodelinesNL@reddit
I write specs, it writes tests based on those specs, which I can review. It then red-green-refactors the corresponding code.
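For what it's worth, that loop can be sketched roughly like this (the spec line, function name, and behavior are all made up for illustration):

```python
import re

# Spec (human-written, reviewed up front):
# "slugify: lowercase the title, spaces become hyphens, strip other punctuation."

# Red: tests derived from the spec come first and initially fail.
# Green: the minimal implementation below makes them pass.
def slugify(title: str) -> str:
    title = title.lower()
    title = re.sub(r"[^a-z0-9 ]", "", title)  # strip punctuation
    return "-".join(title.split())            # collapse spaces into hyphens

assert slugify("Hello, World!") == "hello-world"
assert slugify("  Many   Spaces ") == "many-spaces"
```

Refactor then happens with the spec-derived tests held fixed, which is what makes the generated code reviewable against the spec rather than against itself.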
The quality is generally better than what a typical junior or mid-level dev produces. I very rarely have to tell it to change something, and if I do, it just does it, without complaining. Unlike typical junior/mid-level devs who want to bikeshed naming conventions instead of writing functional code.
If you think "AI writes bad code" you're extremely behind the curve.
hachface@reddit
I’m sorry but that’s incredibly naive. Test code is code like any other. You can test irrelevant things, mock up bad assumptions, and just have plain logic errors in tests.
CodelinesNL@reddit
Claude Code writes better code than most developers. Your analogy is not even close to reality.
FreeYogurtcloset6959@reddit
Yes, it writes great code, until it starts to overengineer on complex problems it hasn't seen in its training data.
CodelinesNL@reddit
It's a tool; how it behaves depends on how you use it. Using an openspec-based workflow, that's not how it behaves. Sure, it can sometimes tunnel-vision into a certain direction, but that's something many devs do as well.
What you describe is not my general experience in the slightest. And when I instruct it to use a TDD approach, it just fucking does it, instead of arguing about it. When I review its code, it has covered all the unhappy flows automatically (according to spec), and I don't have to argue with the developer that part of their job is also writing validators for their input.
Using that workflow (which explicitly tells it to challenge my assumptions), it very often sees edge cases I did not spot, which I would've noticed only while implementing the code myself. It is deliberately instructed to challenge me, and it does that very well.
Using it in the refinement stage, I have it as an assistant that helps me refine a Linear ticket from the context of the codebase. It can reason much more broadly and faster about this codebase than any human can. As a result, our stories are specced MUCH better than what a typical PO or senior dev would do.
It's faster and can reason more broadly (not smarter, but computers are very good at graph searches) than we can. That's where the strengths lie.
9 times out of 10, it can one-shot a story specced like this. The code quality is generally higher than what a mid-level dev produces.
It's also very good at figuring out deployment issues when you give it read-only access via the GH and AWS CLIs. Again: very broad and fast reasoning. Something these models are getting VERY good at.
By all means pretend that I'm wrong. I understand that for a lot of developers this is a very uncomfortable truth that saws the legs from under what they see as their core strength. But the simple truth is: writing code was just an annoying bottleneck, never the really hard part, and we're not particularly good at it.
CodelinesNL@reddit
This sub is ridiculous. I mean, it was pretty bad a few years ago, but holy crap, what happened here?
CodelinesNL@reddit
Alex Karp (Palantir CEO) recently said "only two kinds of people will succeed in the AI era: trade workers — ‘or you’re neurodivergent’" 😉
And there's plenty more coming out about how AI is a pretty awesome structuring tool for people with ADHD specifically.
CreativeGPX@reddit
Is he calling himself neurodivergent or saying he/CEOs will not succeed?
CodelinesNL@reddit
No idea. I think it's mostly funny; I don't put much stock in what billionaire CEOs think. They're very far removed from actual life.
throwaway0134hdj@reddit
LLMs are great for grabbing you a lot of code that may or may not work, and a large part of it is getting the requirements and context in order; that's been my experience. I'm usually in a prompt loop for an hour to get the code the way I want it. AI boosters would say this is a "skills issue", as if you can get good at a non-deterministic process.
I find developing with LLMs a lot more piecemeal than what the marketing portrays; there are tons of security, networking, and infrastructure considerations, which I think annoys a lot of the C-suite, who seem to wish it didn't exist. It's odd: they just want everything simplified and to pay people low salaries.
I believe a lot of this is related to the economy; we are heading towards a recession. A lot of what we do, I think, is not well communicated to the general public, who believe it's just banging out code all day. I've heard non-tech people refer to it like writing a book…
How does AI replace jobs end to end without someone behind the wheel? Who knows… I'm fed up with all the AI nonsense; I use it as a tool and that's it.
It is so incredibly tiring opening up Reddit, YouTube, or LinkedIn and just being bombarded with news about AI doom and gloom, or how software development is cooked… And the very same people saying this have zero experience in the field. I believe a lot of this is some mix of trolling, marketing/sales, and capitalism.
CodelinesNL@reddit
That just shows you're behind the curve.
Claude Code produces code that is generally of higher quality than typical devs' (it almost always covers edge cases, validation, etc.), and when it makes mistakes, it self-corrects based on compile, linter, and test errors faster than a dev would. I have way fewer nits to pick with Claude-produced code, and it also does not stray from the task we defined in the spec, unlike developers often do.
This "it writes bad code" does nothing but signal you're not up to date.
Doomed? It's better than ever. AI does most of the boring shit for me, so that leaves more time for fun things.
As long as you don't think "development = typing code on a keyboard", development is not dying.
throwaway0134hdj@reddit
Tbf I haven’t used Claude Code, the whole CLI feels a bit cumbersome and hard to navigate. I just copy and paste into the Claude website. I’m all ears for what the correct approach is.
CodelinesNL@reddit
How do you work on a local codebase by just using the claude website?
throwaway0134hdj@reddit
I use Kiro as well; my work is split between data engineering and development. When I am building and wiring up entities, objects, and pipelines, that's when I am using the website. When I am doing traditional dev work, that's when I'll use Kiro, so effectively the AI sees the whole codebase and I ask it to make changes based on requirements.
McChickenMcDouble@reddit
As a fellow ADHDer my experience aligns well with yours. Cutting out the writing code step has made me much more effective and more engaged at my job. I was the biggest AI skeptic until the last six months or so, when my entire outlook changed as I seriously sat down and invested in building out new workflows to leverage LLMs to automate all the boring stuff
CodelinesNL@reddit
This "experienced devs" subreddit is now apparently full of developers who could not even be arsed to get some experience by paying for a 20-dollar Claude Code sub and actually building something. And yes, I'm talking about the people downvoting you and some others here.
What a useless dreck of a place this has become. It's basically CSCareerquestions but with people who think they're smart.
Outside-Storage-1523@reddit
It's like saying "Writing code is easy as long as you think it 100% through beforehand." But my dear friend, how many times have you "thought it through beforehand" and then got something unexpected? Do you think you can think through bean counters? They can't even think through themselves!
I'll add a similar example which is much more technical and doesn't even involve bean counters: I found it is very hard to reverse engineer the full specification of mmap/munmap in the current Linux kernel. Some questions are left unanswered in the manpage, so I had to do some experiments of my own and look at the code.
ninetofivedev@reddit (OP)
I haven't found what you're describing to be the problem.
If things change, I iterate. I'm not expecting that I'll come up with a 100% perfect design spec.
Sometimes I scrap it altogether and start over.
My general workflow when it comes to working with software and AI hasn't really changed, other than instead I'm passing off executing on the plan to the agent, and then I review it.
Taking notes while you review it and feeding that back into the AI has been the biggest game changer for me. Also making sure that I manage sessions independently so that context isn't poisoned or biased towards their original implementation.
CodelinesNL@reddit
I'm writing the same plan. I'm now handing it off to "someone" who typically does a better job writing the code and doesn't argue with me when I tell them they made a mistake. "They" apologize, fix it, and save clear instructions so they don't repeat that mistake again.
Oh and it's done in a few minutes instead of 2 sprints.
CodelinesNL@reddit
That's not at all what AI based workflows look like. Not even close.
Ok-Hospital-5076@reddit
Don't know if it was the hard part or not, but it was definitely the most fun part. It felt so good. That dopamine hit, that flow state. It was a pretty cool job. That feeling is gone.
qrzychu69@reddit
For me using ai is worse for my ADHD, because it's mostly waiting.
I can't really work on two things at once, and I noticed that I learn a new codebase MUCH MUCH slower than before.
That said, it's amazing for codebases I don't feel like understanding - we have angular frontend, I don't give a crap about angular, let Claude do the thing.
We have Python generating SQL from text templates (dbt is really great, guys), with hundreds of data models. I can make changes and add new things thanks to Claude and Copilot, but I know as much about the models as I did a year ago.
When I code F#, I am typing every single character. It's mine.
The rest? Screw that :)
arihoenig@reddit
Defining the requirements from the abstract down to the instruction level is the hard part. Code is the latter part of that process. It's all hard; code isn't uniquely the hard part.
brosiedon169@reddit
I’ll just toss in my experience with AI assisted coding.
I have just over 7 YOE now at various levels and working at a few different companies of various sizes.
I grinded really hard in my first couple of years to absorb all I could about coding practices and patterns, fell in love with functional programming, and tried to use those ideals in the systems I've built.
Then comes along the first iterations of LLMs and AI assisted coding tools that were available to the general public. Boy was I a huge hater. “I don’t need AI to write good code. I know what I’m doing. I can write better code than what this thing outputs”.
Fast forward to now, and I haven't written code by hand, beyond a couple of quick SQL queries, in close to 8 months. I've spent hours fine-tuning my skill files to get the output I want. I've automated all my mundane tasks, like updating and creating Jira tickets, with a Cursor rule that is set to always apply, and my PM loves me for it since she knows exactly what I'm working on and has a plethora of comments to dig through on each ticket to give her more context.
I’ve fully embraced the change and it’s been amazing tinkering with the tools to get them to work the way I want them to.
vibes000111@reddit
I also have ADHD and the thing that helped me the most was medication - it made a giant difference in the quality and quantity of my work and how engaged I feel with everything at work.
I do use LLM assistance for coding, but it feels like the ADHD would be a problem with or without it. Because it's still work.
serious-catzor@reddit
That's typical with ADD. You have it all figured out and know how to solve it, but you'd rather go build an entire new program from scratch than finalize it.
It's basically procrastination.
Saki-Sun@reddit
The hard part of writing code is balancing readability vs. complexity, dealing with stakeholders, and listening to people who think AI will solve the previous two problems.
QuitTypical3210@reddit
I just want agents to do my job while I fall asleep at work
FreeYogurtcloset6959@reddit
I asked AI to summarize your post, and this is the summary:
They also emphasize that AI isn’t for “bad engineers,” but that many developers use it poorly. Those who learn to use AI effectively will have a significant advantage, while those who ignore it risk falling behind as the technology continues to improve.
So, this is the 1000001st post with the same bull*hit message.