Honest question: how do you stay sharp when the code practically writes itself?
Posted by minimal-salt@reddit | learnprogramming | 72 comments
I want to ask this genuinely, not rhetorically.
I'm 12 years into backend dev. I used to love digging into disassembly, reading RFCs for fun, optimizing queries until they ran 100x faster. That stuff used to make me feel like a craftsman.
Now I'm shipping features in half a day that used to take a week. Which is great for velocity. But I've noticed I'm not learning anymore? I hand off the boring parts to Claude, I skim the generated code, I move on. My brain is in "review mode" constantly, not "build mode."
I know the job is to ship working software, not to suffer. But I also know that staying technically sharp is what got me to this point.
For those of you who've been doing this a while, how do you balance "use the tools" with "keep your edge"? Is it even a real problem, or am I just romanticizing the slower days?
0x14f@reddit
Easy: Don't use LLMs.
It's like a musician, maybe a violinist, asking "how do I stay sharp when I can just replay the music on the CD player?" If you care about your skills, practice them. Nobody forces you to use LLMs.
markrulesallnow@reddit
how is this possible going forward in this career?
It definitely leads to skill atrophy, but when everyone on your team is using them and shipping features (that work) in half the time, management is not going to keep a Luddite on staff just because of their personal principles if they take 2x-3x as long to deliver the same quality of work
0x14f@reddit
In my company we care about quality more than convenience (and this is supported by management), so I am not about to lose my job anytime soon to some autobot. At best some of my colleagues use LLMs to become better, not as a replacement for themselves.
niccolololo@reddit
Nobody forces you to use LLMs unless you work for somebody
0x14f@reddit
In that case use them at work and don't use them at home (for your own projects).
AUTeach@reddit
To be fair, the expectation that programmers need to spend 8+ hours a day doing programming-related things for work and then spend N hours a night doing programming things to stay good at programming has kinda blown for the last twenty years.
Dzeddy@reddit
I mean I'd agree, but there's a reason median terminal level salary at tech companies is like 330k lol
lottspot@reddit
The "need" is to become good at the skills you want to become good at. It's not necessary to work on side projects for every person for all of time, but if your day job does not help you hone the skills that you want to hone... Then it is your responsibility to hone them on your time. This applies to literally any employable skill, and is not remotely limited to programming.
BogdanPradatu@reddit
This is becoming a real stressing subject for me. I barely have time to do anything between work, house chores, playing with the kid, doing some physical exercise etc. I barely have time to socialize with friends, read a book, play a game. When should I work on my own projects?
Less-Opportunity-715@reddit
lol welcome to adulthood.
-Periclase-Software-@reddit
Personally sometimes I use Cursor at home to do tedious stuff, but sometimes when it comes to writing my own view I just do it myself for the fun of it.
0x14f@reddit
Nice! :)
corrosivewater@reddit
Seriously, if you aren’t using LLMs at work, your output isn’t going to be sufficient and you’re getting let go. It’s basically a requirement.
drummer22333@reddit
In addition to this, many companies are tracking token usage as a core developer performance metric. Many companies I interviewed with in February and March this year had developers mention this to me.
b3wizz@reddit
Insanity
skill347@reddit
This is not realistic for someone working in the industry. Once LLMs became a thing and OP started shipping features this fast, management EXPECTS them this fast. That's how it works. Unfortunately nobody cares about code quality other than the engineers; it just needs to work.
Jack1eto@reddit
My team leader actually mocks anyone that uses AI lol, he says it sucks the fun out of the job and he doesn't care about it. And he's actually carrying the project hard (he has the exp tho).
A lot of small/medium companies are like this; internet talk is not real life
Less-Opportunity-715@reddit
A lot of soon to be fucked companies.
corrosivewater@reddit
Yeah, for real. You're not going to be first to market if the attitude is "let's not use this new technology because the old-fashioned way is how we work". It's simply not a realistic mindset.
dirtuncle@reddit
The idea that being first to market is the single most important priority is completely ridiculous though. I'll take mature, well-maintained software over hastily put together slop any day.
corrosivewater@reddit
You’re suggesting that someone who isn’t using AI, compared to someone of equal skill who is, is going to have a product that is slop? Did this person suddenly forget how to architect and design apps or something?
dirtuncle@reddit
I'm saying anyone who prioritizes speed over everything else will produce slop regardless of their skill level.
Less-Opportunity-715@reddit
Skill issue
disappointer@reddit
I have mixed feelings on it, because the LLMs still aren't great at what they do. Just yesterday (for example) I had CoPilot refactor one function and, in the process, it completely nuked two unrelated functions, and this was in a not-very-large TypeScript file.
When it comes to understanding our half-million-line codebase in a meaningful way, it doesn't offer any panaceas.
It's great for demos, for bootstrapping new projects, and for unit testing (kinda). For actual enterprise development, it requires a lot of handholding.
"First to market" isn't great if your product is ultimately crap (especially if it reeks of the same sameness as all of the other LLM-generated shovelware). Quality of execution still matters.
corrosivewater@reddit
I agree on that. Largely any output from AI needs a lot of review. If you’re an engineer and have experience, you should be able to catch a lot of garbage and mistakes it outputs and rectify that.
I also think Copilot is inferior to something like Claude Code. Having used both, Claude just blows it out of the water. That said, I catch Claude making weird mistakes as well, and someone who uses it with little to no software experience isn't going to catch things like that. Engineers are mostly code reviewers at this point, and you're just reviewing junior (AI) engineers' code.
Less-Opportunity-715@reddit
In large cos, this is largely solved already. Our agents at my valley-based co are integrated into the entire stack and dev/deploy process.
skill347@reddit
Maybe you're right, it might just be the big companies; wouldn't be surprised, now that I think about it.
0x14f@reddit
I work in the industry and my management is intelligent enough to still care about quality. Might be because we don't really have a choice in my company, but I get your point that in some other places, they might not have the same requirements.
That having been said, I didn't suggest that OP should not do what their management requires of them. I was just suggesting that if OP cares about keeping their skills, they can code without LLMs, for example on personal projects. To me programming is a personal skill, not something I do only for "work".
skill347@reddit
I'm jealous of your management. I still think this is the minority of cases though.
But yeah, not using LLMs for personal projects is a good idea.
zquintyzmi@reddit
Artisanal coding is unlikely to be a career much longer
0x14f@reddit
True, but there are companies / industries where they can't use LLMs (unless run locally) for legal reasons. Also I would still advise people to know how to code, you know, for when there are outages. Forgetting that knowledge is like humans cutting off their legs (to be lighter) because we invented machine transportation.
AntiDynamo@reddit
Also, even if you have to use the LLM, you get to decide how, and to some degree when, and what you’re going to take out of it. You can limit it by:
only prompting general cases
using planning mode
always pushing back on its suggestions
prompting it to review work, find bugs, and critique things
only using it for rote work
having a good idea of your solution before you involve the AI
drafting solutions first before asking it to continue on what you have
0x14f@reddit
Absolutely! LLMs can be used for so much more than just generating code that the engineer then merely "approves". They can be amazingly efficient as learning companions and help the user/engineer become better while being faster, which is a win-win for both the company and the employee.
lianjin_365@reddit
In reality, though, efficiency demands compel us to rely on tools like Claude Code in our work. In China, our industry is in a pathological state; having completely drained our energy on the job, we no longer have the will to do any programming once work is over.
gomsim@reddit
I still haven't really let AI develop stuff for me with schemes and stuff. I just let the AI see my code base and I ask it to help me with some stuff. I dread the day when I am expected to just write instructions for AIs and review code.
Ok_Option_3@reddit
Can you accurately multiply dozens of figures using pencil and paper? How sharp are you at long division? How are your skills at looking up logarithm tables?
Do these skills still matter?
Upstairs_Snow5195@reddit
Calculators dont hallucinate
Grenrut@reddit
You will definitely get worse at programming.
But you will get better at reviewing, architecting, velocity, and effective use of LLMs, which are now more important skills than writing code.
I’ve forgotten everything I knew about writing assembly thanks to modern programming languages. Does it matter? Not at all
Upstairs_Snow5195@reddit
Compilers don't have monthly subscriptions and outages
apocalypsebuddy@reddit
You spend the extra headspace on learning infrastructure and deployment
Savings_Speaker6257@reddit
The thing that keeps me sharp is debugging. AI can write code fast but when something breaks in production at 11pm, the AI-generated code is a black box if you don't understand what it did. Those are the moments where your actual knowledge matters.
What I've started doing is treating AI output like a junior dev's PR. I read every line before I commit it, ask myself "why did it do it this way," and sometimes rewrite it manually just to make sure I could. It's slower, but I'm still learning.
The other thing: build something from scratch with zero AI assistance once in a while. I did this recently for a real-time multiplayer game and the amount I learned about Firestore listeners, race conditions, and state sync was way more than I'd have picked up if I just let the AI handle it. The struggle is where the learning lives.
OnionsOnFoodAreGross@reddit
Do you know how a calculator works and can do all the complex math? Do you care to know?
dirtuncle@reddit
If the best calculator currently available was consistently getting things wrong then I think you should absolutely care to know.
OnionsOnFoodAreGross@reddit
I use AI a lot and it isn't consistently wrong. What are you talking about?
dirtuncle@reddit
Example: Google's Gemma 3-based "AI overview" is at best correct 91% of the time.
If that was the case for your calculator, you would throw it in the trash.
Caaolmii@reddit
It’s a real problem, not just nostalgia. When the tool does most of the work, it’s easy to drift into passive review instead of active thinking. The edge usually stays sharp when there’s still something being done intentionally. That can be digging into one part of the system deeper, rewriting or simplifying AI output, or taking ownership of architecture instead of just accepting what’s generated.
If everything is delegated, the learning stops. If the tool is treated like a junior dev instead of an autopilot, the skill still compounds. Velocity is great, but the people who keep growing are the ones who still choose where to think hard instead of letting the tool decide everything.
No-Bodybuilder-4655@reddit
“I skim the code” yeah, that’s your problem. We should all collectively not deliver faster than it takes to at least review.
It is a real problem if you can’t debug your own code or even explain what it’s doing.
Blothorn@reddit
Read and understand everything. Don’t skim; actually follow the logic and ensure it makes sense. LLMs are good at producing plausible-looking code that seems sensible at a superficial level but has weird edge cases or pointless complexity/inefficiencies if you dig into it; there’s no excuse for merging code that hasn’t had a thorough human review outside of throwaway PoCs and the like that don’t need long-term maintenance.
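The "plausible-looking code with weird edge cases" point can be made concrete with a toy example (hypothetical function names, not from the thread). The first version reads fine at a glance and passes the happy-path test; the edge case only surfaces if you actually follow the logic:

```python
def average_latency(samples: list[float]) -> float:
    """Looks fine at a glance, but crashes on an empty list."""
    return sum(samples) / len(samples)  # ZeroDivisionError when samples == []

def average_latency_safe(samples: list[float]) -> float:
    """Same logic with the empty-input case handled explicitly."""
    if not samples:
        return 0.0  # or raise a domain-specific error; the reviewer must decide
    return sum(samples) / len(samples)
```

Which fix is right (return 0.0, return None, raise) is a judgment call the generated code can't make for you, which is exactly why skimming isn't enough.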
ComfortableSoup7@reddit
The “pointless complexity” is what’s getting me. I asked Claude to write a Snakemake workflow tying scripts together, and the formatting was so weird I couldn’t read it. I asked it why the code was written that way and it said “oh yeah, it’s not as readable as your way, do you want me to change it?” Uhhh, yeah.
One of the things that worries me about the next generation is they won’t know what good clean code looks like, because they’ll only ever see really complex, over-engineered code.
Fajan_@reddit
This is actually a problem, not an idealization.
The things that worked for me were being deliberate with when to switch to “build mode,” e.g., forcing myself to write something from scratch or debug more deeply than usual rather than simply reviewing.
Also, focusing really hard on a particular layer (performance, systems, edge cases) helps keep that skill sharp.
AI can work fast, but if you’re never struggling a little bit, you don’t get the craftsmanship mindset 🙌
esaith@reddit
10 years here. I keep AI out of my IDE while I code, even the boring stuff. If there's any question, I then have it code review for me. So not only do I stay sharp and stay interested in what I'm writing, I also learn from the AI what to look for next. There have been plenty of times where it told me one thing, then later another. Because I use it more as a learning tool than a passive one, the work doesn't feel like watching-paint-dry levels of boring.
repster@reddit
To me, writing the code has always been the boring part. Understanding the problem, designing the solution, is where the fun is. There is some level of satisfaction from an elegant implementation, but much more from an elegant design, and the majority of implementation is just boiler plate.
I'll also caution against "skimming" the generated code. What I have found is that AI is great at writing code, but the design side is frequently sloppy. You basically have to be on top of it to make sure that adding minor features later doesn't result in major rewrites, with subsequent bugs and instability.
I frequently ask Claude to do something, look at the result, and revert it. What it did was ok for a prototype but not a product. I then spend a few hours implementing a few abstractions before asking it to fill in the specific use case.
throwaway8u3sH0@reddit
This is like the Assembly people when higher level languages first came around.
Really learn to use AI. You ship in half a day? Make it an hour. Make it multiple features in parallel. Learn the techniques so that you don't have to review everything, just the most critical parts. Learn to run it overnight so you can ship 24/7. Learn to do it faster/better than other engineers who are also using AI.
We're at the next layer of abstraction. Don't be an Assembly expert in a world of Python, Rust and JavaScript.
k1v1uq@reddit
Sounds more like you are accelerating :D
I use the new extra time to invest in myself, explore alternative solutions, use AI to learn new things in general.
Last week, I came across this neat password generator:
https://www.uni-muenster.de/CERT/pwgen/index.php?lang=en&mode=pwcard
Liked the idea, so I wrote one for the terminal in Go with Bubble Tea. I'd never written a line of Go; now I can write basic Go, and I've also learned a lot about entropy, Argon2id, and zxcvbn.
Winter_Layer_9950@reddit
This resonates a lot. It does feel like we’ve shifted from “building” to mostly reviewing and steering.
I don’t think the problem is using AI, it’s how we use it.
Striking_Rate_7390@reddit
Dry run the code, simple as that!!
einstAlfimi@reddit
The "boring stuff" actually matters.
Less-Opportunity-715@reddit
Focus on product market fit, code is irrelevant
Grand-Tip236@reddit
Simple. You don't.
spas2k@reddit
You don’t. I get dumber by the day but at least my dog is getting better at catching the frisbee.
The_Other_David@reddit
Less time behind the screen, more time playing with your dog. That's what we should all want out of faster/better tech.
deadbeef_enc0de@reddit
The real pain point is going to be debugging something weird going on and figuring out what the LLM did. Just reviewing code is not going to be enough to learn the codebase.
Generally at work I have the LLM do the grunt work (ie making files, creating stub classes/methods, generating data structures) and do the actual logic myself.
In terms of keeping sharp in general, I have personal projects where I don't use any LLM to write code at all. I do try to make the projects/features sufficiently complex that it doesn't feel like grunt work.
AlcinousX@reddit
Our brains work like muscles: if you stop straining a muscle, stop using it, it begins to deteriorate. Technology makes things easier, so we strain less, which puts the onus on the person to willfully choose the harder experience instead of the easier one. I use a ton of AI both at work and home, and sometimes I don't use it at all, because the tactile experience of doing the task and troubleshooting myself has value in keeping my brain sharper.
Taiqi_@reddit
My suggestion would be to always follow up.
I don't code for a job, or school, or anything of the sort; rather I code for coding's sake... (dropped out of uni during covid, currently in a dead-end job, etc, etc, sad sad, yes yes, anyway...). That said, I have found that I've been learning a lot more now because my code writes itself.
The reason for this, I think, is the fact that when code is generated, I am able to not just skim the suggestion, but to critically analyze it. Is this the solution I would have used? What are the differences? Are there any concepts I am not familiar with? For that last one, a yes is a prompt to go do further research and learn more.
It seems the difference is that I am operating without deadlines or responsibilities where my code is concerned. Perhaps, to translate this into a framework of "the job is to ship working software", one could either
SeaThought7082@reddit
I have at least one complete no-AI day at work. I try to choose the most difficult tasks to do during this day. Otherwise, I feel like 50/50 is still faster than getting the AI to do 100%.
AskNo8702@reddit
I'm studying applied CS and I love programming, but I don't think I'd love it as much if I just told an LLM "do this", "that doesn't work, check what does", until it works and the code is decent. I'm pretty sure that after a while I'd feel like I was working a menial job, the kind I'm trying to get away from. However, I doubt there's a subfield in IT that won't be like this (unless it's physically installing stuff).
It's like when they invented the calculator maybe. You can feel yourself missing out on not training that arithmetic muscle as you use it. You don't get the same satisfaction. Yet to not use it wouldn't be pragmatic.
So what do we do? Work part time so we can have a life where we can do fun things outside of work. Or find a different job where you can problem solve.
divad1196@reddit
You cannot stay sharp and use AI.
Similarly, when I became lead, I learnt that in order to empower others, I had to let them do things themselves and the way they wanted (at least to a very large extent), instead of thinking "I know how to do it, it will be faster to do it than to explain".
So, by becoming a better lead, I practiced less and became less sharp. When I changed jobs recently, I went back to a "hardcore" dev position and I can tell that I am less good than before.
The key is to practice. Especially the little, annoying, things that you do everyday, not just the fun and complex things you do once in a while.
Fajan_@reddit
I believe there is no romanticization in your observation. What has really helped me is awareness of the situation and knowing when to use the AI and when not to. There are still features and refactorings that I do from scratch, because that keeps me thinking and coding. Moreover, focusing on one particular layer (e.g., performance), on debugging, or on system design still requires an ability that AI cannot yet replace. The tool works well and fast, but when nothing takes effort, we no longer exercise our instincts. So a balance between the two must be struck.
PM_ME_UR__RECIPES@reddit
You have to cut out AI usage. Yes the job is technically to ship code, but those real optimisations on a large existing codebase like you're mentioning just aren't really feasible with AI tools, because they don't have as good of a context window as a human, they don't actually do any real cognition or understanding and so on. You can ship something much faster with AI, but the quality drops very sharply.
Resident_Cookie_7005@reddit
Shipping features fast also means edge cases and bugs slip through. Previously, during that week of shipping a feature, you had time to think of those edge cases; at the current velocity, failing to include each one in the prompt or the tests is inevitable.
That said when you encounter such cases and bugs it's important to go back and understand more how the generated code missed these. That's where the learning opportunity is, learn the stuff you didn't understand and implement new practices to catch such errors next time.
JustSimplyWicked@reddit
Only use the LLM for the "boring" stuff, and write core logic yourself. Write code at home and don't use an LLM at all.
The_Other_David@reddit
Take the extra time to pick the best features to develop, write better requirements, and develop better test suites that help you be sure that the code you're pushing is actually fixing your problems and not creating new ones.