How I started programming differently over the last year. What about you?
Posted by ievkz@reddit | LocalLLaMA | View on Reddit | 79 comments
An interesting observation: I’ve stopped using the LLM-powered autocomplete in my IDE.
At first, it was one of the key features for me. It felt extremely convenient: you start writing a function in your code, and the LLM completes it based on common sense or the context from the open tabs.
But the most interesting thing is that back when LLM autocomplete was useful and in demand, I had already written a script that could go through the source files, let me select what I needed, and prepare the context to feed into an LLM chat so it could tell me what to add or fix. I worked like that for about six months.
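For reference, a minimal sketch of that kind of context-prep script might look like this (the file-header format and function shape are my assumptions, not the author's actual script):

```python
import pathlib

def build_context(paths):
    """Concatenate chosen source files, each under a labeled header,
    into one blob of context ready to paste into an LLM chat."""
    parts = []
    for p in paths:
        text = pathlib.Path(p).read_text()
        parts.append(f"### FILE: {p}\n{text}")
    return "\n\n".join(parts)

# e.g. print(build_context(["src/main.py", "src/utils.py"]))
```

The point is simply that the selection and labeling is done once, mechanically, instead of hand-pasting files into a chat window.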
And even that is gone now.
These days it’s easier to open a CLI interface with a coding agent, without even launching the IDE. You describe what you need, use @ to point it to the files it should inspect or modify, and that’s it. Everything is changing at an absolutely insane speed.
Basically, the only things I still use an IDE for are nice Git diff visualization, step-by-step debugging, and the ability to click on functions and jump into their implementation. In other words, code navigation. And even that functionality is only needed in about 5-10% of my work.
It’s interesting to think what comes next.
What I mean is that I have an all-products subscription from JetBrains because I program in several languages at once: Java, Scala, Python, TypeScript, and Rust. But the question is: why keep paying for it?
Sure, once every 2-3 months, some unclear issue appears, and debugging helps find it. On the other hand, I’ve already tried another approach: I give an LLM agent the path to the log of what is happening in the program. If it doesn’t have enough information to solve the problem, I ask it to add more logs, then I describe the problem again and ask it to understand from the logs what needs to be fixed.
And of course, it’s very convenient to ask an LLM to write tests. That really is useful. If the tests fail, it looks at what it changed in the code and what it broke. When the LLM starts going in circles, I directly tell it: cover this with tests and read the logs to understand how everything works. Very convenient.
One of my latest techniques is using a plan.md file. When I ask it to solve a complex task, I first ask it to create a work plan and write it into plan.md. Then I simply ask it to complete one task from that file at a time. And step by step, through small tasks, the LLM eventually gets to the result.
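For readers wondering what such a file looks like: a plan.md in this style is just a checklist the agent works through one item at a time. An invented example (not OP's actual file):

```markdown
# plan.md

## Goal
Add CSV export to the reporting module.

## Tasks
- [ ] 1. Add a to_csv() method to the report model
- [ ] 2. Wire a --csv flag into the CLI entry point
- [ ] 3. Cover both with tests; read the logs if anything breaks
```

Asking the agent to complete one checked item per session keeps each change small and reviewable.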
Overall, I think the industry is changing a lot.
Share your experience: how has your approach to programming changed? I’d be interested to hear how things have changed for others.
But please don’t reply if you have never programmed before and have just discovered vibe coding. I’ve been programming myself since 1990, which means I wrote my first program 36 years ago...
_mayuk@reddit
Are you guys not setting up a vector memory of your projects? Or a graph memory? Megamem, for example, solved most of my problems with larger projects …
That is the main difference in my coding nowadays .. setting up that shit in an already big project can be annoying xd
Drenlin@reddit
I'll offer another perspective.
I'm not a developer and would not claim to be. I guess I'm technically "vibe coding"? Mostly it's just patching abandonware to work with new software or building niche little tools for personal use. I know I'm "cheating" but I don't have the time or energy to learn 6 different programming languages in order to do this properly and the alternative is just to not have those tools available. Not much of a choice.
I have no formal training and can't really code on my own in anything but HTML (thanks MySpace) without googling how to implement pretty much every function, but I knew enough coming in to look at what's already there and make a reasonable assessment of what it's doing, especially when it's networking related. (Kinda like any non-engineer with some automotive knowledge can look at a transmission, know that something functional-but-weird has been done, and then figure out how and make guesses at why.)
I started doing this with PowerShell or Python/ArcPy scripts in the CLI, and it was neat but hard to follow. Moving to an AI-enabled IDE has made it a LOT easier to learn how the software works and visualize what's actually going on, especially when I'm starting with someone else's project and making changes, and that's enabled me to get to the point where I can untangle the spaghetti code it tries to write into something modular and reasonably well documented.
I'm learning a ton about actually coding things this way, seemingly backward from how you're "supposed" to learn it? Going from - early on - making it rebuild the entire script or module entirely just to merge a couple of lines of code because I didn't trust myself to not mess it up, to now just asking it what to change and typing it myself or even making my own small modifications as we go, has been a really fun jump to make. (And far more token efficient.)
Imaginary-Unit-3267@reddit
I would actually argue this IS the optimal way to learn. People learn by doing, not by reading textbooks or doing ultra abstract exercises with no direct bearing on their life.
Equivalent_Job_2257@reddit
I liked TDD before, and now I've moved heavily into it. First, write a markdown spec for everything; it also makes you refine your ideas. Then have the LLM write the tests, then the implementation. A lot of the time I also find myself writing a long prompt describing a fix. That prompt would otherwise be lost, so if it constrains a feature, I try to add it to the feature spec, and commit it to Git. Now I am actively searching for a balance between what to offload and what to do myself. It isn't obvious, and it depends a lot on the model. A different model is like a different colleague.
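As a toy illustration of that spec -> tests -> implementation order (the function and the spec line are invented for the example):

```python
import re

# Spec (one line lifted from an imagined markdown spec):
# "slugify lowercases, trims, and replaces runs of whitespace with single hyphens."

# The test is written from the spec first...
def test_slugify():
    assert slugify("  Hello   World ") == "hello-world"

# ...and only then the implementation, iterated until the test passes.
def slugify(title: str) -> str:
    return re.sub(r"\s+", "-", title.strip()).lower()
```

Each constraint you discover later ("leading digits are stripped", say) goes into the spec and gets its own test before any code changes.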
Imaginary-Unit-3267@reddit
This is what I'm aiming for too. I don't trust the model. I work with the model to build an extremely thorough specification first, then translate it into tests, THEN let it build stuff to make the tests pass. Qwen has a cute habit of being impatient to get to work - it whines about having to do too much planning - but I always restrain it and insist on painstakingly clarifying every requirement first. :)
Wise_Stick9613@reddit
I, on the other hand, write everything myself first and then ask various LLMs if they have any suggestions on how to improve my code.
In my experience, LLMs tend to overcomplicate things (such as considering use cases that will never occur).
Anthonyg5005@reddit
Nah, that's so true. You can ask it for something simple that should only take like 3 lines max and it decides you need 5 whole new useless functions
FastHotEmu@reddit
It's a fundamental problem with LLMs, it's called verbosity compensation.
markole@reddit
Reminds me of a compiler stage when they were producing unoptimized assembly.
sernamenotdefined@reddit
My whole job was focussed on squeezing the last bit of performance out of CPUs and GPUs using avx-512 and CUDA respectively. Then I'd hand it off to someone that would make it a bit more user friendly to run if it was something meant to be run more than once in a blue moon.
Now everything I write has a nice CLI or GUI, because I just spec it out and ask an LLM to provide it.
The LLM has become quite decent at CUDA too, to the point that I let it make the initial kernel that I further optimize. I'd say it's about 80% there and stalled with recent updates, so I still have a job there.
For optimized AVX-512 code targeting a specific processor family, it's still quite a bit off from the best hand-optimized code.
No-Consequence-1779@reddit
Even local LLMs are very good at following your coding style or pattern, once you establish it. Complicated user interactions in the GUI or complicated business rules can be a challenge, but the latest Qwen3.6 27B seems to do OK. Atomic changes, like normal development, are not a problem.
We will see a lot more advertising for AI coding agents, because their pricing changes have cost them customers, who are going local. Or from the dreamers who are gonna get rich making a SaaS after their shift at the pizza shop.
This post is an advertising campaign with multiple user accounts chiming in, likely bots, selling the AI coding stuff.
MuDotGen@reddit
I have executive dysfunction. Even if I have good ideas, it is difficult to actually turn them into working, production applications. These tools have allowed me to not fret over the little implementation details and focus on making solid plans that are very satisfying to look at, as I meticulously record and organize every little detail I can think of with the help of the agents. At work, I finally have working applications that I can continue to iterate on, and I am able to do it so much faster and more efficiently: debugging, documentation, proper plans that can be done in a reasonable amount of time even when I'm more ambitious. It's so much more helpful for someone like me.
As others have said, if you have the time and you like to handle more meticulous details in implementation yourself, the option is still completely there, which is nice. In fact, even cheaper in some ways, but costs you your time and energy, so depends on what project you're working on.
fishhf@reddit
I've coded for 30 something years almost daily. I treat LLMs like another hobby, a locally run llama.cpp that happens to code like a junior dev.
I personally like to code, it's frustrating at times but rewarding and fun. I don't think I'll want LLMs to take that away. If it's at work, and the company pays me to deliver junior level work with fuck around deadlines, sure.
I kinda feel those who are completely vibe coding aren't really into coding themselves, they are like the project manager type. Even at work, I explicitly say I prefer to have a minimum of 50% hands on coding and don't want to be in management.
HomoAgens1@reddit
I have a slightly different background, but I relate a lot to what you’re saying.
I studied C, but in practice I only programmed seriously with MATLAB and C#. With AI, something became possible for me that was almost impossible before: programming complex things in a relatively simple language like Python.
The key point is that now you don’t necessarily need to be able to write the language fluently. You need to be able to read it, understand it, reason about it, and verify what the model is doing.
That’s exactly my case with Python: I can read it, understand the structure, follow the logic, debug it with the help of the model — but I wouldn’t say I can really write Python from scratch in the traditional sense.
And honestly, that makes sense. Not everyone is a writer, but almost everyone can read books.
For me, this is the big shift: natural language is becoming the new programming language.
We went from Assembly, to C, to object-oriented languages, to higher-level languages like Python, and now to LLMs as an interface between human intention and executable code.
Of course, you still need technical understanding. You still need to know what you are asking for, how to check the output, how to test it, and when the model is hallucinating. But the act of programming is changing.
The real revolution, in my opinion, is that we can now “speak computing” in our own language.
That is a much bigger change than autocomplete or even IDE integration. It changes who can build software, and how.
Mashic@reddit
How do you feel about not knowing what's in your code?
sagiroth@reddit
That's industry standard. We have to build guardrails to improve quality and put AI in place to review itself. That's the sad reality. I work for a fairly large cybersecurity company in the UK and, well, the message is pretty clear about what the company wants.
tautality@reddit
It's not industry standard. Also it's utterly unserious. So if your company is doing this, I don't want anything to do with your company.
sagiroth@reddit
Problem is, there won't be any companies left for you to deal with otherwise. I'm not saying I agree with this or am happy with it.
tautality@reddit
There are and always will be companies that use LLMs responsibly.
Equivalent_Job_2257@reddit
If 50% do a thing, we usually call it standard, even if the other 50% do 10 other things. In the case of "ship it with a prompt", I'm afraid the number is higher than 50%...
tautality@reddit
Do you have data that shows that over 50% just vibe code without looking or reviewing the code?
We also have to define what we mean by the "industry". If there are a lot of people who describe themselves as "developers" but who are in fact not (because they can't code without the AI tools) and who haven't shipped a single product with clients, then I wouldn't consider them part of the industry.
Business that actually have clients usually have higher standards than your average vibe coder. There are exceptions, but normal businesses tend to have developers that understand code and can fix it manually when shit breaks.
Equivalent_Job_2257@reddit
You have no better data. Why do you downvote people for no reason? 7 upvotes on the initial comment tell me that seven people experienced a push to ship with AI without the necessary quality checks, not even including me. Of course, if a manager pushed, it is a company that ships something to clients. Sure, we must agree on definitions, and I proposed my definition too in one of my comments. I don't know who you have in mind by "average vibe coder". You actually speak as someone who hasn't seen the real industry in action here.
tautality@reddit
I wasn't the one who downvoted you (or anyone else in this thread).
Equivalent_Job_2257@reddit
Ok, sorry then. But we all usually work at one company at a time and don't conduct scientifically grounded polls. What I see with high probability, in the bubble where I can see, is that the managerial push is real. And by no means do I ever say it is good: competent developers should push back and even leave such companies. What actually happened for me is that I had to review a lot of vibecoded s$t by non-developer colleagues, trying to make it remotely good, and as a result I spent 90% of my time on it, but it looked like I was stopping a thing that "just works". Luckily, I am not at that company anymore.
tautality@reddit
That's wild. I've heard similar stories, especially over the past year. The more this happens, and the more layoffs happen due to AI, the sooner the pendulum swings back because some of those disgruntled and laid off developers will create their own businesses that don't do shit like that, and their software quality will be better.
Equivalent_Job_2257@reddit
I hope so. But there are other, not so good, possibilities: that everyone will be tricked or forced into accepting velocity above quality. Never before was it so important to actually understand things. I feel that in the pre-LLM era understanding was just a useful hobby, as long as things worked, and now it is a necessity for rising above the ocean of vibecode. It is the only way to fix LLM-generated code.
jazir55@reddit
This sentiment is predicated on LLMs not improving to the point where they no longer make errors.
PrinceOfLeon@reddit
Vibe Statistics.
If it "feels" right, post it like a fact.
UncleRedz@reddit
How do you see things like compliance fitting into this? For companies in the EU, the Cyber Resilience Act is coming into effect in September this year, and by then you really need to know what's in your code, or you can legally end up with various liabilities if something happens. I don't see how humans can be out of the loop if CRA compliance is required.
Put differently, I see a trend from the USA to push more for AI coding without humans, and then use AI for code reviews, as humans become the bottleneck. But I don't see how that can work with the legislation being built in the EU. Not sure how to feel about that, but I'm also an old fart who's been writing code for more than 40 years now and I don't want to give it up completely either.
aguspiza@reddit
EU regulations are stupid. It is impossible to enforce that.
UncleRedz@reddit
Well, essentially the core principle with CRA is that you, the human or legal entity, are accountable for the output delivered, and there is no difference between malicious insertion of backdoors or vulnerabilities and an LLM doing it by accident. If any of it is exploited or discovered, you have a limited time to fix it, and/or face heavy fines. If you never looked at the code or know how it works, you are taking huge risks shipping it. I understand the point, but it's quite extreme.
jazir55@reddit
The ironic part here is that the fix is just to point an agent at it, tell it to fix the bug, and verify the fix with a test.
shard746@reddit
They are so stupid that they regularly force multi-billion and trillion dollar corporations to adhere to them. So maybe they are not so stupid after all.
No-Consequence-1779@reddit
It is absolutely not industry standard. Holy crap.
Say that during an interview: "I don't need to know the codebase because AI does stuff."
The interview will end and they will not be proceeding.
This is an obvious promotion campaign with multiple marketing accounts chiming in to trick dummies.
FastHotEmu@reddit
In my view, that's only acceptable for quick prototypes/mockups/throwaway code that has no security requirements. People keep acting like writing code is the expensive part of the process - it isn't.
Equivalent_Job_2257@reddit
It really depends on whether managers or developers are in charge in a given department. If it's a manager, it is a different story.
tautality@reddit
I'm glad there are reasonable people out here.
No_Hedgehog_7563@reddit
That's basically where we are heading. If companies push for ever faster shipment (and many do), you can't possibly know what's in there.
qudat@reddit
Writing code faster and shipping faster are not the same thing. There’s a huge cost associated with not having people knowledgeable enough to debug their prod system
No_Hedgehog_7563@reddit
Tell that to every company at the moment.
qudat@reddit
They’re gonna be feeling the burn in no time
No_Hedgehog_7563@reddit
Yeah, fully agree, but until then you as the worker have little to say. So you either conform and churn out a lot of garbage very fast or you get the boot for being too slow.
erm_what_@reddit
How do you feel about not understanding compiled code?
It's not the same, but it's the new level of abstraction we're at now. Tooling is behind, but it'll catch up to the point you can begin to really trust the output. I think it'll either materialise as really good TDD, or composable trusted blocks and integration tests.
milkipedia@reddit
There's a lot more determinism in compiled code
ttkciar@reddit
Guess you've never looked at gcc's "reload" pass :-D
milkipedia@reddit
Yeah that's why I tempered my words a bit
aguspiza@reddit
I mostly align 99% with OP... Of course I still do a "PR" locally and check the code myself... but the real "issue" is that the code is even better than if I had written it.
Comfortable_Ebb7015@reddit
I don't feel comfortable at all. I mostly use Sonnet and Opus at work. I think they often overcomplicate the code. They are too verbose. Some methods are extremely long. I have also found a lot of duplicated code and unused imports. I previously got excited about agents and vibe coded like crazy, but now I am much more cautious. Coding with Copilot is like driving a big truck downhill: I have to keep the brake pressed the whole time.
Hot-Employ-3399@reddit
Not OP, but I feel indifferent, same as usual. I have absolutely no idea what I wrote in my code 10 years ago. When I see code written by me back then, I have as much idea about it as about any other code from that era. As long as the code is not total garbage during a very shallow review (pressing page down while looking for something), I see it the same way.
FastHotEmu@reddit
I've been programming since the 1980s and I would love to call your bluff: let's review some of the code you generated :)
I use LLMs 24/7, I have a few 3090s at home, a Mac, a server, and at work I use Copilot and Codex.
I see three things:
- Rich people propping up their investments in any way they can (and lying constantly)
- Very powerful technology with fundamental limits, like any other tech
- Some programmers getting lazy, not caring about the output
What I don't see:
- A silver bullet
- Code quality going up or staying the same
kiwibonga@reddit
For me the game changer has been injecting code at runtime into apps.
I use Unity and C#, but this applies to anything that has a VM or hot-reload capability -- you make the target application run an HTTP server that receives code as plain text from your agent, compiles it on the fly, and executes it without recompiling the main application.
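A stripped-down sketch of that idea in Python rather than Unity/C# (the endpoint shape and the state dict are assumptions, just to show the mechanism):

```python
import http.server

STATE = {"player_pos": [0.0, 0.0, 0.0]}  # stand-in for live application state

def run_snippet(code: str, state: dict) -> dict:
    # Compile and execute agent-supplied code against the live state,
    # without restarting or recompiling the host application.
    exec(compile(code, "<agent>", "exec"), {"state": state})
    return state

class InjectHandler(http.server.BaseHTTPRequestHandler):
    def do_POST(self):
        # The agent POSTs plain-text code; we run it and echo the new state.
        length = int(self.headers.get("Content-Length", 0))
        code = self.rfile.read(length).decode()
        body = repr(run_snippet(code, STATE)).encode()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# Serve with: http.server.HTTPServer(("127.0.0.1", 8765), InjectHandler).serve_forever()
```

Exposing exec over HTTP is obviously only sane bound to localhost during development; the real version in Unity would compile C# instead.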
You can push this extremely far -- normally a barrier to debugging apps like games with agents is poor vision capabilities and latency. It takes several seconds to analyze one screenshot and write code to perform an action.
But you can pause your game and step it at the speed of your LLM, just like a debugger would interrupt execution. You can output positions. You can output a full text description of the game world and make really high level decisions.
One cool application is you can think up cheat codes while playing the game. It'll research your code and skip levels for you so you can test a specific part of the game. It'll fish your character out of the void if it falls through the floor and ruins your playtest.
But the real breakthrough IMO is "pregaming" the code writing. Your LLM won't commit (as much) code with hallucinated APIs and declare victory anymore, because it will usually realize its mistake in the pre-coding phase, when you provide it with an ideal runtime environment to research.
I'll walk to the specific place in the game where I want to code my new feature, and it starts speaking to the game engine, figuring out where the game objects are and how they behave "for real" at runtime, as opposed to "cold" in the text of the code.
It especially helps with the fact that LLMs are great at 3D math but never know the right ranges for floating point values, angles, etc -- a super common problem is radian vs degree conversions not getting done properly.
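The radians/degrees slip mentioned above is easy to demonstrate:

```python
import math

angle_deg = 90.0
wrong = math.sin(angle_deg)                # treats 90 as radians: ~0.894
right = math.sin(math.radians(angle_deg))  # converts first: 1.0
```

Probing the live runtime catches this class of bug immediately, whereas in "cold" code both lines look equally plausible.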
jazir55@reddit
Ironic since you're describing the standard vibe coding workflow that has existed since 2024.
Lesser-than@reddit
I used LLMs over the last year to push forward all the prototypes I had built over the years before, and it was fun finishing up stuff I had put on the back burner forever. However, I also kind of lost interest in coding while doing it. I guess learning how to do what I wanted to do is what I liked about coding, and now that I don't have to figure it out for myself, it has just lost the spark.
Old_Swimmer26@reddit
Can you explain the plan.md part? What's the use of that? Is it to create an agent, a specific skill, or instructions for the AI?
jopereira@reddit
Planning is essential in every activity. First think what you want to do, then execute.
I do almost nothing in VS Code GitHub Copilot without first putting my AI (usually a local Qwen3.6 27B) to work on a plan. It asks about unclear objectives, undefined conditions, etc., until we're both on the same page. Then I let the agent do the coding. That makes a HUGE difference in the final results (both quality and time spent going back and forth).
I started coding at 12-13 (Z80 assembly), so I've been doing this for... 44 years!
Coding without strong AI collaboration is not an option. When people say they spend more time reviewing than writing... well, sure, but then I'm really slow at typing...
phein4242@reddit
I started with LLMs early this year. It took me a weekend or so to realise that they could write approximately 80% of the code I've written professionally (DevSecNetOps engineer).
Currently working with opencode / llama-server / qwen3.6 27b q6 on an RTX A6000, rocking a solid 54 tp/sec (mtp rocks!), and as of yet it is able to generate 100% of the code I need.
Not just that: to properly run a project, you need a well-defined design plan, and that's (mostly) on the conceptual level. Which makes it fairly easy to integrate that design into the project management structures at work, allowing for rapid development cycles with verifiable results.
PreferenceAsleep8093@reddit
The best thing I've found recently is the coding agents. The IDE based agents seemed cool at first, but once you get a hit of the CLI coding agent drug, there's no going back.
The greatest improvement to my workflow was that I recently switched from Claude/Codex to OpenCode. I was using Claude at work and Codex at home. But I've been delving more into local LLMs recently, and got introduced to OpenCode. Recently at work, we were told to switch to Codex instead of Claude. However, they were really only concerned about the models and not the harnesses.
So I took a second to think, and realized something. Why don't I just use OpenCode at work and at home? You can use it with the OpenAI models, Claude models, and local models.
So OpenCode + LSPs + a few MCPs has been a game changer for me. I've also realized how important skill creation is. Whenever you see the model doing something with suboptimal performance, take the opportunity to ask it to write a skill for that exact thing. I've come up with a handful of skills so far that basically smooth over every rough edge when it comes to what I do on-the-clock.
I'd even say that a well-tuned OpenCode harness is better than Claude.
There are still some problems that are just inherent to models. For example, it may not fully understand logic that crosses multiple domain boundaries. And it may be difficult to give an agent access to enough repos to fully understand context when you're working with a lot of microservices.
Yellow-Jay@reddit
I recognize letting the LLM take the reins more and more. But at moments, I'm shown why that is a bad idea.
Today before dinner I had a bright idea for new functionality for a pet project of mine. I wrote detailed instructions, went to dinner, and afterwards I was greeted with the exact functionality. Added some refinement, and voila, done. Or not?
I now have 1500 LOC (oh my god, it should have been 500 at most), but well, LLM coding is not my coding. Going function by function, class by class is still my preferred way. For a pet project this kind of unmaintainable code is OK, but for real things I still don't trust the LLM. (But the speed from idea to implementation is great: even LLM-assisted function-by-function coding would have taken half a day, now it's an hour at most.)
steveh250Vic@reddit
In 2024 I used VSCode + Codeium/Windsurf, and in Dec 2025 I swapped to Claude Code. I very rarely touch code much now: I keep my PRs small so I can read them, maybe make a small local change with vi, and that's about it.
Caveat - am not a professional developer.
Scared-Tip7914@reddit
While I have not been coding for nearly as long as you have, I started a few years before the LLM wave hit, and thankfully learned system design the "old-fashioned way". That said, I hopped onto LLMs basically from the beginning, and I guess, the same way as for everyone else with access to these tools, they slowly but surely took over the workflow. The one thing I don't let them do is high-level architecture, because I find that they over-engineer stuff and create extremely complex structures for simple apps.
No-Consequence-1779@reddit
I wrote scripts at 16. That doesn't actually count as experience. The date I got hired professionally: that date counts.
Equivalent_Job_2257@reddit
Yes, job experience is very different, but what a person did at 16 is an early indicator and also influences future professional development, imho.
Scared-Tip7914@reddit
I am not that young, but I wish, haha. In my case it was "professional" experience: nothing groundbreaking, but I was lucky enough to have good management who actually put emphasis on code quality and not just JIRA sprint metrics. They were also very open to LLM use, to the point that they were actively pushing for tool use well before the hype. Whether in retrospect that was a good idea or not, only the bottom line can tell.
Equivalent_Job_2257@reddit
I opened this post with the intention of destroying it, then... it's me... someone's spied on me... Seriously, it is interesting how people converge on the same approaches independently. Maybe it is the unified agent CLI tools, and we just use them, or maybe something deeper.
jwpbe@reddit
I started learning programming in the last year, so I don’t have the preconceived notions a lot of older programmers do.
I don’t use a plan file unless it’s a complex spec. But i also don’t implement in a way that needs one, generally. I do it one scoped change at a time, read the code, and then iterate.
I think the best change has come for me in the last week: with either the pi librarian or the experimental opencode scout, I tell the model to look up similar idiomatic patterns from well-respected projects, and run experiments with the shell to determine the best pattern. I really can't recommend this enough.
Recently, this has looked like me telling DeepSeek flash what to implement, and then it will send off my local Qwen 27B with a detailed explanation of what it wants to know.
Qwen 3.6 is extremely good at hunting down examples using ’gh’ and then running heredoc / shell scripts to test and tease out problems with what it finds. It’s good at finding idiomatic examples from readthedocs, etc
It then returns citations to DeepSeek, who then judges and implements.
It’s so important that you know enough about the language and its patterns and capabilities to tell it to try something different. You need to be able to know when to tell it to cut or inline something, and you need to make sure it’s not going to just add type check ignores.
But it sure as hell beats scaffolding it all yourself.
ttkciar@reddit
I've been programming since 1978, so a lot of my coding habits haven't changed much in the last year, but a few things have.
One of the biggest changes is that my priorities have changed. I have 229 unfinished projects, and I usually try to focus on the two or three highest-priority projects and only work on those until they hit some use-case critical milestone(s) before shifting my attention to other next-highest-priority projects. Once a project is usable, I frequently set it aside for a long time (months or years) before getting back to it to implement more features.
Codegen has changed that a bit, because I have come to realize that while there are some projects I really, really want to develop myself, there are also a lot of projects I don't want to develop myself; I just want to have them completed so I can use them. So I continue to prioritize and work on the former as before, but I also set aside some time to point GLM-4.5-Air at the projects I don't actually want to write, but really want to have.
My hardware isn't up to the task of using GLM-4.5-Air interactively, so I use it without an agent. That means writing a complete specification for the project, which usually I already have. All of my projects start with me writing up a specification and some notes for my own reference anyway, so I just have to clean those up, reword declarations or shorthand as instructions, and add some boilerplate I have learned the LLM needs for guidance. I also tell it to write code for easy unit testing, but to not write unit tests yet.
I append any existing source code to that (or my standard starter template source code if there is no code already written), prepend an instruction to implement the following specification using the appended code and a specific programming language, and pass that to llama-completion as the prompt for GLM-4.5-Air. That prompt looks something like this: http://ciar.org/h/prompt.codegen_wiki.03.air.txt
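A rough sketch of that assembly step (file names, wording, and function shape are my assumptions; the linked file shows the real prompt):

```python
import pathlib

def build_prompt(spec_path: str, source_paths: list, language: str) -> str:
    # Instruction first, then the cleaned-up spec, then any existing code.
    instruction = (
        f"Implement the following specification in {language}, using the "
        "appended code as a starting point. Write code that is easy to unit "
        "test, but do not write unit tests yet.\n\n"
    )
    spec = pathlib.Path(spec_path).read_text()
    sources = "\n\n".join(pathlib.Path(p).read_text() for p in source_paths)
    return instruction + spec + "\n\n### Existing source ###\n\n" + sources

# The resulting string is what gets passed to llama-completion as the raw prompt.
```

Because the model runs non-interactively, everything it needs (instructions, spec, boilerplate guidance, and code) has to be in this one prompt.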
It can take GLM-4.5-Air several hours to write the project, so while it's doing that I work on other things (like the aforementioned projects I actually want to work on). Amusingly, it takes less time to finish projects in Python or Perl, and more time to finish projects in C or D, just like humans do.
When it's done and I have time to look it over, I will review its output line by line, looking for / fixing bugs and changing anything I might want done slightly differently. Most recently I've had Gemma-4-31B-it fix its bugs first before I review it myself, which has worked splendidly. GLM-4.5-Air's bugs tend to be small, simple things (like using `1234` instead of `0x1234`), and not far-reaching design flaws. As I go, I also split out its output into their respective files.
When I am satisfied that it has inferred something I understand and will be happy with, I make a copy of its unified output without its "thinking" phase, and frame it as an instruction to write unit tests for the project. I prompt GLM-4.5-Air again with that, and work on something else while it writes the unit tests.
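The "copy without the thinking phase, reframe as a test-writing instruction" step could be just a few lines. The `<think>...</think>` delimiter below is an assumption about how the model marks its thinking phase; the actual marker depends on the chat template in use:

```python
# Sketch: strip the model's "thinking" phase from its output and wrap the
# remaining code in an instruction to write unit tests for it.
# The <think> tag convention is an assumption, not a guaranteed format.
import re

def make_test_prompt(model_output: str) -> str:
    code = re.sub(r"<think>.*?</think>", "", model_output, flags=re.DOTALL)
    return (
        "Write unit tests for the following project. "
        "Do not modify the implementation.\n\n" + code.strip() + "\n"
    )
```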
I've learned it's important to perform the code review before having it write the unit tests, because frequently I will change the implementation in significant ways which change the tests which need writing. I will also add comments to some functions which are pertinent to their tests.
When it's done writing tests, I will review those and split them out into their respective files, and run them to make sure they all pass. Sometimes they don't, and I dig into it again, sometimes fixing the implementation and sometimes fixing the tests because the implementation is actually correct and it's the test which is wrong.
On one hand that's a lot of work, but on the other hand it's a lot less work than implementing everything myself. I've been finishing the "want to use, not develop" projects faster than I've been finishing "want to develop" projects. It gives me hope that I might actually bring all of those 229 projects to completion before I die :-)
I've been contemplating writing my own codegen harness which automates parts of that workflow, like having Gemma4 fix the bugs and splitting the outputs to their respective files, but haven't written any of it yet.
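The file-splitting half of such a harness could look something like this. The `### FILE:` marker convention is hypothetical; in practice it would be whatever delimiter the model is instructed to emit before each file:

```python
# Sketch: split a single unified model output into per-file sources.
# Assumes the model was told to precede each file with a "### FILE: path" line;
# that marker is an assumed convention, not GLM-4.5-Air's default behavior.
import os

def split_output(unified: str, out_dir: str) -> list[str]:
    written = []
    current_path, buf = None, []
    for line in unified.splitlines():
        if line.startswith("### FILE:"):
            if current_path:
                _write(out_dir, current_path, buf, written)
            current_path, buf = line.split(":", 1)[1].strip(), []
        elif current_path:
            buf.append(line)
    if current_path:
        _write(out_dir, current_path, buf, written)
    return written

def _write(out_dir, rel_path, buf, written):
    path = os.path.join(out_dir, rel_path)
    os.makedirs(os.path.dirname(path) or out_dir, exist_ok=True)
    with open(path, "w") as f:
        f.write("\n".join(buf) + "\n")
    written.append(rel_path)
```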
I believe it is absolutely critical to understand the implementation thoroughly, for the sake of future troubleshooting and further development, and so that it's really my project and not GLM-4.5-Air's project.
Quoting fictional characters from movies is silly, but in this case I'll indulge the urge, because it's relevant:
"I say your civilization because as soon as we started thinking for you it really became our civilization" -- Agent Smith
UncleRedz@reddit
Thanks for the detailed explanation. I find that it's important to be intentional with the use of AI/LLMs, some things are worth knowing and learning, others I could not care less about, and that influences how I use AI to solve the task.
Quirky-Persimmon3342@reddit
the skill that's gotten more valuable for me isn't writing code, it's reading it. i spend way more time reviewing agent output than i did reviewing my own. the bar for "does this do what i asked" is easy. the bar for "will this cause problems in 6 months" is where i still earn my keep.
akp55@reddit
I break it into 3 files: a plan which outlines the phases, a tasks file which lays out the tasks for each phase, and a completed file that has a summary of each task that was completed.
sparticleaccelerator@reddit
The plan.md approach is exactly where I landed too, but I added one wrinkle: I make the agent write a "decisions.md" alongside it, where it has to log why it chose an approach when there were tradeoffs. Two benefits: it forces the model to actually think instead of pattern match, and when something breaks three weeks later I have a paper trail instead of archaeology. Curious if you've tried anything similar for the "why" vs the "what".
aguspiza@reddit
That is called ADRs (Architectural Decision Records) https://adr.github.io/ and they are added to docs/adr/ .... I love them 😃
ievkz@reddit (OP)
Nope. Sounds very interesting. I will try to use it. Thanks )
ievkz@reddit (OP)
BTW, usually the AI agent puts the decision explanations in plan.md. Sometimes. It's probably easier to ask it not to use a separate decisions.md file and to use plan.md with this structure:
1. task name
2. task goal
3. decisions made for the task
4. which files need to be changed or created
?
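That four-field structure could be kept consistent by a small helper (or by baking it into the agent's instructions); a sketch, with the field names taken from the list above and the heading style purely illustrative:

```python
# Sketch: format one task entry for plan.md using the four-field structure above.
# File name and markdown layout are hypothetical conventions.

def plan_entry(name: str, goal: str, decisions: str, files: list[str]) -> str:
    return (
        f"## {name}\n"
        f"- goal: {goal}\n"
        f"- decisions: {decisions}\n"
        f"- files: {', '.join(files)}\n\n"
    )

def append_to_plan(path: str, entry: str) -> None:
    # plan.md is append-only here; the agent rewrites summaries elsewhere.
    with open(path, "a") as f:
        f.write(entry)
```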
No_Lingonberry1201@reddit
I use LLMs for doing the tedious bits, like writing unit tests and whatnot, I just review the result and fix any issues. But I like working out a problem and coding it, why would I outsource the bit that I like? During work it's different, I use it more there, but even there I review code as it is.
TL;DR: I don't vibecode.
aguspiza@reddit
Yes... I truly feel the same... what the fuck comes next?!?! :[
No-Consequence-1779@reddit
Do you work full time at a company with a team of software developers?
ievkz@reddit (OP)
Yep. I work full time. And 99% of the other developers on the team don't use AI agents for coding )))
Terminator857@reddit
I started using the CLI almost exclusively 9 months ago. I have a few decades of engineering experience. Mindset has to change a lot. Software is changing fast, so too must the people who own it.
Hot-Employ-3399@reddit
At work I mostly code the way I always have, because there are few chances to use an LLM. What did change is that I sometimes write pseudocode first, as I sometimes do for an LLM, and then rewrite it manually.
On home pet projects I'm more of an architect. I decide what needs to be done and the general idea of how (so I don't end up in situations where an O(n^2) call suddenly runs too often), then alt-tab to watch YouTube or play Minecraft.
Let's say I'm more than glad not to write the code by hand; writing a for-loop for the millionth time to do something with several containers stopped being fun long ago. It's a chore.