I am a programmer, not a rubber-stamp that approves Copilot generated code
Posted by pyeri@reddit | programming | View on Reddit | 364 comments
andupotorac@reddit
Your job is to create solutions, not to stick to old coding behavior.
bwainfweeze@reddit
And what does that have to do with rubberstamping copilot hallucinations?
andupotorac@reddit
That’s a skill issue. One needs to start with proper specs.
DogsAreAnimals@reddit
This issue exists independent of management forcing AI usage.
No one is forcing people to use AI at my company, but right now I have a huge PR to review which is clearly mostly AI generated (unnecessary/trite comments, duplicate helper functions, poor organization) and my brain just shuts down when I'm trying to review it. I'd rather re-do it myself than try to explain (agnostic of AI) what's wrong with it.
Enlightenment777@reddit
AI needs to replace more managers and CEOs
Bluemanze@reddit
This kills me as well. Part of the point of code review is to discuss design, share knowledge, and help each participant improve at this work. None of that is relevant when you're checking AI slop. There's no skill growth to be had in spotting where the AI snuck in some stupid CS 100 implementation or obvious bug. The juniors don't learn, I don't learn. I'd rather work in a factory plugging hair into dolls if all I'm getting out of this is a paycheck.
Polymer15@reddit
When doing things manually, you may run into a situation where you've got to write 2000 lines, at which point you'll probably ask yourself, "maybe I'm doing this wrong?"
Because generating code that mostly works (at least at first) is trivial, and because there's no immediate punishment (like having to update 2000 lines) for shoddy code, it becomes an automated technical-debt machine in the wrong hands.
cstopher89@reddit
This is why it's really only useful in the hands of an expert. They have the experience to recognize when something is poorly implemented or will cause maintenance issues later.
Pigeoncow@reddit
And who's going to maintain all this slop when beginners are all reliant on AI and never become experts?
redditisstupid4real@reddit
They’re betting on the models and such being leaps and bounds more capable by then.
KazDragon@reddit
Asynchronous code review is already broken because it provides that feedback way too late. If you actually care about discussing design and sharing knowledge, then you should be with them through the development process with your hands off the keyboard. This is one of the understated and most amazing advantages of pairing and ensemble work.
Bluemanze@reddit
I work on an international team, but I agree with you in general.
autoencoder@reddit
There are tools to remotely pair-program
EveryQuantityEver@reddit
Are there ones that are particularly good?
In my experience, video chat tools like Zoom and Slack can work fine. But the biggest issue is that if someone isn't being engaging, either the person coding or one of the people watching, it can get boring quite quickly. I'm not sure that's something a tool can fix, but it's always been a downside of remote pairing.
0x0c0d0@reddit
That's a downside of in person pairing too.
darth_chewbacca@reddit
Tools aren't the problem. Coordinating 5 people to give up an hour of their time for every PR is the problem.
MilkFew2273@reddit
Timezones
KazDragon@reddit
Me too! It's a solvable problem.
-Knul-@reddit
I have a team of 5 other developers. I can't sit next to each one all the time. Also, in most cases we don't need to discuss design or architecture and in the cases we need it, we do indeed have a discussion upfront at the start of the ticket's work.
KazDragon@reddit
You can with a little imagination! See any of Woody Zuill's presentations on YouTube. It's eye-opening stuff.
grauenwolf@reddit
Normally I would disagree, but in this case I would call for a live code review.
aykcak@reddit
This is not really feasible with most development environments but it made me miss our mob programming sessions. Those were really insightful and the amount of knowledge being shared was really visible
Acceptable_Potato949@reddit
I wonder if "AI-assisted" development just doesn't fit modern CI/CD paradigms anymore. "Agile" alone can mean any number of different processes at different companies, for example.
Perhaps moving away from "modern classic" paradigms (e.g. Scrum, Kanban, etc.) and finding a new way to work is necessary to get the most out of this new confounding situation with AI.
BTW, not taking sides here, just observing from a "PeopleOps" perspective.
Mc_UsernameTaken@reddit
The company I work for doesn't do Scrum/Kanban/waterfall or any similar paradigms.
We're oldschool; we simply have a list of tasks/tickets for each project that needs doing.
And two people manage the projects and prioritize the tasks across the board.
In my 10+ years working here, we have never ever been more than 3 people on a team.
We have great use of AI tools, but it's not being forced upon us.
This setup, however, I believe only works for the medium to large size projects we usually deal with - enterprise is another league.
HaMMeReD@reddit
"We're oldschool; we simply have a list of tasks/tickets for each project that needs doing.
And two people manage the projects and prioritize the tasks across the board."
Uh that's kanban.
hackrunner@reddit
Not only that, "oldschool" as I remember it was full of Gantt charts and critical paths, and a PM (or multiple) going crazy trying to get all the dependencies mapped and statuses updated in a project plan. And no matter what, it seemed like we were perpetually 3 months behind whatever delivery date was most recently set, and we needed to "crash the schedule" to get back on track.
Kanban would be straight-up blasphemy to the oldschool true-believers and a complete paradise to those of us that had to suffer through the dark times.
denizgezmis968@reddit
did that really need a name
Erebea01@reddit
Better Kanban than
We're oldschool; we simply have a list of tasks/tickets for each project that needs doing.
And two people manage the projects and prioritize the tasks across the board.
every time
denizgezmis968@reddit
Debatable. I'm not in the industry so maybe my opinion is totally useless and irrelevant but more often than not 'naming things' gets ahead of what really matters. Kinda like Object Oriented Programming (?). Just do what works? Why do you need a buzzword like Agile or Kanzen or some other mysterious shit to make it more legitimate? But wtf do I know?
HaMMeReD@reddit
It doesn't make it more "legitimate"; it communicates what it is. It's called language.
In the case of Kanban, it literally means "signboard" in Japanese. I.e., putting cards on a board and moving them between columns to show progress.
You can't "just do what works" without learning "what works" and we do that with language to describe and compare things.
Mc_UsernameTaken@reddit
That might very well be - but alas, we don't use the terms - we do what works for the developer who has to implement the tasks.
HaMMeReD@reddit
So?
I could navigate my city in a 4 wheeled automotive device and not call it a car, but it'd still be a car.
Why is what you call it, or not call it, relevant to what it is at all?
Acceptable_Potato949@reddit
That's just called CE/CJ.
(Classic Enterprise, Continuous Jira)
SnugglyCoderGuy@reddit
Process++, so you know its good.
SporksInjected@reddit
You need to write a book!
EveryQuantityEver@reddit
Why?
I’m not against new ways to work, but to me, there has to be an actual benefit. “AI workflows” aren’t enough of one to change.
Acceptable_Potato949@reddit
Another commenter mentioned the "BMAD" method and in the GitHub README, the following stands out as what makes it "different":
https://github.com/bmad-code-org/BMAD-METHOD?tab=readme-ov-file#overview
I haven't tried this myself and I just literally found out about it, so I don't really hold an opinion on whether the proposed ideas are useful, but it's certainly sparked other minds to do something more with AI-assisted development, so who knows what the future of this will look like?
MagnetoManectric@reddit
This sounds like a great way to generate masses of legacy code that no one actually understands, very quickly
inevitabledeath3@reddit
Lookup the BMAD method and specifications driven development. That's basically what you are getting at here. It's already been done essentially.
Acceptable_Potato949@reddit
Of course it has already been done, haha! It's very recent, though; the GitHub repo for that project is just around two months old.
Carighan@reddit
The problem is that the technology people want to use has a purely negative impact.
It's not like code completion in IntelliJ, for example, couldn't do super-fancy shit pre-AI. Now it's actually significantly worse, often wanting to create whole blocks of code that are fine for 2-3 lines and then become increasingly unhinged, which is insidious for new programmers in particular. Even AI-based line completion has gotten worse, basically plugging in what the majority of programmers would write in a somewhat similar situation instead of actually looking at the code preceding what it's trying to complete, or the return types, or so. (One funny thing about AI coding, since it's based more on textual tokens than on meaning.)
We have to first eliminate the use of AI in situations it is not adept at, and that includes ~everything related to programming. There are exceptions, but they're quite narrow in focus.
Schmittfried@reddit
That’s a completely ridiculous claim.
Hate to break it to you, but the reason why copy&pasting from StackOverflow became such a meme is that most software is not that special. Many situations do in fact require code that the majority of programmers would write in a somewhat similar situation.
gefahr@reddit
I see you've chosen to post an axiomatic truth that will earn downvotes here, so I'll join you in your cause.
Schmittfried@reddit
Usually I am the one posting that quote under posts that get downvoted for telling the truth. How the turn tables.
gefahr@reddit
I was taking one for the team. I don't code for a living anymore so I'm able to look at where things are headed more objectively than if putting food on the table depended on a certain status quo remaining true.
I'm not an AI zealot, I don't think it's there yet. But pretending it won't get there is head-in-sand stuff.
EveryQuantityEver@reddit
You’re not getting downvoted for telling the truth, because you weren’t telling the truth
EveryQuantityEver@reddit
No. This idea that anyone who is downvoted is somehow speaking some controversial truth, instead of just saying something wrong or stupid, is idiotic and needs to die.
Carighan@reddit
What do you think those words actually mean, btw?
inevitabledeath3@reddit
I don't know why they are downvoting you. You are right. Within 5 years most new code will be AI-written. Most of the people here probably won't be employed as programmers anymore by then. The rest of this is largely just denial about the inevitable. Once you have seen what LLMs can do with the right tools and techniques, and the rate of progress being made, it's very apparent what is going to happen. The writing is on the wall.
Bluemanze@reddit
Optimistic at best. Writing boilerplate has always been an annoying, much complained about, but ultimately minor part of the job. I have never been on a project where there weren't novel problems specific to the company or the industry as a whole that took up the majority of my time. Those problems are why we get paid.
These tools have been out for a few years now and we have yet to see any measurable increase in productivity across the sector. Either AI adoption has an inexplicably long J curve for a no-infrastructure subscription service, or it's just corporate fluff.
inevitabledeath3@reddit
Again, you haven't been paying attention or seen the latest tools and developments. There are new models coming out all the time. Three were released in two weeks that were worth looking at. You have this weird assumption that the technology has not improved and is not moving forward. In just the past 12 months there has been enormous progress, especially in open-weights models that are nearly as good as closed-source ones now.
EveryQuantityEver@reddit
Yes we have. None of these models actually understand code.
grauenwolf@reddit
So why haven't we seen any visible effects?
Name some companies that are saying "We're so far ahead of schedule that we're now working on projects that were planned for next year." or "We're adding new (non-AI) functionality to our software thanks to how fast AI makes us."
Schmittfried@reddit
In our company dashboards and helper tools get created that previously were just not important enough to warrant expending valuable engineering resources on them. And it doesn’t matter that those tools have subpar code quality because they are disposable.
Companies can now simply do more without hiring more engineers.
grauenwolf@reddit
So stuff you could have knocked out in 10 minutes using PowerBI or a similar tool?
That's not very impressive.
Bluemanze@reddit
Without explaining what I do, I can assure you that I'm familiar with the technology.
Schmittfried@reddit
There are many potential futures between those two extremes.
pdabaker@reddit
It's not just useful for boilerplate. It's useful for any coding an average junior engineer could do with a few hours of work. That doesn't replace engineers, but it is bad news for those juniors.
And there’s tons of tasks where you wish you could just give it to a junior engineer for a couple hours but it wouldn’t be worth the scheduling burden.
Carighan@reddit
Of course, but say, how do you get those senior engineers if you replace your junior engineers?
pdabaker@reddit
I mean the job market will obviously change. There are good junior engineers who can solve creative problems though, and they will still get jobs. The mediocre ones may not be able to anymore
Bluemanze@reddit
How do you know the difference between an excellent junior and a mediocre junior until you hire them?
Dev has always been a pyramid. You need a churn of hopefuls so that the ones who pass muster climb up the seniority ranks and start making decisions on architecture. Without that, we guarantee a drop in excellence by virtue of simple statistics.
And sure, maybe a similar argument was made about cheese making when the hydraulic press was invented. But this is an engineering profession with an impact on everything in modern society. We need to have excellent people.
pdabaker@reddit
I dunno dude I’m not saying how it should be just how I think it will be, at least for a while.
SnugglyCoderGuy@reddit
There have been studies showing it slows people down, all the while they think they are sped up.
One of my team members spat out 4 big PRs that are all AI slop and have been kept in code review for almost 2 weeks because they are so awful.
Carighan@reddit
Plus most boilerplate is written deterministically, because well, it's boilerplate. It's not variable code, it doesn't need to be able to be dynamically generated from context.
Like your IDE or some intermediate interpreter generating accessors and such is not something you need AI for, and yet this is about the limit of what AI can viably code nowadays without constant oversight. And worse, it even fucks getters and setters up frequently enough to warrant looking through something that a simple "Generate code..." without AI did fine before.
EveryQuantityEver@reddit
No it fucking won’t! LLMs don’t know anything about writing code. All they literally know is that one token usually comes after the other. They know nothing about syntax or patterns.
Schmittfried@reddit
Okayyy, this isn’t what I said and I wouldn’t agree with it. Just that AI does often provide valuable completions.
Carighan@reddit
You essentially agree then, yes? After all, SO-coding was memey before vibe-coding with AI displaced it in that field.
Schmittfried@reddit
Agree with what?
Memes come into existence for a reason. They usually refer to a common shared experience. Yes, it was a meme. Yes, it was often exaggerated. But it was also very true that SO provides answers to many issues a typical developer would encounter over the years. The same applies to AI. Yes, it makes many mistakes and is far from replacing developers, but claiming it's completely unfit for supporting programmers is just stupidly contrarian.
No.
omgFWTbear@reddit
It's because you'd be writing
switch(dinosaur)
    case triceratops: foo(bar) …
and you look up an example on SO that does exactly what you need, but which used the more generic example of rainbow colors:
switch(colors)
    case red: foo(bar), …
so your junior dev would happily end up with code like
switch(dinosaur)
    case triceratops: foo(bar)
    case red: foo(bar)
    … (a literal ellipsis here)
    case violet: fizz(buzz)
… and replacing (a person who might be trainable, or at the least, replaceable with one who is) with a box whose next version might helpfully replace dinosaur with color is adding problems.
kadathsc@reddit
I think you're onto something. AI has reduced the cost of producing code; it's no longer this very valuable, time-intensive product that needs to be carefully guarded, which is what the traditional PR and CI/CD process aligns to. We might be getting into a situation where code is like a napkin: in most cases you'll have throwaway code that's not meant to be maintained indefinitely and is instead meant to be used for that particular use case. You might still have cloth napkins, but that will be for fancy stuff that merits the cost and maintenance.
The reality is that AI is capable of making code that works for simple scenarios and sometimes even for more complex scenarios when you follow certain patterns. And in those scenarios you’re getting code back in minutes that’s costing you very little money. So anyone spending time on code comments, legibility and other aspects is just wasting time.
eyebrows360@reddit
Or, you just shit this "new confounding situation" off into the bin.
jpcardier@reddit
"I'd rather work in a factory plugging hair into dolls if all im getting out of this is a paycheck."
Hey man, that's a skill! Hair punching is hard. :)
croto8@reddit
Why not use your engineering chops to write some automated quality checks and code introspection? Saying there’s no opportunity for skills growth is myopic.
Bluemanze@reddit
Because linters and AI tools that do that exist, because I could never get approval for a project like that because of the former, because heuristics is not an interesting engineering problem, and because it ultimately doesn't solve the issue of the PR process for AI code not having any learning opportunities.
21Rollie@reddit
Tbh most people that got into this career would do that lol, we’re all here for a paycheck. If it paid the same as McDonalds, all computer scientists would be in academia only
mindless900@reddit
While I’m still on the side of using AI as a tool to assist developers and not a replacement of developers, I have seen some good results with AI (Claude and Gemini Code) when it is used correctly.
Just opening it up and saying "Implement feature X" will yield pretty bad results the majority of the time. If you instead provide it with context and knowledge (just like a junior developer) it can produce some pretty good results. And just like a good engineer, you should have it go through the normal process when doing anything. First, gather requirements from product specs, tickets, documentation, best-practice and standards documents, and the general project architecture so it can tailor its code to suit the requirements. Next, have it plan what it is doing in a markdown file and treat that like a living document for it (and you) to update and modify so you both agree on the plan. Then, and only then, should you have it start to create code, and I would tell it to only do one phase of the plan before stopping and letting me check its work. Finally, it should run tests and fix any issues it finds before creating a PR.
The nice thing is that with some files checked into your repository, a lot of this setup is only needed once by one developer to help everyone else. Add in MCPs to go fetch information from your ticketing system and you have a pretty close approximation to the “Implement this feature X” as it gathers the rest of the information from the checked in repository files, sources the product and tech specs from the MCP, and (if you have the rules set up) will just follow the “gather, plan, execute, test” flow I described above.
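A minimal sketch of what such a checked-in rules file might contain (the file name, wording, and paths here are hypothetical, not a requirement of any particular tool):

```markdown
<!-- .github/copilot-instructions.md (hypothetical example) -->
# Project rules for AI-assisted work

1. Before writing code, read docs/architecture.md and the product
   spec linked from the ticket.
2. Write a plan to plans/<ticket-id>.md and stop; wait for a human
   to approve the plan before generating any code.
3. Implement one phase of the approved plan at a time, then stop
   for review.
4. Run the test suite and fix any failures before opening a PR.
```

The "gather, plan, execute, test" flow described above then only needs to be authored once per repository rather than re-prompted by every developer.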
The more I use it the more I see it as the same argument that the older generation had when modern IDEs came out with auto-complete and refactoring tools instead of the good old VIM/emacs everyone was using at the time, but I can see AI companies selling it to CEO/CTOs as a miracle that will double the output with half the heads… which it unfortunately will not.
SporksInjected@reddit
Why can’t you still do this? The other developer can still explain the code to you. If they can’t explain it, then you don’t approve.
Bluemanze@reddit
Because there's no value in two developers sitting down and musing on the merits of an ijklmnop nested for loop neither of us wrote. It's obviously stupid and I reject it, but what does the junior actually learn from that experience?
grauenwolf@reddit
The junior learns that you won't accept AI slop and either gets better at refining it or starts writing it on their own.
alienfrenZyNo1@reddit
The junior becomes the senior. Bosses don't care.
RICHUNCLEPENNYBAGS@reddit
Well except they pay you a lot less to do that.
Bluemanze@reddit
Well, the administration seems to believe consumers are primed for 500 dollar dolls made in America, so maybe follicle engineer will be more lucrative in the future.
GirlfriendAsAService@reddit
Hey, sorry, I didn't really want to do it, but the customer made enough of a stink, so AI slop is what they get.
kronik85@reddit
For these kinds of reviews, I'll make a good-faith effort to identify a couple of glaringly obvious issues. Once I get to three to five major issues, I finish the review requesting changes, which includes them reviewing their own PR and addressing the slop.
seanamos-1@reddit
Why are you giving this PR special treatment?
If a human wrote the code and sent you a PR that was a giant mess, you'd decline it saying it was below the minimum acceptable quality and the whole thing needs to go back to the drawing board. You can add some high level comments about the design and overall issues, exactly as you did here:
If there's a further issue, it gets escalated and the person responsible for the mess goes into performance review for constantly pushing garbage, ignoring or being incapable of maintaining the minimum standard and wasting everyone's time. That is just someone being incompetent at their job and unless the situation improves, they are out the door.
People can use AI, that's not an excuse for shoving garbage for review. If they are doing that, it reflects on them. "AI did it", is not an excuse.
elsjpq@reddit
Somebody who uses AI like this is just going to copy your review into the AI and have it generate more slop. You're just gonna get back a different pile of garbage instead.
seanamos-1@reddit
That's exactly what they will do. That's why I don't suggest giving more than a few minutes to a review like this. High-level, broad comments that it's bad, so bad that it's not worth your time; reject the PR.
When they come back with even more zero-effort, unacceptably bad code, reject again and begin the escalation of whatever your company's performance review process is.
grauenwolf@reddit
Politics and fatigue.
Politics, because you're accused of not being a team player and not accepting their AI vision.
Fatigue because you can only deal with this shit for so long before you just get so tired you give up.
txdv@reddit
What's the point of reviewing at this point? Just write a bot which auto-approves.
anon_cowherd@reddit
That's literally the title of the article- I am a programmer, not a rubber stamp that approves...
grauenwolf@reddit
I expect that is going to happen at a lot of places.
txdv@reddit
I'd argue: just do an AI review bot which detects AI-generated code. Then you can get rid of that "team player" excuse, because it's the AI that does everything, right?
grauenwolf@reddit
That's the plan! They want people out of the loop. They are literally telling people the goal workflow is...
Presumably some executive kicks off the whole process by giving it a prompt. Or maybe the AI reads customer complaints to decide what to build next.
dasdull@reddit
You're absolutely right! Great implementation 5/5. Approved :rocket:
Sincerely your n8n agent
txdv@reddit
aisarcastoapprover
john16384@reddit
The AI vision is similar to hiring a bunch of cheap juniors to write code. Except, in the latter case you might get a return on investment. When that incentive is gone, teaching AI how to write better code is similar to teaching externally hired juniors: a complete waste of resources
cornmacabre@reddit
Snark aside, I'd argue the opposite -- investing in an internal knowledge base that's mandatory context to AI/Junior folks is probably going to be an essential (if flawed) guardrail. More than a system prompt, I mean a whole indexable human curated KB.
It's very different than 1:1 coaching, but a KB that documents long term learnings, preferred design patterns, and project-specific best practices, etc is mission critical context. Context is king going forward is my personal soapbox opinion, and a high-effort KB is the only way I see to minimize AI or junior humans making bad assumptions and bad design choices.
In practice, that means a pretty big investment in workflow changes and documentation. And understandably, a pretty painful and resource intensive one upfront.
peripateticman2026@reddit
Sad, but true.
Heuristics@reddit
so, run it through an ai and tell it to clean up the code?
hugazow@reddit
Reject it or make the developer explain it without ai
gc3@reddit
Just reject it and tell the guy to fix each thing
b1ack1323@reddit
I’m really shocked when I hear this, I made a very clean set of rules for the AI I use and it is exactly as I would make it. Specifically I made a ton of rules for DRY and loosely coupled design.
Now everything is deduplicated, with DLLs and NuGet packages created where code is shared between projects.
Built an entire Blazor app and it’s decoupled and clean with EF and a database that is normalized, just writing specs and letting the AI go.
Why aren’t people building rulesets to fix errors they find with AI?
The only thing I don't have it do is make security policies for AWS, for obvious reasons.
Embarrassed-Lion735@reddit
Your ruleset approach works when it’s backed by hard gates in CI; otherwise reviewers drown in noise.
What's worked for us on .NET: codify the rules in the repo, not just the prompt.
- Keep an architecture.md with banned patterns, layer boundaries, and "when to extract a package" rules.
- Enforce with .editorconfig + Roslyn analyzers/StyleCop and dotnet format, and fail the build on warnings.
- Add duplicate detection (jscpd or dupFinder) and auto-fail if similarity > N lines.
- Require an OpenAPI spec first, then generate stubs; use property tests (FsCheck) and mutation testing to catch the happy-path bias.
- Cap PRs to small, focused changes and block mixed refactor + feature diffs.
- For EF Core, demand explicit migrations and seed scripts, not ad hoc schema drift.
I pair GitHub Copilot for scaffolding, SonarQube for quality gates, and DreamFactory to spin up REST APIs over existing databases so I don’t hand‑roll controllers; Postman collections run in CI to lock the contract.
This takes the burden off the reviewer and aligns with OP’s gripe: AI is fine when the system forces DRY, decoupling, and small, testable PRs.
Bottom line: rulesets plus enforceable gates make AI useful and keep reviews sane.
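As a rough illustration, the "fail the build" gates described above could be wired into CI along these lines (a hypothetical GitHub Actions fragment; job names, paths, and the duplication threshold are made up, and flags should be checked against the tools' own docs):

```yaml
# Hypothetical CI fragment enforcing the gates described above.
jobs:
  quality-gates:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Formatting must be clean
        run: dotnet format --verify-no-changes
      - name: Build with warnings as errors
        run: dotnet build -warnaserror
      - name: Fail on duplicated code
        run: npx jscpd --threshold 5 src/
      - name: Tests must pass
        run: dotnet test
```

The point is that the gates run on every PR regardless of how the code was produced, so reviewers never see diffs that fail the baseline.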
b1ack1323@reddit
I use a terminal tool called Warp; it makes an md file in the repo with the specified rules in it, and a lot of the rules you listed are in it.
It also forces a check with SonarQube on commit and then reads the output and makes corrections.
314kabinet@reddit
Then reject it and have whoever made it do a better job. Other people sucking should be their problem, not yours.
HideousSerene@reddit
I had a situation like this where the engineers just started going to different reviewers who did just rubber stamp stuff. And if I pointed it out I would get berated for it.
So I quit. After four years, I said fuck it. Enjoy your slopfest.
Anybody hiring?
Halkcyon@reddit
I also did this. Unfortunately the US economy is sinking like the Titanic so no one is hiring.
Tai9ch@reddit
You two should get together and start a consulting company to fix AI slop.
Halkcyon@reddit
I'd rather become a farmer in an age when tariffs are bankrupting them en masse.
darth_chewbacca@reddit
If someone puts this much effort into their code, you can justifiably put the same amount of effort into the PR.
Find the first case of the duplicate helper function, deny the PR and just stop reviewing. They'll fix that one thing, you find the next one thing and deny. Lather rinse repeat.
If you want to be nice, just put a general comment saying "too many unnecessary comments, too many duplicate helpers, poor architecture. Please clean up this code before re-requesting the PR"
Chii@reddit
If there's so much wrong with it, why not use an AI to do the first pass of the review? Once the person doing the coding has addressed the majority of the concerns, you do a manual second pass (for which there ought to be fewer issues).
EveryQuantityEver@reddit
What is with this stupid suggestion? One AI completely fucked up, but miraculously two will work out? Do you see how crazy that sounds?
gefahr@reddit
That's a great idea... for the person who "drafted" the PR. Not the reviewer's responsibility.
syklemil@reddit
IMO you're not obliged to spend any more time reviewing code than was put into writing it.
If someone is just prompting and expecting you to do all the reviewing, what work have they even done?
gefahr@reddit
Well, the prompting was work, but in any case, I agree. You don't owe it more effort than they spent writing it. That goes for code or design docs or anything.
EveryQuantityEver@reddit
Prompting is not work
gefahr@reddit
I'm old enough to remember hearing this when I first started programming. Because it was just typing words. Probably something to reflect on in there.
wggn@reddit
if they just prompted and didn't review the outcome, there was barely any effort put into it
Jonathan_the_Nerd@reddit
So you're saying let the AI do the review? Write "This code is ugly and so are you" and ask ChatGPT to expand it to three paragraphs?
syklemil@reddit
That's really what we should be doing, yeah.
Though at that point we really should be looking into completely automating the process of having two LLM prompts duke it out. The humans could go drinking instead; it'd likely be a better use of their time.
MrBleah@reddit
Have the AI review and critique it the way you want it critiqued.
For generating code, I find that using the GitHub spec kit forces the AI to generate the code you would want, because it forces you to plan out everything ahead of time. That said, in the end I can probably code what I want in the same amount of time and just use the AI for boilerplate code fill ins.
EntroperZero@reddit
I had a PR like this, but I went through it with the developer and made it clear what his responsibilities were. He still uses LLMs, but he doesn't just send me slop anymore.
dylan_1992@reddit
You can ask the ai to improve the code, it’s actually pretty good at refactors. The first pass is usually not good.
UltraPoci@reddit
What's the point of the review, then
dylan_1992@reddit
What do you mean? A reviewer is supposed to review code you think is good, not review your draft.
sayaKt@reddit
The problem in this case is the AI user wasting a reviewer’s time. He should have noticed that before requesting a review.
dylan_1992@reddit
Yes, I’m saying this isn’t a limitation of AI. This is totally the submitter's fault; they should’ve polished it, either manually or with follow-up prompts.
HaMMeReD@reddit
All things that can be mostly resolved by writing a good copilot instruction file for your project.
But we aren't here to talk about being productive and useful w/copilot, just bitch right?
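For what it's worth, Copilot does support a repo-level instruction file at `.github/copilot-instructions.md`. A sketch of what one might contain (the paths and rules here are illustrative, not from any real project):

```markdown
# Copilot instructions for this repo

- Search for and reuse existing helpers (e.g. in `src/utils/`) before writing new ones.
- Do not add new dialog/modal implementations; use the shared modal component.
- Keep changes small and scoped; do not refactor unrelated files in the same change.
- Every new function needs a doc comment and a unit test.
```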
eyebrows360@reddit
Or, y'know, just type "with", because it's a normal word, just a regular word like any other, so why the hell does it get this nonsense "w/" shorthand that some people seem so impressed with themselves for using?
cuddlebish@reddit
It comes from physically writing shorthand notes; it's a super common word when summarizing
Floppie7th@reddit
Or just, y'know, write the code yourself. It's not that hard.
HaMMeReD@reddit
Or you know, and this will be controversial, but how about you just do your work however you want, and not tell other people how to manage theirs.
Telling someone not to use copilot is just as dumb as telling someone they have to use it. In both cases it's a "mind your own fucking business" moment.
CanSpice@reddit
This isn’t a “you should use Rider and not VS Code” thing, this is a “you should write the code you’re paid to write, not have AI do your whole job for you”.
useablelobster2@reddit
Your ability to work however you want disappears the second you work with anyone else. Outsourcing your work to a virtual junior who never learns is not a good work pattern, and other developers aren't in the wrong for pointing out shit code you check in just because you made a glorified markov chain write it.
Floppie7th@reddit
Right, because this definitely isn't passive-aggressively telling people how to work.
falconfetus8@reddit
Tbh, that could easily just be bad human written code from the description you've given.
lightmatter501@reddit
My strategy is that I will make AI review it and pick out comments until the AI is done reviewing it with valid feedback, then read it myself.
CovidWarriorForLife@reddit
If you think that’s just an AI problem I got news for you brother
SnugglyCoderGuy@reddit
I am running into this as well
Strostkovy@reddit
Ask AI to reject it for you
RubbelDieKatz94@reddit
It's crazy how often that happens. We have a massive codebase and even without Copilot there were a lot of redundant hooks and other functions. We used to have three (!) ways to handle dialog popups (modals). I tore it down to one.
Interestingly, Copilot tends to reuse existing utilities with the same frequency I do. It searches the codebase and tends to find what it's looking for, then uses it.
Sometimes utilities are hidden in a utils.ts file in an unrelated package with a crappy name. In those cases I doubt that I'd have found it either.
xt-89@reddit
Part of the problem is that teams using AI need to also use advanced architectural fitness functions and CI rules adapted to the issues of AI programming. For example:
- reject direct merges to main
- reject PRs that are too long
- lower score if a given change is clearly shotgun surgery
- lower score for detected god objects
- lower score for poor cohesion
- higher score for function/class docs
- higher score for path test coverage
- lower score for functions/classes that are too similar
- …
What becomes clear is that aligning on things like architecture, design patterns, test flows, and sub-tasks are the bottleneck. These are the true substance of software engineering, but we used to rely on the PR process to have those discussions. Now, the bottleneck moved and we need to make explicit what was implicit. That’s why you should also write PRDs and ARDs for any given feature complex enough to require it.
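A minimal sketch of what one of those CI gates could look like, assuming the diff stats and analysis signals are computed elsewhere in the pipeline (the threshold and signal names are made up for illustration):

```python
# Toy PR gate: hard-fail oversized PRs, then sum soft scoring signals.
MAX_CHANGED_LINES = 400  # illustrative limit, not a standard

def review_gate(changed_lines: int, signals: dict[str, int]) -> tuple[bool, int]:
    """Return (accepted, score).

    `signals` maps detector names to score deltas, e.g.
    {"shotgun_surgery": -2, "god_object": -3, "has_docs": +1}.
    """
    if changed_lines > MAX_CHANGED_LINES:
        return False, 0  # too long: reject outright, no soft scoring
    score = sum(signals.values())
    return score >= 0, score
```

A real pipeline would derive `changed_lines` from something like `git diff --shortstat` and the signals from static analysis; the point is that the rules are explicit and enforced before a human ever reviews.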
CockroachFair4921@reddit
Yeah, I feel you. That kind of AI code is really hard and tiring to check.
Floppie7th@reddit
Reject it and tell them you're only going to accept code written by humans.
gefahr@reddit
Terrible advice, IMO. Just reject it because/if it's bad, not because it's AI. That position isn't sustainable and will lead to weird witch hunts as the models' output gets harder to superficially eyeball and identify.
trxxruraxvr@reddit
That will only get you shit from management. Just say the code doesn't meet quality standards and they have to fix it.
loriscb@reddit
The AI PR review problem is more about incentive misalignment than the tool itself.
Saw this pattern emerge on my last team. Dev who generated 2000 line AI PR got credit for "high output" in velocity metrics. Dev who spent 4 hours reviewing that mess got zero recognition. Eventually everyone started generating AI code because review work was invisible to management.
What broke the cycle was making review time count as contribution metrics. Track hours spent on PR reviews, give public credit in standups, weight it equally with feature work. Suddenly people stopped dumping AI slop because they knew someone whose time they respected would have to clean it up.
The technical solution is easy, just enforce small PR limits and require human explanation of design choices. The organizational problem is harder because most companies measure output by lines changed instead of problems solved or code quality maintained.
Turns out when you measure the wrong thing people optimize for the wrong thing.
tudalex@reddit
Just leave comments that when fed to an AI fixes it.
ClassicPart@reddit
Alternatively, reject the PR and have the pusher sort their shit out. You're reviewing it and assessing its suitability for production, not fixing it.
Echarnus@reddit
A discussion should be held with the person checking it in. Using AI is no excuse for having technical debt. With clear specifications and a test pattern, AI agents can actually build decent code. But that's up to the person setting it up/making use of said tools. And even then the code should first be supervised by the one making the prompts, before creating reviews for others. Nowhere should it be an excuse for laziness.
GlowiesStoleMyRide@reddit
I can imagine that is exhausting. But it also somewhat reminds me of a PR I could have made when I was newer to a project. If I were to review something like that, I would probably just start writing quality-of-code PR comments, reject the PR, and message the developer to clean it up for further review.
Until you actually address this, and allow the dev to change, this will probably keep happening. If it doesn’t improve, bark up the chain. If that doesn’t work, brush up your resume and start looking around at your leisure.
chili_oil@reddit
Don't worry: with vibe coding comes vibe reviewing and vibe debugging. Be open-minded and go embrace AI
Big_Combination9890@reddit
It's really easy: If someone uses AI to write the code they send my way, I will use AI to review their code.
Said AI is a small bash script that closes their pull request with the comment: "No."
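The whole "reviewer" fits in a few lines of shell. A joke-sized sketch, assuming the GitHub CLI (`gh`) is installed and authenticated:

```shell
#!/usr/bin/env bash
# The entire "AI" review pipeline: close the PR with a one-word verdict.
review_pr() {
    local pr_number="$1"
    gh pr close "$pr_number" --comment "No."
}
```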
stipo42@reddit
I don't mind reviewing copilot code, but if I leave a comment asking why you did something this way, or that you cannot do it this way and your answer is "that's just how copilot did it" we're gonna have a problem
ram_ok@reddit
I get an AI generated response from the author. They’ve gone from broken English to em dash in no time
GirlfriendAsAService@reddit
Cyborgs are here, man, and they’re Indian
rokd@reddit
Not just Indian, happens with everyone, but god it's so fucking true. Our India team has gone from writing no documentation to every doc being a 15-minute read in perfect English, for a simple script.
Their code comments? Also perfect English. The code is completely AI generated, and if you question it, you get no response. "Was this entirely done with AI?" Answer: "No, it was simply cleaned up by AI," like I'm a fucking moron.
I once said great, went along with it, and asked for an in-person code review, and they refused the meeting lol. It's disastrous.
GirlfriendAsAService@reddit
Okay, you gotta calm them down, they're way too stoked about this AI stuff.
derpyou@reddit
I got that answer from a staff engineer! Granted he.. shouldn't have been one, but it blew my mind. "Oh, Claude wrote the entire IaC folder" explaining why the memory / cpu requests and limits looked basically random.
grauenwolf@reddit
My company has a policy that you can't use AI to do anything you couldn't do manually. I will be strictly enforcing that policy on my projects.
Far_Oven_3302@reddit
I had a teacher who said we weren't allowed to use copy or paste, as it led to lazy programming and propagating errors. You weren't allowed to copy and paste even in your own code.
grauenwolf@reddit
That's insane. I thought I had it bad when I got marked down for defining the constant
feet_to_inches = 12
hyperhopper@reddit
To be fair, in a code review I would also ask for the variable to be renamed to inches_per_foot, as it's not a function, and length_in_feet * inches_per_foot reads better than length_in_feet * feet_to_inches.
Actually, the real answer in a programming language with modern constructs would be to encode the unit into the type. Even allowing a program to compile when you try to pass a feet argument into an inches function is a mistake.
mccurtjs@reddit
Even less modern languages. I've seen C people advocate for:
or similar, because it will block the automatic conversion on assignment or when passing to a function.
grauenwolf@reddit
This was an intro to C class. We weren't allowed to create functions yet.
I do mean "allowed". I also got marked down every time I used a feature that he hadn't taught yet.
BaPef@reddit
My C++ instructor didn't allow computers in his classroom, and all assignments had to be submitted on 3.5" floppy via the mail. USB drives, tablets, laptops, and iPhones existed at that point.
barthvonries@reddit
That could be turned into a function or a module, so if there indeed is a bug in that code, fixing it once would fix it everywhere ?
That's how I understand your teacher's point of view.
Far_Oven_3302@reddit
I suppose, but sometimes you get a line that is:
int x = cos(theta) * radius;
and I find it faster to copy paste then change the x to y and the cos to sin. Then again I coulda created a class to describe a vector to contain x and y pairs that had a constructor like vector(theta, radius)... but this is just an example of the way I type.
For me it is a typing style, non linear typing I guess, lol.
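That vector class the commenter alludes to could be as small as this (a hypothetical sketch of the vector(theta, radius) constructor idea):

```python
import math

class Vector:
    """2D vector built from polar coordinates, so the cos/sin pair
    is written once instead of being copy-pasted and edited."""
    def __init__(self, theta: float, radius: float):
        self.x = math.cos(theta) * radius
        self.y = math.sin(theta) * radius
```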
barthvonries@reddit
Well, if you modify the line it's not really "copy/paste". But it is error-prone depending on the amount of modifications you have to make in order to complete the instruction.
Copy pasting your instruction and changing x and cos to y and sin is perfectly fine. But on a longer line with 7 or 8 changes in the line, it may indeed be better to retype everything.
Far_Oven_3302@reddit
I find it to be a template when I copy blocks. It all depends on your use case of course.
str foo[a++] = "{
as, df,
bg, hj,
}";
str foo[a++] = "{
oi, kn,
kj, iu,
ok, po,
}";
slutsky22@reddit
literally heard this from my mentee today "that's what the llm did"
Keganator@reddit
Yeah. “I don’t know, the AI chose it” is never going to be acceptable as an answer to me, rather, that’s a sign someone is on their way to a PIP.
BaPef@reddit
Right, like I've used Copilot to generate an input confirmation pop-up to drop into existing code, but I understand the syntax and languages from working for 15 years. I tried to get it to refactor a 4400+ line toolbox script with around 20 functions into individual files to simplify maintenance, and it exploded. I did it myself and used it as a tool to add things to functions I write. It's a tool and has its place, but can become a crutch with a weight.
iloveyou02@reddit
it's worse when AI is then used to answer PR comments...we have a person that does this...to the point where it's like working with AI...he is just the proxy
0x0c0d0@reddit
You have a guy begging to be fired.
In this job market.
Deranged40@reddit
I honestly wish I could upvote this a thousand times.
I honestly don't care how the code got generated, but I do 100% expect my co workers to be responsible for their own contributions.
AlSweigart@reddit
Yeah. I mean, why are you reviewing code that the "author" didn't even bother to read?
SanityInAnarchy@reddit
It's just rude.
You can use it strictly as a tool to accelerate actually writing code, where you write some code and the AI writes some code, or where you write most of the code but the AI is a smarter intellisense. In that case, you'd be able to tell me why you did it that way, because you did it that way.
Or, you can replace your job writing code with a job reviewing AI-generated code. You prompt the bot, it spits out code. You read it, maybe refine it a bit yourself, maybe tell the bot how to change it so it gets closer to something you'd write. When it's up to your standards, you send it off for review.
"That's just how copilot did it" tells me you replaced your job writing code with a job reviewing AI-generated code, and now you want me to do that job for you.
I guess maybe there's a world where that's a fair trade, because I can do the same to you -- just send you some fully-vibe-coded slop that I don't understand and let you talk to my bot through code review comments. But what are the odds that someone too lazy to review their own slop is going to put any effort at all into reviewing mine?
Joris327@reddit
Too late, by the end of this we’ll all be professional TAB-pressers.
/s
Tasgall@reddit
I wish there was another button for it, sometimes I actually want a tab, and it's already overloaded to auto-complete for intellisence. I feel like I hit
ESC
more than anything else, lol.
dauchande@reddit
Maybe read the MIT study. Not only does it screw up your brain while using it, it keeps doing it after you stop. No thanks. No AI (really ML) for me. It’s a useful tool for specific tasks, but writing production code is not one of them.
blind99@reddit
It's going to be the India exodus all over again, where you had to rubber-stamp the code from a team of 50 devs who are paid a pittance to save money and avoid hiring people here to actually work. Then you get questioned by the management on how it's possible that their garbage code does not work. The only difference now is that nobody gets the money except a couple billionaires, and nobody has jobs at the end.
QwertzOne@reddit
Problem with programmers is that we don't understand the system we work for. We think merit and skill protect us, that good code and clean logic will always matter, but the industry doesn't reward creativity. It rewards compliance. The more we optimize, the easier we are to measure and the less space there is for real thinking.
Our creativity gets absorbed and sold back to us as someone else's product. What felt like expression turns into data, property and profit. The myth of neutral technology hides the truth that every tool trains us to surrender control. We start managing ourselves like we manage machines, chasing efficiency, until exhaustion feels like virtue.
Capitalism does not need creators. It needs operators who maintain the machine and never question why it exists. True creation means uncertainty and uncertainty threatens profit, so the system gives us repetition dressed as innovation and obedience dressed as collaboration.
Programmers like to think they build systems, but more often they’re maintaining the one that builds them. Every metric, every AI tool, every performance review teaches us to think less and produce more. The machine grows smarter, the worker grows smaller.
That’s not a glitch. That’s the design.
mexicocitibluez@reddit
No it doesn't. It rewards making money. Which is why AI is so alluring to people.
If you're a CFO and all you see is "If we use AI, we can save $X in programmer salaries," you'd be fired for not entertaining it. That's not saying it's the correct call or that it can replace actual programmers, but this has been the same system we've been working in since forever. The only difference is the power is becoming inverted.
We, as software developers, have just as much bias against the tech as CEOs have for it. And anybody who tells you they can objectively measure a tool that might replace them one day is lying to you.
RoosterBrewster@reddit
"Don't hate the player, hate the game."
QwertzOne@reddit
In this system, following the money is how people learn to obey. You do not need someone to tell you what to do, when the rules of profit already decide it for you.
A CFO is not just making a smart choice. They are trapped in a game, where not chasing profit means losing their job. That is how control works now, not through orders, but through incentives. So yes, AI looks like progress, but it is really the same logic that has always run the world. The difference is that now the machine is learning to replace even the people who once built it.
SweetBabyAlaska@reddit
I'd love to see this idea fleshed out more in a blog post or something. What an interesting way of applying that analysis.
QwertzOne@reddit
I'm not really doing anything novel here, it's more or less Critical Theory, so if you find it interesting I may recommend learning about thinkers like Byung-Chul Han or Mark Fisher.
I know that programmers don't typically delve into modern philosophy, but I was tired of the neoliberal explanation of how the world works and decided to dig deeper.
Nyadnar17@reddit
Capitalism not needing creation is backwards.
Free (relatively, anyway) markets are the only system that does need creativity, because unless you are protected by the government, stasis equals decline and death.
There is always some “genius” investor-class member trying to get rid of or control labor instead of investing in it. This always leads to a bubble or stagnation, which leads to decline, which leads to a new generation of market leaders.
The companies embracing AI at the expense of talent are gonna pay for it. The trick is surviving that whole process.
EveryQuantityEver@reddit
Stasis does NOT equal decline and death. That’s the stupid finance bro “growth at all costs” mindset that’s ruining this world.
Nyadnar17@reddit
What healthy industry has zero innovation?
If an industry isn't evolving it is dying. Has nothing to do with "growth at all cost", or "hyperscaling", or whatever bullshit term MBAs are throwing around to try to justify their bonus to investors.
EveryQuantityEver@reddit
No, what you are advocating for is the MBA bullshit. Companies can be sustainable and work just fine, but finance bros don’t let that happen because they need double digit growth at the expense of everything else
Nyadnar17@reddit
Sustained with innovation and improvements.
Car companies can’t put out the same car for ten years; you can’t just keep releasing the same Assassin's Creed game, the same album, etc.
If you aren’t improving your good or service someone else is and they are going to take your customers.
sleepwalkcapsules@reddit
Nah. Workers will.
TheBoringDev@reddit
It'll likely be both, but the execs making those decisions will just hop to another company when the current one fails, having learned nothing - exactly like the last outsourcing bubble.
QwertzOne@reddit
Bad companies will eventually fail, but what about the people who have to work for them to survive?
Think of it like a video game, new update adds a super-powerful, easy-to-use weapon. To keep winning, every player has to start using it. The players who refuse to adapt get left behind. So, you spend months getting really good with this new, easy weapon. Your old skills get rusty.
Then, the game developers realize the weapon is breaking the game and they make it weaker, but it's too late. The way everyone plays has already changed forever. The best players are now the ones who mastered that easy weapon. It's the same here. To survive this phase, programmers have to learn to use AI, so even when the AI-obsessed companies fail, the next wave of companies will hire from a pool of programmers whose main skill is now working with AI.
The game doesn't reset. It just moves to a new level where the new tool is a permanent part of the game.
Nyadnar17@reddit
If AI did what c-suite thinks it can do there wouldn’t be a game. It would just be The Singularity because you can print human level workers on demand.
That’s a big part of the point of free markets. Unlike videogames no one can control the meta and the meta is constantly shifting as innovation and market needs change. If AI is the new meta then it’s the new meta but assuming a technology that currently isn’t profitable and currently isn’t even on track to be profitable to be the future seems wild.
The short term is gonna suck….but what else is new in this fucking industry. Dotcom bubble, outsourcing, “free money” for massive hiring ramp ups, favoring contractors or salary, firing seniors and leaning on exploited college students, etc. Some asshole always has one neat trick to make more money without providing more value and it always blows up in their face.
QwertzOne@reddit
This is different from outsourcing or the dot-com bubble for one key reason. All those previous cycles were about replacing the worker with a cheaper one, like an offshore team or a junior dev. The fundamental craft of programming stayed the same. AI is the first tool being pushed that attempts to replace the work itself. It changes the job from creating code to managing, validating, and prompting an AI that creates code.
My argument isn't that AI will be the Singularity or even that it will be profitable. My point is that forcing programmers to use it fundamentally changes the skills that are valued. Meta is shifting, but it's not a business meta. It's a skills meta. Even after this wave of companies blows up, the next generation will hire from a talent pool that has been trained to think like AI supervisors, not creators. The damage isn't in the companies that fail, but in the craft that gets eroded in the name of survival.
Agitates@reddit
We automated away so many jobs, I actually just see it as karma that we suffer the consequences of our own actions. We've destroyed the value of humans and turned everything into variables and values.
And we did it for a nice fat paycheck.
TheBoringDev@reddit
Automation is good; if a job doesn't require a human to do it, then forcing a human to do it is meaningless busy work. The only real problem is that we've structured society to stop paying that human when the job is automated.
Agitates@reddit
Yes and no. I think it's partially a lie we tell ourselves. Some jobs are boring or obviously better to have a machine do, but people exist across an entire spectrum of skills and abilities, and they all need jobs.
Unless we're gonna tax the ever living fuck out of everyone making over $200,000 a year, add a 1% capital tax (over 1mil), and give everyone a livable UBI, then we're literally saying, "because you can't match automation in skill/abilities, you're worthless and we don't care if you die"
sleeping-in-crypto@reddit
Downvoted because people don’t like that you’re right
geusebio@reddit
conversely, that was the labour they were buying.
john16384@reddit
The only thing that matters in the end is that the software doesn't annoy users to the point of giving up. This means it must be highly available, responsive, easy to use and trustworthy.
That implies a lot of things that most experienced developers/architects/etc will "add" on top of a regular feature request. Not only do they build the feature, they ensure it scales (highly available), has a reasonable latency (responsive), is well integrated into the existing system (easy to use) and secure (trustworthy).
Managers almost never "ask" for any of this, it's just the default expectation. For developers to keep delivering features with the same quality standards, the design must be solid and evolved with new requirements. Good luck doing that once AI slop pervades your code base.
kappapolls@reddit
chatgpt wrote this post
PurpleYoshiEgg@reddit
94% probability based on gptzero's analysis. Good catch.
stevefuzz@reddit
Until the software sucks and they want the creative programmer with clean code....
Bleyo@reddit
... is this AI?
mazing@reddit
This is poetic and now I want to Hack The Planet with my comrades✊
mindcandy@reddit
Can anyone name a specific company where
I keep seeing this complaint. But, it’s just too bizarre…
DowntownSolid5659@reddit
My company started tracking Cursor and Copilot usage, and the senior software director even built an AI-powered app to track pull requests with a scoring system.
Now it’s turned into a toxic race among developers to climb to the top of the leaderboard. He also mentioned that incentives might be added soon based on the scores.
Soccer_Vader@reddit
I wish I could be a rubber stamp. It feels more like babysitting when using AI at work.
BrianThompsonsNYCTri@reddit
Cory Doctorow uses the phrase “reverse centaur” to describe that and it fits perfectly
gefahr@reddit
I don't think I'm smart enough to get this. Anyone feel like explaining?
felinista@reddit
perhaps this
BlackDragonBE@reddit
In my mind a reverse centaur is someone with a horse upper torso and head while the legs and butt are human. This dude's definition is almost random.
Tarquin_McBeard@reddit
May I introduce you to the concept of metaphor?
A centaur is a being that has a horse's speed with human intelligence.
This is a metaphor for a developer with human intelligence whose speed is increased by automation/tooling.
A reverse-centaur is where a developer has to review the code, and is therefore limited to working at the speed of a human (they have to read and understand code they didn't write, which is slower than just already understanding it because you wrote it), but the code is written by AI, and is therefore unintelligent slop.
i.e. the speed of a human, and the intelligence of a horse. A reverse-centaur.
Little_Duckling@reddit
Bojack?
Tai9ch@reddit
Right.
It's an AI head and a very tired human body.
felinista@reddit
As I understand it he's just using that phrase for its more abstract meaning. Just like how upper human torso + horse legs is sort of like taking the best bits from both, the reverse construction arguably takes what's least useful from both man/horse. In his case, he's saying instead of man driving the machine, the opposite is happening.
FlyingBishop@reddit
This presumes that the machine works at a fast pace. And it does, but it's a bit like it sprints 100 meters in a second and just freezes. And there are a thousand paths and in the happy case where it finds a happy path, it's great, but it has limited ability to actually drive quick progress because 90% of the time you have to painstakingly retrace its steps at normal speed.
gefahr@reddit
Thank you.
DownvoteALot@reddit
We have all become middle management now, just without the salary.
VestOfHolding@reddit
If I can get paid like a programmer, I'll happily rubber stamp at this point. I've been out of work as a software engineer for over a year and I'm ready to sell my soul for a decent paycheck again.
icowrich@reddit
Engineers second-guessing their instincts because they feel pressured to agree with whatever the model suggests is just... sad. Same sentiment though. I use CodeRabbit for reviews and it’s been helpful for catching routine stuff and keeping feedback visible between people, but the bigger worry is how some teams treat AI feedback like it’s the final say. It changes the review dynamic when people stop questioning.
pVom@reddit
Goddamn so over these doomer posts.
Software isn't going to collapse because of AI and AI isn't going to just go away, it's too inherently useful. It's not perfect, it's not the end all be all and it's not what the AI companies are selling it as, but it's still insanely powerful at what it does.
We happily offload our work to programming languages, frameworks, libraries, and third-party services. I bet the vast majority of people here could not write anything useful in C, and software hasn't collapsed; the economy hasn't imploded.
Our value has never been in writing code, it's been in translating human problems into technical solutions, that hasn't changed. Much like farming hasn't changed, they still produce food, but the boots on the ground job has. They're no longer labourers, they're tractor drivers, mechanics and agronomists.
In a sense we're all tech leads now. If AI puts bad code in your codebase, that's YOUR fault and you haven't done your job correctly. You haven't structured your code in a way that's easy for AI to extend on, you haven't planned appropriately and given specific enough instructions, you haven't been rigorous enough with your code reviews and test suite.
Quite frankly if some kid out of college, assisted by AI, can translate a human problem to a technical solution faster and better than you, they're more valuable than you are.
We've been using AI more at work and honestly my job has never been more secure, business has never been better, there's never been more work for me to do and never been a greater need for the business to have someone in my role. If I get more done in less time they just find more for me to do, there's always more features to improve the product and more markets for us to tap and expand into.
If you're worried about your job maybe look at product management because there's never been a greater need for them.
AI means everyone can do more with less, the companies that just treat it as a way to reduce headcount will be left behind by those who use it to increase their value to the consumer.
So my suggestion is to suck it up and adapt. Embrace it. If there's problems with AI then look for solutions. Sitting around moping and wishing it would just go away isn't going to get you anywhere.
cobalt8@reddit
The problem is that early studies are showing that frequent use of AI leads to reduced cognitive functioning. The old cliche "use it or lose it" is true. If people start relying on AI to do all of the coding for them their skills will regress.
Until AI reaches the point that it can generate flawless code there will be a need for people with the knowledge to debug the code. Once it can generate flawless code jobs everywhere will be cut to the bone. Meanwhile, people will get dumber by the day because they value instant gratification in the short term over long term skill gains.
pVom@reddit
You could say the same about tech leads now though no? If the bulk of what you're doing isn't writing code but reviewing it then yeah you might not be as sharp at writing it as you once were. But arguably you're actually providing more value as a lead, by planning out solutions and finding issues ahead of time you're uplifting the entire codebase instead of just the small parts you touch. I'd argue you don't need to be as sharp at writing code and conversely you'd be sharper in the parts you do need.
Like one of the benefits to using it I've found is it forces me to make sure that I've planned it out and architected the solution ahead of time, and I'm getting much better at that. Before, especially as a junior, I was more likely to just start hacking away and find issues as I went, sometimes well after it was already live. I didn't have the same opportunities to practice architecting because someone else who was better at it was doing that role, and code needed to be written.
I think we're kidding ourselves if we think that our job before AI was some grand learning experience, most of it is implementing solutions to problems we've already solved in the past and going through the motions. By and large I stopped learning to code years ago, I've already mastered it. What I learn now is better engineering and that hasn't really changed besides having more opportunities to learn it because I'm not spending time and energy typing code and solving little micro problems with limited impact.
But yeah if you're just writing a quick prompt and pushing it to master you're going to have bad code created by a bad developer. But that's a human problem with a human solution. If you're actively engaging with it then you'll get more done, done better and be better at your job.
kooknboo@reddit
My large Fortune 100 IT org is about to announce a goal of having ALL IT output AI generated and reviewed by EOY 2026. We're apparently having all titles change specifically to, for example, Prompt Engineer.
This is in an org where the overwhelming complexity is self-generated bureaucracy. And now there will be people that suddenly have the critical thinking to know how to have a dialogue with MyPartner about a specific goal and then understand its response and then test it. Many of these people are confused by the synonyms "directory" and "folder".
Oh, and yes, our AI service of choice is apparently Gh Copilot but we call it MyPartner because we have to rebrand every fucking IT term imaginable.
Great place to work. Stifling lack of imagination or ability to think beyond yesterday. Thankfully my time is short. Good luck to you youngsters that have to survive this AI fuckery.
MyotisX@reddit
Either we wake up and there's the biggest stock market crash of all time. Or we continue on this path and in 20 years we live in a dystopian AI slop future where everything is constantly broken but we've accepted it.
fire_in_the_theater@reddit
i await all the mysterious bugs that start appearing in all the services i use due to this approach.
PerduDansLocean@reddit
That sounds nightmarish. Glad you're leaving though.
TheBlueArsedFly@reddit
You're an artizan, a craftsman, a poet of code.
But I'm a business and I want you to do it faster.
tomz17@reddit
My experience so far is that it's just trading some marginal forward progress for massive technical debt. The amount of time spent understanding + debugging the AI code when something goes wrong wipes out those gains very quickly.
je386@reddit
I tried a new AI agent, so not the normal chat, but assigning a task to the AI agent, which then makes a Pull Request. After 3 days of trying that, I made it myself in 2 hours.
It is still possible that it was a skill issue on my side, but getting loads of code fast is not as huge a feat as one might think.
Developing Software was never about typing fast.
tomz17@reddit
So I've been spending a lot of time trying to get a feel for which agentic-AI programming tasks can actually save me time... So far anything more than a discrete, well-defined function (sort of how you would describe it to an intern) is a wash if not a net negative. I've "vibe coded" entire polyglot applications, but the thing you get out is disposable trash. They are not particularly maintainable without inputting a roughly equivalent amount of time into understanding/auditing all of the choices the AI made along the way.
grauenwolf@reddit
It is a massive skill issue. Or more specifically, the issue is you have actual skill and the AI fanboys don't.
grauenwolf@reddit
And you think I don't? Do you think I enjoy working massive amounts of overtime to address the failings of some AI bullshit?
Synth_Sapiens@reddit
Don't talk to idiots as if they are humans.
BlobbyMcBlobber@reddit
It's more in line of "I am a business and I will hire those who do it faster". That's how it is.
xFallow@reddit
“Do you want it done faster or do you want it done right?”
“Faster please”
SolarPoweredKeyboard@reddit
At the cost of what? Either hire more engineers or be ready to scrimp on quality.
VermillionOcean@reddit
My current workplace isn't mandating copilot use, but it's highly encouraging it so they can evaluate its effectiveness. Thing is, most people on my team aren't really engaging with it, so I wouldn't be surprised if they try to force us to use it at some point just to see if it's worth the continued investment. I feel like my team is just slow to adopt things though, since one of the devs on our team wrote a tool to automate writing testing documentation which is frankly a godsend imo, but only one other person and I were using it for months, so now they're asking the two of us to help everyone else set up and basically force them to give it a try. I wouldn't be surprised if they do something similar with copilot given the current usage rate.
i8abug@reddit
As a programmer with more than 20 years of experience, you can be whatever you want to be, but not necessarily with employment income. I've embraced being a rubber stamp and it has its benefits; once you stop fighting the current, there are all kinds of opportunities.
anengineerandacat@reddit
AI first is the general mantra being told, my own organization is making this shift but cautiously (we typically lag behind a generation but those in charge want us to give it a go for the next fiscal year to track efficiency gains and see if it's worth it).
Internally on my own team we are metrics focused and treat PR's as if the developer is still the responsible entity behind it; AI simply exists to accelerate their productivity, not replace.
On our 3rd real sprint with the technology (a few sprints with PoC's and such) and results are decent so far but definitely nowhere near as good as proponents are thinking.
At "best" it's a 30-50% efficiency gain (depending on the overall complexity of the sprint) but on average so far it's about 15-20% (which is pretty big for a simple introduction of a tool stack).
All of our existing processes still take place (code reviews, unit testing, integration testing, functional testing, etc.) the only key difference is we start with a prompt, see how far we can get with it, then manually jump in and clean things up / improve the tool in some capacity.
One thing not heavily discussed is that while you might get code up for review sooner, you really do spend considerably more time reviewing.
Mostly because with another human developer you have this "Oh, I trust Bob to build the feature correctly; I'll just look things over real quick and see if anything stands out".
With AI it's this weird situation of having an extremely talented graduate working with you, but they aren't quite used to building enterprise grade systems.
So you'll get the feature built, and it'll work to the requirements; but fall apart in other areas like performance, error handling, etc.
The other issue is that it really is a huge mental shift, more akin to TDD: defining "what" you want built is incredibly important for a quality output, and until you get used to it, it takes way more time in some instances than just cranking out the feature.
Aggressive-Ideal-911@reddit
I’ll be that stamp then and take yo job. Stfu
Affectionate-Listen6@reddit
And Magnus Carlsen — arguably the best chess player of our time — still gets his ass whooped by chess engines. Get with the times or get left behind, “programmer.” AI is the current state of the art.
— Proofread and refined by ChatGPT
toroidalvoid@reddit
The PRs I see at work are already awful, I wish the devs would use AI
selucram@reddit
I thought the same, but AI slop is on another level. I used to write approx. 20-30 comments on a really bad PR. Now it's in the high 80s sometimes breaching 100 comments.
_chookity@reddit
How big are your PRs?
selucram@reddit
PRs are getting increasingly big, even though I asked the colleagues to split them in a couple smaller ones. Around 90-120 modified files.
ianis58@reddit
IMHO most PRs should be somewhere in between 1 - 10 modified files. Refactoring PRs can go high like 20, 40, 80 files but that's not every day PRs. Honestly above 20 files it gets nearly impossible to do a meaningful review. Correctly naming the branches and not doing more changes than what the branch name describes is the way for me to keep a lower count of modified files and not mix two changes.
toroidalvoid@reddit
😬
ngroot@reddit
> Now it's in the high 80s sometimes breaching 100 comments.
If I encountered a PR like that, it'd get a "no" and get closed. That's insane.
selucram@reddit
We're a small project team and "blocking" would reflect badly back onto our small company and the dev involved 🤷. But even if it wouldn't, I'm personally more inclined to never block something, I want to get things merged / fixed, even if it means that most of the comments won't get resolved; but I'm commenting still, if I see an issue.
aaronfranke@reddit
It's not blocking that person's work, it's giving them work (the work of fixing their PR).
ericl666@reddit
After 5 comments, it's a phone call.
selucram@reddit
Yes, but that's what makes this even worse. Before I could at least ask the dev to "show me through your thought process" on a quick call and video share. Now I can't even do that because "dunno, AI generated this".
deja-roo@reddit
If you don't understand the code you're checking in and responsible for, it's just going to have to be rejected and redone until you do
grauenwolf@reddit
Not everyone has that luxury. If you do, use it.
UnidentifiedBlobject@reddit
Yikes. Huge PRs? Or is it stuff that could be automated?
realultimatepower@reddit
also the quality of AI code depends in large part on the quality of the underlying codebase. if your hand-written code is already garbage, AI code will be an utter disaster, but if you have a clean codebase with simple, consistent design patterns, AI can pretty much nail it, as long as you don't give it too much to do all at once.
mexicocitibluez@reddit
"But the LLMs are spitting out wrong information"
Welcome to the internet, where W3Schools has been the #1 search result for anything web-related for the last 20 years.
Supuhstar@reddit
I was hoping this would be an article about how terrible a business practice it is to tell people to "approve" pull requests rather than to "review" them… But no, it was yet another article about how LLMs are bad for coding actually.
Same shit we’ve seen a billion times on this sub lol
grauenwolf@reddit
And we're going to keep seeing this until either AI stops sucking or employers stop demanding that we use it.
Supuhstar@reddit
well, it’s gonna have to be the second one, along with everyone else who thinks that they can insist that it’s used everywhere.
I’m a real enthusiast for the technology, but it has some really hard limitations that just can’t be overcome unless we get a successor to the Transformer model architecture
grauenwolf@reddit
And that's another problem. If someone does come out with a new type of AI that isn't based on this garbage it's going to destroy the economy. Or more specifically, the stock market that's attached to LLMs.
Eris is pregnant. With what we do not know, but it won't end well for us.
Supuhstar@reddit
idk about you but I think that if a talking computer can crash an economy, then that economy was poorly structured to begin with lol
grauenwolf@reddit
You completely misunderstand the risk vector.
All of the growth in the US is centered on the AI sector. If you eliminate them from the equation, the US is already in a recession. And the vast majority of that money is in LLM style AI.
The top 5 companies of the S&P 500, representing 26.5% of its total value, are heavily investing in LLM style AI.
If a non-LLM AI is developed to replace LLMs, that investment evaporates. The S&P 500 crashes, which will cause panic selling of other stocks. Which in turn will cause mass layoffs, which hurts the real economy even further.
Supuhstar@reddit
I think they invest in companies, not the specific implementation of technologies that those companies use.
Either way, you’re proving my point that it’s a poorly structured economy, mostly based on debt
grauenwolf@reddit
My apologies. I didn't mean to imply it wasn't a poorly structured economy.
Supuhstar@reddit
you didn’t imply it, you very rigorously described it.
Economies shouldn’t be based on speculation nor debt. They can allow them, sure, but if those fall through… it shouldn’t crash the entire economy.
grauenwolf@reddit
I wonder if we'll make it to the 20th anniversary of the 2008 crash.
Supuhstar@reddit
One can only hope
Heuristics@reddit
a programmer produces a specification. the ai is simply a type of compiler that turns the spec into another type of functioning code.
AlanBarber@reddit
I've said it before and I'll say it again... and this is coming from a grumpy old greybeard that hates change.
Automated code generation is just the newest tool we developers have to improve our productivity and output. Right now these tools are in their early days, so yes they can suck and generate garbage, but they are getting better and better.
Anyone that refuses to learn these tools, you sound like the same developers 20+ years ago that bitched and complained about how IDEs were stupid and bloated. All they needed was a text editor and a compiler to be productive.
Maybe I'm wrong but I think we're on one of those fundamental industry shifts that will change how we work in the future so I'm sure not going to ignore it and end up sidelined.
MrMo1@reddit
What do you mean early days? IIRC LLMs were initially theorized after WW2.
grauenwolf@reddit
My use of an IDE did not affect your workflow.
My use of an IDE did not require VC subsidies to pay for it.
My use of an IDE did not result in your job being threatened.
My use of an IDE didn't result in massive security vulnerabilities.
This is in no way like an IDE. Which, by the way, were already popular in the 1980s.
kappapolls@reddit
depends what IDE you're using lol
IDEs are productivity tools. if everyone in your org is using an IDE but you, and your productivity is low ...
IDE plugins and extensions are a pretty common vector for attacks actually
yes of course, IDEs haven't changed at all since the 1980s
grauenwolf@reddit
All of your responses are bullshit, but let's focus on this one.
If you have to lie to make your point, you don't have an argument.
kappapolls@reddit
you're a bit prickly huh. why do you think it's important to know how the money is shuffling around? feel free to address my other points if you want. they're still there.
grauenwolf@reddit
I'm well aware of the round-tripping frauds that AI vendors are now engaging in to avoid admitting that revenues aren't matching expectations.
But I fail to see how that helps your argument.
kappapolls@reddit
what AI vendors? what round-tripping fraud? can u make real claims please?
grauenwolf@reddit
Round tripping is when you give money to a company so that they can buy your products with that money. You then book it as revenue even though it's really a loss.
Nvidia is doing this with OpenAI and numerous smaller players in order to prop up GPU sales. In many cases no actual money changes hands.
https://youtu.be/CBCujAQtdfQ?si=Jm6ZOqZzUTeiDD_7
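[Editor's note: a toy sketch of the accounting mechanism described above, with all numbers invented for illustration. It makes no claim about any specific vendor's books; it just shows why a round-tripped sale inflates reported revenue without bringing in new cash.]

```python
# Toy model of round-tripping (all figures invented for illustration).
# The vendor "invests" cash in a customer; the customer spends that
# same cash buying the vendor's products; the vendor books the sale
# as revenue even though no outside money ever arrived.

investment_in_customer = 100   # cash the vendor sends out
customer_purchase = 100        # the same cash coming back as a "sale"

reported_revenue = customer_purchase                        # the headline number
net_new_cash = customer_purchase - investment_in_customer   # what actually arrived

print(f"reported revenue: {reported_revenue}")
print(f"net new cash:     {net_new_cash}")
```

The income statement shows 100 in revenue; the business is no richer than before.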
grauenwolf@reddit
All of your responses are bullshit, but let's focus on this one.
Productivity matters. I'm not aware of anyone fired or being threatened with being fired for not using an IDE. Even today a lot of web developers prefer to just use a simple text editor and no one complains so long as the work gets done.
AI isn't like that. People are literally being fired because they "aren't using AI enough" even when they can prove that AI is actually slowing them down or completely useless for their situation.
So no, I'm not going to accept your implied argument that AI improves productivity so much that people can't work without it. Especially in a thread where the chief complaint is about how much it slows everything down.
kappapolls@reddit
pretty sure you're just making this up
grauenwolf@reddit
Two seconds of searching news headlines will solve your ignorance on this point.
Dig a little deeper and you'll find that Microsoft is releasing tools specifically to help companies track AI usage so that they can more effectively punish people.
kappapolls@reddit
i see about 30 headlines with CEOs of trendy companies talking shit. forgive me if i don't believe this to be a true representation of the industry right now.
grauenwolf@reddit
What the fuck does "true representation of the industry" mean in this context?
That there aren't enough public examples of people being fired for not using AI, so you can just ignore it and accuse me of lying anyway?
kappapolls@reddit
it means that you should take everything CEOs and business folk say about what they're doing and why with a big grain of salt.
why you so testy? i don't think you're lying, i just think you're doing a chicken little. CEOs jerk themselves off about firing people all the time. it's just AI-flavored now.
grauenwolf@reddit
That I agree with.
grauenwolf@reddit
All of your responses are bullshit, but let's focus on this one.
So what? That doesn't change the fact that IDEs were already widely accepted 20 years ago, completely disproving your claims.
kappapolls@reddit
i can't tell if you picked up on my sarcasm or not. software development in general has changed a lot since the 1980s. i don't see why it matters that the concept of an IDE has been around a long time when the concept itself has changed substantially since then.
grauenwolf@reddit
All of your responses are bullshit, but let's focus on this one.
While it is possible for an IDE extension to contain malware, we're not talking about malware. We're talking about the output of AI tools when those tools are used as designed.
Not only is the code it produces often insecure, sometimes it just gives your information to hackers just by reading a pull request. https://old.reddit.com/r/programming/comments/1o6tew1/camoleak_critical_github_copilot_vulnerability/
currentscurrents@reddit
Maybe not your job, but hundreds of jobs have been automated by software over the last few decades.
Sorry that it's your turn now.
grauenwolf@reddit
The number of jobs no longer needed due to software automation is far more than "hundreds". And exactly zero of them were replaced with an IDE.
Keganator@reddit
Yup. Right on the head.
“High level languages! Fah! I can make better assembly by hand!”
“Scripting languages? Fah! They’ll never work, they’re not as efficient as compiled languages!”
“Garbage collection? Fah! No software garbage collector will ever be as efficient as my manually memory managed code!”
“Generics? Fah! My hand written data structures are perfectly tuned to the problem, I don’t need ‘em”
“Reusable Standard libraries? Fah! What, you can’t figure out those protocols on your own?”
“Package managers? Fah! I can build and compile each component I use myself, don’t you know how to use a linker?”
“AI codegen tools? Fah! I can write it more efficiently myself, using high level, scripting languages with garbage collection. I’ll just grab express and build up a simple typescript app. Let me quickly download it from NPM.”
aaronfranke@reddit
The fundamental difference with the last one is that the correctness and maintainability of code is threatened. It doesn't matter if the code is fast or slow, or you can produce a thousand lines of it in a minute; so long as the code has the incorrect behavior or is unmaintainable, at best you are digging yourself into a hole, at worst all you are accomplishing is stupid faster. https://i.redd.it/jrb4e1wr9ll31.png
Tai9ch@reddit
IDEs are still stupid and bloated. All you need is a text editor, compiler, and well designed language to be productive.
darkentityvr@reddit
I’ve taken some time to look into the math behind these LLMs out of personal curiosity. From what I can tell, we’re not really in the “early days” anymore, and I don’t think what we have now is going to improve dramatically. I could be wrong, of course, but I’m not convinced by what Sam Altman and the other AI tech leaders are saying about these models getting smarter. It mostly looks like they’re just throwing more computing power at the problem to attract more investment. At its core, an LLM feels like a glorified “SELECT * FROM table” operation — a brute-force approach powered by massive GPUs that makes inefficiency look impressive.
FeepingCreature@reddit
I don't understand how you can "look into the math" and come away with thinking it's a "SELECT * FROM table" operation. That doesn't correspond to anything in the math that I'm aware of.
grauenwolf@reddit
The point is that it isn't fine-tuned for the task but instead, like a "SELECT * FROM table" query, just throwing massive amounts of resources at the problem.
Among database developers, "SELECT * FROM table" isn't cited as an example of good SQL; it's used as an insult.
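[Editor's note: for readers outside the database world, a minimal sketch of why that query is the insult, using an in-memory SQLite table with made-up names and data. `SELECT *` hauls back every column whether you need it or not; a tuned query asks for exactly what it needs.]

```python
import sqlite3

# Hypothetical wide table; only the name is actually needed.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users (id INTEGER, name TEXT, bio TEXT, avatar BLOB)")
con.execute("INSERT INTO users VALUES (?, ?, ?, ?)",
            (1, "ada", "x" * 10_000, b"\x00" * 100_000))

# Brute force: every column comes back, including the 100 KB blob.
everything = con.execute("SELECT * FROM users").fetchone()

# Targeted: one small column, with a filterable predicate.
just_name = con.execute("SELECT name FROM users WHERE id = ?", (1,)).fetchone()

print(len(everything))  # 4 columns dragged back
print(just_name)        # ('ada',)
```

The brute-force version "works", which is exactly the complaint: throwing resources at the problem instead of asking for what you need.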
loquimur@reddit
That's what translators already went through. Rest assured that you'll end up being there as a rubber-stamp that approves LLM generated code.
Even though hand-written code might be of higher quality and even sometimes faster to write, ‘nobody’ will want to pay for it done this way. What people want is to have it done ‘all automatically’ and then an alibi programmer to come in and sprinkle some fairy dust of humanness over it at the very end. Since ‘all the work has already been done automatically’, this serves as a justification that the programmer must then offer their fairy dust contribution at rock-bottom rates.
It needn't actually be that way, but day by day by day, someone will wake up to think that it ought to be that way, come on, the machines become better and better so that surely now at least, can't we give it another try? Variations of this will come up in every other team meeting and management decision until it is set in motion.
inevitabledeath3@reddit
I mean look at what's possible today, and look at what was possible a couple years ago. It pretty much is possible to have all your code be AI generated with some human review and editing today. In two years I don't even want to know how much more advanced it will be. There is an astonishing rate of progress. If programming is your job and only skill set and not design, architecture, systems engineering, security, and so on then you won't have a job in a few years. It is that simple. Denying it won't save you.
grauenwolf@reddit
What makes you think it will be more advanced? They had one good leap forward from GPT-3 to 4 and haven't seen any meaningful progress since.
When I look around at Reddit forums for specific tools the general consensus is that the newer models are both more expensive and less effective.
inevitabledeath3@reddit
This is what happens when you don't pay attention. Open AI isn't the only company. Even just counting them they had a breakthrough with O1. That in turn inspired R1 from DeepSeek and a whole bunch more models.
grauenwolf@reddit
I'm not just talking about OpenAI. I'm talking about all of the tool specific forums I happen to encounter.
You're talking about the press releases for the models, I'm talking about the feedback from the people who are actually paying for them.
inevitabledeath3@reddit
Brother if you were paying any attention to the forums you would see the buzz around GLM 4.6 and GPT5-Codex.
grauenwolf@reddit
Ah yes, GPT5-Codex. Some people really like it, but most are saying it's slower than Claude Code but at least it's cheaper.
That's not a good sign. If it's slower, then it's probably using more resources per query, which in turn means it costs more and they're just subsidizing the price.
inevitabledeath3@reddit
Claude is run on TPUs, not GPUs. Completely different hardware stack. Not really comparable.
If resources are your concern then pay attention to China. DeepSeek V3.2-exp is very efficient, as are many Chinese models. GLM 4.6 is only 357B parameters for example.
grauenwolf@reddit
When OpenAI investors start panicking about DeepSeek I'll start to take it seriously.
inevitabledeath3@reddit
OpenAI's CEO said about R1, which is their old model, that it is a strong model and they welcome the competition. They then started throwing around false accusations about them. You have been paying no attention to China.
ericl666@reddit
I go back to the article that says: "if AI apps are so easy to make, then where are they?"
FeepingCreature@reddit
IMO, with programming an app being "easy" now, the comparative effort of releasing an app and pushing it through the bureaucracy looms much larger.
My best guess would be, people are making apps for their own use and then move on with their lives. That's what I'm doing.
inevitabledeath3@reddit
Exactly this
inevitabledeath3@reddit
It's actually an insightful question even if the answer is not what you think. Most people don't really have any new ideas that have not been done before. The biggest use of AI code will be in existing products like how Microsoft now use AI in their products. I work with systems that were around before LLMs but are now being improved with LLMs going forward. Even with LLMs it takes time to develop something new and you still need some technical knowledge and project management knowledge.
Domain and workplace specific tools will also be made using AI and LLMs. This is the use case for nontechnical people as they can now make simple scripts, programs, and websites for simple tasks without needing to learn coding. These solutions won't be broadly advertised or done on a professional scale.
QwertzOne@reddit
Delusional take, programming is not much different from the rest of this list. I'm currently working on personal project and current LLMs are already able to cover all of that and more, with "some human review and editing today".
Is what they generate perfect? No, because LLMs are trained on specific data, they have limited context, so they're better in popular areas and struggle with niche.
Can they generate everything in the instant? No, it still takes time and effort, you need to work iteratively.
However, there's nothing special about design, architecture, systems engineering, security and other areas. It's still data, that LLM can analyze and generate.
inevitabledeath3@reddit
You're right, maybe I am being too optimistic. Maybe those things can be automated too.
john16384@reddit
I hope companies will be prepared for software that lasts a mere couple of years before collapsing under its own weight, or when their customers start leaving when inevitably the slop starts leaking through the cracks and annoys your users.
Synth_Sapiens@reddit
"needn't" ROFLMAOAAA
And what exactly makes you believe that programmer jobs must be protected?
PurpleYoshiEgg@reddit
First known use in 1778.
Synth_Sapiens@reddit
I know that it is a word.
And even if it was not, I wouldn't laugh because words are used to convey meaning and there's absolutely nothing wrong with inventing new words as long as meaning is conveyed.
sreguera@reddit
Developer puts the ai-generated code in the repo or else developer gets the hose again.
cheezballs@reddit
I'm so fucking sick of all these same articles just saying the same thing. Think of something new and stop flooding the sub with "ai sucks here why" posts. We get it. This sub is more of an anti-AI sub than anything else.
PurpleYoshiEgg@reddit
you know you have the power to control your own social media by unsubscribing, right?
cheezballs@reddit
I just want to unsubscribe from all the rule 34 content I see here. Brackets can be very sexy.
grauenwolf@reddit
And that's how they win. They keep throwing this AI shit at us until we get so tired and worn down that we stop fighting back. They don't have to make it better, they just have to outlast us.
kooknboo@reddit
I’m usually one that filters the LinkedIn bot slop, but this one caught my eye. 100% true. Read and think about those last two sentences. If you’re in a CHO that is masturbating to your exciting, AI led future… GTFO.
“TL;DR: Two brilliant ex-Pivotal engineers share a powerful message: AI isn't replacing developers; it's making tiny teams of exceptional engineers vastly more productive. The winners? CTOs who embrace small, high-skill teams paired with AI over large, coordination-heavy organizations.”
Far_Oven_3302@reddit
I once was an electronic technician, finding faults in circuits boards, then the machines came and I had to rubber stamp what they were doing. Now my job pays minimum wage and is unskilled labour.
ohdog@reddit
Yeah, past code review processes are not very suitable for AI development, processes need to change.
EveryQuantityEver@reddit
Why? If the AI isn’t generating good code, why do we have to settle for it?
ohdog@reddit
That premise is false. AI generated code is often good enough.
Petrademia@reddit
I'd argue that they just want the system built under the assumption that the bulk of the product is perceived as "already done" by the AI. We'd become a validation layer, which would drive hiring toward the marginal tasks. Then, as compensation is pressured downwards, it becomes a win for the company anyway to double down on expectations for engineers, creating a loop where AI is "proven" successful.
IG0tB4nn3dL0l@reddit
I just approve them all as fast as possible without reviewing. Today's AI slop is tomorrow's employment opportunity to clean it up. And I like employment.
agumonkey@reddit
yeah you're a human with personal and intellectual growth goals, but CFO values this at zero USD
l03wn3@reddit
No, that’s a PMs job.
grauenwolf@reddit
PMs shouldn't be approving pull requests.
ConsciousTension6445@reddit
AI is too concerning for me. I don't like it.
-Something_Catchy-@reddit
Old man yells at clouds ☁️
chance--@reddit
Oh kid, those of us screaming at the clouds are doing it for those who follow.
tekanet@reddit
Look, I just don’t want to fight this war. I can code and make good use of this skill. Am I better than AI? Surely not. Can I, up until now, effectively use AI for my own advantage, keeping control of my work? Yes I do. Can juniors grow as skilled as I am, having this thing in their hands? I believe that’s not possible.
The current large generation of seniors is the last one in history; I have little doubt about it.
It’s not necessarily bad, maybe developing will just evolve without needing for our services.
If my skillset will be required later on to mitigate the impact of current AI output, it will come with a high price tag.
If someone else will be there to do that job for dimes, I’m completely ok with that: maybe I’ll finally code just for fun.
grauenwolf@reddit
Yeah, because we've got a massive steam pipe leak and there shouldn't be clouds inside the office!
mixxituk@reddit
You are when you accept my PR
is669@reddit
Copilot can speed things up, but it doesn’t understand context or consequences that’s still on us
qodeninja@reddit
I think seeing this through a different lens is helpful. A lot of roles in the SDLC already do this in some regard.
Keganator@reddit
Your job as a programmer isn’t to write code. It’s to deliver features and make a maintainable system so you can deliver more features better and faster. Programmers that don’t realize this are going to be left in the dust by AI tools.
grauenwolf@reddit
Your job as a programmer isn’t to play with AI tools. It’s to deliver features and make a maintainable system so you can deliver more features better and faster. Programmers that don’t realize this are going to be left in the dust by people who don't outsource their brain to AI.
manly_@reddit
Nothing like automating the creation of legacy code.
BlobbyMcBlobber@reddit
You can stand your ground as loud and proud as you like. Question is, who will be hiring in 5 or 10 years.
Software engineering is going through a paradigm shift. Adapt or die.
grauenwolf@reddit
That's an incredibly stupid thing to say.
If these tools actually work then none of us will have jobs in 5 years. There's no adaption, only retirement or manual labor.
Sounds like you got henchman syndrome. You're like the guys who help the mad scientist destroy the world without thinking about what happens to you afterwards.
yamfun@reddit
seniors approve junior code too
VehaMeursault@reddit
I feel the same way you do, but I also acknowledge that this is what happens with all jobs as innovation continues. That is to say that jobs being automated is practically natural.
over_here_over_there@reddit
We imagine ourselves as codesmiths who carefully think about every for loop, every variable name, every function. We have strong opinions about OO. And we get paid a ton of money to produce code slowly so we can have artisan coffee machines in the office, beer fridays, have 2hr lunches and anime breaks (those were the good old days anyway)
And none of this matters because the company we write code for gets sold and your IP gets tossed into the dumpster of “we have X at home”.
Been in industry for 25 years, I’m here for the paycheck.
hippydipster@reddit
Jim, I'm a doctor, not a grease monkey!
RogueJello@reddit
I am not a number! I am a free man!
Kindread21@reddit
I'm guessing in the long run all but the most sensitive systems will have prompts checked in rather than generated code, and part of the build process will be generating the system and testing the result.
So instead of generated code reviewers we'll probably end up being prompt reviewers instead.
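As a sketch of that idea, a hypothetical CI pipeline where the checked-in prompts are the source of truth and the build regenerates and tests the system (the `prompts/` layout and the `codegen` command are assumptions for illustration, not an existing tool):

```yaml
# Hypothetical CI pipeline: prompts are the checked-in source, not the code.
# "codegen" stands in for whatever model-invoking tool a team might adopt.
stages:
  - generate
  - test

generate:
  stage: generate
  script:
    # Regenerate the system from the versioned prompts on every build.
    - codegen --prompt prompts/service.md --out src/
  artifacts:
    paths: [src/]

test:
  stage: test
  script:
    # The generated output, not the prompt, is what gets verified.
    - make test
```

Under this model, review effort shifts from diffing generated code to diffing prompt changes plus the test suite that gates the regenerated output.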
great_divider@reddit
Yet.
HappyZombies@reddit
Problem is, these AI tools are here to stay. I complained about something similar and was told “it’s the future, sorry”. So just adapt, I guess? Whatever, I’m still gonna complain lol
mindaugaskun@reddit
I see nothing wrong with it. More importantly good programmers should be more concerned about rubber-stamping "Rejected" on PRs that don't meet required product quality. Both juniors and seniors should strive to become good at such a skill to tell bad code from good code, so nothing really changes in the field.
hindustanimusiclover@reddit
The problem is that I have to work with timelines. Things that until a couple of years ago I would promise in a week, I have to deliver in a day nowadays. If you ask for a week, an intern will do the same in a day! How can I not vibe code in this scenario?
BlueGoliath@reddit
You sure?
trxxruraxvr@reddit
Have you seen the shit that AI generates? If you're halfway competent you'd at least be a rubber stamp that declines generated code.
StupidIncarnate@reddit
This was so short it could have been your reddit post, rather than linking to a crappy site.