AI reshuffling the group hierarchy
Posted by gburdell@reddit | ExperiencedDevs | 83 comments
Honestly, I feel like I’m on the verge of irrelevancy with AI tooling. I thought it would happen much later in my ~15-year career. The backstory is that my manager has fairly aggressively pushed our mixed-ability team to use AI tooling, even to the point of being vaguely threatening that people who don’t will be “unemployable”. The other senior devs and I are too busy with team-critical tasks to quickly pivot to an agentic style of work, so my manager has tasked some juniors with idle time to lead the charge. It’s gone quite well, and now they’re presenting well up the management chain, and I’m truly proud of them for getting this opportunity.
The problem is that now everybody in the group feels empowered to think up features and submit pull requests. Before, the seniors maintained critical infra that not many people touched, because what it did was specialized and outside most people’s skillset (e.g., databases). While the submitted code passes style guidelines and is bug-free, it’s usually about 4x longer than it needs to be and isn’t coherent with the architecture. I have a tough time articulating why the code’s bad, other than that it adds technical debt, so I tend to approve the PRs if they add immediate value.
So we have more and more people feeling increasingly emboldened to let Claude crank out reams of infra code over the weekend, which I need to sign off on, on top of whatever mission-critical stuff lands in my lap, while doing a major re-skilling to AI agents that are themselves churning quite a bit, and while I’ve got 2 toddlers at home. Extrapolating from here, I’ll probably just never catch up, and will instead focus on what I’m good at today while being slowly (or quickly?) outcompeted.
PaleCommander@reddit
I see a few examples of sacrificing long-term sustainability for short-term gains here:
All of these are maybe defensible in isolation, but if you want the long-term trajectory to look good, you need to stop sacrificing it. And yes, fixing it now will be harder than fixing it back when the juniors were looking for something to do. But what else are you going to do, give up?
UsualNoise9@reddit
Agree - if we were in the on-prem to cloud era and you went with "we were too busy fixing the server in the basement to figure out how to deploy the docker container" it wouldn't fly either. The bargain of working in tech is lifelong learning.
gburdell@reddit (OP)
Yes, and I’m continually learning new things — C# and Git for my current job — but I had a few months to get up to speed. I also keep up on academic papers being published in my field. My company only approved LLM usage a few weeks ago, and the pace at which AI tooling is being foisted on teams is probably 10x faster than any change I’ve seen before.
dweezil22@reddit
Find the Jr AI expert that's really smart and collaborative and partner with them to try to solve the code quality problem you articulated above. You'll end up with one of three outcomes:
1. Utter failure (at least you learned something new: that you're stuck w/ this shitty situation)
2. Proof for the Jr that this still needs human code cleanup. Follow that up with a plan to get the Jrs actually doing that work (you could unilaterally demand this, but I think you realize that risks seeming like an old gatekeeping senior who pisses everyone off; ideally you want consensus across all three of leadership, seniors, and juniors)
3. A workflow to get Claude refactoring, which you can have the Jrs get to work using, and then write up as a 1-pager to brag to leadership about how you're embracing their AI goals
xmBQWugdxjaA@reddit
This is the perfect analogy.
jedilowe@reddit
Exactly. If it was human-generated, would you let it go? Since it was generated quickly, why not take the time to make it right? And if the Jrs can't tweak it quickly, isn't that a problem in its own right?
Articulating why code is not there yet is much harder than we think. So much of expertise is intuitive: we know that something isn't quite right, we know what we would do to fix it, but it is hard to explain it all to someone else. Just like coaching a golf swing or a dance move, sometimes you need to correct the problem yourself, with tips rather than step-by-step instruction from your mentor.
No-Economics-8239@reddit
It doesn't sound like code gen is increasing productivity. It sounds like it is increasing the rate at which code is produced, and you are feeling the pressure to keep up with it. Any increase in productivity is eaten up by the increased review burden from the new code coming in.
If your job were at risk, they wouldn't need you. They could just have AI agents in the pipeline to automagically review pull requests. But, as I'm sure you can imagine, this would just hasten the downward spiral you're already seeing.
If these junior coders are feeling emboldened and empowered, it's because you aren't pushing back on the code they are shoveling. They are being trained that what they are doing is good and that it's working.
You're right that your problem is that you can't figure out how to articulate what is wrong. Junior coders need mentorship and training. AI doesn't provide that. And telling you that you need to do all your normal responsibilities but also extra duties doesn't magically make you more productive.
You need to get clarity on your priorities. Do they want you tooling up to use these new code gen utilities? Or mentoring junior developers? Or continuing your current duties of overseeing and generating code and reviewing changes?
There are only so many hours in the week. It is perfectly normal for leadership to want to look for ways to boost productivity. One way they can accomplish that is making you do more work. Is that what you want?
Figure out how to advocate for yourself. Find how to articulate your concerns. Help manage expectations and get clarity on your priorities. And don't rubber stamp code that you aren't comfortable allowing into the code base. If you aren't the watcher on the wall... who is?
Away_Echo5870@reddit
Yup, I guarantee management does not understand that increases in code volume have side effects that increase workload elsewhere; as a senior, it’s this guy’s responsibility to make them understand the workload issues and get them to adjust responsibilities (or hire more people to cover it).
Electrical-Ask847@reddit
yea i want op to clarify what they mean by "it has gone well"
stevefuzz@reddit
Or just wait for their shit AI-slop MVP vaporware to fail miserably... As a high-level coder who was smart enough to leverage AI for productivity, I can say with certitude it is no more than a sometimes-okish poor junior coder that constantly makes hard-to-parse mistakes with little to no understanding of context. To someone inexperienced it is magic. To me it is a useful mirage that is good at tedious blocks of boilerplate and general common-knowledge regurgitation.
MathmoKiwi@reddit
OP approved it though
ReachingForVega@reddit
What will happen is the blame will fall on the person approving the code and OP will be sacked. Later it will all fail miserably.
stevefuzz@reddit
I'm hoping once all the investment money dries up some of the ridiculous rhetoric around AI will come back down to earth. It's at snake oil levels and companies are lathering it all over themselves with a sideways smile.
Electrical-Ask847@reddit
> It’s gone quite well
> it’s usually about 4x longer than it needs to be and isn’t coherent with the architecture.
i am confused.
g1ldedsteel@reddit
Perhaps I’ve just become way too cynical way too fast but I think this is just the new way of software in the agentic age. Passing guidelines and bug-free seems to be the current “good enough”. Architecture is a tool we use for conveying complex concepts easily, and how we structure our discussions about the code. If our understandings about a given system come from the agent’s understanding of the system (as seems to be the trend), then adherence to architecture might be headed for the technological dustbin.
I hope I’m wrong.
MoreRopePlease@reddit
One of the goals of architecture is (should be) to support change. Can I swap this component out with another and not break the system? Can I easily swap out the UI widget framework with something else? Can I use this other database? Can I use this other 3rd party tool?
If your software is unable to adapt to change you are in for a world of hurt down the line. Unless this software is inherently short-lived, I guess. But software has a knack for living well past its use-by date.
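To make that concrete, here's the kind of seam I mean, as a minimal sketch (invented names, TypeScript just for illustration):

```typescript
// A narrow interface is the seam: callers depend on this, not on any vendor.
interface UserStore {
  getUser(id: string): Promise<{ id: string; name: string } | null>;
  saveUser(user: { id: string; name: string }): Promise<void>;
}

// One implementation per backend. Swapping databases means writing a new
// implementation, not rewriting every caller.
class InMemoryUserStore implements UserStore {
  private users = new Map<string, { id: string; name: string }>();
  async getUser(id: string) {
    return this.users.get(id) ?? null;
  }
  async saveUser(user: { id: string; name: string }) {
    this.users.set(user.id, user);
  }
}

// Application code takes the interface, so the storage choice stays swappable.
async function renameUser(store: UserStore, id: string, name: string) {
  const user = await store.getUser(id);
  if (user) await store.saveUser({ ...user, name });
}
```

Whether AI writes the implementations or not, someone still has to decide where that seam goes.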
g1ldedsteel@reddit
Well said, and gods know I agree with you. My big worry is that when the business folks realize that an architectural change (or, worst case, a complete rewrite) costs about the same as your casual everyday bugfix, pushing for sane and consistent architecture is going to be a losing battle.
My recent experience is that the use-by date has gotten shorter and shorter, and architectures have become more and more disposable. That being said, my view is colored by experience in mobile frontend and CRUD endpoint work, so this might be less true as you move deeper into the stack.
TheOneTrueTrench@reddit
This is something vitally important that LLMs and junior devs just don't seem to understand.
Sure, you got this feature working, but you're painting us into a corner. We have no flexibility, we can't adapt.
It's the same difference between unnormalized and normalized database schemas.
You have address, city, state, and ZIP fields in your user record? Cool, very cool... what happens when you need separate billing and shipping addresses? Just duplicate them? What happens when you need two shipping addresses because the user spends 6 months in Florida? What about...
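Sketched out (a hypothetical schema, just to show the difference):

```typescript
// Unnormalized: address fields baked into the user record.
// Fine right up until the requirements change.
interface UserV1 {
  id: string;
  name: string;
  street: string;
  city: string;
  state: string;
  zip: string;
}

// Normalized: addresses live in their own table, keyed back to the user.
// Billing vs. shipping, or a second seasonal address, is just another row.
interface Address {
  id: string;
  userId: string;
  kind: "billing" | "shipping";
  street: string;
  city: string;
  state: string;
  zip: string;
}

interface UserV2 {
  id: string;
  name: string;
}
```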
LLMs are advanced cargo-cult programmers: they "know" to do things, but they can't understand why on a purely abstract basis. They can't foresee the usefulness of an abstracted interface when you just ask for an HttpClient that rate-limits on FQDN, if they can even manage to shit out some halfway-usable code. They tend to prattle on and on, both in English and in your programming language of choice.
Sure, the code works for this, but why did it do it this way? Because that's the way it's seen it done before. That's the only reason.
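And to be concrete about the FQDN example, the abstracted interface I'd want looks roughly like this sketch (naive on purpose, and it assumes a runtime with a global fetch):

```typescript
// The abstraction: anything that can perform a GET. Callers depend on this,
// so the rate-limited version is a drop-in replacement for a plain client.
interface HttpGetter {
  get(url: string): Promise<Response>;
}

// Serializes requests per hostname with a minimum gap between them. A real
// implementation would want a token bucket and jitter; this shows the seam.
class PerHostRateLimitedClient implements HttpGetter {
  // Per-host chain of pending work; each new request queues behind it.
  private chains = new Map<string, Promise<unknown>>();

  constructor(private minGapMs: number) {}

  get(url: string): Promise<Response> {
    const host = new URL(url).hostname;
    const prev = this.chains.get(host) ?? Promise.resolve();
    const result = prev
      .then(() => new Promise<void>((r) => setTimeout(r, this.minGapMs)))
      .then(() => fetch(url));
    this.chains.set(host, result.catch(() => {}));
    return result;
  }
}
```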
tblaziken@reddit
My concern as well. If, aside from the computer and the developer, the code needs to be readable by an AI agent so it can write and 'debug', then certainly the architecture and design must conform to assist the agent. We are seeing thousand-line-long files again in the AI era, which is a clear sign people are throwing old standards in the trash bin. I see a similar pattern from the HTML5 days, when companies wrote a single jQuery file per web app with no standards, and then React was introduced to save the day and then shat the bed again a few years later.
Mission_Cook_3401@reddit
Sounds like increased job security for senior devs
galwayygal@reddit
I’m a bit confused. What’s keeping you from installing Cursor, opening your codebase, and asking it a question? It’s actually super easy to do, and having toddlers shouldn’t keep you from trying out AI tooling. Having said that, I agree that juniors using AI tools can potentially write bad code. Why do you say “I have a tough time articulating why the code is bad”? It sounds like the code is longer than it needs to be and isn’t coherent with the architecture. If you don’t have time to review, you can even quickly ask GitHub Copilot or ChatGPT to review what best practices it violates. What I find AI to be good at is small chunks of code. When you ask a specific question about a line of code, it can give more definitive and correct answers. It’s not too late to incorporate AI into your way of writing code. When you do, and when you start participating in AI-related discussions, you can definitely do a better job than the juniors. Don’t give up without trying.
unstableHarmony@reddit
One of the things I was taught as a junior developer was that it's best for software to be written as if it originated from a single author. Writing this way allows developers new to the code base to quickly gain a sense of what patterns are in use and how things should be structured beyond what a static code analyzer can discern.
This is important when issues come up because understanding how to read the code base makes it easier to track down issues. Dissonant sections can slow down the troubleshooting process and make it more difficult to discern a resolution strategy.
You need to have a discussion with the other senior members of the team about this. Are the others okay with the new code being introduced and how it changes the readability of the repo? Maybe look through prior pull requests as a group and decide what changes are red flags that need to be refactored.
Something else to consider is to begin keeping track of the cognitive complexity of the repos. If the company is paying for AI, it should also be paying for a static code analyzer that can calculate this. While this won't solve the code voice issue I described, it should show everyone where there are a lot of decision points in a function and begin a discussion about how AI-generated code can affect this.
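For example, if the codebase is JS/TS (an assumption about your stack), ESLint can already enforce a complexity budget; cognitive complexity comes from the eslint-plugin-sonarjs package:

```js
// eslint.config.js -- warn when a function's complexity creeps past a budget.
import sonarjs from "eslint-plugin-sonarjs";

export default [
  {
    plugins: { sonarjs },
    rules: {
      // Core ESLint rule: cyclomatic complexity.
      complexity: ["warn", { max: 10 }],
      // Plugin rule: cognitive complexity, closer to "how hard is this to read".
      "sonarjs/cognitive-complexity": ["warn", 15],
    },
  },
];
```

The thresholds are a team decision; the point is making the trend visible in CI rather than arguing about it PR by PR.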
Suspicious-Line-5126@reddit
Ageism in tech is real and evil, and this time it is empowered by AI.
FietsOndernemer@reddit
Seems like the gatekeeper has been worked out of the gate.
In the past, saying “this code isn’t good; I can’t explain why, it just isn’t” worked well for you. That strategy has obviously stopped working.
You could learn to articulate better why the code submitted doesn’t adhere to your standards. If you can’t do that, re-evaluate your standards. I, for one, learned a long time ago that fewer lines of code isn’t always better or more maintainable. Often, it only strokes your “look how smart I am” bone.
Stop gatekeeping, start working as a team. Or be stubborn and find yourself worked out of the team soon.
fragglerock@reddit
PR rejected until these are fixed. ez
Disastrous_North_279@reddit
I’m going to give you a bit of a harsher perspective because I struggle with this myself and I’ve recently had to learn this lesson:
If there are no bugs and it passes the style guidelines, perhaps it’s not bad code. Perhaps it’s just code you don’t like.
If length is the problem, update your style guidelines with length checks. And articulate why that matters.
You have to remember this is a business. If more people are shipping production-ready code, and the only thing you can legitimately criticize is its length, this is a net good for the business. You need to reenvision your role. You aren’t the arbiter of what pretty code gets into the codebase. You’re the orchestrator of a whole team that has leveled up rapidly.
If there are actual problems and I’m wrong, then it’s your job to articulate them, train your coworkers, and let them generate better code. Sounds like they’re doing a great job and you should catch up.
Jmc_da_boss@reddit
Code that is 4 times longer than it needs to be for vital systems is not production ready, that's 4 times the lines to maintain.
Do what's best for the long term health of your projects
nicolas_06@reddit
Still, it's strange that OP can't find a way to describe why the code is bad...
Adverpol@reddit
It's been my experience with AI code as well: it often looks OK at first glance, just overly long and overly complex. I'm also not in the habit of using just that as a reason for rejecting a PR, but if you want to give concrete feedback ("use x or y instead"), you're not only solving the original problem instead of the dev making the PR, you also have to wade through all of the code, understand it, and point out why it's no good.
There is no way to do that with how fast these PRs get created. So without buy-in from leadership that this is bad in the long term, you have to start letting them pass. We'll see 6 months from now how these scenarios pan out: whether the companies flourish with superb velocity and feature-packed apps, or whether the codebase is an unworkable bug-ridden hellhole and velocity has dropped off a cliff.
MathmoKiwi@reddit
Maybe OP needs to use AI to reject the PRs ;-)
MoreRopePlease@reddit
"single responsibility principle"
Maybe OP needs to learn some of the jargon associated with good architecture? Can you articulate the responsibility in one or two sentences without saying "and" or "except for..."?
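A toy sketch of failing vs. passing that test (hypothetical endpoints, just for illustration):

```typescript
// Fails the one-sentence test: "it fetches the report AND formats it AND
// emails it." Three reasons to change, tangled into one function.
async function handleReport(id: string): Promise<void> {
  const data = await fetch(`/reports/${id}`).then((r) => r.json());
  const html = `<h1>${data.title}</h1><p>${data.body}</p>`;
  await fetch("/send-email", { method: "POST", body: html });
}

// Each piece now has one describable responsibility and can be tested
// or swapped on its own.
const fetchReport = (id: string) =>
  fetch(`/reports/${id}`).then((r) => r.json());
const renderReport = (d: { title: string; body: string }) =>
  `<h1>${d.title}</h1><p>${d.body}</p>`;
const sendEmail = (html: string) =>
  fetch("/send-email", { method: "POST", body: html });
```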
Successful_Creme1823@reddit
But the ai maintains it so who cares? I’m not sure if this is a sarcastic comment or not.
Jmc_da_boss@reddit
The LLM does not maintain it, the LLM will continue building tech debt over and over until it collapses under its own weight.
Successful_Creme1823@reddit
What if you tell it to refactor it to meet the standards? Is that a thing? Tell it to refactor it to be more terse.
Tell it to look at the code it has created and dry it up? Is that a thing?
I’m behind on all this
cstopher89@reddit
You can, but by the time you've prompted it down to that level, you could have written it yourself long ago. As far as keeping a consistent architecture goes, it's highly dependent on the size of the codebase and the context window of the model you are using. In a medium-size codebase it tends to mess up quite a lot and hallucinates frequently. The best use I've found so far is just bouncing ideas off of it. For anything non-trivial it isn't faster, and often it's much slower. I have the context in mind already; the work required to transfer that to the LLM, where it may or may not hallucinate, tends to be more work than it saves. Maybe I'm just doing something wrong, but I don't get how it's that amazing. I will say that for trivial stuff it is faster, and it's faster for things I have no experience with, for whatever good that is lol
To actually utilize it well, it seems you'd need a multi-repo codebase so it doesn't need to understand everything all at once. Legacy codebases make up a lot of the work going on, so I don't find it super helpful day to day.
YMMV
MoreRopePlease@reddit
This reminds me of the process of writing specs for offshore contractors.
Jmc_da_boss@reddit
I mean, sure, you can do that; that's part of the review. You either keep prompting over and over, or get fed up and do it by hand because that's way, way faster.
The end result HAS to be correct, mergeable code. How it gets there should be whatever process is fastest.
Successful_Creme1823@reddit
Ok so the tool isn’t just reading your codebase and coming up with PRs yet?
ZorbaTHut@reddit
It'll totally do that, but you still need a human to look at it and say "ah, this needs to be improved". The problem here is that you can easily end up in a situation where the juniors are saying "go make a PR! yeah whatever, good enough, I don't care" and then all the burden of making the code good lands on the senior's lap.
The important cultural change is to make it clear that pull requests are the responsibility of the person making them, and the code should be good before it's sent.
studio_bob@reddit
This is a lesson I think a lot of people are going to learn the hard way. OP says this code is full of unnecessary work and overlapping functions. That certainly makes it hard for a human to maintain, but it also creates a lot of meaningless context that can push AI off track when you want to make future changes, whether it be to add features or fix problems.
And how will the LLM respond to that situation? By producing even more verbose nonsense until it again "passes the tests" until eventually, one day, it simply can't get it there no matter how many tokens it spits out. Then what?
lab-gone-wrong@reddit
This is abdicating your job. It is your job to articulate at least an example of what's bad and enforce standards.
Like any review, it isn't necessarily your job to point out every single issue. But if an issue appears repeatedly, you should give an example of how to improve it once, then link back to that example any time it appears in the future.
If AI takes your job, it will be because you stood aside and let standards collapse, rather than because AI was better.
gburdell@reddit (OP)
Like I said, the code passes style guidelines and it’s bug-free; it just tends to create too many overlapping functions and do extra unnecessary work. It’s not “bad” code. The main problem is that the features are not among our priorities, yet it feels like I’m gatekeeping in a bad way if I let a PR sit because it’s being done ahead of other priorities, even though reviewing it takes my time away from critical tasks. My manager is really pushing people to use AI.
MathmoKiwi@reddit
It's simple, just tell them:
DRY
_GoldenRule@reddit
This is sort of how AI-generated code goes. It tends to repeat itself a lot (I've seen this from experience using these tools in prod).
I think the problem is that they're creating PRs for the first thing that works, rather than prompting the AI to clean up the code (or cleaning it up manually). I don't think you should reject PRs for AI usage, but it's totally fine to give feedback that the code should be cleaner or less verbose. I think over time the AI users will learn how to refine the AI-generated code and you'll be in a better place.
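The repetition usually looks something like this (invented names, but representative of what these tools emit):

```typescript
// What the agent tends to produce: near-identical functions per case.
function getActiveUsers(users: { status: string }[]) {
  return users.filter((u) => u.status === "active");
}
function getSuspendedUsers(users: { status: string }[]) {
  return users.filter((u) => u.status === "suspended");
}

// The DRY review comment: one parameterized function covers every status.
function getUsersByStatus(users: { status: string }[], status: string) {
  return users.filter((u) => u.status === status);
}
```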
pandafriend42@reddit
Isn't it already bad code when it does unnecessary work?
Can't you just say, "The code does stuff in way A, but way B would be better, and we should strive toward way B, because otherwise in the long run it will lead to a multiplication of required work, time, and money"?
If you're publicly traded, say, "Profits will sink and we will lose market value if we keep on doing this."
Maybe also make a list of bad practices which can be found in the code. Bonus points if you can find ways to quantify the potential loss.
The problem with AI is that it works through semantic patterns, not content, which leads to the "it looks good, works (sometimes), but isn't quite there yet" type of code.
On a higher level cognitive debt is also a major problem. Overreliance on AI will lead to less skilled employees.
AchillesDev@reddit
Yes it is.
If you're a senior+ or, even worse, a tech lead, get better at articulating your arguments.
There's your articulable reason.
studio_bob@reddit
Is this not a contradiction? Creating a rat's nest of redundancy and nonsense still seems like "bad code" even it technically passes tests. If a junior submitted code like this before AI, would you have felt compelled to accept it in the same way? Why isn't the codebase already full of similarly messy, ugly, but technically functional code? What has really changed?
nicolas_06@reddit
Doesn't seem to be an AI problem then.
vivalapants@reddit
This really sounds like its going to blow up on someone
TalesfromCryptKeeper@reddit
Precisely this, and it's part of the reason for the 'adapt or you'll be left behind' rhetoric circulating around. It's manipulation meant to raise anxiety so people just accept these tools blindly.
UsualNoise9@reddit
A good craftsman doesn't blame his tools. Being adaptable is key in this industry - I remember when git came out and people were complaining about how "bad" it was while mailing zip diffs back and forth.
TalesfromCryptKeeper@reddit
I'm in the AEC industry. Obviously when CAD first came out there were a lot of draughtsmen who were against it, but adapted. Then after CAD came parametric software, 'smart' technologies.
The modern-day problem with CAD is that there is an illusion that it is far more efficient - it is! But at the end of the day you're committing resources to complete a set of tasks that still take a defined amount of time, even if the process of getting from A to B within a task is streamlined. Things like reviews, sign-offs, permits... etc. So in the end you have directorship saying "well, since [insert tool here] improves efficiency, that means you can take on more work." In the case of AI, the same directorship says we can remove certain roles from the organization because AI makes them redundant. But wait: that work is then put on the shoulders of the remaining resources, because it still needs oversight and review, and overallocation becomes a huge problem.
All that is to say, I agree with you that a good craftsman doesn't blame his tools for a poor job, but this isn't exactly the same situation. Being adaptable is fine. Leadership forcing you to spend time fixing someone else's handiwork (hammering screws into drywall) on top of your own job, plus the job the screwdriver guy lost to the guy with the hammer - it's exhausting.
binaryfireball@reddit
A good craftsman uses the right tools for the job.
SolarNachoes@reddit
Many of those craftsmen are still creating drawings with little to no built-in intelligence.
Now we are trying to use AI to process the drawings after which we can apply intelligence.
UsualNoise9@reddit
Oh, I agree with you 100% - AI may improve coding efficiency in very specific scenarios. But even if it did improve efficiency, at least most of what I do day to day is not coding (sadly).
SnakeSeer@reddit
Tbh I live for the days that Claude or whomever can go and hunt down what the hell the business is on about opening a defect that just says "the year-end snizzlenick value is 20 and it should be 22!" with no other details, where snizzlenick isn't close to the name of any field in your system, and it must be fixed because upper management has "taken an interest"...
SpiderHack@reddit
IBM kernel devs were still doing this in the late 2010s when a buddy of mine got hired in.
Disastrous_North_279@reddit
And if they can’t articulate why it’s bad - perhaps they need to rethink if it is bad.
Maybe it’s just code someone else wrote and you wish you had time to have written it yourself.
Smart-Emu5581@reddit
AI researcher here. Start with this to fix several problems at once:
Ask an LLM to review the code the juniors submitted and point out code smells. You can also tell it that you think something feels off; if you do, it will try to validate your intuition and look harder. This has several benefits:
- You learn AI use. There is a chance the AI will actually say the PR is fine, but my experience is that it tries to please the user: if you ask it to review anything for mistakes, it will always find something. The question is how critical the findings are. Learning how to ask the right questions is a critical skill.
- The juniors get feedback on their code and learn the same lesson. There is a good chance they were vibe coding and not reviewing anything (because they are juniors), so this could be a wake-up call. They ask the AI and it says it's fine. You ask the same AI and it says there are issues. Seems paradoxical, but it's actually working as intended.
- Management will learn that you are also using AI, and if you frame it right, it will look like you are better at it than the juniors: your reviewing AI is pointing out mistakes in their stuff, just like real reviewers point out mistakes in normal code.
You can literally just paste your reddit post into Claude and ask it to help you articulate what's going on, and it will tell you a good way to articulate things. For example, I just copy-pasted your post and this response of mine into Claude, and it gave me a concrete list of code smells that LLMs often produce and why they are bad. Just ask a reviewing LLM to look for instances of those in the submitted code and suggest rewrites.
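If you want to automate the loop, it's only a few lines against the API. A sketch (the model name is a placeholder, and the prompt is just a starting point):

```typescript
// Minimal review bot: send a diff to the model and return its findings.
// Assumes ANTHROPIC_API_KEY is set in the environment.
async function reviewDiff(diff: string): Promise<string> {
  const res = await fetch("https://api.anthropic.com/v1/messages", {
    method: "POST",
    headers: {
      "x-api-key": process.env.ANTHROPIC_API_KEY!,
      "anthropic-version": "2023-06-01",
      "content-type": "application/json",
    },
    body: JSON.stringify({
      model: "claude-sonnet-4-5", // placeholder: use whatever your org runs
      max_tokens: 2000,
      messages: [
        {
          role: "user",
          content:
            "Review this diff as a skeptical senior engineer. List code " +
            "smells (duplication, unnecessary abstraction, architectural " +
            "drift) with concrete line references:\n\n" + diff,
        },
      ],
    }),
  });
  const data = await res.json();
  return data.content[0].text;
}
```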
MoreRespectForQA@reddit
>vaguely threatening that people who don’t will be “unemployable”
They're threatening you because they're salivating over the prospect of laying half of you off.
Jmc_da_boss@reddit
My brother in Christ it is your PRIMARY JOB to point this out. To yell about it from the rooftops and ensure the projects under your expertise remain clean and maintainable.
Anyone can shit out random code, Claude or no Claude it's not that hard.
The skill comes from knowing when NOT to write a lot of code. So nut up, put your foot down, and ensure your juniors submit and ultimately merge code that passes standards. If you are not good at articulating these problems in a digestible and coherent way to stakeholders, then frankly you are not a senior-level dev, as that is literally the primary function of technical leadership.
MoreRespectForQA@reddit
If you're not being listened to you have discharged your responsibility just by pointing it out. Your primary job isn't to fight to get people to listen if they're not inclined to.
pwnasaurus11@reddit
1000% this.
prisencotech@reddit
How long has it been?
MoreRopePlease@reddit
I've been reading books about software architecture, unit testing, functional programming, and learning React and CSS in a more systematic way. Most of my knowledge in these areas has been haphazard and on-the-job, so there are gaps in my understanding. I think this makes me more effective at using AI.
MoreRopePlease@reddit
Ironically, it would be helpful to talk to the AI about this and let it help you clarify your thoughts and reasons.
Hixie@reddit
As a senior dev, your job should include empowering junior devs. Having parts of the codebase that only the anointed can maintain is dangerous long-term.
That said, if someone submits bad code ("4x longer than it needs to be" and "isn’t coherent with the architecture" are both "bad") then, regardless of how the code was submitted, it shouldn't be landed.
You handle this exactly how you would handle a junior dev writing the same code without an AI helping them. Because at the end of the day, the AI is just a tool.
gburdell@reddit (OP)
I just want to clarify that we now have front-end devs and the like submitting PRs to back-end infra. We could and did let back-end junior devs work on our infra.
Hixie@reddit
I don't really buy into the "front-end dev"/"back-end dev" dichotomy. There's just devs. Some are more experienced at one thing than another, but you don't foster growth by gate-keeping who gets to work where. (And front-end is no easier than back-end. They're very different skill sets, and both are difficult to do well.)
You do, however, need to enforce standards everywhere. If a "front-end dev" wants to write back-end code (or vice-versa), with an AI or otherwise, they need to do a good job. This might involve getting a mentor to spend some time with them helping them learn how to do it well, it might involve them getting training, it might require that they go get experience somewhere else first, whatever. But just because they're using AI doesn't mean they get to check in the code without review.
LTKokoro@reddit
I buy the difference between frontend and backend devs, because working on frontend is a vastly different thing from working on backend. Backend is mostly about cold logic and efficiency, while frontend requires some artistry and a feeling for beauty. Also, a lot of concepts from JS just don’t translate into mainstream backend languages, and vice versa. Of course fullstack devs are a real thing, but I fully support people who want to specialize in a single stack instead of having broad but shallow knowledge of multiple stacks.
Hixie@reddit
I agree that there's different skillsets. That's true even among devs who specialize in frontend -- a developer who specializes in dynamic web apps has a different skillset than one who specializes in creating cross-platform UI frameworks, who has a different skillset than one who specializes in raw Win32 MDI apps, for example. I'm just saying that these are "merely" skillsets, and gatekeeping by saying that we don't accept "front end devs and the like submitting PRs to back end infra" is fundamentally a bad policy. A good lead would encourage curious frontend devs to examine backend code and submit PRs if for no other reason than to foster growth in their eng team.
A frontend-focused dev is going to do a better job if they understand why a backend needs exponential backoff during failures. A backend-focused dev is going to do a better job if they understand the inherent race condition involved in a stateless paged query API. The way you get these skills is cross-pollination.
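(For any frontend-focused dev reading along: exponential backoff is small enough to sketch right here, with jitter so every client doesn't retry in lockstep.)

```typescript
// Retry an async operation with exponential backoff plus full jitter,
// so a struggling backend isn't hammered by synchronized retries.
async function withBackoff<T>(op: () => Promise<T>, maxRetries = 5): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await op();
    } catch (err) {
      if (attempt >= maxRetries) throw err;
      // Cap the exponential delay, then randomize within it.
      const delay = Math.random() * Math.min(1000 * 2 ** attempt, 30_000);
      await new Promise((r) => setTimeout(r, delay));
    }
  }
}
```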
LTKokoro@reddit
100% agree. As long as the lead encourages cross-pollination rather than expecting/demanding it.
AchillesDev@reddit
You and whoever else you have on your side need to put your foot down on this.
LogicRaven_@reddit
Thinking about the junior-senior relationship as a hierarchy is possibly one of the reasons that led to this situation, and you might want to reevaluate.
Keeping the juniors outside of the critical infra was a mistake. How could they learn new skills like databases if they never touch them? This situation can be perceived as the seniors gatekeeping things in this team.
This also meant that a group of juniors decided on the new way of working, and naturally it has gaps, because there was no senior to advocate for the importance of architecture.
Your experience is useful in an AI world as well, but you need to be able to apply it and become part of the AI change.
Talk with your team about the risks of not adhering to the architecture and what that will lead to in practice. You could work together with the juniors on changing the AI setup so the generated code fits the architectural intention. Agree on what issues PR reviews must catch.
Dobata988@reddit
You’re not falling behind, you’re carrying deeper responsibilities while the system rewards quick wins.
AI can generate functional code, but not sustainable architecture. Your role isn’t to match output, it’s to ensure coherence, scalability, and long-term stability. That’s not replaceable.
Lean into what AI and juniors can’t do, like critical thinking, systems design, and strategic oversight.
SolarNachoes@reddit
You can leverage AI to do code reviews.
Sevii@reddit
You are overestimating how hard it is to get on top of AI coding agents. Steal one of your teammates' Claude.md files and try to do some stuff with Claude Code. Have Claude review their infra PRs and suggest ways to make the code shorter.
Crafty_Independence@reddit
You don't just have 2 toddlers at home - you're working for one too. This is a grossly toxic job, and if you can get out I'd recommend it
skg1979@reddit
Just get Claude to do the PR reviews and feed the corrective work back to Claude and repeat.
Ok-Car-2916@reddit
I feel really bad about the way the less skilled are forced to skip over learning and not do things by hand. If I start relying on AI for a particular type of code, I've noticed that after a few months I have generally turned completely stupid, am incapable of writing it from scratch, and am slowly getting worse at fixing bugs. Putting myself in the less skilled workers' shoes, I would feel like disposable trash, feel that the company wasn't interested in helping me grow my skills, and be doing anything I could to find a better environment for advancement.
It seems like that is what management is going for. It's going to blow up in their face, no doubt. I've been mentoring new engineers for about 20 years now, and none of the work environment you described is normal. It's far more toxic than anywhere I've worked, including where I work now, which is the number one AI arms dealer (and surprisingly less likely to force AI usage, because management understands the underlying technology).
This AI stuff is a serious poison pill (at least in the many current use cases where it is being pushed beyond its limits or applied to work that has instructive value).
DeterminedQuokka@reddit
I think there is a core misconception here. If you were only important because you were hoarding knowledge, something was already wrong. There is literally nothing at my job that I haven't shown at least one engineer how to do. I'm not valuable because I'm the only person who knows how to do something. I'm valuable because I'm fast, have good instincts, and can learn anything even if I haven't done it before.
If all the knowledge you had that you weren't sharing can be provided by AI, then it also could have been found with a Google search. The value you should be bringing to the table, if they are learning new things with AI, is the same value you should have been bringing before: teaching them the rules and how to do things correctly.
I would practice explaining why the code is bad. Because the answer shouldn't be "it adds debt"; it should be "it adds debt that causes X". Here is an example from a doc I wrote about using AI to generate tests under specific circumstances:
"The tests are written quite poorly particularly the ones that are related to the code around the DB models. The tests are extremely heavily mocked to the point that they are testing the implementation of the code and not it's effects. You could easily break the code without breaking any tests, and you could make a change that does not break the code and break 10-20 tests. Presence of tests is likely to make developers over confident that they have not broken the code when in fact the tests would not be able to tell if they had."
Also, I know it's really hard to get it to fly, but "I can't read this code, so I can't tell if it has a security issue" isn't tech debt. It's a security vulnerability and should be presented as such.
Ok-Car-2916@reddit
I know it's a cliché on Reddit and in tech in general, but I work for probably the biggest arms dealer in the whole AI bubble and have never been treated by leadership at any level of the company in the way you described.
The forced AI usage (I'm assuming the tasks are worth doing by hand for learning purposes, or because the AI sucks at them; I'm not against automation in general, but that doesn't sound like what is going on for you) is bad enough, and it will damage the potential of the team and every person being forced down that path. The threats take it to the next level. And what you are doing to the less skilled teammates is tragic and abusive.
I'd have walked by now, but understandably that isn't always a great idea, and you have yourself to look out for. I'm not sure of your experience level in the industry
false79@reddit
You can have agents tuned to how you review a PR, just like you have system prompts to generate code.
As more and more unwanted instances occur, a pattern emerges that you can fold into the prompt used for reviewing code.