How do people get developers to write tests without a strict code coverage requirement?
Posted by martiangirlie@reddit | ExperiencedDevs | 86 comments
At previous positions, I’ve always seen test writing enforced by meeting a percentage code coverage amount. The issue with that is that people will just write bad tests to get around the coverage requirement.
And we can’t rely on code reviews for people to enforce it either because… well we all know that relying on code reviews just falls to lowest-common-denominator in terms of quality.
Things I’ve considered:
- Add a comment on a PR through the CI that runs on PR creation, if a .ts file has been changed with no related .spec file change (see the sketch after this list)
- Add a comment on the PR through the CI if the coverage percentage has dropped, but don’t fail the build
- Include a checkbox in the PR template stating you added any tests needed
- Empower reviewer to reject a PR if no tests attached
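For the first option, this is roughly what I had in mind; a minimal sketch only, assuming a Node-based CI runner, specs co-located as `*.spec.ts`, and a `BASE_REF` variable provided by the pipeline (all assumptions about the setup):

```ts
// check-spec-files.ts: rough sketch, not production-ready.
import { execSync } from "node:child_process";

const base = process.env.BASE_REF ?? "origin/main"; // assumed to be set by the pipeline

// Files changed on this PR branch relative to the base branch.
const changed = execSync(`git diff --name-only ${base}...HEAD`, { encoding: "utf8" })
  .split("\n")
  .filter(Boolean);

const sources = changed.filter(
  (f) => f.endsWith(".ts") && !f.endsWith(".spec.ts") && !f.endsWith(".d.ts"),
);
const specs = new Set(changed.filter((f) => f.endsWith(".spec.ts")));

// Flag any changed source file whose sibling spec wasn't touched in the same PR.
const missing = sources.filter((f) => !specs.has(f.replace(/\.ts$/, ".spec.ts")));

if (missing.length > 0) {
  // Print a warning the pipeline can post as a PR comment; exit 0 so the build still passes.
  console.log(`Changed without a matching .spec.ts change:\n${missing.join("\n")}`);
}
```

The idea is the pipeline turns that output into a PR comment rather than failing the build.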
The thing is, all of these options can just be circumvented by a guy who doesn’t feel like doing his job that day, and I don’t want a select few people to have to be responsible for reviewing everything because they’re the only ones that care.
So I’m trying to find something that can be automated and enforced, but isn’t a hard limit on code coverage requirement.
And yes, I know that all of this is coming from a symptom that people should just agree on standards and do their jobs, but, especially in a corporate environment, you can’t expect that of people.
WoodenStatus6830@reddit
For me the easiest thing is to ask: "Where's the test for it?" If there are no tests, then ask them how they will test it and how they will prove it: write it all down, attach screenshots, etc. In the end they will realize that having an automated test for it is the easiest way; once the test is there, they don't need to do anything else!
hyrumwhite@reddit
Enforcing tests is just going to get you shitty tests. Half of them basically just test the framework they’re a part of anyway.
“Does react update this element when a prop changes”
Code review. Request a change when a test would be appropriate. In my experience this is usually best for complex logic and mapping.
Miserable-Bid1245@reddit
Struggles with test compliance could be a sign that devs aren't understanding how test coverage can make their job easier. That could be the result of a lot of things. Some examples:
- tests aren't really testing anything, so they never catch bugs
- tests are too fragile, so they fail on every change and end up doubling feature work (this could also be a sign of bad abstractions)
- dev work doesn't involve a lot of iterating over existing code, so no one's ever actually developing against the test suite
As others have said, TDD would definitely surface stuff like this, but in my experience trying to do TDD when you're not yet skilled at writing effective tests will be very difficult.
I think bug fixes are a good place to practice both writing tests and using tests: write a failing test that reflects intended behavior, fix bug, test should pass. At first devs will probably not actually trust the test to reflect that the bug is fixed, but hopefully over time they will notice that it's way easier to re-run a test suite instead of checking everything manually every step of the way. Also, you can probably use existing scaffolding for bug tracking (steps to reproduce + expected behavior) as guidance for how to write cases. This small-scoped TDD also makes enforcing via PR review easier, since the check is simpler for you:
- do you have a passing test that reflects expected behavior
- does the test fail against original code
You could write a script that checks this for you and set a standard that you won't actually read the code to review until the check passes.
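To make that concrete, the artifact being reviewed is usually just a small test written straight from the ticket. A minimal sketch, assuming Jest or Vitest and a made-up `parsePrice` helper from a hypothetical bug report:

```ts
// Hypothetical example: the ticket says parsePrice("1,000.50") returns NaN.
// Step 1 is writing the failing test straight from the ticket's expected behaviour.
import { parsePrice } from "./parsePrice"; // made-up module under test

describe("parsePrice", () => {
  it("handles thousands separators (regression from the bug report)", () => {
    // Fails against the original code, passes once the fix lands.
    expect(parsePrice("1,000.50")).toBe(1000.5);
  });
});
```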
This doesn't solve the test coverage problem for the full app, but if you're able to up-skill the team on writing and trusting tests before you introduce the goal of full test coverage, you'll probably have an easier time getting people on board.
metaphorm@reddit
PR review is the best lever. don't merge PRs that have inadequate testing.
MoreRespectForQA@reddit
The only really reliable method I've found is to try to pair with people and TDD by example.
If I review a PR I usually can't tell the difference between a code change that has tests and a code change that has all edge cases covered.
metaphorm@reddit
TDD is sometimes the right methodology but I wouldn't insist on it. it's well suited for the situation where you have very nailed down requirements and spec already, so you're writing the spec as the tests, and then backing into the spec with implementation code.
but you don't always have that when you start on the implementation. it's not inherently bad to implement first and write tests later. the real thing here is not shipping untested code, because those tests enforce against regressions from future changes.
MoreRespectForQA@reddit
No. When the spec isnt nailed down enough to write any kind of test it is a mistake to start coding.
Several times when that has happened I have had the requirement changed beyond recognition or canceled entirely meaning I had to toss away the work.
When there are parts of the spec which still need to be confirmed but there is enough to write a test that is ok.
yeah but you know what? so is programming.
no, async review is something different. async review means being given a "finished" piece of work and if there are fundamental problems with it AGAIN you are faced with the possibility of having to toss it away. that is expensive.
metaphorm@reddit
draft PRs are a thing. so are POC implementations.
BadTime100@reddit
How are draft PRs or POC implementations a solution there?
martiangirlie@reddit (OP)
Right, these suggestions would all happen at the PR review stage with a CI build, but it’s hard to do that when you have a culture of “LGTM! 👍”
BigDickedAngel@reddit
The problem is the developer not caring about the quality of his work.
downshiftdata@reddit
A pattern I like, but is _wildly_ unpopular, is to have whoever tested/reviewed the work be the one that demos it in Sprint Review (not the author).
If I'm the author, I may not care as much about looking dumb in the Review (or maybe I can hand-wave over stuff), but I'm not going to have my teammate look stupid in front of everyone.
If I'm the reviewer, I'm not going to LGTM the PR. I'm going to really scrutinize it. I'm going to actually run the tests. Because I'm not going to look stupid in front of everyone thanks to someone else's work.
As others have said, you can't change culture with tools. So here's a culture change that I guarantee will fix a lot of problems.
metaphorm@reddit
culture problems don't usually have technical solutions. eng managers should be working to enforce culture at the level of human interaction.
afops@reddit
I think technical solutions can help "social problems" like telling your colleague (basically) that their work isn't good enough. If a bot highlights it, you can lean on that.
DandyPandy@reddit
“enforce culture” - I don’t think culture can be mandated. It can be encouraged, but if you want it to stick, you have to get buy-in from the collective. It doesn’t take a manager to do that. Anyone with sufficient respect in a team can drive that forward. It helps to get management buy-in to reinforce the initiative, but cultural shifts take time and persistence.
edgmnt_net@reddit
It helps to get management to actually let you enforce it or support it some other way. Just to give a limit case: you and a handful of people become mandatory reviewers for a while. Or maybe they hire people who have more experience with stricter review criteria. Sure, it's hard, but they did let it fester and get to that point. It didn't just happen for no reason.
I do agree that you can lead by example and improve things even without a manager, as this kind of stuff makes everyone's life easier after a while, and if you have reasonable people around you then something might stick.
martiangirlie@reddit (OP)
That’s a good point. I’ll definitely talk with my manager about the different ways they might be able to encourage that, and keep in sync with the other managers on different teams.
DandyPandy@reddit
Have there been any incidents where insufficient or poorly implemented tests were identified as one of the root causes? That would be your justification for why the business should care about quality tests. The manager is (or should be) interested in optimizing the performance of the team for generating the highest business value. Tests are academically a best practice, but it’s difficult to quantify their value until something fails that a test would have caught. That is what is going to drive top-down process changes from management.
NortySpock@reddit
I have gotten mileage out of finding something that looks like it probably breaks, writing a test for them, running it myself, and, if it fails, pasting most or all of the test block as a comment in the ticket.
"Hey, if we are in this mode and then blah happens, it crashes. (example code) This seems like a possibility if (dns is down, dataset returned nothing, whatever)
What are you doing to cover that case?"
Then you get a public comment of "come on, probably won't happen" (and if they say that, put it in writing in the PR) or "oh, yeah, let me catch that".
This forces them to respond to your claim not with "that's impossible" (because you demo it in the test) but with "that probably won't happen" or "I won't fix it", which is at least useful when it does happen in production and you can politely say "I did suggest that was a risk", at which point hopefully people start listening when you suggest "obvious" tests.
Don't overdo it. One test suggestion per PR is plenty to get everyone to silently hate you for calling out their lack of tests.
Clyde_Frag@reddit
Anyone who has aspirations of moving to Senior Engineer and beyond should not be doing this. Tie it into their career development.
Tainlorr@reddit
All the seniors on my team think unit tests are a waste of time lol
martiangirlie@reddit (OP)
I completely agree with your first point, but this is definitely a culture issue, because I’ve seen poor pull requests with 30 instances of `any` types that were approved by senior engineers.
Also, the funny thing about AI tools is that when we’ve tried using them, they write shitty tests.
jakeStacktrace@reddit
Typing in TypeScript isn't really relevant to whether you have good quality tests. You mentioned in your post that everybody agrees code review just falls to the lowest common denominator, so it sounds like you have already given up on the idea of code reviews enforcing better tests. I would not give up; I would be the change I wanted to see, write good tests, and flag bad tests in CR.
Clyde_Frag@reddit
The AI tools definitely do not remove the need to think for yourself which is a misconception that many seem to have.
CodelinesNL@reddit
There are no real technical solutions to severe culture issues like these. It sounds like management is giving people the wrong incentives.
magical_midget@reddit
You can’t change culture with tools.
As an IC it’s hard to change culture overnight, but you can lead by example and bring it up every retrospective: “Hey, this bug happened, maybe we should add more testing on these critical paths to avoid these issues slipping by”.
Be a squeaky wheel on the team, sometimes that works.
If you can convince your manager (or are a lead, or have some pull) you can enforce it more. Quickly skim PRs and if there are no tests, block them. That does mean you are the gatekeeper, but you may be able to get others on board to help.
Jiuholar@reddit
We had this issue (amongst others) at a previous job. We fixed it by adding a PR template with a checklist (ideally automatically prepopulated via config on whatever platform you use) and enforcing a minimum of 2 approvals.
Worked better than you might think at first glance. The 2nd approver acted as a "review of the reviewer" that created some social pressure for the first reviewer to uphold the agreed standards in their comments.
If I had the same problem now I'd look into automated AI reviews that include a check for tests.
What often seems to happen in scenarios like this is passivity or ambivalence towards writing tests rather than an active dislike or outright refusal. Creating some friction for that behaviour (i.e. to skip tests they now have to actively lie by ticking the box) seems to nudge people onto the right path, without making their job actively harder or more frustrating.
azuredrg@reddit
Make them fix the bugs they introduce
instilledbee@reddit
Our org is serious about code ownership. In our monorepo we have markers for which team owns which directory, so it's relatively easy to track who fixes which bugs once it's confirmed their part of the code caused them.
That said, enforcing code ownership is a solid motivator for code coverage/unit tests. It gives you a reason to maintain high code quality and low bug counts, if it's easy to trace who to blame.
shared_ptr@reddit
This is the most sensible and sustainable path I am aware of. Aiming for 100% test coverage isn’t necessarily even the right goal; you want the least amount of test coverage that achieves your reliability aims, as too many tests can slow down working with a codebase.
If the team are focused on the experience of their customers and use that as the north star then it should be a stabilising influence on how they write their code, both around testing and other things (load testing, reliability drills, etc)
Better to use a target that directly corresponds to the outcome you want (customer satisfaction) rather than something like 100% test coverage which can easily become counterproductive.
Sheldor5@reddit
you want to hold me accountable for my own mistakes?
are you insane?
Mornar@reddit
Ridiculous.
You're accountable for your mistakes, your boss's mistakes, and your boss's boss's mistakes.
I better not see this irresponsibility and not being a team player again.
Sheldor5@reddit
and because you are responsible for everybody's mistakes of course you get the lowest salary of all of them
abandonplanetearth@reddit
But now I have to track bugs and follow up on them. And as a tech lead, bugs from anyone are a bad look for me.
Why not be proactive instead of reactive?
azuredrg@reddit
I actually like the PR review comment below; I would support that one the most and would love to be in an organization where I feel like I don't have to just LGTM rubber-stamp PRs. It would need buy-in from management and an overall organization-wide culture shift from the top down. The problem I had previously as a lead was that we never really had backup to push back on PRs without tests. The dev can always claim they don't have time to deliver by the deadline with tests and defer testing to the manual QA staff.
SellGameRent@reddit
more points for me
azuredrg@reddit
Oh yeah, I do that scam too. I just yeet out some bugs just to get feedback and extra points. Sometimes if it's too polished, I don't get good feedback
gyroda@reddit
I only found a feature was actually in use recently because it broke and someone complained.
I genuinely thought it wasn't actually looked at by anyone.
Strong-Evening1137@reddit
"And we can’t rely on code reviews for people to enforce it either because… well we all know that relying on code reviews just falls to lowest-common-denominator in terms of quality."
Rule with fear, don't enforce quality? Don't get to review and around pull requests. It's not that complicated
Weasel_Town@reddit
In addition to what everyone else has said, I have found that a lot of people aren't actually good at writing unit tests. They're not comfortable with the frameworks. They're confused about when or why you mock things. They don't know what scenarios they should test. They're also defensive about it, and would rather argue that "testing isn't important" or "you can write tests that don't actually prove anything" than buckle down and learn.
I've seen it repeatedly when I have management buy-in that we will be adding tests. A week of struggle to produce a chaotic mess that really only proves that the testing framework works. And then they argue that "it takes too long" and "doesn't prove anything anyway". That is true of anything, when you are terrible at it!
So maybe plan on some time to get everyone up to speed.
nasanu@reddit
Out of the thousands of tests I have seen in FE I have only seen a few that actually test for anything meaningful. It's almost always "some text".isInTheDocument(). Useless. Gets promotions though.
sherdogger@reddit
The only way I've seen it work is when you have a core pattern set where every pull request includes tests and it is abundantly obvious that everything is tested, always, just by looking at the copious amounts of robust tests. Then you feel like an idiot and a pariah if you try to be a cheeser and submit code without tests.
And with AI, tests are often so formulaic that you can have it generate them, or ask your devs to add them, and that should take some of the pain and grunt-work barrier out of it.
account22222221@reddit
You reject PRs
hyay@reddit
Idk, in my company we have no choice but to quickly generate unit tests with ai at the end of our sprint. There is zero time to put the large amount of effort into unit tests for so much code delivered every cycle. Yeah I know it’s fake quality but we are a tps report driven company and the appearance of quality is what we are forced to deliver.
Recent_Science4709@reddit
Code coverage is fake quality. A rushed meaningful test that tests actual business logic is better than a bunch of crap tests that exist just to fit an arbitrary requirement
hyay@reddit
Yes but the reports look good to those above and that’s what they want.
drnullpointer@reddit
Fundamentally, the issue is that you are measuring something you don't care about.
Remember, the thing you measure becomes the target.
My director, stupidly, put together a dashboard that shows each developer's number of commits on any given day. You can figure out what is going on -- everybody is doing lots of commits whether they make any sense or not.
And the things that are really important, like how your contributions are impacting reliability, are obviously not measured because they are very hard to assess.
Sometimes it is better to not measure things and create "strategic ambiguity" than measure something you don't care about and create a clear optimization target.
mushgev@reddit
the pattern I've seen actually work isn't more automation — it's shifting reviewer accountability. automation handles the zero-effort cases (no spec file, coverage dropped 20%), but anything beyond that, people who know the system will find workarounds. the real question is whether reviewers feel personally responsible for what gets merged. teams with genuinely good test coverage usually have a norm where approving a PR means you're vouching for the tests, not just the feature code. that shift in culture does more than any CI check
Over-Veterinarian338@reddit
check out the comments section for more context
GoodishCoder@reddit
We use code quality tools to determine how much new code is covered by tests but ultimately your options are pretty limited if you don't want hard limits or to rely on PRs.
edgmnt_net@reddit
Reviewing works, it's just that you can't have your quality and not work for it. Successful open source projects typically have one or more maintainers who do strict reviewing, sometimes an entire hierarchy of maintainers responsible for particular areas and cross-reviewing stuff alongside the community. So at best we're talking about a compromise here.
You can set a bar, you can foster giving more attention to detail, you can argue for higher hiring standards, you can complain about areas with low code quality, you can lead by example, and so on. Slapping on a metric that's easy to game at best and counterproductive at worst (a lot of boilerplate just to write meaningless tests) won't really solve much. I'm not sure there's much that can be automated here, although you might be able to come up with a guide on what's worth testing and how; enforcement is a different matter.
ZukowskiHardware@reddit
Revert the PR.
jl2352@reddit
It’s partly cultural. You should block a PR if there are no tests, and in the culture that should be normal. It should also be normal to be hunting for bugs.
I’m not a fan of the automation you suggest. I’d instead recommend you just look at their changes, and think up test cases. Then see if their tests cover them.
If the tests are hard to understand, then tackle that problem instead.
For me, places where writing and maintaining tests was difficult always had shitty test coverage. Places where it was a breeze had great coverage. People are more willing to go along with a culture change if it’s low friction and easy to do. Honestly, making the tests as easy as possible to write should be at the top of the list. That’s not achievable if the tests rely on some bespoke library and knowledge only you have. Can you give the test to a stranger and have them understand it? If you can, it’s great.
Next someone needs to drive the change. A good colleague of mine put together a meeting to discuss our testing, and out of that came a new approach, which expanded into something really good. Another approach is to be that guy who is on people’s backs on PRs demanding tests.
Finally whatever happens, make sure you are writing tests. Otherwise the whole endeavour will be undermined.
vooglie@reddit
Code coverage checkers have gotten pretty good so I’m not sure why the hesitance to use them as one of the ways to ensure tests are written. If your product is something that can run integration / end to end testing in the pipeline, then even better. Beyond these two hard gates the rest is a culture issue.
Candid-Profession720@reddit
AGENTS.md
martiangirlie@reddit (OP)
lmao
frogic@reddit
I do think they're good for writing tests, but honestly, asking any cheap agent if the PR has (some standard here) test coverage is probably a good solution to your problem, as long as your engineering culture isn't strongly AI-skeptic.
dfltr@reddit
Seriously though. We’ve been dealing with non-deterministic, fickle, error-prone agents for decades. They’re called “software engineers” and they exist to stress test any given ruleset you try to create.
Tell the AI agent to write tests and run the linter and not commit code until both checks pass. It’ll do a better job than the majority of devs.
Distinct_Bad_6276@reddit
Better to set up pre-commit hooks to run the linter and unit tests. I notice agents often forget to run them even when explicitly told to.
boneytooth_thompkins@reddit
skills/increase-test-coverage.md
lawrencek1992@reddit
I reject PRs without test coverage.
leftsaidtim@reddit
Only hire people that write tests. Especially those that write tests first.
tevs__@reddit
You fire them, it's pretty easy. Not every process needs to be bulletproof or avoiding any personal responsibility.
MapLarge614@reddit
I am their lead, I just tell them. We all have the same education, so I just need to remind them from time to time.
PaulPhxAz@reddit
I used to make sub-tickets for Documentation and Tests off of Features. It's not automated, but it's described and documented.
veryspicypickle@reddit
Technical solutions to political or cultural problems never end well.
Ok-Armadillo-5634@reddit
AI writes all my tests now.
SoggyGrayDuck@reddit
I'm leaving a company like this and I'm terrified of following the new procedures correctly at my new job. We're migrating to a new front end system and that work is being handled by a consulting firm while we keep the lights on in the old one. They're getting the cloud environment setup and permissioned correctly and have decided to stop expanding the old on prem model BUT that means I'm trying to do everything in a sandbox. Making me the architect, dev lead, DBA and etc. I went to a larger company so I wouldn't have to wear so many hats.
I'm stepping out of that into a highly regulated industry with a 3 month contract to hire. I better be able to pull this off.
SnugglyCoderGuy@reddit
I try to encourage people to do TDD. Write tests first, get them reviewed to make sure they appropriately cover the new requirements and then they can go hog wild. I encourage this by doing it myself and asking the team for such review.
low_slearner@reddit
I'm afraid you're looking for a simple answer to a complex problem.
If you want to truly fix the issue, you really do need to fix the culture and get the majority of people invested in it. If you try to fix the issue with a workaround solution, people will find the easiest path to comply with that specific requirement but not the intent - just like you've seen with code coverage metrics.
sayqm@reddit
So coverage requirements don't enforce it either.
You just need to agree on it. If your team agrees on it and people don't follow, then you have a team issue
martiangirlie@reddit (OP)
Completely agree that it’s a team issue, but it’s really hard for one person to change the culture of a 15-team product team. I’m fighting an uphill battle with this, and getting everyone to shake hands and agree on it is not really feasible. Because they will shake hands, they just won’t do the work.
high_throughput@reddit
I strongly believe that culture comes from the top. If management does not back the effort to write good tests and evaluate employee performance based on it, then there's nothing you can do.
Definitely don't make the mistake of working harder and harder, putting more and more effort into test coverage, hoping people will finally start caring. Trying to force a culture change is a recipe for burnout.
(If management says "well, if you don't like the test coverage you can help be a force for change!", that's just their way of deflecting responsibility and avoiding saying "no". Don't believe them.)
Like most things in life, you should give 10% more than you get, but no more.
08148694@reddit
You just need the right incentives
Make it a performance KPI and watch the developers suddenly write great tests
You can easily write a quick script to scan PR and compute this metric for each developer
NeckBeard137@reddit
Part of the performance review
diablo1128@reddit
At places I've worked, testing was part of code reviews. So if you didn't write good tests, missed test cases, etc., then your code change would not be approved.
If people are not doing their job then they should be told that in 1-on-1s with their manager. If it's consistent then there should be ramifications, like poor yearly reviews or low raises, because they are not meeting the expectations of the company / team.
Though this means you need a manager that cares and will take the time to do all these things. If they don't care then people will do whatever they want.
Frankly any process you put in places can generally be circumvented if enough people don't care about following it.
If management is not going to do anything, then sometimes this is necessary. I worked at a company where, before any change, a "white paper" had to be created that detailed everything you planned on doing. Then you needed to have a meeting with certain people and present your change to make sure you had thought it through.
The problem was people were fixing symptoms and not root-cause issues, or just superficially adding new features without thinking things through, raising concerns that became pretty apparent as soon as you used the product.
People either started to fall in line or left for "better jobs", which was fine for the company because they were not doing what was expected anyways.
ugh_my_@reddit
Just don’t write tests
Routine_Internal_771@reddit
Add it to the PR template
Enforce testing on regressions
Mestyo@reddit
Anything that attempts to reason about code coverage is a huge red flag, imho. It misrepresents what good tests actually do, it encourages creating bloat tests, and it discourages writing small functions.
Encourage writing selective, high-quality tests for the parts that actually matter—code coverage be damned.
As for how to encourage it, I have found that a self-report checklist on PR templates works decently well. "I have added tests to relevant functions", "Tests and build pass locally", and so on.
Having to justify why you didn't write a test takes effort, and if you have to spend effort anyway, you might as well just write the test.
martiangirlie@reddit (OP)
Yeah, what defines a good test is a completely different beast. That’s why I don’t want to do coverage requirements because you get into shit tests.
I do like the idea of having to explain why you didn’t write tests, instead of it just being a checkbox.
lokaaarrr@reddit
Make it as easy as possible: good examples, frameworks, etc. Stay on top of friction points that develop.
Nudge/remind, automatically as you are
In the end, it’s only going to work if people value it and want it to happen. It’s a culture issue.
martiangirlie@reddit (OP)
Completely agree on it being a culture issue. But these are good pointers, thank you.
It’s funny because the culture issue showed really well when many of the PRs we had were explicitly breaking standards that we had set in our README and AGENTS file, but after I implemented a system-wide linter, everyone loved it and they say it helps reviews a lot.
So it’s like, you can respect the standards, you just didn’t want to.
lokaaarrr@reddit
You can try to build the culture. Have the most respected / Sr engineers give practical talks on how they fold testing into their dev workflow. Regularly highlight great new work (cool tests, framework improvements, etc), giving the people doing it praise and a chance to present their work and stand out. Have a quarterly “test day” where everyone goes after flaky tests, or modules with very little coverage. Generally increase the visibility, but not in a nagging “do more” style.
roger_ducky@reddit
Mutation testing in addition to unit test coverage.
Also: Enforce “all unit tests must read like documentation for the unit under test”
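To unpack the mutation-testing suggestion: a mutation tool (StrykerJS is the usual choice for TypeScript, typically run with something like `npx stryker run`) makes small changes to the code, for example flipping a `-` to a `+`, and reports any mutant that no test kills. A sketch of the kind of test it flags versus the kind it rewards, condensed into one hypothetical snippet:

```ts
// Hypothetical unit under test.
export function applyDiscount(price: number, percent: number): number {
  return price - price * (percent / 100);
}

// 100% line coverage, zero mutation score: a mutant that flips "-" to "+" still passes.
it("runs without blowing up", () => {
  applyDiscount(100, 10); // no assertion, so every mutant survives
});

// This one kills the mutant and reads like documentation for the unit under test.
it("applies a 10% discount to 100, giving 90", () => {
  expect(applyDiscount(100, 10)).toBe(90);
});
```

The mutation report makes the first kind of test visible in a way a coverage number never will.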
martiangirlie@reddit (OP)
I really like these, thanks!
dbxp@reddit
Set a low coverage requirement and then ratchet it up. To start with, you need to overcome the blocker of them not writing any tests at all; then you can gradually raise the coverage bar. I would also look to see if the issue is localised to specific members of the team.