Questions about unit tests
Posted by ngugeneral@reddit | ExperiencedDevs | 58 comments
At every company I have worked for before, unit test coverage was either optional (startups) or backed by a solid QA department, so I never had to maintain tests myself. This has left a gap in my professional knowledge.
Recently I joined a small team where I am given a lot of freedom (kind of a lead position), so for the next quarter I am planning to put our test coverage in order.
Question #1: what is the purpose/advantage of test coverage? From what I understand: compatibility of new features with existing ones, as well as early detection of new bugs. What else am I missing?
Question #2: in my case there is no existing coverage, so I am looking into tools for scaffolding tests. The stack is .NET, so the first thing I looked into was test generation with Visual Studio Enterprise (or the JetBrains equivalent). The last time I did that was about 8 years ago and the quality of the generated tests was questionable (which is expected, and one can't avoid "polishing"). How are things now? I have a feeling that AI tools could apply here perfectly; are there any you can recommend?
dogo_fren@reddit
Tests are written to do the following in order of importance:
Most devs can write mostly correct code without tests, but avoiding breaking something you forgot to think about is very hard.
Don't use mocking frameworks, at least in the beginning if possible; they make for very fragile white-box tests.
SorryButterfly4207@reddit
I agree with your 3 bullet points, but the second is actually the most important: if the code you just wrote is wrong, you're starting in a broken place.
Most devs can't write "mostly correct code" without tests. They can write mostly correct code for the "happy path," but the "unhappy paths" are many, and it just isn't in human nature to think about them. I find that it is only when I sit down and deliberately try to break my code (via more test cases) that I recognize that my code won't work if the input is null, or an empty map, or if a value is negative, etc.
allllusernamestaken@reddit
I would expect someone with enough experience to lead a team to have developed a personal philosophy about unit tests, not questioning why they exist.
When I have a couple of beers and talk to junior engineers I tell them that good unit tests are an expression of product requirements in code form. Your product manager will have some list of requirements for you; at the highest level, your unit test suite should assert that these requirements are met. The closer your unit test matches the verbiage of the requirements doc, the better.
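For instance, a minimal sketch of what that can look like in xUnit (the free-shipping requirement and all names here are invented for illustration):

```csharp
// Hypothetical requirement: "orders of $100 or more ship for free".
// The test name and assertion mirror that sentence almost word for word.
using Xunit;

public class Order
{
    public decimal Total { get; init; }
    public decimal ShippingCost => Total >= 100m ? 0m : 9.99m;
}

public class ShippingRequirementTests
{
    [Fact]
    public void Orders_Of_100_Dollars_Or_More_Ship_For_Free()
    {
        var order = new Order { Total = 100m };

        Assert.Equal(0m, order.ShippingCost);
    }
}
```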
rlbond86@reddit
I disagree; unit tests should be more granular than product requirements. I've certainly never had a PM write a requirement that an algorithm merging time intervals should return a list sorted by lower bound.
allllusernamestaken@reddit
rlbond86@reddit
You are describing an end to end test, not a unit test
mprevot@reddit
One critical cause of failure for a project is its management cost. Something can work but still carry technical debt; it can be difficult to maintain, update, and upgrade. Then you discover hidden costs, which may lead to the end of the project (or even of the company).
Each unit test is a proof (and, in TDD, a requirement; ultimately both) that something works. When tests pass, one feels confident that the covered code is testable, maintainable, updatable, and refactorable. Ideally you want to test continuously while you write your code.
Time to get the answer "this passes" can be critical too. Say you have a problem discovered after deployment: you want to be reactive and update, test, and deploy ASAP. This is about business, reactivity, and the image of the project (in the eyes of the customers, but also of the team).
You want to black-box test the logic and limits of your functions, not the code itself (that would be white-box testing). Ask yourself why.
You may prefer TDD, and to separate testers from coders, with the former challenging the latter. Ask yourself why.
Ultimately it's about business, maintenance costs, "agility", upgradability, success, sustainability.
ngugeneral@reddit (OP)
I appreciate a competent answer. Thank you
mprevot@reddit
I appreciate your appreciation.
ottieisbluenow@reddit
Learn how to use AI to generate tests. It's one of the things it kicks ass at.
ngugeneral@reddit (OP)
Would you mind elaborating on that?
mprevot@reddit
It was a troll.
ngugeneral@reddit (OP)
Honestly, I prefer this type of troll to the ones yapping "you're supposed to know that instead of asking".
mprevot@reddit
I understand that someone can be surprised, but I prefer to think that it's always healthy to ask. It's a good sign. It does not necessarily mean that one does not know or lacks an opinion. And if you do not yet have a point of view, that's OK; we all have our own trajectory and develop our own favorite skills.
I think we will always find more dogmatic people somewhere. In the end, it's them, not you.
DWebOscar@reddit
When the practice does not exist, it's not always easy to jump right into the concept of testable code.
Write your unit tests and code so that when something breaks, you can "lift the hood, pull out the spark plug" and check if that specific part works as expected.
You shouldn't need to start the car and drive around (integration and e2e) to check the spark plug.
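To stretch the analogy into C# (all names here are invented): the "spark plug" is a small class behind its own interface, so you can bench-test it without booting the rest of the application.

```csharp
using Xunit;

public interface ISparkPlug
{
    bool Fires(double voltage);
}

public class SparkPlug : ISparkPlug
{
    // Fires once the coil delivers enough voltage.
    public bool Fires(double voltage) => voltage >= 12.0;
}

public class SparkPlugTests
{
    [Fact]
    public void Fires_When_Voltage_Is_Sufficient()
    {
        ISparkPlug plug = new SparkPlug();

        // No car, no engine, no test drive required.
        Assert.True(plug.Fires(voltage: 14.0));
    }
}
```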
mprevot@reddit
+1 for "Coverage is a vanity metric."
birdparty44@reddit
Tests are useful in many ways! First of all, test coverage % is, for me, just a silly metric that makes managers and people who look at dashboards happy (or sad). Not to say it's without merit, but coverage for coverage's sake is bureaucracy.
Tests are there to help you gain confidence in your code. It is easy to have confidence with one maintainer and few tests but that might drop when there are many contributors, not all necessarily knowing the entire codebase or architectural conventions of your project.
The real benefit of tests is to ensure, in teams larger than one (or even of one!), that when somebody implements a new feature and a test breaks, they've just avoided a bug making it into the product, and/or might need to rethink their solution or get a deeper understanding of how things work. You can look at tests as a way to specify how something is expected to work, if you write them in a way that demonstrates that.
I should point out I mean “tests”. The pedants here will go into what’s a unit test, what’s an integration test, what’s a UI test, an end to end test, and so on. The bigger picture is code that tests code so you have confidence that your systems are robust.
And of course while writing more easily testable code you end up writing WAY less spaghetti and get a better sense for interfaces between components and better encapsulation / separation of concerns.
bravopapa99@reddit
Ask yourself this question, "What did we break today?".
brunoreis93@reddit
Unit tests are your friend
Goingone@reddit
Write tests that will save you time later.
My favorite part of tests is that they allow you to update code later without the fear of having broken something related. If you find yourself constantly re-testing related things after making updates, you probably need a test.
Kaimito1@reddit
100% this.
I can go faster and change things confidently when I have tests for other things (or even the current thing, if refactoring).
If I accidentally break something, my tests alert me.
If I didn't have them I'd go much slower and carry the mental load of making sure every niche case is handled.
yoggolian@reddit
Breaking related things is pretty much expected - automated tests excel in stopping people from breaking (supposedly) unrelated things.
Goingone@reddit
True, but I specifically said related because I didn’t want someone coming back and saying, “breaking unrelated changes is a sign of poor code design, that’s a bigger issue….etc”.
ausmomo@reddit
New code can certainly break things, you just know about it earlier, and with ease.
serial_crusher@reddit
The big benefit you get from having more test coverage is preventing regressions. New feature comes in and you make changes to support it, accidentally breaking some niche requirement from 3 years ago that you forgot about. If, when you had introduced that niche requirement, you had also added a test that checks it, you'll now get immediate feedback reminding you that it's there.
I wouldn't recommend generating tests. That'll verify that the code continues to work the way it works now; not that it continues to work the way it's supposed to work.
What my team found useful was diff-cover. Basically for any PR, 100% of lines you touch must have coverage. So when you're adding new code, you're writing new tests. When you're working on some old legacy code with no coverage, you're going to have to start by adding coverage to the best of your ability. Over time, you end up increasing coverage more and more and more. We still have old legacy code that has no coverage, but it works well enough that we haven't had to touch it, so it just kinda sits where it is.
It's hard up-front, because some tasks will take significantly longer than others depending on the state of existing coverage in that code, but the team agreed coverage was a priority so we were willing to make the investment. Plus over time you find ways to minimize impact and get a little smarter about the amount of tests you need to add.
metaphorm@reddit
the most important thing unit tests accomplish is building confidence that the code is working as designed for the set of inputs it's been tested on.
the value in this is that when you change the code later, if a test breaks, you know you have an unintended defect/regression and will need to fix that before merging or else you'll have a brand new production bug.
unit tests help you make changes to code over time without causing downstream breakage due to your changes. that's really what they're for in most code bases. there are other benefits as well but that's the most important one. for any code that has users/customers interacting with it daily, this is ridiculously important. software quickly becomes unusable due to accumulation of bugs/regressions/bitrot. unit tests slow that down and give you the confidence that your system does still work.
pydry@reddit
I actually think test coverage is actively harmful. None of the behaviors which the numbers drive are good behaviors.
Unit tests are better for testing calculation/decision making code. Integration tests (including e2e) are better for testing code that integrates.
I wouldn't write a unit test to check that a Kafka message is sent when a user clicks a button, and I wouldn't use an integration test to test a pricing model.
pavilionaire2022@reddit
IMO, unit tests, and in particular, TDD, make the initial development faster or at least as fast. There is nearly no debugging. You write a few lines of code and immediately test it. If it doesn't work, you know exactly where the bug is.
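A sketch of that loop in C#/xUnit (the pricing example and names are invented): the test is written first, fails, and then just enough code is written to make it pass.

```csharp
using Xunit;

public class PricingTests
{
    [Fact]
    public void Members_Get_A_Ten_Percent_Discount()
    {
        // This test exists before ApplyDiscount does; the first run
        // fails, which is the "red" step of red-green-refactor.
        Assert.Equal(90m, Pricing.ApplyDiscount(100m, isMember: true));
    }
}

public static class Pricing
{
    // Just enough implementation to go "green".
    public static decimal ApplyDiscount(decimal price, bool isMember)
        => isMember ? price * 0.9m : price;
}
```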
Test coverage makes it safe and easy to make small improvements. If you want to make some code more readable, remove duplication, or handle a corner case, the effort and risk of manual testing would otherwise make it not worth it.
I would say don't bother trying to get to 100% coverage right away. Rather, cover all new code written. Soon, you will be at 30% or 50% or 70% coverage but covering 90% of the code you touch most often. It's not that important to cover code you don't touch. Your users have already tested it.
ngugeneral@reddit (OP)
I have also started to lean toward not targeting a specific % but rather covering everything new as we go.
At this point my biggest concern is the devs who are not used to writing tests, and me making their lives harder. I also need to sell it somehow to the product owner, but that's me thinking out loud.
Thanks!
StolenStutz@reddit
A few things I've learned over the years...
Arrange / Act / Assert - Follow this simple pattern. I do unit testing of stored procedures in T-SQL, using very crude, bare-bones scripts that follow this pattern. I love people's reaction when they see them. It's usually something like, "You unit test SQL? Huh, I never thought of that. That's a good idea!" So, no matter the language, no matter what libraries you might or might not have, just stick to that pattern.
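For OP's .NET stack, a minimal xUnit illustration of the same pattern (all names here are invented):

```csharp
using System.Collections.Generic;
using Xunit;

public class StackTests
{
    [Fact]
    public void Pop_Returns_Most_Recently_Pushed_Item()
    {
        // Arrange: build the object under test and its inputs.
        var stack = new Stack<int>();
        stack.Push(1);
        stack.Push(2);

        // Act: perform exactly one operation.
        var result = stack.Pop();

        // Assert: verify the observable outcome.
        Assert.Equal(2, result);
    }
}
```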
In new code, always include at least one unit test of each piece of functionality, even if you don't think you really need testing there. This ensures that you're writing testable code. I've refactored things in the past because I realized I goofed and wrote something that wasn't easily wrapped in a unit test. This pays off later, when you find yourself piling on more tests in that one suddenly-critical code path that needs the scrutiny.
Test-Driven Development (TDD) is most useful in situations of heavy legacy code and tech debt. Back to T-SQL, my first assignment at one place was to make a surgical adjustment to a typical 20yo 2,000-line behemoth stored procedure that no one wanted to touch. I spent two days writing the first unit test, a couple of hours writing the second one, and in short order had about 8 or 9 tests that suited the situation. And THEN I went and made the fix.
I would think this goes without saying, but always start your work by making sure all of the unit tests pass first. At that same place, when I got a new ticket, I'd clone the repo, build the solution, and find that half the tests were failing. So I'd take the time right then to fix them BEFORE beginning my work.
radiant_acquiescence@reddit
Could you share more about your approach to unit testing SQL stored procedures? It sounds very useful.
StolenStutz@reddit
I take a repo-first approach to databases. Everything after the CREATE DATABASE is in the repo. Everything is deployed via a PowerShell script. The unit testing is the last step in that (for lower environments).
The scripts themselves just follow a pattern. I SET XACT_ABORT ON and BEGIN TRAN in the Arrange step (if needed). The Act step is almost always just EXEC the_sproc. And then any tests in the Assert step that fail just THROW an error. And then ROLLBACK TRAN. And that's it.
So if I'm a dev on this project, I clone the repo, blow away my local database and create it again, and run the PS script. I then have a good clean dev environment, with passed unit tests, in which to begin my work.
morswinb@reddit
Lead position, with 8+ years, and no experience writing tests?
Can you name the company so we know to avoid it?
dogo_fren@reddit
Give them a break, most software companies suck at software development.
80eightydegrees@reddit
Wait, QA are writing the unit tests for the Devs work? I’ve never heard of that before
Automated integration and E2E testing, but unit tests?
ngugeneral@reddit (OP)
I phrased that incompletely: QA writes automated tests, the product relies on those during release, and that is what makes unit tests optional.
80eightydegrees@reddit
Ah gotcha, sorry was kind of intrigued by this scenario of QA writing unit tests
context_switch@reddit
I've heard of this before in other cases. It sort of ends up like you'd expect... not well.
davvblack@reddit
it's a real role, usually called "SDET": Software Development Engineer in Test
riplikash@reddit
While they both have the word "test", unit tests and QA tests actually serve two ENTIRELY different purposes. One does not remove the need for the other.
And unit tests INCREASE velocity. There's no beneficial trade-off to skipping unit tests.
Past_Reading7705@reddit
I couldn't trust merging anything without tests; I always find something with them.
Odd-Investigator-870@reddit
BLUF: Learn TDD. 99% test coverage is a side effect, not the main benefit. Focus on the public interfaces/APIs of your code package, and on showing your future selves how to use the code as designed. Dig into it more when you struggle with TDD; it reveals a lot of skill caps at the same time. You'll very likely need to practice outside of work, or to create a Code Dojo as a team (i.e. learn the skill together, using non-work code problems).
jgengr@reddit
I always thought the trickiest part was getting test coverage on failure and error paths.
ummaycoc@reddit
There are a whole host of reasons for why to test.
There are more but those are some of the common reasons people bring up.
davvblack@reddit
my hot take wrt unit testing is that tests that mock the data layer are usually a waste of time, unless you have some small tricky thing you want to test. But for most parts of most apps it's not worth it.
I find full integration testing by far the most useful, it verifies the API behavior from the customer's perspective, which is the aspect that defines "working" or "broken". Any test that's too close to a single unit also makes the structure of units unrefactorable: you can refactor one individual unit easily, but changing any internal interface invalidates a ton of tests.
ngugeneral@reddit (OP)
That's a very valid take
UnrulyLunch@reddit
It's too easy to confuse test coverage with test quality. Yes, I think you need to have good coverage. But you also need to over-cover those code paths with tricky logic or numerous control variables. Those kinds of tests won't show in the coverage numbers but are crucial for locking in the functionality.
My personal philosophy on unit tests is one of enlightened self interest: They protect me later when I come back to the code after a while to make a change.
LordSavage2021@reddit
In addition to the other benefits already mentioned, unit tests can improve the code you're testing.
Step 1: Get a test coverage plug-in for Visual Studio, like Fine Code Coverage or similar. (FCC can be fussy; there may be better ones out there. I haven't looked lately.)
Step 2: Run the report to find all the classes/methods that have poor or no coverage.
Step 3: Write tests for the things that are easy to test.
Step 4: Refactor the things that are hard to test: break down big methods, split up classes that are doing too much, create interfaces, use dependency injection, use a mocking framework (Moq is popular), create "fakes", etc. (see the sketch after this list).
Step 5: Repeat Steps 2-4 until you feel like you're testing stuff that really doesn't need to be tested, or you reach a decent level of code coverage (80% maybe).
Step 6: Congratulations! In addition to having a comprehensive test suite, you now have code that's easier to understand and maintain!
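A sketch of what Step 4 can look like (the clock example and all names are invented; Moq is the mocking framework named above): code that read DateTime.Now directly is hard to test, so the dependency is pulled behind an interface, injected, and faked in the test.

```csharp
using System;
using Moq;
using Xunit;

public interface IClock
{
    DateTime UtcNow { get; }
}

public class GreetingService
{
    private readonly IClock _clock;

    // The clock is injected, so tests can control "now".
    public GreetingService(IClock clock) => _clock = clock;

    public string Greet() =>
        _clock.UtcNow.Hour < 12 ? "Good morning" : "Good afternoon";
}

public class GreetingServiceTests
{
    [Fact]
    public void Greets_Good_Morning_Before_Noon()
    {
        var clock = new Mock<IClock>();
        clock.Setup(c => c.UtcNow).Returns(new DateTime(2024, 1, 1, 9, 0, 0));

        var service = new GreetingService(clock.Object);

        Assert.Equal("Good morning", service.Greet());
    }
}
```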
ngugeneral@reddit (OP)
Step 6 - my exact point in the update!
In my opinion it is a big advantage. But I fear getting rage from colleagues :)
Dimencia@reddit
Test coverage is there so you don't have to understand the entire project every time you make a change - you can rely on the idea that if you broke something by accident, the tests will tell you. They're usually integrated into CI/CD so you literally can't complete a PR if the tests are failing, preventing bugs entirely, not just tracking them. I don't personally think test coverage % is particularly important, it's up to you and your team to figure out which parts are important to test, not an arbitrary percentage
As for AI generated tests, usually I wouldn't recommend them because they test the code, not the business concept - they'll write tests that specifically pass or fail given the code that exists right now, with some hardcoded data that doesn't represent the real world, but if some implementation details change in the future, the tests will probably have to be updated (which is bad, of course). And if the method under test already has bugs in it, the AI generated tests might just enforce that those bugs have to occur. But for adding tests to an existing project, it might be viable, just beware what they generate. I'm not aware of any tools to do that automatically, but I'm sure you'll find some
The difficult part of most testing is really setting up test data that mimics a real environment, especially when there are complex relationships involved, rather than setting up data that specifically passes for the current implementation but might fail if implementation details change in the future. That might be something you should do by hand.
But you also usually want to avoid using the same test data for more than one test - you'll usually end up adjusting it to fix or set up one test and breaking a dozen others. Look into AutoFixture or similar, though it can be a pain to use with DB entities and doesn't allow incremental customization - so far the best approach I've found is to build your own special fixtures (based on AutoFixture) that can use a Context's Model to do some auto setup and also allow incremental customization (so you can set up some reasonable default values and then let each test customize further as needed).
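A minimal AutoFixture sketch of that idea (the Customer type is invented): the fixture auto-populates everything, and each test overrides only the values it actually cares about.

```csharp
using AutoFixture;
using Xunit;

public class Customer
{
    public string Name { get; set; } = "";
    public int LoyaltyPoints { get; set; }
}

public class LoyaltyTests
{
    [Fact]
    public void High_Points_Customers_Count_As_Loyal()
    {
        var fixture = new Fixture();

        // Everything is auto-populated except the one value under test.
        var customer = fixture.Build<Customer>()
                              .With(c => c.LoyaltyPoints, 1000)
                              .Create();

        Assert.True(customer.LoyaltyPoints >= 500);
    }
}
```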
Also beware of in-memory databases with EF Core, because while they let you give each test its own unique database to avoid interfering with other tests, they don't enforce a lot of restrictions that a real DB does, so things that pass tests might fail in a real environment. But if you try to test without them, you'll have an even worse time with that test data when all of the tests share the same database and data, and can run in arbitrary order, and most of them probably mutate the data when they run - and though you can set up ordered tests, that tends to make things even more complicated and hard to trace.
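The unique-database-per-test setup looks roughly like this (AppDbContext and Order are invented names; the caveat above about skipped relational constraints still applies):

```csharp
using System;
using Microsoft.EntityFrameworkCore;
using Xunit;

public class Order
{
    public int Id { get; set; }
    public decimal Total { get; set; }
}

public class AppDbContext : DbContext
{
    public AppDbContext(DbContextOptions<AppDbContext> options) : base(options) { }

    public DbSet<Order> Orders => Set<Order>();
}

public class OrderPersistenceTests
{
    [Fact]
    public void Saved_Orders_Can_Be_Read_Back()
    {
        var options = new DbContextOptionsBuilder<AppDbContext>()
            .UseInMemoryDatabase(Guid.NewGuid().ToString()) // unique DB per test
            .Options;

        using var context = new AppDbContext(options);
        context.Orders.Add(new Order { Total = 42m });
        context.SaveChanges();

        Assert.Single(context.Orders);
    }
}
```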
I've spent an annoying amount of time trying to find a good reliable approach to making unit tests independent and also give them good data, without having to build an entire company from scratch in each one, and still have yet to come up with a clean solution that doesn't overcomplicate everything... maybe you'll have better luck
ngugeneral@reddit (OP)
Just beautiful, thank you!
not_napoleon@reddit
I like to think of it as "Unit tests show that the code does what I think it does; Integration tests show that the code does what the rest of the team thinks it does; Functional tests show that the code does what the users expect it to do"
PowerOwn2783@reddit
"what is the purpose/advantage of test coverage?"
Coverage as in what percentage of your codebase is tested?
I like to think about testing this way. The most important tests are E2E/smoke/PDV (whatever you want to call them). Essentially, you want to make sure your app actually works in a real prod-esque environment before deploying (duh). VR (visual regression) tests also belong here for the FE.
Next down the line are integration tests. These introduce more granularity: instead of testing everything all together, you test specific workflows, which means that if a test fails it's often easier to isolate the issue.
Next are unit tests, which follow the same logic as integration tests. A failure in a unit test most often leads to rapid identification of a bug, because you know exactly where it is.
This is roughly how tests complement each other.
When it comes to coverage, it's easier to quantify coverage for unit tests (just count the files), but harder to quantify for higher-level testing. So whilst it is obviously important, I would start with the higher-order tests first, if you haven't got them already, then work your way down to integration, then unit tests.
MatMathQc@reddit
Server: unit tests for the API; so easy with AI now.
Front-end: Storybook snapshots for UI; TypeScript will catch more than most unit tests would. Any function that is complex needs to be unit tested; components are covered by Storybook (fast).
E2E for critical paths.
josephjnk@reddit
I cannot imagine working without unit tests. They ensure that bugs are detected early. When a prod bug happens, a unit test should be written for it before it's fixed, ensuring that it doesn't come back. Tests give you the freedom to refactor, because you can have confidence that you're not breaking the code's behavior when changing its structure. They speed up development by letting you test and re-test your code in seconds instead of spending minutes doing manual testing. They save effort because devs can run each other's tests repeatedly instead of manually re-verifying behavior that someone else had to manually verify a week or a month ago. They give you the ability to do CI/CD, by giving you confidence that code is working without lengthy QA cycles. And they encourage good design by pushing code towards modularity.
PmanAce@reddit
We use xUnit and write tests religiously. PRs don't get approved without unit tests and some functional tests if needed. Some microservices have over 1000 unit tests. It's great for covering yourself on changes and confidence. Tests take much less time to write when you are used to them.
alien3d@reddit
No idea about unit tests. We do integration tests with random dummy data on a test database; if any error occurs, we'll see it in the log. We don't focus on coverage, but on having as few errors as possible.
PothosEchoNiner@reddit
If someone makes changes that break the thing you’re working on, how would they know? How do you know if the changes you are working on are subtly breaking an existing behavior of the system? The unit tests protect the code so that it does what it’s supposed to do.