How much of your testing is automated?
Posted by AdamBGraham@reddit | ExperiencedDevs | View on Reddit | 34 comments
I’ve been doing a ton of diving into the automated/code-driven testing tech and platforms recently: xUnit, Vitest, Playwright, Appium, etc. I’m loving the coverage and sense of security that can come from having all of your components tested on a regular basis and without as much manual intervention.
But, since I haven’t been on projects where this was possible/pushed by management before, I’m curious: how much of your testing is actually automated on your projects? How much testing is still done manually, which edge cases aren’t easy to capture and run via automation, etc.? Is it on average 80%? Or are we talking a big range of 30%-99%?
liquidpele@reddit
All of it. If I have to manually test then I’m automating it to accomplish that instead.
New_Firefighter1683@reddit
I'm surprised by the top comments here.... wtf?
I have never worked in a place where we don't have unit-tested code automatically run in our CI/CD pipelines. Changes that break the build will not get merged.
Code coverage on some legacy projects is bad... but all new projects need 80%+ coverage on unit tests.
We're a little more lax on end-to-end testing because we don't always have QA resources to help with those, since they take forever.
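In practice a gate like that is usually just a setting on the test runner or coverage tool, but here's a rough sketch of what it's doing, using Python's coverage API; the 80% figure is from above, while the script itself and the pipeline layout are only illustrative, not any specific setup.

```python
# Illustrative CI gate: run after the test suite has written a .coverage data file.
import sys
import coverage

cov = coverage.Coverage()
cov.load()                    # read the .coverage data produced by the test run
total = cov.report()          # prints the per-file table; returns total coverage as a float
if total < 80.0:              # the 80%+ gate mentioned above
    print(f"Coverage {total:.1f}% is below the 80% gate", file=sys.stderr)
    sys.exit(1)               # non-zero exit fails the pipeline step
```

Most tools expose the same check directly (coverage's own fail_under setting, for example), so the script is only there to show what the gate boils down to.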
Lopsided_Judge_5921@reddit
I regularly get 100% unit test coverage for my changes. I also write integration tests and end-to-end tests through the UI if my change requires it. But even then I will manually test my code, because you can never fully trust automated tests.
doberdevil@reddit
Why?
Lopsided_Judge_5921@reddit
It's because the tests run in a fixed context, but doing some manual testing can create a complicated context that might expose something you didn't anticipate in your unit tests
New_Firefighter1683@reddit
Then those mocks should be in your unit tests.
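To make that concrete, here's a minimal sketch of pinning one of those awkward contexts down with a mock so the unit test covers it. It uses Python's unittest.mock; the module, function, and gateway names are made up purely for illustration.

```python
import unittest
from unittest import mock

# Hypothetical code under test: charge_card() is supposed to retry once when the
# payment gateway times out.
from myapp.payments import charge_card   # assumed module/function, for illustration only

class ChargeCardTests(unittest.TestCase):
    def test_retries_once_on_gateway_timeout(self):
        # Recreate the "complicated context" deliberately: first call times out,
        # second call succeeds.
        with mock.patch("myapp.payments.gateway.charge") as fake_charge:
            fake_charge.side_effect = [TimeoutError("gateway timeout"), {"status": "ok"}]
            result = charge_card(card_token="tok_123", amount_cents=5000)
        self.assertEqual(result["status"], "ok")
        self.assertEqual(fake_charge.call_count, 2)

if __name__ == "__main__":
    unittest.main()
```

The point is that once manual testing has surfaced a surprising context, you can usually freeze it into a mock and keep it covered from then on.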
doberdevil@reddit
It's not the tests that "you can never fully trust" though.
Write better tests. If your tests miss problems, you need to have a more thorough understanding of your code and what tests to write. The tests aren't missing anything, you just didn't write a test for that context.
That being said, humans are the best way to find bugs and unexpected behavior. And then write tests for that behavior once it's corrected.
Honest_Use6360@reddit
Because even the best automated tests only check what you told them to check, not what you forgot. They won't catch missing logic, broken UX, or the fact that a button "feels wrong" unless you explicitly write for it. That's why a quick manual pass still matters: it sees the stuff automation can't predict.
hammertime84@reddit
It depends on what it is.
Something like a pip package, data pipeline, or app will have nearly 100% coverage on all major functionality with a lot of redundancy.
Something like a dashboard or Jupyter notebook will be mostly manual outside of some gross automated 'will it run' and so on.
SideburnsOfDoom@reddit
Pretty much all of it. And I wouldn't have it any other way. It's a good practice that opens the door to other good practices.
Though IMHO, test style and test design are as important, if not more so, than "how many tests do you have?"
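To make the design point concrete, here's a small illustrative sketch (the module, function, and helper names are hypothetical): two tests against the same function, both of which add to the test count, but only one of which will survive a refactor.

```python
from unittest import mock
from myshop.pricing import apply_discount   # hypothetical function, for illustration only

# Brittle style: the assertion pins an internal helper, so the test breaks on any
# refactor even when the observable behavior is unchanged.
def test_discount_uses_rounding_helper():
    with mock.patch("myshop.pricing._round_to_cents", return_value=90.0) as helper:
        apply_discount(total=100.0, code="SAVE10")
    helper.assert_called_once()

# Behavior-focused style: asserts only on the result the caller actually cares about.
def test_discount_reduces_total_by_ten_percent():
    assert apply_discount(total=100.0, code="SAVE10") == 90.0
```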
mlow23574356@reddit
I don’t have a good answer but I’ll give you my two cents.
It depends on the company. Some companies really lean into testing and DevOps. Others don't; they see testing as something that shows compliance. But in general, not all testing is automated, for a variety of reasons. One is cost, like setup cost. For instance, in the healthcare industry it may not be worthwhile to test against specific codes and business data that live in the database and can change quite a lot.
Other reasons include that the GUI wasn't written in a way that allows for easy testing, or that the setup process is too hard to automate compared to having a person do it manually. Or a certain test runs too long (this is common in resource-constrained companies).
Additionally, there is a type of testing called exploratory testing, which can only be done manually, as you basically ask the tester to break things and throw weird edge cases at it.
In an embedded environment, you are often constrained by the hardware you have. Not having the right hardware, or not having up-to-date hardware, can be an issue. Networking is a problem as well: if you engage in polling, you are likely to have problems, as you aren't able to truly isolate the environment when you control only one system in a web of systems. It's possible to fake one, but you still haven't fully tested it without integration.
There are plenty more dumb reasons, like a company not wanting to engage in any of this because what they have works.
I can’t tell you the exact ratio, just that you almost never have your tests work 100% of the time whether that be manual or automated.
KitchenDir3ctor@reddit
Firstly, it's not done "manually," but by a human, who uses test tools, which can also include automation (in testing).
Secondly, ET is not about breaking things and edge cases at all.
It's more like this: exploratory testing is simultaneous learning, test design, and test execution (Developsense.com). It is the opposite of scripted testing.
Or "E.T. is a style of testing in which you explore the software while simultaneously designing and executing tests, using feedback from the last test to inform the next. When I offered that definition to an XP programmer recently, he quipped, "It's Test Driven Testing!"" - Elisabeth Hendrickson
Or ""Learning" is really key when think about ET...it's starting to feel like a specialized subset of Exploratory Learning. Test design and execution are things I have the ability to experiment finding that apply ET thinking in other areas beyond testing" Jonathan Kohl
Or "My definition of testing is technical investigation of a product, on behalf of stakeholders, with the objective of exposing quality-related information of the kind they seek". This definition is inherently exploratory My core definition is "brain-engaged testing". My public definition is simultaneous learning, design and execution, with an emphasis on learning." Cem Kaner
KitchenDir3ctor@reddit
It is more a question of when you want human testing. As in when automation in testing doesn't give you the confidence you need.
Note that a human performing testing is always doing more than what an automated script/check would do.
For example: when risk is high, for new features, for changed features with big impact, when interacting with systems from other teams, or for high-value features.
Note that deciding what to automate, and where in the stack is also important.
So talking about what % is automated doesn't make much sense, as testing, the act itself, cannot be automated. Testing is learning. Automation doesn't learn. Automation helps gather information.
BoxingFan88@reddit
As much as possible within reason
ObsessiveAboutCats@reddit
We do a massive amount of manual regression testing. Significantly less than 10% is automated. Management has been saying they want more automation coverage and agrees it's really useful but they won't hire more QA people or give the existing QA people much time to write tests. It's infuriating to me and I am on the development side.
Finally the PM managed to get approval to have some of the devs help out with automation test writing. That is helping but there is so much to do.
FinestObligations@reddit
If you hand over the reins to non-engineer QAs to write all the tests, you will end up with a brittle test suite that takes ages to run. I've seen it time and time again.
AdamBGraham@reddit (OP)
I hear you there. I will say we had a dedicated QA for a long time, but we never got traction for automated regression tooling licenses. However, we recently had to let our dedicated QA go, and since devs have access to open source tools that do basically all of the necessary QA automation, I'm making the executive decision to implement our own now :) Funny how that works.
ActiveBarStool@reddit
You guys are still writing tests? I thought we were all just rawdogging this shit in 2025
nutrecht@reddit
I have a back-end focus, so: all of it.
We have separate companies that do security/pen testing.
No_Bowl_6218@reddit
Don't wait for your company to embrace testing. It's your job and responsibility as a software engineer.
chrisinmtown@reddit
When I worked on a project associated with the Linux Foundation, they required a minimum of 70% line coverage to be achieved by automated tests in JUnit. We struggled to get to that level at the time! I'd like to think I learned something there, and on my current project some of my Python components are covered over 90% by automated tests run under tox. Those tests are a great big safety net to save you from mistakes!
AdamBGraham@reddit (OP)
Awesome! Do you measure line coverage in a particular way?
chrisinmtown@reddit
Coverage here means line (statement) coverage as reported by the basic Python coverage tool as controlled by tox.
AdamBGraham@reddit (OP)
Gotcha. I know you could manually review your if statements, errors, conditionals, etc. and come up with a number. And I know some AI assistants can check your coverage. So I wasn't sure. Thanks!
Empanatacion@reddit
Most (all?) of the unit test suites will spit out a coverage report giving you percentages by line or class or method. There are also IDE integrations that will color code the lines of your source to show you what code did and didn't run when you ran your tests.
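For the Python case mentioned above, here's a tiny sketch of how those reports get produced with the coverage API (paths and options are just examples):

```python
import coverage

cov = coverage.Coverage()
cov.load()                             # data file from an earlier test run
cov.report(show_missing=True)          # terminal table: per-file line coverage plus missed line numbers
cov.html_report(directory="htmlcov")   # HTML view that color-codes covered vs. missed lines, like the IDE overlays
```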
doberdevil@reddit
Code coverage metrics are a gut check. Don't mistake it for a quality metric.
bigorangemachine@reddit
We're trying to build our automated offerings.
selemenesmilesuponme@reddit
We set up an automatic payroll for a manual tester. So yeah, very automatic.
smc128@reddit
Testing?
blablahblah@reddit
All of it, other than some manual verification that features meet requirements before we launch them. I run a web service; we own so many features and release so frequently that it would be unsustainable to manually test that all the old features still work on every new release. No change makes it into main without unit tests, and no feature gets enabled without integration tests. And if it sneaks in without a test, it's not getting tested, because we're not holding our releases back for someone to manually check stuff.
NeckBeard137@reddit
99%?
08148694@reddit
Each feature gets a manual test by everyone involved
Automated tests for each piece of the feature in each commit
So I guess as time increases, the percentage of the entire system covered by automated testing will approach 100%.
lordnacho666@reddit
I try to mock every significant piece. AI helps a lot with generating tests, so I've written a lot more since those tools became available.
forgottenHedgehog@reddit
We do exploratory testing on new features manually, and that's pretty much it. Every single regression test is automated.