Should QA test for failure cases (i.e. non-happy paths)?
Posted by dystopiadattopia@reddit | ExperiencedDevs | 40 comments
We have a QA tester who writes scripts for direct API calls instead of doing UI testing (we have another UI QA for that). I'm concerned that he may be only doing happy path requests. This is a concern because this is a legacy codebase where data in a request has historically not always been validated correctly, resulting in successes where there should be failures and vice versa.
We've been trying to plug these holes over the last couple years, but this sometimes still happens, and I was thinking we should tell the QA guy to add tests for failures by submitting invalid data, just to make sure expected behavior is happening.
I'm not a QA professional so I'm not sure if this is standard practice or not. Is this something that people do, or am I just trying to force QA to go beyond their scope?
False_Secret1108@reddit
I thought QA was dead. Why do you have people doing QA
SnugglyCoderGuy@reddit
They should be creating and testing as many cases as possible.
Fenix42@reddit
I have done data driven testing setups to hit 100% of the permutations. I was told I was paranoid and over testing.
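A minimal data-driven sketch of what that kind of setup looks like for a toy input space; `feature_enabled` and its flag/region inputs are hypothetical stand-ins for the code under test:

```python
import itertools

def feature_enabled(beta, region):
    """Hypothetical stand-in for the code under test."""
    return beta and region in {"us", "eu"}

# 2 flag values x 3 regions = 6 permutations; small enough to run them all.
results = {
    (beta, region): feature_enabled(beta, region)
    for beta, region in itertools.product([True, False], ["us", "eu", "ap"])
}

# Enabled only for beta users in us/eu: exactly 2 of the 6 combinations.
assert sum(results.values()) == 2
```

For input spaces this small, exhaustive permutation testing is cheap; the debate starts when the space grows past what a nightly run can cover.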
SnugglyCoderGuy@reddit
I don't think over-testing could be a thing unless you've got redundant tests. That depends on a lot of factors that would be too lengthy to type out here, but basically: tests should be written for the things that are going to be used the most, and they should be written at the outermost boundaries of your application, so that you can best avoid redundant tests or tests that constantly need changing. Then just create tests until your eyes bleed, go home, come back, rinse and repeat. Who wouldn't want everything that can be tested to be tested automatically?
Instigated-@reddit
The first QA I worked with, I asked them why they chose that career path. Their answer: “I like to break stuff”. This is the attitude we want in QA, for them to put the product through the wringer.
If QA are only testing happy paths, we wouldn’t need them (devs should be testing happy paths at same time as coding to validate their work, however devs are biased as we come from a builder perspective; QA comes from a different angle, trying to find what we overlooked).
Fenix42@reddit
Long time SDET here. I don't just like to break things. I like to figure out ways to break things.
Instigated-@reddit
Yes, very skilled in breaking things in every conceivable and inconceivable way! Devious, tricky, creative, malicious, accidental, likely and improbable. The one who can predict what different types of users might do, from a hacker to the most incompetent user. What would it look like if a cat walked across the keyboard?
Sensitive-Ear-3896@reddit
Short answer: yes. Long answer: they should be realistic failure cases, not stuff a user would never try, because there is an infinite amount of those. But things like calls with missing data, out-of-order calls, invalid data types, missing headers... should all be done.
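A hedged sketch of such a negative-case table in Python. `validate()` is a local stub standing in for the real API, and the "create user" payload shape is made up; in practice each payload would be sent over HTTP and the status code asserted:

```python
# Hypothetical required fields for a made-up "create user" endpoint.
REQUIRED = {"name", "email"}

def validate(payload):
    """Stand-in for the server-side validator; returns an HTTP-like status."""
    if not REQUIRED <= payload.keys():
        return 400  # missing required field
    if not isinstance(payload.get("name"), str):
        return 400  # wrong data type
    if "@" not in str(payload.get("email", "")):
        return 400  # malformed value
    return 201

# Each case pairs a payload with the status the contract promises.
cases = [
    ({"email": "a@b.com"}, 400),                      # missing field
    ({"name": 123, "email": "a@b.com"}, 400),         # wrong type
    ({"name": "Ann", "email": "not-an-email"}, 400),  # malformed value
    ({"name": "Ann", "email": "a@b.com"}, 201),       # one happy path for contrast
]

for payload, expected in cases:
    assert validate(payload) == expected, payload
```

The point of the table shape is that adding a newly discovered failure mode is a one-line change.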
Fenix42@reddit
No such thing.
There are not. The key is understanding meaningful differences in your tests. Let's say you are testing a single text input field. It has some character validation: it allows only letters, and 10 at most. So you have (26×2)^10 valid permutations at the maximum length alone.
You don't need to run all valid permutations to validate the code, though. You can validate the happy path with a handful of boundary tests: minimum length (1 character), maximum length (10 characters), and one mid-range length.
The characters used can be randomly selected if you want. It does not matter.
Negative cases can be covered in a similar way.
You can again randomly select your good and bad characters and the positions they are in.
This will validate the behavior without running every permutation.
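A minimal Python sketch of that scheme, assuming a hypothetical `is_valid()` stub for the field's letters-only, 1-to-10-character rule:

```python
import random
import string

LETTERS = string.ascii_letters  # the 52 allowed characters

def is_valid(s):
    """Stand-in for the field's validation: letters only, 1-10 chars."""
    return 0 < len(s) <= 10 and all(c in LETTERS for c in s)

def random_letters(n):
    return "".join(random.choice(LETTERS) for _ in range(n))

# Happy-path boundaries: which letters appear doesn't matter, only length.
assert is_valid(random_letters(1))
assert is_valid(random_letters(5))
assert is_valid(random_letters(10))

# Negative boundaries: empty, too long, and one bad character at a
# random position inside an otherwise valid string.
assert not is_valid("")
assert not is_valid(random_letters(11))
bad = list(random_letters(10))
bad[random.randrange(10)] = "7"
assert not is_valid("".join(bad))
```

Six tests instead of 52^10: each one represents an equivalence class, and the random choices just pick one member of each class per run.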
Ballbag94@reddit
They should be testing literally everything, I've had good QA testers try and submit completely invalid characters just to see what happens. If they're just testing the happy path and calling it a day they're not only shit, but also doing less than the bare minimum
Happy path, unhappy path, completely random path, the "what happens if I try putting an emoji in the forename field" path, etc
Fenix42@reddit
As a long time qa/sdet this is why I do what I do 99% of the time. The other 1% is "because I could."
MuggleAI@reddit
The question isn't whether QA should test unhappy paths — it's whether QA can imagine them. A couple of examples from our own product last quarter. Silent-fail auth — the login button "worked" but the session cookie never got set. Password reset links pointing at localhost because the preview env .env didn't refresh. Both shipped. Neither got skipped because QA decided not to test unhappy paths; they got skipped because nobody wrote the ticket in the first place.
Aviation checklists have this same problem — the list only covers what someone thought to write down, which is why NTSB reports keep adding items after crashes. Discovery-based exploration catches the paths humans don't list. Doesn't replace a test plan, but "what did we forget" is a separate question from "did we run the plan."
Only-Fisherman5788@reddit
the question isn't whether qa tests failure cases. it's whether qa distinguishes 'the request succeeded' from 'the system did the right thing.' a legacy codebase with historical validation issues can return 200 for bad data and 400 for good data. if all he's testing is 'status=200,' he's testing HTTP, not your API. the check should be against the resulting state, not the response shape.
dystopiadattopia@reddit (OP)
Exactly. All I hear him say is "I got 200s for these tests so they're good." Makes me nervous.
Only-Fisherman5788@reddit
that exact phrase is the tell. '200s means good' treats the api as the ground truth, which only works if the api is correct. for a legacy codebase where validation is known to be wrong, the test has to go one layer deeper - check the db row or the downstream side effect, not the response. if he pushes back the question that usually works is 'your tests pass, so what bug did they catch last?' if the answer is a shrug, the suite isn't actually testing anything you care about.
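A toy Python illustration of why the state-level check matters. `fake_api()` is a deliberately buggy hypothetical endpoint backed by an in-memory SQLite table; the table and payloads are made up:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (id TEXT PRIMARY KEY, qty INTEGER)")

def fake_api(order_id, qty):
    """Buggy legacy endpoint: silently drops invalid quantities."""
    if qty > 0:
        db.execute("INSERT INTO orders VALUES (?, ?)", (order_id, qty))
    return 200  # always 200 -- the kind of bug this thread is about

# The shallow check: status only. This "passes" for invalid input.
status = fake_api("A1", -5)
assert status == 200

# The deeper check: the resulting state. No row was written, so the
# 200 was a lie; a suite asserting "status 400 AND no row" would have
# caught the validation bug that the status-only suite waves through.
row = db.execute("SELECT * FROM orders WHERE id = ?", ("A1",)).fetchone()
assert row is None
```

The test worth writing asserts both layers: the contracted status code and the downstream side effect (or its absence).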
yodal_@reddit
Of course they should test the non-happy path. Your error responses are still a part of your API that needs testing, and they should be checking that you don't just crash or hang when given even the simplest of invalid inputs.
I'm baffled how this could even be a question.
dystopiadattopia@reddit (OP)
Well if I were writing integration tests (which I was fought tooth and nail on and eventually overruled - long story) I would definitely include failure cases, just like I do on unit tests.
This is really the first time I've worked on a team with dedicated QA, and that's a new development even for us. I've worked on small or startup teams in the past, which either couldn't afford dedicated QA (small teams) or thought they get in the way of delivery (startups). I had to rely on writing unit tests and integration tests, but as I mentioned, the current place didn't want to devote resources to integration testing, smh.
WhyIsItGlowing@reddit
There comes a point when doing that kind of work where you end up having to give people what they want, which is a green tick, rather than what they need, which is a test suite that reduces the system to a smouldering wreck and shows the uncomfortable truth of what state it's in.
Some people prefer to keep more end-to-end stuff more focussed on green path and rely on putting lots of the negative scenarios and edge cases at the integration tests layer but there's always a need for it.
If you've had trouble getting stuff to happen, it may be that they have been burnt in the past and are now phoning it in as a result.
A lot of teams in this position talk about "we want to clean this up" but the second they start seeing more builds with red xs against them, get their knives out and start going on about how it's slowing their velocity. It's worth being positive about doing it right rather than jumping to the conclusion that it's because they don't want to.
dempa@reddit
Think of it like this, the QA's job per feature is to find any possible unexpected behavior.
This means:
- testing all possible inputs for happy path
- testing all edge cases (ie, if one or more of the inputs is an integer, test the bounds. If a string, test empty string + 1 character string + max length string)
- testing all possible inputs for unhappy path (ie if one or more of the inputs is an integer, test 0, negative numbers, a number greater than the max)
And then if your QA is capable of writing simple code, automate the above to run per checkin/nightly/some other cadence.
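A sketch of automating that checklist for a hypothetical integer quantity field that must be in 1..100; `accept()` stands in for the API under test:

```python
MIN_QTY, MAX_QTY = 1, 100  # hypothetical contract for the field

def accept(qty):
    """Stand-in for the endpoint's validation."""
    return isinstance(qty, int) and MIN_QTY <= qty <= MAX_QTY

cases = [
    # happy path
    (MIN_QTY, True), (MAX_QTY, True), (50, True),
    # edge cases: just outside each bound
    (MIN_QTY - 1, False), (MAX_QTY + 1, False),
    # unhappy path: negative, absurdly large, wrong type
    (-7, False), (10**9, False), ("10", False),
]

for value, expected in cases:
    assert accept(value) is expected, value
```

Under pytest this table would typically become a `@pytest.mark.parametrize` list so each case reports pass/fail separately.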
This is all a baseline for what I expect for a junior/mid level QA.
A senior will take it a step further and get involved at the beginning of the SDLC for the feature, asking questions during planning & refinement to get a better understanding of what the edge cases are, how this feature should work (or break), and other components/features which may conflict with the current feature.
source: 13 yoe in software dev, 7 yoe as a test engineer
petrol_gas@reddit
If you’re not testing failure scenarios you’re not testing. Just checking that “correct is correct” is called ‘verification’, and it’s important, but secondary to testing.
aruisdante@reddit
QA departments generally have two jobs in traditional V-model development methodologies: (1) acceptance testing against the specified requirements, including the contracted failure paths, and (2) exploratory testing as a stand-in for real users.
It sounds like your QA department isn’t even doing acceptance testing correctly. They absolutely should be, at minimum, testing all of the contracted failure paths of the system.
johnpeters42@reddit
We don't even have dedicated QA people (we have operators who also test) and they still do #1. Our instructions to them for #1 routinely read like "If you do X then it should work, if you do Y then it should block with an explanation". They also do some amount of #2, especially these days, because they are the normal user (the client-facing stuff is relatively simple and polished already; most ongoing development these days is improving the more complex stuff that we do behind the scenes).
aruisdante@reddit
Correct. Generally speaking, the instructions to QA are written in the shape of a Gherkin/BDD-style specification using SCENARIO/[AND_]GIVEN/[AND_]WHEN/[AND_]THEN. You can actually write your requirements this way too, and it often helps everyone at every stage of the V understand what they’re actually doing, from product definition all the way down. But often people think of this kind of specification as only being for testing, so they rarely do.
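For illustration, a failure-path requirement in that shape (the endpoint, values, and assertions here are made up, not from the OP's system):

```gherkin
Scenario: Rejecting an order with a negative quantity
  Given an authenticated API client
  And an order payload with quantity -5
  When the client submits the order
  Then the response status is 400
  And no order row is created
```

Written this way, the spec doubles as the acceptance test for the unhappy path, and the final `And` line forces the state-level check rather than a status-only one.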
Visa5e@reddit
Every great QA I've ever worked with has been obsessed to the point of sociopathy with breaking stuff.
You give them the details and the specs and they'll immediately bin them and start doing random stuff. Anything possible to find the one use case you thought was impossible.
Your guy is not a great QA.
diablo1128@reddit
I don't hear this as often any more, but back in the 2010s I always heard people saying great SWEs don't necessarily make great testers, whether it's automated or manual testing. Great testers have a mindset that doesn't exist in some SWEs. That mindset is exactly what you said: being obsessed with trying to break things.
I'm not a tester, but I was pushed towards that for a good chunk of my 15 YOE. I always tested my own code really well. I think this is because I had used so much shitty software in my life with obvious issues that I didn't want anything I created to fall into that category as well. So I rarely created bugs that were found elsewhere.
The ones that got past me were usually super duper corner cases. Stand-on-your-head-at-exactly-2:48-PM-while-the-toilet-is-being-flushed type of thing. Something that no reasonable person would expect anybody to find during testing.
dystopiadattopia@reddit (OP)
Yeah, I'm beginning to realize this
saposapot@reddit
Yes
That's not a QA guy. That's an imposter. All QAs I know do about 50 failure cases and maybe 1 happy path if they remember :D
All the jokes about QAs are about failure cases...
new2bay@reddit
The best QA people can make things fail just by being near them.
double-click@reddit
You should automate known good data testing. Manually test the exploratory cases.
throwaway_0x90@reddit
So they can code and understand APIs and HTTP requests? An SDET/TE then?
They absolutely should be going beyond just happy path.
endophage@reddit
Given your description of the problem I think what you might want to look at is fuzzing. There are good libraries that can store a corpus of explicit failure cases along with ongoing random testing to avoid regressions.
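A minimal sketch of the corpus-plus-random idea in plain Python (real fuzzers add coverage guidance on top of this); `parse()` and the corpus entries are hypothetical stand-ins:

```python
import random
import string

# Corpus of known-bad inputs from hypothetical past failures; replaying
# them every run is the regression-avoidance part.
corpus = ["", "\x00", "a" * 10_000, "<script>", "caf\u00e9\u2603"]

def parse(data):
    """Stand-in for the system under test: must never raise."""
    return str(data).strip()[:100]

def fuzz(rounds=200, seed=0):
    """Replay the corpus, then throw `rounds` random inputs at parse()."""
    rng = random.Random(seed)
    inputs = list(corpus)
    inputs += [
        "".join(rng.choice(string.printable) for _ in range(rng.randrange(50)))
        for _ in range(rounds)
    ]
    for data in inputs:
        parse(data)  # an exception here is a crash bug / regression
    return len(inputs)

fuzz()
```

A fixed seed keeps runs reproducible; when a random input crashes, it gets appended to the corpus so it is replayed forever after.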
dempa@reddit
That is 1000% what your QA should've been doing from day 1
dystopiadattopia@reddit (OP)
Yeah, this QA guy doesn't exactly inspire me with confidence. He's been here for at least 3 months, and based on the questions he (repeatedly) asks me, I don't think he knows what he's doing
ClydePossumfoot@reddit
Immediately type -10w into every input.
dbxp@reddit
Yes, unless this is a weird rebranding of UAT/user training then the unhappy paths should be the focus for QA
CodelinesNL@reddit
What the heck is this question even? Of course it's part of the job, for any dev and any QA.
Sounds like it's time to automate their position if they can't change their behaviour.
Dependent_Lobster_98@reddit
Your negative paths are just as much a delivered part of the product as your happy paths, and so should be tested and asserted on. Even if it’s an uncaught exception relegated to tech debt, nothing the system does should be a surprise.
No-Try5566@reddit
Insane that it hasn't already been happening
Cube00@reddit
Since you're not sure (used "may"), this is a good opportunity to get those scripts into source control so you can look at what they cover.
You (collectively as developers) can then contribute the negative cases you're concerned about as a PR for your QA to check and approve.
I find this fairer because usually there are multiple devs to a single QA, and a single QA can't be expected to write all the automated tests.
grahambinns@reddit
Dear god in heaven and all his wacky nephews, YES.