NormalUserThirty@reddit
life is meaningless
congolomera@reddit (OP)
Nothing has meaning except for the meaning you give it.
fiskfisk@reddit
Headline is meaningless... you have to understand what the number means (that a line ran, not that its behavior was verified).
But it's a great tool to discover what you didn't think about testing.
Does that mean it's meaningless? Absolutely not. Do you need to know how to use your tools? Yes.
gonzofish@reddit
“Oh look, I didn’t hit the inside of that if block, I probably should write a test that exercises it”
That’s what it’s good for
jonhanson@reddit
TL;DR - thou shalt accept as facts, unsupported by evidence or reasoning other than straw-men, that bad unit tests are bad and Real TDD is good.
chipstastegood@reddit
Well if it wasn’t good it wouldn’t be called “Real” then would it? Just like Real in “Real Mayo”.
OddKSM@reddit
That is true. Real Mayo is definitely better than light/lite versions
But at the same time we've got "The Real Housewives of fooBar" so I think I need a bigger sample base.
wineblood@reddit
TDD is crap
bert8128@reddit
Is “Real TDD” trademarked yet?
gjosifov@reddit
Most of the things Test and TDD people promote as good practices in testing are meaningless. "Tests should be fast", "test methods/classes", mocks, etc. are also meaningless.
Definition of a good test:
1. If you change the code structure (rename, refactor) and not the behavior of the code - the existing tests should pass
2. If you change the code behavior, like LEFT JOIN to JOIN - then the existing tests should fail
You can test whether your tests are good or bad with these two hypotheses.
Writing tests before or after you write the code is also meaningless - just write tests.
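The two criteria above can be sketched as a test that pins observable behavior rather than internal structure (all names here are hypothetical, not from the thread):

```python
def users_with_order_counts(users, orders):
    # LEFT JOIN semantics: every user appears, even with zero orders.
    counts = {}
    for order in orders:
        counts[order["user_id"]] = counts.get(order["user_id"], 0) + 1
    return [(user["id"], counts.get(user["id"], 0)) for user in users]

def test_users_without_orders_still_appear():
    users = [{"id": 1}, {"id": 2}]
    orders = [{"user_id": 1}]
    # Criterion 1: renaming or refactoring the internals keeps this green.
    # Criterion 2: switching to inner-JOIN semantics (dropping user 2) fails it.
    assert users_with_order_counts(users, orders) == [(1, 1), (2, 0)]
```

The test never mentions the dict, the loop, or any method names, so it survives any structural rewrite but fails the moment the join semantics change.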
wineblood@reddit
More low-effort TDD propaganda. Listen, I don't want to join your cult.
"you only write code to satisfy a failed assertion", okay then:
def is_even(num):
    if num == 2:
        return True
    if num == 3:
        return False
There, both my test cases pass. That's "real TDD", right? If you're going to assume the worst about other methodologies, I'll throw the same back at you.
rzwitserloot@reddit
Yeah, uh, 'test coverage 100% goals bad, "just" write TDD' is not useful advice.
But, going solely by the title and ignoring the article, I see a lot of obsession with test coverage.
And I don't get it.
The fact that a line gets executed during a test run doesn't prove it's bug-free at all.
"Defensive" throwaway lines, such as a default case that covers an impossible case by throwing an exception - so that if the developer is somehow wrong and it is reachable, the program explodes with an exception pointing right at the problem instead of silently doing nothing - are a good thing, and a 100% test coverage goal means you either delete fantastic code and cause headaches, or you delve into ridiculous hacky kludges, such as adding an INTENTIONALLY_NOT_REALLY_REAL enum value to all enums just so you can write a test that covers the impossible default, or whatever. This is just one of a number of examples of wanting to write code that you cannot currently fathom can ever run.
The 'value' of a line can differ by a factor of a million. Hence, "98% coverage" sounds great but doesn't actually say anything. If the 'value lines' aren't covered, that's dogshit. What are the odds those 2% that aren't covered are the 'value lines'? You'd think... zero. But if you're chasing max coverage, Goodhart's Law would like to have a word.
Hence, anything less than 100% is meaningless, but a 100% goal actively incentivizes really bad practices (removing fallbacks, writing bullshit tests that you already know will never fail and just slow everything down and are pointless boilerplate to maintain, writing tests for the purpose of covering lines, not for the purpose of testing anything). Sooo... why are we caring about coverage %?
I use code coverage tools so that when I run a unit test or a bevy of unit tests, I can open any source file and see exactly which line(s) are covered by which test(s), including 'ah, this line isn't actually covered by any test you just ran at all'. That's it. That's the only thing test coverage is good for. But that's enough. Extremely valuable. Just... don't tell me the %. Get the fuck out of here with it. It distracts, it's a useless statistic, I do not care about it.
If, while chasing down some bug or trying to learn about some code I haven't written / have forgotten about, the lack of coverage is getting in the way, then I will probably have some words with whoever is responsible for that. That's not good. But "not enough to test or grok this stuff properly" is not something you can lock down with some simplistic percentage. Goodhart is relentless.
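The "defensive throwaway line" being defended looks roughly like this (a minimal sketch; the enum and names are made up for illustration):

```python
from enum import Enum

class Color(Enum):
    RED = 1
    GREEN = 2

def describe(color):
    if color is Color.RED:
        return "warm"
    if color is Color.GREEN:
        return "cool"
    # Unreachable today - this exists so that if someone adds Color.BLUE and
    # forgets this function, it explodes here instead of silently returning None.
    raise AssertionError(f"unhandled color: {color!r}")
```

No test can reach that `raise` without a kludge like the INTENTIONALLY_NOT_REALLY_REAL enum value, which is exactly the complaint: a 100% coverage goal punishes this line even though it is good code.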
kehrazy@reddit
..said no-one competent, ever
sisyphus@reddit
Meaningless is a stretch, but damn, I've seen a lot of unit tests for shit like simple getters and setters (that were often themselves auto-generated by the IDE!) - like, what, are you just making sure the language runtime and core data structures don't have any bugs?
PhysicalMammoth5466@reddit
No, f off