Do you guys use TDD?
Posted by Scientific_Artist444@reddit | ExperiencedDevs | 321 comments
I was reading a book on handling legacy code by Michael Feathers. The preface itself made it clear that the book is about Test Driven Development and not writing clean code (as I expected).
While I have vaguely heard about TDD and how it is done, I haven't actually used TDD yet in my development. None of my team members have, tbh. But with recent changes to development practices, I guess we would have to start using TDD.
So, have you guys used TDD? What is your experience? Is it a must to create software this way? Pros and cons according to your experience?
snotreallyme@reddit
Just because code passes a set of tests doesn't mean it's good code, or necessarily clean code.
ButterflyQuick@reddit
Easy to refactor bad code with good tests. More difficult to maintain any code with bad tests
wvenable@reddit
Except that tests freeze your design. So if your design is bad, you can shuffle code around behind that design to your heart's content and continue to pass tests, but you won't be able to fix the design itself.
ButterflyQuick@reddit
Depends what tests you write. There are plenty of ways to test without forcing yourself to stick to the original design decisions. In my experience, one of the things that puts people off testing is that they're bad at identifying boundaries in their code, so they end up writing tests that are very tightly coupled to the implementation, and then any time they make a change to their code they have to change a bunch of tests.
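To make the boundary point concrete, here's a minimal sketch (all names hypothetical, not from the thread): two tests for the same class, one welded to the internal representation and one that only touches the public API.

```python
# Hypothetical example: the same behavior tested two ways.
class ShoppingCart:
    def __init__(self):
        self._items = []  # internal representation, free to change

    def add(self, name, price, qty=1):
        self._items.append((name, price, qty))

    def total(self):
        return sum(price * qty for _, price, qty in self._items)

# Brittle: coupled to the list-of-tuples storage detail. Swapping the
# internals for a dict breaks this test even though behavior is unchanged.
def test_add_coupled_to_implementation():
    cart = ShoppingCart()
    cart.add("apple", 2.0)
    assert cart._items == [("apple", 2.0, 1)]

# Robust: asserts only at the public boundary, so any internal refactor
# that preserves behavior keeps it green.
def test_total_reflects_added_items():
    cart = ShoppingCart()
    cart.add("apple", 2.0, qty=3)
    cart.add("pear", 1.5)
    assert cart.total() == 7.5
```

The second style is what lets tests survive a redesign; the first is the kind that forces a test rewrite on every change.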
wvenable@reddit
I know that test driven design is not for me. I've tried to do it properly and found it too constraining to my creativity. I work in an iterative process and I usually don't settle on an interface right away. Deciding on an interface first feels like being trapped in a box.
When I did TDD, I found as soon as I started implementing that my interface assumptions were sub-optimal and then I'd have to re-write the tests.
One thing is that I'm not afraid of change -- if some design is wrong I will edit 100 files to fix it.
ButterflyQuick@reddit
I'm not trying to sell you on it, just pointing out that TDD doesn't necessarily mean tying yourself to a specific approach up front. And most people who feel constrained by TDD just aren't writing good tests in the first place
I actually don't find I have to totally change the design of anything I'm building considerably very often. Sure, I might tweak the exact API, but the rough shape usually ends up being what I designed early on. I don't experiment in code all that much either. I know the problem I'm trying to solve and the methods available to solve it. By the time I actually write code I usually have a pretty good idea of what the final shape is going to be.
If you are making massive changes to your code at several points through the build process then yeah, you're really limiting the number of tests you can write that won't end up needing rewriting. But I honestly think that as you get more experienced you'll settle on a design sooner, make fewer large changes through the process, and maybe find that TDD is more effective.
Like I say though, really not worried about selling you on TDD, there's plenty of other ways to create good software. I do think you're missing the point of TDD if you don't think it allows you to be creative or work iteratively, but that's fine
wvenable@reddit
I'm actually kind of wanting to be sold on it. This entire thread is people who enjoy TDD and I just don't get it. Everybody works differently and has a different development style (their brains work differently). This is why most language wars are dumb; what feels objectively better is often just subjectively better, because people are different.
This is the "you're holding it wrong" of answers. When I did TDD I picked basically the perfect project for it -- a base library that needed to be rewritten with a clean, modern API. And I, of course, wrote tests to design that API. But as soon as I started actually writing code, I found I wanted to change that API. I was rewriting the tests constantly.
I'm willing to accept that how I think and how I work is different from how you work. But at the same time, I'm almost incredulous that you can build tests up front and then just fill out those tests and have it be an efficient way of getting the best results. I've had to redesign because some dependency wouldn't allow me to use it the way I had assumed it would work. Users often don't even know what they want until they've seen something that isn't what they want.
I've never met an API that I didn't want to refactor.
Can I ask the type of work that you do? I work in a smallish corporate environment; we have offices around our country. I manage a small team. We produce several applications a year. Mostly web applications to improve process or automation. I've never had a project fail or be late or go over budget.
Exactly! I am missing that!
ButterflyQuick@reddit
Part 2 because I think I was over the length limit
My flow, let's say for a new feature, is roughly
Hopefully that gives you an insight into why I find TDD beneficial. To be clear I don't do it in every case, and I'm not a total purist who will rerun tests after every change (but sometimes I do). But I actually find that in the majority of work I do, with a bit of upfront planning, I'm able to come up with enough to write those initial tests that cover the work at a high level. And then as I go I drop down to the unit test level to drive out more of the functionality.
The main benefit for me is the very short feedback loops, and the confidence that if I make a change that affects other parts of the code I'm working on, or the codebase as a whole, then I'll find out very quickly. It forces me to think about the API at a high level before I start, and I have more confidence in any refactors I make because the code is well covered. I think the mistake a lot of people make is trying to write too many tests up front, and couple those tests too closely to implementation details rather than the feature itself. Cover the feature, then write the unit tests once you start to decide what the units are
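That "cover the feature, then write the unit tests" order can be sketched in a few lines (hypothetical names, not from the thread): one feature-level test written first, and a unit test added once a unit emerges during development.

```python
# Feature-level test, written first: pins the feature as a whole.
def test_report_totals_orders_by_customer():
    orders = [("ada", 10), ("bob", 5), ("ada", 7)]
    assert report(orders) == {"ada": 17, "bob": 5}

# Unit test, added later once "accumulate" emerges as a distinct unit.
def test_accumulate_sums_pairs():
    assert accumulate([("x", 1), ("x", 2)]) == {"x": 3}

def accumulate(pairs):
    # Sum amounts per key; an implementation detail the feature test
    # never mentions, so it stays free to change.
    totals = {}
    for key, amount in pairs:
        totals[key] = totals.get(key, 0) + amount
    return totals

def report(orders):
    return accumulate(orders)
```

The feature test survives any reshuffling of `accumulate`; only the unit test tracks the unit, and it only exists once the unit does.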
wvenable@reddit
I appreciate the level of detail that you went into here. That's pretty much how I do unit testing except for the writing of tests up front.
For something like this integration I would approach it very differently. First of all, I will build a very minimal viable product for connecting to their API. I roughly get the authentication working, the main calls, check the results, etc. I have been burned far too many times at this part of the process to put in anything other than the most minimal effort. Sometimes at this stage it just doesn't work and can take months of back and forth to resolve. I would not start writing tests or even doing any design until I'm past this part. Sometimes, until I call the API, I don't know if it even matches the documentation.
Now I could write unit tests to do this work but that is a lot of effort and ceremony for something that I'm still unsure about.
Once everything is confirmed working, then I'll unpack that MVP into something we can actually use. If it's a complicated integration, and some of them are very complicated, then I have to spend a lot of time working on it and figuring out the best API. For our document management system, they have a REST API and documentation, but it actually took months to really figure it out. A naive implementation would have been a one-to-one mapping of their REST calls to function calls using their documentation as reference. But that puts a lot of effort on developers every time they want to use it. Instead I effectively reverse-engineered their system to make our API match the real semantics of their system. Any effort put in here is less effort on our developers using it. At some point in this process, when I'm fairly satisfied with the design, I start writing tests. It's not at the end but more in the middle.
For the process you describe, I still don't see how it's beneficial to the development process. For something as simple as a single integration, it doesn't seem like a cost or a benefit. But also it is not something with a lot of design (like a whole new product) and it is something easily automatically tested. What I would be worried about is something that doesn't fit as nicely into that kind of box.
ButterflyQuick@reddit
I thought maybe this was a given but perhaps not.... I don't use automated tests to cover every little thing that I do
What you described, ensuring the vendor's API matches documentation etc., I do not consider development work. I will have already done anything like that before I start work. I don't feel the need to create some kind of MVP of the feature to do so; I'm literally testing a few endpoints, I can do that with curl from the command line
I appreciate the discussion but like I said, I'm not here to convince you of the benefits of TDD. If you don't like the approach or it doesn't fit your preferred workflow then that's fine. I've tried to lay out why I find it beneficial. If that isn't compelling to you then so be it; it's not my job to convince you and I wouldn't have anything to gain by doing so
TDD works for me. I'm productive and write robust software. But there are many ways to write good software and it sounds like you have an approach that works for you and that's great
wvenable@reddit
No that is a given. Although the principle of Test Driven Design is that tests drive the design. I just fail to see how good design comes from doing the tests first for the reasons I've described.
I find it interesting that you don't consider that development work. I suppose the main thing is that I think in code; I would prefer not to use curl from the command line and instead just do it in the language I'm using. Both because it's different in the sense of thinking about the problem, but also because it's a different environment that might end up being a factor too.
I'm not sure that you have. You've described the process that you go through but not actually how that helps you with design. It's like describing how you drive to work by telling me all the turns but not the reason you took the route and why it's the best route.
I accept that. And you're right, you're not here to convince me. I guess I just struggle to understand how TDD works for anyone. Your post, as detailed as it is (and I appreciate it), is not really helping me get over that hump.
ButterflyQuick@reddit
Honestly I thought this spelt it out pretty well, but maybe I overestimated
You don't have to design a ton of stuff up front. TDD is about writing simple, possibly inelegant code to get you to a working state, and then using passing tests to allow you to refactor freely. You don't decide how the entirety of the system works up front, you decide at high level what you need to achieve and then use tests to build out the design.
I addressed this in a previous comment. I think this comes down to people not being familiar enough with testing and testing patterns to write tests unless the code is in front of them. Get better at, and more comfortable with, testing and you will find this easier. Maybe you're going to suggest I'm making a "you're holding it wrong" argument but it's very difficult to advocate TDD to someone who isn't able to consider what tests would be passing to consider the code valid before any code is written.
We know what the acceptance criteria of our code is before we start writing it, and are able to express that through automated tests. I assume you don't start writing code without knowing what you are trying to achieve? TDD just goes a step forward and instead of representing the aim of the code through user stories, or acceptance criteria, or design documents, or whatever else you use at work, it represents the high level design as automated tests
wvenable@reddit
This:
Seems immediately in conflict with this:
But to be fair I think this line represents a big difference. I don't even like to think about the API at high level before I start. That's something that evolves over time.
I guess the problem is that most of my refactoring will break my tests. But again this comes down to not having decided on a high level design at the start.
How would one get to that point? If I suck at writing tests, how would I ever get to the point of not sucking at writing tests enough to do TDD? What's the process to get from here to there that doesn't involve incredible pain?
You know all the acceptance criteria of all your code before you start writing any of it? I think forget TDD, this is the thing I find most unbelievable. :)
ButterflyQuick@reddit
I don't consider thinking about the API and coming up with a high level design and not designing all the individual pieces upfront to be contradictory. Maybe this is a product of the amount of time I have been doing this, but I can look at a problem and know roughly how I am going to solve it at a high level without writing a bunch of code to "get a feel for things" or whatever it is writing that code is achieving for you
It's writing the wrong tests, and coupling them to your implementation details
How do you get better at anything? You learn about it and you practice it. There's no magic sauce here, but there are about a billion books on testing, some specifically for TDD, a lot just about testing in general
There's really not that much difference between good tests for TDD and good tests in general. The gap is all about being experienced enough with testing to be able to write the tests without having code to "reference" - and this coupling of tests to code is exactly what TDD is trying to avoid anyway.
There's a ton of material out there on TDD process. I remember being where you are and not really being able to comprehend how to write tests before code. I wish there was some "secret" but it really did just come down to getting better at writing tests. I didn't even set out to get into TDD, I just read a lot about testing and the idea of writing tests first felt natural, so I learned a bit of the TDD process and it clicked with me
Enough to write some tests that will tell me my code is working to spec, yes. If you don't know what your acceptance criteria are, how do you even start work? You must have some concept of what you are trying to achieve. Write a test that covers it
wvenable@reddit
The last project our team started, and the next to deploy, started as a prototype. I personally got tired of getting nowhere with management trying to solve a problem. They wanted to integrate two third party products together but it just wasn't happening. I instructed one of the team to stop what they were doing and instead spend 3 days building a prototype replacement for both products. The prototype was all front-end; the whole backend was just a facade.
That prototype turned out even better than I thought and we quickly got management approval for the project. We are under a tight deadline due to some external factors (which fuelled my frustration at the lack of progress). We had some meetings with stakeholders about the various features we need to add to solve that aforementioned problem, and we've been putting out iterative releases, getting feedback, and then changing the product as needed.
But honestly I have no idea how I would do something like that with TDD. I literally have no idea where one would even start.
The other project that I'm most personally invested in is the replacement of a big shared service that all our applications use. It basically integrates a bunch of different external data sources into a single cohesive database that all apps can retrieve data from. Several of those data sources have changed over time, and compatibility shims were added to ensure applications continue to see the data the same way as always. In addition, the quality and completeness of one of the data sources was vastly overestimated, which was not accounted for in the original design. The purpose of this project is to change the API as well as the internal design -- new applications will use this, old applications can continue to use the old service and/or be slowly migrated.
This is essentially a port, and in the TDD world what I would do is take the original set of tests and change them to fit the new API. And, in fact, that is what I did at the end. However, that's not how I developed it. I wanted to keep the API as minimal as possible and model only exactly what was needed. I started porting the code from the original service over, removing all the legacy crap: shims, columns, and tables. I also pored over the data itself, looking for ways to resolve the issues that we had and ensuring that I wasn't just naively bringing over data I don't need.
I really didn't want to be constantly running database migrations and doing partial tests while developing this because of the degree I was making changes. So I would compile regularly but I didn't even run this thing until I was mostly done. I absolutely wanted this to be perfect so I was constantly adding, removing, and renaming models and columns as I ported over the code and examined the data. Once I could run it, I had a few minor problems. When I ran the tests I only failed a handful of tests in a particularly tricky area. And now it's done.
Again, I don't know how I would do that with TDD.
ButterflyQuick@reddit
Well, you don't really go into a lot of detail about either project, but I don't see why either wouldn't be achievable with TDD. I probably wouldn't write tests for a 3-day prototype, but I don't often have the need to build 3-day prototypes
Nothing about TDD precludes this
Sounds like you'd have too many tests up front. I'd ignore the previous tests and do exactly what I described earlier, pick a data source, write a high level test that proves part of the data integration works, write unit tests to build out the parts that enable the data integration
TDD isn't about producing the most amount of code in the shortest amount of time. If that's what you're aiming to do, and in both your examples you emphasise speed over anything else so that seems plausible, then TDD is probably a bad fit for you as a developer.
In every mature product I've worked in (and that spans a large number of projects which I've joined across a wide range of points in the development cycle) the most challenging aspect has been maintenance, bug fixing, and extending the application in a way that doesn't break existing functionality. Writing a bunch of code that does pretty much what you want quickly is easy, that's why rewrites are so appealing to a lot of developers.
I don't think TDD is a slow way of building software, I seem to be plenty productive, but of all the ways I've worked it has enabled me to write software that has the least bugs, and be the easiest to maintain and extend over years, even decades. To me and the companies I work for that is immeasurably more valuable than throwing together a rough proof of concept in 3 days, but that doesn't mean it's going to be the best approach for everyone
wvenable@reddit
We release about 3 applications a year at roughly 15-30 tables per app. We also support and maintain all the applications that we produce. Obviously maintaining high quality software is very important; we would never get any new work done if we were constantly having to maintain the old software. I literally get an alert on my phone whenever there is any failure in any app so I have pretty high incentive to have almost no defects.
Nothing in TDD seems to support it either. How does testing lead to the design? It is supposed to be test driven design -- not merely just doing tests first.
ButterflyQuick@reddit
Comparatively small applications then. Maybe you just aren't at a scale where TDD is beneficial.
There's plenty of resources out there if you genuinely want to explore this further. I've explained the process of using tests to drive out the design three or four times now. You just refuse to accept that what I'm typing is an answer to your question. I have no idea why, maybe you are expecting some deep insight but there really isn't. You write a test, you write code to make the test pass, you refine your design by refactoring, knowing at every stage your software works because the tests pass. That's all there is to it.
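For what it's worth, the loop described here is small enough to show in a few lines. A hedged sketch with a hypothetical `slugify` function (not from the thread):

```python
# Red: this test is written first and fails, because slugify doesn't exist yet.
def test_slugify():
    assert slugify("Hello World") == "hello-world"
    assert slugify("  Multiple   Spaces  ") == "multiple-spaces"

# Green: the simplest implementation that makes the test pass.
def slugify(text):
    return "-".join(text.lower().split())

# Refactor: with the behavior pinned by the test, the internals can now
# be reshaped freely, rerunning the test after each change.
```

The design pressure comes from the red step: writing the assertion first forces you to commit to the call signature and output before any implementation exists.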
I wish you all the best with your software development, but this discussion is getting ridiculous
wvenable@reddit
It seems to me you need to have a design before you get to this very first step. It seems like a very important piece of software development is just handwaved away right at this point.
Everything else:
This part is the basic principle behind unit testing. You can change your code safely, knowing that it continues to work as designed. But you've effectively frozen the design and are now preventing regressions.
Fair enough. The gulf might just be too wide.
Seems to me that we effectively write the equivalent of your entire code base every 2 years or so.
ButterflyQuick@reddit
You are allowed to design before you write tests. It's called test driven development, not test driven design. You can do upfront design, plan things out etc., and then use the tests to drive out the rest of the design/development
This is your definition of unit tests, and indicates you are writing tests too coupled to implementation details. I doubt any TDDer shares this definition of unit test, or writes unit tests in a way that freezes design. I think this might be one of the issues with how you write tests that prevents you "getting" TDD.
Lots of small projects are easier to maintain than one large one. I've been with this company less than a year. And in case it somehow wasn't clear, that's the size of our codebase at present, it's growing. Previous job was a much larger code base. Job before that was agency work which sounds much more similar to what you're doing in that we were working across multiple small projects. Definitely produced the most LoC at the agency. TDD has been effective on all these codebases
wvenable@reddit
I don't see how that's possible. Almost all tests are at the API barrier whether that's a library interface or a network service. What is higher level than that? I could go lower since everything is dependency injected and test individual components but that isn't often necessary.
We are specifically talking about TDD which I'm going to argue doesn't apply to maintenance. For maintenance, does it really matter if your tests were written first or last? They are done. You fix bugs or you do a minor refactor and then you run those tests.
That's pretty impressive. I mean I'm not radically changing my initial designs, but the devil is in the details. I flesh out those details when I'm doing development. But I agree they could be fleshed out with tests -- it is development either way. But it feels so disconnected from actually building the software. I found myself creatively constrained by TDD. I want to move stuff around as I'm building it out. My first thought is rarely my best. I want to expose the best and cleanest design as my API. Again, running it could be done with tests (and sometimes is), but keeping to TDD was just too hard for me mentally.
That makes sense.
That's actually why we do it that way! Most of our users have the correct mental model that they're working with separate applications but someone new might not be totally sure. Much of what we do could have also been one giant monolith or microservices. There is common code, common data, and a consistent UI. But they are separate applications; they generally don't cross paths with each other which is helpful.
Any one of these applications can be quite complicated; for many of them there are direct alternatives on the market. Most of those don't do exactly what we want them to do or they just suck. Our team doesn't build anything we can successfully farm out to another product or service.
ButterflyQuick@reddit
I don't think the distinction between objectively better and subjectively better is very clear cut though. If you have a team of devs who subjectively all prefer TDD, then objectively, for that team TDD is better. C# vs Java is a subjective debate, with some objective aspects. But if you have a team of Java devs and want them to be productive, then Java is objectively the better option.
That's a lot of words to say that of course a lot of the arguments are subjective. But that doesn't mean that in the real world the benefits of the subjective arguments are any less important.
Yes, if you are writing bad tests you are doing TDD wrong. I don't see that as a gotcha, it's just a fact of how the process works. If someone comes along and claims that JS is a bad language, but you take a look at their code and they are using a bunch of language features wrong and their base knowledge is poor then that doesn't make JS a bad language, it makes them bad at JS
I think a lot of the issue people have with TDD is they aren't good at writing tests. So instead of being able to write tests upfront, understanding the parts of the code they should test, and how to test them effectively, they need to have the code written to try and work out from there what tests to write.
That's not to say everyone who doesn't like TDD writes bad tests, or everyone who does TDD writes good tests. But I think a lot of the people who stare at TDD and don't see how it can be productive, or where to start with writing tests when they haven't worked out the majority of the detail in their code don't have a lot of experience writing tests, and aren't fully into a particular mindset that enables test first to work.
I work on a team of 6 devs; we maintain a single product but across several code bases, a main backend, a few different frontends etc. The product is mature but still under active development, lots of new features, but also lots of bug fixes and maintenance. Level of complexity is reasonable but we're not especially performance- or security-sensitive. Maybe 200k LoC but honestly I've never checked and that's a pretty out there guess. ~150 tables in our main database, 15m rows in our largest table
loxagos_snake@reddit
The few TDD devs I've seen in the wild wrote tests, wrote the code and said "ah great, it passes, moving on!".
The code itself was an absolute mess, even in formatting. You are absolutely correct, and TDD does contain a refactor step, but I feel like some people will simply not bother.
ButterflyQuick@reddit
But if the tests are good it doesn't matter that the original dev wrote crap code and didn't refactor. Anyone else can come along at any time and refactor as necessary
loxagos_snake@reddit
My problem with this is how realistic it is.
Someone who just wants to get it over with will simply rewrite the test to bend it to their will. I've even seen cases where tests had been removed because they didn't pass after a refactor.
Of course we have reviews for that reason, but realistically, few people will hold up a review due to missing tests, especially if there's pressure to release.
I feel like TDD can only work in stable environments, in teams with very experienced devs, and where management understands the ROI that tests offer and considers them as essential as the code itself.
ButterflyQuick@reddit
To be honest, and to argue against my own point, I think the biggest issue is the overlap of devs who write good tests and devs that write bad "code" (as if tests aren't code) is very small, maybe non existent. Most crap code that has tests also has crap tests
I don't disagree with what I take as your main points: Testing can be difficult, it can add overhead to work, and a lot of rework and refactoring involves changing the tests as much as the underlying code
But I think all of this stems from most devs being really bad at writing automated tests. It's not treated as a skill worth developing. People would rather practice and develop skills they see as more interesting and don't spend time getting good at testing
In my opinion if you aren't writing automated tests you aren't completing your work. It is a fundamental skill of software development. If it's slowing you down get better at it. If you are having to rewrite all your tests every time you refactor write better tests.
My issue with this is that all software development practices work best in stable, high skilled teams. That doesn't mean the less skilled teams shouldn't develop new skills. And the less stable the work environment the more important it is to have tests
chimpuswimpus@reddit
Fair point. Shit code is still shit code. Doesn't really matter how you wrote it.
Saki-Sun@reddit
IMHO the most important part of TDD is emergent design. It kind of pushes you to start simple.
chimpuswimpus@reddit
Yeah and TDD goes red, green, refactor. That last step is important.
flmontpetit@reddit
I think TDD is excessive but I've always found that a large chunk of the value provided by unit tests specifically is that it forces you to think of at least two integrations for the code you're writing and leads to far less coupling down the line.
FoeHammer99099@reddit
I assume you're talking about "Working Effectively with Legacy Code". I think the context of working with a large codebase that you don't really understand (and perhaps no one really understands) is important. You don't really know the "correct" behavior of the system beyond that it has to work the same after the change as before the change, except for the part you changed. No one can explain exactly what the system is supposed to do under certain circumstances, but they'll notice if it changes.
I think most people will eventually come across a system like this. The company's payroll solution is 20 years old and we want to rewrite it and move it to AWS. We bought one of our competitors and fired all the engineers; now get their product working again.
What Feathers is saying is that you should first write tests as a way of mapping the behavior of the system. Then once you have a test harness around a component you can replace that component and be confident that you didn't break anything. Then you can start modifying and updating the codebase (which now has a bunch of tests).
I've updated large legacy projects a few times, some before I read the book. This is really good advice.
flashjack99@reddit
Had to scroll entirely too far to find this take…
Older code does wild things sometimes. Anything your best code practices catch today did not exist back then. If it compiled and worked, it shipped and they got bonuses. Go back far enough and code reviews weren’t a thing.
When you monkey with it not fully understanding it, you will break it. Tests help find those unintended breaks.
sritanona@reddit
No. I researched it for university and basically found out it doesn’t give better results in terms of productivity. I also think it introduces a lot of filler tests that are not important in my opinion. I don’t aim for 100% coverage. It might help you think about what to include in the code though. But it’s just a matter of preference and not an objectively better or worse choice.
PeterPriesth00d@reddit
Many moons ago we had a guy on our team push for TDD. At first I hated it. Then I kind of liked it, then I realized you can get 90% of the benefit from just writing tests sort of in parallel with your code and it’s way less miserable than full on TDD.
After you’ve written enough tests, you know how to structure your code so you will be able to mock it easily and you do things with tests in mind without the crazy overhead of every. Single. Fucking. Line. Of your code needing to be written only after working on a test.
Hell, I think having integration and endpoint tests solve a ton of issues in a codebase because you can change how things work under the hood but if the end result is different, you will at least know about it.
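As a sketch of that idea, with a hypothetical handler standing in for a real endpoint: the test asserts only on the status code and payload, so everything under the hood can be rewritten without touching the test.

```python
import json

# Hypothetical handler standing in for a real endpoint; the dict plays
# the role of a data store that could later be swapped for a real one.
def get_user_handler(user_id):
    users = {1: {"name": "Ada", "active": True}}
    if user_id in users:
        return 200, json.dumps(users[user_id])
    return 404, json.dumps({"error": "not found"})

# Endpoint-level test: checks status and payload only, never how the
# handler produced them, so internal changes surface only if they
# change the end result.
def test_get_user_endpoint():
    status, body = get_user_handler(1)
    assert status == 200
    assert json.loads(body)["name"] == "Ada"

    status, body = get_user_handler(99)
    assert status == 404
```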
At the end of the day, testing is about validating that what you are doing is what you meant to do and preventing things from breaking in future updates.
How you and your team do it is on a spectrum and what works for one team or codebase might not for another.
83b6508@reddit
I use TDD primarily because asking myself how I'm going to test something immediately tells me how I want to interface with it and pass dependencies to it, and it forces me to rein in my desire to over-engineer something by defining what victory actually looks like.
I find that doing this kind of thinking first almost always results in units of code that are simpler, much smaller, and much easier to maintain.
The fact that it also tends to result in way better test coverage is an added bonus!
I find this works much better than letting the implementation details lead that conversation.
jonreid@reddit
TDD for 23 years in professional development, mostly in Apple environments. What would you like to know?
Scientific_Artist444@reddit (OP)
If you have already used TDD and it is your go-to thing, please share your experience using TDD.
jonreid@reddit
I've done a summary write-up on my About page. It begins,
This fear prompted me to search for better ways of coding. I describe how Extreme Programming (XP) changed me, even though XP is a set of team practices that I applied solo.
https://qualitycoding.org/about/
Many_Particular_8618@reddit
Tdd means structuring your system to be testable.
eyes-are-fading-blue@reddit
I shared my experience of doing pure TDD/extreme programming for 3 years in another thread.
https://www.reddit.com/r/ExperiencedDevs/s/6SFhnhN7Ez
throw_it_further_@reddit
I think it's useful in specific situations (fixing a bug, or adding functionality when there is already an architecture in place), but it's not my default.
kindapottamus@reddit
No. When I’m developing a new feature, things are messy af for me. I’m rearranging files/directories, sketching out concepts, and making many refactors. At that point, tests only get in the way. My style is to get something working, then refine concepts/components/classes, then write tests.
xabrol@reddit
Depends. I don't TDD an in-flight proof of concept; it's a waste of time, since it changes constantly.
But a finished design, yes.
willcodefordonuts@reddit
We write unit tests. If people want to do TDD to get that done fair enough, if they want to write tests after (like I do) that’s ok too. Main thing is they write good tests.
Personally I don’t like TDD as a workflow but that’s my own opinion. It does work for a lot of people. Just do what works for you
Scientific_Artist444@reddit (OP)
Yes, tests come after code in our team as well.
But it has a risk of missing functionality that goes untested. Writing tests first forces you to focus on the requirement and only then write code to meet those requirements. That's how TDD is supposed to work in theory. Never tried in practice.
edgmnt_net@reddit
Not everything is testable or worth testing, at least in that fashion. And if you go down the path of trying to test everything it's very easy to make a mess of the code, due to extensive mocking. The tests may also be nearly useless and highly coupled to the code, providing no robust assurance and requiring changes to be made all over the place.
Odd-Investigator-870@reddit
Skills issue. Everything worth delivering should be testable. Problems with mocks indicate bad design. Try doing it with stubs and fakes instead.
Brought2UByAdderall@reddit
And when our back end team says they won't have enough time to modify a feature because they'll have to change too many tests, is that also a skill issue?
UK-sHaDoW@reddit
They should only be changing the tests where the acceptance criteria has changed.
edgmnt_net@reddit
There are plenty of cases when that's just not possible with unit tests. Imagine some rework or new feature requires adding a dependency to a unit or a field to an internal DTO (that may or may not be a good idea), you can't really avoid touching tests. Sometimes the units themselves mostly shuffle data around and there's nothing meaningful your tests can assert. Either the tests are trivial or they're highly-coupled to the code.
UK-sHaDoW@reddit
Both of those are trivial if you've designed your tests right. Also, what do you mean by an internal DTO? A DTO's entire purpose is to transfer data to an external system.
Ignoring that, you should have an equality helper in your tests for the DTO that's defined in one place. There should be at most a couple of tests that check for specific values, so only a few tests should change.
For dependency, your tests should be creating objects through a single function which has default values. When you add a new dependency you just update that single function.
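That single creation function might be sketched like this in Python (`OrderService` and `make_order_service` are invented names for illustration):

```python
# Tests create objects through one factory with defaults, so tests that
# don't care about a dependency never mention it. Adding a constructor
# dependency then touches exactly one place.
class OrderService:
    def __init__(self, tax_rate, clock):
        self.tax_rate = tax_rate
        self.clock = clock  # newly added dependency

    def total(self, subtotal):
        return round(subtotal * (1 + self.tax_rate), 2)

def make_order_service(tax_rate=0.2, clock=None):
    # New dependencies get a default here, and only here.
    return OrderService(tax_rate=tax_rate, clock=clock or (lambda: 0))

def test_total_applies_tax():
    # This test never mentions `clock`, so it survived the new dependency.
    service = make_order_service(tax_rate=0.1)
    assert service.total(100) == 110.0
```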
edgmnt_net@reddit
As per Martin Fowler's notion of "local DTOs": https://martinfowler.com/bliki/LocalDTO.html
Basically structs used to represent calls, pass parameters and return results. They tend to show up heavily in layered architectures and can be an antipattern. Some even justify such layering and DTOs on the basis of testing.
Yes, but that begs the question of what you consider a module or unit. Unit testing every class and aiming for full coverage can easily turn into checking app internals, as many classes are exactly that, internals. Note that I'm not against testing per se, but at some point I'm going to ask why even call it unit testing, if testing the only truly public APIs implies system/integration testing.
If the unit you picked is just glue code that transforms one struct to another or merely reads in arguments and calls something else, then that's pretty much your entire test and it's not very useful. :)
Such glue code is more common than one might be inclined to think, even if you try to avoid it. Your app init code is probably just that: set up this subsystem, set up an HTTP server, wire things around, without any significant testable logic. Same for many HTTP handlers: they'll parse input and call something else. It's easy to end up testing essentially data shuffling.
My usual recommendation is to abstract (make helpers etc.) common parsing or auth or whatever you might need logic and try to test that instead, if reasonable. But in many cases you shouldn't really test stuff like "does this particular handler check the user" because that should be obvious from the code.
Injection-wise and for something like logging, sure. It might be more complicated if you want to set up expectations or the dependency returns stuff.
The point is if you assert too much (such as a particular order of calls to dependencies), you'll end up having to change the test too much and it brings little value over the code itself. It's more of assurance by mere duplication. And IMO good tests should bring something new, not just repeat the code.
UK-sHaDoW@reddit
You get full coverage without testing every class. Also only make a few classes public.
Also glue code is important. It should be tested. Accidentally not mapping one field to another is a fatal bug.
teslas_love_pigeon@reddit
Kinda blows my mind how some devs just accept crap code as the default rather than trying to make things easy to test by default.
If you purposely write code that is hard to test for, it's also hard to refactor or remove.
edgmnt_net@reddit
I do recommend breaking out some of the logic in functions that are easy to test, when it makes sense. However there really isn't a good way to test much of typical application code no matter how you write it.
And too much unit testing can very well make refactoring much more involved when you introduce extraneous interfaces, layers and internal DTOs and suddenly your changes blow up across many files. It also negatively impacts readability, as now you're not using well-known APIs, everything goes through makeshift layers of indirection just to be able to write tests.
The trouble is people rely way too much on testing and at this point it's causing them to write worse code and a lot more code just to check a box. Some things are inherently not testable. And considering the low bar for reviewing, general code quality and static assurance that some advocate, I'd say that's the real skill issue and projects seem to try to make up for it with testing. Which only gives a false sense of security, ends up slowing down the development in the long run and may even take resources away from more impactful things like proper design and reviewing.
teslas_love_pigeon@reddit
Notice how I didn't say write lots of test, just make it easier to test.
I deal with code everyday where the test code for a relatively simple class is like double the amount. People can learn how to write code that is easier to test, the only way you get better at this is WRITING THE TEST when you also write the code.
Ok_Platypus8866@reddit
Maybe it is a skill issue, but is impossible to say without any real details.
But that complaint applies to any sort of unit testing, not just TDD. If unit tests are slowing down your ability to modify features, then I think you are doing something wrong.
edgmnt_net@reddit
Your application periodically saves some data to a file. You choose to use atomic renames with fsync to ensure it's consistent and crash-safe. How do you test said implementation? How do you test that it's indeed used? That's the sort of stuff that you either got right or you got wrong and no amount of testing is really going to help you. In this particular case, code review is going to do you a lot more good. Best you can do is just have some coarse, sanity and stress testing and hope it might catch some random bugs, but it won't really catch such race conditions with any specificity.
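For concreteness, a rough Python sketch of the write-then-rename pattern being described (names invented). Note how a unit test can check that the file round-trips, but not that the fsync/rename choreography is actually crash-safe, which is the commenter's point:

```python
import os
import tempfile

def atomic_save(path, data: bytes):
    # Write to a temp file in the same directory, fsync, then rename.
    # On POSIX, rename within a filesystem is atomic, so readers see
    # either the old contents or the new -- never a partial write.
    dirname = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=dirname)
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())
        os.replace(tmp, path)  # atomic replace; works on Windows too
    except BaseException:
        os.unlink(tmp)  # clean up the temp file on any failure
        raise
    # Caveat a reviewer would catch and a test wouldn't: full durability
    # also requires fsyncing the containing directory after the rename.
```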
It won't really matter if you use mocks, fakes or stubs unless you can somehow avoid having to add an interface just for testing purposes. Sometimes you can avoid it, e.g. inject a no-op logger instead of a real one, for free. But it isn't always reasonable. Putting every unit behind an interface with just one real implementation and one fake implementation leads to a lot of indirection. Just to test to what end exactly?
Although I agree that better design can lead to better testability, I'm just saying full unit test coverage just isn't very reasonable to pursue.
willcodefordonuts@reddit
That’s the theory of it.
The reason I don't like it is that if I'm developing things from zero, sometimes I'm not sure of the shape of what I need to build. And so I build something, refactor, try something else, refactor again. The tests just slow all of that down if I do them first, as sometimes the methods I write tests for don't exist yet or change a lot.
If you already have a system in place sure it’s easier to write the tests first as you are limited in the scope of changes. But I still just don’t mesh well with tests first.
Even writing tests first you risk missing functionality. If you can read a design doc and pull out what needs to be tested you can do that same process first or last.
kani_kani_katoa@reddit
I liked it early in my career because it forced me to build components that were testable from the start. Now I do that unconsciously, so it doesn't seem as necessary to me.
Adept_Carpet@reddit
Yeah, testable components are good for several reasons. The first is that they are components, and the subroutines should have lower cyclomatic complexity.
I'd bet that 95%+ of the value of TDD comes from making people write testable components.
I find that if you are rushing and doing careless work, or you don't understand the problem, or you don't know how to express the solution correctly, you're gonna introduce bugs regardless of the use of tests (same thing with type systems).
Twisterr1000@reddit
Big TDD user here. The thing around not knowing 'what you want something to look like' is really valid. The way I tend to approach those scenarios is to start with a functional/integration tests, and then move inwards to write lower level unit tests.
This is great as you can refactor as you go along without breaking tests, but whilst still ensuring your overall flow works.
Other things/variations I/my team do are:
Lots more I could write, but on a plane about to take off. Feel free to DM me if you want to have a chat about TDD though!
PureRepresentative9@reddit
That’s just BDD though?
crazylikeajellyfish@reddit
I never use TDD, and I totally agree with your philosophy re: not knowing what you want something to look like!
Iterating on an application until it looks/feels/acts right and then building an E2E test which verifies the happy path is a light lift and provides a lot of value. You don't immediately know what's wrong when that test breaks, but you know that the code isn't ready to be deployed. Once that E2E test exists, then I'll start writing unit tests around the most complex pieces of the system to cross them out as potential problem areas.
On your team that leans into TDD, do you all have a dedicated PM who's putting together specs for you? How much "product work" do you all have to do as engineers, figuring out what the right requirements are, vs getting them upfront and then writing tests to check them?
Jestar342@reddit
You don't need specs to use TDD. TDD excels in the unknown space because it forces you to think "What's next?" and nothing more. Which is perfect for when you don't know what the final picture looks like. The tests will drive you to that destination when you get to the point of "Ok, there doesn't appear to be anything else to assert."
Take your objective, what's the first thing you could assert that shows some progress toward this objective? That's your first test. Once you are happy with that, what's the next thing you could assert? Second test. Etc.
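A tiny illustration of that loop in Python, with a made-up `slugify` grown one assertion at a time (each test existed, and failed, before the code that passes it):

```python
import re

def slugify(title):
    # Grown incrementally: each line here was added to pass one failing test.
    slug = title.strip().lower()
    slug = re.sub(r"[^a-z0-9]+", "-", slug)
    return slug.strip("-")

def test_lowercases():
    # First test: the simplest assertion that shows progress.
    assert slugify("Hello") == "hello"

def test_joins_words_with_dashes():
    # "What's next?" -- second test.
    assert slugify("Hello World") == "hello-world"

def test_trims_punctuation():
    # And so on, until there's nothing left to assert.
    assert slugify("  Hello, World!  ") == "hello-world"
```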
Independent-Chair-27@reddit
TDD will help you refactor code as you go.
You satisfy a requirement then clean up the code. It should become a cycle. It's not the fastest way to code as you need to produce more code.
It does mean your code is broken apart sooner, as testing large classes with multiple dependencies is really awkward.
I guess with TDD you should use the time to focus on the external interfaces of the code you're creating. They won't need to change too much/will be refactored early on.
It doesn't work well if your requirement is effectively "pull the correct bits of info from an API you don't know and are trying to learn", e.g. your method is digging headers out of an HTTP request. In that case your asserts are "I read Xyz from this collection", and if you don't know how to read Xyz then you can't really use TDD.
It doesn't work at all for POCs/prototypes, as you don't care so much about structure; you're trying to learn something as quickly as possible. It sounds like you're mixing delivery and prototyping.
IndependentMonth1337@reddit
If you shift your mindset to only think about the actual data and how it's handled or processed, it's easier to do TDD, because the UI is just something you put on top of it afterwards, and it could be anything from a website, a mobile app, a desktop app, or an API.
hamorphis@reddit
My team writes the UI in .NET (Avalonia). I had always wondered how to do TDD for this.
UK-sHaDoW@reddit
You don't test the UI. You have an abstract model that represents what UI will display and then you test that.
The UI is then a thin layer that takes that abstract model and, well, displays it.
Brought2UByAdderall@reddit
Even a lot of TDD die-hards advise against the methodology for UI.
Jestar342@reddit
It's generally very hard to unit test UI, instead keep the UI as thin as possible - i.e., something like MVVM so you can unit test the models and view models, whilst the "view" is doing nothing more than plumbing in the (view) models.
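Sketched in Python for brevity (the same shape applies to MVVM view models in any language; `TemperatureViewModel` and its properties are invented):

```python
# The view model owns all display logic, so it's trivially unit-testable.
# The view just binds to display_text and warning_visible.
class TemperatureViewModel:
    def __init__(self, celsius):
        self.celsius = celsius

    @property
    def display_text(self):
        # Formatting decisions live here, not in the view.
        return f"{self.celsius:.1f} °C"

    @property
    def warning_visible(self):
        # The UI binds a warning banner to this flag.
        return self.celsius > 40

def test_view_model_formats_and_warns():
    vm = TemperatureViewModel(41.3)
    assert vm.display_text == "41.3 °C"
    assert vm.warning_visible
```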
wvenable@reddit
Do you not have users though? I find the best way to get requirements from users is to show them a UI that is wrong and they'll happily provide the correct design. But from scratch, they are mostly incapable of providing all the requirements.
If I just build something according to what they think they wanted, codified it with tests and back end APIs and then built the only obvious UI that comes from that design it would be wrong the moment it's deployed.
BumbleCoder@reddit
I think the important thing with either approach is to make sure the tests actually fail in the proper scenario. I've gone to update tests that my code breaks due to signature changes or whatever, only to find there's no actual asserts, verifying calls...nada. Just a bunch of mocks setup so the test always passes.
vtmosaic@reddit
I use TDD to help me figure out how I want to design whatever it is I'm creating (from scratch). I identify the most basic unit of functionality required to deliver the whole component, the simplest unit test scenario, and write that test. Once that's passing, I'll add additional logic to handle an additional use case, and so on.
This approach has been the best technique I've found in my career for avoiding over-engineering and unnecessary complexity. It lets me experiment with an approach by building it incrementally. It's so much easier to refactor if I went a little way down a dead-end alley, which my tests show me before I've gone very far.
It's also really hard to break old habits, so it takes discipline and practice to really follow TDD to the letter. I still catch myself writing more code than necessary to pass the failing test, all the time. Even so, I'm still better off than when I used to build the whole thing and then test it.
extra_rice@reddit
You are more likely to discover what you could have missed if you write tests first, because it forces you to think of the end state in more practical terms. It's not a foolproof method, but I find it more effective than just shooting from the hip.
If you're doing something like this, it's essentially test-first development even if you're just thinking about it. Automated tests are artifacts of the practice; you're simply choosing not to produce them as you write the production code. But if you even just thought about the test first, then you're halfway there. I say 'essentially' because to me, at the core of TDD or Test First Development is thinking about software as systems.
dbxp@reddit
I think that's true for an SME but many programmers know more about their code base than the real world usage of their software
positev@reddit
Sounds like an issue that should be addressed
TangerineSorry8463@reddit
SME should be involved with test writing then.
I weep for the fact that QA seems to be a dying field, it used to be a great middle-man between non-technical project people and all-technical developers.
positev@reddit
Interesting, we have “verification champion”s at work but being a SME is evidently not a prerequisite.
Brought2UByAdderall@reddit
You find that and that's okay. I don't find that. Especially in complex UI work. What's really sucked in the last decade is all of these thought leaders and productivity consultants actually sticking their noses directly into my process. It sucks. And it doesn't work for me. It slows me down.
And no, I'm not a cowboy coder who leaves shitty code all over the place that breaks all the time. I'm the guy who didn't have all these problems with bugs in the first place. Because I think about what I'm doing. I don't adopt methodologies that protect me from having to do that. Aiming for 100% test coverage is bonkers. It makes modifying anything a giant pain in the ass. And where tests are always expected, whether a dev writes tests first shouldn't be anybody's fucking business.
Hot-Gazpacho@reddit
Point of clarification, if you’re making changes without tests, especially if it alters functionality, you’re not refactoring; you’re just changing stuff. And that’s ok, just don’t call it refactoring.
If you’re writing code that causes a ton of churn in your tests, then your tests are probably too tightly coupled to the code under test.
PileOGunz@reddit
Refactoring is just changing code to improve it without changing the functionality. Martin Fowler didn't couple it to unit tests when he wrote his book “Refactoring”.
Hot-Gazpacho@reddit
And how do you propose to go about making sure you didn’t change the functionality?
Brought2UByAdderall@reddit
The same way people were doing that for decades before you found the One True Way To Write Code.
Hot-Gazpacho@reddit
Settle down, friend.
I never said there was one true way; please do not put words in my mouth
PileOGunz@reddit
The refactorings in the book are very precise small steps you shouldn’t need a unit test.
Hot-Gazpacho@reddit
Yeah, I really don’t think that’s what the author of the comment I replied to was talking about.
GrinningMantis@reddit
Lots of mocks & expectation-based tests are the primary cause of churn in my experience
Testing with less granularity, only public interfaces & asserting side effects is usually the way to go
Brought2UByAdderall@reddit
And that's the OG definition of unit testing back before people got all culty about it.
edgmnt_net@reddit
That's pretty much my thinking too. But I think it has more to do with the nature of the units. Pure and general stuff like algorithms tend to be very testable, it is very easy to write tests for things like sorting algorithms and the tests are very robust. But side-effectful and complex interactions with external systems aren't very testable. Pure yet arbitrary translations of structures, like internal DTO conversions, are also hard to test meaningfully, because that just is what it is. For similar reasons, you should avoid testing stuff like specific errors returned by an endpoint on failure, assuming those branches are clear from the code.
Many times there are other things you can do instead of writing automated unit tests. You can test manually, you can write end-to-end / sanity tests, you should abstract appropriately and review code. There's only so much tests can do and people definitely overuse them (part of it was driven by unsafe/dynamic languages where code coverage is merely used to trigger code and discover errors, which can spring up literally everywhere).
lordlod@reddit
I find TDD really helps me with the design and conceptualisation.
The tests are a crude mimic of your users. So starting from that direction helps me produce a better design. Little things, like function parameter order or types should be optimised for the user of the function rather than the function internals. I find writing the test first is better because it comes from the user direction, writing the function first produces a definition and design that's convenient for the internals.
All this stuff, tests etc are communication signals. Having them fail or xfail isn't always a bad thing, it's a clear communication. Especially during the early development stage when things are very rapidly changing.
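A toy Python example of what "optimised for the user of the function" means in practice (`send_reminder` is an invented name; the point is that the test is the first caller):

```python
# Writing the test first means writing the call first, so an awkward
# signature shows up immediately -- before it's baked in.
def send_reminder(email, days_overdue, template="default"):
    # Caller-friendly order: the inputs every caller has come first;
    # internals the caller rarely cares about are defaulted away.
    return f"[{template}] {email}: {days_overdue} days overdue"

def test_call_reads_the_way_a_caller_thinks():
    # This line existed before the function did; it drove the signature.
    msg = send_reminder("a@example.com", days_overdue=3)
    assert msg == "[default] a@example.com: 3 days overdue"
```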
Electrical-Ask847@reddit
What about building it twice. Do a quick spike where you explore APIs and get a good idea what the output would be like. Then throw away the spike and do TDD.
polypolip@reddit
Funny, when I'm in your situation I find it easier to develop with TDD. It helps me write testable code early on, with parts that can be easily mocked if necessary. The architecture generally ends up better. And the IDE helps a lot, with refactors having minimal impact on the tests.
When I have a clear vision it's easy to write good code first then test it.
External_Mushroom115@reddit
Odly, this exact scenario is where I find TDD to be practical and feasible. Start with 1 test, make it pass; extract (refactor) what you need to reuse in second test etc…
Gradually refactoring to whatever you expect to need in production code.
Scientific_Artist444@reddit (OP)
Same.
That's true. That's why in the end it all boils down to your and your team's understanding of requirements. TDD is one way to do that. Documentation is a great way to be on the same page, so to speak.
BDD frameworks can probably work well for defining requirements clearly in a form that's also understandable by a non-technical audience, but frankly, I haven't seen any management people willing to learn them.
garlicNinja@reddit
In practice you just end up rewriting the test 10 times.
Complex-Many1607@reddit
You guys are writing tests?
ategnatos@reddit
It has a risk of copy/pasting source code into tests... or just powermocking the hell out of everything just to chase coverage metrics. Tests written after can be ok (even better if they uncover bugs and you go and fix your source code), but often there's a good reason not to trust tests written after the fact.
123_666@reddit
If you are in the explorative phase, only just figuring out what the actual problem you are trying to solve is, it doesn't make sense to start with writing tests.
Write the tests when you've narrowed it down enough that it's possible to write meaningful tests. Depending on the complexity of the thing it might come later, for a simple, easily reproducible bugfix you can usually write the test first thing.
putin_my_ass@reddit
I have found it to be necessary for information dense functions, like calculating an ROI with several different input figures and displaying a tooltip showing the work...that was a constant whack-a-mole until I started with the unit test and then wrote the functions after.
Simple stuff like a component that just displays a figure I don't generally bother with tests.
schmidtssss@reddit
All my tests always pass 🤔
Electrical-Ask847@reddit
TDD is a design tool. Writing tests after is more like QA.
willcodefordonuts@reddit
As long as you get good tests that cover the right things it doesn’t matter
-think@reddit
I’m not here to prescribe workflows for people, but I will say it was marked learning for me that writing expectation firsts changes my design and thinking.
Even working on personal cli tools, I write out the README’s usage section just so I get a feel of what I’m after.
TDD forces me to think api first, which of course, isn’t the only way. It’s just the way that I get the best results.
simon-brunning@reddit
Seems inefficient to me. You're still writing the tests, which is going to take as long as writing them one case at a time beforehand - if not longer, since the production code might not have been written to be testable. And you're not getting the design cues you'd be getting from test driving.
tobega@reddit
TDD is about verifying assumptions.
By writing a test that fails, you verify your assumption about the problem (or missing functionality). I have had cases, especially when bug hunting, where my assumption about what was wrong was actually incorrect (i.e. the test passed)
Later, the passing test verifies that your new code fixed the problem.
In some sense I always use TDD, even if I don't always write an automated test. I do set up criteria for failure and success, though.
When I have coded exploratorily, I may not know what to require on a detailed level, although I will know when I am "done". Then I will often comment out the code and write more detailed tests to prove that each line is really needed. Often they aren't.
aserenety@reddit
I talked to somebody who hates TDD. I
NastroAzzurro@reddit
Do you guys test?
hingedcanadian@reddit
I test in production
ar_reapeater@reddit
Ha ha. Crowdstrike says the same. Production driven development. Lol PDD
The_man_69420360@reddit
We have testers buddy, they’re called users
Scientific_Artist444@reddit (OP)
Frankly speaking, releasing the product as a beta for user feedback and then improving based on that is a much, much better idea than having a QA team discover all the defects and the development team iron them out through rework.
Why? Because users who use the application don't just help with functional insights but also usability insights. I don't think QA takes into account non-functional requirements which are also important to users.
And while the UX team does its best to anticipate how best to create a great user experience, no one other than real users know what they need in the application. Beta allows for immediate product feedback, extremely valuable to the product team.
cpb@reddit
Have any managers sought goals towards knowing about issues before your users do?
JoeBidensLongFart@reddit
Everybody has a test environment. Some organizations even have a separate production environment.
Scientific_Artist444@reddit (OP)
Remains to be seen how many do.
ar_reapeater@reddit
I remember reading that book when I was in college. Then I got into the industry and was so angry at the book. Lol. Some books should be 1 paragraph blog posts or tweets.
I have yet to see real-world implementations of TDD or Clean Code.
At work, we do what the other devs do. They have a process that makes sense and has allowed them to deliver.
Having said all that, the best dev book I can recommend is “Working Effectively with Legacy Code”.
renq_@reddit
Yes, that's my standard way of working. The only exceptions are POCs, but I never merge them. Once I have an idea of what I need to do, I start from the main branch and do TDD.
Scientific_Artist444@reddit (OP)
Is it something the development team decided or something that was enforced at organization level?
renq_@reddit
The dev team has decided :) We often write code together (pair/mob), we had about 50 coding dojos together. In other words, we learned how to do it and it kind of became our way of writing code.
Scientific_Artist444@reddit (OP)
Nice. Of course then, it is because of the value offered.
renq_@reddit
The issue with TDD is that many people, myself included, have attempted and struggled with it. It's challenging at first, and you need to learn how to do it correctly. That's why I suggest participating in coding dojos, as your team can learn more quickly than by attempting to modify actual production code. To begin, I recommend starting with TDD in a randori dojo.
zippolater@reddit
It’s a standard for me that I try to instil onto my colleagues and junior devs.
What's important is to have structure in your tests, i.e. //given //when //then. Given covers your inputs, when is the thing you're testing, and then holds your assertions. It works in conjunction with SOLID principles as well.
It took a while for me to get into it, but once it all clicks, it leads to clean code.
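A minimal Python sketch of that given/when/then layout (`Basket` is a made-up class for illustration):

```python
class Basket:
    def __init__(self):
        self.items = []

    def add(self, name, price):
        self.items.append((name, price))

    def total(self):
        return sum(price for _, price in self.items)

def test_total_sums_item_prices():
    # given: a basket with two items
    basket = Basket()
    basket.add("tea", 2.50)
    basket.add("scone", 3.00)
    # when: we ask for the total
    total = basket.total()
    # then: it's the sum of the item prices
    assert total == 5.50
```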
dbxp@reddit
Not really. I tried using it a bit this week around some bugs I was working on, but found it didn't really work: to replicate the bugs I had to change the interfaces and fundamental assumptions of the existing tests.
Also, I think AI has really changed things, as you're able to throw a method at Copilot and very quickly have it write some tests. I can write the code first, have AI write the tests, and use that output to ensure the code matches the spec.
_hypnoCode@reddit
Damn, finally someone who sounds like an experienced developer and not some Junior or Exec obsessed with tech influencers and bringing some sanity.
TDD is one of the dumbest things I've ever heard of. If you have a very clear set of criteria, like something scientific, and things aren't changing, then yeah, I can see TDD being great.
But how many people here are doing that kind of development? Most of us are working in some kind of constantly changing Agile environment where if you write tests first it might be very likely the requirements have changed by the time you get to the actual code. It's such a massive waste of time.
Scientific_Artist444@reddit (OP)
What is changing so much? The UI? If so, using patterns like Model-View-Controller can help. Patterns like MVC help create a separation of concerns between the data being presented and the view presenting the data. This way, your code to handle the data can exist independently of the logic for the view - the interface shown to the user.
Now if the changes you refer to are at the data level, i.e. the data architecture of your application is changing continuously, then either the product team isn't clear on what to build or the tech team isn't clear on the business requirements.
quiI@reddit
It's clear you've probably spent all of a few minutes thinking about TDD.
effusivefugitive@reddit
Nobody is writing tests months before their code. What are you even talking about? You sound like you don't have the slightest idea what TDD is.
The amount of "things you get tasked with" or "people interacting with your code" is completely irrelevant. It's obvious you're just flexing about how important you think your job is.
_hypnoCode@reddit
What are YOU talking about? Concept to production is measured in days, sometimes weeks, not months. That's what I was referring to for enterprise developers.
That was added after. No flex, but if you are measuring features in months then you're not doing the same kind of work I do.
chimpuswimpus@reddit
I've heard many arguments against TDD. Some I have some sympathy for. None have actually convinced me, but (and I apologise for this) this is the stupidest one I've ever heard.
If the requirements have changed that quickly, you'll still need to rewrite the code you wrote instead of the tests you wrote.
In reality, yes requirements change, but if they're changing as much as you say you're experiencing then there's something seriously wrong with either the communications with stakeholders or the understanding of the problem.
Main-Drag-4975@reddit
It'd be pretty convenient if there were a way to program so that one's interfaces were guaranteed to be amenable to testing though, huh?
dbxp@reddit
Something has gone horribly wrong if the spec is changing between writing a unit test and the code, that cycle should be measured in minutes.
I think the problem in my case is that the complexity is in the domain, not the code, which TDD can't really help with. If anything, tests written according to a misunderstanding of the domain just reinforce the misunderstanding.
chimpuswimpus@reddit
Is the AI writing the tests from the spec, not the code? If so, I think I'd say this counts as TDD.
dbxp@reddit
From the code. We'd need a better codebase and a custom AI model, I think, to generate from the spec.
chimpuswimpus@reddit
I don't get what the point of the tests is, then? They're just testing what the code does, not what it's meant to do.
At that point they're literally only regression tests.
dbxp@reddit
Specs are written at the story and feature level not the unit level.
Unit tests test regressions but those regressions can be within the development of a single story. For example I was recently working on an API endpoint which had to take 2 types of entities. I wrote the code and tests for the first type before moving on to the second with the knowledge that I wouldn't break it for the first entity in the process. The endpoint also needed to work for a list of 2500 items so I could address the functional requirements first before looking at performance.
WaferIndependent7601@reddit
Writing a failing integration test first is what I'm doing now. Getting rid of too many unit tests is so good and makes refactoring easier. When I first read this on Reddit I thought: what a bs. But now it makes sense to me, and I'm focusing on good integration tests and only doing unit tests where they make sense (for backend stuff: don't test that the service calls the repository with the correct parameters. That does not help anyone).
So no: TDD with unit tests for everything you write is outdated for me.
sobrietyincorporated@reddit
TDD is for people without deadlines.
ravigehlot@reddit
I wrote a bunch of unit tests for my PHP code using PHPUnit, and honestly, I learned a ton and improved my coding skills a lot. Testing really helps you get to know your code better. I see the benefits of TDD, but it was never my go-to method. I usually wrote a chunk of code and then a test for it, repeating that process. With TDD, I caught mistakes and avoided issues earlier, which was great. But for some reason, it always felt like it took me longer to finish tasks. In an Agile setting, that made it tricky to hit deadlines. Still, TDD is pretty fun and definitely makes things more interesting!
bobaduk@reddit
I use TDD more or less exclusively.
I never used to, because I tried writing tests and wrote bad tests, and they failed whenever I changed my code, and didn't assert anything meaningful, so I got no value from them. Later, I learned from a practitioner how to do TDD well, and I never looked back. I've been a TDD practitioner for ... somewhere between 15 and 20 years.
This isn't unusual, fwiw. I've found that, even when somebody tells me in interview that they practice TDD, they often do it badly. My expectation is that I'll have to teach engineers how to build useful, maintainable tests.
It's been a while since I came across the book, and I don't think I've read it cover to cover, but from my experience - yes - if you want to refactor legacy code, you need to approach it with a test-first mindset.
A lot of the books on agile technical practice, and surrounding philosophy - DDD, Refactoring, Continuous Delivery etc - assume TDD as a foundational practice. If you don't have tests, you're not refactoring, you're just rewriting stuff and hoping it doesn't break.
The best way I've found to fix painful legacy code is to write an approval test (https://approvaltests.com/), then start to write smaller tests that cover specific things as I refactor, sometimes using the Mikado technique (https://matthiasnoback.nl/2021/02/refactoring-the-mikado-method/). The approval test tells me that I haven't changed the high level functionality while I make changes. Without that, it's hard to break down complex code because the cognitive load is too high.
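A hand-rolled sketch of that approval-test idea in Python (the `format_report` function and its frozen output are invented for illustration; the approvaltests library linked above does this against files on disk rather than an in-memory dict):

```python
# Minimal approval test: freeze the current observable output of a
# legacy function before refactoring, and fail if it ever drifts.

def format_report(items):
    # Stand-in for the legacy code being refactored.
    lines = [f"{name}: {qty}" for name, qty in items]
    return "TOTAL %d\n%s" % (sum(q for _, q in items), "\n".join(lines))

APPROVED = {
    # Captured once from the current behaviour and reviewed by a human.
    "report": "TOTAL 5\napples: 2\npears: 3",
}

def verify(key, received):
    # Fails loudly if a refactor changes the high-level output.
    assert received == APPROVED[key], f"output drifted:\n{received}"

def test_report_is_unchanged():
    verify("report", format_report([("apples", 2), ("pears", 3)]))
```

The point is that the approved snapshot pins the whole observable behaviour, so internal restructuring can proceed freely underneath it.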
Tests mean that I can just ... try something... if the tests pass, my idea was good. If I can't get it to work in under 10 mins, I can git reset, try progressively smaller goals until my tests are green, then git commit.
It's a hard question to answer, because I think of it as basic hygiene. It's like asking a doctor the pros and cons of washing their hands in between patients.
borninbronx@reddit
What suggestions would you give to someone trying to learn this without anyone to learn from?
bobaduk@reddit
Hard for me to say, because that's how I learned! The original Test Driven Development By Example by Kent Beck is a great resource, and (plug alert) I wrote about how I apply TDD in Cosmic Python.
I keep thinking I should set up a Twitch stream or something to do TDD coaching on the internets, but it's effortful.
borninbronx@reddit
if you do please ping me up :D
I've read TDD by Example by Kent Beck. It's great as an introduction to TDD, but nowhere near enough to use it proficiently. For someone who doesn't know anything about TDD it's a must-read; once you've done that, though, applying it to the real world is another story.
You also need to know you SHOULD NOT follow the huge amount of resources online telling you to write 1 test per class and mock everything. Sadly, that's why most people hate TDD or think it is useless, even experienced ones.
Finding resources that actually teach you how to properly do TDD is really, really hard. Toy examples are cool and all, but what's missing is something that actually goes into the details of how you should do it in multiple different situations and domains. Writing backends and UI apps is completely different, and TDD applies differently there. Working with 3rd party APIs and frameworks that aren't really built well for testing, etc...
That kind of thing can be learned by trial and error, but it is tedious and adds friction to your work.
This sounds like a rant, and it kinda partially is. I wish more developers who have figured it out would go online and share their experience.
baynezy@reddit
I use TDD primarily. The lightbulb moment for me was that it's not actually a testing strategy. It's a design strategy.
The usual approach to software development is Design, Build, then Test. With TDD it is Test, Build, then Design.
So with TDD you write tests that define your requirements. You then write the simplest code possible to get the tests to pass. You progressively add more tests until you cover all the requirements. When you feel you have enough test coverage, you can refactor your code into the design you are happy with.
On an existing code base if you have a bug to fix then you first write a failing test that demonstrates the bug. Now fix the bug and make sure all your tests are green.
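A minimal sketch of that bug-fix loop in Python (the `apply_discount` function and the bug are invented for illustration):

```python
# Hypothetical bug report: a 100% discount produces a negative
# price instead of zero. Write the failing test first, watch it go
# red against the buggy code, then fix the code to make it green.

def apply_discount(price: float, percent: float) -> float:
    # The fix: clamp the discount to the 0-100 range so a full
    # discount yields exactly zero, never a negative price.
    percent = max(0.0, min(percent, 100.0))
    return price * (1.0 - percent / 100.0)

def test_full_discount_is_free():
    # Written first to demonstrate the bug, green only after the fix.
    assert apply_discount(50.0, 100.0) == 0.0

def test_over_discount_does_not_go_negative():
    assert apply_discount(50.0, 120.0) == 0.0
```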
neopointer@reddit
TDD is like sex in the school, many people saying that they are doing it. Very few people are actually doing it.
With the difference that sex is actually a good thing.
Seriously, I couldn't care less about TDD. But I think if you work with me and you do it, it's ok, as long as you don't want to force me to do it too.
The same comment goes to pair programming.
That being said, often if I have a bug to fix, being able to reproduce it (via tests) beforehand, can be a really good approach.
Saki-Sun@reddit
TDD light. You're half way there.
neopointer@reddit
I don't think so.
Building a feature from scratch with TDD is sooooo unproductive
And I don't always write a test to reproduce a bug before fixing it. If I don't do it always, it's almost like I'm not part (even partially) of the cult of the TDD. So no, I don't think I do "TDD light".
Saki-Sun@reddit
TDD works for highly complex and isolated business problems. When those kinds of things pop up I use TDD and it's like I've got a super power.
These days when I get those kinds of problems I get the whole team in, and they get to see first hand when it really works.
They don't quite grok it, it's kind of a hard concept to get. They just think I'm a magician.
Fair enough, neither do I. We are in the real world.
nappiess@reddit
Not even close. The key difference is in the case of bugs, the code and functionality already exists at the point of test creation. It's more like creating a test that someone didn't think of creating after writing their initial code, just a lot later on.
Saki-Sun@reddit
If the functionality already existed there wouldn't be a bug...
nappiess@reddit
TIL if you create a feature and it has a single bug in it it doesn't exist
Saki-Sun@reddit
Well, I guess. It could exist but was just implemented and never worked, or it might not exist at all because the developer forgot about that case.
One could muse that until you write the test it's Schrodinger's functionality.
Business_Try4890@reddit
Sex in the school...what 🤣🤣🤣
internetuser@reddit
The idea of TDD was that you only write code to fix failing tests. If you want to add a feature, you would write a failing test for that feature, then fix the test by implementing the feature.
This made sense at the time because it ensured that your new code would have test coverage. Nowadays we have code coverage generation tools, often built into CI, so the argument for doing TDD is weaker than it used to be.
We have also learned that code coverage is a necessary but not always sufficient criterion for test quality.
I generally expect code and tests in the same PR, with acceptable test coverage and acceptable test quality. I understand that people have different problem solving styles, and I don’t ask or care whether the tests were written first.
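The feature-first loop described above can be sketched in Python (the login/username example is invented for illustration):

```python
# Hypothetical feature: "usernames are case-insensitive on login".
# The failing test exists first; the function is the code written
# to make it pass.

def normalize_username(raw: str) -> str:
    # Implementation added only after the test below existed and failed.
    return raw.strip().lower()

def test_login_is_case_insensitive():
    assert normalize_username("  Alice ") == "alice"
    assert normalize_username("ALICE") == normalize_username("alice")
```

Either way the PR ends up containing both halves; the only question is which was typed first.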
lumut1993@reddit
I've never seen anyone, in my 10+ year SE career, doing TDD. But everyone says they do it, on LinkedIn.
unflores@reddit
I used to do Tdd a lot more in rails. I've moved to a typescript / node env and I do it less. Also half of what I'd test before I now assert with types. It has def made me think about tests differently.
If I go back to a nontyped language I'll probably go back to tdd.
Cupcake7591@reddit
Not religiously. I do like it for bug fixes and sometimes for small changes, though.
mkluczka@reddit
if you can't write a test that reproduces the bug, that means it doesn't exist (or is gamma ray induced)
123_666@reddit
This only works if you can replicate the environment/system state where the bug happened. Sometimes easier said than done, especially when working with custom hardware.
TomerHorowitz@reddit
Or clients with air gapped environments
PurepointDog@reddit
Meh, self-contained is the easier case. When the tests start to require mock services, that's where I call it quits and get frustrated with the direction of the project
AdmiralAdama99@reddit
I also like TDD for classes with lots of string-in string-out methods. Find an edge case, write the test, make the fix. Don't even need to manually test which is nice.
The_Axolot@reddit
Same here. I think a lot of TDDers exaggerate its benefits, which I wrote about in [this article](https://theaxolot.wordpress.com/2024/08/09/its-your-fault-people-dont-like-tdd/)
loxagos_snake@reddit
IMO this is the most sensible use of TDD; apply where needed. It's extremely cumbersome to try to write tests for a system that doesn't even exist, and the mindset of "write the code so that the test passes" can lead to all sorts of dirtiness just to get a green checkbox.
But if you have a bug, it makes sense to take a step back, write a test for the expected behavior and then fix the code so that it passes. You are pretty much fixing leaks in the most secure way.
positev@reddit
I feel that you are missing the point of TDD.
The point is to write code that is testable and that you can change with confidence. Just TDD’ing for a bug is doing so with a system that is probably hard to test because the work was not done up front.
Writing code “just to check a green box” that leads to “dirtiness” happens because people like to skip the last step: refactoring.
My current philosophy: if you have no automated way to explain why the code exists (a test), then are you sure you even need that code? If we don't need code, it should be removed. What is stopping me? Tests.
crazylikeajellyfish@reddit
I mean, code isn't written to satisfy tests, it's written to satisfy needs. Tests help confirm those needs are being satisfied. Because turn it around: how do you know that particular test case needs to exist?
FWIW, I think the real distinction here is how legacy your codebase is. Testing unproven ideas built in greenfield development will just force you to update the tests as well while you iterate on what the right solution looks like. TDD makes more sense with a clearly defined requirements document, but I think there are lots of environments where getting completely accurate requirements upfront isn't feasible.
Ok_Platypus8866@reddit
because there is some spec somewhere that says the code should behave that way, or you as a developer have decided on your own that the code should behave that way.
If there is a need for the code, there is also a need for a test. Exactly who determines that need is somewhat beside the point.
Brought2UByAdderall@reddit
It's completely fucking absurd to aim for 100% test coverage. The original IEEE definition of unit testing was to test large modules of code where they intersected with each other. Eventually it got updated to something like the smallest unit of code possible. IMO, it's just another example of productivity consulting getting completely out of hand. I get that TDD works for a lot of people, but the first time I heard a back end team say they couldn't accommodate a feature request because of all the tests they'd have to rewrite, I was thoroughly over it.
loxagos_snake@reddit
That hits the nail on the head IMO.
Full-blown TDD just sounds like one of those ivory-tower ideas that almost never work in reality. Yes, in an ideal world, your code should always be safeguarded and fit neatly within the constraints of the tests. Then we'll get as much time as we need to make it pretty & fast, it'll work perfectly, and we'll go for drinks to celebrate our success.
In my experience, both in a startup environment with tiny runways and a huge-ass international company, this never gets to happen. Everyone wants features and they want them fast, so it makes more sense to build the system first, add as many tests as possible and deliver. You will be asked to cut corners, and you will do as you are asked because that's your job. Tests are usually the first thing to draw a short straw. You can warn your boss that the code will not be as thoroughly tested if you rush it out the door and let them decide; what will you tell them if the implementation is not there in the first place?
In these cases, Reddit likes to reply with "I would just tell them to shove it and start looking for another job" but this is too sci-fi for my likes.
positev@reddit
Who said anything about test coverage?
perdovim@reddit
Any development paradigm followed literally is inherently broken and produces bad practices and bad code.
Doing some research and thinking about what you're trying to build before you start coding is a good thing. That was the beginning of Waterfall; it went bad when it codified how people should do it and mandated it for everything.
A similar argument can be made for TDD, Agile, Scrum, Kanban,...
The principle of having a test suite you can run that gives you confidence in your project, and of starting with testability in mind (by writing a failing test that you want to fix, and not merging until it passes), isn't a bad principle. Now, does that test need to survive beyond this dev cycle? You don't want your test suite to grow unmanageably...
loxagos_snake@reddit
Maybe, but I also feel like this whole approach is very dogmatic, as with other 3-4 letter paradigms. In the end, it's just a tool. Real-life software can rarely be constrained within a single approach.
And to be honest, using TDD everywhere upfront just feels extremely awkward. If I'm making a run-of-the-mill backend API, I don't need tests to know what kind of code I need. The client-facing interface is enough to inform my decisions, and any services I'm going to write should serve the interface's needs. I find it easier to reason about what should be tested once a system is in place because after writing and adapting the code a few times, I'm more of an 'expert' on that specific problem. I don't find it hard to test at all with this approach.
I think it's at its most useful when applied to specific contexts, that's why I mentioned the bug. If something in your system fails, it makes sense to add a few guardrails first before writing code that fixes the problem, so that you can ensure it never happens again. When applied globally as a development philosophy though, it's just one of those things that might not mesh well with reality. If you are in a startup environment and you get asked to cut corners (which yes, is not a good development strategy, but it happens anyway), how are you going to explain that you can't because you left the actual implementation for last?
In general, I'm not very fond of approaches that claim a direct path to 'success' via a codified process such as "red-green-refactor". It rarely works like that, but I'm happy to be educated.
guyfrom7up@reddit
During initial development, it's usually more useful to write something closer to full-system tests rather than precise unit tests. At that stage, you are mainly wanting to test the input/output system behavior rather than details of the implementation, as the current implementation might not be great and is likely to change. Further along in development, it's good practice to go back and add more unit tests once you are more confident that you are happy with the inner-workings and organization of your project.
Scientific_Artist444@reddit (OP)
Same. Only in my case, I have never used TDD (though we do write tests). But some quality compliance teams now insist that we do.
Steinrikur@reddit
The only time I really used TDD was to reproduce an expired cache bug. It could happen in the field, but was incredibly rare.
But filling up cache_size+1 in a test was easy to reproduce, and then fix.
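That trick can be sketched in Python (the `BoundedCache` class is invented for illustration; the point is deterministically forcing the eviction path by filling capacity + 1 entries):

```python
from collections import OrderedDict

# A tiny bounded cache. The field bug only appeared after eviction,
# which was rare in production but trivial to force in a test.

class BoundedCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self._data = OrderedDict()

    def put(self, key, value):
        self._data[key] = value
        self._data.move_to_end(key)
        if len(self._data) > self.capacity:
            self._data.popitem(last=False)  # evict the oldest entry

    def get(self, key):
        return self._data.get(key)

def test_eviction_after_overfill():
    cache = BoundedCache(capacity=3)
    for i in range(4):  # capacity + 1 inserts forces one eviction
        cache.put(i, str(i))
    assert cache.get(0) is None   # oldest entry was evicted
    assert cache.get(3) == "3"    # newest entry survives
```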
JustGoIntoJiggleMode@reddit
TDD is about efficiency. You need to think ahead what you will produce next. And once you produce it, you have a safety net for refactoring and code cleanup. That’s all there is to it. You can do it without the rest of your team doing it. The main thing to get started is for someone to show you HOW to go about it in your language/area of software development.
_randomymous_@reddit
I use both approaches:
If something is simple, I write the implementation and then the test as a safety measure for possible future changes.
If it’s more complex, I go with the test first, as it helps me understand the scope better and what functionality my function will have; otherwise I may overcomplicate it.
twinbnottwina@reddit
Having done TDD in the past, in a forced pair-programming environment, I wouldn't go back. But it was valuable experience, and I like learning new paradigms.
One person would write the tests as the driver, then the next person would implement the code/story as the second driver. Some tasks went smoother than others, and god forbid we got a story that wasn't fleshed out enough, or we got stuck, or some other problem. Then you end up implementing things and writing tests later, just to ship, which defeats the purpose.
TimelessTrance@reddit
My experience with TDD is that it works well for bug fixes and for new feature work on mature software. On new software your requirements may not be defined well enough to the point that you are doing a productionized POC
cpb@reddit
I do. And people take notice. Not just at work, but in interviews. They really notice.
curiouscirrus@reddit
TDD is fun to do with AI. You write the tests and ask an LLM to implement a solution. Of course, it works in the other way too where it writes the tests to your implementation. It’s also great at expanding additional test cases you might not have thought of or were too lazy to write. I find I go back and forth using both traditional and TDD methods.
syndicatecomplex@reddit
I try to get new features working end-to-end and then add functional tests based on those requirements. Then if I discover any bugs in edge cases I may include tests to cover that as well.
If I’m not rushed to complete the story quickly I think I’m more likely to refactor whatever I was planning to implement so that I really don’t need to add too many unit tests. Just so that I’m not leaving a massive problem in production for whoever needs to use it next.
But this can all change team to team, or PM to PM, so I just try and shift my test writing focus to however much is expected at the time. If it means writing code more horizontally, so be it.
MySpoonIsTooBig13@reddit
TDD becomes less daunting when you stop thinking of it as writing tests. Instead, think of it like a bunch of mini "main"s which you're using to execute little chunks of your code - just whichever little piece you're working on right now. Of course that piece you're working on is part of some bigger picture, but you've cut out this one piece and that's all you're focused on right now.
It's then almost like a fantastic side effect that you usually end up with a nice test suite out of the deal.
a_library_socialist@reddit
The big benefit of TDD isn't tests - it's that you're going to realize sooner that the understanding of the problem is incorrect or contradictory. When you write a test, you have to have the problem fully specified, and so it becomes apparent.
That mismatch of requirements, not realizing there's a problem till late, is what causes most delays and bad code (because the devs had to kludge to make the date with little warning). TDD helps with that. But takes discipline to front load costs.
Aromatic-Fee2651@reddit
This!
MasterBathingBear@reddit
This is what I’m trying to get my team to realize. When you have to think about what the outcome looks like, you’re more likely to think of all the problems with the way the problem is stated.
anonymous_drone@reddit
I practice TDD extensively. When I started around 10 years ago it was a mess. I had to develop a sense of how to make the tests reflect a requirement and not how the code is currently factored. "This class should call this other one" tests led to serious design and maintenance pain. I felt like it prevented bugs, but it was too slow and unfun to work on. Future changes forced me to rework old tests too often.
Now I TDD by writing a failing test that sounds like the intent of the story "if this widget was created through this api, it should be returned in this other widget search". I'll have a test that does this against a real database and another that does it against a mock database.
Generally, the tests reflect facts that don't change very often. They read like an instruction manual for what a component was intended to do. They catch bugs all the time. And I normally only have to change them when a new story actually alters the expectations meaningfully.
Sometimes I find that a part of the implementation is so intricate that I will isolate that one area and write the old style micro tests. Most often it's a complex data access or mapping component.
But most of the time I actively avoid what I consider to be ideologically driven purism ("every class gets an interface, and is forcibly isolated from every other class, and each set of tests makes sure it executes the other class"). I find those hurt more than they help. My tests are trying to check "does the damn thing do what the story asked it to do" and "can I step through this old feature without having to run the app directly and remember how to kick off this process".
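A sketch of that intent-level style in Python (the `WidgetStore` names are invented for illustration; in practice the same test body would run against both the real database and a mocked one):

```python
# Intent-level test: "a widget created through this API should be
# returned by widget search". An in-memory store stands in for the
# database here; the test reads like the story, not like the code's
# internal structure.

class WidgetStore:
    def __init__(self):
        self._widgets = []

    def create(self, name):
        self._widgets.append({"name": name})

    def search(self, term):
        return [w for w in self._widgets if term in w["name"]]

def test_created_widget_is_searchable():
    store = WidgetStore()
    store.create("blue widget")
    results = store.search("blue")
    assert [w["name"] for w in results] == ["blue widget"]
```

Because the test only touches the public create/search behaviour, refactoring everything behind that boundary leaves it green.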
borninbronx@reddit
Yes. I'm starting to learn to do this. The hardest thing is that there aren't many examples and materials to teach how to properly TDD like this.
And without anyone to learn from, it's a long and error-prone process.
UK-sHaDoW@reddit
Yes, and I encourage people to use it. But they don't have to. The reason I encourage it is that I have seen tests go green even though I've ripped out the code that it's meant to be testing. Impossible for that to happen in TDD as you always see a test fail first
TitusBjarni@reddit
From people new to testing, I've seen way too many instances of tests that do not do what they think they do. The test passes even without the prod code change they made. Passing tests are not our goal; good tests are. TDD helps them find that out themselves.
TitusBjarni@reddit
If a test passes the first time you run it and you've never seen it fail (as is common in the test after approach), it's not uncommon that the test is just wrong. TDD helps ensure your tests are doing what you think they're doing.
Before TDD, we had the book "The Pragmatic Programmer", which suggested "testing your tests" by purposely introducing a bug and ensuring the tests catch it. TDD is just a better workflow for testing your tests.
I find TDD helps me think through edge cases much better.
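The "test your tests" trick can be sketched in Python (all names invented for illustration):

```python
# Temporarily sabotage the code and confirm the suite notices.
# A suite that stays green against broken code is testing nothing.

def is_even(n):
    return n % 2 == 0

def broken_is_even(n):
    # Deliberately sabotaged version of the function above.
    return True

def suite_catches(fn):
    # Run the same assertions against an implementation; report
    # whether the suite would fail, i.e. catch the bug.
    try:
        assert fn(2) is True
        assert fn(3) is False
        return False  # everything passed, nothing caught
    except AssertionError:
        return True

def test_suite_detects_sabotage():
    assert suite_catches(is_even) is False        # green on good code
    assert suite_catches(broken_is_even) is True  # red on broken code
```

Writing the test first gives you the "red" half of this check for free, since you always see it fail before the implementation exists.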
teoska91@reddit
We attempted to use it, but it didn't work out at all. We gave up. We are basically focusing on the implementation itself, then writing unit and (if possible) integration tests covering it.
Interesting-Ad1803@reddit
I'm not a fan of TDD. It's putting the cart before the horse. I know that TDD fanboys and fangirls swear by it, but I've not seen the benefits.
I am, however, a huge fan of quality unit tests. They are indispensable in making quality software. I, and I think most developers, find it much easier, simpler, and better to write unit tests after, and I mean immediately after, writing the class or method.
If TDD really worked, there would be legions of people using it and there just aren't many. There are just a few "purists" who view it as some sort of religion.
wedgelordantilles@reddit
Yes, but I did it wrong for a long time and lots of people still do.
Watch "TDD Where Did It Go Wrong" https://youtu.be/EZ05e7EMOLM?feature=shared
It will skip you years ahead.
davidblacksheep@reddit
I do TDD in the sense that 'tests are a first class concern', not in the sense that 'I always write tests first'.
My philosophy is that tests are a good test for whether some code is usable. If it's easy to write tests for, then it's probably going to be easy to use in other contexts.
Kususe@reddit
My point is simple. Just test.
I personally believe and advocate that TDD helps juniors most of the time, forcing them to think first, code later. This requires a change of perspective, and I've seen multiple pieces of evidence in my inner circle that the approach works pretty well and pays off in the long run.
I personally loved TDD, and I think it's way more valuable than just writing tests after the implementation for a simple reason: you cannot shape the test code to the implementation just to make the tests pass.
TDD comes at two costs:
- you should find someone who has grasped it to mentor your journey; it's very easy to misread it and get off the path
- it has a steep learning curve, since it changes your way of thinking about software.
lherman-cs@reddit
I think TDD is great if the scope is well defined and small.
Otherwise, I usually get the code to work first, I tend to find hidden issues after I integrate into the system. If I were to do this with TDD, I would have to rewrite the code and tests instead of just the code.
Hour-Calendar4719@reddit
Both TDD and BDD
gollyned@reddit
Whenever I use TDD things turn out great. Things turn out great when I don’t use TDD, but it takes a little longer. TDD makes me think more upfront.
OverEggplant3405@reddit
Personally, yes, I use TDD, but I don't force it upon others. It's definitely a skill that makes coding harder until you get good at it, at which point it makes coding easier.
I don't agree with most of Robert Martin's views, but one of his lectures involves a demonstration of TDD to create a stack data structure. Starting here: https://youtu.be/58jGpV2Cg50?si=bOrWX1gBRvIrlyew&t=2611
That is how I learned to do TDD, and I still use that practice, but not religiously the way he does. (I don't make unit tests just to instantiate random objects I imagined, for example).
I suggest that you learn it on your own time while not at work. You will screw it up the first couple of times. So, try it with some practice projects a few times.
Dos and Don'ts:
Pros:
Cons:
x2network@reddit
You lose momentum and it takes the fun away.. who wants that?
soolaimon@reddit
It's my favorite way to work, once my public interface has taken shape. Then I can TDD as I handle the non-happy paths and edge cases, with a new test each time I think of a new problem. And I can refactor, fix bugs, and optimize with confidence, plus the little dopamine reward I get every time all my dots are green.
When I'm staring at a blank file, I might write a test, but the code for the test will just be log/debug line with the function output.
2legited2@reddit
Yes, I swear by it after doing it for years. I don't have the dreaded "now I need to write tests" situation and I end up only with as much code as needed to get the functionality done. I'll be happy to guide you if you want to get into it.
agrostav@reddit
Do NOT jump on the hype bandwagon. Just make sure that you have a useful and extensive set of tests.
ButterflyQuick@reddit
Calling TDD a bandwagon is a bit of a reach
perpaul@reddit
Perhaps not a bandwagon, but I do find many TDD folks to be rather dogmatic about their chosen approach to development
ButterflyQuick@reddit
One person's dogma is another person's discipline
I agree that lots of devs can be overzealous about lots of things, but I don't think that makes a given thing a bandwagon. This is all personal opinion, though. In my experience TDDers haven't been bandwagon-y. I'm sure other people have had different experiences, but TDD has been around long enough for any hype bandwagon to have worn off; trends come and go, so I'm sure there are periods it has more hype around it.
No-Appointment9068@reddit
Do you know where I use TDD the most? Interviews.
The reality is it's a decent methodology, especially when you're dealing with a codebase you might not be super familiar with, or tangled legacy code, but restricting yourself to TDD isn't beneficial in my opinion. Sometimes it's just faster to write tests after.
Little-Boot-4601@reddit
On paper TDD is great. I go into every project with dreams of TDD.
But 3 months later you’re rushing to fix a critical bug while a colleague is stalling during a live demo to investors, half your data is mocked, upcoming features are unspecced, and the deadline for 3 different unstarted features is in 6 hours. You’re lucky if any existing tests pass let alone new tests being written or any kind of TDD being followed.
And of course the tests you didn't write are now considered tech debt, and we don't have time for tech debt.
floyd_droid@reddit
All our unit tests are integration tests. I write the tests first and get them reviewed by a teammate. Then the code changes are straightforward most of the time. Reviews are simple too.
kcadstech@reddit
I use ATFD… AI Test Following Development.
iron2000@reddit
When everything is specified, yes, but often you have to do some R&D/PoC work first. So my approach is: make it somewhat functional and try to discover whether your planned design and tech stack fit your use case. Afterwards, write your integration and acceptance test suite so you can go forward with refactoring your solution and optimizing everything, so your code is clean and fits the requirements. If you write the test suite first and discover major changes during implementation, you often have to rewrite the test suite too. When you discover a problem/defect, try to write a test first that fails in that scenario, then implement until it's green. In my opinion you should adapt the principles out there and combine them so you are efficient in your work.
NiteShdw@reddit
Only in very specific situations where the requirements are very clear. Otherwise I write tests after.
ApprehensiveKick6951@reddit
Not really. It comes down to personal preference and I've never seen it enforced as a style. I almost always write tests if applicable, but TDD is generally good for resolving known bugs because the troubleshooting feedback is faster and easier to work with.
LlamasOnTheRun@reddit
I use it for Java a lot, but for front-end React I haven't quite figured out a good formula. Unit tests there seem so verbose & difficult to predict
rtc11@reddit
only when fixing bugs.
MisterFatt@reddit
Like others have said, we write tests and try to have good coverage, but I think most of us code first and then write/update tests later. The bootcamp I attended taught actual TDD: writing out test cases first to establish what you do and don't want from your code, then writing the methods/functions that pass your tests. But I've never seen it actually done in a professional setting
Matt7163610@reddit
On the backend yes because code structure lends itself more towards function calls with args and return values.
On the frontend no, because often you are crafting a UI working towards visual functionality, and if you're using a UI framework you often don't have up-front knowledge of what elements to interact with in tests. So frontend unit tests, in my experience, are better written to match the code and then prevent regressions when making future changes. Using test coverage metrics, it's possible to work towards 100% coverage, and doing so causes you to discover missing test cases.
plasma_yak@reddit
Technical Design Docs, yes. Test Driven Development, no.
I think it depends on the domain; sometimes TDD can work well. Most of the time I feel it works best to develop with a local environment where you can test things as a full system quickly, then build in the tests that you think will be most valuable for future changes to your system to adhere to. Tests are guardrails for the future.
slowd@reddit
TDD has its place. When refactoring or writing a particularly tricky feature it’s very useful, but for many small tasks it’s overkill.
TurbulentSocks@reddit
Most people - especially juniors - do test driven development. It's just manual tests; they write some code and then run it to see if it works. TDD is just automating that last step.
TwisterK@reddit
I've been using unit tests to write system functions and omg it is a godsend. I can refactor my code and rerun the tests to ensure I haven't introduced any regression bugs, and it is so empowering. I don't think I will ever write a system function without unit tests again. To me, it is like version control: once you have a taste of it, you will never want to develop without it.
However, as we approach higher levels of abstraction (nearer to the UI layer), TDD just feels less relevant.
herendzer@reddit
Never
realadvicenobs@reddit
my take is quality over quantity
here are the two rules that i usually follow:
if it's a CRUD app with simplified business logic, 99% of my tests are integration tests, and 1% are unit tests to test my request payload validation
if it's a CRUD app or service with complicated business logic or complex state management, the complicated business logic lives in the domain layer (DDD) and i write unit tests for that too
tkbillington@reddit
TDD is great in concept and helps you keep in mind thoughts to make methods testable. But everybody just writes the tests later.
bwainfweeze@reddit
I think it was Feathers who said that all code without tests is Legacy Code.
ub3rh4x0rz@reddit
Tdd shines when writing framework code. Writing framework code is something to be done extremely sparingly
bwainfweeze@reddit
Intermittently like a booster shot.
Everyone should do TDD. Whether that’s all the time or periodically is another story.
dungeonHack@reddit
I think TDD is most useful in a legacy migration project. It helps to verify that what you think the system should do is what the system actually does.
Greenfield projects are mostly organic exercises in figuring out a particular problem, so don’t need to be as rigorous.
4444For@reddit
Yes, but no fanaticism :)
Andrewshwap@reddit
Really depends on my deadline. If I have a very fast, strict deadline I will write my unit tests after. If I have more time, I'll use the TDD approach so I can knock everything out in a few sprints
notkraftman@reddit
In my experience the only place TDD works effectively (i.e. without creating a tonne of extra work rewriting tests) is when everything is very clearly defined. In practice this means bug fixes, where you know what the bug is and the expected behaviour, or rewriting/refactoring something that already exists.
ISDuffy@reddit
Bug fixes are definitely where I get the most usage out of it. I've even done sessions where I've been on a call and got the manual testers to help write the unit tests.
Traditional_Hat861@reddit
Heavily. We do proper TDD with XP, trunk-based development, and proper CD. We also do Kanban. All of these tie together nicely.
fakehalo@reddit
How's about sprinkling a little improper TDD and CD into your diet, live a little.
Traditional_Hat861@reddit
Saw too much of it everywhere else. Constraints free and empower you. There's a culture of pair programming that also comes with it. I wouldn't have it any other way, tbh, now that I've experienced it. So many learnings for me. Kanban is also much less stressful than regular agile. I could go on and on
jawisko@reddit
I have worked at 6 companies, and not a single one of them follows TDD. Only 2 out of 6 had complete unit test coverage; 1 had 80%, and the other 3 had separate QA teams that wrote tests after dev was done.
bellowingfrog@reddit
No. It can be useful in certain circumstances, but it presumes you know the output shape of your code before you start writing, which is not true in many cases.
Usually I prototype and try copy-pasted code and iterate until I have a basic proof of concept, then I deploy the code to the cloud to make sure it works there too, then I simplify and improve the code, and finally I add unit tests and documentation.
banananannaPie@reddit
Unit tests, yes. TDD, no. The issue is it doesn't work well when requirements change quickly.
SpecialistNo8436@reddit
I do sometimes, for example when I have manual calculations available and I can use those to make sure the code translates to the same results
I do the same when I am 100% sure about enough assertions to get a clear pathway
If I am in discovery mode (aka, have no clue what I am doing) then no, it is a waste of time that produces a billion needless tests that should be intrinsically tested by other tests
I actually delete a bunch of the tests produced by TDD before committing
AdministrativeHost15@reddit
No. TDD makes sense for a single executable that performs complex calculations. Calculate the expected value manually. Create the test. Verify it fails. Implement the code. Verify test passes. But for a three-tier app with complex flows it requires too much setup.
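The cycle described above (compute the expected value by hand, write a failing test, implement, watch it pass) can be sketched in a few lines of Python. The `compound` function and its numbers are hypothetical, purely to illustrate the red/green loop:

```python
# Step 1: compute the expected value manually.
# 5% compound interest on 1000 for 2 years -> 1000 * 1.05**2 = 1102.50
def test_compound_interest():
    assert compound(1000.0, 0.05, 2) == 1102.50

# Step 2: run the test and verify it fails
# (here it would raise NameError: `compound` is not defined yet).

# Step 3: implement just enough code to make it pass.
def compound(principal, rate, years):
    return round(principal * (1 + rate) ** years, 2)

# Step 4: run the test again and verify it passes.
test_compound_interest()
```

For a single pure function like this the whole loop takes a minute; the setup cost the comment mentions only bites once external tiers (DB, HTTP, UI) enter the picture.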
ISDuffy@reddit
I do test driven development when I think it right, so functions with more complex logic around numbers ect or when I got a bug that comes in.
People keep having discussions about the integration tester writing all the tests before development starts, as that's how they see TDD. I keep trying to explain that this is not TDD, and that it will lead to developers writing poor code to fit tests from someone who doesn't write application code.
rcls0053@reddit
Yes. I've used it many times, but it's a complete change to my earlier working habits so I tend to slip often. Also, it's really difficult sometimes on the frontend or when working with new technologies. You often have to do a lot of experimentation to just figure out how something works and tests just get in the way at the beginning.
Witherspore3@reddit
Early in my career I really struggled to balance utility of TDD with the personal workflow changes.
Then I got the opportunity to work with a few legendary mavens in this field who used TDD effectively on difficult and ambiguous technical challenges. They didn't really care about the tests or test-coverage aspects of the practice; they used it to create intent and drive design by contract (DbC).
What impressed me most was the sheer speed and consistency in output of these practitioners. It was at least 5x what I was capable of at that time and most of it was the specific way they were employing TDD. I made it a personal mission to change my TDD mentality.
A couple of years later it all clicked. Since then I've seen many teams really struggle in the same way I had been struggling. They'd focus on coverage, or be dogmatic about mocking, or set up complex interaction-diagram scenarios, when a good foundation in OO decoupling and contract abstraction would simplify the testing and iteratively improve the code design.
It’s hard to explain in words; a person will never become a great boxer by watching boxing matches and reading books on boxing. It’s something only developed by training with a partner who is (hopefully) better at boxing. But, aside from the boxing itself, good cardio and strength is required as well. That part is more design basics, pattern experience, and modeling that provides a general direction as you cycle through the TDD workflow.
seba_alonso@reddit
Yes, most of the time. Though I don't work that way if I'm doing a spike, PoC, or a small script; in those cases I normally get feedback using a REPL.
ezaquarii_com@reddit
Yes, TDD all the way. Not religiously though: I often write a bit of code and the test follows, so it's the opposite of what the TDD bible suggests. But whenever I write code, I already have a test in mind or captured as a BDD spec.
We also keep the coverage at 100%.
Strongbad536@reddit
10 years exp, now working at a startup for the past year. Zero to minimal tests. Ship valuable product over all else. Not suitable for all companies
PaxUnDomus@reddit
Sure, everyone uses TDD.
Until it's time to actually use TDD.
daedalus_structure@reddit
So the reason this comes up when talking about legacy code is that you want to preserve the behavior of the system behind any major refactor.
I would hesitate to call that TDD as the point of TDD is to write the tests first and let them drive the architecture and implementation of the system.
This is just using unit testing in refactoring, a practice I strongly recommend.
simon-brunning@reddit
Absolutely. The other day I was doing a coding interview, and they told me not to write tests. Turns out I've forgotten how to code any other way. How do I know what code I need to write without tests, or how to structure it?
Ok_Platypus8866@reddit
Or how do you even know if the code works? Having to build and run an entire app in order to test the code you just wrote is so slow and painful. I know that can't always be avoided, and it really depends on what sort of code you are working on. But it is so much easier and faster to iterate on code when all you have to do is run the test suite for the code you are working on.
Scientific_Artist444@reddit (OP)
How's your experience with it? Seems like you enjoy coding this way.
simon-brunning@reddit
I love it, and I'm sure it results in better, more maintainable code. TDD is at least as much about design as it is about testing.
I do remember that there was quite the learning curve, though - there's more to TDDing effectively than you might think. I was taught by the best when I joined Thoughtworks. This TDD book would be a good start: https://codemanship.co.uk/papers.html
chimpuswimpus@reddit
It absolutely does. It's not a magic bullet by any means but I can tell pretty quickly if code has been written TDD. It tends to be much easier to follow.
I actually do BDD, outside in, starting with 100% coverage in Gherkin and then 100% coverage in other tests. If I'm working in code not written that way it feels "spongy" like I'm working on a foundation I'm unsure about.
DingBat99999@reddit
If this is Working Effectively With Legacy Code, then TDD wasn’t THAT much the focus of the book, at least how I remember it. Maybe it’s because I was already a TDD user.
Anyway, I’d be careful with some of the answers here. They make it clear they don’t really know what TDD is. TDD is mostly about refactoring, not testing.
uprightsleepy@reddit
In order to use TDD you have to have clearly defined AC which almost never happens where I work. Lol
AdamISOS@reddit
Yes. It’s the way.
daguito81@reddit
I don't do web development, so front-end/back-end stuff is not really my thing. I don't write a lot of code for production anymore, as I do mostly systems design and architecture, but I've always been more in the "data" realm: data engineering, ML, MPP, AI (now), experimental stuff, etc. So TDD has always been super hard for me, mostly because whenever I start we're in the "don't know what you don't know" quadrant, so it's pretty hard to write a test where even the output is "maybe it'll be this". Take a classification model (just to start simple): you have an input and an expected output (% chance of customer churn), but that output will change as you develop the code and the model. A new feature will push that number up or down. So you don't have a specified output besides "it'll be a number".
Then imagine you do all your data transformation inside a certain block. As you develop, the schema of the table will change (if that table is even available locally to begin with). So the test will never pass until I finish, then I change the test to match the latest changes and now it passes.
So it's really hard, because a lot of the time these are non-deterministic systems, with a lot of experimentation later on, etc.
After it's done, then yeah, you can start writing tests, because you already know what's coming in and what's coming out (even though the input could change). But at that point you've already written the functionality, so it's not TDD anymore.
Then with more complex problems like transcription or LLM responses, the output isn't even the same for the same input, so tests tend to be "there is a response, it has some text", which, OK, that's a test. But not even close to the stuff we want to test.
nuwisdom@reddit
TDD often means write it once, well.
Writing your tests retroactively will ultimately lead to tech debt and writing things multiple times.
I think you especially notice the pain of not doing TDD when you are doing freelance gigs. You never spend any more time than what is required (since you are basing your work on the requirements only).
Freelancers who don't TDD tend to overengineer and write more code than is needed to flex their egos.
TDD ensures you write as much as needed, no more, no less. Peak efficiency.
rvlzzr@reddit
I find TDD pretty terrible and completely unsuitable for situations where there is any uncertainty or exploration, which is almost all cases aside from perhaps bug fixes on a legacy codebase. Prototype+refactor is a much more practical approach.
With TDD you can end up writing extensive tests assuming an API behaves as documented before learning that this isn't the case, where if you just started building the thing you would discover this immediately.
Maybe TDD advocates all code in circumstances where everything they interact with is extensively and reliably documented, but in my experience it falls apart outside of such sandboxes.
roger_ducky@reddit
Exploration? Write a “test” that doesn’t do anything but call your module. You get “one button” debugging.
In fact, once you get the actual result, copy from the debugger and use it as the basis of your assertion.
sherdogger@reddit
As a religious imperative, no. As a useful approach at times, yes.
Hot-Gazpacho@reddit
It seems to me, from the comments here, that folks view TDD as mechanized translation of requirements. That’s unfortunate, as it’s missing the greater value of TDD, which is to drive the design of the code. The output of TDD isn’t tests; those are an artifact of the design process.
If you think about TDD as Test Driven Design, then you start to think about the code you write from a different perspective. Such shifts in perspective are often valuable sources of insight.
roger_ducky@reddit
I think of it as documentation on how to use the unit correctly and see what exceptions or errors I can expect to get from it. Design happens as a side effect as well, but not the main thing I expect to see.
StTheo@reddit
I try to. I find that it’s a good way of ensuring I meet the acceptance criteria, since I often forget about one or two points.
Additionally, I think it’s overall faster. Writing tests after primarily manually testing the story felt like a waste of time.
AvailableFalconn@reddit
I worked at a consultancy that did TDD religiously for a while. To be honest, I never saw what's so great about it. It makes your interfaces cleaner sometimes. But nowadays I only find a problem that suits TDD maybe once a year. For 95% of the coding we do day to day, where we're taking some data and wiring up pipes, I don't see the benefit.
roger_ducky@reddit
TDD is similar to double entry accounting. You slow down by 20-30% in order to not forget requirements while you write code.
The most useful way to use TDD is:
Document the requirements by writing a test. The assertions need to be typical examples of what the data really looks like. Run the test to see that it fails, then write code and watch it pass as expected.
You end up with a lot fewer tests and you don't have to debug as often. Try it!
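A sketch of what "assertions as typical examples" looks like in practice, using a hypothetical `slugify` helper. The test doubles as documentation of both the happy path and the error a caller can expect:

```python
def slugify(title):
    """Lowercase, trim, and join words with hyphens."""
    words = title.strip().lower().split()
    if not words:
        raise ValueError("title must contain at least one word")
    return "-".join(words)

# Each assertion is a typical example of how the unit is used,
# so the test reads like usage documentation.
def test_slugify_documents_usage():
    assert slugify("Hello World") == "hello-world"
    assert slugify("  Spaces   Everywhere ") == "spaces-everywhere"
    try:
        slugify("   ")
    except ValueError:
        pass  # blank titles are rejected; documented right here
    else:
        raise AssertionError("expected ValueError for blank title")

test_slugify_documents_usage()
```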
Obsidian743@reddit
I do a modified version of TDD where I write the tests slightly behind my code design.
I don't think true TDD works the way the creators intended. I do think it achieves their goals of simplicity and writing minimal code. This doesn't really take big picture, architecture, and other concerns into account unless there is a lot of up front design.
I've never written anything except the most trivial that I don't redesign and refactor at least a few times immediately as I write the solution.
My workflow is to stub things out approximately to what I know to be an intuitively good solution. But sometimes I realize that I really should adopt a different pattern, especially as I write a test, which one might argue is part of the point of TDD. But I generally do this back and forth of stubbing, playing around, making sure it works, then refactoring. If you do this with TDD, you're only adding work for the refactor. Again, one could argue that the tests should help make sure you are doing reasonable things and that the overhead is worth it, but I'm at a point in my career where I write a lot of code quickly and I'm often solving multiple complex problems. It's easier for me to use tests to verify my work afterwards and adjust accordingly.
AmosIsFamous@reddit
The thing from a TDD mindset that I use all the time is making sure you see your tests fail. Anytime you have a test, make a small change (or a series of them, one at a time) to the behavior your test is supposed to cover, and watch your test catch the change.
Raaagh@reddit
If requirements are clear, then yes.
If not, maybe.
StackOwOFlow@reddit
easier to do it now with LLM coding assistants
miyakohouou@reddit
I spent about a year working in a team that did strict tdd. The code they wrote was horrid and still had a lot of bugs. I found the process itself tedious and saw no purpose at all for it. It’s a bad process that leads to poorly designed code.
I ran into two specific problems with it. First, strict TDD can be absolutely ridiculous for common tasks. Have a structure with 20 fields and you want to rename them all in some consistent way? TDD says you have to make the smallest failing test possible then make it pass, so instead of a search-and-replace or IDE rename operation you have to spend two days writing dozens of pointless intermediate tests to get each individual field renamed (and yes, my pair, a director and well-known agile/TDD guy, really did insist we do that over my objections).
The second problem was that the code ended up with an absurd degree of indirection everywhere. I’ve seen spaghetti code, and ravioli code. This was baklava code. An explicit goal of tdd is that you end up with testable code, but “smallest possible test” and “least amount of code possible to make the test pass” ends up leading to a completely unhinged and absurd amount of pointless dependency injection that serves no purpose at all. DI has its place, but TDD can practically turn it into the primary means of flow control.
In general I find that TDD not only fails to deliver, it's also bad at the things it claims to be good at. The whole "smallest test/smallest change" thing really prevents good API design. You need to think about your problem domain in a big-picture, composable way to build a good API, but TDD forces blinders on you.
Sea_Neighborhood1412@reddit
My experience is that TDD has only become more important in the age of LLM-generated code.
Writing code, and then saying to chat GPT “write test coverage for this” yields some horrendous results and tests that don’t test anything of consequence. I worry when engineers take this approach and it can be pretty obvious in code review.
I find beginning with a set of acceptance tests (Gherkin syntax) and then guiding the LLM to iteratively build something that satisfies those tests yields much better output. The test suite forms the acceptance criteria that engineers, product, and other stakeholders can look to as a source of truth for the business requirements.
Ghi102@reddit
The main thing TDD answers for me is: how do you know the unit tests you write are any good? How do you know if they will actually break when a bug is introduced?
What opened my eyes was doing TDD as an experiment, writing the failing test and having it pass instead of fail. Reading the test, it was not obvious why and no code review would have caught that it was a useless test. The only way to catch this reliably is to do TDD.
I've also seen people write tests by copy-pasting the output of the code into the assert instead of thinking about what the expected output should be. This leads to bugs not being caught, because people assume the current output is already correct.
A final note: When has a unit test broken for you in a useful way? Before I did TDD, it was quite seldom as tests would break for trivial reasons. Doing TDD also changed the type of test that I was writing to create better tests.
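A toy Python illustration of the failure mode described above (all names invented): a test whose fixture can't distinguish correct behavior from a buggy no-op stays green, and only writing a test that starts red proves the assertion has teeth:

```python
# Buggy implementation (hypothetical): supposed to keep only active
# users, but the filter was forgotten, so it's a no-op.
def active_users(users):
    return [u for u in users]  # bug: missing the `if u["active"]` filter

# A vacuous test: every user in the fixture is already active, so the
# assertion passes whether or not the function filters anything.
def test_vacuous():
    users = [{"name": "a", "active": True}, {"name": "b", "active": True}]
    assert all(u["active"] for u in active_users(users))

# A meaningful, TDD-style check includes data the function must remove.
# Against the buggy no-op it comes back False; it only turns True once
# the filter is actually implemented.
def check_meaningful():
    users = [{"name": "a", "active": True}, {"name": "b", "active": False}]
    return all(u["active"] for u in active_users(users))

test_vacuous()                      # green, despite the bug
assert check_meaningful() is False  # red first: this test has teeth
```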
talldean@reddit
Kent Beck created TDD as a concept, and mostly doesn't use it.
It fits specific projects and cultures. It does not fit others.
If you were porting something from Java to C++, or from Playstation to XBox, it may work really well.
If you're building a brand-new product, it never fits; speed is more important than higher quality.
siqniz@reddit
in theory yes
LloydAtkinson@reddit
Sometimes. Mostly I write the tests at the same time as the code, occasionally before, but rarely. Definitely lots after, but within the same session or follow-up sessions.
I’ve had the misfortune of interviewing at some place that appeared to be a TDD fascist hell hole.
Did plenty of prep and even wrote code about some stuff that the company did. Tried to show it in the interview as it was all about testing etc.
The interviewer couldn't have cared less, didn't want to see it, and immediately started grilling me on TDD. This clown kept telling me I was doing it all wrong even when I wrote empty tests first. He even started having a hissy fit when I created the class for the thing we'd written tests for. I had to clarify multiple times, but he just didn't want me to use classes; everything had to be strings. Because that's TDD, allegedly.
Utterly bizarre and it’s made me cautious of anywhere that talks excessively about TDD. I like unit tests, but whatever shit that was was not what I consider testing.
I knew I didn't have a shot when, 20 minutes into the interview, he was still lecturing me and agonising about the name of a single test. I couldn't think of any names he was happy with, and when I asked him plainly what he wanted it to be, he backpedalled.
loumf@reddit
I do it when I know what I want the code to do, but not how. It gets me going.
I also do it to guide bigger changes in code that is under tested. To make sure I have tests in place that will make sure I don’t mess things up.
mikolv2@reddit
Sometimes, there's time and place for it. When I know the input and output but not sure about the implementation, then TDD is the way. Apart from that, when you're developing features with loosely defined requirements, I think writing tests first is a waste of time.
MassiveStallion@reddit
I work in gaming, so no. I have done TDD in the past for limited scope business applications where that sort of thing makes more sense.
In gaming there is too much change and not really a 'core business loop' that makes TDD profitable. That said there are exceptions like long running MMOs and such.
salamazmlekom@reddit
Tried it, hated it
Now I just don't do any tests :)
zaibuf@reddit
Sometimes, not always. Depends how well I know the requirements up front.
reddit04029@reddit
I'm strict with unit tests, but that is after everything is done. I just don't like my logic, and ultimately my overall workflow, being dictated by unit tests. It's as if I'm trying to "predict" what my logic will be. Sure, you revise the tests along the way, but realistically the team just doesn't have the time to drag out the development because of unit tests.
That's just my preference. I could be doing TDD wrong. I'm not gonna argue about it though.
Saki-Sun@reddit
TDD is good for isolated complex business problems. You're not going to TDD your way out of a CRUD app and feel good about what you have done.
PileOGunz@reddit
TDD feels like a very bottom-up way of coding. It's hard to see the wood for the trees, which is ironic, because maintainable code mostly relies on expressing logic without depending on low-level detail.
bigorangemachine@reddit
I think it just moves the bottleneck.
I did TDD & extreme programming for 2 years. It's just like saying the alphabet backwards: the normal way is easy; doing it backwards is awkward, but not impossible.
I had a project where the dev environment setup was really labour intensive... buggy... inconsistent... since I just had to get some UI to render my component/widget I used TDD to create my widgets sometimes. I knew what I had to write... it just had a lot of overhead.
Now when you don't know how something works... it can be hard to TDD
Saki-Sun@reddit
IMHO TDD and legacy code are not a good mix.
Isolated complex business problems? Yeah, TDD is amazing there, e.g. when you need to write a parser, a template engine, or a pricing calculator. It will turn you into a 100x dev.
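For a flavor of why those problems suit test-first work, here's a hypothetical sketch: a tiny template engine where the cases are written down before any implementation exists, and the code only has to satisfy them:

```python
import re

# Written first, straight from the spec: each tuple is
# (template, context, expected output).
CASES = [
    ("Hello {name}", {"name": "Ada"}, "Hello Ada"),
    ("{a}+{b}", {"a": "1", "b": "2"}, "1+2"),
    ("no placeholders", {}, "no placeholders"),
]

# Written second: the smallest implementation that satisfies the cases.
def render(template, context):
    # Replace each {key} placeholder with its value from the context.
    return re.sub(r"\{(\w+)\}", lambda m: context[m.group(1)], template)

for template, context, expected in CASES:
    assert render(template, context) == expected
```

Problems like this have crisp, enumerable input/output pairs, which is exactly the situation where writing the cases first costs almost nothing.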
Substantial_Page_221@reddit
Only when I can be arsed.
My motivation has been low for a while so it's pretty rare these days.
aaron_dresden@reddit
I mean the tests get in there eventually.
GoTheFuckToBed@reddit
We as a company are not doing TDD in general. Except when writing regex, validations, algorithms or mathematical calculations.
Every developer can decide to practise TDD during development but we don't want these personal TDD tests checked into the repository.
Note, the word TDD is used differently by different people.
Grumblefloor@reddit
I've used TDD in the past, but only in very specific circumstances.
One example that comes to mind was a national sports membership price calculator, where the cost was dependent upon a number of factors (age, affiliation to local club, whether the local club paid part of the fees in advance, etc). The client helpfully provided examples, and those became our test data.
But like others here, most of the time the tests come afterwards, and are just proof to reviewers that the code works as expected.
Coincidentally, I was recently rejected from a job because I didn't worship TDD, and had suggested that a QA should also be involved in the development process.
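The membership calculator described above is a natural fit for a table-driven, test-first sketch. The fee rules below are invented for illustration (the real client rules aren't in the comment); the point is that client-provided examples become the test table before any code exists:

```python
# Hypothetical fee rules: base fee 100, juniors (<18) pay half,
# affiliated members get 20 off, clubs that prepaid cover another 10.
def membership_fee(age, affiliated, club_prepaid):
    fee = 100.0
    if age < 18:
        fee /= 2
    if affiliated:
        fee -= 20
    if club_prepaid:
        fee -= 10
    return max(fee, 0.0)

# Client-provided examples, captured as (age, affiliated, prepaid, expected).
EXAMPLES = [
    (25, False, False, 100.0),
    (16, False, False, 50.0),
    (25, True, True, 70.0),
    (16, True, True, 20.0),
]

for age, affiliated, prepaid, expected in EXAMPLES:
    assert membership_fee(age, affiliated, prepaid) == expected
```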
Healthy-Bonus-6755@reddit
TDD is great when it can be done; unfortunately it's not always possible, especially with tight deadlines and vague requirements. IMO it's a lot of wasted time/effort to write test suites when you know the clients tend to change requirements. Best to get something to show, then refine and write a test suite against the finalised solution.
cosmopoof@reddit
Always TDD for me, but it's a habit: I've been doing it ever since (or shortly after) the XP book came out, so a good bit over 20 years. My reasoning is simple: if I write the code after the test, my code will always, by definition, be easily testable. Writing tests for existing code is something I simply don't enjoy, because there's so much stuff I end up having to either mock or inject. I don't force employees to do the same, though; motivation is more important, and the good ones always figure out a way of working that is good, fun, and effective for them.
partyking35@reddit
I'm pretty early on in my career and my manager is big on TDD, so it's rubbed off on me too. I usually use it for bug fixes, or when adding a new feature with a unit of code that follows multiple flows of logic, e.g. a factory. The one time I didn't use TDD when developing a new feature, I missed out the unit test completely, and when it entered a UAT environment for testing it introduced a bug which could have been so easily avoided.
prestonph@reddit
I use TDD in every feature I need to code. Without it, I really don't feel confident that the code I'm writing should be there. Let me explain.
Whenever I write a test, I only call the public interface. If I cannot write the test that way, it means my code is actually doing more than one thing. This is a red flag.
Once that is cleared, it's time to mock the input. This helps me "lock in" exactly what I'm required to output. I've avoided/discovered a lot of bugs this way. Sometimes it's the requirement that is nonsense.
There are many other benefits as well. Many other comments have already mentioned them.
Overall, I've proved to myself that the extra time I spend doing TDD is more than worth it. It's simply profitable business to me at this point.
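A minimal Python sketch of that style, with hypothetical names throughout: the test calls only the public interface (`build_report`) and mocks the input dependency to lock in exactly what the output must be:

```python
from unittest.mock import Mock

# Public interface under test: a report builder that depends on a
# repository object. All names here are invented for illustration.
def build_report(repo):
    orders = repo.fetch_orders()
    total = sum(o["amount"] for o in orders)
    return {"count": len(orders), "total": total}

# The test mocks the input (the repository) and asserts on the one
# public entry point; no internals are reached into.
def test_build_report():
    repo = Mock()
    repo.fetch_orders.return_value = [{"amount": 10}, {"amount": 15}]
    assert build_report(repo) == {"count": 2, "total": 25}

test_build_report()
```

If `build_report` also needed, say, a clock or a mailer to be testable this way, that would be the "doing more than one thing" red flag the comment describes.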
alien3d@reddit
Integration tests and unit tests are just helpers; they're still not bulletproof, and a smoke test can still fail. In my last 15 years of work, nobody did unit or integration tests, and mostly user acceptance testing (UAT) was enough. Our own team now does integration tests, not unit tests.
cangaroo_hamam@reddit
For new, mission critical pieces of code, yes.
cholz@reddit
I use it for adding new regression tests. Example: someone discovers some weird behavior and tells me about it. I go into our existing test suite and try adding a case that fails because of that behavior. Once I have that I go and change the behavior to make the new test pass.
But that’s not my typical workflow.
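A small sketch of that regression workflow in Python (names hypothetical): the reported behavior becomes a new case in the suite, which fails against the old code until the fix lands:

```python
# Reported bug: tags split on commas kept leading spaces,
# so "a, b" came back as ["a", " b"]. The fixed version below
# strips each tag and drops empties.
def parse_tags(raw):
    return [t.strip() for t in raw.split(",") if t.strip()]

# Step 1: add a case to the suite that reproduces the report;
# against the original (un-stripped) implementation it fails.
# Step 2: change the behavior; the new case passes alongside the old.
assert parse_tags("a,b") == ["a", "b"]           # pre-existing behavior
assert parse_tags("a, b, c") == ["a", "b", "c"]  # the new regression case
```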
cfogrady@reddit
I don't do TDD. Doesn't really work with my problem solving flow. The important bit for me is just writing testable code. I don't really care if tests themselves come first or second.
panacoda@reddit
Yes, I do, but in the team I work in it's up to each developer to choose whether they want to do it. However, there is a strong requirement that an implementation without tests is not a finished implementation.
When you're good at it, it saves time and can improve some aspects of the design of the solution being implemented. Many don't use it, saying it's not for them, but I personally think it just requires a lot of discipline.
In some cases people think of TDD as "write a test for every little thing I do", which is not the case; often a higher-level test can help drive the implementation and at the same time would not require a lot of effort to write.
dashingThroughSnow12@reddit
I’ve used it and I found it quite enjoyable.
I find it takes more energy to program this way and hence that’s why I don’t do it often.
I program in a very bottom-up approach, which makes my method signatures expand. Refactoring parameters that swap positions or get removed is quite easy, but IDE tooling to add parameters is meagre in comparison (because it's a far harder problem).
With TDD and how I program, it makes the constant refactoring a pain.
_grey_wall@reddit
Haha
No
Testing just to get the CI going when we can't disable it lol
breich@reddit
Sometimes. When the requirements of what I'm doing are clear enough to know the expected behavior, I like to write the test first. Or if I'm refactoring something that already behaves a certain way and I want to make sure the end result of my refactor behaves the same way and there are not already tests, I might write some tests first. If I'm doing completely experimental or exploratory work where it's unclear what the shape of the result should be, writing test first doesn't really help.
rjm101@reddit
I do it with bugs, because then I actually have an existing piece of code to test: I can expose the bug via a breaking test, and when I add the fix I can prove it's been fixed. Otherwise, if it's a fresh feature, I find it extremely awkward to do the testing beforehand. You don't fully know what's going to go where yet, so it feels largely like a waste of time. I can understand agreeing on a set of scenarios for what it should do, but actually writing the tests beforehand? Nope.
hitanthrope@reddit
It's a good process, but it is hard to stay disciplined with it. In many ways I admire the people who can do that.
I try to do it as much as possible. This is particularly true when I am designing and building a brand new feature or something with a lot of intricacy or complex logic.
One of the biggest advantages of practicing TDD, I find, is to learn how to build "testable code". If you are practicing true TDD you don't have a choice in making your code testable but I find that many engineers, even those who claim to be good at building testable code, are often not.
Being able to effectively use mocks is a particularly tricky skill that I find many people are not very good at. If I had one unit of any major currency (including Japanese Yen) for every time I have seen code that uses mocks and essentially just tests that the mock does what the engineer told it to do... I would have retired to my own private island. It's not always easy to spot: sometimes there are multiple levels between the test and the mock that make it look like something useful is being tested, but when you dive in, you discover that what is really being asserted is the mock's own stubs. Shit like this is *everywhere* in code that heavily uses mocks.
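Stripped of the layers that usually hide it, the anti-pattern looks like this (names are invented, and real cases are rarely this obvious):

```python
from unittest.mock import Mock

# Anti-pattern sketch (all names hypothetical): the test "passes",
# but the only thing verified is what the stub was told to return.
def test_charges_the_card():
    gateway = Mock()
    gateway.charge.return_value = "ok"   # stub set up by the test itself

    result = gateway.charge(100)         # no production code runs at all

    assert result == "ok"                      # asserts the stub itself
    gateway.charge.assert_called_with(100)     # restates the call above
```

Nothing here would fail if the real payment code were deleted, which is the tell.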
In some ways, I look at TDD almost like code katas. It's good to at least have a TDD session once in a while, to refine your skills at writing testable code. Even when writing the code first, you really should be thinking about what the tests will be so you are, to some degree, doing TDD in your head. In that sense, I think it is a good practice.
Scientific_Artist444@reddit (OP)
True that.
Given that Michael Feathers' definition of legacy code is 'code without tests', I can understand the importance.
Ah, that seems to be a good thing to do. Pretty often I get so involved in figuring out the logic that I fail to see the big picture of why that code needs to be present. It's like reciting a list of grocery items to remember, so that you can buy them when you finally arrive at the store.
addys@reddit
Nobody uses Scrum, everyone uses Scrumbut. As in "We kinda do scrum, but...."
Similarly, nobody does TDD but everyone writes unit tests.
In short, TDD is the extreme purist approach to writing testable (and by extension well-factored, loosely coupled, etc.) code. It's immensely helpful to be familiar with it, even if you choose to cut lots of corners and go for the much easier 80/20 approach, which gives you 80% of the benefits with 20% of the headache. Just be aware that it's very hard to get to the last 20%(ish) of quality without doing "full on" TDD. Most people are OK with that. You do whatever works best for you.
extra_rice@reddit
I began my career as a software developer in a startup that advocated the use of TDD. Upon joining, I was made to go through a bootcamp where they taught us the principles. Even after I left, I practiced TDD even when I worked in teams that didn't bother. I personally think it's taught me to be a more conscientious engineer.
Dro-Darsha@reddit
I use TDD with continuous testing 95% of the time. I find it is more effective and efficient to code when (some) tests are already there. Especially with work that is highly explorative, TDD helps to focus by answering the question "what are you even trying to do".
(The 5% is working on legacy stuff that is very hard to test, or UI or performance tweaks.)
uraurasecret@reddit
I saw that as a requirement in a job description. It's like requiring people to write with their right/left hand.
gfivksiausuwjtjtnv@reddit
I don’t do TDD but I do shoot for nearly full code coverage. It allows me to have a workflow where I avoid running the app locally, avoid expensive acceptance or e2e tests, and have a super tight dev loop where I can write code and test it within a few milliseconds, and before merging I can run the full test suite in a couple of minutes.
Scientific_Artist444@reddit (OP)
Sounds great.
simon-brunning@reddit
Absolutely. The other day I was doing a coding interview, and they told me not to write tests. Turns out I've forgotten how to code any other way. How do I know what code I need to write without tests, or how to structure it?
Flag_Red@reddit
I can't say it's a 'must' (plenty of very good engineers do well without it) but it's certainly a powerful tool to have in your belt.
At a surface level, TDD is a systematic process that gets you writing code consistently in a 'paint-by-numbers' sort of way.
On a slightly deeper level, though, it forces you to think through your design and have a solid idea of what you want the code to do before you get into the weeds of an implementation. This isn't appropriate for all tasks (R&D definitely not), but in an environment where requirements are more clearly defined you can really excel with it.
Scientific_Artist444@reddit (OP)
This is the best description of TDD I found so far. Thanks.
Ultimately, TDD is just another way to write code adhering to requirements. I know I don't need it all the time, but some things are determined at the policy level and we're simply mandated to follow them. Unfortunately, I can't do much about that.
Flag_Red@reddit
Best start making those requirements as clear as possible before beginning the work, then.
Maybe that's what management wants you guys to do anyway, given that they're enforcing TDD.
Traditional_Hat861@reddit
+1
QuantityInfinite8820@reddit
A lot of my work is on frameworks/libraries/SDKs, and there it works great: much of the time I'm chasing bugs, and TDD is great for writing a reproducer first and avoiding regressions. It's just a great fit.
However, for typical boring CRUD web apps I would not have the patience to do TDD development there.
false79@reddit
In small companies, I don't think you can really afford to do TDD. Profitability comes before code quality when the financial runway is short, so TDD is the first thing to get scoped out.
But in medium to larger companies with double-digit team sizes, I've experienced it to be beneficial for complex systems, where the tests make for okay documentation of how the system works.
skidmark_zuckerberg@reddit
No. Unit tests after the fact but not TDD. I’ve never once seen a dev do TDD and I’ve worked with some exceptional people over the years. TDD is something people online talk about but I’m not sure where these people are working.
kennyshor@reddit
I do TDD whenever I have clear input and output data. Algorithms are way easier and faster to implement when you just run the tests to see if everything works.
Also creating the boilerplate is much easier with the IDE if you do TDD. That being said, it is not always practical to do.
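When the input/output pairs are known up front, the test can exist before any implementation does; something like this (fizzbuzz is just a stand-in example):

```python
import unittest

# Sketch of test-first algorithm work: the expected pairs are written
# first, and the implementation is run against them until green.
def fizzbuzz(n):
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

class TestFizzBuzz(unittest.TestCase):
    def test_known_input_output_pairs(self):
        cases = {3: "Fizz", 10: "Buzz", 15: "FizzBuzz", 7: "7"}
        for given, expected in cases.items():
            self.assertEqual(fizzbuzz(given), expected)
```

Re-running the suite after each change is the whole feedback loop; no manual poking required.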
petee0518@reddit
Personally, I always aim for good test coverage, but when and how the tests are written depends on various things. From my experience, in the FE side of things, it's quite rare. However, it's especially useful in two cases: bug fixing, and legacy code. For new functionality it can also be useful, but the benefits aren't as strong.