How do you manage complexity in code and architecture?
Posted by dondraper36@reddit | ExperiencedDevs | View on Reddit | 12 comments
That’s something I reflect on almost every day when I discuss code design and architecture with my colleagues, or when working on a feature.
My main framing is the well-known split of total complexity into two categories: essential complexity, which is inevitable and dictated by the task itself, and accidental complexity, which we introduce with our own choices.
This separation is helpful because it lets you see the minimum level of complexity a task demands. For example, if you need to persist user data, that calls for a database. The choice of which database, however, is where accidental complexity begins.
In general, I always try to follow "use the simplest approach that works", as described by Sean Goedecke here.
https://www.seangoedecke.com/the-simplest-thing-that-could-possibly-work/
The problem is that the word “simple” can have as many interpretations as there are devs on your team.
The example from Sean above is about Redis. On the one hand, it’s a popular and battle-tested piece of technology designed for cases when you need, say, caching or distributed rate-limiting.
On the other, and that’s my subjective opinion, introducing a new technology, however good it is, is always adding complexity.
That is why I usually do my best to keep using just Postgres for as long as possible: for queuing (FOR UPDATE SKIP LOCKED), caching (unlogged tables help here), and unstructured data (jsonb).
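As an illustrative sketch of the Postgres-only pattern (table and column names here are made up, not from any real project), the three use cases might look like:

```sql
-- Cache: an UNLOGGED table skips WAL writes, which is fine for a
-- cache because its contents may be lost on crash anyway.
CREATE UNLOGGED TABLE cache (
    key        text PRIMARY KEY,
    value      jsonb,
    expires_at timestamptz
);

-- Queue: a plain table of jobs, with unstructured payloads in jsonb.
CREATE TABLE jobs (
    id      bigserial PRIMARY KEY,
    payload jsonb NOT NULL,
    done    boolean NOT NULL DEFAULT false
);

-- Each worker claims one pending job; SKIP LOCKED means workers do
-- not block on rows that other workers have already locked.
BEGIN;
SELECT id, payload
FROM jobs
WHERE NOT done
ORDER BY id
FOR UPDATE SKIP LOCKED
LIMIT 1;
-- ...process the job in application code, then mark it finished:
-- UPDATE jobs SET done = true WHERE id = <claimed id>;
COMMIT;
```

The claim query is the core trick: multiple workers can poll the same table concurrently without a separate broker.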
I have had many arguments with people whose main objection is that you should use the right tool for the job, and that this approach might break at 100x load.
While that might be true, I am cynical enough to note that most projects never reach that stage. If one does, that is most likely a success milestone at which you can afford to rewrite certain parts of your architecture and adapt to the new load.
Sometimes, I have no quantifiable criteria to select one approach over another. Even worse, I am not always sure I can sort them by "complexity", which becomes apparent when my colleagues find the alternative approach simpler.
Maybe you have developed your own vision of simplicity and complexity and how you make such decisions. All of that would be extremely interesting to hear.
sebf@reddit
Large problems should be broken into smaller pieces.
Perfect-Campaign9551@reddit
"broken into smaller pieces".
This is an overly simplistic answer that doesn't actually answer anything, sorry. Everyone knows smaller pieces are easier. The trick is how to divide the work into those pieces, and how to have those pieces "communicate" to create the feature. THAT's not always easy; it can actually be the hard part.
In addition, to me it seems like another difficult part is sharing the architecture with the team and having everyone understand it. Many people have different ways of thinking, and quite often they simply won't understand diagrams, etc.
It really IS like herding cats!
single_plum_floating@reddit
Personally i prefer using 100s upon 100s of function calls in the same file that are all single liners. If we need to change function ordering we can just use nested raises as we see fit.
single_plum_floating@reddit
great idea executed in the worst way imaginable. i hope i never have to maintain your code.
Perfect-Campaign9551@reddit
- Keep business logic separate from other logic
- try to anticipate the "seams" in your design, form interfaces at those seams, and have the code use those interfaces rather than talk directly to outside objects.
- use dependency injection to handle dependencies (ports and adapters)
- design patterns
Those strategies will help manage complexity. Then, the hard part is ensuring coworkers understand the architecture. Some people understand diagrams, some people need to walk through the code. Etc.
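A minimal sketch of the ports-and-adapters idea from the list above (all class and method names here are hypothetical, invented for illustration): define an interface at the seam, keep the business logic dependent only on that interface, and inject a concrete adapter from outside.

```python
from typing import Protocol


class Notifier(Protocol):
    """The 'port': an interface at the seam between business logic
    and the outside world (email, SMS, a message bus, ...)."""

    def send(self, user: str, message: str) -> None: ...


class SignupService:
    """Business logic. It depends only on the Notifier port and is
    handed a concrete adapter via constructor injection."""

    def __init__(self, notifier: Notifier) -> None:
        self.notifier = notifier

    def register(self, user: str) -> None:
        # ...persist the user, then notify via the injected adapter.
        self.notifier.send(user, "Welcome!")


class RecordingNotifier:
    """One concrete 'adapter'; here it just records sends, which also
    makes it usable as a test double."""

    def __init__(self) -> None:
        self.sent: list[tuple[str, str]] = []

    def send(self, user: str, message: str) -> None:
        self.sent.append((user, message))


notifier = RecordingNotifier()
service = SignupService(notifier)
service.register("alice")
```

Because SignupService never names a concrete notifier, swapping the adapter (real email vs. test double) requires no change to the business logic.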
OAKI-io@reddit
ruthless deletion. most complexity accumulates not from hard problems but from decisions nobody cleaned up. the best codebases i’ve worked in had someone who actually removed things. adding is easy, removing requires confidence.
ManyInterests@reddit
Good software design and managing complexity are all about deferring details. You can reach a solid, testable design without ever worrying about details. A robust software architecture thinks carefully about boundaries and interfaces, not implementation details.
The goal, in short, is to make your software so it is very easy to change. When you keep software soft, then the details become a lot less risky and less costly to change.
When you do finally talk about details, simple/easy and boring is not a bad default. And if your software was designed appropriately, the choice should not be terribly consequential; swapping out implementation details should be easy if and when the need arises.
Not to say that details are not important. There are a lot of things you might balance in making choices that weigh differently in different situations (which can change!). But the biggest costs in change usually involve software changes, so aim to minimize the required changes to the software and that'll pay off way more than getting the details "right", keeping in mind that what's "right" tomorrow may not be the same thing that's "right" today.
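One way to read "keep software soft" in code (a sketch with invented names, not anyone's actual design): put the detail behind a boundary, and the later decision between, say, a dict, Postgres, or Redis becomes a local swap rather than a rewrite.

```python
from typing import Optional, Protocol


class Cache(Protocol):
    """The boundary: callers only ever see this interface, never the
    concrete technology behind it."""

    def get(self, key: str) -> Optional[str]: ...
    def set(self, key: str, value: str) -> None: ...


class InMemoryCache:
    """A boring default detail: an in-process dict. A Redis- or
    Postgres-backed class could later implement the same two methods."""

    def __init__(self) -> None:
        self._data: dict[str, str] = {}

    def get(self, key: str) -> Optional[str]:
        return self._data.get(key)

    def set(self, key: str, value: str) -> None:
        self._data[key] = value


def expensive_lookup(key: str, cache: Cache) -> str:
    """Business logic written against the boundary, not the detail."""
    cached = cache.get(key)
    if cached is not None:
        return cached
    value = key.upper()  # stand-in for genuinely expensive work
    cache.set(key, value)
    return value
```

Deferring the caching technology this way keeps the costly decision reversible, which is the point being made above.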
MonochromeDinosaur@reddit
This is a solved problem go look at what engineers and architects do.
krimin_killr21@reddit
With code I find the most powerful maxim on complexity to be the single responsibility principle. If each class is doing just one thing with a well-made API that doesn't leak its abstraction, then the degree of complexity in any single class is naturally limited. The class itself stays short, meaning it is easy(ier) to validate all the behavior of the class, and to understand what the class is doing in a single mental window.
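A toy illustration of that principle (the class and its name are invented for this example): one job, a one-method API, and the normalization details kept private behind it rather than leaking to callers.

```python
import re


class Slugifier:
    """Single responsibility: turn titles into URL slugs.
    It does not fetch, persist, or validate anything else."""

    def __init__(self, max_length: int = 50) -> None:
        self.max_length = max_length

    def slugify(self, title: str) -> str:
        # Normalize case, collapse runs of non-alphanumerics to a
        # single hyphen, then trim to the configured length.
        slug = title.strip().lower()
        slug = re.sub(r"[^a-z0-9]+", "-", slug).strip("-")
        return slug[: self.max_length].rstrip("-")
```

Because the class does exactly one thing, its full behavior fits in one mental window and is easy to test exhaustively.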
zrail@reddit
I think the answer to this question depends almost entirely on the context in which it is asked. A small focused group working on a greenfield project and a large org working on a set of scaled applications will make very different decisions wrt adding new technologies and managing complexity.
Personally I try to make things as simple as practicable from an operations standpoint. That is to say, if the ops team (which could just be me!) has a preferred tech stack and I can't make a very compelling argument for something else, that's what we're using.
As far as scaling out goes, real numbers beat prognostication every time. If you can say with ~80% confidence that system X is going to see 100x traffic in six months, absolutely evaluate options. Similarly, if you have measurements telling you that a certain subsystem is using memory/iops out of proportion to its traffic, that's a good place to focus optimization.
cmpthepirate@reddit
Can't even be arsed to answer this, read a book.
Logical_Newspaper_52@reddit
you don’t need to, it’s an ad