Is 'Monolith First' the Better Approach?
Posted by StellarNavigator@reddit | programming | View on Reddit | 181 comments
spaceneenja@reddit
Of course. Break off services from your monolith as the demands on your infrastructure make it logical to do so.
This of course requires your engineers to maintain separation of domains without requiring a separation of code repositories. In many firms, you get engineers who just do their stories and don’t particularly care about fundamentals or maintainability.
PrimeDoorNail@reddit
The thing is, if your engineers can't properly separate domains in a monolith, they won't do it properly with microservices either. In fact, the mess will be much worse.
Fun times
adrianmonk@reddit
They will and they won't.
They will in the sense that when two things are in two different microservices, it's a pain in the ass to make them talk to each other. Therefore a lazy person won't do it. Thus, some degree of modularity will be preserved. In a monolith, the lazy person will connect everything to everything because they don't care about modularity (or not enough to do work to preserve it).
They won't in the sense that when a system is broken up into microservices, a lazy or stupid person will say "it's already modular!" and then use that as an excuse to not worry any further about modularity ever again. So you will have web services with handler methods that are 2000 lines long.
Schmittfried@reddit
Don’t forget this one: With separate services a lazy person will reinvent the wheel and mix different domains in their service because the actual responsible service doesn’t support their use case and they are too lazy to coordinate with the maintainers of that service.
jl2352@reddit
You’ve touched on a common issue that crops up in monoliths, where functionality for a domain just seems to go on, and on, and on, due to poor modularisation, bleeding right across the codebase. It’s very easy to do, even by smart and well-intentioned engineers.
AnnoyedVelociraptor@reddit
And this is where this whole thing falls apart.
If you build a monolith, nicely separated with boundaries, then instead of serializing JSON and sending it over HTTP or serializing with protobufs, the parts just call each other directly. You gain a lot of speed because you don't constantly need to cross a process boundary and serialize/deserialize.
But unless you actually enforce those boundaries by splitting the code into separate projects whose only contracts are the protobufs or the OpenAPI contracts, you will end up in a situation where someone just talks to another part via a normal function call.
And then it happens again and again and again, because writing a function call is 100 times easier than explicitly thinking about how to expose the API and functionality.
bunk3rk1ng@reddit
And then you decide to break out the different domains of concern into different repos / services
And then you realize there is actually a lot of shared functionality needed by both.
And then you create a common lib that both services can use
And then those services and devs grow further and further apart, and they don't even know about each other other than this one common lib they need to keep updating
And those teams move at different paces and their services start relying on different versions of the common lib
And then they introduce breaking changes to each other's services unknowingly
And then it's a huge mess.
malln1nja@reddit
It's eventually a huge mess no matter what, but with the monolith at least all the mess is in the same place.
Dreamtrain@reddit
I've suffered both messes, but micro is a lot better. As much as you guys talk about "breaking other people's flows", I've seen that far more in monohell. Not to mention the biggest disruptor is deploys: what is usually something two dev teams need to hash out between themselves instead involves every team that depends on the one basket of eggs.
JockeTF@reddit
Until people start copy-pasting thousands of lines of code between the services.
Dreamtrain@reddit
that's not a flaw on micro or mono design, that's a flaw on PR process
spaceneenja@reddit
Yep. Need to retest 100% of the surfaces to be sure, with every deployment
jl2352@reddit
Microservice doesn’t have to mean separate repos. If you have multiple services in the same repo, then that shared library code is not that different to being in a monolith.
You go and change library code for one domain, and find it breaks five others. That happens across microservices and in monoliths.
spaceneenja@reddit
👏🏼preach👏🏼
shevy-java@reddit
Indeed. To be fair: even in a monolith you have code parts that may be more sound and stable. Not everything necessarily has to be a mess.
Cheraldenine@reddit
Only way is to split them off into two separate companies sharing no code and that don't know of each other's existence.
That'll teach them.
john16384@reddit
How do people use libraries built by 3rd parties, then? This can all be made to work just fine; we do it all the time with versioned 3rd-party dependencies. Does that all go out the window the minute you can commit to the same repository?
bunk3rk1ng@reddit
External libs don't have your business logic in them.
Cheraldenine@reddit
And faster. Unless there is a specific reason why the protobufs give a concrete advantage, function calls are simply better.
Recent-Start-7456@reddit
You can still just call functions…Just do it through defined interfaces that mark the boundary
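A minimal sketch of that idea in Java (all names here are illustrative, not from any real project): the "orders" code calls "billing" through an ordinary function call, but only via an interface that marks the boundary, never through billing's concrete classes.

```java
// The boundary: orders may only see this contract.
interface BillingApi {
    long priceInCents(String sku);
}

// Hidden implementation; in a real project this would live in a
// non-exported package so the compiler forbids direct references.
class DefaultBilling implements BillingApi {
    public long priceInCents(String sku) {
        return sku.startsWith("book-") ? 1999 : 4999;
    }
}

class OrderService {
    private final BillingApi billing; // depends on the contract only

    OrderService(BillingApi billing) {
        this.billing = billing;
    }

    long totalInCents(java.util.List<String> skus) {
        // Still just a function call, no serialization, no network.
        return skus.stream().mapToLong(billing::priceInCents).sum();
    }
}

public class BoundaryDemo {
    public static void main(String[] args) {
        OrderService orders = new OrderService(new DefaultBilling());
        System.out.println(orders.totalInCents(
            java.util.List.of("book-1", "mug-2"))); // 1999 + 4999 = 6998
    }
}
```

If billing ever needs to become its own service, only the `BillingApi` implementation changes; `OrderService` stays untouched.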
FinishExtension3652@reddit
IMHO, this is the way. It's also the way I've refactored or led the refactoring of spaghetti monoliths into something sane, and that could then be taken to some SOA end state, if desired.
hippydipster@reddit
If you use a decent language and tech stack, you can enforce the module boundaries with the compiler.
Meleneth@reddit
can you be more specific please? What is your opinion of decent, which presumably helps enforce module boundaries with the compiler?
hippydipster@reddit
Java and the Java module system would be an example
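For instance (a hypothetical module descriptor; the package names are made up), JPMS lets you export only the API package, so any direct reference to internals from another module fails at compile time:

```java
// billing/src/module-info.java
module com.example.billing {
    // Other modules can only compile against the exported API package.
    exports com.example.billing.api;
    // com.example.billing.internal is deliberately not exported, so
    // `import com.example.billing.internal.Ledger;` will not compile
    // in any other module.
}
```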
Scroph@reddit
Spring modulith aims for this I think, though I'm not sure if it's production ready
Reinbert@reddit
Even if your language and compiler don't offer anything useful, for most languages you will be able to find test frameworks for architecture. Architecture tests are super easy to write and can achieve the same thing. As a bonus, they are super portable between projects.
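As a toy illustration of what such a test does (a hand-rolled check over source text; real projects would use something like ArchUnit, and all package names here are invented):

```java
import java.util.regex.Pattern;

public class ArchCheck {
    // Rule: code in the "orders" domain must not import billing internals.
    static final Pattern FORBIDDEN =
        Pattern.compile("import\\s+com\\.example\\.billing\\.internal\\.");

    // Returns true if the given source text breaks the rule.
    static boolean violates(String source) {
        return FORBIDDEN.matcher(source).find();
    }

    public static void main(String[] args) {
        String ok  = "import com.example.billing.api.BillingApi;";
        String bad = "import com.example.billing.internal.LedgerImpl;";
        System.out.println(violates(ok));  // false
        System.out.println(violates(bad)); // true
    }
}
```

Run against every source file in CI, a check like this fails the build the moment someone reaches across a boundary, which is exactly the enforcement a monolith otherwise lacks.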
modestlife@reddit
Even more dynamic languages like PHP have tools like deptrac which allow you to enforce which classes/namespaces are allowed to interact with each other.
MiningMarsh@reddit
Then it becomes a real debate on whether I'd rather work on a poorly designed monolith codebase with tons of code separation issues, or literally anything written in PHP.
MiningMarsh@reddit
Function signatures are an API, and can be translated pretty directly into REST endpoints. That might not be an efficient way to handle the network translation, but it works.
Functions you expose in your module or header or whatever are that module/header/whatever's API.
The fact that it's a function is far less important than the argument types and return type of that function signature, and whether they can easily be translated to an object that can be sent over some protocol like REST.
andrerav@reddit
Let me guess -- Go?
pheonixblade9@reddit
if you're really smart, you still use protobufs, but you just use them to generate the Java/C#/C++/whatever code and use the native classes. then if/when you do need to microservice it up, it's pretty simple to swap.
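The shape of that (an invented schema, just to show the contract-first idea): one .proto file is the source of truth, `protoc` generates native classes from it for in-process use today, and the same schema becomes the wire contract if the module is ever split out.

```protobuf
// cart.proto -- illustrative contract, not from any real project.
syntax = "proto3";

package example.cart;

message CartItem {
  string sku      = 1;
  int32  quantity = 2;
}

message Cart {
  repeated CartItem items = 1;
}
```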
tigertom@reddit
We do that with openapi
pheonixblade9@reddit
Yup, lots of ways to do it ☺️
Worth_Trust_3825@reddit
Why does the API matter if under the hood it will make an HTTP request, a file dump, or whatever? As long as you properly define whether something is async or sync, you're fine.
Whyamibeautiful@reddit
I'm confused, if you don't call the backend via HTTP how do you do it?
Ecksters@reddit
Really seems like a decent linter could prevent imports from other parts of the app, I'm not sure why that's not the standard approach to this.
dagopa6696@reddit
Keeping code nicely organized is a lot easier than making the code fast. So if the team is incapable of defining a good architecture with good boundaries, then we can't expect them to write performant code either. In practice, that speedup from using the monolith is just giving them permission to be even slower everywhere else.
jl2352@reddit
It’ll be a distributed monolith. Which is always horrifying to work on.
r0bb3dzombie@reddit
As a lead, when this happens, you have to blame the leads/architects/dev managers or whatever else your organization has that's in charge of design, standards and planning.
I have personally failed and succeeded at this.
Mrqueue@reddit
You can restrict access to services to enforce these things
platypus_plumba@reddit
I love this comment because it hurts
spaceneenja@reddit
Mostly true, but to some extent having separate repositories can prevent some of the absolute laziest of practices.
ShepRat@reddit
If there is an issue with lack of, or poor enforcement of, standards, it is way easier to combine multiple repos into a monolith than it is to separate a monolith.
I've done both and the latter is just not worth doing.
spaceneenja@reddit
Not surprising at all
veryspicypickle@reddit
Can’t we safeguard those boundaries with some good old architectural tests (ArchUnit et al.)?
spaceneenja@reddit
That sounds like work
veryspicypickle@reddit
More work than separate repositories, developer setups, contract tests, monitoring?
coldblade2000@reddit
But what will I do if my microservice can't just symlink a library from another project in the monorepo?
spaceneenja@reddit
Obviously you need to copy the project in and reference it directly. That’s what monolith means after all.
es-ganso@reddit
Yep, distributed monoliths are just as bad. They're so bad to maintain.
veryspicypickle@reddit
Correcting those fuck-ups is a lot more painful in a microservice-based deployment style than in a monolith
CanvasFanatic@reddit
I don’t think that always follows. Constraints do constrain things. Plenty of people will choose the short and easy path despite knowing better if it’s available.
edgmnt_net@reddit
Separating domains in a monolith in the same way you separate them with microservices practically negates most advantages of a monolith. I'm not saying one shouldn't factor stuff out nicely or that you shouldn't write maintainable code, but that's altogether different if we're talking microservices. Nice tight code means less code, less indirection, less surface for bugs, less confusion, less effort and so on.
So splitting out microservices is going to involve at least some work, which you cannot do upfront because it hurts fundamental aspects of developing a monolith. But that's fine, you'll probably move faster with a monolith and have much less concern with respect to versioning internal APIs and such. Starting with microservices is usually a mistake unless you do a great deal of planning to build robust services, because change is expensive to orchestrate, especially on a micro level, IMO.
A true monolith makes it possible to have a really good review process and decent acceptance criteria, assuming the company is willing to allocate resources to it. It's not easy on newcomers (or even old timers who got used to silos), but they'll learn.
fondle_my_tendies@reddit
Lol, no it doesn't. hahah.
edgmnt_net@reddit
There are plenty of libraries and services out there (databases for example) that have well-defined, clean and robust interfaces. But your average component in the app just cannot be that. You can't expect to make a difference simply by providing an interface to enable yanking out the books endpoint from a library management app, it just doesn't work like that and interfaces are not magic. The primary difference is that those other things are truly robust and general components, they can be used with almost anything without making constant changes to the interface. And at some point you have to instantiate generic functionality to do specific things that are related to one another.
You can make an attempt to think ahead, keep things generic, write good interfaces, perhaps even end up with some highly-reusable components that you made yourself. But beyond some point, you're asking for meaningless boilerplate, giving up code sharing / merging opportunities and making things harder to change with no real benefit. You can just refactor and extract functionality later on without placing lots of artificial boundaries early on, or not to a very significant extent at least.
spaceneenja@reddit
I am in agreement with everything you say, except I don’t know why you think separating services in a monolith should be any different than for microservices. Mainly: don’t treat every service as a possible aggregator, thereby creating a rat’s nest of dependencies that would require an immense amount of work not just to maintain, but also to break into microservices down the line.
edgmnt_net@reddit
Monoliths make it possible to share code across different parts of the code base. Things like helpers, abstractions, even domain object-specific stuff. For example, you can just call some URL construction function for "cats" in code that deals with "dogs" and you can keep it in the "cats" package. You can even pass closures between domains. You can cover a lot of ground and focus on business logic that matters without expecting everything to be a potential remote call or adding layers upon layers of data shuffling in an attempt to decouple stuff. A monolith needn't have services in the same way a microservices architecture has services, it ideally has various packages/modules and pieces of code which compose well.
Ultimately some separation is a good idea if you really foresee that something's going to need its own microservice, that's not entirely an issue. The trouble is people already seem to make that choice too granularly and we get crazy architectures with separate shopping cart, orders, invoices, authentication and every-feature-out-there into its own microservice, amounting to hundreds in total. And there's a cost to that flexibility, because now you can't share code and do all the good stuff, out of fear that it will make splitting more difficult. So if you ever make that choice, you should make it conservatively (and my hunch is most apps never really need to split out for technical reasons anyway).
I'm not arguing for spaghetti code or inappropriate DRY. There are plenty of good-quality monoliths out there, I usually like to reference the Linux kernel as an example, even though it's not a deployable service in the same way.
spaceneenja@reddit
Yeah, that sounds like going too far in the other direction.
tigertom@reddit
It removes the ops/infra overhead
Slsyyy@reddit
You still get better performance, unified logging/tracing/monitoring, simpler deployment, more agility, and less cognitive load
TheESportsGuy@reddit
Employees care about the things they're incentivized to care about. At non-tech firms, it's never maintainability.
spaceneenja@reddit
Yep
puterTDI@reddit
Our big problem is that separation of donations require product owners that can describe and adhere to the domains.
I do think splitting off from the monolith genius with this, mostly because you can let a feature become established then split it off when there’s a clear committed direction from product.
spaceneenja@reddit
Is this a GPT response?
puterTDI@reddit
Because autocorrect fucked with what I wrote?
spaceneenja@reddit
Yes?
puterTDI@reddit
That.. seems like a stupid reason to think that. If anything chat gpt is good at stringing together sentences.
rcls0053@reddit
I have not seen one company that does user story mapping or event storming. DDD is sometimes mentioned but nobody intentionally practices it. Often the separation in code is also done by technical boundaries, not by domain boundaries.
People don't really plan ahead that much.
WardenUnleashed@reddit
Trying to get the business to understand event storming / user story mapping can be such a PITA…for some reason them being more involved in the software conversation freaks them out.
I think they don’t like the idea that it makes them more liable/responsible for the outcome of the software or something.
spaceneenja@reddit
This is why most of this is just up to the engineers. The business should just state their user stories. Engineering can handle the domains. And if the business doesn’t like it? Mind ya own business.
hibikir_40k@reddit
The difficulty is a culture of spaghetti, which is easy to build up in a monolith that has no modularization mechanisms.
See, for instance, the giant Stripe Ruby monorepo. They wrote articles on their blog about massive parallelization of tests, because it was impossible to run tests quickly: everything being entangled meant no actual unit tests. Eventually this all led to building a type system on top of Ruby, and everyone having to program via a remote VM, because the monolith was unrunnable on any development machine. The creation of dependency charts that made it impossible to break anything apart. A switch to Bazel that still couldn't build a lot of very helpful, faster submodules, as ultimately too much code depends on too big a base. The sages say a refactoring attempt started 8 years ago and still hasn't accomplished its goals.
So start with a monolith, but you must start breaking it apart earlier than you think, with people in charge who have some taste. Which gets us back to the core problem of all of this: how can you tell whether your architect/CTO/trusted advisors have any taste? It's quite possible to have a very long career leaving big craters behind you while avoiding any responsibility for bad decision making: it's not just middle management that gets to do that!
spaceneenja@reddit
Yikes, that sounds like an absolute nightmare of coupling overload
gcnovus@reddit
In both of my last two jobs, they started with a UI/API split (split service and repo) from day one. By far the biggest thing that has slowed us down.
I’ve been part of three major UI extractions and led two of them. The pain to extract is high, but the pain of extracting too early goes on forever.
spaceneenja@reddit
What’s funny is this is probably the lowest-value split to make from a monolith. I would prefer to have the API and UI in a single monolith, with the services feeding the API either as microservices or as one combined deployable.
jonwah@reddit
That's really interesting, because most of the time for me, the UI is a web SPA, so it's separate from the backend to begin with...
And I love it. I tend to create UI-agnostic APIs and then build out the front end. It's not 100% efficient, as particular UI pages might be chattier than strictly necessary or pull extra data they don't need, but it is super flexible, and the payoff is that the UI can be built out without needing any changes to the API. YMMV, but it works for me.
Luolong@reddit
The thing is that you should build monoliths differently than micro services.
In monoliths you usually use a lot of synchronous (function) calls to communicate between service boundaries.
When you do the same with micro services, you just get a badly distributed monolith, with all the flaws of a monolithic application married to all the downsides of distributed compute. So when one of the services goes out of commission, the entire system is effectively belly up.
When you truly do micro services, you write your core monolith first and then make sure that any auxiliary services get the necessary information from your monolith in an asynchronous fashion.
The dead giveaway that you’ve messed up your micro services architecture is that your core service can not perform its function without involving other services.
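A tiny in-process sketch of that asynchronous hand-off (the queue stands in for a real broker like Kafka or RabbitMQ; the event and service names are made up):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class EventDemo {
    record OrderPlaced(String orderId) {}

    public static void main(String[] args) throws InterruptedException {
        // Stands in for a real broker (Kafka, RabbitMQ, SQS, ...).
        BlockingQueue<OrderPlaced> events = new LinkedBlockingQueue<>();

        // Core service: record the fact and move on. It never calls
        // the auxiliary service and never waits on it.
        events.put(new OrderPlaced("o-42"));

        // Auxiliary service (say, email notifications) consumes later,
        // on its own schedule; if it is down, the core keeps working.
        OrderPlaced event = events.take();
        System.out.println(event.orderId()); // o-42
    }
}
```

The point is the direction of dependency: auxiliary services pull what they need from events the core publishes, so the core passes the "can it perform its function alone?" test.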
mattsmith321@reddit
As long as you follow the process everything will turn out perfect!
Accurate-Collar2686@reddit
If you ain't Facebook, don't waste your time overengineering the hell out of everything just in case...
vom-IT-coffin@reddit
I think people are confusing a monolith and a monorepo.
kebabmybob@reddit
Start with a proper build system and then this question becomes a bit of a false dichotomy.
Manbeardo@reddit
Not really. It's pretty commonplace to have a monorepo that contains multiple services. That isn't a monolith. A monolith is a single service that does everything.
kebabmybob@reddit
Right and a bunch of discussion here boils down to packaging and consistency of things like schemas across artifacts. A good build system makes you feel like you’re in a fundamentally different (better) paradigm than monolith or microservices.
vom-IT-coffin@reddit
Maybe it's just wording, but I'm failing to understand what a build pipeline has to do with this. In a monorepo, each service is independently deployable with no shared artifacts, besides maybe some compute sharing. Schemas across bounded contexts only apply in the monolith world; with microservices it's one database per service, and if data needs to be in two services' databases it has to be replicated.
veryspicypickle@reddit
Always happens.
cutsandplayswithwood@reddit
Nope
alfredrowdy@reddit
Like everything else, it depends. There are some very clear boundaries that apply to every service, such as you never want to handle synchronous user requests and asynchronous long running jobs within the same service, that’s a boundary you’d obviously want to separate at the very start. Another obvious boundary is data persistence, you rarely want data persistence running in the same context as your service accepting user requests. Depending on your security needs, you may also need to split things up from the very beginning for security separation, in a monolith a compromise means they have access to everything, which is a huge risk.
Longjumping-Ad8775@reddit
I won’t say it is the best way. Everything has a specific use and every circumstance is different. What I will say is that, without an overriding reason to do something different, for the work that I do it makes a lot of sense.
fragglerock@reddit
The fact this was published 9 years ago and we still have the same arguments really depresses me about the discourse in programming.
dagopa6696@reddit
There will always be two very special kinds of programmers out there: juniors who think their problems are the same as everybody else's, and seniors who want to retire doing things the same way they've been doing them for the past 10 years. No matter how far into the future you look, these two groups will be giving the rest of us a hard time.
ussliberty66@reddit
I honestly lean towards a monolith until these conditions are met:
- the rate of product pivots slows down
- the team has at least 3 senior engineers (enough to lead the transition)
- the cloud infrastructure is already containerized with some orchestrator, so it is easy to add new services
- the team is skilled enough with containers, networking, and tests
The strangler pattern, starting with the fire-and-forget services behind queues, is the way to go.
Personally I have already tried to move away from a monolith, and the challenges are really numerous, especially when nobody has had the privilege of seeing a functioning microservices environment in action.
Don’t underestimate the complexity of testing the environment locally: you can’t have everything on your machine, and you need to set up many infrastructure bridges (and security layers) with the cloud environment.
Bodine12@reddit
That first one is really key. It’s so much easier (initially, at least) to reason about a monolith, so those product pivots are easier to handle. Of course, too many pivots and the monolith is an intractable mess.
wvenable@reddit
Microservices are a solution to organizational problems, not to technical ones. They mean accepting increased technical complexity (your system is now distributed) in exchange for decreased organizational complexity (your teams can now deploy independently from each other, can safely make database schema changes, etc).
Going with microservices from day 1 will initially mean that you have one team maintaining many services. They have to deploy them separately. If you're doing it "right" you have separate databases per service. None of that is useful for a team that's just starting out.
If you want separation of concerns, the language's module system and a bit of discipline will get most teams as far as they need without introducing distributed computing into the equation.
ewouldblock@reddit
I think this is at best a half-truth. Sure, microservices solve an organizational problem: they create independent codebases that make it easy to divide the system up across owning teams. There are other organizational benefits as well: learning a 2k LOC codebase is an order of magnitude easier than learning a 200k LOC codebase, so microservices allow engineers to build expertise on a slice of a system. That makes learning large systems more approachable.
But there are also technical benefits. The biggest and most obvious one is isolation (and by extension resiliency). When you have a picklist service that builds picklists on the UI, and that's separate from your video playback service, it means a bug in your picklist code that causes the system to crash won't bring down playback for all your users. If your main line of business is video playback, that's a huge benefit. Even if you're not Netflix, there will always be parts of your system that are more important to users than others, and that's why isolation is so important.
It means that each service has a risk profile associated with it. E.g. what is the risk associated with deploying any individual service? A user profile info service has a much different risk profile to video playback (if you're Netflix or some other streaming giant). If the user profile info service goes down, maybe it means I can't see my avatar, or I can't change my profile from whatever it is right now. If playback goes down, I can't watch anything, and the service is unusable. So, that means different levels of process and testing can be applied for different services. Maybe 95% of your ecosystem can have continuous automated testing and deployment, because mostly bugs can have isolated impact on the overall system. But there's that 5% where an outage would be highly visible, or it would be "revenue impacting," or for some reason it needs manual testing. So maybe you don't do continuous delivery there. Maybe there's a larger process involved in testing, or maybe the manual QA team has to sign off.
If everything were to be in one large monolith, you're always subject to the full risk profile on every deployment.
-oRocketSurgeryo-@reddit
The risk here is that in splitting off separate services prematurely, you end up with a distributed monolith with single points of failure that does not have the benefits in scale-out or resilience that a more general analysis might suggest are possible. It is very easy to send bugs into production through cascading failures if there are gaps in automated test coverage. And automated test coverage becomes considerably more complex across system boundaries.
Which is to say that while there's a set of tradeoffs weighing on the question of how to split up a system, and no simple answers, I think the increased testing burden is overlooked in many analyses; certainly at my current employer.
ewouldblock@reddit
I've never understood what a "distributed monolith" was aside from a disparaging term for a microservice architecture someone doesn't like.
I agree that microservices are not a silver bullet. You still have to know how to write software and tests, or the wheels will fall off.
jl2352@reddit
Distributed monolith is basically a term for a shitty layout of microservices.
Primarily when development work commonly requires changes across multiple services, so working across multiple services becomes the default. It’s then just a monolith with the negatives of a distributed system.
It also reflects very poor isolation. A common dream of microservices is you can deploy and run them in isolation. In practice it’s common for services to have some expectation that they are deployed with other services in mind. In a distributed monolith the idea of deploying services in isolation doesn’t make any logical sense at all, due to how heavily coupled they are to other services running.
ewouldblock@reddit
Isnt that just poor software design? Everyone knows there are no silver bullets, so it's not like microservices absolve you from thinking and making good choices...
jl2352@reddit
Yes. Distributed monolith is a negative statement.
There are examples of microservices being fine, and there are examples where it’s amazing. Distributed monolith helps to distinguish poorly designed microservices which are painful to work on.
-oRocketSurgeryo-@reddit
I've had some experience working with a distributed monolith. In my case there is a complex state machine that spans many pages in our app, where you can only know the exact journey a visitor will travel through our site by running all of the separate services together. In our case it's made more complex by version skew and the difficulties of maintaining n-1 compatibility across services, or, alternatively, of coordinating PRs and deployments. Much, but not all, of this complexity could be mitigated with a monorepo setup (which we don't have).
One can attempt to enforce contracts at the system boundaries and mock out other systems in tests. But the additional complexity of getting this right means there are large gaps in the test coverage, because whole parts of the state machine are incorrectly mocked out. There is no set of tests that exercises the full state machine, so there's always a shadow of a doubt in the back of your mind about whether one's work will blow up in production. I'm persuaded that a whole class of testing challenges would have gone away had the early engineers been less eager to split out separate services.
wvenable@reddit
I've had picklists crash but I've never had anything take down an entire system. I have had bad output from one system crash another. So I feel like this line of reasoning isn't that solid.
If you're big enough to have half the problems you're talking about then you're probably a big enough to have a large organization with separate teams owning different microservices. If you're not that big, you almost certainly don't have big technical or scaling issues.
ewouldblock@reddit
We can agree that if you run a one- or two-man shop where scaling, revenue, and uptime are of little or no concern, then of course you can write a monolith.
MiningMarsh@reddit
Yeah, that's why Linux is famously a collection of microservices, a microkernel, if you will.
It's just impossible to scale a monolithic kernel into something that generates revenue and maintains uptime.
wvenable@reddit
Wow. So only write a monolith if you want a slow buggy mess that won't make any money? I think that might be just a tiny little bit harsh.
ewouldblock@reddit
Look, I'm not here to pick a fight. You made the claim that microservices only solve an organizational problem, not technical ones, and I explained why that's not true.
I personally would always plan for microservices, because I prefer the upfront planning and work over having to retrofit after the fact when things get too big or start to break. But I also understand that there are different opinions out there. Probably individual preferences are rooted in personal experience and expertise.
wvenable@reddit
It maybe solves technical problems but it also adds technical problems. It is a big increase in complexity.
https://en.wikipedia.org/wiki/You_aren%27t_gonna_need_it
gnus-migrate@reddit
Sure, but does any of that really matter when you're just starting out and are trying to find market fit?
syklemil@reddit
Yeah, the last part there is a good chunk of what ops want. BSD jails and Linux cgroups and chroots grew into containers, and then again to systems like Kubernetes.
We want to be able to limit the amount of resources a service can use. We want to be able to start and restart services at-will, quickly, and reliably without getting hands-on. We want to be able to run parts of it in High Availability mode, and scale it both horizontally and vertically.
It's at this point I kind of want to deconstruct the phrase operating system and call it "operational system" or "ops system" to get the intent across without all the baggage that people think of when they think of an OS. The operational control is an important piece.
jl2352@reddit
What microservices can also help to solve, which I also don’t see mentioned in this thread, is the ability to hide technical complexity.
That has a big impact when people are running the monolith locally. Docker compose assets, environment variables, scripts to setup your local environment, and other such things can become a single URL to a service running on the QA cluster.
I’ve seen first hand that dramatically increase developer velocity.
TheDeadlyCat@reddit
You can build a monolith from modular building blocks, it doesn’t have to be solid to begin with.
i_andrew@reddit
All properly designed monoliths are modular, unless you build a big ball of mud, which has been recognized as an anti-pattern for 30 years already.
Making systems modular has been a thing since the '70s.
Prod_Is_For_Testing@reddit
Most people don’t know how to start a new big project because they’ve only worked on existing big projects. I’d say it’s rare for someone to have the opportunity to start from scratch
TheDeadlyCat@reddit
Indeed.
wvenable@reddit
I feel like nobody knows how to build libraries anymore.
FullPoet@reddit
I think a lot of it boils down to inherited dependency hell.
I inherited a large API that had at least 10 "core" nuget packages / libraries, each with their own "core" nuget packages.
It was hell trying to upgrade from .NET Standard 2.1 to .NET Core 3.
It really turned me off making libraries, because a lot of the time it just isn't worth it if you don't have a set of developers whose responsibility is to maintain them.
TheDeadlyCat@reddit
Newbies nowadays aren’t taught the basics, they are taught frameworks.
bigfoot675@reddit
Was anyone ever taught how to create libraries up front? To me, it seems like the kind of thing people learn over the course of years of experience. Maybe the problem is we're asking people to contribute before they get to that point
Bakoro@reddit
The real problem is that you don't really know until you have the experience, and you can't get the experience without messing up at some point.
My first job as a developer was at a smallish company that was growing, all the code was written by one dude. I naturally ran into a dozen problems which I'd learned about in college, and was like, ahh, I see why we do, [thing] now, because this right here is a problem. Seeing real production code with real problems, and dealing with someone who codes like it's the 1980s, I learned stuff from a practical perspective on top of the academic perspective.
I was like: ooohh, unit tests. Ahh, automated build system. Hmmm, pull requests. Uhhh, code reviews. Ohhh, "computer science" vs "programming".
If I had come in and everything was already buttery smooth, it would have been a good example of what to do, but I would have lacked the personal insight on the problems being addressed, mitigated, and altogether avoided.
I don't just know to do a thing, I also have the experience to not blindly follow or implement dogma.
Any which way, you're going to run into tradeoffs. More pre-job training means more time and more costs for students, higher barriers to entry into the field, necessarily higher wages across the board, and after all that, people still need to see real problems being solved with real world hurdles, and a lot of people will spend a lot of time learning things they will almost never use.
vom-IT-coffin@reddit
Fucking a. They don't even know what concepts the frameworks are trying to abstract.
BasicDesignAdvice@reddit
You can but that doesn't mean it will happen.
One engineer can be smart. Each engineer you add increases the shit exponentially. Leadership and management either need to be omniscient, or skilled enough (leadership skills, not engineering skills) to make maintainability systemic and cultural.
It's very hard to get right.
flowering_sun_star@reddit
I think this is quite an important point. Every developer (beyond the newest of juniors) is smart and probably has a vision of how the code should maintain nice clean separations. And every one of those visions is both reasonable and subtly different.
BasicDesignAdvice@reddit
That's where leadership comes in and many organizations bang their heads against the wall. Leadership should be concerned with systemic strategies and gaining consensus, but those who move up through tech are too concerned with the details. Then there is all the "work around work." Unproductive meetings and talk without action.
TheDeadlyCat@reddit
Oh, I know. I have worked in IT for over a decade. At every company I was at, I had to clean up so many messes that it has become second nature. Refactoring and restructuring to modular architecture isn't that bad though. It's solving a puzzle and feels kind of zen.
OpalescentAardvark@reddit
Totally agree, I love that feeling and can refactor all day every day, love it! Like you say, it's like a puzzle, very enjoyable, engrossing, even relaxing like playing a game. Rarely see that mentioned, I wonder if it's a common feeling?
Vaguely get it with SQL as well, also like a fun puzzle.
TheDeadlyCat@reddit
It’s not common, but different people zen into different types of work. I had a test guy who was into breaking code. Cool dude.
drawkbox@reddit
Yeah, you can make services that just run in-proc rather than remote; they can be built to be flexible enough to run either way. These connection points should be clean interfaces, with abstracted facades/proxies where needed that maybe connect up to OpenAPI or another interchange layer, which helps. The facade/proxy allows you to interface with the service directly or remotely.
The best design is to have a clean, and if possible basic types, layer that is used to interface with the other services, then a concrete layer below that which can change more frequently but not have breaking connection API/connection signatures.
Basically a good de-coupling using interfaces and events/messages. The most basic of de-coupling strategies along with a consistent abstracted ingress/egress layer which attempts to minimize breaking changes but things can be swapped underneath that layer.
Today it seems like there is too much coupling even in monoliths, and even when there are services there are tons of API/SDK breaking changes. You can abstract the damage of constant change and keep your interfaces to connection points stable. It makes things very flexible to change, and the exposed parts are atomic so you only have to version there on major changes.
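A minimal Python sketch of the facade idea described above (all names here, like `OfferService`, are hypothetical, invented for illustration): consumers talk to a small interface of basic types, and a factory decides whether the call stays in-process or goes over the wire.

```python
from abc import ABC, abstractmethod
from typing import Optional


class OfferService(ABC):
    """Stable connection-point interface: only basic types cross it."""

    @abstractmethod
    def get_offer(self, offer_id: str) -> dict:
        ...


class InProcOfferService(OfferService):
    """Concrete implementation living inside the monolith."""

    def get_offer(self, offer_id: str) -> dict:
        # In a real system this would query the local database.
        return {"id": offer_id, "price": 100}


class RemoteOfferService(OfferService):
    """Same interface, backed by a remote call once the service is split out."""

    def __init__(self, base_url: str):
        self.base_url = base_url

    def get_offer(self, offer_id: str) -> dict:
        # e.g. requests.get(f"{self.base_url}/offers/{offer_id}").json()
        raise NotImplementedError("remote transport not wired up in this sketch")


def make_offer_service(remote_url: Optional[str] = None) -> OfferService:
    """Facade/factory: callers never know which implementation they got."""
    return RemoteOfferService(remote_url) if remote_url else InProcOfferService()
```

Because callers only ever see `OfferService`, swapping the in-proc implementation for the remote one is a one-line change at the composition root, not a refactor across the codebase.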
Simple_Horse_550@reddit
Start with modular monoliths, then do microservices…
RiPont@reddit
And don't do microservices unless and until you have the necessary infrastructure.
Doing microservices without that infrastructure is asking for trouble.
underflo@reddit
5 - observability infrastructure (distributed tracing + logging + metrics)
Recent-Start-7456@reddit
Service organization will mirror organization organization
duxdude418@reddit
Conway’s Law
RiverOfSand@reddit
Genuine question, but would it be better to focus on the domain boundaries rather than performance? We can always scale each service as needed.
Simple_Horse_550@reddit
Microservices only exist because we can't scale vertically due to physical limitations in hardware. If we could scale vertically without limit, that would be the best and easiest way to compute. Since we can't, we have to live with the tradeoffs that come with distributed systems. You want as few nodes communicating with each other as possible in order to reduce network complexity. That's why you should only choose to introduce extra nodes if you know that there is going to be CPU/memory-intensive work that will grow as the number of requests grows…
johny_james@reddit
Microservices are not a solution for performance!
You somehow managed to get everything wrong about Microservices and their usage.
Microservices is about the domain. Stop mixing architecture with scaling solutions.
Scaling and performance are a part of a big topic called distributed systems, which is all about handling a large number of concurrent requests and data.
It turns out concepts from distributed systems can be applied to microservices, because they are distributed systems after all, but it's not the other way around.
Simple_Horse_550@reddit
If you have actually worked with microservices in large systems, you must have seen that all the advertised benefits (fault tolerance/isolation, simpler deployment, independent teams working on stuff, reusability, faster time to market, independence, security, etc.) don't hold up, due to the nature of the business and real-world situations. Thus most of the benefits become theoretical, and in reality it introduces more complexity than needed. What is then left is general scaling, which is the only thing a monolith can't do as well as microservices.
johny_james@reddit
Yeah, and I would say that distributed monoliths are the result of pure incompetence on the part of teams and management, and a lack of understanding of microservice architecture.
You CAN avoid a distributed monolith in practice if the architect UNDERSTANDS what microservices are and has already architected a couple of distributed monoliths.
Yeah, and I understand that the advertised benefits don't hold up, because that's not what microservices are about...
In simple terms, Scaling of business logic --> scaling of people --> organizational division into isolated teams --> divide monolith into modules per team --> make each module an independent app --> microservices
After this, you end up with simply integrated systems that communicate over HTTP, and every software company has worked with such systems.
So, microservices are more an organizational pattern than a technical one; the more people believe the inverse and break team isolation, the more easily the project can turn into a distributed monolith.
iamiamwhoami@reddit
They didn't specify performance would be the scaling issue. The first scaling issue most multi-team organizations run into when working on a single monolith is deployment frequency. If you have too many people deploying code to the same monolith, eventually you get these long deploy queues and getting code into production becomes a PITA.
Uberhipster@reddit
the main problem I have with this takeaway - which, to be sure, makes sense and has been worded thusly by the master - is that the soundbite will be taken to extreme and people will justify waterfall under the credo "we have to build the WHOLE monolith first" using this as a justification
instead of the rival mantra "we need to grow the monolith one thing at a time"
bears repeating for every post
QuantityInfinite8820@reddit
It is, and if you use dependency injection correctly, breaking off a microservice will require very little effort
RangeSafety@reddit
Yes.
But to be frank, anything is better than the microservice-bullshit of the last 10 years.
CoffeeBean422@reddit
Thanks for the read.
While the two reasons the article provides are valid enough for consideration, I'm missing a discussion of state here.
i_andrew@reddit
No, just like "microservices first" is not a good approach.
The good approach is to select the architecture based on the requirements at hand.
For instance, for a client in the real estate business, we built one big service for real estate offers and a separate big service for property management. From the client's perspective it was one system, but the two had nothing in common. Data exchange was minimal. Bundling them together would force 100 people to work on one codebase. Same with deployments. The offers one could have no downtime (99.9% SLA). The property management one could go down for hours and nobody would notice.
And that happened when a memory leak took down all instances of the property management service. The separate offers service was not affected (the 99.9% SLA was saved). If it were a monolith, all features would have gone down.
jawdirk@reddit
The requirements are not a fixed thing. They change over time. Sometimes the architecture for the initial set of requirements is not the best for the changed requirements.
i_andrew@reddit
That's very true.
But unless you can predict changes (or have premises for them), you can only work with what you have. There's no sense in doing "just in case" work, because more often than not something unexpected will pop up.
But what we CAN do is isolate the uncertainty. Whatever is dubious, separate it.
zoechi@reddit
Look up Modulith
zelphirkaltstahl@reddit
If you are sensibly structuring your monolith with abstraction layers in mind, it should be rather simple to later separate out parts. Easier said than done, but it can definitely be learned. And even if one fails to do this, one can still refactor the monolith step by step. Unless the code is so horribly written that a rewrite makes sense anyway.
wndrbr3d@reddit
Software has zero value unless it’s in production.
When I'm advising startups, I have to break the mentality their CTO/lead developer has that the architecture must be a perfect, leetcode-esque solution. Their only goal is to generate revenue. Generate revenue or die.
I view “Architecture” as a radar diagram with: Security, Maintainability, Testability, Readability, Performance, and Time-To-Market. It’s an architects job to make that radar diagram as round as possible.
fondle_my_tendies@reddit
99% of devs are terrible and do not care about architecture, just about solving the problem as fast as possible regardless of the damage the solution causes.
JustifiedCode@reddit
The majority of systems don’t need distributed processes. The problem is that we choose architecture to experiment, to enhance our resume.
hemel-@reddit
Monoliths are not bad. What matters is how you split your code inside, and whether you clearly expose and respect the different layers inside your monolith, so that you will be able to switch to another architecture with ease if needed.
vishbar@reddit
"Ship first" is the best approach.
Tejodorus@reddit
My experience fully corresponds to what Martin Fowler states. Microservice projects have a lot of overhead in design, implementation, deployment and maintenance. I always start small with a single monolith, and most of the time that's enough and I do not even need a scalable monolith.
To avoid the issues stated above, like creating spaghetti code when multiple people/teams work on a monolith, I try to use Actor Oriented Architecture. It is a pragmatic mixture of DDD, Clean Architecture and Screaming Architecture. The idea: you structure your application as if all domain objects (aggregate roots) live on another computer. Imo, that really helps to think about boundaries. And it makes it easy to scale up to distributed monoliths -- should that be necessary -- especially when using a virtual actor framework like darlean underneath (warning: I am the author of Darlean).
I have written a draft paper about Actor Oriented Architecture. Should you be interested, you can find it here: https://theovanderdonk.com/blog/2024/07/30/actor-oriented-architecture/
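A generic sketch of the actor-oriented idea in Python (this is not Darlean's actual API; all names here are hypothetical): each aggregate root is written as if it lived on another machine, so every interaction goes through a single async, message-like entry point carrying plain data.

```python
import asyncio


class ShoppingCartActor:
    """Aggregate root: owns its state; no other code touches it directly."""

    def __init__(self, cart_id: str):
        self.cart_id = cart_id
        self._items: list = []

    async def handle(self, message: dict) -> dict:
        # Single message entry point, as if delivered over a network.
        if message["type"] == "add_item":
            self._items.append({"sku": message["sku"], "qty": message["qty"]})
            return {"ok": True, "count": len(self._items)}
        if message["type"] == "get_items":
            return {"ok": True, "items": list(self._items)}
        return {"ok": False, "error": "unknown message type"}


async def demo() -> int:
    cart = ShoppingCartActor("cart-1")
    await cart.handle({"type": "add_item", "sku": "abc", "qty": 2})
    reply = await cart.handle({"type": "get_items"})
    return len(reply["items"])
```

Because nothing outside the actor ever reaches into `_items`, moving the actor to another process later only changes how `handle` is invoked, not the domain code itself.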
Mrqueue@reddit
it depends on what you’re building, sometimes yes, somethings no
dimitriettr@reddit
The monolith approach is nice, until you need to deploy it.
It takes some effort to deploy features independently.
Refactoring or Upgrades? They just became 10x harder to do.
ProgramTheWorld@reddit
The answer is always “it depends”. There’s no one size fits all answer.
NotAskary@reddit
I was looking for this answer, there are examples of full micro services from the start and full monoliths.
We also have stories of migrations in both directions, and if I'm not mistaken there's a pretty good story of Prime Video migrating from Lambda back to a monolith, saving lots of money and compute.
So this is the most correct answer, and also the least satisfying.
darkcton@reddit
Lambda is a super expensive technology. There's definitely cheaper ways to run microservices
Dreadgoat@reddit
Mmm no, can you please reformat this as a lengthy blog post describing a golden hammer that I can pursue dogmatically, without any further thought? Thank you.
frobnosticus@reddit
YAGNI.
Push off commitments to architectural complexity as long as reasonable, then a little bit longer.
thebuccaneersden@reddit
Um, I generally agree with the blogger. However, I have definitely been in situations where I had to architect platforms using micro services in the design stage. It really depends on the situation you are in and the experience of the developers.
jawdirk@reddit
I think an underrated option is having a single repo for your code, but multiple application deployments based on that code. You do potentially end up with a larger-than-necessary memory footprint for each application, but you save a lot of developer work by being able to reuse code without duplicating it across several repos. Also, it's easier to recognize that you are making a breaking change within one of the applications if tests in the application that interfaces with it are now failing.
RICHUNCLEPENNYBAGS@reddit
It’s a reasonable approach except when it isn’t.
MrPhi@reddit
Monolith and micro-services are two extremes of a spectrum, not a binary choice to make for any project.
Eventually you may need to create additional programs around your main one that will handle a specific task, related but not directly dependent on the main project.
Those programs will have their own repository, their own configuration.
Unless you are working on a refactoring of a project that needs to be extremely scalable to handle millions of users at the same time everywhere on the planet with a short response time, this is really not a relevant topic of discussion.
BenE@reddit
Yes, start with a monolith in order to get tightly scoped, hierarchically organized logic where the surface for various problems is reduced, and break out parts as needed, but only after having hardened them. Always be aware that when you break them out, you are broadening their scope and coupling them through less reliable, less statically checked, more global layers; they will be more difficult and dangerous to change once they are at that layer, so they have to be more mature.
Here's an attempt at explaining the theoretical benefits of this approach based on minimizing code entropy.
This debate has some history. One relevant data point is the history behind the choice of architecture for Unix and Linux. Unix was an effort to take Multics, a more modular approach to operating systems, and integrate the good parts into a more unified, monolithic whole. Even though there were some benefits to the modularity of Multics (apparently you could unload and replace hardware in Multics servers without reboot, which was unheard of at the time), it was also the downfall of Multics. Multics was deemed over-engineered and too difficult to work with. Bell Labs' conclusion after this project was that OSs were too costly and too difficult to design. They told engineers that no one should work on OSs.
Ken Thompson wanted a modern OS to work with, so he disregarded these instructions and wrote Unix for himself (in three weeks, in assembly). People started looking over Thompson's shoulder and were like "Hey, what OS are you using there, can I get a copy?" and the rest is history. Brian Kernighan described Unix as "one of" whatever Multics was "multiple of". Linux eventually adopted a similar architecture.
The debate didn't end there. The Gnu Hurd project was dreamed up as an attempt at creating something like Linux with a more modular architecture (Gnu Hurd's logo is even a microservices like "plate of spaghetti with meatballs" block diagram).
It's Unix and Linux that everyone carries in their pockets nowadays, not Multics and Hurd.
ZukowskiHardware@reddit
Not at all. I’ve done both and micro services are far superior. I used events, which supplied a contract between services. Without that, then I don’t really think it is micro services.
Salamok@reddit
Not going monolith first is likely a premature optimization.
Trevor_GoodchiId@reddit
And so it goes.
Tiquortoo@reddit
Yes, micro services should be discovered. Not all apps need that form of architecture
qrrux@reddit
Duh.
shevy-java@reddit
Micro sounds lean, agile, epic.
I think you can gain a LOT of insights through a monolith too. My pseudo-webframework, for instance, was done in a monolith"ic" fashion. I wanted to have all functionality in one place and try to minimize re-using functionality made available outside that project as much as possible.
So the question that was asked on the website, "When you begin a new application, how sure are you that it will be useful to your users?", can also be asked of a solo-dev, solo-user project. How useful is this or that? And, even more importantly, you often don't have enough information initially; it becomes clearer later on, and then changes may be necessary.
Many years ago I used PHP and just tied together functionality, mostly in functions, later in classes. That became the basis for when I switched to ruby - that eventually became a multi-paradigm "webframework". I hated being tied down to one particular way to go in rails. For the current iteration I am expanding on "treating every HTML tag as an object" (mostly, actually, it is the div-tag and the p-tag that is more important, as well as input). One key of this is that I can, for instance, do:
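(A hypothetical sketch of the "every HTML tag as an object" idea, in Python; the author's actual DSL is Ruby and surely differs:)

```python
class Tag:
    """Every HTML tag is an object that can hold attributes and children."""

    def __init__(self, name: str, text: str = "", **attrs):
        self.name, self.text, self.attrs = name, text, attrs
        self.children = []

    def add(self, child: "Tag") -> "Tag":
        # Returns self so calls can be chained programmatically.
        self.children.append(child)
        return self

    def render(self) -> str:
        attrs = "".join(f' {k}="{v}"' for k, v in self.attrs.items())
        inner = self.text + "".join(c.render() for c in self.children)
        return f"<{self.name}{attrs}>{inner}</{self.name}>"


div = Tag("div", id="main").add(Tag("p", "hello"))
# div.render() == '<div id="main"><p>hello</p></div>'
```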
So, kind of being able to programmatically access everything in an OOP style. (The above may look a bit verbose; I made it a bit more verbose so it is easier to understand what the key idea is, the rest is just a DSL wrapper).
I want to expand this onto traditional GUIs, onto the commandline (as much as that supports it), via ncurses too (even though I absolutely hate it). I want to abstract as much as possible while trying to keep it as simple as possible. Anyway, going a bit off-topic - the point is that designing it as a monolith from A to Z, from bottom to top, is not necessarily super-elegant, but it seems easier to start that way and keep pushing forward. Eventually you'll see which patterns can be simplified. Once the foundation is solid, well-documented, tested, it is a lot easier to build additional things on top of it, including third party code or microservices (depending on the size and its stability; the latter is a big problem. I hate being tied down to any frozen API, so my code kind of becomes unstable over time, which is not good but difficult to avoid. It's often more fun to write something new or fresh than fix ancient bugs in a code base that became really ugly and complicated.)
remy_porter@reddit
Don’t define modules. Define messages. That’s OO basics right there. Start with messages. Then monolith vs microservice becomes a deployment question: “how do I route messages between components?”
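A minimal Python sketch of this message-first idea (names like `Router` and the `billing` topic are hypothetical): components only send and receive messages, and a router decides whether delivery is an in-process call or would become a network hop.

```python
from typing import Callable

Handler = Callable[[dict], dict]


class Router:
    """Routes messages to components; deployment decides local vs. remote."""

    def __init__(self):
        self._local: dict = {}

    def register(self, topic: str, handler: Handler) -> None:
        self._local[topic] = handler

    def send(self, topic: str, message: dict) -> dict:
        if topic in self._local:
            return self._local[topic](message)  # in-proc dispatch (monolith)
        # In a microservice deployment, this is where an HTTP or queue
        # client would take over instead of raising.
        raise LookupError(f"no route for {topic}")


router = Router()
router.register("billing", lambda msg: {"invoiced": msg["amount"]})
reply = router.send("billing", {"amount": 42})
# reply == {"invoiced": 42}
```

The calling code is identical either way, which is exactly why monolith vs. microservice becomes a deployment question rather than a design one.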
cv-x@reddit
Does Martin Fowler have any track record of actual projects that could lead me to take his advice seriously? Genuine question.
StellarNavigator@reddit (OP)
Martin Fowler's been in the game for a long time. He's worked with tons of teams and companies at ThoughtWorks, so while he's not a startup guy or a hands-on project lead, his experience has shaped a lot of the practices developers use now. People who execute at the ground level also have experience worth sharing.
Slsyyy@reddit
He has a nice presentation style, because he just presents; there is no evangelism. Just look at this article: https://martinfowler.com/articles/serverless.html
supermitsuba@reddit
He used to be a consultant for ThoughtWorks, if I'm not mistaken. Wrote a bunch about his experiences, patterns and practices he used. So in short, yes. Do some googling
frederik88917@reddit
Hell yeah, why complicate things from the beginning?
Striking-Ad9623@reddit
Exactly. Only taking out unnecessary networking makes a huge difference.
AncientPC@reddit
Generally speaking, yes.
A monolith—like dynamic typing—allows people to iterate interfaces quicker with less overhead than SOA (or static typing). Generally this is a preferable trade off for newer companies and products with an ambiguous future.
However in larger established companies or employee count, the benefit of formal contracts (i.e. APIs) is preferred over interface iteration speed; especially if the cost of deprecating interfaces or supporting backwards-compatible APIs is expensive.
moreVCAs@reddit
Yes
Jabes@reddit
Yes
GYN-k4H-Q3z-75B@reddit
Next thing they tell me is I don't have to develop my in-house app with a Netflix style architecture?!?!!!!
CanvasFanatic@reddit
It depends.