Microservices vs. Monoliths: Why Startups Are Getting "Nano-Services" All Wrong
Posted by Complete_Cry2743@reddit | programming | 150 comments
sevah23@reddit
Why does nobody seem to mention the obvious option: break ownership of the monolith into separate libraries that can be iterated on and owned by different teams while still being deployed as a single application? IME it's the best balance of distributed ownership without the infrastructure overhead of microservices.
WindHawkeye@reddit
Because that model is really bad in large organizations and doesn't have independent deployments controlled by teams that own the code
sevah23@reddit
Can you elaborate on "large organizations" and why it doesn't work? Teams deploy their libraries through full CI/CD pipelines just like a microservice would; they just aren't shipping to prod on a more frequent schedule than whatever the main platform's schedule is.
If a team needs to activate changes on a different cadence than the main platform's schedule, the library owners just implement feature flags to control the activation of their specific stuff. In practice, I've seen this work well for orgs up to ~12 development teams, since code ownership can still be federated without the excessive cost and complexity of "every team owns microservices that are all integrated together" and without the "big ball of spaghetti" problems of million-LOC monoliths. Yes, it sacrifices the full tech stack flexibility of a true microservice architecture, but in exchange you get a much simpler system to manage, which is great for companies that are somewhere in between "one person coding a POC" and "Netflix" scale.
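A minimal sketch of what that deploy-dark-then-activate pattern could look like (the `FeatureFlags` helper and the `orders.v2.pricing` flag name are invented for illustration; a real setup would back this with config or a flag service):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public final class FeatureFlags {
    // Hypothetical in-process flag store; in practice this would be backed by
    // configuration or a flag service owned by the platform team.
    private static final Map<String, Boolean> FLAGS = new ConcurrentHashMap<>();

    public static void set(String name, boolean enabled) { FLAGS.put(name, enabled); }
    public static boolean isEnabled(String name) { return FLAGS.getOrDefault(name, false); }

    public static void main(String[] args) {
        // Library code ships with the platform release but stays dark until
        // the owning team flips the flag, decoupling "deploy" from "activate".
        double base = 100.0;
        System.out.println(isEnabled("orders.v2.pricing") ? base * 0.9 : base); // 100.0: flag off

        set("orders.v2.pricing", true);
        System.out.println(isEnabled("orders.v2.pricing") ? base * 0.9 : base); // 90.0: new pricing active
    }
}
```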
WindHawkeye@reddit
You can't "ship" a library independently. You have to nag everyone else to update their dependency on you.
hippydipster@reddit
There are hundreds, thousands of open-source projects doing exactly that.
No, different team, different responsibility. Any large organization is going to live or die on its best practices and the discipline of the various teams. There's nothing new there.
WindHawkeye@reddit
The point is that when you need to ship a critical bugfix for some particular system, say auth, you don't want to have to get 200 other teams to push their thing that uses your library to prod.
hippydipster@reddit
I get that, but the bug is something those other teams are experiencing, so it's their bug too. And my point about the large org living or dying on its practices means they have a plan for how they keep up to date with their internal libs.
I just don't see it as some terrible problem. Seems a pretty standard issue we deal with in our projects all the time.
But when my project is suffering from a bug that's then fixed in that library, I update it. Presuming I have good processes for keeping my projects up-to-date generally, this is a simple thing.
Presuming we have bad processes and practices just means there's no good advice to give. If someone's or some team's digging themselves a hole, they have to first stop digging before we can fix other things.
WindHawkeye@reddit
It's only their bug because of the shitty library model making it their bug.
hippydipster@reddit
Writing libraries is the cause of bugs?
WindHawkeye@reddit
Writing libraries forces the bug into consumers of your api
hippydipster@reddit
That's how software works. A bug in code is a bug, whether it's in a library, a module, a config file, an external service, etc.
WindHawkeye@reddit
No
sevah23@reddit
If the main platform the teams are integrating with simply pins to the major version of its dependencies, then the most up-to-date dependency gets pulled in at build time for the platform. That's a trivial fix for modern package management software.
We could circle jerk about what constitutes a "large" organization, but the point I was making is that if your only reason for moving to microservices is to distribute code ownership across many teams, there's a very reasonable and scalable alternative to "everything is its own service with its own infra and latency".
Realistically, most teams at most companies do not have such unique and challenging technical constraints that they need to deviate from a shared platform, and this plugin-like architecture adapts easily to microservices if they become necessary, since the library would just replace its logic with the appropriate call to a dedicated microservice.
WindHawkeye@reddit
The alternative simply doesn't work. You can't "deploy" a library to production like that
No_Pollution_1@reddit
lol this is like ChatGPT barfed up an SEO-optimized article for clicks
n3phtys@reddit
Microservices are not as bad as one might think.
At least they force people to use their brains. They enforce some kind of architecture and make it very painful to go against it.
But maybe that's a good thing too, from some point of view? Monoliths are actually way harder to program in because they don't enforce any modularization. Microservices somewhat do. It often sucks, and is paid for with incredibly bad performance and development velocity, but it is enforced.
People online keep making it appear as if Monoliths are the solution to any problem. I still encounter bugs in monolithic software. So they are also not perfect. So why do we get 5 blog posts every week talking about monoliths vs microservices?
Pure, reusable functions are way better. But we're not talking about that kind of stuff nearly as much.
Ran4@reddit
It doesn't enforce architecture. People will gladly end up with N+1 queries... except not against a DB with a 10 ms penalty, but against a web service with a 250 ms penalty per hit.
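A contrived sketch of that failure mode (the `CatalogClient` interface here is hypothetical): the per-item loop that is merely wasteful against a local DB becomes seconds of pure latency against a remote service, and nothing in the architecture stops you from writing it:

```java
import java.util.ArrayList;
import java.util.List;

public class OrderPricing {

    // Stand-in for a remote catalog service client; each call is a network
    // round trip (tens to hundreds of ms) rather than an in-process lookup.
    interface CatalogClient {
        Product getProduct(String productId);                // one call per item -> N+1
        List<Product> getProducts(List<String> productIds);  // one batched call
    }

    record Product(String id, double price) {}
    record OrderLine(String productId, int quantity) {}

    // N+1 shape: one remote call per order line. With 50 lines at ~250 ms per
    // hop, this single "operation" spends ~12.5 seconds on latency alone.
    static double totalNaive(List<OrderLine> lines, CatalogClient catalog) {
        double total = 0;
        for (OrderLine line : lines) {
            total += catalog.getProduct(line.productId()).price() * line.quantity();
        }
        return total;
    }

    // Batched shape: one remote call for the whole operation.
    static double totalBatched(List<OrderLine> lines, CatalogClient catalog) {
        List<String> ids = new ArrayList<>();
        for (OrderLine line : lines) {
            ids.add(line.productId());
        }
        List<Product> products = catalog.getProducts(ids);
        double total = 0;
        for (int i = 0; i < lines.size(); i++) {
            total += products.get(i).price() * lines.get(i).quantity();
        }
        return total;
    }
}
```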
n3phtys@reddit
We're not talking about small projects, we're talking about multi-million dollar greenfield projects. Otherwise there would never be a need to scale to more than 20 developers.
And CTOs of companies like that should rarely need to actually make such basic architecture decisions.
Most projects at that level are either profit-first development (time to market is important in that case) or rewrite projects. And if you're doing partial or full rewrites, moving from a monolith to SOA becomes a more genuine question.
And by the way, 10ms DB penalty is something that hides pretty well until you're dead from a thousand cuts of performance issues. But if it's 250ms each, everyone feels this from the start. In a time of agile development, such poorly performing queries will be visible in reviews / demos, which should happen every few weeks. That's the kind of enforcement I am talking about. Make bad decisions either impossible or very clear from the start - that's the job of good architecture.
A well-performing, well-structured code base is always preferable, but our industry can't even decide whether it should accept Clean Code as a good book recommendation or not - we're nowhere close to a simple silver bullet.
BasicDesignAdvice@reddit
I have a lot of expertise working with micro-services, but I do agree with the general online opinion that most companies don't need them. At least not at the start. The biggest benefit is that when scaling you can scale only the parts of the application you need, but most companies don't ever need that in reality and can just stick with a monolith.
Everything else is bickering about problems that exist no matter what, like automation tooling, whether or not the company writes and maintains shared libraries, culture, testing, and a bunch of things that neither approach is meant to solve.
hippydipster@reddit
Almost no one actually needs microservices. 80% of design discussion in the industry is how to do microservices. The attention is out of whack.
evert@reddit
One mistake I see people make over and over again is treating it as Monolith vs. Microservices. There are many steps in between. Most people should just adopt SOA. Microservices are an extreme form of SOA that's absolutely unnecessary for 99% of cases.
WindHawkeye@reddit
I think when most people use the word microservices they are referring to SOA.
dark180@reddit
It's not a monolith vs. micro-service problem. They can both be great and they can both be horrible. It's a bad architecture/design problem. Unless you have someone who knows what they are doing making these decisions and enforcing them, they will both suck. Rules of thumb when deciding if you need a new micro-service:
Will it accelerate delivery? (This is more of an organizational problem; no point in having multiple microservices if you only have one small team.)
Is there a technical need?
robhanz@reddit
Like all decoupling, they can make sense if you are actually decoupling. Truly decoupled services are great.
Tightly coupled services split across multiple projects are a disaster in the making.
For most services, a given operation should ideally result in one call to any given other service. If you're going back and forth in a single flow, you're not decoupled. The exception is things like logging/analytics, where the calling service isn't really dependent on the results anyway, and it's basically fire-and-forget.
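Roughly what that distinction looks like in code (the `PaymentClient` and `AnalyticsClient` interfaces are invented for illustration): one dependent call per operation, with analytics pushed off the critical path:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class CheckoutFlow {

    interface PaymentClient   { String charge(String orderId, double amount); }  // result matters
    interface AnalyticsClient { void recordEvent(String name, String orderId); } // result doesn't

    private final PaymentClient payments;
    private final AnalyticsClient analytics;
    private final ExecutorService background = Executors.newSingleThreadExecutor();

    CheckoutFlow(PaymentClient payments, AnalyticsClient analytics) {
        this.payments = payments;
        this.analytics = analytics;
    }

    String checkout(String orderId, double amount) {
        // One dependent call per operation: checkout genuinely needs this result.
        String receipt = payments.charge(orderId, amount);

        // Fire-and-forget: the caller neither waits on nor depends on analytics,
        // so a slow or failing analytics service can't break checkout.
        CompletableFuture
                .runAsync(() -> analytics.recordEvent("order_paid", orderId), background)
                .exceptionally(ex -> null); // swallowed here; log it in real code

        return receipt;
    }
}
```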
tonsofmiso@reddit
I'm pretty new to working with microservices, or distributed services. What makes services truly decoupled?
wildjokers@reddit
Each service having its own database that is kept in-sync with asynchronous events. No microservice should have to synchronously communicate with another microservice.
seanamos-1@reddit
You can build a system doing only this (subscribing to events) instead of calling/querying another service synchronously.
In practice, this comes with many of its own downsides, so I don't see such a dogmatic approach very often.
It has its place, but there wouldn't be such significant ecosystem investment in things like inter-service communication (service discovery, routing, service meshes, gRPC etc.) if "no microservice should have to synchronously communicate with another microservice" actually held true.
Sync communication is a core part of building a distributed system, and depending on exactly what you are trying to do (at the call/feature level), sync/async could fit better.
wildjokers@reddit
Can you explain how you get independent deployment and development when making synchronous calls to another service?
I can't think of a single advantage of taking fast and reliable in-memory function calls and replacing them with relatively slow and error-prone network communication.
Perentillim@reddit
Tried and tested api versioning and feature flags?
If anything I’d say it’s preferable because you are in control of the cut over
hippydipster@reddit
If the processing required dwarfs the network lag, and is large enough in complexity to require a team, or is large enough in hardware requirements to not fit in your monolith's other hardware requirements, then it can make sense to move a synchronous call to another deployable unit.
Imagine you have a process that can take 2 hours and requires 40GB of RAM. Do you want to provision your monolith's VM with an extra 40GB just because this job might run once a week?
FarkCookies@reddit
Imagine I have services called OrderService, CatalogService and CustomerService, and I am working on DeliveryService (or something) that needs orders, products and addresses. Does it have to have a copy of all three databases plus its own?
wildjokers@reddit
Not necessarily a complete, exact copy, but it should have the data it needs to do its job in its own database tables. It should be getting this data by listening for events fired by the other services.
Data redundancy is both accepted and expected.
You can google “eventual consistency” and “event-based architecture” for more details.
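A bare-bones sketch of that pattern (the event and class names are invented): the consuming service projects events into its own local copy and answers requests from that copy, accepting that it is eventually consistent:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class DeliveryAddressProjection {

    // Event published by a hypothetical upstream customer service.
    record CustomerAddressChanged(String customerId, String street, String city) {}

    // Local read model: only the fields the delivery service actually needs.
    // In real life this lives in the service's own database; a map stands in here.
    private final Map<String, String> addressByCustomer = new ConcurrentHashMap<>();

    // In practice this handler is wired to a broker subscription (Kafka, SQS, etc.).
    public void onCustomerAddressChanged(CustomerAddressChanged event) {
        addressByCustomer.put(event.customerId(), event.street() + ", " + event.city());
    }

    // At request time the delivery service reads its own copy and never makes a
    // synchronous call to the customer service. The copy is eventually consistent:
    // it lags the source by however long event delivery takes.
    public String shippingAddressFor(String customerId) {
        return addressByCustomer.get(customerId);
    }
}
```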
FarkCookies@reddit
Imagine some Fulfillment service listening to all catalog changes, what a waste and just inviting defects. That's more event sourcing, and that's just one way of doing microservices, it is not the way (cost here is none). And eventual consistency is enough of a pain in the ass with semi-synchronous distributed databases like DynamoDB and Cassandra. Keeping distributed state in sync is one of the most challenging computer problems, it is absolutely insane to proliferate it all over the place. If we are talking about just services, where like a team owns a service, then I could consider it. We have like 3 microservices between 5 of us and it would be absolute insanity to keep redundant data in sync at this scale. And if you later change what data you need (or you had a bug), you need to replay the data. Yeah, no thanks, I'll stick with good old sync calls, retries, idempotency tokens, queues, async jobs for longer-running things and streams on rare occasions (plus some AWS-specific tech that just works).
wildjokers@reddit
It's the only way that makes sense to meet the goals of microservices, which are independent development and independent deployment. If you have services making synchronous calls to each other, that is just a distributed monolith. There is no independent deployment or development possible.
fletku_mato@reddit
So what you're saying is that an application listening to an event queue which is fed by another application is more independent than an application which is calling a REST API provided by the same "other application"?
wildjokers@reddit
Yes, because it has all the information it needs in its own database and it can be developed and deployed independently of another service.
The only time any type of coordination is needed is when a downstream service depends on some new information an upstream service needs to add to an event. Even in that situation though the upstream service can be modified at some point and then later the downstream service can be changed to handle the new information. Deployment is still independent, might just need to coordinate with the other team/developer about the name/format of the new event data.
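A small sketch of why that coordination stays loose (assuming JSON events read with a tolerant reader such as Jackson; the event and field names are invented): the consumer declares only the fields it uses, so the upstream team can add data and deploy first:

```java
import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
import com.fasterxml.jackson.databind.ObjectMapper;

public class OrderPlacedConsumer {

    // The downstream service declares only the fields it cares about. Unknown
    // fields added later by the upstream team are simply ignored, so upstream
    // can deploy first and downstream picks the new data up when it needs it.
    @JsonIgnoreProperties(ignoreUnknown = true)
    record OrderPlaced(String orderId, double amount) {}

    public static void main(String[] args) throws Exception {
        // Upstream has already started emitting a new "currency" field.
        String payload = "{\"orderId\":\"o-42\",\"amount\":19.99,\"currency\":\"EUR\"}";
        OrderPlaced event = new ObjectMapper().readValue(payload, OrderPlaced.class);
        System.out.println(event); // the extra field is ignored, nothing breaks
    }
}
```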
fletku_mato@reddit
It's often a lot easier to mock API calls than message queues. And you absolutely can deploy an application which uses external APIs that might be unavailable at the time. Similarly to an event-sourced app, it will not work when there are no events / the API is not responding.
Not sure what you are talking about here. When we use an event queue, there are at least 2 network calls instead of one API call.
Literally exactly the same problem you have with direct API calls. The apps are not decoupled, as true decoupling is just fantasy. Any amount of layers between apps cannot remove the need for common schemas when they deal with the same data.
mgalexray@reddit
I worked with a system designed like that - and not even that big a one, maybe 80k items in inventory? Multiple services, all wired together through CDC over Kafka.
It was a nightmare when the "eventual" part of the consistency kicked in for various reasons. Multiple multi-day outages because Kafka stopped replicating, or bad data got pushed, or a downstream consumer had a bug that resulted in the replicated view being outdated or wrong.
I think in general we build things with the wrong level of complexity for the problem at hand, expecting 10x growth when it never happens.
FarkCookies@reddit
Yeah exactly, it is HARD. I also worked (albeit briefly) at a bank which took this approach to integrating different systems (not just services), but it was not the microservice integration pattern. I left before my part went live, so I don't really have hands-on experience, but the people who worked there were on call, prepared to replay messages. Yeah, no thanks.
Sauermachtlustig84@reddit
Tbh, in that case your service should probably be a monolith and just query the data.
Decoupling is super useful, but it only has benefits if you can actually decouple in some form. Maybe that's services, i.e. you have a domain describing ordering and that's decoupled from fulfillment - both share data but only via events, and one of them might be down without affecting the other too much. Or do geographic decoupling, i.e. your service is split between the EU and the USA.
FarkCookies@reddit
If you just do everything via events you are not DEcoupled. You are just coupled in other ways. You still depend on the schemas and business logic that emit the events in the upstream service. If the issue of coupling is "not knowing the IP address of the service", then there are app meshes and service discovery. The only virtue of going all in with events is that the catalog service may be down and this doesn't affect some fulfillment operations (not all services can work independently in any case). If for your business it is critical that fulfillment continues no matter what, then sure, there is some value in this approach. But this is not MICRO service scale; those services or systems must do quite a lot to be able to work independently anyway.
robhanz@reddit
Well, as I said, one call per operation. Requiring multiple round-trips for a typical operation usually indicates tight coupling.
Having separate databases is good - or at least separate schemas.
Look at how closely the calls you make match what you'd make in an "ideal" environment. The more "leakage" of internal details there is - the less the calls match your ideal - the more tightly coupled you likely are.
Calls should be to "services", not (conceptually) manipulating state. Sure, some state manipulation will happen as a side effect, but it shouldn't look like you're changing state.
leixiaotie@reddit
For me, it's decoupled when a service can carry out its main objective by itself, without any other internal microservice being required. External, third-party services may be present.
For example, if a payment service needs either the credit card service or the bank transfer service in order to make any payment, it's coupled to those services. It can be argued whether it is tightly coupled or not, but that's another matter.
The credit card service, however, can be made decoupled because it only communicates with the third-party CC API and no other internal service. If we expose some APIs, we can work with the CC service perfectly well while other microservices are down.
Both examples are actually fine, because while the payment service needs other services to take a payment, it can still do its main work without either of them, such as recognizing the items that need to be paid and creating bills and receipts for payment. Now if you split payment and billing into separate services, those are tightly coupled services, which is bad, since payment cannot work without the bill service and billing cannot work without the payment service.
MillerHighLife21@reddit
Not needing to call each other to function.
s13ecre13t@reddit
I would also add, having multiple clients.
If a microservice only has one client, it should be rethought if it needs to be a microservice.
I've seen microservices that only ever get called by one specific other micro-service. It is as if someone was paid by how many micro-services they could cram into the solution.
Saint_Nitouche@reddit
A separate database is often a good first sign.
Cheraldenine@reddit
The database server itself is a good example too. It's developed without specific knowledge of the applications that are going to be using it, so it's decoupled.
alarming-deviant@reddit
Good summary
CanvasFanatic@reddit
Meanwhile, here's me with a 2M LOC Java monolith that two dozen teams own little pieces of and that takes an hour to deploy.
edgmnt_net@reddit
Do you need to actually deploy the monolith that often? I've seen really bad microservices setups where you couldn't test anything at all locally, everything had to go through CI, get deployed on an expensive shared environment and that limited throughput greatly.
Skithiryx@reddit
The CI/CD ideal is that you deploy validated changes in isolation from one another, so with multiple teams I’d expect to want to deploy multiple times a day. Of course, that’s not always realized.
Pantzzzzless@reddit
Our project comprises 20-25 different domains, with I think 17 separate teams. (A few teams own 2 domains)
We have 4 environments through which we promote each monthly release, mainly because any prod rollbacks would be very costly.
We do multiple deployments per day to our lower env which is isolated from the app that consumes our module and do as much integration/regression testing as we can before we release it to the QA env.
It's a bit cumbersome, but pretty necessary with an app as massive as ours.
edgmnt_net@reddit
Some of the stuff I interacted with was massive particularly due to granular microservices, artificial splits and poor decisions which introduced extra complexity, code, work. It has become too easy to build layers upon layers of stuff that does nothing really useful and just shuffles data around.
Skithiryx@reddit
What makes a prod rollback costly for you? Half the idea of microservices and continuous deployment is that rollbacks should be relatively painless or painful ones should be isolated to their own components. (Obviously, things like database schema migrations can be difficult to roll back)
Pantzzzzless@reddit
Costly in terms of feature work piling up behind prod defects if they make it that far. Some months we end up with 7-8 patch versions before being able to focus on the next release, which is then 5 days out from its prod deployment target date.
Though this speaks more to our particular domain's scope ballooning out of control in the last few years than it does our deployment pipeline.
CanvasFanatic@reddit
Without going into a lot of detail that might give away my employer: yeah we do.
billie_parker@reddit
Isn't that what the article is saying?
CanvasFanatic@reddit
It’s in there, but this “nanoservices” thing feels like a straw man.
edgmnt_net@reddit
I'm personally yet to see where even micro makes sense. Truly decoupling stuff is harder at small scales. Otherwise we've long had services like DBs and such, those work really well because they're sufficiently general and robust to cover a lot of use cases. And once you get into thousands of services, I really can't imagine they're big. The less obvious danger is that they've actually built some sort of distributed monolith.
CanvasFanatic@reddit
Microservices primarily address two problems:
Organizational - you have teams that need to own their own projects and be able to set their own pace without stepping on what other people are doing or being stepped on. This is by far the most important reason to consider a multi-service architecture.
Scaling, locality etc - different parts of your application need to scale at different rates / deploy independently / exist in different cardinalities relative to one another etc. An example would be real-time services with inherent state (think socket connections) juxtaposed with typical stateless user services. Authentication / authorization is also one of the first pieces to be “broken out” once you start to scale your number of users because it might be something that happens on every request.
My rule of thumb is that stuff that deploys together should be in a repo together.
It's true that most people don't need this on day one with only a handful of people working on the codebase. It's also true that if you make it to the "late-stage startup" phase, with a few million active users and enough people that everyone can't eat at the same lunch table anymore, you're probably going to need to start breaking stuff apart.
BundleOfJoysticks@reddit
What's hilarious to me is that maximally independent architectures where stuff can be deployed independently are a trend that's contemporaneous with the rise of the monorepo. Both are stupid for the vast majority of startups.
CanvasFanatic@reddit
Over and over again it turns out there are no shortcuts for understanding your actual problem.
BundleOfJoysticks@reddit
But that's, like, upfront design and domain modeling!! 1! WATERFALL!!1!1!! WE'RE AGILE!!1!
SMH my head
fletku_mato@reddit
It's always a distributed monolith, but that's not always such a bad idea. The truth is that there is no way to build any large system in a way where the components are truly fully decoupled, but splitting functional components into their own services can make development and maintenance easier in some compelling ways.
TomWithTime@reddit
Sometimes I just wish I could test micro service A and micro service B without having to also have micro service C through ZZZ running (which all update and break often)
Fuck graphql
CanvasFanatic@reddit
For the record I’m right there with you on that.
TomWithTime@reddit
I like message queues but I feel like that graphql server is going to stop us from ever getting away from a distributed monolith. Hopefully something emerges that is easier to use, easier to integrate & separate, and somehow more efficient so everyone is basically forced to move over to it.
I can't imagine what that would look like. From a dev experience I think it would be like moving from react redux to svelte. That wonderful syntax stopped me from giving up on front end dev
CanvasFanatic@reddit
I’ve honestly never been entirely clear what it’s meant to be good for other than integrating with popular JavaScript frameworks and letting frontend devs shape server responses without dealing explicitly with the backend.
TomWithTime@reddit
The only arguments I've heard that justify the extra work are:
Different consumers of your API may have different needs and this is less work for you to customize it for them
Clients can reduce the size of the payload which will be good for any potential limits of bandwidth and processing power to an edge system
But of course there are many things you might know about your own applications and consumers that make those points moot. Our team hasn't been request-bombed yet, but I've read horror stories about that and I'm concerned. We have request cost analysis and maximums, but you never know when a new feature accidentally doesn't work around the N+1 problem you get for free with GQL out of the box, so you've got a deep relationship with no loaders to mitigate it.
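For anyone unfamiliar, the loaders mentioned above boil down to a batching trick you can sketch by hand (this is an illustration of the pattern, not any particular library's API): queue the keys requested while resolving one request, then fetch them all in a single call:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.function.Function;

// Hand-rolled illustration of the DataLoader idea: load() only queues a key,
// dispatch() resolves every queued key with one batched fetch, so a deep
// GraphQL selection doesn't turn into one backend call per node.
public class SimpleBatchLoader<K, V> {

    private final Function<List<K>, Map<K, V>> batchFetch;
    private final List<K> pendingKeys = new ArrayList<>();
    private final Map<K, CompletableFuture<V>> pendingFutures = new HashMap<>();

    public SimpleBatchLoader(Function<List<K>, Map<K, V>> batchFetch) {
        this.batchFetch = batchFetch;
    }

    // Called by field resolvers; returns a future that completes on dispatch().
    public CompletableFuture<V> load(K key) {
        return pendingFutures.computeIfAbsent(key, k -> {
            pendingKeys.add(k);
            return new CompletableFuture<>();
        });
    }

    // Called once per request (or per resolution level): one batched query/call.
    public void dispatch() {
        if (pendingKeys.isEmpty()) return;
        Map<K, V> results = batchFetch.apply(List.copyOf(pendingKeys));
        for (K key : pendingKeys) {
            pendingFutures.get(key).complete(results.get(key));
        }
        pendingKeys.clear();
        pendingFutures.clear();
    }
}
```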
Cheraldenine@reddit
It's the micro part of the name. Splitting off things that can clearly stand on their own as services, of course that's good. The micro seems to cause people to go way overboard with it (things that would work so easily in the same database with a foreign key between the tables have to live in two different services with their own databases, because that's what our architecture document says, etc).
psaux_grep@reddit
You can have shit monoliths, and shit microservices.
What is best for your org and your use case really depends on what you are attempting to do, but at a certain point monoliths typically need to be broken up for the sake of velocity.
Had a former colleague who talked about a project he worked on for a client where the monolith took three hours to deploy.
Releases became hugely expensive. Basically two week code freezes and two deploys per day and lots of dead time waiting for deployment.
CanvasFanatic@reddit
💯
sameBoatz@reddit
I have 3 teams and each does at least one release a day. We ship a feature when it is ready, to lower risk and simplify rollbacks if needed. I get mad when our builds take 10+ minutes.
BasicDesignAdvice@reddit
We can run unit tests and database tests locally, but everything else is just "cross your fingers." Luckily the services take seconds to deploy, and can be rolled back in seconds as well. We deploy to prod dozens of times a day, which in general I like.
PeachScary413@reddit
Are you deploying/building it on a microwave from 2013?
Even a medium sized modern server should not take 1 hour (!?) to build a 2M loc application... unless part of your build process is to try and calculate all of the existing prime numbers.
zacker150@reddit
Once upon a time, Amazon had a monolith for its retail site. It got so big that it took half a day to deploy. They saw that and invented microservices.
CanvasFanatic@reddit
Also why they went so hard on api contracts as interfaces between teams.
dinosaursrarr@reddit
My work has hundreds of micro services that each take over an hour to deploy
BasicDesignAdvice@reddit
Why on earth do they take so long? We have hundreds as well, but they deploy in seconds.
s13ecre13t@reddit
there are a few possibilities:
CanvasFanatic@reddit
That sounds pretty awful.
onetwentyeight@reddit
Now imagine that but you've got a thousand dependencies in a thousand tiny many-repos.
CanvasFanatic@reddit
I am in no way arguing that it’s impossible to make a mess out of microservices, but too many people use the fact that they can be done badly and carelessly as an excuse to stick with monoliths past the point where they ought to have begun decoupling things.
By sheer accident of fate I’ve spent more of my career in places making the latter error than the former.
onetwentyeight@reddit
Fascinating. I wonder if massive monoliths are more likely in your industry or language(s) of choice.
I refuse to work with Java, I have since 1997. I have been working in C, Go, Rust, and Python lately and have not had monolith issues. In fact I've seen a push for the inappropriate application of microservices pretty consistently.
CanvasFanatic@reddit
Most companies I’ve worked for have been later stage startups. In almost every case the background of my tenure with them has been moving from a giant monolith running on the JVM to smaller services written in Go, nodejs etc. With my current employer I’ve just shipped our first rust service.
bwainfweeze@reddit
I was so proud when I wrangled our little monster down to 30 minutes.
CanvasFanatic@reddit
Right? I mean if people go nuts and decide to make services out of individual functions that’s clearly wrong-headed, but that’s not really a point in favor of monoliths.
So much of engineering is understanding the problems you actually have.
LessonStudio@reddit
Most programmers suck at multi-threading; like really really really suck. Many people proceed to avoid threading by badly designing microservices which are effectively running as threads; with all the same sorts of issues like race conditions, etc.
Properly designing a threaded system is hard, but ends up being very clean. But, a badly designed one often ends up with piles of hacks. Things like one thread having a sleep so that it is most likely that the other thread is done. Or threads all hanging waiting for each other so much that it is all single threaded but with way more extra steps.
mgalexray@reddit
lol, that brought back some memories. One of my previous workplaces was paying 300k/mo on AWS to run about 60 services - each having its own geo-redundant RDS (you can imagine where the cost came from). It wouldn't be that sad if they didn't have an inventory of only about 80k items to sell - and roughly 800 peak concurrent visitors.
Honestly, sometimes you could run that three times over on a pair of Hetzner dedicated servers and not pay through the nose to get Bezos another yacht - but apparently you've gotta suffer for that CV somehow.
LessonStudio@reddit
Insanely, that is enough to set up your own ever-growing server farm and staff it quite well.
I do industrial ML, which is about the most demanding per-user server application I know of. But most customers don't do a pile of stuff at any given moment. With that, your pair of very nice cloud servers would be totally fine for 800 customers.
But for most things that most people do, a couple of $5-10/mo servers should be able to service 10,000 customers no problem. So that covers even your 10x issue.
hippydipster@reddit
At my last place, they paid roughly $1 million/year to google to service somewhere in the neighborhood of 10,000 users.
LessonStudio@reddit
There is an esoteric technical term for that:
Holy Shit!!!!
Clawtor@reddit
This sounds a lot like my work -_-
Monthly costs are about 5k and afaik we have fewer than 50 users. I'm currently working on an automation that will save a few minutes per month. Completely stupid decisions.
KevinCarbonara@reddit
I feel like "nanoservice" isn't even a thing. It's just microservices in a world where people started using the word microservice to refer to any service ever.
hippydipster@reddit
How many lines of code at the boundary between nano and micro service? Between micro and just "service"?
Capable_Chair_8192@reddit
Using microservices is a great way to turn reliable, microsecond-long function calls into failure-prone, 10s-to-100s-milliseconds network calls. It’s also a great way to make deployment and ops 10-100x more complicated.
I know it has its place, but most people gloss over the overhead. You have to really make sure it’s needed before you use them, otherwise you’re just making your life harder.
wildjokers@reddit
Sounds like your experience is with distributed monoliths, not microservices. Because what you describe isn't microservices.
PorblemOccifer@reddit
I think that’s part of the critique - many systems use microservices on a service level and think “okay perfect, I’ve obtained the benefits of a microservices architecture”
Despite the fact that not a single microservice is actually independent :D
BasicDesignAdvice@reddit
I actually work in a micro-service architecture and I think there are definite pros, but this always makes me laugh.
Capable_Chair_8192@reddit
I had to Google this, thanks for the laugh, amazing article! https://grugbrain.dev
billie_parker@reddit
Does anyone have any opinions on the use of microservices for products that don't even interact with the web?
I'm currently working at a startup with a product whose main feature is physical. Let's just imagine it is a robot vacuum. The software has several components: high-level state machine, room mapping, control, system monitoring, etc. In all there may be 10 or so separate components, each running in its own container, communicating over the local network.
I have some background in embedded systems and this seems pretty new to me. So far it has been much more of a hindrance than a benefit. Each component has its own git repo which adds a ton of compatibility issues. There's a huge amount of overhead and work spent just maintaining the interfaces between the components. I understand that would take some work even without microservices, but it seems to me that communication over a local network adds to this. Even just compiling new code to test it is a major pain. There's a bunch of additional steps that wouldn't exist otherwise. I have to ssh into the device, check out my code, then enter the container to compile it and launch the executable for my component.
I have to wonder, does this make sense at all, or is this startup completely doing the wrong thing? We don't have any devops team. It seems like this container design was undertaken by embedded developers that wanted to jump on the bandwagon.
MadRedX@reddit
Speaking as someone who works at a company with a very small IT team - you're describing a system that probably takes far more time to maintain than any benefit the microservice / container architecture provides. If I spend more time doing DevOps work than providing value as a software developer - it's usually been a sign to me that catastrophic decisions were made.
One valid scenario I can think of is if each component runs on a different, isolated set of hardware. It'd have to be a situation where orchestrating everything in one app is too inefficient / cumbersome.
Another scenario is that each of the components you describe are so large (millions of lines of code) that it's impossible to reason about the system in one repo.
But say it's multiple containers on a single embedded chip. There are so many inefficiencies going on - containers are not resource optimal compared to running a single application instance. Microservices networking has its own overhead and problems to deal with - whereas a single application instance can communicate across components in memory.
The best analog to your situation are Electron desktop applications - they offer web developers (JS, HTML) the ability to use their skills to make desktop applications. The problem is that these apps are literally single purpose browsers - and if you're familiar with the amount of memory a single Chrome browser process uses, these apps are sucking up so much RAM.
I'd scrap the whole thing in favor of one easy to manage application.
BananaParadise@reddit
Isn’t there a middle ground between monoliths and microservices? Break the monolith into modules that are shared amongst applications and transition to microservices if need be
evert@reddit
Most people should transition to SOA. It's fine to have several large services that encompass a single domain.
feczkob@reddit
There is, it’s called distributed monolith, you should try it ;) /s
Seriously, a “Modulith” (modular monolith) is rather what you’re looking for: a nicely structured monolith.
LaOnionLaUnion@reddit
I keep seeing dumb decisions to make frontend apps that should be independent somehow dependent on another application when they don't have to be. It's always in large public corporations.
Complete_Cry2743@reddit (OP)
Been there
editor_of_the_beast@reddit
Hasn’t this exact post been written like a thousand times?
Complete_Cry2743@reddit (OP)
Well, not by me. I think, or maybe I forgot I’ve written something like this before? Damn Alzheimer!!
jmonschke@reddit
I think that microservices can make sense for some cases (like Netflix), but when word gets out about anything new that works for a single (but well publicized) case, it gets picked up as the latest buzzword that everybody has to use even when that technology is not a good fit for their problem.
I also think that it works at Netflix because they also invested a lot in additional infrastructure like the "chaos monkey" and "circuit-breakers" to offset the issues that are introduced or exacerbated by the architecture which are then ignored/de-prioritized by other companies trying to (selectively/incompletely/cheaply) emulate the success story.
bruhprogramming@reddit
Monorepo gang rise up
PositiveUse@reddit
This comment doesn’t make any sense.
Monolith VS microservice is not a debate linked to Split Repos or MonoRepo…
edgmnt_net@reddit
It's not, but there is a fairly meaningful connection between independent versioning and independent deployability. You can hold microservices in a monorepo and share code, but then how do you roll out changes for just one microservice? Conversely, you might think about holding a monolith spread across repos, but what does that achieve? These combinations may be workable or even useful in some way (e.g. deploying different binaries in a truly heterogeneous architecture), but there are some caveats to consider.
WindHawkeye@reddit
Having multiple microservices within a single repo is rather easy.
edgmnt_net@reddit
Yeah, although the harder part is figuring out whether you can redeploy only X out of X, Y and Z when they're all sharing some code or definitions. In the most general case it isn't safe to do that, unless you have other means of reasoning about the actual changes.
Also, considering code size is rarely an issue, if you're going to redeploy and roll out everything and computing resources are homogeneous, just go with a monolith. They can scale horizontally too under typical workloads. A rather typical platform serving many small customers is unlikely to ever require computing resources beyond what one node can offer for any individual customer, so sharding and load balancing are often enough.
WindHawkeye@reddit
Nobody with a brain has ever suggested using microservices when only dealing with small customers.
HalcyonAlps@reddit
Your CI/CD pipeline becomes a lot more complex, but it's doable. Not that it would have been my preferred solution to begin with. The company I work for can't figure out how to scale a data-parallel application other than by splitting a monolith into microservices.
kunthapigulugulu@reddit
We have a monolith spread across different repos. We have different teams for some of the repos and different release cycles. Some of the repos are also shared with a different product. We integrate them in a main repo as libraries.
Rakn@reddit
I always split my service over three repositories. One for the configuration, one for the service code and another one for the business logic. It just makes the most sense. /s
Lothrazar@reddit
memes over content apparently
drakgremlin@reddit
Every time I work with monorepos it's a horrible mess.
reddit_trev@reddit
Splitting the mess into lots of little piles isn't tidying up.
bwainfweeze@reddit
Now we have fifteen versions of lodash in production…
drakgremlin@reddit
Instructions unclear.
Another team updated pyarrow and now our whole system is broken.
Loves_Poetry@reddit
Survivorship bias. The manyrepo setups don't get to production fast enough, so they get cancelled
bwainfweeze@reddit
Oh shit. Shots fired.
bwainfweeze@reddit
Every time I've worked on separate repos it's been a mess.
Every time I work on anything more than 150K LoC all in, it’s a mess.
carlio@reddit
Wow, I haven't seen an article saying exactly this for at least a week.
wankthisway@reddit
This sub is so dull and predictable. The same topics every week.
apf6@reddit
There's a simple rule so you can avoid overthinking it:
One team = One service = One repo = One deployment.
If you don't have a compelling reason not to do it that way, then just do that.
HR_Paperstacks_402@reddit
Yes, always go with the KISS approach. No need to introduce unnecessary complexity because of what things might be needed in the future.
Create a single service and split it up when there's a compelling reason.
A team working on a project that ultimately got cancelled (due to not being able to get everything working well enough) created a set of four micro services that was total overkill and really only needed to be one.
Now I'm part of the new team coming in to clean up the mess and simplify it all. Already took a different set of three services and consolidated them into one. The main functionality was a single threaded app with more than 30 instances. Now it's a multi threaded app with around a third of the number of instances.
zvons@reddit
One thing I don't see discussed here is the cost of refactoring everything into microservices from monolith.
Depending on the complexity, the communication between services and, most of all, managerial willingness to spend time where you "get no new features", it's hard to sell refactors.
I don't have experience in doing that so it may not be that complex (if you have experience, please share). Maybe it's not that big of a commitment. But from my experience a lot of the time "we'll do it later if needed" usually means "we won't find the time".
Unfortunately it's sometimes a hard sell for managers, especially if you do SCRUM and need to deliver something every 2 weeks.
HR_Paperstacks_402@reddit
The key thing is to think from the beginning about how things could be split out, creating well-delineated modules you can simply pull out without massive refactoring. Because yeah, if everything is intertwined, you are going to have a hard sell on doing it later on.
During design sessions, we have talked about what the long term state may be based on what we know today but also the fact we do not want to jump directly to that state because our path might change along the way as we learn more. We don't want to build in complexity that may never really be needed.
It is very likely some of the functionality we think could stand alone in the future will remain grouped together forever and that's ok.
One of the biggest issues I see with teams trying to do micro services is them essentially creating a distributed monolith and that is the worst of both worlds.
zvons@reddit
Yeah, keeping everything as decoupled as we can is good not just for microservices. And if we keep it that way, we are good in the future.
And as you said, we need to be okay with some things just staying together, and not waste too much time on what is probably a small return on that time investment.
Thanks for the perspective.
braddillman@reddit
Conway's law.
apf6@reddit
Yup! You can't escape it, so embrace it.
UMANTHEGOD@reddit
I like to think of this as coming up with a reason for doing something, versus discovering a reason for doing something.
In the first case, you are trying to come up with reasons why you should build a microservice. Scalability, SoC, blabla. The common buzzwords. They sound great on paper, but you are really just guessing at this point. Parroting trends and what others have told you, or it might even be based on your experience. But you still don't really know.
In the second case, you discover the reason why you need a microservice. Something in the real world happens where you are forced to go down that route, because there are no other good alternatives. Something concrete and tangible. There's no guessing game here.
namtab00@reddit
it's great when the client asks for microservices just because, when "the team" is one frontend, one backend and some junior turnover...
fuck me and my 17+ years experience, I guess, you know best...
Holothuroid@reddit
I mean, unless you do server-side rendering, you have two services...
prof_hobart@reddit
Martin Fowler wrote a similar article a few years ago.
BenE@reddit
Here is my attempt at getting this debate on a stronger theoretical footing based on code entropy.
The tl;dr would be something like: use monoliths early on in order to get tightly scoped, hierarchically organized logic where the surface for various problems is reduced. Then later, maybe carefully break out parts that could clearly benefit from being separate, only after having hardened them, and always being very aware that you are broadening their scope to a riskier extent and coupling them through less reliable, less statically checked layers. They will be riskier to change once they are at that layer, so they have to be more mature.
This debate is as old as time. One relevant data point is the history behind the choice of architecture for Unix and Linux. Unix was an effort to take Multics, a more modular approach to OSs, and re-integrate the good parts into a more unified, monolithic whole. Even though there were some benefits to modularity (apparently you could unload and replace hardware in Multics servers without a reboot, which was unheard of at the time), it was also the downfall of Multics. Multics was deemed over-engineered and too difficult to work with. Bell Labs' conclusion after this project was that OSs were too costly and too difficult to design. They told engineers no one should work on OSs.
Ken Thompson wanted a modern OS to work with, so he disregarded these instructions and wrote Unix for himself (in three weeks, in assembly). People started looking over Thompson's shoulder asking "Hey, what OS are you using there, can I get a copy?" and the rest is history. Brian Kernighan described Unix as "one of" whatever Multics was "multiple of". Linux eventually adopted a similar architecture.
The debate didn't end there. The GNU Hurd project was dreamed up as an attempt at creating something like Linux with a more modular architecture (funnily enough, GNU Hurd's logo is even a microservices-like "plate of spaghetti" block diagram).
It's Unix and Linux that everyone carries in their pockets nowadays, not Multics and Hurd.
dlevac@reddit
It all boils down to: do the contracts of each of your components make sense?
Micro services are great at enforcing strong delimitation between various contracts.
However, if your contracts are ill-defined to begin with, then your micro services will just make the problems more apparent. Which may be a good or a bad thing depending on how you think about it...
No_Flounder_1155@reddit
but to understand coupling you need to allow coupling.
aitchnyu@reddit
Do you have "contracts" that can't be enforced by "don't allow imports of x from y module" packages?
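For reference, that kind of import rule is straightforward to enforce inside a monolith, e.g. with an ArchUnit test (the package names below are placeholders):

```java
import com.tngtech.archunit.core.domain.JavaClasses;
import com.tngtech.archunit.core.importer.ClassFileImporter;
import com.tngtech.archunit.lang.ArchRule;

import static com.tngtech.archunit.lang.syntax.ArchRuleDefinition.noClasses;

public class ModuleBoundaryCheck {

    public static void main(String[] args) {
        // Placeholder root package; point this at your real code base.
        JavaClasses classes = new ClassFileImporter().importPackages("com.example.shop");

        // "billing" must never reach into "ordering" internals; it may only use
        // whatever public API package the ordering module exposes.
        ArchRule rule = noClasses()
                .that().resideInAPackage("..billing..")
                .should().dependOnClassesThat().resideInAPackage("..ordering.internal..");

        rule.check(classes); // throws AssertionError if the boundary is violated
    }
}
```

(Usually this runs as a unit test rather than a main method.)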
Moozla@reddit
Totally agree, and this is why it's always better to start on the big side and split when you see an obvious division in the functionality.
GaboureySidibe@reddit
Reinventing QNX will never go out of style.
DEFY_member@reddit
I agree with the general point of the article, but that graph is completely contrived, and makes me doubt if the conversation with "John, the Microsoft Evangelist" is real.
augustusalpha@reddit
TCP servers laugh in C ....
datnt84@reddit
Amen.