We saved 76% on our cloud bills while tripling our capacity by migrating to Hetzner from AWS and DigitalOcean
Posted by hedgehogsinus@reddit | programming | View on Reddit | 188 comments
ReallySuperName@reddit
Not to be one of those "hetzner deleted my account!11!!11!!!" type comments you see from people trying to host malware or other dodgy content, but Hetzner did actually delete my account out of the blue without warning.
Apparently, from what I've been able to tell, an automated payment failed. They sent a single email which I missed. That was the only communication about the missed payment I got.
I got an email a few weeks after this saying "Your details have been changed". Well that's weird I thought, I haven't changed anything.
So I try to log in, only to be told "Your account has been terminated as a result of you changing your details".
First of all, I didn't change anything; second of all, a single missed payment followed by an immediate account nuke, along with all the servers and data, has to be the most ridiculous and unprofessional act I've seen from this type of company.
gjosifov@reddit
then write a blog post
It has happened to other people too, just not with Hetzner; it was with Google Cloud.
They made enough noise on the internet for Google to notice. For some of them Google fixed the problem, and the others
switched to a different cloud provider.
ReallySuperName@reddit
What good is that going to do now? The servers are gone. For every popular post to /r/programming and Hacker News about the latest tech company fuck up, there's probably ten more that get zero attention.
jezek_2@reddit
So you didn't have any backups? I think that's the bigger problem here.
Note that your backups have to be at a different location and not managed by the same company you host with, otherwise it's not a backup, just a convenience (e.g. a faster restore).
FortuneIIIPick@reddit
I had over a dozen domain names with Google Domains. My bank sent an SMS to check that the annual bill for all the domains which came due was OK for them to process. I was busy working and didn't notice the SMS until later in the day. The bank denied the transaction.
Google's billing system refused to use my card marking it as bad. I had to use my wife's card to pay the bill to keep my domains.
If Google can't do any better with basic billing for cloud customers, it should be understandable when smaller companies have issues.
hedgehogsinus@reddit (OP)
I'm sorry to hear that, that really sucks.
DGolubets@reddit
One of the startups I worked at went full circle on this. They were using AWS when I joined, then they decided to cut costs and moved to Hetzner, then they got fed up with the problems and moved back to AWS..
At my current place we use DigitalOcean and we are quite happy with it. It's cheaper than AWS but much easier than managing your own infra.
jezek_2@reddit
The answer to this is obvious: start with, and stick to, an architecture tailored for running on dedicated servers and/or VPSes. That way the costs are the lowest possible, both the total cost (by not changing architectures) and the running/maintenance costs. You're still free to use containers or virtualization to make things easier.
Never use clouds; they lure you in with fancy features, but the goal is to lock you in and extract as much money as they can from you. They provide interesting options and features you can combine, and then silently get you on the massively overpriced bandwidth and the huge unexpected invoices from misconfiguration and spikes.
Meanwhile their promises break anyway (whole DCs unavailable because of their misconfiguration, lost data, less-than-stellar availability, etc.). It's just someone else's computer after all.
seanamos-1@reddit
For most people there are huge savings opportunities if you can release resources when they aren't in use, utilize spot capacity, and transition to arm/Graviton. That can get you 60%+ savings on compute right there, without any sort of savings plan commitment.
Now if you need all that capacity provisioned 24/7 and it's not tolerant of interruption, moving away from big cloud is probably the right move.
The one thing there is little room to cost-optimize on is the NAT gateways. They are just overpriced for what they are.
As you mentioned in your post, it's also not a 1-to-1 comparison. The big clouds make it extremely easy to build out highly resilient applications that can survive DC (AZ) outages, so easy that one takes it for granted. When you start trying to achieve this in smaller clouds/your own DC, it's a much more complicated ordeal. DCs have outages, sometimes multiple outages in a year. That's something that needs to be weighed in this decision as well.
Now I don't know the specifics of your workload, but I estimate I could run it on AWS at roughly $250/month with bursts to 40+ vCPUs as needed, with HA. That's more expensive than Hetzner obviously, but again, it's not a 1-to-1 comparison; there is additional value in that $250 that is easy to overlook.
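To make the spot + Graviton point concrete, here's a rough Terraform sketch; the AMI ID, names and instance size are placeholders, not a recommendation for your workload:

resource "aws_launch_template" "batch_worker" {
  name_prefix   = "batch-worker-"           # hypothetical name
  image_id      = "ami-0123456789abcdef0"   # placeholder arm64 (Graviton) AMI
  instance_type = "c7g.xlarge"              # Graviton instance class

  # Request Spot capacity: interruptible, but much cheaper than on-demand.
  instance_market_options {
    market_type = "spot"
  }
}

Pair that with an autoscaling group that releases capacity when it isn't needed and you get most of the savings described above, without touching the architecture.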
jezek_2@reddit
And then you'll get hit with the insanely overpriced bandwidth costs. This killed every idea I've got when trying to utilize cloud offerings.
10113r114m4@reddit
I mean Hetzner is a very light cloud, in that you need to write a lot of services to support what AWS can do. It just depends on what you need
andynzor@reddit
You better have dedicated Kube ops folks running a really resilient HA cluster, because in my experience Hetzner has constant internal network outages.
jonnyman9@reddit
Exactly this. Can’t wait for a post in a few years moving to managing servers on prem and/or back to AWS to solve their constant outages.
FortuneIIIPick@reddit
People ran on premises and in hosting centers fine, before the cloud era. It's not that difficult.
jonnyman9@reddit
Yep
Win_is_my_name@reddit
Also has very poor support
Gendalph@reddit
Hetzner actually has pretty decent support for what it needs to do: swap hardware and plug in iKVMs. Everything else should be done in-house.
andynzor@reddit
Depends? I don't know what kind of issues you've run into, but in various run-of-the-mill matters it has worked well.
zauddelig@reddit
Never had an outage on Hetzner
Gendalph@reddit
as a long-time Hetzner user: old and established regions (e.g. Falkenstein) rarely have issues. I had 1 or 2 outages in the last 5 years.
as someone who works with AWS on the daily: AWS has been pretty stable lately, but we had maybe half a dozen outages in the last 5 years.
bwainfweeze@reddit
I’ve never had an outage on AWS either but that doesn’t mean Virginia isn’t a hot mess.
LiftingRecipient420@reddit
With how much they saved moving to hetzner, they could hire multiple kubernetes experts to manage their infrastructure and still be saving money.
bwrca@reddit
Better throw some nodes in aws to be super extra resilient.
Slggyqo@reddit
In fact, why don’t we just put everything on AWS?
8 years and a couple million dollars later we’re right back where we started.
BmpBlast@reddit
It's the circle of life.
spicypixel@reddit
> We saved money by swapping to a cheaper less capable provider and engineered around some of the missing components ourselves.
Legit.
minameitsi2@reddit
How is it less capable?
spicypixel@reddit
Lack of managed services, lack of enterprise support for workloads, lack of dashboards and billing structures that let finance rebill components to teams, etc.
The blog even says they had to run their own Postgres control plane on bare metal, for one.
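For context, "running your own Postgres control plane" on Kubernetes usually means running an operator. The blog doesn't say which one they picked, but with something like CloudNativePG the core of it is a manifest along these lines (names and sizes made up):

apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: app-db              # hypothetical cluster name
spec:
  instances: 3              # one primary plus two streaming replicas, with automated failover
  storage:
    size: 50Gi              # backed by local disks or provider volumes

The operator handles failover and replica provisioning, but you still own upgrades, backup verification and the operator itself, which is exactly the gap being pointed at here.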
freecodeio@reddit
note taken, AWS is profiting off of laziness
yourfriendlyreminder@reddit
Honestly yeah. The same way your barber profits from your laziness. That's just how services work lol.
freecodeio@reddit
if cutting my own hair was as easy as installing postgres, I would cut my own hair, what a stupid comparison
slvrsmth@reddit
If running a postgres database were as easy as installing postgres, I would run my own postgres database.
Availability, monitoring, backups, upgrades. None of that stuff is easy. All of it is critical.
Your servers can crash and burn, it's not that much of a big deal. Worst case scenario, spin up entirely new servers / kubernetes / other managed docker, push or even build new images, point DNS over to the new thing, back in business. Apologies all around for the downtime, a post-mortem blog post, but life goes on.
Now if something happens to your data, it's entirely different. Lose just a couple of minutes or even seconds of data, and suddenly your system is not in sync with the rest of the world. Bills were sent to partners that are not registered in the system. Payments for services were made, but access to said services was not granted. A single, small hiccup means long days of reconstructing data, and months of wondering if something is still missing. At best. Because plenty of businesses have gone poof because data was lost.
I will run my own caches, sure. I will run read-only analytics replicas. I will run toy project databases. But I will not run primary data sources (DB, S3, message queues, ...) for paying clients by myself. I value my sleep entirely too much.
FortuneIIIPick@reddit
> Availability, monitoring, backups, upgrades. None of that stuff is easy. All of it is critical.
I do it and don't find it difficult at all, I run it in Docker with a compose script. And I'm only a mere software developer and can do it.
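A minimal sketch of that kind of compose setup (image tag, paths and the secret file are placeholders; off-site backups still have to live somewhere else):

services:
  db:
    image: postgres:16                        # pin whatever version you actually run
    restart: unless-stopped
    environment:
      POSTGRES_PASSWORD_FILE: /run/secrets/db_password
    volumes:
      - ./pgdata:/var/lib/postgresql/data     # data persisted on the host
    secrets:
      - db_password
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 10s
      retries: 5
secrets:
  db_password:
    file: ./db_password.txt                   # keep this file out of git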
Plank_With_A_Nail_In@reddit
These guys are just running a hobby business, the whole company is just these two guys lol.
freecodeio@reddit
Hetzner has all of the above, just fyi.
FortuneIIIPick@reddit
I do both, quit going to a barber in 2008, I'd estimate I've saved well over $1500.00.
NotUniqueOrSpecial@reddit
My Dad's been doing it for 30 years. So much savings!
thy_bucket_for_thee@reddit
There are bowls and scissors bro, that's pretty easy. Have at it.
Proper-Ape@reddit
I'd suggest getting an electric hair cutter. Decent-ish results if money saving is your thing, or you have male-pattern baldness.
mirrax@reddit
"Do it Lady." - Chit
Sufficient-Diver-327@reddit
Correctly running postgres long-term with business critical needs is not as trivial as running the postgresql docker container with default settings.
yourfriendlyreminder@reddit
So you're incapable of understanding how service economies work, got it.
freecodeio@reddit
What?
Chii@reddit
nothing wrong with profiting off other people's laziness.
spicypixel@reddit
Sometimes it's cost and time efficient to outsource parts of your stack to someone else - else we'd all be running our own clouds.
Supadoplex@reddit
So, the real question is, how many engineering hours did they spend on the missing components, how much are they spending on their maintenance, and how long will it take until the savings pay for the work.
hedgehogsinus@reddit (OP)
That's a good question and one we ourselves grappled with. Admittedly, it took longer than we initially hoped, but so far we have spent 150 hours in total on the migration and maintenance (since June 2025). We had reached a point where we would have had to scale and increase our costs significantly; however, due to the opaqueness of certain pricing it's quite hard to compare. We now pay significantly less for significantly more compute.
Besides pricing, we also "scratched an itch": it was a project we wanted to do partly out of curiosity, but also to feel more free from "cloud feudalism". While Hetzner is also a cloud, with our set-up it would now be significantly easier to move to an alternative cheap provider. We had been running Kubernetes on AWS since before there were managed offerings (at that time with Kops on EC2 instances), and with Talos Linux and the various operators it is now significantly easier than in those days. But, obviously, mileage may vary, both in terms of appetite to undertake such work and the need for it.
ofcistilloveyou@reddit
So you spent 150 manhours on the migration - that's a pretty lowball estimate to be honest.
If migrating your whole cloud infrastructure took only 150 manhours, you should get into the business.
That's 150 x $60 hourly rate for a mid-tier cloud engineer. You spent $9k to save $400 a month. So it's an investment for 2 years at current rates? Not that $400-$500/monthly is much in hosting anyway for any decent SaaS.
But now you're responsible for the uptime. Something goes down at 3am Christmas morning? New Year's Eve? You're at your wedding? Grandma died? Oncall!
FortuneIIIPick@reddit
> But now you're responsible for the uptime. Something goes down at 3am Christmas morning? New Year's Eve? You're at your wedding? Grandma died? Oncall!
They were always responsible, if AWS had down time, the OP's staff still had to start working to mitigate for their customers and deal with a nebulous AWS to find out when their hardware would start working again.
Proper-Ape@reddit
>That's 150 x $60 hourly rate for a mid-tier cloud engineer
If they made this happen in 150h they're pretty good at what they do and probably don't work for $60 hourly.
Plank_With_A_Nail_In@reddit
Or maybe it wasn't actually very hard to do.
maus80@reddit
Well.. that's one way of looking at it. I would say that the company is now operating 76% cheaper with 3 times more room for growth (an estimated 92% reduction in cost). This lower OpEx will win the company its next investment round, as it looks much more profitable at scale. The startup running on AWS will not exist next year (failed).
AuroraFireflash@reddit
That's about half of the truly burdened cost, and possibly as low as 1/3 of the real cost to the business.
grauenwolf@reddit
How's that any different from a cloud project? AWS doesn't know the details of my software.
hedgehogsinus@reddit (OP)
I think that's a pretty good monetary calculation, assuming your cloud costs don't grow and that there is an immediate project to be billable on instead. However, our cloud costs were growing and we had some downtime. But you are right, the payoff is probably not immediate, and part of the motivation was personal (we just wanted to do it) and political (we made the decision at the height of the tariff wars).
We were always responsible for uptime. You will have downtime with managed services and are ultimately responsible for it. Take AWS EKS as an example: the last time I worked with it, you still had to do your own upgrades (in windows defined by AWS), and they take no responsibility for the workloads run on their service. With ECS and Fargate you are responsible for less, but you will still need to react when things go wrong. We may live to regret our decision, and if our maintenance burden grows significantly, we can resurrect our CloudFormation templates and redeploy to AWS. Will post here if that happens!
CrossFloss@reddit
Better than: you're still responsible according to your customers, but can't do anything except wait for Amazon to fix their issues.
Otis_Inf@reddit
No offense, but $500/month for a cloud bill is peanuts for a company. I truly wonder why such a low cost still motivated you to invest all that time/energy to move (and take on the risk of a cloud provider that might not let you meet your promises to your users).
Chii@reddit
To give some perspective, a mid-sized SaaS provider with a yearly revenue of approx. $300-$400 million has an AWS bill of about $1M-$5M per month.
Shogobg@reddit
Fixed it for ya
mr_birkenblatt@reddit
FTFY
grauenwolf@reddit
Countless companies with small teams had no problem maintaining enterprise-grade infrastructure before cloud computing was invented.
The advertising is designed to make you think that it's impossible to do it on your own, but really a couple of system admins is often all you need.
Plank_With_A_Nail_In@reddit
We have dedicated people just sorting out AWS stuff for us, we still ended up with a couple of system admins on AWS anyway.
mr_birkenblatt@reddit
Soooo.... That is exactly what I said. There is no free lunch. The money you "save" you have to pay elsewhere now.
And if you don't hire someone to manage the infrastructure, now the existing team has to take time away from building features to focus on infrastructure
grauenwolf@reddit
What do you mean "now"? Do you imagine that cloud computers configure themselves?
When I look at the typical cloud project at my current company, each one has a dedicated cloud engineer or two. Sure, they call it "DevOps", but they aren't writing application code.
I don't have enough data points to say that cloud needs more support roles than on-premise, but it's sure looking that way.
pooerh@reddit
Yes, because maintaining AWS infra with all the... everything! is just so easy, compared to other providers. Whoever has not gone through the "FUCK IT, imma just give this IAM role everything to fix this issue I spent 16h trying to do proper permissions" phase, I envy you.
hedgehogsinus@reddit (OP)
We don't currently have to do any more maintenance than before, but time will tell I guess...
mr_birkenblatt@reddit
Well you just spent a bunch of time doing infrastructure work to do the migration...
SourcerorSoupreme@reddit
In OP's defense, building and maintaining are two different things
mr_birkenblatt@reddit
I'm sure it works out for OP but they framed it as if it is this golden loophole they just discovered. Everything comes with tradeoffs but they presented it as flat cost reduction. It's like saying: we saved 50% of costs by firing half the workforce. Sure, you are saving money but you are also losing what their service provided
spaceneenja@reddit
No clue why you’re being downvoted for saying they spent their time migrating infrastructure instead of shipping features. That’s pretty straightforward.
bwainfweeze@reddit
My coworkers agreed to manage our own Memcached for half the cost of AWS’s managed instances. Saved us a bunch of money but I was also glad not to be the bus number on dealing with the ritual sacrifices needed to power cycle all our caches without taking down prod in the process.
The worst thing is the client we used supported client-side consistent hashing and we didn’t use it. So we had 8 different caches on 6 beefy boys and played a bit of Tower of Hanoi to restart.
Darth_Ender_Ro@reddit
Found the AWS account manager
andynzor@reddit
Also shaved a few nines off the SLA uptime?
CircumspectCapybara@reddit
Hetzner has no SLOs on any SLI, much less a formal SLA.
KontoOficjalneMR@reddit
No it does not. Why are you lying on something that is so easy to check?
CircumspectCapybara@reddit
Assuming you're not a bot, I assume you're new to programming / software development, because any engineer who's been paying attention for the past decade knows that all the major cloud providers' object store products offer a standard 11 nines of durability or greater. That's been the industry standard for a decade, and it's been the benchmark to beat.
Nevertheless, since these things are new to you and you don't know them off the top of your head, go ahead and read https://aws.amazon.com/s3/storage-classes:
KontoOficjalneMR@reddit
Read this you condescending jerk:
https://cloud.google.com/storage/docs/storage-classes#classes
Sure it's designed to have durability of 11 nines, but actual SLA for availability is ... 4.
AWS SLA for availability: 99.9% (three nines)
From: https://aws.amazon.com/s3/storage-classes/
Mister Dunning-Kruger.
CircumspectCapybara@reddit
My brother in Christ, do you know the difference between availability and durability?
Read the OC you commented on, which you yourself quoted:
Durability. Do you know the difference between data durability and service availability?
Durability is the probability of the data being lost due to the disk failing or the data being corrupted and it not being recoverable, because there is no redundant copy from which to heal.
Availability is the % of the time the service responds with an OK response when you query it.
Those are two different SLOs.
You would fail the systems design portion of every interview.
KontoOficjalneMR@reddit
I see you are about five mentally, so I'll try big letters. DO THEY OFFER ANY SLA ON DURABILITY?
If not then it's just a marketing speak.
CircumspectCapybara@reddit
Did I say SLA? Read the OC. Read your own quote.
I said they have an SLO of 11 nines of durability. An SLO is not an SLA, it's a target or objective (that's what the O in SLO stands for); it's aspirational. An SLA is a legally binding contract.
Friend, you really are out of your depth. You don't know what you don't know, and therefore can't appreciate just how incorrect you are, because you're not even aware of the concepts or domains you're ignorant of but speaking so confidently about; i.e., you're suffering from Dunning-Kruger.
I don't work at AWS, but I do work at Google, and I can tell you at least for GCP, those numbers aren't just made up. They're based on hard numbers and calculations, both statistical models based on known failure rate of physical storage media, the failure rate of individual files on filesystems to the point where parity checks can't recover it, taken together with redundancy figures (if you store 3 copies of a file on 3 different disks in 3 different data centers that are spread out geographically, all three would have to fail simultaneously for the file to be lost), the maths of integrity and error-checking and error-correcting algorithms, etc.
But it's not just statistical models. I won't speak for GCP, but AWS in 2021 claimed in a blogpost that S3 at that point hosted "hundreds of trillions of objects." When you host that many objects, you don't need a statistical model anymore, because you're at a scale where you can actually verify or falsify an 11 nine durability claim: at 100T objects, 11 nines of durability annually means in a year you can lose no more than 1K objects. So you can track internally how many objects are irrecoverably lost, and actually verify if you're meeting the SLO or not.
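Spelling out that arithmetic with the figures above:

$$10^{14} \text{ objects} \times (1 - 0.99999999999) = 10^{14} \times 10^{-11} = 10^{3} = 1000 \text{ objects per year}$$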
If you really want to know some of the nitty gritty and the engineering details of how GCP does it, you can read up on https://cloud.google.com/blog/products/storage-data-transfer/understanding-cloud-storage-11-9s-durability-target
As for marketing speak, well, the data speaks for itself. AWS and GCP and Azure have never had a publicly documented incident where they've lost even a single file in S3 or equivalent. The fact that they're willing to market themselves as having 11 nines of durability means they have confidence to give customers the impression and the expectation going forward that they pretty much never lose customer data in a hundred years. Vs a provider like Hetzner which wouldn't even venture to claim that because they don't have any basis for or confidence in making such claims.
Yeah you really don't know what you're talking about. Your Dunning Kruger is really showing in that you don't even begin to understand how durability figures are computed. You can't get 100% durability, because that would not only require drives that never ever fail (a physical impossibility), but that you're housing them in a building that never catches fire or floods in a way that destroys the drives, and that they are never struck by cosmic rays that permanently flip bits, and that your SWEs never accidentally push bad code that overwrites the data by accident.
That's what you don't understand about durability figures. It's way more than just "we back it up off-site." You need to know the failure rate of the drives, how often there can be partial corruption due to physical degradation of the storage media or due to random cosmic rays, how often those are recoverable via error-correcting codes, how good your integrity checks and self-healing algorithms are, etc.
KontoOficjalneMR@reddit
... sooo... marketing speak. Got it.
FortuneIIIPick@reddit
> My brother in Christ
Disagreeing on cloud SLO's is one thing. Blaspheming is a whole other thing.
sionescu@reddit
You can. People have done that for decades.
CircumspectCapybara@reddit
And then a couple decades ago we invented the discipline of SRE and we got scientific and objective about it. And we're no longer in the dark ages where you're making stuff up based on hopes and vibes.
Plank_With_A_Nail_In@reddit
Lol you really believe all of that....lol.
FortuneIIIPick@reddit
How do you think the Internet worked (very well I might add) before the cloud got into motion. I helped engineer the software part of an HA objective for a very large Fortune 50 in the 1990's with geographic redundancy.
sionescu@reddit
You can start with reasonable assumptions, make observations, and adjust in due course. Explicit SLOs are more for placing blame in large organizations or when signing big enterprise contracts, not for engineering.
CircumspectCapybara@reddit
I'm a staff SWE (not an SRE, mind you) at Google, and have also worked at many other large F500 companies, and I can tell you you are 100% wrong about this.
Engineers care about SLOs because it gives you a basis for claiming any kind of performance promise, whether that's uptime, availability, latency, durability, consistency / data freshness, etc.
"High availability" is a claim that needs to be backed up by numbers. You can't promise to your customers (who can be external companies, end-users, or other internal teams) you'll be highly available up to a specific figure for a specific SLI if you have no basis for your dependencies behaving to a certain level of performance.
As an example, say you're a big financial institution, and you want to promise to never lose customer data. How can you reasonably do that if you store customer data in a managed object store whose durability is unknown? "Assumptions" are not gonna cut it in a regulated industry and if your reputation and hundreds of billions of dollars or revenue are on the line.
sionescu@reddit
And I was an SRE at Google for many years. The first versions of Colossus and BigTable were built without much in the way of explicit SLOs, yet they were observed to be HA. My claim is epistemological, not legal: all you can say is that, based on past behavior and first-principles analysis of the algorithms involved, you feel most confident categorizing a system as "N nines" for some N in [1..12]. And you don't need an explicit SLO in order to make such a judgment.
Plank_With_A_Nail_In@reddit
Reddit this is two bots arguing over nonsense. I assume its some weird AWS sales thing.
FortuneIIIPick@reddit
> Amazon S3 has an SLO of 11 nines
Oh SLO I saw the nines and was thinking $$$$$$$$$.
lelanthran@reddit
How many AWS clients are building an HA product?
For reference, even a company being run on Jira can handle small Jira downtimes daily, so when you need HA, it has to mean really high availability (i.e. hundreds of transactions get dropped for each second/minute of downtime).
Mostly when I see people locked into AWS services "for HA", they're selling something that is resilient to significant downtime numbers.
A TODO app, or task tracker, etc doesn't have more value by being HA.
bwainfweeze@reddit
Most companies have failed to meet their SL*s. They've all paid the penalties, but what you pay vendors is always a small percentage of what you charge customers, so getting $100 back on $10,000 in lost sales is kinda bullshit.
5 nines would require they’ve lost basically nobody’s data and they lost a cluster of drives a few years ago that took out a couple percent of the people in that DC. So maybe they’ve managed 99.95, but 99.999 is marketing.
CircumspectCapybara@reddit
At least when they're in breach of their SLOs, they're held accountable and you have a means of recourse that also functions as an incentive for the service provider to try to get as close to their SLO as possible.
As opposed to a service provider who offers no guarantees and doesn't even bother publicizing what they aim for.
You're supposed to engineer your systems so that an Amazon region going down due to a hurricane doesn't cause you to lose sales, by being highly available across multiple regions. So if
You realize that an S3 object isn't stored on one drive or even one cluster of drives in a single DC, right? There's no way they could offer 11 nines of object-level (scoped to a given bucket in a given region) durability if that were the case. It's replicated across multiple DCs in an AZ, and multiple AZs in a region. Drives fail all the time. When you've got somewhere on the order of millions to tens of millions of disk drives, at any given moment in time, one of them will fail irrecoverably.
bwainfweeze@reddit
It’s a good thing you’re not being condescending.
What S3 is designed to survive and what it has actually survived with people involved is two different things. And everyone in the space is pretty much lying because their past failures would require them to have no additional failures in the next two or three years in order to get back what they claimed they would do.
The only penalty for this is getting talked down to by people on the internet. Where do I sign up? Oh wait, looks like I already did.
CircumspectCapybara@reddit
I wasn't being condescending. Apologies if you read it that way, but that was by no means my intention.
I was genuinely curious if you knew the crucial feature of distributed object stores, because you seemed to imply by your previous comment that you thought a percentage of drives in a DC failing necessarily leads to data loss and missing your durability SLO. But as I explained, that's not how it works with distributed and replicated storage systems that are designed to be redundant and to continually self-heal whenever a drive fails.
What past failures? AWS (or GCP or Azure) have never ever in their history had a single publicized incident where they permanently lost customer data in S3 or the equivalent for the other cloud providers. They probably have tens to hundreds of physical disks fail every day, but no customer objects stored in their object stores have ever been documented as lost, because the durability model is designed to withstand even the loss of an entire DC, or even an entire AZ's worth of DCs.
bwainfweeze@reddit
They had a region lose 1.3% of data a few years ago. They will never get back to 11 nines before the heat death of the universe, if you measure it globally instead of calling a do-over after every incident. This time we mean it.
When a manufacturer offers a life time warranty on a product that turns out to be a lemon, they lose tons of money or go bankrupt. SaaS people have found some way to evince this vibe without ever having to pay the consequences for being wrong.
I worked at a place that had an SLA of ten minutes per incident and I forget how many incidents a year. When I started on the platform team we couldn't even diagnose a problem in ten minutes, and if you couldn't fix it with a feature toggle or a straight rollback (because other customers were stupidly being promised features the day they were released) then it took 30 minutes to deploy, after you figured out what the problem was. I worked my butt off to get deployment to ten minutes and hot fixes to a couple more, and to improve our telemetry substantially. Mostly I got thanked for this by people who quit or got laid off. They are now owned by a competitor, so for once they got what they deserved.
Yes, this place was more broken than most, no question, but I’m saying everyone does it, to one degree or another. Usually lesser, but never none. Including AWS. Everything is made up, and the points don’t matter.
inferno1234@reddit
Can you refer to the incident? I can't find anything on it
bwainfweeze@reddit
I really wish I could. I recall reading the headline, it would have been a couple years ago, but every search I try now just gives me strategy guides on making sure you don’t lose data.
Pretty sure it had nothing to do with that Australian data loss. But that’s also another “cannot warranty that failure” example.
CircumspectCapybara@reddit
There is none. People are making things up. It's the age of LLM hallucinations, so go figure.
CircumspectCapybara@reddit
And your source for this is what? Are you sure you're not conflating availability / uptime with durability? Even so, what incident are you referring to?
That's simply not how it works. 11 nines of annual durability means in a year, out of 100B objects, only 1 is lost. Because durability is typically defined annually as the probability of losing an object in a given year, if you were hypothetically to lose 3 objects out of 100B in a year, it doesn't mean the next two years you can't lose any in order to make up for this year's failure. It just means this year you failed (and with that failure come contractual consequences). Next year is next year.
However, this is all hypothetical counterfactuals. Neither AWS nor GCP have ever been documented to have lost even a single out of a 100B objects in any given year.
TMITectonic@reddit
Didn't they (AWS) lose 4 files way back in (Dec) 2012? Also, didn't GCP completely wipe out an Australian account last year, with no recovery possible? Not quite the same as data loss due to failure, but definitely a terrifying scenario for those who don't have local/off-site backups.
arcimbo1do@reddit
True (maybe, I don't have data but it's credible), but the big difference is that when Amazon/Google/Microsoft publish an SLA it means that 1) they evaluated their internal SLOs, found they are better than the published SLA, and decided they are confident they can defend the SLA, and 2) they are putting resources (human and hardware) into defending that SLA, and every extra 9 costs roughly 10x the resources. Moreover, even if they don't meet their SLAs it's very likely that they got very close.
However, if you don't publish any SLA (not even a ridiculous one), it means you have no clue how reliable your system is, and that's just scary.
crackanape@reddit
You absolutely can. You need a little diversity (2+ vendors, 2+ data centres per vendor) and a couple levels of good failover (e.g. DNS, haproxy) and you can provide 100% uptime in anything short of a nuclear event.
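The haproxy half of that is only a handful of lines; a minimal sketch (addresses and the health-check path are made up, and the DNS-level failover sits in front of this):

frontend www
    bind *:80
    default_backend app

backend app
    option httpchk GET /healthz
    server dc1 203.0.113.10:8080 check           # primary data centre
    server dc2 198.51.100.20:8080 check backup   # second vendor/DC, only used if dc1 is down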
vini_2003@reddit
From personal experience I'd wager Hetzner is mostly useful for disposable infrastructure. Eg. game servers, where going down doesn't matter.
Gendalph@reddit
Cool, your residential connection doesn't matter. If you're out of service for a month? Tough luck!
On a more serious note: Hetzner is great on a budget. If you have a tight budget and someone who can manage the infra - it's a good place to start. It also has been pretty stable, in my experience. However, you must roll your own orchestration and build solutions on top of very barebones services, which is labor-intensive. It's not quite the same as putting your own racks in a DC, but Hetzner made bare metal extremely accessible.
If you don't want to do all of that - AWS, GCP and Azure offer solutions. At a price.
valarauca14@reddit
I'm pretty sure your paying customers don't consider the infrastructure they pay to access "disposable", even if they are "gamers" (sorry for using a slur).
PreciselyWrong@reddit
If a game server goes down, at worst a group of players are disconnected before a match ends and will have to start a new game. If your primary db replica goes down, it's a bit more noticeable
valarauca14@reddit
Given you can solve this directly in your DB with
synchronous_commit = [on|remote_apply]
to handle events where your DB (or VM) dies. That is assuming you're willing to do a
Writer <-> Writer (secondary) -> Laggy Reader(s)
kind of architecture, instead of the normal
Writer -> Laggy Reader(s)
architecture that causes numerous problems. The failure case you outline shouldn't be visible to your customers outside of a whole region/availability-zone going offline (depending on your preferred latency tolerance).
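On the primary that's roughly the following postgresql.conf (the standby name is a placeholder):

synchronous_commit = remote_apply                   # commit returns only once the standby has applied the WAL
synchronous_standby_names = 'FIRST 1 (standby1)'    # require at least one named synchronous standby

Note this only covers not losing acknowledged writes; automatically promoting the standby when the primary dies is a separate problem (Patroni and similar tooling), and that's the part most people actually find hard.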
Proper-Ape@reddit
I mean it does matter if people can't play your game, but it's not the end of the world in terms of mattering.
vini_2003@reddit
Oh, for sure. It just doesn't matter nearly as much as a payment processor going down, for instance haha
hedgehogsinus@reddit (OP)
That's a fair concern but, having worked on large multi-cloud projects, we've had outages and little accountability from cloud providers even with the massive costs paid. We will see if it will be worse with Hetzner, can always resurrect our CloudFormation templates if it is.
It also doesn't have to be all in on a single provider. We found most of our costs came from compute, so we prioritised migrating that. We are within the free tiers for SES and S3, so still use it and have buckets within AWS. Furthermore, we also found Route53 cheap and reliable, so haven't migrated all our DNS management over.
Status-Importance-54@reddit
Yes, we are using Azure for some serverless functions, where the architecture is replicated across 12 countries. There isn't a month without a small outage affecting some country. Usually waiting and maybe restarting the functions is enough, but it's always time lost to us for investigation. The dashboard is always green, though.
frankster@reddit
Does Amazon habitually achieve that SLO?
krum@reddit
Surely it beats running a box on my home fiber connection.
gjosifov@reddit
If you are worried about SLA uptime nines,
then it is better to go with an IBM mainframe.
The current generation of IBM mainframe offers 7 nines, and it can run OpenShift and Kubernetes.
No cloud can match that, plus nobody ever got fired for buying IBM.
CircumspectCapybara@reddit
You don't get nines from the most reliable hardware—you can have the most reliable hardware in the world but a flood or tornado or water leak or data center fire or bad waved / phased software rollout that's currently targeting your super-reliable DC takes it all out and in one day eats up all your error budget and more for the entire year and more.
You get nines by properly defining your availability and SLO model around regional and global SLOs.
gjosifov@reddit
I tried Redshift around 2018-2019.
Instead of a one-button restore and a good UI that is easy to follow,
I had to google it, and one of the most recommended results was DBeaver plus a manual import/restore.
By contrast, I once had to restore a SQL Server backup on a Microsoft VM in Microsoft's database studio:
just 3 clicks and I was done.
If the cloud can't make restoring a DB backup easy to use, I don't believe they can make availability easy to use either; you have to do it yourself, and that isn't easy.
And in that case there is only one question: if the cloud is do-it-yourself anyway, why don't we just use on-premise?
The only value proposition of the cloud is a better experience for your users, because you can scale as many machines as you need, closer to your customers.
But with Docker and k8s that is easy, unless you don't understand how the hardware works.
k8s automates the boring system administration tasks, and system administrator is a job.
CircumspectCapybara@reddit
You're conflating two things here, devx with reliability. My comment was initially addressing how reliability comes from distributed systems (which the cloud excels at for relatively affordable prices) and not from beefier, more expensive hardware like IBM mainframes which no one really uses any more except for highly specialized, niche applications.
Now you're asking about devx and ease of use. If you asked a thousand senior and staff engineers, they will all tell you the cloud is way easier to work with than DIY, roll it yourself.
EKS is a one-click (or in more mature companies, some lines of Terraform or CloudFormation code) solution to a fully managed, highly available K8s control plane. GKE is even easier, as it manages the entire cluster for you, including the workers. Standing up a new cluster is a breeze. Upgrades and maintenance are a breeze. It's a billion times easier than Kops or whatever.
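For reference, the "some lines of Terraform" really is on this order; a sketch with made-up names and subnet IDs (IAM roles, node groups and add-ons omitted):

resource "aws_eks_cluster" "main" {
  name     = "prod"                        # hypothetical cluster name
  role_arn = aws_iam_role.eks.arn          # cluster IAM role, defined elsewhere
  vpc_config {
    subnet_ids = ["subnet-aaa111", "subnet-bbb222"]
  }
}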
Same with foundational technology like S3. Do you really want to get into the business of rolling your own distributed object store with 11 nines of durability? Can you do that?
Because most software companies aren't in the business of running a DC. That's a massive operation. You need a gigantic, purpose-built building, you need to lay power and fiber optic cables, manage all the racks and switches, anticipate future capacity needs 1-2y in advance in order to place a bulk order with a supplier who will not give you as good rates as AWS gets, have staff with forklifts to install them, pay for physical security, fire suppression, HVAC, emergency power generators, and on and on it goes.
gjosifov@reddit
No, they aren't going to tell you that.
If the cloud were easier, there wouldn't be any wrappers with nice UI/UX on top of the cloud;
it would be only AWS, Azure, Oracle, etc.
No Vercel, no Rackspace, no VPS providers.
The market is telling the cloud providers that they are expensive and hard to use.
Nobody would pay for a VPS or Vercel if AWS were easy to use or cheap.
CircumspectCapybara@reddit
Are you a junior or stuck in a past decade?
Workloads are not getting deployed on the cloud via a "nice UI/UX." It's infrastructure-as-code as of a decade ago.
WarOnFlesh@reddit
Are you from the 1980s?
CircumspectCapybara@reddit
Fairly certain the cloud didn't exist in the 1980s, so answer your own question with that.
_hypnoCode@reddit
I know entire divisions of multiple companies that have been fired for choosing IBM.
I hate that fucking marketing slogan with a passion.
Dubsteprhino@reddit
That slogan hasn't been true in many decades
_hypnoCode@reddit
Yeah. I mean, what do you do with the people whose whole jobs are based in AS400, POWER, Mainframes, DB2, etc. when a company finally moves off antiquated hardware and software that only runs on that hardware.
Dubsteprhino@reddit
Generally age discrimination, fire em and hire them back as contractors
gjosifov@reddit
The marketing slogan is true: so many people working in IT don't understand IT, but they want to make a good and safe choice.
If people were honest instead of "fake it till you make it", we wouldn't have such marketing slogans.
pikzel@reddit
You inherit SLAs. Put the mainframe's 7 nines inside something, and you will need to ensure that something also has 7 nines.
gjosifov@reddit
An IBM mainframe isn't software, it is hardware.
What are you talking about? Put an IBM mainframe inside what?
loozerr@reddit
If you put them inside a shed whose roof only has three nines of uptime, it won't be seven nines.
gjosifov@reddit
Can big cloud at least pay better-educated people to spread FUD?
loozerr@reddit
I was making fun of the guy
goldman60@reddit
You also weren't wrong, gotta have 7 9s of uptime on your power and Internet or it doesn't matter how many 9s the actual mainframe has.
Sufficient-Diver-327@reddit
Oh no, poor defenseless IBM
gjosifov@reddit
Well, what did the cloud marketing people say in the 2010s?
Cloud is new and innovative and the mainframe is old.
To buy cheap Oracle licences, you have to contact companies specialized in optimizing Oracle licences for your workload.
Guess what: it is the same for the cloud today.
And let's not start on the worst UI/UX design since the invention of the PC.
Not everybody needs 7 nines and an IBM mainframe, but at least you have to be informed.
Making customer-friendly software is about how informed you are about the pros and cons of the components you are using.
loozerr@reddit
Companies juggling Oracle licenses, IBM mainframes and cloud providers do not aim to make customer friendly software, I am not sure what's the point you're trying to make.
gjosifov@reddit
Well, you can find companies like that and copy their software with better UI/UX.
Companies can't live forever.
pikzel@reddit
Yeah my bad, I misread, thought they were talking about virtual z/OS
dontquestionmyaction@reddit
And that can be just fine.
Not every business needs HA or high-nines uptime, this stuff costs money and has downsides too. The projects I see on their front page certainly don't seem to require them.
omgFWTbear@reddit
In German they added a lot of neins
Plank_With_A_Nail_In@reddit
Reddit, these guys are just running a hobby business. The whole company is just these two people, and they have a total of one product, which they call SaaS but which is just reselling a PostgreSQL database.
hedgehogsinus@reddit (OP)
I prefer the term "lifestyle business", where we choose projects we deem interesting and worthwhile, but it is very much our day job. The surplus from our project work funds activities we like doing, such as product development or seeing if it's viable to migrate to a cheaper but more bare-bones cloud like Hetzner.
That's actually helpful feedback to put more information about architecture. We are using Apache DataFusion, serving all data from block storage like S3, which is what allows complete tenant isolation and bringing your own storage while keeping our costs down (no managed databases to pay for) and still having great performance. We built this "service" in response to client needs and have found it really useful ourselves, but indeed are completely bootstrapped and now are looking for external users.
Just out of curiosity though, even if it was a service wrapper around PostgreSQL, which it isn't, wouldn't us running it for users classify it as a SaaS? Or what bar should it hit before we are allowed to call it a SaaS?
jimbojsb@reddit
Your spend level is definitely in sort of an uncanny valley for tier 1 cloud platforms. You’re not spending enough to really need VPCs and IAM and all the other trappings so the savings absolutely is a win for you. Keep growing and you’ll be moving back. Just the way of the world.
No_Bar1628@reddit
You mean Hetzner is faster than AWS or DigitalOcean? Is Hetzner a lightweight cloud system?
leros@reddit
Looks like you're saving about $400/mo. Do you find that's enough savings to justify the time to migrate and operate the new solution?
I ask because if it were me, I would say it's not enough savings to justify the effort.
api@reddit
Big cloud is insanely overpriced, especially bandwidth. Compared to bare metal providers like Hetzner, Datapacket, etc., the markup for bandwidth is like 1000X or more.
Hax0r778@reddit
There are some in-between options too. Oracle cloud charges significantly less for bandwidth and has some "big cloud" features/services. But definitely isn't one of the "Big 3" hyperscalers.
https://www.oracle.com/cloud/networking/virtual-cloud-network/pricing/
HappyAngrySquid@reddit
But then you’re involved with Oracle, though. I’d rather deal with a more reputable organization— North Korea, Stalin-era USSR, the Sicilian mob, etc.
Plank_With_A_Nail_In@reddit
It's all cloud lol, cloud just means someone else's computer.
bwainfweeze@reddit
The squeaky wheel aspect of AWS has always been pretty bad. And yet they somehow make the bill a surprise every month.
If you get an apartment with free utilities, you expect to be overcharged a bit for it. But then you don't expect to get a bill for the utilities.
If Amazon had continued their trend of keeping the price steady for new EC2 instances I’d be a little more philosophical but now that they’ve got everyone on board they don’t do that anymore. 7 series machines cost more and the new 8’s that are coming out are continuing their trend. There was a bunch of stuff at my last job that wasn’t cheaper to operate on 7 hardware and given they’re raising the prices again, I’m sure they won’t be upgrading those either.
I always figured the reason they kept the prices stable was that it's easier for them to maintain new hardware than old, so they want you to be on the treadmill to be able to decom the old stuff as it wears out. No idea what they are up to now.
rabbit-guilliman@reddit
8 is actually cheaper than 7. Found that out the hard way when our autoscaler picked 8 based on price when eks itself doesn't even support 8 yet.
LiftingRecipient420@reddit
Holy anti-hetzner/pro-aws bots Batman.
yourfriendlyreminder@reddit
"Everyone who is against my worldview is a bot."
murkaje@reddit
Quite puzzled myself. I've never had a requirement for HA, and most startup apps are fine with some outages. With the savings from a much lower cost I could hire at least one additional engineer to work solely on the infra.
Some domains have extremely tiny profit margins and high volume, and would operate at a loss if an expensive cloud provider like AWS were used, although in those cases it's good to have the expensive ones as a backup to fail over to during outages.
I have only been pleasantly surprised by Hetzner so far. Providing IPv4 at cost was interesting, and I quickly realized I have no need for it anyway; IPv6-only is quite viable, plus none of the internet-wide scanning bots find the box and spam it with /wordpress/admin.php requests or whatever.
gjosifov@reddit
Someone has to defend a hard-to-use and expensive product, because they are certified cloud engineers.
randompoaster97@reddit
Everyone works in HA these days. HA as in: our deployments require a careful power-off with 3 on-call engineers, done at 3 AM.
forsgren123@reddit
Moving from hyperscalers to smaller players, and from managed services to deploying everything on Kubernetes, is definitely a viable approach, but there are a couple of things to remember:
- The smaller VPS-focused hosting companies might be good for smaller businesses like the ones in the blog post, but are generally not seen as robust enough for larger companies. They also don't offer proper support or account teams, so it's more of a self-service experience.
- When running everything on Kubernetes instead of leveraging managed services, maintaining these services becomes your own responsibility. So you better have at minimum a 5 person 24/7 team of highly skilled DevOps engineers doing on-call. This team size ensures that people don't need to do on-call too often to avoid burn out and sacrificing personal life and can also accommodate for vacations
- Kubernetes and the surrounding ecosystem are generally seen as pretty complex and vast. One person could spend his/her entire time just keeping up with it. While I personally enjoy this line of work as a DevOps person, you'd better pay me a competitive 6-figure salary or I'll find something else.
hedgehogsinus@reddit (OP)
Thanks, these are good points. For reference, we are indeed a small company (2 people), but have worked in various scale organisations with Kubernetes before there were managed offerings (at that time with Kops on EC2 instances). We have spent a total of around 150 hours on the migration and maintenance so far since June.
Robustness is indeed something we are still slightly worried about, but so far (knock on wood) other than a short load balancer outage, we did not find it less reliable than other providers. We had a few damaging AWS and especially Azure outages at previous companies.
These are obviously personal anecdotes, but we have a pretty good work-life balance as a team of 2, and even previously we did not have massive teams looking after just Kubernetes. In other, larger organisations we worked in, we did have an on-call system, but we always managed to set up a system self-healing enough that I don't remember people's personal lives or vacations suffering compared to other set-ups.
I tend to agree about the complexity, but all the teams I worked in had the DevOps "you build it, you run it" mindset (even if obviously there were some guard rails or a defined environment that we'd deploy into). We both have long-term experience with Kubernetes, so it is what we are used to, and other setups might be a larger learning curve (for us!).
I guess it depends on your needs and appetite for this kind of work. We both enjoy some infrastructure work, but as a means to an end to build something. Our product needs a lot of compute, so in this sense it is core to our business to be able to run it cheaply. Hence, we made the investment, which was an enjoyable experiment, and we are now getting significantly more compute at a significantly lower price.
mr_birkenblatt@reddit
This reminds me of the story of a junior business man asking his boss.
J: "I just saw how much were spending on leasing our office building. We occupy the whole building, why don't we just buy the building? We would save so much money"
S: "We're not in the building management business. Let the experts focus on what they're best at and we focus on our business"
thy_bucket_for_thee@reddit
I used to work for a large public CRM that did this, then one lease cycle we had to move out of our HQ building because some pharma company wanted the entire building for lab space. That was fun times.
mr_birkenblatt@reddit
Sounds like you got outbid
thy_bucket_for_thee@reddit
I didn't get outanything. It wasn't my fiefdom, was only a serf in it.
Just hilarious how the billionaire owner had multiple attempts to own this very building for like 40 years but was perfectly fine to rent it then throw a massive tantrum when being forced to leave against their wishes.
New HQ location lost a lot of people in the attrition, myself included. I did enjoy the whiplash of being forced back to RTO to only go remote several months later. Definitely ensured I'd never work in an office again, which has been nice to experience.
Swoop8472@reddit
You still need that, even with AWS.
At work we have an entire team that keeps our AWS infra up and running, with on-call shifts, etc.
pxm7@reddit
The above comment makes some good points, but a lot of devs and managers focus too much on cloud as a saviour and ignore building capability in their teams.
For a small startup: use cloud and build your product. It’s a pretty easy sell.
For larger orgs (say a department store or a fast food chain, all the way up): it’s a toss-up. Cloud helps in many cases, but you’re also at risk of getting fleeced. You’ll also need tech staff anyway, and if you get “$cloud button pushers” that can come back to bite you.
In reality, in-house or 3rd party hosting vs cloud becomes a case-by-case decision based on value added. But good managers have to factor in risk from over-reliance on cloud vendors and, in larger orgs, risk from “our tech guys know nothing other than $cloud”.
SputnikCucumber@reddit
From what I have seen, the problem is that the major cloud vendors market their infrastructure services as "easy". So lots of companies will pay for cloud and skimp on tech staff and support, because if it's so "easy", why do I need all these support staff?
DaRadioman@reddit
I mean it is easy. Compared to doing it all yourself it is 100x easier than making a VM based alternative that you code all the services and reliability for.
Cloud makes that easy in trade for just paying for it. But easy is relative of course and still not no effort.
New_Enthusiasm9053@reddit
Your points are valid but keeping up with AWS products and their fees is also something you can spend an inordinate amount of time on. At least the k8s knowledge is transferable. You can run it on any platform.
rdt_dust@reddit
That’s a pretty impressive cost saving! I’ve been looking into alternatives to the big cloud providers myself because bills tend to balloon quickly when scaling up. Hetzner keeps popping up as a solid option for folks who need raw compute power without the fancy managed services, especially if you’re comfortable handling more of the setup yourself.
integrate_2xdx_10_13@reddit
A year or two back, I thought the same. Put in my details and debit card to get started, instantly banned. Odd. Maybe because it’s a debit card. So I make another account with my credit card, instantly banned again.
I’m not even making it up, as soon as the account would get created, I’d instantly get an email saying the account had been shut down. I tried to get in touch with support and supply a passport or something to prove I’m a real life person willing to hand over legitimate tender and didn’t hear a peep.
nishinoran@reddit
I'm interested in how you guys are handling secrets management, if your infra is git-based.
CulturMultur@reddit
The title should also have absolute numbers; the OP has very few services and a tiny bill. I wanted to use this as an example to my CTO to shave a few mil off our AWS bill, but it's not relevant, unfortunately.
Cheeze_It@reddit
Imagine how much MORE they could save by going on premise and not dealing with renting.
randompoaster97@reddit
I do something similar with an ad-hoc NixOS configuration. It's a single-node setup, but I can host many applications on it for a fraction of the cost. Nix's declarative config is key: it's a single source of truth, so once the project warrants a more enterprise architecture, you can simply migrate parts of it away.
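The single-node shape of that is roughly this (service names and the proxied port are hypothetical, not my actual config):

{ config, pkgs, ... }:
{
  # One declarative file describes the whole box; rebuilding or moving it
  # is re-running nixos-rebuild against the same configuration.
  services.postgresql.enable = true;
  services.nginx = {
    enable = true;
    virtualHosts."app.example.com".locations."/".proxyPass = "http://127.0.0.1:8080";
  };
  networking.firewall.allowedTCPPorts = [ 80 443 ];
}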
dudeman209@reddit
These posts drive me insane
old_man_snowflake@reddit
It’s the ad culture from YouTube coming to programming. Yay! /s
cheddar_triffle@reddit
What language are your applications written in? Can reduce server requirements substantially by using a better stack than something like node or python
hedgehogsinus@reddit (OP)
There are a few different services running on it, but the biggest one is in Rust, it just does a lot of computationally intensive operations.
cheddar_triffle@reddit
Impressive!
I've got a public API, written in Rust, on a low-end Hetzner VPS that handles over a million requests a day while barely using a few percent of the available resources.
CircumspectCapybara@reddit
Ah yes, Hetzner, the most trusted name in the industry when it comes to cloud services.
Yes, moving from an industry-standard hyperscaler to a mom-and-pop startup (/s, but they are a 500 employee shop) cloud provider and building your business on that to ostensibly save a buck is certainly one of the engineering decisions of all time, but for many teams it's a terrible engineering idea that will come to bite you.
Hetzner (ask how many engineers have even heard that name before) is not a mature platform (again, it's a 500 person shop, I wouldn't expect them to be), so it's risky to future devx and engprod and maintainability and scalability and security and reliability to build your whole business on them:
Remember next time you think about saving money by going to a DIY approach: headcount and SWE-hrs and SRE-hrs and productivity are very expensive. Devx and employee morale is intangible but can get expensive if all your talent constantly wants to leave because you have a mess of unmaintainable tech debt. You can get cash by taking on tech debt, but eventually the loan comes due, with interest.
This is the classic "buy vs build" problem. And for most businesses, it makes business (and engineering) sense to stick to your core competencies and your core product, to shipping features rather than operating your own datacenter (one end of the extreme) or rolling your own reimplementation of popular cloud services.
kokkomo@reddit
Good luck getting nickel and dimed for greater control over where you get nickel and dimed.
Pharisaeus@reddit
CircumspectCapybara@reddit
Ah yes, Hetzner, the most trusted name in the industry when it comes to cloud services.
Yes, moving from an industry-standard hyperscaler to a mom-and-pop startup (/s, but they are a 500 employee shop) cloud provider and building your business on that to ostensibly save a buck is certainly one of the engineering decisions of all time, but for many teams it's a terrible engineering idea that will come to bite you.
Hetzner (ask how many engineers have even heard that name before) is not a mature platform (again, it's a 500 person shop, I wouldn't expect them to be), so it's risky to future devx and engprod and maintainability and scalability and security and reliability to build your whole business on them:
Remember next time you think about saving money by going to a DIY approach: headcount and SWE-hrs and SRE-hrs and productivity are very expensive. Devx and employee morale is intangible but can get expensive if all your talent constantly wants to leave because you have a mess of unmaintainable tech debt. You can get cash by taking on tech debt, but eventually the loan comes due, with interest.
CircumspectCapybara@reddit
Ah yes, Hetzner, the most trusted name in the industry when it comes to cloud services.
Yes, moving from an industry-standard hyperscaler to a mom-and-pop startup (/s, but they are a 500 employee shop) cloud provider and building your business on that to ostensibly save a buck is certainly one of the engineering decisions of all time, but for many teams it's a terrible engineering idea that will come to bite you.
Hetzner (ask how many engineers have even heard that name before) is not a mature platform (again, it's a 500 person shop, I wouldn't expect them to be), so it's risky to future devx and engprod and maintainability and scalability and security and reliability to build your whole business on them:
punkpang@reddit
It's fascinating how you can write so much crap in order to sound smart and knowledgeable. Can you imagine what would happen if you put half of that effort into doing something positive? Everything you wrote about Hetzner is a factual lie.
pikzel@reddit
This whole thread is just AI conversations. Bots or just chatgpt copypasta.
lieuwestra@reddit
Well known, isn't it? Startups benefit from hyperscalers; the more mature your company gets, the more you need to move away from them.