How do you package and deploy your SaaS as a managed service on the customer's cloud?
Posted by drooolingidiot@reddit | ExperiencedDevs | 21 comments
For SaaS-like software products that deal with a lot of critical proprietary data, large companies typically want to run them as a managed service inside their own VPCs (on AWS, Azure, GCP, etc.). It seems like there's no appetite to trust a dinky little startup with all of that critical (potentially PII) data.
Throughout my career, I've only worked on cloud-based SaaS and never had to deal with this until now.
Say you've built a SaaS product already, and you've integrated with the likes of SendGrid for communications and something like Ory or Clerk for enterprise auth. How do you actually package up, deploy, and manage the software in all of your customers' VPCs? Especially when there are dependencies like databases and the 3rd-party services I mentioned above.
martinbean@reddit
It’s not software-as-a-service if you’re not providing it as a service, and instead giving people turnkey software that they install on their own hardware/infrastructure.
Moist-Ad-2960@reddit
OpenAI called it "Private SaaS": still SaaS, but with some benefits.
CryptosGoBrrr@reddit
Came here to say this. Installing software on a client's server or VPS is not SaaS by definition and TBH, sounds like a logistical nightmare.
verbass@reddit
On-prem solutions are necessary for enterprise. The software is the code, the maintenance, and the web app user interface.
Constant-Listen834@reddit
OP's scenario is actually very common for B2B SaaS companies to face. Look at SaaS companies like Databricks who do this.
stevefuzz@reddit
Painfully
Morefey@reddit
For self-managed customers, we provide a docker-compose file and our binaries so they can set up the solution "easily". Alongside that, we provide documentation covering the prerequisites: Ubuntu VMs, plus the hardware and network configuration.
What we often see is a lack of maturity from this kind of customer. They have trouble setting up the infrastructure and ask for a lot of support. Our documentation has become extremely detailed, and they still skip steps, because why not. It also leads to annoying discussions: we have to explain that being on an internal network cut off from the internet doesn't mean you can skip HTTPS and encryption, and no, we won't allow our solution to run over plain HTTP, because we must enforce TLS as part of the ISO compliance you demand from us. Yes, the monitoring platform and log export have to be set up if you want any support from us. You should think about how you will deploy the certificates of your internal PKI, because it's your responsibility to ensure your devices can communicate with a solution on your infrastructure. And please set up a f**king backup of your database. If all this annoys you, please keep in mind we sell a SaaS service where we take care of that for you.
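A minimal sketch of what that kind of self-managed bundle's compose file might look like (image names, ports, and environment variables are entirely hypothetical):

```yaml
# Hypothetical docker-compose.yml shipped to self-managed customers.
services:
  app:
    image: registry.vendor.example/product/app:1.4.2
    ports:
      - "443:8443"                        # TLS only; plain HTTP is not offered
    environment:
      DATABASE_URL: postgres://product:${DB_PASSWORD}@db:5432/product
      LOG_EXPORT_URL: ${LOG_EXPORT_URL}   # monitoring/log export, required for support
    depends_on:
      - db
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: ${DB_PASSWORD}
    volumes:
      - db-data:/var/lib/postgresql/data  # and please, back this volume up
volumes:
  db-data:
```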
BeenThere11@reddit
Using IaC?
Use IaC and let their admins spin it up. Then check that everything works.
Reverent@reddit
Move from a centralised model to a distributed "fleet management model".
That means shipping the solution, embedded DB, backups and all, to each customer, and having the software include a phone-home method of administration.
It's more work up front, but you benefit from better data-privacy boundaries and near-unlimited horizontal scaling. You don't have to worry about DB hand-holding when no database ever grows larger than a single customer tenant.
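As a rough illustration of the phone-home piece: each customer instance makes outbound-only check-ins to the vendor's control plane, reporting health and picking up admin commands. The endpoint, payload shape, and interval below are all hypothetical:

```typescript
// Hypothetical fleet check-in loop (Node 18+, global fetch).
const CONTROL_PLANE = "https://fleet.vendor.example/api/v1";

async function checkIn(instanceId: string, token: string): Promise<void> {
  const res = await fetch(`${CONTROL_PLANE}/instances/${instanceId}/checkin`, {
    method: "POST",
    headers: { Authorization: `Bearer ${token}`, "Content-Type": "application/json" },
    body: JSON.stringify({ version: "1.4.2", healthy: true, timestamp: Date.now() }),
  });
  if (!res.ok) throw new Error(`check-in failed: ${res.status}`);
  // The control plane replies with any pending administrative commands,
  // e.g. a config change or a scheduled upgrade window.
  const { pendingCommands } = await res.json();
  for (const cmd of pendingCommands ?? []) console.log("pending command:", cmd);
}

// Outbound-only polling means the customer's inbound firewall stays closed.
setInterval(
  () => checkIn("cust-042", process.env.FLEET_TOKEN ?? "").catch(console.error),
  5 * 60_000,
);
```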
btmc@reddit
Docker containers and Helm charts are a fairly easy solution. Also AWS Marketplace and the like.
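As a rough illustration of the Helm route, the customer installs the chart into their own cluster and overrides a values file; everything below (chart name, keys) is hypothetical:

```yaml
# Hypothetical values.yaml, applied with:
#   helm upgrade --install product vendor/product -f values.yaml
image:
  repository: registry.vendor.example/product/app
  tag: "1.4.2"
ingress:
  enabled: true
  host: product.customer.internal
postgresql:
  enabled: true          # bundled subchart; set to false to use an external DB
externalDatabase:
  host: ""
  existingSecret: ""
```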
Honest_Rice_6991@reddit
If you're using GCP, you can use the producer-consumer strategy.
edgmnt_net@reddit
IMO that's one of the reasons to avoid depending on 3rd-party, non-portable, proprietary services. Maybe not a sufficient reason on its own, but once you account for development bottlenecks and ending up with a distributed system that's difficult and expensive to test, it becomes clearer.
On the other hand, if it's just PostgreSQL or just something S3-like, you can often package those (or replacements for them) in containers, or bake in some degree of portability. Not across database vendors per se, that's kind of meaningless; just make things reasonably easy to run in different environments and document it. As above, maintaining some portability is a very reasonable thing to do anyway: developers should be able to test things locally, for example, and shouldn't need a very expensive shared setup. So the incentives are there regardless.
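As a sketch of that kind of portability, the local-development stand-ins might be as simple as a compose file with open-source equivalents (the images are real, the layout is hypothetical):

```yaml
# Hypothetical docker-compose.dev.yml: run the product's dependencies locally
# instead of against shared (and expensive) cloud infrastructure.
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: dev
    ports:
      - "5432:5432"
  s3:
    image: minio/minio          # S3-compatible object storage for local testing
    command: server /data --console-address ":9001"
    ports:
      - "9000:9000"
      - "9001:9001"
```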
Unfortunately, many jumped prematurely onto the highly-distributed, highly-fragmented services bandwagon, and now portability is no longer an option. They might even think "that's fine", "that's how things are done".
SlapNuts007@reddit
Take a look at the way Databricks classic (i.e., not serverless) works. Their whole stack is designed to set up managed services in the customer's account, store data in customer object-storage, and run workloads on customer VMs.
As you can probably tell, the whole architecture was built with this in mind. Working backwards to get there from a SaaS product designed specifically for multi-tenancy is going to be a problem.
seriousbear@reddit
The product I worked on was a general-purpose data integration pipeline (batch/realtime ETL). For the reasons you described, I had to build it in such a way that the worker node could be deployed elsewhere. Kubernetes didn't really work for me because I needed to rebalance load between workers based on certain data-specific criteria. The architecture I ended up with was: (1) a generic "embryo" docker image that customers deploy however they want, and (2) a coordinator that serves the binary payload of the service to #1. The embryo pings the coordinator on launch and can become an API server or a worker node. Worker nodes also receive binaries of source/destination plugins from the coordinator, allowing me to dynamically deploy pipeline-specific components per worker without a restart. This let me roll out new versions of components to low-risk pipelines first, and gave customers the ability to run a staging environment of my service in their infrastructure. The control plane (web dashboard) was still deployed only on my side, but data and credentials never left the perimeter of enterprise customers. The implementation of this approach is very tech-stack specific. In my case, it was a JVM language, so components were signed jar files loaded on the fly using a custom ClassLoader.
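The dynamic-loading piece of that design is standard JVM machinery; a minimal, hypothetical sketch (signature verification and child-first delegation omitted):

```java
import java.net.URL;
import java.net.URLClassLoader;

// Hypothetical sketch: load a plugin class from a jar fetched from the coordinator.
public final class PluginLoader {
    public static Runnable loadPlugin(URL jarUrl, String className) throws Exception {
        // Each plugin gets its own class loader, so it can be replaced without
        // restarting the worker; the parent loader provides the shared API types.
        URLClassLoader loader = new URLClassLoader(
                new URL[] { jarUrl }, PluginLoader.class.getClassLoader());
        Class<?> clazz = Class.forName(className, true, loader);
        return (Runnable) clazz.getDeclaredConstructor().newInstance();
    }
}
```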
andymaclean19@reddit
Something you could experiment with is having the customer provide the resources that store the data and grant your service access to them. So, for example, instead of storing data in an S3 bucket you control, the customer could create one and grant your account access to it. The customer gets to control things like encryption at rest and retains ownership of the information.
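Concretely, on AWS that looks like a cross-account bucket policy on the customer-owned bucket; a hypothetical example (account ID, role, and bucket names are placeholders):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "GrantVendorServiceAccess",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111122223333:role/vendor-service-role" },
      "Action": ["s3:GetObject", "s3:PutObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::customer-owned-bucket",
        "arn:aws:s3:::customer-owned-bucket/*"
      ]
    }
  ]
}
```

The customer keeps control of the bucket (including using their own KMS key for encryption at rest); the vendor's role only gets the object-level access it needs.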
andymaclean19@reddit
You might find that, for a 'dinky startup', trying to make software for a large corporate that wants to control all of the infrastructure is just too much. Being able to deploy on the customer's cloud is likely just the beginning: they will probably also want design reviews, ISO 27001 in the development process, DLP tools and similar, and detailed vulnerability management and patching with deadlines (e.g. critical vulns in dependencies must be patched in under 7 days).
Deploying into a customer environment is hard. If you are making a SaaS product you will need a different sort of mindset to do it. Updates will be an issue because you cannot just use CI/CD and correct/roll back any mistakes.
For a startup, choosing your target customers is important. Depending on what you're building you might be better off looking for customers who will consume your SaaS model first and using those to grow and mature the organisation/product. Then when it gets bigger it will be better at handling the demands of the bigger organisations who ask for more control.
originalchronoguy@reddit
K8s turnkey appliance using Gravity.
drooolingidiot@reddit (OP)
This gravity? https://github.com/gravitational/gravity/ Looks to be abandoned
originalchronoguy@reddit
It's Teleport now:
https://en.m.wikipedia.org/wiki/Teleport_(software)
gosuexac@reddit
I was going to suggest using helm, but I suppose if you have customers with air-gapped systems (AI/military/satellite/energy/oil rigs/MRI/3-letter agencies) then Gravity works.
raddingy@reddit
A couple of strategies that worked for me (AWS specific because that’s where my experience is):
Give each of your clients their own AWS account, and then connect that account's VPC to their VPN. If they're an AWS shop, you can peer VPCs with them. Package your code with CDK, and you can deploy to a nearly unlimited number of AWS accounts quickly. This only works if your clients are OK with this approach. (See the CDK sketch after this list.)
If they absolutely must have their own accounts and resources under their full and sole control, use Terraform to define the infrastructure and have it target the cloud provider of your choosing based on parameters. (See the Terraform sketch below.)
Most of the big providers have a marketplace you can sell your software on. They typically have a way for you to define the required infra and enable a one-click install.
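For the multi-account CDK approach, a minimal, hypothetical sketch: one stack class, instantiated once per customer account (account IDs, names, and resources are all made up):

```typescript
// Hypothetical CDK app (TypeScript): one stack per customer AWS account.
import { App, Stack, StackProps, Tags } from "aws-cdk-lib";
import * as ec2 from "aws-cdk-lib/aws-ec2";

interface CustomerStackProps extends StackProps {
  customerName: string;
}

class CustomerStack extends Stack {
  constructor(scope: App, id: string, props: CustomerStackProps) {
    super(scope, id, props);
    Tags.of(this).add("customer", props.customerName);
    // Dedicated network per customer; VPC peering/VPN attachments hang off this.
    new ec2.Vpc(this, "Vpc", { maxAzs: 2 });
    // ...application resources go here
  }
}

const app = new App();
// `cdk deploy CustomerA` (with credentials for that account) deploys one customer.
new CustomerStack(app, "CustomerA", {
  customerName: "customer-a",
  env: { account: "111111111111", region: "us-east-1" },
});
new CustomerStack(app, "CustomerB", {
  customerName: "customer-b",
  env: { account: "222222222222", region: "eu-west-1" },
});
```

And for the Terraform route, a hypothetical layout that selects a provider-specific module per customer (the module tree and variables are illustrative; applied with something like `terraform apply -var-file=customers/acme.tfvars`):

```hcl
# Hypothetical root module: the same product, parameterised per customer/cloud.
variable "cloud"       { type = string } # "aws" | "azure" | "gcp"
variable "region"      { type = string }
variable "customer_id" { type = string }

module "product_aws" {
  source      = "./modules/aws"
  count       = var.cloud == "aws" ? 1 : 0
  region      = var.region
  customer_id = var.customer_id
}

module "product_gcp" {
  source      = "./modules/gcp"
  count       = var.cloud == "gcp" ? 1 : 0
  region      = var.region
  customer_id = var.customer_id
}
```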