Company wanted to use Kubernetes. Turns out it was for a SINGLE MONOLITHIC application. Now we have a bloated over-engineered POS application and I'm going insane.
Posted by Ill_Dragonfly2422@reddit | sysadmin | View on Reddit | 314 comments
This is probably on me. I should have pushed back harder to make sure we really needed k8s and not something else. My fault for assuming the more senior guys knew what they wanted when they hired me. On the plus side, I'm basically irreplaceable because nobody other than me understands this Frankenstein monstrosity.
A bit of advice: if you think you need Kubernetes, you don't. Unless you really know what you're doing.
sorta_oaky_aftabirth@reddit
Learn to code and make it a micro service
Duh
RedShift9@reddit
Buzzword driven management and engineering is usually bad.
Rhythm_Killer@reddit
Quiet you, and get back in the basement until we all get our GenAI I heard so much about
KupoMcMog@reddit
When a VP goes to a conference to party for 3 days, gets seduced by a silver-tongued salesman, puts ink to paper before jetting back home, and does a quick meeting with your team basically telling you 'this needs to be up and running by the end of the week'
fun fucking times, hope the coke was good in vegas Mr. VP.
dansedemorte@reddit
That's the "move everything to the cloud" set. Woefully unprepared for how much it's actually going to cost.
Big-Industry4237@reddit
As long as reality exists, you can know.. you are paying for that availability and DR capabilities. Assuming it’s implemented correctly 😂
dansedemorte@reddit
and we know how well most of these moves are planned.
I worked for a big reinsurance company for a couple of years. Even though they had "computerized" their operations, they still followed the same business practices and critical workflows from a time when typing pools were still a thing.
They killed half a forest each night so that they could use 2-4 pages from that 150-page printout job. With absolutely no way to just print pages 20-25 from the stored PDF file.
Cinderhazed15@reddit
Data ingress and egress fees, oh boy!
chron67@reddit
But just think how much we are saving for not having to pay for on site infrastructure! What's that? We still need all that?
dingerz@reddit
"It's opex, bro!" /Neuman
DrAculaAlucardMD@reddit
Talking with a potential new job and all they kept saying is "We want AI integrated in all aspects!" the pay is stupidly good, but the board does not understand what they want.
KupoMcMog@reddit
just make an API hooked into ChatGPT with a little Clippy GUI that can't be closed. That pings them anytime they're idle for more than 2 minutes.
"Sorry that's how AI works..."
RatsOnCocaine69@reddit
God damn, sign me up, too
Emergency_Ad8571@reddit
It was excellent, thank you.
SvnRex@reddit
I've had this happen many times
gadimus@reddit
Ok, the cluster of n90s will cost more per hour to run than a corporate escort. Would you like one or ten?
IAmMarwood@reddit
Our director keeps sprinkling AI buzzwords into every plan he has going forward.
As the admin responsible for our Entra tenant I’m doing my best to hold off the coming Copilot tide but it’s a losing battle.
sonic10158@reddit
Your 20 year employment award? An NFT of the company logo!!
pimflapvoratio@reddit
I got a really nice jacket from LL Bean with the company logo.
Hagigamer@reddit
Which is worth way more than the NFT.
occasional_cynic@reddit
Had a past micro-managing CEO tell the CTO to bring in Cisco ACI to "automate" our networks. God what a nightmare that was.
niomosy@reddit
"Okay so we need a firewall request submitted and don't forget to also put in an ACI contract request...."
Yeah, that's us.
occasional_cynic@reddit
LOL, been through two ACI "attempts", and on one we had to abandon contracts altogether when we realized the 9k's TCAM space could only handle so much traffic before bludgeoning their CPUs.
Anyway, sorry you have to deal with it.
gpzj94@reddit
We're still trying to get ACI out, but it is somehow harder to remove than it was to put in, lol. We never even got contracts set up right, because there's so much random shit talking to each other that we can't really lock in on contracts, other than super-known things like Active Directory that everything needs to reach on certain ports anyway, which was really no different than using Windows firewall rules.
Cannabace@reddit
Management: “Is it AI?”
Me: “sure it’s in the name but not really”
Management: take my money fry gif
Ill_Dragonfly2422@reddit (OP)
Unfortunately, it pays
nukethedogphilly@reddit
That's literally the deal. If management wants me to pour beer on the servers I will literally do it for a raise. I'll email them saying it's a terrible idea, and then calmly walk into the server room and pour.
Interesting_Scar_588@reddit
"Let me just print this approved change control and the approver list... Ok, lights torch y'all might want to take a step back. When we burned this in dev, there were sparks."
land8844@reddit
Don't forget to forward that email outside of the organization. Can't have the written permission to sabotage go to waste.
Of course this is 2024 and most companies are on 365 now...
SgtBundy@reddit
*whispers* Blockchain.....
FerryCliment@reddit
Feels like if you are developing something, you really need to find the buzzword, even before the idea.
Something that really sticks, that has a ring to it and is prone to ending up on cheap motivational posters, is the key to success.
freedcreativity@reddit
Almost as bad as resume-driven development. Of course we need custom architecture written in rust on a scalable cluster! For our web services, modest database, and AWS cloud stuff... I mean, that CTO did get a much cooler job afterwards but it was nice to rip out the custom stuff for bog standard AWS.
Moonfaced@reddit
Yeah.. once when my manager threw out the word "Kubernetes" to a bunch of Windows server administrators with zero cloud experience, I was wondering which higher-up had read an article on it that day.
boli99@reddit
That's some blue-sky thinking right there. Let's leverage those synergies immediately for a quick win!
Clean-Agent666@reddit
Let me circle back to you on that, I'm in the middle of rapidiously cloudifying sustainable leadership!
e_karma@reddit
Wow, the number of times I have heard that
cybersplice@reddit
Underrated comment
randomataxia@reddit
https://www.youtube.com/watch?v=Bk1dbsBWQ3k
whatyoucallmetoday@reddit
Shush. We are about 70% of the way into our 25-year project to migrate our code base to Java. /s
Fibbs@reddit
Sales driven development is worse.
sheikhyerbouti@reddit
Attitude like that runs counter to the team-oriented synergy we're trying to foster here.
/s
A_Unique_User68801@reddit
Never a lack of work though!
me_myself_and_my_dog@reddit
Most just want this stuff on their resume so they can move on to the next job.
I see jobs posted and they always want all these cloud skills and Kubernetes but they are small businesses. I assume the last engineer sold upper management on it so they can add it to their resume then they split for the next job with a pay bump leaving the old biz with a nightmare to run while hemorrhaging the IT budget.
the-devops-dude@reddit
Kubernetes is rarely the right answer. It’s often an answer, and can be an answer, but it’s rarely the right answer.
I’ve been to KubeCon a few times and have only heard of a handful of truly k8s-specific problems where k8s was the best answer.
A lot of it is Platform Engineers, SREs, SysOps, DevOps, etc. wanting to prove themselves and over-engineer solutions.
occasional_cynic@reddit
But Kubernetes sounds so cool when you say it.
rainer_d@reddit
It was even mentioned in the latest NCIS episode.
On a cluster of Raspberry Pis.
Archon-@reddit
As someone that runs a k8s cluster on some pis at home I feel personally attacked right now lol
BladeCollectorGirl@reddit
Same here. I'm running a cluster on 5 Pi4s and it's running elasticsearch for my home SIEM. Full NFS share for all nodes.
SevaraB@reddit
Ugh. I’ve deliberately avoided thinking about NCIS making any technical references since the infamous “a keyboard under each hand to hack twice as fast” scene.
__ZOMBOY__@reddit
Unless we’re thinking of different scenes, I think you have the memory backwards. The infamous scene is two people on ONE keyboard “trying to stop the hacker faster”.
Then the old guy pulls the power cord from the ~~monitor~~ computer, and the show immediately gets auto-renewed for infinity more seasons.
Either way, it’s fucking awful
Dumfk@reddit
looks around
/hides my glove80 and pulls out a cheap logitech keyboard from 2005
pointlessone@reddit
Do you have them mounted to flip down attachments to the arms of your chair?
Dumfk@reddit
No, I don't have them on my chair. It took me about 2 weeks to get "normal" on them. I'm an almost-50-year-old dev/admin and got them more for wrist pain; I didn't like the old Microsoft ergonomic ones. It was a tossup between the Kinesis Advantage and these, and these won because I didn't want that huge footprint on my desk.
pointlessone@reddit
That's really not a bad adjustment curve considering how big of a departure they are from a traditional keyboard. I've never really liked the Microsoft split myself, but I've been lucky enough not to have any of the wrist problems that plague our industry so it's never been an issue to work off a standard board. Thanks for the info!
Taur-e-Ndaedelos@reddit
*One google search later*
What in the everloving fuck?
Sushi-And-The-Beast@reddit
A 16 core with a 10 meg pipe?
Algent@reddit
WAIT, it's still going ? holy shit.
Morkai@reddit
They keep launching more spin-offs too.
goferking@reddit
For a test/home environment right???
OcotilloWells@reddit
Using VeeBee six?
-Shants-@reddit
No one knows what it means but it’s provocative. Gets the people GOING
thatpaulbloke@reddit
Don't you worry about Kubernetes, let me worry about blank.
catofcommand@reddit
Blank...? BLAAANK!?
MarzMan@reddit
You're not looking at the big picture!
terryducks@reddit
Does Jackson Pollock mean anything to you ?
0ld_Gr1m@reddit
My only regret is.. That I have.. Boneitis
TheFluffiestRedditor@reddit
It's more to deploy uwubernetes and piss everyone off.
fresh-dork@reddit
back in the day there was a company that had a java web app built around supporting the finances of city sized orgs - they advertised their use of EJB as evidence of how advanced they were. turns out that they had 2 EJBs connected together and no plans for more.
on the plus side, the app seemed fairly well constructed, even though the specs were super bureaucratic
aes_gcm@reddit
I mean, lets be honest, that's a big motivating factor for most of this nonsense.
blaktronium@reddit
"let's run it on kube" is worth the headache later you mean?
Thomas_Jefferman@reddit
k8's!
jjwhitaker@reddit
Which way? I only know the wrong pronunciation.
degoba@reddit
Nah dude K8s sounds way cooler.
cheese_is_available@reddit
Flows so much better when writing k8s via Slack like the professionals we are.
VolcanicBear@reddit
As a senior Kubernetes consultant... Most companies that want Kubernetes don't need it.
Sorry to hear your monolithic application is an over-engineered POS though.
vantasmer@reddit
Can the companies that don't need kubernetes still benefit from some of its features? Or would you suggest an entire deployment / orchestration strategy? I've done some uber small k3s deployments just to run static websites, this was more of a personal project, but I still leveraged reasonable gitops principles which made the app development super enjoyable.
Hot-Profession4091@reddit
JFC no. It takes a team of people to properly setup and run a k8s cluster. Absolutely not worth the overhead for most companies.
vantasmer@reddit
Depends on the size and complexity of the cluster... I guess? You do not need a whole team for one cluster if it's set up correctly.
DorphinPack@reddit
We’re talking about more than just node count and stack choices though. You can easily maintain a cluster by yourself for your own purposes. Interop with other teams and responding to business needs adds a whole other layer.
Organizational demands combined with technical complexity is why you’d need a team in most places.
Hot-Profession4091@reddit
This. Running a k8s cluster in production requires an awful lot of other stuff running in that cluster and people to make sure that stuff is healthy, as well as the nodes, and rolling out new nodes with the latest OS patches, and, and, and… it’s a whole job.
Which actually reminds me of when we needed to increase our node sizes, because all the other pods we had running just for monitoring, metrics, etc. were taking up half of every node and actual app pods couldn't be scheduled.
It’s a great tool if you have the problems it solves and the people to support it. Not everyone has those problems.
BioshockEnthusiast@reddit
Sounds like a recipe for getting your pto fucked with.
EmergencySwitch@reddit
Yeah - you don't need k8s if your app has no concept of redundancy. But docker + CI/CD is an excellent way to get used to gitops, even for prod apps
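[Editor's note: a minimal sketch of the "docker + CI/CD" gitops flow described above. The registry, image name, and deploy host are hypothetical, using GitHub Actions-style syntax for illustration.]

```yaml
# Hypothetical CI job: every commit to main builds an image tagged with the
# commit SHA, pushes it, and redeploys by pulling the pinned tag.
name: build-and-deploy
on: { push: { branches: [main] } }
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: docker build -t registry.example.com/myapp:${{ github.sha }} .
      - run: docker push registry.example.com/myapp:${{ github.sha }}
      - run: ssh deploy@prod.example.com "docker pull registry.example.com/myapp:${{ github.sha }} && docker compose up -d"
```

Because the image tag is the commit SHA, the running version is always traceable back to a commit, which is most of what "gitops" buys you at this scale.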
niomosy@reddit
Podman if you're running RHEL since Red Hat replaced Docker with Podman.
terryducks@reddit
grumble ... flipping IBM ... grumble ... rabble ... RHEL ... very microsofty ... NIH bullshit.
IamHydrogenMike@reddit
Having solid CI/CD policies and procedures is more important than whatever tech you are deploying to. If your deployment policies suck, whatever you use in the end will also suck.
SgtBundy@reddit
That is true - for most of what I have seen K8s used for, docker-compose and effective deployment automation would cover it.
It comes into its own when you start wanting to have more complex ingress and access controls, need redundancy and some form of scalability, or you want to more effectively binpack across tin and share the infrastructure.
Where I am, it was brought in because someone went "we are containerising!". What they meant was that the developers were learning Dockerfiles. The leading application had fundamental architecture issues that ran at cross purposes to Kubernetes (we want to run 1 pod per customer - it's how we designed the container!). I love it for our own infra services; we can bang out all sorts of our own containers and tools. The actual app teams seem to struggle to use it.
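[Editor's note: a sketch of the docker-compose alternative mentioned above. Service names, images, and ports are hypothetical; this covers the common case of a web app plus database on a single host.]

```yaml
# compose.yaml - a typical small deployment that doesn't need an orchestrator
services:
  web:
    image: registry.example.com/myapp:1.0
    ports: ["8080:8080"]
    restart: unless-stopped   # the host's dockerd restarts it on failure
    depends_on: [db]
  db:
    image: postgres:16
    volumes: ["dbdata:/var/lib/postgresql/data"]
    restart: unless-stopped
volumes:
  dbdata:
```

What this does not give you, and where k8s starts to earn its keep, is multi-host scheduling, binpacking, and sophisticated ingress, as the comment above notes.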
XylophoneFromHell@reddit
How would a young sys engineer develop the skills to be a Kubernetes consultant? I’m trying and not really getting anywhere yet.
VolcanicBear@reddit
Unfortunately I can't really help I'm afraid. It's an awkward technology that's relatively cost prohibitive to learn.
I was lucky enough to be hired a few years ago, with no Kubernetes skills (but 10+ years of Linux experience, 100% on rhcsa and rhce etc) on the basis that they'd be able to teach me.
All I can really say is grab whatever opportunity related to the platform you can. If your employer uses it, try and talk to some people who work with it etc.
sschueller@reddit
What would you recommend if you're already running docker swarm but are looking for "easier" management of micro services and scaling?
For some reason I just feel docker swarm has been abandoned by docker.
I need something where I can update/replace the underlying hosts and not have docker swarm lose its quorum randomly for no reason, with no recovery options other than a full reset.
lebean@reddit
It takes some work to break swarm's quorum, so if your team is breaking a swarm, then the complexity of kubernetes is likely to bring nightmares. We've run several swarm clusters in 24/7 prod for around 6 years now and I've seen quorum broken exactly once, and it was my fault for being impatient during upgrades.
sschueller@reddit
Maybe it was the old version we were on, but we successfully replaced every single node until the last one, which, as we removed it, killed the quorum for no reason. There was nothing we could do to recover other than reset the damn cluster, which meant all containers got stopped and everything had to be redeployed.
nullbyte420@reddit
Kubernetes. It's not that hard; these people are exaggerating. Just don't add every single feature you can find. Keep it simple and it's great.
fat_cock_freddy@reddit
Honestly that sounds like a reasonable use case for Kubernetes.
You could try a lightweight distribution such as K3s. One nice thing about it is that you can use something familiar as the state storage - Postgres or MySQL databases for example.
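[Editor's note: a sketch of the K3s setup described above. K3s supports swapping etcd for an external SQL datastore via `--datastore-endpoint`; the connection string and credentials here are hypothetical.]

```shell
# Single-server K3s backed by an external Postgres database instead of etcd.
# Requires root; the Postgres instance must already exist and be reachable.
curl -sfL https://get.k3s.io | sh -s - server \
  --datastore-endpoint="postgres://k3s:secret@db.example.com:5432/k3s"
```

This is appealing for teams coming from Swarm: the cluster state lives in a database they already know how to back up and restore.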
Seven-Prime@reddit
Rancher has a few k8s distributions that I've enjoyed using. But that's still k8s. Maybe quadlets with pacemaker for lower level?
vantasmer@reddit
nomad by hashicorp is always an option, I've always heard great things but never had a chance to try it in prod.
vikinick@reddit
The vast majority of companies could probably get away with just running everything in a single VM and calling it a day tbh.
FRALEWHALE@reddit
The security team would like a word about separation, isolation, redundancy, etc. lol
itsjustawindmill@reddit
What I would give to have your problems lol… where I work, it’s an ongoing, years-long fight just to get folks to use a real database system instead of some homebrew CGI script that writes JSON files to NFS… mind you this is a critical system for all our software, peaks at thousands of requests per second, and has over a billion records in it…
or how about synchronizing distributed state by polling NFS files? who needs a message queue or even just basic TCP sockets…
oh, and a single VM takes weeks for the relevant team to provision.
No matter how many problems it would solve, nobody except me seems to ever want to change the shitty decades-old system despite there being clear and popular off-the-shelf alternatives, most of which I’ve POC’d for them. Not sure how to fix this pathological mistrust of anything new, much less the attitude that any work not directly solving customer-facing requests is a waste of time.
Sincerely, A radicalized developer-turned-devops-wannabe
htmlcoderexe@reddit
fucked up...
vantasmer@reddit
Updates and patching would be such a pain, but I generally agree with just running docker + VM with a little bit of ansible in there
jewdai@reddit
I've recently learned the art of microservices and I'm sold. The key thing is you need to work in a monorepo so you can get some reuse, but I'm no longer trying to weave some weird pattern from one service into another when every service stands on its own as an AWS Lambda or ECS task.
jaydizzleforshizzle@reddit
Well yah, I’m assuming the bleeding tech companies that want/need it have engineers for that, the consulting is probably an insane amount of legacy lift and shift.
CompWizrd@reddit
Years ago, we had an application that the consultant said we needed 2 8-core machines to run. I asked if we could just run it on a single 16 core cpu via VMs. They pushed back and finally said ok, but we don't guarantee performance.
System purchased and installed, and the thing doesn't use more than a couple percent CPU under load. It later became our virtualization server to move half a rack worth of physical servers into VM's. Still didn't touch more than a few percent cpu.
Later, I did my own research and found that the specs they were quoting were for something like 10,000+ users.. We had maybe 40 using it..
SgtBundy@reddit
Blockchain. Need I say more.
Tasked with deploying a blockchain system. Blockchain requires 2N+1 nodes for fault tolerance, in our case 7 nodes. The actual application sat on the blockchain and had to form consensus across the blockchain nodes, which meant every node had to be scaled for the overall performance of one. The projected requirements we initially got worked out to nodes with ~120 cores, 3TB of RAM, and 14TB of NVMe storage. There was also room left in case we needed GPUs for offloading crypto signing. All environments considered, we had to buy 26 of these monsters.
Eventual tuning later in the project dropped the requirements down to about 40% of the original sizing, but it turns out duplicating your transactions 7 times over the network kinda sucks for throughput. Eventually the project had to admit defeat and go back to the drawing board.
__ZOMBOY__@reddit
Can I ask what kind of project this was? I’m genuinely curious about what kind of application they thought blockchain would be good for (assuming they weren’t just trying to create a “CorpCoin”)
SgtBundy@reddit
The use case was a ledger service, which made sense for blockchain, but usage was not distributed, so all the overheads made no sense. But the project had a view that it would eventually be distributed with clients, but there was no real roadmap for it other than "future".
Don't want to identify further though.
notospez@reddit
You'd be surprised how often we encounter the reverse. My usual answer to resource needs is "start here for the initial onboarding and then scale CPU, RAM and disk based on monitoring of real-world usage". Somehow that's not acceptable and most companies want hard numbers, so then we respond with way oversized specs to make sure they won't come back to complain about performance.
da_apz@reddit
This reminds me of when we were getting a new server for a new version of our ERP, and the ERP provider was asked about the specs. They specified it needed a 3.4GHz processor. A 3.4GHz what? Nothing, just that it's 3.4GHz. No one had a better technical explanation for it.
RedShift9@reddit
There's three constants in life: death, taxes, and ERP vendors wanting a bigger server.
PlatformPuzzled7471@reddit
Yeah that was SAP’s solution to slow performance back in 2011 and it wouldn’t surprise me if that was still the case. My response: “My servers can handle 10 times the traffic if they weren’t busy apologizing for your crap codebase.”
perthguppy@reddit
Heh. We had a client years ago whose global head office engaged Deloitte to deploy Citrix for our country's branch office. There were about 12-18 employees who needed to use the apps deployed on Citrix for their office. I was then brought in after it was all deployed to help debug issues with it. There were 38 virtual machines deployed just for their office Citrix deployment. For at best 18 users. And none of the existing IT support resources had ever touched Citrix.
wirral_guy@reddit
Having been the VMware/Hyper-V/Azure etc. specialist at many places, I can tell you - software companies literally look at the current base-spec server available on the market and use that as their baseline.
I just gave up trying to convince anyone that you start small and ramp up if needed and just gave them whatever they want, even if I knew it'd be slower.
unethicalposter@reddit
Man, I've used some black-box VM software with stupid requirements like that, but they make you reserve the CPU for it as well or it won't start. Thanks, I just reserved 12GHz for your app that uses 100MHz.
CauliflowerProof3695@reddit
Sounds like you don't know what you're doing. I use k8s every single day and have never had a single issue. Maybe it's a skill thing?
Ill_Dragonfly2422@reddit (OP)
I'm not the developer dipshit. I don't have access to the source code. I don't want access to the source code. I don't build the docker images. Don't talk shit about something you know nothing about.
malikto44@reddit
If someone gets a hammer, they think everything is a nail. Kubernetes is great for cattle, but anything stateful or "pet" VMs, it is just a layer of moving parts that isn't needed, and even though Kubernetes does support it, it is just best to throw that stuff on a Bog-standard VM farm.
I've encountered similar stuff at previous jobs. Just because a tool is there doesn't mean it is needed, and for almost anything and everything on-prem, VMWare or another hypervisor with a solid control plane is good enough, especially when it comes to backups and DR.
altodor@reddit
I've found that this winds up with devs focusing on features and not maintenance, so we wind up with OSes in production several years past the EOL. I'm putting K8s into my environment to push them up the stack so I can maintain the underlying infrastructure and they can worry about the software and not the platform it's on.
Critical-Explorer179@reddit
...then you end up with a gazillion images inheriting from EOL base images, with outdated packages. CI/CD rebuild/redeploy of everything every day is possible, of course, but you just moved your EOL-OS-on-VM problem to the devs down the stream. And if you have many teams, they all need to know they have to bump up and freshen up their Dockerfiles once in a while...
altodor@reddit
Well in my environment devs are the primary managers of the entire Linux environment now, so it's probably not going to make the problem worse.
I plan to push up and make in-roads into the base images, but for now I'm just pushing them out of the OS. They don't want to be responsible for it anymore, historically ops/infra/IT has been hands off after VM creation, and there's been an unclaimed middle area between hypervisor and application layer that I am now taking ownership of, with everyone's blessing. It's the push up into image bases where I suspect some friction will occur, but what containers they do have are built by Spring at the moment so I assume that's not doing terrible things by default.
Marathon2021@reddit
But but but ... how will the devs pad their resumes if we don't?
Heard the phrase "resume-driven development" over in /r/devops once, and it was just such an apt description.
vantasmer@reddit
I’m stealing resume driven development lol
Entaris@reddit
Should have probably used blockchain instead. Maybe with an NFT for synergy.
OpenScore@reddit
Don't forget AI for the next gen pixie dust.
soundtom@reddit
Having worked with Kubernetes since 2016, it's a super powerful tool if you actually need it. It needs investment, staffing, and expertise. You need to integrate tooling and workflows. It's a LOT of work, but worth it if your org needs the benefits of scaling and workload management. If it wasn't for Kubernetes, my company would have had a lot more roadblocks in building out our products, but we have 2 whole teams who manage the clusters themselves and another 5 teams who maintain tooling that integrates into those clusters.
It's crazy to me how far companies who don't need Kubernetes get into the ecosystem before realizing that they're spending more time dealing with Kubernetes than their product. Like, I get it, buzzword-driven-development, but wow.
Comfortable_Gap1656@reddit
I think Kubernetes is just assumed to be the only way to do modern containerization. They don't realize that Podman and Docker exist. If you don't need the features provided by Kubernetes there is not much point in using it. You can move your docker containers to a different host by manually kicking off some automation.
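[Editor's note: a sketch of the "manually kick off some automation" approach mentioned above, for moving a plain Docker container to another host without an orchestrator. Image, container, and host names are hypothetical.]

```shell
# "Manual failover": ship the image to a second host and start it there.
# Works without a registry; for registry-based setups, pull instead of save/load.
docker save myapp:1.0 | ssh ops@host2.example.com docker load
ssh ops@host2.example.com \
  docker run -d --restart unless-stopped --name myapp -p 8080:8080 myapp:1.0
```

Wrap those two commands in a script and you have the poor man's rescheduler; the point in the comment above is that for many shops this is genuinely enough.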
donjulioanejo@reddit
Podman and docker don't really do distributed computing well.
You can deploy easily enough on a single machine. But you can't exactly handle keeping a fleet of pods running at the same time without building a decent chunk of automation around it.
At which point, you've put in almost as much work as just deploying managed Kube like EKS or GKE.
GauntletWizard@reddit
Podman and Docker don't even do "restarting when something fails" well, primarily because Kubernetes is so overkill but also so good at it that nobody bothers to invest.
RichardJimmy48@reddit
What part of distributed computing does docker swarm not do well?
soundtom@reddit
Speaking as someone who joined a team at $PREV_JOB right after they migrated from Docker Swarm to Kubernetes, Swarm runs into performance issues well before Kubernetes does when scaling up. Granted, that was for mid- to large-scale data processing+storage, so maybe other usecases work better under Swarm
ztherion@reddit
I think the critique is of when the company doesn't actually require distributed systems to achieve its business objectives.
I've worked at successful companies where the only thing that needed multiple replicas was the frontend, and even then mostly for zero-downtime deployments during daytime.
Cjacoby75@reddit
Job security. IBM product?
nelsonbestcateu@reddit
Can somone explain to me what kubernetes actually does? All I see is vague terminology.
spokale@reddit
I think it would be easiest to understand if you consider how things might work without kubernetes.
Let's say you have an ASP.NET program. How could you deploy and manage this?
You can do this, but there are particular pain-points especially related to how you route requests, scale, instantiate instances, etc. You can get the job done but there are many ways of doing each thing so your processes tend to be bespoke and reliant on human processes and documentation.
With kubernetes, by contrast:
You can more or less accomplish whatever kubernetes does in some other way, using conf management tools and other kinds of automation, but kubernetes standardizes the toolset and presents a single management pane for it all.
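[Editor's note: a sketch of the "standardized toolset" point above. One Deployment manifest replaces the bespoke scripts for instantiating, scaling, and restarting instances; the names and image are hypothetical.]

```yaml
# A Deployment declares the desired state; Kubernetes keeps reality matching it.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: aspnet-app
spec:
  replicas: 3                        # scaling is a one-line change
  selector:
    matchLabels: { app: aspnet-app }
  template:
    metadata:
      labels: { app: aspnet-app }
    spec:
      containers:
        - name: web
          image: registry.example.com/aspnet-app:1.0
          ports: [{ containerPort: 8080 }]
```

If a node dies or a container crashes, the controller recreates pods to get back to 3 replicas - that restart-and-reschedule loop is the part that otherwise ends up as bespoke scripts and runbooks.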
Captainjim17@reddit
Just think of it as a different operating system, like Linux or Ubuntu. It's kind of custom-made to support applications running in a cloud environment. So rather than carrying a bunch of code to act like a VM, it only carries what it needs and is very lightweight. It also does containerization, which essentially means it carves up compute power across one or many physical servers. So if someone were to blow one up, another would jump in and pick up the load without anyone ever really knowing. Similarly, it will only take what it needs, so in cloud environments that bill on consumption it can be much cheaper, as the clusters scale down in low-use times.
There's also a ton of benefits to being able to deploy software faster and monitor the resources... But yeah it's just a different flavor of OS essentially.
LeiterHaus@reddit
Kubernetes is a powerful tool for managing complex applications across multiple servers.
nelsonbestcateu@reddit
Lol
Mammoth-Writer7626@reddit
Moving to k8s without knowing the app, nor the steps needed to break up the monolith, is a big fail
vantasmer@reddit
Though it's not designed for monolithic apps, you can still leverage Kubernetes for some things to make development less painful. What caused such a mess?
Ill_Dragonfly2422@reddit (OP)
Devs don't know how to build docker images
808estate@reddit
is that something you could help them with?
Ill_Dragonfly2422@reddit (OP)
They know they need to do it, but just don't for some reason. I also manage an HPC for our Bioinformaticians. I'm stretched extremely thin.
vantasmer@reddit
Wait, so are they just hand-jamming everything into a writeable pod? If there's too much friction in their deployment process then the devs will always find janky ways to do things
Ill_Dragonfly2422@reddit (OP)
Yes. I have to exec into our pods to install python packages. It's insanity. The devs know it. I know it. Management knows it. Yet it continues.
vantasmer@reddit
oooof yeah that's pretty rough. Sounds like you need to automate the image building process
Ill_Dragonfly2422@reddit (OP)
Devs can't agree on how to build the image manually first
vantasmer@reddit
So is this a kubernetes issue or a management issue?
Ill_Dragonfly2422@reddit (OP)
It's both
__I_use_arch_btw__@reddit
It’s not both. You shouldn’t even be able to run updates on pods. It sounds like you and them are not managing this correctly. I understand this is a rant but this could be made better and easier with a little bit of time from you.
Ill_Dragonfly2422@reddit (OP)
Okay. What do you reasonably expect me to do?
__I_use_arch_btw__@reddit
I didn't intend for this to come off as an attack, even though it did. Without knowing exactly what is happening there and what you are doing, I can't answer. But if you want to make it easier on yourself, get a pipeline built. Get your Docker containers corrected and use base images so you can update those and automate the deployments. At least this way you aren't on the hook for everything.
IamHydrogenMike@reddit
No, it’s not both…it’s only a management issue. Kubernetes has nothing to do with the problem and the problem is management. If everyone knows a problem exists, including management, then it’s a management issue and everything else is just an excuse.
TMS-Mandragola@reddit
But blaming the technology decision is so much easier than blaming the technical management!
vantasmer@reddit
^ Exactly what I was getting at. Generally speaking, the technology is not the issue. Unless it's Oracle.
adappergentlefolk@reddit
it’s definitely a management and people issue lol
arcimbo1do@reddit
If I were an SRE there i would set up a ci/cd pipeline and simply prevent anyone else from doing anything other than following the standard procedure.
__I_use_arch_btw__@reddit
Nah, something is off. He shouldn't be able to run updates in containers unless they're running as root. And even then, why would you update them in the container instead of just updating the image?
Skylis@reddit
I'm sorry but this is hilarious.
thabc@reddit
It takes less time to write a Dockerfile than it does to exec into a pod and run the same commands. Just do the needful.
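For context, the Dockerfile in question really is small. A hedged sketch, assuming a Python app with a `requirements.txt` (names hypothetical):

```dockerfile
# Pin a base image so updates happen by rebuilding, not by exec-ing into pods.
FROM python:3.12-slim

WORKDIR /app

# Bake the Python packages into the image instead of installing them at runtime.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .
CMD ["python", "main.py"]
```

That's the whole file; every package installed by hand inside a pod is one line here instead.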
donjulioanejo@reddit
WTF bro
TheFluffiestRedditor@reddit
I'll take you a step further, hand deploying Helm charts to Azure kubernetes, when the DevOps stack is right there.
flummox1234@reddit
as a developer... wut? 😱
__I_use_arch_btw__@reddit
Thats like the least tough thing about using k8s. Who decided they would use it if they dont understand that part.
IneptusMechanicus@reddit
Yeah, you can absolutely pipeline that. The dockerfiles can be a problem if your devs absolutely don't know how they work, but if you can get them to do basic docker stuff you can deploy quickly, particularly if you have something relatively well coupled like AKS/ACR/ADO
donjulioanejo@reddit
Yep. Just because it's a monolith doesn't mean the app can't have multiple types of pods (i.e. backend, frontend, async workers), or that it can't benefit from horizontal scaling and resiliency features that are baked into Kubernetes.
It also makes deploys significantly easier.
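A sketch of what that looks like: the same monolithic image run as two Deployments with different roles (image name and args are assumptions, not from the thread):

```yaml
# One monolithic artifact, two pod types, independent replica counts.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: monolith-web
spec:
  replicas: 3                     # horizontal scaling of the same artifact
  selector:
    matchLabels: {app: monolith, role: web}
  template:
    metadata:
      labels: {app: monolith, role: web}
    spec:
      containers:
        - name: web
          image: registry.example.com/monolith:1.0
          args: ["serve-http"]
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: monolith-worker
spec:
  replicas: 2
  selector:
    matchLabels: {app: monolith, role: worker}
  template:
    metadata:
      labels: {app: monolith, role: worker}
    spec:
      containers:
        - name: worker
          image: registry.example.com/monolith:1.0
          args: ["run-workers"]
```

Same codebase, same image; only the entrypoint arguments and scaling policy differ.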
RichardJimmy48@reddit
You're really underestimating how monolithic actual monoliths can be. There's an awful lot of applications out there in the world that are a single WAR file deployed to a single JBoss server. Backend, frontend, and background processing all running out of a single artifact with stateful sessions and no session replication.
IN-DI-SKU-TA-BELT@reddit
That's how we use it, it's very easy to boot things up, and scale it up and down, but it depends on your application and your traffic patterns.
Miserygut@reddit
People.
Ikhaatrauwekaas@reddit
People, what a bunch of bastards
rjchau@reddit
FTFY
superspeck@reddit
People were a mistake. If there weren’t people, people wouldn’t have taught sand to think. That was also a huge mistake.
Miserygut@reddit
In the beginning the Universe was created. This had made many people very angry and has been widely regarded as a bad move.
Fibbs@reddit
Exactly the kind of project that needs kubernetes
SirDeadHerring@reddit
Pratchett?
Miserygut@reddit
Douglas Adams! (Hitchhiker's Guide To The Galaxy)
Marathon2021@reddit
Oooo, I'm giving a presentation on GenAI. Tomorrow. In the middle-east ... what a perfect analogy!
WriterCommercial6485@reddit
The Devs don't know how to build docker images
jaw1040@reddit
So many people who read but don't actually do are making decisions without knowing what they're doing. I have fought many groups who put containers on virtual hosts while touting how efficient it was to host single-threaded containers. Took 1.5 years to rip and replace over 3 dozen.
Ended up saving the group about $400k per year.
Mysterious-Tiger-973@reddit
You actually can save costs by fast-tracking the migration to kube and splitting up the monolith later on. Also, you can quickly cut off the dragon's head and force people to obtain skills early. It's going to be a hard start, but it was the same moving from bare metal to VMs, and now from VMs to containers. The next step is functions and data, but that will also work off of kube. Don't look down on it so much; I know the start is a struggle, but eventually you will be miles ahead and support hours will shrink. I'm running 28 clusters with god knows how many applications, some of them dumb monoliths too, but the devs are working on them and everything gets better. I do this at 50% load.
eulynn34@reddit
Lol, someone heard a buzzword and went "we gotta have that"
fatbergsghost@reddit
I still don't really understand what K8s is supposed to be for. Or Ansible.
ivebeenfelt@reddit
Micro. Services.
TruthYouWontLike@reddit
You want K8s? Sure, let's start with a single docker and then scale as needed.
Bubbagump210@reddit
Can’t the app live outside K8s if it’s monolithic? Or did they do a bunch of plumbing in K8s that is hard to tear out?
samtheredditman@reddit
Honestly I don't understand the hate for k8s. It's basically software that makes automating huge amounts of your infrastructure very simple. As someone who used to do all this work manually, I love k8s.
I don't get why anyone would prefer to not use it if they know how to use it. I think the hate comes from people not wanting to learn something new (that is actually relatively simple).
wpm@reddit
I don't hate k8s but I do hate any "declarative" framework that doesn't actually explain any of the underlying procedural process in a clear way, or document clearly what shit I should be putting in my goddamn yaml files. It's always "here, just copy-paste this huge file and run kubectl apply". Well, why? What the fuck is this thing doing? What can I change? What can't I? If I change this field, what else will break? It's all just a maddening pile of "try and see".
terryducks@reddit
Back in my younger years, the team lead dropped JCL on my desk. "Don't fuck with it, just use this." Which led to almost the exact same questions.
nullbyte420@reddit
No it's not lmao just read the documentation?
Ohhnoes@reddit
There might be 50 people on the entire planet that fully understand how k8s works. It's a bloated over-engineered mess that unfortunately we're stuck with now.
nullbyte420@reddit
You could say the same about Linux and it's very nice to work with too
occasional_cynic@reddit
Yeah, Docker compose files are far more human readable.
If you have some authority - Docker Swarm is far easier to stand up, and is still supported. It lost the war with K8s.io for market share, but it still works well.
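For comparison, a compose file for a typical small stack really is short and readable (service names and images are hypothetical):

```yaml
# docker-compose.yml sketch: app plus database, restart-on-failure.
services:
  app:
    image: registry.example.com/app:1.0
    ports:
      - "8080:8080"
    depends_on:
      - db
    restart: unless-stopped
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example   # use a real secrets mechanism in production
    volumes:
      - dbdata:/var/lib/postgresql/data
volumes:
  dbdata:
```

The same file, with a `deploy:` section added, also works for `docker stack deploy` on Swarm.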
Ohhnoes@reddit
To this day I'm salty Swarm lost the war.
nullbyte420@reddit
Yeah it's really great. I don't get it either. I'd rather not set up high availability and self healing and secrets with docker and ansible...
FarmboyJustice@reddit
I think the hate more likely comes from people being handed a tool and told to use it by management who watched a powerpoint presentation at a conference.
Ill_Dragonfly2422@reddit (OP)
You just answered your own question.
iamvictorjones@reddit
Funny story: I worked for a company that did the same thing. Key word "worked". I saw the writing on the wall and quit.
Fatality@reddit
Should've deployed AKS/EKS to make it more manageable then used ArgoCD for the deployment.
perthguppy@reddit
I have noticed a pattern recently where application vendors have dropped all docker/container support and instead just say deploy on kubernetes. Which isn’t always a great move.
Affectionate-Cat-975@reddit
You don’t work for anchor do you?
garaks_tailor@reddit
On the plus side, I'm basically irreplaceable because nobody other than me understands this Frankenstein monstrosity.
Gotcha: always make sure to drive complexity and bad decisions, as long as you are the linchpin.
1ishooter@reddit
It's a two-edged sword. Sometimes staying on, despite trying to explain and fix a migrated solution that obviously isn't working out because an older senior IT manager recommended it to a non-IT-knowledgeable board, can put you squarely in the implementation hot seat. In the blame game, the newer guy will always lose out. You do not want to be irreplaceable at the breaking point, when shit has hit the ceiling... just saying.
Loan-Pickle@reddit
A shitty Kubernetes implementation will drive you mad literally. I quit my job last year without another one because I had a mental breakdown driven mostly by our piece of shit Kubernetes deployments. That’s right we didn’t just have 1 shitty cluster. We had 6. It took nearly a year before I was willing to look at a computer again.
lightmatter501@reddit
Does the application do HA? It might have asked for kubernetes because kubernetes will generally stop people from putting all 3 instances on the same host.
Zenie@reddit
This is basically anywhere I've ever been. There's always some overbloated pos software that is hanging on for dear life and like 1 person left who only somewhat manages it and never documents shit.
shmehh123@reddit
Yep, we have a single DBA dude who understands the payroll software we use to run payroll for clients. The thing runs on ancient SQL servers, and our internal users still access it using some crazy customized version of Access 97, .mde or whatever, that we licensed from some company 25 years ago.
I never want to understand a single thing about it. I just make sure we have backups of his SQL VMs and spin up test environments for him when he asks.
tactiphile@reddit
I see so many companies that got sucked into the buzzword with no concept of CI/CD and try to run it like all their other shit with quarterly patch cycles on bespoke nodes still running v1.16 in 2024. Gah.
oldfinnn@reddit
Job security! lol
ToastedChief@reddit
Graveyards are filled with irreplaceable employees
edthesmokebeard@reddit
You should not have pushed back harder.
Let them wallow in their own fail. They don't give a shit about you - why do you care about them?
wasabiiii@reddit
I kind of disagree with this. I use K8s for similar things. Orchestration provides more benefits then just management of individual containers. Resiliency, monitoring, and programmatic deployment. Not to mention a path to start breaking the app apart.
Ill_Dragonfly2422@reddit (OP)
I assure you, we are getting none of the benefits.
Barnesdale@reddit
But at least you're not tightly coupled and locked in to one cloud provider, right?
wasabiiii@reddit
Well the part you stressed in all caps was that it was monolithic. Not getting the benefits is a different issue.
AGsec@reddit
So that's an interesting concept to me... my understanding was that monolithic was a big no no. am I to understand that it's not the boogeyman I've been led to believe, or that it is still less than preferable, but a separate issue than lack of benefits?
jonboy345@reddit
A bit of an extreme example, but Amazon Video saved 90% by moving from microservices to a monolith:
https://web.archive.org/web/20230323220106/https://www.primevideotech.com/video-streaming/scaling-up-the-prime-video-audio-video-monitoring-service-and-reducing-costs-by-90
Tetha@reddit
Operationally, you have different issues. Some approaches work better for a small infrastructure, and some work better for a big one.
Monoliths are easier to run and monitor. A friend of mine worked at a company whose sales stuff was just a big Java monolith. Deployments are simple: just sling a jar file onto 5 servers. Monitoring is simple: you just have 5 VMs, or later on 5 metal servers, with this Java monolith on them, so you can easily look at its resource requirements. You have 5 logs to look at.
If I was to bootstrap a startup with minimal infrastructure, just dumping some monolithic code base onto 2-3 VMs with a database behind it would be my choice. This can easily scale to very high throughput with little effort.
However, this tends to be slow on the feature development side. Sure, you can make it fast, but in practice, it tends to be slow. Our larger and more established monolithic systems have release cycles of 6 weeks, 3 months, 6 months, 12 months, ... This makes updates and deployments exciting, and adds lead time to add features. And yes, I know you want to deploy early and often to minimize the number of changes to minimize impact and unknowns, but this is the way these teams have grown to work over the years.
The more modern, microservice based teams just fling code daily to production or weekly at most. Safely. The deal is if they cause a huge outage, we slow down. There was no huge outage yet. This allows these teams to move at crazy speeds. A consultant may be unhappy about some UX thing, and you can have it changed on test in 2 hours and changed in production at the end of the day. It's great and fun and makes many expensive developers very productive. That's good.
The drawback however is complexity at many layers.
Like, we need 20-30 VMs of the base infrastructure layer to run before the first application container runs in a new environment. That's a lot. That's basically the size of the production infrastructure we had 6-7 years ago. Except the infrastructure from 6-7 years ago ran 1 big monolith. This new thing runs like 10-15 products, 900 jobs, and some 4000-5000 containers.
This changes so many things. 1 failed request doesn't go into 2 logs - the LB and the monolith. It goes through like 8 different systems and somewhat fails at the end, or in the middle, or in between? So you need good monitoring. You have thousands of binaries running in many versions, so you need to start security scanning everything because there is no other way. Capacity Planning is just different.
Smaller services allow the development teams to make a lot more impact, but it has a serious overhead attached to it.
FarmboyJustice@reddit
All claims that a given paradigm, architecture, or approach is "good" or "bad" are always wrong, without exception. Nothing is inherently good or bad, things are only good or bad in a given context. But our monkey brains like to categorize things into good and bad anyway, so people latch onto the word "good" and ignore the "for certain use cases" part.
axonxorz@reddit
There's no hard and fast answer, it really depends on the project scope.
Monoliths are nice and convenient, the entire codebase is (usually) there to peruse. But they're less convenient when they're tightly coupled (as is the easy temptation with monoliths) leading to more difficult maintainability.
Microservices are nice and convenient. You can trudge away making changes in your little silo; as long as you've met the spec, everything else is someone else's problem. Oh, and now you've introduced the requirement of orchestration, which is an ops concern, not typically a straight dev one. One major detriment to microservices is wheel-reinvention: the typical utils packages you might have are siloed (unless you've got someone managing the release of that library for your microservices to consume), so everyone makes their own.
CantankerousBusBoy@reddit
uhh... ill upvote both of you.
Ebony_Albino_Freak@reddit
I'll up vote all three of you.
FarmboyJustice@reddit
I'll upvote all four of you.
Apprehensive_Low3600@reddit
You don't need kube for any of those things.
justinDavidow@reddit
You're right; you don't need kube. But it's much easier to find people who understand enough k8s today than people who actually understand how shit works.
The controller-driven manifest-in-api approach is powerful; it creates fundamentally self-documenting infrastructure that solves a LOT of problems common in the industry.
k8s is rarely the BEST solution to any problem, but it's absolutely one of the most flexible solutions and can fit well (if well designed and used!) in nearly any situation.
Comfortable_Gap1656@reddit
docker compose can have the same benefits if you don't need a cluster. If you are running your VM on a platform that has redundancy already it isn't a big deal.
justinDavidow@reddit
Docker is paid software; if you're into paying them for licenses: cool.
The application being deployed is a small component of the environment.
Want to pass secrets managed by a different team (or a distributed team?)
Need an external database that someone else is in charge of?
DNS records that point to the application?
Load balancer; configuration; monitoring; service endpoints; etc: There's a lot more to an application than just the container(s) themselves.
Critical-Explorer179@reddit
Docker engine is not paid. Only the GUI for Windows/Mac, i.e. the Docker Desktop.
FarmboyJustice@reddit
There is an important qualifier that must be added to this claim: "When used correctly..."
justinDavidow@reddit
I disagree.
K8s; even if used "incorrectly" can still really benefit a business.
It's much easier to hire a consultant today who can, looking at a k8s-running workload, work with the business to determine what their actual needs are and how they want to improve things.
When hiring a consultant to come in and, say, add functionality to QuickBooks on a single small-business server, I tend to find that businesses have a very hard time articulating what they even want done in the first place.
Bad common tech, in the business world, usually wins out over amazing-but-rare tech.
I don't like it; but that's how it is. I just work with it. ;)
FarmboyJustice@reddit
When I said "use correctly" I meant using it in an environment that actually justifies that use and doing so properly. I am not talking about a less-than-optimal environment. I'm talking about convincing some SBO they "need" to set up a cluster in order to host their Wordpress site, or other equally idiotic nonsense.
Apprehensive_Low3600@reddit
It solves problems by adding complexity though. Whether or not that tradeoff is worthwhile is determined by a few factors but ultimately it boils down to business needs. Trying to shove k8s in as a solution where a less sophisticated solution would work just fine rarely ends well in my experience.
superspeck@reddit
The shitty thing is that those of us who do understand how shit works, and have been maintaining all kinds of wild shit for decades, can’t get jobs right now because we don’t have 10+ years of k8s.
justinDavidow@reddit
I call this the coal miners fallacy.
"Sucks that people don't need coal anymore; that's what I know how to mine really good".
There's nothing stopping you from learning it; hell, there are resources available to help! https://kubernetes.io/docs/home/
K8S isn't all that hard to learn; it's hard to master.
MOST businesses need people to get shit done; not to master the ins and outs. Apply places that will help you grow into those skills while you can provide what you do know to them.
Best of luck!
superspeck@reddit
I run k8s at home. It's not "I don't know it" or that I haven't set it up or that I can't run it. Not having pro k8s on the resume gets me rejected early.
sexybobo@reddit
You just said the reason people use k8s is that it's easy to find people who know how to use it, not because it's the best tool. Then you replied to someone saying they can do it better by telling them to learn k8s, even though it's not the best tool.
You are really the person who has a hammer and thinks everything is a nail.
IamHydrogenMike@reddit
Would take a few days to teach their devs how to build their containers and to deploy it properly. All of this is a management issue…
SensitiveFirefly@reddit
This. This. This.
justinDavidow@reddit
Right?
Honestly, k8s mandates a significant portion of configuration management. Add version control to manifests and BOOM, you suddenly have the ability to roll infrastructure backwards and forwards to any point.
Want to describe your entire DNS infrastructure in code? Cool! Need an externally provisioned resource on a cloud provider? There's a controller for that! Want to boot up a grid of x86 servers from a k8s control plane and register work onto them with minimal setup? (Probably going to need a custom controller, but awesome!)
posixUncompliant@reddit
If you don't understand how shit works, k8s isn't going to help you. You need to get the low level stuff to be able to leverage the higher level stuff. I can't count the number of times a poor understanding of storage led to really stupid k8s setups.
justinDavidow@reddit
And yet; those businesses usually continue along doing just fine.
Shit doesn't need to be perfect to be useful (and profitable!)
Don't get me wrong: K8s has a steep learning curve and you're not wrong: it's NOT the be-all-end-all solution. Hell; it's a BAD solution in MANY cases.
but for MANY orgs; k8s means the ability to speak a common enough "language" to really get shit done.
Can it be done better? Even the best solution in the world can be done better. Is it good enough for many use cases? yep.
IneptusMechanicus@reddit
It's also great when you find you're using a lot of PaaS web app thingies, deploying those components to a properly sized cluster can often represent a decent cost saving.
jake04-20@reddit
Off topic, but I had to look up what K8S was, and I had no idea it was semi-standard to count the characters between the first and last letter of a word, omit them, and replace them with the number of characters omitted. I'm going to start doing that for words I don't like spelling. Like infrastructure will be i12e. Well, maybe that's a bad example because I already just say infra. But you get the idea.
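The scheme described above is easy to sketch; a toy function, just to illustrate the pattern:

```python
# Numeronym: first letter + count of omitted middle letters + last letter.
def numeronym(word: str) -> str:
    if len(word) <= 3:
        return word  # too short to be worth abbreviating
    return f"{word[0]}{len(word) - 2}{word[-1]}"

print(numeronym("kubernetes"))      # k8s
print(numeronym("infrastructure"))  # i12e
```

The same rule produces the classics i18n (internationalization) and l10n (localization).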
Perkeie@reddit
https://en.wikipedia.org/wiki/Numeronym
timallen445@reddit
this is the guy OPs management talked to
Ill_Dragonfly2422@reddit (OP)
XD
thefpspower@reddit
IMO kubernetes should only exist if you need your application to scale on demand or you want it to be fast to recompile and deploy, neither of those happen with a monolithic application.
wasabiiii@reddit
This isn't true. A monolithic application is defined as being a single large executable of some fashion. That doesn't rule out scaling or quick deployment of that single executable.
posixUncompliant@reddit
k8s is a way for people to pretend that understanding the underlying systems doesn't matter.
Like so many other things, it's an attempt to cover over the hard part of complex systems. For the most part it's really useful.
But, it's still something that reduces performance, and introduces its own complexity on top of already complex structures. It's not a panacea.
And in the end, hiding the complexity of infrastructure is damaging, because it lessens the expertise of the community in dealing with infrastructure.
wasabiiii@reddit
This type of argument reduces to absurdity. Every piece of infrastructure hides some other complex system underneath.
AGsec@reddit
Disregard my previous question... this comment summed it up perfectly.
thefpspower@reddit
To me monolithic usually indicates the application has every resource in 1 spot, so maybe resource files, database, the code itself and other dependencies.
You'll find it very hard to scale if the resources are all pooled in 1 spot, unless your application is read-only which would be a niche use-case.
If it was just a BIG application but it can operate independently when you spawn many of them then yeah it's scalable but I don't get those vibes from this post.
hornetmadness79@reddit
The least hair-pulling maneuver would be to stand up a single large node and use taints/tolerations to put the monolith on that single(?) node. You can still reap the benefits of kubernetes by migrating some of your aging systems onto normal auto-scaled nodes.
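Roughly what that looks like (node, label, and image names are made up for the sketch):

```yaml
# First taint the big node so nothing else schedules there:
#   kubectl taint nodes big-node-1 dedicated=monolith:NoSchedule
# Then give the monolith's pods the matching toleration and pin them to it.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: monolith
spec:
  replicas: 1
  selector:
    matchLabels: {app: monolith}
  template:
    metadata:
      labels: {app: monolith}
    spec:
      nodeSelector:
        kubernetes.io/hostname: big-node-1
      tolerations:
        - key: dedicated
          operator: Equal
          value: monolith
          effect: NoSchedule
      containers:
        - name: monolith
          image: registry.example.com/monolith:1.0
```

The taint keeps everything else off the big node; the toleration plus nodeSelector keeps the monolith on it.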
punklinux@reddit
A previous client of ours did something similar, but they got some devops hotshot who wanted EVERYTHING on kubernetes. Some applications might do well there, but others did not. Some software even specified "this is not supported or recommended as a containerized solution." They once had the 5 nines for uptime, and it dropped sharply because not only could their own team not figure out what was going wrong, they couldn't get vendor support because the app didn't work on docker. But the hotshot said, "Oh, that's just a suggestion. Look, this github account did it with some kludges and we're doing what he did." And the github page hadn't been updated in years. So when we were hired, we were asked to work with this guy, but he didn't like the fact we told him to take some of this crap off docker/k8s and run it standalone like it was before he worked there.
Eventually, my boss said we would no longer be supporting them as a client if they didn't do anything we told them to do to fix it. And the company management hemmed and hawed, because their hotshot had their ear, "these consultants don't know ANYTHING." Okay, then, good luck. My boss ended the contract.
That company has been out of business now for a few years.
punkwalrus@reddit
I was in a shop like that. Yeah, it was strange because my boss wanted orchestration on stuff that was working fine. "Everything is working. Why add this complex layer?" "Because nothing we have scales!" "Do we NEED a git server to scale like that?" Half the time, our git server was down, out of sync, or being restored from backup. His team of sysadmins and programmers couldn't keep it running: git nodes were constantly timing out, the IaaS systems were sluggish, and just like you said, he'd apply k8s where the application developer (like Atlassian) didn't support it (at least at the time). But he found some Danish guy with a github account who had a proof of concept, so we had to follow that.
Just because you CAN doesn't always mean you SHOULD.
And don't get me started with how he shoehorned terraform.
oldvetmsg@reddit
Sorry, I read POS with the old Army meaning... Dude must be mad...
Dude most be mad...
deltashmelta@reddit
"Put the big data into the cloud, with the AIs."
stiny861@reddit
Does the software start with a C and is 3 letters long?
supercamlabs@reddit
Somebody really had to open the Pandora's box that is kubernetes...
Avocado_Infinite@reddit
My new contract is trying to do the same thing cuz modernity. They want to turn a perfectly running app into microservices and host it on EKS. I'm dreading the day.
SikhGamer@reddit
No one I ever meet in the SWE world has ever needed to use k8s. It's all want and shiny new technology. It's funny to watch them firefight.
Meanwhile I'm over here with my https://boringtechnology.club badge replying to reddit threads.
Bordone69@reddit
Is that you?
UninvestedCuriosity@reddit
lol
We are just slowly working on moving SOME LXCs into Docker right now. Next we'll see how far we can get into Docker Swarm where it makes sense, but the more I work with Docker Swarm, the less sure I am that it's even worth the additional complexity for most things.
It's not like this place is going to start 10x'ing over night or in the future. We just need things to be easy to update and backup lol.
Comfortable_Gap1656@reddit
Set up some automation to deploy your app to any VM with Ansible. Docker compose works well with some Ansible playbooks. If you need to move your container to a different host just manually trigger it.
UninvestedCuriosity@reddit
We were using Ansible with a web frontend and playbooks for a while, but I don't know if it was just the period of time I was using it or what: it felt like my playbooks constantly needed to be changed and fixed on every update. It was enough of a time suck that I said to hell with it. Maybe things have calmed down and I just came in at a bad time?
fresh-dork@reddit
i mean, if they were going to fragment the app into a reasonable number of services as step 2, i could see that, but i'd want to see the plan up front
intoned@reddit
Sounds like the senior guys were wanting to improve their linked in profiles.
eat-the-cookiez@reddit
Seen this far too often as a cloud engineer. Thankfully not at an msp any more
glisteningoxygen@reddit
I run K8 behind my enormous ERP system, works great, 10/10, no notes
yamsyamsya@reddit
k8s is great when you build your application for it. trying to move an existing application over to it is a huge pain in the ass.
Ill_Dragonfly2422@reddit (OP)
No, I barely make over $100k
Sushi-And-The-Beast@reddit
Yeah eff that…
Big_Comparison2849@reddit
ServiceNow was the main bane of my existence for a year or so before I left my last role, but Kubernetes and Jira were close behind. They couldn't seem to settle on just one CRM system, so they just used all of them.
jhaand@reddit
Just remove k8s in the coming months and nobody will notice.
mini_market@reddit
C’est la k8s vie.
cmack@reddit
But you should be so happy that you did it modernly correct instead of the way we did it in the wild, wild, west....huk, huk
salty-sheep-bah@reddit
Oh neat, the last place I worked for did this and now they're shutting their doors at the end of the month.
That wasn't all the Kubernetes projects' fault but rather a long run of misinformed leadership decisions like that project.
Ill_Dragonfly2422@reddit (OP)
Yup, looks like I'll be going down with the ship. Just a tragic history of impotent management
matt95110@reddit
I’ll never forget when one of the head developers ran out of ideas in a meeting a said we should “migrate our app written in the late 90s to Docker.”
Sit down and shut the fuck up.
Comfortable_Gap1656@reddit
It will be great! You can run Windows software in Wine right?
matt95110@reddit
There was a lot of VB6 code in that app, so Wine would probably work well with it.
maniac365@reddit
I still really dont understand kubernetes
Comfortable_Gap1656@reddit
Does anyone?
RichardJimmy48@reddit
Whenever you bring up the idea that Kubernetes might be bringing unwarranted complexity to your workload, you get all of these Kubermensch telling you that it's solving problems you don't know you have yet.
I'm still waiting to find out about those problems 6 years later....
Comfortable_Gap1656@reddit
It is good that 90% or more of your problems come from Kubernetes
lazydavez@reddit
Docker compose up -d
Comfortable_Gap1656@reddit
I wouldn't even do that. Use Ansible to connect to the host and deploy the docker compose file. It's much cleaner and more reliable. Plus you can set up your automation to be able to blow away and recreate your VM in case of massive failure.
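A hedged sketch of that approach, assuming the `community.docker` collection is installed and the paths/host group are placeholders:

```yaml
# Hypothetical playbook: push the compose file to the host and (re)start it.
- hosts: app_servers
  become: true
  tasks:
    - name: Copy the compose file to the host
      ansible.builtin.copy:
        src: docker-compose.yml
        dest: /opt/app/docker-compose.yml

    - name: Bring the stack up (pulls images, recreates changed services)
      community.docker.docker_compose_v2:
        project_src: /opt/app
        state: present
```

Re-running the play is idempotent, and pointing the inventory at a freshly rebuilt VM redeploys everything from scratch.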
Comfortable_Gap1656@reddit
Don't tell them about Docker compose. It might blow their minds that they probably don't need k8s.
dRaidon@reddit
Lol, do we work at the same place?
CeldonShooper@reddit
Reminds me of the company that had the policy from management to "docker everything". We had a lambda in AWS and they wanted to docker that lambda. I said "what's the purpose of this?" and they couldn't tell me.
cybersplice@reddit
Deploy a Harvester cluster, run it in a kubernetes VM and don't tell him the VM part
AHrubik@reddit
Could be worse. They could be trying to solve years (decades even) of tech debt caused by cost cutting to increase shareholder value by insisting the "cloud" will solve all the problems.
Seven-Prime@reddit
Find the seams in the monolith. Rip them open. You got this.
flummox1234@reddit
As a programmer, I think of this type of situation every time someone tries to sell me on the virtues of Kubernetes as a better concurrency alternative to Erlang's BEAM. 🤣 Good luck with that one, buddy. Meanwhile, all I have to do to get concurrency in Elixir is use Task.async/1. I don't have to bootstrap and figure out how to deploy an entire kube-based setup. If I want it to be distributed, all I have to do is connect my nodes and execute an RPC. Kubernetes, IME, tends to be an "I have a hammer, so what screws can I hammer?" type of situation. 🤔 There is a time and a place for everything, and everything in its proper time and place.
FrankVanRad@reddit
I am in the same terrible boat. This gets shared with every new person that asks "Why is it Kubernetes?"
https://youtu.be/cfTIjuW6SWM
FrankVanRad@reddit
I am in the same terrible boat. This gets shared with every new person that says, "Why is this Kubernetes?"
https://makeagif.com/i/RwsqNg
dts-five@reddit
SAS Viya?
Chaseshaw@reddit
Oh I've seen this before. They're looking to sell the company because the numbers are bad. "homebuilt monolith" roughly translates to "duct-taped BS" in the boardroom unless you can back it up by something like "Google Architect Consultant Homebuilt Monolith."
Two things:
This is just the beginning. Expect Snowflake or SalesForce or Azure/AWS or who knows what again in a month.
Your business is failing. If your resume is not up to date, now's your chance to get ahead of things. Better to prepare to jump when you smell the winds change than to wait for the ship to capsize and sink.
Ill_Dragonfly2422@reddit (OP)
Solid advice
unethicalposter@reddit
I'll take regular servers or VMs over k8s any day. I only end up using k8s when a customer claims they have to have k8s.
ryebread157@reddit
The answer with technology is "it depends". To say Kubernetes is not needed is a bit obtuse. For many orgs, it enhances productivity significantly, for others (often smaller orgs) it's not a good fit.
FarmboyJustice@reddit
This is ALWAYS the correct answer, but never the one you're going to get from evangelists and salesmen.
Burgergold@reddit
Maximo?
Ill_Dragonfly2422@reddit (OP)
LMAO, close!
Burgergold@reddit
Which one then?
Ill_Dragonfly2422@reddit (OP)
I don't want to Dox myself
Burgergold@reddit
Private msg me then, I'm curious
Ill_Dragonfly2422@reddit (OP)
Reddit won't let me message you. Probably because my account is too new
supershinythings@reddit
HAHAHAHAHAHA
That's the job I quit - developing automation for Kubernetes. The guy writing the controllers didn't test his code so those of us integrating were constantly getting sabotaged.
I quit, and now I don't care! BUHAHAHAHAHAHA!
IMHO it's TOO extensible - too many ways to paint yourself into a corner and fail to document. It's difficult to get logs because nobody wants to instrument, say, ElasticSearch or OpenSearch so you can actually debug a problem that could be happening on any one of a dozen controller hosts. And there's always some sanctimonious architect-type who claims everything is "soooo easy" but doesn't document how he debugs so you're funneled into his little superiority hellhole. Nor will he tell you what he did - he'll look at it, tell you what happened, and walk away without explaining what broke. If you're lucky you MIGHT see a changeset fixing it in a few days, which will of course tell you WTF happened. By then you've moved on to a completely different set of clusterfucks.
Great-Ad-1975@reddit
Sales and marketing get to say the tech team runs Kubernetes to let people's imaginations run wild, implementors get to add Kubernetes to their resumes, and maintainers get to learn Kubernetes when things break. Maybe it's still all the same untouched Java application on a Windows NT virtual machine, but now the virtual machine starts through Kubernetes, and by resolving the Jira ticket for Kubernetes enablement, sales engineering earned a 3-topping pizza at the company end-of-year party 🤙
St_Sally_Struthers@reddit
I could swear there was a Dilbert comic about this...
CryostaticLT@reddit
We set up Docker for one application. Now I use it when I need to try out something quick. Some apps stick, like Grafana; some die. Think of it like your tool shed.
akerro@reddit
It's all gonna be fine, and eventually you'll land on practices that just work 95% of the time. I'm at a company where we run 16 pretty large monoliths in Kubernetes. 15 of them are StatefulSets :) Keycloak is the average-sized one.
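For anyone who hasn't run a monolith this way: the pattern described above (a stateful app as a StatefulSet, so it keeps a stable network identity and its own persistent storage across restarts) looks roughly like this. Every name, image, and size here is hypothetical, purely for illustration:

```yaml
# Hypothetical sketch: one stateful monolith with stable identity + storage.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: monolith            # hypothetical name
spec:
  serviceName: monolith     # headless Service giving each pod stable DNS
  replicas: 1               # most monoliths can't scale horizontally anyway
  selector:
    matchLabels:
      app: monolith
  template:
    metadata:
      labels:
        app: monolith
    spec:
      containers:
        - name: app
          image: registry.example.com/monolith:1.0   # hypothetical image
          volumeMounts:
            - name: data
              mountPath: /var/lib/app
  volumeClaimTemplates:     # each replica gets its own PersistentVolumeClaim
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```

Whether this buys you anything over a plain VM is exactly the debate in this thread.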
Shedding@reddit
No one is irreplaceable. It just comes down to how much money they want to spend.
packetgeeknet@reddit
K8s is great if you have applications that were designed for it. Most applications are not.
ZantetsukenX@reddit
Probably a long shot, but anyone here have any experience setting up Kubernetes to work with CA Workload (Dseries) Batch Scheduler? We have a group at our work who wants us to look into it and Broadcom's support of it seems slightly questionable.
BCIT_Richard@reddit
Lol, I'm sorry to laugh at your pain but that's funny.