What was your biggest ideological shift, and what led you to it?
Posted by GolangLinuxGuru1979@reddit | ExperiencedDevs | View on Reddit | 243 comments
I've been in tech for 25 years, and at least 20 of those years I've been a dev. I will definitely say that early in my career as a Java dev, I fell for the "thought leader Kool-Aid". I would see all of these clever patterns, feel that designing with heavy abstractions was the way to go, and I judged maturity by patterns. It got to the point where if I saw code that looked too simple, I would ask:
"What if we got another domain"
"Yeah that works today, but what about the future"
"We definitely need an interface just in case"
And I was big on one thing: DRY. I thought DRY was the one undeniable design idea, and that as long as you adhered to it, you were probably going to be OK.
My big ideological shift was when I moved to Go. It was a struggle the first few years as I was like "where muh abstractions". But Go helped me build and architect systems by just looking at data. I picked up a very data-centric mindset. I stopped looking at objects and started thinking in terms of data and data transformation. I saw the beauty in minimalism, and stopped trying to future-proof my programs.
Now when I talk to younger engineers, I really try to jump in and tell them to solve the problem in front of them, and not to abstract for abstraction's sake. That sometimes DRY is a huge trap. And that patterns are useful, but often aren't needed before a codebase reaches maturity.
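To make the "solve the problem in front of you" point concrete, here's a minimal Go sketch (all names are hypothetical): the speculative just-in-case interface is left as a comment, and a plain concrete type does the actual job. The interface can always be extracted later if a second implementation ever shows up.

```go
package main

import "fmt"

// The "just in case" version would start with an interface and a single
// implementation:
//
//	type InvoiceStore interface{ Save(inv Invoice) }
//
// The simpler version just solves today's problem with a concrete type.

type Invoice struct {
	ID     string
	Amount int // cents
}

type InvoiceStore struct {
	byID map[string]Invoice
}

func NewInvoiceStore() *InvoiceStore {
	return &InvoiceStore{byID: make(map[string]Invoice)}
}

// Save stores an invoice, overwriting any previous one with the same ID.
func (s *InvoiceStore) Save(inv Invoice) {
	s.byID[inv.ID] = inv
}

// Total sums all stored invoice amounts in cents.
func (s *InvoiceStore) Total() int {
	total := 0
	for _, inv := range s.byID {
		total += inv.Amount
	}
	return total
}

func main() {
	s := NewInvoiceStore()
	s.Save(Invoice{ID: "a", Amount: 1250})
	s.Save(Invoice{ID: "b", Amount: 750})
	fmt.Println(s.Total())
}
```

Nothing is lost by starting this way; extracting an interface from a concrete type later is a mechanical refactor in Go, since interfaces are satisfied implicitly.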
What huge mindset shift did you have in your career? What was the catalyst for it? What shifted your mindset?
Marceltellaamo@reddit
One shift that hit me quite hard was moving from "engineering as problem solving" to "engineering as tradeoffs". Early on I used to think there was always a clean, correct solution if you just thought hard enough. Better abstraction, better design, more elegant code. After a few years working on real systems, I started noticing that most decisions weren’t about finding the best solution, but about choosing which problems you’re willing to live with. You can make something more scalable, but harder to maintain. More flexible, but harder to reason about. More optimized, but more fragile. At some point I stopped asking "what’s the best design here" and started asking "what kind of problems do I want to have in 6 months"... That shift made things both simpler and a bit more uncomfortable, because you realize you’re always choosing your future constraints.
EatMoreKaIe@reddit
I had this happen to me just last week. Code is now disposable and commoditized, and maintainability is getting more and more irrelevant when you can just easily replace it.
Up to this point, I've been adamant that LLM-based development will never be what the CEO class has been shoving down our throats for the past few years. But then last week I tried an experiment: I took a medium-sized product that, 4 years ago, took a team of 6 developers half a year to create and asked Claude to rewrite it while at the same time fixing some fundamental architectural problems that were inherent in the original product that were always too big for us to fix.
It took me 2 days. By myself.
To be clear, I barely looked at the output - there was simply too much of it to review and it might all be slop but you know what? Who cares? It works. In fact after putting it through our manual and e2e tests, it's clear that it works much better than the old one and it's now shipped to production and making our company money. I almost don't want to believe that this was true and I'm sort of secretly waiting for the shit to hit the fan but the longer it gets used without issues, the more I start trusting that Claude did the right thing.
Still working through my feelings about all of this. I could use a hug.
Venthe@reddit
There is a small thing you've missed - you've provided a perfect setup for the LLM. Seasoned codebase, a suite of tests, and a transformation of input.
But please take a look at the results of agentic development 9-12 months in - human developers are no longer capable of changing the code, and the LLMs break down under the slop they generated. And they will not have a good, more or less well-architected input as a base anymore.
LLMs can produce amazing results, but far too many people are amazed enough to start to trust the god in the machine; wherein this god is only a highly sophisticated parrot that creates overly verbose output, with terrible abstractions.
TylerDurdenFan@reddit
Code now being, to a degree, "disposable" is what I think is going to have the biggest effect on how the business of software is done in the long run.
Robodobdob@reddit
I’ve been a web dev for 25+ years too so I’ve run the gamut of ideologies and factions.
In recent years, I’ve swung away from the entire JS framework mentality. It just adds a whole layer of complexity and fragility to applications that 99% don’t need. The recent Axios attack has only strengthened my resolve.
I now embrace a simplicity mindset which is usually server rendering with HTMX. It’s miles better but the hardest part is convincing other devs who are entrenched in SPA frameworks.
TylerDurdenFan@reddit
Hey, I'm old too and I've settled on HTMX+JTE+Javalin (only do personal stuff now). HTMX is great.
Did you find it funny when the young JavaScript crowd rediscovered SSR?
hikingmike@reddit
Hah yeah this is freaking insane…. Rediscovering server side rendering 🤯
The server is fast, we have control over it, and it can do the processing great. The browser/client is unreliable and maybe slow. HTML is basic and simple and easy to send across the network (and improvements to HTTP help). So do the damn work on the server and send the dead simple basic HTML to the damn client. Result: Web app is faster!
Robodobdob@reddit
In my experience, HTMX (or Fixi or Datastar etc) dissolves the argument that you have to send a lot of HTML over the wire. You only send fragments and these compress really well using server compression.
Robodobdob@reddit
Yeah it’s weird to come across people who have no idea it’s a thing. I think many devs have been raised on a diet of React and don’t know any different.
scientific_thinker@reddit
YAGNI - You Aren't Gonna Need It
RewRose@reddit
Should just be YANGNI and do away with that contraction
poolpog@reddit
you are almost perfectly describing the YAGNI pattern
scientific_thinker@reddit
That's what I thought too.
Straight_Waltz_9530@reddit
You should never abstract on the first implementation. Rarely even on the second. Only when you see what the actual shared behavior/interface is with multiple implementations do you refactor with abstraction. This way you're abstracting based on reality, not on your wild imagination and prognostications about the future.
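A small Go sketch of that order of operations (types are illustrative): write the concrete implementations first, and only name the interface once both already exist and the shared shape is actually visible.

```go
package main

import "fmt"

// Two concrete implementations, written independently, before any
// abstraction existed.

type CSVExporter struct{}

func (CSVExporter) Export(rows []string) string {
	out := ""
	for _, r := range rows {
		out += r + "\n"
	}
	return out
}

type JSONExporter struct{}

func (JSONExporter) Export(rows []string) string {
	// %q on a string slice renders it as a quoted list, e.g. ["a" "b"].
	return fmt.Sprintf("%q", rows)
}

// Extracted from reality: both types above already satisfy this interface,
// so nothing here is speculative. Go's implicit interfaces make this
// after-the-fact extraction painless.
type Exporter interface {
	Export(rows []string) string
}

func main() {
	var e Exporter = CSVExporter{}
	fmt.Print(e.Export([]string{"a", "b"}))
}
```

Had the interface been designed up front, it might have grown methods neither implementation needed; derived afterwards, it is exactly as big as the observed shared behavior.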
Buttleston@reddit
The biggest revelation that I had is, tl;dr, that a whole lot of things people argue about the correctness of are essentially just matters of taste.
Secondarily, the harder people argue over something, IME, the less important it really is.
I worked at a job early in my career closely with someone who I truly always respected, even back when I had first met him, but we really disagreed on a lot of stuff. I left that job and went and worked somewhere else, and during that time, I came to see his point of view more and changed my mind about a lot of things.
Later, I came back to work with him, and in the interim HE had changed his mind about a lot of the same things and we'd essentially reversed opinions. This led me to understand that really both opinions were valid and we shouldn't mistake opinion for "best practices"
hikingmike@reddit
That’s great. Did you ever talk about that with him, that you noticed you had both swapped stances?
Buttleston@reddit
Yeah we talked about it many times over the years. It's not just in programming, in lots of topics where there are "schools of thought" there are a lot of arguments over nothing and both schools are basically fine. People just like being on teams
At a young age we both liked to argue, also, which probably contributed to it. One of us would state an opinion and the other would feel obligated to refute it.
Early_Rooster7579@reddit
Once I realized all that mattered was deliverables and being friendly, my career rocketed. I stopped caring about writing the cleanest code, covering 1000000x edge cases, etc. If stuff breaks, we’ll fix it. Making stuff that works minimally and quickly is far less stress and way more reward.
coredalae@reddit
The best thing is when you revisit your crappy temporary solution 5 years later to make a proper implementation. And you know it's worked perfectly for 5 years.
hikingmike@reddit
Aint-nobody-got-time-for-that.gif :)
Full-Extent-6533@reddit
Not as experienced as you - but at my current firm there is too much deliberate, reckless debt. I’d like to think that fast delivery is good, but we are facing two prod issues right now that could have been avoided either by a bit cleaner code or by better tests.
Early_Rooster7579@reddit
Yeah, my point was don't just do the dumbest, fastest thing possible, but understand that an MVP delivered in a week is better than a perfect v1 in a month.
Bousha29@reddit
I guess I'm still in the stage of my career where that boggles my mind.
I mean writing quality code should be more valuable for your business. But I guess there's just no metric for "things that could've broken but didn't because of foresight".
jayd16@reddit
One trick is to hit all the common cases and just spend the time to fail safely for everything else. If you even hit those problems, you can just handle it. It's really not an issue. In that case, even if you hit a failure, the recovery cost is minimal.
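That "handle the common cases, fail safely on the rest" trick might look like this in Go (a hypothetical example): the default branch returns a clear error instead of guessing, so nothing silently goes wrong, and adding a new case later is cheap.

```go
package main

import (
	"errors"
	"fmt"
)

// shippingDays handles the regions we actually serve today. Anything else
// fails safely with an explicit error rather than a silent wrong answer -
// if that path is ever hit, recovery is just adding one more case.
func shippingDays(region string) (int, error) {
	switch region {
	case "domestic":
		return 2, nil
	case "eu":
		return 5, nil
	default:
		return 0, errors.New("unsupported region: " + region)
	}
}

func main() {
	if d, err := shippingDays("eu"); err == nil {
		fmt.Println("eu ships in", d, "days")
	}
	if _, err := shippingDays("antarctica"); err != nil {
		fmt.Println("fell back safely:", err)
	}
}
```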
edanschwartz@reddit
Another aspect that is often devalued: it just really sucks working with bad code. It makes your job less fun, and quickly causes burn out. Burning out and eventually quitting is expensive for the business, and for your own mental health. So take care of yourself, and write the code that you want to maintain!
But like OP, I have often confused "high abstraction" with "good code". I think a whole generation of engineers got brainwashed by Big Abstraction early in our careers 😆 The first time I heard "a little duplication is better than a little abstraction" my mind was totally blown .
potatolicious@reddit
The trick is that quality is a tradeoff and isn't free, and at some point it's all about tradeoffs.
If making sure some component of the codebase is completely bullet-proof and handles every edge case gracefully is going to take 5x longer, and the odds of hitting an edge case is very low, and the severity when someone hits the edge case is not really a big deal, then that extra effort is hard to justify.
Think of it like building a gaming machine. Some people really can justify getting a massive $3K+ GPU, but most people can't. So yeah, you can go balls to the wall and drop $10K on a rig, or you can spend a fraction of that and be perfectly fine. Same deal in engineering - everything has a time and money cost.
The other part of this is that the world is constantly changing and so will your requirements. It is possible to be so quality-obsessed that not only are you paying steep costs, but also what you have built is wrong by the time it is finished, not because the work was done poorly but because the requirements changed while you were building. The time tradeoff is very real in many businesses and very painful - the longer something takes the more risks it takes on in other dimensions.
Head-Bureaucrat@reddit
Quality activities technically are not value-added activities. They are risk mitigation, which is still super important. Once I started communicating with non-technical people and management like that, it got way easier. I'm no longer trying to convince someone that if they spend a little extra money, some nebulous, unspecified bug or bad thing might not happen. Now it's, "If this feature breaks, how big of a deal is that? Would you spend $5,000 to fix it? $10,000? $100,000? Will the company be fined? Will people die?"
Suddenly it becomes easier to argue, "Well, if this breaking isn't a big deal, and you'd pay to fix it when a dev has free time, let's just knock it out as fast as possible," or, "If someone would die and you'd spend $100,000 to fix it, let's really think through the scenarios, design good architecture, write comprehensive tests, etc., to minimize that scenario as much as possible."
hippydipster@reddit
The fact that there are tradeoffs should absolutely end this argument from a completely abstract perspective. People aren't saying anything useful beyond that point, as it's almost impossible to do so.
Whether a given bit of effort into code quality is worth it depends on a zillion concrete factors of each individual situation. End.
alpacaMyToothbrush@reddit
I mean, obviously there are some lines to be drawn where 'gold plating' is not worth it.
If, on the other hand you take 1.5x as long to deliver a piece of code, but it's clean, understandable, and properly covered by well written tests? That's gonna pay dividends in the long run with fewer bugs.
I've noticed quality has really started to decline in software development with the push to 'vibe code' everything, and I no longer have the air cover from management I used to have to push back because I'm 'delaying the merge'. Ok, fine, but the work I deliver I still hold to high standards. It's nice when someone asks me if I tested $cornerCase because I can not only affirm I did, but point to my whole suite of unit tests written and what scenarios they cover. Keeps jrs from breaking my stuff too, so that's a bonus.
koreth@reddit
Sometimes that's true. But I've had experiences where the dev team delivered a clean, high-quality code base and then a couple months later the project was cancelled before any actual customer had used what we built.
"Spending time to write clean code pays dividends later" only holds true if the code sticks around long enough to recoup the cost. Which it often does, of course, but the payoff shouldn't be treated as an inevitability.
Early_Rooster7579@reddit
This is my feeling too. You can always spend more time on a feature for improvements. You can never refund time wasted on requirements changing or a cancelled spec.
Wide-Pop6050@reddit
This has been a big learning for me. Always get an MVP out. People give much clearer feedback when they have something to respond to
TylerDurdenFan@reddit
Also:
The developer who pays the extra cost (even if it's just 1.1x) is not necessarily the one who will reap the benefits.
The developer who ships fragile minefields that work for now, is not necessarily the one who will have to deal with the repercussions.
Nowadays, with loyalty between employees and employers (in either direction) being at a historical low, incentives are skewing heavily in favor of the second.
max123246@reddit
This is partly why, as a junior, I have put in the pain and effort to try and get things right the first time. I've felt what happens when you don't: people independently have to go through the same heroic efforts to fix things and work with what existed before, and it's never captured or surfaced as an issue to improve.
Maybe that's the wrong judgement call, but I always assume that for every interface I write, there will be 10 people in a year's time who will have to work with it and probably won't ever reach out for help. They'll either struggle through it or be relieved, depending on how good a job I did, lol.
djnattyp@reddit
"I got drafted in the war. In bootcamp, the Army forced me to run a mile and taught me how to shoot a gun. Then I just stood around at a guard post at an airfield and never had to fire a shot. What a bunch of morons."
Conscious_Support176@reddit
I’m don’t think that’s close to being true. Abreast in my experience, edge cases are more often than not the byproduct of poor quality design rather than actual business requirements.
Thegoodlife93@reddit
It depends on the industry. Some industries have all kinds of Byzantine requirements that result in weird edge cases. Some of those edge cases might have stemmed from poor design in another system created years ago, but now that the business has filtered through those systems, it's a business requirement.
Conscious_Support176@reddit
I see the same. Bad database design is often something that can be fixed. Normalisation can take some effort but it is relatively straightforward and the business will be aware of the consistency issues that need to be addressed.
The weird one-off exceptions that I see are more difficult. They are the result of pretending that there aren’t exceptions when there are, which results in poor communication flow from business to development. This produces a misshapen implementation where the impact of the commercial decisions comes through in dribs and drabs as it affects different departments, getting reimplemented in multiple places as a result. This is much harder to resolve, because the underlying problem is that the business knowledge you would want to encode remains tribal knowledge within the sales team, and leaves the business when the members of that team move on.
zshift@reddit
The tradeoffs vary heavily by the business domain. Anything that involves human safety needs to work 100% of the time, by which I mean that failures must not cause injury. Any systems involving money must either succeed or fail completely. Bugs can and will show up, but the ability to work on bugs in a safe and recoverable manner is important to these domains. Taking an extra month to validate that chemotherapy machines never miscalculate dosages is critical. However, most of game development takes the complete opposite approach. If it works, ship it.
Understanding where to draw the line between delivering on time and ensuring complete safety is going to vary across your career, and it’s something that needs to be handled on a case-by-case basis. For most enterprise development, ensuring that data can’t be corrupted and won’t be lost is the most important guarantee. Occasionally failed transactions that require users to fill out forms again may be acceptable behavior given other priorities and deadlines.
In my experience, this also varies pretty drastically in environments where software is or isn’t the main profit maker for a business. Internal IT development is considered a cost-center, and has much heavier emphasis on delivering for the smallest budget. Companies that sell software, SaaS, etc, will want higher quality, as it directly impacts the brand and profitability to an extent.
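The "succeed or fail completely" rule for anything involving money can be sketched in Go (an in-memory stand-in for what would really be a database transaction; names are illustrative): all validation happens before any mutation, so a failed transfer leaves no partial state behind.

```go
package main

import (
	"errors"
	"fmt"
)

// Ledger tracks account balances in cents. Transfer is all-or-nothing:
// either both legs apply, or neither does. A real system would get this
// guarantee from a database transaction; the contract is the same.
type Ledger struct {
	balances map[string]int
}

func (l *Ledger) Transfer(from, to string, amount int) error {
	// Validate everything before mutating anything, so an error can never
	// leave the ledger half-updated.
	if amount <= 0 {
		return errors.New("amount must be positive")
	}
	if l.balances[from] < amount {
		return errors.New("insufficient funds")
	}
	l.balances[from] -= amount
	l.balances[to] += amount
	return nil
}

func main() {
	l := &Ledger{balances: map[string]int{"alice": 1000, "bob": 0}}
	fmt.Println(l.Transfer("alice", "bob", 400), l.balances)
	// This fails, and the balances are untouched afterwards.
	fmt.Println(l.Transfer("alice", "bob", 5000), l.balances)
}
```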
WellHung67@reddit
The real answer is build the maximum quality code that allows you the most flexibility while still maintaining a reasonable velocity. Like you said it’s all about tradeoffs. A good rule of thumb is to make the minimal thing that allows the functionality to work, while never painting yourself into a corner and allowing for additional abstractions and/or pivots with a minimum of hassle.
Sometimes that means you do have to spend a bit more time making something that has good abstractions, just because any less gets you something that is fundamentally less resilient to changing requirements. That’s where experience and judgement comes in, making the call when and where it makes sense to spend a little or a lot of time on the quality or building out some abstractions even if they are just thin wrappers today.
Plenty of people are not flexible enough in their thinking and create abstractions for things where it’s not realistically needed in the name of quality but paradoxically make the quality worse because it becomes harder to pivot and adapt. YAGNI and all that
Gooeyy@reddit
Yep. At a certain point, refusing to ship because it’s not beautiful yet is self-serving.
Spider_pig448@reddit
You can write quality code and provide good deliverables and grow in your career. Real growth and experience is finding ways to achieve it all.
boring_pants@reddit
Why? The business needs money to function.
Quality code does not bring in money. Deliverables do.
Shipping something is the absolute uncontested #1 priority, and it has to be. Shipping something that's pure shit is better than having the finest, highest quality code base in the universe but shipping nothing.
Sure there is. You can measure how often the product breaks in the field. How often do you get unhappy customers? How much do all the support calls cost you?
The thing is, you need to have customers before you can worry about happy customers. If no one is using your product at all because you're not satisfied enough with the code quality to ship it then it doesn't matter how often it would have broken in the field.
Quality is important, but only for its knock-on effects. There's no code quality fairy that'll deposit money into your bank account if you have really good code. So quality only matters insofar as it affects the things that make you money:
Those are the questions that matter. And quality influences all of those to an extent. But code quality has no inherent value. It's a means to an end.
Early_Rooster7579@reddit
What is quality code? To me it's something that solves the problem for the least amount of time and money.
wvenable@reddit
What is any quality product?
A quality chair is comfortable, lasts forever (both in build and taste). A chair that solves the problem in the least amount of time and money is not a quality chair.
Early_Rooster7579@reddit
Sure, if you plan to be using that code for 30 years. Countless features and integrations often never make it out of review or have short-lived lifespans.
My experience is largely web dev. It's pretty rare that a feature remains unrefactored for more than a year or two in places I've worked. If I was writing embedded code, my perspective might be much different.
wvenable@reddit
That's fine. But that isn't quality code. You don't go to McDonald's and call that a quality burger. It's food that gets the job done quickly and cheaply.
My experience (mostly web dev) is that a lot of code lasts a lot longer than most people think, and I almost never start a project completely from scratch. So good quality code propagates. It sets the standard and ultimately saves time and money in the long term. But not everything needs to be quality.
Early_Rooster7579@reddit
McDonald’s may not be quality, but it's an extremely successful business model. Your code can operate similarly. Totally depends on your goals. If it won’t kill anyone or cause millions in losses, I’ll take a speedy MVP over a slow, “perfect” 1.0.
Izkata@reddit
The hot coffee lawsuit initially cost McDonald's about $2.8 million. The award was reduced and then settled, though, so we don't know the final amount.
Tundur@reddit
Okay, great, but how much did they save in the meantime on buying cheaper cups? Was it more or less than $2.8 million?
wvenable@reddit
My team recently completed a skunk-works project to replace a bunch of terrible third-party products that didn't work together. After another frustrating meeting that went nowhere, I asked a member of my team to build a front-end web prototype in 3 days (no backend) to replace it all. He built an amazing prototype that easily and immediately won over management. But he didn't start completely from scratch; we have a solid framework and code that he was able to utilize. Without that, 3 days wouldn't have accomplished anything, and a junky prototype wouldn't have won over anyone.
WellHung67@reddit
That can’t be true, because you could argue a spaghettified mess gets the thing working but is then nearly impossible to maintain. Quality has to include flexibility and resiliency too, as well as maintainability, debuggability, and the ability to pivot.
You have to balance more than just “does it work” because you could probably quickly design things that work but are actually pretty bad.
Code is read 10x more than it's written, so you gotta factor that in too.
UK-sHaDoW@reddit
But the fact that you have to maintain it increases the cost. So it is in fact more expensive.
WellHung67@reddit
More expensive long term to write the quick and working solution, assuming it's spaghetti? Yes. Quality should include the maintenance, because that is part of the cost. Quick can be costly.
Bousha29@reddit
The least amount of time and money today could cost you much more time and money tomorrow.
I've worked with some devs who would create ten tickets worth of bugs just to close one ticket quickly. And then they close those side-effect tickets the same reckless way, and management would see that as "productive" and "responsive".
Early_Rooster7579@reddit
Obviously there's a balance, but proper CI/CD and review should really stop most of these things.
Isogash@reddit
My experience tells me it doesn't. I've seen engineers ship plenty of things that pass CI today and break down the line, or become almost impossible to modify.
The problem isn't just the code, it's the design. If a business process is split across 20 different subsystems with no proper orchestration then it quickly becomes a nightmare.
Early_Rooster7579@reddit
To me that's a failure of CI. If things are passing CI and breaking, then the CI process needs to be updated. If a feature/project is 20 balls in the air, my main point was: know which 15 you need to keep juggling and which 5 can hit the ground and be caught on a later bounce.
Isogash@reddit
If you don't design things well according to the processes you are modelling, you'll run into the situation where your CI is almost entirely useless because even if it's correct, it doesn't cover every edge case, and you can't cover every edge case because there are too many of them because your process is poorly modelled.
This matters more for software that covers complex business processes involving multiple systems and users. If your feature is a button on a website then it's easy to CI.
WellHung67@reddit
Not really - CI/CD doesn't help if your code is a hopelessly coupled, spaghettified mess - not to mention the hidden costs of shitty code: longer time to onboard new hires, and longer and more frustrating debugging of every single thing that goes wrong.
You have to consider the full picture; quickly getting something working and passing tests isn't inherently the highest-quality option under a full-picture definition of "quality".
ncmentis@reddit
Proper CI/CD and review cost time and money.
Sisaroth@reddit
For me, I let the kind of user story decide the quality. Some examples:
The client is pissed that there are no client-side validations anywhere in the app and wants them ASAP -> I quickly implement some validations and push them to the test env even though they are still very rough around the edges.
A bug comes back for the 5th time from acceptance testing -> as part of my bugfix I will do some refactoring to improve code quality and hopefully put a stop to this case of whack-a-mole bugs.
SquirtGun1776@reddit
Yeah, you can't quantify quality. Quality isn't really subjective, but since it isn't a quantifiable thing, you can't really talk about it in clear, objective terms.
BusinessWatercrees58@reddit
This isn't that unusual. It's how everything else works in the world. There are so many low-quality products and services out there that people still pay for. And you can't blame them, either. Do you deeply research every single thing you pay for to make sure it's of the utmost quality? Probably not. Most stuff is just expected to be good enough, because people have more pressing issues to deal with.
peripateticman2026@reddit
Exactly.
Klinky1984@reddit
I think an important balance here is knowing where quality matters and being able to advocate for it in a way that convinces people. There are plenty of people who mistake quality for an overengineered piece of shit that then gets scrapped because it doesn't scale or work.
I think where it matters is that it does what it's supposed to, and maybe how exactly it does it is less important unless it breaks something else. Test your code before you send it to QA for them to reject it at the first step. Too many devs practice "push & pray". If you're not doing that, then your quality is always a step above many.
Vega62a@reddit
The thing is, code of middling quality isn't necessarily going to break. It might just be hard to change. It might be longer than it needs to be, or take longer to ramp up on. Sometimes that's really, really bad - but a lot of the time, who cares? Build it, ship it, move on.
andreortigao@reddit
Depends a lot.
I've worked on large projects where, if anything failed, it could potentially cause several millions in damage. They invested heavily in quality, had several test sites in the release cycle, and code you wrote could easily take 6 months to a year before reaching worldwide release.
Most companies don't operate at that scale, and sometimes having a quick release they can pitch to clients is way more valuable. They're OK with a few bugs cropping up along the way
For most projects, you should be able to identify risky areas and invest your limited resources in tests accordingly. As an example, in an e-commerce system your payment system should have a decent amount of automated tests and be thoroughly reviewed, whereas if the "add to favorites" button breaks it wouldn't be a huge issue, so it can have simpler code and fewer tests.
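That risk-weighted split might look like this in Go (both functions are hypothetical): the money path gets a table of edge cases, while the low-stakes favorites toggle stays a one-liner with no dedicated tests.

```go
package main

import "fmt"

// Risky path: money math. Integer cents and truncating division are
// deliberate, and each edge case earns a table entry.
func applyDiscount(cents, percent int) int {
	return cents - cents*percent/100
}

// Low-stakes path: a favorites toggle. If it breaks, the impact is small,
// so it gets simple code and rides along on manual/e2e checks instead.
func toggleFavorite(fav bool) bool { return !fav }

func main() {
	cases := []struct{ cents, pct, want int }{
		{1000, 10, 900},
		{999, 10, 900}, // truncation: 999 - 99
		{1, 50, 1},     // tiny amount: discount truncates to 0
		{0, 25, 0},
	}
	for _, c := range cases {
		if got := applyDiscount(c.cents, c.pct); got != c.want {
			fmt.Println("FAIL", c, "got", got)
			return
		}
	}
	fmt.Println("payment cases pass")
}
```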
hippydipster@reddit
AI slop is best, true.
CorrectPeanut5@reddit
You don't need to write the best code. You need to write good code that's well documented. If anyone can clone your repo and get it running locally in 5 minutes by following your README you did a good job.
Grand_Pop_7221@reddit
In a similar vein: if they can run the tests in those 5 minutes, they stand a much better chance of changing code with a modicum of confidence.
It's the codebases that you can't test - with 5 compose files (some of which are FUBAR copies) that need to start a billion things to get the dev environment even thinking about running the 2 tests that rely on massive swathes of the code - that really fuck you up and slow you down.
Commercial-Ask971@reddit
I was even taken aside twice by C-level and told a very short sentence: "Progress first, perfection later." As you know, perfection never comes, haha, but it's OK. As for being friendly... it's tough for me as a central-east European resident... people are so different. Saying "no" to an American seems like an insult to him; a British guy will tell you a novel but won't give you a straight "no". I get along best with Balkan and eastern European people, because we have the same mindset. No bullshit. No politics. But that's not very promoted in corporate, and people from those regions are not usually in charge, so I do my best to be polite, especially to our friends from Asia.
bottomlesscoffeecup@reddit
I agree with this, or the mindset anyway. But then I also think that no one would let my PRs through if I stopped caring about writing clean code - surely there is a minimal standard too?
VizualAbstract4@reddit
It's strange, because I still deliver and write clean code. I don't see this as a binary.
Letting your user's feedback drive product is always important though, and you never want to anticipate what the feature is before you put it in someone else's hands to break and play with.
But I also don't see that as a binary, that's just something you always do as part of feature and product development.
Healthy-Dress-7492@reddit
Yeah it’s a dangerous topic; some people may take it too far and end up making things much worse for themselves and everyone around them.
It also depends on whether you're in a silo or working with others. I have seen so many times when quick, simple, minimalist code turned into a train wreck simply because 10 other people bolted their widgets on in different and weird ways; it reaches a tipping point where it's too confusing and/or too much effort to refactor, so nobody does. What is needed is enough guidance built into the code so that others can easily see how to extend it.
Early_Rooster7579@reddit
A big part of it is being able to say: this is good enough, it doesn't need to be perfect.
hiddenhare@reddit
I had a big realisation about six years ago: "You're being too perfectionist. After polishing a feature for months, you often realise that it isn't actually useful or you made some incorrect assumptions, so all of that effort is wasted. Better to settle for lower-quality code, so that you can test your ideas against reality more quickly, like running a scientific experiment."
I had another big realisation about one year ago: "When you're working on a crucial part of a boring project in familiar territory, you can have real confidence in your design choices. Under those circumstances, you should try to approach as close to perfection as you can, using a combination of careful requirements-gathering, good architecture, strong types, tests, and telemetry. It's expensive to write code this way, but it mostly pays for itself in the long run."
Humor-451@reddit
At some moment of time most of "crucial parts of a boring project in familiar territory" are boring and familiar in a good sense because you've seen so many variations of it.
And when you try to prevent teams from designing something that will become a pain in the production, there is always someone who wants to "test your ideas against reality more quickly".
I think the good practice here is to have the person who designed this be the person who fixes the bugs in production and writes postmortems.
Tundur@reddit
That's always the kicker. Think this can be standardised and turned into a framework? Are there at LEAST three different places it can already definitely be used? Are those places changing on a frequent basis and part of crucial infrastructure?
If no to any of them, just write a shitty script and be done with it. Never try to predict future requirements, always work on facts and solid plans.
epelle9@reddit
Is this at FAANG?
Can’t see your whole tag but my experience at FAANG has been everything has to be 100% clean and consider all future scenarios. But maybe that’s due to my senior engineers specifically.
peripateticman2026@reddit
Yeah? Go watch Sean Parent's experience at Google. Heh. Google would rather have a 1000 loc wheel-reinventing PR than replace it with a single line of STL code
Drinka_Milkovobich@reddit
Varies heavily by specific company and team, my experience has been the worst code and practices I have seen in my career
Early_Rooster7579@reddit
FAANG and startups, yes. Obviously it depends on the scope of the feature but speed still is king. Move fast and break things is still (kind of) the motto in my team at least
TheOneTrueTrench@reddit
Yeah, that's all that matters for your job. Personally, I like, hell, I'm passionate about well structured, well thought out, efficient, beautiful code.
Is that holding me back in my career? Yeah, probably. But it's not holding me back in writing what I want to write, and I'm doing well enough.
Horror-Primary7739@reddit
I've been at the same place for 10 years. I've had to go back and do major rework on tools to support data sets not accounted for in the original development.
And they still paid me to do the refactor.
mltcllm@reddit
This is why apps suck now.
mwax321@reddit
One thing that stuck with me was when a new CTO came in and assessed our code, site reliability, and our roadmap.
He said "we don't have enough emergencies. We are too safe."
I knew exactly what he meant. If there are zero bugs, we are shipping features too late.
Different-Star-9914@reddit
This line of action also protects your mental wellbeing when leadership callously deletes 30k rows on a spreadsheet of employees.
The code stopped mattering when the culture was run through the enshittification tube known as capitalism.
siliconsmiley@reddit
This is a lesson that I learned some years ago that I'll forever remember as, "sometimes you gotta punt."
Scooby359@reddit
Delivering results and keeping it simple is worth way more than trying to stick to principles like DRY and having perfectly optimised code, or using the latest trendy libraries. Working products pay the bills, not perfect code.
And as much as it can be a pain, when you boil the agile manifesto down, it's about having something that works and meets user needs rather than having a technically perfect process.
ugh_my_@reddit
But you get paid less if you aren't in the cargo cult
editor_of_the_beast@reddit
I have an amazing new tool for you. It’s called the LLM :D
Early_Rooster7579@reddit
We are well ahead on agentic dev lol. Meta has been pedal to the metal with it for a year now
editor_of_the_beast@reddit
Hell yea
epelle9@reddit
Until you get to FAANG, where your design doc goes through 6 revisions because the code and architecture aren't 100% clean and don't consider all possible scenarios.
PurepointDog@reddit
100%. The thing I really focus on nowadays is "can we tell that it broke". With the way LLMs sprinkle error handling in some languages (especially python with try-except around everything), making sure failures boil to the surface is critical
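The "failures boil to the surface" point can be sketched in Python (hypothetical function and names, not from the comment): the first version is the pattern code generators often produce, the second catches narrowly, adds context, and re-raises so the caller can actually tell something broke.

```python
# Anti-pattern: a blanket try-except silently converts a failure
# into a "valid"-looking value, hiding the bug from callers.
def parse_port_bad(raw: str) -> int:
    try:
        return int(raw)
    except Exception:
        return 0  # failure disappears here

# Letting the failure surface: catch only the expected error,
# attach context, and re-raise so monitoring/tests can see it.
def parse_port(raw: str) -> int:
    try:
        return int(raw)
    except ValueError as e:
        raise ValueError(f"invalid port value {raw!r}") from e
```

With the first version, a config typo becomes port 0 and the service misbehaves much later; with the second, the failure is loud at the point where the bad data entered.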
lovelybullet@reddit
For me it was the simpler is better. In the beginning I was overabstracting the code to the point nobody could understand or debug it. Everything was decoupled, dynamically injected, mediator patterns everywhere, cqrs. Later I found out it doesn't really make any sense except lifting my ego.
Now I just write simple code, easy to read, easy to understand and focus on tests way more.
08148694@reddit
Gradually less and less ideological and more pragmatic and flexible
Some of the things I let pass in code reviews today would have really triggered my younger self. I just learned the difference between bad code, broken code, and code not written exactly how I would have done it
A happy team matters far more than code cohesion and purity, and endless debates in code reviews are a surefire way to destroy team morale and interpersonal relationships
Now as a lead the only rule I have on code style is that it needs to pass the automated lint and static analysis, any personal preference beyond that is not up for review
Vega62a@reddit
The most difficult people I work with are the ones who will just suddenly jump in and try to tank someone else's well thought-out plans because of dogma or some abstract notion of technical purity.
Like I dread their slack messages.
XenonBG@reddit
On the other hand, if the plans are well thought out, tanking them should not be that easy.
Saying this as the guy whose slack messages you dread, but if I hadn't done that, my organisation would have 30+ Laravel microservices spread over 4 teams.
BadLuckProphet@reddit
Nah, it's crazy easy. You just have to nitpick variable names, call things out for being inlined or not inlined depending on how you feel that day, cry about any function with more than 3 args and insist on one-time-use DTOs instead, nitpick people for not using a full-on builder pattern for an object with only two variations, etc, etc.
I just point these things out to my juniors in an effort to give them opportunities to learn for the future but I won't hold up a code review over these things. If it means THAT much to me I'll refactor it later myself when I work on something related.
XenonBG@reddit
Of course, that I agree with fully. I do nitpick sometimes on variable names, but I never block the PR because of that. Such a comment is also always marked as a nitpick.
BadLuckProphet@reddit
No idea. Reddit is weird and has bots or automated upvote/downvote mechanisms.
Personally I have no opinion on laravel microservices as it's not something I work with.
Dizzy_Citron4871@reddit
These are usually mid level engineers. They know enough to know something, but they don’t know enough to know better.
nog_ar_nog@reddit
We had one of those and he would just lose it when we called out the same anti-patterns in his code that he was complaining about. Principal engineer fighting battles over variable names with L4s instead of defining the org engineering strategy.
young_horhey@reddit
A happy team may matter more than code cohesion & purity (& I agree with you there), but missing cohesion can create an unhappy team. If one member of the team is always off using their own patterns, or writing code that others in the team find really hard to integrate with or maintain, then that is going to cause friction.
Leading_Yoghurt_5323@reddit
biggest shift was thinking in data flow instead of objects… makes systems way more composable and easier to reason about
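That data-flow framing can be made concrete with a tiny sketch (hypothetical data and function names): plain records plus small transformation functions, instead of behavior-laden objects.

```python
# Data-centric style: plain dicts as records, functions as transformations.
orders = [
    {"id": 1, "total": 120.0, "country": "DE"},
    {"id": 2, "total": 80.0, "country": "US"},
    {"id": 3, "total": 200.0, "country": "DE"},
]

def orders_in(rows, country):
    # Pure filter: no hidden state, easy to test in isolation.
    return [r for r in rows if r["country"] == country]

def revenue(rows):
    # Pure fold over the data.
    return sum(r["total"] for r in rows)
```

Each step is independently testable, and the whole system is just a pipeline of such steps.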
Runner4322@reddit
Two ideas that are fairly related to each other made my biggest ideological shift:
A lot of "software architecture" and "software patterns" have a string attached to them, and if you follow it, it turns out it all starts from the same post that says "but what if we need X later?". It turns out, the vast majority of time, you do need some shape of X later, but it's almost never the shape you established, and you're going to have to write more code anyway so your choice was irrelevant and many times, detrimental as you try to make the new requirement fit with the old design.
The way we teach about OOP (mostly, polymorphism) and its use cases is the opposite that we should be focusing on. Basic OOP tutorials are some variation of "Dog, Cat and Bird are all Animal, they all have a move method and an eat method, so we can just define them in the base object(...)". You know how it goes. But in my experience, when you're dealing with business logic (i.e. the animal/vehicle/person examples) specifically and not "library code" (things that don't map to a real world object), you don't actually want to model your code like that; Bike and Truck may both be vehicles but you usually want the logic to evolve differently and not ever have to think "we can't do something with Trucks like load cargo because Bikes can't do it so it would be messy"
Radiant_Equivalent81@reddit
Can you defend this further? Obviously you are seeing something (along with others), but what is the cross-section I'm missing here? Truck implements cargo, not Bike.
Runner4322@reddit
The devil is in the details, and simple examples like the ones you'd see in books like Clean Code (which has a whole host of other issues) don't illustrate these. I'll do my best to show it with a slightly longer example, but it's genuinely hard to explain, and it's why experience is valuable. The summary is: "model your objects according to your actual problem domain scope and not some Plato-style Theory of Ideas applied to programming". So let's turn the "Truck and Bike" example into a practical example.
You are tasked to make a fleet management software for a vehicle renting/courier company. This company has plans to expand business in the future where it won't just be "someone shows up at the door, pays, leaves with keys to a vehicle", but for now, they've basically just been doing said work manually with some spreadsheets and notes; they have a couple of team members that deal with truck rentals and another two team members that deal with bike rentals. The software is not meant to automate their job, but to aid them and guarantee consistency. In any case, let's focus in one very specific part of the process: before giving the user the keys, there's a validation step. This could be something like (Basically python, but technically pseudocode):
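The original snippet did not survive the thread formatting; here is a minimal reconstruction of the kind of starting point the comment describes (the names `BaseVehicle`, `validate_rental`, `user_can_drive_it`, and the `fuel` attribute are taken from the rest of the comment; the method bodies are assumptions):

```python
# Hypothetical sketch of the initial design: one base class with a
# shared validation step run before handing over the keys.
class BaseVehicle:
    def __init__(self, plate: str, fuel: float):
        self.plate = plate
        self.fuel = fuel

    def user_can_drive_it(self, user) -> bool:
        # Default check: the user holds a license for this vehicle class.
        return user.has_license_for(type(self).__name__)

    def validate_rental(self, user) -> bool:
        # Shared validation: vehicle is usable and the user may drive it.
        return self.fuel > 0 and self.user_can_drive_it(user)


class Truck(BaseVehicle):
    pass


class Bike(BaseVehicle):
    pass
```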
This seems to make sense so far (you might already be thinking "validate_rental should be an abstract method" but bear with me for a minute), it's a simple use case, but now, as it usually happens, we get more requirements:
"Bikes can be rented by just about anyone with the most basic license, but for trucks, there's a local regulation where trucks can only be rented by people that haven't committed any crimes, which is done by checking this local government website with our API Key" (Yes, crazy requirement, but I've seen crazier)
Cool, so, because this is authenticated, it's most likely going to translate into some kind of `UserFelonyService` that probably can't just be initialized out of nowhere, either because it's not just as simple as an API key, or because you can't just have that in your env vars for security - case in point is, you have dependency injection wired up so that your `VehicleServiceLayer` has access to said `UserFelonyService`, but that's not too important. Anyway, let's head back to `validate_rental`. We have a few options:

- Have `user_can_drive_it` be a big method that accepts a `UserFelonyService` and, in the case of the Truck implementation of BaseVehicle, will use it, while on Bikes, it will stay completely unused.
- Have the Truck path explicitly `assert service.user_is_not_felon(user)`.
- Make it something like `assert self.check_felony_status(user, service)`, so `Truck` does actually implement something useful with it while `Bike` just returns True.

You can choose one now if you want, but it all translates to "validate_rental needs the user felony service". In any case, I'm going to introduce more requirements:
"We have acquired a fleet of electric scooters! We want to let users rent them, go ahead"
Well, it's not too hard. They are similar to bikes, but they don't even require a license, and if you're pedantic, they don't have fuel, they have charge - but you can probably just keep using the "fuel" variable defined in the base class, and hope that you don't run into any other fuel-related bugs, or that implementing the electric-related business logic, if any, is not too painful.
Yet another requirement, business is booming and technology has advanced:
"Hydrogen powered self driving cars are here! Anyone can rent them! The only thing is, we have to check, on http://can-i-self-drive-to/location using this other API key to see if the vehicle has the necessary regulations and software version"
...I guess we'll make `validate_rental` take that extra `SelfDrivingRegulationService` argument?

Now, just to be clear, you can make this work with `BaseVehicle`, and making `validate_rental` an abstract method that each vehicle subclass implements is probably a good idea. But you can already see there's a lot of friction (a few examples: you'd have to consider, "do I make `rental subvalidation #34` an abstract method too? Do I find the intersection of all common methods in validation and make them abstract to guarantee all the vehicles implement it? And do I still pass all the dependencies as arguments even if some vehicles won't even touch them?"), some vehicles are taking parameters they don't use, and/or (depending on the other mess that you have) the implementations of those services themselves might even need to be aware of some of that business logic.

Testing also means you need to deal with more fixtures/mocks/FakeServices, so even when we know a type of rental (like ElectricScooter) is supposed to be simple, it's no longer simple to work with.
I didn't even touch the "check cargo load" part or any of the "courier business logic" yet; but you can imagine that you load cargo on a truck, but you load cargo (small, but still cargo) on the driver of an electric scooter which might lead to interesting implementations ("check_user_has_doctors_note". Only half joking here)
So all in all, sure, Bike, Truck, ElectricScooter, are all vehicles. But read the premise of the software, the scope of the domain: We know we are going to deal with Trucks, and Bikes at first, and we know they are different enough for this domain, because they were already being handled in two completely different flows, by two different sets of people. So, what did "start from BaseVehicle" actually bring you? Did you, at any point, receive a requirement that made you say "I only need to add a line to the VehicleTypes Enum and everything works"? Compared to just duplicating a fair amount of the existing code to implement a new vehicle, what did you gain?
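The "just keep them separate" alternative being argued for might look something like this (a hypothetical sketch; the function and attribute names are made up for illustration): one explicit validation path per rental flow, each taking only the dependencies it actually uses.

```python
# Hypothetical alternative: no shared base class, because the business
# already treats truck rentals and bike rentals as separate flows.
def validate_truck_rental(user, truck, felony_service) -> bool:
    # Trucks need the government felony check; only this flow gets the service.
    return (truck.fuel > 0
            and user.has_truck_license
            and felony_service.user_is_not_felon(user))

def validate_bike_rental(user, bike) -> bool:
    # Bikes just need a basic license: no unused parameters, no stub services.
    return bike.fuel > 0 and user.has_basic_license
```

Adding electric scooters then means writing a third small function with its own inputs, rather than threading new dependencies through every subclass.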
In my experience, you almost never get that "Ah, I barely have to do any work, everything works with minor changes" situation, and most of the time you wish two separate things were actually separate, even if they are all technically "a bunch of atoms tied together". I will also say, I kept the object inheritance simple because this is not an attack on inheritance, or an "inheritance vs composition" thing, but you can imagine this could get even messier if you started introducing things like `WheeledVehicle`, `ElectricVehicle`, `ICEVehicle`. I didn't even touch on the development experience of having to check multiple files for each implementation, and "Go to definition/implementation" not giving you what you want half of the time.
And I need to insist: this applies to "business logic code", the kind in the example (yes, generic CRUD app #43), not (usually) to "library code". And just to play devil's advocate, a case, once again with Bike and Truck, where I would totally go with a BaseVehicle: you're making a library that, given some basic specifications for the vehicle ("motor power, shape-type"), will do some physics simulation to see how far it can move and at what speed or speeds. I would use BaseVehicle here with some subclasses, because the domain is actually "a bunch of atoms tied together and bound by the same rules". Another example? Some videogames. If you are making something GTA-style, where the player can just walk, but also take a vehicle, you probably want that base vehicle, because the domain is "something that you attach to the player that lets them move with different physics". And let's be honest, gamedev is full of hacky stuff like that (don't look into how Fallout 3 trains work... or do!); it might even let you do wacky stuff like "what if I just want to ride a chair"!
SansSariph@reddit
I enjoy your first example. I feel part of my own growth is treating the future as an expanding probability field and identifying where the ball might or most likely will move on various time horizons, while simultaneously holding that we could reorg and drop all priorities literally tomorrow, so not overinvesting in that future - but rather ensuring we aren't closing doors and boxing ourselves in with design choices, or if we do so, treat it as intentional debt.
With the flipside internalizing that debt isn't always a bad thing. You can take on debt as a resource, for leverage, and gamble that not paying for it won't actually matter, or that the ROI at the time paid for itself.
TenYearsOfLurking@reddit
Yes you can. You implement load cargo on truck not vehicle.
Almost all examples of "inheritance bad" are strawmen.
A true is-a relationship is never wrong to model as inheritance. Only inheritance for the pure sake of code sharing is wrong/debatable.
FlailingDuck@reddit
As a C++ dev, I feel sorry for the Java developers who were encouraged to abstract, abstract, abstract. Somehow it kept a lot of you paid for a long time. But abstraction itself should never be the goal.
I also feel like DRY is one of the overly misunderstood principles and is too often applied as a design decision "because a book said so", rather than attacking the problem itself.
babababadukeduke@reddit
What about DRY is misunderstood?
MrPinkle@reddit
In some situations, the code is more readable and maintainable if you repeat yourself rather than creating another abstraction to strictly adhere to DRY.
Venthe@reddit
That's still a misunderstanding of DRY itself. DRY is not about code duplication, but about knowledge duplication - that's verbatim - and also, more loosely, deduplication based on the reasons for a change.
Unfortunately, people came to think of it as a code deduplicator.
yxhuvud@reddit
People think it is about not repeating code rather than not repeating problems to solve. Different problems can sometimes have the same code, and it is ok to repeat that code.
FlailingDuck@reddit
I see and hear too many people strictly adhere to DRY because anything WET must be a bad thing.
They want to turn every 5 lines into a new function or create 5 layers of abstraction not because it adds utility but because they read about a principle someone smarter than them told them it's something you should do. They never question it because it's easier to be a follower than a critical thinker.
HeroicPrinny@reddit
This seems like a rather cynical interpretation. I feel like DRY just appeals to the innate sense of engineer types who prefer to automate a task rather than do it manually over and over. In this case it's just encapsulating some logic and calling that instead of writing it several times.
hiddenhare@reddit
The problem of duplicate code is that you may need to write the same change in two places, which is inefficient and hard to remember. Therefore, if you won't need to write the same change in two places, duplicate code is not a problem.
If two functions which look the same are actually semantically different, you should not deduplicate them. At some point, you'll want to make a change to one function but not the other; when that happens, the best-case scenario is that you'll have to split them apart again, the worst-case scenario is that you'll try to handle two very different cases with a single function.
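A tiny illustration of that distinction (hypothetical names): two checks that are textually identical today but encode different business rules, so they will change for different reasons and should stay separate.

```python
def valid_username(name: str) -> bool:
    # Product rule: usernames are 3-20 characters (could loosen later).
    return 3 <= len(name) <= 20

def valid_invoice_ref(ref: str) -> bool:
    # Accounting-system constraint: refs are 3-20 characters (changes
    # independently of the username rule, e.g. if a prefix is mandated).
    return 3 <= len(ref) <= 20
```

If the accounting system later demands an "INV-" prefix, only the second function changes; had the two been merged into one shared helper in the name of DRY, you would have to split them apart again first.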
Isofruit@reddit
It often leads to scenarios where you can either have one thing do two very similar things or have two separate things that are fairly similar, but also have their differences.
When you focus on DRY, you start bundling more and more things together even if they're for solving entirely different problems whose problem-domain is not intrinsically connected, i.e. they do not intrinsically need to be solved the same way and do not intrinsically need to have the same code for their solution.
Taken too far that leads to systems that are so configurable and flexible that you need to build up a very big and complex mental model to understand what it does in certain scenarios.
ZergTerminaL@reddit
a lot of the time the abstractions used to dry up code end up being the wrong abstraction and cause a refactor down the line that costs the project way more than the duplicate code would have.
mckenny37@reddit
Incidental duplication
AvailableFalconn@reddit
Idk I’ve worked in java-adjacent systems for years (Kotlin, Scala), and while some frameworks are sometimes cumbersome, it’s pretty easy to keep things simple. Better than ruby or python by a mile.
TylerDurdenFan@reddit
Today it is.
However, back in the days of EJB 2.x, JSF and OSGI, things were awful.
My feeling is that today's Java not only can make many things very simple if one makes the right choices, it's also a rather good fit for how LLMs' token-based attention works. However, I'm terribly biased.
Gunny2862@reddit
Biggest ideological shift was knowing I didn't want to climb the ladder. Once I knew that, I could detach from crappy, ambitious-focused BS and could also figure out what roles I would be OK jumping to at other companies.
PureCauliflower6758@reddit
Functional programming and type safety
CoreyTheGeek@reddit
It's honestly truly believing in and adhering to "keep it simple." It used to be a thing I'd say - but hey, check out this Rube Goldberg app I wrote that works because I used language quirks!
I can still vividly recall conversations with Senior devs and mentors when I was a junior and they'd always hammer in this keep it simple mantra and then turn around and build the most complex, insane, cluster fucks. And it still happens.
I have to constantly fight to do things the easiest, simple, basic way. But as a result I can understand how it's working and when things go wrong it's so much easier to fix.
Ambitious-Garbage-73@reddit
mine was giving up on DRY as a default. spent years extracting every repeated pattern into shared utilities and then watching teams struggle to modify anything because changing the shared thing broke four other things nobody remembered depending on it. now I duplicate freely until something is repeated three times AND the duplication has caused an actual bug. the threshold went from "I see repetition" to "repetition has cost us something measurable." way less elegant. way fewer incidents.
hellotanjent@reddit
Do the stupid thing that works first. Never do the complicated thing until you've built (AND TESTED) the stupid thing.
O(N^4) solution? Heck yeah. Recursive implementation that could blow up the stack? Awesome.
The stupid solutions are quick to build and will sanity check your system. And when you do build the just-slightly-smarter thing, they are your regression tests.
And in all likelihood, 90% of the stupid solutions are actually fine to ship because you were worrying too much about corner cases that can't actually happen.
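The "stupid solution as regression test" idea, sketched with a classic example (hypothetical, not from the comment): a brute-force O(N^2) maximum-subarray sum serves as the oracle for the smarter O(N) version.

```python
import random

def max_subarray_brute(xs):
    # The "stupid" O(N^2) version: obviously correct, trivial to review.
    return max(sum(xs[i:j])
               for i in range(len(xs))
               for j in range(i + 1, len(xs) + 1))

def max_subarray_kadane(xs):
    # The smarter O(N) version (Kadane's algorithm).
    best = cur = xs[0]
    for x in xs[1:]:
        cur = max(x, cur + x)
        best = max(best, cur)
    return best

# The stupid version now sanity-checks the smart one on random inputs.
random.seed(0)
for _ in range(200):
    xs = [random.randint(-10, 10) for _ in range(random.randint(1, 12))]
    assert max_subarray_kadane(xs) == max_subarray_brute(xs)
```

If the smart version ever disagrees with the brute-force one, you know immediately which of the two to distrust.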
dfltr@reddit
My biggest ideological shift in my entire career has definitely been adoption of LLMs over the past year or two.
If you have the communication, planning, delegation, and architectural skills to thrive at Staff+ already, LLMs in modern agent harnesses are essentially 24/7 mid-level engineers who never sleep, argue, or bikeshed.
I used to think “AI” was a neat party trick that might be useful in ten years or so, and there are definitely a lot of people chugging kool-aid flavored snake oil, but there’s some serious engineering to be done with this new set of tools if you’re smart about it.
Tundur@reddit
For me the challenge with AI is the level of output has massively outpaced our ability to properly tool up and review. We have a team of 50, working across the whole business, and usually we'd be going live on a new service once a month.
They'd then run a session for the team on what they did, how they did it, anything interesting they learnt. Then we'd have an architectural session to decide if there are patterns we want to adopt as a team. Our philosophy was "let a thousand blossoms bloom", do whatever you want as a dev, and we'll endorse (or condemn) patterns retrospectively.
Now we're going live almost every week, on huge business transformation projects, with a single developer leading it out. The outcomes are good, the savings are tangible, and money and customer satisfaction are up massively.
But as a leader in the team, I now have absolutely no chance of keeping up. I used to drop into most projects to advise on design, and was familiar with the majority of our codebase. Now I know only the most crucial parts. And patterns? Why do we need a pattern when none of us are familiar with the code? The point of abstractions is to make code easier to reason about and maintain, but we ain't maintaining it anyways!
It's an incredibly exciting time to be a developer, if you can focus on solving business problems and not being a purist about the code. A lot of people here will call what we're doing a disaster, technical debt waiting to explode in our faces. But tech debt is just the gap between your current implementation and your future requirements, and all signs point to our ability to operate like this only improving over time so... the debt may not really exist in any meaningful sense.
niowniough@reddit
It sounds like at least one tech debt problem has already arrived: you now lack familiarity with the codebases. The value of patterns is very much there: if you have a standardized pattern and get the AI to adhere to it as much as is reasonable, you ease the mental burden for everyone (including the AI) of accurately learning what a new codebase does, and you can focus mental/artificial context on the meat living within those standardized bones. Then you can easily become acquainted with, or spot issues in, a new codebase by knowing how the bones should be and having AI traverse and summarize the meat for you.
Tundur@reddit
If I were writing all the code I'd agree, and the projects I work on make heavy use of frameworks (by necessity, it's long-tail "make 1 thing then make 100 variants").
The thing is, many devs aren't great at identifying and encapsulating abstractions, aren't great at communicating with their team about them, and aren't able to commit to maintaining them. The vast majority of junior devs (pre-the last 12 months of AI madness) who said "I've got a framework prototype", I've had to shut down before they waste too much time on it, because it's the wrong play at that time. Love the energy, but not right now.
So if the rest of the team are delivering insanely quickly and meeting all their requirements, and doing so in a way that's hyper-self-contained and contains all the context in one repo, that's actually ideal for me right now. If it becomes a problem in the future we can weigh up all their solutions individually, identify patterns, and go from there.
What would worry me is having a heap of overlapping and competing vibecoded frameworks, creating dependencies between otherwise unrelated solutions, and making any refactor require cramming a dozen repos into the AI instead of just one verbose one.
nonasiandoctor@reddit
The problem I see is that it will be harder to create new staff+ engineers if the new people are relying so much on llms.
BusinessWatercrees58@reddit
Or future staff+ engineers will just be less technically capable, but able to do much more, than those of the past. Just like today.
g0ggles_d0_n0thing@reddit
It feels like I have done a shift each time I've had a new job. The biggest one might be not having to DRY across the whole code base.
Vega62a@reddit
You don't look like the smartest person in the room by arguing with everybody. You look like the best colleague by being friendly, humble, and willing to help.
My career absolutely took off when I refocused myself on being friendly, helping others, positioning myself as a constant learner (not the guy who had to override everyone else's opinion), and building simple things that met the needs of my stakeholders. I've had two promotions in five years (across 2 companies), and my income has 3x'd.
Meanwhile, my colleagues who are way smarter than I am but are difficult to work with tend to get passed over for promotions.
HeroicPrinny@reddit
I wish it were always like this. The most arrogant and difficult guy on my last team got promoted pretty highly - higher than everyone who was nice and easy to work with. Then again I think he was good at sucking up to those above him.
Good_Roll@reddit
was he actually good at his job? Some of the difficult people do genuinely make up for it by being hyper competent.
5oy8oy@reddit
In defense of Java interfaces, which I use heavily: they make dependency inversion and creating stubs for unit testing much easier.
idontmeanmaybe@reddit
Was wondering why no one had mentioned this. Same in C++. I'm realizing most aren't doing proper unit testing.
niowniough@reddit
I can't speak to C++ but beware the assumption that people aren't doing proper unit testing just because they don't make heavy use of your specific strategy. In Java stubbing libraries like Mockito can handle most any unit testing needs that interfaces can help with.
5oy8oy@reddit
Creating stub interfaces in Java is way cleaner than using Mockito for everything. We used to heavily use Mockito, but now have a rule of only using Mockito to mock external dependencies we have no control over. For everything else, we create stubs by implementing our interfaces. It is so much easier and cleaner to write and read our tests now.
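Translated into the thread's pseudo-Python for illustration (a Java version would use an `interface` plus a hand-written stub class; all names here are hypothetical): the interface is a `typing.Protocol`, and the stub is a plain class that records calls, with no mocking library involved.

```python
from typing import Protocol

class PaymentGateway(Protocol):
    # The "interface": anything with a matching charge() satisfies it.
    def charge(self, user_id: str, cents: int) -> bool: ...

class AlwaysApprovesGateway:
    # Hand-written stub: approves everything and records what it was asked.
    def __init__(self):
        self.charges = []

    def charge(self, user_id: str, cents: int) -> bool:
        self.charges.append((user_id, cents))
        return True

def rent_vehicle(gateway: PaymentGateway, user_id: str, price_cents: int) -> str:
    # Code under test depends only on the interface, not a concrete gateway.
    return "keys handed over" if gateway.charge(user_id, price_cents) else "declined"
```

The test then reads as plain code - construct the stub, call the function, inspect `charges` - instead of a chain of `when(...).thenReturn(...)` setup.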
ShroomSensei@reddit
It completely depends on your architecture, which most people suck at. I loved interfaces at my last Java job. My lead had a bad habit of making an interface for almost every class/service and it ended up just becoming a pain in the ass to manage. It looked pretty but functioned stupidly. Now I had to maintain definitions in two places instead of one. Even worse if there was an interface, abstract class, then finally a single implementation all because "we may want to expand this in the future"
ShroomSensei@reddit
I am not super far into my career, but I can think of at least three. Two of them happened in college but apply to the career in general.
The first came after I absolutely bombed 3 internship interviews and failed an exam, all in 2 days. I realized I couldn't just skate through the degree, and more importantly my career, for the places I wanted to work at. I started putting in way more time developing myself and (imo) more importantly learning how to do it effectively. College became a min-maxing game where I wanted to maximize my learning in the smallest amount of time. Projects became about learning more and pushing myself instead of just getting a good grade.
To tangent off of that, I also had to realize that not everyone was like this, and that it was something I just needed to accept if I didn't want to piss off everyone around me. I was pushing myself extremely hard for little to no monetary gain (grades). My peers did not want to do that, and I shouldn't have expected them to. The same thing applies in the workforce. You will meet a lot of people who come in, do their tickets, and clock out, and that's okay. You will only burn yourself out trying to hold others to your extremely high expectations.
Lastly, YAGNI. Time and time again I have tried to build something "just in case" or "for future iterations" and it ends up just being a waste of time. Unless you know with pretty high certainty that something is coming in the future, you should not build for the what-aboutisms. I think a lot of this was influenced by really ramping up in Java, where my lead built stuff like this. Everything was an interface or an abstract class even if there was only going to be one implementation for the foreseeable future.
bushidocodes@reddit
I think the big one was realizing that programmers as a whole are unable to learn the lessons from the past. We are balkanized by linguistic differences and obsessed with what’s next for ego-driven reasons. There was a hope of literate programming, where programmers would learn the craft by reading great programs of the past. That didn’t take off. We’re thus doomed to have cycles of forgetting and relearning and things going in and out of fashion over time.
Scooby359@reddit
Something that works and is simple is better than complex and technically perfect.
On PRs, keep personal opinions to yourself, focus on whether it does the job, and only flag bugs, security risks, or really bad code.
There are times to fight for your beliefs. And there are times to shut up and let things go. Focus on what's most important.
max123246@reddit
Really? I feel like the only time I can express opinions and share and hear feedback is on code reviews. I express my opinions and I make it known when they are not blockers, because if not during code review with explicit specific examples, then when?
niowniough@reddit
I think it also depends on the degree to which your suggestion makes a notable difference, and on your teammates.
Some of my teammates are perfectly eager to know about slightly more syntactically elegant or language idiomatic versions of what they implemented. Some enjoy good comments about variable naming as pertains to accuracy or typing. Some people get annoyed when you mention things on this level of granularity. Perhaps in some seasons of your relationship with them an unrelated tension between you two makes the comment seem differently motivated.
So I'd say you can freely share more comments the bigger an impact it has (ideally moreso now or imminent vs hypothetical), but the more hypothetical and granular it is, the more you have to be cognizant of the cost in team cohesion.
max123246@reddit
Yeah, I guess that makes sense. I hope I don't bother people because of that mismatch in granularity, which is why a lot of my review comments are very wordy and polite and hedge my thoughts. Unfortunately, that can also be annoying to some people, so I don't know. I just hope they assume the best in me, and I check in with the few people I see in person to make sure at least someone else thinks the way I review is fine.
gUI5zWtktIgPMdATXPAM@reddit
This is the pitfall of juniors chasing new libraries, with the hubris to think they can rewrite whole legacy applications in a week.
This extends to languages. Some people just want to pad a resume; good for them, but not for the business, as now the team needs to support this new language, framework, or library on top of everything else.
bsenftner@reddit
Software for more software is pointless, software that affects the physical world is the entire point of software.
Eligriv@reddit
This one project in Rails made me hate having to manually do any "plumbing code". Time spent on plumbing meant I was taking a wrong path.
Ten years later and I still see teams reinventing the wheel and doing everything by hand, especially in JS shops. How are some teams still writing SQL queries in strings and opening (and forgetting to close) sessions manually?
sisyphus@reddit
Just experimenting with languages led me to conclude that dynamic/static typing is less important to me than mutability/immutability, and that I'd rather have Clojure or Elixir, which emphasize immutability but have dynamic types, than Java or Go, which have bad static type systems.
ciynoobv@reddit
Imo the biggest headaches with mutability come when it is combined with shared references.
for(int i=0;… causes a lot fewer issues than HibernateDomainEntity.mutatingMethod() in my experience, even though both are examples of mutable state.
It has gotten a bit better, but I still think the Java community especially has a cultural problem with resorting to shared state far too quickly.
max123246@reddit
Same in Python. I've seen spooky action at a distance where the type of your variable transforms underneath you because you pass it into a constructor that takes ownership and mutates the type.
I wish Python made pass by reference vs pass by value more explicit
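A minimal demonstration of the "spooky action at a distance" described above, with invented names: Python passes object references, so a constructor that keeps and mutates an argument changes the list the caller still holds.

```python
# A constructor that takes "ownership" of its argument and mutates it.
class Owner:
    def __init__(self, items):
        self.items = items            # keeps a reference, not a copy
        self.items.append("claimed")  # mutates the caller's list too

data = ["a", "b"]
Owner(data)
assert data == ["a", "b", "claimed"]  # caller's list changed underneath it

# Defensive copying makes the boundary explicit:
class SafeOwner:
    def __init__(self, items):
        self.items = list(items)      # copy; caller's list is untouched
        self.items.append("claimed")

data2 = ["a", "b"]
SafeOwner(data2)
assert data2 == ["a", "b"]
```

Since Python has no pass-by-value for objects, the copy (or an immutable type like tuple) is the usual way to opt out of this sharing.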
TylerDurdenFan@reddit
Interesting. What is your opinion/impression about how much LLM agentic coders benefit from (or suffer with) the trade-off in those languages?
I believe, for example, that Java benefits from being statically typed (easier for LLM attention), suffers from being verbose (a lot of tokens to express a simple thing), benefits from being explicit (especially if you avoid annotation magic), suffers from its inconsistent presence in training materials (too much online content about old/obsolete and yet differing Java ways), suffers from limited presence in training materials (as opposed to JS, TS, Python), and benefits from good quick feedback from the compiler and mature static analysis tools (as opposed to Ruby or "best effort" linter tools).
Careful_Let509@reddit
I kinda don’t agree with the general sentiment in this thread that seems to suggest that accepting slack is a more senior thing to do, than caring about quality.
I’m not saying to write perfect code, but jeez people, sometimes it doesn’t cost that much to implement something in a good way.
IMO commercial codebases are no place for personal artistic expression and preferences. That is reserved for personal stuff. The key is consistency, and if you want to use a different pattern, COMMUNICATE: make a plan to refactor and execute it, or use it only if there is a very clear benefit to this particular pattern in this case.
There is nothing more annoying and time wasting than having a codebase where it’s a free for all and everyone uses whatever pattern they liked at the time. Some services are functions, some are classes, some use DI, some don’t, some services accept raw ids, some accept objects, some models use calculated properties, others use specialized helper functions.
On top of that the hidden benefit of caring about consistency is that AI agents can easily pick up these patterns and continue to write a pretty good base for new features.
Regarding the question - killing ego, actually understanding KISS, dropping DRY and accepting that YAGNI.
As a junior and mid dev I clearly felt I had something to prove. There was a lot of ego attached to my code, I wanted other devs to respect me and prove how smart I am by writing overengineered solutions that even I could barely keep up with.
Fast forward a couple of years, I met my mentor and he crushed me. His code reviews were so brutal and fair that at some point I just thought I was not made for this.
It really was like he had twice the brain that I have, yet he wrote the simplest, dumbest, most consistent code I have ever seen. Like, dude just used a function instead of a polymorphic service class that automatically generates handlers that can be customized through hooks? Or like, dude just used plain API views instead of generic viewsets that need like 5 method overrides to do what I want? Or like, dude wrote two explicit functions that are 90% the same instead of coming up with one generic function?
It really humbled me that the smartest guy in our company wrote the simplest code I've ever seen. Onboarding new devs was so simple they could make meaningful contributions the same week they joined. It was super easy to reason about the code, and you didn't lose time overthinking which of the 40 convoluted patterns used in the project you should use this time.
Another shift was realizing I don't have to communicate everything I do with my manager. If I think we need to refactor, I will just add it to estimates on top of the tasks I'm currently working on. Developers tend to overshare technical details with managers who have no business making technical decisions.
throwaway_0x90@reddit
This sub will probably downvote me into oblivion, but AI is the biggest shift for me. Was anti-AI as recently as 1 year ago but the exponential leap in all of 2025 opened my eyes to the fact that "just knowing how to code" is not going to be of much value in the years to come.
Elavina@reddit
This is me too. It was only in February that I had to change my mind on this. I read Steve Yegge's Gastown piece, and while at the time I thought it was a bit insane, because surely AI isn't that good, eventually I latched on to "hey, what if they're right?" Because the implications are huge. And I decided to spend a while really giving AI a go.
It started with little things - I attached it to JIRA and had it write new tickets for me, because it was good at filling out acceptance criteria and I was lazy about that. Then I attached it to Sourcegraph and Confluence too. I got it to go through our backlog that we never get around to doing - it searched Sourcegraph to look at the code and could tell me "the code this refers to doesn't exist any more" or "this would be a one line fix (and here's what it is)". I checked its work quickly and mostly it was right. Piles of tickets closed or updated to be quick to do.
I built out a workflow where I gave it a ticket, had it write the plan. New agent to review the plan and suggest fixes - I give the plan a good review to make sure it's looking good. Then new agent to implement and make sure the tests pass. And then a new agent to review it. I only look at the code once it's checked I'm getting a green build in CI.
And it's actually really good now. I did a complete upgrade of an old repo we needed for running compliance checks, from .NET Framework 4.7.2 with some random Windows dependencies to running on .NET 10 in a handy CI pipeline, paying basically no attention to the code but checking on the plans and review feedback. The biggest blocker was me at the end, making sure it was in good condition, that the results looked good, and that I understood it so I could send it to my team for review without being embarrassed. And breaking up the commits so I wasn't throwing a 1000 line PR for review.
I run a "backlog scout" which looks for those small tickets in our backlog, creates the MR for the fix without my involvement, and sends me the MR. Dozens of things we never had time to fix and now I just need to review a tiny MR for correctness.
I don't think it's perfect for everything. It can get rabbit holed - attempting to decompile a library when it already has access to the code, or creating a whole new script when an existing one is already there. You want to run it in a sandbox so it can't break things (though it hasn't tried often) because constantly saying yes to tool use is tedious. Giving it LSP tools makes it so much better. It handles repos with good encapsulation and docs better (like devs do) so best practices still apply. But I wouldn't have learnt any of these things if I hadn't given it a really good go. Yes, early 2025 AI was shit. 2026 AI is powerful and you should learn what it can do for you.
timabell@reddit
How do you handle the effect on "peer review" (or PRs) of so many large patches, and the effect on the broader team of such a high churn in the codebase?
Elavina@reddit
Honestly we haven't figured out the peer review problem yet. It's my biggest question to solve. I'm working with my team to try and come up with an answer where we can maintain quality but not turn everyone's lives into eternal review.
Some of our thoughts have been: you check in the plan document along with the code, so others can read what your intent was along with the code changes. Maybe eventually we treat that as the "code" and the actual code as more of a build artifact. Or I'm testing out a PR annotator that highlights what I think needs a second pair of eyes (validating core business logic, config changes, breaking changes) so reviewers can get through the noise faster. We haven't settled on any of these; it's all so new we're just trying to see what works.
The churn is honestly not that different to what we already have with multiple devs working on the same repos. Encapsulation remains important so you can reduce the blast radius and people don't need to rebase all the time. Sometimes you need to hold off on a larger change because it's a busy period, and you announce them ahead of time so it's not an annoying surprise. But this was always true, regardless of the source of the code.
I just think we need to start working with the AI tools, and solving those scaling problems (new and interesting problems! How long since we had some real ones of those?), rather than ignoring the potential for what we can do with them.
timabell@reddit
Well said, thanks for sharing.
Having the AI generate code-tour (.tour) files for the vscode extension is an interesting trick. Could include one of those in the PR
v-alan-d@reddit
Everything is systems within systems. (Donella Meadows)
Systems have leverage points to tip it over (Donella Meadows)
Minds, including you, can and will make change to systems
DRY, YAGNI, and other reductionist ideas are mostly worldview models. They expire. Either let go when they become inconvenient, or crash hard.
Some ideas actually approach mathematical truth, like Liskov's substitution principle, which is cool
You can't build a stable system without a loop (Cybernetics, The Macy Conference)
More generally, all stable things need eigenforms, eigenstates, and other eigens, including systems, language, etc.
AI makes knowledge cheap and sense-making expensive
Our role and value as a mind is to make things make sense
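Since the list above name-checks Liskov substitution: the classic minimal counterexample is a Square subclassing Rectangle, which breaks code written against Rectangle's contract. A quick illustrative sketch (all names invented):

```python
class Rectangle:
    def __init__(self, w, h):
        self.w, self.h = w, h
    def set_width(self, w):
        self.w = w                  # contract: only width changes
    def area(self):
        return self.w * self.h

class Square(Rectangle):
    def __init__(self, side):
        super().__init__(side, side)
    def set_width(self, w):
        self.w = self.h = w         # keeps the square invariant, breaks the contract

def stretch(rect: Rectangle) -> int:
    rect.set_width(10)              # caller assumes height is untouched
    return rect.area()

assert stretch(Rectangle(2, 3)) == 30
assert stretch(Square(3)) == 100    # substituting a Square violates the expectation
```

The principle is "mathematical" in the sense that the violation is checkable: any property provable about Rectangle (height is stable under set_width) must hold for its subtypes, and here it doesn't.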
mrfoozywooj@reddit
As I got higher up the ladder and the company I was with grew significantly, I realized a lot of "dumb" things I would see in big corps are actually a necessity: you can't trust a few hundred devs to not do something stupid, because you can't be everywhere anymore.
HoratioWobble@reddit
It was a nightmare for me to get into this industry. I didn't have a formal education, but I'd been coding since very young.
So without a formal education or a formal approach to the industry, it was very apparent to me how rigid other developers are.
Everyone had a preconceived notion of what "good" looks like, how to scale, how to secure, how to build things.
But my takeaway so far, over the last 20 years, is that businesses don't want that. They don't give a fuck about the code, what framework you use, how you build it, or how many tests you have. They care about delivery.
Engineers that dig their heels in don't seem to get very far, at least in my experience, and too many engineers can't separate their responsibilities as an employee from their own personal ideologies about technology.
You see it with AI at the moment.
Businesses want AI. They're enforcing AI. Whether you think it's good or not, they're not likely to listen.
So if you want a job, you'll likely have to use AI and that's the end of the discussion, that's what you're paid to do.
hippo-and-friends@reddit
how utterly soul crushing
ilyas-inthe-cloud@reddit
Mine was going from "the best architecture wins" to "the org you actually have wins." Early in my career I thought technical correctness would carry the day if I explained it well enough. It usually doesn’t. The solution that survives is the one a tired team can operate at 2am and leadership can understand without a translator. Boring got a lot more attractive once I had to own incidents, budgets, and hiring.
exomyth@reddit
Ironically, given this post: planning ahead. While I agree that the Java landscape is too anal about always applying design patterns, there are also significant costs to picking the simplest solution for a problem.
Refactoring code also bears a significant cost that all the YAGNI people seem to forget, especially if you already hear about future plans in the hallways.
This mostly means writing code that is easy to modify. APIs are notoriously hard and therefore very expensive to modify, so make sure you plan ahead, before you start maintaining 3 API versions because another team doesn't have the capacity to upgrade to the latest version
If an abstraction is just as easy to write and maintain, pick the abstract solution. There are many projects where you can see certain mechanisms being fundamental to the whole project. Getting those abstractions out early is going to save you so much time long term.
I have seen too many projects just turn into a bunch of tumors glued to each other because they were afraid of using abstractions, because they thought the simplest solution was always the best solution
SansSariph@reddit
I'd say internalizing where and why abstractions provide value and framing everything in terms of requirements and contracts between system components. Thinking of everything as a risk gradient and what trade-offs we're making with any given decision or direction - what are we enabling vs closing off, what time are we saving now to deliver faster vs how much will it slow us down in 8 weeks, etc.
What is the system I am working in, contributing to, designing, what does it interface with, what are its dependencies, what depends on it, how does data flow through the system. Where is tight coupling risky vs worthwhile, how are the tentacles of dependencies propagating and where are they isolated.
I have lived through multiple iterations of the stereotypical "what if we swapped our entire auth framework" and I find myself explaining to more junior engineers all the time what I consider to be a good interface and why.
With experience you learn to separate design patterns as dogma vs tools that mitigate specific kinds of risks.
Oakw00dy@reddit
"Perfect is the enemy of done". Be disciplined but prioritize progress. Good habits lead to progress a lot quicker than the pursuit of perfection, and learning good habits is a lot easier than forgetting bad ones. Become a teacher, but always remain a student.
franz_see@reddit
Interesting. I was a java dev in those days as well. But you and I approached future-proofing very differently 😅
I picked up the big blue book and that was one of my unlocks: make the technical design match the domain design to future-proof it!
18 years since I read it, and it still holds true.
TastyToad@reddit
My job is to solve problems using a computer.
Took me way too long to internalize. Probably because I'm a part of the "I've learned to program as a kid on my own" crowd, so I was too much in love with writing new code to realize it's a waste of time in many cases.
single_plum_floating@reddit
sometimes "Just load it into excel you idiot" really is the correct option.
basically this xkcd.
franz_see@reddit
I freelanced back in college. And as much as i want to solve problems with my own custom code, i find myself often saying “you know you can solve that with excel right?” 😅
mixedCase_@reddit
Functional programming. You don't have to buy into Haskell, you don't have to go balls to the wall with everything, but just treating most non-trivial problems as a compiler and applying functional maxims makes everything way easier to maintain with few bugs. Imperative shell, functional core. Parse, don't validate. Call it whatever you like, it always ends up boiling down to functional principles.
In the age of LLMs, I have never been as validated in this belief. Forcing the model to use strong types and write functional code results in way more stable and reliable feedback mechanisms that let it run much further with the same problem than its default coding styles. Especially noticeable with cheaper models.
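For readers unfamiliar with "parse, don't validate", a small sketch of the idea in Python (Email, parse_email, and mailbox_provider are invented names): untrusted input is parsed once at the boundary into an immutable typed value, so the functional core never re-checks it.

```python
from dataclasses import dataclass

@dataclass(frozen=True)   # immutable: the core can't corrupt it
class Email:
    local: str
    domain: str

def parse_email(raw: str) -> Email:
    """Imperative shell: turn untrusted input into a typed value, or fail loudly."""
    local, sep, domain = raw.partition("@")
    if not sep or not local or not domain:
        raise ValueError(f"not an email: {raw!r}")
    return Email(local, domain)

def mailbox_provider(email: Email) -> str:
    """Functional core: no re-validation needed, the type guarantees the shape."""
    return email.domain

assert mailbox_provider(parse_email("ada@example.com")) == "example.com"
```

The payoff is that every function downstream of the parse can assume well-formed data, which shrinks both the bug surface and, per the comment above, the space in which an LLM can go wrong.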
Internal_stability@reddit
Hi guys, do you trust AI-generated code in production? How do you check it?
zergling321@reddit
Some time ago, I learned about Netflix's Keeper Test and Amazon's stack ranking (with its unregretted attrition). And I even felt sick about the idea of getting rid of people just like that. I was used to working with people who had a growth mindset, so I always thought that if there was an area for improvement, it was fixable, and we could work on it as a team.
I joined the corporate world about two years ago, and man, my mind about those things has changed. There are people who don't produce results at all and just BS their way up the ladder.
jimmytoan@reddit
the Go-era shift from 'where are my abstractions' to data-centric design is one I hear from a lot of recovered Java devs. did it feel like a loss at first, or mostly relief once it clicked that you could build the same things with way less ceremony?
namelesshonor@reddit
My biggest shift was when I put in a whole year of 10-12 hour days, every day. Completed the most stories / work items of anyone on the team. Management said throughout how I was "killing it".
Came performance review time and I received a "meets expectations" and would receive no raise. When I questioned it, manager said he was awarding the only exceeds expectations to some guy we had just hired that was a member of his church (they were both Mormons).
When I tried to further push back ("what did you base the rating on? I completed the most work, with zero issues") he said "we didn't consider any of that. But he's trying to start a family so I think he's going above and beyond".
After that, I don't go above and beyond anymore. Bare minimum.
germanheller@reddit
"the code doesnt matter" was mine. spent years obsessing over clean code, perfect abstractions, elegant solutions. then shipped a product with the ugliest codebase ive ever written and it made money and users were happy.
turns out users dont care if your service layer has a hexagonal architecture. they care if the thing works and if you fix bugs fast. the ugly codebase that ships beats the beautiful codebase thats still being refactored.
now i write code thats good enough to maintain and ugly enough to ship on time. the perfectionism was just procrastination wearing a lab coat
susanne-o@reddit
maybe two effects overlap? Yes, there is a change in perspective, in priorities.
in addition, however, this change is enabled by choosing a technology stack that allows you to do that.
in linguistics there is the Sapir-Whorf hypothesis:
to what extent would your change in perspective, from object oriented to data driven, even have been possible had you stayed in Java?
cuntsalt@reddit
i used to give fucks, now i do not give fucks. burnt out spectacularly (i.e., nervous breakdown complete with mild hallucinations, a week-long panic attack, etc.) at my last job and now all said fucks have fucked off. i realized continuing at that pace of intense care and trying to fix everything and workaholism was going to put me in a mental hospital or grave so I got used to letting things go and doing the bare minimum.
gUI5zWtktIgPMdATXPAM@reddit
I watched a co-worker go down this route sadly. Not a mental hospital, but he definitely ruined his health and is now on dialysis. I learnt from that to be very careful about over-applying yourself, because working hard and caring are not always valued as they should be, and bragging about hours worked is not an achievement badge.
Anytime I hear the rubbish about working hard and getting rewarded, I laugh. It's a myth and an exploitation. To add insult to injury, he was the lowest paid while holding up vital parts of the system.
cuntsalt@reddit
it's quite dumb. my manager identified me as a high performer and tried to squeeze me further in a year where there were no raises. and I also learned I made $35K less than other engineers hired after me.
used to look down on people for laziness/not giving it their all, oh how the tables have turned. safer/healthier this way though. sucks that it is usually a lesson won firsthand with difficulty and lasting consequence.
gUI5zWtktIgPMdATXPAM@reddit
Yeah, it's unfortunate. I also feel the co-worker was the type to naively believe the best in people. He was taught to work hard and honestly, and sadly the world chewed him up. I was a graduate going in on not much less than him, and later surpassed his pay. He coped by amassing credit card bills to get a dopamine hit while ruining his health. Eventually it got serious enough that he was hospitalised and had an awakening, but this stuff lasts, and it burnt his career.
I remember being fresh and wide eyed thinking gaming consoles and energy drinks were cool in an office but all this showed me how exploitative it is and these are red flags.
I love being a developer, solving problems but it really is an exploitative market out there.
Disastrous_Poem_3781@reddit
Depression
tmclaugh@reddit
“good enough” is often really all you need.
I inherited 4 services as part of a corporate spinoff. One on AWS EC2, 2 on EKS, and one on ECS. All bespoke IaC, CI/CD pipelines, and SOPs. They could have all been containerized and run on ECS with common IaC and pipelines. They come from my portfolio at the former company, where I kept saying we needed to rein this in, but other leaders insisted it was important that teams pick "the best" architecture, IaC, pipelines, etc. for their needs. Well, this week we've learned that while these teams were reinventing the wheel, they missed basic stuff like database backups. Or documenting the counterintuitive way to update SSL certs. Basic stuff like that, which they would have had time for if they weren't focused on other things.
Agreeable_Office_28@reddit
ai slop
Frequent_Policy8575@reddit
As much as I hate to reference Fight Club, it was the ability to let go of what truly doesn’t matter.
The catalyst was I got tired of swimming upstream and I realized that, in practice, so much of that just doesn’t make a difference. There’s some stuff that does and I try to stick to it and expect the same of my team, but for the most part, the code works, the product is delivered, and management will never care how much tech debt is slowing you down.
Just deliver and go home to live your life. Work is a means to an end and that end is forgetting about work and enjoying myself.
TylerDurdenFan@reddit
The code you own, ends up owning you.
Sometimes it's better to let the chips fall where they may
PopularElevator2@reddit
It's better to ask for forgiveness than permission, sometimes. I have run into managers that were too scared to implement new projects or features. Directors who didn't understand nor care about user or dev experience. Other developers who didn't want something implemented unless it was their idea. If a group of users is having problems, or if I have an issue that is bugging me, I'll go ahead and implement it.
We used to migrate DBs manually. I started using a DB automation tool because it was a headache. I had fewer migration failures than my colleagues when updating DB tables, and I was done quicker. My colleagues didn't want to use it because they didn't understand it, and my manager was scared to use it.
OtherTourist5535@reddit
I learned to stop planning as much. I used to obsess over getting all the plans right: making sure I did a ton of research before starting a new project, making sure everything was well documented, etc. Now I try to get out of the planning phase as quickly as I can. I still do my due diligence, but I try to get the minimum research done that unblocks me from getting started.
NoobPwnr@reddit
I naturally tend towards, and see value in, planning. I've been trying to loosen this the last few years.
However with the advent of Claude Code I feel planning pays off more. Plans have become instructions for the computer.
Just some observations. Still figuring this all out.
Buttleston@reddit
Absolutely. Often the only sure way to know if something will work is to try it at some minimal scale and then iterate on it. Nothing worse than arriving with a completely finished solution that ends up being wrong at its core.
TylerDurdenFan@reddit
I started programming when I was 8 or 9 (BASIC), picked up Pascal and C/C++ in high school, and Perl during college (all self-taught). After graduating in electronics engineering just after the dotcom bust, it seemed that software development was the only field that wanted me. My 2nd job had me for 9 years using Java and SQL, developing software for big telcos.
That's when I read the Gang of Four book, which led to tons of other books, and like you, I got a bit caught up in the Java abstraction mindset. It was good for a few things, and I luckily managed to steer us away from some of the really bad ones (early ORMs, JEE).
My huge mindset shift came at the very small startup I had been at for almost a decade, where I was by then the most senior technical person, the one "Architect" in title, the author of the core architecture, told by an intern "without you this place would collapse"... The place changed owners, and the new ones put up a new GM who made many people's lives miserable, including mine.
I'm not in the US, so this was not a silicon valley hot FAANG wannabe, not an US equity paying startup, just a third world startup that changed owners in 2011, that's the catalyst you ask about.
What shifted my mind was realizing that all of my caring about the future, not just about quality, but about making sure our architecture, platform and code were easy to grasp for the unending stream of juniors that joined and left, while being robust, performant, scalable etc., none of that was valued, none of that was for my benefit. A muggle coworker in charge of QA made almost as much as me, without the responsibility, without having to travel so much without notice, without having to take the new GM's yelling...
So I quit, not just that job. I quit the dev career. I took a sabbatical year to finish a Masters degree I had previously interrupted due to work commitments, and patiently sought the best local job I could find. I was very, very lucky to land a job managing Solution/Service delivery for a big telco. I had to learn ITIL. I had to learn to manage vendors. I had to re learn to manage people, who now were not software developers but IT specialists. I learned what was like to be in charge of mission critical systems. What it was like to be appreciated. To be well compensated. To have a budget. To be the only one with dev experience in the room.
When that company was acquired almost a decade later, I went on to further roles in IT. In the 15 years since, I've only coded for myself. At work: lots of SQL, and many Java tools to scratch my own itches. At home, I've coded personal projects over the years, and since I'm the customer for those, I care more about outcomes like stability and operational cost than about patterns/beauty/maintainability/fluff/etc.
cr1mzen@reddit
I realised that most of the time I didn't need over-engineered abstractions, and half of the time I don't need OOP. Just write the damn function that you actually need, and then stop.
boring_pants@reddit
I don't think I was ever an ideologue in that way.
I've gotten a lot better at judging problems: how simple can we go and still solve the problem in front of us? Do we actually need to solve this now or are we getting ahead of ourselves? Is it worth adding these abstractions?
I think I was always pragmatic, I've just gotten better at it over the years.
ooplesandbanoonos@reddit
Your boss will not save you. Having a boss that is a genuine shit umbrella is rare. Had this shift after a series of very bad bosses and I think maintaining that mindset has actually improved my performance. I am more communicative to stakeholders myself, follow up on things myself, and document more.
acidfreakingonkitty@reddit
Embracing communism in 2016.
Oh, you mean for software?! Uhh, I think docker’s ok now, yeah, that’s it.
G_Morgan@reddit
The biggest change is me deciding that code is not always the solution. Sometimes a system that handles 90% of cases but somebody steps in 10% of the time can be better than trying to handle 100% of cases.
tekno_soul@reddit
Code is malleable. Perfect is the enemy of done.
Material-Smile7398@reddit
That's a pretty standard path to follow, to be honest: you want to 'do it right', so you read every article going on design and code patterns, then get burnt by how much work maintenance involves and how brittle 'perfection' is.
Now I aim for designs to be simple but modular.
TPM2209@reddit
That just sounds like "doing it right" without extra steps. 😉
Isogash@reddit
Your data is only as good as your business processes are accurate, which means in a big project there will always be incorrect and inaccurate data.
What this means is that you should not attempt to overconstrain your schema or be too aggressive about invariants, you should instead accept bad data and build fault-tolerance and the ability to move into a failed state into your processes so that they can handle it gracefully. Something unexpected or undefined will always happen and business users will need a way to deal with it.
As an example, you can validate form data for an input, but you should not assume that all previous forms were validated the same way. You need to re-validate the data again later when applying a process to it. (Real world example: sometimes a business process needs to be paused pending a court case, in which case it can live in your system for years.)
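The re-validation point above can be sketched in Go. This is a minimal, hypothetical example (the Application type, field names, and states are invented for illustration): a batch process re-checks invariants at processing time instead of trusting old data, and parks bad records in an explicit failed state rather than aborting.

```go
package main

import (
	"fmt"
	"strings"
)

// Application is a record that may have entered the system years ago,
// under older (or missing) validation rules.
type Application struct {
	ID    int
	Email string
	State string // "pending", "processed", or "failed"
}

// validate re-checks invariants at processing time instead of trusting
// that the data was valid when it was written.
func validate(a Application) error {
	if !strings.Contains(a.Email, "@") {
		return fmt.Errorf("application %d: invalid email %q", a.ID, a.Email)
	}
	return nil
}

// process moves bad records into an explicit failed state that a
// business user can review later, rather than failing the whole batch.
func process(apps []Application) []Application {
	out := make([]Application, 0, len(apps))
	for _, a := range apps {
		if err := validate(a); err != nil {
			a.State = "failed" // park it for manual review
		} else {
			a.State = "processed"
		}
		out = append(out, a)
	}
	return out
}

func main() {
	apps := []Application{
		{ID: 1, Email: "ok@example.com", State: "pending"},
		{ID: 2, Email: "not-an-email", State: "pending"}, // legacy bad data
	}
	for _, a := range process(apps) {
		fmt.Println(a.ID, a.State)
	}
}
```

The key design choice is that "failed" is a first-class state in the schema, so the paused-for-years court-case scenario has somewhere legitimate to live.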
on_the_mark_data@reddit
The data engineer in me is smiling at the phrase "I picked up a very data-centric mindset. I stopped looking at objects and started thinking in terms of data and data transformation."
It sounds obvious now, but "what stakeholders tell me they want and what they actually want delivered are often two separate things." Once I got out of my own way by saying things like "but I gave them exactly what they wanted..." I started making strides in my career.
ugh_my_@reddit
I learned to just stay away from software organizations
MediocreFig4340@reddit
Ask the dumbest question you have first. Maybe you’ll look dumb, or (more likely) it will clear up any misunderstandings or explain why they did it that way early on so you have a solid foundation to move forward with.
Waiting to see if it comes up later and then it doesn’t so you have to ask is a much more uncomfortable situation for everyone.
bogz_dev@reddit
i wish i could use Go at my job, it's so nice
bluetista1988@reddit
I can echo the same sentiment about Go. Enterprise software development in Java and C# was getting a little bit tedious before I got the chance to work with Go a few years back. There's only so many times you can create an IAbstractUserRepositoryFactoryBuilder on the off chance (0%) that some other part of your application needs some other way to look up a User before you feel like you've lost the plot. It was pretty refreshing to have a relatively simple, no-nonsense language that just let you do things.
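For contrast, here is roughly what the Go-style alternative to that factory-builder tower looks like. All names here (UserStore, memStore, greet) are invented for the sketch: one small interface defined at the point of use, one concrete type, no factories.

```go
package main

import "fmt"

type User struct {
	ID   int
	Name string
}

// UserStore is the entire abstraction: a one-method interface,
// introduced only because a second lookup implementation is plausible.
type UserStore interface {
	Get(id int) (User, bool)
}

// memStore is a concrete map-backed store; no factory, no builder.
type memStore map[int]User

func (m memStore) Get(id int) (User, bool) {
	u, ok := m[id]
	return u, ok
}

// greet depends on the interface, so a database-backed store could be
// swapped in later without touching this function.
func greet(s UserStore, id int) string {
	u, ok := s.Get(id)
	if !ok {
		return "unknown user"
	}
	return "hello, " + u.Name
}

func main() {
	store := memStore{1: {ID: 1, Name: "Ada"}}
	fmt.Println(greet(store, 1))
}
```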
Anyways to answer your question, I think separating myself from the code I wrote as a junior helped a lot. If someone is critical of the code, wants to change the code in some way, or wants to add some functionality that goes against my vision for the code's elegant layers of abstraction, I no longer see it as an attack the way I would have in the first couple years of my career. The code just needs to work, be easy to read/debug, and easy to modify when the business rules of whatever it implements changes in some unexpected way.
gUI5zWtktIgPMdATXPAM@reddit
I think that's real maturity, understanding when to apply "best practices" rather than just blindly following them.
There's a balance in making something generic and reusable: sometimes just focusing on the problem is the way to go, as making it too generic can hinder the solution, and other times it's the right choice.
There's a cost and trade off to the techniques we choose and I wish best practices explained that instead of instilling them as "always do this".
Outside-Storage-1523@reddit
Realizing that the work matters at least as much as the team, and deciding to give myself one last shot at switching to a systems programming job, or at least a developer's-developer job, before I finally give up and grit through another 15 years until retirement.
spline_reticulator@reddit
That code should be beautiful. Code should be nice to read and nice to write. You should be able to read it from top to bottom without worrying about any mutation or goto statements (that includes continue and break). Data should be unidirectional. If you just keep drilling down deeper in the call stack, you should easily be able to trace the data.
I still think these are nice things to aim for, but a few things made me relax that viewpoint. First is working with Go. To be honest, I find it to be quite an ugly language, and it specifically incentivizes procedural patterns like mutation and goto statements. However the semantics are so constrained that my time to review PRs dropped to almost nothing. There's little room for discussion on different design patterns, and that's nice when you're working on a big team.
Second is working with Cursor. Now that we're working in a world where code is mostly read and written by machines, this craftsman POV becomes less important. I still try to get Cursor to write code this way, but I'm definitely less strict about it than if I were writing it myself.
Conscious_Support176@reddit
I’m not sure I get what you’re saying about DRY. Why would that give you abstraction for the sake of abstraction? It sounds like the opposite of the motivation: the idea is to reduce the repetitive work you’re doing manually by having the machine do it for you, and to use that as the litmus test for whether it’s appropriate to create a new abstraction.
GolangLinuxGuru1979@reddit (OP)
I just want to say that I’m not really arguing that DRY can’t be useful in some circumstances. But I do think trying to apply DRY too early can add more complexity, lead to less clarity, and potentially hide more bugs when things that apparently repeat themselves only kinda sorta do.
The “kinda sorta” is where DRY starts to really become a trap. Patterns can seem to emerge from things repeating, but in that one little case they kind of don’t. So now your generalized system has to become more complex to accommodate this new data that’s 90% similar. Then you get more data that’s kinda sorta similar to everything, except that 5% isn’t, and your DRY abstraction gets harder to maintain or understand.
So I’m fine with repeating myself a lot: it’s repetitive, but there is clarity. When I see patterns hold over many cycles of iteration, then I’ll try to build a useful abstraction. The trap is early abstraction, before you really understand the needs of the data.
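The "kinda sorta similar" trap can be made concrete with a toy Go sketch (all function names invented for illustration). Two formatters that share 90% of their text stay trivially readable when duplicated; the "DRY" merged version immediately sprouts a mode switch that every future almost-similar case will grow.

```go
package main

import "fmt"

// Duplicated versions: each one repeats the other, but each is obvious
// on its own and free to diverge.
func formatInvoice(total float64) string {
	return fmt.Sprintf("INVOICE: due %.2f (net 30)", total)
}

func formatReceipt(total float64) string {
	return fmt.Sprintf("RECEIPT: paid %.2f", total)
}

// formatDocument is the premature abstraction: the shared 90% bought
// us a kind flag, and every future "kinda sorta" case adds a branch.
func formatDocument(kind string, total float64) string {
	switch kind {
	case "invoice":
		return fmt.Sprintf("INVOICE: due %.2f (net 30)", total)
	case "receipt":
		return fmt.Sprintf("RECEIPT: paid %.2f", total)
	default:
		return "UNKNOWN"
	}
}

func main() {
	fmt.Println(formatInvoice(99.5))
	fmt.Println(formatDocument("receipt", 99.5))
}
```

Once a third document type shows up that is 95% like the others, the duplicated versions cost one new small function, while formatDocument costs a new branch plus re-testing every existing caller.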
Conscious_Support176@reddit
That’s reasonable. DRY just distils SOLID into a simple practical guideline. It’s not meant to be read as DON’T repeat yourself, it’s meant to be read as don’t repeat YOURSELF: let the machine do the boring repetitive work. It’s not about repeating lines of code, it’s about repetitive work.
If you’ve created an interface that makes it more work to implement 90% of your changes, that’s perhaps not entirely successful at meeting the objective?
dethstrobe@reddit
That's why I try to follow SOLID. If you start to get a super interface that can do anything, well...you probably just broke Single Responsibility.
Educational_Smile131@reddit
Orthogonality and composability >>> clean code
Encapsulation is about evolution, hardly about data protection at all
Correctness by construction: make invalid states unrepresentable
Principle of least surprise
The right way is on the path of least resistance
No offence, but while Java was verbose as hell (it got gradually better in recent years), Go fails almost everything I value.
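For readers unfamiliar with "make invalid states unrepresentable": even in Go it can be approximated with an unexported field plus a validating constructor, so consumers in other packages simply cannot hold an out-of-range value. A minimal sketch (the Percent type is invented for illustration; within the defining package the zero value can still be forged, which is part of why the commenter finds Go weaker here):

```go
package main

import (
	"errors"
	"fmt"
)

// Percent can only be built through NewPercent, so downstream code
// never needs to re-check the range: correctness by construction
// instead of scattered runtime checks.
type Percent struct {
	value int // unexported: external callers cannot forge a Percent
}

// NewPercent is the single validated entry point.
func NewPercent(v int) (Percent, error) {
	if v < 0 || v > 100 {
		return Percent{}, errors.New("percent out of range")
	}
	return Percent{value: v}, nil
}

func (p Percent) Value() int { return p.value }

func main() {
	p, err := NewPercent(42)
	fmt.Println(p.Value(), err)

	if _, err := NewPercent(250); err != nil {
		fmt.Println("rejected:", err)
	}
}
```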
tictacotictaco@reddit
Being really really nice and making things easier for your coworkers vs being totally right and only focusing on writing the best code. It’s easy to get your ego tripped up in the code you write.
ninetofivedev@reddit
Respectfully, this is the popular zeitgeist shift at the moment.
OOP came around when we had people like Uncle Bob and Martin Fowler as our dev "influencers".
It's 2026, and people don't want to listen to these old fucks. They want to listen to people like ThePrimeagen and Casey Muratori, who push more functional design paradigms.
SteveMacAwesome@reddit
The idea that in 20 years ThePrimeagen will be the new uncle Bob is pretty wild. Can’t wait
ninetofivedev@reddit
Probably more like 10. And the new guy will be making fun of how people were reluctant to use AI or some shit.
spline_reticulator@reddit
Those aren't functional design patterns. They're procedural design patterns. Procedural design patterns fix a lot of the issues with object-oriented design patterns but introduce a bunch of their own. IMO functional design patterns have the fewest drawbacks, but of the three they're the most challenging to learn, so most people don't.
ButtSpelunker420@reddit
https://github.com/ThePrimeagen/refactoring.nvim
Primeagen made a neovim plugin based on Martin Fowler’s refactoring book. The old fucks voices are being heard through the new ones.
sisyphus@reddit
And of course, like everything else, the current popular zeitgeist is a recycling of a previous zeitgeist before class-based OOP took off. Fred Brooks famously said "Show me your flowchart and conceal your tables, and I shall continue to be mystified. Show me your tables, and I won't usually need your flowchart; it'll be obvious." in 1975.
Old-Television-2189@reddit
Lol chill
WellHung67@reddit
So the shift is….yagni?
fixermark@reddit
The fundamental shift from thinking about individual responsibility to thinking about systems and their consequences, and "my mother-in-law got shot by someone who should have been institutionalized three states ago, but there is fundamentally no mechanism nor incentive to track people with mental issues across state lines. The perpetrator used two legally-purchased handguns because one of the states he went to doesn't have background-check requirements for private sales, so he literally bought them off a guy in a parking lot."
... oh, sorry, this is ExperiencedDevs.
The fundamental shift from thinking about individual responsibility to thinking about systems and their consequences, and "I worked at Google, which has cultivated a culture of 'While we all take responsibility for the quality of the system, the ultimate health of the system is maintained not by every individual doing the right thing all the time, but by making the wrong thing hard or impossible.'" This is because fundamentally, training every single person through the front door not to flip the switch that breaks everything doesn't work, but throwing the software equivalent of a molly-guard over the switch with a notice that says "WARNING: this breaks everything, please read [this document] to confirm you know what this switch does" can, when coupled with good hiring practices.
As a practical consequence of this philosophy, honest mistakes are not a reason to fire someone as long as they are willing to put the work in to help with making the mistake harder in the future, because when the system has been burned down once, the person least likely to make that mistake again is the one who was holding the accelerant last time.
As a second practical consequence of this philosophy, counter-intuitively, is a bit of Not-Invented-Hereness and a bias towards building solutions in-house, because even though the consequence is the solution has fewer eyes on it, the flexibility of being able to change every piece of it without coordinating such changes with external stakeholders makes it much easier to get those urgent molly-guards and redesigns in place and maintain them.
chrisxls@reddit
It's so interesting, because as a (mostly former) Google customer, this philosophy of care does not extend to customers' code. "We thought of a better name for this field, so we sure hope you read this email, because we're going to break your code, not for a functional reason, but for aesthetics" is a different approach than the above. We spent a fortune to move to Amazon as a result, because we just couldn't stand it.
fixermark@reddit
It's actually the same philosophy, but poorly applied.
One of the things about Google's internal philosophy was that (as long as the tests and documented constraints hold), anything can be changed at any time. Teams are basically given the same experience: "BigTable team has decided that API 1 is deprecated. Here's the training on API 2. Migrate by this date. We are checking which stakeholders aren't migrated [because they have excellent internal monitoring and can know what team owns RPCs hitting their service]. For large core projects we will negotiate with your VP if you can't make that date, but there'd better be a good reason because it's costing us $X per month to maintain this redundant API. For small projects, we are just going to cut you and it's up to you to explain to your leadership why shit broke when we told you it was gonna break."
Is that good? I mean when you are literally paying everyone to do that work it's fine. When you're not... I was actually on the Cloud team and I watched that team learn the hard way that you can't treat paying customers like that. They can and will just say "no more." I was involved in migrating the Cloud Logging API and when we did the agile "timeline estimates" game on the migration, one engineer submitted an estimate about 3x longer than the rest of the team. Manager asked him what the holdup was. His answer was "clients." Manager noted that Cloud had a documented deprecation window and the project was well within it. He responded "Are we paying the clients or are they paying us? And how much money are we willing to lose in customers migrating away instead of migrating from version 1 to version 2?"
In the end, one of our clients was Niantic and they were too busy putting out Pokémon Go trash fires to even think about a migration mid-launch, so guess which estimate was closer to reality. ;)
chrisxls@reddit
Yup, that's why I am a former customer.
I work in enterprise software. I switched to an enterprise software vendor.
The whole frame of the thinking in that story is something other than enterprise software. Deprecation window? I don't think Salesforce retired an API in its first decade.
Steve Yegge's piece on it was spot on. I received those emails and read them the same way he did.
jssstttoppss@reddit
Test pack quality trumps implementation quality
shadowndacorner@reddit
I had a similar shift to viewing programs as just sequences of data transformations relatively early in my career. That's still probably the biggest shift, aside from just becoming more pragmatic and less ideological in general.
Teh_Original@reddit
In terms of technical execution, Mike Acton's 2014 CppCon lecture, and then reading Richard Fabian's Data-Oriented Design book. Completely switched me away from everything must be OOP as I experienced in industry.
mx_code@reddit
"pick your battles"
"strong opinions, loosely held"
In short, most engineers want to have strong opinions for some reason.
But in the long term most of these opinions end up not mattering, and what matters is the impact of the deliverable.
So... learning to navigate these opinions and only let them be blockers when it actually matters. Also, learning to influence others and help them understand when it actually matters to be strongly opinionated.
olzk@reddit
stay away from any ideologies, they turn your brain into cabbage
CNDW@reddit
Careful now, I tried to say DRY can be a huge trap in a thread a few weeks ago and got ratio'd into the ground...
I don't think there was any one point for me, but there was definitely a shift at some point. My career has been distinguished by working in legacy systems and solving problems that others were unable to solve. There is one common thread in every difficult system: over-abstraction. Cargo-cult programming, or people attaching languages or programming styles to their personal identity. Blindly applying principles or design patterns without thinking about what those patterns are solving and what the tradeoffs are.
Dense_Gate_5193@reddit
Okay so my manager was Patrick Naughton, one of the original inventors of Java at sun in the 90s. I worked directly for him for a few years.
he said “Joshua Bloch is an idiot…” more than once. He hated that his baby was being bastardized by someone who thinks he invented patterns but actually rehashed ideas that have been around since 1970. “We haven’t learned anything new about CS since 1950,” he would also say.
abstractions can be important but they need a usefulness. if something is single use, it’s single use. it doesn’t need to be abstracted further. if it can apply to multiple domains, yeah, abstract it.
i did that for GPU acceleration: i wrote a single wrapper so i have a clean interface regardless of the underlying hardware. that’s the proper use of abstraction.
not in patterns for patterns sake.
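The hardware-wrapper idea the commenter describes could be sketched in Go along these lines. Everything here is hypothetical (Accelerator, the stubbed backends, the MulAdd stand-in kernel): one interface exists because multiple real implementations exist, and callers never branch on hardware after startup.

```go
package main

import "fmt"

// Accelerator is the single abstraction over hardware backends.
// It earns its keep because there genuinely are several implementations.
type Accelerator interface {
	Name() string
	MulAdd(a, b, c float32) float32 // stand-in for a real kernel launch
}

// cpuBackend is the portable fallback implementation.
type cpuBackend struct{}

func (cpuBackend) Name() string { return "cpu" }

func (cpuBackend) MulAdd(a, b, c float32) float32 { return a*b + c }

// gpuBackend would wrap a real driver API; stubbed here for the sketch.
type gpuBackend struct{}

func (gpuBackend) Name() string { return "gpu" }

func (gpuBackend) MulAdd(a, b, c float32) float32 { return a*b + c }

// pick chooses a backend once at startup, so the rest of the program
// only ever sees the Accelerator interface.
func pick(hasGPU bool) Accelerator {
	if hasGPU {
		return gpuBackend{}
	}
	return cpuBackend{}
}

func main() {
	acc := pick(false)
	fmt.Println(acc.Name(), acc.MulAdd(2, 3, 1))
}
```

The abstraction sits at exactly one seam (backend selection), which is what distinguishes it from pattern-for-pattern's-sake layering.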
MB_Zeppin@reddit
I had a similar change in thinking going from Java/Ruby/JS to Swift, albeit with a longer road
That language has classes, but they don’t want you using those classes
Spinach-Eater@reddit
Ha, I had a similar change in thinking when I moved from .NET to Golang.
Once you embrace the minimalism of Go, you just can't go back.