Modern chipsets are monsters, but software feels heavier than ever
Posted by honest-dude911@reddit | ExperiencedDevs | View on Reddit | 325 comments
As a dev, I've started working with some legacy codebases from the 2000s lately, and honestly, the level of optimization in those older apps is amazing. Minimal memory, tight CPU usage, and still doing the job efficiently.
Now we have insanely powerful chipsets, larger batteries, and tools that automate half the dev process, but most modern apps feel bloated and battery-hungry. Phones lasting one full day is considered "great" despite all the hardware advancements.
It feels like we've prioritized fast releases and flashy features over software discipline. Anyone else feel like software optimization is becoming a lost art?
Wanna hear what the senior devs think??
Whoz_Yerdaddi@reddit
Back in the day it was "Moore giveth, and Gates taketh away."
i_exaggerated@reddit
For grad school I worked with a numerical model from the early 80s, written in Fortran77. It was fun reading the manual for it, where it listed the hardware and runtime for different simulations. "The calculation required 16.5 min of CDC-7600 CPU time to reach the completion time of t = 70." I just looked up this CPU, it was $5 million at the time.
This run took maybe 5 seconds on my laptop at the time (2016). My runs took about a day, so they would have been impossible to do back in the day.
kevinossia@reddit
Performance engineering is an art form and most people don’t give a shit about it.
Takes a certain mindset and personality alongside management who actually supports doing it.
ComprehensiveHead913@reddit
I suspect most companies aren't even aware that their huge cloud costs are partly the result of not considering performance engineering a priority. Maybe that's how a skilled and curious developer should approach this; i.e. make it known to higher-ups that they could be paying less for the same amount of work by optimising some of the modern junk.
freekayZekey@reddit
late to the party, but yeah. my current gig had really bad throughput with a netty server. we were paying an insane amount for cloud until i sat down and figured out what they did wrong. the team/company was fine with spending > $2000 a month because of this. it’s pretty wild
kevinossia@reddit
Uber famously replaced their Python backend with Go and immediately saw massive savings.
PHP was so slow for Facebook that they wrote their own HHVM and Hack to replace it.
Instagram found Python so slow on the backend they ripped out the runtime and wrote their own dialect to speed it up because god forbid they use something other than Python.
Speed matters. Some people take a little longer to realize that.
Flat-Pen-5358@reddit
Then you replace Go with Rust, Java, or C# and do it again.
nullpotato@reddit
Eve Online was written mostly in Python, and they converted the bottlenecking sections to C++. I like this approach because premature optimization wastes a different set of resources: developer time.
kevinossia@reddit
This is a bad approach to me.
Not all performance problems are just about finding hotspots and replacing them with faster code. Not even close. Many performance issues are systemic and architectural in nature. Using a language like Python exacerbates that.
I’m not gonna argue with you but picking the slowest mainstream programming language to write performance-sensitive code is just a bad idea, full stop. Doesn’t really matter if it worked for Eve Online or not.
officerthegeek@reddit
but it didn't start off as performance critical code, it took a damn long while until Eve became big enough for them to need to start ripping Python out. This again butts against the same "we're not rich enough for this optimization from the start" thing everyone else is running into
kevinossia@reddit
It’s an MMO backend. That’s almost by definition performance-critical.
Unless the game was intended to be a one-off PoC/prototype or something….
officerthegeek@reddit
yes, almost. The way Eve works means that it didn't reach Python's limits for years. They did eventually run into problems with massive battles - iirc more than 10 years after the game's release - but solved them pretty well. Just because you're making an MMO doesn't mean you get to escape "is dev time for this worth it"
nullpotato@reddit
I'm a python developer and I wouldn't pick it for a MMO server backend either. Also python lends itself to certain patterns but poorly designed architecture is not limited to any language, people make horrible designs in everything.
caboosetp@reddit
I agree with the premature optimization part, but also there are simple design patterns and simple architecture decisions that only take minimal implementation time and save lots of cost down the line. Part of being a great engineer is knowing when it will actually help (but also a big problem with premature optimization is being wrong about what will help.)
Things like caching responses from APIs that you know don't change often and where exact freshness isn't important. It's a handful of lines of code and a slight increase in complexity, but it can save a fuck ton of money and the pages will load faster. This is the kind of thing that often gets missed because it's rarely the bottleneck: the code gets written, then forgotten, and those small things add up.
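A minimal sketch of that kind of cache in Python (the `ttl_cache` decorator and the example function are invented for illustration; a real setup would likely use a shared cache rather than per-process memory):

```python
import time
from functools import wraps

def ttl_cache(ttl_seconds=300):
    """Cache results for ttl_seconds; fine when mild staleness is acceptable."""
    def decorator(fn):
        store = {}  # args -> (expiry, value)
        @wraps(fn)
        def wrapper(*args):
            now = time.monotonic()
            hit = store.get(args)
            if hit is not None and hit[0] > now:
                return hit[1]  # cache hit: skip the expensive call entirely
            value = fn(*args)
            store[args] = (now + ttl_seconds, value)
            return value
        return wrapper
    return decorator

@ttl_cache(ttl_seconds=60)
def fetch_exchange_rate(currency):
    # stand-in for a slow or metered upstream API call
    return {"USD": 1.0, "EUR": 0.92}[currency]
```

That's the whole "handful of lines": one decorator, and every repeat request within the TTL costs nothing.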
Other things like batching db requests when the difference is a for loop vs a call that already exists on your db handler, or waiting to call save changes in entity framework.
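The loop-vs-batch difference sketched with stdlib sqlite3 (the table and data are invented; the same shape applies to any DB handler that exposes a batch call):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER, payload TEXT)")
rows = [(i, f"event-{i}") for i in range(1000)]

# One round trip per row: just as easy to type, slow at any real scale.
# for row in rows:
#     conn.execute("INSERT INTO events VALUES (?, ?)", row)

# Batched: the call already exists on the handler, one statement for all rows.
conn.executemany("INSERT INTO events VALUES (?, ?)", rows)
conn.commit()
```

Same amount of typing, one network/IO round trip instead of a thousand.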
A lot of it is the difference between over engineering code when you don't know if it will matter vs small differences in coding that you know can have a big impact. Those small differences are an art form.
Maktube@reddit
Yeah, people like to complain about the "tricky data structure/algorithm questions" tech interviews, but this is why they got so ubiquitous in the first place. Being able to know when it's worth it to optimize performance at the cost of additional complexity is one of the most valuable skills you can have as an engineer, and one of the absolute easiest scenarios to decide on is when you happen to know there is a data structure/ algorithm that will solve your problem in constant time or space.
It's especially easy because 99 times out of 100, when you're in that scenario, there is an off-the-shelf library that just does the thing, and if you already know the name of the algorithm or data structure you're looking for, it'll be real easy to find that.
If you don't know your data structures and algorithms, you are almost guaranteed to waste massive amounts of money, either in server costs, development time, or both.
Now, that said, obviously your ability to remember how the fuck an interval tree works, or which weird-ass flavor of union-find is optimal for this particular case, and then write it from scratch -- all in the space of a very stressful 45-minute interview -- is not necessarily all that correlated with your ability as an engineer. But there is definitely something to be said for the notion that if you don't know your data structures and algorithms, there is a limit to how valuable you can be and the kinds of work you can effectively take on.
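As a concrete instance of "know the name, find the library": if you know the word "heap," the structure is already in Python's stdlib, and the naive re-sort never needs to be written (the data here is arbitrary):

```python
import heapq

# Naive running top-k: re-sort everything on every query, O(n log n) each time.
def top_k_sorted(items, k):
    return sorted(items, reverse=True)[:k]

# Knowing the name "heap" gets you the O(n log k) stdlib version in one line.
def top_k_heap(items, k):
    return heapq.nlargest(k, items)
```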
thekwoka@reddit
I always find it confusing when people say it's totally unrelated to their work.
I mostly do front end, and I still have DSA type stuff to do...
-Nocx-@reddit
I don’t disagree with your point, but I am still more tempted to say that the DS&A coding interviews have more to do with being the best solution to a difficult problem (hiring) rather than the wisdom that you’re sharing. Knowing when to apply these skills because the cost to the business is small now, and will yield much greater savings later is a much different skill that I feel only comes with experience.
It brings me back to my first job where I optimized a LINQ query to run in O(n) instead of O(nlog(n)) and my mentor said “why did you do that? There are seven gasoline tanks.”
To be completely honest nowadays for most roles I need a dev to know of these principles and how to recall them if necessary, but there is no moment in the workday where it needs to be done in forty five minutes. Being able to think in these terms is insanely valuable like you’ve described, but I generally expect that to come from their degree, not the last six months they spent cramming for the interview.
I guess what I’m saying is this speaks more to why the Computer Science degree focuses on the theory of computing rather than just software engineering, and why those degrees stand out compared to web development boot camps or coding schools.
thekwoka@reddit
but you could also just not use python in the first place.
Writing good Python isn't any easier than writing in many of the faster alternatives.
zeloxolez@reddit
It's so funny to me that companies will grill people hard on super-optimized algorithms yet use completely the wrong language to power the whole thing.
doyouevencompile@reddit
Performance matters but business has to be making money first.
When the product-market fit isn't clear, marketing and monetization aren't in a good place, and market share isn't growing, high performance isn't a priority.
A lot of engineers don't know about high-performance strategies, optimization takes time, and the engineering hiring pool shrinks.
You don't want to optimize before you solve your business income problems, and you don't want a small hiring pool. Engineers are more expensive than cloud, too.
Faster iteration and flexibility are generally more important, except in a few specialized areas.
Healthy-Kangaroo2419@reddit
Yet most inefficiencies I've seen could be solved with basic SQL knowledge: a missed index here, a fancy for-loop over query executions there, N+1 queries in the application layer instead of specialized joins, hidden by abstraction layers in code or by misconfiguration of the persistence framework. You don't even need rocket-science algorithms for this. Monitor your DB and its query execution times and counts, and you'll spot it. Basically the 80/20 of performance optimization.
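The N+1 pattern in miniature, using stdlib sqlite3 (schema and data invented): the slow version issues one query per user, the fast version does the same work in a single join.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER, total REAL);
    INSERT INTO users VALUES (1, 'a'), (2, 'b');
    INSERT INTO orders VALUES (1, 1, 10.0), (2, 1, 5.0), (3, 2, 7.5);
""")

# N+1: one query for the users, then one more query per user.
totals_slow = {}
for uid, name in conn.execute("SELECT id, name FROM users"):
    row = conn.execute(
        "SELECT SUM(total) FROM orders WHERE user_id = ?", (uid,)
    ).fetchone()
    totals_slow[name] = row[0]

# One join + group by: same result, one round trip.
totals_fast = dict(conn.execute("""
    SELECT u.name, SUM(o.total)
    FROM users u JOIN orders o ON o.user_id = u.id
    GROUP BY u.name
"""))
```

With an ORM this is usually hidden behind lazy-loaded relations, which is exactly why monitoring query counts exposes it.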
Maktube@reddit
I hear this a lot, but I think that's misleading. It's certainly true to a point, but it's equally true that if I write the naïve O(N^2) algorithm from scratch when, if I'd put a little more thought into it, I could have used an off the shelf library that did it in O(N), I have fucked up.
I feel like we all know that so instinctively that no one really talks about it, and then junior engineers take "premature optimization is the root of all evil" too literally and turn into midlevels that actively fight you when you say things like "if you put all of this in a hash map, you don't have to use four nested for loops here."
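The hash-map point above, sketched with invented user/order records: same result, quadratic vs linear.

```python
# Quadratic: compare every order against every user.
def match_slow(users, orders):
    out = []
    for u in users:
        for o in orders:
            if o["user_id"] == u["id"]:
                out.append((u["name"], o["total"]))
    return out

# Linear: index users by id once, then a single pass over orders.
def match_fast(users, orders):
    by_id = {u["id"]: u for u in users}
    return [(by_id[o["user_id"]]["name"], o["total"])
            for o in orders if o["user_id"] in by_id]
```

The fast version is no harder to read, which is the whole argument for learning it early.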
doyouevencompile@reddit
My experience tells me otherwise, so I don't think it's misleading. No one is asking you to write obviously wrong or inefficient code; you have to do the bare minimum, but the diminishing returns on optimization come at you really quick.
It's actually quite wrong to write search algorithms from scratch in an enterprise/startup/web context, because there are plenty of libraries in every language that you should be using instead. More common errors are using the wrong DB query or an equivalent API query, not using connection pooling, etc., which are easy fixes and the bare minimum. If you do something obviously stupid, it gets caught in code review. If it's missed, it will pop up in the metrics and you can go figure it out.
It's harder to solve a premature optimization because an engineer thought they should optimize the code to the brim. Highly optimized code generally is less maintainable and less flexible. More often than not it's the wrong piece that's optimized anyway - if you have a 200ms API call chain, optimizing your tiny little code to shave off 5ms isn't really worth the effort. Moreover, if you have to simplify it you must rewrite the code that was an unintelligible mess, and rewriting code after a context switch is more likely to introduce new bugs.
You have to measure before you know what to optimize and then evaluate the weight of optimization vs the hassle optimization can bring afterwards.
Maktube@reddit
Yeah, I agree with all of that. I had a mentor that used to say that it's a lot easier to optimize correct code than it is to correct optimized code, which I really like.
I guess I think the disconnect I see comes from the definition of "optimize", and what's obviously inefficient to whom. If I'm understanding you right, you're saying that you should shoot to write reasonably performant code out of the gate, and only spend extra time on optimization if and when profiling etc shows actual problems, which I think is 100% right.
I do also see a lot of younger developers taking it too far, though, and using the same arguments to justify not learning how to write reasonably performant code in the first place. There's a lot of things that I think aren't obvious to do/not do when you're first starting out, where doing it the "better" way isn't any more complex or less maintainable, and if your mentality as a new dev is "literally never think about performance until the company is profitable", you're much less likely to learn those things.
I don't know that I have a perfect solution, really, but I do think we could do better as an industry about teaching junior devs about all the nuance here.
quentech@reddit
https://ubiquity.acm.org/article.cfm?id=1513451
doyouevencompile@reddit
Yeah that’s pretty much what I’m saying. Agreed everything you said
nasanu@reddit
Yup, came here to post just this, as I always get a billion downvotes when I say that the Knuth quote is misused and not understood. It's dogma in FE; everyone seems to believe that optimisation is some sort of evil, and that if you do care about performance then you're a shit dev.
Slggyqo@reddit
Yeah…that list of companies that saved a ton of money by optimizing isn’t exactly a good case study for most users.
The sheer amount of money on the line, the number and quality of engineers, and financial value of a little bit of extra speed in terms of revenue and cost savings is just…wildly unrelatable for most of us.
nasanu@reddit
False dichotomy. It might take experience to make something faster, but not more time or resources. Typing out a query that will be slow is no different from typing out a faster query. Same on the FE: usually building a performant component takes no more time than installing and learning how to use one from NPM.
danielrheath@reddit
Speed matters a great deal more when you have 10k+ servers. The examples you gave all have huge server fleets.
I run a "very large" website (that is, in the top 10k sites globally by visitor count, but not FAANG-sized), and our cloud costs less than any single engineer's salary.
LucasOFF@reddit
Speed matters at scale. FTFY
For a startup, or a company that's just about to break even, spending way more time and resources on something that won't bring you closer to profitability is frankly a bad way of running the company.
Ok-Scheme-913@reddit
It matters if your product managed to get enough traction that anything matters about it - and arguably it managed to get any traction because it was ready with this and that feature, in that amount of time, developed with these specific constraints (e.g. the creator not knowing any better?)
Having speed as the biggest problem is often a very good place to be at (though of course it is domain dependent, e.g. it's probably not a good problem to have at high-freq trading)
oupablo@reddit
Because spending 5 man years to get a 10% cost savings on cloud costs isn't going to be worth the opportunity costs that 2 man years of new features will bring.
oofy-gang@reddit
That is reductive. There are companies that spend billions on cloud compute a year.
oupablo@reddit
And they still make tens of billions in profit. They are most definitely looking for cost-saving opportunities, but there aren't many places that are going to consider rewriting their services in another language or combing through services looking for bloat unless the savings are going to be massive.
Stephonovich@reddit
I have repeatedly offered to save my current company ~$300K/month by changing a few lines of TF: we have triple-AZ DB clusters, and in nearly all cases none of the readers are even used, because devs don't know how to use them.
“Oh, that’s interesting anon. Anyway…”
pheonixblade9@reddit
I saved my company over 10 million bucks annually by spending a month or so tuning our data pipelines. so many expensive VMs being kept alive just polling to ask if the prior step in the pipeline was done. so much wasted VM time.
ComprehensiveHead913@reddit
That must have felt good!
Stephonovich@reddit
Until they gave you a shout-out in Slack instead of a giant bonus.
I’ve seen that play out time and time again, and it never makes sense. If an engineer finds and solves a problem that saves you millions of dollars per year, give them a giant fucking bonus. They earned it.
ComprehensiveHead913@reddit
Pizza party! Yay! :P
Ok-Scheme-913@reddit
Less by what? 2%? 10%? 50%? Of what, 100 dollars? How many engineering hours salary would you spend on that ridiculously tiny amount?
Maktube@reddit
I mean, for a concrete example from my last job, 80% of $90k per month, so. A lot. I would spend a lot of engineering hours on that. But it turned out we didn't have to, all we had to do was not use Python for our dead-simple document pipeline. It took a couple of engineers about 3 weeks.
Ok-Scheme-913@reddit
Could you expand on this a bit?
In my experience (though pretty limited with relation to actual billing) one of the big costs are DBs with backups, and those are probably absent from performance optimization resulting in cost reductions (though probably some cloud-specific cost optimization is possible).
Maktube@reddit
It could be that we work in pretty different problem spaces, but I've almost never seen database backups be a significant cost in cloud computing. Generally storage is very cheap compared to compute, and that's especially true for things like DB backups where -- since they almost never get accessed, and since even when they are accessed latency is typically a non-issue -- AWS and GCP can put them on literal tape decks and store them for pennies per GB.
Database servers that need really high concurrency and low latency can be pricey, though, especially if you do the typical start-up thing and use something super "general purpose" like Redis that in theory is very well optimized but, because it happens to not fit your specific use-case very well, is awful in practice. Redis is excellent at being a cache for web requests, but if you try and use it as some kind of pseudo-CDN for boatloads of data you will have a very bad and very expensive time.
Where I usually see people run into trouble, though, is just basic compute/server costs. It can happen in a bunch of ways -- most obviously, Python is much slower than most compiled and non-GCed languages. If whatever you're doing isn't time sensitive, that probably doesn't matter, but if you need a lot of concurrency/throughput that's going to translate into way more instances (or more CPUs per instance). If I'm processing ~1k documents per hour (let's say 0.3 per second), and Python takes 100 seconds per document, you need 30 instances to keep up. If, say, Rust could do the same thing 30x faster (which is a pretty average speedup from Python->Rust in my experience), then you only need one. As a side note, the global interpreter lock in Python can make this way worse, if you don't know about it. If you're CPU-bound but the problem is amenable to parallel processing, you have to know that python multi-threading does not use multiple cores unless you go out of your way to use multiprocessing (which is often a pain in the ass).
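The GIL point sketched below (the workload function is a stand-in; real speedups depend on the task and on pickling/IPC overhead):

```python
import math
from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor

def process_document(n):
    # stand-in for CPU-bound per-document work
    return sum(math.sqrt(i) for i in range(n))

def run_all(jobs, executor_cls, workers=4):
    with executor_cls(max_workers=workers) as ex:
        return list(ex.map(process_document, jobs))

# ThreadPoolExecutor: all workers share one GIL, so CPU-bound jobs
# serialize anyway and you get roughly single-core throughput.
# ProcessPoolExecutor: sidesteps the GIL, at the cost of pickling args
# and results across process boundaries, e.g.:
# results = run_all([100_000] * 8, ProcessPoolExecutor)
```

Swapping the executor class is the easy part; knowing that the thread version buys you nothing for CPU-bound work is the part people miss.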
Memory usage can trip you up in ways you're not expecting, too. Python uses way more memory than most of the C-likes. If your workload is really variable, you want to use the smallest instance size you can for each job and horizontally scale the number of instances up and down as you get more/fewer jobs, so you don't have a bunch of compute sitting around costing money but not being used. It's pretty easy for that smallest instance size to be dictated by memory with Python applications, but it's the CPUs that really cost the money in cloud computing, because they're what draws the power. You might end up paying a lot more per instance with Python, just because you need bigger instances with more memory and more cores, even though you don't actually need the cores.
My last job had both of those problems, and also, because the document processing time in Python was so slow and we were trying to get ingestion+processing time low enough for it to be used on an interactive webpage, people had started to look into GPU compute. GPU servers are usually horrifically expensive, and massive overkill during periods where you have some jobs but not many (so you can't scale them down below 1, even if you're using like 1% of its compute). So we were spending tons of money on servers, plus more money on GPU instances for testing and development, until we just wrote the damn thing in Rust, did away with the GPU instances, got rid of a bunch of database and cache instances that were basically just there to provide buffer storage for when we had a ton of requests all at once (and also ended up needing to spend less money on load balancing and autoscaling) and our monthly bill went from ~90k to ~20k.
This isn't to say that you should never use Python in the cloud -- I'm actually quite fond of it for the sorts of things that you can solve with AWS Lambda. Also, it can be optimized way more than most people think. You can use really highly optimized third-party libraries for things that are performance bottlenecks (numpy is famously written in c for this exact reason), and you can also look at things like Cython. If you're a small startup that doesn't have the time to put into optimization, though, and you're spending money on compute at any kind of actual scale, imo Python is almost certainly not the right choice.
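The "highly optimized third-party libraries" point in two functions (this assumes numpy is installed; the actual speedup varies with array size):

```python
import numpy as np

def mean_py(xs):
    # pure-Python loop: interpreter overhead on every single iteration
    total = 0.0
    for x in xs:
        total += x
    return total / len(xs)

def mean_np(xs):
    # the loop runs in C inside numpy; typically 10-100x faster on large inputs
    return float(np.mean(xs))
```

Same answer either way; the difference is which language the inner loop actually executes in.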
Ok-Scheme-913@reddit
Thank you for the detailed answer!
My comment regarding the database was based on a friend's startup where that has a very high cost, and I might have extrapolated a bit much from there - but he uses a simple MySQL database with high resilience, but I don't know the details.
Regarding compute/server costs: that definitely checks out, but I think these massively parallelizable tasks are themselves a bit of a niche, but yeah, letting the cloud scale as it wishes can end up expensive, especially when you need special nodes e.g. GPUs.
I was mostly thinking of traditional backend services which may only be scalable vertically (but even if horizontal scaling is required, it would be in the single digit number of instances) - here a "rewrite in Rust" could definitely make better use of the existing hardware, and might prevent the need for vertical scaling, but that may not significantly decrease the costs - but do correct me if I'm wrong, it's really not my area of expertise.
chaitanyathengdi@reddit
this is why
technowomblethegreat@reddit
Management is not willing to pay for performance engineering. If it works and consumers aren’t shouting, that’s good enough. People are pressured on to the next ticket.
kevinossia@reddit
Somewhat. It's also up to engineers to actually take it seriously.
ta9876543205@reddit
https://www.kohala.com
The pages are light and airy, weighing in at most a few kB each.
Rockstar developers today would create the same pages using their favourite tools, weighing in at around a few MB each.
A bit of effort and those programs could be ported to C/C++/C# and run a few hundred times faster.
SituationSoap@reddit
No, they wouldn't. They'd create that website with a static site generator that's plugged directly into GitHub and updates whenever they make a commit.
technowomblethegreat@reddit
A lot of web devs will just use their heavyweight JS framework for everything regardless of requirements.
xmcqdpt2@reddit
I have the opposite problem.
We have this app written in Java because it’s so much faster than python. We have meetings talking about how much faster it is than python. We have senior devs critiquing python all the time.
Many perf-critical sections would easily be 100x faster if we could use JAX or PyTorch. The premature optimization of choosing the "fast" language means we don't get access to auto-vectorized GPU offloading.
considerfi@reddit
I used to work in firmware. I would think twice about using ints vs bytes. The computation time of any nested loops. If I could compress a short array of numbers into a single 32bit value.
Now I work in webdev and I have to force myself to not have my chest tighten at the loops within loops within loops js calls and massive stored data. But it is what it is, just a different world.
It's funny to me that web devs typically interview on all these algorithms and big-O questions, but no one seems to actually think about them in practice. Whereas in firmware we didn't really do the complex Leetcode stuff in interviews even though we did in fact use the concepts.
dendrocalamidicus@reddit
I will say that the mindset you are describing is not always a positive thing.
Like dude the query takes 4 seconds to run and pulls back 2kb of data, who gives a fuck if the 5 lines preceding it make a 16 byte heap allocation and add 10ms to the function. Congratulations a junior now cannot understand it because you've gone all fancy with it, and they'll introduce a bug that causes us a real problem as a result of your hubris.
kevinossia@reddit
No. Performance engineering includes knowing when to optimize vs when not to.
We do it all the time. I do some extremely perf-sensitive work yet there are still plenty of places where I’m like “eh, that part’s fine” because I’ve measured it to not require any more tuning.
The mindset includes understanding those tradeoffs.
dendrocalamidicus@reddit
I don't think that was implied in your original comment. In the context of what you explicitly stated in the comment I responded to, my point stands
sebzilla@reddit
Doubling down eh?
You know it's ok to say "oh sorry, I misunderstood"...
The "context" in OP's original two-line message is what you brought to it, if you didn't think it included understanding when to make trade-offs.
"Performance engineering" is a well-understood discipline and it does in fact include knowing when to make trade-offs.
So.. it's totally ok that you didn't know that (or didn't think of it in the moment), and misunderstood what was being said. We're all here to learn and grow..
Better that than just digging in our heels when we're wrong, don't you think?
dendrocalamidicus@reddit
In my experience, many of the people I've worked with who do not know how to make that trade-off have been very experienced and in the role for multiple decades. So yes, it is relevant to bring up in a discussion with experienced developers.
Why try to patronise me? Not cool.
sebzilla@reddit
What you call patronizing, I call giving you a different perspective to consider, given that you are demonstrating a strong lack of willingness to consider other perspectives.
Truths are not universal, but many are shared, often referred to as "conventional wisdom".
So you may personally know some people who don't understand performance engineering even if they have decades of software development experience, I don't question your personal truth there.
But I am trying to point out that the conventional wisdom here does not align with your experience, and that's perhaps an opportunity to bring more data into your world view around this topic.
I tell all the teams I lead "prove me wrong, please!" because I want to know when my perspective or experience doesn't align with what is commonly accepted or understood around a topic or idea.
I'm always open to people telling me "what you believe is not really in line with what most people understand to be the standard".. And then I can adjust my understanding, and learn.
If that's not you, just ignore me! I'm just a rando on the Internet. ;-)
kevinossia@reddit
Heh, I’m scratching my head what this guy thought I meant by “performance engineering.” LOL
Mechakoopa@reddit
Definitely not runtime injecting my own version of a system library with a highly optimized version of a function that only works for our very specific use case.
Definitely not that.
pheonixblade9@reddit
also, when you're really good, you can isolate the crunchy stuff behind nicely consumable APIs with idiomatically defined usage that should ensure good performance.
Stephonovich@reddit
If you’re hitting an OLTP DB for realtime and it takes 4 seconds to run, you’ve gone horribly wrong. The fact that you’re returning 2 KB of data also supports that, because the only reasonable way that happens is you’re doing something insane like storing a serialized class in a BLOB, or a gigantic JSON object.
dendrocalamidicus@reddit
Or you just have a reporting system in your software...? If a customer wants to report on all of their 250k users' data, they can run a report that brings down a several-MB Excel file, and unsurprisingly the DB query takes e.g. 20 seconds when it's doing multiple joins and groups to pull back the requested info, which they can dynamically build the requirements for in the UI. That is not just fine but completely expected. There are also out-of-hours batch processing jobs and all sorts of other scenarios where multi-second queries are fine in the context of the application's usage. Things are only horribly wrong from a performance perspective if users encounter issues.
Stephonovich@reddit
OK, fair point. I’m so used to seeing what I described that I didn’t think about legitimate use cases.
not_napoleon@reddit
"premature optimization is the root of all evil" --Knuth
I would say what you're describing is not the performance optimization mindset, but rather the clever showoff mindset.
I do a lot of (necessary) performance work in my job, and we usually start by analyzing where the thing we're trying to improve is spending most of its resources. Macro benchmarks and flame graphs are the starting point, to find where we can get the most impact.
The other key is constant monitoring. We pay a lot of attention to changes in our benchmarks, to make sure new changes aren't making existing stuff slower (or more memory intense, or...). It's about knowing what your software is doing and how that changes over time.
Maybe I'm making a no-true-Scotsman mistake here, but in my experience, engineers who care about performance don't do what you're describing, and engineers who care about being the smartest person in the room at all times do. (Pro tip: if you find you are usually the smartest person in the room, you need to find smarter rooms to hang out in.)
GaboureySidibe@reddit
This quote gets used to rationalize leaving a lot of speed on the table, but it was about junior programmers shuffling variables around in their for loops to help the compiler before the program was even working.
The optimizations he was referring to don't really exist any more because compilers do them for you.
Meanwhile you have to design for speed ahead of time if you want things to go smoothly.
nullpotato@reddit
The quote also implies "don't guess where the problems are, look at data to know where to spend effort"
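In stdlib Python, "look at data" usually means cProfile; a minimal sketch (the workload functions are invented):

```python
import cProfile
import io
import pstats

def slow_part():
    # dominates runtime; this is where optimization effort belongs
    return sum(i * i for i in range(200_000))

def fast_part():
    return 42

def main():
    slow_part()
    fast_part()

profiler = cProfile.Profile()
profiler.enable()
main()
profiler.disable()

# Sort by cumulative time: the top entries are where effort pays off.
buf = io.StringIO()
pstats.Stats(profiler, stream=buf).sort_stats("cumulative").print_stats(10)
report = buf.getvalue()
```

Ten lines of harness, and the argument about where the time goes is settled by the report instead of by intuition.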
GaboureySidibe@reddit
Good programmers see this and realize they need to profile and not sweat the small stuff. Bad programmers see this quote and think it's OK to create bloated Electron nonsense: a 400MB GUI that takes up gigabytes of RAM, stutters and lags on a new PC, but ultimately runs a command-line program.
nullpotato@reddit
Completely agree. Although not much you can do to stop bad and/or lazy developers.
not_napoleon@reddit
I agree the specific optimizations he was referring to don't exist any more, but the thinking problem still does. E.g. the comment I was replying to. Everything, or at least most things, in software development is a trade off. In this case a lot of readability is being traded for a small gain in performance, which is likely invisible against larger costs. And I would argue that a person who makes that choice is still pretty junior.
I absolutely agree you have to design for speed (and speed at scale even more), but that starts with knowing what parts of your application will be slow and focusing on improving that. Getting a 90% speedup on something that takes 10 milliseconds is a lot less useful than getting a 2% speedup on something that takes 10 seconds, and if the 90% comes with costs in maintainability, even worse. I think Knuth's observation generalizes just fine to that case.
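The arithmetic in that last paragraph is worth writing out explicitly (same numbers as the comment above):

```python
# A 90% win on a 10 ms step vs a 2% win on a 10 s step,
# compared as absolute time saved per run.
fast_step_ms = 10
slow_step_ms = 10_000

saved_fast = fast_step_ms * 0.90  # 9 ms saved per run
saved_slow = slow_step_ms * 0.02  # 200 ms saved per run

print(f"90% of 10ms saves {saved_fast}ms; 2% of 10s saves {saved_slow}ms")
```

The "small" 2% win is worth over 20x more wall time, before even counting the maintainability cost of the flashier optimization.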
xian0@reddit
I would also consider some of the secondary optimisations worth doing if the complexity can be kept out of the way of the code that actually gets worked on. The few milliseconds here and there can add up to 150ms which you can then "spend" on a nice new feature.
Dry_Author8849@reddit
You need to multiply that by the number of concurrent requests to that function... If you are the only one using it, it will not show up. The 16-byte heap is not a good example, but 10ms may be a no-go.
When your workload increases, those things will be noticed in no time.
PragmaticBoredom@reddit
I guess we've worked in different universes, because loading times and end-user visible latencies have always been among the top priorities for most of the products I've been on.
Some companies are so strict that any change isn't allowed to pass if it causes regressions in the speed benchmarks like app startup time.
Ever since Amazon made the famous statements about how every X milliseconds of loading time reduces sales by Y%, most product managers have been obsessed with performance metrics.
kevinossia@reddit
You and I are fortunate to work in groups like that.
Your average web coder likely isn’t in a group like that.
Mechakoopa@reddit
The biggest competitor in our market is notoriously slow, like... You can go get a cup of coffee and have a chat and it's still loading by the time you get back to your desk kind of slow. I work in process automation and data manipulation where speed is king. It's kind of frustrating how many devs just don't bother with stuff like multi-threading, caching or query optimization because they "never had to do it before" like... Okay, gun to your head you didn't have to do it, but have you seen how slow this code is?
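For the multithreading item in that list, the win in I/O-heavy automation is often this cheap (a minimal sketch; `fetch_record` is a made-up stand-in for a slow database or API call):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fetch_record(record_id: int) -> str:
    # Stand-in for a slow I/O call (database, API, file share).
    time.sleep(0.05)
    return f"record-{record_id}"

ids = list(range(20))

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=10) as pool:
    results = list(pool.map(fetch_record, ids))
elapsed = time.perf_counter() - start

# Serial would take ~1.0s (20 x 50ms); 10 workers finish in ~0.1s.
print(f"fetched {len(results)} records in {elapsed:.2f}s")
```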
nullpotato@reddit
Honestly, that is probably some insane thing the devs would also like to fix but can't. "There is some data we need that has to load off a 50-year-old mainframe's tape drive, and management won't give any budget to upgrade it."
Perfect-Campaign9551@reddit
Or the developers of JIRA; that crap has gotten slower and slower over the last three years.
nullpotato@reddit
My org: dockers go brrrrr
Tundur@reddit
The difference is that a lot of performance these days is actually more resource intensive. We optimise for the user's experience by caching ungodly amounts of data in the most expensive forms of memory, preloading resources that may not ever be used.
Which is fine in isolation, but often results in issues when resource contention comes into play. Actual users have hundreds of open tabs, a game running on their other screen, and other competitors for cache, ram, and so on
ScientificBeastMode@reddit
While this is true to an extent, aggressive caching doesn’t have to be so ridiculously memory-intensive. A lot of the memory usage is not required to achieve the same runtime performance.
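One common middle ground is a bounded cache: you keep the hit rate without unbounded growth. A sketch using Python's stdlib as the example (the function and sizes are invented):

```python
from functools import lru_cache

@lru_cache(maxsize=1024)  # bounded: least-recently-used entries are evicted
def render_fragment(fragment_id: int) -> str:
    # Stand-in for an expensive render, query, or fetch.
    return f"<div>fragment {fragment_id}</div>"

# 2000 distinct keys show up, but memory is capped at 1024 cached results.
for i in range(5000):
    render_fragment(i % 2000)

info = render_fragment.cache_info()
print(f"cached entries: {info.currsize} (cap 1024)")
```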
gefahr@reddit
You're being downvoted by engineers who don't know how to do their jobs. :(
Raildriver@reddit
If you're doing software that the general public interacts with, like google or the amazon store for example, it absolutely makes sense to make that a top priority.
On the other hand, if you can measure your end users in the tens or hundreds, that can easily fall quite low on the priority ladder. And that's not necessarily because this is an early stage startup. My previous company had hundreds of millions in revenue, and my current one has tens. This was just on B2B websites for heavy industry, warehouses, etc, and they care about functionality a lot more than a couple seconds of load time. These guys have been used to waiting hours (or tens of hours) for reports to generate, so waiting 20-30 seconds for a dashboard to load is nothing.
It can be painful for me sometimes, because I'm one to over-optimize, but I have to remind myself that it frequently simply isn't worth spending a bunch of extra time to eke out meager, if any, savings.
pheonixblade9@reddit
counterpoint - the McMaster Carr website is one of the best performing websites ever made.
needadvicebadly@reddit
Wrong kind of performance optimization.
Yes, virtually no project I know of doesn't advertise or try to maintain some sort of KPI on performance. But that's just it, KPIs are easy to game. "load times", "end-user visible latency" are KPIs and are gamed all the time.
Some of the slowest, shittiest tools I've used have amazing performance KPIs.
whisperwrongwords@reddit
obsessed with user perception of performance*
couchjitsu@reddit
Yep, I remember circa 2007 reading forums where people were saying things like "Memory is cheap, developers aren't. It's better to just add another stick than spend the time optimizing."
Stephonovich@reddit
Funny how the people saying that aren’t the Carmacks of the world.
IME, that phrase has always been a way for people who don’t know how to write good code to excuse their ineptitude.
DagestanDefender@reddit
it is still true that memory is cheap and developers are not
Romeo_y_Cohiba@reddit
*developers who think about performance
SituationSoap@reddit
Since 2007, memory has gotten a lot cheaper and developers have gotten a lot more expensive. That statement is more true than it's basically ever been.
krista@reddit
performance engineering is what i do, and it's hell finding employment currently. it seems few actually care past ”the consumers aren't leaving”.
MechanicFun777@reddit
I like the way you think, you must have some experience in this business. Lol. Cheers!
Prize_Response6300@reddit
The time it takes often is not worth it for a business
r_Yellow01@reddit
Commodore 64 enters the chat...
One-Employment3759@reddit
Management is the biggest issue. They want features delivered in 1 or 2 week sprints.
They don't want to wait 2 months for an engineer to refactor an app to be faster. Because it doesn't directly relate to giving people new things or making money. It's just the vibe of having an app that is fast. Which I appreciate greatly, but many consumers have just accepted that they have to wait seconds for things to happen instead of 0.1s or less.
Frequent_Ad5085@reddit
Most management people don't give a shit, that's the problem imho.
keiser_sozze@reddit
Tbh my least favorite personality to work with, and there are lots of them, especially among pre-90s generations. Even without having any proof, they raise a fuss whenever there's something that may or may not perform badly. Then it's up to others to prove that it would perform fine. Argh.
just-dont-panic@reddit
I’d say the requirements have changed dramatically and everything is built on top of another abstraction now.
So it’s like where do we start?
Love this topic
indifferentcabbage@reddit
For most management its just a checkbox
DagestanDefender@reddit
not even a checkbox
Impatient_Mango@reddit
They said the same thing when I complained about our abysmal accessibility: "It's not affecting the important numbers." Those were the signups that quarter. I spent the time writing A/B tests to see if the header on the front page would improve our ~~product~~ numbers.
Fuzzytrooper@reddit
I think project speed is a big part of it - Get it working and fire it out the door asap instead of optimising.
SoggyGrayDuck@reddit
And then they wonder why the recommendation is to redesign and why things can't continue to scale.
Shazvox@reddit
So it's like winning the jackpot in a lottery...
...twice...
...in a row...
kevinossia@reddit
Like everything in life, yeah.
Though there are steps one can take to make it more likely to happen.
Learning C++, for one. That’ll force you into roles where perf work actually matters.
But even that’s not strictly necessary. I was multithreading the shit out of my mobile apps early career and that was just Java and Swift. If you care about performance you’ll find a way to make it happen.
Gugu_gaga10@reddit
Hell yeah c++ mentioned
utilitycoder@reddit
Exactly why I can't stand the npm JavaScript react world 😷
Rascal2pt0@reddit
Node and scripting languages are more popular now. They ease the interface but involve translation and trade-offs that aren't apparent to people. Most people don't know the event loop even exists. Languages like Rust and Go are closer, but a lot of people don't embrace the restrictions and become frustrated or fight them. I rewrote an ETL application from Ruby to Go and took import time from hours to 15 minutes.
Even React, the panacea for frontend, is slow and clunky compared to the jQuery of 10 years ago.
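The event-loop footgun is the same in every async runtime; a Python asyncio sketch of it (timings approximate):

```python
import asyncio
import time

async def blocking_handler():
    time.sleep(0.2)  # blocks the whole event loop: nothing else can run

async def async_handler():
    await asyncio.sleep(0.2)  # yields control back to the loop

async def main():
    start = time.perf_counter()
    await asyncio.gather(*[async_handler() for _ in range(3)])
    concurrent_s = time.perf_counter() - start  # ~0.2s: all three overlap

    start = time.perf_counter()
    await asyncio.gather(*[blocking_handler() for _ in range(3)])
    blocked_s = time.perf_counter() - start  # ~0.6s: they run one at a time

    print(f"awaited: {concurrent_s:.2f}s, blocking: {blocked_s:.2f}s")
    return concurrent_s, blocked_s

concurrent_s, blocked_s = asyncio.run(main())
```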
matthedev@reddit
I'm not sure how accurate your assessment is of code from twenty years ago! Maybe it's survivorship bias, or maybe it's the problem domain you're working in. Business apps in the 2000s weren't usually written with minimal memory or tight CPU usage in mind. Business apps tend to be I/O bound anyway.
Maintainability and speed of delivery are going to tend to be prioritized over these low-level performance metrics unless there's data showing it's becoming a bottleneck for some business-relevant metric.
I kind of agree in spirit with you, though. It's kind of obscene how much memory and processing power is now needed to do the same basic computer tasks with modern software and hardware in comparison to decades past. I get the advantages of using Electron or a Web view for your "native" app, but back in the '90s, a chat application could easily run on machines with megabytes of RAM and CPU clock speeds measured in megahertz.
13--12@reddit
Because people who pay money don't care about performance. For example, Cursor that's based on bloated VS Code is getting insane growth and popularity, but no one cares about Zed that was written from scratch and is highly performant. Or no one is earning big money from the very fast NeoVim.
m-in@reddit
What? I’m using their products and they are pretty damn good!
pyramin@reddit
Arguably every update to iPhone UI in the last decade has been a downgrade. Buggy, laggy, not as user friendly. Updates for the sake of updates or enshittification so that they can sell you a premium version. (talking specifically about GarageBand vs Logic Pro).
I had bought Adobe Photoshop at some point, but it was built for the PowerPC architecture, so when they switched to Intel, I lost it, and now I'd have to pay for a subscription when I already own software that was good enough for my needs.
m-in@reddit
Buy from Serif instead of Adobe. Buy once. Use forever. And it’s cheap!
andymaclean19@reddit
In the last year or so I have noticed that browser based apps have become very heavy on systems and there are a lot of them these days. Some of this seems to be down to the very excessive amount of tracking code in some of them. In the GitHub ui, for example, it literally makes a rest API call to track you every time you scroll up or down in a PR.
This bloat of many API calls seems to be the main contributor to the ‘heaviness’ on my system at least. Jira is the worst offender, using 10s of API calls just to show me a ticket. It does the ‘Jira dance’ often where components fill up over a few seconds, causing re-renders and small resizing of things over and over.
m-in@reddit
Just look at how much memory the tabs take viewing almost… anything :/
crusoe@reddit
The use of bloated, slow runtimes hasn't helped. The normalization of Electron apps was the first big problem. I mean, Java could have its own problems back then too, but later, with the advent of code stripping and jar re-bundling, you could get rather small, lean apps when those were used. Anything using the Eclipse framework, though, is a nightmare.
Python will be this generation's sin, as it has exploded in the AI space. It's a notoriously slow runtime.
m-in@reddit
Look at JetBrains products. Everything they put out runs on a JVM and uses a Swing-based GUI. Yes. SWING. You wouldn’t know if you didn’t look at their code.
Same goes for Syntevo apps.
Java desktop apps written by competent people are pretty damn good, and quite portable.
thewrench56@reddit
Half the world is I/O bound. Python is good enough for that. And you can always just write a C module for Python. Guess what: the LLM libraries are doing this. The sentiment that LLMs/AIs are slow BECAUSE of Python is simply wrong.
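The "drop into C for the hot path" point is easy to demo without even writing an extension: Python's builtin `sum` already iterates in C, and the gap versus a pure-Python loop is the same kind of gap a hand-written C module buys you (a rough sketch; exact ratios vary by machine):

```python
import timeit

def py_sum(n):
    # Pure-Python loop: every iteration goes through the interpreter.
    total = 0
    for i in range(n):
        total += i
    return total

def c_sum(n):
    # builtin sum() iterates in C; same result.
    return sum(range(n))

n = 100_000
assert py_sum(n) == c_sum(n)

t_py = timeit.timeit(lambda: py_sum(n), number=20)
t_c = timeit.timeit(lambda: c_sum(n), number=20)
print(f"pure Python: {t_py:.3f}s, C builtin: {t_c:.3f}s")
```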
Interesting-Rent6615@reddit
I read this comment like 4 times and each time I thought to myself
"What the hell does advent of code have to do with this"
deadron@reddit
Abstraction simplifies development and improves security/stability but comes with a cost. In most business environments speed is not a priority compared to correctness, reliability, and ease of maintenance.
Perfect-Campaign9551@reddit
Only for people that write shit abstractions
Efficient-Pianist-83@reddit
Finally, somebody said it!
dumdub@reddit
It's possible to write highly flexible well abstracted code with 2005 era code efficiency. The idea that you have to choose between speed and abstraction/flexibility is a falsehood.
Both performance and good abstraction are hard to do. Most are too lazy to master either. Few master both.
flowering_sun_star@reddit
The fact that it's really hard to do is a cost in itself, and one that isn't worth paying most of the time. And if you spend time mastering that, then you won't have spent time mastering something else that's more important in the modern world.
Efficient-Pianist-83@reddit
Yes. Having the Windows taskbar be a React app is a true sign of mastery. Instead of mastering optimization, I can truly see the geniuses leaning on the really important stuff for the modern world. No man, you are just arguing in favor of laziness and mediocrity.
iPissVelvet@reddit
That’s not true, and frankly egotistical. Like “I’m a master of writing amazing code because I’m not lazy.”
The world’s demand for software engineering is so high, that shipping velocity and actual delivery is the most prioritized skill of software engineering. So the world adapted to that. The new generation optimizes on that. Efficiency is never learned because it’s not taught because it’s not needed. You really think the 22 year olds that put hundreds of hours into Leetcode, then another hundred on applying to thousands of jobs, are lazy? You’d prefer they spend those hundreds of hours on skills not valued by the industry and become a wizard of optimization, that remains unemployed?
sweetno@reddit
You're overgeneralizing here. There are domains where optimization skill is a necessity and well paid (partly because of the talent scarcity), say HFT. It's just not what the majority of projects are about.
(Also it feels that demand for software engineering has declined as of recently.)
iPissVelvet@reddit
Those specialized domains like HFT are a tiny fraction of the general engineering population. My comment holds for general software engineering, and specifically addresses the original comment.
dumdub@reddit
You're expecting a 22 year old to be a master of anything?
iPissVelvet@reddit
No, but ideally that’s where the journey to becoming a master starts right?
i_would_say_so@reddit
It takes much more effort.
dumdub@reddit
Once you have learned the skills, yes. But it takes effort to get there.
iggybdawg@reddit
Speed of the engineers developing the software is generally a top priority of the business.
Something happened around 2000~2010 where everything flipped upside down. Before then, the computer was expensive and the engineer was cheap. Now the computer is cheap and the engineer is expensive.
IvanKr@reddit
People got tired of limitations of Windows Forms (remember doing lists with it?), they tried everything and it turned out you could sneak webview on a desktop without too much complaints from user.
FluffyToughy@reddit
You just reminded me that MFC existed. Day: ruined.
SuspiciousBrother971@reddit
Good abstraction does this. But I've seen more leaky abstractions that are harder to maintain and read than well-designed ones.
djnattyp@reddit
Primary-Walrus-5623@reddit
I design high throughput data retrieval services. Think 5-20 ms range for a good amount of data. Takes an ENORMOUS amount of effort to write, verify, and ship something like that. One that isn't worth it to the business 99 times out of 100. The path of least resistance is what the business requirements and shipping schedules demand. For the most part I even agree with that approach. Business demanded something different when resources were constrained.
Boom9001@reddit
Yeah, back in the 90s, if you weren't very efficient your program wouldn't run. But there's also a point to make that sometimes they were so aggressive you actually had bugs in edge cases. Look at old video games' weird bugs where memory areas were reused. The average user may never have hit them, which is a sign of skilled engineers, but still.
In the modern industry, with an abundance of resources, you just don't need to be that aggressive. Given the choice between tight memory usage and removing edge-case bugs at the cost of more memory, every software company today chooses the latter. In the past that choice would mean the game/software didn't fit on the floppy/cartridge/etc., so they chose the former.
Efficient-Pianist-83@reddit
Your mentality is beyond toxic. This is exactly how we got a chat application to take 800 fucking MBs of RAM. Because people are afraid of even basic optimizations or worse don't even know the code they write is complete shit.
cannedsoupaaa@reddit
Yea. surely theres gotta be some middle ground though between what you're talking about and whatever the hell Microsoft teams is.
Primary-Walrus-5623@reddit
I think Teams is the best example of it. While it's super bloated and finicky, the developers were given a goal with a timeframe: murder Slack as soon as humanly possible. That's the type of business requirement that necessitates the approach of not caring about efficiency.
Captator@reddit
Until it impacts UX sufficiently to affect retention/NPR etc
Primary-Walrus-5623@reddit
Yeah, and that's when product would prioritize stability and performance. It's all just a balancing game when it's business.
Captator@reddit
Arguably Teams has been past that point for years and leaning hard on the integration with the rest of their suite of tools to compensate, but I agree with your point in the general case.
bynaryum@reddit
My major advisor in my CS undergrad came from industry into academia and worked for either Bell Labs or Xerox PARC (I don’t remember now). He drilled optimization into us in every single class.
I also remember a friend and former coworker of mine at HP (yes I’m old) who complained about the fact that we were using significantly more power to print something than it took to get the space shuttle into orbit.
We have lost the art of software optimization due to the relative lack of restraints.
FriendZone53@reddit
Yeah. We’re also trying hard to keep programmers away from pointers and for loops. I swear modern language design is driven by professors who never want to see another student make a beginner error ever again. Also modern cpus will take absolute garbage code and somehow run it almost as fast as optimized code in many cases. Also, you’re not old, because I’m not old ;)
bynaryum@reddit
I actually prefer Assembly over higher level languages.
thewrench56@reddit
Well, what takes me a weekend in Python will take you a year in Assembly :)
You will also end up writing slower Assembly than LLVM generates (or GNU, doesn't matter).
FriendZone53@reddit
I get that. Writing clean efficient assembly is like painting or playing piano, a performance art.
IvanKr@reddit
For every one like you there are 1000 bootcampers who just do whatever works.
jatmous@reddit
Just write everything in Rust. Easy 10x performance gains.
enserioamigo@reddit
Tbh the biggest drain on a battery is the screen.
ichig0_kurosaki@reddit
Would you recommend any books or resources to get better at performance engineering?
Optoplasm@reddit
I think the practice of software development used to be done by nerds who really took pride in doing things the right way and getting better. These days the practice of software development is about having a cushy, high paying job. In fact, the number of people in the field has virtually doubled every 5 years the last couple decades. Similar trends for entrepreneurs and management as well. You either rocket your valuation into the stratosphere quickly or you give up quickly and move on to the next lottery ticket.
ITalkToMachines@reddit
I think a decade or so ago the constraints changed enough that memory and cpu were no longer the bottlenecks users were most likely to run into. It’s network constraints, especially if you’re dealing with mobile devices.
As a result there was a shift away from optimizing for cpu and memory. It would be good to see it swing back towards a balance, but as others have noted, memory and cpu today are relatively cheap. Our phones outperform a lot of servers that were used a handful of years ago.
The returns for optimizing memory and CPU performance diminish rapidly outside of some very specific use cases.
jepperepper@reddit
Alan Kay does a really good talk on how current software and hardware is so shitty compared to what it should have progressed to by now, given what he was working on at PARC 50 years ago (the Alto, smalltalk, the dynabook, etc.).
Basically the companies prioritize quick releases because first-to-market is important financially, and they don't invest in research any more because the return on investment time is not in line with management politics - you have 2 or 3 years as a top manager to make a profit, and research takes more like 6 or so to start returning.
Alan explains it much better than i could, you should look up his talks.
YareSekiro@reddit
Do you stop using that software and choose a competitor because it feels heavy as shit? You probably don't; in fact, most people don't, and that's exactly why software is heavy these days: companies can afford to be heavy without hurting the bottom line.
mjdfff@reddit
People were saying the exact same thing in 2000. At that time writing in assembler was becoming a lost art.
TheFallingStar@reddit
My CS prof used to say: often in business, buying faster CPU and memory is cheaper than paying for programmer’s time to optimize things.
Perfect-Campaign9551@reddit
So instead of wasting a developer's time, they waste every customer's time. Multiplied across every customer, that's an actual detriment to humanity in general from the productivity loss.
ElbowWavingOversight@reddit
Then the customers should be willing to pay for more expensive software that's better optimized. But they don't. It's not like it's some huge coincidence that the inefficiency of software happens to scale exactly in lockstep with the advances in hardware. It's because we spend engineering resources to optimize software up to the point where the ROI (i.e. the customer's willingness to pay) no longer becomes worth the cost. And since hardware gets cheaper over time but engineers don't, that bar for ROI on optimization gets higher and higher.
CoochieCoochieKu@reddit
Half the sub is dumb about business context
light-triad@reddit
Mine used to say “I’d rather waste a computers time than a persons time.”
iamacarpet@reddit
I’ve heard this overused soooo much by people who don’t actually weigh up the implications.
…so you're saving a couple of days a month half-assing your development tickets, and production runs like dogshit, eating ever greater resources. Fantastic!
As an SRE in a field where response time does have a measurable impact financially (e-commerce), I try to reframe it:
You can spend an extra day to shave 500ms per request off what you are implementing?
Ok, say that’s in a hot code path used on every page load, and we get what, 5mil+ cache miss/dynamic requests per month?
5 million x 500ms is a lot of CUSTOMER time to be wasting.
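That multiplication is worth doing in full (same numbers as above):

```python
requests_per_month = 5_000_000
saved_ms_per_request = 500

saved_seconds = requests_per_month * saved_ms_per_request / 1000
saved_days = saved_seconds / 86_400

print(f"{saved_seconds:,.0f} seconds ≈ {saved_days:.1f} days "
      "of customer time per month")
```

Roughly 29 days of aggregate customer time saved per month, for one day of developer time.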
theuniquestname@reddit
I've had the displeasure of using Oracle Forms... It's impressively slow. And it has the pattern where, if you wait too long (like tabbing away while it takes its 15 seconds to load the next form field), it throws away your input without saving it.
There was an era where you had to press save in case you lost power, then we had a golden era of auto save... Now we are in the auto-discard dark ages. For "security".
iamacarpet@reddit
At least the person at Oracle who implemented it saved a few days on their sprint, though, right?
The years of your life you've wasted using it mean nothing compared to saving a day or two of developer time :).
theuniquestname@reddit
I think they sold it as something that wouldn't take developer time to build each different form... But here I am, a developer, taking time away from working on our actual product waiting for it.
I probably didn't spend enough time to matter myself but I work for a pretty large company!
Bobby-McBobster@reddit
If you can shave 500ms off your calls by spending only a day, then you did something very wrong to start with. The reality is more like: your backend calls take 300ms, and shaving 50ms off of that would take weeks of work.
iamacarpet@reddit
You aren't wrong; it's an example on the extreme side, but it would be a lie to say I haven't seen similar from developers who don't even want to think about performance.
I'm not advocating for optimising to the point of writing everything in assembly like in the past, but developers should at least consider performance - that was my point. Way too many just parrot something similar to the above as a carte blanche excuse to do terrible, terrible things :D.
In my real world example, on first engagement on the front end website that customers primarily interact with, most page loads were taking ~2.5 seconds.
We got this down to a p90 of 600ms within 2 weeks by using some basic tracing tools (Cloud Trace on GCP), and code wise, it was some pretty easy wins.
It’s obviously diminishing returns if everything is already implemented in a fairly sensible manner, but if you’re consistently sloppy and pay no attention to performance, lots of 25-50ms blocks quickly add up to 500ms in the longer term.. Death by a thousand cuts?
Perfect-Campaign9551@reddit
But it does waste a person's time, the users of the software
_raydeStar@reddit
I actually like this a lot!
My first CS classes were C++ then C, so I was shocked when on the job it was C# where they concealed a bunch of memory options, and triple nested for loops were a thing.
Interesting-Win6338@reddit
...your for loops only nest three deep?
maria_la_guerta@reddit
Bingo. It's not an excuse to abandon best practices, but I also don't need every piece of furniture in my house to be hand carved when Ikea exists, either.
rar_m@reddit
Developers have been focusing on making their jobs easier and more pleasant regardless of the impact on the product.
Developer productivity has taken over as the highest priority over anything else, which kind of makes sense.
Also, platforms not coordinating with each other makes things harder, so you reach for something like React Native to avoid maintaining multiple codebases just to have an app on both Android and iOS. Or other frameworks that are basically just a web view.
Then in the webworld, there are so many abstractions on top of abstractions people use to do the same thing and so many apps are just webpages.
That's my take anyways.
Commercial-Ask971@reddit
Progress first perfection later - every PO motto
midwestcsstudent@reddit
I was just thinking about something similar yesterday. I’d much rather release one polished feature I’m proud of at a time than 15 undercooked features that then need to be slowly perfected.
As a user, I would also prefer that. But it seems that businesses (both producers and consumers of software, save for a handful of companies known for strong engineering culture) prefer the latter, and it frustrates me.
No-Extent8143@reddit
IMO this is just economics. Why would I spend extra time and money optimizing code if the first draft is good enough for people to pay for it?
There's also a secondary problem - optimization skills are rapidly declining. We, as an industry, lost the collective knowledge of how to build efficient software.
Nofanta@reddit
For the last 20 years, all people have been doing is reinventing the wheel with another ‘framework’. It’s embarrassing how immature software development is as a discipline.
Albannach02@reddit
Web pages too: that started when WYSIWYG (bad) design apps came in, although at the time dialup was usual and vast expanses of whitespace added to download time and costs. My wife and I made a print-ready book using a desktop publishing application on a desktop computer with no maths coprocessor and a hard drive of 128 MB. Now, many phone apps would be too large for that hard drive. Bloatware is everywhere.
Alternative-Wafer123@reddit
I'm the only guy who has done significant optimization on my 20-year-old monolith app, and I joined this company 3 years ago. I can't imagine how my ex-colleagues could wait 2 minutes for a page to load, or 20 minutes for Spring to start up. I simply reduced the bottlenecks to less than 1s.
Nowadays people don't have solid, fundamental CS concepts and aren't able to deep dive past a certain level. And those offshore SEs can't even write simple CRUD operations. You could give them 22nd-century tech and they'd bring you back to the Stone Age.
roger_ducky@reddit
The apps you’re praising for being lean and memory efficient were considered total memory hogs back in the day.
The only reason they aren’t anymore is because the chipsets are faster and the amount of RAM is now several times larger.
When I was working in the 2000s, I observed that the C/assembly code were also way more efficient memory-wise than the C++/Visual Basic code.
Again, different upper limits cause you to be more or less memory efficient.
thekwoka@reddit
"developer time is expensive, cpu time is cheap"
JaMMi01202@reddit
Watch this video to understand why: https://youtu.be/QUhC5BDZt-E?si=gd4o344lHaVi2TWP
(Warning - it's long - but I'd argue it's worth watching the whole thing.)
Meta and other big tech are basically shipping software based on its impact, and revenue or profit generation capabilities. Quality only matters when it generates revenue or profit. Anything that takes down a revenue or profit making service is EXTREMELY unwelcome. Anything that takes time out of revenue or profit generating capacity, for the sake of improved memory management, would not be entertained for a second - unless that memory management improvement somehow can be proven to generate... You guessed it; revenue or profit.
This is the world we live in; these are the "skills" (really it's more of an attitude; arguably more business-led than legacy development practices which cared about engineering excellence) that are being passed down to the next generation of developers at big tech (FAANG/MANGA etc).
It's not necessarily bad - but it does allow for toxic, addicting, parasite-like software to get shipped without any foresight or quality-of-experience-for-users/ethical checks taking place. Remuneration and promotions/growth at these companies is based on 'impact' and nothing else. Literally nothing else.
glordicus1@reddit
This is fundamentally a problem caused by capitalism. Capitalism is excellent at supporting innovation by giving people the incentive to be the first to do something - they own the idea and get rewarded. However, it disincentivizes spending any more effort than is absolutely necessary on developing a product.
FluxUniversity@reddit
I remember an article from a decade+ ago that talked about the completely bug free code that NASA writes
I don't know if this is that article
https://www.lesswrong.com/posts/TYDqF4EbH3hDDvPaB/when-programs-have-to-work-lessons-from-nasa
But you should look into the code that NASA has to write. Literally bug free code.
paradoxxxicall@reddit
You’re right, but back then highly optimized software was a business necessity, so resources were allotted towards it. Now that standard hardware is good enough to handle more, businesses aren't nearly as interested in it.
Boom9001@reddit
Yeah in modern software resources are abundant and the business pressures tend to favor more features not a small set of tight efficient features.
The fact is business does care about performance, however only to the point customers care and not more. Otherwise you're just using developer time on something that generated no additional business value.
supercargo@reddit
I think it’s worth pointing out that this isn’t a quality unique to modern software. It has more or less always been true in the moment. Old software can appear highly efficient relative to the capabilities of newer hardware (or compared to newer less efficient software). But at the time, a lot of those applications were slow and bloated on older hardware at release.
TruthOf42@reddit
Conversely, look at the car industry. Efficiency went from being of little concern to one of the major things people look at now. And it's all about what customers want.
informed_expert@reddit
Is it though, at least in the United States? People buy bigger vehicles than ever before, and that's inefficient.
bluespringsbeer@reddit
The gas mileage of the F150 is higher than ever and it’s also more capable and bigger than ever. 23 MPG, while not good in general, for that thing is wild.
TruthOf42@reddit
Yes, but even those huge vehicles are more fuel efficient than their older and smaller cousins
deux3xmachina@reddit
It's important to remember though that speed is itself a feature. Especially if you want user retention. If you're "fast enough", great! But if you can make it faster, that can open up compositions that weren't practical before or even keep users from tabbing away to do something before your code responds.
The obvious, pathological example would be something like a website that takes ~400ms to be usable, regardless of page. Maybe fine for most people, but that's long enough that some users will either avoid opening new pages or generally avoid long sessions if they can, since that's stupid annoying. Getting that down to 100ms or less could mean that you have users interested enough in the site that they want more features to get more work done! Hell, see how long it takes to get sick of a REPL that injects an additional 250ms pause before returning. Hard to experiment much when results are slow.
It can be hard to get buy-in that things need to be faster without showing how it either saves money OR enables new ways of using the software. So the trick is to keep an eye out for less immediately obvious uses.
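To feel that REPL pause for yourself, here's a toy sketch (my own illustration, not from the comment above) that wraps Python's stock interactive console and injects an artificial delay before each evaluation; `SlowConsole` and the 250 ms figure are just placeholders:

```python
import code
import time

class SlowConsole(code.InteractiveConsole):
    """Interactive console that injects an artificial pause before
    evaluating each input, to make added latency tangible."""

    def __init__(self, delay=0.25, **kwargs):
        super().__init__(**kwargs)
        self.delay = delay

    def runsource(self, source, filename="<input>", symbol="single"):
        # Simulate a sluggish toolchain: sleep before every evaluation.
        time.sleep(self.delay)
        return super().runsource(source, filename, symbol)
```

Running `SlowConsole().interact()` gives a REPL where every entry stalls for a quarter second; most people find it maddening within a minute, which is the commenter's point.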
Boom9001@reddit
As I said. Speed matters as much as users or the marketing care. And yes it's a tradeoff of new features.
Speed does tend to matter, though, so it's rarely wasted effort today. Also, it's rarely the case that software in the past was faster; it was far more efficient in how it used memory. And memory usage is the one that's far harder to sell as important to marketing until it actually becomes an issue.
ToThePastMe@reddit
Yeah in my experience I usually see optimization tasks pop up if two conditions are met:
CreatedToFilter@reddit
You say that, but I have a reasonably decent work laptop and it starts chugging because of all the endpoint management junk and virtualization distribution systems as soon as I have more than 2 Excel documents open alongside Teams and 2-3 Edge tabs. It's wild how bad it's gotten, imo.
paradoxxxicall@reddit
Oh I feel your pain. But the incentives for enterprise software are a little different, since the ones buying the products aren’t actually the ones using them. They won’t clean up their products until the money is threatened
0wnage2@reddit
This is basically it right here.
LitSaviour@reddit
Ever heard of "Andy and Bill's Law"?
Pbd1194@reddit
it’ll only get worse. very very hard to reason about time complexity when ai is dumping massive loads of code on you
BLOZ_UP@reddit
Wait, so my typescript transpile to 20mb of js running in node, in docker, in k8s, serving a js app that has to make 9000 API calls to 90 different internal and external services to function isn't performant?
Sheldor5@reddit
disable LTE, WiFi, GPS, Bluetooth, NFC, ... and only enable it when needed and your phone will last 3 days
subma-fuckin-rine@reddit
phone battery isn't much of a priority because we're around power outlets 24/7
dogo_fren@reddit
I use my work phone for TOTP only, it easily runs for a week or more with a single charge. ;)
Sapiogram@reddit
Disable Wifi? What do you even use your phone for at this point?
Comprehensive-Pin667@reddit
More actually. I have a secondary phone that's always on but in airplane mode with wifi on most of the time and it lasts over a week. Even though I occasionally use it
Sheldor5@reddit
the cellular phone chip uses the most power because it has to constantly recalibrate its signal strength so that the signal arrives at the nearest tower with the same strength as all other signals, otherwise they would interfere/override each other
CowBoyDanIndie@reddit
Gps is pretty low power if you have a low update rate. It’s only a receiver. All of the others involve transmitting. Cellular uses a ton of power when you are far from or have no signal to a tower because the transmitter goes to max power to try to be heard. When I go hiking I turn off my wifi and when I know I won’t have service I turn off cellular. Bluetooth is on cause it talks to my watch.
brainhack3r@reddit
The problem is that humans are more expensive than computers.
Writing tight code used to be expensive.
nasanu@reddit
It was only like a week ago that once again I was shouted down in r/Frontend for saying that the Knuth quote of premature optimisation doesn't mean you don't optimise (because if you actually fucking read it it says nothing of the sort). Most devs these days seem to think that optimisation is an antipattern.
Still-Cover-9301@reddit
This is going to get deep - well, hey. Maybe some of you will think it's trite.
I agree with the sentiment. But really we are in a mess here. In the bad old days it was a struggle to make anything: it's hard to know what people want without testing it with them. So we came up with more and more lean methodologies to allow building product with fast iteration - that's how you build great product. I'd argue that the web and mobile phones exist, in part, because people want to build better things.
But then, people who fund these things realized you can do that methodology to get to the feature set that people will pay for... and then stop. Now it's a cash cow. Even though it's terrible.
And users (all of us) encourage this because we constantly go for shiny things over quality things.
And in the end this feels like an inevitable consequence of technological change. Just before covid I was still meeting people at work who didn't use a computer!!! Modern humans live for ~75 years but are open to change for only about the first 30.
The_Real_Slim_Lemon@reddit
That happened at my last job lol, CTO thought his feature set was rich enough to focus on stability and maintainability, hired me to get more resources for the task, and then the CEO fired him, myself, and most of the team lol
ok_computer@reddit
Network round trips from API calls, and compounding frameworks upon frameworks for visual design. Nesting objects deep within objects and passing those around for rich experiences and user tracking. Inefficient database design around ingestion vs consumption.
Maybe old software used to be written by actual leading edge devs whereas now it’s just a regular job.
I still have my sublime text and that’s snappy even with multiple language servers and text indexing so it’s possible.
darkveins2@reddit
This is sufficient for software companies to make profits and for people to wield technology. But if you think about the negative effects, it consumes an excess of energy, and it creates a lot of waste.
darkveins2@reddit
Definitely. A good example is 3D game development. Each new generation of GPUs enables higher fidelity assets, but it also creates more headroom for quickly-written, unoptimized code.
My theory is that with each new generation of hardware, the extra headroom is organically filled by more lazy code. Necessitating the release of another generation of hardware, ad infinitum. Whereas otherwise we would hit a stopping point where our hardware is sufficient for the vast majority of applications.
blokelahoman@reddit
You’re absolutely right. It’s a heinous waste. Best you can do is write things efficiently and hope some of it sticks with the developers in your sphere of influence.
old_man_snowflake@reddit
Since the 90s or so, the primary cost driver is not the hardware itself, it's the developers and how much time they spend on it. If spending time/effort on performance paid off, everybody would do it.
If spending an additional 500k on hardware means you don't have to hire 3 or 4 senior developers, that's a rational decision. And it's often not even that much. In theory of course, you're also deferring some tech debt that may blow up catastrophically. But that's true if you're relying on lone wolf programmers.
Antares987@reddit
I’ve been developing software since the 1980s. I can make software running on 1980s hardware using 1980s technology outperform most stuff done by others using modern technologies. And using modern hardware and doing things my way, I have gotten fired more than once for making entire teams look bad.
I got my MCDBA certifications over 20 years ago. The exams put a heavy emphasis on efficient loading of data, indexing, storage optimization, et cetera, and using those techniques on mechanical drives will often outperform stuff done on SSDs/cloud infrastructure.
unicyclegamer@reddit
Still seeing lots of optimizations in embedded
Prize_Response6300@reddit
For every amazing legacy codebase there are 10 horrible messes
Pale_Height_1251@reddit
Totally. Computers are just astronomically faster than they used to be, but we software developers make them feel slower than ever.
I think the fact is that we software developers made more of a mess of our industry than the hardware developers did.
Hardware developers give us tiny devices for cheap that run as fast as a supercomputer did in the 90s and we just shove a load of Docker and Node garbage onto it and call it engineering.
motorbikler@reddit
Optimization is cool as hell. I think why we don't do it is covered by everybody else. I just want to share this video about a modern homebrew NES game and how they managed to fit it into 40KB.
https://www.youtube.com/watch?v=ZWQ0591PAxM
wrex1816@reddit
I think a lot of facets of software engineering have gone by the wayside. I say the same thing over and over, but it's not very popular to admit: we let the barrier to entry to software engineering drop massively in the past decade, and standards and practices are almost non-existent now.
The specific problem you mention though, I've noticed that too. Google Maps is one app that never fails to make me laugh. On long car journeys it's always trying to re-route me onto back roads or dirt roads, and tries to automatically change my route, which it has done multiple times when I didn't catch it. Offers me other routes which only add 47 minutes to my journey... LOL.
And I know why this is happening because I've seen how the sausage is made. You've got devs circlejerking themselves over their cool graph search algorithm to constantly find those other routes that nobody would ever choose to take. But they don't give a fuck that the app is inefficient as fuck. I mean, you'd think that when traveling, the option to conserve battery or data might be more useful than that circlejerk? Nope. Those devs don't live in the real world. They build navigation apps but don't live in places where they really get used... A guy with a push cart and a few burner phones literally broke the app. It's hilarious. These devs just circlejerk the things it's possible to do and not what people actually need.
mechkbfan@reddit
Electron apps are the prime example
Run like a POS, but they're easy to develop and deploy
Until people vote with their feet and wallet, the cycle will continue
SemaphoreBingo@reddit
20 years ago people were singing the same tune about bloated modern apps.
jorvaor@reddit
I can not believe I had to scroll down so much to get to this comment. Really, at the end of the nineties we were already having this conversation.
I guess that optimization is very expensive in time and effort.
syklemil@reddit
Yeah, I remember some stuff from that time where you'd really notice the lag on just keypresses. I was taught Java & Eclipse around that time and, well, I picked up vim and basically swore off Java.
lupercalpainting@reddit
If my car only gets 5 MPG I’m a lot more conscientious about how far I’m driving vs if it gets 50 MPG.
The chip can do more, so we use more of it.
PragmaticBoredom@reddit
To extend the analogy: You can go buy used cars that get 50MPG right now, cheaply. People don't buy them, though, because they prefer all of the features and nice things about modern cars. So they buy the modern car and then complain about gas mileage.
Same scenario with software: You can go back and use all of that old software that people praise, but people prefer the features and benefits of modern software.
smartello@reddit
As far as I'm aware, a 1998 Toyota Camry got 23/32 MPG while my current one from 2021 is at 46 (although it's a hybrid, and there were no hybrids back then; the Prius was more like a proof of concept at the time). Car engines became much more efficient thanks to European regulations and gas prices.
lupercalpainting@reddit
Civic VX got 45+ MPG in the early 90s. The engine was less efficient for sure, but the car also weighed practically nothing.
PragmaticBoredom@reddit
I don't recommend anyone go buy a nearly 30 year old car unless you're a fan of constant repairs.
I was referring to vehicles like a 10-year old Prius that gets 50MPG.
Jaded-Asparagus-2260@reddit
Except that old software doesn't run on modern OSes or hardware anymore, the license servers are long offline, there are active exploits, etc.
I'd gladly use old or optimized software. It's just that I can't join Team's meetings with my Miranda messenger.
wvenable@reddit
But then you run Miranda on your high-resolution display and notice that you have to lean in really far and squint in order to read the messages.
HerbertMarshall@reddit
It's called proportionality bias or proportional reasoning.
lupercalpainting@reddit
Is it? https://en.wikipedia.org/wiki/Proportionality_bias
Seems more like induced demand to me.
HerbertMarshall@reddit
Huh, yea you're right. I stand corrected.
Brahminmeat@reddit
It’s quite literally a power vacuum
gomihako_@reddit
Because you need to download 100mb of assets to interact with any shitty site. DB/network are the bottlenecks there.
Looking at you, Jira.
lacrem@reddit
Everything nowadays is a waste of CPU cycles, especially web and mobile. Things get built over the top of the top of the top of the top of a framework with 9678557 dependencies.
Add an unskilled labor force following Uncle Bob and others like a sect and you get where we are now.
Upper-Discussion513@reddit
Given that there is a lot of performant software out there being written still, and that there has been this huge shift towards dedicated coprocessors - be it some AI chip, Secure Enclave, GPU - I wonder if the drive towards higher abstraction may be due to device compatibility.
For example, Electron apps work on MacOS, Windows, and Linux. React Native supports iOS and Android. This isn’t even getting at ISA, which itself is fragmented. Even if you go with single OS, single Arch, you still have that OS API that can support multiple devices.
So in this situation, we’d likely see much more optimized code if there were some standardized device that everyone uses. However, this assumes some sort of hardware monopoly which can’t be achieved due to antitrust law.
In other words, the generalist animal like the rat is not known for being particularly fast or strong or smart. However, it is optimized for adaptability and so can thrive in many different environments.
Goodie__@reddit
I feel like there's an acceptable level of performance for a given "thing". And as long as we reach that, time and effort isn't put into optimisation.
Yes, we have enough processing power to load every user into memory, transform them, send it to the browser, and let that handle pagination locally. We possibly shouldn't, but we can. And the alternatives are more complex. And for better or worse, by moving 10,000 user records to the end user's browser, switching pages is really snappy.
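As a hedged sketch of that tradeoff (the data, sizes, and `page` helper are all invented for illustration), the two approaches differ mainly in what crosses the wire and when:

```python
# Hypothetical illustration of the two pagination strategies.
users = [{"id": i, "name": f"user{i}"} for i in range(10_000)]

# Option A: ship everything once, paginate in the browser.
# Page flips are instant, but all 10,000 records cross the wire up front.
payload_client_side = users

# Option B: paginate on the server. Tiny payloads, but every
# page flip costs a network round trip.
def page(items, number, size=50):
    start = number * size
    return items[start:start + size]
```

Neither is wrong; it's the same "acceptable level of performance" judgment call the comment describes, made per product.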
Sethaman@reddit
Absolutely true, but I wouldn't call it lost... just less common and not as important. For a while, it was hardware that was the constraint. Then it became software. Then hardware again. Now we're back to software. I relish an optimized codebase and applaud those who take it seriously. It'll be back.
“Hard times create strong [programmers]. Strong [programmers] create good times. Good times create weak [programmers]. And, weak [programmers] create hard times.” ― creatively edited quote originally by G. Michael Hopf, Those Who Remain
agumonkey@reddit
sorry to mention the r-word, but some rust CLIs did bring back a sense of speed that matches the silicon.
there's something indeed weird about performance perception. i felt the paradox a few years ago, when systemd, cheap good ssds, improved kernels, high core counts, and wayland all popped up.. you'd get a nice speed bump, but still there's always too much lag or flaky perf.
meanwhile I found some old 2.4-based kali linux live usb key, everything was crude but there was a sense of direct action that threw me off, i expected the thing to feel slower than the most recent desktop stack you know...
lantrungseo@reddit
Here's my 2 cents:
- In the past, we had limited resources, so the code needed to be super optimized. But that meant only a few excellent people could do engineering.
- Now, with abundant resources of the highest quality, we can lessen the focus on hardware optimization to some extent, i.e., the engineering entry barrier is lowered. So more people can do engineering, more things can be produced, and the chance that one of them becomes great stuff is higher than ever.
- Some modern tools have great optimization algorithms under the hood that work well 99% of the time, while hiding insane complexity behind an abstraction layer. Shitty code will produce a shitty app for sure, but with modern hardware and modern tools, the impact of that shit is reduced to some point, I believe.
pheonixblade9@reddit
makes me think of Mel
NoCreds@reddit
I believe this is the story of VSCode -> Zed, but could be wrong.
VSCode: easy to build, refine the idea, heavy but versatile like clay. Zed: harder to build, requiring more upfront design and understanding (thanks vscode), prioritized optimization.
Simple-Box1223@reddit
I really don’t feel this.
Around the year 2000 I was avoiding Java GUI apps that were worse than Electron apps today. The same thing happened with shinier apps replacing functional ones.
UncleSkippy@reddit
Been in the industry for about 30 years, back when you needed to understand the thunk layer on win32.
Modern software is further and further away from the hardware which means the opportunity for optimizing close to the hardware is left to the layers under you. If you rely on other people to write (near-)optimal code and you get to take advantage of that, then you don't need to think as much about it. It is definitely becoming a lost artform.
Is that bad? Not necessarily. The original dream of "write once, run everywhere" is closer than it has ever been - though we aren't quite there.
That said, it does make me sad that (pick your favorite electron app) is hundreds of Megs or over a Gig in size and consumes RAM on the order of hundreds of Megs or Gig for the level of functionality that it provides (SD card writers/copiers I'm looking at you). A similar native application would probably be dozens of Megs in size and RAM consumption. But, the hardware is at the point where consuming that many resources means less. But but, the software would run better, be more responsive / performant, and be a better software/resource neighbor on the system if it was native or even running on a single layer of platform abstraction, with the developer getting a better understanding of how write efficient code.
End of "old man yells at clouds" moment.
Choperello@reddit
Engineers from the 80s looked at those 2000s apps and said the same things as you. And engineers from the 60s looked at those 80s aps and said....
Ilikewatchingtv@reddit
Senior dev at previous companies, but regular sde at a faang currently
Completely agree. In the "done right, fast, cheap: pick two" saying, companies don't wanna hear "right", just fast and cheap, because their customers want it now.
Most software/program management books I've read talk about how a project launched without tech debt is a project launched too late.
Ok-Scheme-913@reddit
I think phones lasting one full day couldn't be a worse example - feel free to run that "optimized, lean code" on the same hardware, e.g. on a pine phone and see how it boils itself running x11 or any other "traditional supposedly efficient code", while it will be much better with a normal android OS (though the hardware is still terrible). Racing to suspend was not done back then, and an idle CPU still eats a shitton of energy.
That you can run an actually reasonably fast CPU (there are ray traced games!) in a body that fits into your hand and can do 4k videos at 120+fps, while navigating via GPS, now even sending messages directly to satellites, but in general being able to handle a vast amount of connection protocols, handling multiple touches at very high resolutions seamlessly, and lasting more than a day puts it into the magic category even from the perspective of 8-10 years ago.
Nonetheless, there are many areas where I do agree with your observation.
TopSwagCode@reddit
Earlier it was a requirement to performance-optimize so devices would be able to run your code. If you didn't do it, people weren't able to install your software and the company didn't make money.
That's not really an issue anymore. So it's cheaper to spend less time on building and optimising, and just spend more system resources.
Take Electron, for example: build cross-platform desktop apps using web tech. It makes it easy for certain developers to target a new audience with minimal code changes. The drawback is that it's memory hungry.
The same can be said for many other frameworks that aim to lower the barrier to entry.
lennarn@reddit
This is the curse of no-code development platforms that hyperabstract layered frameworks one over the other, duct-taped together for the sake of non-developer convenience, allowing designers to make their business mvp apps without learning to code. It becomes a layered, bloated mess. Please buy my "no-code" Node.js app—running on Firebase, wrapped in React, bundled with Webpack, served through Cloud Functions, and still struggling to render a basic form without spiking the CPU.
alannotwalker@reddit
In this modern fast moving world everyone wants everything fast, who really has time to optimise these days ?
mysteryihs@reddit
Turns out that adding more lanes to the highway doesn't fix traffic, it just means more people who weren't driving before are now driving
Far_Archer_4234@reddit
There is a tongue in cheek philosophy in software development:
"Premature optimization is the root of all evil"
Obviously there are evil roots that are not premature optimization, but the existence of the statement reveals a strongly held belief: Software devs only optimize for performance when it is a known requirement. Very few of us spend our employer's time finding an O(n) solution when a brute force solution is good enough. Readability trumps performance in most cases.
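As a toy example of that tradeoff (my own illustration, not from the comment): both functions below detect duplicates correctly, and the quadratic one is often the right call until a profiler says otherwise.

```python
# Brute force: O(n^2), but short and obviously correct.
def has_dupe_brute(items):
    return any(items.index(x) != i for i, x in enumerate(items))

# "Optimized": O(n) with a set. Worth the extra state only when
# n is large enough for anyone to notice the difference.
def has_dupe_set(items):
    seen = set()
    for x in items:
        if x in seen:
            return True
        seen.add(x)
    return False
```

For a ten-element list the brute-force version is indistinguishable in practice, which is exactly why "good enough" usually wins.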
editor_of_the_beast@reddit
Individual chips don’t really matter when a single user interaction ends up talking to many (dozens? Hundreds?) individual machines. We’re jamming more and more data onto disks, in memory, and transferring it across networks.
I don’t think instruction cycles are the bottleneck.
Mourningblade@reddit
Optimization can be along many axes. Resource use is only one of them.
Here's a few observations that help explain why we are where we are:
Most software that's maintained is internal software that's used by at most hundreds of thousands of people. Spending a SWE-year building features is likely to save those people much more time than reducing their wait time. And if that's not true, it's likely that optimization has already been performed because the users hated the slowness so much.
For most software, it's more important that it gets the right answer than it gets the answer very quickly.
The "right answer" changes due to discovery. Most software is not a finished product but part of a distributed search problem. This means that quickly getting the next right answer enables the next step in search. The cost and time to change the software is a larger determinant of its quality than how long it takes to get to the old, now sub-optimal answer.
The performance bottleneck of the software is usually a small portion of the whole. So the particular part of the software you're working on is likely not the problem.
Frequently there's "room at the top" - choosing a better data structure or system has larger returns than minor optimizations spread across the software. This can frequently mean that "optimizing" a particular program can look a lot more like "move this calculation onto an optimized system" (for example, I've seen a relatively large analysis program rewritten to have nearly all of the transforms done in SQL by a reporting service - it was much, much faster, and the original code wasn't slow).
That said, I've also worked on systems with frankly bad developers. For them, the properties above did not hold because their code was so damn awful (O^n everywhere...), but on those systems it was also true that providing THOSE developers with a person-year to make the system better would NOT be a good investment. We were better off getting a talented senior to come in and help people learn how to un-fuck the codebase and stop adding to the garbage fire.
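The "move this calculation onto an optimized system" idea can be sketched with the standard library's sqlite3 (the table and data here are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("east", 10.0), ("west", 5.0), ("east", 2.5)],
)

# Instead of pulling every row into the application and summing in a
# Python loop, push the aggregation down to the database engine:
totals = dict(conn.execute(
    "SELECT region, SUM(amount) FROM sales GROUP BY region"
))
```

The engine does the grouping in C over its own storage format, which is the "room at the top" the comment describes: a structural change that beats micro-optimizing the loop.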
Wide-Gift-7336@reddit
I’m working on our factory programmer that flashes a small microcontroller. It’s only a one-megabyte image, but the factory was complaining about flashing times. The amount of black-magic wizardry we had to do to optimize the entire pipeline: optimizing the DMA state machine on the UART bus, lowering the priority of other threads, looking through the assembly to find some instructions that could be optimized, etc. You get the point.
It got us from 110 seconds down to about 50. Was I in integration hell for a while? Sure, but man is it rewarding to see how we could make it literally as fast as possible within our objective.
I think in smaller embedded systems we still think very seriously about this.
successfullygiantsha@reddit
My theory has been that every problem has been SaaS-ified, in that tons of money is dumped into building huge tools to solve problems that could have a much narrower scope.
ub3rh4x0rz@reddit
There's a bit of a decomposition fallacy going on here. Just because some software is resource hungry, that doesn't mean every part of it is resource hungry. Modern software often has resource-intensive features that were fully out of scope for its older ancestor.
...that and as others have mentioned, "good enough" is the requirement for business, and it's bad engineering to needlessly optimize something simply because you can, it's elegant, etc.
stupid_cat_face@reddit
As a senior dev from back in the day, seeing the mindset of younger devs baffles me sometimes. The coding paradigms that confuse me:
Just crash the thing on failure. (Relying on containerization to just spin it up)
Shitty eventual consistency implementations that have no upper bound and no feedback
Just run another container mentality with no regard to the capabilities or resources needed on the machine
Just use X framework without caring about dependencies
Expectations of successful operation. Not handling error conditions or error cases effectively
Not packaging software up nicely w/ docs, CI/CD relying on manual steps and calling it production ready.
Admittedly I come from some old school development and have some “dinosaur” philosophical values wrt software but these things seem like basic parts of a production system.
Alive_Direction6123@reddit
I began my SWE career in embedded systems/real-time software using C++ and Python. Optimization and resource management were mandatory.
The past year I've been doing C# and JavaEE full-stack development revolving around updating legacy codebase and end-of-life platforms. The amount of garbage is astounding. Adding to that is an obsolete vendor tech stack due to "existing support" contract, near non-existent program management, zero documentation, and unclear requirements.
Environment consists of QA and Prod. No other stages. Zero tools, no testing framework, and no automation.
A release to prod is made by a senior dev manually building and making the release.
Only supported and approved IDE is Eclipse 2018-09
potatolicious@reddit
This is part of the problem. Our choices of tech stacks is a very large part of why stuff is heavy and slow.
Like, here's the absolute insanity that is a normal piece of desktop software for a very basic use case (think something like Slack):
The core business logic of the app is implemented in an interpreted language with a notoriously slow VM. The language is also single-threaded in 2025 with minimal to no capability for concurrency. The reason for this is to save on developer time and effort, and avoiding maintaining duplicative code bases.
The GUI, despite being fairly straightforward both visually and functionally, is generated in a notoriously slow text-based format, which inherently is memory-inefficient to parse and render. The core GUI "framework" relies on another text-based layout and styling language with a notorious performance-poor layout process. The reason for this is to save on developer time and effort, and avoiding maintaining duplicative code bases.
The whole app is large to download, because it brings in an entire browser codebase, despite not using the vast majority of the bundled and imported code. The reason for this is to save developer time and effort - deeper optimizations on the distributable does not save developer effort but in fact costs ongoing developer effort.
The call is coming from inside the house! This stuff sucks because of us!
ClayDenton@reddit
Yup. Yesterday I had the misfortune of doing a large order on the IKEA online shop. It's wildly unresponsive, very laggy and crashes all the time on my i7 32gb ram developer machine. Feels like the epitome of bad modern web decisions.
randylush@reddit
I had to buy a new phone recently because my 2020 phone was getting too slow.
Phones do not do anything more useful today than they did 8 or 10 years ago. Let's be completely honest with ourselves.
But today they are so slow from all the dumb bullshit that's going on in the background.
Like even just taking a photo is something that a phone from 2020 struggles with now, probably because of all the dumb AI bullshit that's going on. Can we not just take regular fucking photos anymore?
My job requires me to use an iPhone but I am thinking of trying out LineageOS just to see what it's like to use software that's actually debloated and optimized
rottywell@reddit
Yup, ads.
Monitoring, etc. Apps have become way more than the tasks they serve, and devs don't have to actually learn how to optimize them. They just gotta work and not look or act weird.
SituationSoap@reddit
Takes like these have always been very funny to me. Most of our software runs much faster and much more securely than software from 10 years and 20 years ago. Anyone lionizing software from the aughts is simply missing mountains of context about how utterly horrible that software was. Microsoft would regularly need to ship multiple emergency OS-level patches every month because their code was unsustainably insecure.
But like, React isn't 8kb, so I guess it sucks, or something. Gottem.
koreth@reddit
Back in the day, we optimized the living daylights out of things because we had no other choice. Try running those legacy codebases on the hardware they were written on in the early 2000s and they won't seem so lightweight and snappy any more.
In the mid 90s I worked at Sun Microsystems and had a low-end workstation on my desk. I remember griping to my boss that we should force the guys working on the GUI libraries to use the slowest possible workstations so they'd see how annoyingly laggy the UI was on non-top-of-the-line hardware.
In other words: this has been the way of things for a long time. I expect people in the 1960s were pining for the days of fast hand-crafted machine language instead of slow bloated compiled code.
-FAnonyMOUS@reddit
Yeah. We have growing storage and compute speed, but damn, a 1-minute 8K video would probably be a GB in storage size. A single photo now is around 50MB, I guess. A game app now is like a GB in size minimum.
So it neutralizes the advancement and usefulness of those storages and computes.
Ok_Technician_5797@reddit
Today - "Who cares, we have plenty of memory"
vs
Then - "We ran out of memory to store our code"
IvanKr@reddit
In the 2000s, PCs were single-core! They probably had less than 128 MB RAM and maybe a GB of free slow disk space. Software that could run decently on that would fly at lightspeed on today's machines by default. And web technologies were not as developed to make better UI than Windows Forms.
Stubbby@reddit
I remember talking to leadership once. I said I need about a week to rework our data processing pipeline, since the data sets we're getting now blow through the 256 GB of RAM we have on our processing server; I just need to make it work in sections without loading everything into memory. The response was: "does 512 GB of RAM solve the problem?" I said, "yes, for now...".
Ok, they called the IT guy, said, you have a special project, go to microcenter and buy what you need to get that server to 512 GB RAM.
Next morning, I had 512 GB of RAM and postponed the need to do it properly.
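The "do it properly" fix that got postponed above is usually just windowed processing: bound memory by a chunk size instead of the dataset size. A minimal sketch, with illustrative names and a toy aggregation standing in for the real pipeline work:

```python
def process_in_chunks(records, chunk_size=1000):
    """Aggregate a stream of records one chunk at a time.

    Peak memory is bounded by chunk_size, not by the size of the
    dataset, so the same code works whether the input is 1 GB or 1 TB.
    """
    total = 0
    chunk = []
    for record in records:
        chunk.append(record)
        if len(chunk) == chunk_size:
            total += sum(chunk)  # stand-in for the real per-chunk work
            chunk.clear()
    if chunk:  # flush the final partial chunk
        total += sum(chunk)
    return total

# Works on any iterable, including a lazy generator over a huge file:
result = process_in_chunks(range(1_000_000), chunk_size=4096)
```

The trade-off is that any work needing a global view (sorts, joins) has to be rethought as merge-of-partials, which is exactly the week of effort the RAM purchase bought its way out of.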
suntehnik@reddit
>> feel like software optimization is becoming a lost art
Before flash memory on motherboards became a commodity, BIOS (and other low-level microcode) stability was much higher. But this is my perception, not a proven fact. Maybe I am too old.
droidekas_23@reddit
[Andy and Bill's Law](https://en.wikipedia.org/wiki/Andy_and_Bill%27s_law)
"what Andy giveth, Bill taketh away"
nickchomey@reddit
People here will likely appreciate these phenomenal series on performance, people at the margins, JS disasters etc
https://infrequently.org/series/performance-inequality/ https://infrequently.org/series/reckoning/
C2-H5-OH@reddit
This is one of those things I rant about with my more technically inclined friends. Decades ago, hardware being crazy expensive and limited forced engineers and devs to think outside the box to make their games and software run on it. So many optimization techniques being used today were born in those times.
Now that almost everyone has a TB of storage, having 32 GB of RAM doesn't break the bank, and good chipsets don't cost an arm and a leg, almost all devs went from trying to optimise to just assuming more RAM/better graphics/better chipsets will be needed. Games especially are more and more exclusive to the latest graphics card or other such resources.
I don't really have a point, shit just sucks and I can't think of a solution.
martinbean@reddit
It’s cause and effect. Most developers don’t care to bother optimising for that very reason: computers are more powerful than ever, so they're far more forgiving of “unoptimised” software, as they have the power to just run it anyway.
imdavidmin@reddit
Squeezing extra performance out of technology is only necessary when it's needed. It's always going to be a balance between speed of delivery (which benefits real people with new capabilities) and optimisation (when the existing solution can be improved to benefit people)
If we go extreme on one end, you have to machine code every single program and we'd be still in the digital stone ages. Other extreme you'd be running multiple nuclear reactors to power Cyberpunk 2077.
What you observe now is simply the optimal balance between the two
AndreVallestero@reddit
Wirth's Law
It's hard to justify optimizations over feature releases unless your org leader has a very strong technical background.
autokiller677@reddit
Optimization in the past was only done because hardware was not powerful enough. It’s not like people 30 years ago were intrinsically more disciplined. It was a necessity dictated by the circumstances.
As for today, I'd rather have more features than just a few that have been optimized far beyond what's necessary for a smooth experience. So as with everything, balance is key.
SeaLouse6889@reddit
More powerful devices enable software businesses to hire a bunch of cheap devs to slap together giant piles of low barrier libraries. In business, you move product. To a web agency, that means cranking out websites by the gross.
baldyd@reddit
It's why I enjoy working in videogames. You're often running on fixed hardware so optimisation is still incredibly important.
ImYoric@reddit
Absolutely. We're currently living on the assumption that CPU speed will scale up infinitely (which has been wrong for 15 years or so) and that we can infinitely scale up by adding more CPUs/GPUs/nodes (which is extremely wasteful, was mostly true when VC money was free but isn't anymore, and anyway doesn't happen by magic, you still need lots of effort and skill to make it work).
I believe that Python and Ruby on the server are, to a large extent, symptoms of this. Since CPU is (considered) cheap, let's prioritize iteration speed!
cd_to_homedir@reddit
The performance angle is really mostly influenced by business decisions, not raw competence. We still have competent engineers.
I've created API endpoints that fetch data from a large database and certain API calls take several seconds, sometimes even longer. Could I optimise it to achieve subsecond response times? Yes but not without refactoring the entire app and the database structure, which would be prohibitively expensive. Does the current solution solve a business use case? Yes, because it's just good enough.
As an engineer, I would feel better if I was given the opportunity to optimise such software but I don't get to. In the past you had to aggressively optimise software because you had a very limited resource budget. Nowadays though hardware is relatively cheap. It's just the natural consequence of technology reaching a certain level of maturity.
DigThatData@reddit
rayfrankenstein@reddit
You also didn’t have the “reuse cheap web developers to write native apps with JavaScript” approach back then. And don’t even get me started on Electron for desktop apps.
And to top it off, they then added agile on top of that, so any work you do that doesn't meet your story points quota for the fortnightly sprint risks your own termination - all for a technical excellence that management doesn’t care about.
SuaveJava@reddit
THIS. Modern Software "engineers" don't have the autonomy to build quality if their management doesn't value it.
progmakerlt@reddit
I completely agree. Software was way more optimised 10+ years ago.
Now people care less about that - if there is a performance issue, we will give more CPU / RAM (we’re in cloud anyway, so editing a config or moving some slider is enough) and that is it.
Plus, time to market now is crucial. There are a bunch of companies competing for the same space - if you over-optimise things, you might go out of business really quickly…
MiataCory@reddit
That's it. There it is. That's the thumb on the thing that makes it feel like we've killed optimization entirely.
We move as fast as we can. Sometimes that's hardware limited. Sometimes it's that devs can't spend too long optimizing AutoCAD27 or they'll never start AutoCAD28.
But, if you import most of 27, fix a few bugs, add a few features... Voila, 28 is here and the cycle continues.
It's as optimized as it needs to be. When you need real-time, come on down to us embedded engineers and we'll ask you for a memory map. We've got plenty of other work to do too.
Scrubbing through Chrome's memory trace log to try and figure out the timing manually sucks. Adding timing to functions to figure out the best one to start optimizing sucks. It's all a huge time-suck on a thing that really... already works "well enough".
mxsifr@reddit
I've been in the industry for 20 years. It's too hard. Seriously, companies just don't want to pay for it. The consumer market has no idea how much better it could be, so they don't complain. They think computers are just "like this". The industry is built around short-term private profits, not long-term quality. It's not worth it to the shareholders and execs when they can get their payday and leave everyone with an inferior product, just like every other industry. We're no exception.
abandonplanetearth@reddit
Would you rather have powerful hardware that is left idle?
kbder@reddit
Yes, that’s exactly how you get maximum battery life. “Race to sleep”.
kracklinoats@reddit
Jevon’s paradox: software edition?
uuqstrings@reddit
Someday in the future, ironically, they're gonna talk about how "it took a long time for software to catch up to advances in hardware"
Scottz0rz@reddit
Yeah, there's a mix of factors and ultimately it comes down to money.
The vast majority of businesses and apps don't really care about performance as long as the UX feels good: they prioritize development velocity over correctness right up until it becomes a bottleneck, then suddenly it becomes a priority. If your app is a battery hog and your API call takes 50-100ms longer than it should due to the server doing dumb stuff, the problem isn't going to be addressed until the UX is degraded enough that it pisses off users and costs the company money.
Businesses are very reactive when it comes to technical debt and 99% of the time will not give a shit unless you can show financial impact from addressing technical debt that would outweigh feature development.
As a result of business priorities shifting away from performance, developers don't really put as much thought and often don't know when they're doing something stupid, or more likely, they do know that the thing is stupid, but they don't have time to do things 100% the correct way because the thing works.
I don't really see it as an issue, because the upside is that when you see dumb stuff in the codebase 5-10 years later because it's become a bottleneck when business requirements change enough and stuff has scaled up 10-100x, senior engineers can still find and fix goofy shit in the code. Case in point, we're all talking about this right now.
A differentiating factor between a regular and a senior/staff engineer is not only identifying performance issues and technical debt in general, but being able to strategically target them, roadmap things, and properly communicate and justify them by showing financial impact.
Business people only understand money, so you have to find ways to speak the language.
latchkeylessons@reddit
That's accurate. Most businesses these days are chasing highly specific, short-term revenue targets - not just in software development. Many would say this forces quality downward. That's not a function of this career/industry so much as the evolution of global business. It definitely manifests in non-performant software development in different ways, though.
SoggyGrayDuck@reddit
Yes! The cheap storage and processing has led to some terrible practices. My arguments for using best practices went out the window 10 years ago, and now companies wonder why everything needs a complete redesign. We've also been hamstrung by getting people without hands-on experience into leadership positions. They feel foolish when digging into a problem, so they just let the engineers decide. Unfortunately that leads to spaghetti code, and now it's found its way into the backend. The backend used to hold the important pieces so the front end could move fast and even start over quickly if needed. Now that everything is so interconnected and intertwined, you can't fix something without breaking something else. Worst of all, no one in leadership even wants to admit this problem exists!
Western_Objective209@reddit
There's still some amazing optimizations going on in things like core libraries and compilers, but I think most devs will agree with you that application software is pretty wasteful and sloppy.
In my experience, it's always a trade off. I build a new application, it's tight, it's snappy, feels great to use. Then as time goes on, we keep adding more and more features; they are considered essential to the business, but they no longer fit well with the optimal architecture that was initially designed. I used to try to push back with mixed success, but the thing is the users care about features a lot more than they care about performance.
powdertaker@reddit
Because layer upon layer, upon abstraction, on top of translated languages, on top of frameworks, on top of other stuff..........
lastPixelDigital@reddit
I am sure there are different niches that prioritize performance but a lot startups building mvps are just trying to get that money. You could argue the same for certain gaming studios.
sethamin@reddit
Constraints breed creativity, but a lack of constraints breeds productivity.
Software is more of a volume industry now as it doesn't have to be as heavily optimized (in most domains).
son_ov_kwani@reddit
I’ve been ranting to my guys about it but they just don’t get it. They just say “compute is cheap”. In my childhood years I used to use the Apple iPod 1st generation and mahn it was so smooth and fast. 1GB of RAM on a desktop was quite a lot and I could play some heavy games like Doom 3 and Deus Ex.
Today’s software is all about shiny UI and zero resource optimisation.
dethswatch@reddit
it's worse than that- we change frameworks (or similar) so frequently that we can't master anything.
Fun-Shake-773@reddit
Performance optimization won't be paid for, at least at most places I know so far. Only if it's really becoming an issue - and even then it's more "I can't work at all"; just slow is fine 😅
SolarNachoes@reddit
They released Cyberpunk about a year early on a project that was several years “late”. Saved the project by making sales but took another year to fix and optimize.
pinpinbo@reddit
How many people have the backbone to delete code instead of adding more? Very few.
PopularElevator2@reddit
This is a huge issue for me. People love to throw more hardware at the problem and then start complaining about budget issues, especially in cloud environments.
For example, I worked for a mid-size company that was spending close to $250k a month in cloud cost because of poor optimization. I reduced the spending by 50k a month by fixing their db design issues and optimizing their queries and code. We scaled down their servers, db, and cache because we needed less hardware to run the software.
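A lot of the db fixes described above come down to one pattern: replacing N separate round trips with a single batched lookup. A hypothetical sketch - `fetch_user`/`fetch_users` and the toy `DB` dict are stand-ins for real query calls:

```python
# Toy "table" standing in for a real database.
DB = {i: f"user{i}" for i in range(1000)}

def fetch_user(uid):
    """Simulates one round trip per call (the N+1 pattern)."""
    return DB[uid]

def fetch_users(uids):
    """Simulates a single batched round trip (e.g. WHERE id IN (...))."""
    return {u: DB[u] for u in uids}

def names_slow(uids):
    # N round trips: each call pays network latency + query overhead.
    return [fetch_user(u) for u in uids]

def names_fast(uids):
    # 1 round trip, then cheap in-memory lookups.
    batch = fetch_users(uids)
    return [batch[u] for u in uids]
```

Both return the same data; the difference only shows up on the cloud bill, where per-query latency and connection overhead multiply across every request.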
EmilSinclairsFriend@reddit
You do realize that most people write software as a part of a profit driven organization, right? Devs do (and should) align their priorities with the organization's priorities. If I'm asked to make a cross platform desktop app in 2 months, and I deliver an electron app, and have loads of happy users who pay money - do you think anyone cares that this app could have been written in another way to be 100x more performant but took a team of 5 a year to finish?
awkward@reddit
It’s an enormous waste of time and resources. In order to squeeze out marginally more value from developers we’re making products that waste everyone’s time.
Website bloat is a big one- every site is designed for mobile first, but also assumes full WiFi connectivity over broadband for basic functionality.
Softmotorrr@reddit
I’ve felt this as well for a while, and my view is it comes down to two major things:
1. Time to market is often held as the #1 priority, with the intent to establish a monopoly of some kind after that. With this business strategy, time spent optimizing beyond the absolute bare minimum is seen as wasted dev time.
2. Most software businesses sell to other businesses, and the end users have no choice but to use those applications, so performance isn't a concern until way later, and then only in specific cases.
With the development ecosystem primarily driven by factors like these, the need for efficient resource usage has fallen and the practices around making performant software have atrophied.
Tldr: there’s rarely a business need for high performance software. As someone who loves performance tuning and optimization, i am sad.
EntshuldigungOK@reddit
Performance has definitely become a second-class citizen to scalability and fast production, and by a distance.
It's understandable though - computing costs are going down.
Performance matters only when it's noticeable.
Hundredth1diot@reddit
I once wrote a content management system that ran perfectly fine on an original Playstation Portable browser.
Arkarant@reddit
Back then: Build unoptimized software> doesn't run on end users PCs > no sales
Today: Build unoptimized software> runs on end users PCs, laptops, tablets, phones > sales
Simple as that
UnregisteredIdiot@reddit
Once you get past the table stakes (use a hashmap instead of searching a list 15,000 times), high performance code tends to come at the cost of readability and maintainability. A famously extreme example of this is the fast inverse square root function from Quake: https://en.wikipedia.org/wiki/Fast_inverse_square_root
To some degree, sacrificing performance in favor of readable code where it's easy to come in and fix bugs is a good thing. The trouble comes when devs stop thinking about performance entirely. We have Streams/Iterators/IEnumerables to enable deferred execution and reduce memory usage. What do devs do? Drop it to a list and then stream it again. Or worse, iterate an IEnumerable repeatedly causing the deferred code that generates its elements to execute multiple times. Need to intersect two sets? Let's do an n^2 "algorithm" that involves a million .contains() calls. The performance implications genuinely never occur to most devs.
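Two of the patterns called out above, sketched in Python (generators play the role of Streams/IEnumerables here):

```python
# 1. Deferred execution is consumed once; iterating again silently yields nothing.
def expensive_rows():
    for i in range(3):
        # imagine costly computation or I/O behind each element
        yield i * i

rows = expensive_rows()
first_pass = list(rows)    # runs the deferred code
second_pass = list(rows)   # empty: the generator is already exhausted

# 2. Set intersection: repeated .contains() scans vs. hash lookups.
a = list(range(1000))
b = list(range(500, 1500))

slow = [x for x in a if x in b]  # every `in` is a linear scan of b: O(n*m)
fast = set(a) & set(b)           # one pass over each, hash lookups: O(n+m)
```

The slow and fast versions return the same elements, which is exactly why the implication never occurs to most devs: nothing is visibly wrong until n and m get big.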
SnooStories251@reddit
Software bloat, software scope, software chrome, etc.
More features are delivered at the cost of making code faster.
Antique-Stand-4920@reddit
Performance optimization often adds complexity. If a performance optimization won't yield significant benefits, then it's not worth the extra complexity.
Solome6@reddit
Performance optimization starts at the data structures. If they aren’t optimal, you will never get to true optimization. Start there and and it won’t have to be super complex later.
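A small illustration of that point - the same logic over two different structures, where only the data structure choice changes the cost. Popping from the front of a Python list shifts every remaining element (O(n) per pop); `collections.deque` does it in O(1):

```python
from collections import deque

def drain(queue_like, pop):
    """Pop every element from the front, in order."""
    out = []
    while queue_like:
        out.append(pop(queue_like))
    return out

items = list(range(5))
as_list = drain(items.copy(), lambda q: q.pop(0))      # O(n) per pop
as_deque = drain(deque(items), lambda q: q.popleft())  # O(1) per pop
```

Identical results, so no amount of later micro-optimization of `drain` would recover what the wrong initial structure costs at scale.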
rashnull@reddit
If it don’t make money, it’s useless!
PragmaticBoredom@reddit
There are phones designed to prioritize battery life. You can even get simple phones that last an extremely long time.
You just don’t want them, so you haven’t sought them out. And because people don’t buy them, companies don’t produce many of them.
This is a perfect example of revealed preferences differing from stated preferences. You don’t actually want the phone with long battery life, you want the phone with the best features that is also small and lightweight.
dendrocalamidicus@reddit
Some of these inefficiencies are just the natural progression of the tech imo. Things like everything being a web app just makes sense. It is a good thing to have consistent cross platform UI tech stack (HTML, CSS, JS) and it means you can make the app once and have a desktop app, mobile app, and in-browser app all through one implementation. I could not care less about the processing inefficiency of this - it has become popular because it allows you to deliver value effectively.
I would say the same for front end frameworks like react. Is it crazy complicated for glorified crud apps and calculators? Yes. Is it extremely easy and quick to use? Also yes.
Technology exists to facilitate our needs, and the increase in processing power facilitates us doing that with more layers of abstraction, and thus ease.