Are your companies actually saving money with AI? Or just putting time into it, hoping to do that eventually?
Posted by Complete-Equipment90@reddit | ExperiencedDevs | View on Reddit | 148 comments
To me, it’s feeling like a hype cycle. But, I’m not sure of this, because my view may be too narrow. So, I’d like to hear from you what you are seeing and experiencing at your own companies.
Details, to explain my perspective.
I’m an IC, 10 years in dev with a publicly traded software company, 25 years in the software industry. I mention this because during my time I’ve experienced the dot-com bubble and several other cycles. Investment trends aside, there are always 3 core cost-reduction strategies that get applied at opportune points: layoffs/reduced hiring, offshoring, and automation.
AI seems to me to be this moment’s attempt at cost savings through rapid automation (and sometimes offshoring, in the cases where companies have been using cheaper labor under the guise of using AI). I also think this provides a convenient explanation to investors in regards to RIFs: a way to remedy the common situation that a lot of companies no longer need the growth workforce they had in 2022. Simply put, telling the market that you’re leveraging AI for cost savings sounds better than admitting you’re reducing hiring because you can’t produce at the same profitability as before.
As interesting as AI is, at least for some tasks, I’m not seeing that it’s really up to the task of writing important code without a lot of hands-on attention. Again, feel free to correct me! I’m only one person. I bet it works well sometimes, when the application really matches something it can automate reliably. But not in general. And therein lies my skepticism about the level of enthusiasm I’m seeing at the C level and in the media. While there is a lot of signing on for AI, there usually aren’t a lot of details provided on any specific projects.
So, where are the breakthroughs? Microsoft is going to give AI tools to teachers in WA state. But, I’m not clear on what scenarios they will help with. I’ve heard: lesson plans and grading. Ok, but those really aren’t the hardest parts of teaching. I suppose chatbots can reduce customer service burden. But, what more than that?
briannnnnnnnnnnnnnnn@reddit
this is going to be a bigger bubble than '08 and the dot com crash, honestly, speaking as someone who has won AI hackathons and overseen AI products
bluetrust@reddit
If you search Google Trends for "ai bubble (topic)", it started building steam in August. No idea if this is a reality or not, but I remember '08, and by '06 I remember people asking if we were in a bubble and when it would pop. So maybe we have a while to go, maybe it'll happen tomorrow. Likely if it happens it'll be sudden and shocking.
OneCosmicOwl@reddit
I'm starting to lose hope on this bubble bursting. Or as you say, when it bursts it'll be so bad that I won't even have a job to support me to feel schadenfreude at the same time.
boringfantasy@reddit
Yeah, it's not gonna burst; the models genuinely are improving at warp speed. Codex is one-shotting most tasks in industry now.
danintexas@reddit
It is a bubble and it will pop worse than 08. I am in agreement with /u/briannnnnnnnnnnnnnnn
With that said I think the bubble is being supported by the metric ton of money thrown into it by the giants like MS/Meta/Google.
It will pop and it will be VERY bad. IMO we are starting to see the music slow and folks are scrambling for chairs.
OneCosmicOwl@reddit
i don't want to leave mr danintexas
Complete-Equipment90@reddit (OP)
I get you. There’s a high investment. Are you thinking that buyers will pull back on buying AI services, or just that investors will stop responding to the same song from so many companies when the investments don’t pay off?
SnakeSeer@reddit
Without AI speculation, the US economy is already in a recession.
subma-fuckin-rine@reddit
no idea about the money side of things; it hasn't saved me much time. but it has saved some effort, although not much. mainly used it to write tests, which works pretty well if the code is structured to be testable. but even then, i've gotta double-check the output: check it's actually testing the right things and not mocking everything or disabling a test so it passes (lol). so the only tangible benefit for me is that i can kick it off, do something else for a little while, and come back to the tests later. kind of a toss-up if it's been worth it or not
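The "mocking everything so it passes" failure mode above is worth spelling out. A minimal illustration, with a hypothetical function and tests (not from the thread):

```python
from unittest.mock import MagicMock

def apply_discount(price, pct):
    """The real code under test."""
    return round(price * (1 - pct / 100), 2)

# Anti-pattern: the "test" swaps the unit under test for a mock,
# so it goes green no matter what the real code does.
def test_discount_mocked_away():
    fake = MagicMock(return_value=90.0)
    assert fake(100, 10) == 90.0  # asserts against the mock, not apply_discount

# What a reviewer actually wants: the real function, real edge cases.
def test_discount_real():
    assert apply_discount(100, 10) == 90.0
    assert apply_discount(19.99, 0) == 19.99
    assert apply_discount(0, 50) == 0

test_discount_mocked_away()
test_discount_real()
```

Both tests pass, which is exactly the problem: only the second one would catch a bug in `apply_discount`, which is why the output still needs a human read.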
maccodemonkey@reddit
I keep hearing analysts going “We’re talking to CEOs who are talking about cutting 25% of their workforces! It’s real! People need to stop doubting!”
CEOs are always talking about cutting their workforces by 25%. If the day of the week ends in “day” they are thinking about cutting their workforces by 25%. I’ve been hearing about that since the Covid boom stopped.
What changed was two things:
- CEOs have always thought about cutting, but never wanted to scare their employees into leaving before they were ready to cut. Now, with the bad CS job market, they have no problem talking about cutting because people are job hugging. If you’re scared your CEO is going to RIF you, your only choice is to work extra hard instead of job hopping.
- AI gives them the cover to talk about it as “well, I’m not a bad person, this is a technology change.”
So yes, totally not ready to replace devs unless the work was totally brain dead to begin with. It’s just continuing the post covid trends but now out loud.
I hope devs remember this going forward. I remain confident the market will rebound in a few years. But everyone should remember behind the perks they’re always thinking about cutting the moment it becomes convenient.
Which-World-6533@reddit
AI is at most replacing the tasks that previously would have been automated with scripting, etc., just rebranded as "AI".
And there's a lot of stuff that can be automated in most businesses.
PopularElevator2@reddit
Someone gave a huge presentation at work with 20K+ people in attendance. It was an Excel macro built in Copilot Studio. They made a huge deal about it, but it was just pulling data from multiple DBs and Excel spreadsheets, then importing them into a main spreadsheet. They could've done that in VBA.
djslakor@reddit
"Could've done that in VBA" < "write a single prompt in 30 seconds and get the same result."
bluetrust@reddit
That's not how it works. It's non-deterministic so you have to test the hell out of it to quantify how reliable it is. Accounting automation that's 90% reliable? That's a disaster every month. 98% reliable? Maybe it's ok.
It costs a ton of money to test a big automation and quantify its reliability, because you have to check it by hand -- hundreds or thousands of times, possibly thousands of fields each time. And you have to recheck it regularly, because the black box you're building on top of is not stable: you don't know what the LLM providers are optimizing on their end as time goes on.
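The arithmetic behind that point is simple but worth making concrete. A back-of-envelope sketch with made-up volumes:

```python
# Why "90% vs 98% reliable" matters at volume: a small per-record error
# rate becomes a guaranteed monthly mess. Record counts are hypothetical.
def expected_failures(records_per_month, reliability):
    """Expected number of records the automation gets wrong each month."""
    return records_per_month * (1 - reliability)

for reliability in (0.90, 0.98, 0.999):
    bad = expected_failures(5_000, reliability)
    print(f"{reliability:.1%} reliable -> ~{bad:.0f} bad records/month")
```

At 5,000 records a month, 90% reliability means roughly 500 wrong records to find and fix every month; even 98% still leaves about 100.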
Particular_Maize6849@reddit
Yes. I use AI basically to write the scripts I would have written manually. In that sense it saves time. But I would not trust it with the meat of my job.
maccodemonkey@reddit
Yeah, the CEOs talking about it as if automating things with software has never before been possible is a little nutty.
And in most-but-not-all-cases the script is still the better option.
Which-World-6533@reddit
Yep. Scripts are deterministic.
The vast majority of CEOs fundamentally don't understand what Devs do.
maccodemonkey@reddit
I’ve also read that the “problem” with scripts, for the consulting or SaaS class, is that you “only” pay once or monthly, depending on the product or service.
Part of the push to LLM based automation on the wrapper products is they can charge you per use. The inefficiency is the point because it needlessly drives more revenue.
johnpeters42@reddit
I worked for consulting outfits for several years (I mostly do in-house work nowadays), and for the most part, the mentality was not "squeeze more dollars out of the client", but rather "do a good solid job so they come back to you for the next thing they want built / refer others to you".
xSaviorself@reddit
I also don't trust this whole token-based charging scheme either given how wasteful these things are.
Material_Policy6327@reddit
Yeah, the most solid use cases we’ve seen for AI are just more error-prone automation
donatj@reddit
They cut 15% of our workforce the day before mandatory AI training. Everything is on fire.
MoreRopePlease@reddit
I wonder what they expect to happen.
WrongThinkBadSpeak@reddit
We are all sacrificial lambs at the altar of next quarter earnings
Relevant-Ordinary169@reddit
To save the day after creating the problems and cash out after sitting on their asses for years not providing value to the company.
_mkd_@reddit
Number go up?
Pleeeease.
OneCosmicOwl@reddit
I wonder how long this can go on (the beatings will continue until morale improves).
These last few years it has become increasingly tiresome to work in IT under this environment of CEOs and managers daily threatening automation. We have to log in day after day knowing that these people are desperate to fire us. It might've always been like this, but I'm not sure it was always this explicit. As you say, now they don't even need to be "civil" about it because it's a technology change; they can post on Twitter 24/7 about how yet another new model absolutely changes everything.
budding_gardener_1@reddit
I mean, that's always been the case, but a lot of devs are somehow blinded by the perks and the high compensation and think they're immune to it. That's why we get stupid shit like wE dOnT nEeD a uNiOn, because people think unions are only for plumbers, electricians, airport baggage handlers, etc.
So here we are: companies can lay off huge chunks of their workforce and nothing happens, because of decades of this dumb shit
bluetrust@reddit
Something I've been thinking lately is that if ai actually made my team 10xers, I wouldn't lay off 90% of them, I'd tell the ceo, "this is our competitive advantage! Give me the budget to hire 10x more developers! We'd be a 100x team and drown our competitors in software: experiments, a/b tests, automated tests, APIs, features, bug fixes, documentation, open-source projects, mobile apps, developer relations talks, scaling -- anything and everything. It'd be an 'I WIN AT BUSINESS' button. Our competitors couldn't keep up."
And yet people don't go on hiring sprees to become juggernauts in their industry, instead they engage in cost cutting. So, this is yet another way to see that nobody in charge actually believes this stuff conveys a real competitive advantage.
disposepriority@reddit
AI is not doing the hardest parts of either teaching or programming. However, in our case it really does get to deal with a lot of the shit you'd otherwise be doing slowly simply because of how annoying and uninteresting it is. In that sense, it really reduces burnout for me.
For me this is the biggest thing it affects. I can easily find what I'm looking for online without AI; I can also open other parts of the project (or past projects), grab some similar code, and modify it, or even, god forbid, just write the code. However, at the end of the day you're less tired and less annoyed when AI is able to take care of finicky little tasks that can cost you 2 hours over a tiny detail you missed because of how boring they are.
Bobby-McBobster@reddit
I have the exact opposite experience where if you care about anything that you do even a bit then AI will massively increase your frustration because of how dogshit it is.
Confident_Ad100@reddit
That sounds like a user issue. LLMs are pretty good at replicating existing patterns, and can save a lot of time if you know how to prompt them and give them the right context.
I personally migrated 50 services to a new format using copilot. I had to do some hand holding for the first few examples, but after that it was doing what takes me hours in minutes. I ended up saving at least a week if not more.
Bobby-McBobster@reddit
If I were replicating known patterns I would be using a library or a framework, not duplicating existing code with an LLM.
I feel like everyone boasting about AI use is generating 0 business value and only writing code for the purpose of writing code.
shared_ptr@reddit
Can’t agree: if you’ve built the right abstractions and frameworks then it allows you to use AI with a huge amount of leverage.
Simple example is when you have a great abstraction around writing forms. You document the abstraction and a golden example, now you can give an AI tool a screenshot of a design, show it an existing form, and it’ll build an almost pixel perfect implementation in 30s.
No abstraction will stop you needing more forms, not in a growing product. But AI can help you create, change, and consolidate them for near zero effort.
Now replace form with API endpoint, external integration, whatever X thing your product needs N of to build your product surface.
Bobby-McBobster@reddit
Create a model in your backend, have the backend send the frontend the schema of your model, have your frontend auto-generate a form that matches the model.
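A rough sketch of the schema-driven idea described above, with hypothetical field names (a real implementation would derive the schema from the backend model and render framework components, not raw HTML):

```python
# The backend ships a schema describing the model; the frontend
# auto-generates a form that matches it. Everything here is illustrative.
SCHEMA = {
    "fields": [
        {"name": "email",    "type": "email",  "required": True},
        {"name": "age",      "type": "number", "required": False},
        {"name": "nickname", "type": "text",   "required": False},
    ]
}

def render_form(schema):
    """Build HTML inputs straight from the schema: new model field, new input."""
    rows = []
    for f in schema["fields"]:
        required = " required" if f["required"] else ""
        rows.append(f'<label>{f["name"]}<input name="{f["name"]}" '
                    f'type="{f["type"]}"{required}></label>')
    return "<form>" + "".join(rows) + "</form>"

html = render_form(SCHEMA)
assert 'type="email" required' in html
```

The trade-off the rest of this exchange argues about: this keeps forms in lockstep with the model for free, at the cost of exposing the model's shape through the API.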
Thanks for proving my point and showing that you're clueless and a terrible developer.
shared_ptr@reddit
I’m interested in how many engineers in this thread feel exposing your domain models wholesale via dynamic APIs is either a good idea or results in a quality product.
I am likely a terrible engineer though 😂 you won’t get an argument from me on that.
Ddog78@reddit
I've had both experiences now. Just depends on the codebase I'm in.
disposepriority@reddit
It's pretty decent at common tasks, semi-boilerplate, reading through shit docs for you, generally does great as long as you know exactly what you want and keep the scope as small as possible.
I view it as a version of Google, if being annoying, intentionally obtuse, overcomplicating things to sound smart, and padding everything you write with personal anecdotes were illegal.
darthexpulse@reddit
Agree with this take. If you tell it exactly what you want to do and how you envision it should happen, it works really great.
anonyuser415@reddit
I recently had Claude experiment with JS bundling strategies, piping the outputted bundle gzipped sizes to a file and running hyperfine on the build script at each step to understand speed impact. I then had it summarize the results into Markdown tables and pick the winner.
Took me like 10 minutes to set up and saved what would have probably been at least a couple hours of my time.
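For a sense of what the size-measurement half of that harness might look like (the timing half was hyperfine; the bundle contents below are stand-ins, not the commenter's actual setup):

```python
# Compare gzipped output sizes of two bundling strategies and report a winner.
import gzip
import json

def gzipped_size(data: bytes) -> int:
    """Size of data after gzip, roughly what a server would send over the wire."""
    return len(gzip.compress(data, compresslevel=9))

# Stand-ins for the bundles each strategy would emit.
bundles = {
    "single-chunk": b"console.log('app');" * 500,
    "code-split":   b"console.log('app');" * 300,
}

results = {name: gzipped_size(blob) for name, blob in bundles.items()}
winner = min(results, key=results.get)
print(json.dumps({"sizes": results, "winner": winner}, indent=2))
```

The point of the anecdote holds here: the LLM's job is just orchestration (rerun build, record numbers), while deterministic tools produce the measurements, so the results are cheap to verify.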
RabbitLogic@reddit
How confident are you that it didn't just lie to you about the results? Did you check its work? How long did it take to understand the test bench it created?
anonyuser415@reddit
Completely confident; yes; seconds.
There’s a reason I outsourced benchmarking to other tools. Claude effectively was just a tool fuzzing a JSON file and rerunning commands I provided.
All the raw data was saved and the commands are replicable. The end result speed up is also readily apparent from our CI jobs.
DrIcePhD@reddit
Using the AI ourselves? Ehhhhh I dunno, maybe? I think it depends on the job, personally I hate it and haven't seen it be super useful.
At client request we're shoving it in everything they want though so we're part of the problem.
Immediate-Cap2128@reddit
depends a lot on what you automate. On the business/marketing/ops side, if you scope the workflows properly (reporting, client follow-ups, internal Q&A, or cross-tool updates), you can realistically save up to 6 hours a week per person. that’s what we’ve consistently seen when things are designed well and connected to the right data.
On the dev side, my CTO keeps saying it’s a game changer — not in the “AI writes all the code” way, but because it handles context‑switching, debugging help, and quick implementation checks insanely fast. it’s less about replacement, more about flow and speed.
so yeah, there’s hype — but also real, measurable time wins when you aim it at well‑defined, high‑volume tasks.
dorkyitguy@reddit
My boss was all in on AI in the beginning. He’s been talking about it a lot less, recently. Instead of “AI is going to do everything and revolutionize our work” it’s now “we should try to design our data models in a way that they’ll be useful when AI is up to the task”.
00rb@reddit
Every company is burning money with AI. It's just a cost sink.
throwaway_maple_leaf@reddit
My tinfoil hat theory is that they’re planning for the long term. They know it’s dog-ish right now, but hope that if every company trains the AI, 1-2 decades from now some of the brain work can be more automated, and they can save on workforce
CodeGrumpyGrey@reddit
This goes double for the actual AI companies. None of them are profitable at this point and none of them have a realistic path to profitability
rentar42@reddit
As usual, the ones selling the tools for the gold rush are the ones making the most money.
NVidia is a clear winner of the AI bubble.
yxhuvud@reddit
The thing about NVidia's position is that it is very much in danger of having its margins pushed down to nothing by commoditization. They do have a hardware and software lead, but at some point the competitors will be good enough, and then all that value will evaporate.
reboog711@reddit
Are there any businesses where this is not true?
fire_in_the_theater@reddit
idk, Steam somehow has absolutely dominated video game sales/distribution with no meaningful competition in sight
consumer software inertia can be a pretty incredible moat, especially if maintained well.
yxhuvud@reddit
Yes, there are plenty of businesses that are already operating in areas with products that are competing with each other, without any company having the kind of competitive advantage that NVidia currently have.
DeadlyVapour@reddit
Wait? You mean selling pickaxes is more profitable than mining Ethereum?
csanon212@reddit
Ironically, there wouldn't be a good market for pickaxes right now, because of industrialization on the mining front, and tariffs and high domestic production costs on the home gardening front. It's probably a losing venture to sell pickaxes now.
csanon212@reddit
My side hustle is jewelry supply and it's a steady business.
Meanwhile, the jewelers who sell actual gold are freaking out over supply/demand and futures contracts, pissing off customers by not buying when gold is at record highs, and having cash flow problems. Meanwhile I'm just out here selling shovels, minding my own business.
margincall-mario@reddit
Don't be fooled into thinking AI is unprofitable. Tokens have a ~70% margin.
Which-World-6533@reddit
It is for us. It causes a lot more problems than it solves.
Stubbby@reddit
No, they can't. They could perform the same tasks a graphical user interface could, and if we can't figure that out, no AI can fix it.
I will give you an example - I tried to re-book an airline ticket recently, I used their interface, selected the booking, confirmed the upcharge, and the moment I got to pay and complete, the UI said I need to use an assistant.
The assistant was the AI bot. This time, instead of using a calendar and a list of flights like every sane human being, I had to describe what I was looking for in words. I got to the right flight, it said there was an upcharge, I agreed. The moment I got to pay and complete, the LLM said I needed to speak to a person.
Then I got to chat with a person, did the same thing for the 3rd time, this time I was also typing into a word box but the assistant on the other side used calendars and lists of flights (user interface) and they informed me about the upcharge, I agreed. This time I got to pay and complete.
If you look at the progression, we did GUI -> LLM chat -> human chat -> GUI and the chat section was a complete waste of time, money and development effort since it served no purpose. The customer service person could also be completely eliminated since I got everything I wanted from the first GUI.
The LLM isn't fixing anything in this situation; it's only making it worse.
Confident_Ad100@reddit
I have first hand experience of LLMs helping reduce customer support need, because I built one.
It can’t fully replace customer support, but there are a lot of junk and bad questions that LLMs can deflect without the need of a human being.
Your anecdote is just one data point. I saw a 30% reduction in customer support volume after introducing AI.
It’s not like human agents are infallible; I’ve personally had to coach customer support agents many times to do the right thing.
obviousoctopus@reddit
Is it possible that people were just giving up on getting help and leaving in despair?
Confident_Ad100@reddit
You can track things like NPS and CSAT to make sure that is not happening.
It was a fintech app, so it's unlikely people would just give up on their money. Most of the questions the LLM was answering were things you could find in our documents, but people wouldn’t look at them.
The most common questions were things like “why haven’t I gotten my reward” when the transaction was still pending, or “why haven’t I gotten my ACH” when we were pretty clear it takes 48 hours for funds to settle.
We did a gradual rollout, and had to rewrite some documents and the chatbot settings to make sure it didn’t over-help like it always wanted to.
Stubbby@reddit
There may be a whole load of customer service requests that I would never make (like calling a bank to ask what's my balance), so my experience is limited to that of a "digitally literate person". But again, I still believe what you call a successful rollout of an LLM remains a failure of user interface in my book.
In other words, a good UI/UX that lets you get the information and deal with your problem yourself would have a far greater benefit than an LLM, which is an inferior interface, like ordering Amazon products through Alexa.
I wish I NEVER had to use bank or airline customer support, and I really wish I could do simple things on my own, like setting up a recurring mortgage payment without calling customer support or chatting with an LLM.
obviousoctopus@reddit
Thank you for clarifying, this is very sensible.
If you are able to share -
How did you approach ensuring correctness? And, assuming you did iterate, what are some of the changes that led to improvements?
snowystormz@reddit
One anecdote? I'll counter with mine. We leverage OpenAI for initial deflection of FAQs and for escalation based on sentiment and product information in the text. Instead of agents and employees answering questions and setting up appointments, we have reduced those types of contacts to store employees by 85%. Some people refuse to use a UI to set appointments. Some people refuse to search FAQ documents. AI is incredible at doing that for them.
When you take those off your employees' plates, they're able to focus on increased sales and better in-person customer service in the stores. They are happy to not be agents anymore.
thephotoman@reddit
Very much the latter: putting time into it in the hopes that it'll save them money.
The problem is that AI just isn't a productivity booster. Asking AI vs. asking Google just isn't a big speedup: I still have to read what it said and evaluate whether it's relevant. The difference is a pair of alt-tabs and maybe a typing-speed increase. Agents are a crapshoot.
The problem is that AI doesn't actually reduce the amount of time I spend on the hard stuff. It's actually crap at design (I've fed it Stack Overflow demos, and it claimed the demos were production-worthy). It tends to give things stupid names. It isn't very helpful at making me understand the problem I'm trying to solve. It just helps with the easy part: writing down the solutions to my problems. And since it's only helping on the easy part, it just doesn't help that much.
The problem is that AI is obsequious, and managers tend to be grandiose narcissists. As long as AI glazes its users, managers are going to think it's the greatest thing ever. They'll see its ability to make a mockup from a Figma file as actual productivity.
shared_ptr@reddit
We’ve seen a huge boost using AI on our engineering team, but while we’re obviously spending large amounts now (maybe $100/day/dev for all the AI usage), we see it more as an advantage we can use to speed up.
If we can capitalise on it faster and keep up with the industry, then we can move faster for a while, at least until they catch up. Which presumably they will; they’re not dumb.
But until the value of building a great product for our business decreases, the additional spend ends up looking like great ROI when considered as a productivity multiplier on very expensive engineer salaries.
snowystormz@reddit
Don't confuse leveraging AI with doing difficult tasks, or with replacing the people doing them. The prize today is its ability to quickly get the menial, trivial, boring, redundant, time-consuming tasks done. It's increased my throughput tremendously. It handles all the stuff around the edges and lets me focus on the difficult tasks.
Chat, make me a PowerPoint presentation, here are the slide themes, here are the points for the slides, clean them up, give me a color theme suited to the presentation, present key take aways, etc...
Chat, scaffold me a vue3 project with vite, i need these services: i need these components: i plan on using these 3rd party components:
Chat, i need some test cases for this function I am writing... it has these inputs: try and break it...
Chat, here is a link to API documentation for this company I need to integrate with. I need to do ___ and ___ write me up the functions for authentication, token management, and calling those APIs I need to.
And boom, it's all done in minutes; you can review, tweak, and be off and running. Management is happy because you're fast, productivity is up, you still ain't getting a raise, but you might get a Saturday off from running TPS reports.
newprince@reddit
Companies spent way too much on AI and when the bubble bursts, it's going to be a bloodbath of layoffs (which they will blame on AI)
TJGhinder@reddit
R&D is always a money sink... until it isn't.
My perception is that yes--right now companies are burning a lot of money trying to figure out how to properly integrate AI into their companies' workflows.
Personally, I spent about 3 months testing different approaches, and now at my small business I have an AI Project Manager (who runs great--better than most humans I've had work for me), and I have greatly increased the speed of code reviews, because an AI can catch most simple errors before my seniors ever need to waste their own (valuable) time reading it.
Yes I still have humans "in the loop," and yes it took a few months of (expensive) trial and error. But now my company is saving thousands per month... I'm sure at large companies, the research cycles will be longer, and the eventual savings will be much larger.
rayreaper@reddit
In my experience: basically no, we're not seeing any meaningful cost savings yet.
The two main categories of tools I’ve used are:
- AI-generated prototypes and POCs (e.g., Lovable, GeminiCLI)

Additionally, we're losing "knowledge" of how these systems work because we're not fully writing them ourselves; more on that below.
So, it's not that the tools are useless; they're just not delivering the kind of transformative savings or productivity gains leadership seems to be banking on. We haven't yet figured out how to integrate them cleanly into workflows without introducing new friction points.
xMcNerdx@reddit
Definitely agree with you on the knowledge aspect. Using AI agents to write code in a hands-off manner is not something I see being sustainable. I don't want to be responsible for a production application if I or my team isn't familiar with how it works. In my experience so far, using the agents to write POCs and entire features is neat, but it ends up taking me longer than if I had written it from the start, because I need to go through and learn how it all works. Wrapping my head around that takes longer, and I feel less comfortable with the end result.
fried_green_baloney@reddit
Where are the vast amounts of AI-created open source code?
Where are the screencast YouTube vids showing how an application was created in minutes that would have taken an unaided developer a (day/week/month/year/the age of the Universe)?
LiveMaI@reddit
I think that 'saving money' is hard to quantify for this question for the same reason that it's historically been hard to measure developer productivity with metrics.
Since you've been in the industry for around 25 years, you probably remember hearing the infamous stories about how added lines of code was once a productivity measure, and I suspect that this kind of metric will be making something of an unintentional comeback when we start discussing how to measure productivity gains from AI-assisted code.
Another difficult part in measuring this is: how do you compare the time it takes to implement something with AI assistance vs without? For anything non-trivial, nobody is going to sit down and take the time to implement something twice. Even if you do implement something both ways, whichever method you choose second will have the advantage of hindsight from when you were solving the problem the first time, and will inherently be faster to implement.
If we move to code quality as a metric, that becomes a bit more comparable between human-written and AI-assisted code. But code quality is also one of the areas where the 'money saved' part of the equation is really hard to estimate.
I work primarily on software that goes into manufacturing facilities, so I can get some sense of how my company can make more money from faster code, but even that estimation requires information on how many units we're producing once a product is launched, which is information I don't have access to, so I can only make a guess there. For people with a less direct connection to their company's revenue, I have no idea how you would measure that. As an example, maybe you save one headcount for your frontend team, but your UX is worse with generated code and costs your company some sales.
Point is, there are a lot of situations where you can't really account for all of the positives and negatives to the business using AI, and people who give you a simple yes/no are probably just guessing. Just like with human developers, unless you build good metrics and have hard numbers to work with, you can never really know for sure.
smutje187@reddit
https://www.reddit.com/r/AI_Agents/comments/1ky7lli/two_thirds_of_ai_projects_fail/
On the hype cycle and results
charlottespider@reddit
That's the same as any enterprise software project failure rate. https://www.3pillarglobal.com/insights/blog/why-software-development-projects-fail/
remimorin@reddit
I don't see "improvement in velocity" per developer.
I do see an improvement in quality, though. There's no longer any reason not to have a very nice automated test suite on every feature.
I am a full-stack developer (not a lot of UI, enough to debug and do fixes, but that's beside my point) and I am quite good with SQL, although I am not a DBA. Now I write better SQL, closer to a DBA's. Because I have enough knowledge to understand advanced SQL (and AI suggestions), I can now go much further and handle complex cases more elegantly.
Like others have said: AI didn't blow the roof off, it raised the floor.
kagato87@reddit
As an experienced dev/DBA, don't trust it too much. It's good, but it's also prone to all the usual AI code flaws.
It's pretty nice, though. I would call SQL my "main" programming language, and it's still handy when I've created some massive analytical monstrosity and I'm trying to get its resource usage down. It's good at finding things like "maybe join this table sooner, in a CTE", and if I realize I can probably drop a column that I know is causing problems, I can explain what I want to do and it'll point out if I've missed anything.
I've had a few notable wins with it for sure. If you tell it to turn a parameterized query into a strongly typed function it gets it right (once you tell it not to use float), and I was able to get it to create a little PS script to turn our table definition XML into a compact csv it can use to get the right output data types. A context rule and it formats them perfectly every time, and even checks I've enabled RLS properly.
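The XML-to-CSV conversion described above can be sketched with nothing but the standard library (shown in Python rather than PowerShell, to keep the examples in one language; the table-definition XML shape here is invented, not the commenter's actual schema):

```python
import csv
import io
import xml.etree.ElementTree as ET

# Hypothetical table-definition XML; real schemas will differ.
TABLE_XML = """
<table name="orders">
  <column name="id" type="bigint" nullable="false"/>
  <column name="total" type="numeric(12,2)" nullable="false"/>
  <column name="note" type="text" nullable="true"/>
</table>
"""

def xml_to_csv(xml_text: str) -> str:
    """Flatten a table-definition XML into a compact CSV of columns."""
    root = ET.fromstring(xml_text)
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["table", "column", "type", "nullable"])
    for col in root.iter("column"):
        writer.writerow([
            root.get("name"),
            col.get("name"),
            col.get("type"),
            col.get("nullable"),
        ])
    return buf.getvalue()

print(xml_to_csv(TABLE_XML))
```

The compact CSV is much cheaper to feed back into a model as context than the original XML, which is presumably the point of the exercise.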
remimorin@reddit
I am clearly not a DBA. Although I understand everything you said, this is not usually my concern when I write SQL (hence my need for a DBA when I hit my limits).
But the CTE example is a good one for me. I'd never used them much before, and now I always use them because Claude refactored my queries that way. So my queries are easier to read and maintain.
Also, I tend to manage complexity in code and write "boring SQL". On my last task, Claude had me handle default values with COALESCE in SQL, for example, and handle 2 order-by scenarios with a CASE WHEN. I would have written 2 separate queries for those.
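A minimal sketch of those two patterns (COALESCE for defaults, CASE WHEN to pick the sort key at query time), using sqlite3 so it's self-contained; the table and columns are invented for illustration:

```python
import sqlite3

# Toy table: one row has a NULL price, one has a NULL priority.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (name TEXT, price REAL, priority INTEGER)")
conn.executemany(
    "INSERT INTO items VALUES (?, ?, ?)",
    [("widget", None, 2), ("gadget", 9.5, 1), ("gizmo", 3.0, None)],
)

# COALESCE supplies a default price; the CASE WHEN terms choose the
# sort key from a parameter, instead of maintaining two queries.
rows = conn.execute(
    """
    SELECT name, COALESCE(price, 0.0) AS price
    FROM items
    ORDER BY
        CASE WHEN :by = 'price' THEN COALESCE(price, 0.0) END,
        CASE WHEN :by = 'priority' THEN priority END
    """,
    {"by": "price"},
).fetchall()
print(rows)  # [('widget', 0.0), ('gizmo', 3.0), ('gadget', 9.5)]
```

Switching `{"by": "priority"}` reorders the same result set without touching the SQL, which is exactly the "one query instead of two" win described.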
So the more "DBA like" I meant using more SQL features.
I don't trust AI code at all, but I don't trust myself either (I believe in tests/testing).
And this is also what I meant by "raise the floor". You probably felt that full stack devs usually wrote Neanderthal SQL. Now I think I write in the same language as you, but that doesn't mean I can correctly handle the same problems a real DBA can tackle (nor do I have all the right 'design reflexes', etc.).
charlottespider@reddit
I'm in consulting, and here's what I'm doing with Gen AI:
Right now I'm leading a project where we're using an AI module to rewrite thousands of pages of content to conform to new style guidelines and laws. We now need 1 human editor to spend ~1 hour per page, instead of a copywriter for 4-8 hours plus the editor. Apologies to copywriters, but we saved hundreds of thousands of dollars on this project.
For my next project, I'll be leading a group that transforms Figma designs into React components. We have working prototypes, and as a result will need half the FE devs for our next client engagement. That's another several hundred thousand dollars in savings.
A different group (a much larger one) is developing risk models and reports with AI/ML and gen AI. This will save months of analyst time, shrinking the team and shifting the analysts' work to verification and refinement of the risk analysis. Our company will save $1M+ over the next couple of years.
You can scoff all you want, but it's real, it's happening on a large scale, and of course there is substantial ROI immediately.
seven_seacat@reddit
oh man, I've seen what happens when you try and auto-convert Figma docs into reusable components using AI. Good luck...
charlottespider@reddit
Maybe the pattern/technology wasn't ready when you tried, but my team has it working very well.
It's not perfect, but given a component library and the right prompts, we are generating functional apps from mock-ups in minutes, and an FE engineer is then integrating into and building additional functionality (async operations, component interactions, etc.).
This fits our needs and speeds delivery by quite a lot. It doesn't eliminate the need for FE, but we need about half the staff.
ebtukukxnncf@reddit
Would you be asking this question if the answer was what you are looking for it to be?
klowny@reddit
My company has an AI offering (non-LLM), so that's been greatly profitable for us while people on the hype train throw money at us to use it. Now whether that tool is profitable for them, sales team feedback has been: "shrug, but it's not obviously not useful for them so it seems like they'll keep buying"
Now, of course, we're blowing that money on buying AI tooling ourselves, and our internal feedback has been much the same: maybe useful, but probably not useful enough to justify the cost. We keep buying anyway, like everyone else, in case it gets better.
It does feel like one giant bubble.
Zeikos@reddit
Every time I see these discussions I think that most people are missing the point.
Yes, AI currently isn't effective at cutting costs nor at increasing revenue.
But that's not the (main) point.
Companies care more about staying relevant and appealing than being as effective as they could possibly be.
Not "buying in" AI is seen as not being willing to keep up with competitors, which is incredibly unappealing to markets/investors.
Then there is a second point.
Now AI is largely ineffective, but it's very hard to predict when it will become effective.
The technology is so new and volatile that it could happen overnight for all we know.
A new model and/or a new workflow that increases effectiveness ten-fold could be possible for all we know.
We simply have no idea.
This forces companies to edge their bets, and to buy into AI tools while they're still in their ineffective infancy.
They might stay ineffective for years, or they could become incredible next week.
What's the rational choice?
Qwertycrackers@reddit
This is a framing tightly constructed to make silly decisions appear reasonable. If someone is ineffective now but might become awesome later there's no reason to think you can't just buy the awesome version if it ever happens. If anything investing in the immature version would just waste the capital you could spend switching to the good one.
Also you don't edge bets, you hedge them.
Zeikos@reddit
Thanks, fixed that :)
Which is exactly what executives do.
I don't think most reason through it, the vast majority does follow the hype because of FOMO.
To add to that, if it ever happens, whoever develops it would have absolutely no incentive to let other people use it.
yxhuvud@reddit
So essentially FOMO, basically identical to how consultants were paid ridiculous money for building websites during the dotcom boom. This probably won't kill most of the companies making the investments, but at some point the tide will retreat, and there will be plenty of dead companies that didn't survive it.
The FOMO leads to a feeling that 'this time it will be different', so people rush bad solutions instead of building by established processes. LLMs are great and they will provide a stupid amount of value in the coming decades, but there will be a lot of really costly mistakes made as well. So have some caution and keep using your brain.
GronklyTheSnerd@reddit
Or, like the first AI boom, they may never get any more effective. I can still remember “expert systems.”
I think it’s very likely that we’ve already seen all that’s going to happen with the current technology, and that only minor improvements will follow.
If I’m right, all of this will be a gigantic waste of time and money.
Zeikos@reddit
I find that unlikely.
No system has been as effective as transformers to process natural language.
Thing is, LLMs are a drop in the bucket for applications of the transformer architecture and, more generally, attention mechanisms.
I do agree with you that the current iteration of the stack isn't adequate, there are missing essential pieces to increase the model's sophistication.
But the capability is there.
We have seen anything but minor improvements in the last few years. Concepts like agents weren't a thing a couple years ago.
Imo the perception hinges on the fact that the gap between "not good enough" and "incredible" is very small.
An agent system that can't actually understand code is not good enough.
The moment it can, well, all bets are off.
The issue imo is that this tech's improvements aren't necessarily gradual. Or rather they are until they aren't.
We'll continue seeing AI as subpar until it suddenly isn't; there aren't many other technologies with this property.
That said, taking for granted that it will happen is foolish. It's the reason why I try to distance myself as much as possible from AI hype, the hype is just manufactured marketing crap, what matters are the underlying principles and the research that's being actively pursued.
Esseratecades@reddit
What I've found is that when you keep your current processes, especially those of review, QA, and validation exactly the same as if AI didn't exist, and you let people use AI as they please, there is a bit of a learning curve where things slow down. But once people get over the learning curve they are usually noticeably more productive than they were before.
I'm not talking 10x productivity, but still a noticeable leap.
Where a lot of organizations mess up is in mandating AI, or changing their processes to put AI first. Once you begin to create or perform actions that you can't review you've gone too far.
reboog711@reddit
My employer is looking to get AI to do first pass PR reviews...
PetroarZed@reddit
It's great for first pass, it catches a lot of dumb shit before a human being ever has to see the PR. It also suggests a lot of deeply stupid shit you just ignore.
It can't replace the review process, it just saves a minute or so on some PRs where a human being would write "fix this dumb but obvious error, now on to the real issues." The savings are real but trivial.
Esseratecades@reddit
First pass is fine as long as a real engineer does a second pass
seven_seacat@reddit
How do you deal with the code review bottleneck in this case?
Esseratecades@reddit
Where's the bottleneck?
Are the reviews too big? That's a scoping failure. Have the author break up the solution into smaller chunks for separate reviews.
Is code getting to review faster than reviews can be cleared? That's a bit of a champagne problem, but also that's just how queues work. As people get more familiar with their codebase they become faster reviewers. There's also a bunch of tooling, strategies, automations, and conventions that exist to do away with the need for frivolous discussions in PRs (effectively shrinking the scope). Scope your tickets well and remove the need for frivolities, which is what you should be doing anyway.
Singularity42@reddit
Agree completely. The people who have been consistently using AI get more benefits as time goes on. It's a skill that you have to learn like anything else.
It's like when docker first came out. People got slower as they started to dockerize things, but hadn't learned all the skills yet. But after time it becomes productive.
SporksInjected@reddit
Yeah I definitely agree. It’s still a tool and you still have to know how to best leverage it and when it makes sense to use it.
Material_Policy6327@reddit
At my place we’ve seen our AWS bill rise due to AI but C level doesn’t seem to care lol
jb3689@reddit
Define "a lot". The sell to me for AI is that I can multi-task and I can sustainably brute force through problems that would otherwise not be practical.
heubergen1@reddit
It allows me and my co-workers, who have almost no coding experience, to write the things we need (the PR gets approval from a SWE) instead of waiting for the devs to have time and to understand what we need.
I think that counts as saving money as we otherwise would be idle while the company needs to hire new devs.
fishfishfish1345@reddit
crazy how this is one of the few professions where your employer actively wants to replace you
RobYaLunch@reddit
Saving money? I have no idea, but from my perspective there are different ways of viewing cost savings regarding AI tools -
One company might see AI tools as a means to make cuts to the work force on account of fewer devs being more productive while using these tools. The jury is still out on this one.
Another company might see AI tools as a means to increase their productivity and output while maintaining and even growing headcount. If the assumption is that developers are more productive with these tools, that means that every developer hired is theoretically going to contribute more for the same price as the company has been hiring devs for already.
My employer seems to be operating off of the second concept. Some larger companies understand the hype around new technologies and if they're in growth mode, these AI tools will only be a benefit for each new dev they hire. If these tools end up not being as beneficial as the hype is making them out to be, no harm done because they were going to be hiring anyway.
What I'm trying to say here for anybody worried about AI and their job is, be wary of working for a company that is so susceptible to the hype around these technologies that their strategy is to cut labor as opposed to seeing them as experimental and a way to further increase productivity with the headcount they already have (or a growing headcount).
---why-so-serious---@reddit
You must be my age - I'll always fondly remember being hired at 19 as a Java engineer for 100K, thinking, "Well, this is going to last forever," while obsessing over how the 26-year-old receptionist was "the hottest girl I'd ever seen, dude."
Thank fucking god that I did not drop out of college, as I had intended to, had the bubble not burst.
I was a CS major and never took an econ course, which means I can confidently speculate that it's the free money being thrown at AI products. I believe 2025 is projected to hit around 2 trillion in investment, up from ~200 billion in 2023. I'm too lazy to source it, but you get the picture.
Complete-Equipment90@reddit (OP)
Yeah, others I know dropped out into high-paying careers around that time. Due to circumstance, I found myself working at a startup before I graduated, and stayed for a few years before going to another. It was a stressful time of personal growth for me. It would be years later that I worked for a big company and got another perspective.
Either way, higher ups get excited about big deals, and it’s difficult for those on the receiving end of orders to tell if the ideas are profitable or not. I took every job as an opportunity to get paid to learn.
BeerPoweredNonsense@reddit
- In my day job - part of a 20-strong team managing a fairly large project with a 10 year-old codebase - minimal impact IMO. There's so much technical debt and unwritten business rules that AI rarely produces workable solutions.
- In the evenings I'm writing a website for myself, from scratch. I'm surprised at how productive I've been - especially on boilerplate tasks: "here's my setup, suggest a Docker Compose file" or "here's a .po file, fill in the Spanish translations please". I'm impressed at how fast the project has progressed, even if an LLM almost never produces a 100% correct solution on the first try.
SporksInjected@reddit
I’m curious what languages the two projects are in. LLM quality really degrades when you get outside of TypeScript and Python.
BeerPoweredNonsense@reddit
Python, VanillaJS, Docker.
But yes I've noticed that if you start asking questions about less-well-known libraries it all falls to pieces. Basically, LLMs work best if the problem has (mostly) been solved on Stack Overflow.
SporksInjected@reddit
This is also a model-specific problem. Sonnet is tuned to make code changes but doesn't have as many parameters as gpt-5, so you'll notice that less mainstream tasks benefit more from planning with gpt-5.
reboog711@reddit
I've had poor success with some of the "lesser used" SPA frameworks, such as Svelte and Vue, even though I'm working on TypeScript-based projects.
SporksInjected@reddit
Oh yeah good point. More popular frameworks usually have the best success unless you’re bolting on some kind of guide for the agent to use. I’ve had some success with adding a “help” tool that’s just an interface for the agent to look up help docs. You may be able to do this with frameworks as well as long as the agent knows when to use it.
eggplanthead123@reddit
We’ve automated away the work of 10 engineers in our group using AI; now it’s just me and one other guy. I’m genuinely concerned about the future of this career and white collar jobs in general.
Once AGI comes to fruition, there won’t be a need for anyone
Complete-Equipment90@reddit (OP)
What kinds of tasks were they doing?
garfvynneve@reddit
Even if it is used for nothing else - it’s better than google and stack overflow, and right now it’s ridiculously cheap.
Individual_Sale_1073@reddit
They aren't saving any money on me...I just use AI to become more efficient and slack off with any time savings.
VolkRiot@reddit
I have the same observation as you. I work at a company where, especially with the threat AI poses to the business itself, they have pulled out all the strategies to push automation and insert AI into workflows. The resulting world doesn't work much differently from before: we're crunched as hell and not really hiring, but things look pretty much the same, just with AI IDEs and the occasional good n8n automation.
It's really quite the shit show to be in technology these days, but I guess we are the lucky ones who still have jobs.
stevefuzz@reddit
I have given up on agent mode for any core code, I'm about to give up on it with stuff like bash scripts. It seemed cool at first, but it has wasted a lot of time for me.
throwaway0134hdj@reddit
From what I can see it’s mostly hype. There are gains in some of the more repetitive tasks of development, but it’s more of a new tool in our toolkit than the game changer the media would have us believe. This isn’t genuine “AI” as we see in the movies, not even close… it is not aware. It's more of a fancy web-scraping tool with advanced computational linguistics.
tomqmasters@reddit
for me personally, it's about break even.
scodagama1@reddit
I work in a large organisation (1,000+ engineers working on a monolith codebase) and we have AI connected to our Slack and wiki - it's become invaluable for searching for stuff. The amount of time I save by not having to ask around about what can and can't be done is quite big.
And we haven't even connected it to our code repository yet. I can imagine a huge productivity boost once some agentic AI can browse our code base with IDE-like tools (i.e. not open files blindly but execute actions like "find usages" in indexed source code) - if I could just paste a stack trace and ask the robot "explain what happened here", that would be hours saved.
That being said, I think AI boosts productivity of senior engineers by eliminating mundane and mentally taxing tasks. These tasks are important for juniors though as that's how they learn to think and optimise, I'm worried that vibe coding in the long run will decrease their performance. AI is a bit like having a personal intern - and interns need guidance, personal intern shouldn't be given to junior engineers, they need guidance themselves
Firm_Bit@reddit
We have 1 ML guy and a bunch of coding agents/chat subscriptions. So the investment is fairly small. The return so far has been very good. A few product features but nothing revolutionary. A lot of process improvement for some old analog tasks that we have to do. And some exploration of how else it can be used. We don’t have an ai mandate and always opt for simple and efficient. But again, our overall investment in AI has been solid.
thatVisitingHasher@reddit
The only thing keeping the software engineering sector alive right now is AI. It was shrinking quickly before OpenAI made its announcement. This bubble will pop. It's just a matter of when. It'll probably surge five years later to be larger than ever, but it will keep shrinking for a while.
phoenixmatrix@reddit
We have been fairly successful with AI. We're a small company (like 50 people), and we've been able to save a few headcount that we had planned for, because our senior folks get more work done. We've also been able to do a large 0 to 1 project in about 2/3rds of the planned time.
We have some stuff we weren't going to build at all (some internal admin tools) that non-tech folks like customer support built on their own with minimal engineering oversight.
It's not all roses and rainbows and it has limitations, but it saved us a lot of time and money
Jolly_Air_6515@reddit
Most of my job is writing good JIRA tickets, assigning them to objectives that managers buy into, coordinating with customers to put out fires, and reviewing code done by the engineers I assign JIRA tickets to.
Basic coding can be done by AI, but you have to know how it's architected, how it will be tested, deployed, documented, used, etc. Writing the code is trivial.
JohnnyHopkins77@reddit
Documentation, unit & integration tests, internal MCP that ensures a set of standards across company projects
Experimental review bots, JIRA bots, and other agents
Work for a digital media agency where all the time away from clients is used for AI/ML learning and POC’s
yourgirl696969@reddit
It’s made our small startup 10-15% more productive I think? It’s really hard to measure so it’s an estimate. But a 3 person engineering team made a production ready dating app (with more features than other apps too) in 5 months.
We’re all seniors though and work extremely well together and with product
TheSpanxxx@reddit
It's helpful when there is someone who already knows what needed to be done and how to do it without AI. In those scenarios, it can greatly improve productivity when there are many repetitive tasks that need to be done and which AI tools are good at.
I think right now this is where wise teams are getting the best bang for their buck.
But can you hire a junior and shove them at those same problems with only an AI tool to help them and feel confident they will succeed? Generally, the answer is most often "no." But, more realistically, the answer is that it depends on the junior.
yourgirl696969@reddit
Completely agree and I think it’ll stay like this for a long time
po-handz3@reddit
Agreed. I'm a datasci/AI engineer at a pre-seed startup. In two weeks I've integrated the research team's work into our database schema, built an extraction pipeline for free text, deployed an Elasticsearch index on k8s, added TypeScript controllers and routers to the backend for that index, built several matching algorithms for different searches, built a PoC web-scraping pipeline to fill in data gaps, written unit tests for ES, and written the only documentation at the company for onboarding, the backend, and the ES indexes.
I've used Kubernetes once before in my career. I've never written a line of TypeScript in my life. I've used Elasticsearch once, but it was App Search and the SWE ran most of the deployment/load etc.
There's 2 other engineers on the team. We dont need scalability, robustness or whatever - we need a working PoC in under 3 months to secure seed funding.
hellowhatmythere3@reddit
This is the answer. In small engineering teams which would love to hire more people but simply can’t, AI is helping the engineers we do have to get grunt work done faster (1hr instead of 3hrs) type thing. Means less burnout, and more features can actually get built within the budget constraints we have
wardrox@reddit
Same result here: it's helpful for seniors, but the bottleneck was rarely speed of writing code. It has sped up the work that would be outsourced for cheap.
yourgirl696969@reddit
Yeah it’s insanely impressive what we’ve accomplished. Genuinely the best accomplishment of my career thus far
Galenbo@reddit
AI could do half of the tasks we give to absolute juniors.
It would be better to replace Management with AI.
You feed it the specs, and whatever the outcome is gets implemented.
No ego/cult blockades, no endless meetings and changes of idea, no corporate slowdown.
TheSpanxxx@reddit
If you have management that AI could replace, you have bad management. Likewise, if AI could replace all your engineers, you have bad engineers.
You give junior engineers basic tasks because you need them to learn, with a safety net around them.
The day I fear is when we are 10 years down the road and there are no 5+ year experienced devs who know how to work without AI. Finding those who can work WITH AI will be easy, but I still contend that the core skills of a great engineer are about problem solving, capacity to learn quickly, discern meaning from a field of unorganized chaos, apply patterns and critical thinking, and build organized consistent output. These skills require a creative and analytical mind in my experience. One both capable of understanding formulas and techniques for solving repetitive problems, but with the creativity and discernment to know when a new solution is required to go around a problem that isn't shaped like previous problems.
The greatest hurdle AI coding has is context. The greatest advantage a really good dev has is the capability to understand contextual problems quickly and asymmetrically. Yanking at threads of a problem from multiple angles and then based on years of education, experience, and observation, find enough commonality to deduce the root of the problem before even finding it.
If the entirety of software needing to be built was greenfield boilerplate, AI would be the champion of all. But we've had a veritable sea of boilerplate projects available for ages, and the same thing is true of them as is true of using AI to do the same - they quickly become useless on large, complex, nuanced problems.
ancientweasel@reddit
I get about 10% efficiency looking at the docs less. It's basically better Intellisense. I asked it to do a code review once and holy shit that was one big hallucination. I also use it to generate tests that I usually completely rewrite.
Unsounded@reddit
I think there is a big push from higher ups to continue to use AI because there are actual efficiency gains now and they want more. But I also don’t think AI does everything they imagine it does, nor do I think it will be replacing devs anytime soon based on the current implementation and usage.
I’ve found that AI tools are great for prototyping, scripting oncall/ops, generating new tools, quickly throwing together scripts, distilling down documents and SOPs, and helping to write tests and boiler plate.
Small, well-scoped changes are good. But by the time I have them scoped out, I probably could've written the code myself.
Leadership broadly wants more cost savings, but I don’t think we’ll really get those from AI. Writing code has always been the easy part, in my opinion, it’s the judgment and ability to tackle ambiguity that most devs are valuable for. A lot of clarity comes from programmatically defining business processes, but a lot of work is required to do that.
freia_pr_fr@reddit
My company, a not-for-profit research organisation, is earning money with AI.
We got some productivity gains on some tasks, of course, and more importantly we got a lot of research projects related to the AI hype.
Surfing the hype waves, and scientific testing of what works and what doesn't, is something we do.
fear_the_future@reddit
As a contracting business we make the same per hour with or without AI. Many clients require it though.
droi86@reddit
If by AI you mean An Indian (and nowadays AS, A South American), yes, companies are saving a ton of money. In investor calls, my ex-company would mention how much money they were saving in payroll thanks to AI. They never mentioned that my team was more than twice its original size; most of the Americans had been fired and replaced with Indians and South Americans. My current company only has leadership in the US; the rest is overseas.
SporksInjected@reddit
Where I work (Fortune 500 company) we have successfully reduced a meaningful amount of contractor spend with an application we built in house. I don’t want to dox myself but it has to do with automating an easy but time consuming admin task that happens 50-100x per week.
stumpyinc@reddit
Yes, we do save money, but in very specific ways, most of which is just reviewing things, which saves us the cost of having to redo/remake custom products.
We sell custom coins, pins, and other promo items, and we always create proof images of what the customers product is supposed to look like, and then the customer approves of and orders one of these proofs.
We have AI check the proof against the order details to make sure they actually match, and catch typos - and it's caught a lot.
Another way is checking customer addresses: if the customer SEEMS like a military customer getting an overseas shipment, and they DIDN'T choose APO/FPO/DPO as their shipping address, that's probably a mistake, and we have checks for that too.
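The shape of both checks can be sketched in plain Python. This is a hypothetical reconstruction, not the commenter's system: the field names (`quantity`, `plating`, etc.) and the military-address markers are invented for illustration, and a real version would presumably use an LLM rather than exact matching for fuzzier fields:

```python
import re

def check_proof(order: dict, proof: dict) -> list:
    """Return human-readable warnings for fields where the approved
    proof disagrees with the order details; empty list means it passed."""
    warnings = []
    for field in ("quantity", "size_mm", "plating"):
        if order.get(field) != proof.get(field):
            warnings.append(f"{field}: order says {order.get(field)!r}, "
                            f"proof says {proof.get(field)!r}")
    return warnings

def looks_military(address: str) -> bool:
    """Heuristic: unit/PSC/ship markers suggest a military address that
    should have been entered as APO/FPO/DPO."""
    return bool(re.search(r"\b(APO|FPO|DPO|PSC \d+|UNIT \d+|USS \w+)\b",
                          address, re.IGNORECASE))

order = {"quantity": 500, "size_mm": 45, "plating": "antique gold",
         "ship_to": "PSC 1234 Box 56, Ramstein"}
proof = {"quantity": 500, "size_mm": 45, "plating": "antique silver"}

print(check_proof(order, proof))  # flags the plating mismatch
if looks_military(order["ship_to"]) and not re.search(r"\b[AFD]PO\b", order["ship_to"]):
    print("Address looks military but isn't marked APO/FPO/DPO")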
cocada_@reddit
I work for a big UK company, and they have no idea what they’re doing with AI. We were told to start building things even before we could think of any use case. We’re spending lots on infrastructure to enable AI and it’s not adding any value to the business. I feel like the only reason we’re doing it is because the board wanted to do something with AI.
meester_@reddit
I have a Copilot that costs 100 euros a year, and it's a good tool for me to not do the things I don't like, basically. So idk if he's saving money, but he has increased my happiness, which, according to research, will increase the efficiency of my own work.
So for 100 euros a year, I mean... yes, I'd say he's saving money!
MushroomNo7507@reddit
I think you’re spot on. Right now, most companies are still spending more time wiring AI into their processes than actually saving from it. A lot of what’s being called “automation” is really just people manually prompting models and cleaning up results. So the productivity gain isn’t there yet, it’s just shifted labor.
That said, I do think there is a way to apply AI correctly, and that's where real advantages are starting to show up. In my case, I've been building a system for PM that connects feedback, requirements, and dev work. It basically generates the boring process pieces automatically (requirements, epics, stories, tests) and links them into Jira. That's where the actual cost savings show up: not by cutting devs, but by removing the hours lost in planning and context switching.
If anyone’s curious, I'm happy to share access for free if you want to test it or talk about how it fits your workflow.
fkukHMS@reddit
From an adoption perspective, AI isn't something you can just slap on top of your existing work process and expect to see great results. In many aspects it is almost identical to the outsourcing craze from 20 years ago or so. The only difference is instead of getting piles of garbage code from clueless offshore coders, with AI you get the same garbage code MUCH faster and cheaper.
What were the best predictors of success/failure of an offshoring project?
1- Detailed specifications and task breakdowns prepared in advance
2- The entire rhythm of the business is organized around the "jump the chasm" challenges: handing off batches of work, accepting/assessing the results, rinse, repeat.
3- Knowing what types (and scopes) of tasks are candidates for offshoring, and what quality of results should be expected.
Success with AI is very comparable.
My company is pretty far ahead of the curve when it comes to AI adoption, I'm personally at the point where I've done multiple "weekend" projects with AI which would have been > 6 months of effort each without AI. Obviously not every task fits that description, but I can already recognize the ones that do, and confidently commit to timelines which would be outright impossible to someone not familiar with our AI assisted work flows.
CodeToManagement@reddit
It does make coders who know how to use it more efficient. And it does save time and therefore money.
As an example in personal projects I use it for stuff like “here’s some json make me classes to represent it” and it does that grunt work in like 30 seconds.
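That JSON-to-classes grunt work looks something like the following (the payload and field names here are invented; in Python the natural target is a dataclass, where other languages would use records or POJOs):

```python
import json
from dataclasses import dataclass

# Hypothetical sample payload; the classes below are what you'd ask
# the model to generate from it.
SAMPLE = '{"id": 7, "name": "Ada", "tags": ["admin", "ops"]}'

@dataclass
class User:
    id: int
    name: str
    tags: list

    @classmethod
    def from_json(cls, text: str) -> "User":
        # Trusts that the payload's keys match the field names.
        return cls(**json.loads(text))

user = User.from_json(SAMPLE)
print(user)  # User(id=7, name='Ada', tags=['admin', 'ops'])
```

Tedious to write by hand for a payload with dozens of nested fields, but entirely mechanical, which is why it's a good fit for generation as long as someone checks the inferred types.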
Where it’s got further to go is making full applications or features. There was a part of our product that nobody wanted to touch; I’d already spun up a team to offshore the work, and we investigated moving it to AI instead, but it wasn’t quite there yet.
I don’t think AI is going away but I also don’t think it’s going to have the mass reduction in workforce everyone expects either. I think we are probably in a bubble where expectations are high and access is cheap - once that price jumps up use will drop and people will get more and more annoyed with it.
It already annoys me I’m paying for a usage quota and when it does something completely against my instructions that still costs me. That kind of thing is waste and once you evaluate it against an entire engineering team suddenly it becomes a cost issue.
Tundur@reddit
We've managed to stop hiring in some teams, and have put the surplus hands to work on service improvements. Not in development directly, but in other teams using solutions we've built with AI. We've decoupled our scaling from our hiring in some very key areas.
Mostly the tools are doing classification and regression, with some summarisation for research purposes.
FooBarBuzzBoom@reddit
I’d say the situation is similar to the difference between classic cars (with manual gearboxes, no power steering, etc.) and modern cars that have automatic transmissions, intelligent systems, and even self-driving modes. You’re still the driver. Yes, some parts have been automated, but no driver has been replaced by a “self-driving car.”
I believe no developer, regardless of their seniority, can be replaced, and the overall number shouldn’t decrease, just like the number of drivers hasn’t. It’s about responsibility and control. Sure, the self-parking feature might park better than you if you’re a beginner, but it can also fail in a way that causes major issues, and guess who’s responsible for that?
Regarding your question, I don’t think AI leads to cost savings; in fact, it often brings additional costs because it’s not used intelligently enough. I’ve seen a study from PwC that clearly notes: AI improves productivity and helps employees generate much more profit through their activities. Wouldn’t it be logical to hire more people if that’s the case? That’s exactly why software engineering has been so profitable, because one person can generate huge value without requiring huge investments.