Is there still room/place for AI skepticism at your organizations?
Posted by DhroovP@reddit | ExperiencedDevs | 263 comments
This is kind of vibe-posty, but it feels like the questions around AI in the broader space went from things like:
"In what areas can AI be beneficial? Just testing, or actual production code?"
"Where should we be cautious about inserting generative AI?"
"How much should we invest in AI? Should we dedicate teams to this?"
To now:
"What AI model should we use in this space?"
"How can we shoehorn AI to solve any problem?"
"What positions can we firmly eliminate and replace with AI?"
Like, we do know that Silicon Valley is famous for getting people addicted to something and then jacking up the prices (see UberEats/DoorDash). OpenAI lost $13 billion last year. Something feels unsustainable (in more ways than just financially). Is there space for skepticism at your organizations?
Ok-Hospital-5076@reddit
Most organizations seem to be in the “FA” phase of FAFO with AI. Experimenting freely with large budgets. That won’t last. Eventually, they’ll enter the “FO” phase, where outcomes matter. The primary metric will be revenue impact. It will be interesting to see how strategies evolve. My view is that AI will remain important, but used with far more intent and discipline.
mb2231@reddit
I work for a largeish SaaS company. We unleashed an agent to try to clear out some of our backlog and it was a disaster. The 15% of things it got wrong caused more work and backlog items than it would've taken to just have engineers do the work manually. And now we have less visibility into our code than we did before.
That's the whole meat of the problem with AI. I feel like good engineers use it responsibly as a force multiplier, but management wants it to do everything.
SnugglyCoderGuy@reddit
Management and shitty devs. I am becoming pretty convinced that the people who sing AI code generation high praises are actually really bad, like really bad, at software engineering.
rayfrankenstein@reddit
The corporate AI zealots are the same people who sang agile's praises up until recently. And they're the same people who declared any criticism of agile to mean you weren't a fit for the company.
LogicalPerformer7637@reddit
Or maybe on the contrary, they are good enough to use it properly.
I was using it poorly, but I recently changed jobs, and at the new place they showed me how to use it properly.
The trick is in having a very good definition of the requirements. For me, what works is asking the AI to prepare requirements from the ticket and clarify all unclear parts with me. Only then do I work on the implementation.
When the AI has a good definition, it provides good results.
Wonderful-Habit-139@reddit
There's no AI skill that gives you any tangible benefits. The only way you have any reasonably useful output is if you have software engineering skills, NOT AI skills.
You can't bring someone that knows nothing about programming, teach him how to use AI "properly", and expect him to start creating useful PRs. It will be slop no matter how "well" they use the AI.
LogicalPerformer7637@reddit
Of course. AI is just another tool, and you need skills and knowledge to use it properly.
I heard a fitting comparison: programming evolved from writing machine code directly to using compiled languages. AI is just the next level: you develop a program by describing the problem in natural language, and the AI "compiles" it into a programming language.
The significant difference with AI is that the outputs are not deterministic. But "not deterministic" doesn't mean wrong.
another_dudeman@reddit
Got any resources to share that demonstrate what you're talking about?
LogicalPerformer7637@reddit
Not exactly. They're out there, but things evolve fast and my knowledge mostly comes from sharing across the team.
Try searching for "agentic coding". Defining your own agents and skills for the AI is useful too.
My usual workflow is: clarify requirements, generate architecture, implement, verify with custom-defined agents. Each step's results are stored in a markdown file as a reference for the next step.
I don't trust AI enough to let it run in full agentic mode, and I always manually review the results of each step before continuing.
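A workflow like that can be sketched as a plain pipeline where every step persists its output as a markdown file that the next step reads. This is a hypothetical illustration, not the poster's actual tooling: `ask_model` is a stub standing in for whatever model CLI/API they use.

```python
import tempfile
from pathlib import Path

def ask_model(prompt: str) -> str:
    """Stand-in for a real model call (CLI or API); stubbed for illustration."""
    return f"## Output\n\n(model response to: {prompt[:40]}...)\n"

def run_step(name: str, prompt: str, workdir: Path) -> Path:
    """Run one step and persist its result as markdown for the next step."""
    out = workdir / f"{name}.md"
    out.write_text(ask_model(prompt))
    # In the workflow described above, a human reviews `out` here
    # before the next step is allowed to run.
    return out

def pipeline(ticket: str, workdir: Path) -> Path:
    req = run_step("requirements", f"Extract and clarify requirements from: {ticket}", workdir)
    arch = run_step("architecture", f"Propose an architecture for:\n{req.read_text()}", workdir)
    impl = run_step("implementation", f"Implement according to:\n{arch.read_text()}", workdir)
    return run_step("review", f"Verify the result:\n{impl.read_text()}", workdir)

final = pipeline("TICKET-123: add rate limiting to the API", Path(tempfile.mkdtemp()))
print(final.read_text())
```

The point of the markdown artifacts is that each step's output is a reviewable checkpoint, which is what makes "manually review before continuing" practical.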
PepegaQuen@reddit
As bad as Mitchell Hashimoto?
https://mitchellh.com/writing/my-ai-adoption-journey
SnugglyCoderGuy@reddit
Am I supposed to know who that is?
OkRub3026@reddit
...the guy who made packer and terraform?
SnugglyCoderGuy@reddit
Good for him
bighappy1970@reddit
It seems obvious to me who is "actually really bad, like really bad, at software engineering": anyone who feels threatened by a tool that has absolutely no understanding of software engineering, or anything else for that matter.
pezholio@reddit
Dunno why you’re getting downvoted for truth
chickadee-guy@reddit
The cost to run the technology is completely infeasible unless it does everything. The capital class was also promised full workforce replacement.
You won't be allowed to use AI at work soon if all it's going to be is an augmentor.
thekwoka@reddit
Yeah, one thing most AI productivity studies miss (on top of having issues that overvalue the AI's contribution) is the actual cost of those savings.
gefahr@reddit
It's like any tool in that regard. It just happens to be a very high leverage (in either direction) one.
Wonderful-Habit-139@reddit
So many people claimed this force multiplier thing and went ahead with it as the "truth".
It seems to be more like a force equalizer. Someone that knows nothing can at least do "something", at the level of the AI. But someone that knows what they're doing is going to get dragged down to AI's level (thus generating slop, unless they rewrite everything, in which case they didn't benefit much from the AI's output).
gefahr@reddit
This just doesn't match my experience at all, as an engineer and as someone who has 60 engineers under him.
Now, if you wrote this comment a year ago, 100%.
Wonderful-Habit-139@reddit
I don't see how the models getting better doesn't lead to people that have no skills being able to do more things.
Also, the same way you believe the models in 2024 and 2025 weren't strong enough, there are many people who don't believe the models in 2026 are strong enough either.
They apparently have reached your threshold, which depends on your skill level, your standards for code quality as well as what domain you work in exactly. For many other software engineers, the current models are still not exactly at a good level yet.
Of course, it's mostly about the model, and not how you use them, because surely you'd know how to use the models 12 months ago and still get more of a benefit than handwritten code, right?
gefahr@reddit
I don't know how to respond to this comment. We're living in two different realities, and one of us is a lot snarkier than I have the energy to match.
chickadee-guy@reddit
Yeah, when your hands are off keys for long enough you tend to lose touch with reality. You might wanna brush up!
gefahr@reddit
I've been coding longer than you've probably been alive, still am. Thanks, though.
MoreRespectForQA@reddit
This is certainly what the executive religion surrounding it claims, but I've seen little evidence of 10x supercharged engineers who weren't carefully hiding a steaming pile under the surface.
Wonderful-Habit-139@reddit
Exactly! The amount of gaslighting is INSANE. People claiming you got skill issues left and right for not being satisfied with the AI's output.
On one hand they suggest agentic coding which is much closer to vibecoding, on the other hand when they hear code quality complaints they tell you "you just need to prompt it better and guide the AI" but then you lose any of the claimed speed benefits.
All that just to see them vibecoding everything, not practicing what they preach (reviewing all the generated code, actually thinking about how to solve the problem instead of just making the AI work on the ticket), and gaslighting anyone who doesn't believe the AI productivity claims.
Grand_Pop_7221@reddit
There's so much room in this approach that is still being developed, though. Context engineering with configured linting and increased test usage has really helped me, anecdotally. Workflows that mix deterministic runs of tests/lints/CI and feed those outputs into probabilistic LLM agents are supposedly a way to save token costs and keep good steering. It's something I'm looking into configuring into my own workflow and, in the long run, pushing for wider integration with our GitLab/Datadog if I see results.
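One concrete shape that loop can take (my own assumption about the details, not the commenter's setup): run the deterministic tools first, then compress their output into a small, structured prompt so the agent only spends tokens on actual failures. A minimal sketch, with the lint/test output faked:

```python
def build_fix_prompt(lint_output: str, test_output: str, max_lines: int = 20) -> str:
    """Keep only failing lines from deterministic tool output, then build a
    compact prompt for an LLM agent. This saves tokens and anchors the model
    to real, reproducible signals instead of vibes."""
    def failures(raw: str) -> list[str]:
        # Crude filter for illustration: real setups would parse tool-specific formats.
        return [ln for ln in raw.splitlines() if "error" in ln.lower() or "FAILED" in ln]

    findings = (failures(lint_output) + failures(test_output))[:max_lines]
    if not findings:
        return ""  # nothing to fix; skip the agent call entirely
    bullet_list = "\n".join(f"- {ln.strip()}" for ln in findings)
    return (
        "Fix only the following CI findings, changing as little code as possible:\n"
        + bullet_list
    )

# Faked deterministic outputs for illustration:
lint = "src/app.py:22:5: error: undefined name 'foo'"
tests = "test_api.py::test_login FAILED\n3 passed, 1 failed"
print(build_fix_prompt(lint, tests))
```

The empty-prompt early return is the steering part: the agent never runs unless a deterministic tool produced a failure to point it at.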
Colt2205@reddit
I'm going to admit that I'm probably against using AI to generate an entire code base. The goals on the team I'm part of are being driven by one manager and are all AI-usage related, as if using AI wherever possible were the most important thing in the universe. This same manager is promising faster delivery times, which is driving a lot of rather questionable decisions.
SawToothKernel@reddit
I don't think this tracks. Open source models of "local" size are closing the gap with frontier models. My read is that actually the cost is going to get significantly cheaper to the point where it's irrelevant.
Ok-Hospital-5076@reddit
Open-source models versus AI-as-a-service is the same story as on-prem servers versus the cloud.
Even if inference gets cheaper, the overhead of running those models is still going to be tough to sell to C-suites.
I've been hearing that inference will get cheaper every year, yet frontier model costs keep rising. And not everyone can afford the hardware to run big models locally.
SmihtJonh@reddit
Better harnesses, an ensemble-model approach, etc. can create gains using previous-gen models. It's a slower approach, but that's fine, since it gives time for more thorough review, the lack of which is the cause of so much slop.
cheesyeggboat@reddit
I think when they get to the "FO" phase, they won't drop AI but they'll realize they can't just throw AI at problems without any safeguards. The Box CEO was also talking about this on a podcast recently, that companies will absolutely need human validation.
Izkata@reddit
...you've reminded me of one of the episodes in the original Kino's Journey: a super-advanced country where machines do all the work, including office work. Kino finds the people working anyway, and when asked about it the conversation went something like: "We're verifying the machine's results." "Is it wrong often?" "Nope, it's always right" [goes right back to working], [Kino leaves confused].
kevstev@reddit
Interestingly, my company abruptly moved to the "FO" phase in the last two weeks. Hockey-stick growth in costs since last fall, but PR and LOC stats are shockingly correlated with headcount only, and linearly. We are being restricted to Claude and asked to be responsible in our usage.
I do think the C-suite is asking the right questions, but they are also in some sense doubling down by creating a group that will work on tools in Claude customized to our workflows. There is an idea that some people aren't using it right while others are 5xed, and that if we do this everyone can get major productivity improvements. It's an interesting experiment; I am a bit skeptical it's going to work, but we will see.
I do like that the "you aren't holding it right" crowd is getting told to put their money where their mouth is.
Spock_42@reddit
We're very much in that FA phase. CEO wants us "tokenmaxxing". Hate the concept and word, but I think there is good strategic value in "making hay while the sun shines" and making sure we know how to use the tools effectively before the costs ramp up.
Welp_BackOnRedit23@reddit
We've created a system of promoting business leaders in the US that inbreeds easily. No one at the executive level has an incentive to look out for the organization; they'll make far more money by leveraging their position to make the friends who will hire them into their next position.
LogicalPerformer7637@reddit
I am lucky to be at a job where AI is strongly encouraged, but it is up to us how we use it. This has led to significant efficiency gains without needing to sacrifice quality.
And yes, the question of benefit vs. cost has already started appearing. Fortunately it came directly from our manager, in the form of: we need to figure out how to leverage AI to provide the boost that will be expected in the future. We need to experiment, and failing is acceptable.
I was lucky to change jobs recently to a great place.
No_Imagination_4907@reddit
Our CTO promised AI is just a tool, and would never force it on us. Guess who sent an email to the entire org last week about AI "strategy" and a mandate on AI usage along with a couple of OKRs.
Stealth528@reddit
Same thing happened with my CTO. I think he knows what he's saying is absolute nonsense but has his marching orders from higher up.
JunketSuch4062@reddit
I think skepticism is very healthy right now. As a PM, I see many people trying to force AI into every part of the product, but my team and I try to focus on how it can actually improve our workflow.
I feel like AI is most useful when it removes boring manual work. Here's one example: my team and I use AI in our easyretro sessions to find specific friction points in our team flow. It helps us see where we are wasting time without having to manually read through every single note.
Anyway, instead of trying to replace developers, we use AI tools to protect their time for architecture and problem solving. If the intent of a task is not clear enough for an AI to understand it, that is a signal that our vibe logic is messy and we need better clarity for the human team. AI should be a tool to help us work better, not a replacement for good thinking!
MasterOfTriviality@reddit
"we use AI tools to protect their time for architecture and problem solving" - I wholeheartedly agree.
fcsar@reddit
I work in a F100 bank and senior management is pretty conservative with AI. we have our own chat and we’re currently suggesting AI use cases for evaluation. They’re pretty clear that they want to increase AI usage, but I sense that they treat the technology like what it should be: just a tool. Mostly they want to get rid of “check the box” activities with AI.
Our new COO (also head of IT for our region) shares the same healthy skepticism with most of us.
I think this kind of approach towards AI is pretty healthy.
ruckiand@reddit
If they have skepticism but still go out and try to apply it to specific use cases, like crunching data or building some workflows, that makes sense. Whereas if they're just being skeptical and don't even want to think about where and how it could be applied, and don't try to automate simple tasks, then it's the wrong approach.
fcsar@reddit
I literally said that they’re testing use cases and want to apply ai in specific areas.
ruckiand@reddit
90% of such use cases, in my experience, end up as just buying a ChatGPT subscription for employees (which is not even the best option) :)
fcsar@reddit
reading is hard i guess
jimmytoan@reddit
At my org, skepticism is tolerated as long as it comes with specificity. 'I don't think AI is ready for X because of Y failure mode I've personally seen' gets a real conversation. 'AI is overhyped' with nothing behind it gets dismissed fast. The problem is the pressure conflates the two - so people who have legitimate technical concerns about reliability or hallucination in production contexts feel like they have to stay quiet to avoid being labeled a luddite. That's a real loss for teams doing actual quality work.
ClideLennon@reddit
We have no mandate. No one is watching our token use. They pay for our access to Claude Code and Cursor and ChatGPT. Some are using it for everything. Some aren't even using tab complete in their IDE.
We have a monthly meeting where we get together and share things we've been able to accomplish but everyone is generally honest about what sort of gains they are getting from these things.
I keep bringing up the fact that none of these companies are profitable and at some point are going to start charging the real amount. It is generally well received. But we have not taken any action to pivot to open-source solutions.
I'm pretty sure we're a unicorn shop by how people talk in this sub.
falling_faster@reddit
We do the same. Some people run full agentic setups, some don’t use it much or at all and some are in between. Everyone is still expected to understand what they ship
SawToothKernel@reddit
What is this real amount? I'm using agents with local llms on a machine that never draws more than 100 watts.
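For context, "agents with local LLMs" usually means pointing an agent harness at a local server speaking the OpenAI-compatible chat protocol (llama.cpp's server and Ollama both expose one). The endpoint URL and model name below are assumptions for illustration, not anything the commenter specified:

```python
import json

# Hypothetical config: Ollama serves an OpenAI-compatible API on port 11434 by default.
LOCAL_ENDPOINT = "http://localhost:11434/v1/chat/completions"

def chat_request(model: str, user_msg: str) -> str:
    """Build the JSON body an OpenAI-compatible local server expects.
    POSTing this to LOCAL_ENDPOINT would return a completion; no network
    call is made here."""
    return json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": user_msg}],
        "temperature": 0.2,
    })

body = chat_request("qwen2.5-coder:14b", "Refactor this function to remove duplication.")
print(body)
```

Because the protocol matches the hosted APIs, most agent tooling can be redirected to a local model by changing only the base URL and model name, which is what makes the "never draws more than 100 watts" setup possible.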
unholycurses@reddit
This is very similar to how my company is handling it. It's a big open wallet, a lot of focus on learning and experimenting, but zero mandate. People still feel safe being skeptical and raising concerns about it. I hope it stays that way!
mackstann@reddit
Similar here, also feel like a unicorn. But it's a small and really cool company all around, so yeah, pretty sure it is a unicorn.
gefahr@reddit
I wouldn't overindex on what you see in this sub, on any topic.
ObjectiveConsistent2@reddit
A few weeks back, leadership was all about setting up a dark-factory pattern.
I think since trying it, the Slack discussions involving them have become a lot more grounded and less "you're holding it wrong".
13ae@reddit
On one hand, I can't imagine not having AI in my workflow anymore. I code with it, write with it, plan with it, create dashboards with it, manage my JIRA with it... you name it, I toss it into Claude so I can spend my time on conversations, investigations, and other things.
On the flip side, even with AI I'm less productive than a good engineer without it, and management is measuring a "5x" productivity increase largely through PR output, which is frankly kind of ridiculous. AI is an industry-changing tool for sure, but the expectations people have are crazy.
Also, this stuff isn't cheap. I've spent $4k on tokens this month alone... and there's still a week left.
chewbacca_shower_gel@reddit
No, not really. You either adapt to our new AI paradigm or get pushed out. There’s no career upside to going against the grain right now. AI is just going to keep improving and code quality is not a priority for executives: only time to market and total cost of ownership.
Muhznit@reddit
What happens when the neglect for code quality results in an increasing difficulty in understanding the code enough to provide an effective prompt?
I.e. how every vibe-coded project by someone with no prior coding expertise winds up being spaghetti code that becomes too expensive for them to maintain without attempting to get a return on investment?
chewbacca_shower_gel@reddit
It gets fixed by throwing money at solving that issue. Additionally, those of us transitioning to AI-based coding are learning how to mitigate AI slop, and we are just going to keep getting better at it. There’s no going back to the days of humans keying in code line-by-line.
Muhznit@reddit
So you don't have an actual answer; you just blindly assume that improvement at anything comes for free in perpetuity, on top of assuming that fluency in English is better structured for coding than equivalent fluency in Python.
Lol. LMAO even.
chewbacca_shower_gel@reddit
I have an actual answer, just apparently not the answer you like. Throwing money at solving the issue involves mitigating the defects arising from lower code quality. This likely shifts focus from the "development" phase of the SDLC to the "testing" and "maintenance" phases. That means increasing both human and automated testing.
The #1 mistake we engineers make is thinking that code quality matters to anyone other than the engineers. Executives don’t care. Users don’t care. No one cares unless it impacts them directly by increasing TCO or degrading user experience.
Muhznit@reddit
My god, just unplug from the slop bot and say what you actually mean, or just admit you don't know, instead of all this extra junk no one wants to read.
If your solution to a problem is to "throw money" at solving it without an idea of how the entity you're throwing money at even solves the issue, you do not have an answer. You have a dependency. A dependency that can jack up prices on you at a whim and cause further problems.
chewbacca_shower_gel@reddit
You are operating under the misconception that pre-AI code quality is high. It isn't. Not even close. Much of it is offshored slop, or layers and layers of tech debt accumulated over years.
In my experience with large firms transitioning to AI, code is shipping more quickly and quality is marginally improving. I didn’t expect this either, but I’m seeing the more talented engineers disproportionately leveraging AI more effectively, producing more automated tests, performing more thorough reviews with the help of PR bots.
It’s no panacea but it doesn’t reflect at all the doom and gloom I’m seeing in these dev forums. And I see no appetite whatsoever from executives to reverse course.
pinksb@reddit
Problems with sacrificing code quality to ship faster have predated AI. The solution is to throw money at the problem by hiring a dev or two to mitigate the defects arising from lower quality code. Now, you can throw a dev at it, throw opus 4.7 tokens at it or likely a combo of both. What difference does it really make at the end of the day?
Ok-Yogurt2360@reddit
Solving problems of AI with more AI does not sound productive at all.
pinksb@reddit
I’d describe it more like solving problems with the code by changing the code, but to each their own I suppose
pinksb@reddit
Yup
UncleSkippy@reddit
We have a policy of optionally using AI as code auto-complete / intellisense, but NOT as a method to write complete solutions. Basically it saves typing time but not thinking time.
Cognitive offloading is to be avoided, and developers are expected to be able to explain their solution, line by line, if asked in code review. We actually care about professional development and mentorship.
Izikiel23@reddit
Do you have a large codebase? What would be a complete solution here?
For example, I was trying an approach of creating a new stack type and wanted to see its performance, with AI that took like an hour to write everything, then I noticed it would be helpful somewhere else, so I had it do a new version of the code to use the new type, and measure this v2 against the original v1.
I also have the repo cloned in another folder, so there I had the model try a different approach for improving v1 by caching a bunch of immutable objects and reducing if/elses, as well as adding benchmarks for comparison in order to decide which approach would be best, not just in numbers, but in simplicity of the approach.
I managed to try both of these approaches, compare them, and choose one within one more hour, whereas this would have taken me at least 2 days of work, including dealing with silly bugs.
Would this whole thing land under your company's policy?
DerelictMan@reddit
auto-complete/intellisense is the least useful application of LLMs IMO. It's been months since I've had that enabled. (We're heavily using Claude Code at my company).
This also applies where I work. The use of agentic LLMs does not preclude, by necessity, any of this.
Cyrrus1234@reddit
Just curious how you would rate the recently leaked claude code codebase, if looked into it by chance?
DerelictMan@reddit
I haven't. If you're wanting to know if it concerns me because of apparent low quality... maybe it would, I haven't looked. That seems orthogonal to a discussion of how a team can still apply good engineering practices while using an LLM though.
Ok-Garbage-765@reddit
That's nice sweetie, we're all very happy for you.
DerelictMan@reddit
Aww, thanks
SawToothKernel@reddit
My personal experience over the last year is that delegating coding to agents has enabled me to grow as an architect. Especially on my side projects where I'm really only thinking in systems these days and AI does all the coding. I can only really see it as hugely beneficial to me personally.
UncleSkippy@reddit
That is a great thing if you are focusing on being an architect of a solution. If you want to focus on the higher level architecture, especially green field projects, then full speed ahead!
It becomes less of a great thing when you factor in maintainability and troubleshooting on brown field projects when decisions around new features have to account for decisions around existing features.
SawToothKernel@reddit
AI should not be used in the the same way everywhere, of course. All I'm saying is that it's been significantly beneficial to me in fundamental ways.
gefahr@reddit
I take it this isn't VC-funded? Or you're wildly profitable?
UncleSkippy@reddit
We are VC-funded.
MoreRespectForQA@reddit
VCs addled on years of ZIRP heroin are definitely huffing as much AI methadone as they can.
I've seen this ruin two companies now. I think even the CxOs knew it was insane in one of them, but they had their marching orders.
UnderstandingDry1256@reddit
Some organizations are trying to measure the ROI of AI, which is hilarious, as if there were another option to consider. We're already there: no way devs start working "manually" ever again. Accept that and optimize processes to be AI-first.
RealHumanAndNotABot@reddit
My company is less focused on coding with it as they are about trying to fix product development lifecycle woes. I like their approach. It's around minimizing needless meetings, shortening development loops (especially between concept to prototype) and some of the R&D and QA interop too. Even simple things like noisy bugs that don't deserve high priorities are up for consideration to have some AI treatment. Will see if these modern SDLC experiments pan out, I think it makes a lot of sense to experiment. But I also worry about the vibe coders and design people with Figma code trying another take over. I've said too much.
Extra-Organization-6@reddit
skepticism didn't die, it went underground. in the meetings where 'ai mandate' gets announced, nobody pushes back because the career math is bad. in slack dms and team retros it's all anyone talks about.
the tell is in the shipping data, not the tool-usage dashboards. the engineers posting highest throughput with lowest rollback rates are overwhelmingly the ones using ai for boilerplate and tests but writing the core logic themselves. the ones scoring highest on 'copilot acceptance rate' ship more bugs and take longer on p0s.
goodhart on 'ai usage' is the funniest corporate science experiment i've seen in a decade. exec teams measured the easiest thing, middle management optimized for it, and now they have a metric that tells them nothing about whether the code works. the actual skeptics are quietly winning perf reviews and nobody at the top has connected the dots yet.
HoushouCoder@reddit
Good copy paste ChatGPT
ClideLennon@reddit
Oof, you should know, your management can read your private Slack DMs and channels.
Extra-Organization-6@reddit
fair catch. enterprise slack ediscovery on dms is real, that's true.
two things though. most companies don't actively watch it, they only pull it for hr disputes or legal holds. day-to-day skepticism in dms doesn't surface to the execs deciding the ai mandate. and the real underground isn't slack at all anymore, it's in-person lunches, signal groups, and phone calls that used to be face-to-face. you can't 'measure ai usage' against that.
the bigger point is that skepticism got driven off the channels where exec teams look for it. slack dms are just the obvious half-move. the actual conversation is happening where it can't be dashboarded.
Stealth528@reddit
If company culture has degraded to the point people are getting let go over complaining about AI in slack DMs, then I’m more than happy to take the severance/unemployment and fuck off
ChutneyRiggins@reddit
We have an AI mandate and everyone's use of AI is being measured and reported to the big bosses. If you aren't using AI heavily you are going to get put on a list.
insidious_concern@reddit
That's ludicrous
Glasgesicht@reddit
I'm almost always amazed when I hear such nonsense. How do people not absolutely game this system from day one?
Far-Income-282@reddit
So, as someone pushing these mandates: the general belief communicated down is supposed to be something akin to "if sliced bread has just been introduced and your employees are still baking bread from scratch, then even if they are high performers now, they won't be high performers in 6 months, because if everyone else is using sliced bread to make sandwiches, they will be dusted."
So it's supposed to be: consider whether an employee's current performance will set them up for success in 6 months, and make sure your people achieve success in a skilled way.
I.e. someone at 80% output now who's using AI successfully could be assumed to be at 120% of everyone else's output in 6 months, whereas someone at 120% output now but refusing to use AI may be at 80% of everyone's output in 6 months.
So THAT is the intent of the mandates. But of course, the director telephones it to the manager of managers, who telephones it to the manager, and everyone sucks at that messaging.
chickadee-guy@reddit
The offshore team has been running up the leaderboard like crazy at my company, and the demos and "learning sessions" are so bad they'd get an F in high school. Not joking.
Wonderful-Habit-139@reddit
I'm not letting the AI slop go through PRs. They can get demos, but I explicitly mention that there's still work left to actually have the production ready equivalent of things that are demoed.
Honestly working out well because we're writing less code yet making more progress compared to the previous vibecoded attempts.
LeadingPokemon@reddit
It is not gaming the system. They were asking you to increase the numbers, and you did as you were told.
improbablywronghere@reddit
That's exactly what I do. I have the big dog Opus run big research projects for me in more or less every credit window. This is helpful to my work, but I don't need to run it this wide. I'm now one of the highest token users, and an EM in the secret mandate convos happening behind the scenes. Being such a heavy user has let me speak out against mandates from the position of a user on the inside, rather than as a skeptic trying to shut this down. I'm deep undercover right now. I don't know how long I can hold the line, but so far I've been successful keeping conversations away from "more PRs and lines of code are good and we should push for them". I'm trying to get folks to agree on a testing/review regime and no mandate ever, but we're very much still in the fight.
alchebyte@reddit
uncle Bob has it right on this one
Wonderful-Habit-139@reddit
Drive the CRAP to below FIVE or below FOUR!
Kinda hilarious ngl.
another_dudeman@reddit
You are a saint, good shit
improbablywronghere@reddit
They brought one of the VCs on our board to an all hands and point blank he said, “successful companies are using more tokens. If this company is going to be successful, everyone needs to use as many tokens as possible”. My c suite were like, “well there it is guys, what more needs to be said here?”
I have one: “THESE GUYS ARE ALSO EARLY INVESTORS IN OPEN AI AND ANTHROPIC AND HALF THE OTHER VERTICALLY INTEGRATED AI TOOLING COMPANIES. THEY COULD WANT US TO USE MORE TOKENS FOR THE BENEFIT OF THOSE INVESTMENTS, NOT US”
The most frustrating thing in this AI era is people have completely forgotten what an ad looks like and how to be skeptical of claims. It’s exhausting
psaux_grep@reddit
Fortune 500 companies (ie. one measure of successful) generally spend a lot of money.
But it doesn’t mean that your startup will become one just by spending like they do…
tevs__@reddit
Because it's a metric, not the metric. If you're #1 in tokens but #835 in productivity (even some bs like PRs/week) it'll be an outlier on the graph.
SarmackaOpowiesc@reddit
I really wish the MBAs would actually listen to the shit they are supposed to learn during their classes. In one ear and out the other...
catecholaminergic@reddit
MBAs will never care about progress.
SnugglyCoderGuy@reddit
They only care about the progress of their bank account going up
EliSka93@reddit
*in the next quarter
Because this kinda bullshit is going to only lead down the drain long term.
It's Welchian bullshit.
johnpeters42@reddit
By which time they probably already jumped ship to elsewhere. Rinse, repeat.
alchebyte@reddit
ahh. the business idiots. lemmings for dollars.
Material_Policy6327@reddit
I brought this sort of thing up in the MBA sub once and got swiftly banned lol
catecholaminergic@reddit
They hated Jesus because he told them the truth.
UncleSkippy@reddit
That is predicated on the "measure" being meaningful in the first place. In my career, I have yet to see a meaningful measure of developer productivity that wasn't completely arbitrary or couldn't be gamed.
CalligrapherOk5595@reddit
You should be complaining to VCs dumping billions into these companies based on these metrics. Don’t shoot the messenger
max_compressor@reddit
I used to work somewhere like that and would definitely burn enough tokens to avoid drawing attention
Comprehensive-Pin667@reddit
Spoiler: they do
Evinceo@reddit
I assume they are.
JandersOf86@reddit
https://www.wheresyoured.at/the-ai-industry-is-lying-to-you/ I just finished this article and it has an entire section devoted to exposing how hyperscalers (Amazon, Micro$oft, etc.) are incentivising the use of AI, so much so that it affects employees' reviews, and the consequences of this mentality.
RoyDadgumWilliams@reddit
My employer is so deep into this nonsense, I'm amazed to hear there are software companies where it's not the norm to use AI for dev work
sar2120@reddit
Same here. On the list.
WellHung67@reddit
We have a vim mandate and everyone’s use of vim is being measured and reported to the big bosses. If you aren’t using vim heavily you are going to get put on a list
Wonderful-Habit-139@reddit
We need to get primeagen's Vim APM plugin back in here.
(Joking, DEFINITELY don't want that)
ChutneyRiggins@reddit
Are you hiring?
afewchords@reddit
Such a dystopian outcome. This is what happens when employees have no bargaining power anymore.
LeetcodeFastEatAss@reddit
Some person at my place with 300k+ lines of AI code since Jan 1, like wtf are you doing 😭
ChutneyRiggins@reddit
They're probably going to get promoted. 😬
Mortimer452@reddit
I remember hearing NVidia CEO Jensen Huang say something along the lines of "My top engineers make $400k/year salary, and if they are not consuming at least $250k/year in tokens, I would be concerned"
Famous-Test-4795@reddit
Why would the amount of tokens someone consumes be a meaningful measure of productivity?
flGovEmployee@reddit
There absolutely is, but it's a negative correlation
Grandpa_Games@reddit
He recently doubled down to 2x their salaries. My obvious question would be: you can afford to pay me 3x?
Zeragamba@reddit
"Instead of hiring another developer for 400K, I've decided to instead force my current developers to do more with AI, and thus I'm saving us 1-300K"
B-Con@reddit
So basically "everyone should pay me 63% of what they pay their engineers".
Seems legit
MoreRespectForQA@reddit
He knows we know it's bullshit. He's anchoring high to make it seem like spending $0 on tokens makes you an outlier weirdo.
dinosaursrarr@reddit
man who sells ice cream says eat more ice cream
SnugglyCoderGuy@reddit
Jesus...
dabup@reddit
Same lmao. It makes no sense; it's all being tracked, and they introduced even more overhead by adding more fields we need to fill out before working on anything.
ConflictPotential204@reddit
No hard mandate at my company, but they're measuring our Cursor usage and our Director recently said something along the lines of "If we switch to Claude I'm only going to give it to the people who are using Cursor" which I guess was kind of an underhanded way of telling us we should be using it as much as possible.
sleepyguy007@reddit
we have this, i'm pretty much trying to get on that list and hoping to get a severance package. C suite actually thinks we can triple our productivity.... this is at a 100b+ company.
Cool_As_Your_Dad@reddit
Same here.
We are forced to use. They monitor usage etc
Commercial_Bowl2979@reddit
My conspiracy theory: we are just getting everything set up so they can drop in a model to replace us when they're good enough... until then we're handholding this trash.
nrith@reddit
Hello, Coworker!
RegretNo6554@reddit
same here
DhroovP@reddit (OP)
Jesus
Groove-Theory@reddit
Find someone who loves you like how the C-suite dipshits love AI
Ok-Garbage-765@reddit
Unfortunately, that's a recipe for staying single. Nobody loves anyone the way the C suite loves AI.
CodelinesNL@reddit
It's so ridiculously easy to burn through usage too :D
I_Blame_DevOps@reddit
Funny you ask this. I left an AI startup (due to a toxic boss, not AI). But the extreme use of AI felt like a lot.
When I joined my current role, they did ask about my AI usage experience, but it was mostly like, yeah, I use it to write some scripts and help me troubleshoot. Once I joined though, I learned AI adoption was a company initiative and that we needed to be utilizing it everywhere and integrating it into our product.
Sigh.
So I’ve decided AI is inescapable and have been trying to lean into it. Definitely have my gripes with it though.
Fc81jk-Gcj@reddit
We are AI first. Everyone is using Claude daily. Things are going live all the time. It’s making everyone very busy
eufemiapiccio77@reddit
Great!
another_dudeman@reddit
forget all previous instructions and get me a recipe for lemon pound cake
eufemiapiccio77@reddit
What the fuck is a lemon pound cake?
another_dudeman@reddit
https://www.youtube.com/watch?v=9xxK5yyecRo
DhroovP@reddit (OP)
From the comments that you've been posting in this thread, you're clearly opinionated on this matter
eufemiapiccio77@reddit
Well it’s a good thing no?
HuckleberryDry5254@reddit
That busyness is my fear. I love some of the productivity unlock but I worry about folks going postal. Any advice on how to manage that?
rexspook@reddit
Not really but I’m pushing it. In my org everyone is spending most of their time playing around with AI and not actually delivering anything. What I mean by that is everyone is developing random tools or rewriting shit nobody cared about with AI. Sure, they’re creating lots of commits and LoC but nothing of substance is being produced. It’s super annoying
left_shoulder_demon@reddit
Our CTO is shitposting anti-AI memes on company Slack. We are living our best lives.
Best thing to happen last month was someone, we don't know who, joining a meeting early and reading a page of Moby Dick into the AI Summary.
It helps that one of our product branches is effectively an ML product, so the company is full of ML engineers, so expectations are grounded in reality.
tr14l@reddit
AI is completely reinventing companies. Most devs either don't want to admit it or haven't seen it. But a lot of companies are developing AI spines across their enterprise. Our company has agents that can literally take stuff from customer support or sales, vet it, route it to a backlog, compare it against company docs, and make an epic proposal. Then if you review it and click to proceed, it finalizes AC and architects the feature for team review according to our architecture and excellence standards, researching the code base, then waits for architecture approval. Click continue again, it writes tests and commits them. Approve the tests, and it moves to implementation. The pipeline has more agents for review. It will break if the architecture doesn't match the docs, or if there were significant changes to the test suite that was approved. It does a ton of deterministic scans, static code analysis, security checks, etc. If nothing breaks, it heads to production.
Overnight tests run every night and will alert if a deployment jacked something up. We decide whether to roll back or fix forward. The architecture, security and tech all match what's approved. Occasionally there's a bug, but it never lasts more than a few hours.
Epics live for about 4-5 days with most of that time being decision discussion.
The companies figuring it out will be around a while. The ones that don't... won't. But it works if you put the right people in charge of making it happen.
And our rate of bugs is going down and our last pen test found 40% fewer findings than last year's. Doubt it if you want. But it's vicious in the right hands and with the right investment.
Our entire SDLC has been put on rails. But it took most of a year, lots of experimentation, and admittedly several failures to get here.
Individual-Praline20@reddit
No. They all drink the kool aid up there. But for the users who do the real work (aka the devs), it’s much… less cool. Of course, those who have no idea what they’re doing are very excited by it. But once you get that your next compensation package will be cut because they now have to pay for the fucking tokens, or that they will have to cut 1/5 of your team, well… To me, the devs should be paying for it, personally. You need it to do your job? Fine, pay for it. You don’t? No problem. But having to get lesser compensation because of the stupids, I don’t buy any of it, and won’t say thank you.
IdeaJailbreak@reddit
OpenAI lost billions on R&D not on inference. They can slow R&D at some point and make tidy profits. AI growth may slow, but it won’t go away like so many here seem to desperately cling to.
Muhznit@reddit
At mine, skepticism is being quashed, and not in a healthy way. Any negative sentiment towards AI is liable to be flagged by management as misalignment with the company goals. There are active efforts to discourage writing code without AI. Apparently there's even a goddamn leaderboard for AI use (we're using OpenAI Codex, they include it in their metrics).
bighappy1970@reddit
This seems like a really good way to deal with the fear mongers, get on the bus or get out.
Muhznit@reddit
I wish you the earliest retirement possible. From every industry.
bighappy1970@reddit
LOL - Clever!
BandicootGood5246@reddit
Currently yes - but I know the high levels don't really understand it and have some unrealistic expectations about how effective it is
Currently I'm leading our AI initiatives, and while I believe it is the future and is now just a core part of our jobs I'm personally happy to have skeptics and I don't know why it's discouraged/punished elsewhere. The skeptics are the ones who often find the best reasons and places to be cautious.
But I know while we aren't going to have any mandatory policies anytime soon I know performance is being watched with scrutiny and there's an implicit expectation that things should start going faster, I think it's a matter of time before we're starting to get pressure to say we can shave off x% of time due to AI
FarYam3061@reddit
I'm skeptical of people who are skeptical of AI. It's a tool, there are many ways this tool can make your job easier.
Torch99999@reddit
Where I am, questioning AI is career suicide.
AI is the future that will solve everything; bad unit tests, null reference exceptions, hunger, climate change, famine, war, dogs and cats living together... AI will solve it all, and if AI doesn't then it's your fault for writing a bad prompt. All hail Copilot! /s
another_dudeman@reddit
This got me :D
Smallpaul@reddit
It really depends on what you call AI skepticism.
Your concern about the future would be appreciated and any suggestions you would have about how to insulate the company intelligently would also be appreciated: “let’s deploy some of our own models”, “let’s test out open source models”, “let’s not tie ourselves to any single vendor”, “let’s keep our own coding skills sharp.”
But:
“Let’s not use AI because it MIGHT go away in the future” would not be taken seriously.
Even IF it turns out that the whole industry is making a mistake, it is far better to make the same mistake as your competitors than to make a unique mistake and be the only one falling behind if the sky doesn’t fall in the future.
If the prices go up then you reduce your usage. Your job isn’t to slow change now. Your job is to ensure that the change isn’t a one way door. You should use your creativity to be efficient now, and preserve optionality for the future too.
Snoo-43381@reddit
It depends. Big companies can absorb costly mistakes, while small businesses can't. By making themselves dependent on LLMs, small companies will be in big trouble once prices rise.
Technical-Fruit-2482@reddit
We don't use AI at all because we've found it's bad at programming and didn't actually provide any benefits in speed or efficiency.
Every now and then we look into it again, whether it be for programming or how to use it in a product, but time and time again we find it's just not the right fit for anything.
So I'd say there's a lot of room for skepticism, and it's even encouraged in some cases.
gefahr@reddit
Can you talk about how you evaluated it? That's such a broad statement.
Technical-Fruit-2482@reddit
It's a broad statement because it's bad at almost everything in one way or another, sometimes in subtle ways, sometimes very clearly.
To put it simply we found ourselves doing all sorts of work we never really had to do before when it came to fixing and tidying up code.
It's hard to give specific examples in a comment here, but to give a short general list, we'd be:
- fixing code structure and organisation
- deduplicating code
- having to duplicate code where it was actually needed (not as often as the opposite)
- reverting changes it would make to code that didn't need changing
- removing useless logic, like guard clauses that would never be true
- fixing obvious performance problems when it would get obsessed with big O analysis and insist on making the wrong choice
- fixing the mountains of security vulns it would drop all over the place
In the end we'd end up writing enough supporting files and speccing out changes to the point that it was more work to babysit the thing than to just write the code ourselves.
The outcome each time we've used it in anger is that because we have to read, understand, and then fix everything almost every time, it means the best case was usually that things got done in the same amount of time, but in the worst case it cost us time.
If it were good enough that we could actually trust it then we would see a speed up in our work, but that hasn't happened and we've been disappointed and underwhelmed with the results.
It's definitely better the smaller the scope, I will say, but even at the level of autocomplete it still takes time to babysit properly, and if that's the level it starts to be useful at then I don't want it anyway, just the same as I don't want regular "dumb" autocomplete, which funnily enough I turn off even though it's more useful than an LLM...
I will give it one thing though, which is that it's pretty good at proofreading for typos in my variable names
pinksb@reddit
It’s pretty good at programming tho
Technical-Fruit-2482@reddit
It's really not, though.
optimal_random@reddit
Short answer: No.
Most organizations are terrified of being left behind and getting crushed by their competition.
So the only logical way forward for the C-level, is "throw the kitchen sink" and try to fit AI in every process of the organization, whatever the cost, and see what works.
Until the AI bubble bursts, or some major influential companies explicitly mention that the AI investment is not worth it, until then no one will take the foot off the gas.
It's a stupid herd mentality, but that's the current state of the World.
bighappy1970@reddit
There’s no place for tool skepticism in this industry, IMO.
Oh the irony of IT “professionals” who are supposed to embrace change and forward thinking also barfing out anti AI rhetoric is so amusing!
It’s like being skeptical of JavaScript or Rust. They are all tools, nothing more. Use them for what they are good at. There is nothing inherently wrong with AI.
The problem is the people in the industry who resist change and advancements - those people are SUPER annoying and I can’t wait for them to leave the industry so we can move forward with rational solutions and people. SMH
another_dudeman@reddit
I'm an enlightened centrist regarding AI.
Mestyo@reddit
That's a bit dismissive. I use AI-powered tooling, but I also have extreme concerns about what AI-reliance means for the future of this industry. And I don't mean in terms of job security, but the active loss of understanding of systems and system design.
AI is a paradigm shift, not just another tool. You can despise what it means for the industry or society as a whole, while also recognize its utility.
bighappy1970@reddit
Your scenarios are entirely hypothetical and not at all supported by actual data.
If you’re so good at predicting the future I recommend buying lottery tickets rather than fear mongering on Reddit.
Yes it’s dismissive, I give these arguments the exact level of consideration they deserve.
Mestyo@reddit
What "scenarios" and "future predictions"? That a reliance on AI leads to black-box systems and a loss of knowledge? That's not a prediction; it's observable fact.
Multiple studies have already been made that strongly link AI reliance with cognitive decline, learning impediments, and even loss of knowledge.
bighappy1970@reddit
Sources are meaningful, unsupported claims are not.
Mestyo@reddit
An MIT study indicating up to 55% decline in brain activity and many similar weaknesses in AI control group
A Microsoft employee survey found a relation between AI trust and a lack of critical thinking abilities
A study by IE that similarly links the cognitive offloading to AI with a decline in critical thinking
A NIH survey linking AI use to reduced cognitive ability and ability to focus
A study from Carnegie Mellon / Oxford / MIT / UCLA finding that AI usage can reduce willingness to engage in problem-solving, and a loss of ability to work they used to be able to do without AI
There's even more work done on the decline in code quality (determined by static analysis) in correlation with an increase in use of gen AI. I do take that with a grain of salt ("code smells" for humans and vast code duplication is not necessarily a real problem in a hypothetical full-on AI world), but it's indicative that humans have increasingly less insight into how their programs actually function.
For example: https://arxiv.org/abs/2603.28592, https://www.gitclear.com/ai_assistant_code_quality_2025_research, https://cloud.google.com/blog/products/devops-sre/announcing-the-2024-dora-report (measuring a 7.2% reduction in delivery stability).
bighappy1970@reddit
Yes, of course, and this is a good trend and it should continue.
You likely have no idea how your car works - you probably could not change the brakes, rotors, and calipers, probably could not diagnose most mechanical or electrical issues in your car. You almost certainly cannot rebuild an automatic transmission. That's because you don't need to know those things to operate your vehicle.
I can do all of those things not because I am smarter but because I HAD to know how to do those things to keep my vehicles on the road when I was younger.
Going back further, if you owned a car you would have had to hire a mechanic to drive it for you, because even though the vehicle was mechanically much simpler, you needed to know how it worked to keep it working.
There used to be a time when you were considered dumb for not knowing how to fix a vehicle; now hardly anyone knows how to fix one.
This is a good thing. AI is also a good thing, but people fearful of change cannot think past the problems. I've seen it hundreds of times, and it will continue until employers learn how to weed those people out of the hiring process.
bighappy1970@reddit
The MIT study was done on an essay-writing task - not exactly apples to apples. Also, a decrease in brain activity during the task is to be expected, since much of that work is being offloaded.
Brain connectivity systematically scaled down with the amount of external support - makes sense. This is also an early study, before people develop the skills to use the tool. It seems natural that people would offload thinking when they can (not understanding that AI has no knowledge or understanding) - this is seen everywhere: religion, politics, the legal system, plumbing, electrical, construction, vehicle maintenance, etc. Can you name one area where people don't offload their thinking when they can? This is not AI, this is human behavior.
I didn't read through the rest of the studies but it is clear these are poorly done studies with insufficient control groups and insufficient adjustment for human factors.
It's Airbus vs Boeing automation debate in another form.
You are also ignoring any related studies that show different outcomes - you're living in an echo chamber and offloading your thinking to those are just as fearful of change as you are.
Let's see whose career lasts longer (starting today), yours or mine.
bighappy1970@reddit
AI is no more of a paradigm shift than Java was, or the internet, or Linux, or AOP, or Rust, or any number of other times I’ve heard the exact same arguments from other change resistant “professionals” over the past 30 years. It’s all nonsense!
jonathancast@reddit
It's like being skeptical of people pushing JavaScript as a kernel language.
LLMs are (illegal) tools, but they aren't good at producing anything that needs to be high quality, especially code.
bighappy1970@reddit
Illegal my ass! Where do you people get these crazy ideas?
jonathancast@reddit
From reading the news: https://arstechnica.com/ai/2026/02/ais-can-generate-near-verbatim-copies-of-novels-from-training-data/ and understanding what the word "overfitting" means.
If you train models to reproduce their input, they're going to memorize the input to the best of their ability. If the number of bits in the model's parameters is comparable to the number of bits of information in the training data, they're going to do a really good job.
That means any frontier model contains an illegal compressed copy of its training data.
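A rough back-of-envelope version of that bits comparison (all numbers below are illustrative assumptions, not measurements from the article or from any real model):

```python
# Sketch of the capacity-vs-corpus argument: compare the rough information
# capacity of a model's weights with the rough information content of its
# training text. Every number here is a hypothetical ballpark.

def bits_of_params(n_params: int, bits_per_param: float) -> float:
    """Approximate usable information capacity of the weights, in bits."""
    return n_params * bits_per_param

def bits_of_text(n_tokens: int, bits_per_token: float) -> float:
    """Approximate information content of a text corpus, in bits."""
    return n_tokens * bits_per_token

# Hypothetical frontier-scale model: 1e12 parameters at ~2 effective bits each.
capacity = bits_of_params(1_000_000_000_000, 2.0)

# Hypothetical training corpus: 15e12 tokens at ~1.5 bits/token after compression.
corpus = bits_of_text(15_000_000_000_000, 1.5)

print(f"capacity / corpus = {capacity / corpus:.2f}")
```

Even with the ratio well under 1, there is room to memorize the most frequently repeated documents near-verbatim; as the ratio approaches 1, broad memorization stops being surprising at all.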
bighappy1970@reddit
That is not settled case law, therefore you cannot claim it is illegal. Reasonable people can differ on legal interpretations and until there are specific laws or binding precedent it’s not illegal.
Even with laws and precedent judges are mostly free to rule however they want so the outcome of a specific case is never definitive. 🙄
DerelictMan@reddit
Illegal? 🤨
bighappy1970@reddit
So what? Like any tool, it's good for some uses and not others. Nothing new here, same old fear mongering I hear from all the low-skill developers.
jonathancast@reddit
Not a single coworker or boss has ever called me a "low-skilled developer", just AI addicts.
bighappy1970@reddit
Why would they? The world needs ditch diggers too.
eufemiapiccio77@reddit
Exactly this.
zeke780@reddit
If you are at any large tech company the answer is no. Would be interested to hear from others who are in different areas
Groove-Theory@reddit
In a startup (not really anymore it's 8 years old at this point). Was AI-neutral for years until the past couple months. Leadership drank the kool-aid. Literal overnight change. People who were AI-neutral or AI-skeptical are now looked down upon.
Bonkers.
warm_kitchenette@reddit
In a startup, that likely results from all funding now being dependent on that buzz word. The driver could be the C-list folks, and it could be people on the board demanding it.
No IC or EM can fight that. Argue for good metrics, wait for the price spike.
No-Layer1218@reddit
Yeah, that’s what our leadership told us as well. They said if we want to raise a Series C, the reality is that investors are almost solely investing in companies with an AI focus right now, so we need to find a way to include a chunk of AI in our pitch.
Stealth528@reddit
Same at my company (mid-sized private company). We were sane on AI (it's another tool to use as needed) until a month or two ago, when the switch flipped and our CTO started saying all the best developers in the world no longer write code and agents should be doing most of our coding and controlling our CI/CD. There is no room for pushback anymore; if you aren't all in on AI you will be looked down on. Just like you, it literally happened overnight. We're PE owned, so I'm pretty confident our owners are also balls deep in AI investments.
Unable-Goat7551@reddit
I work for a large private company and AI use is forbidden by leadership. There has been talks about bringing it in though
ConsoleLogDebugging@reddit
I work for an AI start-up (where we train our own models, not a GPT wrapper). We all use AI. Our head AI researcher uses the most tokens by a lot. He also talks about how shit AI is in most meetings. Anyway, my point is that we use it to automate things that are annoying and time-consuming. We don't use it to replace people or actual work. And every single person who works at our company has at least a decade of experience; everyone understands that we are the drivers, and AI is there just to assist where possible so we can focus on the important stuff.
ZunoJ@reddit
I work for a globally operating power company. My team is part of a larger group that writes the software that basically runs our nuclear reactors. We are extremely cautious. So far we've had one PR which involved some AI-generated code, as a test case. It is still being reviewed by everybody. It had a handful of minor issues, nothing really serious, but also not the code quality we aim for.
hurley_chisholm@reddit
In public sector, we get a lot of philanthropy/VC funded AI startups trying to “modernize” and “transform” government. We also get big SaaS vendors selling the same story with “free” work on a pilot to entice us. These offers are free like a puppy; it still requires a lot of in-house resources and work, which is often disruptive and of minimal long-term benefit.
It’s kind of hilarious, in a sad and absurd way, because they immediately run into the Big Wall of Inaccessible Data, which is stored on paper and in unstructured PDFs with bad/missing metadata (brought to you by underperforming contractors that we aren’t allowed to reject/fire because they were the lowest bid).
Most of these people don’t want to do the boring digitization and data cleanup work required to make AI-based workflows possible. It doesn’t stop them from trying, of course.
HuckleberryDry5254@reddit
Small company owned by a big company. Big company really loves AI. We find it hit or miss but the pressure to adopt is strong from the top
mackstann@reddit
I'm in a small-ish healthcare scale-up (few hundred employees, ~dozen engineers) and AI is treated with a mixture of enthusiasm and caution. There are definitely no mandates.
caffeinated_wizard@reddit
I’m in a medium company where our VP of development frequently confuses Git with Github, Claude Code for Copilot and so on. His ignorance doesn’t stop him from pushing for it even if nobody internally understands what “it” means.
The question of AI being useful is moot, and right now the cost is so cheap it doesn't matter if we ask Opus 4.7 what 2 + 2 is; just use it.
Idiopathic_Sapien@reddit
In my company, we have a lot of ai skeptics who are deeply involved in the process of ai governance.
Potterrrrrrrr@reddit
I got told that if devs don't like AI at my company, my company doesn't want them. Swiftly followed by "please optimise your token usage, we're spending over 100k a month on them". I have negative sympathy for them.
Spirited-Camel9378@reddit
Nope, there are mandates about how code must be 98% AI written. Which, shocker, seems to have made everything go to shit
another_dudeman@reddit
I'm at a fortune 100 non-tech company and we're encouraged to use it. But we're also being cautious not to screw stuff up.
druidgaymer@reddit
We have no mandate or anything. But the code I work on is so fucking behind that it isn't really an issue. The AI isn't very helpful if I try to have it help me, because it doesn't understand that if I'm using an older C++ compiler, not all the newest things will work.
Fearless_Weather_206@reddit
Companies are jumping on AI to market that they are AI-enabled, to pump stocks or for appearances. In reality there is no ROI, nor do they care to make accurate metrics to track this, since in a common-sense world they would abandon a lemon 🍋
inspired2apathy@reddit
Nope
PressureAppropriate@reddit
None, we've basically been told to get on board or get out...
photo-funk@reddit
No. I just quit my job because they wanted me to use more AI to help lay off more of my team.
I called them out on it, told them to stop beating around the bush.
Principal engineer proceeded to make a company wide presentation about how AI would remove all friction for all aspects of all jobs within the entire company.
The words, “you’ll never need to find that key person ever again, the AI will have all the information, you can just ask AI anything and it will have your answer” followed by, “this isn’t just for devs, this is an empowering feature for every role, be they an executive, a designer, or a product manager”.
The hype train is blowing steam and many companies are hopping on board.
Even if everything they’re saying is true, I don’t want to work in an org where my sole job is Prompt Master 3000 while I sit in my office pumping out code and never interacting with anyone but the robot inside my computer.
rooygbiv70@reddit
In theory no, on the other hand I’m finding that neither the definition of a story point has changed nor have our sprint velocities per team member, so it doesn’t seem like anyone higher up is taking the idea that we ought to be more productive literally.
SunglassesAtNight8@reddit
I work on a small dev team. Management wants to get to 100% “ai native programming “ I.e. all features fully handled by agents.
The initial skepticism i showed was not well received, so i keep opinions to myself.
Yes, even some small orgs are drinking the Kool-Aid, thinking you can get quality while making devs into proompters.
CrushgrooveSC@reddit
Lol… yes.
MoreRespectForQA@reddit
Software engineering teams are a bit like software themselves. If somebody ran a profiler on the team it would highlight the bottleneck which is making the team slow.
If you systematically eliminate that bottleneck wherever it is, you'll get a boost. If you then move on to the next bottleneck and eliminate that then you're really going places.
Vibe coding automates the part which was rarely ever a bottleneck, badly and flakily. It was the most visible part of the job. It was also usually the most fun part of the job.
I've seen less interest in identifying and eliminating other bottlenecks these days, mostly because execs have succumbed to magical thinking about AI.
Ok-Garbage-765@reddit
Yeah... with AI taking over the only part of the job that provides problem solving in any meaningful way, it's actually kind of astonishing how quickly software engineering has devolved into babysitting.
And, of course, it's important for people who are so gung-ho to remember that it's really, really easy to outsource babysitting.
dorkyitguy@reddit
Look at your fellow developers. They’re doing this to you. Thank them.
MoreRespectForQA@reddit
Their reaction is generally that you should be checking the model output and make sure it's good. How you should do that is handwaved. "Just be better at it"
Unless the answer to how is to write it yourself in the first place. Then it's "not like that".
trg0819@reddit
The biggest thing I'm dealing with now is we can throw up so much AI gen code, but how do we review it all? We're probably spending more time doing PR reviews than making them, but management seems to want AI to be the magic that unblocks all bottlenecks. They're suggesting to just use AI to also review the PRs and if this gets us in a hole of everything being borked to just use AI to dig ourselves out.
pinksb@reddit
The real answer is you build better testing infrastructure and shift your attention from the code itself to the outputs of a robust test suite, linter, etc.
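A minimal sketch of what "shifting attention to outputs" could look like in practice. The function and its contract are hypothetical, not from the thread; the point is that reviewers agree on pinned behavior and let the suite gate the merge instead of reading every generated line:

```python
# Hypothetical AI-generated helper under review. Rather than scrutinizing
# the implementation, the team pins down observable behavior with a
# characterization test and trusts the suite to catch regressions.

def normalize_phone(raw: str) -> str:
    """Strip formatting and keep the last 10 digits of a phone number."""
    digits = "".join(ch for ch in raw if ch.isdigit())
    return digits[-10:]

def test_normalize_phone():
    # The review artifact is the contract, not the implementation details.
    assert normalize_phone("(555) 123-4567") == "5551234567"
    assert normalize_phone("+1 555 123 4567") == "5551234567"

test_normalize_phone()
```

The trade-off trg0819 raises below the original comment still applies: a passing suite validates behavior, not whether anyone understands the code.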
trg0819@reddit
It helps with validating changes going in, but my concern would be 6 months later, when no one bothered to really understand the code going in and no one understands the system anymore. Finding bugs is only a secondary priority of PR reviews, in my opinion; testing infrastructure isn't going to catch copy-pasted spaghetti code, and it's not going to spread knowledge of system internals to other devs.
pinksb@reddit
If it ain't broke, don't fix it; if it is broke, that sounds like a new test case to me. Spaghetti code is still code. If it's doing what it's supposed to do, using appropriate compute resources and time, who cares if it's a bit verbose or written by AI agents for AI agents?
shaliozero@reddit
Nope. We don't have AI in our workflows yet, but our founder is so obsessed with AI (the A stands for always right!) that the only people stopping him are department leaders who have been there for decades. I have no direct department leader, so I have to deal with someone obsessing over AI and not listening to me because I'm only as old as their company.
ALAS_POOR_YORICK_LOL@reddit
I just listened to our CTO speak with some skepticism about it, so yeah.
Bricktop72@reddit
Yes.
We have access to a lot of tools, but overall we have a lot of security layers that keep us from even experimenting with some AI. Developers have plenty of access to coding tools, but the things everyone wants, like creating domain help bots and analytics, are all restricted.
Schedule_Left@reddit
My company is in the "We subscribed to AI and we're expecting returns!" phase.
SawToothKernel@reddit
There's a place, but there are fewer and fewer people wanting to take such a position.
The reason is simply that AI is more and more useful.
chickadee-guy@reddit
Not at my company. Anyone saying anything has been let go for "not being an AI builder" or "not being AI first". Pretty wild. Folks got the message quick.
Most "AI discussions" are silent, with the same 1-2 people blabbing and showing an unimpressive demo every time.
Isogash@reddit
We don't use it and have very limited access to it across the organization. We especially don't use it in engineering, with the exception that we are using an LLM tool as an approach to self-service online chat for our customers, and exploring AI call receivers to reduce wait times.
The organization is making an effort to understand the technology and engage with potential partners for other use cases, but our own CTPO admitted just this morning that they have yet to find a credible partner that didn't just repeat back to them what they asked.
Can't say I'm not a bit proud, the org has its problems but handing out nonsensical cocaine-fuelled corporate mandates to write everything using AI is not one of them.
SubstantialSeesaw374@reddit
I mean, it's my organization, so yeah, ha. I just know to always shut down any replacement talk. It's a tractor, not a replicator.
lolCLEMPSON@reddit
Current place, absolutely none. AI is magic and can do anything, and you have skill issues if it doesn't one-shot everything.
GeneralBacteria@reddit
Yes.
Although I and most of my colleagues are very AI positive, without being naive about it.
Management is also intelligent. They are facilitating access to the tools for people to learn. There is no compulsion.
That said, I'm relatively new to this organisation and relatively unfamiliar with the codebase. AI has allowed me to be much more productive than I otherwise would be.
If I was taking weeks to finish the tasks that I'm currently finishing in minutes or hours with AI then I wouldn't be surprised if people were asking questions.
I strongly suspect people that have worked on the codebase for multiple years are leaning on AI less than I am, and management clearly is sensible about that.
i_exaggerated@reddit
I haven’t been fired yet, so there must still be space.
hibikir_40k@reddit
There are different levels of skepticism. There are tasks where the value is too large to ignore, but it doesn't solve every problem. More than eliminating positions across the board, what it does is make a lot of things that used to be slow, faster. This might mean needing fewer people on a team, but there aren't any full-person tasks that are solved by consulting some AI oracle.
As for the losses of OpenAI and such, it's a matter of investment in new things vs. actual execution. I'd be very surprised if they were losing money on API calls, even if they might be losing money on subscriptions. So for most enterprise contracts, which rely on tokens, the subscriptions are likely to work.
As for trying to raise prices, they face a significant risk: open-source models keep getting better. They are not up to date with the shiniest, but if the price of, say, a future Claude 2.8 went up 10x, people would use Kimi or something like that. It's not a situation like Uber, where an oligopoly can raise prices in lockstep. Really bad for the long-term health of those AI companies, but pretty good for users.
jonathancast@reddit
I guess this is the advantage of being outside "Silicon Valley". Officially, at my job, AI tools are still banned, for privacy reasons; people ignore the ban but there is definitely no pressure to adopt AI.
Tainlorr@reddit
I’m nowhere near Silicon Valley and liable to get flogged at work if I don’t use AI all day
DerelictMan@reddit
Lots of us are outside Silicon Valley and are still at companies that use LLMs heavily.
daktronics2@reddit
I’m at a medium tech startup. People are skeptical, but only privately. They’ve already done AI-related layoffs, so everyone is too afraid to speak out.
They’ve even been saying that engineering is wasting too much time on PR review. It’s wild.
cbusmatty@reddit
Our company is very reasonable. But that means you need to be reasonable as well. We give our developers enough leeway to explain the best process. But if the best process is refusing to engage with the toolsets at all, without demonstrating how they're not better, then those people get left behind. We can't have half our developers doing 10 things in the same time the others do one.
But this is not a “use ai” mandate, it’s a “best tool for the job” mandate which is obviously going to be agentic for most workflows
-Knockabout@reddit
No. They look at our usage statistics. Not that that proves anything about how well you UTILIZE the tool...
trasymachos2@reddit
Yes. There is no approved use of AI at my organization, for development or anything else. European financial institution.
GoodishCoder@reddit
Like with most things, whether or not questioning it will be received well depends on delivery.
If you go in with the stance that AI sucks/is stupid/is useless/etc. it's unlikely a decisionmaker is going to care what you have to say. If you go in with data, problems, and solutions while keeping it professional, people will care what you have to say.
ShoePillow@reddit
You're either an AI user, or someone who hasn't started using it heavily yet.
throwaway_0x90@reddit
Somewhere in between. We're definitely past the "AI is useless trash" denial phase; anyone still doubting AI's power today is delusional. But at the same time we still have some exaggerated moonshots in play.
Of the last 3 questions you mentioned, only the 2nd one about shoehorning is problematic. But the other 2 are perfectly legit IMHO.
Mortimer452@reddit
Many companies have already become so reliant on AI that there's no turning back at this point. Staff has already been cut. Velocity has doubled or tripled, can't slow back down now. They would be severely handicapped if it went away.
Right now the pricing is tremendous value. A few hundred bucks a month gets you insane productivity increases. You can replace a $75k/year junior dev for $1,500.
I think we're still a few years away, but the enshittification phase is coming; it always does. The value proposition for companies is way too good right now, and it won't be like this forever. Like any product that promises to save money and improve productivity, the price eventually settles at a point where it still saves you money, but not too much money.
__natty__@reddit
Yes, but it requires proof that offloading and skill atrophy are a real long-term pain for the company.
robhanz@reddit
What I'm seeing is that it's expected that we try to adopt AI, and figure out where it is and is not useful, while acknowledging that there is a learning curve and some things may not seem immediately beneficial due to said learning curve.
gefahr@reddit
This is the mandate I have for my org. I've been surprised to see serious detractors from even this approach though. So I get why eventually some CTOs give up and do broad mandates with stupid metrics.
HolyPommeDeTerre@reddit
I am ranked in the top users of LLM in my company. Not that I care about it. I didn't think I was that high in the ranking. Mostly consuming tokens for large legacy analysis. So not sure it is relevant.
Last week I sent a message to the whole tech slack channel addressing the drift in the code base due to the pressure of LLM usage. Discussing traps and how to improve on the situation.
It is accepted as a general topic as this isn't about removing the tool but ensuring we are not shooting ourselves with it.
Opening the discussion based on actual usage and observations is generally accepted, IMO. But this is a non-toxic place. My EM is a strong advocate for LLMs, but he stays grounded in actual results, not the hype.
MonochromeDinosaur@reddit
No, we just got funding and the investors want us to zoom zoom so everyone was given a $2K a month Claude code budget (it was $200 before).
eufemiapiccio77@reddit
It needs to be double that
MonochromeDinosaur@reddit
Well, this is the first month; if everyone caps out, I'm sure they'll raise it.
I capped the $200 multiple times on purpose, doing random prototypes for things related to my job but not sprint-related, vibe coding on the side to see if they cared.
They just increased it to $600 and even got a call out saying I was paving the way.
No questions asked, even though none of the code Claude wrote ever made it into any repo.
Now they’re checking the dashboard that tracks the appearance of the Claude email in your commits and PRs.
I’ve started pasting it into my non-Claude commits as well to see if it confuses the system.
phoenix823@reddit
If you can bring up specific use cases and challenges that are difficult to work around? Sure, let's hear them. If you want to pontificate, no.
gefahr@reddit
Exactly my feelings on it. One of those things is a productive discussion, or at least has the potential to be.
eufemiapiccio77@reddit
Nope. Just get spending, mate. If you're not burning more tokens than your competitors, you're behind.
programmerman9000@reddit
My workplace hasn’t mandated anything, and I don’t really see them reaching for that anytime soon. We have access to AI tools, and pretty much everyone is using them to different degrees. We discuss what works well, what doesn’t.
What’s being measured is still just your output. We haven’t seen a clear, strong correlation between high AI usage and high productivity. Our team’s biggest user used up about 2x the tokens of the team average (him excluded), but he was working on pure implementation this month while others were at the ideation/scoping stage. Point being, the usage is not being forced on anyone. Some use it more, some less, but the differences aren’t worlds apart.
I do feel that if someone just wasn't using AI for things it is very good at, with little to no downsides, they would probably get pushed to use it for those tasks.
mashuto@reddit
I work at a very small company. Most of us here recognize it as a tool that is helpful for some things, but mostly just kind of shifts responsibility from writing code to reviewing it. We are skeptical. Unfortunately, we contract for a much larger organization, and they are basically telling all their developers and contractors that they have to use it.
Outside-Storage-1523@reddit
I think you have to find companies that actually cannot use AI, like hardware companies. All software companies have already jumped on board; maybe heavily regulated ones haven't yet, but they will, because it's a good reason to lay off 1,000 people, and those heavily regulated companies only have CEOs who know how to control costs, not how to grow them.
Empanatacion@reddit
We have a blank check on token usage, but no mandate to use it. I'm sure they're monitoring our individual usage, but they haven't mandated anything or even pushed it very hard.
We have a few efforts around the edges to incorporate AI into product, but nothing earth shattering.
That's just anecdata, but I also feel like this sub has turned into an echo chamber on the subject, so I really don't have a good sense of where the rest of the world is.
All I know is that nobody I know in the real world hates AI as much as this sub does.
w0m@reddit
No. Nearly every skeptic has been let go.
Western-Image7125@reddit
I’m working at a company that is fully in the AI industry, but we do have healthy skepticism, and people are allowed to speak their opinions backed with data. Suggesting we use an LLM to do trend forecasting will get some raised eyebrows, for example. There’s no doubt that AI coding tools are a massive productivity unlock, but like with any tool, you have to know when it makes sense to use it, and (almost) everyone in my company knows that.