(Rant) If you’re not benefiting from AI, you will become irrelevant
Posted by Spartapwn@reddit | ExperiencedDevs | 60 comments
Firstly, I do not think AI is going to replace software engineers; I think it’s a tool that will drastically change how we work. Think of what the internet did for researchers, or the sewing machine for seamstresses.
If you are not using it, you will be less useful than those who use it well. Period.
Somebody who can use AI well can produce results much faster, can review code and documents more quickly and accurately, and stays more current with new tech than people stuck in the old ways.
Some arguments I hear are:
“The output is slop and doesn’t make sense” - this is a horrible argument. This means you’re not using it properly. Nobody cares about your “bespoke” software style that took you decades to develop.
“People produce slop without knowing what it does!” - you’re not using it properly, learn to.
“It’s expensive, it will eventually all be turned off, even the power grid can’t support it” - we will adapt and build more infrastructure. Same was said about electric cars.
“It hallucinates and can’t be trusted!” - Neither can user input, build safeguards around it.
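The "build safeguards around it" point can be sketched concretely: treat model output the way you treat untrusted user input, and validate it before acting on it. A minimal illustration (all names and the action list here are hypothetical, not from any real system):

```python
# Hypothetical sketch: validate model output like untrusted user input.
# ALLOWED_ACTIONS and parse_model_output are illustrative names.
import json

ALLOWED_ACTIONS = {"create_ticket", "close_ticket"}

def parse_model_output(raw: str) -> dict:
    """Reject anything that is not well-formed JSON with an allowed action."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError as exc:
        raise ValueError(f"model returned non-JSON output: {exc}") from exc
    if data.get("action") not in ALLOWED_ACTIONS:
        raise ValueError(f"disallowed action: {data.get('action')!r}")
    return data

print(parse_model_output('{"action": "close_ticket", "id": 42}'))
```

The same shape works for any structured output: parse, check against an allowlist, and fail closed on anything unexpected.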
End of rant.
abandonplanetearth@reddit
At my org we won't consider hiring anyone that isn't AI first. If I hired someone that wasn't all about AI, it would look like a bad hire on my part.
I know that correlation isn't causation, but this sub is unique in that it's super anti-AI and also full of posts by people who are at their wits' end because they can't land a job.
throwaway_0x90@reddit
This sub is getting astroturfed. I'm unwilling to believe this many engineers can be so incredibly stubborn.
abandonplanetearth@reddit
Lol ask me anything then. You think I work for someone else.
I just reenabled my post history on my profile so you can go be a sleuth and see how wrong you are.
throwaway_0x90@reddit
No, I mean I agree with you about this sub being unreasonably anti-AI.
pwouet@reddit
Why would anti ai people create bots? If anything the only ones with an interest in creating them are pro AI.
I exposed one a month ago which was trying to promote its AI books. It was probably an OpenClaw instance.
chat-lu@reddit
So bullet dodged for the applicants.
ObeseBumblebee@reddit
More like paycheck dodged.
overzealous_dentist@reddit
IME, real life is full of engineers who are very excited about AI and the big boost to productivity, but this subreddit is full of engineers who think it's a disaster. I don't know if it's a selective effect or something else, but it's bewildering
abandonplanetearth@reddit
Exactly my experience as well.
Anyone that hasn't figured out how to use AI yet is already 1-2 years behind their peers that do use AI.
FrenchCanadaIsWorst@reddit
It’s useful, but you gloss over “producing code without knowing what it does” way too fast. That’s a legitimate concern that will only get worse as AI coding gets better: when the AI does occasionally make a mistake, or a design decision that isn’t aligned with your project goals, you’ll see lots of insidious little issues pop up over time, and you now know nothing about the giant codebase you have to debug.
overzealous_dentist@reddit
you don't have to know anything about your giant codebase, your ai will. humans are exiting the loop of code production entirely. this is the worst it'll ever be, and it's already very very good.
chat-lu@reddit
And how does that work for vulnerabilities?
overzealous_dentist@reddit
literally the same way, ask it to scan for security vulnerabilities. this is genuinely industry standard.
donttrytoleaveomsk@reddit
That's how I almost ended up storing plaintext API keys in the database last month, because the agent was hallucinating that the library I use for database access has built-in encryption. Funny thing: it warned me about the plaintext storage first. A few hours later, without changing anything in the code, I rebooted my laptop and lost the history. So I gave it the same prompt as before, and now it highlights the same code it used to flag and says the method I'm calling to store the data somehow encrypts it (it doesn't). It's like it remembers there was a problem here, but since I'm asking again it assumes its earlier answer was wrong and says the problem doesn't exist. Same model, same code, same prompt, and it can't decide whether my code has a huge vulnerability (plaintext credentials in the database) or not.
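The failure mode above suggests a safeguard that doesn't depend on the model's answer at all: round-trip a dummy secret and inspect what actually got persisted, rather than asking whether the library encrypts. A minimal sketch, with `store_credential` and the dict-backed `fake_db` as stand-ins for a real ORM call, not any actual library API:

```python
# Hypothetical sketch: verify what is stored instead of trusting the
# model's claim about "built-in encryption". All names are stand-ins.

def store_credential(db: dict, key_id: str, secret: str) -> None:
    # What the ORM call in the story actually did: no encryption at all.
    db[key_id] = secret

def persisted_in_plaintext(db: dict, key_id: str, secret: str) -> bool:
    """True if the stored value is byte-for-byte the raw secret."""
    return db.get(key_id) == secret

fake_db: dict = {}
store_credential(fake_db, "api_key", "sk_live_dummy")
print(persisted_in_plaintext(fake_db, "api_key", "sk_live_dummy"))
```

A check like this, wired into a test, catches the vulnerability regardless of which answer the model gives on a given day.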
overzealous_dentist@reddit
Did you then ask it to scan for vulnerabilities?
chat-lu@reddit
That’s the dumbest thing I heard today.
overzealous_dentist@reddit
this sub is for "experienced devs," if you are not familiar with very basic industry standard practices, why are you here?
this is how github advanced security, snyk, wiz, etc work to harden repos. you tell the AI what you want to do, you point it to context that tells it how, you let it rip, you tell it to fix the found issues, then repeat. even better if you are a Big Name and have a frontier-ai solution like mythos
chat-lu@reddit
Basic security practice requires to understand what your code does. How long have you been coding? In months.
overzealous_dentist@reddit
humans are _accountable_ for security, they are not _responsible_ for it any longer.
Xacius@reddit
Until it tests implementation details that don't meaningfully validate anything. From that you typically get false negatives, i.e. production bugs with passing tests.
Or a sprawling architecture with no cohesive structure. The classic "add one change, break three unrelated features."
Coding was never the hard or meaningful part. The human still needs to understand the architecture and decisions behind the code.
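The false-negative pattern described above has a concrete shape: a test that exercises implementation details passes while the behavior is wrong. A tiny illustration (function names are hypothetical):

```python
# Hypothetical sketch of a false negative: the "test" passes, the bug ships.

def apply_discount(price: float, pct: float) -> float:
    # Buggy: subtracts pct as an absolute amount instead of a percentage.
    return price - pct

# Implementation-detail "test": only checks that a number comes back.
assert isinstance(apply_discount(50.0, 10.0), float)  # passes; bug ships

# Behavioral check: 10% off 50.0 should be 45.0, but the buggy version
# returns 40.0 -- only a test of the actual contract would catch it.
print(apply_discount(50.0, 10.0))
```

The first assertion is the kind of test that "doesn't meaningfully validate anything"; the behavioral check is the kind a human who understands the contract would insist on.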
overzealous_dentist@reddit
everything you're saying is immediately solvable if you just ask claude (or codex, or...) to check for it.
"ensure tests are meaningful, and comprehensively look for testing gaps"
"build a plan for checking the architecture at multiple levels of detail. document everything in lazy-loaded documentation by feature"
a human doesn't need to know any of the implementation details, they just need to be able to direct the AI appropriately. strategy, not tactics.
Xacius@reddit
You're treating these tools as 100% deterministic. They are not. Hallucinations are a side effect of the architecture. Even if these models work 99% of the time (and that's a major stretch at this point), that extra 1% can compound into a critical risk over time.
Without proper technical oversight, eventually you end up with a big ball of mud. The only way to guard against this problem is to understand what's being built, which means understanding the codebase.
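The compounding claim is easy to make concrete. Assuming (purely for illustration) a 1% chance that any given AI-generated change contains an undetected error, the chance that at least one error lands grows quickly with the number of changes:

```python
# Illustrative only: probability that at least one of n independent
# AI-generated changes contains an undetected error, assuming a 1%
# per-change error rate (the rate itself is an assumption).
p_err = 0.01
for n in (1, 10, 100, 500):
    p_any = 1 - (1 - p_err) ** n
    print(f"{n:>3} changes: {p_any:6.1%} chance of at least one error")
```

At 100 changes the cumulative probability is already well over half, which is the sense in which a small per-change rate compounds into a real risk.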
overzealous_dentist@reddit
no, I'm not treating them as 100% deterministic. and you have it exactly backwards: 1% × 1% is 0.01%, not 10%.
JoeHillsBones@reddit
What happens if Claude, or whatever your poison, crashes? Goes bankrupt? These aren't cheap; there won't be free versions until it's way too late. Relying on tech you pay for by subscription and that can disappear at a moment's notice is an absurd "strategy". You own 0% of Claude just because you download it; you'll still have the code, but if you can't use the tool, it's useless. You're fooling yourself as a customer if you think access to a tool you bought counts the same as owning it. Everyone hates proprietary file formats and such; we're recreating that, times 1000, with AI.
overzealous_dentist@reddit
switch to one of the many other vendors in a highly-competitive landscape, just like all current tools
Famous-Composer5628@reddit
The reality is that most production code bases are maintained by people who didn’t originally write them.
With the average tenure at a company being 2 years, most people are contributing to code bases they don’t fully understand anyway.
Good safeguards, e2e tests, and metrics are imo more important than knowing what each line of code does.
drnullpointer@reddit
Yeah. I think people are too quick to say they understand what the consequences of all this are really going to be.
My company is all in on AI. If you believe the presentations, productivity is shooting through the roof as developers create more and more PRs than ever.
At the same time all I am seeing is juniors responding to my rejects with new iterations of their PRs, before I even have time to make a coffee.
They have no idea what they are doing and are simply copying my comments into their prompts and then the outputs directly become PRs.
So effectively the team consists of me and a bunch of juniors who are operating the AI for me.
When something needs to be designed, or something fails, I am still the only guy.
All AI in the world can't help you if you don't know what you are building and you don't know what is the question you need answered.
boring_pants@reddit
Right, which is it? Is it going to make me irrelevant or is it not?
Yeah, that's why I don't let users write my code.
Anyway, go talk to chatgpt about this. It's going to give you the fawning approval you crave.
DocLego@reddit
>Same was said about electric cars.
People STILL say this about electric cars.
People who have no idea what they're talking about, but still..
Goingone@reddit
Reviewing documents more accurately than actually reading them?
That’s a new one.
dooyd@reddit
People will lose their jobs to AI and starve or commit suicide. It’s concerning that this makes you happy.
Spartapwn@reddit (OP)
Truly a Reddit take
dooyd@reddit
You seem to lack empathy
Spartapwn@reddit (OP)
If you existed in history, you’d probably oppose any technological advance if it had the ability to reduce jobs
Agent7619@reddit
From an abstract historical point of view, they aren't wrong. You don't see as many farriers and lamplighters today as you did 150 years ago.
It's a shit take, but not wrong.
Spartapwn@reddit (OP)
Difference is farriers existed to maintain an outdated technology. AI exists to enhance and develop existing/new technology.
A better analogy is seamstresses. A 1700s clothing company could have one seamstress with a sewing machine over 10 hand-sewing seamstresses. But should they fire 9 or think big and get 10 sewing machines? Engineers will only lose their job if they suck and don’t adapt
Ok-Entertainer-1414@reddit
No they won't
ohtaninja@reddit
> No Low Effort or repetitive Posts, Excessive Venting, or Bragging.
Mods?
throwaway_0x90@reddit
seriously, unproductive ragebait post
AvailableFalconn@reddit
Beyond everything else, AI is so easy to use, I’m not going to be left behind. It took me 10 minutes to start using Claude. The whole point is this shit automates stuff. If it required skill, there would be no point.
Ok-Entertainer-1414@reddit
Yet another post trying to convince you to have FOMO about LLMs, which was clearly written by an LLM. I wonder who has a financial interest in convincing us that a bunch of real humans have this opinion?
These tools have been available for years. People have been saying identical things to this for years.
It should have been long enough by now to observe the productivity gains in the macroeconomic data. Where are the supposed productivity gains? If it makes software engineers more efficient, where's all the new demand for software engineers?
If it was revolutionary, we'd all know; there wouldn't be a debate about it at this point.
ObeseBumblebee@reddit
This is definitely the harsh truth. I've been using AI agent mode for a little bit now and yes it produces slop. But it works in real time and you can talk to it in real time. So you can use that big old engineering brain of yours to tell it when it's producing poorly thought out code.
AI + an engineer who knows what they are doing is far faster and makes fewer mistakes than that engineer alone.
AI on its own is a disaster. And AI + non coder is a disaster.
People need to stop being threatened by AI and realize that not only are our jobs secure, they're made much easier.
chat-lu@reddit
The more you slop with AI the more this engineer brain atrophies.
overzealous_dentist@reddit
this is very "the more you ride a car the less good at horseback riding you are"
yes, the point is to replace the horse with something better, the car
ObeseBumblebee@reddit
It's probably a good thing credits are so limited and expensive then. I do a few tickets manually just because I can't do them all with AI or I'd spend all my allotted credits too quickly.
idontevenknowwhats@reddit
It's true, but you will get downvoted by all of the greybeards that spent the last 30 years crafting their skills that are now less valuable.
abandonplanetearth@reddit
No that's wrong. It's the greybeards that have been empowered the most by AI, if they are willing to learn.
Agent7619@reddit
As a member of the Literally A Greybeard Society, I did not downvote.
idontevenknowwhats@reddit
10 others did though lol
rodw@reddit
It's more that they will get downvoted for stating the obvious. Is there anyone seriously advocating that there's no AI-assisted coding practice that adds value or efficiency?
idontevenknowwhats@reddit
Yes
ObeseBumblebee@reddit
I've seen it all over the place. It's definitely a dying voice though.
Spartapwn@reddit (OP)
Yes, everywhere lol
rodw@reddit
I don't see that but I believe that you do.
Majestic_Diet_3883@reddit
It is really the only way to compete in the current market. I'm a certified AI racist but I still use clankers lol. It's also funny how people say jobs aren't being taken by AI. Dude, companies would literally use slaves if they could, if it meant more $$$ in the end.
JoeHillsBones@reddit
Just brushing off the infra as "whatever, we will build it" is extremely naive. The AI boom is driving a completely unsustainable power demand; we literally can't, and shouldn't, build that much generation just to run some fucking models so you can think less about your code.
pwouet@reddit
Who asked?
throwaway_0x90@reddit
I don't think the mods like rant posts.
mc-funk@reddit
I'm not so cavalier about the pricing model collapsing (lots of people and orgs using it now couldn't if it were actually priced to be profitable), and I do wonder what that means for people who have shifted fully to 'AI skills'. However, I do think a lot of the criticisms are based in an old way of thinking (or in incompetence at using AI, and using it competently does tend to be easier when you actually know how to build software).
For instance, I used to waste a lot of my energy arguing for "maintainability" and "code quality", always throwing myself up against a wall of product managers and engineering managers. No one actually cared enough to invest in it, no matter how much it slowed us down (that's what unpaid overtime is for). Now that robots can read, understand, and debug our code, those kinds of things functionally matter less.
messedupwindows123@reddit
i feel this way about static typing.