Things I used to be proud of doing well - Modern AI just does better
Posted by ninetofivedev@reddit | ExperiencedDevs | 195 comments
Obligatory: no tokens were used during the composition of this post.
So over the years, working with other devs, I noticed a number of things I just seem to do well. Things I took a lot of pride in.
- My ability to scan a codebase and find what I'm looking for quickly.
- Grep/Git-fu and finding the root cause quickly.
- I get around in a terminal better than most and have years of muscle memory built up.
- I can read faster than your average person.
Something that is a bit of a hard pill for me to swallow is that AI just does these things better.
It'll kick off parallel grep commands, writing its own regex strings for multiple search terms and for specifics, casting a wide net just looking for breadcrumbs.
You give it a pretty high level concept and it'll scour the codebase looking for any and all places it might be referenced and develop an understanding.
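The wide-net search described above can be sketched roughly like this: several regex hunts over the tree run in parallel, each chasing a different breadcrumb. This is purely illustrative; the file names and patterns are invented, not anything a particular agent actually runs.

```python
# Hypothetical sketch of an agent's "wide net": parallel regex searches
# over a source tree. The demo files and patterns below are made up.
import re, tempfile, pathlib
from concurrent.futures import ThreadPoolExecutor

def hunt(root, pattern):
    """Return names of .py files under root whose text matches pattern."""
    rx = re.compile(pattern, re.IGNORECASE)
    return [p.name for p in pathlib.Path(root).rglob("*.py")
            if rx.search(p.read_text())]

# Stand-in codebase for the demo.
root = tempfile.mkdtemp()
pathlib.Path(root, "limits.py").write_text("def rate_limit(req): ...\n")
pathlib.Path(root, "config.py").write_text("# throttle settings live here\n")

# Several searches kicked off at once, each with its own regex.
patterns = [r"rate[_ ]?limit", r"throttle|backoff|retry"]
with ThreadPoolExecutor() as pool:
    hits = list(pool.map(lambda pat: hunt(root, pat), patterns))
print(hits)  # one list of matching files per pattern
```

The point is only the shape of the workflow: many cheap, concurrent, case-insensitive searches, then the agent reads whatever comes back.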
You want it to generate a first pass at a code review? It'll put together a high-level summary with actual severity ranges. On this specific functionality I still think I do a better job, but it's getting close.
Finally, my agent doesn't get distracted. If I get pulled into a meeting or a Slack thread, it just keeps chugging along.
I was a certified AI non-believer. I didn't think the technology was anywhere near capable of being more than a better google.
I was wrong and I have completely flipped my stance. So much so that I've barely written a line of code in the last 6 months.
What things do you notice that AI does better than you?
(No, people, this isn't AI just because I include a call to action in my posts. This is something everyone has done for years... hence why AI often does it).
ForeverIntoTheLight@reddit
I dunno, dude.
Personally, my own experience of it has been all over the place. It behaves like this hippie dev who is usually reasonably competent, sometimes takes ... ahem... mind enhancers to achieve leaps of logic that leave me taken aback, and at other times crashes from that drug-induced high to generate such ridiculously dumb conclusions that I'm left scratching my head. This, despite it being told to consider every conclusion false unless it can cite hard, formally verifiable evidence.
It is at times like this that I'm reminded that despite all the marketing hype around these 'thinking' models, they do not logically reason about things the way we do.
vacant_gonzo@reddit
“I reasoned with Claude”. No you did not.
ZergTerminaL@reddit
I'm starting to see this on my team. "Let me share with you the AI chat" is getting tossed around as proof of a proposed solution. Problem is you can see all the questions being led and the hints being dropped, on purpose or otherwise, and the LLM, in its quest to always please its user, just spouts whatever nonsense agrees with them. I don't understand why so many engineers have jumped fully on board the "LLM is smarter than me" train, but it's pretty depressing.
birchskin@reddit
It's so easy to fall for it, too, if you don't have your guard up. It's a powerful tool that I don't think we're going to see go away in our profession, but it is really good at both being confidently incorrect AND letting the user be confidently incorrect.
I absolutely refuse to share LLM output as a discussion point; it feels lazy, akin to sharing a lmgtfy link... Instead, if AI was a tool I used to reach a conclusion or a discussion point, I'll independently make sure I can back up and understand the thing, and then present it in my own words. It feels silly writing it out like this without a concrete example; it's just a process that I think helps prevent abdicating critical thought to AI models while continuing to use the powerful tool we now have access to.
ZergTerminaL@reddit
I'm still extremely skeptical of AI. It's been useful as a search engine (especially now that google is useless), but otherwise it's been a bit of a wash... well, for PMFs it's a total win.
That said the damn thing has all the red flags of something that is addicting to use, and to me that's the only sense in which these things are powerful. I'm watching juniors unable to think without having the AI hold their hand, and these things haven't even been around all that long.
birchskin@reddit
I actually think we'll end up seeing a similar trend we've seen in manufacturing over the last 50 years where the quality and shelf life of goods has gone in the toilet as corporations have prioritized cost over everything else... At what point does the quality of the code actually stop mattering to stakeholders, as long as it fits the current need and can be produced fast enough? Even without AI when I was doing work for agencies/non-product work, it varied by agency and customer, but in most cases the quality mattered less to them than checking the boxes and getting it out the door on time. They'd always pay lip service to tech debt and future proofing and scalability but would always choose to sacrifice those things first.
Now with AI you can check the boxes and get it out the door super fast, and the people who don't know any better can throw caution to the wind even faster. I think learning to work with the tool in a way that still, on the whole, produces quality code is possible, but it needs to be very intentional and obviously takes more time than a single session of "build this feature", and it's going to require operators who do know better....
Abdication of thought is going to be the worst long term impact. I think for those of us experienced enough to actually understand what is being produced there is still risk, but at least system design/architecture which can be much more subjective is
Pandas1104@reddit
It lures you into this space where it seems very sensible... Then it will start making small mistakes, and before you know it you're surrounded by piles of manure with no idea how you got there.
sc4s2cg@reddit
My team does this, but not as proof. More as a "here's what I talked with AI about, what do you think of this proposed approach" kind of thing. We don't do "Claude said it's g2g so it's g2g".
babblingbree@reddit
This can still be a massive hurdle for actually getting anything done; see Brandolini's Law. Asking Claude things and then putting the burden of disproof on everyone else is more disruptive than it is helpful, in my experience.
koreth@reddit
People have always spewed half-baked BS at coworkers (whether in the form of prose or of code) and put the burden on the coworkers to shoot the BS down, but writing BS by hand imposed a natural rate limit on the spew and thus a limit on how much coworker time could be wasted. AI tools remove the rate limit. It is now possible for one person to saturate multiple seniors with reams of AI output (especially in organizations that measure job performance by raw quantity of output produced).
GenezysM@reddit
Dunning-Kruger effect. They think they are smart.
kevstev@reddit
This has been my experience as well, and every time I talk about this I just get told I am holding it wrong or something similar. But a lot of these claims are just... not what I am seeing, and the disconnect is really head-scratching.
malln1nja@reddit
It's very easy to say "you're using it wrong" when everything including the inputs, the harness, the model, the "interpreter", intermediate results, etc. are non-deterministic and imprecise.
Blueson@reddit
"But have you tried the latest Claude model????" Every single time, yes I am literally using that one.
CSAtWitsEnd@reddit
Seems to me that if you're experienced in any field or subject and have an LLM attempt to complete tasks relating to that field, it very quickly becomes obvious that the shape of the responses might look correct, but the actual content of the response is sometimes wildly off.
They're providing an illusion of knowledge and if you know better, you can discern useful responses from nonsense responses. If you don't know better, all the responses appear useful.
Nez_Coupe@reddit
I think maybe it’s my lack of actual real world experience/work experience as I only graduated a couple years ago, but I’m thoughtful enough to accept I may be completely wrong. I feel like one of the people that you are talking about. However, maybe it’s because the tasks I ask it to perform just are not that hard? I’m not sure. I’ve had a lot of success with it, and have definitely had a ton of “no that’s incorrect, this is the actual problem, don’t you agree?” moments with the tools, but the net has been really positive for me. Again, I think it may be due to a combination of things like difficulty/scope of task and lack of experience in work itself, and I realize that.
kevstev@reddit
Could be- honestly I don't really use it for debugging, I mean sometimes but rarely. Usually new feature development.
dashdanw@reddit
Curious what languages and settings you're using it in. In a large Python project (~500k lines) with reasonably small files, I tend to have very good luck giving it about 80% of my tasks and having them completed, with a little short game at the end. I work much faster as a result. I have talked to people who code using Swift who have the opposite story; I've noticed at times this is because the language and the amount it has to parse to build context are often much higher.
Pancakefriday@reddit
Yeah, that’s the real issue: they don’t think at all. They’re just probability matrices, and they take the path of least resistance to the “most probable answer”. You can’t tell it not to guess, because it’s always guessing via probability. I mean, you can try, but it’ll ignore that order, because if it stopped making guesses it’d cease to function.
It’s why the hallucinations are a feature not a bug. It’s because it thinks the probability of that thing existing is high, even if it doesn’t exist, so it won’t look it up first.
All that to say, I have to steer it often and it makes wild assumptions constantly, and I don’t see that changing: even if its code quality has increased, it will always jump to conclusions because that’s how it works.
ninetofivedev@reddit (OP)
This is a fundamental misunderstanding that many people have.
It’s not picking the most probable token. Modern LLMs are non-deterministic because we discovered introducing “temperature” made the results better.
That’s an oversimplification, but it’s not the path of least resistance, as you say.
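For anyone curious, the temperature idea can be shown with a toy sampler. This is only an illustration of the standard softmax-with-temperature trick, not how any particular vendor's inference stack actually works.

```python
import math, random

def sample_with_temperature(logits, temperature, rng=random.Random(0)):
    """Toy token sampler: temperature rescales logits before the softmax.
    temperature == 0 means greedy (argmax); higher values flatten the
    distribution so less-probable tokens get picked more often."""
    if temperature == 0:
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [l / temperature for l in logits]
    m = max(scaled)                       # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()                      # inverse-CDF sampling
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(probs) - 1

logits = [2.0, 1.0, 0.1]                  # made-up scores for 3 tokens
print(sample_with_temperature(logits, 0))            # -> 0 (greedy argmax)
rng = random.Random(42)
print([sample_with_temperature(logits, 5.0, rng) for _ in range(5)])  # varied picks
```

So "most probable token" only describes the temperature-0 case; with temperature above zero the model deliberately doesn't always take the top token.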
rafgro@reddit
Correct. "The most probable token" is 2013 machine learning. We moved very far away from this. It's surprising to see this take updooted (and your response downvoted) on a subreddit for experienced devs
ninetofivedev@reddit (OP)
Most people here are experienced. That doesn't mean they know what they're talking about.
In fact, most people here seem to just hate anything related to AI, so anything bad you can say about it will get upvoted, and vice versa.
DontDoxMe3352@reddit
This mirrors my experience as well. We have a huge codebase, and sometimes it can find issues like it knows the code better than I do; other times it ignores glaring issues in the code it just wrote, or ignores the rules we set to define code quality, standards and whatever. I've also had situations where it gets really fixated on one idea it had, and no matter how much time you spend arguing with it, it still tries to go down the same route. The only fix is starting a new agent and going from the top.
SakishimaHabu@reddit
Yeah, AI feels like engaging with a perpetually wasted principal engineer. It knows everything but can't form a coherent problem statement or solution.
Rabble_Arouser@reddit
Honestly, it's all about domain. The more complex the domain, the more often you'll see these weird logic leaps and failures.
But when you're working on relatively straightforward tasks, it's a godsend. Like, I can whip out features lickety-split if they're not too reliant on complex domain interactions. If it's just CRUD, or I need to whip up a new UI that leverages existing API endpoints, or I need to implement endpoints that don't have to make too many repository-level changes, it's fan-fricken-tastic, especially if you have well-defined guidelines.
I still have to do a ton of code review, but being able to give a spec to an AI, give it some acceptance criteria and some development guidelines, and then see a completed feature that only needs minimal revisions -- gasm.
-Knockabout@reddit
Even calling it "reason" is a mischaracterization. They give the appearance of human reasoning and understanding and judgement, but it should always be remembered that they do not "know" anything and can only provide an "answer-shaped thing"(1) rather than an answer. It's all pattern-matching a la "The Chinese Room", except the dictionary is 100000000 books that can contain typos and inaccuracies, and the model can bring in new material for reference. All things considered, it's genuinely impressive how often their output is correct: likely partially because humans are hardwired for pattern-matching and tend to use similar phrasing for the same facts.
Preaching to the choir, but it bugs the hell out of me how many people accept that a non-deterministic model with no understanding of facts(2) is the perfect resource to blindly research with. It can be useful, obviously, and its strong point does seem to be generating code because of how deterministic THAT is. But anyone relying on it for tasks where accuracy and reliability matter does not understand how LLMs work.
(1) Shout-out to Chuck Wendig for putting this in such an eloquent way: "It will conjure not an answer, but an answer-shaped thing."
(2) A lot of AI tools do have tie-in functionality like citations to help with this, but simply telling it not to make mistakes or to only use accurate information does not work like an instruction to a person would. It still doesn't "know" what either of those are. Notably, even when it does make a citation, the summary associated with it can often be fallacious or mischaracterize that citation, because the model does not actually have anything like reading comprehension. Also relevant: it only correctly states the date or does arithmetic if there's some kind of hard switch that tells it to use actual system datetime or calculators when asked those questions. Because it does not understand or reason.
Doctuh@reddit
Yeah, sometimes it will do an amazing job finding some difficult timing bug in some code, which is genuinely amazing, and then a few minutes later it's tripping over itself trying to import a stdlib.
Duukt@reddit
I wrote a design doc, then gave it a link to the design doc and it implemented everything for me. When I reviewed the changes, it had missed one item because I had missed it in the doc!
Different PR, my teammate used a review skill to review my PR and it commented on mostly minor issues and one bug. I then used another skill to address the comments and my AI session fixed them. This was our first time playing around with skills.
AssistFinancial684@reddit
Better than me:
Reads an email from a client.
Parses out their framing and intent.
Gives me logical ways to structure a response.
Faster than I can get through the first paragraph.
farfunkle@reddit
I think part of the disconnect is that the majority of developers are quite bad. The corporate world just has way more need for programmers than can naturally be found or educated to a point of proficiency. So even inaccurate AI-generated code is a significant step up from what was being generated before.
Blueson@reddit
Honestly, at this point takes like OP's make me understand how I can have such a higher velocity than most of my colleagues and be considered a high performer while working basically 20-hour weeks.
natty-papi@reddit
I don't know if it's a step up. I feel like mediocre devs end up generating mediocre code through LLMs with bad prompting and bad understanding. They end up producing a much higher volume of slop.
Even with a skilled reviewer, they end up exhausting them and slop just ends up passing through.
Expert-Reaction-7472@reddit
i think this is such a dumb take.
I have worked in world class development teams and some less than brilliant development teams.
Most developers are competent. Some (few) are exceptional. Some (few) are liabilities (often appearing exceptional).
The majority are somewhere around mean. It's a std distribution bell curve and if you think otherwise... you're probably lower quartile.
Manic5PA@reddit
LLMs are only capable of doing this to a limited extent. Entropy grows exponentially with project size and "maturity".
And this is with half a decade of research, infrastructure building, and reinforcement training via sweatshop labour. I don't see this situation changing within the next decade.
We have new very powerful and not-very-economical tooling at our disposal now, but the people switching to full agentic workflows are lunatics and they will get burned hard.
call-the-wizards@reddit
I call it the 'ability to do too much' problem. Inside its weights are encoded a million different design choices and algorithms, and given some problem, some of those solutions are good, others are horrible. If you prompt it right it will give you the good solution, but at any moment it can for unknown reasons shift into doing the awful thing. And then once its own context is polluted with the awful thing it's just downhill from there. The longer you let it go without supervision the higher the chance it will take a sharp left turn and crash and burn.
Manic5PA@reddit
I personally call it the "inability to think" problem. Once the task gets distant enough from the mean, the probability of the LLM hallucinating a correct solution approaches zero. "Agentic" AI just gets stuck in a negative feedback loop, and that can become dangerous if you don't supervise it.
I personally feel like all those horror stories we hear about Claude nuking production databases are partly due to horrible practices (of course) but also to context exhaustion leading to terrible performance. A competent engineer now not only needs to understand the task they're delegating to the AI but also how the AI works.
Own-Chemist2228@reddit
AI may hurt stronger developers and top performers more than others. If corporate leadership believes AI and people are interchangeable, why not start by replacing the most expensive people?
SnugglyCoderGuy@reddit
The AI isn't a step up in quality, just the same shitty quality (if most are bad, and I 100% agree, then guess what the AI was mostly trained on?) but faster, and since speed is just objectively easier to measure than quality, it's seen as a godsend. However, the part people forget is that right now they are selling access at literally pennies on the dollar. Once they start to want to turn a real profit, the cost is going up 2 or 3 orders of magnitude.
BigDickedAngel@reddit
Lol if that were the case places like Microsoft, that are replacing devs with ai left and right, would be improving...yet we can all agree on their rapidly declining quality.
The truth is... companies are short-sighted. They see features getting built quicker than ever. What they don't see is that their marble castle now has a cardboard-and-duct-tape 5th floor on the northwest tower. Younger generations entering the workforce think they're smarter because AI gives them something that works, and they champion using it.
We haven't even seen the maintainability fallout yet. This bubble is going to pop in a year or 2 and it's gonna be like Y2K, with places trying to hire seniors to unfuck their monoliths.
riotshieldready@reddit
Is microslop actually replacing devs with AI tho, or is that just an excuse?
Also, having better code quality doesn’t mean fewer bugs. I’ve found AI can code really well at the atom level: it will make better functions/methods than me, and line by line have better code. At the macro level, AI will miss half the requirements and introduce loads of new bugs because it changed how existing code works.
It has access to all the best coding practices and examples, but will just make things up all the time. If places push engineers to make 100s of PRs with 1000s of lines and don’t give others time to properly review them, it quickly becomes chaos.
throwawayacc201711@reddit
I think this is largely predicated on what the ceiling is for AI capabilities. I feel like people are gonna eventually figure out what the sweet spot is for what effective AI use is. There is a lot of generated or blackboxed code that we tolerate in the industry currently. With AI, code can be written faster whether it’s good or bad. The problem of bad code always existed. It’s now just going to mean the standard on reviews will need to go up
Dense_Gate_5193@reddit
all that black box code can be easily re-written in modern languages. it can also reverse engineer a lot of stuff to prevent vendor lock-in. lots of SaaS companies literally have zero moat anymore because of AI.
if you don’t believe me that software is free now, i’ve already proven the thesis with a database i wrote using AI tooling that’s faster and more efficient than the original i was using.
so devs are starting to pearl clutch but those pearls are so easily cloned….
Altruistic-Bat-9070@reddit
Do we know whether that decline though is because of less diversity in the room, i.e developers brought a specific way of thinking, or due to declining code quality?
Low_Bag_4289@reddit
Reads and writes code faster. Biggest advantage I have over AI (and over most devs tbh): very good memory. I have this nice skill where I remember most of the code I wrote and can recall most of the stuff I reviewed, even after a long time. Either design, code or solution.
If I need to analyze something across one project, where I’m working on a single “bounded context” - AI does it better. Do I need to aggregate multiple contexts, analyze flow across multiple systems which I know only from fairy tales, talk to people, explain it by analogy and make a good impression on the people who hold the money - that’s where I shine.
Just the accents in our work have changed.
Actual_Database2081@reddit
Are you claiming you have a better memory than AI?
Childish_Redditor@reddit
Artificial intelligence, by which I assume you mean LLMs, does not have memory. This is a fundamental aspect of the models.
Actual_Database2081@reddit
I think this is just semantics when my point is clearly about holding the state of the codebase in the head.
By memory I mean context. An LLM can hold the entire codebase in its context, which is far beyond what a normal human can do. Unless the OP thinks they are a genius.
inglandation@reddit
It doesn't hold the entire history of the codebase though. Humans also don't, but we have a relatively good way of remembering it, which is better than the "Memento" approach of tattooing yourself with notes (aka AGENTS.md or other similar markdown files).
Actual_Database2081@reddit
You would need documentation to remember a complex codebase from like a year ago anyway. I don’t see anything inherently inferior in the way LLMs handle memory etc.
I agree though it doesn’t hold the entirety of the history of the codebase, as do not humans.
I have personally made complex changes with it, and coupled with a typed language and a set of tests it's better than the average dev at making changes (assuming the LLM provider is not nerfing their models ahead of a release).
inglandation@reddit
Yeah, it's all true of course. I am amazed at what Codex/CC can do.
I recently created a personal app mostly with Claude, which wrote around 25k LOC in a matter of days, and it's quite obvious that maintaining and expanding this codebase is going to be challenging in a way that it wouldn't be if I had written and maintained everything myself over several months of work. I can't exactly explain what the problem is, but I can tell that it probably has to do with continuous learning or memory. Claude has a superhuman ability to explore a codebase efficiently and thoroughly, which partially hides this problem, but it is there and seems fundamental to me.
subma-fuckin-rine@reddit
The context and memory aren't the same things
Actual_Database2081@reddit
How are they different for this discussion?
stubbornKratos@reddit
How and why a codebase came to be is entirely different from how it just is.
Low_Bag_4289@reddit
You never dump the entire codebase into the LLM context at once. Unless it’s a small script. Or you don’t care about quality and have never heard of context rot. But anyway, I wrote that if it’s a single codebase - AI is faster than me (not always better). But if I need to aggregate multiple contexts, connect multiple dots which are not always a codebase - I’m still needed. And unless AI/LLM architecture changes, I will be better at this than AI (as will every competent human being) for at least a year.
softwaredoug@reddit
I have found I can quickly make a mess of things if I'm not super careful of ensuring my agent has all the right context. So ensuring it discovers all my rules, etc for the codebase is super crucial.
Otherwise I get over my skis, then spend a day shepherding the agent back to sanity.
LemonDisasters@reddit
It makes a mess often enough even with enough context. We must stop enabling this propagandistic positivity; it is designed to sell more usage.
tlagoth@reddit
Yeah, the more I use it, the more I find it gets in my way. It’s good to explore codebases you’re not familiar with, and I use it a lot for that (we have hundreds of repos in my org).
In codebases I know well, though, in almost all cases, the cost of adding all the context it needs is higher than just doing it manually (I’m talking about codebases with 400k+ lines here).
I’ve been burned enough times by trusting the agent that now I prefer to do most stuff myself, instead of asking the agent to do it, then having to manually review and fix its code. In other words, I pay thousands of tokens to have almost the same level of work, but paying a premium for it (and no, this is not about harness, we have a very comprehensive setup at work, with conventions, specs, etc).
The more I use LLMs, the more I’m convinced that by the end of the year this notion of letting them do anything/everything in established production code will seem more and more far-fetched.
Wonderful-Habit-139@reddit
There you go. I've done enough prompting and I'm someone that generally types fast enough and has enough patience to write correct sentences, try multiple prompts and test things out a lot, but I've been burned so many times as well, and writing the code directly ends up being much faster.
The moment I have to write more than 1 prompt, I've been slowed down compared to doing it manually. And that's not even talking about the cognitive hit from using AI, and the fact that your brain starts getting used to getting solutions really fast and starts shutting down. Definitely not worth it.
vacant_gonzo@reddit
This is my experience too. It’s awesome 80% of the time. 15% it’s okay, uses a different convention or something but I can fix that.
5% is wrong, or, worse, very believable nonsense, hard to spot but could lead to unintended consequences.
I am definitely more productive with it, but it’s definitely not the 10x productivity I see spouted. No human I’ve actually spoken to has that productivity gain.
I made up those percentages btw, seems that’s what you do when talking about AI!
mx_code@reddit
The problem is that that 5% can be quite impactful, so there’s still a vast need for hand-holding and for the developer to incur the cognitive load of understanding what really happens.
So at that point, yes there is value in automation with the code writing but the cognitive load is not going away and that’s why developers matter
And if you couple that with the sycophantic behaviors and excessive optimism of coding agents, it’s not precisely a recipe for success.
frankster@reddit
I feel like there are lots of things that save you 5 minutes, and then that one stupid thing it swears blind works this way, which it iterates over for ages without getting anywhere while trying increasingly wild fixes, consumes 3 hours of your time until you realise it's out of its depth and redo it yourself.
-Knockabout@reddit
Lol I was gonna say, for me it's probably 50% fine, 25% okay, 25% dead wrong/incredibly strange implementation.
chaitanyathengdi@reddit
10x is a sales pitch.
CoolingCool56@reddit
I'm 10x more efficient. However, I have a decade of experience and have to give good prompts. Without a human the gain would be 0%. So humans are still needed. Especially for the critical thinking part
Raunhofer@reddit
Are you sure you just weren't inefficient before, and AI made you more motivated? I'm not pulling your leg, I'm genuinely curious, as I've seen that happen.
In my usual scenario, while the model is still thinking, I’ve already fixed the issues; not just one, but usually for all the concurrent agents. I've tinkered with ML for over a decade now, and see no roadmap for 10X, despite being experienced enough to replace entire teams in big corpos. I do feel it's the opposite, AI makes slow faster, not fast faster, as the context building, iteration, and waiting, requires so much extra effort.
For new features, I've concluded that the quality is not there yet.
For tests and other repetitive boring stuff, where you can set up a fire-and-forget framework, ML has been a lifesaver.
haskell_rules@reddit
I feel like it makes me 10x more efficient at 10% of my daily programming work, and 10x more efficient at corporate mandated nonsense like filling out performance reviews which was another 20% of my time.
Total efficiency gain is ~19%
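Taken at face value, this is an Amdahl's-law calculation. A quick sketch (assuming the other 70% of the day is completely untouched, which is my assumption, not the commenter's) lands a bit higher than ~19%, at roughly 1.37x; the exact figure obviously depends on what you count as saved time.

```python
# Amdahl's-law check of the numbers above (assumption: 10x speedup on
# the 10% programming slice and the 20% paperwork slice; the remaining
# 70% of the day is unchanged).
share   = {"programming": 0.10, "paperwork": 0.20, "everything_else": 0.70}
speedup = {"programming": 10.0, "paperwork": 10.0, "everything_else": 1.0}

# Each slice's time shrinks by its speedup factor.
time_after = sum(f / speedup[k] for k, f in share.items())
overall = 1 / time_after
print(f"day shrinks to {time_after:.2f} of its old length; overall ~{overall:.2f}x")
```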
mx_code@reddit
And you are the best-case scenario. Imagine this dream coworker: “overly confident, inexperienced and reckless”.
If you are 10x more efficient, he’ll be 100x more efficient
HopefulHabanero@reddit
One striking thing about current AI "best practices" is that they're pretty much universally associated with increased compute usage. Agentic workflows, MCP servers, every single new model being more token hungry than the last, etc.
I don't think AI is going away, not at all, but I do suspect we're going to see massive disruption in the way these tools are used at some point in the next few years once AI providers raise prices enough to cover the costs of providing those models. Many companies are already facing capex stress from subsidized workflows people are using today.
stormdelta@reddit
I think that's the most frustrating part of using it. It just feels so hit or miss, even when I do everything almost the same way, because it's inherently heuristic/random-walk-esque.
s-to-the-am@reddit
I’ve found that having your conventions and rules enforced in pre-commit and CI goes a long way toward keeping the agent in a walled-garden type experience.
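A toy sketch of the kind of gate meant here: an automated check that rejects changes violating a house convention, regardless of who (or what) wrote them. The "no debug prints" rule and the file layout are invented for illustration; a real pre-commit/CI setup would invoke your actual linters and test suite.

```python
# Toy "walled garden" gate: fail the check if any file breaks a house
# convention. Here the convention is a made-up "no stray debug prints".
import pathlib, tempfile

def gate(root):
    """Return (ok, offenders) for a simple convention check over *.py files."""
    offenders = [p.name for p in pathlib.Path(root).rglob("*.py")
                 if 'print("debug' in p.read_text()]
    return (len(offenders) == 0, offenders)

# Stand-in repo for the demo.
repo = tempfile.mkdtemp()
pathlib.Path(repo, "ok.py").write_text("def f():\n    return 1\n")

ok, bad = gate(repo)
print("gate passed" if ok else f"gate FAILED: {bad}")  # -> gate passed
```

Because the gate runs on every commit/PR, the agent gets the same fast, mechanical "no" a human would, which is most of what keeps it inside the garden.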
chickadee-guy@reddit
At that point it's faster to do it myself
oupablo@reddit
I have found that AI seems to hate abstraction and will often copy/paste almost identical methods around the place to avoid it. It frustrates the hell out of me.
dinosaurkiller@reddit
If it makes you feel any better, I can give it existing code, tell it what I’m looking for in the existing code that works and has been fully tested, give it explicit directions and the context of existing code, and it still shits the bed. I find myself talking it down for way longer than it would take to just find or fix it myself. Not always, but too often.
joshocar@reddit
This gets more and more important the larger and more complex your codebase is and the more parts of it your change touches.
europe_man@reddit
Thank God that it does. Otherwise, I would go insane with all the legacy crap. For me, researching, investigating, correlating, etc. are really strong use cases for AI. Sure, for things where I am kind of the owner, I don't need it that much. But for stuff where people left their brain outside the room while coding, and that is mostly what I am involved with, I can't be thankful enough.
The whole post read fine to me until you mentioned not writing a line of code in 6 months. I don't understand how it fits with everything else you said. But maybe it is just me.
elniallo11@reddit
I find it very helpful, today for example we had a little prod bug where something was failing silently. So I fed it the log line and had it quickly check the code paths that would have to fail for it to fail silently. It came up with 3 suggestions, 2 I had checked and one I hadn’t yet. It was the third one and I had a fix ready to review in very little time
PatchyWhiskers@reddit
Reading fast still helps with AI because it spits out a fuckton of text
Altruistic-Bat-9070@reddit
When I read Reddit, I think three things are happening:
By definition most developers are average. Maybe those on Reddit are the top developers out there, but I question that, because if they had worked a few jobs at a few places they would have seen human-written code significantly worse than what Claude makes.
A lot of developers tried AI in the loop coding a year or so ago and don’t understand the space has moved on significantly.
A lot of developers I meet have an ego, and that ego has stopped them using AI, and that has resulted in them being awful at prompting. A lot of developers are also sub-optimal at requirement setting and explaining the problem themselves; it's why we often have to hire PMs/product owners etc. to translate the work. Together these factors probably do genuinely result in awful results when using AI.
I use Claude all the time and it means I get to do more of the fun stuff like actually delivering, architecting, solutionising and communicating.
Sionpai@reddit
Yeah nice "fun stuff". This comment reeks of AI.
Altruistic-Bat-9070@reddit
Lol just cause you don’t find that stuff fun you think its AI?
Sionpai@reddit
Let me see: generic-ass username followed by a number, hidden post history, comment written like it's AI. Just pattern matching.
Altruistic-Bat-9070@reddit
if you sign up with google accounts nowadays you only get the option of these usernames, blame reddit, not me.
Sionpai@reddit
Lmao chill out, you seem very pressed by what I said. I gave you the reason why I said it looked like AI, if you say you're not then I believe you. I don't care enough.
Altruistic-Bat-9070@reddit
Lol
You must be one of those hard right people i hear so much about.
Sionpai@reddit
Its not that deep sweaty XD
Altruistic-Bat-9070@reddit
Sweetie*
Sionpai@reddit
I used sweaty on purpose, you fell for the AI bait
Altruistic-Bat-9070@reddit
Wut
Sionpai@reddit
Don't worry about it
Altruistic-Bat-9070@reddit
Wut
Altruistic-Bat-9070@reddit
There's that fragile ego lol
mx_code@reddit
You should have already been doing that if you parallelized work and handed off and mentored junior engineers.
Writing code was never the problem
SansSariph@reddit
Delegating work to and mentoring a junior is a time investment. It is valuable and high ROI work with the right people, but it's not an accelerant for delivery. It's odd to me you don't see this as apples and oranges.
As an IC you also have limited actual authority over delegated work. If the junior doesn't deliver it becomes a management problem.
mx_code@reddit
"it's not an accelerant for delivery" it's not?
building a high-performing team over time is not an accelerant for delivery?
mx_code@reddit
Bloated legacy systems -> i can guarantee you that no one is letting a coding agent in such an environment.
Why would it become bloated? Because of the fear of impacting the little reliability it had. Why did it have little reliability? Because of bad engineering practices. Bad engineering practices that a coding agent, if let loose, would simply replicate.
So… talk about generic hyperbolic comments? Sounds to me like you suffer from a little lack of self-awareness, lol
Altruistic-Bat-9070@reddit
We are and they are great exactly because of how good they are at finding out how an undocumented system works in the way OP described.
mx_code@reddit
lmao, wonder what kind of system you are working on.
you don't just let them loose because of the amount of false positives that a coding agent raises; that's why you still keep a human in the loop.
no wonder reliability has gone down the drain in this industry the past year
Altruistic-Bat-9070@reddit
I never said we just let them loose. I said we use them. We have AI in the loop development and it works great.
fazeshift@reddit
Yes, most devs love communicating.
Altruistic-Bat-9070@reddit
But isn't that a problem?
ninetofivedev@reddit (OP)
No notes. This is pretty spot on.
Tritondreyja@reddit
AI is unreal if you learn your own personal brand of full integration with it. This is to say: explicitly replace the things you know slow you down or steal your spoons, but don't give it more autonomy beyond that. Almost like being a cyborg without physical augmentation.
I had to dial that in, and being an AI annotator also helped me set the stage for what and what not to try (I know the vectors companies are tuning for months before the public does, and know what the breaking cases are and how to push that boundary). But in the right hands with the right mindset, WITHOUT total free rein, the skill multiplier is insane.
I hope AI literacy in a healthy and pragmatic way permeates with time, but idk how hopeful I can realistically be ultimately.
DoingItForEli@reddit
It CAN do better, as long as you're there to guide it. I don't mind that I'm not spending time on the code itself, but I do review everything AI gives back. It is a CONSTANT thing where I am pointing out where they either over-engineered, under-engineered, missed the requirement, or introduced something I know will become an issue later.
Think of it like this: A calculator can come up with an answer faster than a mathematician, but that mathematician still needs to ensure certain things for big picture calculations.
Different_Berry5015@reddit
It's a tool, it's not perfect for everything, doesn't mean it's not good for anything. Use it when you can and do it to raise yourself to next level of tasks - higher level thinking, strategizing.
LousyGardener@reddit
Seems like AI is its own best advocate.
I know this is the case because the same well written article keeps appearing in my feeds. Somewhere toward the end will be either a sales pitch or a marketing survey, which it is, despite your assurance that it isn't.
I've also tried them all: Gemini, ChatGPT, etc. They don't actually solve the hardest problems; in fact they choke hard. They're very good at the tedious stuff I've done 1000 times myself though, so that is good
ninetofivedev@reddit (OP)
Just so we're clear, are you thinking that this post was written with AI or that I'm just AI?
LousyGardener@reddit
Does it matter?
ninetofivedev@reddit (OP)
So you just want to fling shit, but not actually stand behind anything you're saying?
Monowakari@reddit
Tokens aren't used they're consumed, cause llms are hungry hungry monsters
ninetofivedev@reddit (OP)
I use that verbiage all the time, but that's not even accurate. "They're metered" would be the correct way to phrase it. They keep track of how many tokens you've sent and either throttle based on your usage window or, if you're paying purely à la carte, just charge you based on usage.
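A toy sketch of what "metered" means on the à-la-carte side. The per-million-token prices here are made-up placeholders, not any real provider's rates:

```python
# Toy model of a-la-carte token metering. The prices per million tokens
# are made-up placeholders, not any real provider's rates.
def usage_cost(input_tokens: int, output_tokens: int,
               in_price_per_m: float = 3.00,
               out_price_per_m: float = 15.00) -> float:
    """Dollar cost of one metered request."""
    return (input_tokens * in_price_per_m
            + output_tokens * out_price_per_m) / 1_000_000

# e.g. a request with 200k input tokens and 10k output tokens:
# usage_cost(200_000, 10_000) -> 0.75 dollars under these placeholder rates
```

On a subscription plan the same counters feed the throttle instead: once your window's token budget is spent, requests slow or stop until it resets.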
KitchenTaste7229@reddit
I feel like I've had a pretty similar shift in perspective as we started adopting AI tools at my company. Though I'd say my perspective is still about how AI is just limited to routine/repetitive tasks, like codebase search, summarizing, first-pass debugging. It can also generate good enough analytics queries in my case. But it's still not completely sophisticated enough that you can just let it do everything on its own. Some of the juniors I work with over-rely on AI, but it just makes them less capable of understanding tradeoffs, how to validate output with production reasoning, and basically just apply business/user context. Even Anthropic's own internal research shows AI is mostly good for simple execution work, but down the line has major weaknesses in engineers' reasoning, debugging, and even collaborative skills.
Head_Let6924@reddit
My problem with ai is not the tech. Its the cunts pushing it.
Stupidity_Professor@reddit
Would you be able to list out the things / hard skills that you think made you competent? Even in this age of AI, I still would like to know how to grow.
For example, you say you could move around the terminal quickly. What exact actions would you suggest someone master? What tools, methodologies, technologies, etc. would you suggest someone with around 2-3 YOE learn for a long-term career?
Inevitable_Guide_942@reddit
If you give it enough context, it will do everything better. The only times it doesn't, it's your fault.
lzynjacat@reddit
AI is for sure better than me at some things and I'm better than AI at some things. AI is also terrible at some things, just like I'm terrible at some things. That's why I prefer a pair programming approach. I usually drive and I have AI watch and review as I go, and we discuss the merits of different possibilities or potential problem/bug root causes. I find this kind of pair programming approach works really well, but it is NOT an agentic approach.
AllTheWorldIsAPuzzle@reddit
AI has drawn a distinct line that shows where people in my organization are, skills-wise. People with poor or no coding skills look to AI as the savior that can write the code they struggle with. Suddenly they are producing code like rock stars.
But that code has issues. They try putting the issues back into AI and it throws out more code, which breaks other things they built. So they put the errors back in AI, generate more code, break more things. Repeat. Beyond AI telling them what to try, they have no skills.
So then there are the ones that CAN write code and debug. We get stuck having to analyze the code they "built" and try to make it work. Unfortunately upper management looks at AI as having unified the department because now the low performers are generating code and the high performers are fixing the code, where before the high performers were doing both.
For me, the AI is a good resource because I can ask it what the mindset was when it built the code. Asking the original person who generated the slop is like looking at a deer in headlights. And I am able to ask the AI questions on the code level that the ones who generated it originally can not.
ninetofivedev@reddit (OP)
Our team built the platform for tracking ai usage, developer productivity, etc at our company.
What we found is that it's not a straight line.
We have people at the bottom end of AI usage also at the bottom end of productivity. Presumably because they're just not working at all. It's also where almost all of the middle managers are.
Then we have people at the top end of AI usage also at the top end of productivity.
What does this mean? Well, mostly that there are a lot of people sitting at home all day not doing anything.
But besides that, the people who use AI have also figured out how to use AI to make themselves look better on the metrics.
I actually hate the platform that we built, but it's what the executives wanted.
AllTheWorldIsAPuzzle@reddit
I'm on the fence. I don't blame AI for the crap that has come up, it's just a tool. What sucks is my workload has gone further through the roof because I'm stuck diagnosing mountains of slop. Before it was looking at piddly changes and stuff like that that the low performers were only good at doing, all the while building and testing my own work.
I will say I am happily using it for the "back of my head" ideas I've had over time but do not have the time to build them. I can work with it like what was intended: give it context, ask and answer questions, validate results, look over code and help dial it in.
It hasn't helped with our legacy systems. A decade of adding improvements and patching from multiple sources with little to no documentation has created a complex mess that AI just can't trace through. But hey, neither can the former low performers, so I don't blame AI. I'm stuck on the legacy systems while the former low performers generate AI code for new projects, and I only get pulled in when they hit that "last 10%" of the project where they can't get the numbers to work. And as we all know, that last percentage is typically where the bulk of the work is.
djnattyp@reddit
"productivity" = how many clowns can fit in the car
StTheo@reddit
I go back and forth. Sometimes it does amazing work and simplifies otherwise repetitive tasks. Sometimes it makes up its own rules and I have to go back and clean stuff up.
I think (hope) we’ve moved past this “AI will replace devs” phase. At least, my company’s gone that way.
zangler@reddit
Stop being a non-believer and be a technician, an engineer, a pragmatist. You are then in a much better position to correctly leverage AI in meaningful ways... but you will need to catch up.
Even those of us that lean in massively find the topology moving so quickly it is extremely hard to keep up.
frankster@reddit
Fuck I'm every one of these!
mx_code@reddit
Ai is better than me at acting like an idiot savant with zero critical thinking
Come on man, give yourself some credit. This sounds like when spreadsheet software first came out and someone said “i can do math quickly but the computer does it quicker”
ninetofivedev@reddit (OP)
When Google came out, it was amazing how good it was at providing you information based on search terms. None of the other search engines could even compare.
Despite that, people still take pride in their ability to use Google effectively. This phenomenon is not that uncommon.
I used to work construction way back in the 90s. People took pride in their ability to use a nailgun.
Welders take pride in their ability to use an arc welder.
mx_code@reddit
Yes, people take pride in mastering the skills.
Over time the skills that are valuable change; irrespective of that, the one thing that can't be replaced is critical thinking.
You are not making any point though, technology automates skills and???? It’s not the first time in history this happens
SansSariph@reddit
I feel like you're just angrily agreeing with them
AI automates some skills, the skills to do the work change, judgment and expertise still matter
Pretty sure that's what you're both saying
Fast_Age_775@reddit
It does the donkey work better and so much faster, which can't be ignored, but it cannot substitute for the virtues, which are irreplaceably human. Prudence to know what to do in certain situations, to check its conclusions, to prompt it properly, to ignore its output when relevant, and to know how to find what's actually good in it. Also to know what not to use it for. Humility is also needed to interact with it well: first to understand when it's genuinely found flaws in ideas, and to be willing to accept that they need to be improved upon or abandoned. Since its outputs are very formulaic and easily identified as bot-generated, the wording often doesn't convince. So human temperance is needed to overcome that: don't use it for everything, only what it's good for.
hypernsansa@reddit
Please explain how you expect developers to be prudent when they've outsourced their skills to a bot? Some orgs don't even let their devs write code themselves r n. How the fuck are they going to be in a position to critique any outputs?
Fast_Age_775@reddit
The whole point is that you don't outsource your skills. You still have to learn everything. Only once you've done that - and understood these virtues at least theoretically - should you begin using AI at all. And when you do, start on non-critical tasks to learn prudence. That's what I think. And learn some manners.
SansSariph@reddit
I think the same. Love the focus on virtues. I have been saying "values" a lot recently. We can't outsource human judgment or accountability, and the judgment required to know when the tool is useful vs misleading (and even when it's misleading, there is often still a nugget of value to extract) is so important.
hypernsansa@reddit
Skills have to be honed and maintained continuously. Stop practising anything and you'll get rusty. Not sure why I have to explain that to you.
Incredulous sellouts who seek to erode our profession (like yourself) don't deserve manners.
ashultz@reddit
People who are using AI for their junior and mid-level career right now are never going to be that prudent, some of them will just get fired when they can't become seniors and others will go through a really hard process of realizing they know nothing themselves and forcing themselves to learn. Those are both going to absolutely suck to go through. I'm so glad I did not have a knowitallbot to wreck my own learning.
neuronexmachina@reddit
I have no idea how I'd survive as a junior in the current environment. The prudence you describe is something I only learned after years of hard experience.
Fast_Age_775@reddit
That's a great point. I think mentorship is crucial. Also you can study these virtues to some extent (currently not taught in schools, as far as I'm aware). Nothing beats practical experience though, so perhaps allow them to mess up and learn on non-critical tasks. Success there determines graduation to the serious stuff. I think that is one way around it. It's tough for sure.
SplendidPunkinButter@reddit
Bot post
false79@reddit
Definitely writing a lot less code than before and more reviewing.
However, on occasion there are problems complex enough that assembling a prompt would take more effort than just rolling up my sleeves.
MCPtz@reddit
Started in Feb 2026 with Claude 4.6 and whatever Codex was available, our team noticed LLMs were useful.
Bottom line: As long as you don't turn off your mind, you can use LLMs to great advantage.
I've definitely spun the tires (tokens?) and wasted half a day, where then during a break, I have an epiphany and change directions. The problem solving still feels about the same to me, and much of the tedium has been reduced, albeit, a lot of my "new" work is fixing the overly verbose, and straight up odd, output from an LLM.
CaffeinatedT@reddit
Surely after 20 years you had a bit more to offer than knowing how to use a terminal, read code, and how to use grep?
ninetofivedev@reddit (OP)
I'm also really good at tricking companies into paying me way too much money to simply sit in my terminal all day and play dig dug.
le_dod0@reddit
I thought I was the only one. Stealing paychecks 10 years and counting...
ceirbus@reddit
Dig dug, my man - you’ve just spent my day for sure
corny_horse@reddit
To be fair, AI has a very good track record of tricking companies into paying way too much money to sit in a terminal.
CaffeinatedT@reddit
Spoken like the true professional.
Own-Chemist2228@reddit
I know people with 20 years experience that barely know how to read and use a terminal. So let's give OP some credit.
BusinessWatercrees58@reddit
I guess I hear you but we're moving into a new era. Bragging about being able to scan a codebase quickly or get around a terminal quickly will sound very similar to how being able to do large amounts of arithmetic by hand quickly will sound today. Why spend time doing long division when you can just use a calculator and get on to solving actual problems?
ninetofivedev@reddit (OP)
That is the point of the post…
suborder-serpentes@reddit
Businesses have never rewarded the traits of a talented person. They have rewarded visible, measurable changes. So I don’t feel less valued.
Own-Chemist2228@reddit
True, but the measures are often subjective.
One of the great ironies of our profession is that our craft deals with logic, numbers and purely objective machines, yet we have never found a way to objectively measure the performance of the people doing the work.
Educational_Ad_6066@reddit
"I used to think it wouldn't be better than replacing google, but now I don't have to: search text, find recommendations, get results from a query I would have put into google search to find on stack overflow"
I dunno, man. Sounds like you just use it like google with a local file context
ceirbus@reddit
Unfortunately for AI, it is yet to be able to create its own guardrails with the .md files, and it always does dumb stuff like taking credit for my prompting and commits
It only knows what WE have taught it over the life of the internet; it can't do anything we can't do
It's not more resourceful than I am; it's only faster at small greenfield tasks I could do about as fast. I'm finding, even as I develop, that it's constantly hallucinating, and a lesser dev would not know how to steer it as quickly. We see this in code reviews: seniors pass the AI review the first time, while anybody with less experience who couldn't have done the task without AI doesn't get there in as few cycles
Anything brownfield is just back and forth telling it what to fix, how I would have already developed it. I like that I can fire off a prompt mid-meeting to keep working, but this isn't replacing me; it's just another interface that's replacing my IDE
AlexisHadden@reddit
This has been my experience as well. Try to create a skill or reusable prompt and the more I use it, the more I notice that the LLM will randomly ignore parts of the skill or prompt. Break a large task (convert 30 docker compose stacks to podman quadlets) into chunks and it becomes pretty clear as the LLM gets something different wrong with each chunk, or ignores a different part of the same prompt with each chunk. The fact that you have to inject randomness into the model to keep the output feeling ‘novel’ is also the bane of using LLMs like this.
I have some logic in a side project that can hook up a PTY to a network protocol, and had it create a couple more protocol handlers, done separately. In one case, it followed the existing patterns and reused the code as I would have, so I just had to iterate on the gaps. In another case, it inverted all the components, and basically copy/pasted a couple hundred lines of code, slightly tweaked it, and bolted it into the wrong part of the pipeline rather than use the existing pipeline component. Wound up deleting nearly half the code it generated because it wasn’t even needed once I fixed the weird anti-pattern it had created.
The more specific you can be, the more consistent you can get the results (to a point). But the thinking you put into that step also makes you faster when it comes time to write the code, as you already know the shape the code will take. So the more you actually design before you write, the less the LLM actually speeds you up. The more you YOLO, it can generate code faster than you, but it’s also likely to generate a ton of code that went down the path you wouldn’t take, and so now it’s a gamble if it has helped you, or just made things worse.
gpbayes@reddit
My company expects me to be a full stack software engineer, a data engineer, a data scientist, and a devops engineer. I trained for 7 years to be a data scientist. I don’t know JavaScript, like at all. Codex does ALL of my frontend programming as well as most if not all of the backend programming. Realistically I haven’t written a line of code in going on a year. If you put me in an interview now I would most likely fail it, that’s how much my programming has degraded.
But I am efficient. I’ll have 4-6 instances of codex running at the same time. I’ll have multiple PRs in a day for someone to review. It’s crazy how much more productive I am now. I give 1-2 week timelines but am usually done in like 45 minutes, if that. Not because I’m sand bagging but I want to make sure the shit works and isn’t slop. I’ll use codex to review PRs as well and it catches things I never would have found. This is a new era of coding. I feel bad for people looking for jobs because now a 2 man team can operate as a 10 man team.
No_Software8474@reddit
It is extremely valuable when it comes to understanding codebases and pointing out the pros and cons of architectural decisions if you understand how to prompt it correctly
FarYam3061@reddit
AI just makes me faster, not better. Which is good because I have very limited time to do IC work.
markvii_dev@reddit
larp
Void-kun@reddit
I agree. Part of me was worried about losing my coding edge, but my system design and soft skills have gotten sharper.
No-Economics-8239@reddit
Faster, certainly. Better is subjective. As you point out, we've been making ourselves faster the entire time. Automation and open source and multiple monitors and IDEs have all undoubtedly made us more productive. The challenge we've always had is how do we measure that?
Productivity remains a very subjective measure. And if it is just based on how we 'feel' is that the correct metric? What metrics are you using to measure success?
One of the challenges I've grappled with my entire career is trying to maintain the code bases written by others. If we aren't really writing code, how well do we understand it? Do the agents understand it better than us? What sort of long term success can we expect with that sort of model?
TastyToad@reddit
This is a weird set of "things" to take pride in. Especially while claiming 20 years of professional experience.
Own-Chemist2228@reddit
No different than a veteran surgeon that takes pride in doing quality sutures.
They also know many other advanced concepts, but the basics are still important to the outcome.
AggressiveResist8615@reddit
You're obviously not very good at reading the post it seems. AI is actually good at that.
ninetofivedev@reddit (OP)
The other things I'm good at that I'm proud of:
AI isn't good at those things.
TastyToad@reddit
True paragon of our profession then ... And an exemplary human being on top of that ...
roger_ducky@reddit
It can do research quickly sure, but it can’t keep the full results “in its context” all at once.
You still need to keep track of the design and architecture.
ImportantPoet4787@reddit
Unless it's super simple, I can usually write much better, more concise code than Claude. I have found it works to let it write a first pass, then go back over it and redesign/rewrite parts of it. It lives to duplicate code.
floobie@reddit
In my experience, the good coding models can indeed read unfamiliar code faster than I can. But, with a tiny bit of experience in any codebase, I build up a pretty quick mental model of where to look.
The LLM needs to start from the beginning every time (or barf a transcript of all prior inputs and outputs into its context so it can fake having any kind of working memory). I can just be like “oh we need to fix some validation logic for this feature? cool it’s probably in this file in this service grabbing a regex from this table”.
shadowisadog@reddit
In my experience the principles of good software engineering still apply. You still need to ensure that code is modular and still need to ensure tests are written and actually valid.
I use AI like a scalpel, doing a little at a time with very targeted changes, and I take time to define what the code needs to do in detail. If you give it garbage it will produce garbage. I don't like to give it huge high-level tasks because those never seem to produce great results. Your hands still need to be on the wheel, guiding it toward proper practices.
I find myself spending way more time testing and verifying the code works. I use a variety of tools to check things like security and performance. I find if properly applied this has resulted in better quality code with AI assistance. But I'm not allowing my AI agents to run around massively changing huge parts of my code base like a slot machine.
MagnetoManectric@reddit
I dunno man. This reads like every other astroturfing post on here. you've hit all the beats. Double digit YoE, "I was a huge skeptic, now I am a believer", a list of fairly trivial tasks, and finally of course "i've not written a line of code in x months".
The only thing you missed is mentioning a specific brand and version number.
This isn't really going to convince anyone who's still skeptical to change their mind, but I guess it does add to that sense of normalization and inevitability.
charging_chinchilla@reddit
I've come to grips with the fact that AI is already better at programming than any human, and it's getting exponentially better by the day. It may already be a better software engineer / architect too, and if not it won't be long before it is.
The only benefit we offer is accountability. At the end of the day, businesses need someone to hold accountable for the software they release. Shareholders won't take "you're absolutely right!" as an answer. Neither will CEOs, or their VPs, or their directors, etc, so there will always be some amount of humans in the loop who have to review the stuff AI is cranking out and be responsible for it. There is value in that. Unfortunately there's also significantly fewer jobs in that world, and the expectations for those jobs will look a lot different than the expectations for a SWE do today.
LemonDisasters@reddit
No it isn't lol
You guys need to stop reading marketing from AI companies and watch actual skilled developers work on novel systems; the doomerism is convenient only to those companies
Intrepid-Ostrich2226@reddit
Looks like we have no idea of AI existence, never used it, so need more posts about it.
ninetofivedev@reddit (OP)
It’s Wednesday. Go ahead and take the day off reddit
adg516@reddit
thread completely void of egos as usual
ChickenSaladHoagie@reddit
I'm in a similar boat, although I take some comfort in the fact that I don't think AI is close to replacing human contextual judgment.
A lot of AI-assisted codebases seem to have their fair share of issues: several methods for seemingly the same thing, chunks that look irrelevant but somehow break things when deleted, deviation from standard patterns, etc.
So my approach to AI encroaching on skills like scanning codebases and reading quickly has been to double down on the judgment calls those tasks inform.
Thin_Driver_4596@reddit
AI is really good at small tasks. If you can provide it context and the general overlay, it will most likely do the task faster than you. And that is the most important part.
You can let AI take control of the flow, but from my observation, it is incentivized to fix the bug rather than to solve the problem. It takes lots of shortcuts and wrong turns just to fix the immediate problem. That is alright when the problem itself is small, but there are cases where bugs reveal fundamental issues in assumptions in the codebase. These typically require refactoring and reimagining the solution space, but with how it works, you'll likely not go through that process and not refine your domain model.
apartment-seeker@reddit
Reading faster than other humans still matters.
Not to come across as overly negative, but IDK why one would ever take pride in these kinds of skills. To me, these kinds of things are boring and just tools in aid of work that's more interesting and more real.
Well, it does the above better than me too, but I never liked using TUIs when I didn't have to, or using git to find things.
It also obviously has much more knowledge about programming and software internals than I do, but of course, its ability to apply those things effectively without human guidance is highly questionable.
IDK that AI does anything "better" than most of us here, nor do I think it has to in order to be useful and legitimately reduce the demand for software engineers.
EENewton@reddit
I find AI is really good at all tasks where summarizing or synopsizing would be needed.
In other words, if I reasonably think I could find a stack overflow conversation about a pattern or a solution to a problem, the AI seems like it can do a decent job of synopsizing that back to me.
Whenever I've tried to ask it a really advanced question - solving certain kinds of advanced networking problems, for example - it will spit out YouTube tutorial-level solutions.
I haven't yet seen the gestalt where AI is suddenly intuiting things about code people haven't intuited before.
Often when I talk to people who are stunned at AI, they have a very low estimation of just how common certain kinds of problems and situations are. In which case, yeah, "how'd it do that????!!"
"Because you're not asking very original questions, Bill."
KandevDev@reddit
the trick i have made peace with: the things that distinguished us 5 years ago are not the things that will distinguish us 5 years from now. scanning a codebase fast was a 2020 superpower. taste in choosing what to scan, what to ignore, and what intuition to override the AI on is a 2026 superpower.
the muscles that compounded for the prior decade are not the same muscles that compound for the next one. that is genuinely sad if you got attached to the old muscles, but it is also just how skill evolution works in every other discipline. carpenters who got fast with a hand plane felt the same way about power tools in the 60s. the work is still the work, just shaped differently.
DeterminedQuokka@reddit
So AI can do all those things. It can't do them better than a human. You just need to get good at the far end: it handles the easy stuff, but it isn't good at edge cases. AI gives you the first 90% of the answer, then you have to find the hardest 10%.
Focus on getting really good at that part.
trakdtor@reddit
There are rumors that AI won't get better than it is now
ninetofivedev@reddit (OP)
Well this isn't the first time they said that, so I guess we'll see. Supposedly GPT 4 was the best it was going to get. And then it got a lot better. Like a lot better.
trakdtor@reddit
They are struggling at the moment
ninetofivedev@reddit (OP)
There is supposedly this super model waiting to be released because it's "too powerful".
Do I believe this is anything more than hype? No. I don't. I'll wait and see.
But I find your statements completely unfounded and unsubstantiated as well, so I can't say I agree with you either.
trakdtor@reddit
Mythos is a scam, that is correct
chickadee-guy@reddit
Skill issue on your end, modern AI has not progressed in any meaningful or measurable way. I still run laps around all the AI boosters at my work and I only prompt to meet the quota
Ill-Recognition287@reddit
I'm in the same boat in terms of having cultivated the same skills, but I don't believe they are cultivated in vain. In a way, it is still highly useful that you developed those skills in the first place. I also believe that it is worth it to continue to maintain these skills.
I won't say that AI is just a slightly better Google, because that would be utterly dismissive. I completely understand that with AI you can do many things faster; however, you must approach it with caution. Not because of quality per se, or even hallucinations, but because improper use will stunt personal growth and promote cognitive debt.
You list:
Basically, my point is that we have a culture of toxic productivity where in many cases the only metric considered is pure throughput, and learning and self-growth aren't counted as productive even though they certainly are.
AI progress can go one of two ways: either we will never need to learn how to code again, or we will still need to know how to code. Right now I can only see the latter being true, and if the latter is true, we should never stop improving ourselves, which sometimes means compromising on speed or quality (an AI may well produce something better than you; I wouldn't deny it). These skills are still worth learning; they're not made obsolete just because AI does them faster.
kennethbrodersen@reddit
All of the things you mention.
I am visually impaired (5% eyesight). I let AI do the coding, parse log files to help me debug issues, and generate the YAML (getting indentation right sucks when you rely on a screen magnifier). It also helps me get an overview of complex code bases and documentation.
Instead, I am freed up to do the things I am really, really good at. Understanding complex cross-domain problems and exploring/implementing solutions that bring value to our business!
A year ago, I was basically limited to backend work with C#. Visual Studio was the only IDE that worked well enough with my accessibility tools.
Now I am playing around with mobile applications, Linux server management and yes - the "blind guy" is having quite a bit of fun with frontend development.
All of that is happening while I am moving towards a new role as enterprise architect.
I haven't had this much fun with software development before!
TheOwlHypothesis@reddit
The things I'm proud of doing well (systems-level thinking and design, integration of complex systems) are the things AI struggles with the most at the moment (although it's easier than ever to get it to help). So the other stuff is just a big cherry on top
Grep? Command line?
Isn't that stuff literally in your first CS lab in school? Lol sorry I know it goes deep, but I didn't know anyone who was particularly proud of THAT aspect of their skills/work.
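For what it's worth, the grep-foo being riffed on here is more than the CS-lab basics. A hedged sketch of the "wide net" style the OP describes, with hypothetical paths and identifiers:

```shell
# Toy demo: build a tiny hypothetical source tree, then cast a wide net
# with one recursive, case-insensitive search for several identifiers.
mkdir -p /tmp/grepdemo/src
printf 'def load_config():\n    pass\n' > /tmp/grepdemo/src/app.py
printf 'settings = READ_SETTINGS()\n'   > /tmp/grepdemo/src/util.py

# -r recurse, -i ignore case, -n show line numbers,
# -E extended regex so | works as alternation
grep -rinE 'load_config|read_settings' /tmp/grepdemo/src
```

The alternation matches both spellings in one pass, which is the breadcrumb-hunting pattern an agent (or a grep veteran) reaches for before reading anything in depth.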
ninetofivedev@reddit (OP)
I didn't realize how many people would need a full in-depth analysis of my statement when I said this.
The tools available to software engineers? I used those better than most software engineers.
obelix_dogmatix@reddit
I still haven’t gotten proficient at using it at large scale. I think of it as someone else making commits. And I can’t review shit unless the commits are small. Imagine someone submitting a PR for thousands of lines of code across a couple dozen files. Heck no. I need to get better at guiding these agents.
AppropriateRest2815@reddit
I'm 100% with you and these are my strengths as well. My greatest weakness is that I'm a horribly slow adopter of all things new, but I've been converted, kicking and screaming. My new 'skill', for whatever it's worth, is recognizing some of AI's idiosyncrasies in code reviews faster than others will (it constantly ignores code quality issues and is hyper paranoid about hash string vs symbol keys). It's not too useful but I take my joy where I can find it these days.
sevah23@reddit
Be proud of the things you build, not the tools you used to build them.
another_dudeman@reddit
I'm an LLM hater but this is one of the few legit things it is good for.
therealslimshady1234@reddit
Clown 😂