AI and nostalgia of the times before
Posted by cocaine_kitteh@reddit | ExperiencedDevs | View on Reddit | 143 comments
We had a meeting among experienced colleagues today at work, and we were asked how we feel.
One colleague is really into AI and agentic coding. He replied that he feels nostalgic for the time before AI, and that AI has completely changed how he works. He then gave the rest of us a kind of warning: many of us keep working like we always have, but the change is big and it's coming for us.
I still have not used AI at all at work. What do you make of this statement? Does it resonate with you?
Serializedrequests@reddit
While I've seen it do amazing things, somehow that hasn't amounted to much time saving for me. The issue is that everything it does has to be checked at my company. This is non negotiable. So if you offload a task, you'll just pay the cost in review.
Junior-Procedure1429@reddit
We have completely automated code review that runs through DeepSeek R1 (hosted in the building), and if a commit doesn't pass all layers of validation it won't even reach the Plastic repo. Most members absolutely hate all this, because nothing goes unseen anymore and a bot prints the automatic review results in the team's Slack for everyone to see. Only then will a lead review the commit and manually approve it, after running their own unit tests.
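A minimal sketch of how such a gate might look. The endpoint URL, model name, and "VERDICT" convention are all invented for illustration; the comment above doesn't describe the actual implementation:

```python
import json
import urllib.request

# Assumption: the self-hosted DeepSeek R1 exposes an OpenAI-compatible
# /v1/chat/completions endpoint. URL and verdict format are made up here.
REVIEW_URL = "http://llm.internal:8000/v1/chat/completions"

SYSTEM_PROMPT = (
    "You are a strict code reviewer. End your review with exactly "
    "'VERDICT: PASS' or 'VERDICT: FAIL'."
)

def review_diff(diff: str) -> str:
    """Send a diff to the hosted model and return the raw review text."""
    payload = json.dumps({
        "model": "deepseek-r1",
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": diff},
        ],
    }).encode()
    req = urllib.request.Request(
        REVIEW_URL, data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

def verdict_passes(review: str) -> bool:
    """True only if the model's final line is the PASS verdict."""
    return review.rstrip().endswith("VERDICT: PASS")
```

A pre-receive hook (or CI job) would then call `review_diff` on the commit's diff, post the text to Slack, and reject the push whenever `verdict_passes` returns False, leaving the lead's manual approval as the final step.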
Serializedrequests@reddit
We are interested in automating code review as well. We have something that just pushes the output to Bitbucket comments. Engineering it is harder than expected, but it's better than nothing.
I'm not sure why Atlassian isn't trying to sell a solution here.
thefightforgood@reddit
hwat?
Sounds like there are multiple questionable/anti-patterns in your SDLC that would drive me insane.
Junior-Procedure1429@reddit
By ‘manually’ I mean “with human action to start” and final review to approve.
__loam@reddit
80/20 principle.
Technical_Gap7316@reddit
Were you not reviewing your PRs before AI?
false_tautology@reddit
Think of it this way. Now you have to do two code reviews. You reviewing the AI generated code, and your coworker reviewing your PR.
ninseicowboy@reddit
I completely agree on the time saving - it slows me down quite a bit because I double check every little thing it spits out, and it gets tedious.
Honestly it’s a huge waste of time sometimes. Even the most “powerful” models hallucinate on simple tasks. If I didn’t have some experience, I wouldn’t have any intuition to catch these hallucinations, which is an issue.
The only thing I disagree with is that you won’t improve your skills. I think the assumption behind this statement is that the user is prompting “implement this”, copying and pasting it, then opening a PR.
Just replace “prompting” with “searching on stackoverflow” in my previous sentence. The realization here is that this has been an option for lazy devs, even before LLMs.
Smart people will always find the time to learn why, regardless of the tooling. And this will always take more cognitive effort, which is why “lazy” devs will open PRs without understanding what they wrote, AI or not.
OddWriter7199@reddit
Well said. Some would like to force us to train AI at the expense of our own brains.
tparadisi@reddit
>> I still have not used AI at all at work. What do you make of this statement?
Not even the chatgpt?
cocaine_kitteh@reddit (OP)
No, not even that. Much of my work is architecture, concepts etc., and helping juniors with their questions. I don't do that much boilerplate coding. But yeah, I am a late adopter.
horserino@reddit
I don't know, in my recent uses of Gemini Pro (through my workplace's subscription) I've felt that it is not all that great at writing code but is absurdly good as a learning assistant.
I was able to become productive in a totally unfamiliar part of the codebase in minutes. It can read through huge amounts of code and give me a high-level overview and point me in the right direction, and do that in seconds. It feels like a superpower.
The code writing part is meh. It deals with annoying stuff decently but isn't great at larger scopes or non-trivial stuff, so you end up fighting the AI more than what it would've taken to write the code in the first place. And code review fatigue doesn't help with AI-written code.
But the code reading and explaining part? That feels like something out of sci-fi. I don't miss browsing through tens of SO questions to finally find something similar to my problem with no clear answer lol.
vinny_twoshoes@reddit
I strongly agree with this. Don't use it to replace yourself and your skills, use it to make you better. It's been a much more positive, constructive, and useful mode. It's an inversion of the usual prompting relationship I'd been doing for a couple months.
I've added something like this to my base prompt: Your primary goal is not to complete the request, but to make me a stronger engineer. Use the Socratic method to ask questions and push me in the right direction to build better solutions.
It's been awesome! I feel more engaged, more productive, I produce higher-quality work, and all the worries about being replaced have melted away. Instead of my skills and motivation rotting and being replaced by a shitty facsimile, I actually feel stronger at my craft and, if anything, indispensable.
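Wiring a base prompt like that into an OpenAI-style chat payload could look roughly like this (the function and client details are hypothetical, not what the commenter actually uses):

```python
# Hypothetical sketch: pin a Socratic "tutor" system prompt ahead of every
# request sent to an OpenAI-compatible chat endpoint.
SOCRATIC_PROMPT = (
    "Your primary goal is not to complete the request, but to make me a "
    "stronger engineer. Use the Socratic method: ask questions and push me "
    "in the right direction to build better solutions."
)

def build_messages(user_request, history=None):
    """Assemble a chat payload with the tutoring prompt always first."""
    return (
        [{"role": "system", "content": SOCRATIC_PROMPT}]
        + (history or [])
        + [{"role": "user", "content": user_request}]
    )
```

The same idea works in tools that support a persistent base/system prompt (e.g. custom instructions or rules files); the point is that the tutoring instruction survives every turn instead of being typed once and forgotten.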
CpnStumpy@reddit
Side note: engineers that get irritated by the Socratic method taking offense at questions as though they're personal attacks need to GTFO I'm so tired of that bullshit.
martabakTelor6250@reddit
could it make us learn faster? if yes, how?
orangeowlelf@reddit
You're the first person I've seen who hits on the best use I've found for artificial intelligence. I use it to thoroughly cram my brain full of information about the tooling I need to learn to finish my tasking. I actually don't have it generate any code for me except for small snippets occasionally; where it really excels for me is teaching me things.
One of the biggest complaints I hear is that if you let AI think for you, then you slowly start losing your ability to think for yourself and I honestly feel I’m going the opposite direction.
Drinka_Milkovobich@reddit
Exactly. I work in this space, and coding training is kinda plateauing because we don't know what to do anymore other than train on more and more complicated problems, each one evaluated and rewritten by an expert over ~3 hours 🤷♂️
Even Claude (the best of the lot) gets very confused with larger codebases and niche knowledge. All of the agents get stuck in logical loops where they refuse to incorporate your feedback correctly.
We’ll probably get there someday but it’s at least a few years away and will likely take a new paradigm beyond LLMs (and beyond world models as far as coding goes)
kisielk@reddit
I tried coding with claude today in something I’m almost totally unfamiliar with (writing a vscode extension with typescript). I would say on the whole it helped a lot because it was able to churn out all the boilerplate and hooks for the extension that would have taken me hours or days poring through docs and blog posts.
But it also fucked up on really trivial things many times. One time it took the first 400 lines of package.json and appended them a second time as a nested attribute of the same JSON. From that point on, any time it wanted to make a change to package.json it would change the fields of that nested attribute instead, and it would never work. Took me a while to figure out what was happening and fix it manually.
Other times it would seemingly get stuck in a loop trying to make a change it had already made, or oscillating between two versions of a change. In my case it was loading a js file locally or via a CDN. It would implement one solution and then say “ok great now let’s improve this by…” and switch it to the other one, it cycled like that a few times before settling.
I think now that I got the basic skeleton of what I want in place I’m going to stop using it because it’s just getting slower and less precise as my needs are more specific and the codebase is larger.
Would I use it again? Probably if I was just starting off on something new and unfamiliar. Once I’ve learned the particular coding domain I don’t think I’d benefit as much. It takes a long time to go over the code it writes and fix all the nuances and understand whether the changes actually meet requirements.
Sea-Us-RTO@reddit
I've seen these before; I usually just undo everything and fully restart.
Under the hood, I wonder if a JSON parsing bug or a regex bug is what's causing it. Everything after the onset of the issue is also bugged, so I figure there must be some kind of mismatch between the position of the parser and the position of the backend interpreter.
w0m@reddit
I said that 2 years ago; and ~every 6mo my "it's years away" gets launched. The rate of progression is fairly staggering, I think.
thekwoka@reddit
There are still certain limits to what the current core design of the AI can likely reach. But the tooling around the LLM is definitely showing its power to maybe get a full LLM-based system to a point of much more capability.
But I think the next big step will be having the "model" made up of many specialized models that the "agent" selects between on context, instead of needing these general purpose models to handle everything.
w0m@reddit
I said the exact same thing 2 years ago :D Time is a circle.
thekwoka@reddit
Not a circle. It just hasn't happened yet.
Constant-Listen834@reddit
You need a model like Claude 4 to write good code. Gemini is significantly worse
vinny_twoshoes@reddit
This is like saying "you need a different brand of hammer to build good houses". It's not that there's no difference, because there is, but intelligent use of the tool is way more impactful than Gemini vs Grok vs Claude
Constant-Listen834@reddit
I mean yea you can get the job done with any but Claude is a significantly better model. It’s like moving from a handsaw to a table saw
vinny_twoshoes@reddit
Yeah maybe you're right. I guess at the current rate of change I'm not really indexing on model A vs model B, that type of knowledge becomes outdated in weeks. The skill of how to use them is more durable, and I'll wait until things settle down before worrying about which one is best
Constant-Listen834@reddit
Yea I feel that. That's why I really like that Cursor can select the model for me. It mostly uses Claude, but I don't even need to worry about the model, and that's a big plus.
peripateticman2026@reddit
Pretty much this.
WeakestLynx@reddit
So basically it is a substitute for code documentation
hangerofmonkeys@reddit
Or a drop in for where documentation should exist, but doesn't.
guack-a-mole@reddit
want to laugh? Try asking notebooklm to generate a podcast from a git diff between two branches, no other input.
Chumphy@reddit
“On today’s show we’re going to be talking about how the difference between..”
“Uh huh, Uh huh”
creaturefeature16@reddit
"This is about-"
"coding, and-"
"how these two GIT branc-"
"-hes compare"
God I fucking HATE Notebook "conversations". Such mind numbing drivel.
nomadluna@reddit
This is my favorite use. It's a great learning assistant, interview prep assistant etc. The code part can be frustrating at times (still useful) because I'm either a bad prompter or some of these models don't freaking listen.
horserino@reddit
I feel like it'll bring back the "full stack engineer" in strength, as it lowers the barrier of entry to different tech, languages, frameworks, ecosystems, etc.
griffin1987@reddit
Not really, at least not due to AI.
After 30+ years I've gone from QBasic, Pascal, C and C++, to PHP/HHVM (wordpress, Typo3, Drupal, ...), Flash(AS2/AS3), prototype.js, mootools, jQuery, ExtJS/Dojo/React/Angular, Cobol, Java, and recently things like nim, rust and zig, and dozen other techs, systems (BS2000 anyone?) I'm too lazy to all write out now.
It's always been like that.
Things get hyped and at some point people finally realize that they suck. Or they just die out, very very slowly. (the hyped things, not the people ...)
AI still sucks for coding. People whose argument is "boilerplate code" are doing something wrong anyway - why would you write boilerplate code all the time?
I'm nostalgic about game development though, because in the 90s I could just write some C with SDL or Allegro, compile it with DJGPP, and have something fun up in an hour or less. Now I've been working on a renderer in my free time, and it took me weeks (besides my full time+ job) to even get a triangle drawn with vulkan (yes, I could have used an existing engine or just copy pasted all the code, but then again, where is the fun if you do that?).
Things used to be simpler, yes. But to me, that's not due to AI.
The "issue" with AI, to me, is mostly that it's overhyped, and all the recruiters and CEOs are buying into that because they think they'll have a business advantage when they're "the first to do it with AI".
Remember blockchain? And what about LESS? How about Dojo and ExtJS?
The list goes on, basically forever. Yes, these things still exist. But they aren't as relevant anymore as they were made out to be by marketing.
AI is here to stay, but definitely not to the extent marketing wants you to believe. Never forget that the companies pushing all that AI stuff are also the ones selling it.
cocaine_kitteh@reddit (OP)
Though none of those had the societal impact that AI is currently having. Every article and discussion is about AI; the discussions about big data or LESS were not on that level.
Evinceo@reddit
I wonder if it will have a more severe impact even than social media on our wellbeing.
cocaine_kitteh@reddit (OP)
Yeah I wonder the same. Like people talking with AI instead of friends and so on.
__loam@reddit
The people selling AI do seem to have a lot of connections they can use to shill their products. Not sure about the actual societal impact.
griffin1987@reddit
Blockchain WAS in every article and discussion at some point. People talked about "NFTs of famous paintings" and the like.
cocaine_kitteh@reddit (OP)
Oh definitely. We had a company seminar, where a speaker was invited and she talked about how bitcoin will change democracy.
But even with that, we never saw it actually being used anywhere, or the likes of Apple, Microsoft, Meta etc. pushing it this hard. Or the impact it has on how people communicate, or the slop we see on YouTube.
griffin1987@reddit
Apple has apple intelligence. Microsoft has Copilot. Meta has Meta AI.
Yes, they're all pushing their products.
YouTube had tons of bots in the comment sections way before any LLM, and YouTube also had cheap "your daily dose of the internet" videos years ago already.
Doesn't seem to me like much has changed - the trash just comes faster, and there's more of it. But it's not that different from all the trash and spam that was already there.
Also: greetings to Germany from Austria :)
cocaine_kitteh@reddit (OP)
Sure, but they have their products because they see value, probably. Idk it still feels bigger than anything I've seen so far.
Greetings back :)
griffin1987@reddit
They have their products because they see money.
And yes, it definitely IS bigger - but so is Nvidia's market cap, and so is the world population, and tons of other things. Things always get bigger, especially in tech. Blockchain was also the biggest thing back when its hype was there. And "AI", or actually LLMs, won't be the last of it.
It could go the way of "VR" and basically be a niche thing, or it could go the way of "the internet" and "smartphones", and become (and keep being) an essential part of our daily lives - we'll probably only know for sure in a few years :)
Antique_Drummer_1036@reddit
To be honest, writing code was never really the bottleneck. If you tune AI properly, you can get pretty solid results. The real blockers in a corporate environment are always the meetings, requirement gathering, and architectural constraints.
Vamosity-Cosmic@reddit
It's good for saving time but that's about it so far. I think as it progresses it'll of course get better at saving time, and I'm willing to form some connections with LLM developers to try to utilize it as best I can for more private solutions. We should remember, while retaining some dignity and our philosophical beliefs, that you play to win the game, and in this case the game is to be as efficient as we can at solving the client's problems.
bupkizz@reddit
I’ve started having it do all the boring stuff. Today I connected Claude to Shortcut via an MCP server and wrote a slash prompt to ask me about the feature then it writes and updates the ticket and checks out the correctly named branch. Felt more like magic than using it to actually code.
Hziak@reddit
Here's my take - AI produces mediocre results and has been shown to be slower and to require a totally new skillset to manage. What it allows you to do is trade quality for reduced payroll at an extreme rate.
Does that sound familiar? Yup: offshoring! Even though nowadays that's a bit more complicated because of IP concerns and companies offshoring their entire capacity and knowledge base…
Why would anyone want this? Because the business cycle has dropped from decades to about 2 years. Hire a new CEO to fix things. The CEO cuts a bunch of corners to bandaid the problems, creating new long-term problems in the process, but looks great for the next earnings report and everyone gets a big bonus. Fast forward 12 months: the board notices the CEO made a huge mess of things. Fire. Repeat. If they can cut payroll and keep everything from imploding until year 3, they'll do it! By then, it'll be someone else's problem. Birb in the hand, 'ya know?
So yeah, change is coming, it's gonna be a stupid change, but it's coming. The arguments that AI will never replace devs are based on the misguided notion that CEOs actually care about code quality, when we have decades of evidence suggesting the opposite. I think what follows this change, however, is a business apocalypse that paves the way for the developer contractor revolution, where we get to charge $300/hr to fix the stupid shit they let AI do to them. At least, that's my hope…
creaturefeature16@reddit
This is spot on.
And it's also why I would never work for a company of a certain size. I'd much prefer to focus on small business while doing a side hustle... those smaller companies can't leverage AI, the same way they can't leverage offshore development. There's a huge amount of opportunity out there if you ignore the corporate world, which we all should.
yodog5@reddit
Could you provide some advice on getting started as a contractor or developing on the side? I've always wanted to, but offering my skills on race-to-the-bottom job boards doesn't sound like a great use of time
griffin1987@reddit
As someone with 30+ years as a programmer across lots of things, as well as C-suite experience, I 100% agree with what you wrote. I think you make a good, and very important, point: CEOs (usually) don't actually care about code quality.
MoreRespectForQA@reddit
They don't care until they become Carlos Abarca.
Just like Boeing execs don't care about rigorous design until a window falls off.
Round_Head_6248@reddit
Dude, the AI pushing execs will just hop over to another company. Why would they care what their decisions did to their old company?
cocaine_kitteh@reddit (OP)
That's a great point you're both making, and one I also agree with. I don't get people who think otherwise, as if CEOs would have an issue with pushing out extremely bad software if it generates money at a lower cost.
MoreRopePlease@reddit
What skills should I be honing now in order to be able to take advantage of that inevitable day?
Hziak@reddit
Do as many recovery, refactor and migration projects as you can fit on your resume, I think. I’ve seen more than a few CEOs in my day think that software gets fixed by some kind of wizard recovery consultant the way a plumber fixes clogged pipes… Be that snake oil salesman and I think you’ll have a bright career in the wake of the first wave of big AI disasters.
PoopsCodeAllTheTime@reddit
It's a pump and dump scheme with extra steps.
mmcnl@reddit
AI can write books, but it will never write Lord of the Rings. The value of AI should be neither understated nor overstated.
In many ways it's Google + StackOverflow on steroids. That's not human level intelligence but it's super useful. It accelerates your coding by a lot.
Imo what AI is not great at is writing production code. It never fully understands the context of your code, and the output is never really what you need.
But that doesn't mean AI isn't immensely valuable. A faster Google plus a pair programming partner is amazing.
Constant-Listen834@reddit
He’s right. I’m at a company that moves fast due to the nature of our lifecycle (unicorn startup) so pretty much our whole engineering org is using AI heavily and we were a very early adopter.
It does completely change the development process. I’m not here to say it’s for better or for worse (AI is controversial) but I can confirm that it’s very different, and the time to start learning it is now.
tugs_cub@reddit
If “it’s not necessarily a faster delivery,” and the code quality is worse, what are you getting out of it? Are you saying you are trading off code quality for design quality?
I don’t mean this as a judgement of your process, I’m just trying to understand what you’re saying.
Constant-Listen834@reddit
Honestly it’s just way less mental load. My days are way more chill when I outsource the busy work to AI. I get off work way less tired.
marshallandy83@reddit
Not directly, but poor-quality code costs a lot more to modify/maintain.
Constant-Listen834@reddit
Imma be real: as long as the code has high-quality test coverage, the effort to maintain amazing vs. OK code isn't that different IMO. But yeah, that'll be another hot take here.
teslas_love_pigeon@reddit
Unless you're doing mutation and property-based testing to verify the quality of your tests, code coverage isn't the boon people think it is.
Especially with poor-quality tests.
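To illustrate what mutation testing catches that coverage alone doesn't, here's a toy example (real tools like mutmut for Python or PIT for Java generate the mutants automatically; everything here is illustrative):

```python
def is_adult(age):
    """Code under test."""
    return age >= 18

def is_adult_mutant(age):
    """A mutant: the >= comparison flipped to >."""
    return age > 18

def weak_suite(fn):
    """100% line coverage, but never probes the boundary."""
    return fn(30) is True and fn(10) is False

def strong_suite(fn):
    """Adds the boundary case age == 18."""
    return fn(30) is True and fn(10) is False and fn(18) is True

# weak_suite passes for BOTH the original and the mutant, so the mutant
# "survives" - proof that full coverage said nothing about test quality.
# strong_suite "kills" the mutant: it fails on is_adult_mutant.
```

A surviving mutant means the tests exercise a line without actually pinning down its behavior, which is exactly the failure mode of hastily written (or AI-generated) tests with impressive coverage numbers.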
Constant-Listen834@reddit
I've never considered code coverage a good test metric.
Main-Drag-4975@reddit
This may be true if the bad code is expertly slotted into well-designed interfaces created by a highly experienced dev, but my experience with AI-generated code has been neither of those things, just crap piled on top of more crap.
Constant-Listen834@reddit
Yea I really think it comes down to the human generating the code. AI is just codegen at the end of the day.
nemec@reddit
Written by humans, right? Not by the creature whose goal is to write passing tests (and not necessarily accurate tests)?
Constant-Listen834@reddit
Yea definitely. I don’t like using AI much for testing
TooMuchTaurine@reddit
Can you briefly jot down the steps in your teams development process where you use a AI (ie the full SDLC from inception through to release) and what tools or processes you use?
false79@reddit
I've noticed from the amount of downvotes I get around here that people don't want to let go of the control they have in order to make AI work for them. The tooling available today is such that the learning curve is super low, and it can integrate into your flow, as opposed to the other way around.
But some people are unimpressed because it falls below their expectations. Really, it's a skill, an art, to break work into smaller units an LLM can accomplish faster than a human could. And if that still doesn't deliver, you can work with an LLM to get the clarity to break that work into smaller units, then have those stories auto-created in an issue tracker, and so on.
People don't have to go all in and hand off 100% of everything to an agent. You can offload as little as 5%, 10%, or 25% of your work, and over time it pays back huge dividends. If not releasing faster, then at the very least buying you some time to work on bugs.
Drinka_Milkovobich@reddit
The disparity between the vast majority of devs I talk to IRL and this sub's view is wild; I think it's a combination of Reddit contrarianism and maybe the freedom of anonymity. Really odd, given that the hundreds of devs I've worked with over the past decade in different domains almost universally describe AI as nowhere near replacing humans, but a great tool and efficiency gain.
false79@reddit
The real immediate threat is not AI replacing developers. It's more productive developers replacing one or more less productive developers - complacent ones who think this is not a threat to them. And these more efficient devs will charge the same or less.
Software developers are a commodity. 99% of anyone here can be replaced/subbed. (The punchline is that a lot of people believe they are in the 1% exception who would not be so easily replaced.)
Drinka_Milkovobich@reddit
Yup, and this is also why the job market for entry level and juniors is abysmal. I just don’t need them as much anymore, you know?
freedom2adventure@reddit
I have worked on a relatively large project using Cursor of late. Here are my observations. It is very much faster, but you have to treat it as an inexperienced dev and lay the project out with that in mind. I always make sure the LLM knows it is "an experienced senior dev in <language> with experience with <skill> and <skill>", then you set up a scope and an LLM.txt file in your project. The scope defines the project, your goals and your audience. The LLM.txt is for the LLM to leave notes and messages to its future self. This includes kdocs structure, build rules, and gotchas that have been a problem in the past.
I also set up a ref_docs folder and place in it the docs of whatever API I am using. I've also added docs that I know have changed since Dec 2023, when most LLMs have their knowledge cutoff. I then run a "sprint" with the bot for each new feature, making sure it is using git. Overall, about a 5x multiplier. The kdocs are important to tell the bot not to mess with classes or functions unrelated to the current scope of the sprint.
ie:
/**
 * @description Load all family members from the repository with enhanced error handling
 * @core_feature family_member_management
 * @do_not_edit_unless related_to:family_member_management
 */
Overall, my experience is that the bot can be really stupid at times, and if you don't know what it is doing you will have a rat's nest of code very quickly. But for picking one feature or one bug, analyzing the current code, and developing a plan to log, update and test something, it is great. You just have to keep an eye on it so it isn't adding one feature and removing another.
Long response to get to your answer. Yes, change is coming. Yes, you should be using modern tools in your toolbox. Learn to program well so that you can help all these places fix their future spaghetti code. The future is bright for bug disclosure and mitigation. Would I go back to coding without it? I doubt it. It can speed up the initial MVP, but it is that last 20% that needs to be handled with care and experience.
Aromatic-Low-4578@reddit
I miss sitting down and just writing code. It's undeniable that I'm more productive with AI, but I miss the feeling of being alone with my code for hours. It's also just not quite as fulfilling to figure something out with AI's assistance.
I imagine this is similar to how machinists felt about CNC when it proliferated through their industry. It's undeniably more productive, but we're still losing something.
AlaskanX@reddit
It'll be funny if there ends up being demand for "hand-made code", like there is for many other things in this age of mass production.
Pleasant-Memory-1789@reddit
As quality continues to decrease, I think there will most certainly be some sort of demand for "hand-made code"
justaguy1020@reddit
Artisanal CRUD app. The bugs give it character.
creaturefeature16@reddit
I still sit down and write code, except now I don't need to do regex searches or SQL docs, which is arguably even more fun than it used to be! But, it's a balance.
LoneWulfXIII@reddit
Idk I’m not convinced it makes me more productive, but it has helped me learn some of the differences in technique and patterns as I’ve changed tech stacks. I mostly use it as why doesn’t this work the way I expect and for basic questions though.
I don’t trust it to write decent code from a macro perspective and it’ll take a lot more for me to get there if I ever do.
Nofanta@reddit
It takes away what many people enjoy about building software. What’s left is something different to be enjoyed by different people.
PoopsCodeAllTheTime@reddit
I type really fast, I use Neovim and coded my own version of some movement plugins, and I use macOS but miss window-manager functionality from Linux, so I wrote a Hammerspoon script to switch windows with my keyboard. All this to say that I put a lot of effort into moving quickly through ergonomics.
And.... I find AI not worth my time for nearly all tasks, it is the odd task that benefits from some LLM, but not by much.
Now you've got that other thread with research mentioning that people feel faster with LLMs when, in fact, they are not.
So my opinion is that LLMs feel somewhat nice, but mostly because people are really slow using their computers, and having a little thing that types for them across many files feels like a huge difference. In reality it isn't doing that much; it might be a small benefit, smaller than LSP-based auto-complete. Some people have just never used language server plugins, so using one for the first time can feel like an enlightening experience. Likewise, some people just never typed fast or managed their editor efficiently.
Constant-Listen834@reddit
Brother, when I use Cursor/Claude it literally writes thousands of words a minute; you absolutely cannot type faster than that.
cocaine_kitteh@reddit (OP)
I never found typing speed to be the limiting factor. We are not code monkeys.
Constant-Listen834@reddit
Sure but actually writing code still takes time, and now that part takes less. That speeds you up even if you only spend 5% of your time writing code
-Knockabout@reddit
It kind of depends on if writing the prompt + waiting for the agent to complete + review takes longer than you writing it yourself.
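That trade-off is simple arithmetic; a toy sketch with made-up numbers (all in minutes, purely illustrative):

```python
# Back-of-envelope break-even check for delegating a task to an agent.

def delegation_pays_off(write_yourself, prompt, wait, review):
    """True if prompting + waiting + reviewing beats writing it by hand."""
    return (prompt + wait + review) < write_yourself

# Plain boilerplate: 20 min by hand vs 2 + 3 + 5 delegated -> worth it.
# Subtle change: 15 min by hand vs 5 + 5 + 20 of careful review -> not.
```

The review term is the one people underestimate, and it grows with how non-trivial the change is, which matches the experiences elsewhere in this thread.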
Constant-Listen834@reddit
It rarely does in my experience, but it can for sure
jypKissedMyMom@reddit
Yep. If a class or function is simple, it’s faster for me to describe it to an LLM in a few sentences and fix up a few spots, than to type it out by hand. It’s literally saving me from the code monkey parts.
PoopsCodeAllTheTime@reddit
It's funny how everyone loved those reports saying the number of bugs scales with the number of LoC, and now everyone is fire-hosing code because it's just so easy to do so. It's easy to put two and two together and predict the direction of this agentic vibe coding.
griffin1987@reddit
"because people are really slow using their computers"
Agreed.
"Some people just never used Language Server plugins"
IMHO:
For Java, IntelliJ IDEA is better than e.g. VS Code + the Java language server.
For c and c++, visual studio (not code) works better than clangd/language server in terms of tooling, but clang or gcc are better for intellisense and the like.
For all others, an LSP is better than not having one I guess, so very much agreed on that.
Drinka_Milkovobich@reddit
idk I still feel like I write 95% of my code myself because agentic coding is just not accurate enough yet. Error rates show up >10%, which is way too high to just set it and forget it, letting it run wild merging PRs. I have to review the AI spaghetti code at the end of each run and decide if it’s made some serious error that looks correct because it knows how to mimic the rest of the codebase.
That said, there is no question that it has probably doubled my pace. I no longer spend hours blocked on something, because I can ask the AI and it gives a bunch of possible answers, one of which is likely correct. I then just have to use my brain to evaluate the shortlist. A similar thing applies when looking at a new codebase or another team’s work. Having that makes my job so much easier and faster that I don’t think I’ve used Google or StackOverflow for work in the past year.
I sometimes miss banging my head on the keyboard for a long time, reading docs and posts, trying and failing until I fix something, then feeling like a genius afterwards… but that’s not what we’re paid for. We will still have plenty of jobs, but they will be slightly different. Much more high level focus.
eddie_cat@reddit
I haven't been blocked on anything in that way in years. I also only used StackOverflow as a junior. It comes with experience. Eventually you just...know what to do or where to find the information you need to know what to do. You are likely short circuiting your own path to getting to that point by letting the AI do it for you. You are supposed to learn from that experience
Drinka_Milkovobich@reddit
Are you saying you haven’t had any blockers in years??? Not sure what you’re working on, but there are plenty of novel problems that require deep thinking and tradeoffs at most jobs at all levels, and having an AI assistant is unquestionably faster than my normal process. It’s not really the same kinds of issues as the ones I was working on 10 years ago, and it’s not really for “learning new things”, it’s for helping with problem solving.
StackOverflow was always a crapshoot because of stale information, but Google has always been a staple for finding documentation and information. AI has essentially replaced this part. It’s not like I’m asking it to write code, but jumping into a new codebase at a large company is much faster now. I don’t need to bug a random team with minimal documentation for hours or days to get answers.
maria_la_guerta@reddit
Yes. If you're not using AI to move at least 5% faster, you are going to be left behind.
It has flaws, but it is very good at basic things in all of our day to day workflows. Anyone who claims they are better at writing boilerplate code than an LLM trained on hundreds of billions of LOC is either lying or suffering from user error.
It is not all CEO hype like Reddit pretends, and I do agree it's a ways off from replacing people as a whole but it is a valid tool that all of us should have in our toolbox.
griffin1987@reddit
https://www.reddit.com/r/ExperiencedDevs/comments/1lwk503/study_experienced_devs_think_they_are_24_faster/
maria_la_guerta@reddit
Did you read the fine print? It literally states most of the users had no experience with LLMs or AI.
As I said, user error.
ghost_jamm@reddit
The study does not say that productivity definitely increases with familiarity. The very next line after what you quoted is:
Maybe it was “user error” but maybe the developers got worse the more they relied on AI.
maria_la_guerta@reddit
Before AI or after AI, no business has ever cared about how smart you are personally, they care about what you deliver. You may be the smartest dev in the room but if you are delivering the least you are the first on the chopping block.
I'm old enough to have seen the same arguments made about Stack Overflow when it came out, too. The point of these tools is to lower the barrier to entry to delivering value, which is of course going to require devs to know less and less nuance.
Betting against AI in the future is not smart when it is already causing rapid change in the labour force of today. To say people only seem faster with it because they're slower without is irrelevant when adoption is growing and it's not going away.
griffin1987@reddit
"the same arguments made about Stack Overflow"
Stack Overflow was never useful to anyone above a certain skill level, and it had (and has) lots of wrong information.
"The point of these tools is to lower the barrier to entry to delivering value"
NO. The point of these tools is to make the companies offering them money. Companies will offer whatever people pay for.
"To say people only seem faster with it because they're slower without"
Maybe you want to edit your post after rereading what you just wrote, or, if you really mean it like that, reread what others have already written, or let your "AI" explain it to you, because it seems that you either mistyped, or didn't really understand the comments you replied to.
maria_la_guerta@reddit
Wrongest thing you've said in this entire thread, and once again gives off very junior vibes for someone with "30" years of experience.
Absolutely true, you still need a good engineering sense to use SO or AI properly. Not debating that. Never disagreed with that.
By upping productivity. That is what they're selling and that is what companies are buying.
Didn't mistype anything. The study showed that the only dev with experience in cursor moved faster with than without it.
griffin1987@reddit
Stack Overflow isn't Google, and LLMs still hallucinate lots of wrong things.
"By upping productivity" - no. They don't make more money by upping your productivity - they make money by having their marketing tell more people how great it is. Companies are buying marketing hype.
maria_la_guerta@reddit
Is your point that AI is nothing but a rip off and everyone investing trillions into it is being ripped off?
Jeeze. We should tell them some guy on reddit thinks CTOs of the leading tech companies on earth are all wrong.
griffin1987@reddit
Their priority is to make money, and selling AI makes them a lot of money, so how are they wrong? :)
maria_la_guerta@reddit
McDonalds priority is to make money. Does that mean Big Macs aren't real? :)
ghost_jamm@reddit
I’m not saying that businesses care about how smart you are. But the idea that AI definitely increases productivity seems to be premature. And that is something I’d think businesses would care about.
I’d argue that it’s AI hype that’s driving changes in the workforce more than AI itself.
Again, I think a blanket assertion like this is premature. We frankly don’t know. My guess is that we’ll eventually move out of this excessive hype bubble and AI tools will enter a stable period where they can be used productively for some limited tasks and people will look back and say “Man can you believe we let it do all this other stuff too?” I just have suspicions that a tool that literally isn’t designed to give correct answers can ever be a massive general productivity boost.
maria_la_guerta@reddit
You and I are theorizing about the future, which I agree none of us know. To clarify, my stance is this: it is proven to show productivity gains among users who familiarize themselves with it. It is not going anywhere in the short term at least; potentially long term, I agree, who knows, but there is growing adoption and trillions of dollars being invested in it right now.
Our workflow will probably be very different in 20 years, with or without AI. In the interim, IMO you are going to be left behind if you're not using it. I will say the same about whatever does eventually follow AI too, whether that's in 5 or 50 years.
griffin1987@reddit
" it is proven to show productivity gains among users who familiarize themselves with it"
No, it's not PROVEN. That's the whole point.
" IMO you are going to be left behind if you're not using it"
You mean like the people who aren't using blockchain are left behind?
maria_la_guerta@reddit
The study we linked showed that. A mere 50 hours with cursor makes someone faster with it than without it.
Never mentioned Blockchain, zero idea why you're bringing that up.
griffin1987@reddit
"The study we linked showed that. A mere 50 hours with cursor makes someone faster with it than without it." You might want to learn how evidence-based studies need to be set up to actually "show" something in a scientific way. A sample of one person isn't a "study that showed that".
Yes, you didn't mention blockchain, I did - because it was hyped as well and everyone said it was the second coming of Jesus. And now it's basically dead in terms of software.
maria_la_guerta@reddit
Ok so we agree that the study you linked is useless then.
Still no idea why you think Blockchain is relevant to this conversation. Comparing a payment network to a productivity tool is wild, and if you're trying to prove that one was a fad so both must be, in the same comment as lecturing me on science studies, you are out to lunch, pal.
griffin1987@reddit
I read everything, yes. I didn't mean to "correct" you, sorry if you felt like that. But your numbers aren't any more meaningful than any "study" that gets reposted nearly daily on reddit.
If you want to actually take arguments though: Why would any experienced developer care about writing boilerplate code? If you write boilerplate code that much, then you're doing something wrong anyway.
maria_la_guerta@reddit
I agree my numbers are really vague and meant to be ballpark figures. It should be speeding you up.
Strongly disagree, boilerplate is everywhere. Tests, types, documentation, PR descriptions, seeds, all of us perform repetitive tasks in our day to day, and these are all common things I use AI for.
I used to give code to juniors and tell them they have a day or 2 to write tests for it. Now AI and I can do it in an hour, just by prompting and a description.
griffin1987@reddit
If writing tests includes lots of boilerplate code, you're doing it wrong - or at least, very inefficiently. Also, just adding more tests isn't helping anyone, and if your tests are basically just copies of each other, then what functionality are you actually adding at that point?
Documentation can be generated, and too much is bad, because it will just increase maintenance and get out of sync over time. Good code is the best documentation there could be, and for semantically documenting your product - that's something that should happen beforehand anyway.
A PR doesn't need a description; just link the ticket. People should review the code anyway, so what would you describe? If the people reviewing your code don't understand it themselves, how can they review it?
Having "AI do stuff" still means you need to tell it what to do, and if that takes you less time than just coding it yourself, you're doing something wrong. Human language is extremely inefficient in comparison to what you can do with code. Also, it's far less work to achieve exactly what you want with code than with human language, because human language is also very inaccurate.
maria_la_guerta@reddit
Disagree with all of this.
Granted, I am speaking loosely here, so some of your nits are fair, but they are nits and you are being purposefully naive to AI's strengths. E.g., ok, AI can be writing tickets in your case, and not PR descriptions, but that isn't the level of granularity I'm getting at here.
99% of us should be writing simple, readable code. LLMs have been trained on billions of lines of that. If you're saying they're of absolutely no help to you at all, it's user error.
griffin1987@reddit
"Print 'hello, world' to stdout" and having to wait for it to do that, versus just writing `printf("hello, world");` - which one is faster and easier? Or would you like to argue that that's not "simple, readable code" ?
And no, AI shouldn't write tickets. How should the AI know what to write, unless you instruct it to? And if you can instruct an AI precisely enough to actually write exactly the ticket you need to achieve what is needed, you could as well just write exactly the text you give to an LLM into the ticket.
Either way, we disagree, and you've started with insults, which is the point at which I don't think it benefits either of us continuing the discussion, so have a good day/night.
maria_la_guerta@reddit
Ok, yes, lol, if you're using AI to write a log statement that's bad. You're once again cherrypicking nit examples to dance around the larger point I'm making.
But anyways, you're right, we're not changing each others minds. 🍻
RestitutorInvictus@reddit
I think there's too much weight being placed on this study:
1. I think a single study is insufficient to disavow the usage of this technology; although, it is valuable as a means of setting expectations and getting people to understand this isn't a panacea.
local-person-nc@reddit
I'm always blown away when devs say AI is useless. AI has been an absolute game changer for me. Sure, I write a ton of code without AI, but I also write a ton of code with it. I've been far more productive, not just in the things I know but far and beyond in things outside my expertise. I can fully become a full-stack expert across every domain. Being a historically T-shaped dev, AI has pushed me toward being almost an expert in everything. If AI truly can keep pushing the limits, the definition of software engineer will change dramatically. Even now it has really changed the game. Unfortunately, more devs I meet are against AI tools than for them, but hey, people hate change, and change is fucking here.
eddie_cat@reddit
You don't even know what you don't know if the AI did it so how could you possibly consider yourself an expert?
local-person-nc@reddit
Don't even care. Been doing this 15 years. AI has made my job so much better.
Junior-Procedure1429@reddit
AI is annoying and I dislike it as much as the next guy, but ONCE you understand how to make it output real results, it matters: if your company is a contractor, it will absolutely be outshone by the outsourcing ones who are abusing AI to boost production.
I don’t really care about “my skills”, so if my company is saying to use this crap whatever, I don’t even write “real code” anymore.
But many younger coders need to have their sense of self worth tied to the ability of typing code, it’s something that doesn’t get to my head anymore. If I get replaced by AI I also don’t care, but many programmers are still paying a mortgage and scared to death of eventually being replaced.
jypKissedMyMom@reddit
My shot in the dark guess: I think in about 5 years most developers will use an agent to assist with coding.
The improvement from 2020 - 2025 has been massive. I don’t think in 5 years AI will be replacing developers, but I think it will completely change how we think about coding over the next decade.
chaoism@reddit
I do think software engineers 10 yrs from now would require a different set of skills from the engineers today
I think AI is here to stay and it does help on a lot of stuff, so in the future we will have engineers who know how to use AI and engineers who don't. Those who don't will be obsolete
What i find most helpful with AI is how I can dissect and plan a very vague project into something executable
I still don't like to let LLM write code. I find the time I save from writing code will eventually be spent on reverse engineering what LLM writes
But yea, LLM acts as someone who can point me to a direction where I can "see" my project's fruition
I think a lot of our job functions will be replaced by AI, but there are also new opportunities we can do with AI
Ok-Vermicelli-7807@reddit
We really are moving into a sci-fi world. It's cool and all, but yeah I REALLY miss the human connection we used to have.
Hell, I'm in my 20s and I think that. Early-mid 2000s kids still talked to each other like humans.
Now everything seems so fast-paced, wireless, optimized, and without charm.
I'm not just nostalgic for coding without AI, I'm nostalgic for a time before I was born.
cocaine_kitteh@reddit (OP)
Hehe, I am 35, so mobile phones became a thing while I was growing up. Different times for sure.
griffin1987@reddit
The internet became a thing while I was growing up, and I'm only a few years older :)
VirtualSolid3062@reddit
I feel this as well. It's all so uninspiring.
Adept-Result-67@reddit
Yes it resonates, and yes big change has happened and is happening in realtime. And yes, people have a lot of inertia and always gravitate back to doing what they’ve always done and are used to doing. It takes a lot to adapt and change.
If you build things in your spare time or are working on personal projects, it's quite a fun time now though. AI hasn't stolen everyone's jobs yet; if you love technology and love coding and building things, it's still fun, and you can offload a lot of boilerplate and mundane tasks to agents now and focus on building cool stuff.
If you’re just slogging away at a job, it’s a weird place to be as it’s a brand new world and no one’s really certain about where it all ends up when the dust settles.
creaturefeature16@reddit
Anybody that gets replaced by an LLM was going to get replaced by an offshore developer at some point, because that business doesn't value them, or the work they are doing. That's pretty much the end of the story. I've never worked for such a place, and I never will.
matthkamis@reddit
Just throwing this out there, but you don't have to use AI to generate your code. You can use it like I do, which is to dump my code to an LLM and ask for a code review. This has actually helped me fix bugs and made my code more concise. Sometimes it writes nonsense, but more often than not it is helpful.
armahillo@reddit
Using LLMs is a choice. He doesnt have to be nostalgic, he can literally choose to stop using LLMs.
cocaine_kitteh@reddit (OP)
I mean, he feels like he doesn't have a choice, if he wants to keep being employed / employable.
Aggressive_Amount_73@reddit
There are some studies already showing that using AI chats engages way less of your brain, forming fewer connections with way less effort. And we know that to keep your brain strong, and even to prevent things like Alzheimer's, you need to keep it working and active.
So in my opinion, if you want to have a good brain in the future, surrounded by a lot of foolish people who will use AI to fry an egg, you should keep practicing.
I think we can use it to make boring things faster (e.g. boilerplate, tests, templates, etc.). But for real problems, we should keep practicing and using our brains.
cocaine_kitteh@reddit (OP)
I really hope that's the case, but I tend to be pessimistic. Still, I hope that the value of a person who can think critically and focus will increase in the future, compared with people who do nothing but consume slop content and ask AIs.
OkLettuce338@reddit
The biggest change with ai is the sheer amount of work you can do. Yes it makes mistakes but with discipline those mistakes can be caught and you can try again quickly. The sheer amount of work you can do with something like Claude code is the actual game changer.
MsonC118@reddit
I’d like to write an entire blog post about this with both of my perspectives, but maybe another time. I’ve been an early adopter for decades (Bitcoin in 2012, ETH in December 2015, AI/ANN/ML in 2017ish, and have been following this trend since the beginning including beta access to most of the tools). I’ve been on both sides of this argument, and am the type that has extremely high standards.
At first, I thought of it like a toy, sort of cool, fun to play around with, but not much “real” value. I believe that it still produces crap code, but it’s getting better. I do believe that we’re seeing diminishing returns and that it won’t ever fully surpass human level code quality/performance at mature organizations. I like to see all angles to get a better and more logical perspective of this situation. Here’s my take:
In the case that AI does take our jobs, and the worst-case scenario does come to fruition, I'm not worried. Here's why: we'll be some of the last people who should worry. Management, financial services, communications, customer service, etc. should all feel more threatened than us based on the skill barrier alone. My personal opinion is that if LLMs can do our jobs, then they can do basically all of the white-collar jobs too. It's like worrying about the stock market crashing to 0; if that does happen, you've got even bigger problems than your portfolio lol.
On the other side, if LLMs don't take our jobs, and instead augment parts of our work (as well as other fields), then the valuation that investors are putting on LLMs is also flawed, and that means this is also a bubble (similar to the dot-com bubble). Remember, a bubble doesn't mean the tech is bad or not valuable; it means that the valuations are simply far too grandiose and divorced from reality. I believe this is the more likely outcome, given that history does repeat itself. I also believe that we're in a recession right now and am waiting for the official GDP data from Q2 of this year to confirm this. The pattern reminds me too much of the dot-com bubble, and humans are gonna be humans.
On the more nuanced side, I believe it’ll be a short term thing. They’ll try to replace us, and find out just how badly that goes. In fact, human stupidity is a part of this problem. It’s not about logic, it’s about fear and greed, as well as human emotion. Just use LLMs to the best of your abilities, and use it in places where it helps you, not where it helps everyone else. I believe in keeping an open mind, and trying it out. If it slows you down, then don’t use it for that thing. It’s simple. It’s my belief that most of the emotions are not from LLMs but from the few people with megaphones trying to push a narrative and stoke fear. Most of the people who make these extremely grandiose claims stand to reap financial rewards from those same claims, so why would they stop? Evaluate people’s opinions by their experience, not their prestige, wealth, or perceived intelligence. It’s the same logic for why you wouldn’t ask an overweight person how to get ripped lol. Trust your experience, and your intuition, and question the narrative by simply asking “is there a financial benefit for them to say this?”.
On the bright side, either way we’ll be fine. If we’re not, we’ve got bigger problems to worry about.
JimDabell@reddit
When I first started my career, there was no such thing as NPM or PyPI. Even CPAN was in its infancy. Either everything was coded from scratch or, at most, standalone snippets were used. The most popular way of sending mail from a form was to manually download a file called `formmail.pl`, put it in your `/cgi-bin/`, and `POST` to it with a parameter telling it where to redirect to afterwards.
Sometimes I write things from scratch, like the old days. The simplicity is nice. I do feel a bit of nostalgia for that style of programming.
But would it make sense for the industry to have carried on doing things that way? Of course not. The industry is so much better now. We can do more, better, faster.
There have been several more shifts along those lines. Open source hasn’t always been prevalent. When it did become popular, it took a long time before documentation was the norm. Stack Overflow didn’t always exist. Reddit didn’t always exist. Discord didn’t always exist.
AI is just the next step on this path. Yes, the act of programming will change significantly. Yes, you will sometimes feel nostalgic for the old days. But it’s a big step forward. Don’t cling to the past.
ZukowskiHardware@reddit
I’ve used it since the beta of copilot. I’ve not really seen much improvement in it. I think it is very useful for small functions and unit tests especially. Most of what it generates takes too much fixing to be worth it. I have yet to try the agent based development.
Main-Eagle-26@reddit
It’s a useful tool that changes how we do our jobs, but it isn’t the dramatic productivity increase that the hypester hucksters are pitching.
Get Cursor for your IDE and stop using VSCode and try it out.
It’s useful for some things sometimes but be careful bc overusing it actively makes you rustier and dumber.