AI coding assistants aren’t really making devs feel more productive
Posted by scarey102@reddit | programming | 500 comments
I thought it was interesting how GitHub's research just asked if developers feel more productive by using Copilot, and not how much more productive. It turns out AI coding assistants provide a small boost, but nothing like the level of hype we hear from the vendors.
NodeSourceOfficial@reddit
This nails it. AI tools are great at reducing low-effort coding tasks, but productivity bottlenecks often live elsewhere: flaky tests, long CI pipelines, unclear requirements, or cross-team coordination. Until AI tackles those, the gains will stay modest.
tobebuilds@reddit
For me, the main use case for AI coding is as a way to reduce boilerplate. For example, in Rust, every time I add a new optional field to a struct, I must edit every single initialization of that struct in my entire project with the default value for that field. AI makes this faster (though I suppose a good IDE could do this as well).
It can also be useful for pointing out unforeseen edge cases in code, or suggesting causes of error messages.
But that's really it. As others have said, it's just like a fancy auto complete. The hard parts of coding happen in your brain rather than in your hands.
CodeSoft_@reddit
I use AI to code. A lot. For the most part, here's my typical workflow:
Core idea -> Brainstorm with ChatGPT -> Break down the problem into small steps -> Have AI do the boring, simple stuff while I do the harder stuff -> Final product, just as good, maybe a little faster
I have been doing a lot of non-AI practice to keep up my basic skills so I don't become fully reliant on AI, though.
Mysterious-Rent7233@reddit
"AI coding assistants aren’t really making devs feel more productive"
But the article says that only 21% of engineering leaders felt that the AI was not providing at least a "Slight improvement." 76% felt it was somewhere between "Slight improvement" and "Game changer." Most settled on "Slight improvement."
30FootGimmePutt@reddit
Engineering leaders. Aka not the people actually being forced to work with this crap.
blindsdog@reddit
You must be awful at learning new tools if you can’t figure out how to use it productively. It’s incredibly useful.
zdkroot@reddit
Lmao and you must be awful at your very easy job if an LLM is making you magically more productive.
blindsdog@reddit
Yeah, search engines are just evidence that people are bad at their job too 🙄. How fucking dumb do you have to be to not recognize the value of a tool that makes access to information easier.
zdkroot@reddit
Lmao you're literally quoting what people said about the actual fucking internet when it first appeared.
"All the information of the whole world at your fingertips, everyone is going to be a genius!"
"The entire encyclopedia fits on this little disc! Everyone is going to know everything!"
How exactly is that working out for objective truth and broadening the world's knowledge? Oh, everyone is fucking dumber because it's far easier to just google things rather than have to, gasp, remember them? I mean, learn and know things? What for? Our brains are plastic and literally shaped by experience. What do you think a smooth brain is? Why is that a phrase?
Every social media is flooded with bots shilling OF accounts of fake people, the average American is underwater on a car made in a country we placed tariffs on, and they can't do basic addition or figure out the tip at restaurants. Yeah, the internet fixed everything. The pinnacle of human achievement.
Gee I can't imagine that happening in the exact same fashion with another extremely similar technology.
A whole ass movie was made about this entire concept, and you are fulfilling it.
blindsdog@reddit
How young are you? Jesus Christ, I get that it’s super cool to be cynical when you’re young but holy fuck is that a dumb take.
Oh no, life is so awful because old people can’t figure out QR codes 🥺. Why did the Internet do this to us?! Clearly the Internet is to blame for the rest of the world starting to catch up to America economically, not trends that can be traced back decades prior to ARPANET. Everything bad in the world is because people look up concepts online instead of memorize them!
Jesus fuck, I get that the Internet is addicting but you should get offline sometime. It’s ruined your perspective. Ironically though it’s improved your life drastically.
zdkroot@reddit
I am fucking 40 you dumbass. It's like nobody is even trying to read. I did not blame the internet for anything. I said it was not the magic bullet that fixed every problem humanity faced, the way it was marketed to people. Just like LLMs are now. They are just another tool, not a solution to problems you haven't even thought of yet.
Do we have world peace? Is racism dead? No children are going hungry anywhere on the planet? Every hillbilly in the south is gonna graduate magna cum laude next week, right? Because the internet gave them access to all the knowledge of the world! Those are all things that were supposed to happen. Gee, I guess the internet didn't magically fix everything, huh! Did I blame the fucking internet for hungry children? Do you even have a functional brain cell?
I am speaking from literal experience. I was literally there. I got AOL CDs in the mail. This is history repeating itself. A small percentage of people are going to figure out how to make money hawking snake oil to rubes, and the rest are going to get dumber.
How are people so fucking stupid?
blindsdog@reddit
You’re not very smart if you believed any of that would happen. The fact that it didn’t solve world hunger, which was never in any way any kind of realistic expectation, doesn’t mean it hasn’t been an incredibly valuable tool.
You’re one of the stupid ones, my guy. Again, your perspective is fucked. The internet didn’t make people any more stupid. People were already, and have always been, very stupid. The internet made it more visible. You’re too lost in the internet rage. Take a long break.
It’s insane that you can’t realize the incredible value the internet has added to everyday life because you can’t see past superficial issues.
zdkroot@reddit
L O L O L O L
Now, go look in a mirror and repeat this but about chat gpt.
Holy fucking shit not even a drop of self awareness. Can you even read what YOU wrote?
blindsdog@reddit
My guy, you're not proving shit. That would be an amazing tool if it was anywhere near as revolutionary as the Internet.
You're sitting here arguing that something having the impact of the fucking Internet wouldn't be completely revolutionary? Holy fuck. What a thing to say on r/programming.
zdkroot@reddit
No, I am not. Try again.
blindsdog@reddit
Oh okay, so it's gonna be revolutionary, but not too revolutionary. You're making great points here.
TheBoringDev@reddit
Eh, I’m becoming increasingly convinced that people who find AI incredibly useful are awful at using the regular tools. It’s always people who have mountains of boilerplate to write but never think of using a template.
blindsdog@reddit
Yeah, same. People who use stack overflow or Google just don't know enough themselves. Fuck using tools to help you write code, you should switch the transistors yourself too.
smallfried@reddit
Being forced to work with a tool you don't want to use is shit and bad management.
Not seeing the value of LLMs makes you a bad software engineer. Feels similar to a dev refusing to work in a proper IDE.
zdkroot@reddit
No, it is like being forced to use a *different* IDE to the one you are currently using, because some executive read a medium blog post. Is there a _problem_ that needs solving? Because all I see every time somebody mentions AI is a very expensive energy hungry solution in search of a problem.
7h4tguy@reddit
Wait, you have another pre-recorded demo to impress me with (which you tuned the fuck out of)?
Takeoded@reddit
Previously I would have to write
Now, with CoPilot, I write:
and I get the exact same code. Previously, I would type 145 characters to iterate over each pixel of an image. Now I write 10 characters and press tab 3 times to do the same. 13 keypresses < 145 keypresses. Definitely an improvement.
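Roughly the kind of loop I mean (an illustrative Python sketch; the library choice and filename here are assumptions, not the exact snippet from my project):

```python
# Illustrative sketch of the hand-typed version the completion replaces.
from PIL import Image

img = Image.open("input.png").convert("RGB")
pixels = img.load()
for y in range(img.height):
    for x in range(img.width):
        r, g, b = pixels[x, y]  # each pixel, ready to transform
```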
PeachScary413@reddit
Yes, it's snippets on steroids. I love to use LLMs for this and they are perfect tools for it 👌
nolander@reddit
Slight improvement though is far from worth the resources actually required to run it.
Jugales@reddit
Coding assistants are just fancy autocomplete.
emdeka87@reddit
~~Coding assistants~~ LLMs are just fancy autocomplete.
labalag@reddit
Don't yell too loudly, we don't want to pop the bubble yet.
Halkcyon@reddit
Let it pop. I'm tired of investor-driven-development.
7h4tguy@reddit
I found the Next Big Thing, finally!
Accomplished-Fox7970@reddit
What's that?
IanAKemp@reddit
Let it burn, more like it.
smallfried@reddit
That's a very reductive stance which often gets extrapolated into saying that LLMs can't reason or make logical deductions. Both things that are provably false.
They're overhyped by the companies selling them, but that's no reason to completely dismiss them.
14u2c@reddit
Sure, but it turns out that being able to autocomplete arbitrary text instead of just code is quite useful.
wildjokers@reddit
This is naive and doesn't take into account how they work and the amazing research being done. Most computer science advancements are evolutionary, but the Transformer architecture described in the 2017 paper "Attention Is All You Need" was revolutionary and will almost certainly earn the Turing Award.
https://proceedings.neurips.cc/paper_files/paper/2017/file/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf
30FootGimmePutt@reddit
No, it's an accurate summation that we should continue to use because it makes dipshit AI fanboys really upset.
wildjokers@reddit
https://en.wikipedia.org/wiki/Ad_hominem
30FootGimmePutt@reddit
I wasn't making an argument, I was insulting you for being a dipshit AI fanboy.
lunar_mycroft@reddit
None of what you said changes the fact that on a fundamental level, all LLMs do is predict the next token based on previous tokens, aka exactly the same thing as an autocomplete. It turns out a sufficiently advanced autocomplete is surprisingly powerful, but it's still fundamentally an autocomplete.
wildjokers@reddit
Calling it just autocomplete is still naive; that totally disregards the complex behavior we see emerge from a simple underlying principle.
lunar_mycroft@reddit
You still haven't engaged with the point. "I don't find 'fancy autocomplete' sufficiently flattering of LLMs" is not, in fact, a valid argument that LLMs aren't fancy autocomplete, just like "I didn't come from no monkey" isn't a valid argument against evolution.
wildjokers@reddit
I have, LLMs show complex behaviors that autocomplete doesn't. The fact that you don't want to acknowledge that doesn't mean I didn't engage with the point.
30FootGimmePutt@reddit
Like what?
wildjokers@reddit
30FootGimmePutt@reddit
So it’s fancy autocomplete. Fancy covers the other parts. Autocomplete covers what it actually does.
lunar_mycroft@reddit
No, you haven't. No one said that GPT-4-whatever is literally identical to your smartphone's autocomplete. Of course it's more capable; that's implied by the "fancy" prefix. But it's still fundamentally, accurately describable as an autocomplete.
This argument is equivalent to "I'm not a primate, I'm much smarter than a chimp!"
30FootGimmePutt@reddit
No it’s pretty accurate. We admit it’s very fancy autocomplete.
knome@reddit
If you grabbed a bunch of computer scientists, lined them up, disallowed them from communicating, then handed the first a paper with a question on it, let them write three letters, and then pass it to the next, and repeat, you could come up with some pretty good answers regardless of each individual only taking the current state of the paper into account and adding three letters.
Yes, the LLM is forced to rebuild its internal representations of the state for each token, but that doesn't mean it isn't modeling for future outputs as it chooses its current one.
https://www.anthropic.com/research/tracing-thoughts-language-model
sure, the direction could theoretically swerve wildly and end up nowhere near wherever the first in line was modeling towards, but most communication isn't so open ended, and picking up the constraints of the current state of the prompt should cause each of them to head in roughly the same direction, modelling roughly the same goal, and so end up somewhere reasonable.
satireplusplus@reddit
That's how they are trained, but not necessarily the result at inference. A fancy autocomplete can't play chess (beyond a few opening moves that can be memorized); there are far too many possible games to memorize. Yet if you train on text data of chess games, in order to better predict the next character, the model learns to compute the state of the board at any point in the game, and learns a diverse set of rules, including check, checkmate, castling, en passant, promotion, pinned pieces, etc. In addition, to better predict the next character it also learns to estimate latent variables such as the Elo rating of the players in the game.
Experiments like these: https://adamkarvonen.github.io/machine_learning/2024/01/03/chess-world-models.html hint at emergent world representations in LLMs, something a bit more than just fancy autocomplete.
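The linked experiment is, roughly: train a small GPT on PGN move text, then fit a linear probe from its activations to the board state. A minimal sketch of the probing step, with random stand-in arrays where the real inputs would be model activations:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Stand-ins for the real data: activations from a chess-trained GPT's
# residual stream at each move, and the true piece on one square.
hidden = rng.normal(size=(5000, 512))        # (positions, d_model)
piece_on_e4 = rng.integers(0, 7, size=5000)  # 0 = empty, 1..6 = piece type

probe = LogisticRegression(max_iter=1000).fit(hidden, piece_on_e4)
print(probe.score(hidden, piece_on_e4))
# With real activations, high held-out probe accuracy is the evidence
# that the next-token model internally tracks the board state.
```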
hoopaholik91@reddit
Funny you mention Chess considering I saw this thread yesterday: https://old.reddit.com/r/gaming/comments/1l8957j/chatgpt_gets_crushed_at_chess_by_a_1_mhz_atari/
satireplusplus@reddit
The 50-million-parameter model that is trained on nothing but chess text plays better than ChatGPT, a model that is probably approaching a trillion parameters. Which shouldn't be too surprising, because ChatGPT learned to play chess in passing, alongside everything else it learned.
Anyway, this isn't about playing chess well (Elo 1500 is a hobby player), but about learning the rules of the game without being told the rules of the game.
30FootGimmePutt@reddit
They built autocomplete for chess.
It doesn’t learn. It doesn’t understand.
It's a statistical model of chess that examines the board and spits out the next move. It's fancy autocomplete for chess.
satireplusplus@reddit
No, that's where you're wrong. It learns. It understands. Read what I've linked.
bedrooms-ds@reddit
To me their completion is just nuisance. Chats are useful though. I donno why.
Crowley-Barns@reddit
Rubber duck that talks back.
kronik85@reddit
this. oftentimes rather than poring over manuals and scouring Google search results, LLMs can point me in the right direction really fast. they expose features, when not hallucinating, that I was not aware of, and can quickly fix problems that would have taken me weeks previously.
I work on long-lived code bases, so I never use agents, which just iterate until they've rewritten everything to their liking, AKA broken as fuck.
Crowley-Barns@reddit
Yep. Great for when you don’t know what you don’t know. Like maybe there’s a library perfect for your needs but you don’t know it exists and it’s hard to explain in a google search what you’re looking for. It can point you in the right directions. Suggest new approaches. Lots of stuff.
Like with anything, don’t turn off your critical thinking skills. Keep the brain engaged.
kronik85@reddit
"what are my options for x in y. give pros and cons of each" works really well for me.
30FootGimmePutt@reddit
What infuriates me is they make a stupid mistake, stick it into the code, then constantly try to use that as a reference.
You have to just kill them, erase the mistakes, and start a new one.
agumonkey@reddit
it can infer some meaning from partial wording, i don't have to specify everything according to a constraining grammar or format, it's really in tune with the way our brains work, more fuzzy and adaptive
teslas_love_pigeon@reddit
I think I prefer chats over agents because I would rather purposely c&p small snippets than let an agent change 5 unrelated files, adding so much pollution that I have to remove the majority of what it suggests 95% of the time.
The only "productivity" I've had with agents is that it does the initial job of a template generator with a few more bells and whistles. Afterwards it's been like 30% useful.
Better off just reading the docs like the other commentators posted.
flatfinger@reddit
Responsibly operating a car with Tesla-style "self-driving" is more mentally taxing than responsibly driving with cruise control, and I would view programming tools similarly. Irresponsible use may be less taxing, but in neither case is that a good thing.
bedrooms-ds@reddit
Agreed. So much simpler to isolate everything in the chat window and select the snippets that are correct.
luxmesa@reddit
I’m the same way. I like autocomplete when it’s suggesting a variable name or a function name, but for an entire segment of code, it takes me too long to read, so it just interrupts my flow while I’m typing.
But when I’m using a chat, then I’m not writing code, so it’s not interrupting anything.
codeprimate@reddit
Maybe if you are using them incorrectly.
_Prestige_Worldwide_@reddit
Exactly. It's a huge time saver when writing boilerplate code or unit tests, but you have to review every line because it'll often go off the rails even on those simple tasks.
kerabatsos@reddit
You’re using it wrong, then.
okawei@reddit
All of them felt that way to me until I tried codex from ChatGPT.
tu_tu_tu@reddit
Tbh, they are pretty good at this. Makes your life easier when you have to write some boilerplate.
xRehab@reddit
which, if you know how to leverage it, can dramatically increase your output.
i just managed a Prefect 3 upgrade for some really old Prefect 1 code. after reading some documentation on what was deprecated, a few explicit instructions to ChatGPT refactored 80% of the code for me
it's useful if you can use it right
aksdb@reddit
Which is good, if used correctly. For example when writing mapping code, after one or two lines manually written, the remaining suggestions are typically quite on point.
I exclusively use one-line completions though; getting confronted with large code chunks all the time just throws me off and costs me more time than it saves (and exhausts me more).
koja86@reddit
Except in some cases they “autocomplete” 100% of the code from a short description. That’s like saying email is just a fancy postcard
CoronaMcFarm@reddit
Yeah they autocomplete the easy part of the code, it is not where you save the most time.
CWagner@reddit
They do sometimes complete the boring part of the code. My boss wanted some internal CRUD interface. That is essentially just boilerplate with nothing interesting. I could write this, as I have many times. It's boring, repetitive, and easy. But it also takes time.
But I can also just ask some LLM to do it for me, get it done in a fraction of the time, and instead do another, more interesting task by myself.
I've been coming around to the idea that "write once, never read" internal CRUD interfaces (there are sometimes extensions requested, but they tend to be minor) are an amazing use of LLMs.
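For a sense of what that boilerplate looks like, a minimal sketch of the kind of CRUD endpoints I mean (FastAPI and an in-memory dict purely as an illustration; the real stack is internal):

```python
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()

class Item(BaseModel):
    id: int
    name: str

items: dict[int, Item] = {}  # in-memory stand-in for the real store

@app.post("/items")
def create_item(item: Item) -> Item:
    items[item.id] = item
    return item

@app.get("/items/{item_id}")
def read_item(item_id: int) -> Item:
    if item_id not in items:
        raise HTTPException(status_code=404)
    return items[item_id]

@app.delete("/items/{item_id}")
def delete_item(item_id: int) -> dict:
    items.pop(item_id, None)
    return {"deleted": item_id}
```

Nothing here is interesting; that's the point. It's exactly the shape of code an LLM reliably gets right.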
user_8804@reddit
And these things are not where we spend most of our time
CWagner@reddit
That very much depends.
xcompute@reddit
Email is just a fancy postcard
Basic-Tonight6006@reddit
Yes or overly confident search results
My_reddit_account_v3@reddit
Not quite… It also provides suggestions that can get entire tasks done in a split second.
igna92ts@reddit
I mean, my workflow is usually: ask it to do something that I couldn't be bothered to do, it does it but it's a horrible implementation, then I go back and forth trying to get it to do it right and then I end up saying "fuck it, I'll do it myself". So if anything it's a waste of time so far. I do like it for creating dummy data with a given structure and stuff like that
Material_Policy6327@reddit
I've found the one we use to be more annoying than anything else
composero@reddit
I was told by HR that the purpose of using AI agents and why everyone in the company should use it is so that I could be the boss… of AI… so that everyone could be a boss… but… like… none of us control AI paychecks
weggles@reddit
Copilot keeps inventing shit and it's such a distraction.
It's like when someone keeps trying to finish your s-
Sandwiches?!
No! Sentences, but gets it wrong. Keeps breaking my train of thought as I look to see if the 2-7 lines of code mean anything.
It's kinda funny how wrong/right it gets it though.
Like it's trying but you can tell it doesn't KNOW anything, it's just pantomiming what our code looks like.
Inventing entities that don't exist. Methods that don't exist... Const files that don't exist. Lol.
I had one brief moment where I was impressed, but beyond that I'm just kinda annoyed with it???
I made a database upgrade method and put a header on it that's like "adding blorp to blah" and it spit out all the code needed to... add blorp to blah. Everything since has been disappointing.
composero@reddit
That autocomplete can be awful at times. It’s only really helped when I have to write POM for testing. Outside of that, sometimes it gets lucky and we are able to produce a correct conditional, but most of the time, it’s just helping with pathing and imports.
mickaelbneron@reddit
I tried Copilot, and turned it off on day two, very much for the reason you gave. It did sometimes produce useful code that saved me time, but more often than not it suggested nonsense, interrupting my train of thought every time.
Perhaps ironically, in the year or so after LLMs came out, as they kept improving, I got concerned about my job. Yet as I've used AI more, I've started to feel much more secure, because now I know just how hilariously terrible AI is in its current state. On top of this, new reasoning AI models, although better at reasoning, also hallucinate more. I now use AI less (for work and outside of work) than I did up to a few months ago, because of how often it's wrong.
I'm not saying AI won't take my job eventually, but that ain't coming before a huge new leap in AI, or a few leaps, and I don't expect LLMs like ChatGPT, Copilot's underlying LLMs, and others will take my job. LLMs are terrible at coding.
dendrocalamidicus@reddit
Turn off the autocomplete and just tell it to do basic shit for you. Like, for example, "Create an empty React component called _ with a useState variable called _" etc.
The autocomplete is unbearable, but I find it's handy for writing boilerplate code for me.
weggles@reddit
I think that's the play. It's good at spitting out boiler plate code, not great at helping in-line 😅
Giving it a prompt is an opportunity to give extra context which is where the automatic suggestions fail
MCPtz@reddit
I've seen some people call it something like "auto-complete on steroids", but my experience is "auto-complete on acid".
Where auto-complete went from 99.9% correct, where I could just hit tab mindlessly ...
To worse than 5% of suggestions being what I wanted and correct. It's worse than useless.
AND I have to read every time to make sure it's not hallucinating things that don't exist or misusing a function.
It also tends to make larger-than-bite-sized suggestions, as its statistical pattern matching suggests I'm trying to write these next X lines of code. That makes it harder to verify in the documentation.
I went back to the deterministic auto-complete.
It builds on my pre-existing knowledge and then tries to suggest small, bite-sized efficiency gains or error handling, where it's easy to go check the documentation.
636C6F756479@reddit
Exactly this. It's like continuously doing mini code reviews while you're trying to write your own code.
hippydipster@reddit
Maybe don't use Copilot. There are other forms in which to utilize AI.
weggles@reddit
Copilot is what my job pays for and encourages us to use.
dudeman209@reddit
Because you start to build context in your mind as you write. Using AI makes you have to figure it out after the fact, which probably takes more time. That's not to say there isn't value in it, but being productive isn't about writing code faster; it's about delivering product features safely, securely and fast. No one measures this shit unfortunately.
PeachScary413@reddit
I think what most non-devs get wrong about SWE productivity is that they think the bottleneck is typing... the bottleneck has always been understanding: keeping multiple code paths in your head to "see" how your changes will interact with other code.
I firmly believe that chess players would make excellent programmers.
zdkroot@reddit
Yeah but sometimes it doesn't completely fuck everything up beyond repair, so we should probably replace all workers in all industries with LLMs like, tomorrow maybe? Or do you think we should wait like a week or two?
dudeman209@reddit
You ok?
zdkroot@reddit
No. If I have to hear one more time that spicy autocorrect will be taking my job any day now I might literally combust. Artists too, who needs those elitist assholes when we can all just use AI to create the same overly stylized, totally uninspired cookie-cutter DeviantArt horseshit as everyone else?
I am angry, you didn't do it, I agree with you. This whole situation is just completely fucked.
Responsible_Syrup362@reddit
That's because you're doing it wrong. You build it in your mind first, still. Then generate a schema, break it down into manageable bites. Then break that down into 500-1500 line modules/scripts. Ezpz.
SynthRogue@reddit
Yes but that's because people have AI program for them, as opposed to using AI as a faster way to get documentation on commands, libraries and patterns, and then using those as you see fit, block by block in your app.
Lame_Johnny@reddit
Exactly. Every time I write code I'm gaining knowledge about the codebase that I can leverage later. When using AI I dont get that. So it makes me faster in the short term but slower in the long term
7h4tguy@reddit
Hallucinate, no it's not that, iterate, hallucinate, no wrong again, iterate, ah, that's actually somewhat useful. This garbage is harmful in the hands of the uninformed, but somewhat useful in the already capable. The nonsense though is they think they're going to replace the more expensive capable with newbs guided by AI and it's all one big hallucination now.
hippydipster@reddit
The bottleneck is how long it takes to integrate your understanding of the code - the existing, the newly written - and the domain (ie, what the app is trying to accomplish for users).
If you don't integrate your understanding, you get to basically the same place you get if you just write untested, unplanned spaghetti code - eventually there's tons of bugs and problems and you spend all your time playing whack-a-mole and painstakingly, slowly inching forward with new features. And it just gets worse and worse.
I am finding a module size of 10,000-15,000 LOC per module to be a plateau point for building extensively with AIs. Going past that with the AIs takes great discipline.
BillyTenderness@reddit
I have found some marginal uses for AI that I think help build that understanding faster. I work in a huge codebase (that's well-indexed by an internal LLM) and being able to say "What's the tool we have for making this thing compatible with that other thing" is helpful when I know it exists but can't find the right search term off the top of my head.
Or when ramping up on a new language I was able to say, "I want to take this class and pipe it into that other class; I think this language feature over here is explicitly designed to let me do so. Is that right?" And while I didn't have 100% confidence after asking that question, it still helped me feel somewhat more confident that I hadn't missed some obvious pitfall of my proposed approach, before committing any time to prototyping it.
I haven't decided if those time savings cancel out the time wasted on helping/correcting people (esp new grads) who think the AI can just understand things and do the work on their behalf, so it might still be a net-negative.
sprcow@reddit
Exactly this. You jump right into the debugging legacy code phase without the experience of having written the code yourself. Except real legacy code has usually been proven to mostly meet the business requirements, while AI code may or may not have landmines, so you have to be incredibly defensive in your review.
heavy-minium@reddit
I do feel a 30% boost in my work. It's on the fringe of "I wouldn't want to miss it" but not quite in the realm of "you can't compete without". Keep in mind you have to take into account your web searches that would have led you through various pages, now instead all is concentrated in ChatGPT.
godndiogoat@reddit
Been there, trying to decode those cryptic doc pages ourselves. LLMs totally save the day, turning my baffling "what even is this?" into "now, this makes some sense." Think of it as Google on steroids. Pro tip: integrate APIs the easy way with DreamFactoryAPI or RapidAPI. Or try APIWrapper.ai for streamlined API chaos.
TheJuic3@reddit
I am a senior C++ programmer and have yet to find a single instance where I need to ask AI anything.
IT have rolled out Copilot for all developers in my company but no one is actually using it AFAIK.
davidbasil@reddit
Because you're deeply specialized. It's those "full-stack" people who are easily impressed.
Vivid_News_8178@reddit
To be fair, C++ developers are like the guardians of sanity for SWE. Salute.
To your point though, from experience I agree - No truly decent developer is out there relying on copilot.
Intendant@reddit
Copilot is bad tbf
davidbasil@reddit
Productivity comes from deep specialization, not trendy tools.
QuantumFTL@reddit
Interesting. I work in the field and for my day job I'd say I'm 20-30% more efficient because of AI tools, if for no other reason than it frees up my mental energy by writing some of my unit tests and invariant checking for me. I still review every line of code (and have at least two other devs do so), so I have little worries there.
I do find agent mode overrated for writing bulletproof production code, but it can at least get you started in some circumstances, and for some people that's all they need to tackle a particularly unappetizing assignment.
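The sort of thing I hand off, as a hedged sketch (the `clamp` function here is a toy stand-in, not our production code):

```python
import pytest

def clamp(x: float, lo: float, hi: float) -> float:
    """Toy function standing in for real production code."""
    return max(lo, min(hi, x))

# The kind of boilerplate tests the assistant drafts and I then review:
@pytest.mark.parametrize("x,expected", [(-1.0, 0.0), (0.5, 0.5), (2.0, 1.0)])
def test_clamp_bounds(x, expected):
    assert clamp(x, 0.0, 1.0) == expected

def test_clamp_invariant():
    # invariant check: the result never escapes [lo, hi]
    assert 0.0 <= clamp(123.0, 0.0, 1.0) <= 1.0
```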
s33d5@reddit
I'd agree.
I write a lot of microservices. I can write the complicated shit and get AI to write the boilerplate for frontends and backends.
Just today I fixed a load of data, set up caching in PSQL, then got a microservice I made previously and gave it to copilot and told it to do the same things, with some minor changes, to make a web app for the data. Saved me a good bit of time and I don't have to do the really boring shit.
Worth_Trust_3825@reddit
We already had that in the form of templates. I'm confused how it's actually helping you
mexicocitibluez@reddit
Because templates still require you to fill in the details or they wouldn't be called templates.
Worth_Trust_3825@reddit
And you're not filling those details out by writing a prompt?
mexicocitibluez@reddit
Idk why it feels like people who argue against these techs are always doing so in bad faith. Particularly in the tech community. It's like I literally have to explain every step of how these things work before people admit they're useful.
Are you implying that using plain English and writing a sentence to generate a template for you vs. having to fill in those template details manually is going to be the same? Can you not imagine a situation in which filling out a template may be tedious and an LLM could offload that for you?
Templates, in their nature, are fill-in-the-blank types of structures. Almost what these tools were built for. Take a pattern and match it. If you can't find that useful in what you do, then I'd love to be enlightened.
zdkroot@reddit
No. Are you implying one is better than the other in 100% of cases? Because it is not.
Can you not imagine a situation in which the LLM fucks something up and you have to spend time correcting it?
This is like buying something on sale. If you want to save money, leave it in your pocket. You didn't "save money" buying something on sale, you spent money.
I truly do not believe these LLMs "save" you time, you just spend that time a different way, then feel smug about it. Lmao.
mexicocitibluez@reddit
And there it is. Every. Single. Argument. about this tech always ends up with you guys having to be like "wElL iT's NoT pErfEct" no shit? No one said it was. Nobody. Literally nobody on this planet says that generative AI works 100% of the time. Even the people that think it's the next coming will admit it's not perfect. Which is why it's always funny it ends like this. Always.
I know you don't use these tools because if you did you wouldn't be saying things like this. Of course. It's a trade-off. It's a tool. Nothing is perfect.
I'm sure you haven't heard of it, but there's a tool called Bolt that generates UI designs using React and Tailwind. I am not good at building out UI designs. I understand that limitation. In nearly EVERY GODDAMN CASE it's creating something better than I could.
You don't believe it? Well then that settles it, guys. Shut down the models. zdkroot's "smugness" leads him to not believing people's own experiences, because he somehow can magically be everywhere all the time and thus know whether it's true or not. They've seen every single type of programming on this planet and every need, and believe it isn't worth it.
https://www.youtube.com/channel/UC3RKA4vunFAfrfxiJhPEplw
This video will help illustrate just how stupid you sound about these tools and how absurd it is to believe you know everyone else's experiences based on your own.
zdkroot@reddit
Bro you must have completely checked out because AI horseshit is being pushed as the answer to every problem, everywhere. You literally suggested devs are afraid AI will replace them, why do you think that? Because every AI company that exists is selling that product they claim can do just that. As the solution to problems you haven't even thought of yet. Every one of these things is a magical solution in search of the perfect problem.
What rock do you live under? That is the entire fucking problem. Why would I even make this post if that wasn't the case? It is completely fucking endless.
Yes please tell me more about people arguing in bad faith. This is not the argument you are making it out to be. Yes we should do cancer research even if we don't eliminate cancer. Yes you should take a shower even if you will get dirty again tomorrow. Yes you can use AI even if it's not 100% perfect. That is not the argument I am making. HUMANS are not 100% perfect and we still use them all the time. How fucking asinine for you to imply this is the point I am making. It's fucking not.
LLMs are not the answer to every problem, in fact they are the answer to very few problems, but everyone who talks about AI wants to use it for every god damn thing under the sun. Most gen-z use LLMs to figure out where to go for dinner. What a fucking game changing technology. Do something NEW and NOVEL with this technology. Computers did not revolutionize the world because people are able to write books faster. They can DO NEW THINGS that was impossible to do. Scientists used to fill entire blackboards with equations to calculate orbital mechanics, when suddenly a machine could do something that would have otherwise taken a dozen people weeks. You and everyone else who talks about LLMs acts like this is where we are at with them. That one person in one day can now do the work of ten people in a week. It's completely fucking false.
mexicocitibluez@reddit
and
That's literally your comment. That's all you have. You're asking me to defend that it's not perfect in 100% of cases (no one made the claim, you just pulled it out of your ass).
None of this means anything or makes any sense. You guys sound like lunatics.
zdkroot@reddit
"I'm too fucking stupid and ignorant to understand what you are saying"
Yeah, I know man, I know.
mexicocitibluez@reddit
You're misinterpreting not wanting to read a bunch of shit written by someone with 0 experience with the shit they're talking about. Which is why you look like a fucking moron.
Still can't believe people like you exist btw
I can't fucking wait til you're forced to use this tech and it eventually takes your job because you're too ignorant to learn how it works
zdkroot@reddit
No, I am not. This is such fantastic comedy. Are you 12? You don't even know what my job is but you are confident LLMs will take it. The level of delusion is just off the charts.
mexicocitibluez@reddit
hahahahahahhhhahhahhahhahahhahhah
You don't even know what my job is, yet you're confident LLMs don't help.
GTFO.
zdkroot@reddit
You should probably get off the internet until you hit puberty. It is rotting your brain.
mexicocitibluez@reddit
Another comment that's absolutely worthless.
It's so funny you're like "You don't even know what my job is" and yet you literally spent the last day arguing that you know what other people's jobs are.
You said humans aren't 100%, then claimed for whatever reason that an LLM not being 100% is bad. Airtight logic on that one.
Again, your entire argument rests on the premise that millions upon millions of devs are just lying and that you know better than them.
The irony in calling me a child while still believing you are the center of the universe is too much, even for this sub.
zdkroot@reddit
You have no idea how to argue, or it seems even to read.
By all means, keep arguing against things nobody said or suggested or even implied so you sound smart to other morons. You will definitely get smarter this way.
mexicocitibluez@reddit
Irony is dead bro.
"Here's why AI isn't helping your job"
"Don't you dare tell me about my job"
good luck dipshit
zdkroot@reddit
Look in the fucking mirror my dude. Every accusation is a confession.
This is the most brain damaged conversation I have ever been cursed with having. This is the textbook definition of the bad faith argument you began your complaint about. Truly not sure if you are just a karma bot or what. Not one iota of understanding. Fully convinced you know everything and AI is coming for my job, while accusing me of doing that.
mexicocitibluez@reddit
I find AI useful in my work.
Go ahead, argue with that. Tell me how how I don't actually find it useful. Seriously. Do it.
zdkroot@reddit
Thank you, this is proving my entire point right here. I never said it wasn't, so why would I do that? Good lord, how stupid are you? Truly?
YOU STILL DONT UNDERSTAND MY ARGUMENT
You literally just made that up and started arguing against it. You did not take even one second to engage your brain and attempt to understand the point I was making, because you are too eager to let everyone else do the thinking for you and score some fucking ego points with a mic drop moment that gets you upvotes.
YOU inserted the "well if it's not perfect we shouldn't do it", I never said that either. You fabricated half a dozen ideas I didn't say or suggest or imply, then argued against each of those in turn. You are having this entire argument with yourself because you have never actually engaged with or responded to anything I actually said. Just the most smooth brain shit I have ever witnessed.
mexicocitibluez@reddit
"I truly do not believe these LLMs "save" you time, you just spend that time a different way, "
You're a retard.
zdkroot@reddit
That is not a response to what I said, retard.
mexicocitibluez@reddit
hahahahah
zdkroot@reddit
Neither is that. You are literally unable to engage, because you don't understand.
mexicocitibluez@reddit
"I truly do not believe these LLMs "save" you time, you just spend that time a different way, "
zdkroot@reddit
Obviously nobody, certainly not you who has been responding to me for literal hours. You definitely don't care. Nope nope nope. Not even a little bit.
Holy shit dude do you actually not realize how fucking transparent you are? You are just a collection of internet tropes, it's honestly fucking hilarious. When you finally have an original thought be careful you don't get hurt patting yourself on the back.
mexicocitibluez@reddit
lol
bluhhhhhhhhhhhhhhhh@reddit
Holy shit, I just read this whole thread, and the person you've been responding to is genuinely one of the densest people I've ever encountered on Reddit.
Absolutely zero reading comprehension; just worthless comments that completely misinterpret or misunderstand what's being said in the conversation, all wrapped in a smug delivery that accuses you of literally the exact things he's been doing for 20+ comments.
Exhausting.
mexicocitibluez@reddit
You know what? This whole thing is actually hilarious.
You made up your mind about a tech you don't even use, about other people's jobs you have no clue about, and then you're like "You can't even argue". With what??? Your opinion???? On something you have no experience with???
What could I possibly say to someone who doesn't even understand how it works, doesn't understand how it's used, doesn't believe people about their own experiences, and thinks they've seen it all? Honestly. Are you an imbecile?
Worth_Trust_3825@reddit
the "bad faith" argument comes from the fact that we already had this, and people weren't using it or used it not enough, while complaining that they need to write boilerplate. templates must be static, it must not generate a template on demand, but rather use an existing one. if you have too many parameters for your template that you cannot fill them out, then it's a bad template, and you need to think through how to reduce the parameter count.
mexicocitibluez@reddit
We 100% have not had generative AI.
Yes. And now a tool does it all for you. You're arguing against efficiency.
I have absolutely no idea what this means with respect to what we're talking about. What does static even mean in this context? A length of time? Can't add fields? Can't remove them? Is it days or weeks we're talking about?
There is nothing inherent in the word "template" that means "static".
Another idea you're just making up off the cuff to defend a point. This isn't even a thing, tbh. I've never, in my life, heard of the quality of a template being defined by the # of parameters it may or may not have.
You're going to have to admit these tools are useful and stop twisting yourself into arguments that don't make sense to prove otherwise.
Worth_Trust_3825@reddit
The tool doesn't do everything for you. What are you on about?
mexicocitibluez@reddit
Are you now moving the goal posts from "helping you with boilerplate" to "doing everything for you"?
Do you see how you've had to turn this into something disingenuous and bad faith to continue to make your argument? Can I ask why you're so dead set against admitting these tools are useful despite the overwhelming evidence they are? What do you have to lose by admitting it?
Worth_Trust_3825@reddit
From your own comment.
mexicocitibluez@reddit
"everything" ie Filling in the boilerplate. If you couldn't figure that out from context I don't know what to tell you. No one is claiming it does Everything for you. We're talking about boilerplate code being filled in as opposed to manually filing out templates.
Worth_Trust_3825@reddit
Right, and you couldn't figure out what static templates are.
bluhhhhhhhhhhhhhhhh@reddit
Can you just admit that you have exceptionally poor reading comprehension and move on? The vast majority of your contributions in this thread consist of you egregiously misrepresenting or misunderstanding the comments you're responding to.
mexicocitibluez@reddit
you literally just made up a phrase "static templates".
TheBoringDev@reddit
I cannot. Either you care what values are set, in which case you have to tell the LLM, or you don’t, in which case you can use the template defaults. How is the LLM saving you any work?
mexicocitibluez@reddit
Cool. Not going to explain to someone something they can experience for themselves (but won't, and instead will double down like every other moron in this thread who refuses to acknowledge its use cases).
No clue what this means. You're just making up rules about things in order to defend some ridiculous point about how LLMs aren't useful, despite both not using them and not understanding the millions of different ways software can be built.
Let's say I've built a template for ingesting questions for a 200-question questionnaire. And then it does it for me in SECONDS. And I review that it's correct, which takes a few minutes. The fact that this simple situation is so foreign is absolutely nuts to me.
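For a sense of the shape (hypothetical field names, just to illustrate what the LLM fills in 200 times):

```python
# Hypothetical shape of one ingested question; the LLM emits 200 of these
# from the source PDF, and review is just scanning them against the document.
question = {
    "id": "Q001",
    "text": "In the past 12 months, have you ...?",
    "type": "boolean",        # or "choice", "free_text", ...
    "section": "screening",
}
```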
TheBoringDev@reddit
The LLM isn’t going to have any context you didn’t give it, and if you can describe what you need with a sentence of natural language, you probably didn’t need 200 questions. You’re presupposing useless busywork.
mexicocitibluez@reddit
You know what's really funny about this response? I'm not creating the questions, Medicare is. So asking an LLM to turn those into a JSON template from a PDF is definitely useful.
It's so nuts to me that in this huge field people still wanna question others' experiences like this.
wildjokers@reddit
It is really baffling to me why developers are luddites when it comes to AI. My only guess is that some of it just comes from fear that it is going to replace them, so they come up with a whole bunch of weird arguments about why they aren't useful.
zdkroot@reddit
Lmao LLMs are not replacing devs any time soon. Yes, I have seen the headlines of companies alleging they are doing it. They are not. They are just laying off devs and using AI as a cover story. Literally nobody is doing this. Why is OpenAI hiring if they have an AI that can replace devs? What a fucking joke rofl.
wildjokers@reddit
I never said they were.
zdkroot@reddit
> My only guess is that some of it just comes from fear that it is going to replace them
Who said this then?
wildjokers@reddit
I am having some trouble figuring out how you interpreted that as me saying devs are going to be replaced by LLMs.
I said some devs have a fear they are going to be replaced by LLMs. That is vastly different from saying devs are going to be replaced by LLMs.
smallfried@reddit
I read here sometimes that people are being pushed to use these tools by management. People go into donkey mode quickly.
A bit like agile development.
mexicocitibluez@reddit
Same. Just literally making things up like "templates must be static". Where on god's green earth does that even come from?
TippySkippy12@reddit
Classic templates are generally provided, for example by the IDE.
The AI can deduce a template through pattern matching.
Worth_Trust_3825@reddit
why deduce when you can select the exact template that you need at a given time?
TippySkippy12@reddit
Because the AI detects a pattern as you write the code. For most things, there isn't an actual template for a repeated code within a specific context. But there are patterns.
s33d5@reddit
Because I haven't given you all of the details and my job is different to yours lmao
mexicocitibluez@reddit
are you replying to the right person?
s33d5@reddit
Nope lol!
mexicocitibluez@reddit
Why do I need the details to your job? What are you talking about?
s33d5@reddit
I said I'm not replying to the right person! The message wasn't meant for you!
You and the other person just have the same colour avatars so I clicked the wrong one.
mexicocitibluez@reddit
im dumb. my bad.
P1r4nha@reddit
Yeah, agent code is just so bad, I've stopped using it because it slows me down. Just gotta fix everything.
Helpful-Pair-2148@reddit
It really depends on the LLM / task. It's not a silver bullet; it's good for some stuff and bad for others. I use agent mode (with Claude 4) to write our documentation all the time and it works flawlessly, barely have to change anything.
chat-lu@reddit
LLMs write the exact kind of documentation we teach to avoid in CS 101.
Helpful-Pair-2148@reddit
Any concrete example? That couldn't be further from the truth, and honestly just the fact that you are trying to make that comment tells me everything I need to know about your opinion of AI: you haven't genuinely tried it, you are close-minded.
The number of PRs I've had to ask for changes on because a human wrote superfluous docs or comments is higher than I can remember. It has literally never happened with AI-generated docs.
chat-lu@reddit
Sure, go to the Zed Editor’s homepage and click on their main video. Wait until they explain that it’s amazing that LLMs can document your stuff for you. Pause the video. Actually read the documentation it generated. It’s total crap.
What we teach in CS 101 is “don’t document the how, document the why”. And LLMs can only understand the how.
Helpful-Pair-2148@reddit
At least provide a timestamp, some of us have actual jobs to do.
Jfc. That advice is for CODE COMMENTS, not DOCUMENTATION. Documentation should absolutely 100% tell you how to use your methods, because the consumers of your API couldn't care less about the "why". Maybe you should redo CS101 because clearly you have no clue about coding in general.
chat-lu@reddit
So do I. You are the one who initially asked for labour. That's the most I am willing to put because I have better things to do.
Helpful-Pair-2148@reddit
Just admit you are wrong ffs... you don't even understand the difference between code comments and documentation, that is beyond ridiculous. Stop arguing about subjects you obviously have zero understanding of.
Maybe the reason why you hate AI so much is deep down you know you are exactly the kind of bad dev that AI is good enough to replace.
chat-lu@reddit
Code comments are one form of documentation. The Swagger page you serve is a form of documentation. Your confluence wiki a form of documentation. Everything you write for a human and not for the machine is documentation.
I’m not sure what kind of weird definition of documentation you use so that comments don’t qualify.
By your own admission, you are the kind of dev that AI is good enough to replace since it writes code as good as you.
Helpful-Pair-2148@reddit
And each of those has a different purpose; only a very minimal subset of them should follow the "explain why, not how" adage, yet you used that adage to proclaim that AI was bad at all documentation. You are just trying to save face, it's plainly obvious.
Literally never said that so not only do you not know how to code but you also don't know how to read.
smallfried@reddit
You can probably adjust your prompting a bit to avoid superfluous comments.
wildjokers@reddit
It drastically speeds up the writing of unit tests. Sure, I generally have to massage them a bit, but still saves tons of time and I end up with better and more complete test suites.
hbgoddard@reddit
I'm amazed you trust an LLM to properly test your codebase
wildjokers@reddit
Why?
I review what it generates and add/remove tests as necessary. I don't blindly trust what it generates, but it saves tons of time.
WhyWasIShadowBanned_@reddit
20-30% is very realistic, and that's still an amazing gain for a company. Our internal expectation is a 15% boost, and it hasn't been met yet.
I just can't with people that say on reddit it gives the most productive people a 10x - 100x boost. Really? How? 10x would have been beyond freaking expectations, meaning a single person can now do two teams' job.
uthred_of_pittsburgh@reddit
15% is my gut feeling of how much more productive I have been over the last six to nine months. One factor behind the 10x-100x exaggeration is that sometimes people see immediate savings of say 4 or 5 hours. But what counts are the savings over a longer period of time at work, and that is nowhere near 10x-100x.
smallfried@reddit
Some tasks do speed up 10x. Problem is those tasks optimistically only took up 10% of your time, meaning that your total speedup is 100/91 or about 10%.
7h4tguy@reddit
Boost? Everything needs review. That's extra time spent. Maybe 5-10% of useful, actual productivity delta if we're all being strictly honest.
KwyjiboTheGringo@reddit
I've noticed the most low-skill developers doing low-skill jobs seem to greatly overstate the effectiveness of LLMs. Of course their jobs are easier when most of their job is plumbing together React libraries and rendering API data.
Also, the seniors who don't really do tons of coding anymore, because their focus has shifted to higher-level business needs, often tend to take on simpler tasks without a lot of unknowns so they don't burn out but can still get stuff done. And I could see AI being very useful there as well.
AI bots on Reddit and every other social media site have run amok as well, so while the person here might be real, you're going to see a lot of bot accounts pretending to be people claiming AI is better than it is. This is most obvious on LinkedIn, but I've seen it everywhere, including Reddit.
Connect_Tear402@reddit
There were a lot of jobs on the low end of software development. If you are an Upwork dev or a low-end webdev who managed to resist the rise of no-code, you could easily gain a 10x productivity boost.
SergeyRed@reddit
It has to be "it gives the LEAST productive people 10x - 100x boost"
Vivid_News_8178@reddit
What type of development do you do?
NoCareNewName@reddit
If you can get to the point where it can do some of the busy work I could totally get it, but every time I've tried using them the results have been useless.
RevTyler@reddit
I've been using it more for refactoring and completing repetitive tasks and I've really found that if you can do one part, then say "hey, look at this part, make similar changes to these other 30 parts". Give it some reference and it does a much better job. When you realize it isn't smart, it just knows a lot of things, you learn how to structure requests better for busy work.
7h4tguy@reddit
But your upper-level management are dictating everything must be tied to AI now and this is going to solve all problems, right?
QuantumFTL@reddit
Also I've had some fantastic success when I get an obscure compiler error, select some code, and type "fix this" to Claude 3.7 or even GPT 4.1. Likewise the autocomplete on comments often finds things to remark on that I didn't even think about including, though it is eerie when it picks up my particular writing style and impersonates me effectively.
Arkanta@reddit
I use it a lot like this. Feed it a compiler error, ask it to give you what you should look for given a runtime error log, etc.
It certainly doesn't code for me but it's a nice assistant.
DHermit@reddit
Yeah, there are some simple transformation tasks that I absolutely could do myself, but why should I? LLMs are great at doing super simple boring tasks.
Another very useful application for me is situations where I have absolutely no idea what to search for. Quite often an LLM can give me a good idea of what the thing I'm looking for is called. I'm not getting the actual answer, but pointers in the right direction.
_I_AM_A_STRANGE_LOOP@reddit
Fuzzy matching is probably the most consistent use case I’ve found
vlakreeh@reddit
I recently onboarded to a C++ codebase where static analysis for IDEs just doesn't work with our horrific Bazel setup and overuse of `auto`, so none of the IDE tooling like find usages or goto definition works. So I've been using Claude via Copilot with prompts like "where is this class instantiated" or "where is the x method of y called". It's been really nice; it probably has a 75% success rate, but that's still a lot faster than me manually grepping.
smallfried@reddit
Ugh, C++ makes it too easy to create code where a single function call takes reading 10 classes on different inheritance levels to figure out which actual function is actually called. Sometimes running the damn code is the only way to be sure.
smallfried@reddit
LLMs excel at converting unstructured knowledge into structured knowledge. I can write the stupidest question about a field I know nothing about, and two questions in I have a good idea about the actual questions and tool and API pages I should look up.
It's the perfect tool to get from vague idea to solid understanding.
CJKay93@reddit
I used it to add type annotations to an unannotated Python code-base, and it actually nailed every single one.
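Roughly the kind of change it was making (a hypothetical function, not one from the actual code-base):

```python
# Before: unannotated
def total(prices, tax_rate=0.0):
    return sum(prices) * (1 + tax_rate)

# After: what the model suggested, which type-checks cleanly
def total_annotated(prices: list[float], tax_rate: float = 0.0) -> float:
    return sum(prices) * (1 + tax_rate)
```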
7h4tguy@reddit
Maybe because it was high?
_I_AM_A_STRANGE_LOOP@reddit
I think in all contexts where you can defer to genuinely linguistic emergent phenomena - code often falls somewhat into this bucket - these models perform their best. Try to get them to play chess...
dudaman@reddit
Coming from the perspective where I do pretty much all of my coding as a one person team, this is exactly how I use it and it works beautifully. I don't get the luxury of code review most of the time, but such is life. On the occasion where I'll need it to do some "thinking" I give it as many examples as I can. I'll think, ahead of time, where there might be some questions about a certain path that might be taken and head that off before I have to "refactor".
We are at the beginning of this AI ride and everyone seems to want to immediately jump to the endgame where they can replace a dev (or even an entire team) with AI agents. Use the tool you have and get stuff done. Don't use the tool you wish you had and bitch about it.
mount2010@reddit
AI tools in editors would speed programmers up if the problem was the typing, but unfortunately the problem most of the time is the thinking. They do help with the thinking but also create more thinking problems so the speed up isn't really immense...
captain_zavec@reddit
It's like that joke about having a problem, using a regex, and then having two problems
tmarthal@reddit
Claude can one-shot most regexes if you provide a description and a couple of sample values to parse. It's great.
zdkroot@reddit
Lulz, very accurate.
Mysterious-Rent7233@reddit
This is far from universally true. For example, tricky mocks are automatically self-validating so I don't need to read them closely. And writing them is often a real PITA.
TippySkippy12@reddit
Tricky mocks are a sign you are doing something wrong (or mocking something you shouldn't) and don't validate anything. Mocks are descriptive and should absolutely be read closely because they describe the interactions between the system under test and its external dependencies.
Mysterious-Rent7233@reddit
In Python, the mock is checked to be replacing a real object. If there is no matching object, the mock fails. One must of course read that the thing being mocked is what you want mocked, but the PATH TO THE MOCK, which is the tricky thing, is validated automatically.
Furthermore, most mock assertions will simply fail if the thing isn't mocked correctly. How is the mock going to get called three times with arguments True, False, True if it wasn't installed in the right place?
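A minimal sketch of what I mean, with hypothetical names:

```python
from unittest.mock import patch, call
import mymodule  # hypothetical module under test

# patch() raises AttributeError if mymodule has no "notify", and
# autospec=True also enforces the real signature, so the path to the
# mock is validated automatically.
with patch("mymodule.notify", autospec=True) as fake_notify:
    mymodule.run_checks()  # exercise the code under test

    # If the mock weren't installed in the right place, the real
    # notify() would have run and these assertions would fail.
    fake_notify.assert_has_calls([call(True), call(False), call(True)])
    assert fake_notify.call_count == 3
```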
TippySkippy12@reddit
That is like a basic syntax check, and not the point of a mock.
The challenge with mocking is to understand why you are mocking. If you randomly patch your code to make the code easier to test, you are fundamentally breaking the design of your code.
Mocks should align to a higher level of orchestration between components of the system.
Thus, when I see a complex set of patches in Python test code, that is a smell to me that there is something fundamentally wrong in the design.
The real question is why is it being called with "True, False, True"?
Verification is actually the better part of mocks, because that actually demonstrates the expected communication. But the worst is when you patch functions to return different values.
For example, the real code can fetch a token. In a test you don't want to do that, so you can patch the function to return a canned token.
But, this is an external dependency. Instead of designing the code to make it explicit that it has a dependency on a token (for example, taking a token function as an argument), you hack the code to make it work, hiding the dependency.
This is related to Miško Hevery's classic article, Singletons Are Pathological Liars.
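A rough sketch of the contrast (all names hypothetical):

```python
def fetch_token():
    raise RuntimeError("real network call to an auth service")

def submit(amount, token):
    return {"amount": amount, "token": token}

# Hidden dependency: a test must patch("billing.fetch_token") to avoid
# the network call; the dependency is invisible at the call site.
def charge(amount):
    return submit(amount, fetch_token())

# Explicit dependency: the collaborator is an argument, so a test just
# passes a canned token source. Nothing is patched, nothing is hidden.
def charge_explicit(amount, token_source=fetch_token):
    return submit(amount, token_source())

# In a test:
assert charge_explicit(100, token_source=lambda: "canned-token")
```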
Mysterious-Rent7233@reddit
I don't think you are actually reading what I've written or trying to understand it.
For example, if you had read the section "Understanding where to patch" on this page, as I suggested, you would not have responded with: "Thus, when I see a complex set of patches in Python test code, that is a smell to me that there is something fundamentally wrong in the design."
Because as the page says, the complexity is IN HOW PYTHON DOES MOCKING, not in my usage of the mocking.
Since you are not interested in understanding what I'm saying, I'm not particularly interested in continuing the discussion.
Have a great day.
TippySkippy12@reddit
If you had understood what I said, you would understand why that link doesn't address my response.
That link is about the mechanics of mocking. For example, as I already said, in a test you should patch the function that returns the token. Just as the article says, patch the lookup not the definition.
I was talking about the theory of mocking. The higher level idea that mocks are supposed to accomplish in a testing strategy. If you want a better idea of this, put away that article and read an actual book like Growing Object Oriented Software Guided By Tests, written by the pioneers of mock testing.
So, when I tell you that I think that patch is terrible, hopefully you understand why.
Finally, to circle back to the point of this thread: you need to carefully define and pay attention to what you are doing with mocks, beyond "is my mechanical use of mocks correct", because it is the contract of the collaboration. AI can't be used the way you are describing to write effective mock tests.
Mysterious-Rent7233@reddit
Exactly. Thank you. That's precisely what I've been trying to say.
And MY POINT is that managing the MECHANICS of mocking is PRECISELY the kind of work that we would want an AI/LLM to manage so that a human being does not need to.
Which is why I'm deeply uninterested, in this context, in discussing the theory of mocking, because it's completely irrelevant to the point I was making.
I want an AI to manage the sometimes complex, confusing and tricky MECHANICS of mocking, so that I can focus on the THEORY of it, and on everything else I need to do to deliver the product.
TippySkippy12@reddit
Ah, I see. I was triggered by this:
Any time I see the words "mock" and "don't need to read them closely", I get nervous.
Mysterious-Rent7233@reddit
I did try to clarify that it is the PATH to the Mock that does not need validating.
For example, in Python, sometimes you override json.loads with patch("json.loads") and sometimes with patch("mymodule.loads"). Which you use depends on the module under test, but when I am thinking about the logic of tests, I should not need to focus on this detail. AI should handle it. If the AI gets it wrong (which it seldom seems to), I will get an error message.
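To make that concrete (mymodule and read() are hypothetical):

```python
# mymodule.py (hypothetical):
#     from json import loads
#     def read(s):
#         return loads(s)

from unittest.mock import patch
import mymodule

# The name `loads` was copied into mymodule's namespace at import time,
# so you must patch where it is looked up:
with patch("mymodule.loads", return_value={"stub": True}):
    assert mymodule.read("{}") == {"stub": True}

# Patching the definition site does NOT intercept the call, because
# mymodule.loads still points at the original function:
with patch("json.loads", return_value={"stub": True}):
    assert mymodule.read("{}") == {}  # the real loads ran

# Get this wrong and there's no error, just a test that isn't testing
# what you think. That's exactly the mechanical detail I want the AI
# to handle for me.
```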
shantm79@reddit
We use Copilot to create a list of test cases, just by reading the code. Yes, you'll have to create the steps, but Copilot provides a thorough list of what we should test.
zdkroot@reddit
So, three devs spending hours on code review is "more efficient"? At doing what? Producing bad developers?
DrGodCarl@reddit
Yeah, I'm building something somewhat complex, and I was fairly well able to describe the flow and have it generate all the CDK needed for it. I'll go in and fill in the real logic and focus on the meat, but having the bones in code after starting with plain English saved me at least a day.
bwanab@reddit
I totally agree. I've been using Claude Code and I've found two huge benefits. First, I spend a lot more time thinking about what I'm trying to do and less time worrying about syntax and naming things. Second, and possibly more importantly, I find that I can spend a lot more effort trying different approaches to see which is better. For example, I'll try one approach that would take me maybe a day to code myself but takes Claude Code maybe an hour to get right. I have it write unit tests, check in the original solution as a commit, roll back to the previous commit, and have it code another solution. I can do this as many times as I want, and try out several approaches in the time it would have taken me to code the first one. So I'm not sure the code that's produced is any better than I would have written myself, but I'm pretty sure the solution I arrive at is much better for having tried several and picked the best one.
Wall_Hammer@reddit
I hate writing tests; LLMs are good at writing them, and I then make sure the code is correct.
eldelshell@reddit
I feel stupid every time I use them. I'd rather read the documentation and understand what the fuck `leftpad` is doing before the stupid AI wants to import it, because AI doesn't understand maintenance, future-proofing, and lots of other things a good developer has to take into account before parroting their way out of a ticket.
RICHUNCLEPENNYBAGS@reddit
You can absolutely tell it “use XXX library” or “do this without importing a library” if you aren’t happy with the first result.
zdkroot@reddit
Where does this end? So you need to make a new rule every time the AI does weird shit?
Congratulations, you have recreated the justice system.
Understanding context without it being spoonfed is, like, why we will continue to use humans and why LLMs don't work well for programming. I can ask a question to any of my coworkers and I will not have to remind them to only give suggestions in the language we use and with the libraries we use, stored in the same place as everything else; they already know all that. It is just assumed.
I swear everyone who is fully guzzling the AI kool-aid does not work on a team. If you work alone adding an AI is like having an actual assistant, I get it. Having a team use LLMs is like adding a junior that will never improve or understand the code. It fucking sucks and is not some kind of 10x speed boost for any one person, let alone the entire team.
RICHUNCLEPENNYBAGS@reddit
No, you look at the output and make suggestions for things it should change. It’s not about trying to preemptively construct an elaborate rule set. It’s not really that different from giving CR feedback or scrolling through a few Stack Overflow answers or whatever in principle.
zdkroot@reddit
So what exactly do you think your role is here? How do you know if the AI is right or wrong? Where did you gain the knowledge/experience to make that decision?
What everyone is suggesting is trading writing code for reviewing it. I don't know where everyone gets the idea they know enough to be senior manager who only reviews code for structure. It's a complete joke.
RICHUNCLEPENNYBAGS@reddit
Where did I get the knowledge to evaluate whether code is right or wrong? Is that a serious question? The only person suggesting AI is a replacement for knowing what you are doing or going to drive overall project direction in this discussion was you.
zdkroot@reddit
I don't know why you are confused. Were you fucking born with it? Yes it is a god damn question, did you bother to consider it? How are people without decades of experience writing and reading code before LLMs existed supposed to gain this knowledge? Where did the code you learned on come from? How are people this fucking blind?
RICHUNCLEPENNYBAGS@reddit
By the same methods they always have? I imagine you also have some experience implementing sorting algorithms and basic data structures that you would rarely implement yourself in a production app for learning purposes.
zdkroot@reddit
It's hilarious you think the answer to the question is soooo obvious, yet you're really struggling to actually articulate an answer to it.
You are so fucking close to the point yet you are just dancing around it. Yes. I do. Where did I get that experience? Was I just fucking born with an innate understanding of quicksort? Did I absorb this knowledge through osmosis just being near an omnipotent AI who did everything for me?
I FUCKING WROTE THE GOD DAMN SHIT MYSELF
And it didn't work, and I had to debug it, and make it work. That is how we all learned. That is where I gained the experience to know what works and doesn't work, and so the fuck did you. You were not born with this knowledge, you had to go find it. That is not how any new people are going to learn when they lean so heavily on LLMs.
If you can't trust the teacher to tell you the truth, how are you supposed to learn anything? If you get a 75% on the quiz cause 1/4 of the things they told you were lies, would you continue to trust that teacher? What if there was no test? How would you know they were lying or incorrect? Oh, the code doesn't work then you have to debug it? Wow, what a game changing technology.
RICHUNCLEPENNYBAGS@reddit
OK. If you want to not use useful tools because some past version of you wouldn’t have been able to use them effectively that’s a choice you’re free to make.
zdkroot@reddit
You clearly are not following, to the point of being wilful, which is just as hilarious as this entire thread. Let's try something else.
Where do senior developers come from? Straight from the McDonald's drive-through to team lead? Do you think a person with no programming experience could teach themselves into a senior role using only LLMs?
Who would teach them right and wrong? Certainly not the fucking chatbots. You? How did YOU get to be a senior?
If we replace all the juniors with LLMs doing the grunt work, where will the seniors come from? If we replace all the seniors with LLMs because the juniors are able to write prompts, who is going to tell them what works and doesn't work?
Who in the organization has the whole application in their head? Anyone? What is your bus factor? You can't even ask the AI why it did a thing, because it doesn't have a reason, it just matched a pattern.
This is poisoning the well.
I never said they were useless tools. Not once, ever. Acting like they are going to upend the economy and every worker will be replaced by LLMs is such a pie-in-the-sky dream I struggle to accept that anyone is dumb enough to believe it.
A circular saw is a very useful tool. So is a nail gun. You cannot give a five-year-old free access to power tools and expect them to construct the Sistine Chapel without guidance. That is what everyone is suggesting these LLMs are capable of doing. They are not even close.
RICHUNCLEPENNYBAGS@reddit
OK, well, if you want to argue against such a proposition, why don’t you go find someone who thinks it’s true first?
makedaddyfart@reddit
Agree, and more broadly, I find it very frustrating when people buy into the marketing and hype and anthropomorphize AI. It can't understand anything, it's just spitting out plausible strings of text from what it's ingested.
7h4tguy@reddit
Buy into the hype? There are full-time jobs now YouTubing to venture capitalists to sell them the full hype package that's going to be the biggest thing since atoms.
There's rivers of koolaid, just grab a cup.
zdkroot@reddit
This got a light chortle out of me, thanks.
TechyAttam@reddit
I get that nothing beats actually understanding your code, especially when it comes to long-term maintenance. That said, I've been using Famous.ai more as a way to scaffold or prototype quickly. It's not about replacing the thinking part; it just helps get a working base so I can focus more on the logic and structure I care about.
zdkroot@reddit
I feel like your reference to `leftpad` specifically is being lost on a lot of people. That is exactly the kind of shit that happens when you don't understand the larger picture, which no LLMs do.
aksdb@reddit
AI "understands" it in that it would prefer more common pattern over less common ones. However, especially in the JS world, I absolutely don't trust the majority of code out there to match my own standards. In conclusion I absolutely can't trust an LLM to produce good code for something that's new to me (and where it can't adjust weights from my own previous code).
7h4tguy@reddit
Look man, we're just going to write this in Python OK?
(eye rolls seeing Python as the most popular GitHub language to be ingested by robots)
mnilailt@reddit
When 99% of stack overflow answers for a language are garbage, with the second or third usually being the decent option, AI will give garbage answers. JS and PHP are both notoriously bad at this.
ewankenobi@reddit
I ask it to provide the most efficient solution, the simplest most readable solution and a solution that balances efficiency and readability and to discuss the differences between the solutions. Then I end up either picking the best one or combining elements of each, which is what I'd end up doing when reading stack overflow
farmdve@reddit
I've used it to speed up many things.
For instance, I told it I wanted a GUI application for Windows that scans for J2534 devices, implements some protocols, adds logging capabilities, etc. About 80-90% of the code works.
Do you know how much time it would've taken me to code that from scratch? I am notoriously bad at GUI element placement. The neural net spit out a fully functional GUI with proper placement (in my eyes).
I also gave it a screenshot of a website and told it to create a wireframe with similar CSS. It did. It did so splendidly.
I told it to create a Django website with X features present. It did so. And it works.
And a few more applications (especially a matplotlib one) that, combined, would've taken me months or more to program from scratch, and my ADHD brain would've been on to new projects by then.
Nax5@reddit
I think the point is "it works" isn't sufficient for many senior engineers. The code is rarely up to standards. But it's certainly great for prototyping.
arkvesper@reddit
I like it for asking questions more so than actual code. I finally decided to actually dive into fully getting set up in Linux with i3/tmux/nvim etc., and gpt has been super helpful just as a resource to straight up ask questions, instead of having to pore through maybe not-super-clear documentation or wade through the state of modern Google to try and find answers. It's not my first time trying it out over the years, but it's my first time reaching the point of feeling comfortable, and gpt's been a huge reason why.
farmdve@reddit
It's obvious that /r/programming has an agenda of downvoting posts where the user has prompted more full-fledged applications.
DatumInTheStone@reddit
This is so true. The first issue you come across with the first set of code the AI gives you, it shuttles you off to a deprecated library or even a deprecated part of the language as the fix. Write any SQL using AI and you'll see.
aksdb@reddit
Yeah, exactly. I think the big advantage of an LLM is the large network of interconnected information that influences the processing. It can be a pretty efficient filter, or it can be used to correlate semantically different things to the same core meaning. So it can be used to improve search (indexing and query "parsing"), but it can't conjure up information on its own. It's a really cool set of tools, but by far not as powerful as the hype suggests (which, besides the horrendous power consumption, is the biggest issue).
WTFwhatthehell@reddit
LLMs try to complete the document in the most plausible way,
not just produce the most common type of X.
Otherwise LLMs would just produce an endless string of 'E's, since E is the most common letter in the English language.
Feed them well-written code in a specific style and they're likely to continue it.
Feed them crap and they're likely to continue the crap.
atomic-orange@reddit
Weighting your own previous code is interesting. To do that it seems everyone would need their own custom model trained where you can supply input data and preferences at the start of training.
aksdb@reddit
I think what is currently done (by JetBrains AI, for example) is that the LLM can request specific context, and the IDE then selects matching files/classes/snippets to enrich the current request. That's a pretty good compromise, combining the generative properties of an LLM with the analytical information already available in the IDE's code model.
ExTraveler@reddit
You can just ask AI "what the fuck is leftpad doing" and spend less time searching for it. And that is equal to "being more productive". Sometimes I think there's an enormous number of devs who don't even know how to fit AI into their workflow: they try a single prompt like "ChatGPT, write me a whole project", see shitty results, and conclude that this is it, there is nothing else AI can be used for, and since the results were shitty it isn't worth using at all.
TippySkippy12@reddit
Why would you do this, instead of just looking up the code or documentation yourself from the actual source?
Seriously, do you want the AI to wipe your ass too?
dimbledumf@reddit
I think it's clear you've not done any serious dev work in your life. Why spend hours when you can spend a minute?
Do you even use Google, or is that just for plebs? Why use Google when you could just read the docs?
wintrmt3@reddit
You don't seem to understand that maintenance is a much bigger part of developing something than writing the code in the first place, and you accuse someone of not being a serious developer? LOL.
dimbledumf@reddit
Are you saying you don't write tests?
Or are you randomly updating your components?
Getting the answer from AI or stackoverflow doesn't mean you don't understand the solution, but it does mean you don't have to spend an hour to figure out the right parameters.
Uristqwerty@reddit
Why spend hours familiarizing yourself with a dependency now, when you can spend twice as long debugging later? If you think the parent commenter hasn't done any serious dev work, then I suspect you've never had to maintain your own code for more than a few months before hopping to a new project, leaving the old behind as someone else's problem.
KrispyCuckak@reddit
Oh, please. This whole “AI can’t help developers” thing is the most hilariously outdated crock of horse shit I’ve ever read. You know what’s “actually” slowing developers down? Your lazy ass still using JavaScript like it’s 2005. Newsflash: AI is not here to hold your hand while you try to figure out how to unroll a basic for loop. AI is here to kick your ass into the future.
The idea that you could “out-code” an AI assistant is laughable. The AI is out there writing algorithms while you’re busy crying into your two-day-old coffee, googling “How to fix a segmentation fault in C++”. If you can’t even remember basic syntax, it’s time to step aside. You’re not a “developer”; you’re an unpaid intern at the ‘I’m Stuck’ support group.
And don’t give me that crap about AI not being creative. AI’s already out there inventing things you didn’t even know could be invented. Meanwhile, you’re sitting there like a 60-year-old man yelling at his toaster for not making him a damn sandwich. It’s embarrassing.
Let’s just say it—AI is going to replace half your job and probably give you a better performance review than your boss ever did. But you know what? Keep on crying about it. The rest of us will be sipping our espresso while our coding assistants write entire applications for us in under five minutes. Hope you enjoy that stackoverflow thread, champ.
TippySkippy12@reddit
... did you even read the thread you are posting to?
ExTraveler@reddit
No, I want to release my project, and I don't want to spend more time in Google than actually building my app, thinking about architecture, or deciding what I want it to do, etc. You remind me of those stories about devs in the 90s who refused to use an IDE "because it's cheating". Again, to be clear, I didn't say anything about letting AI write code for you.
TippySkippy12@reddit
This is a dangerous attitude, which misrepresents what I said. I'm not talking about automation. I'm talking about getting information from actual sources. The AI is not an authority which you should be asking "what the fuck does leftpad do". Leftpad has an actual project page, created by the actual authors.
This reminds me of people consulting the vast quantity of slop answers on StackOverflow. In the name of "getting things quickly", developers take what answers they can find without verifying if the answer is correct or applicable to their circumstance.
Your attitude is part of a more dangerous trend, where people don't go to sources anymore but trust information coming out of places like TikTok, because they want their information fast instead of actually checking sources, because who has time for that, amirite?
joshocar@reddit
It is just another tool in the toolbox that you can pull out in the right circumstances when you need it. For example, sometimes I'm working in a language I am not super proficient in. In those cases it can be hard to know what you are trying to find. Using AI, I can put in the line of code or function that someone wrote and immediately know the name of what I was confused by, and get a brief breakdown of it. I can then either move on or dig into the documentation to get a deeper understanding. This has saved me a LOT of time when I'm trying to onboard with new languages/projects.
TippySkippy12@reddit
Funny thing is, I do the opposite. I'll use AI to get summaries of what I'm more proficient in, because I will be able to better judge the AI summary.
I would not use AI to summarize something I'm not familiar with, and would rather read the documentation for context, because I do not trust myself to accurately interpret the AI summary and its applicability, because I don't have enough background information.
ExTraveler@reddit
By now, newer models rarely hallucinate when you ask about something that's in the documentation of a thing that isn't very new or niche. In those two particular cases nobody can stop you from reading the documentation, or in all the others if you just don't like AI. This is a tool that helps decrease the time you spend searching for things. That's what matters. I would totally agree with you if this were the ChatGPT 3 era, when it would just feed you hallucinated BS that God knows where it came from.
TippySkippy12@reddit
I'm not talking about hallucination, but the idea of not consulting sources and the detriment that causes the human element of software development (or the pursuit of knowledge in general).
This has a negative impact the other way as well. A lot of projects in the days of StackOverflow got lazy and outsourced their "documentation" to StackOverflow. This leads to a decrease in authoritative information, leading to knowledge essentially becoming anecdotal.
In the pursuit of making it "easier to search for things", we forget how to actually search for things, which only results in more slop.
Don't get me wrong, I'm not against AI. But if I want to know "what the fuck leftpad does", I'm not asking the AI, I'm going to the source, because I still know how to do that.
dimbledumf@reddit
Seriously, why would you just have AI give you the answer when you can go get the documentation and start reading through it? It's only a few hundred pages, plus it has an index to narrow it down to 5 or 10. Sure, your particular use case is unlikely to be in the docs, but if you just get the code off GitHub or decompile it yourself you can figure it out. It's only a few million lines of code you'll have to look through without any documentation or good starting point; why is that so hard?
Sure, you could just ask AI and it will give you the answer in one sentence, but you'll miss out on all that cool digging around obscure parts of the internet looking for your answer that makes a true dev /s
TippySkippy12@reddit
Ah yes, reading is hard, let's trust the AI to give me a one sentence summary so I don't have to make my head hurt and let's go shopping!
dimbledumf@reddit
Wow, yes why spend a minute to fix an issue when you could spend hours. I must hate reading, we all have lots of time to spare and don't mind reading through tons of documentation especially for something that is only used once in the entire project.
TippySkippy12@reddit
My guy, I've been doing dev work since the days when you actually had to buy books.
ExTraveler@reddit
Man, as developers we solve problems. I want my app to do this and that; while writing code I face problems and tasks that need to be done so the project actually gets done. That's it. If you want to be a more "true" or "cool" dev by spending unnecessary time, so be it, just remember what you are doing and why. If this is fun for you, that's OK; just remember that there is no meaning in writing some random code, all code is meant to do something and that's why you write it. What is your goal? I feel like for most situations using AI is better. When I need some answers while building something, I'd rather just get them in 10 seconds and continue actually creating something new with that information, not spend unnecessary time.
TippySkippy12@reddit
Yes, your job is to solve problems. But the actual code you write is a small part of the solution.
Your job isn't to write code right, it is to write the right code. This means primarily having an understanding of the business requirements and functional requirements. It also means understanding the frameworks and libraries used by your application.
If you don't do this, and take shortcuts to avoid spending "unnecessary time", I suggest you aren't solving problems, you are creating problems. If not for yourself, then for the poor souls who have to maintain or extend your code.
ChampionshipSalt1358@reddit
Wow dude. Just, wow.
Glugstar@reddit
Ok, then what? AI returns an answer, how do you know it's not complete bs that it just hallucinated? You still have to do the normal research that you would be doing in order to verify the answer.
AI can't help you with learning new information.
runescape1337@reddit
Sure it can help you learn new information.
"I'm going to use leftpad to do this __. Is there a better option?"
If it says no, you were going to use leftpad anyway. If it says yes, you look into the answer. Anyone blindly copying/trusting it is a terrible developer. Use it as a glorified search engine to figure out what you actually want to google, and you can learn new information much more efficiently.
ExTraveler@reddit
I think I am done discussing a tool with people who clearly didn't use it properly even once
Hyde_h@reddit
Yea but I’ve tried this and gotten complete bs many times. Especially if I’m tracking down edge case functionality or something more convoluted, it will make shit up. I then have to spend time verifying what parts are true from the actual documentation.
TippySkippy12@reddit
Shouldn't you be doing that anyways, regardless of LLM?
Human developers make the same mistake. Especially Javascript developers.
Nooby1990@reddit
Without LLM you import only what you understand, but with LLM you might be presented with imports you don't understand. The decision making is backwards.
TippySkippy12@reddit
That's only if you are typing the imports by hand. Most modern IDEs will auto generate the imports, especially if you are copying and pasting code from somewhere else.
Nooby1990@reddit
If your IDE imports random unknown libraries then I would suggest to switch to a serious IDE. Most IDE that I know only automatically import stuff from the stdlib or things you already have explicitly installed.
I have never had an IDE just import something I don’t know.
TippySkippy12@reddit
IntelliJ and VSCode do this all the time.
A fun example: I saw `Pair` imported from JavaFX in a backend project PR, and I commented "check your imports, bro". Turns out, IntelliJ had automatically done the import, and the developer didn't notice.
Are you suggesting that the most popular IDEs for most production code are not serious?
Nooby1990@reddit
I do not work with Java, but yes I would suggest that an IDE that does shit like that should not be considered a serious IDE.
Why does it just import things from unrelated sources, and why did the dev not notice? Both are unacceptable in my opinion.
TippySkippy12@reddit
People don't pay attention to imports (treating it as boilerplate at the top of the file), and IntelliJ tries too hard to be helpful.
But I find it hilarious that you think this makes IntelliJ "not a serious IDE" when most serious work in Java is done in IntelliJ.
janniesminecraft@reddit
thats because you are not accurately describing intellij's behavior to him. intellij does NOT import code that is not installed in the project. javafx is available on the classpath of the project, otherwise it would not be imported.
the reason this usually happens is because at some point he autofilled in a function from javafx, either by accident or temporarily before realizing it is not necessary, then deleted it, but not the autoimport intellij did simultaneously.
TippySkippy12@reddit
This was in the days of Java 8, when JavaFX was bundled with the JDK.
In fact, this was one of the things that came up with the Java11 migration, because people imported random classes from Xerces and JavaFX unnecessarily (because they were on the classpath in Java8).
janniesminecraft@reddit
Yes, but this is not comparable to the behavior of AI. AI will try to add random libraries and actually use them. You are saying humans make the same mistakes, but this is not the same mistake that guy was talking about. This is just leaving an unused import at the top of the file, something you can clear out of the entire codebase in less than 30 seconds with 0 side-effects.
That's not the same as AI deciding that your code should be using left-pad from npm to pad strings.
TippySkippy12@reddit
You're missing the point.
If a human imports something, you usually have a PR review to decide whether that's an import you actually want.
You should review AI code the same way you would human code. AI doesn't "decide" anything unless you're insane enough to turn over the keys to your codebase to the AI.
janniesminecraft@reddit
if a junior in my firm imported leftpad, id tell them to cut that shit off. they would learn, and would be more careful not to import bullshit in the future. if a senior did it, i would try to get them fired.
for ai, i need to keep reviewing the same insane shit mistakes. at that point i might as well write the code myself, as i am just losing productivity. reviewing code is much harder than writing it, reviewing code does not give you the same understanding of it as writing it.
you are saying human developers make the same mistake, but they don't. at least not forever. they will improve, and at some point they won't be importing stupid shit.
TippySkippy12@reddit
you realize "AI" is not all or nothing right? Sometimes it makes good suggestions, sometimes it doesn't. You probably shouldn't be letting AI be creating PRs, but either accept or reject its suggestions. It's a tool.
AI also improves, through additional training data. Sort of like humans.
janniesminecraft@reddit
except that it can make bad suggestions that look extremely similar to good ones, leading you down a far bigger rabbit hole of a problem than just fixing it yourself. i've spent a whole day trying to fix something with ai, then spent an hour myself and fixed it instead.
should i have known ai will be gaslighting me for a day? it SEEMED to be getting closer to the solution. it SEEMED to have good ideas.
from my using ai, this issue seems almost fundamental. ALMOST anything the ai does, i could've done myself just as fast. and it seems to me that almost all the 10% of cases where ai solves a problem faster than i think i would've, are offset by the 10% of the time ai successfully gaslights me into wasting an equally large amount of time on shit that wont work. and even if that is not totally equivalent, and ai saves SOME time, it is not worth the loss of context and understanding of the code i get by writing it myself.
it does improve, but not at all like a human. it's extremely incremental at this point. the fundamental issues have not changed at all since the introduction of reasoning models (which were genuinely a huge upgrade i will admit).
i still use it. it's cool. it just has tons of issues.
TippySkippy12@reddit
humans do this too, and the same thing can happen if a human gives you a bad suggestion. that's why you have to have judgement.
this has not been my experience. it all boils down to how you are using the AI.
what is even your point? did anyone say AI is perfect? It's a tool. Use it where it makes sense, don't use it where it doesn't. It's not a replacement for thinking and judgement.
janniesminecraft@reddit
yeah, but they learn from it. next time less bad suggestions. ai does not do that. training a foundation model is not the same.
i agree ai is fine for tasks like the one you said. anything with very predictable text manipulation it is generally good at.
Empty_Geologist9645@reddit
Terrible argument, because your execs don’t care either.
AlSweigart@reddit
"Spicy autocorrect."
MirrorLake@reddit
Automate the Boring Stuff: Spicy Edition
flopisit32@reddit
I was setting up API routes using node.js. I thought GitHub Copilot would be able to handle this easily so I went through it, letting it suggest each line.
It set up the first POST route fine. Then, for the next route, it simply did the exact same POST route again.
I decided to keep going to see what would happen and, of course, it ended up setting up infinite identical POST routes...
And, of course none of them would ever work because they would all conflict with each other.
throwaway490215@reddit
There's a lot not to like about AI, but for some reason the top comments on reddit are always the most banal complaints.
If you're actually a good dev, then you will have easily figured out you need to tell the AI to not add dependencies.
It's not that what you mention isn't part of a very large and scary problem, but the problem is that juniors are becoming even more idiotic and less capable.
If you think it's a problem because it's preventing you from getting something of value from AI, then I have bad news for you.
UpstairsStrength9@reddit
Standard preface of I still find it useful in helping to write code it just needs guidance blah blah - the unnecessary imports are my biggest gripe. I work on a pretty large codebase, we already have lots of dependencies. It will randomly latch on to one way of doing something that requires a specific niche library and then I have to talk it out of it.
Tarik390@reddit
right. It loves to pull in random libs for no real reason. I’ve started just telling it upfront what to avoid or to stick with built-ins. Helps a bit.
phil_davis@reddit
For actually writing code I only find it really useful in certain niche circumstances. But I used chatgpt a few weeks ago to install php, mysql, node/npm, n, xdebug, composer, etc. because I was trying to clone an old laravel 5 project of mine on my linux laptop and it was great how much it sped the whole process up.
vital_chaos@reddit
It works for things like that because that is rote knowledge; writing code that is something new is a whole different problem.
ZestycloseAardvark36@reddit
For myself, I would say around 20% more productive; mostly tabbing, rarely agentic. Agentic too often results in a complete revert for me. I have been using Cursor for a few months now.
NoMoreVillains@reddit
The only thing I use AI for is particularly tricky SQL queries or bash scripting. IMO it works best when it's a replacement for the time it would take you searching through docs or SO answers and for something you can immediately verify, understand, and easily tweak afterwards.
If it's being used to generate large amounts of code you lose a lot of the thinking behind decisions or the ability to factor in the larger context/architectural decisions and planning
dwmkerr@reddit
And protocols for AI are frankly awful at times. People gush over MCP but stdio breaks 40 years of unix conventions, local execution via npx is a huge attack vector, especially when what you download can instruct your LLM. Plus no distributed tracing as you can’t use HTTP headers (seriously, context for a remote request was solved effectively by HTTP headers decades ago). So many simple and battle tested conventions ignored, feels like the protocol itself was scaffolded by an LLM not thinking about how we’ve been able to use integration patterns for years. I mean the protocol works, I’ve stitched lots of stuff together with it, but in my enterprise clients we have to have a raft of metadata fields just to make sure we sensibly pass context, are able to trace and secure and so on. Rant over
dwmkerr@reddit
Honestly my biggest improvement isn’t writing code, it’s using LLMs to take it away. Heavy code review, find inefficient and useless abstractions, discover options to use a library rather than bespoke logic. Using LLMs as a safeguard to say “do you really need this” can be more helpful than the manic approach of them writing a shit tonne of stuff.
I probably spend more time now writing guidelines, like the ones here: https://github.com/dwmkerr/ai-developer-guide . That one is basic because I have a richer one internally for work, but by extracting the best idioms I can have agents attack multiple repos and bring things into line with sensible standards. I think too many people forget good engineers write less code: they compose complex workflows from simple steps, avoid over-design, plan for long-term maintenance and reliability, make SREs' lives easier, etc.
HunterIV4@reddit
Current AI struggles with anything larger than a single function, and it will struggle even with that if a lot of context is needed. That may change in the future, and it's already getting better, but for now I find that Copilot often spits out stuff I don't want, and I eventually turned off the built-in autocomplete.
It is, however, pretty good at refactoring and documentation, assuming you give it good instructions (do not ask it for "detailed" doc comments, as it will give you 20 lines of docs for a 3-line function), and it's good at following patterns, such as giving it a dictionary of state names to abbreviations and having it fill in the rest of the states, as in the sketch below. Having assistance with the otherwise tedious parts of programming is nice. It's also not horrible at catching simple code problems and helping debug, although you need to be cautious about blindly following its suggestions.
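For example, the state-abbreviation case looks roughly like this (a made-up snippet, but it's the shape of the pattern):

```python
# Type the first few entries yourself and the assistant pattern-matches
# the rest (values are the standard USPS abbreviations):
STATE_ABBREV = {
    "Alabama": "AL",
    "Alaska": "AK",
    "Arizona": "AZ",   # from about here on it was pure autocomplete
    "Arkansas": "AR",
    "California": "CA",
    # ... and so on through "Wyoming": "WY"
}
```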
I think it can be a useful productivity tool, if used in moderation and within specific roles. People claiming it's "glorified autocomplete" are wrong both on a technical and practical level. But "vibe coding" is suicidally dangerous for anything beyond the most basic of programs and should not be used for production code, ever. We'll need a massive increase in AI reasoning and problem solving skills before that's possible.
On the other hand, ChatGPT does better than a depressing number of junior programmers, so... yeah. LLMs aren't going to replace coding jobs, at least not yet, in any company that isn't trying to scam people. But they aren't nearly as useless as I think a lot of people wish they were, and frankly a lot of the "well, ChatGPT didn't write my entire feature from scratch and perfectly present every option!" is user error or overestimation of human programmer skill.
LLMs don't have to be perfect to replace most programming jobs, they just have to be better than the average entry-level dev. And they are a lot closer to that level than you might think.
eslof685@reddit
Headline: "AI coding assistants aren’t really making devs feel more productive".
Proof: A chart showing 68% of "engineering leaders" saying that AI makes them feel more productive.
CatholicAndApostolic@reddit
This subreddit is the one holdout on the internet that's trying to pretend that AI isn't improving programming productivity. Meanwhile, senior devs like myself are achieving in 1 day what used to take 2 weeks.
BlobbyMcBlobber@reddit
They're a good productivity boost (most of the time), but the truth is that if AI becomes good enough to complete tasks on its own, a lot of developers will lose their jobs, and a single developer could orchestrate dozens of AI agents to complete a project at a fraction of the cost.
vehiclestars@reddit
It's only useful for coding simple programs and doing things like writing SQL queries.
WeeWooPeePoo69420@reddit
Hm I can one-shot simple web apps. I'm not sure how that's not productive.
Ferovore@reddit
Are you employed as a SWE?
WeeWooPeePoo69420@reddit
Yep I'm a senior software engineer. I actually don't use AI at work much except for debugging sometimes, but it's great for personal projects or a small start-up MVP.
Ferovore@reddit
So you say sorry if others don’t know how to leverage it but then admit you don’t use it for your actual job..?
WeeWooPeePoo69420@reddit
I think your argument has run out of steam
Ferovore@reddit
Seems like you just can’t defend not practicing what you preach.
WeeWooPeePoo69420@reddit
Nah I just prefer having actual conversations about stuff like this, not devolving into personal attacks
Helpful-Pair-2148@reddit
Not the person you asked the question to, but I am employed as a senior software engineer and share a similar experience. I've been working in FAANG companies for over 10 years, and been coding for over 20.
Devs who don't currently use AI are idiots who will soon go extinct. It's a tool, it's not there to replace you but it does make you a lot more productive... when you know how to use it.
It's like a chainsaw. It's a lot faster than chopping wood with your bare hands or even an axe, but if you use it improperly, you will mess up bad.
MrWFL@reddit
Honestly, it just feels like Google back in the old days, but with a little less thinking. Google got worse, LLMs got better. But if you're doing sophisticated/niche stuff, it quickly shows its limits.
Helpful-Pair-2148@reddit
"Quickly show its limits", that's your mistake for believing that AI is a silver bullet tool to do everything. Who here in this thread ever said LLM should be used for "sophisticated/niche" stuff? It's like arguing that a screwdriver quickly shows its limits because you tried to hammer a nail with it.
Ferovore@reddit
I use it every day man. There’s still a stark difference between a 10-40% productivity increase and a noob who thinks that because it can shit out a web app people who criticise it “don’t know how to leverage it properly”
WeeWooPeePoo69420@reddit
I was just responding to the submission title, "AI coding assistants aren’t really making devs feel more productive". If you aren't more productive with AI, and you're trying to be, that's on you. And it's also not a fault of that person. But it doesn't really say anything about AI itself.
Helpful-Pair-2148@reddit
Yes, that's why I said it's a tool. If you suck at programming you will suck even with AI. If you are good then AI will increase your productivity significantly.
GuruTenzin@reddit
The only code I've ever seen with no bugs was an empty file.
WeeWooPeePoo69420@reddit
I mean I'm talking like todo list simple, but for various utilities
ArakenPy@reddit
I used Copilot for around 4-5 months. I stopped using it because I was spending more time debugging the code it generated than thinking of my own solution.
RMCPhoto@reddit
I think the biggest issue is expectations. We expect 10x developers now, and for challenging projects it's not nearly at that level. So we still feel behind and overburdened.
The other problem I have personally is that AI coding allows for a lot more experimentation. I was building a video processing pipeline and ended up with 5 fully formed prototypes leveraging different multiprocessing/async paradigms... it got so overwhelming that I became lost in the options rather than just focusing on the one solution.
loptr@reddit
Not just that, but it's literally a new way of working; it's bizarre not to acknowledge the learning curve and adaptation.
There is no profession where you can haphazardly introduce completely new tools and development behaviors and expect people to become faster without first becoming slower while they learn to master them. But that seems to be wholeheartedly ignored.
CherryLongjump1989@reddit
It’s not a new way of working at all. Employers have been shoving 5 useless contractors on any project that I couldn’t fully complete myself and telling me, “there you go, now you have all the help you need” since I started in the 90’s. Now they are just treating the “AI” like a totally free useless contractor.
NotTooShahby@reddit
I’m curious, do they often hire contractors for new work or to maintain older systems while devs make newer ones?
CherryLongjump1989@reddit
As a rule of thumb, they'll try to save money on labor any way they can, wherever they perceive an opportunity to do so.
ThisIsMyCouchAccount@reddit
I'm working at this crappy start-up now. The owners deepthroat AI. I have to provide examples of how I'm using it in some of my status reports.
It's my first experience using it. At my last place we had not yet figured out the legality of it since it was all project work and that would mean sending source code to LLMs.
And you're right - it's a new way to work. I've helped along that process by using the in-IDE AI that JetBrains offers. But it's still a new skill set that needs to be learned.
In-line autocomplete:
Hot and cold. When it's right it's very right. I defined seven new variables and when I went to do some basic assignments it suggested all of them at once and it was exactly what I was going to write.
I'm doing some light FE work now in the template files. It just can't handle it. I'll be adding a new tag and it suggests BE code.
Agent:
Used it once it did exactly what it was supposed to. I asked it to make event/listener combos for about half a dozen entities. It scurried off and did them. And they were 95% correct.
On the other hand - there are console commands to do that exact thing. And it mostly just ran those commands and made some small edits.
Commits:
Functionally this has been the best. It somewhat matches the conventional commits structure:
feat(TICKET-NUMBER): short description
And the longer description it puts after that has been better than any commit message I've ever written. It is somehow brief and specific. It doesn't just list off the files that changed. It actually has some level of context.
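For example, something in this shape (ticket number and wording made up, but that's the flavor):

```
feat(PROJ-123): filter archived invoices out of exports

Excludes archived invoices from the export query and adds an explicit
flag to include them. Touches the exporter service and its config, not
the underlying entity definitions.
```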
internet-name@reddit
This is another example of insane management thinking. It's a violation of the sovereignty of engineering, akin to asking how many lines of code you've written.
ThisIsMyCouchAccount@reddit
Oh yeah. I don't think anybody there has any experience making software. Even though one of the co-owners is a dev. And the other co-owner is managing like it's 20 years ago.
teslas_love_pigeon@reddit
You should look at the VC firm who gave your crappy startup money and see if the AI products you're forced to deep throat are related to the firm.
ThisIsMyCouchAccount@reddit
1: It's not really a start-up. They call themselves that. But it's really just a small business they are trying to start. I'm pretty sure VC has nothing to do with it.
2: It's all AI. They want devs to use any AI. They had designers using AI for prototyping. Instead of getting Jira they signed up for some BS called Airtable that has AI front and center.
teslas_love_pigeon@reddit
ah gotcha, always interesting to see what startup means to other people. SMB/lifestyle companies tend to have their own problems, but they're also way more susceptible to change.
Hopefully your employer wises up because at that level you can't really afford to take many bets.
ThisIsMyCouchAccount@reddit
As best I can tell they're using it as an excuse to half-ass everything.
teslas_love_pigeon@reddit
Companies don't care, we're heading into a future where aligning with the company can now be against your interests as the company is pushing ways to remove you from a job.
Previously if you wanted to get ahead in your career, supporting company goals and initiatives is what you traditionally did. Now it's turned on its head where the company goal is removing you entirely.
mirbatdon@reddit
I have felt this way every day working in the devops space.
jl2352@reddit
Feeling overwhelmed is a real issue. I have had PRs get done in half the time with the help of AI, however the experience felt like an intense fever dream about programming.
PoL0@reddit
I tried copilot with zero expectations and even then I was disappointed.
the expectation seems to come from the top level, and the disconnect is huge, deeply influenced by marketing and hype. we have received several workshops from "experts" which have been lackluster, pointless, and sterile.
the feeling is that it's being shoved down our throats because it's the trend now, and because they expect they'll be able to replace people and save money. which is a terrible idea: you cannot replace even juniors, because you need new people to learn the ropes; they are the seniors of tomorrow.
this whole deal is crazy, can't wait for the fever to pass.
reddituser567853@reddit
Idk, for me it's been a fun opportunity to learn some DevOps and give my pet personal projects sophisticated CI/CD, which makes my coding more pleasant, and, what a coincidence, is the exact thing you need to do to scale agentic workflows.
Just pretend you're a software company with strict branch rules and release strategies.
The problem is that eventually you have to make the decision to give up control and just gate PRs, then eventually just feature branches.
startwithaplan@reddit
I think people are realizing that it's 1.1x and that the majority of AI suggestions are rejected. It's good for boilerplate, but not always.
haltline@reddit
I'm often drawn out of retirement (not unusual for me, really) to fix things, and my initial reaction to AI-assisted coding was that it made bad code. However, that was unfair, because I was only seeing folks who failed at using it. After all, they aren't calling on me because stuff is working, so I'm getting bad samples.
Luckily, there are usually some good programmers who want to pick my brain to see if they can find something to add to their knowledge (I also get educated by them on more current things). I saw them using AI assist quite effectively.
My summary: AI assistants are a hammer. In the hands of a carpenter they build things. In the hands of children they break things. They don't do much of anything if not in someone's hands. Management needs to realize (and they hear it from me now) that AI doesn't do the job on its own. Good programmers are still required, for they must understand what they are asking of the assistant.
Resident_Citron_6905@reddit
Code is only “free” if you allowed yourselves to be frame controlled into oblivion.
ILikeCutePuppies@reddit
My take is somewhere in the middle of this.
I do find AI is allowing me to get code much closer to where I would like it. I can make some significant refactors and get it much closer to perfect. In the past, attempting that could take a few weeks, so I wouldn't make those changes until much later. Now I can do it in days.
Now it probably adds a few days, but the code is much more maintainable. My diff is fully commented in Doxygen with code examples and formatted well. I have the AI pre-review the code to save some back-and-forth in reviews. I have comprehensive tests for many of the classes.
The main thing that will improve: the AI I use (other than direct chatbots) takes about 15 minutes to run, sometimes an hour. It's company tech and understands our codebase, so I can't use something else. It isn't cloud-based, so I can only do non-code-related tasks while it's going (there is plenty of that kind of work).
It doesn't do everything, either, like running tests; it just validates builds etc., so I need to babysit it. Then there is a lot of reading to compare the diff and tell it where to make changes, or make them myself. [This isn't vibe coding.]
However, once this stuff speeds up and I do get more cloud-based tech... I think it will accelerate me. Also, of course, accuracy will help. Sometimes it's perfect, and sometimes it just can't figure out a problem and solves it the wrong way.
spiderpig_spiderpig_@reddit
I think the thing is, with the docs and code examples and so on: are they really adding anything of value to the output, or is it just more lines to review? They still need review, so it's not obvious that commenting a bundle of internal funcs is a sign of productivity.
ILikeCutePuppies@reddit
I ask it to write relevant comments and comments with examples: what I would want to read if I were reading the code. I tell it not to write comments like "this is a constructor", and to only do header comments.
You don't typically use Doxygen on internal comments. It's also used to auto-build documentation.
delinka@reddit
It has been most beneficial for me to get suggestions for dependencies, code snippets for my projects, and pattern-matching autocomplete on text manipulation (like turning a list of 200 strings into relevant collections of enums).
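The enum case was roughly this kind of transformation (a made-up miniature of it):

```python
from enum import Enum

# Input: a flat list of strings such as
#   ["color.red", "color.green", "size.small", "size.large", ...]
# which the assistant pattern-matched into grouped enums:

class Color(Enum):
    RED = "color.red"
    GREEN = "color.green"

class Size(Enum):
    SMALL = "size.small"
    LARGE = "size.large"
```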
I had it build for me an entire prototype app. It got the layout right, scrolling and buttons did the right thing, sample data and audio worked nicely. But it couldn’t add new features without breaking the first rendition of the app. And when I got in there to start edits, the organization of structs, classes and functions barely made sense for the prototype, and made no sense for the other features I would want.
liquidpele@reddit
AI makes terrible devs seem okay, but they never get any better, because they never learn anything by using AI for everything. AI has little impact for experienced devs; it's like a fancy IDE: it has some cool features we might use, but that's about it.
Far_Yak4441@reddit
Oftentimes when using it for debugging, it will try to send you down a rabbit hole that's just not even worth looking into.
Familiar-Level-261@reddit
What a dev feels is irrelevant; actual improvement in productivity is what's relevant.
I'd imagine the improvement gets smaller the more experienced the dev is, just because it goes from "writing code instead of the dev" for the inexperienced to "pawning off the repetitive/simple tasks" for the more advanced ones, who focus more on building complex stuff that needs more thinking than lines of code produced.
TyrusX@reddit
I just feel empty and hate my profession now. Isn’t that what they wanted us to feel?
golden_eel_words@reddit
Same. There's an attitude going around that AI can do everything that I've spent most of my life learning as a skill, and that being paid to do what I do makes me some kind of con artist. I got into software engineering because I love the art of solving complex problems and it gave me a sense of pride and accomplishment. The AI can do some of what I do (no, it can't replace me yet) and is a great tool, but forgive me for feeling like it's taking the joy out of something I've loved doing for my entire life.
7h4tguy@reddit
Sometimes this idiot AI will just literally grep **/* for something when I've obviously already done that. If you have no training on the data or intelligence to be helpful, then what's the point?
golden_eel_words@reddit
Sure, but the point I was trying to make wasn't at all about the effectiveness or utility of the agents. It was that it's being hyped as a replacement for a thing that I've built a life and passion around learning and refining that I consider to be a fulfilling mix of art and science.
They're good tools that can definitely augment productivity (especially when guided and reviewed by a professional). But they're also being used as an excuse for companies to hire fewer software engineers and to not focus on leveling up juniors. I also think they'll lead to skill atrophy over time. I see it as digging my own grave on a thing I love, except what non-professionals seem to think is a shovel is currently actually only a spoon.
Pozeidan@reddit
It's funny because for me it's the other way around. Copilot wasn't bad, but it wasn't super helpful either. We now use Cursor with well-written cursor rules and I'm having a blast. Why?
Because AI is great at everything I find tedious, like writing a detailed PR description, typing most of the code, and explaining things that are obscure. It's also good at finding things in the codebase. If you prompt it right and give it good context, it's amazing at writing unit tests.
Of course you need to double-check everything that's generated and fix some things. And sometimes it's faster to simply make the changes than to write a prompt, but even that is fast, because the cursor often jumps to where you should go next and it's right most of the time. It does save some time and allows me to take more breaks and have greater output; I don't feel as exhausted at the end of the day.
Also, I'm not a fast typist; I've always used a keyboard and mouse, so for me it's great.
It's just a different way of working and it needs some adaptation, but I definitely love it. It's not yet good enough to provide good feedback on PR reviews, in my opinion, but I like doing those anyway.
yabai90@reddit
Nobody wanted that, it's just called progress. Things change and you just can't stop it. Many manual workers went through the same thing with automation and factories. It's (hopefully) just elevating society. We will figure out what we enjoy next, I'm sure of it. At the moment I share the same feeling: writing code is not as enjoyable and feels less and less valued. Being an orchestrator far from the actual code is not something we all enjoy at the moment.
gigaquack@reddit
What part of society appears elevated via AI?
dekuxe@reddit
Are you joking?
yabai90@reddit
So many; efficiency, to pick one.
malakon@reddit
I maintain a .net wpf xaml product.
Xaml is a pita, but also quite beautiful. Trouble is, when you don't work on it for a while, its complex, arcane ways quickly leave your brain.
So I often remember that I know a thing is possible, but not the specific xaml syntax for how to do it.
Enter Copilot. Give it enough prompting and some example xaml, and it tells you the best way (and some alternative ways) to achieve it, with pasteable xaml code that's mostly ready to use, usually with a bit of tweaking.
In that role, AI definitely helps. It's substituting for what I would have done by conventional searching and trying to find similar situations.
The code generation stuff is nice. But meh, take it or leave it, I can type it myself pretty quickly.
AI test case generation is definitely cool. I use that ability a lot, because I hate writing them.
krakends@reddit
The founders need funding. They can't get it if they don't claim AGI is imminent.
30FootGimmePutt@reddit
They have backed off on the AGI claims, since too many credible people called them out.
Now Apple is releasing papers showing just how lame LLMs truly are.
BaboonBandicoot@reddit
What are those papers?
30FootGimmePutt@reddit
https://machinelearning.apple.com/research/illusion-of-thinking
ninjabanana42069@reddit
When this paper came out, I genuinely thought it would be all over the place and would hopefully generate some interesting conversation here. But no, it literally wasn't even posted; in fact, the only place I've seen it mentioned was one post on Twitter, which is crazy to me. I'm no conspiracy theorist or anything, but it did seem a little odd to me that it went this unnoticed.
7h4tguy@reddit
3 years. It's all going to change. Just give me 3 billion and you'll see.
a_moody@reddit
Tell that to the execs using this as an excuse to layoff people by the hundreds.
7h4tguy@reddit
Who have never even used the AI. It's all just feel-good hype demos.
Pharisaeus@reddit
I've seen cases where a developer was significantly less productive.
They were using some external library and needed to configure some object with parameters. Normally you'd check the parameter types, then jump to the relevant classes/enums to figure out what you need. And after doing that a couple of times, you'd remember how to do it, especially if there is some nice fluent builder for the configuration.
Instead, the developer asked Copilot to provide the relevant configuration line, and they copied it. They told me it was something "complicated", even though they'd done it a couple of times before. But since they never tried to understand the line they copied, they would have to spend a minute each time typing their query to Copilot and waiting for the lengthy response, just to copy that specific line again.
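For illustration, a sketch (library and names invented) of the kind of fluent-builder line the developer kept re-requesting; reading the builder's types once would make it easy to retype from memory:

```python
# Hypothetical fluent builder, standing in for the external library's config API.
from dataclasses import dataclass
from enum import Enum

class RetryPolicy(Enum):
    NONE = 0
    EXPONENTIAL = 1

@dataclass
class ClientConfig:
    timeout_s: int = 30
    retry: RetryPolicy = RetryPolicy.NONE

class ClientConfigBuilder:
    def __init__(self) -> None:
        self._cfg = ClientConfig()

    def timeout_s(self, value: int) -> "ClientConfigBuilder":
        self._cfg.timeout_s = value
        return self  # returning self is what makes the chaining below possible

    def retry(self, policy: RetryPolicy) -> "ClientConfigBuilder":
        self._cfg.retry = policy
        return self

    def build(self) -> ClientConfig:
        return self._cfg

# The "complicated" line, once you know where the enums live:
config = ClientConfigBuilder().timeout_s(60).retry(RetryPolicy.EXPONENTIAL).build()
```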
7h4tguy@reddit
Which is why we can't outsource $200k jobs to India coddled by AI and get the same results. That's what Wall Street thinks we can do, but they know it's a house of cards.
TippySkippy12@reddit
That's because the developer is lazy; it has nothing to do with the LLM. As with all code, generated by human or LLM, you should actually understand what the code is doing. That's basic intellectual curiosity.
Seriously, I used to call this the StackOverflow effect, and it's nothing new.
Long ago, I reviewed some JPA (ORM) code that didn't make sense, so I asked the developer to explain his reasoning. He told me he found the answer on StackOverflow and it worked. I asked him if he understood why the code worked, and he had no clue. Well, he was using JPA incorrectly, and I had to sit him down, explain the JPA entity lifecycle, why his code apparently worked and why it was incorrect, and then show him the correct way to write the code.
ChampionshipSalt1358@reddit
Wow you really are crusading for AI eh? You are all over this thread. I thought for a moment it was a lot of very passionate people but you make up far too many of the comments for that to be true. Just another blind faith fanatic.
30FootGimmePutt@reddit
The AI fanboys are so goddamn annoying.
I just don’t believe any of them are competent at this point.
Pharisaeus@reddit
While I agree, I think it's "worse" now. On SO it was unlikely you'd find the exact code you needed, enough to just copy-paste. In many cases it would still take some effort to integrate it into your codebase, so you would still "interact" with it. With an LLM, you get something you can copy verbatim.
It's also not exactly an issue of "understanding what the code does", but of muscle memory and the ability to write code yourself. I can easily read and understand code in lots of languages, even ones in which I would struggle to write a hello world from scratch because I don't really know the syntax. Most languages have similar "primitives" which are used to construct the solution, so it's much easier to understand the idea behind the code than to write it from scratch.
TippySkippy12@reddit
I agree that AI does make things worse, because it automates slop.
But I don't take that as an indictment of AI, which is just a tool, but as a refinement of human laziness and corporate shortsightedness in pursuit of shortcuts to maximize short-term gains.
The solution is the same as it ever was. Don't deny the tool, but exercise intellectual curiosity: read the code and documentation and do some work figuring it out for yourself before immediately reaching for the AI (or begging for answers on StackOverflow, in yesteryears).
StarkAndRobotic@reddit
AI coding assistants are like that over enthusiastic person that wants to help but is completely clueless and incompetent and just gets in the way.
TippySkippy12@reddit
Or how about the really basic things that you don't want to do? For example, I asked the LLM to transform a bunch of existing code written with string concatenation into Java's new text block syntax. I could have easily done that myself, but why waste my time when the LLM can do it?
It's like passing off some tedious and simple work I don't want to do to a junior developer, so I can work on more important and interesting stuff.
7h4tguy@reddit
That is a simple regex. I do text edits like that constantly. AI wouldn't speed this up if you're decent.
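For what it's worth, a rough sketch of that regex approach, in Python operating on the Java source as plain text (it deliberately ignores escape sequences and the indentation rules real text blocks care about):

```python
import re

# Toy Java input: a chain of concatenated string literals.
java_src = '''String sql = "SELECT id, name" +
             " FROM users" +
             " WHERE active = 1";'''

def concat_to_text_block(match: re.Match) -> str:
    # Pull out each literal's body and stack them into one text block.
    literals = re.findall(r'"((?:[^"\\]|\\.)*)"', match.group(0))
    return '"""\n' + "\n".join(literals) + '\n"""'

# One string literal followed by one or more `+ "literal"` continuations.
pattern = re.compile(r'"(?:[^"\\]|\\.)*"(?:\s*\+\s*"(?:[^"\\]|\\.)*")+')
print(pattern.sub(concat_to_text_block, java_src))
```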
chat-lu@reddit
Can your IDE do it? That's the kind of transformation you can easily do in JetBrains' IDEs by clicking on the lightbulb.
StarkAndRobotic@reddit
I take less time to do it than it would take to ask the AI to do it, and I get it perfect the first time. It's also tedious and disruptive waiting for it to get stuff done, when instead I can stay in flow doing things myself and accomplish a lot more.
I guess it helps junior developers accomplish something at work instead of slowing down experienced developers, but the trade-off is that instead of becoming more competent, junior devs become less competent, because they aren't exercising their grey cells. And worse, at some point someone is going to have to rewrite some of that AI code that they're not competent enough to get right the first time.
throwaway8u3sH0@reddit
You might want to try the async workflow in Codex. You give AI a small problem to solve (similar to a junior engineer) and let it go off while you do other things. It comes back with a proposal. You work on 3-5 problems simultaneously with this: review changes, provide corrective instructions, next problem, repeat...
It takes a while to get the hang of it -- especially breaking the problem down into small enough chunks for the AI -- but once you've got it, you've basically got a team of moderately competent, high-speed junior engineers who can solve even really big problems, so long as you break things down for them enough.
I can see why programmers are failing to get the gains from AI. It's an entirely different workflow than normal. (An analogy might be using containers like VMs and then complaining about container tech.) It's not just fancy auto complete and it's not just instant StackOverflow - it's its own thing that you have to master.
30FootGimmePutt@reddit
Or maybe we aren’t the problem and you’re just full of shit?
StarkAndRobotic@reddit
Like I said, it's much quicker for me to do things right myself. AI is a waste of time at present. There is a lot of hype and effort by AI companies to push these things as useful so they can improve engagement and try to get their next round of funding. But really, people who are so dependent on them shouldn't be coding in the first place. Better they stay at home and watch Netflix or something instead of doing something stupid that someone else has to fix.
TippySkippy12@reddit
How can you say this, when you don't even know what the problem is?
I transformed around twenty fairly complex SQL statements from string concatenation to text blocks. The LLM did the transformation much faster than I could have done it by hand. I'm a fast typist, but not that fast.
Also, I absolutely don't have confidence that I'll get it perfect the first time, because I am a human being. Especially with tedious repetition, my mind might wander, and I'll lose my attention to detail.
StarkAndRobotic@reddit
Like I said
30FootGimmePutt@reddit
But simple refactorings like that are possible without AI, at a fraction of the cost.
30FootGimmePutt@reddit
You’d fire those people asap.
They'd deliver obviously broken shit, and then, when told to fix it, they'd act like total sycophants and return with more broken shit.
niado@reddit
So, as I see it, the big value currently isn’t in increasing productivity for solid devs who are already highly productive. The things that AI can do really well aren’t something that they need.
However, AI tools can bridge a truly massive gap for a very common use case:
people with valuable skills and knowledge, who need to write code for analysis, calculation, modeling or whatever, but don’t have a strong coding background. For these types of users AI can provide capabilities that would take them years to achieve on their own.
I am personally in this category - I am familiar with coding on a rudimentary level and have a working knowledge of software development philosophies and practices, but I am far from competent enough to build even small scale working tools.
But using AI I have been able to build several quite substantial tools for projects that had completely stalled, since I didn’t have the time or mental bandwidth to advance my coding skills enough to get anywhere with them.
At this point I'm pretty sure I can build whatever tool I could conceivably need by leveraging AI. I actually built an API coding pipeline that integrates with GitHub, so that I just send a prompt and it spits out the required code, automatically updates the repository, and runs tests. This is something that was very far out of my reach just a few weeks ago.
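A minimal sketch of what such a pipeline can look like (all names hypothetical, and the LLM call stubbed out, since the commenter doesn't say which provider they use):

```python
# Prompt -> generated code -> commit -> tests, as a bare-bones loop.
import subprocess
from pathlib import Path

def generate_code(prompt: str) -> str:
    """Call your LLM provider of choice here; stubbed for illustration."""
    raise NotImplementedError("plug in an LLM API call")

def run_pipeline(prompt: str, target: Path) -> None:
    target.write_text(generate_code(prompt))                # drop the code in place
    subprocess.run(["git", "add", str(target)], check=True)
    subprocess.run(["git", "commit", "-m", f"AI: {prompt[:50]}"], check=True)
    subprocess.run(["pytest"], check=True)                  # fail loudly if tests break
```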
Arkiherttua@reddit
well duh
Root-Cause-404@reddit
My observation is that developers tend to use AI as a support tool and not for code generation, so they write code as they used to. If there is a challenge they cannot solve, they consult AI. The promise of rapid improvement is therefore more of an illusion for such a team.
However, what I’m trying to do is to deploy AI in some additional scenarios like: PoC code generation, boilerplate generation, code review and documentation generation from code.
mickaelbneron@reddit
I tried using Copilot, then turned it off on day two. It suggests wrong code and comments most of the time. Sometimes very useful, but more often than not it gets in the way.
AngelaTarantula2@reddit
Every time I use AI to solve a problem I feel like I learn a lot less, so I start relying on it more. It becomes an addiction I never needed, and it’s not like the AI makes fewer mistakes than me.
podgladacz00@reddit
Recently I had to write unit tests pretty fast, so I went "let AI help me, why not". It was a pain to make it work the way I wanted. After fine-tuning and giving it good examples, it was almost doing what it should. Almost. Dumb unit tests it will do great, no complaints. But with more complex ones it starts making things harder for me, so I pretty much just went and did the whole thing myself.
I'm also at the point where I sometimes consider turning off AI in the code editor, as it keeps offering autocompletions of nonsense code.
matthra@reddit
Wait so the entire thing is based on how the developers felt? Is that a metric we care about?
c_glib@reddit
Sigh... looks like a whole bunch of people, including techies, are falling prey to this weird kind of Luddism. If there's one thing the current generation of LLMs has absolutely revolutionized, it's coding.
I say this as someone who has spent more than two decades at the heart of the tech industry in Silicon Valley: the first 10-12 years mostly as a programmer, and the last decade or so as a founder/CTO mostly doing architecture along with product/people/project management stuff (as one must in small, startup'ish teams). I've written code in the Linux and FreeBSD kernels, built high-performance servers, created brand-new network protocols (on top of UDP) from scratch, and helped ship many products that I'm sure most of you have used at some point in your life. I say that not to brag but to establish the background my comments are coming from.
The last decade or so of not being able to spend an appropriate amount of time and focus on writing code has been a sore spot for me (in an otherwise very fulfilling career). Especially since my specialty has been low-level, performance-critical code, I never really picked up skills in the frontend world.
The current product I'm leading is a modern-day chat app (https://flai.chat). There's a (small) team of programmers who work on the Flutter-based frontend. Suddenly, in the last few months, I find myself contributing not just design and architecture guidance (there's a surprising amount of protocol/networking nuance involved in building a chat app like this) but also just coding up stuff instead of waiting on someone else to get freed up. To be clear, 99% of the actual code is written by the machine. All I'm doing is making sure it follows the guidelines on code structure, standards, and best practices I lay down. I can honestly say that I have never been this productive and efficient at delivering code in my entire career.
Tl;dr... if you're an experienced programmer and haven't dipped your toes into using LLMs to speed you up, start right now. Don't listen to the naysayers. For programmers who know what they're doing, LLM-based code writers are a game changer. A force multiplier of unimaginable value.
30FootGimmePutt@reddit
You say this as a dipshit ai fanboy who has suffered permanent brain damage from chasing every Silicon Valley fad for 2 decades.
c_glib@reddit
Wow. This level of hate for a tool. Incredible.
Worth_Trust_3825@reddit
how much are you paid for this trash?
c_glib@reddit
Sigh. Denial is strong on this sub.
zzubnik@reddit
When I use it, it goes like this:
ME: OK Copilot, what the fuck is wrong with this Python code?
COPILOT: Tells me exactly how stupid I am and gives me the corrected code.
Not sure it makes me happy, but it fixes some bad code sometimes.
HomeSlashUser@reddit
For me, it replaced googling almost completely, and that in itself is a huge timesaver.
SteroidSandwich@reddit
What? You don't like 30 seconds of code and 15 hours of debugging?
NelsonRRRR@reddit
Yesterday ChatGPT flat-out lied to me when I was looking for an anagram. It said that there is no word with these letters in the English language... well... there were two words... it can't even do word scrambling!
NineThreeFour1@reddit
It didn't lie. A lie requires intent. LLMs are probabilistic text generators and have absolutely no understanding of the text they are generating.
wildjokers@reddit
How do you define "understanding"? LLMs don't have a human understanding of what they're generating, but they have a functional and structural understanding of it. Why are you insisting that it's human-style understanding that LLMs need to achieve to be useful?
30FootGimmePutt@reddit
They have no fucking understanding of anything.
You idiots will believe anything a charlatan like Altman pushes won’t you.
wildjokers@reddit
Once you are at ad hominems, you have nothing to add to the conversation.
https://en.wikipedia.org/wiki/Ad_hominem
30FootGimmePutt@reddit
No, I just have no respect or remaining patience for ai fanboys.
You fucking idiots fall for every single thing pushed out by Silicon Valley charlatans and every single time you act like anyone who disagrees is just too dumb to understand.
NineThreeFour1@reddit
No, wrong. Even the largest matrix has no understanding of which function it's implementing when multiplied with other matrices.
bonerstomper69@reddit
Most LLMs tokenize words, so they're very bad at stuff like "how many of this letter are in this word", anagrams, etc. I just asked ChatGPT how many Rs there were in "corroboration" and it said "4".
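The failure is easy to see next to a one-liner that works on characters instead of tokens:

```python
# Counting characters directly, rather than over a tokenized representation:
word = "corroboration"
print(word.count("r"))  # 3, not the 4 the LLM reported
```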
jk_tx@reddit
I use AI as a Q&A style interface for searching and asking technical questions that in the past I would have looked up on StackOverflow, dissecting template-heavy compiler errors, etc. So far that's about all I've found it useful for. Anytime it suggests actual changes or code-generation, it's always either sub-optimal or flat-out wrong.
I'd never let it generate actual production code. I don't even understand the appeal of that TBH, it's literally the most enjoyable part of the job and NEVER the bottleneck to shipping a quality product. It's the endless meetings, getting buy-in from needed stakeholders, emails, etc; not to mention figuring out what the code actually needs to do.
For me actually writing the code is an important part of the design process, as you work through the minor details that weren't covered in the high-level design. It's also when I'm checking my logic etc. I wouldn't enjoy software development without the actual coding part.
Maybe if I was working in a problem space where the actual coding consisted of mindlessly generating tedious boilerplate and CRUD code I'd feel differently, but thankfully that's not the case.
MCPtz@reddit
Exactly. As they wrote in the article, bottlenecks are actually elsewhere: either in the organizational inefficiencies you identified, or in technical ones.
hippydipster@reddit
This is almost universally true for business, and almost universally untrue for hobby projects. When I'm building software for myself, the bottleneck is very much how fast I can code up the features I want, and how much my previous code slows me down or speeds me up. I spend no time in meetings dithering and all that.
Now, some might think "well, duh" and go on without thinking about it, but there's a real lesson there, for those who have the interest.
Daegs@reddit
Uh, maybe they don't know how to use the tools or haven't invested in making their own agentic-loop API calls, but damn, I'm easily 5x-10x more productive for most problems. Higher on simple run-of-the-mill stuff.
Vilkaz@reddit
Well... you have to learn how to use them: low temperature, splitting up the tasks, knowing how to debug with AI, which model for what...
And use the right tool. I'm probably about 10x more productive now, working across Terraform infra automations, Bash, Python, GitHub Actions, and full-stack TypeScript apps (different projects)...
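The "low temperature" point, sketched against the OpenAI Python client (assuming the openai package; any provider with a temperature knob works the same way):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
resp = client.chat.completions.create(
    model="gpt-4o",
    temperature=0.1,  # low temperature: more deterministic, less creative drift
    messages=[{"role": "user", "content": "Write a Terraform snippet that ..."}],
)
print(resp.choices[0].message.content)
```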
There's also passive income from selling AI images, where I automated the tag creation to save myself multiple hours weekly.
The fewer people who know how to work with AI properly, the better for me. So yeah, hate the AI please :D more money for me :D
NineThreeFour1@reddit
And you measure this how? What was your income before and after using AI? Should have increased by about 10x if true.
30FootGimmePutt@reddit
They put out half broken slop a bunch faster now.
Vilkaz@reddit
Mostly by the amount of work I get done: different scripts, automations, things that usually took me days because I first had to learn the tech? I now make them in hours.
And hours turned into minutes. That Docker/Terraform problem? It had bugged me for two years already; I solved it today when I needed to deploy Terraform code locally again.
Also two big private projects that I would have had to turn down before.
I'm not counting on my job to pay me that much more, so I'm adding more and more personal stuff that generates money for me.
But I'm not here to convince anyone :) Those who don't use AI will be heavily outperformed by those who KNOW how to use it.
The big thing is to KNOW what to do. It is a tool, but in different hands it gives completely different results.
jeffbagwell6222@reddit
Complex regex syntax is done in mere seconds for me. Complex queries... AI makes me so much more productive it is insane.
Sometimes I feel I could be working myself out of a job if I get things done too fast.
Need to slow my roll a bit and fart around.
Questions like "why does this code work with MySQL 5.6 and not 5.7" take out a lot of the debugging time and guesswork.
overtorqd@reddit
How so? If you're being more productive, doesn't that make you more valuable not less? Or are you saying there is a risk of simply finishing all the work?
TippySkippy12@reddit
I think the bigger problem, and it will be interesting to see how it pans out, is that AI will be working other people out of a job.
As a senior developer, my go-to solution in the past for getting tedious but simple work done that I didn't want to do myself was to pawn it off on a junior developer. The junior learns something and I can do more interesting work. Now, I just ask the LLM.
If the LLM can do the job of junior developers, what happens? Are we coding juniors out of a job? Or, are we just going to expect much more out of juniors, thinning the crop?
30FootGimmePutt@reddit
I feel sorry for you if you have worked with people so inept they can be replaced by fancy autocomplete.
tnemec@reddit
In the long run, definitely not. If companies want to hire senior developers, they'll find out, sooner or later, that senior developers aren't created in a vacuum. They have to come from somewhere, and usually, that's "being a junior developer" plus time and experience.
But in the short term? I don't know. I wouldn't put my money on a whole bunch of executives opting for long-term sustainability over short-sighted immediate cost-cutting.
jeffbagwell6222@reddit
Finishing all the work quicker than I would if I was doing things manually.
Let's say I'm hired hourly to complete a new feature. With AI it might take me a week or less; without AI it would take me over a month.
NineThreeFour1@reddit
So? By that logic you could take on at least four times as many jobs per month as before.
jeffbagwell6222@reddit
I don't work freelance.
NineThreeFour1@reddit
Then take 4 jobs in parallel if work on one job is only 1/4 as hard now.
Or only work 1 week and focus on hobbies for the remaining 3 weeks if you say you get the same work done in that time.
jeffbagwell6222@reddit
I didn't think of that. I think I might take on 4 other jobs during my 9-5. Great idea!
potentialPast@reddit
Weird takes in this thread. Senior eng, 15+ yrs. It's getting to be calculator vs. longhand, and it's really surprising to hear folks say they can't figure out how to use the calculator.
MagicWishMonkey@reddit
I think people are using them inefficiently. If you're working on a mature codebase and need to do something like add a feature, use the LLM for the tricky stuff that would typically require a significant amount of time to think through yourself. Don't try and make it write all of the code. For me, I whip out ChatGPT any time I would normally need to google something (like if I need to format a date a certain way and don't remember the pattern, or if I need a complicated regex, etc.). It basically lets me stay in the flow without needing to break concentration to figure things out.
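The date-pattern case is typical of the lookup being replaced; the sort of format-string trivia that used to mean a docs round-trip:

```python
from datetime import datetime

# Which letter means what is exactly the thing nobody remembers:
print(datetime(2025, 6, 11).strftime("%a, %d %b %Y"))  # Wed, 11 Jun 2025
```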
If you're starting a new project, that's where things get really interesting. You can take the agentic approach: feed it documentation on what the program should do and what rules it should follow, have it write unit tests for every function, verify that all tests pass before moving to the next step, etc.
30FootGimmePutt@reddit
I'm working on a mature code base, and the LLMs get instantly lost and become worse than useless. They come up with the stupidest solutions, if they can even come up with anything.
potentialPast@reddit
Some great notes on context here, agree with all of that. With proper prompting I am saving a lot of time lately, but it takes some trial and error to find the right prompt for your situation at times.
I would urge folks to try more Cursor directives and rules around your prompting to keep agents in line.
IanAKemp@reddit
Senior Eng, 20+ years.
No, it's not. A calculator is inherently deterministic and its output can be formally verified; an LLM is the exact opposite, and we work in an industry that inherently requires deterministic outputs.
Literally nobody has said that, and dismissing others' negative experiences with LLM limitations as incompetence or luddite-ry is not the slam-dunk you believe it to be.
teslas_love_pigeon@reddit
People really act like writing for an LLM interface is hard. As if we don't already do this daily throughout the majority of our lives.
It really shows how poor their understanding of knowledge is if they think the bottleneck to solving hard problems is how quickly you can respond.
wardrox@reddit
The data: vast majority of devs say they are more productive using AI assistants.
The headline: AI bad
Mescallan@reddit
Yeah, 8/10 people I talk to are glowing about how productive they've been in the last six months. 1/10 hasn't updated since GPT-3.5, and the last one only works in some esoteric language or isn't allowed to use it at work.
30FootGimmePutt@reddit
Do you exclusively talk to dumb people?
andrewsmd87@reddit
I don't really understand what they're getting at. They say 88% of people using Copilot feel more productive, and then turn around and say only 6% of engineers say it's made them more productive. Which is it?
For me personally, I only use Cursor (with Claude) for certain tedious things, but it absolutely makes me more productive. That's not a feeling; it's in the stats on things I've been able to produce without losing quality. I'm not saying "hey, please take these technical specs and write me a fully functional schema, back end, and front end." But I am using it where it shines: catching patterns in a lot of coding scenarios that are just monotonous. Like when you're connecting to a third-party API and they have JSON that is all snake case, and you'd like to alias all of your object properties to camel case, but that's 10 classes and over 100 properties.
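A sketch of the aliasing being described, in plain Python for illustration (the commenter's actual stack isn't specified):

```python
# Mechanically alias snake_case JSON keys to camelCase: the 100-property tedium.
def snake_to_camel(name: str) -> str:
    head, *tail = name.split("_")
    return head + "".join(part.capitalize() for part in tail)

payload = {"user_id": 7, "created_at": "2025-06-11", "is_active": True}
aliased = {snake_to_camel(k): v for k, v in payload.items()}
print(aliased)  # {'userId': 7, 'createdAt': '2025-06-11', 'isActive': True}
```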
I've also used it for lots of one-off stuff. If someone asks for a report we don't have, I can query it in about a minute, then just have it create a graph or line chart using whatever language it feels like, screenshot, and go.
The other day we had a batch of Excel files delivered to us that needed to be CSV for our ETL stuff, and while I could have converted them all by hand, Cursor did it in about a minute.
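That conversion is only a few lines if you assume a pandas-based approach (the comment doesn't say what Cursor actually generated):

```python
# One plausible version of the Excel -> CSV batch conversion described above.
from pathlib import Path
import pandas as pd

for xlsx in Path("deliveries").glob("*.xlsx"):   # folder name hypothetical
    df = pd.read_excel(xlsx)                     # first sheet by default
    df.to_csv(xlsx.with_suffix(".csv"), index=False)
```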
edgarallanbore@reddit
AI tools like Copilot or cursor with Claude are great for the grunt work, you know, those boring tasks that make you question your life choices. I mean, who doesn’t want to avoid manually converting Excel files to CSV when you can have AI do it in a flash? For folks like me juggling countless APIs, swapping snake_case to camelCase can be mind-numbingly tedious. Dive into some tools like APIWrapper.ai if you’re connecting APIs on the daily. Pair it with others like cursor for tedious pattern work or even Power BI for those quick visualizations – you'll notice the difference in your sanity levels.
chat-lu@reddit
The vast majority of devs who use them, which is a selection bias. If it's crap that slows you down, you stop using it, so you aren't in the survey about devs using it.
Richandler@reddit
Turns out all those people talking about the apps they built and their 100x productivity were lying. Wanna know what the dead giveaway was? They never posted a damn thing about what they actually did, nor the actual results. The only things that did get posted were regurgitations of already-existing tutorials you'd be better off downloading the git repo for.
BornAgainBlue@reddit
Yeah developer here. It's huge.. I don't know what the f*** this article is talking about.
Guypersonhumanman@reddit
Yeah they don’t work, they just scrape documentation and most of the time that documentation is wrong
FiloPietra_@reddit
So I've been using AI coding assistants daily for about a year now, and honestly, the productivity boost is real but nuanced.
The hype definitely oversells it. These tools aren't magical 10x multipliers. What they actually do well:
• Speed up boilerplate code writing
• Help debug simple issues
• Suggest completions for repetitive patterns
But they struggle with:
• Complex architectural decisions
• Understanding business context
• Generating truly novel solutions
In my experience building apps without a traditional dev background, they're most valuable as learning tools and for handling the tedious parts. The real productivity comes from knowing *when* to use them and when to think for yourself.
The gap between vendor marketing and reality is pretty huge right now, but the tools are still worth using imo.
BoBoBearDev@reddit
Lolz, if you ask me whether the assistant that is going to replace me, or some sweatshop contractor using the same assistant, has given me a "feeling" that my productivity has increased: of course NOT!!!
Fisher9001@reddit
The absolutely best part is that people who know how to use them simply do that and have considerably more time for themselves while everyone else is busy talking about how shitty and unreliable those tools are and how they are only a fancy autocomplete.
flanger001@reddit
Lol no shit
Beginning_Basis9799@reddit
Prompt this: "Write in Python a web scraper that takes a URL parameter, and write all methods in ye olde English."
This will result in exactly what you asked for. But why does my coding assistant know ye olde English, and secondly, why has it not told me to go do one?
What this demonstrates is that given a word or a line out of context, it hallucinates; the hallucination here is following ye olde English as a guide.
We invented the phrase "it's a feature, not a bug". The bug is that your coding assistants are too fat: the problem is not that they can hallucinate, it's that they will hallucinate.
Consider this: if it were a colleague I had to spoon-feed the whole code to, they'd either be extremely cheap or fired.
So what do I give the coding assistant? Stupid jobs that would take me 30 minutes to do myself but 10 with it: build a struct for this JSON, with sub-structures where needed. The hallucinating clown 🤡 works fine there.
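A sketch of that kind of "stupid job", with a made-up payload, to show what's being asked for:

```python
# The struct-from-JSON grunt work: a sample payload and the dataclasses
# (including one nested sub-structure) you'd want generated from it.
from dataclasses import dataclass

# {"id": 42, "name": "widget", "vendor": {"name": "Acme", "country": "US"}}

@dataclass
class Vendor:
    name: str
    country: str

@dataclass
class Product:
    id: int
    name: str
    vendor: Vendor

product = Product(id=42, name="widget", vendor=Vendor(name="Acme", country="US"))
```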
MirrorLake@reddit
This feels like the Fermi paradox, but for AI. Where's all this LLM-fueled productivity happening? If it's happening, shouldn't it be easy to point to (and measure)? Shouldn't it be obvious, if it's so good?
Xryme@reddit
It’s largely just saving me search time, instead of digging through a bunch of google results. But that’s not the main bottleneck for development. If I ask it to make something for me I often have to fix it up anyways and find all the bugs.
PoisonSD@reddit
AI feels like it's killing off my creative inclination for problem solving while programming. It's depressing me and making me less productive, even more so when I see "AI software engineer teammates" on the horizon. Just those thoughts make me less productive lol
Berkyjay@reddit
100% more efficient and I would not want to give them up.....I don't use Copilot BTW because I think it's ass. What I wouldn't do is allow an AI to touch my code on its own.
I wouldn't trust the software to develop its own software; it's just so bad at it. If you take its output verbatim, you are essentially just getting something that has already been done before. Which is fine. But it doesn't discriminate between good code and bad code.
lorean_victor@reddit
depends on how you use these tools I suppose. way back when with the first or second beta of github copilot, I felt instantly waaay more productive at the coding I needed to do (at the time it included lots of “mud digging”, so to speak).
nowadays, and with much stronger models and tools, I feel “slower” simply because now I take on routes, features and generally “challenges” that I couldn’t afford to pay attention to before, but now can tackle and resolve like in 2-3 hours max. so the end result is I do enjoy it much more
HaMMeReD@reddit
June 11th, 2025
Article it's based on: September 7, 2022 (Updated May 21, 2024)
Uhhhh..... Copilot Agent mode came out after that, as did Cline and most other Agents.
TheNewOP@reddit
Wake me up when AI can do requirements gathering, thinking of all edge cases from a business/product POV, and can get me out of sitting in 1-3 hours of meetings a day.
hippydipster@reddit
A snippet from Claude I got yesterday when I was telling it about filling in missing stock market data:
Does that not look like requirements gathering and fleshing out scenarios and cases?
w8cycle@reddit
I use it and it’s helpful to help with debugging, summarizing, etc. It’s not as good at coding unless I basically write in English what I could have written in a programming language. It’s like it fills in my pseudo code. It’s nice sometimes and other times I spend hours getting it to act right.
whiteajah365@reddit
I don’t find it very useful for actually writing code, I find it useful as a chat bot, I can run ideas by it, ask for explanations of language idioms or syntax. I still code 90% of my lines, but I am in a constant conversation, which has made me more productive.
Lothrazar@reddit
Every time I try to use it, I feel like I've wasted so much time trying to get it to be above a grade 1 reading level. And then it's always wrong.
bareweb@reddit
You’re doing it wrong, luddites.
g1rlchild@reddit
I have found ChatGPT just fantastically useful for things like questions that start with "how do parser combinators work" and end up 10 minutes later with functioning sample code to start working with and building on. Especially where there are deployment or administration questions about how to get from "never used this before" to "I have code that compiles and runs on my machine."
It's also been great for follow-up questions like "how do I handle XYZ case with parser combinators."
I mean, you can do all this be reading documentation somewhere, but for me, the conversational structure of how I interact with it just blows anything else away.
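For anyone curious what that first answer tends to look like, a minimal parser-combinator sketch (plain Python rather than any particular library):

```python
# A parser here is a function taking (input, position) and returning
# (value, new_position) on success or None on failure.
from typing import Any, Callable, Optional, Tuple

Parser = Callable[[str, int], Optional[Tuple[Any, int]]]

def char(c: str) -> Parser:
    def parse(s: str, i: int):
        return (c, i + 1) if i < len(s) and s[i] == c else None
    return parse

def seq(*parsers: Parser) -> Parser:
    def parse(s: str, i: int):
        values = []
        for p in parsers:
            result = p(s, i)
            if result is None:
                return None
            value, i = result
            values.append(value)
        return (values, i)
    return parse

def many(p: Parser) -> Parser:
    def parse(s: str, i: int):
        values = []
        while (result := p(s, i)) is not None:
            value, i = result
            values.append(value)
        return (values, i)
    return parse

def digit(s: str, i: int):
    return (s[i], i + 1) if i < len(s) and s[i].isdigit() else None

number = many(digit)
print(seq(char("("), number, char(")"))("(42)", 0))  # (['(', ['4', '2'], ')'], 4)
```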
MrTheums@reddit
The post highlights a crucial distinction: perceived productivity versus objectively measured productivity. GitHub's research focusing on feeling more productive, rather than quantifiable efficiency gains, is a methodological weakness. Subjective experiences are valuable, but they don't replace rigorous benchmarking.
The "small boost" observed likely reflects the nature of the tool. AI assistants excel at automating repetitive tasks and suggesting code snippets – tasks easily measurable in lines of code or time saved. However, complex problem-solving and architectural design remain largely human domains, and these aren't easily quantifiable in terms of simple productivity metrics.
Therefore, the seemingly low impact might stem from focusing on the wrong metrics. Instead of simply measuring overall productivity, a more nuanced approach would involve analyzing task-specific efficiency gains. Separating tasks into routine coding versus higher-level design would reveal where AI assistants truly shine (and where they fall short). This granular analysis would provide a more accurate picture of their impact.
NewAgeBushman@reddit
💯
Dreadsin@reddit
I've found the only things AI is really good for are instructions with absolutely no ambiguity, or a "quick sketch" of something using an API you're not familiar with. For example, I've never written an eslint plugin, but I can give Claude my vague instructions and it at least spits out something I can build on top of.
PathOfTheAncients@reddit
I find ChatGPT is useful for answering questions about syntax or expected implementation, basically the things I used to google before Google became useless.
For everything else, AI writes code that is overly complex, brittle, and often nonsensical. It often takes more time to use than it would take to figure things out on my own.
It is decent for unit tests. Not because it does a better job, but just because it can write them en masse and I can fix them fairly quickly.
vinegary@reddit
I do: quicker answers to basic questions I don't know the answer to off the top of my head.
nonono2@reddit
Like a better search engine?
vinegary@reddit
Kind of; with natural questions, often good enough.
Racamonkey_II@reddit
Skill issue, I’m so much more productive with it, it’s an amazing tool.
Akujux@reddit
You guys are forgetting that this pretty much helps close the gap for non-developers. We can now spin up prototypes quickly if we have a surface-level understanding of logic and programming.
ASKnASK@reddit
What's this sub's obsession with AI? Anything that reaches the frontpage from it is 'hurr durr AI no good'. Feeling threatened or something?
redneckrockuhtree@reddit
Not at all surprising.
Marketing hype rarely matches reality.
That said, while a 10% boost may not seem like much, from a company bottom line perspective, that adds up. Provided the tools don't cost more than the 10% boost saves.
DanTheMan827@reddit
I don’t trust copilot to write full classes, but for autocomplete it’s great.
I did have ChatGPT write a “full” rust-based SDL2 app, but in the end it was still somewhat broken.
It’s great for boilerplate code and starter projects
TheESportsGuy@reddit
I've learned that if a dev tells me AI is making them more productive, the appropriate reaction is fear.
OldThrwy@reddit
On the one hand they make me feel more productive for a time. But I find that I typically have to rewrite everything it wrote because it was trash or wrong. But I find myself continuing to use it because everyone else says it’s so amazing, so maybe I’m just using it wrong? Still, I can’t shake the feeling it’s just never going to get better and the people saying it’s so great are actually not really good coders to begin with. If you suck at coding maybe it does seem really great, because you can’t discern that what it’s giving is trash. I dunno.
amulie@reddit
They are, without realizing it, via clearer and better-written JIRA tickets with clearer requirements, QA notes, research notes, etc.
5ManaAndADream@reddit
AI is great when I go to it for syntax lookups or quick little mathematical methods (written many times historically) that are quicker to validate than to derive.
AI is bloody awful when it tries to finish your sentences or constantly feels the need to suggest things in every little bit of a program you're writing.
That's why assistants are shit. They're tools to be used, not brains to think for you.
ZirePhiinix@reddit
I just had an interesting thought experiment about AI.
Let's assume AI is somehow 100% capable of replicating the work of an engineer, but you can't do two things:
1) sue it
2) insure against its actions
Would a company still use AI? Of course not. If the AI steals your code, you can't sue it for IP theft. If it opens backdoors and gets your company wiped out, you can neither sue it for damages nor get insurance protection.
So given that AI isn't even close to what a person does, who the hell thought AI could replace engineers?
PineapplePiazzas@reddit
No shit...
Anders_A@reddit
And no one who's actually worked with software is surprised 😂
Sensanaty@reddit
The way I see it is that it instills a false sense of speed and productivity. I've tried measuring myself (like literally timing myself doing certain tasks), and honestly I think I've definitely spent more time trying to work around the AI and its hallucinations, but then there's also those moments where it miraculously one-shots some super annoying, tedious thing that would've taken much longer to do myself.
At the end of the day, it's a tool that is useful for some things, and not for others... Just like every other tool ever created. The hype around it is, I feel, entirely artificial and a bit forced by people with vested interests in making sure as many people are spending time and money on this tooling as possible.
One big issue I have, though, is that I have definitely felt myself getting lazier the more I used AI tooling, and I felt like my knowledge has been actively deteriorating and becoming more dependent on AI. I'd look at a ticket that would usually take me 10 minutes of manual work, for example, and just copy/paste the whole thing into Claude or whatever and try for half an hour to an hour to get it done that way, rather than just doing it myself. I've been interviewing for a new job, and as soon as I don't have the crutch of an LLM, I feel weaker technically than I did even back when I was new to the field.
Because of that I've delegated my AI use to pure boilerplate. Things that are brainless and hard-to-impossible to fuck up, but tedious to do yourself. Have some endpoint that gives you a big ass JSON blob and it's untyped? Chuck it to the AI and let it figure it out for you. For any serious work though, I'm not touching AI if I can help it.
TimeSuck5000@reddit
I disagree. It most certainly increases my productivity.
When I'm low on dopamine and having trouble getting started, Copilot makes a way better rubber ducky than the one on my desk. The fact that it can take 30 lines of compiler errors and translate them into plain English saves me mental effort and allows me to keep my focus on the code.
Low-Ad4420@reddit
I really don't feel more productive, but rather more stupid and ignorant. AIs are booming because the Google search engine is a steaming pile of garbage that doesn't work. I use AIs to get the links to StackOverflow or Reddit for relevant information, because trying to google is a waste of time.
stolentext@reddit
ChatGPT regularly tells me to use things (libraries, config properties etc) that don't exist, even when I toggle on 'search the web'. It feels more like a novelty than a productivity tool.
cran@reddit
I am extremely efficient with them with small bits of code. They write pretty good code in small chunks much faster than I can type and I rarely have to correct it. Also, autocomplete is uncannily prescient. It’s also really good at showing me how to use technology I’m unfamiliar with, but it’s usually just a good starting point. I’m much more productive with AI, but it has its limitations. All these back and forth discussions are silly.
unixfreak0037@reddit
I think some people still don't understand how to use these models well. I'm seeing massive gains myself because it's allowing me to plow through stuff that I'd typically have to spend time researching because I can't keep it all in my head. It's a lot of small gains that add up. Something that should take me 5 to 10 minutes instead now takes 0 because I pivot to something else while the model figures it out. Over and over again. And it's wrong sometimes and it doesn't matter because I correct it and move on.
At work I keep seeing people trying to use it to do massive tasks like "refactor this code base" or "write this app that does this". Then it fails and they bad mouth the whole idea.
It's just my opinion, but I think that people who have mastery of the craft and learn how to use these models are going to be leaving the people who don't in the dust.
RobespierreLaTerreur@reddit
Because 80% of my time is spent fighting defective tools and framework limitations, and finding countermeasures that AI cannot, by design, think of. I've found it unhelpful most of the time.
auximines_minotaur@reddit
Cursor made me frustrated and minimally productive in a language and platform I had never laid eyes on before. Now I’m desperately trying to actually teach myself the language and platform because the experience has been so frustrating and inefficient.
The real wake up call came when I actually started diving in to fix things that Cursor got stuck on, and realizing I could have just fixed it myself in the first place and saved myself a lot of time and frustration.
One thing it is nice for is CSS, a skill I’m expected to have but don’t have the slightest interest in. However maybe they could just hire people to do this instead of making their devs waste time lining up pixels?
In the end, it’s hard to say if Cursor actually saved me any time. Of course it made it possible for me to ship code in my first month. But would that time have been better spent actually learning the language and platform? Maybe. Probably. Who knows.
This is the world we’re living in.
hippydipster@reddit
To get the most out of the AIs, you have to already be an excellent team developer. You have to be the kind of developer that can read any PR and give great, useful feedback on it.
If you're a developer who's good at going "heads down", getting into flow, and banging out a lot of code in isolation, AI might not be the best fit for you. But if you've taken the next steps as a dev beyond that phase and become a team multiplier, you can use AI very efficiently.
Because you're exceptional at reading other people's code and understanding it: knowing right away what's good, what needs rework, what needs to be refactored or abstracted from a simplistic solution, where poor assumptions were made, where simplistic is good enough for now. You know how to write documentation so future devs can use the API. You know when and where to focus on testing.
But the key is reading code. Most people can write code their own way, but can't easily read the 10 different ways their teams express their logic in code. When working with AIs, the bottleneck is how fast you can read and understand the AI's code, and how fast you can see what needs to be adjusted. If that's a slow process, it's not going to speed you up as much.
pa_dvg@reddit
I would say it’s pretty good most of the time.
For low complexity tedious tasks it’s great. Stuff that would have eaten up lots of time but not really difficult at all. And the fun part is that if you, the human with a brain, break everything down and just prompt it with the steps you want to execute it can do a lot more than you think.
Sometimes it tries to do too much and needs reining in.
I’m still not convinced of the ethics of the technology but it’s certainly useful.
Kjufka@reddit
They can be useful if you feed them your codebase and all documentation materials. Good for finding info. Terrible at coding though, most snippets were useless, some outright misleading.
PM_ME_PHYS_PROBLEMS@reddit
All the time I save is lost rooting out the random package imports Copilot adds to the top of my scripts.
StarkAndRobotic@reddit
Instead of trying to help me code, AI should do other things like tell other people to shut up / keep quiet, screen my calls so there are no interruptions and sit in pointless waste of time meetings and fill me in later.
Nuno-zh@reddit
Good as a search engine.
wesleysniles@reddit
The actual value of AI assistants, for both code tasks and related things like writing docs, is that you don't have to start from a blank page. In fact, when these tools work well, you can go straight to editing mode and fix up whatever isn't quite right. It's like templates on steroids, aimed at the thing you want to build rather than something very generic.