Trying to use AI to write code is absolute misery. Is anyone actually being productive with this crap?
Posted by kibblerz@reddit | ExperiencedDevs | 689 comments
My former boss has been droning on and on about AI. He was bashing me for using Nvim instead of Cursor and this AI crap, claiming my ways are obsolete and all that jazz. Something something vibe coding.
Then I found out another former coworker is into this vibe coding stuff too. I try to be open-minded, so I gave it a shot...
Trying to make one React drawer menu took 50 cents of credits and was highly problematic. Any library that changed after the model's training data was collected is a mess. It's altogether a very bumpy process... It would've been far easier to just make it myself.
Some may claim that it is good for monkey work... But is it? Nearly all of my "monkey work" can be automated with a few vim macros, grep, regex, etc. And it can be done in a consistent fashion that's under my control.
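As an illustration of that kind of deterministic, scriptable monkey work, here's a minimal batch-rename sketch in Python (the identifier names and file glob are hypothetical, just to show the shape):

```python
import re
from pathlib import Path

# Rename a hypothetical helper across a source tree, deterministically.
OLD, NEW = "fetchUserData", "loadUserProfile"
pattern = re.compile(rf"\b{re.escape(OLD)}\b")

def rewrite(text: str) -> str:
    """Replace whole-word occurrences only, leaving longer identifiers alone."""
    return pattern.sub(NEW, text)

# Applying it to every file is one loop, fully under your control:
# for path in Path("src").rglob("*.js"):
#     path.write_text(rewrite(path.read_text()))
```

The point being: the edit is repeatable and reviewable, the same way a recorded vim macro is.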
Am I doing something wrong? Is anyone here actually finding AI useful for writing code? I've used it to understand code and more general concepts, but every time I try to have it write code, it's just a headache.
This vibe coding crap seems like a nightmarish dystopia...
tooparannoyed@reddit
I feel like the way my grandfather looked when I tried to teach him how to program a VCR.
> I'll just hit record when it comes on.
I get how it's useful for autocomplete, but at this point I feel like I'm being gaslit about agentic coding by incompetent people.
kibblerz@reddit (OP)
Yeah.. I've heard people claiming it makes them such a better programmer, and it's just made me think about how incompetent they must be if vibe coding results in better code.
Which-World-6533@reddit
Reddit (and the Internet) has made me aware of how bad people are at coding.
If they think AI is amazing then they really must suck at coding.
The whole "my PRs are better" must mean you are really bad at what you are doing.
BroadConfusion6563@reddit
Laymen can't always see the difference.
MorallyDeplorable@reddit
The whole "anyone who finds a wildly diverse and useful tool useful must be bad" attitude is so hilariously hypocritical. You're not better than other people because you don't understand how to apply a tool to your workflow that other people are successfully using.
Which-World-6533@reddit
Ever since these "AI" tools appeared I've been told I'm using them wrong.
I've spent time looking into how other people have been using them. The only use seems to be getting code that sometimes works for very trivial use cases.
MorallyDeplorable@reddit
Well then you're not looking very hard. You are using them wrong if you can't get them to work with you.
To make my previous comment more clear, you're not some special programmer sent by god whose own two hands are so far beyond what anyone else is doing that you get nothing from state of the art tools. You're far more likely just a stubborn old fart who is too stuck in their ways to put new tools to use.
jajatatodobien@reddit
Please provide examples.
Which-World-6533@reddit
This is what always happens with these AI-zealots.
If you don't worship their idol the insults flow.
Whatever, mate.
MorallyDeplorable@reddit
You started off with the insults, lol.
jajatatodobien@reddit
Or they are doing shitty, useless, meaningless work. Imagine using an AI to write your tests for medical systems or insurance LMAOOOO.
ninseicowboy@reddit
I don’t know what vibe coding is but without question AI tools (mainly just boring chat) have improved my coding and architecture skills.
ChatGPT, for instance, is an absolute machine when it comes to education. I absolutely grill it on my PRs even after I have high confidence in the changes, e.g.: What are the tradeoffs of doing it this way? Is my documentation sound? Are there any issues on line 154?
I have absolutely schlopped out shitty code with gen AI, but of course never in a production setting. Only for going 0 to 1 on side projects and in hackathons. And yeah, I don’t learn much from doing this unless I ask ChatGPT how it works (or just google around and do my own research).
TLDR: fantastic for 0 to 1, bad for big refactors / monoliths. Fantastic for education if used as a study tool. Terrible for education if you just schlop out spaghetti.
Princess_Azula_@reddit
AI hallucinates too much for education beyond the absolute basics. Educational materials are just a few clicks away via a Google search (LibGen, YouTube, Wikipedia, Sci-Hub, Google Scholar, PubMed, etc.) for anything you can imagine. Until the hallucination rate drops to zero, AI shouldn't be relied upon in education.
Apprehensive_Elk4041@reddit
I'd also add that it's going to do very poorly with any nuanced subject. It doesn't know anything; it's just really good at guessing at things that sound like sentences.
I agree that it's probably fine for the basics, or syntax-type stuff, but for anything specific, like the pros and cons of a certain approach to a problem, you're not likely to get a great answer. I'd argue you're very likely to get a very emphatic wrong answer.
I just don't see it as much better than stack overflow in actual use, and I still almost always end up back at some point there during my searches.
ballinb0ss@reddit
I think you are quite wrong here, but I see the truth in your criticism. Much of efficient learning is mental modeling. Mental models don't require precision; they often don't even require that much accuracy. A reasonable mental model of a system can often consist of input to the system, transformation in the system, and output from the system.
I actually think, for this reason, that "explain X like I'm five, then ten, then a graduate student, then an established PhD" is a fantastic use of LLMs with a trust-but-verify approach.
Princess_Azula_@reddit
The problem is that, at the moment, LLMs cannot be trusted to have accurate information. Maybe in 5 or 10 years they'll be good enough to serve as accurate information sources, but as it stands the technology isn't there yet beyond the basics. It's just frustrating because everyone is making LLMs out to be more than what they are. I'm also afraid they'll eventually be abused to offload the majority of users' mental tasks entirely, but that's a future problem we aren't at yet.
ninseicowboy@reddit
It definitely depends what you’re studying. The deeper (and more proprietary) the knowledge, the less likely a chatbot is to know it. I personally just made the shift from full stack to ML and spent a good 6 months studying, and I’ve got to say it did not disappoint. But it’s possible this is because it was a “happy path” study route - I was mostly learning the basics.
For things like law or medicine, I think it's less likely the AI was trained on the proprietary dataset you might need to learn about some specialized thing. But that's only the edge cases; it will still know a whole lot about generic medicine and law.
I see no issues with this loop:
- Ask AI to break something down
- Read it, understand it
- Fact check it
- Repeat
Even better, copy some dense paper into chat and ask it to summarize or explain the pieces you don’t understand.
Princess_Azula_@reddit
Why don't you just read, understand, and summerize the material yourself? You are outsourcing your thinking and reading skills, and as a consequence they will degrade over time. As they say, if you don't use it, you lose it. The same is true with reading, thinking, and learning.
leetcodegrinder344@reddit
So every time you have a question, you go down to the library and find a primary source to read and analyze, to find the answer yourself right? No lazy shortcuts like checking an encyclopedia or worse, a search engine!
IMO, this new writing thing is destroying kids memories, they can’t remember anything anymore, they just forget it and then open a book and read about it instead… We never should have outsourced our memory to the printing press!
Princess_Azula_@reddit
Oh gosh, it's just so hard to read things, wow, it just takes so much time, and these long words are so hard to understand. I wish i didn't have to think about things for myself and have someone tell me what these long blocks of words mean! If only I had some tool to understand what I'm looking at so I don't have to read anything or understand anything ever again on my own.
leetcodegrinder344@reddit
“Oh gosh, it's just so hard to remember things, wow, it just takes so much time, and these long ideas are so hard to remember. I wish i didn't have to think about things for myself and instead have a book remember everything for me! If only I had some tool to remember what I'm hearing so I don't have to remember anything ever again on my own.” - princess azula trying to stop the advent of the printing press
Princess_Azula_@reddit
"The perfect world was a dream that your primitive cerebrum kept trying to wake up from. Which is why the Matrix was redesigned to this: the peak of your civilization. I say your civilization, because as soon as we started thinking for you it really became our civilization, which is of course what this is all about. Evolution, Morpheus, evolution. Like the dinosaur. Look out that window. You've had your time. The future is our world, Morpheus. The future is our time." -Agent Smith
ninseicowboy@reddit
Summarize*
So you consider any form of cognitive offloading to be detrimental to learning? I assume you also calculate everything with a pen and paper, you never ask your peers for help, never read your notes, nor use search engines, right?
The only thing that matters in education is whether the learner remains actively engaged in the interpretive process. In order for this to happen, they have to be motivated to understand the subject matter.
LLMs actually give me the opportunity to read more words than I would have with just the paper. More context, more history, more nuance.
It goes:
1. Read section
2. Am I confused? No, move on
3. Read section
4. Am I confused? Wait, how does GeLU work again? (Cognitive offloading with Google search)
5. Read section
6. Wait, how did they get 7,385 there?
7. (Cognitive offloading with calculator)
8. Read section
9. (Cognitive offloading with ChatGPT) prompt:
> They fine-tuned a pre-trained ViT using layer-wise learning rate decay, but they omit any ablation on the decay coefficient. Given that their downstream task is low-data and heavily class-imbalanced, could a lower decay rate actually help the model adapt better to underrepresented classes, or would it increase the risk of catastrophic forgetting?
Do you suggest I take this question to YouTube, libgen, sci hub, or wiki? Do you think the paper has an FAQ where they tackle this particular question? Do you really think what I’m doing is detrimental to my learning?
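(For what it's worth, the GeLU lookup in step 4 has a one-line exact answer; a sketch in Python using the error function:)

```python
import math

def gelu(x: float) -> float:
    """Exact GeLU: x * Phi(x), where Phi is the standard normal CDF."""
    return 0.5 * x * (1.0 + math.erf(x / math.sqrt(2.0)))

# gelu(0) is 0; large positive inputs pass through almost unchanged,
# and large negative inputs are squashed toward 0.
```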
Princess_Azula_@reddit
You take each word, phrase, or concept, learn what it means, and use that to answer your question, usually by reading other academic papers, books, etc. If you cannot do fundamental research on your topic of study, define the words used in the literature you're reading, or postulate on the answers to questions you have about the subject, then do you really deserve to call yourself knowledgeable about it? You should answer your questions by becoming proficient in the subject instead of relying on a chatbot to answer them for you.
An LLM could tell you what the GeLU activation function is, but it could just as easily hallucinate the wrong answer, leading you to erroneous conclusions about whatever you're postulating. If your basic theorems are wrong from the start, then all your future work will be wrong. An LLM can be helpful for finding information about a subject for further study, but you shouldn't rely on it for factual information. An LLM is just a text prediction engine; it is not a search engine, a knowledge regurgitator, or intelligent.
Maybe instead of talking to a chatbot, you could find an online community of like-minded individuals to converse with on your topic of study. Then you'll meet new, interesting people and will go much farther with your work than you ever could with an LLM alone.
ninseicowboy@reddit
I certainly agree that fundamentals are paramount.
So your opinion is that we should simply ignore a tool in our toolkit (LLMs) just because it might hallucinate? Do you also not talk to humans because they might misspeak? The solution for both is fact checking; I've already mentioned this, and I'm about to again.
LLMs fill a niche for me. That niche is fuzzy and nuanced questions which would not work with google search or any academic journal search. What I agree with you on is that often my time would be better spent networking, and asking these questions directly to the authors of these papers. Or niche forums like you said. But there are tradeoffs: the latency for an actual subject matter expert to answer me is high (sometimes infinite). Not only that, but I’m probably going to get several incorrect answers, because humans, much like LLMs, make mistakes. I fact check both, but only one I fact check politely.
The idea that one must “deserve” knowledge only through solitary toil feels more arbitrarily gatekeep-y than rigorous.
Princess_Azula_@reddit
It's not that you don't deserve it, but if you cannot understand the language and ideas of a subject, then you cannot comprehend it. What I am saying is that LLMs are more a tool for generating words than for retrieving knowledge. For example, if you don't know what you're talking about, one can point you in the right direction for further research. If you're not sure what a word or phrase means, it can help you find where to read more on the topic. Say you wanted to know about "spherical reaction wheels" but forgot what they're called: you could ask ChatGPT what the spinny things that keep things in space oriented are called, but spheres, and it would spit out an answer or two you can use to find more information. As for trusting information, that's why published works are peer reviewed: so they have accurate information. I would rather have my information be right the first time.
DesignerGas8331@reddit
I’d say it hallucinates less than humans on YouTube.
Princess_Azula_@reddit
They should cite their sources then, to prevent human hallucinations, haha.
MorallyDeplorable@reddit
You're hallucinating right here.
EducationalZombie538@reddit
Now gaslight it with some made-up issue and watch it agree with you and suggest another approach.
ninseicowboy@reddit
Yep I’m not ashamed to say I’ve given it this exact prompt several times “are you agreeing with me just because I’m the user? Please answer with no bias - just what you believe to be objectively optimal”
Prompts like this are an annoying but mandatory part of the daily workflow
EducationalZombie538@reddit
yeah, me too. i preempt it with one of my more polite ai prompts: "don't just f**king agree with me"
when the robots come i'm first in line to die...
ninseicowboy@reddit
I’ll be second then lmao
__loam@reddit
Education is one of the worst things you can do with it. If it lies to you, you don't know enough to know when it's wrong.
DesignerGas8331@reddit
You can say the same about human instructors
ninseicowboy@reddit
They said the same about Wikipedia
__loam@reddit
Yeah and now the kids are using ChatGPT to write their essays.
Italophobia@reddit
ChatGPT is going to be the new calculator.
MorallyDeplorable@reddit
Education isn't just memorizing text, it's a process of learning and trialing and learning and trialing.
If the AI gives you bad info, you find out pretty quickly during the trialing stage.
Nobody makes it to a developer role solely through reading with zero doing.
Such a dumb concern.
__loam@reddit
Just waiting for the day when a major breach is caused by some ai generated bullshit.
BestUsernameLeft@reddit
I haven't made much use of it, but this sounds pretty useful. What's your technique for having it do PR reviews?
ninseicowboy@reddit
Step 0 is turning off data capture, I don’t love OpenAI using my data.
It’s almost certainly an inefficient way to do it, but step 1 is literally taking screenshots of my PR description and the code changes / diff, and just dragging it into chat. ~5 images.
Usually my questions / prompts that go along with the screenshots are quite specific to the task at hand - sometimes they’re normal (“can I word any of this better?”) and sometimes I literally treat it like a confessions booth (“shit, should this cert be checked into version control? Seems like a bad practice”).
I try to remember back to my thought process during implementation, specifically the biggest concerns - (“I wonder if this line will break X”, or “I wonder if this is a bad practice”, or “huh that warning is mildly concerning, problem for tomorrow”). Basically give all of these things that were minorly concerning during the development phase to Claude / ChatGPT or whatever, then it will flag the ones that are actually a problem. The goal here is to build confidence, and if I can successfully extract the biggest concerns from my brain and get sound reasoning on why they are not in fact concerning, it’s 1 step in the right direction. Plus you know what to answer when people give you shit.
wardrox@reddit
It's made me a faster programmer, not a better programmer.
MorallyDeplorable@reddit
I think your main issue is thinking that any AI use is "vibe coding".
Vibe coding is what you do when you want to impress your friends with how fast you can make a design mockup.
If you sit down and actually do planning with them you'll find they quite frequently ask the right questions during planning, and if you dump a fleshed out plan on them they can actually code quite competently.
You have to be intentionally missing how a tool as diverse and useful as an LLM can make people more productive.
kibblerz@reddit (OP)
I'm not saying any use of AI is vibe coding. I've used AI to educate myself. But I've recently been hearing this vibe coding buzz, and former coworkers keep telling me how much code it writes for them.
It has me puzzled, because I mostly just get crap code that rarely works.
MorallyDeplorable@reddit
I don't know what to tell you. The tools work, though. I refactored and mostly reviewed 10k lines in a personal project this weekend: a bunch of related but standalone C++ utilities I'd made over a couple of years, turned into a Python extension, with bindings and base classes for similar tools to inherit. I've reviewed and tested about 80% of it so far. It's passing all my tests, the code is sane, legible, and well-commented, and it's got decent error handling. There were only a couple of issues I had to sort through myself.
It couldn't have written that code from scratch that well or that quickly, but that refactor would have taken me two months of downtime to do myself, and I did it in a weekend with an AI. And I'd never done a Python extension before (though honestly it was quite simple once I reviewed what the AI had written).
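(The "base classes for similar tools to inherit" shape the commenter describes might look roughly like this minimal Python sketch; all names here are hypothetical, not the commenter's actual code:)

```python
from abc import ABC, abstractmethod

class Tool(ABC):
    """Hypothetical shared base for a family of standalone utilities."""

    name: str = "tool"

    @abstractmethod
    def run(self, data: bytes) -> bytes:
        """Transform the input; each utility supplies its own logic."""

    def __call__(self, data: bytes) -> bytes:
        # Shared plumbing (validation, error handling, ...) lives here once,
        # instead of being copy-pasted into every standalone utility.
        if not isinstance(data, bytes):
            raise TypeError(f"{self.name} expects bytes")
        return self.run(data)

class Reverse(Tool):
    """One concrete utility inheriting the shared scaffolding."""
    name = "reverse"

    def run(self, data: bytes) -> bytes:
        return data[::-1]
```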
PizzaCatAm@reddit
You are obviously biased, feeling it's a threat. It's a great tool.
wslee00@reddit
It's not really better code, but my throughput is much better. Like another poster said, it's autocomplete on crack. Also helps a lot with test classes.
brainhack3r@reddit
Yeah... I'm getting really tired of it too.
99% of my job isn't coding; it's debugging and sticking together shit that was never meant to work together.
AI can't solve that problem because it hasn't been trained to solve it.
Like right now I'm trying to connect a cryptocurrency wallet app via react-native via webview into our app and build an RPC layer to manage that.
It completely falls apart in this type of situation and has no freaking clue what it's doing.
However, if I need it to generate something like a merge sort, I can get that in any language I want.
That's kind of nice though.
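(Merge sort really is the canonical case: thousands of copies exist in the training data, so the models reproduce it reliably. The textbook top-down version in Python, for reference:)

```python
def merge_sort(items: list) -> list:
    """Classic top-down merge sort; returns a new sorted list."""
    if len(items) <= 1:
        return list(items)
    mid = len(items) // 2
    left, right = merge_sort(items[:mid]), merge_sort(items[mid:])
    merged, i, j = [], 0, 0
    # Merge the two sorted halves, taking the smaller head each time.
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged
```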
digitalwankster@reddit
Tbh you probably aren’t utilizing it to the best of your ability. Talk to it like it’s a junior dev and give it as much context as possible. There’s no reason it can’t easily handle the task you mentioned.
digitizemd@reddit
You could just do it yourself rather than spending time giving it context.
digitalwankster@reddit
It would take a fraction of the time but OK
digitizemd@reddit
So I need to spend a lot of time explaining to it what to do, almost step by step, then thoroughly check its output, but that'll take less time than... checks notes... just doing it?
digitalwankster@reddit
Yes, because that is faster than you doing it. There’s really not much room for debate here. You could also write pseudo code and have it fill in the blanks for you.
digitizemd@reddit
Yeah, but you're skipping the whole part where I have to thoroughly explain to it, step by step, what I need, rigorously check the output, find errors, watch it repeat mistakes, etc., rather than just fucking do it. What you're saying makes no sense.
digitalwankster@reddit
You sound like someone who tried using ChatGPT to code 2 years ago and gave up on it. I promise you are not as fast as an AI agent and that you would exponentially increase your productivity if you weren’t so busy clutching your pearls.
digitizemd@reddit
I've tried most of the newer versions of ChatGPT, and I've used Claude a bunch and Gemini, as recently as a couple of months ago; I still use Claude occasionally. I used them fairly regularly and was basically always frustrated.
All of these models are going to hit a wall (and basically already have). I've been a developer for 10 years. I don't need to be bothered by trying to get a junior to do all of my work.
But while we're making assumptions, I'm going to make one that's probably a lot closer to the truth, especially based on your comment history: you're not a developer professionally; you don't have much experience at all; you work on hobby things that are trivial.
I'm not clutching pearls. I'm just an experienced developer who likes to do things like read docs and learn, and not waste my time on a stochastic parrot.
digitalwankster@reddit
I’ve been a developer for much longer than you, friend. You’re going to get left behind with an attitude like that.
digitizemd@reddit
I bet you have. Working on super "complex" PHP apps.
digitalwankster@reddit
I’ve built solutions for companies worth hundreds of millions of dollars and I probably make more than you so let’s not get into a pissing contest. The bottom line is you’re going to get replaced unless you learn to be the puppet master instead of the puppet.
Apprehensive_Elk4041@reddit
OK, so let's say you have. And you build so much stuff so quickly that you're wildly in demand as a super-freak coder extraordinaire.
Why would you waste time talking on here? If these issues were not real things, why hasn't this swept the entire IT world at this point?
Are we pretending that coders are 'valued members of the company structure'? As if they're not seen by most orgs as a pure cost center they'd drop as quickly as possible?
Is everyone else on earth dumb except for you? I'd advise against accepting models of the world that are self-serving, that's a trap for sure. Any answer to a question that leaves your ego boosted should always require EXTRAORDINARY evidence.
digitalwankster@reddit
Why do I engage on Reddit? Why does anyone use social media at all? I'm not sure if you've been keeping up with the news but AI HAS been sweeping the entire world at this point and we've seen several tech companies doing large scale layoffs due to the productivity boost that AI has provided their senior developers.
No? The only dumb ones are the luddites that think that they can outperform their counterparts who are utilizing AI in their workflow. The irony of your statement is that the people posting copium in this thread are the ones that have big egos. Half of the people in this thread are essentially making statements like "AI could never take MY job!" which is foolish and comes from a place of ego and ignorance.
Apprehensive_Elk4041@reddit
It could be ego, there is a lot of that to go around.
But with the eyes of experience, many things look very different. I wouldn't throw out opinions just because they clash with yours; that's a much slower road. I would try your best to understand what others say and why.
Again, just calling it copium implies that YOU are not taking copium, because YOU are ahead of those 'luddites'. This all feels very good, but I'd be cautious: when your opinions put you in the 'better' group of people, they're likely just self-serving and biased. Everyone does this ALL the time; it's a very common trap. It's a bias in service of pride, and the one thing I've found almost universally useless as an adult is pride. It feels so, so good, and it's almost always the wrong way.
When you get a little smile and a kick when you say the answer, then you should require a VERY high bar of evidence for the supposition, because you will by default always write yourself as the hero and on the 'better' side. It's very easy to block things out that don't support that narrative.
Apprehensive_Elk4041@reddit
No, they're not saying 'AI could not take MY job'. What they're saying is that this 'AI' (which is just built off autosuggest engines) is not capable of what the sales folks say it is. This is not uncommon; sales gonna sell. My guess on why you're on here is that you (if you're not just a bot) are tied to that sales cycle.
Relegating what you do to a machine in toto may be something in the future, but at this point, from what I've seen we're certainly not there yet. Syntax isn't the hardest part of programming at all, and the models have no conceptual framework.
I'm doubtful of that; this has been tried MANY times before and has always resulted in systems that had to be wildly constrained, and that ended up as one-off 'frameworks' that could not handle general problems. We have code generation, code generation that's NOT based on feeding a pattern-recognition engine but on the ACTUAL syntax required, and we've had it for a very long time. We've tried boilerplate removal, generated code (at runtime or as text files you can read), and expert systems (written by actual experts, not trained on randos' GitHub profiles), and the problem is always the same: to make the tool useful you have to constrain the problem space, which constrains the use cases it's good for. What you end up with is a one-off program that solves some problems but is more complicated than just writing the code by hand, and too specialized to work as a framework across a broad range of use cases.
This is the problem I don't think you'll ever solve. It's a tradeoff between limiting capability and complexity, and I think if you did have some 'AI' (whatever that actually means) that could build it, what you'd end up with would be hardly readable or understandable by humans (i.e., some form of hyper-optimization more akin to machine language or brittle spaghetti code than an orderly high-level language, because you would need an optimization function), meaning you can't change the functionality in a predictable way. I just don't think we're remotely close to anything that can do this, and that's completely leaving out the need to actually write the business requirements out. I'm sorry, but to me this is all just a different route to a search engine.
I'm not saying these things can't be improved, but I do NOT think that they're remotely there right now, and the test isn't what I think, but rather what companies do.
Everyone wants shortcuts. If these models were that great, we'd all be losing jobs in droves (and we may someday). But there will never be a shortcut for a human to learn and expand what they can do creatively, and all this does is attempt to short-circuit that learning cycle. When you don't understand the syntax, you waste cycles thinking about it. Once you're fluent in the syntax, you can move on to thinking about how you structure things, and this repeats up the chain of non-functional requirements. If you don't know what a rock is, you will spend your day looking at the rock instead of building a house in the forest. You just can't shortcut that for yourself as the operator. I see these tools as not remotely useful for lower-level developers, and only lightly helpful for more experienced ones.
The problem isn't that it 'can't be better than me'. It's that having a robot do pull-ups for you ensures you'll never get to a strict muscle-up, because you haven't completed the earlier steps in the progression yourself.
digitizemd@reddit
I bet you have. I bet you're the best developer in the world.
Apprehensive_Elk4041@reddit
lol, if you can break down everything you're doing into pieces small enough and well-defined enough to be reassembled into code later, then you already know how to solve it, so it's just helping with syntax? I mean honestly, at that point what is it actually doing for you?
Given its propensity to give wrong answers, you REALLY need to know how to read and understand what it's spitting out (not what it's TELLING you; it's not thinking, it's a probability-based guessing engine). I'm not sure that, if you're far enough along to use it for real problems (well-defined, atomic questions to be reassembled into a solution), you gain anything over just doing it yourself, since you'd have to verify what it spit out anyway.
digitalwankster@reddit
Yes. It can generate the code way faster than you can type it. Not everyone using AI to enhance productivity is "vibe coding". As long as you understand what needs to be done and you can verify the output (reading code is much faster than writing it), there's no reason to not use it to speed up the development process.
kregopaulgue@reddit
Agree so much about being gaslit about AI tools. They are okay for some stuff, but when people tell me they are 2x or more productive with them, I sometimes start thinking that I am the problem lol. I am fairly open to these new tools, but they save me like 5-10% of my time on average.
FetaMight@reddit
Because you are. I've had long conversations with a few of these zealots and, to put it bluntly, they're all morons.
They have no idea what they're talking about. They don't understand AI or its growth. They have no concept of engineering. They just gobble up all the marketing hype and repeat it, hoping it will give them an air of authority.
Bake-Busy@reddit
ZERO coding knowledge here, but I'm currently developing a mobile app using Flutter with Firebase integration through Claude & GPT.
Main features so far:
• Role-based access control (RBAC): users are assigned roles (admin, editor, viewer) with dynamic UI and data access restrictions.
• Offline support: users can work offline and sync data back to Firestore when reconnected.
• Export functionality: data can be exported to formatted Excel files (Apache POI on Android).
• UI flow: modular screen routing (Home > Data Management > Action Screens), built with a clean, layered structure.
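(The RBAC piece, at least, is conceptually simple; a minimal, language-agnostic sketch in Python, since the commenter's actual app is Flutter/Dart and these role/permission names are hypothetical:)

```python
# Hypothetical role -> permission mapping, mirroring admin/editor/viewer.
PERMISSIONS = {
    "admin":  {"read", "write", "delete", "export"},
    "editor": {"read", "write", "export"},
    "viewer": {"read"},
}

def can(role: str, action: str) -> bool:
    """True if the role grants the action; unknown roles get nothing."""
    return action in PERMISSIONS.get(role, set())

# UI code would branch on this, e.g. hide the delete button:
# show_delete = can(user.role, "delete")
```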
ChildmanRebirth@reddit
The hype-to-output ratio in a lot of AI tooling right now is wild. I’ve had similar experiences — asking for something small and getting bloated, outdated, or just plain wrong code back. Especially with anything JS-related where the ecosystem changes every 5 minutes.
That said, I’ve stopped expecting AI to write code for me and started using it more like an assistant that helps me think through or debug stuff — especially under pressure.
One tool I did find genuinely helpful was ShadeCoder. It’s not for “vibe coding” — more like a stealth backup for coding interviews. It watches your screen + listens to audio during a live interview and gives you full solutions, tests, comments, etc., in real-time. Kind of like having a calm senior dev behind the curtain during a panic moment.
Not something I'd use day-to-day, but for that high-stakes, high-pressure scenario? Super useful.
Anyway, you’re not wrong — there’s a lot of noise out there. But in the right context, some of these tools are starting to find their place. Just gotta sift the hype from the actual utility.
duck-duck-goob@reddit
I have been productive with it, but almost never writing code. I use ChatGPT and Copilot. There have been times where I want to just disable copilot because the suggestions are usually just wrong, but ChatGPT has been helpful for certain things. If I’m working in part of our stack that I’m not familiar with, I’ll use ChatGPT to help me wrap my head around stuff or if something weird is happening, I’ll ask it for some help but I never just paste a bunch of code and tell it to make changes for me. Otherwise, it’s been useful for writing little helper scripts that I just use to automate a few things I don’t feel like doing manually.
I’d say I use it for probably less than 10% of things I do and that just frees up a little time for me to focus on more important things.
Factor-Putrid@reddit
My company's founder is such a believer in AI that he refuses to hire additional devs to help me. We're a team of five, only two of us are engineers, but I'm the only one building our app and our other engineer does network automation work by himself.
Looking to leave ASAP. AI has its place in the tech world but it should be treated like Stack Overflow and Google. No matter how good it is now, it is never an adequate substitute for a quality team of developers.
daddygirl_industries@reddit
I'll take your position if you're giving it up. Other people are always the biggest problem everywhere I've worked. I'm sick of arguing against dumb opinions, hubris, ego, and all the other bullshit that comes with people. Me + my AI agents can go way further than simply adding "more humans", honestly.
BigRonnieRon@reddit
YEP, agree 100%.
It's literally impossible to spend $0.50 coding one component unless you're rolling your face across the keyboard for the prompt. My AI response, which was one-shot and worked (despite the fact I would write that by hand in about 5 minutes or use MUI), cost $0.002
Opposite_Werewolf_98@reddit
Since you're looking to leave... mind if I get a DM on those business deets? Lol
PureRepresentative9@reddit
As far as I know, besides OpenAI, none of the big players have claimed that their tools are actually capable of replacing programmers.
So all the CTOs saying that LLMs can be used in lieu of hiring more people are talking out of their ass.
Also, OpenAI has also been hiring programmers this whole time.
Classic-Sherbert-399@reddit
Didn't the zuck prick say they're going to remove all intermediate dev jobs last year or this year because AI will be so advanced? Seniors to follow? Wonder how that's going for him
PureRepresentative9@reddit
I don't consider someone who struggled to get legs in his VR program to be a big player.
dpn@reddit
Legs in vr... Heresy!
digitalwankster@reddit
You don't consider one of the largest open source model contributors to be a big player?
PureRepresentative9@reddit
When I look at the impact and success they've made compared to ChatGPT, DeepSeek, etc.
They seem like a very small fish.
Basic-Tonight6006@reddit
I hope when the economy turns around that Meta can't hire any good engineers anymore after the stuff he's pulled like telling staff to "buckle up". I wish it was a good person who came up with Facebook and had billions and control now but instead we're stuck with this piece of work.
Ok_Bathroom_4810@reddit
Anthropic is publicly claiming they will be able to replace junior engineers 1 year from now.
VannaTLC@reddit
That... is not a brag though. A junior developer actively detracts from the rest of the team, just like shitty AI.
Jamb9876@reddit
Zuckerberg seems to have made that claim about their model, and MS seems to be stating something similar: that 20-30% of their code is written by AI. I expect over the year people will realize this is a bad idea.
TheNewOP@reddit
OAI has walked that back, somewhat: https://www.windowscentral.com/software-apps/sam-altman-ai-will-make-coders-10x-more-productive-not-replace-them
PizzaCatAm@reddit
It will, and those devs better start their own companies. The economy is going to change. In general terms, we are doing things like art, video, music, and programs in seconds; eventually everyone will be able to get into the niche they want, and companies will be super lean compared to what they are today.
Antonio-STM@reddit
You are fundamentally wrong. Devs are not programmers.
A programmer can write code to do something, but they don't delve into how that code works at a low level (platform, OS, etc).
A developer knows what the code he writes will do, how the OS will orchestrate things to execute that code, how the processor will move registers to perform an operation.
And in those differences are some of the reasons why AI can't replace programmers/devs.
AI can't infer performance bottlenecks or which architecture is best suited for a certain project.
AI is trained on what is common by usage, not on what is best by knowledge.
PizzaCatAm@reddit
I was part of an OS team in FAANG, thanks for sharing your take.
SemaphoreBingo@reddit
Yeah and they're all shit.
PizzaCatAm@reddit
Salty, they are miles ahead of what could be made last year, and miles ahead of the year before. Seems like devs became a trade, people are no longer excited about tech.
SemaphoreBingo@reddit
I will admit that most of the time they get the correct number of fingers, which is an improvement.
Not all tech is good tech.
PizzaCatAm@reddit
You are coming off as triggered.
SemaphoreBingo@reddit
You are coming off as a fanatic.
PizzaCatAm@reddit
Hehehe, talk in five years, you guys are so blinded by fear and identity.
Antonio-STM@reddit
We are not. We talk from experience and history.
Some of us come from the command-line era, others from the first UIs, but all of us have lived through moments like the RAD fad, when tools like Visual FoxPro, dBase, or even MS Access promised companies that every employee could build applications. Or the mobile app wizard craze promising ultra-stylish, performant mobile apps made by everyone just by dragging and dropping.
The position and expertise of many people talking about AI-assisted development is comparable to someone who just discovered the use of a thermometer or a sphygmomanometer and thinks they can now be a doctor.
BTW, AI doesn't make art, it just bashes things together. It would be like calling yourself a LEGO artist like GERARDO PONTIERR just because an app tells you which blocks are needed and how to assemble them.
PizzaCatAm@reddit
You are wrong.
xDannyS_@reddit
That just always leads to MORE work, not less. Also, that's not how economies or businesses work...
Schmittfried@reddit
First, that’s just more of the AI company marketing nonsense. Let’s see how capable it will really become. It’s unlikely LLMs will be the technology that allows this.
A tenfold productivity increase might or might not result in layoffs, depends on the saturation of the market. The basically millionfold increase you are describing will, if it actually happens, cause mass starvation and a collapse of whatever middle class is left. It will be only super rich and homeless then, because what you are describing renders all knowledge work and most creative work obsolete. Tradespeople would probably have the last viable jobs.
Wattsit@reddit
Yeah a tsunami of poor, unreliable, untested, rushed, and slow software, presented with stable diffusion logos and their vibe coding dev on their knees begging for venture capital.
PizzaCatAm@reddit
Look at what could be done last year compared to this year. You are all blind, but it's OK, time will show.
Schmittfried@reddit
Not much less. We’ve seen incremental improvements.
spiderpig_spiderpig_@reddit
this is strikingly similar to the No Code promises of 5-10yrs ago
PizzaCatAm@reddit
I mean, whatever, this is pointless.
Gold_Trade8357@reddit
I’ve been using GitHub copilot for the last 6 months or so.
It helps for sure; it's just a personal assistant and often more helpful than Google. It has nowhere near made me 2x productive, let alone 10x. These AI CEOs gotta sell to someone; unfortunately for us it's our CEOs and executives who've never written a line of code in their whole lives.
Logical_Number_8316@reddit
Yeah I think copilot makes me 10-20% more productive. Which is significant, but not even close to their over hyped numbers.
HolidayEmphasis4345@reddit
If it makes you 15% more productive and you are Microsoft, what does that mean? If you have 10k engineers you just got a windfall of 1,500 engineers to cut loose or take on new stuff. And yes, I'm simplifying, but those numbers matter.
Aurori_Swe@reddit
I've seen "devs" claim that AI makes them 300% more efficient and my only answer is that "I'm sure it makes YOU 300% more efficient, that does not mean that you equate to 4 devs total either way".
Antonio-STM@reddit
Totally agree. Also, I want my code efficient and performant, not the process of writing it.
akp55@reddit
OpenAI's LLMs suck donkey balls at helping with code. As OP said, any libs that have had changes after the data collection are a hot mess. I've tried to get it to use the new libs, which results in it getting confused and giving you code that's split between the two versions.
morentg@reddit
Didn't Zuckerberg say something to the effect that they'll have AI capable of replacing mid-level devs within a year?
Right-Tomatillo-6830@reddit
OpenAI is still hiring developers, actions speak louder than words.
GammaGargoyle@reddit
Anthropic very publicly banned the use of AI in interviews lol. The AI pie is getting smaller and the big players are going to try to lean into coding as much as possible. It’s a huge money maker from hobbyists alone who never even finish a project.
Right-Tomatillo-6830@reddit
there's a pie? are any of these companies profitable yet or are we talking VC money?
PizzaCatAm@reddit
It's an assistant. I wrote a thousand-line service last week with it. Research shows it increases the productivity of senior developers more than anyone else's.
killbot5000@reddit
I tend to agree with you. It really helps with boiler plate and writing more “conventionally” in languages and apis I’m new to. It really helps me bootstrap.
It also helps me debug. It acts as a rubber duck but one that actually understands a lot context around certain code. Again, most useful when in the context of frameworks and languages I’m not personally familiar with.
As for writing my actual code, it’s generally useless at helping me develop my logic. It’s hard to convey my goals to it. Auto-complete is kind of a wash because it auto completes nonsense as much as it auto completes useful code. If I had to use it to write my code and could not write any myself I’d go somewhere else.
Toph_is_bad_ass@reddit
It's insanely good for boilerplate, which takes up more dev time than people think. I mean, just quickly converting server data models to TypeScript types on the front end is a huge time saver.
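The server-model-to-frontend-type conversion described above is mechanical enough to sketch. A minimal hypothetical example (the `UserDto`/`User` names and fields are invented for illustration, not from any real API):

```typescript
// Hypothetical server payload shape (snake_case JSON from the API)
interface UserDto {
  user_id: number;
  display_name: string;
  created_at: string; // ISO timestamp
}

// Frontend model with idiomatic camelCase and a real Date
interface User {
  userId: number;
  displayName: string;
  createdAt: Date;
}

// The mechanical field-by-field conversion an assistant can churn out
function toUser(dto: UserDto): User {
  return {
    userId: dto.user_id,
    displayName: dto.display_name,
    createdAt: new Date(dto.created_at),
  };
}
```

Tedious to write by hand across dozens of models, but entirely deterministic once the payload shape is known, which is why it suits autocomplete-style tooling.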
PureRepresentative9@reddit
FINALLY
an example of where LLMs/advanced auto complete is helpful.
I've asked literally dozens of times and this is only the first concrete example lol
digitizemd@reddit
Feel free to cite said research. And if it's a blog post from some rando, that's not research.
Schmittfried@reddit
Yeah that’s a realistic view on this. I see it as a Google + Stackoverflow + Autocomplete on steroids, a dumb assistant as you said. None of that one-man unicorn nonsense.
PureRepresentative9@reddit
You can't be serious right?
You're using LOC as a measurement?
oupablo@reddit
They're saying that because the other CTOs are saying it. Tech is one giant circle jerk. VCs all flock to the same BS and when you don't talk about it, you get left behind.
Trevor_GoodchiId@reddit
https://www.businessinsider.com/anthropic-ceo-ai-90-percent-code-3-to-6-months-2025-3
tdatas@reddit
If your customers are all infantilised and dependent, then you can sell them more. We've seen this play out already with the clouds.
cuddle-bubbles@reddit
likely the CEOs pressured them to say that lol
Zulban@reddit
Don't make the mistake of assuming a founder is telling you the real reason they are or are not doing something.
OkElderberry3471@reddit
Total opposite for us. The devs are paying for AI tools themselves and using them with great success, and our bosses refuse to believe it and have made it our responsibility to convince them that AI has value.
Elctsuptb@reddit
I think it's better to keep that to themselves and have their boss think they're doing all the work instead of AI, because as soon as it starts spreading the less their contributions are going to matter
MagnetoManectric@reddit
Ever notice how the accounts promoting this crap are always AdjectiveNounBunchaNumbers accounts with relatively ordinary history up until a few months ago, then they suddenly only post about how great AI coding tools are?
Which-World-6533@reddit
I've found that Reddit AI-zealots generally have the following in common:
OkElderberry3471@reddit
I’ve been posting about cats and synthesizers for years. Not pushing anything here, just sharing my experience at work. I wouldn’t consider myself an AI zealot or even particularly knowledgeable about it beyond the average experienced dev.
When I say great success, I mean it's reducing the amount of clicking and typing we have to do sometimes, tedious shit that takes junior devs hours and requires little skill. Those are hours saved, nothing earth-shattering. The shortcomings are obvious: it's fucking hard to understand and it's changing every second, it's not great at doing complex changes on existing codebases, and it makes junior devs lazy. And the industry hype fucks with our bosses' heads and rolls downhill. At this point, if it helps me type a bit less, that's value for me. I think my original message wasn't clear: I'm an optimistic skeptic that's been burned by the hype enough times to know it's not all it's cracked up to be yet, and maybe never will be... but it's a useful tool in many cases.
Which-World-6533@reddit
This is what happens when you let Chat-GPT write your comments.
OkElderberry3471@reddit
This is what happens when you're in denial about being left behind because you're aging yourself out of your own industry.
Which-World-6533@reddit
Lol. AI-zealots can't help themselves.
OkElderberry3471@reddit
Would you like me to help improve this response or generate a more concise version? Just let me know.
somneuronaut@reddit
You mean the default when creating a Reddit account? And even the result when trying to pick a custom name? You're hypothesizing a conspiracy that doesn't exist. I once created an account while trying to use a custom username and it automatically swapped it for one of those you're talking about. The platform does that to people, so it's not the users doing anything. Gosh. You looking for drama?
SS_MinnowJohnson@reddit
I’ll wear that tinfoil hat with you. I have data. My company essentially proved to itself it sucks, and might occasionally save time at best
https://www.reddit.com/r/ExperiencedDevs/s/DPfErBfuRn
OkElderberry3471@reddit
I’ve been posting about cats and synthesizers for years. Not pushing anything here, just sharing my experience at work, the fuck is ur issue?
Unintended_incentive@reddit
This sounds less like an AI problem and more like an "engineering field with no regulatory engineering board inspecting/issuing stop work orders and fines for blatant non-technical stupidity" problem.
hcaandrade2@reddit
Imagine a company saying you don't need to hire more devs because you had Stack Overflow.
mpcusack@reddit
Like everywhere, my company has been discussing/struggling with this a lot recently: how important is it for developers to use AI tools, and should we make it part of the interview process? Opinions range from "they shouldn't be used at all in interviews" to "we should reject any candidate who doesn't insist on using them because they're out of touch."
I went through a round of interviews with openai recently and I thought an interesting data point is that they don't allow the use of any AI tools in their programming interviews.
Traditional_Calendar@reddit
This is what I think is happening in the tech market right now. People that don’t know shit are stopping progress due to the “incredible productivity boost”
phil_baharnd@reddit
It's a fantastic supportive tool when used this way. It's much more efficient and effective than Stack Overflow or Google. I used to get frustrated when I read the documentation, but there wasn't a clear answer on a detail I needed a clear answer on. Now I can ask AI and understand things more thoroughly.
dpn@reddit
I use Copilot in nvim, it's handy enough.
I find I need to spend too much time coaxing AI into the right solution with higher level more expansive use of it.
kibblerz@reddit (OP)
What do you use to make copilot work in nvim??
dpn@reddit
I'm using astronvim which I believe uses an alternative to the official plugin. It's installed with mason iirc.
I've only just moved from Vim to Neovim so I'm still getting my head around all the common packages. I can grab a link when I'm at my PC next.
BigRonnieRon@reddit
You're doing it wrong
1damienblue@reddit
Yeah I’ve had some success with some game development passion projects and a private web app I’m using to build my lean startup. It takes really clear cut requirements to get what you want and coding for 10+ years has helped me to understand how to write those requirements. It’s a huge time saver. I’ve had times where I hated it but usually coming back another day where I had the patience to write the requirements properly resulted in getting what I wanted. That being said, I don’t believe established companies should try to replace real people with AI from a moral standpoint.
RoughChannel8263@reddit
It's a tool. When Google first came out I didn't want to use it for coding because I felt like I was cheating. Now it's the first thing I open when I start a new project.
I'm an independent contractor with an insane workload. I use ChatGPT quite a bit. It speeds up learning curves on new tech that I need to use. I use it mainly as a learning tool. The fact that it gets things wrong sometimes actually makes it good for that. Someone else already posted about not putting anything in your code you don't understand. Good advice. When it comes to AI, copy and paste is not your friend. It's like that annoying engineer who, no matter what problem you're having, has got the answer. Sometimes they're useful and other times, not so much.
It fits well with my workflow. I'm getting more done in a shorter time than without it. Great for my clients. Most of my work is T&M so it seems like my billing should be down, but so far that has not been the case. It seems to help keep me focused.
Slodin@reddit
Umm, try figuring out your keywords and prompts. I usually feed it information bit by bit to prompt it to understand what I'm trying to do. So far I've even thrown bits of Figma design images at it to get UI element code.
Is it perfect and ready to ship? Hell no. But it gives me a really good starting point to work on the code.
I always had good experiences with ChatGPT, deepseek and copilot. Gemini has been the bane of my existence months back, I never tried it again.
Front end with fast updated libraries are not the best for AI IMO. But writing stuff like tables, sql queries and ORM stuff is like my dream came true. I hate dealing with data related stuff, and having an assistant helping me to do the boilerplate works wonders.
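The data-layer boilerplate being handed off here can be as simple as query assembly. A minimal sketch, not tied to any particular ORM (`buildInsert` is an invented helper, shown only to illustrate the kind of mechanical SQL scaffolding meant):

```typescript
// Build a parameterized INSERT statement from a plain object.
// Returns the query text plus the values array, in the $1/$2 placeholder
// style used by Postgres drivers.
function buildInsert(
  table: string,
  row: Record<string, unknown>
): { text: string; values: unknown[] } {
  const cols = Object.keys(row);
  const placeholders = cols.map((_, i) => `$${i + 1}`);
  return {
    text: `INSERT INTO ${table} (${cols.join(", ")}) VALUES (${placeholders.join(", ")})`,
    values: cols.map((c) => row[c]),
  };
}
```

Writing this sort of thing per table is exactly the repetitive work an assistant handles well, since the pattern is fixed and only the column names vary.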
kibblerz@reddit (OP)
I haven't had much luck using it for UI. Implementing figma designs with Chakra ui or pandaCSS is pretty dang fast. It helps to not be constantly switching files and only need to focus on the components. I just don't see how getting AI to implement it would be any faster.
I love dealing with data. My approach to development has been about basically tailoring the data as much as possible, and then using codegen libraries with Golang to get some pretty streamlined APIs.
I don't really mess with ORMs, doesn't really make much sense to use those IMO. I'm a control freak about how data is stored and handled 😂 Plus we never really have to worry about SQL becoming abandonware or changing radically.
Slodin@reddit
The thing is, you just do what works for you. Shouldn't that be "vibe" coding lol? If you can already automate some work to fit your needs, that should be just fine.
yeah idk what vibe coding is, nor do I want to know.
I just do whatever I think works for me and whatever that seems to speed up my process without messing things up.
I never had a boss that cares about how I work. They all just care about the result and deadline. Your boss is weird. I guess that's why they're former, huh.
slartibartphast@reddit
Just like ides let you refactor, make better names effortlessly, whereas before ides you would’t bother, ai can do the same for certain tasks.
I've been learning TypeScript and React and all of that (old man dev; I've done JS but I'm more of a back-end guy). If I've gotten something working I can ask it to make it into a generic utility. In TS, the hellish generics and mega-nesting of braces, brackets, and parens make it painful. AI is not bad at it.
It also finds the why of errors if unfamiliar with quirks. Reacts got a lot.
Also for things just trivial like make a Postgres schema for this structure. Stuff like that adds up.
No, it's not glorious stuff a CEO will be blown away by. But it's useful. They think it's far better than it is. They think it's a 20-year expert when in reality it's an intern.
kibblerz@reddit (OP)
Yeah, I can see it being helpful when someone is new to React. I've been doing it for a few years; it honestly comes far more naturally than plain HTML, CSS, and JS ever did.
TBH Typescript is often overkill unless you're developing a library you intend to use with other projects or distribute. It serves as great documentation, but I've come to feel like the excess time spent isn't worth it.
It's too superficial imo. It'd be nice if it was part of JavaScript in a manner that allows you to actually leverage the types during runtime. It feels like you only get a portion of the benefits compared to a type system like Go has, and it's far more convoluted.
Just don't reuse variables for different purposes, handle null values, and typescript ends up being kind of wasteful.
Probably gonna get a ton of down votes for this take 😂
slartibartphast@reddit
Well, coming from Java I'm big on static checks. They've saved me a lot. But with TypeScript generics you end up having to use "any" because of a library, or go down a huge rabbit hole to fix it properly!
kibblerz@reddit (OP)
Yeah, I totally get the static checks. I just wish TypeScript felt natural like other type systems, but because it was an afterthought, it ended up kind of ugly and lacking runtime function.
I'd jump on the TypeScript bandwagon if I could attach runtime functions to types. Last I checked, that still wasn't possible :(
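The runtime gap complained about here is real: TypeScript types are erased at compile time, so any runtime checking has to be hand-written as a type predicate. A minimal sketch (the `Point` type is purely illustrative):

```typescript
// This interface exists only at compile time; nothing of it survives
// into the emitted JavaScript.
interface Point {
  x: number;
  y: number;
}

// A user-defined type guard: the hand-written bridge between runtime
// values and the static type system.
function isPoint(value: unknown): value is Point {
  return (
    typeof value === "object" &&
    value !== null &&
    typeof (value as Point).x === "number" &&
    typeof (value as Point).y === "number"
  );
}
```

After a successful `isPoint(v)` check, the compiler narrows `v` to `Point`, but the check itself had to be duplicated by hand, which is the "only a portion of the benefits" point.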
kzlife76@reddit
I use it a lot for debugging. Paste the error and a stack trace into it and it will usually lead you to the correct answer. Other than that, I use it to format data and produce examples of specific functions. It usually saves me from having to read through all of the documentation. Sometimes I just need an example. Writing a whole app or component? No thanks. I'm with you on that. I use it as a tool, not to do my job for me.
Zulban@reddit
It's extremely useful in some contexts. If you don't think so, it's your role as a professional to figure out how. I haven't found any AI IDEs particularly useful yet tho.
kibblerz@reddit (OP)
I got Avante (a Neovim plugin) to fix the sorting in a GraphQL dataloader today, and surprisingly it did well on its first try.
So that was nice. One of the few attempts that's actually worked out so far. Of course, the plug-in then glitched and got stuck in some loop despite completing the task, and drained my Claude credits from 3 bucks to 0 in like 5 minutes lol
Zulban@reddit
Well, I'd focus on using AI to get it to write boring junior level code. It's like having a dedicated junior any second of any day. If you know how to assign work to a junior, you can get good stuff out of the best AIs.
criloz@reddit
There is too much hype around it. I just downloaded Cursor today to test it, and I honestly got annoyed 10 minutes in. I'm starting to think I'm actually stupid because I cannot get the productivity boost everyone is talking about over the internet. AI is just another tool. It's very useful for learning things you know nothing about and for making resumes, but you need to take its output with a grain of salt and definitely verify it against other references. For coding, it's useful for small tasks with very well-defined inputs and outputs.
kibblerz@reddit (OP)
Plot twist, all the hype is AI bots ran by cursor..
ImportantDoubt6434@reddit
The AI has limited use and needs to be double-checked; otherwise assume poor/unusable quality.
It’s great for like an intern level make a simple script and type out a bunch of boilerplate/json for you
SynthRogue@reddit
I only use it as shortcut to documentation. So I can get the commands. Then I use the commands as I see fit
umognog@reddit
I use it a lot in meetings.
"Can you dumb down 'the data feed broke due to a change in the security layer that wasnt communicated in advance of it happening, which is your fault Bill', so that senior management can understand it?"
For real, not far off that, i use it during meetings.
Why does it always need to be about the code?
kibblerz@reddit (OP)
I actually use AI in this manner for quotes, it's pretty good for translating nerd talk to human talk. AI definitely has its uses and use cases like that are perfect. Using AI to write code? Nah.
What blows my mind is how many developers who are on this AI vibe coding train, who are clueless about what an AST is or how valuable traditional code generation can be. Like we've been able to get computers to "write code" for a long time now via metaprogramming.. It's a very valuable tool. And it's FAR more precise and reliable than AI likely ever will be. Yet most devs on this hype train are clueless about it
I just feel like we have tools that can accomplish many of the same things as AI with better precision and reliability, it's absurd how many devs seem to think that AI is the only way to automate programming.
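The traditional code generation being contrasted with AI here can be as small as a template function: deterministic, with the same input always producing the same output. A toy sketch (the schema shape and names are invented for illustration):

```typescript
// Field types our toy generator understands
type FieldType = "string" | "number" | "boolean";

// Emit a TypeScript interface declaration from a schema description.
// Unlike an LLM, this produces byte-identical output for identical input,
// which is the precision/reliability point about classic codegen.
function generateInterface(
  name: string,
  fields: Record<string, FieldType>
): string {
  const body = Object.entries(fields)
    .map(([field, type]) => `  ${field}: ${type};`)
    .join("\n");
  return `interface ${name} {\n${body}\n}`;
}
```

Real codegen tools work on ASTs rather than strings, but even this toy shows the property being argued for: the output is fully under the author's control and reproducible.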
umognog@reddit
I doubt many people that use a computer today could write their own kernel, even using assembly but use a computer that has had modern os software and IDEs created by others before them (i thoroughly recommend learning assembly and creating hardware control programmes by the way, i found a number of skills through it).
I think AI is another step that in 40 years time, most people wont know our archaic ways just like most of us dont know assembly and hardware control.
kibblerz@reddit (OP)
I don't think your kernel example is really comparable. A lot of building something is "how do I get this to work?". When developing a kernel, you're trying to make the most basic things possible work. But our expectations of software are far higher than that. It's a long road from building a kernel to having functional networking, and the kernels that exist have been the efforts of countless engineers combined.
I don't think making kernels for fun was ever really a common hobby for programmers.
umognog@reddit
It's quite a far reach between original hardware interfacing and modern computers, but that's my point: AI is in its infancy, and in the future we will look back at now like we look at kernels today. Something we use, but don't understand.
mikeyj777@reddit
It's good for a noob to spin up an MVP. While it's great that people can get their ideas out, I don't think it's effective past that. Any time I've tried to do something substantial, I get so many small, hard-to-find errors. Good luck having it test its own stuff.
hitanthrope@reddit
I've had some fairly good experience with it. Not to the degree that I am particularly worried about being replaced by robot overlords, but when I have Github's copilot plugged into my IDE it will very often "typeahead" essentially what I was planning to write. I can ask it to do things like find and build missing tests and even get it doing code reviews and refactors of messy files.
At my day job there is some fairly old and obscure tech (particularly in the data platform layer) and I have found it really helpful in essentially generating "on the fly documentation", where I can explain what I want to achieve and it will give me some annotated examples. That's quite helpful.
It gets it wrong fairly often, and requires adult supervision, but it has certainly become a useful tool in my arsenal.
SilentBumblebee3225@reddit
Agree. GitHub copilot is amazing
Pozeidan@reddit
Cursor is like Copilot on steroids.
Siduron@reddit
I've always enjoyed Copilot but compared to Cursor it feels cumbersome and very slow.
Right-Tomatillo-6830@reddit
have you tried the latest copilot? MS has adapted and pretty much made a better version of cursor just in a plugin to vscode.
ILikeBubblyWater@reddit
CoPilot is years behind cursor
Right-Tomatillo-6830@reddit
I couldn't see how? Do you have examples? The idea that source tree awareness is a big thing as the sibling suggests doesn't seem quite right. I know aider does this internally and roo code.. I'm sure copilot does it (not a hard thing to implement). The rest is just tools and mcp integration which the latest copilot has too.. ? So I'm not really seeing where the gap is..
StorKirken@reddit
Copilot still doesn’t allow completions in the middle of a line, right? That’s been an occasionally super useful feature of Cursor for me. As well as the killer feature: intelligent cursor jump suggestions.
Relevant-Magic-Card@reddit
Does it have the same features? Like codebase understanding, semantic search, ability to use terminal, mention files etc?
Also, nothing is better than sonnet 3.7 at generating code
Right-Tomatillo-6830@reddit
I didn't notice much difference in actual usage. But consider this: Cursor built their software on top of VS Code. There's not a lot of moat there other than the name, and I'm not sure how well they will compete with multi-trillion-dollar MS in the long term. I don't even think OpenAI will. Notice that MS has taken a bit of a step back with funding them too.
DCoop25@reddit
Yeah it has every thing you listed. Maybe not completely model agnostic, but it does have Sonnet 3.7 and adds new models frequently.
Relevant-Magic-Card@reddit
Check the comparison table I shared; it's still quite different. But in Cursor you can have both 🤷♂️
716green@reddit
And windsurf with Claude 3.7 is like cursor on steroids
sam-sp@reddit
Have you tried agent mode in VS Code? combined with Claude as the model its doing a much better job of thinking than the other LLMs. If you combine it with MCP for tool calling, and prompt it with the MCP to call, it’s quite impressive, especially if you ask it to come up with a plan, and then execute on that plan.
nappiess@reddit
Ahh yes, using AI to code review code that AI wrote. What a great idea /s
hitanthrope@reddit
Haha well, I have two points here...
1) I didn't specifically say I ask the AI to review its own code; it's more a matter of having it review code written by humans. I'm still doing the review really, but sometimes AI spots interesting things that I miss.
2) Even if you did ask the AI to review code that AI had written, this is not really analogous to a human reviewing their own code, but more like one human reviewing code written by a similarly skilled human, which is something we do all the time. If you ask an LLM to write a document, and then ask it to review that document, it will often find legitimate improvements in its own stuff. If we wanted to personify an LLM, it would be request-scoped.
daredevil82@reddit
so how do you think AI models are validated?
they're validated by being reviewed by another AI. That's the standard practice.
hitanthrope@reddit
I don't really understand where you think I have claimed, suggested or implied that this doesn't happen. That might be my fault if I wasn't clear, but yes, I'm aware.
daredevil82@reddit
It was more in response to this. Even if you're not doing it, it definitely is happening, and a great case of a GIGO loop, unfortunately
hitanthrope@reddit
Oh, I see what you mean. Yes. I really took the original comment to be more about user level stuff. Asking AI to produce code and then asking the same AI to review it. My response was more that, firstly I didn't say that was what I was doing, but secondly it wouldn't be as crazy as it might sound if I did.
One of the reasons I might come across as slightly defensive about this technology is that I was, for quite a few years, the CTO of an AI company. As we are on an experienced devs sub, I assume everybody here will immediately recognise that this makes me an automatic expert on precisely nothing, indeed I had a team of much much smarter people than me doing all the really interesting research stuff. I mostly helped build them infrastructure and helped to make sure we could pay them.
That being said, it does put me in a position to recognise the amazing leap forward we have made with these latest models. We had some cool stuff at the company, but nothing like what we get to play with today.
I think it is very cool (though far from perfect) and I do take a little bit of issue with the "only idiots find value in it" sentiment that was being proposed (and later doubled down on) by the other responder.
nappiess@reddit
Yup, it's clear to me that the people who actually see great benefits from AI were "-10x" engineers, and using AI is just capable of bringing them up to baseline. Anyone actually above average tends to report relatively minor productivity increases.
hitanthrope@reddit
I really don't think you need to frame it in this kind of diminishing way. It's just another tool that we have. For some stuff it works pretty well, for other stuff it doesn't.
It's not really much more complicated than that.
I got by for almost 40 years without it, so I don't think it's a necessity but it's a useful tool to have around.
theweeJoe@reddit
Because as a DevOps engineer, if I have to figure out why that 10-year-old piece of Jinja code is broken in the one place it's used, and I know what needs to be done to fix it but don't know the syntax, it's easier to describe to an LLM in plain sentences what I need than to read through unmaintained documentation. I get it working again while I concentrate on moving the software stack to a better place.
It allows me to get an answer that points me in the right direction, instead of having to trawl through old Stack Overflow posts to see if anyone else has experienced the same issue, with some indecipherable output that only seems to affect your hardware, and people offering contradictory, outdated solutions because the post is five versions behind.
Scenarios like this are most of my use case for ChatGPT, and it's wonderful as a tool. I feel though that my use case is a lot different from what devs are trying to use ChatGPT for. Pure development means that understanding code and how to problem-solve yourself is quite important within your framework. I see a lot of lazy use by young devs getting it to code for them, and the concept of 'vibe coding', which isn't really coding, but if it solves engineering issues then it's valid.
Engineering isn't about being a purist; it means investigating potentially immature technology and seeing if it can make you more productive or your product better. ChatGPT is a useful tool. It's still an immature tool, but if all that's directed at it is immature rants about how it's ruining people's brains and is lazy, those people will learn quickly whether it's a productive tool or not within a couple of code reviews. If it isn't good for that use case currently, dump it and move on; don't denounce the entire thing, because that is myopic.
ChronicOW@reddit
I swear, if I have to see one more post on LinkedIn about 'AI is the future', 'AI will replace all of us', 'insert generic AI slop post here', imma lose my shit lmfao. I do use AI to code in languages I'm not proficient in myself, but it's more of a syntax dictionary. All these execs with their new AI company... shit is getting out of hand. First of all it's not AI, it's a large language model, and while it can help some people be more productive, it cannot replace humans, not even close, maybe in 50 years if ever. I swear everybody is just on the hype train until about 2-4 years from now, when all of this shit goes tits up and they all go bankrupt.
I saw a post the other day about a sales guy mocking the AI hype, and the comments were full of people agreeing while having ** AI ** in their job title / bio. You can't make this shit up 🤣
Inner_Engineer@reddit
I use it all the time. Great for getting things started. I’ll ask it about what libraries/modules/(whatever the fuck C# uses), exist in a given language to do what I’m trying to do.
And it’s pretty useful for going between languages. I’ve had it convert medium-sized blocks from one to another. This is only mildly time saving since I still end up going through line by line to verify.
I also ask it best practice questions to keep my code readable for the next poor bastard. Lord give them strength.
How much I trust it varies from day to day but usually it gives good feedback.
mophead111001@reddit
I'm in a similar position currently. The director of the company I work for has fully embraced the AI hype - claiming anyone not using AI will be obsolete soon. Just today, I had to spend 10 minutes explaining (to no avail) why I can't just ask ChatGPT to write a SELECT query without providing the db schema (or at least the columns that I need) - and at that point I might as well write it myself.
Given my lack of experience (5 months since graduating + 1 year internship) I shouldn't really be complaining. At least I have a job in this market but I'm not exactly liking the trajectory of the industry if this is such a common experience.
hkric41six@reddit
I don't think anyone with actual talent or experience who works on real code in production is productive with these shit tools. I have not seen it once. Every time I try it makes my job harder. The AI is consistently performing at the level of a Junior who wouldn't pass a screening interview.
kibblerz@reddit (OP)
I feel the same, yet my former boss AND my former mentor swear by it. I don't get how this shit is effective. I'd have to be a really shitty programmer for it to save time at this rate..
hkric41six@reddit
Yea it's like that everywhere. Usually those older hands have been out of the game so long that they've become irrelevant and just want to feel like they are somehow staying ahead of the curve. We've seen this before again and again. In the early 2000s it was all Java and "managed code" and "frameworks" blah blah blah. No one was going to use C and C++ anymore, compile once run everywhere! It wasn't like it didn't happen, but it didn't live up to the hype. Of course AI is the same, it's always the same.
I just ignore them and continue to do good work, and no one asks if I used AI. I was forced to install Copilot, but no one actually cares if I use it or not, and I don't. Although "was this AI?" is becoming a common PR comment for me lately to point out ridiculous code that is disastrously broken. It always is AI, because even mediocre programmers aren't that dumb.
kibblerz@reddit (OP)
Glad im not the only one. With how people have been talking, it seemed really bizarre that I kept getting shitty results that took more effort than just writing the code. Most people must just be really bad at coding..
hkric41six@reddit
Truth is, with the rise of CS, the actual percentage of good programmers went down from what was already a low single-digit percent. If you are actually capable, with an OG nerd background where you started coding when you were a kid, you will only be in greater demand going forward. The work cleaning up this disaster is going to be unlimited and lucrative, and there will never be a shortcut to getting to that skill, meaning it will never saturate. I think that's really what we see - the market is flooded with kids who learned CS because they heard the salaries were high, but will never be outside the 50th percentile of skill. Those are the ones going mad over AI, because it is the only "easy" way forward.
mikelson_@reddit
Skill issue
kibblerz@reddit (OP)
Using AI is not a skill lol
mikelson_@reddit
Ok boomer
AcesAgainstKings@reddit
Yeah. I'd look into Cursor rules and iterating on your approach.
Would you ask a junior to complete a task and never look at it again? Of course not. But you can keep asking it to be a little better each time, and you don't need to worry about its feelings when you do.
PureRepresentative9@reddit
In what world is asking a junior to do something faster than doing it myself?
The vast majority of programming time goes to understanding requirements and checking to make sure they're followed.
The actual typing is only 5-10% of the time.
Strus@reddit
You can ask junior to do something so that you can do something else in the same time. Think about that.
I use Cursor in the background all the time. I create a few git worktrees, ask Cursor to do some things there that are easy/trivial but that I never have time to do (like fixing linter errors in a legacy codebase that hasn't had a linter configured for years), then get back later to review the files and create PRs.
Also, for some things it's obviously faster than me. Like if I need a bash script that is trivial to explain in a prompt: Cursor will write it in a few seconds, whereas I would definitely need longer, because I never remember the details of the syntax and have to look them up.
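The worktree setup described above can be sketched in a few commands. This is a hypothetical demo (the repo and branch names are invented), and it builds a throwaway repo in a temp directory so it's safe to run anywhere:

```shell
# Hypothetical sketch of the background-agent worktree flow: a second
# checkout the agent can work in while your main checkout stays clean.
set -e
cd "$(mktemp -d)"                      # throwaway area for the demo
git init -q myrepo && cd myrepo
git -c user.email=me@example.com -c user.name=me \
    commit -q --allow-empty -m "init"

# One worktree per background task, each on its own branch:
git worktree add -q -b chore/lint-fixes ../myrepo-lint
# (point Cursor or another agent at ../myrepo-lint and let it grind)

# Later: review the agent's diff and open a PR from that branch.
git -C ../myrepo-lint status -s        # see what it changed
git worktree list                      # all active checkouts
```

Each worktree is a full checkout on its own branch, so an agent grinding away in one can't dirty the working copy you're typing in.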
PureRepresentative9@reddit
There's literally nothing a junior can do, that I then need to review, that isn't faster for me to do directly.
If they're highly skilled enough to be independent and I don't need to check and fix their work, they're no longer juniors lol
Strus@reddit
You are missing the point.
PureRepresentative9@reddit
When you get one, feel free to share it with the rest of us
digitizemd@reddit
The point is clear. You can ask AI (a junior) to do something for you then work on your own thing. Then when it's done (like in 10 seconds) and gives you an awful, non working solution, you can tell it that it doesn't work. Then it will generate a new solution -- this one with a made up API call. When you tell it that, it will give you the first solution that doesn't work. I'm not sure what you're missing here!
PureRepresentative9@reddit
Ya know
I think we've literally generated job security for us senior devs lol
kibblerz@reddit (OP)
It just seems like an extra step to writing code IMO
AcesAgainstKings@reddit
I guess using an IDE is also an extra step 🤷♂️
It's a new tool. At first I thought it was garbage, then the penny dropped. It has definitely helped my productivity, but if you think it will write 100% of your code for you then we're not there yet.
FetaMight@reddit
If you think an IDE is an extra step then you have no idea what your IDE is doing for you.
AcesAgainstKings@reddit
Think you might have missed the point
FetaMight@reddit
No u?
AcesAgainstKings@reddit
My point was that yes, adding AI into your workflow for building software might be "an extra step", but you get a hell of a lot of benefit for doing so. Just like how you could write all your code in Notepad, but using an IDE comes with a tonne of benefits.
Agents can be super helpful, but you've got to learn how to use them.
I barely even post here so not quite sure how you've come to that conclusion, but I'm not getting into a dick measuring contest.
FetaMight@reddit
You're not getting my point.
IDEs provide reliable and consistent benefits. Their automations are deterministic, thoroughly tested and well documented. They are things you could do manually if you wanted to because every automated step is understood.
AI agents provide automations that are in no way consistent or reliable. They may improve your efficiency, but by your own admission this requires training yourself on them. And even then, they have a hard limit on what they can offer before they start to hallucinate wildly. Which brings me to their fatal flaw as a tool.
Any tool that deliberately hides errors (hallucinations) instead of failing early is a BIG NO for me when it comes to professional work.
And nobody's measuring dicks here. You don't have the experience to participate in this sub. And you certainly don't have the experience to be assessing AI as a viable tool for experienced devs.
AcesAgainstKings@reddit
It kind of just feels like you came into this thread to pick a fight. You've got a problem with AI and you need everyone to know it.
You're right, IDEs are very different to AI Agents. I didn't say they were the same. But someone suggesting that adding AI to their coding process is just "adding an extra step" is a crazy simplification and honestly quite an ignorant statement.
FetaMight@reddit
> It kind of just feels like you came into this thread to pick a fight. You've got a problem with AI and you need everyone to know it.
That wasn't my intention, but I can admit my execution was pretty poor.
I don't have a problem with AI per se. I use it myself and I'm developing stuff on top of AI. I see the potential.
I'm just particularly irked by people who misrepresent its abilities. I'm not accusing you of being that kind of person (not anymore, anyway. sorry). I'm admitting I went off a bit half-cocked because of this other kind of person.
They're the same kind of people who lectured on and on about Blockchain and Smart Contracts, or NFTs without any understanding of what they were preaching.
I've been seeing them in here more and more and it's just annoying.
Anyway, sorry for being a dick.
kibblerz@reddit (OP)
What I meant about it being an extra step is that you have to think about how to explain what you want it to do. At least for me, that tends to be more difficult than just writing the code. I guess I didn't get into coding to be social lol.
Randromeda2172@reddit
Dog, being able to verbalize what you're trying to accomplish isn't rocket science.
It's a text box, let's calm down. If typing into Google isn't too social then neither is this. You're not supposed to conversate with the LLM.
creaturefeature16@reddit
I definitely agree that there have been many times where writing the requirements in "natural language" feels as arduous as just writing it yourself, if not more so, and in those cases... I do, but I still leverage the LLM for those highly specific tasks because my two human hands are no match for 100k GPUs.
There have also been instances where I've sat down and really hammered out a decent plan using "pseudo-code"... not quite English, not quite syntactically correct code, and then I just assign a language to it and... huzzah! I can get pretty complex logic done in very short order. And since I set up my coding standards to get fed into the system prompt, it abides by all the formatting and architectural requirements I need.
FetaMight@reddit
Can you give a concrete example of this complex logic?
Nobody ever seems to go into any detail about the problems they solve.
It's hard to tell if people are getting it to do 2D pathfinding or develop innovative lossy compression algorithms that retain perceptual fidelity.
kibblerz@reddit (OP)
I get what you mean about it being a typing assistant, but I honestly believe fluency in vim probably brings more productivity.
IMO coding is best when I can bypass the English portions of my brain entirely lol. I don't think about my code entirely in words though, lots of it is more intuitive at this point. English is a bottleneck 😂
creaturefeature16@reddit
For sure, I agree on a lot of fronts. Reminds me of this comic.
Lately I just communicate with the LLM using the same code-ish way that my brain thinks about these things and it works surprisingly well. Like, here's an example prompt I had recently:
This is how I might write this out elsewhere or in my head, and using some of the more recent models like Claude 3.7 and Gemini 2.5, I am often quite impressed by how accurate they are even when I'm a bit light on details. And while it's running those tasks, I'm often working on something else and I can swing by and check the work. I try to parse out smaller tasks, but I really don't want to spend all my time doing code reviews, either! 😅 In these cases though, the time saved really adds up over the course of a project.
Give it another shot, man. You sound like you like to be in control, and tools like Cursor or Cline really let you rein these models in and ensure a LOT of control over their outputs. I promise you'll enjoy them more if you put some time into fine-tuning the workflow a bit. My hot take is that these are power tools meant for power users, which is counterintuitive given their low bar for entry. Non-coders will hit a hard ceiling with them, but for skilled developers, there is no ceiling.
absqua@reddit
This is my experience and how I use these tools. There's no doubt they've increased my productivity, not least by removing psychological barriers to getting started on a task. There's just way less trepidation wading into a language or technology I'm not fluent in.
PureRepresentative9@reddit
I agree, the requirements document is far, far larger than the code, both literally and in terms of effort.
The only way that wouldn't be true is if these other people were writing everything by hand and not using libraries.
trcrtps@reddit
I have the same experience. I still think it can be garbage, but usually useful. When you get trapped in a circular conversation though it feels like a k-hole mixed with a stroke and I think it's garbage again.
HQxMnbS@reddit
It’s more like asking a junior to complete a task and they forget everything about the entire code base each time you ask. It’s just not good with large code bases
bikeous23@reddit
Microsoft calls their AI “Copilot”. It is an assistant, a tool in your toolbox so to speak. It is not going to do your job for you otherwise they would have called it “Pilot.”
bittemitallem@reddit
I think devs shouldn't look at it like it's black and white, vibe coding vs. assembly. IMO there's still tons of time to be saved by using LLMs when it makes sense.
Bladesodoom@reddit
I just try to make things that work
mrhinsh@reddit
I've been using AI (specifically ChatGPT and Copilot) to write code for the last 6-12 months.
For much of my work it's made me significantly more effective and shortened the feedback loops.
AI does however write shitty assed code that barely works. Agents are going to make that better (seen some cool stuff that's coming at Build), but for now it's a copilot. It's great for:
There are many instances where it just makes shit up, or gives the worst possible option... And that's where your dev brain comes in. I don't know how many times I have to say "would that not be easier with [insert construct]" and the AI is like, "Yeah, you're right..."
Junior coders should not use AI, experienced coders should.
Watch out for it just making shit up when it does not know as well. It sometimes takes me a while to realise it's in a loop because it's ignorant and still wants to answer.
Apprehensive_Elk4041@reddit
The only use I have for it is as a refresher for things I already knew to some degree (basically just as hints). From what I've seen, it's basically a far worse version of Stack Overflow, since there's no context and no arguments about the pros and cons.
I have had use for it with a section of API that I haven't used in a while, or for some simple questions, but certainly not for anything complex, as from what I've seen the responses you get out are very often incorrect (not in intentional ways like taking out a semi-colon to make sure people do some work for it) and all are presented with a lot of window dressing that makes them look legit.
The other issue is that it has no idea what it's doing, so you're likely to see anti-patterns at least at the level you find in the underlying source code it's built on, which, if they're using public GitHub-type sources, is going to be A LOT. Most individuals who publish open-source code are early in their careers and are trying to present a portfolio to get a job. Most work done by more experienced people is proprietary to the company they work for, so it never sees the light of day. I'm not saying that all code held privately is better, but I think it's more valued by the people who maintain it than much of the long-forgotten early-career code available publicly.
I can see it as a tool for an experienced developer who has already forgotten a lot in their travels, certainly not as a tool for beginning level developers.
Trying to run a company by 'vibe coding' with a bunch of juniors is... certainly not a company that I'd want to be invested in personally in any way.
ShivonQ@reddit
It only works if you are solving a pretty simple problem.
Sometimes it can get you almost there, then it's on you to make it actually work.
Or alternatively, sometimes I'll use it and just be like "how many effective ways could you solve this specific problem?" And then see what it comes up with. The biggest win from those scenarios is often discovering a package/library that does x for you.
morentg@reddit
I think coming to AI expecting to outsource critical thinking to it is a grave mistake. It's a powerful tool, a force multiplier that lets you outsource menial tasks to improve efficiency. But expecting it to write functional large-scale projects is going to be hell, at least in the near term. There's too much at stake to let AI randomly hallucinate and create vulnerabilities while making large amounts of code unmaintainable.
Stellariser@reddit
It's a handy assistant, but not magic. You need to learn how and when to use it. Your Vim macros and regexes aren't going to automatically write sensible comments, for instance.
notParticularlyAnony@reddit
This sub has basically become old man yells at cloud
sfryder08@reddit
For real. It can do what I can do, with my 15 years of experience, 1000x faster. Oh no, it cost you 50 cents? Great, I can have my night back. It's not always perfect, but you can get 90% of the way there if you can explain what you want in detail, then fill in the rest. Use the tools available to you and give up your ego.
mitchell_moves@reddit
These are my most common use cases:
* sanity-check and brainstorm high-level designs
* error debugging
* ETL flows
* mass refactoring, sometimes directly and sometimes with an intermediate script
* subject-matter-agnostic modifications, e.g. wrapping existing logic in parallelism mechanisms
* proofreading copy
* math modeling, i.e. producing a verifiable estimate of something that is too abstract to precisely quantify
Jeremy_Monster_Cock@reddit
The AI codes wonderful things if you have general knowledge and communicate with it as if it were a colleague. Free Grok 3 surprised me a lot with its qualities; OpenAI knows everything; and Claude knows how to bring grace and elegance, making code that's beautiful and pleasant to read without too much bullshit, and that looks like a release. :) That's it too
rydan@reddit
I mean I set up some automation on AWS that has already saved me money. And that was with the free ChatGPT. Maybe you just don't know what you are doing. If you hired a guy from India on Upwork to make that drawer it would have taken 2 days and $100. $0.50 is dirt cheap.
AssistPrestigious708@reddit
My former boss constantly promoted the capabilities of AI within the company and required every position to find ways to leverage AI to improve productivity. Because of this, some employees were laid off. He firmly believed that AI could do the work of several people.
Medical-Ask7149@reddit
The only thing I've found AI useful for is quick answers to things I forgot. Like, how does this function work in this library? It provides an answer that is wrong, but it jogs my memory enough that I remember how it actually works. That's the only good thing I've found it does. The other thing, not programming-related, is brainstorming website copy or prettier lorem ipsum.
OtterZoomer@reddit
It’s like working with an idiot savant golden retriever. Keep it on a short leash and make sure you review everything it does and it can be a net win.
kayakyakr@reddit
What models you use has a lot to do with it. The models vary wildly in quality. GPT-4o, for example, is trash at coding anything more than a unit test. Gemini 2.5 is much more capable.
The workflow also matters a lot. Most tools are trying to treat the AI as a sr engineer who does the full implementation or a peer that works beside you. I'm working on a flow that gets the AI out of the editor entirely and into a code review sort of workflow. (Trying to launch as a GitHub app, won't self-promote here, though)
Treat it like a junior dev and be very particular in what you hand off, and you'll find more use cases where it can help. Try different tools: Cursor may be awful for the way you approach problems, but Aider might work. Or a tool that doesn't try to work with you at all.
Far-Income-282@reddit
Yea this. I was saying above that I'm a tech lead and I've been able to just code the grunt-work tasks with AI and not be miserable, while giving the more fun bits and bobs to my team. Historically I'd have to give an SE1 a sprint to do something like a package upgrade, but I did one in my free time this week with AI as my junior dev.
TruthOf42@reddit
Treating it like a junior dev is absolutely how I use it. I just used it a lot for creating tests where there is just a lot of repetitive code, but it follows a pattern. It did pretty good at this.
Lerke@reddit
No, I feel you. I find the development experience / feedback loop too slow and unreliable to be practical: translate your requirements into a prompt, wait for the model to throw some code your way, get it running in your program and hope it compiles in one go. It just doesn't spark joy, on a personal or professional level, to develop something with such an imprecise-feeling tool, and to be left with a bunch of source code I will still have to read, edit, and understand anyhow (at which point I may as well have written it myself), or to subject my coworkers to it during merge reviews or any collaborative setting.
If all one cares about is speed and the time between picking up tickets and creating a merge request, then cooking up generated code all day every day is likely to outperform any human worker in due time. But I do not believe this is the only metric that matters in the long run.
The argument equating state-of-the-art AI programming models to a junior coworker is nice, but guiding junior coworkers often slows one down, which I feel is also the case with LLMs.
And none of this even touches the ethics or business sense of transmitting parts of your source code to some third party, relying on a 'trust me bro' assurance that they won't store and process this data long-term.
I have found them useful for essentially creating tutorials or rundowns on the usage of specific libraries, patterns or methodologies however. Anything essentially self-contained and not overly complicated.
50 cents is peanuts in comparison. The real metric would be whether or not your former boss and/or vibe-coding coworkers actually have a significant edge in development speed (with or without some acceptable loss in quality) over you.
kibblerz@reddit (OP)
One of my big issues with the AI hype is that so many developers act like code generation is new, when in reality programmers have been generating code with far more precise methods for a while now.
When making APIs, I start with the database, then use SQLc to generate the types for a golang project, create a GraphQL schema, and then use gqlgen to link the schema with the SQLc types. What's left for me afterwards is filling in a few resolvers with rather simple and straightforward logic.
All these vibe coders would be blown away by the capabilities of classic code generation, and you don't have to be so skeptical about the results.
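For readers unfamiliar with the classic approach: the core idea is that a schema description drives a template, so the output is identical on every run, with no model and no sampling. Here's a minimal sketch in that spirit (this is not sqlc or gqlgen themselves; the `Table`/`Column` types and the template are invented for illustration):

```go
// Hypothetical sketch of "classic" deterministic code generation:
// a schema description is fed through a template to emit Go source.
package main

import (
	"fmt"
	"strings"
	"text/template"
)

type Column struct{ Name, GoType string }

type Table struct {
	Name    string
	Columns []Column
}

// structTmpl turns a Table description into a Go struct declaration.
const structTmpl = `type {{.Name}} struct {
{{- range .Columns}}
	{{.Name}} {{.GoType}}
{{- end}}
}`

func generate(t Table) string {
	var b strings.Builder
	template.Must(template.New("struct").Parse(structTmpl)).Execute(&b, t)
	return b.String()
}

func main() {
	users := Table{Name: "User", Columns: []Column{
		{"ID", "int64"},
		{"Email", "string"},
	}}
	fmt.Println(generate(users))
	// prints:
	// type User struct {
	// 	ID int64
	// 	Email string
	// }
}
```

Tools like sqlc and gqlgen are this idea at scale: because the mapping from schema to code is deterministic, the output never hallucinates an API.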
Far-Income-282@reddit
I think we've always had code generation, but the LLMs on top of it are nifty.
I'm a tech lead and I feel like I enjoy using AI to do all the grunt work of "set up all my packages to do X" or "make my HTTP client, but use the authentication method from this file". I also use AI to write the PowerShell scripts that do the "okay, now find everywhere I didn't set this variable and set it this way for me". Honestly, I'm not sure why my prompts some days are fuego and other days it basically tells me, "Look, you lazy SOB, just try replacing them yourself. This is a waste of my processing power."
Previously I was never able to check in my own code - it was like once a quarter. But now I've been checking in small fixes once a week (think tribal-knowledge fixes, like this package needs an upgrade or this database schema had a change).
Independent-Bag-8811@reddit
I like it for SQL. I like that I can pretty much build DBs in a diagram app like Lucidchart, just upload a screenshot, and have my SQL file ready to go. It's also really good at "hey, here is some random unstructured crap, plz format this as SQL inserts".
It's honestly good enough at SQL that I'm surprised there isn't an LLM Postgres extension I can just use as my interface to Postgres yet.
TDHooligan_@reddit
I spent the tenner or so on Copilot a while ago as an experiment, expecting very little.
In backend, I'll stop for a second to think about a function or class. Surprise, autocomplete's spot on and 1 minute of work has been avoided. It's not super frequent, but it does add up.
Game dev? It's nearly useless.
Ops, though...? Magic. Copilot will take some complicated IaC requirements and give you the boilerplate. Most of the relevant parameters are there with (mostly) correct values, and consistent(ish) naming conventions. As long as you're telling it to build the right things in the right place. A little time to clean it up and it's good.
So it's always a little shaky. The quality is dependent upon the language/space you're working in. But, if and when it does work, it's a serious force multiplier.
If you're uncertain, it's one expensive coffee to cement how useful it is.
RobertB44@reddit
I have found 2 use cases AI is decent at: writing unit tests and refactoring. However, I don't ever vibe code. I always review the changes the AI made and fix the code as needed. Overall I'd say AI gives me a 5-10% productivity boost.
I have tried building features with AI before, but everything that is not self-contained ends in disaster. Maybe I could have the AI do it by breaking it down into smaller tasks and iterating over and over, but at that point I can write it myself faster, so why bother.
SagePoly@reddit
I’ve tried cursor AI on complete vibe mode with mixed results. I’ll have to circle back and try again.
Right now I’ve been using Copilot and ChatGPT. I find that small snippets with reviews works well. The AI gets some things wrong.
The art of writing beautiful maintainable code is dying. The future is all domain knowledge, designing, working with AI, etc
space_iio@reddit
It's a new tool. You have to learn how to use it and when to use it. Understand its limitations
flatjarbinks@reddit
I use Cursor a lot, with custom rules per project. The autocomplete is life-saving for repetitive tasks. The agent is okay for writing code, especially for utility classes or functions. Across huge codebases I also find it useful for explaining errors or analyzing parts that are not well understood. Finally, it's also great for creating fixtures or instant documentation.
Nize@reddit
This might be controversial but I genuinely think if an engineer can't get at least some productivity benefit out of AI tooling then they are actively resisting it. Google have confirmed that 25% of their production code was written by AI last year.
kibblerz@reddit (OP)
Lines of code is not a good measure for productivity...
Nize@reddit
Not as a raw number, but certainly as a percentage of your production code.
croakstar@reddit
It’s much better used as a pair-programming partner. I’m a staff-level software engineer and I mostly get it to explain concepts I’m not familiar with in a way I understand more easily (I learn primarily by associating concepts with real-life experiences, e.g. telemetry being a security camera system to spy on your internal processes).
Yeah, I let it write code sometimes, but I always understand the code enough to realize when it has made a mistake. Then I’ll either explain to it the mistake it made and let it fix it or I’ll just fix it myself. I do try and still write code myself, but for boring code similar to other chunks of code in my app? Yeah I let it do that. If I’m working with tech I’m not familiar with I generally do all the coding myself and pretend as if the AI is a subject matter expert available to ask questions.
Also, I use Cursor but I have an enterprise license so I don’t run into many token count issues, but in general if I prefer something to be done a certain way I usually end up getting the AI to generate a markdown file that describes how I like a certain thing to be done. Then I can pass that in as context in any other chat. So for example, we just started adopting Clean Architecture and honestly it goes a bit over my head conceptually. I had the AI generate a best practices markdown document for building a feature from scratch in the newly updated architecture. Then I immediately used that markdown file as context in addition to a JIRA ticket description and told it to create all the files it thinks I will need for the feature in the right place with inline documentation about why the file is where it is. Did it do a great job? No. Did it do a good job? I think so. It saved me a lot of time.
grahamulax@reddit
Do you pay?
Actual__Wizard@reddit
This should be understood already: Highly skilled programmers are typically hindered by these tools, not helped...
Chronic-overindulger@reddit
Yet another incorrect post on here about AI being useless. Judging by your tone in this post, it’s clear that you are not giving it a fair shot.
hackedieter@reddit
Yes. But only when you know what needs to be done. Just asking for features doesn't work. I see it as a golden rubber duck. Just try to describe your idea of how to implement something. Ask it to challenge your plan. Ask for pitfalls, whether you forgot something, whether it's even a good idea, etc. Tell it to wait until you agree on a good plan. Works most of the time.
Big-Environment8320@reddit
As a developer you are responsible for understanding the codebase, whether you wrote it or not. Now you can pore over code written by someone/something else, if that's how you like spending your time.
But don’t forget. If it breaks. You are the one who needs to fix it.
santzu59@reddit
I think the key is being a good coder prior to using the tools. I personally use them quite a bit but I don’t ask them to move mountains. I use them as part of my own development process where it will speed up method creation, refactors, stuff like that but like we’re miles away from just telling it to go to town on some huge feature that needs tons of context. Also everything that generates needs to be heavily reviewed.
ConsulIncitatus@reddit
No one wants to see past the current iteration of a technology's flaws to where it's going.
No one wants to recognize that every big tech company on the planet is spending almost all of their resources trying to solve these problems with the current generation of AI technology.
Everyone wants to hope that they fail and this is as good as it's going to get and it will never be enough.
No one wants to admit that their skill set is obsolete.
AI already knows more than you, but you are better at applying your knowledge.
For now, anyway.
shesprettytechnical@reddit
I feel like there's some real Dunning-Kruger going on with folks who are huge proponents of 100% AI coding. If you have only superficial knowledge of how good, enterprise-grade software is built and no knowledge of the challenges involved, it's easy to see why things like Bolt or Cursor seem like silver bullets.
Good for prototyping, bad for production.
sir_racho@reddit
LLMs lack precision. Try to design something visual: it’s so frustrating and tiresome, and never quite right (for me at least). I suspect all the “wow” results are essentially luck. Given all this, I don’t see LLMs being good for specific web designs at all. They can do okay at generic stuff, but beyond that I have doubts.
EducationalZombie538@reddit
*goes to previous project, gets drawer*
yeah, if you've done shit before i'm not sure ai is as much of a time save as people claim. it's still great in some cases though
caksters@reddit
I think you need to learn how to use it effectively.
I don’t use it to code for me complicated systems, but I do use it to help me to code and find useful content that I had not thought about (helps to brainstorm).
I know what good looks like so I have written prompts that help me to refactor code, include docstrings, more meaningful function and variable names etc.
It definitely improves my productivity.
Hawkes75@reddit
Right now AI is good for saving you trips to Google sometimes and that's about it. A recent issue yielded about ten different answers from AI, all of which were buggy deprecated messes, and then I found an article written by a human that solved the problem correctly and quickly.
Weak-Attorney-3421@reddit
Completely agree. Trying to get a React project up with shadcn and the new version of Tailwind with AI was quite literally impossible, as it was using Tailwind 2.0 docs that were no longer compatible with the new shadcn library. Devs love changing stuff, and AI hasn't gotten good enough to scrape new docs every day, unfortunately, so for now it seems shit for anything modern to me. I also think tools like Cursor take away a lot of my thinking process.
Standard--Yam@reddit
I only use it for brainstorming and ideation. Autocomplete of code slows me down because I gotta adjust it. Just chat based convo like I’m asking a coworker for advice helps.
vguleaev@reddit
It can be useful, don't give up. It just takes a lot of tries; simply keep at it and you'll slowly start to like it.
bynaryum@reddit
I’m sure it's helping some people. For me it's a mixed bag. As others have said, it’s mostly just been a Google and StackOverflow replacement with, IMHO, roughly the same success rate (which is nowhere near 100%).
ChuyStyle@reddit
Skill issue
marmot1101@reddit
I find it useful, in certain contexts. I tend to hop languages a lot, so it's helpful for remembering rote syntax. It's good for describing error messages. I don't like it auto-completing all but the most trivial things, and I'm not a fan of asking the chat a question and it hammering in 1000 suggestions that I probably don't want. And I refuse to accept any change that I don't 100% understand with my wetware. If something goes sideways I'm going to have to debug it, so I'd rather spend the upfront time understanding than trying to grok things while I have a dozen people staring at me in an incident call.
HolyPommeDeTerre@reddit
Does it actually help you be more productive?
Genuine question. You were able to do everything you mentioned before LLMs I guess (hope?). But now that you can ask some questions in some contexts, is it really better than before?
I ask because I spend my time explaining implicit things to the LLM so it has context. At some point, I am explaining everything and then checking that everything follows what I asked. In the end, I don't get more productive. It consumes more energy and takes more time to get to the same state.
Easy-Philosophy-214@reddit
I agree. And also - isn't it less fun and enjoyable?
marmot1101@reddit
Yes, I would say I’m more productive. Not in some kind of earth shattering way, but the tools are good and do produce right-ish information quickly. I think my best analogy would be when stack overflow came out and got traction. There was nothing on stack overflow that I couldn’t have figured out reading javadocs. Or if I was really jammed up I could remove the paywall banners from experts exchange. But stack overflow had it in all one place and people were face planting into the same problem I had a lot of the time. It didn’t replace the need or speed of reffing javadocs, but it helped make the information I wanted faster than it was before.
I don’t fiddle with tools. I’m a mostly defaults guy. If there’s something worth some customization I’m happy to take the time, but it has to be worth the time. I use windsurf because it’s mostly all ready already. And when it gets in my way I disable all the autocompletes and Cascade for a while so that I can barf out the code I already know how to write. Then turn it back on. The amount of time I’ve spent optimizing my ai environment is near zero. I could probably get more out of the tools, and when I have a need I’ll do so. But for now I’ll take my keyboard shortcut for ChatGPT, and my not at all customized windsurf, and shave 5-10% off of time spent coding.
I’m in the devops/production land so that’s not a lot of time. Probably get more true usefulness out of ChatGPT conversations about “hey what are my options for doing zyx” than code complete. For someone who learns by talking that helps me annoy those around me less. And my poor duck finally gets to have some goddamn peace and quiet for a change.
As is always the case, YMMV.
Toph_is_bad_ass@reddit
If you write any amount of boilerplate it's hugely helpful. That's where I see the most productivity gains. Our apps have a ton of data models and validation, and it rocks for that.
iagovar@reddit
Useful when you hate pandas API too.
marmot1101@reddit
Oh man, I'm actually looking forward to my bi-annual glue scripting that inevitably pops up after my pandas/python in general context is gone.
iagovar@reddit
You'll always hate pandas, no matter the context, no matter if you use ai or not.
MetronomyC@reddit
10000% this. If I don’t understand it, I don’t want it. I only find it helpful when I’m switching back to a language I haven’t used in forever and need to refresh on some syntax/grammar specifics (I’m looking at you, Java).
the_fresh_cucumber@reddit
Same situation here
The only thing that gets me miffed is that it actually sucks at some of the autocompleting.
Sometimes I'm editing config files and ask it to just "use this list and add an entry for each item in the yaml file using the pattern I already used" and it somehow fails. Literal intern-tier copy-paste work that takes nothing beyond the ability to type. Then I have to go write a little script, which takes 15 minutes.
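The "little script" for that kind of copy-paste expansion can be a one-off like this sketch; the `services` key, `image`, and `replicas` fields are hypothetical stand-ins for whatever the real config uses:

```python
# Throwaway script: expand a list of names into repeated YAML entries
# following an existing pattern. The template fields are invented
# stand-ins for the real config's shape.
TEMPLATE = """\
  - name: {name}
    image: registry.local/{name}:latest
    replicas: 2
"""

def render_entries(names):
    """Return a YAML fragment with one templated entry per name."""
    return "".join(TEMPLATE.format(name=n) for n in names)

if __name__ == "__main__":
    print("services:")
    print(render_entries(["auth", "billing", "search"]), end="")
```

Fifteen minutes of string templating, but at least it behaves deterministically.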
WoodenPresence1917@reddit
I experience a lot of this. Failing at a simple task even with guidance. Now I've spent 10 minutes asking AI to solve something I could've done myself in that time if I wasn't feeling lazy.
At least if I was asking a junior colleague to do it, I'd be teaching them something
Intelligent_Water_79@reddit
There's this thing in psychology called schemas. Humans see what they expect to see rather than what is there.
So the worst is when it auto completes with something that looks almost exactly like what I wanted, but isn't. That can waste a lot of time.
Far_Engineering_625@reddit
And don't get me started on the false hope I keep getting when I submit a decent prompt (as opposed to my normal "refactor this") to help it actually do the task. I sit there watching it generate 50 lines of config or "prettifying" some JSON-like code, and it just turns it into a mess by changing vars/names/code etc., and I go back to doing it manually after wasting 5 minutes trying to believe in it.
monsoon-man@reddit
Boy, how quickly muscle memory from one domain creeps into another.
A month with PHP and I started adding '$' in my Rust code, and foreach started replacing 'for'!!
FearsomeHippo@reddit
This. It’s more & more useful the further you get from your expertise or comfort zone.
You can switch frameworks or languages with relative ease, then get up to speed on both the APIs and the idioms of the platform. Debugging is also WAY easier.
When it comes to languages or frameworks I’m familiar with, I use it much less. Maybe have it write a quick function that I know it’ll get correct or write a simple interface. The more I try to have it do, the less useful it is.
Easy-Philosophy-214@reddit
I actually tried "vibe coding" with Windsurf and I feel so stupid.
I'm "waiting" all the time for the AI to do its thing. When it finishes, it generates a lot of average code. This code works yes, but I don't even want to read it. It makes everything less enjoyable, and I'm wondering if I'm actually going faster. What's sure is that I won't enjoy reading all this slop when they call me to fix a bug.
Where I stand now is that I really laugh and don't want to be involved with anyone that tells me that "AI is going to replace us" or that "there's no need to write code manually anymore".
I have the feeling that people praising AI are average coders at best. I'm pretty sure the whole JS ecosystem is going to be flooded by this slop, this is why I started learning Rust - because I enjoy coding, I want to be good at it, I want to do hard things. I don't want to waste my time waiting for an AI to do my things.
sagiadinos@reddit
I am much more productive with AI support, but I avoid vibe coding at all costs.
Use it as a sparring partner to learn concepts or to get some small snippets. That is better than StackOverflow and a search engine.
Give AI easy jobs: build me an entity class based on this data table, document this class, or write unit tests. Okay, the last one is painful, as about 50% will fail.
At the end you have to review every line AI wrote, either way.
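The "entity class from a data table" job is roughly this shape; the `customers` table and its columns are invented for illustration:

```python
# Sketch of the kind of "easy job" described above: an entity class
# derived from a table definition. The customers table and its
# columns are made up.
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class Customer:
    """Entity for a hypothetical customers table (id, name, email, signup_date)."""
    id: int
    name: str
    email: str
    signup_date: Optional[date] = None  # nullable column
```

Small, mechanical, and easy to review line by line, which is exactly why it's a good AI task.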
Jetbrains AI is good enough for my cases and gives me better results than Copilot for example.
Cursor & Co. are popular with vibe coders. I just tried them and they failed. On top of that, VSCode is only an editor that pretends to be an IDE. Never liked it.
Greetings Niko
Adam0-0@reddit
This is like when the pneumatic drill was invented, construction company owners getting over excited and thinking they can replace their construction workers.
After letting them go and looking at their stockpile of drills, they realise they fucked up.
Ok, drills aren't AI, but at the end of the day, llms are tools and tools don't drive themselves, they need operators.
I_love_big_boxes@reddit
Frankly, I think people claiming AI is currently able to replace developers in any capacity are either the worst developers ever, dishonest, dumb or a combination.
If all you're doing is CRUD, you can already be replaced by CMS. The same can be said for any highly repetitive task. But the core of my work is innovative.
Western-Image7125@reddit
You’re not entirely wrong, and people need to calm down a bit about vibe coding. I’m someone who actually does use Cursor a lot for my day job, but I know where it excels and where I should write it myself. Specifically, in situations where the syntax is more complicated than the concept of what I’m doing, Cursor excels. For example, unit tests, a bash script with lots of if else while clauses. Like of course I can write all of those myself, but the syntax is not always at my fingertips, so I can’t be bothered. And it’s very easy to catch if anything went wrong anyway. However things like implementing a variation of an algorithm based on an idea I have - no way I’m letting anyone but myself implement it because I have to understand it deeply. So if you OP are already very good at React programming and want things to be in your control all the way - please keep doing what you’re doing. But yeah eventually you’ll find some specific cases where it will actually help and hinder you.
Rarest@reddit
if you work on a highly complex full stack app where things work in certain ways because of nuanced business logic etc etc then yea AI is not very helpful. if you’re building full stack apps from scratch it’s amazing.
skamansam@reddit
Tl;dr - as with every tool, your results depend heavily on how you use it.
My company is an AI company. We build models of various types for use in highly specific things. Me and 2 other (not junior, but more junior than me) devs vibe coded a whole new design for our web app. We used various tools like figma MCP and very lengthy .windsurfconfig files. Yes, we use windsurf. The generated code wasn't the greatest but it did the job. We heavily ported a lot of existing code, so the main parts of the app were already done. Porting was easy, just saying "port component X to the new refresh directory" and it applied all the (mostly correct) tailwind styles and built new components instead of relying on existing ui toolkits. I heard that AI assistants are like junior devs - they have the information but lack experience to apply it. YOU need to supply the experience. Be as specific as you can. If you can't do it, your assistant is gonna suck at it. You also need to have rigorous code reviews and always share tips/tricks with coworkers.
I have also vibe coded several personal projects with OK success. The results depend heavily on the frameworks you use, if any, and the model you choose. I like Claude 3.7. Svelte sucks. Vue works great. Plain ol' html/css/js works great. I don't use any others, so I can't tell you.
Cyranbr@reddit
I find it super useful for asking high level questions on how to approach a problem especially in some framework or language I’m not super familiar with.
Been writing some new glue pyspark jobs and asking it questions about optimizations and it just saves me time from having to read a bunch of documentation to just get started with a basic job. But I don’t really use it to generate a bunch of code for me so I don’t have to think. I use it more like a teammate with me that has more experience in that new framework or language.
For example, “is it possible to trigger some glue crawlers to run from aws account A but the crawlers exist in account B? What would be the best way to do that?”
Gives me a few recommendations and some starter CDK code to build on. Gets me going and shortens the try-feedback loop. It doesn't feel as painful to start on a problem space as before.
tatojah@reddit
"Write a python function that, given a reference date and an integer months_back, outputs a list of tuples where each tuple is the first and last day of all the months going from the ref date back months_back".
Prompt is a bit more refined than this, but I don't use it on things more complicated than that. If I can't debug the AI output faster than writing the code myself, then I'm actually wasting time.
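One plausible hand-written reading of that prompt, for comparison; it assumes the window includes the reference month and orders the result earliest to latest:

```python
# Hand-written version of the months_back prompt above. Assumptions:
# the window includes the reference month, and the result is ordered
# earliest to latest.
import calendar
from datetime import date

def month_ranges(ref: date, months_back: int) -> list[tuple[date, date]]:
    """(first_day, last_day) for the months_back months ending at ref's month."""
    ranges = []
    year, month = ref.year, ref.month
    for _ in range(months_back):
        last_day = calendar.monthrange(year, month)[1]  # days in this month
        ranges.append((date(year, month, 1), date(year, month, last_day)))
        month -= 1
        if month == 0:  # roll back past January
            year, month = year - 1, 12
    return list(reversed(ranges))  # earliest month first
```

The whole function is debuggable at a glance, which is the stated bar for delegating it.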
kibblerz@reddit (OP)
I feel like it takes me more time thinking about how to describe the code I want it to write, than writing it myself lol.
And I forgot I had a prompt running, it got stuck in a loop, and I just spent 2 bucks in API credits in like 5 minutes lmao
Strus@reddit
Like all skills, it gets easier the more you try.
kowdermesiter@reddit
Copilot is free.
Beli_Mawrr@reddit
An example would be that I have an ORM spec in Prisma's schema syntax but I need it in Python. I can put the prisma schema in a comment and then copilot knows I want that in python and will automatically transpose the correct code.
I can put in my boss's raw copy in whatever template language he uses, then copilot will translate that to python or whatever, in seconds.
It saves a ton of time, and costs me 10 dollars a month at most. The writing the code part of coding is the least fun part. The funnest part is the puzzle/challenge, which AI can't solve for you.
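The transposition pattern described here looks roughly like this when written out; the Prisma `User` model is invented, and a plain dataclass stands in for whatever ORM the project actually targets:

```python
# A Prisma model pasted as a comment, with a Python equivalent beneath
# it. The User model is made up; in practice the target would be the
# project's real ORM, not a bare dataclass.
#
#   model User {
#     id    Int     @id @default(autoincrement())
#     email String  @unique
#     name  String?
#   }
from dataclasses import dataclass
from typing import Optional

@dataclass
class User:
    id: int
    email: str
    name: Optional[str] = None  # String? in Prisma means nullable
```

With the schema sitting in a comment, completion tools have all the field names and types they need in context.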
InternationalFee7092@reddit
That's pretty creative. Nice!
tatojah@reddit
That's the thing. With some code, you know the architecture right away, so it's quite easy to describe, and that's usually what I do. AI may even demonstrably code better than I, but they sure as hell don't know the business and its requirements, so I always abstract that away.
Models tend to predict and address certain details quite nicely. But even in the case above, I still needed a second prompt to clarify "the list needs to be ordered lowest to largest", even though it managed to address date cyclicality first try.
It's an assistant, really. I stopped using copilot for this reason actually. LLMs/agents work better when you're interacting with them compared to them having free access to the whole file.
Strus@reddit
Cursor is very useful if you know exactly what you want to write, but you don't want to write it - boilerplate, repeatable code, fixing linter errors etc. I work as usual in Neovim and then switch to Cursor when I have a feeling like "I need to write this but I really don't want to". Things like when you've added a parameter to the function and now it needs to be passed through 10 different functions to be used when you want it to - Cursor nails things like that.
It's also very good in exploring the codebase and explaining various concepts in ask mode.
Combined with git worktrees, it can do work for you in the background (remember those linter errors in the legacy codebase that are easy to fix, but there are hundreds of them? Delegate that to Cursor).
But it also requires some groundwork: defining rules (if you find yourself specifying some detail in prompts often, add it to the rules file), knowing how to write prompts, etc.
Also remember to manually choose a model - either Gemini 2.5 pro or Claude 3.7.
jimmiebfulton@reddit
I was an AI skeptic for a while. Not that it wasn't useful for some domains, but the Copilot experience wasn't working for me. It was just glorified auto complete with a bunch of crap suggestions. I haven't used Cursor, as I also use NeoVim. I am, however, using Claude Code. It is a game changer. I still code/fix stuff, but if you take the time to be specific with your prompts, you can get significant speed in development. You can become a one-person army. A small one, at least.
For instance, I had a big nasty Github Workflow filled with all kinds of bash and scripting. I'm decomposing these monolithic workflows into Github Actions. Not the most fun task, considering the mix of weird syntaxes.
I have an archetype I created that sets up the scaffolding for a Github Action, including a skeleton README.md and a workflow for semantic version releases of the action itself. I open Claude in the terminal, tell it what the project is for, and paste the workflow contents in. I ask it to pull out the Repository Login, Build Tool Setup, Artifact Publishing, etc. portions one by one for each action. It faithfully wrote high-quality Actions with proper inputs and outputs, and documented the whole thing with samples.
After decomposing all of the actions, publishing them, and kicking off their versioning workflow, I then went to Claude Desktop. I pasted in the original workflow and links to the Git repos of all the actions. I asked it whether all of these actions faithfully replicated the legacy workflow and, if they did, to create a new workflow using the new actions. It identified that the new architecture was modular, versioned, composable, and generally more maintainable, and spit out a new workflow. I only made a few hand edits, mainly to give better names to the Actions. No actual code.
If you aren't getting value out of it, keep working with it until you learn when and where it is effective. I guarantee lots of other people are. Even the skeptics.
Waste_Tumbleweed_206@reddit
Just ask, who will pay for these AI tools?
iComeInPeices@reddit
Have been using Copilot, and honestly the autocomplete is either super annoying or amazing. Sometimes it writes a large block of code that is exactly what I was about to do.
But ultimately I don’t think it saves a lot of time. Having another dev to bounce ideas off of would be better.
a_reply_to_a_post@reddit
i don't really use AI to build features, but if it can whip up a utility function i want and save me 5 minutes of writing my own, i'll use it for small specific stuff for coding, but usually end up rewriting something in the output anyway
i'll use it to sketch out scoping documents and general timesucky things that aren't coding so i can have more time to code
my kid was studying for a geography bee all winter and my wife and I were googling random "geography quiz questions for 5th graders / 6th graders, etc" but all the sites are like ad heavy and have like the same questions, but i used ChatGPT to compile like 500 questions, gave it a typescript format and had it output JSON, then built a little UI one night super quick so my kid could study with that, and he ended up winning the district-wide geography bee
i dunno, having AI write all the code takes the fun out of the job
kibblerz@reddit (OP)
Generating that JSON actually sounds like a pretty nifty usecase.
sudosussudio@reddit
Yeah, I had to work with a malformed CSV and I had AI write a script just to fix it. I’ll likely never use that script again. AI is perfect for disposable code.
For other stuff it needs a lot of help. TDD, linters, and TypeScript are essentials for me when working with it on JS.
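A disposable fix-it script of that sort might look like this; the specific breakage (stray whitespace, inconsistent column counts) is assumed for illustration:

```python
# Throwaway CSV repair: strip whitespace from cells and pad/truncate
# each row to a fixed column count. The breakage handled here is
# invented; a real script would match whatever the file got wrong.
import csv
import io

def fix_csv(raw: str, expected_cols: int) -> str:
    """Return a normalized CSV string with exactly expected_cols per row."""
    out = io.StringIO()
    writer = csv.writer(out)
    for row in csv.reader(io.StringIO(raw)):
        cells = [cell.strip() for cell in row]
        cells = (cells + [""] * expected_cols)[:expected_cols]
        writer.writerow(cells)
    return out.getvalue()
```

Run once, verify the output, delete: the ideal lifecycle for AI-written code.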
Fickle-Property-2467@reddit
I think it’s actually pretty simple, and AI is very helpful. I have many years of CS/coding experience and now I rarely write any code from scratch. In your prompt, describe the data sources and the columns/variables that are relevant, what processing/operations you want to apply, and the output structure. With complex projects, start with simple base cases, e.g., df1 has columns A, B, C and df2 has columns A, E, F, G, and I want to merge on A using a left join. Then I want to do (describe logic and columns) and output.
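Written out by hand, that base case is only a few lines of pandas; the data here is made up:

```python
# The base case described above: df1(A, B, C) left-joined to
# df2(A, E, F, G) on column A. Data is invented for illustration.
import pandas as pd

df1 = pd.DataFrame({"A": [1, 2, 3], "B": ["x", "y", "z"], "C": [10, 20, 30]})
df2 = pd.DataFrame({"A": [1, 3], "E": ["e1", "e3"],
                    "F": [0.1, 0.3], "G": [True, False]})

# how="left" keeps every row of df1; unmatched rows get NaN in df2's columns
merged = df1.merge(df2, on="A", how="left")
```

Stating the base case this concretely in the prompt is what keeps the model from guessing at the join semantics.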
workinBuffalo@reddit
I’ve been coding a site with OpenAI’s free version, and sometimes Gemini or Claude. It is good for small stuff, but when the context gets too big it is a mess. Do Cursor or the other pro versions allow you to have your entire solution in memory, like a RAG?
ajones80@reddit
At a company with a high volume of users I’m not sure how anyone has the confidence to ship ai code. I find myself having to sift through anything it produces to make corrections and end up feeling like things would have been better off if I just wrote it in the first place. I feel like it’s beneficial for projects without stakeholder pressure but feel much more confident in myself over ai if repercussions fall on me. I don’t believe it’s fair to be forced to use ai to generate code if you’re going to “get in trouble” for producing ai code (bugs). This is in regard to code generation by the way. I do find it beneficial for things like generating test data or wording error messages.
desolstice@reddit
I personally really enjoy using GitHub copilot. On certain problems I’ll use chatgpt. I’m not using either of them to do full problems for me or using either of them to write code that I just copy and paste. Copilot is good at speeding through tedious tasks or doing quick inline prompts for small code snippets. ChatGPT is great as an advanced search engine, or exploring options to solving a problem.
Both of them are only as good as the developer using them. I still write 95% of my code myself without AI, but that's usually because I can write the code faster than I can write the prompt.
Slims@reddit
It's incredibly useful and is clearly the future of our career. Denial of this reality is pure cope. If you are struggling to make it create a simple drawer in react your prompt engineering just sucks. These tools are absolutely insane and it is only the beginning.
kibblerz@reddit (OP)
The UI library that I'm using is on a newer iteration, which is unknown to the AI, and the syntax changes lead to incompatibilities.
It sees a package and makes major assumptions, not considering its version or anything. That's a pretty big issue if we have to stick with the older versions of libraries the model was trained on.
These things are basically snapshots of the internet from X months ago. It'd be nice if they could reason: instead of spitting out code resembling something they've seen, actually consider the types.
Slims@reddit
Ok then don't use it for this use case. It doesn't mean AI is bad or that using it is fundamentally miserable. It's a very useful tool. Use it wisely.
Also you can be better with some prompt massaging to make it not be so brash with trying nonsense when it doesn't know what to do. Like if you're using claude code, you can write up a good Claude.md file to make it behave much better. There's guides out there on writing good system prompts.
Ok_Addition_356@reddit
It's really good as a reference and for pointing me in the right direction and giving me stuff to work with. But I don't use it and would not trust it to write 100% of my code.
And that's just one part... I still need to understand how it works and how to unit/integration test with it.
GaTechThomas@reddit
Check out the Wikipedia article on vibe coding. It's great for hobbyist projects that have been written before.
Holden_Makock@reddit
I use AI (Copilot, ChatGPT and Perplexity) and find it useful for sure. However, the juniors on my team don't find it useful.
So my answer would be: if you already know what to do and just need a faster pace and to skip the mundane work, yes, AI is extremely useful. But I am not talking about tasks like "optimize my service" or "build the feature my customers ask for most."
Juniors need more hand-holding than AI agents, and that has been the big win for me. I can explain with a prompt and, three prompts later, I have the exact code I want. Mind you, this is my code; I haven't let AI think or design. AI is more of a fast typist, syntax fixer, and sometimes dumb worker: follow a pattern similar to this file, or implement a function similar to this but for ...
So, I see it as the smartest Junior SWE I have and I can get work done. But do not think of AI as Staff SWE to handle it on its own.
eazolan@reddit
Yeah, it takes a completely different workflow.
SociallyAwkwardSnake@reddit
I’ve been more productive with it lately for sure. Enough to replace the value of another developer? No not at all lol.
But “make me a typescript type for this database table”, or “mock an example implementation of x library I haven’t used before”, things like that saves me a fair bit of time.
secondhandschnitzel@reddit
Were you great at anything you tried the first time you tried it? Probably not.
Using AI to code is a skill. It can be incredibly helpful or time consuming and distracting. There is a learning curve.
“I tried it once to try to be open minded but it cost $0.50 and was problematic” sounds like doing the bare minimum to claim that it didn’t work. That would be like trying LaTeX and forgetting to escape a character on page 2 and concluding that it wasn’t useful for typesetting long documents.
Part of the learning curve is learning how to prompt. I tailor my prompts heavily to the model which means I’m still using ChatGPT even though Claude is pretty clearly better. It’s not enough better right now for me to invest a lot more time tailoring my prompt intuition. I use cursor, but I don’t use it for everything. Part of the learning curve is learning what’s a good fit for AI. I also generally have AI do very specific, self contained things to help accelerate my work. I don’t ask it to develop whole features. That’s a recipe for getting garbage that’s annoying to clean up. I also increasingly learn what AI isn’t good at and when to switch to a different approach if it’s not working well.
CosmicSherpa@reddit
This.
Also with how fast things are advancing, it's a perpetual relearning as new models come out to see what they are capable of and how to interact with them.
secondhandschnitzel@reddit
Yes. I really have yet to figure out how to keep up with ML research as much as I want to. "Just read arXiv", sure, but which papers? I need HN, but for ML papers.
ballinb0ss@reddit
What if the reality becomes that AI development is just another cost center for the business, and you can't lower salaries enough to offset it, because juniors need experience to become the seniors who can fix the errors that AI-equipped juniors make lol.
michel_v@reddit
I keep being told that I must "learn to prompt", even when using Copilot.
So, yesterday I started working on example code for the Python class I teach for children, and I had just created a base class for characters/sprites on screen. I add a `move_left` method that subtracts 10 from `self.x`, and Copilot helpfully proposes a `move_right` method, so I hit tab, because eh, it's trivial code. Then after a few more lines, I run the code. Sure enough my character moves to the left when I hold the left arrow, but it moves toward the bottom when I hit the right arrow, WTF. Turns out the predictive text generator at the heart of Copilot predicted that after left comes right, but also that after x comes y: the `move_right` method subtracted 10 from `self.y`.
Maybe it's my fault, maybe I should helpfully remind Copilot that left/right involves only the X axis. Tell me how that's a good use of my precious brain time though.
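For reference, the failure mode above reproduces in a few lines (the `Sprite` class here is a minimal hypothetical stand-in for the base class in the story):

```python
# Minimal reproduction of the Copilot mix-up described above.
# Sprite is a hypothetical stand-in for the author's base class.
class Sprite:
    def __init__(self, x=0, y=0):
        self.x = x
        self.y = y

    def move_left(self):
        self.x -= 10  # hand-written: left/right should only touch the X axis

    def move_right(self):
        self.y -= 10  # Copilot's suggestion: "after x comes y" -- the bug

s = Sprite()
s.move_right()
# s.x is untouched and s.y changed, so "right" moves the sprite vertically
```

The suggestion is plausible-looking enough to sail through a quick tab-accept, which is exactly why it's insidious.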
Emmanuel_Karalhofsky@reddit
The only vibe I am getting is a bad vibe: it looks like redundancies based on assumptions pushed onto the plates of decision-makers by the tech-bro mafias. Take OpenAI, for example: it is losing billions and has never made a profit, yet somehow the narrative is that without AI, companies will go bust.
FUD - Fear Uncertainty Doubt.
BoBoBearDev@reddit
Not once have I refuted SonarQube or ESLint, so I'm sure eventually I'll trust AI just as much.
Emmanuel_Karalhofsky@reddit
The essence of the problem is as follows:
- Developers who use AI to code quickly understand (as soon as they ask for the second iteration) that AI will generate completely different results every time. It does this because, inside the neural network, it's all mathematics and performance, not deterministic logic
- And then there is everyone else.
RegularLoquat429@reddit
I'm using Grok for a decentralized MVP in Node / React / Gun / libp2p. It's like pair programming with a 12-year-old ADHD genius with memory issues. But once in the flow (ask it to maintain a requirements and architecture document you can refer it back to when it forgets what it was doing; re-upload all files it changes so you're sure it didn't imagine the contents; break bigger features down into smaller chunks to avoid problems; ...) it was really a productivity booster, because I'm a good but old software engineer. I never did web dev and don't know JavaScript/TypeScript well, yet I could slap together a first version in 3-4 days and make a quite nice next version in 2-3 weeks. The best part is that I upload logs, dev-console extracts and screenshots, and it does a good job of finding the issues.
itijara@reddit
You are not alone. I am also being required to use AI to be "more productive" but it honestly is just not ready to make production code. The only utility I have found is to do proof of concepts, boilerplate, and sometimes to write tests. Most of the stuff I have to do AI still sucks at.
I don't get too annoyed with it, as I am basically being paid to play around with LLMs, which I think is fun, even if it doesn't actually serve a business purpose.
On an unrelated note, there are nvim plugins for AI agents. I haven't configured one yet myself, but I want to try it.
Fidodo@reddit
Non-technical leaders prescribing technical workflows is an absolute recipe for disaster. AI can be selectively and carefully adopted for productivity, but how it gets integrated needs to be done very thoughtfully. Working workflows and best practices are still not established, and many use cases aren't even viable yet, which makes being careful and thoughtful even more important.
Putting out a blanket command of "use more AI" is completely nonsensical and shows that the technical leadership of the company is inept. Where's the principal engineer responsible for devising selective and working processes that allow AI to be used in safer prescribed ways with oversight? Just telling your team to go use AI is so fucking stupid.
sozer-keyse@reddit
I find it useful for "monkey work" and using it as "Google on crack", as for writing code I only use it to write very basic stuff that I just can't be bothered to type out myself.
kibblerz@reddit (OP)
This just gave me an idea: The next iteration of Gemini should have the persona of a crackhead lmao
JLDork@reddit
It's helpful to bounce ideas off of, but it still hallucinates a lot or references old documentation. So I generally don't rely on it for actual code, but I use it to test my ideas or check objections against my architecture.
await_yesterday@reddit
I've had a fair amount of success using Claude as a sounding board for algorithmic ideas. Recently there was a particular kind of high-performance concurrent gizmo I wanted to make that was a bit beyond my experience. I knew the theory but I was worried about unknown unknowns -- maybe there was some gotcha that would ruin it? I outlined the problem and Claude gave a number of possible approaches with pros and cons. I was happy to see that the solution I tentatively had in mind (which I was careful not to mention) was in fact on the list and Claude thought it would probably work. And it has.
As always, you have to push back when it starts agreeing with everything you say. Try not to let it know what you're really after (and it can sometimes be scarily good at inferring that).
thephotoman@reddit
The biggest problem with the AI hype is that so many managers think it’s a productivity tool. The problem with that assumption is that cleaning up the output of an LLM takes more time than you saved on the typing exercise.
However, I’m not anti-LLM. (I am against diffusion engines, as I cannot see a use for them that isn’t fraud or misinformation.) It’s just that LLMs are not like the Internet. They’re like Windows 3.0: a new user interface for existing systems. I see value in conversational user interfaces, which are a thing people actually do want. But using Windows 3.0 versus using the old DOS prompt was a bit of a wash: yes, most found Windows 3.0 to be more discoverable, but skilled users only used it for some things in their workflow.
Interesting-Pie9068@reddit
Yes. But I don't use it to actually solve problems.
I use it for:
- debug messages
- unit tests (and then I write them out manually, never copy-paste; copy-pasting reinforces terrible habits)
- initial documentation searches instead of google
- alternatives. I post some code, and I say "give three alternatives" to see if I missed something obvious.
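As a concrete sketch of the unit-test item above, this is the kind of scaffold an LLM drafts quickly and you then retype by hand (`slugify` is an invented example function, not something from this thread):

```python
import unittest

def slugify(title: str) -> str:
    """Invented function under test: lowercase, hyphen-separated."""
    return "-".join(title.lower().split())

# The sort of test scaffold an LLM can draft in seconds; retyping it
# by hand, as suggested above, forces you to check each case yourself.
class TestSlugify(unittest.TestCase):
    def test_basic(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_already_lowercase(self):
        self.assertEqual(slugify("one two three"), "one-two-three")

    def test_collapses_whitespace(self):
        self.assertEqual(slugify("  a   b "), "a-b")
```

The value isn't the typing saved; it's that the model often proposes edge cases (extra whitespace, empty input) you might not have listed yourself.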
ashemark2@reddit
i literally wanted to post this to ask if anyone had had the time to disrupt their workflow to waste their time with cursor and the like.. glad I’m not alone
rar_m@reddit
I'm coming from vim and started a new project with vscode and copilot .
I find myself constantly battling the editor: it's CONSTANTLY typing shit for me, putting quotes around things while I'm typing, forcing me to erase. God, all the code completion, IntelliSense, whatever the fuck, is a MASSIVE nightmare and productivity killer for me so far.
I spend so much time undoing automatic bullshit. I use Copilot to help me turn this shit off so I can just type what I intend to type. I have no idea how people get by with these defaults.
Beyond that, it's been pretty helpful with me learning typescript and explaining syntax or type errors and what those fixes are.
It basically replaces Google search for me. I also find the code it wants to generate is pretty in line with what I want. I tend to let it write the code while I just review it and move on.
Copilot giving me AI on demand is nice; I think what I hate most is all the auto-typing shit in VSCode. Just typing a function declaration requires about as many manual fixes as I needed typing this post on my phone.
Vscode so far, is a massive productivity killer for me.
kibblerz@reddit (OP)
I've been trying avante with NVIM, it's fairly nice, though a bit buggy. It also got stuck in a loop the other day, I didn't notice because I was watching a YouTube video, and it drained my credits lol
rar_m@reddit
lol that sucks. For me, I think the AI part is probably fine.. It's really just the editor that is ruining everything for me.
I can't stand having it auto type and put in characters I don't intend constantly. Probably a skill issue, maybe I need to be slamming my escape key constantly or hitting tab to accept but I find just turning all this shit off whenever it comes up will be my solution.
I like having AI on demand. Like if I need to refactor something, selecting it hitting ctrl+I to ask copilot questions about the code or to refactor.
Or instead of googling something, I can just ask copilot directly and have it do the same.
As far as writing code, I have found it nice to give me initial setups for things like docker configurations. I can make a Dockerfile and ask copilot "generate me a dockerfile for a django backend project, using python 3.1" and then just code reviewing and tweaking what it generates. This would normally take me a bit to go over the docs again, maybe reference previous configs I've setup before.
Asking it questions like "is it standard practice to check in node_modules" and having it dump out its answers.
Stuff like that is nice. I was implementing a simple JWT authentication middleware for my Django backend, and it basically.. knew exactly what sort of boilerplate I wanted. I could see that being useful. It did have errors, like referencing functions that didn't exist yet, or passing parameters to function definitions that shouldn't be needed.
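The boilerplate being described can be sketched framework-free with just the standard library (the `SECRET` value and helper names here are assumptions for illustration, not the commenter's actual Django code):

```python
# A framework-free sketch of HS256-style JWT sign/verify using only the
# standard library, to show how mechanical this boilerplate is.
import base64
import hashlib
import hmac
import json

SECRET = b"demo-secret"  # in real code this would come from settings/env

def _b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def make_token(payload: dict) -> str:
    """Build a minimal HS256-style token: header.payload.signature."""
    header = _b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = _b64url(json.dumps(payload).encode())
    signing_input = f"{header}.{body}".encode()
    sig = _b64url(hmac.new(SECRET, signing_input, hashlib.sha256).digest())
    return f"{header}.{body}.{sig}"

def verify_token(token: str):
    """Return the decoded payload if the signature is valid, else None."""
    try:
        header, body, sig = token.split(".")
    except ValueError:
        return None
    signing_input = f"{header}.{body}".encode()
    expected = _b64url(hmac.new(SECRET, signing_input, hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        return None
    padded = body + "=" * (-len(body) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(padded))
```

In a real middleware the verified payload would be attached to the request object; the point is that this code is almost pure convention, which is exactly why an LLM autocompletes it so readily (and why its small mistakes are easy to miss).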
Overall, I would say it's like having a smarter google that is also aware of my current code.
It's nice in small amounts, I'll continue to take it slowly. I'm forcing myself to use all this new shit on my personal project as a learning exercise anyways, I still do all my professional work in a terminal w/ vim.
thehomelessman0@reddit
I found it was really useful for code that is tedious but I wouldn't be touching often. For example, I made a CLI tool that helps with development in an hour, which would have otherwise taken me a day or two.
However, I wouldn't want to touch the code it wrote with a ten foot pole.
kibblerz@reddit (OP)
So it's the Temu for software
thehomelessman0@reddit
That's a good way to put it! I think it will get better over time though.
kibblerz@reddit (OP)
Honestly, I'm 99% sure that on some random morning, a complete geek will stumble upon a new method that will perfect AI coding, and everyone's jobs (including not devs) will rapidly disappear. It's crap right now, but one revolutionary insight is gonna render everyone obsolete. Could be tomorrow or 20 years from now lol
EspurrTheMagnificent@reddit
Honestly, my biggest issue with code generating LLMs as of right now is they are a solution in need of a problem. Because either :
A) The problem at hand is too complex for it, so it's useless and you have to do it yourself
B) The problem is simple enough for it to handle, but complex enough that you have to wrangle the thing through and through like a toddler, so you're better off doing it yourself
C) The problem is trivial enough for it to handle, but there are already cheaper and more reliable tools for that, so AI is not particularly better in those situations either
They're the same deal as NFTs. Whatever uses you can find for them are either handled way better by something else, or unique but so stupidly convoluted and forced you're better off using something else that's simpler.
drumstand@reddit
I've found a lot more success in using AI to help me refine ticket descriptions, technical designs, and email/Slack message content than I have with generating code. It's actually been pretty helpful for turning my sloppy bullet points into something fleshed out and generally well formatted. Usually still requires a human touch at the end, but it gets me a rough draft pretty reliably.
spicymato@reddit
Regarding AI being useful for "monkey work," I think your definition of monkey work and mine are different.
It's not things that get done via regex, macros, etc. It's bulk work, like filling out basic methods, which would be done by a junior and reviewed before accepting.
AI for me has been good for work on the periphery, like scripts or query commands in areas that my core work doesn't touch regularly. For example, I can absolutely go look up how to write a PowerShell script to do whatever, but I rarely work in PowerShell, so it will take me a while to find the right commands and syntax; whereas if you give me an existing script that's at least 70% of the way complete, I can pretty much immediately digest it and edit it to serve my needs.
Scientific_Artist444@reddit
If productivity is measured by size of content, then yes. But if it is measured in quality, then no. It could even reduce quality.
The quality team is never taken seriously anyway, so cranking out a lot of code becomes a good way for management to talk about productivity.
Because management is obsessed with delivering fast, they think AI code is productive. AI can be productive in terms of quality, if the person using the AI knows what they are doing, why they are doing it and potential implications of the generated code. Otherwise, quality will suffer.
So productivity as measured in time by management might be high, but productivity in terms of quality work is not.
fuckoholic@reddit
You need AI when you don't know what the code for a drawer should look like. It's like an advanced search engine. If you know how to build a good drawer, it won't help you much, but if you don't, then it's golden. It will be wrong a lot, but if you're good at programming, you will quickly take the good parts and discard the rest.
The productivity boost for me is over 20%. Hard to measure, probably more than 20%, since it saved me potentially days for stuff I would have had to learn how to do, which is time consuming.
I only use it as a search engine basically. I do let it generate functions for me, but never anything complete. You can also ask stuff about code you don't understand.
Tenelia@reddit
It's exactly what you think. If you're hammering together a prototype using LTS libraries and such, you'll be fine. It's almost criminal that AI founders and VCs never declare what they're priming the models for.
beachguy82@reddit
Absolutely. I'm easily moving twice as fast now. I'm using AI at every step of the process. We talk high level about project goals, develop a plan, move that plan into detailed requirements and architecture, then I use AI to speed up the process of writing each class and spec.
I don’t consider this vibe coding at all, but it’s definitely a multiplier.
YetMoreSpaceDust@reddit
I haven't had to go through this yet, although I'm sure it's coming. I do remember being forced to use drag-and-drop UI builders in the 90's because the bosses could see the draggy droopy prettiness and concluded that it must be more productive because dragging and dropping sure seemed easier than typing in the mind of somebody who'd never tried to do it before.
I learned early on that if you're low on the org chart, it doesn't matter what sort of education or experience you have, you need to appear to cower in fear before your "betters" who have dedicated offices. But you also have to get your work done or they'll stop paying you. So what do you do? Pretend to use this useless thing, but get the work done behind the scenes. Drag and drop the pretty UI but then write the Swing code to produce it yourself. They don't care.
Cercle@reddit
No, it just looks productive short term. Long term is ass
blindsdog@reddit
You’re probably just bad at using it. It’s a useful tool.
PizzaCatAm@reddit
It's so sad to see developers this salty, and you are right, it's a great tool when used properly. It requires a bit of a mindset shift, focusing on context instead of tasks; then it's breathtaking.
Cercle@reddit
There's certainly a mind shift happening, but it's not what you think it is. https://www.microsoft.com/en-us/research/wp-content/uploads/2025/01/lee_2025_ai_critical_thinking_survey.pdf
PizzaCatAm@reddit
I knew this was going to get quoted, and we are not talking about the same thing. Don’t worry, stay in the past, the industry will just leave you behind.
Cercle@reddit
Feel free to ask an LLM to explain my answers for you :) the more you use them, more job security for me, both short and long term
PizzaCatAm@reddit
You think you are smart, let’s talk in 5 years ;)
Cercle@reddit
I know exactly how to use it to code, because I am one of the many unfortunates teaching a household name LLM how to code :)
PizzaCatAm@reddit
Oh yeah? Care to share?
Cercle@reddit
Sure. Right off the bat, it's literally all bullshit, it has no capacity for distinguishing facts or context or nuance. It also is incapable of research, or even retaining provided context. Just from statistical chance of the predictive text you can eventually get resolution to some code problems, but only if you understand both the problem and solution. If you use it to replace research or learning, which happens way too much, you're fucked. I use them to find stupid mistakes that aren't caught by IDE or lint, and hard to search for, like weird UI bugs. The moment you ask for a component you felt lazy about, it will add tech debt, even without going into dependencies, existing types or classes, etc. It's trained off stolen existing code which is often shitty and deprecated, then refined with nonsensical parameters like forcing specific frameworks or weird researcher rules. Recently my company shifted to only using vibe coders to train it to vibe code, all of us with training/skills were reassigned. Shitshow.
PizzaCatAm@reddit
So your response to that is a rant? I think I know all I had to.
Cercle@reddit
I'm genuinely happy for you that you are able to find bliss in handing it your entire codebase, since clearly nuance has already left the building
Reedittor@reddit
That is interesting, I'd love to hear some of the biggest weaknesses of coding llms from your perspective. If you care to share.
Cercle@reddit
Trained off stolen code, regardless of age or shittiness or even if it works or not. From that crappy base it's heavily biased by nonsensical researcher requests, vibe coder feedback, etc etc. Once they trained a model to output web code from two specific (bloated) frameworks regardless of the nature of the web code request. So even if it could be solved in plain html it still would output a ton of shitty unnecessary boilerplate. Ridiculous
SomeEffective8139@reddit
We have a leader that is really pushing it also. I find it useful for boilerplate and rubber-ducky debugging. Some of my coworkers are extremely, ideologically opposed to using "AI" in any capacity whatsoever. I am a practical person and I take these things for what they are, which is tools. If an LLM-based tool can do fancy autocompletion and search queries and it saves me a few minutes every day, I'll use it. I don't care if it's built with an LLM or some other technology. As soon as it stops being useful, I will use another tool or hand-write the code.
Constant-Listen834@reddit
Some people are, some people aren't.
kibblerz@reddit (OP)
Why not use normal autocomplete then? Why waste an extra 50 cents to save a minute of typing?
Lopatron@reddit
For example, you have a dataframe and you want to plot the data, but forgot the Matplotlib API (I refuse to believe that people actually remember it)
Writing a short comment describing the chart you want, and having it fill in the 10 lines of charting code instantly, is of course more than normal autocomplete can handle.
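The kind of boilerplate being described might look like this (the data, labels, and chart choice are invented for illustration; the Agg backend keeps it runnable headless):

```python
# Hypothetical example of the ~10 lines of charting boilerplate an
# assistant can fill in from a one-line comment.
import matplotlib
matplotlib.use("Agg")  # non-interactive backend, no display needed
import matplotlib.pyplot as plt

# stand-ins for two dataframe columns
months = ["Jan", "Feb", "Mar", "Apr"]
revenue = [120, 135, 150, 170]

fig, ax = plt.subplots(figsize=(6, 4))
ax.bar(months, revenue, color="steelblue")
ax.set_xlabel("Month")
ax.set_ylabel("Revenue (k$)")
ax.set_title("Monthly revenue")
fig.tight_layout()
fig.savefig("revenue.png")
```

None of these lines is hard, but recalling the exact Matplotlib incantations for a chart you'll look at once is precisely the "BS work" described below.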
kibblerz@reddit (OP)
Sounds more like an issue with Python lol. You wouldn't need AI to remember the APIs if you had a proper type system haha.
I get that python has a load of useful libraries that other languages don't compare to.. But python coding is miserable lol
Lopatron@reddit
Those 10 lines of untyped Python code would translate to 25 lines of strongly typed Java or C++. All for a single visualization that I will inspect and throw away in 5 minutes. It's BS work that isn't worth my time of day. No way I'm not using AI for a task like this, no matter the language.
Toph_is_bad_ass@reddit
This is spot on. Python is good for plotting even if matplotlib is insane and you're totally right that 99% of charts you look at for 5 minutes and then toss them out.
classy_barbarian@reddit
You're really starting to come across as one of those people that believes the only "real" coders are people that only code in low level languages using Vim and Notepad and don't give the slightest craps if anyone else can understand their code.
kibblerz@reddit (OP)
Lol no, I don't use super low level languages. The APIs I write are done in Go so that new devs can easily catch on. My frontends are React, so nothing particularly unique there.
I use Vim because it's the fastest way to make edits to code. It's quite the learning curve, but well worth it. Plus Lua with nvim makes writing my own plugins fairly simple.
TimMensch@reddit
As traditional autocomplete is to manual typing, AI-autocomplete is to traditional autocomplete.
It's kind of crazy how good it can be. And it can also produce crap, but that's why we're still paid the big bucks; to determine which is which.
Sometimes I'll add something at one point in the code and then I'll go elsewhere to initialize it and it will show the entire initializer I was about to type without even hitting the first character.
Other times I'll add a properly named conditional variable and when I move my cursor to wrap a block with the conditional, it suggests the entire change, exactly as I was going to type it.
And still other times it will suggest things that make me wonder what it was smoking.
BUT. To some degree, you're also totally right. It's really only saving you at most the time you would have spent typing, and when it comes down to it, typing isn't actually a majority of the average developer's time.
So there's a balance, and it's kind of dumb for your boss to be attempting to micromanage you like you've described. Either your performance is up to par, or it's not. Especially for a vim user, I wouldn't be surprised at all if the actual net gains you could get from AI wouldn't be that great, at least at first--if you count the lost productivity of needing to learn a new way of interacting with the editor.
Once you got past the learning curve, you'd probably be faster overall though. But for a strong programmer, it's not going to be even a 2x improvement, much less the 5-10x improvements people keep claiming--strong developers are already that much faster than mediocre developers, and the AI might be at best a 20% speed bump for typical dev work.
Mediocre developers can produce mediocre code 2-5x faster, though, so there's that.
whostolemyhat@reddit
Except it's non-deterministic and half the time it just makes stuff up
TimMensch@reddit
I did mention it does that. Try to pay attention.
And as I pointed out, that's why we still make the big bucks.
Having used Cody (for VS Code) and Copilot a bunch recently, I'd say its autocomplete suggestions are useful maybe 70% of the time. So you have to look at what it suggests. You can't blindly accept, and you never will be able to.
But time savings is time savings.
kowdermesiter@reddit
Because "normal" autocomplete can't do this: https://imgur.com/a/H2cspvc
gumol@reddit
are you getting paid less than 30 USD per hour?
old_man_snowflake@reddit
paying per autocomplete? microtransactions for my fucking work? FOH.
gumol@reddit
well, it's my employer paying the cost. I assume it's worth to them.
kibblerz@reddit (OP)
That's money that could be going into your pocket
Randromeda2172@reddit
I would rather spend the $20 a month and save myself 4 hours a day. If the company values my time at $100 an hour then I value it even more.
digitalwankster@reddit
…$0.50 a minute? That’s only $30 an hour to provide exponentially more output. If you aren’t utilizing it to the best of your ability because you’re worried about making an extra few bucks, that’s on you.
kibblerz@reddit (OP)
Exponentially more output or technical debt?
ShroomSensei@reddit
Depending on your setup the AI autocomplete can be leagues better. I'll admit, it's about 75% helpful with 25% being pure bullshit autocompletes such as fake methods or incorrect logic. However, when it does do the autocompletes well, it is extremely helpful.
The most useful I have found it for is programming in a language I am unfamiliar with. I code primarily in Java. When I want to make a helpful script, though, I'll almost always go with Python, which usually requires a bunch of file I/O, API requests, invoking processes, etc. It's not that I can't figure that stuff out, it's just that AI makes it extremely easy. I can write out the pseudocode via comments and the AI fills in the rest. Yeah, there are 100% going to be mistakes, but it's not like I can't clean them up.
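The comment-as-pseudocode workflow described above looks roughly like this (the function, filenames, and data are invented for illustration):

```python
# Hypothetical illustration of the pseudocode-comment style described
# above: write the steps as comments, let the assistant fill in the bodies.
import json
import tempfile
from pathlib import Path

def summarize_records(path):
    # 1. read the JSON file
    records = json.loads(Path(path).read_text())
    # 2. keep only records with a positive "count"
    kept = [r for r in records if r.get("count", 0) > 0]
    # 3. return the total count and how many records survived the filter
    return sum(r["count"] for r in kept), len(kept)

# quick smoke test against a throwaway file
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump([{"count": 3}, {"count": 0}, {"count": 5}], f)

total, n = summarize_records(f.name)  # total == 8, n == 2
```

The numbered comments double as the prompt and as documentation, which makes the generated bodies easy to review line by line.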
How much does this help me in my day to day? Not a lot lol. The autocomplete is probably what speeds me up the most if at all.
kibblerz@reddit (OP)
My former boss was acting like my continuing to use NVIM when Cursor exists was comparable to being a Fortran fanboy. He was convinced I was gonna go obsolete for not jumping on board with the vibe coding stuff lol
It seems like a bunch of hype that needs to settle down. Obama just had a talk where he claimed AI was better than 70% of coders... If that's true, then that's terrifying lmao
Rakn@reddit
Something like Augment Code offers a similar feature set and works in nvim. It's a little behind Cursor, but IIRC they've also now introduced an agent mode, for example, which is the only thing I use in Cursor tbh. The ability for it to automatically obtain further context, read additional files as needed, and use tools is what gives it the power to understand my code base and solve "my" problems. Albeit with all the limits already discussed here...
kibblerz@reddit (OP)
I was trying out avante for nvim, which is similar; it's pretty buggy though, and it got stuck in a loop for 5 minutes and managed to drain 2-3 bucks of API tokens with Claude lol
Blecki@reddit
That's not really saying AI is great, have you seen 70% of programmers? (And I'll take Obama's opinion on presidenting... I'll need to see some code before I value his opinion on programmers.)
ShroomSensei@reddit
70%.. nah, 30%? Yeah honestly I might agree. Maybe I have pretty piss poor experiences but a lot of colleagues lack critical thinking and if you don't tell them exactly what to do they flop on a ticket until someone helps them.
I'm pretty cynical about the AI stuff, because it just really doesn't help me in my day to day work. It's always when I'm trying to do something in another language I'm unfamiliar with that I see it really shine.
DealDeveloper@reddit
Can you think of ways to use an LLM to beat 70% of programmers?
kibblerz@reddit (OP)
Yeah, using it to understand the basics in newer languages is useful. It just always falls flat when I want it to write something useful.
I'm pretty cynical about AI myself.. Seems like a dystopian nightmare imo
nimbledaemon@reddit
Yeah AI isn't going to do software engineering for you. It can inform your decisions, it can do a lot if you know the exact specific thing you need it to do, but you need to be very conscious of what scope you give it. Too much and it loses track of stuff, too little and it tries to fill in the gaps with some bullshit or doesn't factor in your existing design. In my experience though, once you find the right level of specificity you can trust it with, it feels more like just abstract software engineering without having to directly write much code at all. I'm still doing all the thinking, I just don't have to worry about writing up the specific code details and boilerplate as much.
My programming loop without AI was basically 1) think of small incremental change that needs to be done to accomplish a bigger task and then 2) write the code to make that incremental progress. With AI I just do 1 and then 2 is just "tell the AI specifically how to write that incremental bit of design". Most of the time it can handle stuff like writing a new function/page/endpoint and integrating it in a vertical slice on just frontend or just backend at once. Then in between a few of these pieces there's a loop of cleanup, checking for bugs or design issues, sanity checking that everything works and then fixing the issues that are found, that the AI is somewhat but not always helpful for. For me it's great, since actually writing code was the part I didn't like so much and now I can just plan solutions and tell the AI to do each little piece, relieving me of that cognitive burden, meaning I get more done. Also important is to incrementally write up a custom_instructions.md that you give copilot as context to help the AI avoid making the same mistakes over and over. Basically just a big document list of "when doing x task, do it this way", but specific to a project.
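A `custom_instructions.md` of the kind described might look something like this (the rules shown are invented examples, not the commenter's actual file):

```markdown
# custom_instructions.md (illustrative sketch)

## General
- Prefer small, single-purpose functions; no file over ~300 lines.
- Never introduce a new dependency without flagging it in the response.

## When writing API endpoints
- Validate the request body first; return a 400 with a message on failure.
- Reuse the existing error-response helper instead of hand-rolling JSON.

## When writing tests
- One behavior per test; name tests after the behavior, not the function.
```

Growing this file incrementally, one rule per repeated mistake, is what keeps the model from making the same error twice across sessions.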
13ae@reddit
Would using Cursor w/ a neovim extension inhibit your workflow much? I think theres a world where you can glean value from both worlds wherever it's applicable. at the end of the day its just a tool and so the value is subjective and predicated on how you use it as well.
Business-Row-478@reddit
I feel like cursor is just a shitty version of vs code that costs money
13ae@reddit
im curious, what features does vscode provide that cursor doesnt? at minimum its equivalent if you turn off the ai agent capabilities
Business-Row-478@reddit
I mean it’s just a wrapper of vscode. I don’t like the ui/ux design of it and it remaps a lot of the important keybinds. It seems less customizable too but I honestly haven’t put much time into it. Just seems like an unnecessary cost over using vscode and copilot.
13ae@reddit
The UI feels largely the same to me and you can undo the custom shortcut mappings, though im not sure if they give a quick and easy option to do so, so you might need to do some menu or setting diving. I don't think the customizability should be that different though but I could be wrong.
As for cursor or windsurf vs copilot, if you're using the AI tooling theyre just much better. It's also like having chatgpt built into your code editor, if you dont want it to write code you can still leverage it to answer questions or use its thinking/reasoning models to provide suggestions or insight.
kibblerz@reddit (OP)
I've tried using neovim extensions with IDEs before, I always disliked it. They're usually buggy and the keybindings can be pretty screwy. I've yet to find a Neovim extension for an IDE that wasn't just a PITA lol.
13ae@reddit
thats fair. a staff eng I work with swaps between intellij (or goland i guess, we use go) and cursor depending on the task. if he needs cursor, he'll swap over and try to pump out some functions, otherwise he'll largely use the ide hes comfortable with. it might be an unpopular opinion here but I think AI agents can provide everyone here value with some time spent, because there are "tricks" to unlocking more potential and efficiency with the tool, such as creating templates to sandwich the input and output for more predictable results, or using ai to break larger tasks into smaller ones that suffer less hallucination and give you more fine-grained control. Just don't think it's the silver bullet/engineer replacement it's hyped up to be.
chunkypenguion1991@reddit
It's creating 10x the amount of code, not making people 10x more effective. What does that even mean anyway? Do they think it's better than having 10 devs at your skill level? There is going to be a lot of technical debt to pay soon because devs are over-relying on AI to write the code. It does have uses for coding, but the hype machine around it is out of control at this point.
kibblerz@reddit (OP)
Yeah, honestly I think the devs who avoid leaning too heavily on AI will be more desired in the future, since devs who rely on it will see their abilities decline, and that's probably most devs entering the industry these days. It'll probably pay well, though fixing shit code isn't my favorite use of time.
Much of my work is building new websites with high budgets. I do maintain some very poorly put-together websites, but for the sites I architect, I make things as easy to understand as possible. So AI ends up being less helpful there, probably because my architecture ends up fairly unique.
muslito@reddit
I use it when I switch languages and I forgot the syntax of what I want to achieve.
I treat it as a junior dev and tell it why not do this instead of x etc.
I split the work in smaller parts since if you give it to much it usually breaks other functionality etc.
It's awesome for creating jest tests.
I've even used it as a debug tool asking what could make this happen and gotten actual suggestions that I hadn't thought about.
Also helps as a rubber duck as I'm typing and talking to it I usually come up with the solution.
PR description, it does a far better job at explaining what changed and sometimes the why.
Reasonable_Pie9191@reddit
I'm still learning programming, and any time I use ChatGPT for something related to my code, it's not to write the code but as a search engine for different questions. If it gives me a block of code, I ask for 5 different ways to write it, then ask for each way to be explained and why it's written like that, so I can Google to check the reasoning.
But then I get scared when people act like if you ever use ai at all you'd never learn
MoreRopePlease@reddit
This is actually a good way to study and learn. Look at different ways to accomplish the same goal and understand the tradeoffs, why you would pick one way and not another.
I used the ai a lot for this when going through leetcode problems.
Business-Row-478@reddit
I always hear people say this but the majority of the time I ask it questions it straight up gives me wrong answers or hallucinates something that makes no sense
MoreRopePlease@reddit
I find that it makes a big difference how you phrase your questions. "Here's an error message, what could be the problem" or "here's a problem I'm trying to solve, tell me how this code could be improved (or compare approach A with approach B)" "This is javascript; rewrite it in bash" "Here is some input, I need to generate permutations that will end up looking this this, and there should be no duplicates, and the result should be lexically sorted".
If I include any speculation or leading phrases, I can easily get bad answers.
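The last example above is concrete enough to sketch. Here's an illustrative TypeScript take on "generate permutations with no duplicates, lexically sorted"; the function name and signature are invented for this sketch, not from any actual LLM session:

```typescript
// Unique permutations of the input, returned in lexical order.
// Duplicate values in the input never produce duplicate permutations.
function uniquePermutations(items: string[]): string[][] {
  if (items.length <= 1) return [items.slice()];
  const results: string[][] = [];
  const used = new Set<string>(); // values already placed at this position
  for (let i = 0; i < items.length; i++) {
    if (used.has(items[i])) continue; // skip the duplicate branch entirely
    used.add(items[i]);
    const rest = items.slice(0, i).concat(items.slice(i + 1));
    for (const perm of uniquePermutations(rest)) {
      results.push([items[i], ...perm]);
    }
  }
  // Sort so the final order is lexical regardless of input order.
  return results.sort((a, b) => a.join(",").localeCompare(b.join(",")));
}
```

Spelling out the inputs, the dedup rule, and the ordering, as in the prompt above, is exactly the precision that separates a usable answer from a hallucinated one.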
Reasonable_Pie9191@reddit
I first either watch a video or read something in the docs so I can structure my questions. I'm fairly new, so I'm not yet comfortable reading docs on their own.
But when I look at it and see how the syntax should be, I copy and paste it for the AI to dumb down. If I'm not satisfied, I go to Reddit.
iamapinkelephant@reddit
Depends on the context and situation. I'm using AI to write a lot of boilerplate, but I'm mainly using the suggestion features, not asking for anything from scratch. I work across a lot of different languages and contexts, and I'm not granted the time to set up strong tooling. It wouldn't be worth it for me to investigate and write out autocompletes or snippets; AI tools at least get me in the front door, and with multiple languages I'm not 100% familiar with, they definitely help remind me about syntactic differences.
But then I have a colleague whose work has fallen off a cliff, and in every PR there are at least 2-3 random changes where his defence is 'I just followed the AI's suggestions'. To me that's no different than blindly copying and pasting from Stack Overflow; if I wanted the quality of an AI, I'd just use the AI instead of dealing with a lazy dev.
DealDeveloper@reddit
Consider writing pseudocode.
Can you show me an example of a case where it is easier to prompt the LLM than write pseudocode? I cannot imagine "AI tools get you in the front door" where the plain English prompt is better than pseudocode. I'm eager to see an example.
For your colleague, perhaps show them Codex.
Baconaise@reddit
It autocompletes things like "rebuild utilities.filterItems for use with an object map of categories of arrays of items", and then it just fucking does it, better than a junior would. Then you go back to whatever you were writing originally.
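For concreteness, a hedged sketch of what a completion like that might expand to; the `Item` shape, the field names, and the drop-empty-categories behavior are all invented for illustration:

```typescript
type Item = { name: string; active: boolean };

// filterItems reworked to take an object map of categories to arrays of items.
// Categories whose items are all filtered out are dropped from the result.
function filterItems(
  categories: Record<string, Item[]>,
  keep: (item: Item) => boolean
): Record<string, Item[]> {
  const result: Record<string, Item[]> = {};
  for (const [category, items] of Object.entries(categories)) {
    const kept = items.filter(keep);
    if (kept.length > 0) result[category] = kept;
  }
  return result;
}
```

Whether the model actually nails details like dropping empty categories is exactly the kind of thing you still have to read the output for.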
Crafty_Independence@reddit
AI-dependent developers calling others obsolete is quite ironic
Baconaise@reddit
Take or leave it but I'm an AI-enhanced developer at this point that 3X'd my productivity. I only hire AI-compatible developers for my team of 10.
Crafty_Independence@reddit
Judging by what I'm seeing here I wouldn't want to hire you for my team of 20, and if AI made you 3x more productive your productivity was really awful
Baconaise@reddit
I read into your comment history and completely understand why you're so anti-AI. You're seeing new teams come in and work much less hard than you and they may not be applying AI as well as a more experienced team with strong experience with GitHub Copilot et al.
It feels more like you're coming from a highly politicized environment with regard to AI assistance in the workplace.
Likewise on not wanting to hire/work with each other: I feel like not understanding the amplification AI provides to your team shows a lack of solid understanding of the quality and rapid advancement of these tools, even just over the last three months. Both as an employee and as a manager of a team I might be joining, I'd be looking elsewhere as a top-quality contributor.
For my teams, building with AI tools allows us to focus on the implementation as opposed to the boilerplate while still implementing standards we all can appreciate on both sides. Do I have to have new deep discussions with juniors implementing functions they don't understand? Yes. Is it worth it? Absolutely.
Crafty_Independence@reddit
If the main problem you are solving is boilerplate, you were severely underutilizing excellent existing tools that don't hallucinate.
And no, AI isn't at all politicized in my organization, nor are the people and teams new. On the contrary they are people who were quite capable developers until they got on the bandwagon.
Baconaise@reddit
Yeah there's no talking sense into you. Good luck over the next decade.
PureRepresentative9@reddit
That's like 1 function lol
How long are you taking to write one function?
Baconaise@reddit
It's the compounding effect of being able to write pseudocode and get full functions back with minimal adjustment. Beyond amplifying you this way (allowing you to stay focused on a deeper problem without having to context-switch yourself), you can also have it pump out SQL/Mongo queries: 8+ hour tasks done with roughly 20 minutes of back and forth.
hyrumwhite@reddit
I disagree. I think there's a balance, and learning where to use AI effectively is better than using it willy-nilly. Sometimes you'll spend longer massaging prompts than if you'd just done it yourself.
Baconaise@reddit
Again, practice.
JarateKing@reddit
Isn't the major selling point of LLMs that they're an extremely low bar to use? So far I haven't seen anyone using AI in a way that couldn't be learned in a weekend.
I don't get these comments that are like "well it can replace me just fine!" as if that's something to brag about. My first thought isn't that you're an expert prompter, my first thought is just that your line of work is more easily replaced for one reason or another.
PureRepresentative9@reddit
This is correct.
As a side note, the people saying this are very often not real, or not programmers actually working. As soon as you ask them a very basic question, they're completely unable to answer.
Western_Objective209@reddit
It can do far more than autocomplete. If you're using Cursor and you open a new repo you've never seen before, you can ask something like "I have these requirements: find the files related to them," and it'll just do it. It turns an hour of extremely boring code spelunking into something you type in 1 min, then walk away and do something else.
kibblerz@reddit (OP)
Do you even grep? And I don't think most devs spend an hour searching a foreign codebase very often.
Western_Objective209@reddit
Yes, I grep, and I inspect foreign codebases all the time. You never open codebases you aren't familiar with? IDK, kind of sounds like you don't push yourself out of your comfort zone very often, which I guess also lends itself to being very skeptical of new things.
kibblerz@reddit (OP)
I do often work with code I'm not familiar with, but it's usually shit code that not even the AI can figure out lmao (some sites we inherited, from people who tried using WordPress like it was .NET).
I have plenty of observability systems in place, so I can catch relevant logs pretty easily.
Western_Objective209@reddit
Do you think search engines are bad?
kibblerz@reddit (OP)
They've certainly gotten worse than they used to be; from what I recall this was intentional on Google's part, to push people toward AI.
I prefer using the docs. Occasionally I'll try AI when I'm stumped, but I've had very little luck with it.
Western_Objective209@reddit
You definitely don't have to use it, but it gives you a better natural-language query than search. If you don't even search things related to programming and just rely on documentation, then you're not going to get anything out of it.
I'm just not into doing lots of busywork; I would rather type a natural-language query than write complicated grep queries (which the chatbots are actually really good at writing).
But your original question was "is anyone actually being productive?" and the answer is yes. Not only can it write and analyze code quickly, it can help you learn new concepts much faster. Search is built in, so if you ask it to find sources, it can get you 10 links in a matter of seconds. It's just incredibly powerful, the weird existential hype aside.
Capable_Mix7491@reddit
Can grep search semantically?
kibblerz@reddit (OP)
The codebases I've worked with, at least, were structured enough that I knew where to look.
I pity anyone who has to work with a codebase where grep isn't sufficient
Dreadmaker@reddit
There are cases where normal autocomplete isn’t gonna do the job, though.
So I don’t use AI frequently and I don’t use cursor. I do however use copilot in VS code, and on Friday it saved me a buttload of time.
Basically, I’ve been working on an api at work, and we had to get it out fast fast. That included in some cases skipping tests because what we were writing was “simple enough” and the output of what we were doing was going to ultimately be tested downstream - so that particular service layer went without unit tests. Didn’t love it at the time, but in the spirit of ‘going fast and breaking things’ it made sense.
Recently, we’ve had a bit more space, so I claimed the day to go fix all of that, and add in unit tests.
We’re talking about 10ish different files that look quite similar - all of them are communicating with our central service and basically mechanically doing the reads/edits etc. so all the tests are going to be extremely predictable and formulaic - but still a lot to write.
I wrote one file to my liking. My style, making sure everything is organized well and commented appropriately where it mattered, all of that kind of thing. Then, for each of the other files, I told copilot to write unit tests, giving it the example of the file it had to write tests for, as well as the original test file I wrote for context.
It very quickly generated all of the remaining files. I had to proof read them, obviously, and I did have to fix a couple small things, but by and large it worked - it more or less directly copied and pasted what I did for the first file, but replaced the names with names that followed the naming pattern I had set out and were appropriate, and changed everything necessary to actually make it work well.
That probably saved me a few hours of work, including with the proofreading, and it would have been boring rote work, too.
So, obviously this isn’t a universal situation, but for sure if you use it for specific well-defined thing, and you give it good context, it can for sure save you time.
It cannot save you time on everything and it definitely isn’t gonna be good in many cases if you just say “build me a website” - you need to provide examples.
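For a sense of why this worked so well: by the description above, the files in question are mechanical wrappers around a central service. A hedged sketch of that shape, with every name invented for illustration:

```typescript
// A minimal stand-in for the central service client described above.
interface ServiceClient {
  get(path: string): Promise<unknown>;
  put(path: string, body: unknown): Promise<unknown>;
}

interface UserRecord {
  id: string;
  name: string;
}

// One of the ~10 near-identical service-layer files: it just forwards
// reads and edits to the central service, with types and paths swapped.
class UserService {
  constructor(private client: ServiceClient) {}

  async getUser(id: string): Promise<UserRecord> {
    return (await this.client.get(`/users/${id}`)) as UserRecord;
  }

  async renameUser(id: string, name: string): Promise<UserRecord> {
    return (await this.client.put(`/users/${id}`, { name })) as UserRecord;
  }
}
```

When every file follows this template, one carefully hand-written test file really is enough context for the model to produce the other nine, since only the names and paths change.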
ZorbaTHut@reddit
Yeah, I feel like a lot of people expect that AI is either perfect or worthless, and there's a lot of room in between.
I'm working on a project right now that has a few classes. There's Vector2, and Vector3, and Rect2, and Aabb (think "box".) There's also integer versions of these classes; Vector2I, and Vector3I, and Rect2I.
You might notice one is missing.
Yeah, that's right - we needed AabbI and it didn't exist.
So I grabbed all the source code for all the above classes, shoved it into Claude, said "write me AabbI, but in C# instead", and it did.
Then I said "wait that's only half done, you missed the second half of the functions" and it said "oh right I did" and gave me the rest of them.
I spent maybe fifteen minutes looking over them and fixing some minor issues; it probably saved me an hour or two of writing hundreds of lines of really simple code while mentally translating from C++ to C#.
Is it perfect? Nope. Is that worth the $20/mo subscription? Absofuckinglutely.
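For anyone wondering what kind of code that is, here's a rough sketch of an integer AABB in TypeScript; the real classes in the story are Godot-style C#, so the names and method choices here are guesses, not the actual generated output:

```typescript
// Integer 3D vector, the building block for the integer box.
class Vector3I {
  constructor(public x: number, public y: number, public z: number) {}
}

// Integer axis-aligned bounding box: same API shape as a float Aabb,
// just with integer coordinates. `position` is the minimum corner.
class AabbI {
  constructor(public position: Vector3I, public size: Vector3I) {}

  get end(): Vector3I {
    return new Vector3I(
      this.position.x + this.size.x,
      this.position.y + this.size.y,
      this.position.z + this.size.z
    );
  }

  // Inclusive lower bound, exclusive upper bound.
  hasPoint(p: Vector3I): boolean {
    const e = this.end;
    return (
      p.x >= this.position.x && p.x < e.x &&
      p.y >= this.position.y && p.y < e.y &&
      p.z >= this.position.z && p.z < e.z
    );
  }

  intersects(other: AabbI): boolean {
    const a = this.end;
    const b = other.end;
    return (
      this.position.x < b.x && other.position.x < a.x &&
      this.position.y < b.y && other.position.y < a.y &&
      this.position.z < b.z && other.position.z < a.z
    );
  }
}
```

It's exactly this sort of code (long, symmetric, and trivially checkable against a sibling class) where fifteen minutes of review on generated output beats hours of mechanical typing.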
Lethandralis@reddit
Because it is much faster to hit tab than to type 80 characters.
Would you say autocomplete is a bad tool and you would rather type every character? Probably not.
If you don't type the first letter and hit tab when you use autocomplete, why would you expect anything different from e.g. copilot?
I'm a firm believer that you can multiply your productivity by learning these tools. But to each their own.
kibblerz@reddit (OP)
It's not just hitting tab though; it's reading and reviewing what the AI wrote too. Even with normal autocomplete, you wouldn't be typing 80 characters. That's less than a minute of typing when you factor in normal autocomplete.
Lethandralis@reddit
Saving a few seconds here and there helps me a lot over the course of a day. You also get better the more you use the tools, e.g. you develop a sense of what to ignore and what to trust. To be fair I'm pretty happy with a copilot + chatgpt setup and haven't explored cursor or the other more hands off tools.
Crafty_Independence@reddit
The majority of existing boilerplate tools in .NET far outperform LLMs right now, but vibe coders act like we've been manually writing boilerplate all this time.
I mean, maybe they have, but the vibers I've observed aren't exactly the cream of the crop
kibblerz@reddit (OP)
Yeah, that's the other thing.. I rely on code generators quite significantly. SQLc and GQLgen are godsends for creating APIs in Golang. I write some SQL queries and a GraphQL schema, and I get all the methods and types I need through code generation as well as a good framework for my resolvers.
I tried explaining to my former boss that I was able to easily automate my workflow with precision as opposed to getting lucky with a prompt, and that it's entirely reproducible. He thought that kind of thinking was obsolete...
I don't think AI made him a better coder... lol
Constant-Listen834@reddit
Idk it’s probably a skill issue on your end tbh
neosiv@reddit
Oh it certainly can write whole features / apps for you. Try Cline (I recommend Claude) or Claude Code. Been in dev for over 25 years, and it’s already better than most junior developers.
Right-Tomatillo-6830@reddit
try aider or claude code (read the tutorials or watch screencasts first). you may change your mind.
PizzaCatAm@reddit
It's more than that.
FarYam3061@reddit
it's way more than auto complete and if that's all you're using it for then you're missing out
illhxc9@reddit
I really like the local-only model built into IntelliJ IDEA. It's just a souped-up autocomplete, but it's a lot better than the previous one.
Dodging12@reddit
Checks out for me. For example, it really hates that MUI deprecated the original Grid component in my experience from using it in my free time.
PerspectiveLower7266@reddit
Your boss would have the same problems regardless of AI.
Tell your boss what you can deliver and when. Ignore the 'extra' work and require prioritization. Work your hours and clock out or unplug. Your employer already told you that they're not going to go spend more money on employees. You'll be fine. Lazy employers like this rarely replace good workers even if they don't meet crazy expectations.
Can you be productive with it? Yes. Can it speed you up. Absolutely. But it's a tool. You gotta know how to use the tool right or it'll just cause issues. AI is my new rubber duck. Instead of talking to coworkers I shoot off a message to it and get some possible feedback. I use it to make simple scripts that I can have it write using 1 prompt. I have it generate basic test cases for me. I have it create utility functions. Lots of great value to be had and it can do it fast.
beachandbyte@reddit
I spend more time developing my AI tooling skills than programming now. It feels like a waste every time I actually have to code on the project; I'd rather just fix the bug in the AI pipeline.
mkx_ironman@reddit
I'm using it for pre-code review, creating unit tests (especially mocks), adding comments, creating docs, DevOps scaffolding (generating Terraform files and YAML files for CI/CD), and prototyping. It's useful for those mundane tasks, brainstorming, and boilerplate coding.
I don't think the quality of code I've generated from pure "vibe coding" was ever good enough to be production quality, and it would create more work for me to get it to that level.
rebelrexx858@reddit
MCP + Context7 will update the results to a more current implementation. With a little work on your prompting, you're likely to get better (NOT perfect) results.
kowdermesiter@reddit
Yes, I'm having a blast generating small iterative refinements in the product. About 60% in my current project is instructed and not "hand coded". I haven't paid for any subscriptions yet, just used the free tiers.
0x7FD@reddit
I had the same frustration as you until I tried Codex. It’s the one tool that’s actually been helpful
dedi_1995@reddit
It has trouble understanding designs. It hallucinates a lot despite being explicitly told not to. I don't think AI will replace engineers and developers for a long time.
schmidtssss@reddit
I actually used AI to generate an ~50 line function to aggregate/manipulate a bunch of data just last week. It was as simple as prompting it, correcting/expanding the prompt a couple of times, then plugging in some specific variable names. Took <10 minutes and the code was actually correct and took into account a use case I hadn’t.
jbristow@reddit
50 lines? That's like 5 minutes of work.
schmidtssss@reddit
Meh, it was some pretty complex logic in a language I rarely touch 🤷♂️
jbristow@reddit
It's barely more than 2 screens on a VT100, though.
What language? I find Copilot/Cursor/Qodo more than a bit shaky outside of Java (8-11), Python, or Javascript. And I haven't had good results with non-algol descendants at all.
schmidtssss@reddit
It was Java but you seem really bothered by 50 lines for some reason, lmao.
jbristow@reddit
I dunno, I have a hard time justifying the expense of LLMs, especially since writing the code is the easiest and most fun part of the job.
I'm not a fan of code review, and having an LLM write more than a line or two here and there is like my personal hell.
And I can't philosophically trust the output without looking at it or writing tests for it because I know in my heart of hearts that the only way to truly know what a program will do is to run it. (I blame the "Theory of Computability" class I had to take in college)
Be careful with Java, though... Qodo and Copilot keep suggesting the "bad old way" of doing things. Sometimes I feel like LLMs are most helpful to me when they're wrong (as it usually shakes something loose)
schmidtssss@reddit
I mean, I don’t pay for it, so it’s fine to me lol
bluewater_1993@reddit
I’ve begun using it, with mixed but overall good results and it does save me quite a bit of time. I’ve found that it doesn’t do a great job predicting what you are going to write for functions, although it will get fairly close. Enough that a few small corrections will get me to where I want to be. Where I’ve found it really shines is when I either need some quick help on implementing something new, or with unit testing. With unit testing, it writes my tests completely correct about 90-95% of the time, and when it doesn’t it’s typically one or two lines I need to fix. The part that really impresses me is that it uses my styling when outputting code, so it fits in seamlessly.
Overall, it saves me a great deal of time. I do have to point out though that I’m using GitHub Copilot, which integrates with Visual Studio/Visual Studio Code (among others). So the tool has context of my code base and how my project is laid out. When I used Copilot alone, it wasn’t all that great beyond basic questions.
s__key@reddit
Man, same thing here. What's ridiculous is that managers and CEOs are buying this marketing crap, while I (someone who was into neural networks long ago and knows how they work) understand that this probabilistic model is simply unable to replace anyone. For me it's much simpler and faster to implement the code myself than with LLMs. Some boilerplate, yes, it helps with that a bit, but nothing more.
sonofchocula@reddit
Roo + OpenRouter is very powerful if you use it like a tool and don’t get lost in branding or emotion.
kammce@reddit
I use Claude to build out boilerplate and to come up with ideas. I ALWAYS have to patch its mistakes. Also, I only ask it for stuff I believe it's likely been trained on. For example, I was building a custom C++ shared_ptr for smaller binary size and had Claude generate that. I had to fix a bunch, but it did reduce the boilerplate.
But yeah, LLMs suck at writing code. I have to instruct the LLM, on so many occasions, "no, no, no, stop trying to solve the problem in the way I've told you six times not to!"
It's productive if you use it, but attempting to depend on it is a disaster.
phil_lndn@reddit
yes, it multiplies my productivity somewhere between 5 and 10.
there are things it is good at, and things i'd never use it for, tho.
and even with the things it is good at, i think it helps to be aware of how the ai basically works and what its limitations are, so that you can approach the project and manage the AI in such a way that it is a help rather than a hindrance.
SparklyCatSyntax@reddit
Sounds to me like you and your boss are at the opposite ends of the AdoptAI spectrum :)
Vibe coding per se is useless IMO for real projects, let alone real large codebases. So he does sound a bit too fanboy to me.
But from recent experience, Cursor, with just one or two rules set up, is amazingly useful for important-but-not-fun coding (say, creating tons of new tests based on a specific framework)
sehrgut@reddit
It's for monkey work by people who ARE the monkeys, therefore not capable of usefully automating such work. And it's for job security for the rest of us, who will be cleaning up after all this vibe coding shit for the next decade.
Ace2Face@reddit
I think there's plenty of stuff where the AI clearly falls apart, even the state-of-the-art models. We just aren't there yet. I work in low-level C++, often in scenarios where documentation is sparse and knowledge is hard to come by. The AI just can't pull it off; it hallucinates too much, even o3 or o4-mini-high.
Artistic-Feature1561@reddit
I’m an old-school developer, but over the last year I’ve used Cursor and Windsurf a lot.
You must learn prompt engineering and some basic techniques that work, such as keeping the scope of each question small, giving the LLM specific context, and so on. Btw, if you need help, happy to have a chat
kibblerz@reddit (OP)
I'm no stranger to prompt engineering, I dove into that stuff when the first stable diffusion models released. I have productive conversations with AI.
It's just, when it comes to coding, trying to frame the task as a linguistic problem ends up more difficult than just writing the code
church-rosser@reddit
OP, what's wrong with you? Why do you not believe the hype?
PresentWrongdoer4221@reddit
Depends on the stack used, and on how much internet scraping and training they did for it. For Python scripts it works great.
For Rust or Groovy I find it miserable.
church-rosser@reddit
"works great" especially given how incredibly reliable all that scraped Python must've been....
drumnation@reddit
It’s not automatic. If you got into it a while back and have picked up a toolbox worth of skills by now, AI can do ridiculous things: well, cleanly, and 20x faster. If you come into it with no skills and use it vanilla, you’re going to have a bad time and wonder what everyone else is smoking.
Impossible_Way7017@reddit
The autocomplete and codebase search are a big advantage. I have Cursor at work, and being able to paste code, tab through autocompletions, and ask the LLM to search for files is nice.
I just use VS Code and a custom chatbot for personal stuff; I notice it’s a bit slower.
60days@reddit
Asking if AI is useful without specifying the model is a bit like asking "Is car good?"
floghdraki@reddit
Yes, I generate a lot of code. But then again, I do data science, and most of the stuff I code is disposable.
If I did maintainable software, I'd probably use AI in very different way. I'd still use it but I don't think I'd just copy paste generated code so much because when I do that too much I lose touch with the code base. The experience can be pretty painful definitely if you rely only on generated code. Mastering a code base > using LLMs to code.
Using LLMs skillfully to boost your productivity, yes.
Using LLMs to skip on learning good fundamentals, no.
coldoven@reddit
There are bad and good use cases, and making coding agents work is a skill: 1) understand which use cases are good and why; 2) repetition and failure.
Computerist1969@reddit
I tried Claude Sonnet recently. I just asked it to write unit tests for 3 functions. It faffed about for half an hour (constantly trying to rewrite my code and having to be told off multiple times) before proclaiming victory. I checked its work, and it hadn't even tested my functions; it had rewritten its own versions and tested those. It was like having a junior developer who could type 10,000 wpm but was unable to retain even rudimentary instructions. This was C++ code, if that matters.
kibblerz@reddit (OP)
You gotta learn to prompt engineer bro, youre just not using it right! /s
😂😂😂
Computerist1969@reddit
Lol, you might be right!
Way I figure it, the models keep changing, so being good at prompting now doesn't mean you won't have to re-learn it later. So I may as well bury my head in the sand and code like I always have. If AI starts working, I'll learn it then; it can't be difficult, FFS. If it doesn't, then I've won.
blobbleblab@reddit
I personally find it quite useful. But, I give it:
1. Great prompts, with context around what I am trying to do
2. Pseudocode of the basics
3. Some examples of the kinds of inputs and what I am trying to get out of it
Once I have all these things (which has become a stream of consciousness), I hit up ChatGPT, usually because I think the most recent versions are actually pretty good... and it's free. It also offers surprisingly good suggestions.
That gets me 80% of the code I need. It won't work out of the box. I add another 10% with follow-up clarifications; then I'm 90% there. That's when I take over and finish it off.
reini_urban@reddit
Yes, I'm productive with this crap. The hit rate is about 10%, but it's trivial to skip a suggestion. In some cases it produced good and badly needed code that we hadn't been able to produce ourselves in 2 months.
With Python the hit rate is pretty high, with C extremely low, with C++ OK. You need to treat the code very skeptically, as if produced by a junior who has no idea what he's doing. But you get surprises.
MrLyttleG@reddit
Hello. The problem with AI lies mainly with the heads of tech companies who know nothing, or very little, about programming, and who are convinced the technical work is easy and that paying fewer devs means more profit. We are still deep in a fantasy when it comes to uneducated, blind leadership. Even when the AI is capable of writing code, the results are quite funny, even hilarious. As a senior dev with 27 years of experience, I have a lot of ideas that I draft, and sometimes I challenge LLMs with them; oh my God, the results are often middling when they are not outright counterproductive. In short, when the industry wakes up, after believing in a mirage and after LLMs have been used by a bunch of amateurs to produce code that is impossible to maintain, they will try to pull the devs back out of the closet... In the meantime, dear IT colleagues, do not despair; nothing is settled, nothing is eternal in this world. We are in a cycle again, the same as the early 2000s, when devs were considered idiots with no added value, but the wheel will turn like it does every cycle. Let's keep hope and build our value ourselves. It is not an LLM, or no-code, or big data, or a data lake, or whatever other pseudo-technical commercial wanking that should impress us; we are much better than the mantras coming from the tap-dancing gurus of Silicon Valley.
Proud_Refrigerator14@reddit
I use it as better code completion. It helps me a lot with appearance stuff: I'm a full-stack dev, but really I just want to do backend. Since I'm not getting paid for perfect UIs but also don't want to torture the users, this saves me a ton of time when I have to write HTML/CSS. On the other hand, it's only a matter of time until I've accidentally picked up enough CSS to rewrite the visual abominations I've vibe-coded, rendering the LLM almost useless again.
Dry_Way2430@reddit
AI is incredibly useful, but you have to know how to use it properly; it's still a tool. What AI does really well is amplify decision-making. What you CANNOT rely on it to do is write fully functional, working code with no oversight. It is not a senior engineer but rather an eager intern with good ideas and sometimes bad ones.
RoadKill_11@reddit
Some things that will help:
Use Cursor rules. Use detailed prompts. Don't jump straight to coding: first discuss the design and the choices that need to be made, get the AI to produce a PRD or design document, and iterate on it until you're happy. Then let the AI proceed with the plan. The Task Master MCP is useful for organizing this task-breakdown/PRD process.
If you use it right it saves a lot of time
xxtruthxx@reddit
Accurate.
Consistent_Mail4774@reddit
Also wondering the same thing, since I haven't found it that helpful. I'm only using the free Copilot version, so I could be wrong (mostly using the Claude 3.5 model; I've also used 3.7), but so far it produces a lot of unnecessary code that needs heavy cleanup and refactoring, so it's not saving me time. Many times it takes multiple attempts to do something, no matter how detailed I make the instructions. It also doesn't write clean, scalable, or efficient code, in my experience.
I wonder why everyone keeps saying it's making developers more productive. Like, what tools and models are these devs using? I keep hearing some companies are laying off most of their devs and keeping a few seniors because AI supposedly makes them more productive. I wonder how.
secondhandschnitzel@reddit
I don’t think the layoffs are based on productivity gains. I think “layoffs because of productivity gains” is a fantasy told to investors to increase the valuation. It’s possible because most of the orgs doing layoffs massively over-hired when capital was cheap. After all, if teams were actually that much more productive, wouldn’t they primarily be investing in new product development?
Consistent_Mail4774@reddit
There is someone who said the same in the comments, that their company is not hiring devs and they're the only person to do dev work because the CEO thinks AI can make them more productive, which isn't the case. I think this is happening frequently. CEOs and managers are obsessed with AI.
Intelligent_Water_79@reddit
I've stopped coding for the most part. AI does it faster. But I only let it code at the method level.
Beyond that it is more likely than not going to screw things up
Intendant@reddit
To be honest with you, using it is a skill. You need to find the workflow that works for you, but it's really very useful once you've got that figured out. It is painful for you and especially for your teammates before that point (huge messy PRs)
CalmLake999@reddit
Try Claude CLI
hotcoolhot@reddit
I have had quite good success with regex so far; that's something beyond my own ability to handle. Everything else is hit and miss, but I try to guide the AI and get some success.
kibblerz@reddit (OP)
Pretty sure everyone just wings it when it comes to regex lol.
StTheo@reddit
I’ve learned that it can produce some useful TypeScript mapping types that I end up regretting adding to a codebase because they’re so convoluted and difficult to understand.
kibblerz@reddit (OP)
Yeah you probably want to be pretty well versed in types for typescript. Though I don't think individual projects need typescript much, just libraries since the types make great documentation.
I stopped bothering with typescript because the types seemed so limited compared to other languages.
Fspz@reddit
Yes, depending on what purpose and in which way you use it your results will vary a lot.
If you keep playing around with it, you'll eventually get more familiar with what works and what doesn't, so you can better judge its strengths and weaknesses and apply it in a more targeted way with less time wasted.
I've found that the more framing/direction you give it the better, also providing context tends to be important and try not to give it too much creative freedom unless you're just brainstorming because when it has creative freedom it tends to use it.
I've gotten some pretty nice stuff out of it. Granted, it took many iterations and edits, but overall it's been a plus, and for certain things I'll definitely let it spit out an initial draft to get me started, which can be a nice time saver. I've also had it come up with ideas to optimize things in ways I wouldn't have thought of. It's in fashion to shit on AI in subreddits and forums like this one, but if we're honest and humble we'll admit it can be used to improve some of our code beyond what we could do alone.
Consistent_Mail4774@reddit
May I ask what model did you find most helpful? I've tried giving it very detailed instructions but like OP, not finding it very helpful. How do you get it to optimize things? So far using Claude 3.5 or 3.7 and it writes a lot of unnecessary code and doesn't optimize even when I tell it to.
You also mentioned brainstorming, does it ever help in that? I find it never disagrees or discusses things (tried multiple models at that).
Fspz@reddit
OpenAI's more advanced models have given me the best results. I built a little .bat file that copies all the relevant code in a directory (and its subdirectories) to the clipboard, so I can quickly paste the code and context I want into a prompt when necessary.
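A minimal sketch of that kind of context-gathering helper, in Python rather than a .bat file; the function name, extensions, and file-labeling format here are all illustrative, not anything the commenter specified:

```python
from pathlib import Path

def collect_sources(root, exts=(".py", ".ts", ".cs")):
    """Concatenate matching source files under `root` into one prompt-ready string."""
    parts = []
    for path in sorted(Path(root).rglob("*")):
        if path.is_file() and path.suffix in exts:
            # Label each file so the model knows where each snippet lives.
            parts.append(f"// File: {path}\n{path.read_text()}")
    return "\n\n".join(parts)

# Pipe the result into a clipboard tool rather than hardcoding one, e.g.:
#   python collect.py | clip      (Windows)
#   python collect.py | pbcopy    (macOS)
```

Piping to `clip`/`pbcopy` keeps the script itself platform-agnostic.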
I've also used some plugins in Rider to generate code in a more convenient way. They're not quite as advanced, but good enough for some stuff: one suggests autocomplete on the fly in situ, which tends to speed up writing tests or LINQ queries, and another, called CodeBuddy, lets me pass context and modify prompts on the fly, and even filter changes by selecting files in a git-diff-like view where I can selectively accept changes.
PizzaCatAm@reddit
You need to get an agentic framework and build lots of context, then 3.7 flies like magic.
TheRealStepBot@reddit
100%. There is skill and patience required to build up a good context. It’s not one prompt. It’s progressively expanding a context until it allows the model to solve your actual problem.
Knowing where to start is tough. I often have to start over. But when the thread is in that happy groove it’s incredible what it can do.
My ability to cross stacks is kind of insane.
creaturefeature16@reddit
100%. I definitely think it takes a shift in thinking to know what you can offload to the LLM and what you'll just work through yourself. And the ability to generate contextual code examples that I can use for brainstorming has been one of the best things I've ever been able to do in my 20 years of coding.
I've been able to learn so much more by being able to have basically a "dynamic tutorial generator" that also functions like "interactive documentation".
Ok_Description_4581@reddit
I have a coworker who is using AI, and the AI code is better than what he usually does. The problem is that he now introduces bugs to our codebase faster.
ILikeBubblyWater@reddit
I use it for 6 months now and have been insanely productive with it.
It's a user issue
Spider_pig448@reddit
It sounds like you could use some training on how to actually use these tools. They don't build things for you, they act as your assistant. They require experience to know how to use it effectively.
Dorme_Ornimus@reddit
I find it useful. Given enough context and constraints, it's like an overly motivated junior dev. I've also found it helpful to keep a file with the general architecture layout and the technologies (with versions), and to make the model re-read it every time a task is assigned. Most agents also have a file that helps them understand their own context; I normally use that file for general rules, for example whether the project uses SOLID principles or which type of encryption we're going for. This makes the code more aligned with your ideas. The problem is that you need to do the fucking work to make it work, so it's worth the effort in the long run, but not for menial stuff.
Comprehensive-Pin667@reddit
It is very useful for "monkey work" as you say. Here's the SQL definition of a table, please create the entity framework model, repository, dto and API controller. It's actually quite slow at finishing this type of task - much slower than I would be - but I get to work on something else while it does this. Maybe I'm preparing the front end for the same use case, or writing some business logic that I'll need later, or reading up on some documentation that I'll need later, or asking another instance of the AI to write unit tests at the same time, then vetting them and extending them to cover all corner cases.
The non-agentic inline editor is also useful, for example for writing regexes.
Knock0nWood@reddit
It's great at a lot of annoying boilerplate things, overall I love it but it frequently disappoints and can't rely on it for anything complicated
PlasmaFarmer@reddit
AI is good for making non-engineers believe they finally understand what engineers do. I use AI, and it makes me more productive by summarizing concepts and documentation so I can quickly get up to speed on libraries or frameworks I've never used before. It helps with boilerplate code, but it hallucinates a lot. I asked it to write concurrent code for me last night, and it just couldn't handle synchronization between the threads accessing a shared object. I asked it to write me a Gradle task, and all it did was put my request into a println("My request here, word for word as I gave it in the prompt") statement, even after asking it multiple times to fix it. It's AI slop. It constantly messes something up. I ask it to generate a service; it does, but there's an error in it. I ask it to fix the error; it regenerates the service, fixing what I asked but breaking something else. And I play this game until I get tired and write the code myself.
tom-smykowski-dev@reddit
Not immediately. At first I used it as autocomplete, then, like you, for various tasks. Code generation came later, and it was the biggest jump. AI is only good at generation if it can produce fairly working code on the first run and follow all the guidelines; otherwise it doesn't make sense, because changing the code is more time-consuming than writing it from scratch. I've tested several IDEs to find the one that works best for this, and I still use 2-3 AIs to do stuff. It doesn't replace me 100%, but now I mostly guide the AI to do what I want and focus more on the big picture and quality. If you're interested, I've started a newsletter where I share what I've learned.
jondySauce@reddit
I pretty much just use it to do string manipulation in C because it's a fucking pain.
morbiiq@reddit
I disabled it.
autokiller677@reddit
I use AI a lot to generate boiler plate code, tests, get some ideas how to do a new feature etc.
Of course it’s not ready to ship code. But if I e.g. have a reasonably simple function that can throw some exceptions and takes a list as input, AI consistently generates good base level tests like testing that all the exceptions are thrown with the correct message, testing the behavior for an empty list and list with only one element as input etc.
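A sketch of those "base level" tests, in Python for brevity; the function, its messages, and the test names are all invented here, just to illustrate the kind of edge-case coverage an assistant will reliably generate for a simple validated function:

```python
def average(values):
    """Toy function: validates its list input, then averages it."""
    if not isinstance(values, list):
        raise TypeError("values must be a list")
    if len(values) == 0:
        raise ValueError("values must not be empty")
    return sum(values) / len(values)

# The kind of boilerplate tests an LLM tends to get right:
def test_empty_list_raises_with_message():
    try:
        average([])
    except ValueError as e:
        assert str(e) == "values must not be empty"
    else:
        assert False, "expected ValueError"

def test_wrong_type_raises():
    try:
        average("not a list")
    except TypeError:
        pass
    else:
        assert False, "expected TypeError"

def test_single_element():
    assert average([42]) == 42.0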
Saves me a lot of time. Or, more realistically, improves our test coverage, because I would likely not write all the small tests or forget some of them.
It’s a tool with certain capabilities. Learn what those capabilities are (at the moment) and employ accordingly.
Loud-Necessary-1215@reddit
My employer is pushing for AI hoping to increase speed and productivity as the situation is hard atm. I use Copilot which helps a lot with tedious tasks like unit/e2e tests. Not much for other atm - maybe next iteration or me adopting more will help.
3flaps@reddit
It’s pretty good for languages you aren’t familiar with & one off scripts. Abysmal for UI so far, doesn’t show good judgement for clean code or architecture. Treat it like a personal, energetic, emotionally resilient intern who has a great breadth of knowledge, but not much depth and ability to connect the dots the “right way”. It’s not comparable to any other intelligence that we have as humans. It has different properties. This is also changing.
Designer-Teacher8573@reddit
Glad to see this. I can't for the life of me get usable code out of it. At least nothing I'd put into production.
roodammy44@reddit
I tried using a lot of different models. The best I’ve used so far is Claude Code. It’s a command line tool. You direct it to your project’s root directory and then you give it very specific instructions. It will then iterate over the instructions, compile and test them, asking for input along the way.
I then paste the results of some of it into ChatGPT for a code review, and then go over the code line by line. It’s very impressive how fast it can be done.
Consistent_Mail4774@reddit
Do you find Claude Code a lot better than for example using Claude models in Copilot?
roodammy44@reddit
Yes, absolutely. Copilot only has a very local context. Claude Code traverses your project’s root directory to read it and make changes.
rudiXOR@reddit
Yes, you might be doing something wrong. AI is super helpful, especially when starting to implement something with a well-known open-source library. Sometimes I even learn new patterns or new functionality from libraries I've worked with for years. It doesn't help with architecture, with following the existing patterns, or with debugging, though. It also tends to be verbose and sometimes finds impractical solutions. It's just a tool, but for sure a powerful one.
IamNobody85@reddit
Lol, AI can't generate my React components, and it also can't calculate something as simple and formulaic as how much baking powder I need to substitute for baking soda in a cake. I screwed up my cake yesterday because I'm not a confident baker yet and decided to trust the AI. At least I can fix my React components myself. I'm still very salty about it.
As for being productive, it can catch easy errors and it helps me with fixing typescript errors and for tests. I almost exclusively use it to write unit tests, there it does save me a lot time. But for actual tasks, not so much.
Mistuhlil@reddit
Hot take here. If you’re a seasoned developer, you know what changes need to be made, and that is where AI shines - give specific instructions on what needs to be done and it’s a far more efficient workflow.
People are being resistant towards AI just like people used to be resistant towards technology in the past.
Adapt or get left behind.
You can iterate far faster with AI than without. It’s a tool. Use it.
Sevii@reddit
AI and vibe coding are great at simple small scale tasks. If you are a senior dev you can do the same things already so it doesn't seem impressive. AI isn't great at making precise edits to existing services.
They are great at building simple stuff. I used Claude to help me make a reasonably complicated iOS game without even bothering to learn Swift. It wasn't magic, but I would have spent hours just learning how loops work in Swift, whereas the AI just handles that in my app.
AI can just write a bash script for you based on a plain-language prompt. Doing it yourself might take a couple of hours.
kregopaulgue@reddit
This. It just can’t handle precise changes in legacy codebases
Tired__Dev@reddit
I actually vibe coded a bunch of features with a 3D web framework, and from the vibe code I had a direction to learn the things I needed to build an app. I still needed to understand the code to know what was wrong when it broke, but it was mostly vibes. That gave me enough insight to go through a Udemy course and see how things were structured wrong in the vibe code. It for sure gave me a direction and saved months of work.
I also used AI to teach me an entire language and ecosystem that would have taken me a year to learn otherwise. I could grind out well-known Udemy courses, ask AI to take notes and dumb things down, go and code, break shit, restructure it, get whole books broken down for me, reanalyze my code for best practices, and then do it all again. My progress relative to seniors who really know the language and ecosystem has been profound. My experience with other languages makes it easier to spot the shortcomings of the code it gives me back.
It only works if you treat everything skeptically. You need to verify what you're doing with other people. That said it makes everything go faster. Hours to weeks of research on a topic in seconds.
zopiac@reddit
I just used Copilot to code something for the first time, and this is about what I got out of it as well. I don't do GUI stuff but wanted something for my raspberry pi back home for handling photos, and it knocked something up very nicely in a few hours.
If I knew GTK widget declarations and everything already it probably would have taken me about as long to code by hand, but I very much don't so it was appreciated. I was honestly very impressed.
Now, the moment I decided to start asking it to tweak this and add that and change the behaviour of this... everything started falling apart. It would rewrite things it didn't need to, breaking them, or just spitting out bunk code that doesn't begin to accomplish what I asked.
And this is a <400 line pile of nothing for an offhand hobby project. If this were hooking into databases and managing large scale services with the need to be even moderately cognizant of how it all interacts? Yeah nah, I don't trust it for a second.
SonkunDev@reddit
Btw, are you on Arch?
Blecki@reddit
The latest chat gpt is pretty good at math.
NOT arithmetic. Use a calculator for that. But math, as in, implementing algorithms? Yes.
My assessment remains that it's only successful when implementing small pieces with clear requirements and you need to know what you're doing yourself to use it well, but it can spit out working implementations of known mathy things.
716green@reddit
It's just a force multiplier. If you're a good engineer, you're going to be a more effective good engineer. If you're mediocre and ambitious, you're going to write good code with AI. If you're mediocre and lazy you're going to write bad code with AI.
Learning where and when to use it is a skill in and of itself but now that I've worked it into my process, I can't see myself going back just like I can't see myself going back to the days before LSPs.
I effectively get to work on the more interesting problems and delegate mindless, boilerplate or infuriating bugs to AI agents. It has made work fun again but I'd also consider myself a competent engineer who leverages AI to increase my productivity, I'm not some vibe-coder trying to automate my job.
I still have strong opinions on the correct way to do things, I design the systems, I create a style guide, I organize the project, I do the building, I just work faster and have more fun with it now.
A good portion of my team was laid off recently. They haven't said as much, but I suspect that the increased productivity with AI tools is probably responsible, which is ironic because we're technically not supposed to be using AI tools because there is so much international bureaucracy but that's irrelevant.
I have plenty of criticisms for AI tools but "unproductive" is not on the list.
Mountain_Sandwich126@reddit
/tabs for the win.
It's about 35-40% right on predictable boilerplate.
Makes the boring stuff faster and I spend more time on solving the problem.
kibblerz@reddit (OP)
Don't you dare spread such heresy lol.
I understand predictable boilerplate.. But I use ordinary codegen tools, which behave predictably, to make predictable boilerplate..
Using an unpredictable tool to make predictable boilerplate seems like a bad approach. I guarantee that many of the people pushing AI code generation are ignorant of what metaprogramming can accomplish.
Mountain_Sandwich126@reddit
Yeah you can't skimp out on tests, and i still write my own tests rather than rely on ai.
It's not where marketing says it is, and it's painful seeing non-techies swallow it up like it's going to solve all their problems.
But I guess we'll just charge 3x when things mess up beyond repair and the magic words come out:
"It's a rebuild"
kibblerz@reddit (OP)
All of us AI doomers should get together and start a company prepping for this day.
Another thought that just came to mind.. The federal government has cut all funding towards the CVE database from what I've heard.. So all this AI slop, and an underfunded/existentially threatened CVE database?
I feel like the internet may just collapse at that point lol. I may just start a gourmet mushroom farm 😂
Weak-Ad3985@reddit
I feel like it helps with basic, repetitive code that I don't want to write.
spacedragon13@reddit
Garbage in, garbage out. If you think you're "too good" for AI, you probably aren't. I'm a decent engineer, but I work on projects with doctorate-level TLs with 20+ YOE in enterprise settings. They all incorporate AI into their workflow, but they aren't letting Cursor or ChatGPT make decisions. They describe exactly what they need in detail and generate small chunks of code, which are reviewed before being accepted. They enjoy the power of these services without being controlled or limited by them. Version control is managed intelligently, and nothing is moved to live environments without passing through a testing pipeline. I would recommend Windsurf over Cursor, and learning how to leverage these incredible tools without becoming helplessly dependent on them.
_TakeTheL@reddit
I’ve been using the Augment plugin for VS code, my company is paying for it. It’s actually very helpful, and has sped up my development quite a bit. It indexes your whole project and can take into context all of your files when necessary.
NiteShdw@reddit
I mean... Using neovim to write code is literally using a tool whose foundation goes back to the 60s. So he's not wrong that you're literally stuck in the past.
As far as AI, it's good at two things: repetitive tasks, things you don't know how to do.
If you have been writing code in language X for 20 years, AI is probably not going to help. But if you suddenly get pulled into a project using language or framework Y that you don't know, AI can more quickly give you what you could find out for yourself with Google and reading docs.
Whether that's useful to you or not is up to you.
kibblerz@reddit (OP)
Neovim capitalizes on Vim's philosophy, but it flourishes with plugin integration. My setup is full of modern linters and integrations I can access with a few keystrokes. I know more about configuring a modern LSP than most who use modern IDEs do. I could just as well say using computers based on binary is being stuck in the past.
NiteShdw@reddit
Yeah, I've seen these setups... Not sure how it's better than any other IDE that has all the same stuff built in.
I knew a guy whose VIM had so many plug-ins to make it similar to my WebStorm setup that it would hang and crash constantly and he had to start stripping out plug-ins.
You do you. I'm sure there are plenty of AI plug-ins for neovim.
kibblerz@reddit (OP)
There are a few ways that it makes things easier. First, you aren't searching a UI when trying to get things done. It's muscle memory, you get to whatever line in the file via as few key presses as possible. Instead of scrolling down to the error in line 784, you type 784gg and you're there.
Instead of going to your mouse to delete a line, you press dd. Instead of holding down an arrow button to get to the beginning of a line, you press ^. To delete a word, you just press dw. Want to delete all the way up to a capital S, you just type dfS (delete - find - S).
If you have a bunch of nearly identical changes to make, you can use / (find) and use the key/term you want to make the change at. Press qa and you start recording a macro on the key a, which records all of your keystrokes so that you can replay them later with @a.
Vim was made with text manipulation in mind, and it's VERY effective. It has a steep learning curve (it can take months to build that muscle memory and get productive), but once mastered it's nearly unrivaled for edits.
Add in Neovim, and you can get all the advanced functionalities that other IDEs get. It's also pretty fun because you practically build your own IDE. I like configuring shit lol
NiteShdw@reddit
Do you think that IDEs don't have shortcut keys?
I rarely use the mouse in my IDE. I have tons of shortcuts for searching, switching files, looking at diffs, refactoring, whatever. Literally every command in JetBrains and VSCode can have a shortcut or chord attached to it.
kibblerz@reddit (OP)
Do you use vim keybindings or something? Because in my experience, the shortcuts don't really pertain to text edits (besides maybe find and replace) unless you use the vim bindings. I'm not just talking about commands, I'm talking about keyboard shortcuts for directly changing the code, not just running a refactor.
NiteShdw@reddit
No I do not use vim keybindings.
I'm saying that your argument that modern IDEs require a lot of use of the mouse to find and use features is an inaccurate statement.
Yes there are keys for selecting within a block, selecting to the next or previous word, expanding a selection, adding multiple carets to do simultaneous edits, moving functions up or down, etc.
They are not the same as VIM. They don't follow the same action/verb style combinations. But that doesn't mean that the vast majority of what people do with vim keybindings can't be done just as efficiently.
I would argue that IDEs can even be more efficient in cases such as doing symbol renames across a whole codebase or other actions that benefit from having the full codebase indexed.
I'm not saying that you should change your workflow. You do you. I'm just trying to point out that just like many people don't know vim and so complain about it, you're basically doing the same by putting down tools you actually aren't even familiar with.
BTW, I wasn't honestly trying to insult your use of Neovim. I was poking a little fun, since I know you guys love to brag about it.
I use neovim myself but only as an editor and viewer, not as an IDE.
kibblerz@reddit (OP)
For changes across projects, I just use the terminal.
I'm also primarily DevOps/fullstack Webdev, so I just need an editor with a linter. IDEs definitely serve a place with lower level languages though. I wouldn't use VIM for something like rust, that's for sure.
I'm a bit biased, I hate bloated UIs and personally I find them a bit overwhelming. So just having an editor focused on text and memorizing the commands that I need is preferable.
NiteShdw@reddit
There's nothing wrong with that.
I do wish setting up plug-ins in neovim was easier. The documentation talks about the plug-in functionality but doesn't say how do use it.
I once spent a whole afternoon trying to get LazyVim and plug-ins set up. It was not a fun experience. I eventually gave up.
kibblerz@reddit (OP)
Yeah, I'm not that much of a purist where I set everything up myself lol. I use NvChad. Most things one would need are already pre-configured (including lazy.nvim), and they also have a tutorial on how to set up new plug-ins or configure existing ones.
NiteShdw@reddit
I'll take a look.
TheTrueXenose@reddit
I write my code and use the LLM as a less capable junior dev reviewing my code, and I do mean less capable.
PossibilityFit5449@reddit
Cursor, Live story:
Done. Saved me an hour of writing the mocks and debugging what exactly is wrong with a test setup.
——
For production code, I do two rounds of AI — one for ideation, another one for polishing. And between those just write the solution myself.
——
AI won’t do the work for you, but it may make it bearable to do things you like less.
techie2200@reddit
It's useful in certain scenarios. Today I wanted a bash script that had a bunch of different regex and other test conditionals. I prompted cursor to write it, then all I had to do was tweak it a bit and it was good to go. Took me way less time than trying to remember proper syntax seeing as I work primarily in typescript.
Right-Tomatillo-6830@reddit
try using it with a programming language and/or framework you are not familiar with.. now you see why people are hyped about it.. (because they don't know what they don't know).
kibblerz@reddit (OP)
I suppose so. It terrifies me how much hype AI gets. We're gonna end up in an avalanche of technical debt...
Right-Tomatillo-6830@reddit
yes, I'm already seeing it.. at first it seems like it levels the playing field until you find some security problem or some deeper issue and have to hire a real dev.. someone on reddit was showing their template vibe coding thing a week ago, it took me about 10 seconds to find a clear security issue.. sad, but ultimately real devs benefit from there being more code in the world that needs maintenance..
kibblerz@reddit (OP)
Idk, id prefer to build new projects with good practices than for all these companies to generate crap that I have to fix 😂
I've had to talk multiple clients out of AI bs this year.
Right-Tomatillo-6830@reddit
hey you don't have to take that work, but it does mean more demand for devs who know what they are doing..
kibblerz@reddit (OP)
I suspect much of the AI slop would probably warrant a complete rebuild 😂 At least that'd be my response to a client who brought me AI slop lol.
Right-Tomatillo-6830@reddit
hmm, don't discount AI too much. It's really worth learning so you can make a good judgement on it. sure prompting "make me a todo app" will get you some slop, but if you learn how to code an agent and perhaps model it and prompt it for your coding style you might find that you can generate some good code in less time than you normally would.. I'm personally interested in a version of aider that can do TDD or is tailored to a TDD process. In fact given that AI works better on small tasks I think TDD is a really good fit for it..
BertRenolds@reddit
The only good Jenkinsfile validator I've found.
kibblerz@reddit (OP)
Understandable. Though analyzing yaml is closer to analyzing language than code IMO.
BertRenolds@reddit
Well, groovy
Blues520@reddit
It can be useful for trivial code and helping to brainstorm. Beyond that, I've not had much luck.
I've been wrestling with a difficult feature for the past week. Tried Gemini, Qwen, and Deepseek. All failed to produce working code. Gemini comes close, but it doesn't care about performance or maintenance. Then again, it can't really know because it's just an autocomplete on steroids.
billybobjobo@reddit
Very productive. Not really understanding the people who don't get use out of it. Don't vibe code. Just break things into very small tasks that you architect well, micromanage, and scrutinize the output. If I've built something 1000 times before, I can prompt an AI to do it just as well as I've done it, much faster.
crinkle_danus@reddit
I'm trying Cursor AI on the weekends. It's slow. It generates a solution that I need to read and analyze, only to find a bug. Then I check if it can solve that bug, and it slowly generates code that produces another bug. And that's under their "fast response" generation. I can only imagine how slow it is on their mini models.
Reverted back to using nvim with Copilot autocomplete/review/unit test plus ChatGPT for brainstorming/documentation and other stuff.
TotalHoney2664@reddit
I find AI helpful but only to a certain extent, some basic stuff.
Individual-Praline20@reddit
Don't lose time with that crap, and laugh loudly in any middle or upper manager's face for suggesting it will save their software business 🤣
Synyster328@reddit
I probably burn hundreds of dollars a week on AI tools, it frees me up mentally to think about other things.
Generative AI is a legitimate skill though, it takes time to get good with it. First of all you need to approach it with an open mind. If you go into it already with your mind made up, all you'll perceive are downsides without appreciating the potential.
hell_razer18@reddit
AI is always enhancing, never replacing. The problem is that everyone already has the perception that it saves time before even fully implementing it and calculating the whole thing. They just have the utopian, ideal version of what AI can do.
Generating code, yes AI can do. Generating code that matches the requirement..hmm thats another thing
kibblerz@reddit (OP)
What blows my mind is that all of these pro AI people don't seem to talk about traditional code generation at all.. AI didn't invent code generation and it certainly didn't perfect it. It's the most imprecise way of generating code.
thehodlingcompany@reddit
I've used it to write a few "process" docs we've needed to pass various audits. I just fed it some emails I had written ages ago to juniors, plus some stuff off Teams. I doubt the auditors even read them, and neither will anyone else. Saved me literally hours.
kibblerz@reddit (OP)
That's a great use for AI. I use AI for writing language; I just find it counterproductive for writing code. When it comes to writing website quotes and things of the sort, AI has been a great way to put my technical decisions into layman's terms.
I also will consult with AI about how some coding ideas work. It's just writing code that I've been repeatedly disappointed.
Code just uses language to represent abstract concepts, but so much of our time coding relies on the parts of the brain for mathematical calculation or, in the case of video games, spatial reasoning. Programming just isn't really a linguistic task; it just uses linguistic syntax.
Amazing_Bird_1858@reddit
My work is data and analytics focused so I'm usually wrestling with hacky scripts and this helps me implement logic that I usually have a good idea on going into, same for boilerplate and db type stuff.
shozzlez@reddit
I can kinda get it to work after a good deal of effort. I usually feel that if I spent the time Googling and researching like I used to and just did it myself, it would end up taking about the same amount of time, but without as much frustration. Like driving the same distance on the highway vs. stop-and-go traffic: same amount of time, but one is much less annoying.
Rabble_Arouser@reddit
Absolutely. You just need to know how to use it. Never vibe code, always be very pointed in its usage.
Also, if you're trying to apply AI code to back-end code, you're asking for a bad time.
For front-end, it's fine. It seems quite good at non-TypeScript. Start adding in TS and you're again in for a bad time.
All of that said, it's really good if you're VERY specific.
jeffzyxx@reddit
Using RooCode + Claude, it’s quite useful for me at work - though I use it less for writing, and more for research / summarizing. Working in a legacy Python app with tons of hacks, it’s handy to do the initial “research” phase of bug fixes. E.g. “I know I have this value in context on this page, but I’m not sure why. Find all the places we set this value and give me the stack of functions that got it there.”
It’s stuff I could do myself, but it might take 15-30mins. Instead I let Claude spin for 30s and write up a report which I then use to fix the bug in a couple mins. Sure it spent $0.30, but that’s a hell of a lot cheaper for the company compared to 15 mins of a dev’s time.
almost1it@reddit
Yeah I'm probably the most sceptical of AI coding on my team. That said I still do use it daily as a more efficient stackoverflow. I can tell it to implement straightforward utility functions and boilerplates but hit rate drops off drastically after that.
I do think the entire industry is being psyop'd into thinking AI is way more capable than it actually is by people with incentives to do so. There is a place for AI but I think people need to adjust expectations significantly.
Google releasing a 68-page prompt engineering guide and OpenAI releasing a 34-page doc on building agents was massively hyped, but IMO it was also just another example of building with extra steps. If I need to be an expert at min-maxing prompts, then I'd rather just cut that out and write it myself.
jollydev@reddit
It's hit and miss. Sometimes Cursor can one-shot small features in agent mode - like half a day's worth of work. And it does it in 5 minutes, so in those cases it's incredibly useful.
In the best cases, I'm 90% happy with the implementation and just need to do some small tweaks.
But in the majority of cases, even if it gets it right, the code quality is bad. Outdated usage of libraries and programming languages, overly complex and often buggy.
Overall - as a cursor user I spend more time debugging, reviewing, prompting and refactoring than I do writing code line by line. I don't do that at all anymore.
IMO - the best use case I've seen is using it like a programming language, just in natural language. It really needs that level of detail to perform well.
Born_Replacement_921@reddit
I only use the type ahead in editor.
It drives me nuts when I pair with coworkers using chat. I watched someone uninstall brew and Xcode because chat told them to. They broke their env for like 2 days.
kibblerz@reddit (OP)
Lmao peak vibe coding right there 😂
Particular-Walrus366@reddit
I work with microservices that are quite opinionated. Cursor is insanely good at writing code that follows the same patterns as the rest of the codebase and writing tests. I’d say it easily writes most of my code today (obviously I review and tweak as needed but it does all the grunt work).
FitBoog@reddit
Be extremely detailed in your prompts. Describe your intentions exhaustively. It can save me hours of figuring things out.
i-can-sleep-for-days@reddit
I use it. It solved a problem that I wouldn't get from Stack or Google because it understands the context and comes up with an answer that works specifically for me and the problem I am solving right now. It isn't like Stack or Google, where unless you are using the same library or have the exact same issue you can't just copy and paste. It takes the answers and applies them to your situation, and that's pretty huge.
Not to mention what I am working with a lot of come in the form of google groups and I am in no way looking to read through a long thread just to find that the situation doesn’t apply to me.
jujuuzzz@reddit
It’s fine as long as what you are doing is not new. If you need to introduce new patterns and packages that it hasn’t been trained on… it’s pretty painful.
Impossible_Ad_3146@reddit
Yes very helpful
SD-Buckeye@reddit
Tf are you doing with AI? Install Cursor, code normally, hit tab to autocomplete when necessary. When the code's finished, copy-paste it into ChatGPT and ask it to make mocks/unit tests for said code. How did you all get this far as software devs when 99.99% of you can't figure out how to be productive with AI?
kibblerz@reddit (OP)
Why not just write the code myself? That sounds like a bunch of extra steps compared to just writing the code.. I don't even need to touch a mouse at all with vim lol
SD-Buckeye@reddit
There's no way you can type up unit tests and mocks faster than ChatGPT can. You offload the simple stuff that's dead easy for AI to handle. It also doubles as a mild code reviewer to catch things. If you expect to just tell AI "make me a React app that does X and Y" of course it will be a waste of time.
CompellingProtagonis@reddit
One thing it is very good at: if I'm having trouble naming a variable, it's really good at coming up with a good name from a description of its functionality and the kind of vibe I'm going for. It's a long workflow for a relatively small thing, though, so it's not something I do often at all.
MeatyMemeMaster@reddit
U need to git gud with it and learn to prompt engineer correctly. Be specific about what you want.
kibblerz@reddit (OP)
Why put in so much effort to engineer a prompt when I can write the code myself with less effort? 0.o
TheMatrixMachine@reddit
I'm a student. I find it useful as an alternative source to documentation. Documentation can be tedious to look through. I wouldn't trust AI to generate a huge piece of code. It'll need debugging and alteration and that's difficult to do on a large amount of code.
Fearless-Habit-1140@reddit
I’ve been burned by AI giving me bad documentation SO MUCH. It provides something that seems plausible, but just isn’t there.
TheMatrixMachine@reddit
I haven't had that issue (yet). Wouldn't surprise me for things that don't have as much stuff online or if the resource has gone through a huge progression very quickly
Fearless-Habit-1140@reddit
I’ve had a similar experience.
A colleague posited that in the future, AI-assisted coding would be something like how we use compilers now: for the most part, we can take them for granted because some really smart people spent a lot of time getting them dialed in. Knowing compilers help engineers understand the whole stack, but for the most part we can do a lot of our work without having to really think about the compiler on the day-to-day.
Not sure I fully agree, but even if that is the case we’re a ways away from making that happen
Alone-Dare-5080@reddit
Eventually AI will be a service and all these dumb managers will change their minds.
Icy_Foundation3534@reddit
Skilled programmers are laughing all the way to the bank, writing code in hours instead of weeks.
emphasis on skilled.
shrodikan@reddit
What model are you using? It was crap for me when I was using GPT4o. Then I switched to Claude Sonnet 3.7 (Thinking) and it changed the game.
I've used it for diagnosing errors, understanding the codebase using @workspace and even replacing parameters in SQL.
AyeMatey@reddit
In the category of quick hacks: today I wrote - no, today I directed an assistant to write - a Python web scraper tool that had to do a series of POST requests, about 25 of them, to a remote website. Then it did some counts and aggregation on keywords in the jobs it found, and produced a bar chart with the results.
Using AI to produce this was much faster than doing it myself.
I still had my hands in the code, moving things around, renaming, adjusting manually. But the AI was my pair programmer. And was much faster than me.
After I looked at the chart I decided I wanted some other aggregation, so I told the assistant to modify the code to cache the scraped data with a timestamp, so it didn't have to make all those outbound POST requests each time. Then I told it to extend the analysis to produce other charts. This was all really fast.
I’m not a python expert.
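The caching step described above is a nice concrete example of the kind of code an assistant typically produces. A minimal sketch might look like this (purely illustrative; the cache filename, field names, and `scrape_fn` hook are my assumptions, not the commenter's actual code):

```python
import json
import time
from pathlib import Path

CACHE_FILE = Path("scrape_cache.json")  # hypothetical cache location
MAX_AGE_SECONDS = 3600                  # refetch after an hour

def load_or_scrape(scrape_fn, cache_file=CACHE_FILE, max_age=MAX_AGE_SECONDS):
    """Return cached scrape results if fresh enough, otherwise re-scrape and cache."""
    if cache_file.exists():
        cached = json.loads(cache_file.read_text())
        if time.time() - cached["timestamp"] < max_age:
            return cached["jobs"]
    jobs = scrape_fn()  # the expensive part: the ~25 outbound POST requests
    cache_file.write_text(json.dumps({"timestamp": time.time(), "jobs": jobs}))
    return jobs
```

The aggregation and charting can then re-run against the cache without hitting the remote site each time.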
dobesv@reddit
Using AI tools is a new skill, so it's normal to be frustrated a bit at first.
finally-anna@reddit
I primarily use it for weird edge cases and/or syntax in languages I don't regularly use. For instance, trying to figure out the properties available in a vCenter API for creating new VMs from templates that don't have properties available in Terraform (like the user data and metadata properties used by cloud-init). Let me tell you how useful the VMware docs are for that...
E3K@reddit
My productivity has had a massive boost thanks to AI. All I ever hear from people is bitching and moaning while I'm over here shipping features. Maybe y'all just don't know how to write prompts.
kibblerz@reddit (OP)
Maybe you just don't know how to write code 😂
daemonk@reddit
I am not writing web dev code. I use it to generate “boilerplate”-ish code. For example, I wanted a hardware component abstract class in python. I gave it some general parameters and it gave me a class and how to use it. I ended up removing about 25% of the code because I didn’t need the functionality and retained most of the code. It works and is being used alongside other classes I generated (ie. generate a singleton component manager class, generate a serial communication interface class, etc)
I don’t necessarily trust it to generate things at a very high level (ie. generate an app that does X), but writing a short technical prompt and getting something within minutes for me to revise and integrate into existing set of code quickly is nice.
Software development is only a part of my job though, so perhaps my use case is different from people who specialize in it.
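The "boilerplate-ish" abstract class described above might look something like this (a minimal sketch under my own assumptions; the class and method names are invented for illustration, not taken from the commenter's code):

```python
from abc import ABC, abstractmethod

class HardwareComponent(ABC):
    """Base class for hardware components; subclasses implement the I/O details."""

    def __init__(self, name: str):
        self.name = name
        self.connected = False

    @abstractmethod
    def connect(self) -> None:
        """Open the underlying connection (serial, USB, etc.)."""

    @abstractmethod
    def read(self) -> bytes:
        """Read raw data from the device."""

    def status(self) -> str:
        return f"{self.name}: {'connected' if self.connected else 'disconnected'}"

class SerialSensor(HardwareComponent):
    """Hypothetical concrete component."""
    def connect(self) -> None:
        self.connected = True  # real code would open a serial port here
    def read(self) -> bytes:
        return b"\x00\x01"
```

Removing ~25% of what the model generates, as described, usually means trimming extra abstract methods and convenience helpers you don't need.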
cbusmatty@reddit
The fact that you call it "AI crap" shows you're not giving this a fair chance, nor putting honest effort into understanding it.
AI, like everything at your disposal, is a tool. If you use it incorrectly you will have poor results.
Vibe coding is idiotic, and if you're even remotely equating it with agentic coding assistance, that again shows you are so far off the path to using this correctly that you should start over.
You should be using agent mode to build documentation for your code, extra unit tests, pipelines and smoke tests. Diagrams in Mermaid or C4. You should not be "vibe coding". You should be using it like a super Google that pulls current standards, library usages and updates. Anytime you would need to go to SO, you're now using AI to give you the better solution.
AI will make your job a hundred times easier, and people with half your skill will do way more than you because they were willing to use tools effectively. I cannot for the life of me understand how we are being handed magic tools that simplify what we do and let us automate the boring stuff, and people are resistant to it. Ignore it at your own peril.
crazyeddie123@reddit
The "boring stuff" is all the shit we don't have to do because someone is paying us to write code
jeremyckahn@reddit
I mainly use Neovim but Cursor is handy for scaffolding things and getting a jump start on some straightforward tasks. I use it a few times a week and I like it for what I use it for. I can't imagine using it for everything and actually producing better work that I would with Neovim, though.
WiseHalmon@reddit
In short I've had good success with cursor+Gemini on a vite react nestjs scss app. Small, from scratch.
I've had good success with files and functions context less than 3-10k lines.
Vibe coding for me has been a mixture of "holy crap, this 30hr idea took 3hr" and "damn, why do you keep getting stuck on my linter/prettier that requires you to use " vs ' ..." or some other bullshit issue.
SirCatharine@reddit
I like AI code completion for exactly one thing: writing tests. My company’s testing library requires so much boilerplate that it takes 3x as much code to test a thing as it takes to build the thing. AI does make it easier.
soonnow@reddit
Using AI has a learning curve. Going in with the expectation that it will fail but hey at least I gave it a shot will set you up for failure.
AI is amazing but you need to learn when and where to use it. Saying AI built me this or that is probably not the right approach.
Here's what I use it for:
* Asking questions about language features I'm unfamiliar with (because I'm language hopping a lot): "How do I handle synchronization in C#?"
* Building classic algorithms that AI has a lot of sources for: "Please make a sort algorithm that uses off-heap memory access, with the following data structure."
* Tests: "Write a test for this code." Often when the test looks wonky it's because your code is wonky.
* Code reviews: "Can you see if there are any obvious errors with this code?"
* Error and stack analysis. Dump in a stack trace: "Can you see why the program hangs?" or "I have a StackOverflowError with this stack trace. Here is the code. See any issues?"
As you can see, I'm never asking it to just build something. It's always a tightly defined context. The better defined it is, the better the result.
It's a learning process, but I absolutely feel it's increased my productivity and code quality.
Qweniden@reddit
I use it to debug, do CRUD scaffolding, write utility functions and give API examples. It makes me quite a bit more productive. I don't have AI in my IDE; I just go to ChatGPT or whatever and ask questions or give directives.
Confident_Cell_5892@reddit
It depends on what you want to build. Sometimes it speeds up your productivity, sometimes it even decreases it. You should be able to find the sweet spot.
For example, for stuff like Kubernetes/Helm manifest declaration, it worked like a charm. For backend development, Copilot helps me with the code docs and autocompletes in a way that really helps me (after I coded several parts of the project, it learns from your patterns). So a productivity boost here.
Nevertheless, when I wanted to set up a Bazel monorepo, it certainly helped me out, but I lost many hours following ChatGPT steps that led nowhere. I started doing things the old-school way, searching for docs, diving into source code and so on. Got it working after a while. So, definitely a productivity decrease.
Choose wisely.
drink_with_me_to_day@reddit
It helps translate EXPLAIN ANALYZE output into English; I just vibe coded some slow SQL queries away...
keelanstuart@reddit
I pretty much never ask AI for code... but it's been really helpful in tracking down problems.
throwaway1253328@reddit
I've found quite a bit of value in it. I do a rough design before I use any AI, then step through my thought process and describe how I think the solution should work. I've found the best models can spit out something I can quickly adapt to be something real.
It's best to keep it to a low-ish level of complexity. If the component is over 500 lines or spans multiple files, it gets lost and spits out garbage.
The_0bserver@reddit
I use it in a couple of ways.
Generally, go in with a plan on how to write, and then iterate over the code it gives
amenflurries@reddit
I’ve found Claude fairly helpful in a lot of different situations. ChatGPT is almost always pure garbage.
cactusbrush@reddit
AI is surprisingly good in the most loved tasks by developers: testing and documentation. And this is what I use AI the most. If your code architecture is good - it will create tests without any problems. If the tests are complex and struggling - then you need to refactor your application logic. Refactoring has never been easy with AI.
With regards to the business logic, you're right. It's often easier to write the code yourself than to explain the logic. And AI usually struggles to make changes across many files - and sometimes even in one big file. You might want to break up that task like you would with a junior engineer.
I use three models. Gemini is the best in nuances. Claude is the best coder overall. And ChatGPT. Well. It’s good in creating unit tests :)
But if you try any infrastructure related items - you will fail miserably. Terraform, CDKs, go modules for cloud and k8s is not the strongest skill for any LLM. Nobody’s replacing devops in the foreseeable future.
cescquintero@reddit
Now I only use it to generate very precise stuff.
Some weeks ago I tried Cursor for the first time and it failed miserably at a task. It needed to nest some code inside a module, and it ended up creating new files, refactoring code, and creating new functions.
I reverted changes and did everything manually.
My next tries were just generating tests. It did better. I had to correct it a couple of times and then it went smooth.
Now I'm using DeepSeek via the Zed editor and I apply the same principles. Small, concise tasks. Precise questions passing just enough context. Been doing fine so far.
jam_pod_@reddit
I find it (Claude specifically) does pretty well at relatively small, self-contained tasks — “create a module that accepts a set of Prisma schema files as input and converts them to Typescript types” was one I used it for recently. It got about 90% of the way there, I had to add handling for some syntax myself
Icy_Peach_2407@reddit
I think it's also important to understand that its usefulness highly depends on the domain you're in. For web technologies I imagine it can be very useful. I work on highly specific embedded software (C++) with tons of internal technologies/HW/nomenclature, and it cannot understand the context. It can be useful for generic helper functions though.
ZestycloseBasil3644@reddit
Yeah, totally get this. AI’s great for quick boilerplate or explaining stuff, but for anything slightly custom or with new libs? Just give me my keyboard and let me vim in peace
Tomato_Sky@reddit
I used it for learning, but I hate it for actual work. I wasted a whole Friday this last week with o3 and got zero work done while trying to get it to solve my bug, which I eventually found myself while fumbling through it.
A lot of “You’re right! We can’t do that because limitations.” But a lot of pretending it could.
jb3689@reddit
I find it useful for snippets - particularly doing annoying SQL stuff like rank and partitions
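The "rank and partitions" snippets mentioned here are a good example of where a generated fragment drops straight in. A sketch of the pattern, run against an in-memory SQLite database so it's self-contained (the schema and data are invented for illustration):

```python
import sqlite3

# Rank rows within partitions: the classic "top seller per region" query shape.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sales (region TEXT, rep TEXT, amount INT);
    INSERT INTO sales VALUES
        ('east', 'ana', 300), ('east', 'bob', 500),
        ('west', 'cam', 200), ('west', 'dee', 400);
""")
rows = conn.execute("""
    SELECT region, rep, amount,
           RANK() OVER (PARTITION BY region ORDER BY amount DESC) AS rnk
    FROM sales
""").fetchall()
```

Each region gets its own independent ranking, which is exactly the kind of window-function boilerplate that's annoying to recall from memory. (Window functions require SQLite 3.25+.)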
tomqmasters@reddit
The key is to break the problem down into small, easy-to-digest chunks. Same as ever.
SUMOxNINJA@reddit
What I do is write the monkey code then use AI to help me find the optimizations of a function or class.
I find that helps me avoid some of the hallucinating that AI does with functions that don't exist or things like that. Also I have essentially written the logic so I understand it fully.
creaturefeature16@reddit
I imagine the first IDEs were a hard experience to get used to, as well. I would say: if you're going into the experience with such skepticism, you'll find plenty of reasons to scoff at it.
If you go into it looking for how it can help you with productivity, you'll likely have a different experience with it. Context is king, and you can really prep these tools with a massive amount of rules to ensure the code you get back meets your standards. I have numerous Cursor rules and .md files detailing what I am looking for. It took some time to set up, but once you've done it, it's done and you reap the benefits as you go.
But if you just dive in and expect decent code, you're going to be let down.
dryiceboy@reddit
I still just use it as a more efficient search engine. It works wonders for me that way.
I’m also starting to use it for code auto complete for common snippets and refactoring suggestions.
kibblerz@reddit (OP)
I'd argue that it only works wonders as a search engine because the internet has become dead lol.
I do use it to get the general idea in an area I'm unfamiliar with, but I primarily use it to understand how different libraries work, not using them for me.
dryiceboy@reddit
And that’s exactly what it’s for. A tool for people to use the way they prefer.
reboog711@reddit
I'm not sure if it comes from GitHub Copilot or super-improved IntelliJ IntelliSense, but guessing what I'm about to write in a loop or unit test has worked pretty well. I still have to edit it, but it gives me a really good jumping-off point.
I don't have enough of that yet to determine the productivity gains, though.
ttkciar@reddit
For writing code, no, I haven't found it particularly useful.
For understanding code, Gemma3-27B was a huge win for me. I needed to get up to speed on a coworker's nontrivial project fast, so I dumped each python file to Gemma3-27B with instructions to "Explain this code in detail."
That worked very well. Some files I had to have it explain twice, because it needed one or two in-house libraries in-context to understand them, but overall it was a grand success.
kibblerz@reddit (OP)
So it's great for understanding bad code (like most Python code) lol. I feel like it's less useful when there's a decent type system in place. I hate Python and its tabs...
Repulsive_Zombie5129@reddit
Literally just helps with what you said, monkey code. Things where i know what to write, i just don't feel like it.
Always still need to tweak it to get it to work though
ub3rh4x0rz@reddit
So one thing I've noticed is that, unlike when writing code yourself, faulty design communicated in the prompt will go "all the way", vs subtly changing course mid implementation when doing it yourself. Accordingly, smaller features with easier to conceptualize and communicate requirements can be scaffolded reasonably close to what I would do, then I can take over and bring it home.
If you pursue a bad design from the outset, trying to prompt your way back on course is a frustrating waste of time
Arneb1729@reddit
Apparently productivity gains from AI are around 20%. Hardly a reason for trillion-dollar investment when so much low-hanging fruit isn't picked.
I got bigger productivity gains than that by switching to a shell with a quality history – Fish in my case, though I hear that Zsh+Atuin is awesome too. Then I gained another >20% productivity by adopting Tmux. And it's not just us terminal junkies. My job duties involve looking at other devs' and QA folks' shared screens a lot. What I learned from watching them is that, no matter if they use VSCode or PyCharm or Cursor, they always have half a dozen cmd.exe or GNOME Terminal instances open and they always get lost finding the right cmd.exe window and the right command to copy-paste from a home-grown .txt file.
teerre@reddit
I spend some time setting up https://github.com/olimorris/codecompanion.nvim to the point it's pretty natural in my workflow. I would say it's ok. It saves some google alttabs. Sometimes I ask to replace some code when it's boilerplate-y enough. It works more or less
My main problem with it before was that the workflow was just terrible. I had to redesign it in a way that made sense so I could finally use it
drnullpointer@reddit
My organisation is pushing *HARD* for AI.
The issue is, that people who have trouble developing are the ones who are most enthusiastic about using AI and at the same time they are least equipped to make use of it.
The basic issue, as I understand, is that AI solves what should be *the easiest* part of the job. Coding is the easiest part of the job for a good developer. The real job is figuring out what you want to code.
And if you don't know what you want to do, the AI will not figure it out for you.
Then there are more second order effects:
* AI is simply unable to clean up any code. So there is a huge bias towards writing new stuff rather than cleaning up or refactoring things
* new joiners stop learning to code. Without being able to code, they are powerless to do anything the moment AI is not able to figure it out for them.
and so on.
Personally, I am half tempted to open my own consultancy aimed at cleaning up after failed AI implementation projects.
DeterminedQuokka@reddit
I really like ai while I code. I find it to be really helpful a lot of the time. I have mine pretty well trained to only generate the rest of the line and not like entire functions. And it works pretty well for me.
Radinax@reddit
It's very helpful to me, but you need to explain things like it's a dumb kid - as much detail as possible. Without context it won't be too helpful.
zato82@reddit
I use it to write unit tests, quick scripts, OpenApi specs… all the tedious stuff that no one likes doing. To do my actual job, it’s just an auto complete that is a little better than most.
ryanstephendavis@reddit
Writing new features into your boss' AI generated slop is even worse 😭
hyrumwhite@reddit
I’m using cline and whenever I need to write a utility or mapping function it does it really well. Larger scale stuff it does something like 80% good stuff, 20% stuff I need to fix.
I don’t use it all the time for everything though. Generally whenever it’s a standalone, straightforward task.
No_Soft560@reddit
I am using AI all the time. From autocomplete on steroids (autocompleting whole methods sometimes) to drafting code to searching errors/bugs to discussing things.
chairmanmow@reddit
It's no silver bullet. I liken it to a junior developer on meth that will take my copious verbal abuse for an internal project that will never be updated; it's more useful for fun side projects than my job.
I've used it to get up and running quickly with languages and environments I'm not familiar with, to some degree of satisfaction initially, only to get deeper into the project and realize the AI left some bugs, missed requirements, and created a mess of spaghetti code that requires my intervention to unravel.
Often the AI gets something wrong, you tell it what's wrong, it changes something, still wrong, try again, it makes things worse, be explicit about what changes to what lines, out of memory, try again, back to response 1 based on a faulty premise. Get angry, walk away, think about the problem. Come back? Sometimes. Since I started playing around with it I've started and not finished projects way more than usual. Easier to walk away from idiotic AI code than my own, apparently.
gopster@reddit
AI should be used as a coding buddy and yardstick. My team did a PoC with Copilot, and this was my senior devs' feedback: it came up with ideas sufficient for us to think differently. It did some boilerplate React code nicely and gave some useful debugging insights. We only had a limited enterprise version to play with, so it could only do 50 lines of code per function I think, which was weird. Anyway, management is now pushing GitHub Copilot. Let's see how that works. Waiting in queue to try it.
ProfBeaker@reddit
I've found it useful for constrained tasks where I know what I want to do, but I'm not great at actually doing it. eg, writing a bash script to do some fairly straightforward AWS command line stuff, or doing some simple data manipulation with Pandas.
In areas that I'm already quite proficient, or that involve lots of context and loosely-defined considerations about future direction, it's a lot less useful.
I'm still somewhat skeptical of full-on vibe coding for anything larger than toy projects, because I think an important part of coding is thinking deeply about the problem space and the solution, which you miss out on.
trcrtps@reddit
I use Neovim with CopilotChat and it's fine. I don't want to vibe code.
It's useful to know when the AI is starting to fuck up. Restart and ask different things. or just code it yourself if it gave you enough to go on.
I think I use AI pretty well in my workflow, as I have to jump around to different codebases from ruby to node to vue to terraform all the damn time. It helps quite a bit but I don't overuse it.
LateWin1975@reddit
I think you're letting this one experience define your perspective, which is a mistake.
AI is a tool, like a library, or saas or anything else that makes some people very efficient and others overly dependent.
Some use Claude directly (subscription) others use cursor (usage). Ultimately it’s extremely effective at super charging you if you know what you’re doing and integrate it into your flow in a way that suits you.
If AI is a hammer most great engineers are carpenters who leverage it and its variants to better utilize their own skills.
In my experience the people who tend to talk about vibe coding and one-shotting in cursor are closer to toddlers discovering a hammer and bashing anything and everything
enserioamigo@reddit
Yeah it's not great. I've wasted so much time trying to get it to help with Angular, when I could have just spent that time actually learning something while debugging the issue at hand.
RiverRoll@reddit
I've had some success recently getting Copilot to do most of the work in some refactors. It wasn't perfect, but it saved me a lot of typing and searching.
Adept_Carpet@reddit
I'm a big proponent of it for monkey work, but the thing is my monkey work is highly varied (Windows, OSX, and multiple flavors of *nix) and a lot of my work is in a proprietary language tied to an IDE. It's a place where you have a dozen different projects with a dozen different workflows.
But when I was working on a single project, I could go from getting assigned a ticket to completing it and releasing it without leaving vim and sometimes without even using insert mode. Then there is the surrounding environment, a shell that has aliases and scripts for common tasks, it can be very highly tuned for productivity in a way that AI can't (at least not yet).
arcticprotea@reddit
It’s good for translation between languages. I had to go from bash to powershell and not knowing powershell it saved me maybe 10 minutes.
I tend to use it as a better google. Ask it questions. Bounce ideas around. Chuck error messages at it to figure out configuration issues.
old_man_snowflake@reddit
i hate powershell with a burning passion, but that does seem like a good use.
godwink2@reddit
Idk about react but its been pretty solid when I need some basic jQuery
iPissVelvet@reddit
The rule of thumb right now is — treat it like a junior dev.
As long as you’re challenging it, reviewing it closely, it can be good. But it’s not a 10x gain in productivity, no way. To me, I use it as a smart rubber ducky — it isn’t increasing my velocity any, but I’m sleeping better at night when my code ships.
bombaytrader@reddit
Yes definitely. Depends on what you doing .
super_slimey00@reddit
OP, this is just how things progress. Before AI becomes better than you at writing code, it needs to train and work its muscles. I know it's scary to think AI will eventually stop using human logic and depend on its own language; perhaps this is a part of a process toward humans not being required to develop code. It's no different from what's going to happen to the entertainment industry.
Specialist_Bee_9726@reddit
For HTML/CSS/JS it's quite good; for more advanced stuff, not so much. Autocomplete is 50-50 for me - sometimes it's amazingly good, but often it's quite annoying. I use it as a search engine and brainstorming tool most of the time.
intertubeluber@reddit
Agreed and this is my biggest gripe. Who wants a suggestion that’s right 50% of the time? It distracts from the task at hand.
defenistrat3d@reddit
Seems pretty good at unit tests, completing repetitive structures like enums and switches, generating mock data, creating short scripts in a "close but needs some tweaks" state.
Hit or miss with things like generating user readable instructions or code comments.
Pretty bad at completing a logic block that has already been started.
Giving a command like "write a function that does x" varies widely. From dog-poop insanity to "huh, I learned a better way to do something".
publicclassobject@reddit
If AI isn’t making you at least a little faster, you probably are not using it correctly.
roger_ducky@reddit
I use it in two ways:
After writing out a few unit tests by hand, have it autocomplete the next test after giving it a sensible name. AI generally guesses the “boilerplate” right in those cases so I only have to adjust the assertions.
Asking it to generate a module after specifying the details like I would do with a book smart junior dev. Instead of waiting half a day, I can get the “PR” in 5-10 minutes, then ask for changes. Usually I get mergeable code in 40 mins to an hour.
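A sketch of the first pattern above, assuming a hypothetical `parse_price` function under test: the hand-written test sets the pattern, and a descriptive name for the next test is usually enough for autocomplete to guess the body.

```python
# Hypothetical function under test.
def parse_price(text: str) -> float:
    return float(text.strip().lstrip("$"))

# Hand-written test establishes the "boilerplate" pattern...
def test_parse_price_handles_plain_number():
    assert parse_price("3.50") == 3.50

# ...then a sensible name like this one is typically all the
# model needs to fill in the body; only the assertion values
# need adjusting afterwards.
def test_parse_price_strips_dollar_sign():
    assert parse_price("$3.50") == 3.50

test_parse_price_handles_plain_number()
test_parse_price_strips_dollar_sign()
```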
wackyshut@reddit
Just because it doesn't work for you doesn't mean it doesn't work for others. I have used it a lot in the last 6 months. It was a steep learning curve initially, but once you have the right prompt and break the work into smaller chunks, it has helped me a lot with trivial tasks. Of course you won't expect it to do the entire feature for you with just basic instructions. You just have to know what to put in your prompt.
dalmathus@reddit
I often ask it to provide me a snippet so I get the syntax correct. Literally asking it just for a boilerplate of something specific I want to do.
I haven't had a lot of luck getting anything productive out of it yet for things I literally don't know how to do.
General AI like chatgpt is also quite good at quizzing and educating if you want to try and learn the basics of a new topic and have it test you to make sure you learned what's important.
But otherwise it's mostly a "create a wrapper checking if an object exists with a Create table statement with 4 nvarchar fields 4 numeric fields and 3 indexes" factory
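The "factory" output being described is roughly this kind of boilerplate. A sketch using SQLite so the example is self-contained (table, column, and index names are made up; SQLite accepts the NVARCHAR/NUMERIC type names even though it doesn't enforce them):

```python
import sqlite3

def ensure_customers_table(conn: sqlite3.Connection) -> None:
    # Wrapper that checks whether the object exists before creating it.
    exists = conn.execute(
        "SELECT 1 FROM sqlite_master WHERE type='table' AND name='customers'"
    ).fetchone()
    if exists:
        return
    conn.execute("""
        CREATE TABLE customers (
            name     NVARCHAR(100),
            email    NVARCHAR(200),
            city     NVARCHAR(100),
            country  NVARCHAR(100),
            balance  NUMERIC,
            discount NUMERIC,
            tax_rate NUMERIC,
            loyalty  NUMERIC
        )
    """)
    conn.execute("CREATE INDEX ix_customers_email ON customers(email)")
    conn.execute("CREATE INDEX ix_customers_city ON customers(city)")
    conn.execute("CREATE INDEX ix_customers_country ON customers(country)")

conn = sqlite3.connect(":memory:")
ensure_customers_table(conn)
ensure_customers_table(conn)  # idempotent: second call is a no-op
```

Tedious to type by hand, trivial for an AI to stamp out from a one-line description, which is the point of the comment above.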
Riajnor@reddit
Unit tests: nice. Other stuff: not nice.
dExcellentb@reddit
Are you using the agent or just code complete? I find the agent to be good when operating on simple codebases, but as soon as there's some complexity, all hell breaks loose. On the other hand, the code complete is generally pretty good but less powerful.
One workflow I’ve been applying:
1. Use a reasoning model to generate a design doc
2. Make edits
3. Use the reasoning model to generate interfaces, no implementation
4. Make more edits
5. Code up the component(s) using code complete
AI is pretty good at high-level things, or simple low-level things. This workflow has improved my productivity by probably 30-50% on average.
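Steps 3 and 5 of that workflow, sketched with a made-up `RateLimiter` component: the reasoning model produces only the interface, then code complete fills in the implementation method by method.

```python
from abc import ABC, abstractmethod

# Step 3 output: interface only, no implementation (hypothetical component).
class RateLimiter(ABC):
    @abstractmethod
    def allow(self, key: str) -> bool:
        """Return True if the caller identified by `key` may proceed."""

# Step 5: implementation written with code complete, one method at a time.
class FixedWindowLimiter(RateLimiter):
    def __init__(self, limit: int) -> None:
        self.limit = limit
        self.counts: dict[str, int] = {}

    def allow(self, key: str) -> bool:
        # Count calls per key; refuse once the fixed limit is exceeded.
        self.counts[key] = self.counts.get(key, 0) + 1
        return self.counts[key] <= self.limit

limiter = FixedWindowLimiter(limit=2)
print([limiter.allow("u1") for _ in range(3)])  # [True, True, False]
```

Keeping the interface small and agreed-upon first is what makes the code-complete step tractable, per the workflow above.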
Then-Boat8912@reddit
It actually takes time to figure out how to use it properly. You have to get past the annoying phase. Forget about the vibe coding meme; it will also annoy you.
AppropriateSpell5405@reddit
It's helpful for either mindless repetitive nonsense or writing stuff in a language you don't have mastery over.
If you're being forced to use it in a scenario where it literally slows you down, then that's just stupid.
gumol@reddit
I use cursor as a really smart autocomplete. I find it very useful.
I don't really prompt it for anything, maybe once in a while.
MrJaver@reddit
I wouldn’t vibe code, but it’s pretty good at making sense of various dumps of data and code. E.g. it’s pretty good at explaining why my css is messed up when I dump html with tailwind into it. Or it can give some useful UX advice if you give it a screenshot. It might be good at writing unit tests, but I’m very specific about that so I do it myself. I can also ask it how certain framework/language features work and why, but it could make things up.
cappielung@reddit
I've found:
When you're already an expert, it doesn't do a lot besides parrot your repetitive stuff. When you need to bootstrap, it actually is as good as they say, IMHO.
Sandwich_Academic@reddit
I make it parse docs for me and extract the bits I need
DstnB3@reddit
Play around with it and you'll see what it's good at. For me, I use it for well-defined, small parts of the code, or refactoring. Also for comments.
tragobp@reddit
Tests, PR reviews (but not in terms of design, just minor issues), and code documentation.