Is anyone actually using LLM/AI tools at their real job in a meaningful way?
Posted by WagwanKenobi@reddit | ExperiencedDevs | View on Reddit | 467 comments
I work as a SWE at one of the "tier 1" tech companies in the Bay Area.
There seems to be a huge disconnect between the cacophony of AI/LLM/vibecoding hype on social media and what I see at my job. Basically, as far as I can tell, nobody at work uses AI for anything work-related. We have access to a company-vetted IDE and a ChatGPT-style chatbot interface that uses SOTA models. The devprod group that produces these tools keeps diligently pushing people to try them, making guides, running info sessions, etc. However, it's just not picking up (again, as far as I can tell).
I suspect then that one of these 3 scenarios is taking place:
- Devs at my company are secretly/silently using AI tools and I'm just not in on it.
- Devs at other companies are using AI but not at my company, due to deficiencies in my company's AI tooling quality or internal evangelism.
- Practically nobody in the industry is actually using it.
Do you use AI at work?
Azaex@reddit
Tried out a company Q Developer seat for a sudden proof of concept push building a scrappy python workflow in a data warehouse recently.
I don't use agent mode. Pushes too much random crap and burns tokens. Instead I do allow it to read the directory and I frontload a ton of context into my prompt and tell it exactly how I want something done. Like 5-8 paragraphs sometimes in huge cases. Not giving it a requirements list, more about "we are in this system building this requirement. handle these variables in this manner in this order, split your calls for maintainability and add reasonable comments". Either I review its proposal by eye, or I slot the whole thing in and review the git diff. I'm using it pretty much to avoid fighting through random syntax errors, which started to scale really effectively in this recent push.
This has started to lead to an interesting cycle where I'm focusing my attention more on refactoring pseudocode, and I just code review the AI's output to make sure it isn't doing anything heinously wrong with how it interpreted the language or the task. Kinda like reviewing a highly technical intern's work, except that intern works disturbingly fast.
So not vibe coding where I tell it to try doing something open ended, I'm boxing in exactly how I want it done. It has been crazy to be able to tell it something like, take that for loop we just wrote and reinterpret it in as a user defined function that we can execute as a cluster operation instead of a single node step. And it knocks out a sufficiently usable implementation in seconds instead of me having to burn an hour or so completely reimplementing the thing from scratch.
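The loop-to-UDF move described above can be sketched in plain Python (hypothetical column names and FX-rate logic; on a real cluster the extracted function would be registered as e.g. a Spark UDF rather than mapped locally):

```python
# Hypothetical per-row logic, extracted from the loop body so a cluster
# engine could register it as a UDF and run it per-partition.
def normalize_amount(amount, currency, fx_rates):
    rate = fx_rates.get(currency, 1.0)
    return round(amount * rate, 2)

# Single-node version: the original for loop.
def convert_rows_loop(rows, fx_rates):
    out = []
    for row in rows:
        out.append(normalize_amount(row["amount"], row["currency"], fx_rates))
    return out

# Cluster-friendly version: the same logic expressed as a mapped pure
# function; in Spark this would become something like
# df.withColumn("usd", udf(normalize_amount)(...)).
def convert_rows_mapped(rows, fx_rates):
    return [normalize_amount(r["amount"], r["currency"], fx_rates) for r in rows]
```

The point of the refactor is that once the loop body is a pure function, swapping the execution strategy is mechanical, which is exactly the kind of transformation the model handles well.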
The Claude models advertise that you can tell it to keep making things faster and it'll oblige in interesting ways; I'm realizing on the flip side, if you're already experienced, you can tell it exactly how you want something done or optimized and it'll usually get that right if it's a common pattern. Like if you can already visualize the stackoverflow-learned rabbit hole it's going to go down if you say certain words, you can use this to your advantage to make it follow the patterns you want.
ExtremeAcceptable289@reddit
Copilot completions are pretty good. Usually I have a basic structure of what I want a function to do before writing it, so I can just skim the code and see if it's correct. If it is, it saves me a few seconds (and some RSI); if it isn't, I only waste a few seconds too.
seanamos-1@reddit
I've found it useful in some contexts, eg.
For autocomplete: I know people have had success with it here, but it got in my way and was wrong most of the time, so it's off. This is primarily backend service and infrastructure work. Your mileage probably varies here depending on what you are working on.
As a technical search for common problems and basic scripting boilerplate, it's OK! It can save me having to look through reference docs (bearing in mind it still gets things wrong even on simple problems), but it's more of a net gain here.
That's basically the feedback I hear from most of the other seniors as well.
Our data scientists are also heavy (ab)users of LLMs with their python code. It is very much AI slop that they produce and it takes a lot more work and guidance before whatever they produce can be put into production, but it does allow them to convey their intent with code better than they could before. So for them, it is a clear productivity win.
Cases with a clear net loss in time:
Code reviews.
Feature implementation.
Bug diagnosing/fixing.
Code organization.
I encountered one SWE that was a HEAVY (ab)user of LLMs, but it was also a mystery how he got hired in the first place. He was clearly trying to use LLMs to close the gap in basic competency. He was eventually fired for performance issues.
TransitionNo9105@reddit
Yes. Startup. Not in secret, team is offered cursor premium and we use it.
I use it to discover the areas of the codebase I am unfamiliar with, diagnose bugs, collab on some feature dev, help me write sql to our models, etc.
Was a bit of a Luddite. Now I feel it’s required. But it’s way better when someone knows how to code and uses it
kwietog@reddit
I find it amazing in refactoring legacy code. Having 3000 lines components being split into separate functions and files instantly is amazing.
normalmighty@reddit
I used agent mode in VS Code the other day to say "look through the codebase at all the leftover MUI references from before someone started to migrate away from it, only to give up and leave a mess. For anything complex, prompt me for direction so I can pick a replacement library; otherwise just go ahead and create new React components as drop-in replacements for the smaller things."
I did it for the hell of it, expecting this to be way too much for the ai (project was relatively small, but there were still a few dozen files with MUI references), but it actually did a pretty solid job. Stuck to existing conventions, did most of the work correctly. I had to manually fix issues with the new dialog modal it created, and I cringed a bit at some of the inefficient state management, but it still did way better than I thought it could with a task like that.
woeful_cabbage@reddit
My brother in christ -- why move away from mui?
normalmighty@reddit
It's super annoying to customize the styling to fit designs. Headless libraries are way better for the flexibility we need for clients. MUI has its own opinions baked in that just turn into a bunch of bloat when you can't just shrug and go along with the default library look.
woeful_cabbage@reddit
Fair enough. No point if you are just making custom styled versions of every component
edgmnt_net@reddit
How much do you trust the output, though? Trust that the AI didn't just spit out random stuff here and there? I suppose there may be ways to check it, but that's far from instant.
snejk47@reddit
You can for example read the code of those created components. You don't have to vibe it. It just takes away the manual part of doing that yourself.
edgmnt_net@reddit
But isn't that a huge effort to check to a reasonable degree? If I do it manually, I can copy & paste more reliably, I can do search and replace, I can use semantic patching, I could use some program transformation tooling, I can do traditional code generation. Those have different failure modes than LLMs which tend to generate convincing output and may happen to hallucinate a convincing token that introduces errors silently, maybe even side-stepping static safety mechanisms. To top that off it's also non-deterministic compared to some of the methods mentioned above. Skimming over the output might not be nearly enough.
Also, some of the writing effort overlaps with the checking, if you account for needing to understand the code either way.
RegrettableBiscuit@reddit
Yeah, I can see the appeal, but I'd rather do this manually and know what I did than let the LLM do it automatically, and then go through the diff line-by-line to see if it hallucinated anything.
edgmnt_net@reddit
On a related note, there are also significant issues when trying to make up for language verbosity by employing traditional IDE-based code generation to dump large amounts of boilerplate and customize it. It's easy to write, but it tends to become a burden at later stages such as reviews or maintenance. Deterministic, well-typed generated code that's used as-is doesn't present the same issues, though.
snejk47@reddit
Yeah, that's right. That's why I don't see AI replacing anyone; there is even more work needed than before. But that's one way to check it. Also, it may not be about time but about the task you're performing, aka after 10 years of coding you are exhausted of doing such things and you would rather spend 10x more time reviewing generated code than writing it manually :D
marx-was-right-@reddit
The time it takes to do this review oftentimes exceeds how long it would take to do it myself
snejk47@reddit
I don't disagree.
marx-was-right-@reddit
How is that in any way an efficiency gain then? It's just a hindrance that you pay for
SituationSoap@reddit
It turns out that hype is often not matched with reality.
snejk47@reddit
You get to collectively distribute work and let everyone earn the same low wages.
marx-was-right-@reddit
Then you test it and it doesn't even compile or run lmao
thallazar@reddit
I'm curious if you've ever actually tried this or just parroting based on 2 year old info on copilot, because cursor agents and open hands can absolutely iteratively do a task and run your test suite, linters, push to a branch and get results from GitHub actions etc.
marx-was-right-@reddit
if by "do a task" you mean "iterate against itself endlessly and constantly rewrite all the code for no reason and make up API calls that don't exist", sure. The time it takes to get the "agent" to do anything in a semi-complex codebase doubles or triples the time it would take to do it myself. And that's for small building-block things. On an entire feature it has 0 hope
Consistent_Mail4774@reddit
Are you finding it actually helpful? I don't want to pay for cursor but I use github copilot and all the free models aren't useful. They generate unnecessary and many times stupid code. I also tried providing copilot-instructions.md file with best practices and all but I'm still not finding the LLM great as some people are hyping it. I mean it can write small chunks and functions but can't resolve bugs, brainstorm, or greatly increase productivity and save a lot of time.
simfgames@reddit
Not OP, but let me put it this way. Whenever I see people saying 'AI is useless', their experience is typically with stuff like copilot.
I write 100% of my code with AI (and I work on fairly complex, backend stuff). With copilot that number would be 0%.
It really is an experience thing though. You have to get in there, figure out how each model works, and how to make your workflow work. It's a brand new skillset.
TA-F342@reddit
Weird to me that this gets so many downvotes. Bro is just sharing his experience, and everyone hates him?
simfgames@reddit
Watching reddit talk about ai code gen is like...
Let's say the oven was just invented. And on all the leading cooking subs, full of pit-fire enthusiasts, here's what you see:
-I tried shoving coals in my oven and it broke!
-It won't even fit an entire pig! What a stupid machine.
-I pressed the self-clean button and it burned all my food!
-I keep trying to use the broiler coils to boil a pot of water and all I get is a big mess!
woeful_cabbage@reddit
Eh, I've just always hated layers of abstraction that make coding "easier" for non technical people. AI is the newest of those layers. I have no interest in writing code I don't have control of
It's the same as a hand tool carpenter being grumpy about people using power tools
mentally_healthy_ben@reddit
When the inner "you're bullshitting yourself" alarm goes off, most people hit snooze
Consistent_Mail4774@reddit
Is writing 100% of the code with AI becoming prevalent in companies? It's worrisome how this field has changed.
May I ask what do you use? Is it cursor or what tool exactly? I used Claude with copilot and it wasn't useful. I'd like to know what models or tools are the best at coding so I know where this field is heading. When I search online, everyone seems to hype their own product so it's not easy to find genuine reviews of tools.
simfgames@reddit
I use ChatGPT, usually o3 model via web interface + a context aggregator that I coded to suit my workflow. An off the shelf example of the tooling I use: 16x prompt.
Aider is an excellent alternative to explore. And do a lot of your own research on r/ChatGPTCoding + other ai spaces if you want to learn, because that answer will change every few months with how fast everything's moving.
specracer97@reddit
This last sentence is so true and blasts a brutal hole in the weird marketing tagline the industry uses to try to induce FOMO: AI won't replace you, but someone using it will, so start now.
The tech and core fundamentals of prompting have all wildly changed on a quarterly basis, so there is zero skill relevance from even a year ago vs today's hot new thing. People can jump on at any time and be on a relatively even field vs the early adopters, but only so long as they have the minimum tech skills to actually know what to ask for. That's what gets conveniently left out of the marketing message: you have to be really good to get good results, otherwise you get a dump truck full of Dunning-Kruger.
simfgames@reddit
I'd be shocked if it were common at all. I think most people don't think it's possible yet.
I'm running a startup. And I suppose it will become a lot more prevalent in the industry once the winners emerge out of the batch of ai-native startups that's starting up right now.
I use c# with rider, writing a simulation-heavy Unity game, and I use a custom tool I built that's designed specifically around the way I work. I wouldn't be able to use existing tools to write all the code.
But if I had to use off the shelf stuff I'd probably go with aider, and probably combine it with one of the agentic tools available. But it's moving so fast that it changes every few months. The only way to know is to start playing and to hang out in enough ai coding spots to keep up with the news. It's an unfortunate reality that you have to navigate the sea of spam and hype and the blind leading the blind to figure out what actually works yourself.
Ashamed_Soil_7247@reddit
What field do you work in? I feel it makes all the difference. Friend of mine showed me some absolutely impressive contributions to a numpy robotics project.
Meanwhile, in my projects it rarely knows what to do and is error-prone
thallazar@reddit
You could do RAG on your codebase and dependencies and expose that with an MCP tool to a cursor agent. Even just exploring cursor rules to provide context around the code would probably improve your quality.
ai-tacocat-ia@reddit
You have absolutely no idea what you're talking about. Do you even know how RAG works or why it's useful or what the drawbacks are?
Semantic search is a really shitty way to expose code. Just give your agent a file regex search and magically make the entire thing 10x more effective with 1/10th the effort.
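For context, the "file regex search" being suggested is only a few lines of Python. This is a hedged sketch, not any agent framework's actual tool API; the `grep_files` name and parameters are made up:

```python
import re
from pathlib import Path

def grep_files(root, pattern, glob="*.py", max_hits=50):
    """Return (path, line_no, line) tuples matching `pattern`: the kind of
    deterministic tool an agent can call instead of semantic search over
    embeddings."""
    rx = re.compile(pattern)
    hits = []
    for path in Path(root).rglob(glob):
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue  # skip unreadable files
        for i, line in enumerate(text.splitlines(), 1):
            if rx.search(line):
                hits.append((str(path), i, line.strip()))
                if len(hits) >= max_hits:
                    return hits
    return hits
```

Unlike a vector index, this never returns a semantically "similar" but wrong symbol, and it needs no indexing step, which is the trade-off being argued for here.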
This annoyed me enough that I'm done with Reddit for the day. Giving shitty advice does WAY more harm than good. RAG on code makes things kind of better and way worse at the same time. It wasn't made for code, it doesn't make sense to use on code. Stop telling people to use it on code.
If you've used RAG on code and think it's amazing, JFC wait until you use a real agent.
doublesteakhead@reddit
"I award you no points, and may God have mercy on your soul."
Ashamed_Soil_7247@reddit
I would if I could but I can't upload my codebase to an external model
thallazar@reddit
You can run models locally. If you've got a MacBook you can run some decently powerful models.
Ashamed_Soil_7247@reddit
That's the goal yep :)
Sterlingz@reddit
Interesting - I used it to build some absolutely insane embedded stuff.
Ashamed_Soil_7247@reddit
What kind of stuff?
Sterlingz@reddit
Here's one project: https://old.reddit.com/r/ArtificialInteligence/comments/1kahpls/chatgpt_was_released_over_2_years_ago_but_how/mpr3i93/?context=3
Embedded is a pretty wide field, so it could easily be that yours isn't one where AI is strong.
Xelynega@reddit
Am I tripping, or are you talking about c# in that post?
All the embedded work I've done in my career has been in C, I've never seen an interpreted language used for critical firmware.
Sterlingz@reddit
Arduino IDE is C#, phone app is Swift, web is react.
Ashamed_Soil_7247@reddit
That's a really cool project, kudos to you! It's def impressive and my a priori guess would have been it wouldn't work, so I stand corrected
I do think my field is a tad more niche than yours, and I certainly did not have such a good experience.
But I also cannot massively upload stuff to the cloud due to confidentiality issues, so I could just not be giving it enough context.
Maybe one day we will get a proper on prem model working and do this
DigitalSheikh@reddit
Something I found that’s really helpful is to use the custom GPT feature to load documentation beforehand. Like examples of similar code, guides, project documentation etc. I work on some really weird proprietary systems and get pretty good (not perfect) results with a GPT I loaded all the documentation and some example scripts to.
Ashamed_Soil_7247@reddit
I wanna give that a try but I can't upload stuff to the cloud, so I need to get something on premise before I can feed it the docs
DigitalSheikh@reddit
That’s definitely a hurdle. Good luck!
Ashamed_Soil_7247@reddit
Thanks, we will see
Ragnarork@reddit
This. Even the most advanced AI tools stumble around topics for which there isn't a ton of content to scrape to train the models they leverage.
Some niche embedded areas are one of these in my experience too. Low level video (think codec code) is another for example. It will still happily suggest subtly wrong but compiling code that can be tricky to debug for an inexperienced (and sometimes experienced) developer.
ILikeBubblyWater@reddit
We have 90 Cursor licenses. I don't think I will ever code without it again
Consistent_Mail4774@reddit
Is cursor that much better than for example github copilot or other AI tools? How is it helping you?
Western_Objective209@reddit
Cursor is much better than Copilot, in every way. One big feature is agent mode: if you ask it to write some changes and some tests, it will do that and also run the tests to see if there are any errors
marx-was-right-@reddit
Writing code and tests is like 5% of my day-to-day or less as a senior dev, though. Any noticeable productivity gains will not be realized in that space. Seems absolutely pointless; also, the agent mode frequently just spits out junk that has to be corrected
Western_Objective209@reddit
I'm a senior and like 90% of my output is code. I can seriously output 2x as much work with AI, and I can take on more challenging tasks in less hacky ways because instead of having to make up my own solutions when google fails, I can ask the AI about the concepts and it has pretty solid knowledge of really high level CS.
Different people experience things differently
marx-was-right-@reddit
That's extremely alarming. Glad you're not on my team 😬 Seniors are expected to spend over 50% of their time mentoring, designing, planning, and handling thorny ops.
xamott@reddit
Jesus why would you jump to harsh conclusions when you don’t know a fucking thing about him and his team.
marx-was-right-@reddit
Anyone who says they are a 2x engineer cuz of AI either isn't doing anything worth multiplying by 2x or is a complete airhead, not sure what to tell you
xamott@reddit
They warned me about this sub…
kingofthesqueal@reddit
That guy was being a jerk, but he is also somewhat right, I’d be skeptical of anyone claiming to be in a Senior role while also claiming to spend 90% of their time coding.
It’s just not how that position shakes out in most cases. You’re expected to mentor, plan, etc.
xamott@reddit
It’s different at different companies. I’m head of IT/software dev and I make SURE my two senior devs are left alone to code code code. That’s part of my role. This jackass has no idea what it’s like at some other company. We don’t have stupid meetings or bureaucracy eating up our time. They both mentor but it takes 10 to 15% of their week. For one of them the mentoring includes coding.
Western_Objective209@reddit
alarming huh. And you're coding 5% of the time as an IC and think that's not alarming? What are you even doing, just hopping around meetings?
marx-was-right-@reddit
There's a plethora of IC work that needs doing at enterprise level that isn't writing code. The fact that you're blind to that puts you more at the junior/midlevel area.
Western_Objective209@reddit
well your soft skills are certainly lacking so I'm questioning what value you add lol
marx-was-right-@reddit
Maybe ask AI?
Consistent_Mail4774@reddit
Copilot also has agent mode but seems less useful from what you're describing than cursor.
Western_Objective209@reddit
I haven't used copilot in a while I guess, I just remember it being so underwhelming compared to cursor when cursor came out
marx-was-right-@reddit
No.
Cyral@reddit
These comments make me think people haven’t tried any of the new tools and last used GPT 3.5. How find and replace could even be compared is just cope, sorry.
marx-was-right-@reddit
The "new tools" have the exact same flaws this technology has always had.
snejk47@reddit
You could try Roo Code with GitHub Copilot installed and select it as a model provider. At least as of June, you won't have to pay until Copilot goes to usage-based pricing.
ILikeBubblyWater@reddit
I would say yes, but there are also a lot of people that would say no. I have built features that we haven't been able to realise in years because of lack of resources. Every dev is basically a fullstack dev here now.
You do need to know what you are doing though and verify code.
I do not use other AI tools because there was no need so far.
driftingphotog@reddit
See this kind of thing makes sense. Meanwhile, my leadership is tracking how many lines of AI-generated code each dev is committing. And how many prompts are being input. Which is insane.
Franks2000inchTV@reddit
I can see tracking it, just to decide whether it's worth it to keep paying for it, but requiring people to use it is just stupid.
Strict-Soup@reddit
Always always looking to find a way to make Devs redundant
it200219@reddit
Our org is looking to cut QEs, 4:1
Comprehensive-Pin667@reddit
Leadership has a way of coming up with stupid metrics. It used to be code coverage (which does not measure the quality of your unit testing); now it's this.
RegrettableBiscuit@reddit
I hate code coverage metrics. I recently worked on a project that had almost 100% code coverage, which meant you could not make any changes to the code without breaking a bunch of tests, because most of the tests were in the form of "method x must call method y and method z, else fail."
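The brittleness described above (tests asserting "method x must call method y") is easy to reproduce with a minimal, hypothetical Python sketch; `Billing`, `tax`, and `total` are made-up stand-ins:

```python
from unittest import mock

# Hypothetical production code: total() happens to delegate to tax().
class Billing:
    def tax(self, amount):
        return amount * 0.2

    def total(self, amount):
        return amount + self.tax(amount)

# Interaction-style test: pins "total must call tax", so inlining or
# renaming tax() breaks the test even though behavior is unchanged.
def test_total_calls_tax_brittle():
    billing = Billing()
    with mock.patch.object(billing, "tax", wraps=billing.tax) as spy:
        billing.total(100)
        spy.assert_called_once_with(100)

# Behavior-style test: survives refactors because it checks the result.
def test_total_behavior():
    assert Billing().total(100) == 120.0
```

A suite built mostly from tests like the first one can hit near-100% coverage while guaranteeing that every internal refactor breaks the build, which is the situation described above.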
Headpuncher@reddit
That's not just insane, that is redefining stupidity.
Do they track how many words marketing use, so more is better?
Nike: "just do it!"
your company: "Don't wait, do it in the immediate now-time, during the nearest foreseeable seconds of your life!"
This is better, it is more words.
IndependentOpinion44@reddit
Bill Gates used to rate developers on how many lines of code they wrote. The more the better. Which is the opposite of what a good developer tries to do.
RegrettableBiscuit@reddit
There's a similar story from Apple about Bill Atkinson, retold here:
https://www.folklore.org/Negative_2000_Lines_Of_Code.html
SituationSoap@reddit
I'm pretty sure this is explicitly incorrect?
PressureAppropriate@reddit
"All quotes by Bill Gates are fake."
- Thomas Jefferson
xamott@reddit
Written on a photo of Morgan Freeman.
gilmore606@reddit
It is, but if enough of us say it on Reddit, LLMs will come to believe it's true. And then it will become true!
Humble-Persimmon2471@reddit
I'd try a different metric altogether. Measure by the number of lines deleted! Without making it harder to read, of course
Swamplord42@reddit
Really? I thought he famously said the following quote?
“Measuring programming progress by lines of code is like measuring aircraft building progress by weight.”
IndependentOpinion44@reddit
He changed his tune in later years, but it's well documented that he did do this. Steve McConnell's book "Code Complete" talks about it. It's also referenced in "Showstopper" by G. Pascal Zachary. And there are a bunch of first-hand accounts from people interviewed by Gates in Microsoft's early days that mention it.
Shogobg@reddit
It depends. Sometimes more verbose is better, sometimes not.
Dangerous-You5583@reddit
Would they also get credit for auto generated types. Sometimes I do PRs with 20k lines of code bc types hadn’t been generated in a while. Or maybe just renaming sometimes etc etc
CreativeGPX@reddit
Gates was last CEO in 2000. (For reference, C# was created in 2001.) Coding and autogeneration tools were quite different back then so maybe that wasn't really a concern at the time.
While Gates continued to serve roles after that, my understanding is that that's when they moved to Ballmer's (also controversial) employee evaluation methods.
Dangerous-You5583@reddit
Ah I thought maybe it was a practice that stayed. Didn’t Elon Musk evaluate twitter engineers when he took over from the amount of code they wrote?
CreativeGPX@reddit
I thought this thread was about Gates so that's all I was speaking about. The Musk case was pretty unique. I think it's safe to say that he knew his methods did not find the best employees and was just trying to get as many people to quit as possible. He claimed in 2023 that he cut 80% of the staff. His "click yes in 24 hours or you resign" email (in which some people were on vacation, etc.) was also clearly not just about locating the best or most important employees and was pretty clearly illegal (at least as courts ruled in some jurisdictions), but was done as part of a broader strategy to get people to leave so he could start fresh.
junior_dos_nachos@reddit
Laughing in the millions of lines of code I've added and removed in my Terraform "code"
IndependentOpinion44@reddit
But if that’s your main metric and you run Microsoft, it incentivises overly verbose and convoluted code.
WaterIll4397@reddit
In a pre-gen-AI era this is not the worst metric, and legitimately one of the things closest to directly measuring output.
The reason is you incentivize approved diffs that get merged, not just submitted diffs. The team lead who reviews PRs would be separately incentivized with other counter-metrics that make up for this and deny/reject bad code.
Crafty0x@reddit
your company: "Don't wait, do it in the immediate now-time, during the nearest foreseeable seconds of your life!"
Read that with Morty’s voice… it’ll sound all the more stupid…
michaelsoft__binbows@reddit
more lines of code is better, clearly.
i remember gaming a code coverage requirement for a class assignment. i got around it by just creating a boolean variable b and then spamming 500 lines of b = !b.
aaronosaur@reddit
AI-KLOC is just as dumb as KLOC
ketchupadmirer@reddit
again, word to the wise: input as many prompts as you can. "hello", "how are you", "thanks"
and if they tell you that they saw you input that: RUN
SympathyMotor4765@reddit
The metrics are to justify layoffs
DNAPolymeraseIII@reddit
We have the same. End me.
Encursed1@reddit
Use AI to write empty lines, easy
Howler052@reddit
Write a Python script for that. AI creates docs & unreachable code every week. Cleans it up next week. KPI met.
Thommasc@reddit
Play the metrics game. Goodhart's Law...
KhonMan@reddit
Yousaf_Maryo@reddit
Wtduckkk. Bro I'm so sorry
driftingphotog@reddit
I'm gonna save the leadership messaging about this shit as an NFT, that way I can charge them to view it later when it all goes to shit.
Those are still a thing, right?
Yousaf_Maryo@reddit
Even if they aren't you can make them pay for it for how they are.
Least_Rich6181@reddit
I asked Claude
" This is an interesting question about developer productivity metrics. Measuring developers primarily by the amount of AI-generated code they check in seems problematic for several reasons:
It incentivizes quantity over quality, potentially leading to bloated, unnecessarily complex code that's harder to maintain.
It misses many crucial aspects of development work - debugging, code review, architecture design, mentoring, documentation, and technical debt reduction don't necessarily create new code.
It could encourage "churning" code - checking in large amounts of AI-generated code that may need significant rework later, creating technical debt.
Engineering is fundamentally about problem-solving, not just code production. Some of the most valuable contributions involve elegant simplifications that reduce code.
It may not account for the human judgment needed to evaluate, adapt, and improve AI-generated code, which is a crucial skill.
A more balanced approach would measure outcomes like: - Working software that meets requirements - Code maintainability and readability - Reduction in bugs and technical debt - Knowledge sharing and team enablement - Business impact of completed work
AI tools should be viewed as productivity multipliers that help developers focus on higher-value work, not as replacements for developer judgment or measures of developer value.
"
They should've first asked the AI if their ideas were good as well
sotired3333@reddit
Could you elaborate? As a bit of a Luddite would be great to see specific examples
jonny_wonny@reddit
In general, I use it to generate small chunks of code that I know how to implement myself, or that I could figure out if I spent a bit of time thinking about it. That way, I can ensure the quality and correctness of the output. The problems with generative AI only occur when you use it to make larger chunks of code or changes that you don’t understand. However, when used correctly it’s literally just a massive productivity multiplier.
Second, it’s great for learning a new code base. If you’re ever in a situation where the only way to move forward is to just scour the code base searching for answers, Cursor will likely be able to get you that answer in 1% of the time. And it’s incredibly resourceful in how it scans through your code base, so you really don’t have to micro manage or hand hold it.
berndverst@reddit
I'm a senior SWE at Microsoft (but also ex Google, Twitter etc). I use GitHub Copilot in VS Code when working on open source SDKs (I co-maintain some in Java, Go, Python and .NET). It's quite good for this task. The majority of my work is backend infrastructure engineering for a new Azure service - here the AI tools are not very helpful beyond generating tests and a few simple self contained code snippets. The code base has too many company-internal SDKs and the AI agent / model I use hasn't been trained on the internal code base or any of these SDKs. It just hallucinates too much that I don't find it useful.
StrictLeading9261@reddit
They are also useful when we are trying out some new technologies or libraries and mess up some syntax
govi20@reddit
Yeah, it works really well for generating test cases and boilerplate code to read/serialize/deserialize JSON.
LLMs are really helpful for quick prototyping stuff
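As a sketch of the JSON read/serialize/deserialize boilerplate being described (hypothetical `Order` record, hand-rolled with the stdlib rather than any particular library):

```python
import json
from dataclasses import dataclass, asdict

# Hypothetical record type: the kind of mechanical serde code that's
# tedious to write by hand but easy to review.
@dataclass
class Order:
    order_id: str
    quantity: int
    price: float

def to_json(order: Order) -> str:
    """Serialize an Order to a JSON string."""
    return json.dumps(asdict(order))

def from_json(raw: str) -> Order:
    """Deserialize a JSON string back into an Order, coercing types."""
    data = json.loads(raw)
    return Order(
        order_id=data["order_id"],
        quantity=int(data["quantity"]),
        price=float(data["price"]),
    )
```

It's boring, fully mechanical, and trivially verifiable with a round-trip check, which is why it's a sweet spot for LLM generation.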
WinterOil4431@reddit
They're great for boilerplate. Anything that's actually novel (not on the internet anywhere) means it's effectively useless, if not counterproductive
BoxyLemon@reddit
what could possibly be novel. we just reiterate, recycle
Accomplished_Pea7029@reddit
If you are using a badly documented software/library there's a high chance that there's no resources that help your specific use case.
WinterOil4431@reddit
Unironically a lot of poorly engineered stuff is really novel lmao so the requirements become pretty unique
DorphinPack@reddit
I’ve gotten a good flow down for generating codec boilerplate. Managed to get some very annoying data wrangling for a prototype done in no time at all today.
But I’ve struggled with test cases — any tips on prompting for that?
bizcs@reddit
This more or less tracks with my experience. It's great for some things and I lean on those things, but it's not great at all things and I still need to know how to do the actual job.
Constant-Listen834@reddit
The AI tools are definitely good. Problem is that I don’t really want to train an AI that is designed to replace my job, so I don’t use them.
More of us should probably do the same tbh
jjirsa@reddit
Using the model in an IDE isn't training it. Transformer based models care way more about the final product (the code you write) than how you're using the IDE.
Shady-Developer@reddit
The iteration process of working with the model in the IDE is basically free RLHF, no?
Elctsuptb@reddit
No, usually only the UI version such as on chatgpt.com is being trained from your conversations, not when using the API
Szpecku@reddit
Living in Europe helps too with stricter laws.
Reference for ChatGPT: "This Privacy Policy does not apply to content that we process on behalf of customers of our business offerings, such as our API" https://openai.com/policies/eu-privacy-policy/
I found that they allow opt out from using data for training: https://help.openai.com/en/articles/5722486-how-your-data-is-used-to-improve-model-performance#h_10bcee4719
It's quite similar for Gemini - usually they don't use your data for training if you pay for a service. But there are loopholes - outside of Europe, when using the Gemini API within the free allowance you don't pay, so they use your data.
govi20@reddit
Didn’t understand. Can you ELI5? 😅
Constant-Listen834@reddit
No
nodrogyasmar@reddit
Have you tried loading your internal code into AI before giving it a task?
berndverst@reddit
I don't have the source myself - these are SDKs published to private feeds that I need to consume.
biggamax@reddit
By the way, are you OK? Hope so. Heard about Ron B.
berndverst@reddit
Thanks for asking - yes I am fine (as is most of my extended engineering org). I'm fortunate that I work on Azure services that are very profitable. It's too bad that the Faster CPython team or the TypeScript team were impacted.
StrictLeading9261@reddit
I have a friend who works as an SRE, he uses ChatGPT a lot
Goolong@reddit
I'm a systems admin for a specific department, so I manage networks, databases, VMs. I like creating, so I built out a quality assurance system using PostgreSQL, FastAPI, and Python, along with HTML pages. All built modularly with AI (ChatGPT), just functions, tying it all together into ERPNext
Grownwords_@reddit
Yeah, we use a few depending on the job:
qolaba.ai - access to top llms (gemini, claude, gpt, etc) with custom agents + knowledge bases
https://github.com/features/copilot - daily driver for autocomplete and doc generation
qodo.ai - for generating unit tests automatically, saves a ton of time on QA
cursor.com - Copilot alternative with better multi-file context and chat interface.
svfen2@reddit
hey
Secure_Maintenance55@reddit
Vibecoding is the dumbest thing I've ever seen... it's 100% hype. No one in my company uses AI for development work. Coding requires logical and coherent thinking—if you have to verify everything the AI generates for mistakes, it's a huge waste of time. So why not just think it through yourself? Basic code might be okay to hand off to AI, but for the most part, writing the code yourself is definitely more time-efficient. AI might replace junior developers, but architects and senior engineers are definitely more valuable than AI.
LateWin1975@reddit
The layoffs are coming for you
Crack-4-Dayz@reddit
I don’t doubt this…but I do doubt that it will happen because the tools are actually good enough for this to be a smart business decision. As opposed to MBA groupthink/FOMO, for example.
LateWin1975@reddit
100%, I don’t think we ever get replaced by the tool. I think it'll be an engineer using a power saw, while all the dudes with their hand saws asking what’s so good about a power saw are the ones who get laid off
deepmiddle@reddit
Why do people feel the need to be assholes like this
BoxyLemon@reddit
you my sir are a gatekeeper
ArriePotter@reddit
Vibe coding is amazing when you want to make a somewhat-impressive POC in a pinch. I also find it helpful when I have to do very small scope tasks outside of my domain - given competent code reviews ofc.
But yeah vibe coding anything for production, that's in any way fundamental, is a disaster waiting to happen
Venthe@reddit
I concur. I usually work in banking, but I wanted to create a game engine architecture - just to understand the basics of ECS. I vibe-coded the hell out of it; the end result did not do what I expected, and it did not really work - but it helped me to "see" what is usually done, and created a good enough basis for me to refactor.
Still, for regular work - it's more of a niche tool rather than a primary one.
marx-was-right-@reddit
The number of times ive needed to do that at an enterprise level over a decade starts with a Z and ends with an O
jonny_wonny@reddit
It’s really not, you just have to learn at what scale to use it. It’s amazingly useful when you use it to generate small chunks at a time, or make minor changes.
Virtual_Substance_36@reddit
Skill issue
EmmitSan@reddit
Vibe coding isn’t hype, but the way 90% of people do it is wrong.
ChimesFreddy@reddit
People use it to write code, and then rely on others to do the real work and review the code. It’s just pushing work onto the reviewers, and if the reviewers do a bad job then it can quickly lead to trouble.
Hot-Recording-1915@reddit
100% this. I used it to vibe code some Python scripts to generate CSVs and some secondary stuff, but for day-to-day work it's a huge waste of effort because I'd need to review every change and it would quickly get out of hand.
Though it's very useful for helping me analyze or optimize SQL queries, giving me better ideas on how to write small pieces of code, and so on.
getschooledbro314@reddit
I’m not in a programming job. I work on machines in a factory. We are adding an AI camera for quality check purposes. After running for an hour I had 1000 svg images in a folder. I wanted an html page to help me sort them. If I wrote it myself it would’ve taken like 20 hours. AI wrote it for me in under a minute.
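The "HTML page to sort 1000 images" job is a classic throwaway script; a minimal Python sketch of such a gallery generator (the function name and layout are made up, not the commenter's actual code):

```python
def build_gallery(names):
    """Return a single HTML page showing each image, sorted by
    filename, so a folder of SVGs can be eyeballed in a browser."""
    rows = "\n".join(
        f'<figure><img src="{n}" width="200"><figcaption>{n}</figcaption></figure>'
        for n in sorted(names)
    )
    return f"<!DOCTYPE html>\n<html><body>\n{rows}\n</body></html>"
```

Feed it `Path(folder).glob("*.svg")` filenames and write the result next to the images; trivial to write, but also trivial for an LLM, which is the point.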
schnapo@reddit
I work as a developer of analytic tools in medical research. I used my own coding skills to develop tools for bias analysis in offline databases of historical medical records. I took my code and optimized it for new angles. First only partial code with Claude, but then I switched to Windsurf for development from scratch. While the debugging took a little longer than expected, the actual coding work took nearly 90% less of my time.
Even in coding languages I never used before, it was a tremendous help.
nio_rad@reddit
Sometimes, but not for direct code, more like for "what does this do?" when I'm new at a framework or similar. We're an agency and have a high diversity in types of devs, but AI is generally not yet allowed by default, and not paid for. So most are not using it currently.
But in general, we have never been told which dev-tools to use, and if this were the case, I'd probably find a new place to work. It should always be the decision of the dev.
12candycanes@reddit
I use it to write things non technical people will read.
Folks on the product side are open about using ai tools to do writing and text summarization, so I use it to do the same when those people are the audience 🤷♂️
pewqokrsf@reddit
I have to write technical feature PRDs as part of my role. AI is great, I can just give it the document structure, infodump, and add more context iteratively where it gets things wrong.
officerblues@reddit
Currently working a new job at a startup, team culture encourages AI use extensively, and team has been vibe coding a lot, historically. According to legend, they were very fast in the beginning, but now (about 6 months in) it's easily the slowest team I have ever worked with. Nothing works and even the smallest feature requires major refactoring to even come close to doing anything. It also doesn't help that people in general seem to be incompetent coders.
This was very surprising to me. I was brought in to handle the R&D team, but the state of the codebase makes any research useless at the moment, so I have had to wear my senior engineer hat and lead a major refactoring effort. I honestly want to murder everyone, and being fully remote has probably saved me from jail time. I used to be indifferent to AI tools, they didn't work for me, but maybe people could make use of it. This experience really makes me want to preemptively blanket ban AI in any future job.
Fruitflap@reddit
I attempted writing a solution entirely with AI and it is the worst piece of spaghetti I've ever created. Having everyone vibe coding extensively, especially if they're incompetent, sounds excruciating...
kur4nes@reddit
This is the danger I see with encouraging junior devs to vibe code everything. Initially faster, but the resulting mess is larger. AI tools seem great at producing bad code and systems faster. That LLMs only have a limited context window isn't helping.
Tried LLMs on our legacy codebase. Results are at best mixed. Everything the model spits out needs to be checked and fixed. Analyzing or finding bugs just doesn't work.
magheru_san@reddit
I've used AI tools ever since the first version of ChatGPT was launched (mainly Claude these days), and I can see how this may happen if you just accept the LLM output code blindly.
LLMs are amazing at producing a lot of code quickly but you have to be relentless in challenging them to have very high standards and refactoring the code, otherwise the output quickly devolves into a huge spaghetti mess.
Nothing should be taken at face value!
Ragnarork@reddit
This question pops up every now and then, and one of these threads had a very concise way of putting it: it makes crappy developers output more crappy code, mid developers more mid code, and excellent developers more excellent code.
AI can magnify the level of competence, it doesn't necessarily improve it.
Few-Impact3986@reddit
I think the problem is worse than that. Good coders usually don't write lots of code and bad coders write lots of code. So AI's data set has to contain more crappy code than good code.
jonny_wonny@reddit
Right now, generative AI will 100% make good, intelligent coders better, if they use it properly. However, it will also make bad coders more dangerous and destructive, as they will use it to write more bad code, more quickly. My suspicion is that the team is slow not because they are using AI, but because they are poor coders and the company thought that they could use AI to offset that.
BoxyLemon@reddit
Idgaf. I am a chameleon. I will be useful for every task. If my employer wants me to code, I code with AI. That way I am more valuable to the company
officerblues@reddit
100%, the company has two separate teams. The R&D team is basically grizzled veterans with lots of experience, the dev team not so much. It's the old adage, if you think good developers are expensive, wait until you see bad ones.
Loboke-Wood-9579@reddit
That's why I always advise using AI in a subject where you already have acceptable mastery. Because you'll be able to detect hallucinations. It is a copilot, you are the pilot. period.
bilbo_was_right@reddit
AI is like having a junior engineer. It’s trained on a whole lot of mediocre code. And just like a junior engineer, if you don’t guide it well, it will probably fuck up. A lot. But if you give it tasks with very limited scope and well defined structure, it can frequently complete the task.
officerblues@reddit
Eh, I also think the best part about the junior engineer is that they eventually stop being junior, which the AI can't do. It's nice to have and I think it does some good, but I also find it hard to say it's a meaningful improvement over the good old days of Google actually working.
But yeah, anyone who put more than 30 minutes of thought into it would know that you can't "vibe architect" stuff and that code reviews are more necessary when you start using AI. This is likely a rookie mistake that is now costing the company quite a lot in opportunity cost.
hhustlin@reddit
I hope you consider writing a blog post or something on the subject - even anonymously. I think companies that have been doing this long enough for the ramifications to set in are pretty rare, so your experience is unique and important. As an eng leader I don’t have many good or concrete resources to point to when non-technical folks ask me “why can’t we vibe code this”; saying what we all know (it will create massive technical debt and destroy forward progress) sounds obvious to me but sounds whiny and defensive to non-engineers.
rding95@reddit
To get to this point, were there any code review/testing to ensure quality? Or were the reviews low quality too?
officerblues@reddit
Reviews were low quality / AI assisted.
rding95@reddit
I'm overall optimistic about the use of these tools (we use Cursor and Devin at my job) but I'm afraid of getting to a point where the code is a rat's nest. We (senior engs) tried Devin for a couple weeks to find where it might go wrong, then released it to the team broadly with some guardrails. We also use it heavily for generating tests, which feels less risky. I'm still a little nervous, though, about where our code could end up.
officerblues@reddit
Yeah, I think the biggest issue was that the old senior guy leading the team tried to delegate reviews to the LLM (we have copilot), and this is obviously a bad idea. IMO, LLMs can still be used, but you really need to read code reviews and care about it, now. No more LGTM rubber stamping the things that seem low risk, this can only go wrong.
marx-was-right-@reddit
There's gonna be a lot more workplaces like this once all these "Cursor is REQUIRED!!!" people work for another month or two
officerblues@reddit
I, for one, could not be happier about this. I did some refactoring work at the new job that was, honestly, half-assed due to anger, and people treat me like I'm cyber Jesus now. I hope everyone devolves into vibe coding, because it really empowers me to slack off and deliver.
SilentToasterRave@reddit
Yeah I'm also mildly optimistic that it's going to give an enormous amount of power to people who actually know how to code, and there aren't going to be new people who actually know how to code because all the new coders are just vibe coding.
hawkeye224@reddit
Cyber Jesus lol!
tcpukl@reddit
Startup and R&D don't really go together, do they?
Is there a big pot of money with no product?
Ragnarork@reddit
What do you think all these VCs invest in? They bet millions on things that range from "idea of a product" to "established product", going through "embryo of a product" and "product without a market fit yet" in the middle.
Most of the startups I worked for had a sizeable R&D component. Also, on multiple occasions, the coolest stuff we put out didn't involve rocket science but smartly combining existing (and sometimes quite old) tech and concepts in a way that produced impactful results.
(In some other instances it was a lot of noise to reinvent the wheel, and sometimes not a great one...)
tcpukl@reddit
Thanks for a good answer.
officerblues@reddit
Oh, that can work pretty well, sometimes. You make a prototype and use it to raise under the promise of improving the prototype further with R&D, for example. It's actually a pretty grounded plan, and I joined the company partly because the business plan made sense to me in the long run (which is not the case for most AI startups out there). I did not expect the current mess I am in with the tech folks, though, but I think we can fix it (maybe).
tcpukl@reddit
Fair enough. TIL.
Equivalent-Stuff-347@reddit
Startup and R&D are like peanut butter and jelly.
You have an idea and a roadmap, you use that to secure funding, you spend a lot of time and money on R&D, then maybe get a product to market
DjebbZ@reddit
Yes, 100%. To be impactful the dev needs to be a good SWE AND know how to leverage this new tool.
Some examples of good usage : brainstorming, architecting, exploring unfamiliar (parts of) codebases, reverse engineering, debugging (not necessarily fixing the bug, but finding the root cause), doing code reviews, semi-automating boilerplate, creating custom learning materials for unfamiliar tech/framework, refactoring...
In no case is the workflow 1 prompt = 1 perfectly working solution. It's also not about delegating the thinking, at the risk of brain rot. It requires you to create the proper context, iterate on the AI's understanding of the task, challenge it and be challenged, all in order to align the AI for the task at hand using proper SWE techniques.
I've personally experienced dramatic productivity gains, way above 10x, and a few devs I know who are good and good with AI tools share the same opinions. I have a specific example that I'm sharing next week in a local meetup where I'm confident saying the productivity gain is around 30x. So big that the previous dev who worked on the same task without AI assistance had to severely reduce the scope and quality of the final code because the proper way of handling the problem was just too big and cumbersome. I'm talking hours versus weeks/a few months.
hidragerrum@reddit
Kind of - to make documents longer than they need to be. Somehow I'm bad at spinning out words, hence my wording is always too dry compared to the rest.
For coding it's good for brainstorming and prototyping; post-inception the LLM is less useful and the generated code is not production-ready.
PapaOscar90@reddit
Generated a bunch of ISO compliant documentation from the existing code. It’s great at that. But it is absolutely useless for coding.
Front_Mirror_5737@reddit
Used OpenAI APIs to detect labels and bounding boxes. Otherwise it would have been a very labor-intensive task
CrashXVII@reddit
My work pays for copilot. I turned it off for Advent of Code and never turned it back on again. Too annoying when it’s just bad auto complete. There are use cases with writing tests faster but overall got in the way and disrupts my thought process.
kyngston@reddit
I just built an angular web app using natural language descriptions of what I want, using cursor in agent mode with Claude 3.5 sonnet.
"Go to the Atlassian crearemeta rest api to find the fields that can be pre-filled and make a web form allowing me to pre-filled in those fields"
It writes the code, lints it, builds it, reviews the errors and rewrites the code, until it works. It's like watching the desktop of a remote dev.
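That write-lint-rewrite cycle is, schematically, just generate-check-retry. A sketch with the model and toolchain stubbed out (`ask_model` and `run_linter` are placeholders for the real LLM call and lint/build step, not any actual API):

```python
def agent_loop(task, ask_model, run_linter, max_rounds=5):
    """Generate code for `task`, feed lint/build errors back to the
    model, and repeat until the checks pass or we give up.
    ask_model(prompt) -> code string; run_linter(code) -> error list."""
    code = ask_model(task)
    for _ in range(max_rounds):
        errors = run_linter(code)
        if not errors:
            return code
        code = ask_model(f"{task}\n\nFix these errors:\n" + "\n".join(errors))
    return code  # may still be broken after max_rounds; caller reviews
```

Tools like Cursor's agent mode run something shaped like this for you; the value (and the token burn) comes from the retry loop.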
OkWealth5939@reddit
It basically replaced 80 percent of my google searches. It also rewrites most of my messages
leroy_hoffenfeffer@reddit
70% of the code for a project I help lead was generated by the Anthropic console.
The logging, test infrastructure, etc mostly still had to be done by hand, but the core functionality was written by LLMs for the most part.
Best time saver ever. I now save an hour or two each day and fuck off and do other more rewarding stuff instead.
donnymccoy@reddit
As a company, we have taken the stance that it’s a valuable tool to assist people in their tasks. We are a $500MM logistics company and even though we move quickly in some areas, we move slowly in others. Adoption outside of IT has been slow - as we expected.
Devs: I set my team up with business ChatGPT and CoPilot for VS code.
IT: we use it for vetting ideas, troubleshooting SQL performance issues. My peer uses it to track down network performance issues in our cloud.
A key change agent on our AI task force is a marketing guy who’s been with ChatGPT since it was in beta. He uses it for everything personal and business.
To me, the challenge is getting c-suite endorsement for more than just having AI write code. It can certainly help there; but the bigger opportunity lies in identifying ways AI or LLMs can help the company improve business processes.
That’s why we took the stance we took, for now.
Sensanaty@reddit
The Juniors are pushing out obvious AI code in their PRs because management is setting up a firing squad against anyone not buying into the AI hype headfirst, and they're causing massive headaches (for me who has to review the PRs).
Huge, massive refactors of legacy components with the commit message saying nothing, when the ticket is about some tiny thing that would involve at most 10 lines of changes. Hundreds of lines touched, all with those overly verbose comments that don't actually tell you anything useful about the code you're reading that LLMs love to spit out. Sometimes the comments are even contradictory to what the code is actually doing. Tests, if they bothered writing them in the first place, are testing the wrong thing half the time, and sometimes are just blatantly incorrect or contradictory to what the code is doing. You ask them "Why did you decide to go X route rather than Y or Z?", they usually reply "Well, Cursor wrote that part!". So why do we even employ you at this point?
Look, I'm not even necessarily anti-AI or anything, I use Claude almost daily for a variety of tasks from mundane to complex. It can be a massive time saver for certain tasks when you know what you're doing, I love that I can throw some massive JSON blob at it and tell it to produce the typedef for me and it will (80% of the time, but better than doing it manually most of the time). I get to focus on the actual complex parts of the work and not those truly annoying slogfests that pop up from time to time, and that's great.
My entire issue stems from the insane hype being pushed by the AI providers and the charlatans that have vested interests in it one way or the other. It is NOT a magical panacea that can do the work for you automagically. My fucking head of product, who can barely login to his work laptop without contacting IT for help on a weekly basis, is breathing down my neck to use Cursor, because he "Keeps hearing from friends at other companies (AKA, other clueless C-levels like himself) that it works great for their team!" This man doesn't know his ass from his elbow when it comes to technology or anything engineering-related, yet he keeps trying to give me advice on how to solve tickets or whatever. Motherfucker, I already use Jetbrains and their AI tooling! You pay for it already!
It is a genuinely useful tool that is being massively overhyped, because there are hundreds of billions being invested into it from many people. It's a gold rush, and the C-level and other managerial types are blindly buying into the hype being put down by the AI providers for fear of missing out on the Next Big Thing. You could have the provably greatest product on earth, but if you don't have AI somewhere in your tagline, investors won't bite, because they're single-minded morons that only chase hype and nothing else.
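The JSON-blob-to-typedef chore mentioned above is essentially schema inference. A crude Python sketch of the transformation (invented for illustration; real data needs the nullable/union handling that makes the LLM version actually useful):

```python
def infer_typeddict(name: str, obj: dict) -> str:
    """Emit a rough TypedDict declaration for one JSON object.
    Only handles flat objects with primitive-ish values; nested
    structures and optional fields are left to the reader (or the LLM)."""
    py_type = {str: "str", int: "int", float: "float", bool: "bool",
               list: "list", dict: "dict", type(None): "None"}
    fields = "\n".join(
        f"    {k}: {py_type.get(type(v), 'object')}"
        for k, v in obj.items()
    )
    return f"class {name}(TypedDict):\n{fields}"
```

The emitted source assumes `TypedDict` is imported wherever it's pasted; the point is that the task is mechanical but tedious, which is the sweet spot being described.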
phihag@reddit
If the PR touches things not in the description, isn't that enough to reject it?
I work in a small team where the manager is the most junior developer with only 15 years of experience. Most PRs are approved & merged without comment. But when I accidentally commit something that's not related to the PR, it will rightfully be pointed out and lightly ridiculed.
Martelskiy@reddit
I personally use the Copilot extension in VS Code. It's good for certain tasks. For example, a small playground project to try a new lib or framework, etc. Test generation is OK, although if your team cares about quality, most likely these tests need to be refactored anyway.
Otherwise, vibe coding is certainly not for me. Not understanding the stuff I push to production (or even some internal tooling) scares the shit out of me. Engineering productivity is not about typing speed, but rather about knowing the domain, and vibe coding is the opposite direction.
Franks2000inchTV@reddit
Work for an agency and I use it a lot -- have a Claude Max subscription.
It's great for some things, terrible at others, but it's at its best when joining a new project. I can fire it up and say "How is state managed in this project?" and it'll give me a decent answer. Or "what API calls does this make if I click this button?"
It can save a lot of chasing.
Also now you can connect claude to github. I use a terrific, but poorly documented PCG library in a game I'm working on as a personal project, and I added the repo to a claude project. Now I can just ask it questions about the code and it acts as sort of interactive documentation.
And I can say something like "write me a FromPolygon extension method for this class that takes a series of points and returns a Polygon. make sure it has robust error handling and validates its inputs" and it'll just spit it out in less time than it would take to write it by hand.
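The original request is a C# extension method; here is a Python analogue of that kind of small, fully specified, input-validating helper (the function name and behavior are hypothetical, mirroring the prompt rather than the commenter's actual code):

```python
def polygon_from_points(points):
    """Build a polygon (list of (x, y) float tuples) from a point
    sequence, with the input validation the prompt asks for."""
    if points is None:
        raise ValueError("points must not be None")
    pts = [(float(x), float(y)) for x, y in points]
    if len(pts) < 3:
        raise ValueError("a polygon needs at least 3 points")
    return pts
```

Functions this narrowly scoped are exactly where "faster to request than to type" holds, because the review takes seconds.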
CarelessPackage1982@reddit
Here's the real. Is it a value add? Yes.
Is it life changing? If it were, why aren't all these devs just inventing their own startups in less than a week and going into business for themselves, instead of making their bosses filthy stinking rich? The market will prove or disprove the hype. If 10K competing GitHubs launch next week I might believe it.
mia6ix@reddit
The responses here are surprising. My team builds enterprise e-commerce websites and apps. We use ai for everything - it’s everyone’s second set of hands. I have no idea why some of you can’t seem to extract value from it. I assume it’s either because of the type of work you do (too niche or too distributed), or it’s because you haven’t bothered to learn how to use it properly.
I plan the architecture of whatever I’m building or fixing, but with ai, I take the extra step of breaking the steps into thorough prompts. Give it to ai, review and refine the output (if necessary). For bugs or refactoring, ask it good questions and go. It’s like a brilliant dev who can do anything, but isn’t great at deciding what needs to be done - you have to instruct it.
It’s at minimum 2x faster than writing the code myself, and the quality is not an issue, because I know how to write the code myself, and I fix anything that pops up or redirect the agent when it goes off the rails. Our team uses Windsurf and Claude.
BoxyLemon@reddit
+1. I vibe code every day. Before I let the LLM generate the code for the application, I make sure I answer at least five questions that specify my requirements concisely. This is to make sure the LLM thoroughly comprehends my goals.
Least_Rich6181@reddit
I use it all the time
srawat_10@reddit
Are you also working for Tier 1 company?
We use copilot and glean extensively
Least_Rich6181@reddit
Yes.
I remember the days when old heads used to say real programmers don't rely so much on IDEs or whatever.
https://xkcd.com/378/
I feel the same bemusement from folks who say they don't think Gen AI is all that useful.... once you start using the tools it's a whole different level of productivity (or laziness)
BoxyLemon@reddit
Damn. Is it me or are the comics lame af
marx-was-right-@reddit
The difference here being modern IDEs do all the things people are trumpeting AI for, without making shit up thats blatantly incorrect over half the time.
60days@reddit
notepad.exe + winftp.exe. I'm fullstack.
srawat_10@reddit
Agree with you. I am 2x as productive with all these AI tools. All the engineers I know of are using AI extensively (sometimes a little too much too :D)
PS. I am working with Tier 1 too
Majestic_Sea-Pancake@reddit
I've found that 9 times out of 10 it is quicker for me to use Google and the official tool/language/library documentation than it is to use AI. All models I've worked with tend to give me code and/or code advice with obvious mistakes.
E.g. it attempts to use methods that don't exist and then tends to claim that said feature is from x version of the language. When corrected, it comes back with an "I was wrong, I meant y version of the language"... so on and so forth. (In this experience I was working with C# .NET, so it wasn't some less used/known tool.)
Another example I've run into is that it (GPT enterprise in this case) will give bad advice with React code. In my experience, a decent amount of its claims are anti-patterns that contradict the documentation.
I have run into things like the above with gpt, Claude, Gemini, etc.
I have found it okay for brainstorming but I'm still wary of it due to how often it seems to provide me with bad information.
redMatrixhere@reddit
Are you working at a tech or a non-tech company, and is it a startup?
Least_Rich6181@reddit
Tech company. Not Mag 7 but pretty large and publicly traded
WagwanKenobi@reddit (OP)
I find that Google's LLM answer at the top of the results page is faster than entering it into a chat UI.
Qinistral@reddit
So you do use AI then
Least_Rich6181@reddit
I guess the difference is minimal for that action. But I find myself using Google less and less.
When I use Cursor I can just hit CMD + L to open up a side tab to input my question into a chat bot then also copy the snippet directly into the file I'm working on.
Or I can just press some hot keys to generate code inline as I'm working or even when I'm debugging stuff on the terminal
"write a function that parses this and does x"
then I switch to my test file
"write a unit test for this function" (I provide the file as context)
I just verify the results and the logic.
In the terminal I might write something like "loop over this output and organize into csv format with columns x,y" etc.
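That terminal-to-CSV request is easy to sanity-check by eye; a hypothetical Python version of the kind of snippet that prompt produces (`to_csv` and its column names are made up for illustration):

```python
import csv
import io

def to_csv(lines, columns=("x", "y")):
    """Reorganize whitespace-separated tool output into CSV with the
    given header, skipping lines too short to hold both columns."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(columns)
    for line in lines:
        parts = line.split()
        if len(parts) >= 2:
            writer.writerow(parts[:2])
    return buf.getvalue()
```

Throwaway transforms like this are low risk precisely because you can verify the output directly against the input.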
Plastic_Mind3223@reddit
I love using Graphite for stacking but haven’t tested out the AI reviewer. For us, it’d be another round of vendor security review. Do you recommend it? How does it compare to GitHub’s copilot reviewer?
Least_Rich6181@reddit
I actually use it mostly for the stacking as well
Then maybe 1/3 of the time the AI reviewer will surface something that makes me scratch my head and go...."hmm you know what you're right. Good bot."
2/3 of the time it's just meh or useless. But hey I can just ignore it in that case.
moh_otarik@reddit
Yes because the company forces us to use it
BoxyLemon@reddit
are you a slave or what
Azianese@reddit
I work in one of the biggest private companies. Company has the resources to train our own models. As such, models have full access to our codebase, tech docs, APIs/databases, oncall tickets, and more.
I use AI every day to auto complete short code snippets. It works pretty damn well tbh.
One of the nicest things is that our AI can triage issues, such as "why did X return Y?" Or I can ask it "under what business use case can Z occur? And what is your source/reference?" It isn't 100% reliable, but it's a great start.
It's pretty crazy how far things have improved over the past few months. I didn't use it at all half a year ago. Now it's my go-to.
BoxyLemon@reddit
fuzzy wuzzy 🌀
Ok_Island_7773@reddit
Of course, working at an outsource company, I need to fill out hours for each day with some description of what I've done. AI is pretty good with generating some bullshit which nobody checks anyway :D
BoxyLemon@reddit
‘Which nobody checks anyway’ - thin ice
MuscleMario@reddit
Yes, inside and outside of work. I tend not to use plug-ins for my editor. I manually prompt the LLM.
Saves a bunch of time from ceremony and is great to just use as a learning aid.
hammertime84@reddit
Yeah. Off the top of my head:
Tweaking SQL
Anytime I have to use regex
AI auto-complete is good
Making presentations or writing documents
Brainstorming ideas. It's pretty good at going through AWS services and tradeoffs and scripting mostly complete terraform for example.
"Is there a more efficient or cleaner way to write this?" checks on stuff I write.
Goducks91@reddit
I also like it for PR reviews! I’ve found AI catching things I would have missed.
Qinistral@reddit
How do you use for code reviews?
grumpiermarsupial@reddit
Sourcery/CodeRabbit/Greptile all have review bots you can add into PRs
Ihavenocluelad@reddit
If you use gitlab/github you can embed it into your pipeline in like 5 hours. Push all changed files to an endpoint with a fine tuned prompt, post results to the MR. Cool fun project and your colleagues might appreciate it.
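A minimal sketch of that kind of pipeline step, assuming a chat-completions-style LLM gateway and GitLab's merge request notes API (the prompt text, model name, and endpoint are all placeholders, not anything from the linked project):

```python
import json
import urllib.request

REVIEW_PROMPT = (
    "You are a code reviewer. Point out bugs, risky changes, and style "
    "issues in the following diff. Be concise; skip praise.\n\n{diff}"
)

def build_review_request(diff_text, model="example-model"):
    """Package a git diff into a chat-completions-style payload.

    The model name is a placeholder -- substitute whatever your
    company-approved LLM gateway expects.
    """
    return {
        "model": model,
        "messages": [
            {"role": "user", "content": REVIEW_PROMPT.format(diff=diff_text)}
        ],
    }

def post_mr_comment(gitlab_url, project_id, mr_iid, token, body):
    """Post the review text back to the merge request as a note
    (standard GitLab REST API endpoint)."""
    req = urllib.request.Request(
        f"{gitlab_url}/api/v4/projects/{project_id}"
        f"/merge_requests/{mr_iid}/notes",
        data=json.dumps({"body": body}).encode(),
        headers={"PRIVATE-TOKEN": token, "Content-Type": "application/json"},
        method="POST",
    )
    return urllib.request.urlopen(req)
```

Wire `build_review_request` to your gateway in a CI job, then hand the model's reply to `post_mr_comment`.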
Complex-Equivalent75@reddit
Do you have any public examples of this I could look at? Seems cool.
Ihavenocluelad@reddit
https://github.com/Evobaso-J/ai-gitlab-code-review
Maybe something like this? When I built it I used AWS Bedrock and CDK.
Toyota-Supra-6090@reddit
Tell it what to look for
Maxion@reddit
Yeah but I guess the question is how do you give the PR to the LLM? Do you git diff and hand it the diff, or what?
I've never used an LLM for PR review and I'm not quite sure how to approach that.
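One common approach is exactly that: pull the branch diff with git and split it per file so each chunk fits in a single prompt. A sketch (the function names are made up):

```python
import subprocess

def branch_diff(base="main"):
    """Diff of the current branch against its merge base with `base`
    (the three-dot syntax), as one big unified-diff string."""
    return subprocess.run(
        ["git", "diff", f"{base}...HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout

def split_by_file(diff_text):
    """Split a unified diff into per-file chunks, so each file's changes
    can be sent to the model as a separate prompt."""
    chunks, current = [], []
    for line in diff_text.splitlines():
        if line.startswith("diff --git") and current:
            chunks.append("\n".join(current))
            current = []
        current.append(line)
    if current:
        chunks.append("\n".join(current))
    return chunks
```

Each chunk then goes into the prompt along with whatever review instructions you want.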
danmikrus@reddit
GitHub copilot does code reviews well
Maxion@reddit
GitHub copilot is a lot of things, and there's plenty of ways to interface with it.
Do you mean the interface on GitHub.com the website?
My team does not use github as a code repository.
danmikrus@reddit
Yes it’s inbuilt into the website and you can add copilot as a reviewer if it’s enabled for your org, and it will act as a human dev would.
drdrero@reddit
Yup we have it automatically requested on every PR, it’s annoying at first, but it caught semantic issues quite well.
loptr@reddit
Agree, it's a great complement and is also a great first pass, allowing you to ensure low hanging fruit like spelling mistakes etc are all taken care of before you send the PR to colleagues.
Also been helpful in pointing out inconsistencies in camelCase vs TitleCase, or when an error message is undescriptive or doesn't match the usage.
FactCompetitive7465@reddit
That's what we do. Then prompts fire for each rule we have defined for each file in the diff and results are posted to a comment in the PR.
rding95@reddit
We use CodeRabbit at my job. I wouldn't say it ever gives good high-level feedback, but it's great for catching smaller things you missed
ArriePotter@reddit
You can add copilot as a reviewer in GitHub now lol
RegrettableBiscuit@reddit
Yes, this is actually useful. I'd say that 90% of the time it produces nothing of value, but it's quick to read, so not a huge waste of time, and the 10% of time it does produce something of value make it worthwhile.
marx-was-right-@reddit
What on earth.... This comment section is complete crazy town
Goducks91@reddit
What?!
marx-was-right-@reddit
Were really calling code linting tools AI?
Goducks91@reddit
What? No. I utilize GitHub copilot + ChatGPT for code reviews. Lint issues fail the pipeline and don’t even need review!
marx-was-right-@reddit
This amounts to a linting tool in practice ...?
Goducks91@reddit
No, but I don't really feel like arguing with you because you are adamantly against all uses of utilizing AI from your post history lol.
marx-was-right-@reddit
I don't think you fundamentally understand what an AI is; an LLM just spits out preprogrammed text. No different than a linter
tinycorkscrew@reddit
I agree with everything you wrote here except scripting terraform. All of the LLMs I’ve used are so bad at greenfield terraform that I don’t bother.
I have, however, learned a thing or two by having AI review first passes of terraform that I’d written myself.
I have been working more in Azure than AWS lately. Maybe current models work better with AWS than Azure.
b87e@reddit
I use Amazon Q to write terraform (for AWS services) every day. It is really good. It is also decent at using the AWS SDKs in every language I work in regularly (python, C#, java, javascript, and go). It is mediocre at any other programming task though.
met0xff@reddit
Yes, generally "how can I do X in AWS" saves a lot of time versus digging through 30 AWS doc pages where every link opens a new window with even more blah blah ;)
creaturefeature16@reddit
These sanity checks are my absolute favorite thing to do with them. They keep the gears turning on a variety of ways to approach whatever I'm writing. I love that I can throw in some absolutely absurd limitations and suggestions and it will still come up with a way to meet the requirements. A lot of what I get out of it I never use, but the ideas and suggestions are indispensable.
I don't know where else I could get this kind of assistance; StackOverflow would never approve the question and Reddit would likely turn into salty comments and antagonizing. I'm self employed so I only have a handful of devs here and there on other teams to bounce ideas off of, so these tools have drastically improved my ability to become a better developer just by being able to learn by experimentation.
U4-EA@reddit
What you said about regex and brainstorming. Sometimes I just can't be bothered deciphering a complex regex and it's also quick and easy to get AI to write a regex for me. However, I thoroughly test all regex regardless of the source I got it from.
Brainstorming ideas - yes, I have been using it a lot recently with AWS infrastructure ideas but I then make sure I validate anything it says. It's just a faster google search.
For me, AI is a sometimes-useful time saver but not a revolution. And it needs to be used carefully. Example - I recently asked ChatGPT to give me a random list of 400 animals, which it did. I asked it to give me another 400 that were not in the first list and it gave me another 400, 6 of which were exact duplicates from the first 400.
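That "test it regardless of source" habit can be made mechanical: run any candidate regex against known good and bad strings before it goes anywhere near production. A tiny harness (the date pattern is just an illustration, not from the comment above):

```python
import re

def check_regex(pattern, should_match, should_not_match):
    """Sanity-check a regex (e.g. one an LLM wrote) against known cases.
    Returns (strings it wrongly rejects, strings it wrongly accepts)."""
    rx = re.compile(pattern)
    bad_misses = [s for s in should_match if not rx.fullmatch(s)]
    bad_hits = [s for s in should_not_match if rx.fullmatch(s)]
    return bad_misses, bad_hits

# Hypothetical example: an ISO-date pattern an assistant might propose.
misses, hits = check_regex(
    r"\d{4}-\d{2}-\d{2}",
    should_match=["2024-01-31"],
    should_not_match=["31-01-2024", "2024-1-31"],
)
assert not misses and not hits
```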
michaelsoft__binbows@reddit
i love fiddling with regexes. but you're right, because AI is far faster than me at screwing around with them. It blew through that challenge some time over a year ago, like it did the Turing test - a sledgehammer through tissue paper. Regexes are fun little puzzles, but now it's only worth applying cognition to them for the fun of it... Bit of a shame really.
it200219@reddit
I shared what I found with my boss: a couple of example prompts and responses, and how they were incorrect. My prompts had a lot of context, details, sample code, the situation, etc. I also shared my multiple attempts to get a correct response. Tier-3 Bay company.
Battousaii@reddit
It's actually that each dev is using an AI helper specially built for them and their particular job, and you all can't tell because they've been doing this for so long.
pegunless@reddit
This seems most likely. Your company culture must not be conducive to admitting you’re writing code with AI, or it’s a problem with your local team and you’re extrapolating too far.
Where I work it’s around 60% that use AI to help with writing or debugging things weekly, maybe 10-20% are uber power AI users. This seems like the common numbers when asking around with friends elsewhere.
gravity_kills_u@reddit
This is a very interesting time to be in development. My previous job was in a heavily outsourced environment that was high code with massive production issues while my new job is a startup with a shrinking high code team and growing low code team. This feels very similar to the 2017-2022 ml era.
My context: I am an AI/ML engineer with 25 YOE all over the place. The vibe coding wars today remind me of the MLOps crisis during the pandemic. At the time there were lots of new data science grads because machine learning models were supposed to take over everything. Allegedly the big opportunity was in getting ML models into production using MLOps tools. It became obvious after a while that getting models into production was a lot more valuable to junior (mostly offshore) developers needing resume expansion than it was to stateside business users. So I doubled down on system design and SQL to cover real-world scenarios.
These days I expect system design to remain a valuable skill. 10 years from now there will be custom coders doing their thing, agent developers doing their low-code thing, and businesses still running. As before, there should be some excellent opportunities ahead.
To conclude, at my current workplace we have a high code team of younger (5-ish yoe) developers building an LLM app that is having issues due to typical lack of prompt engineering and tool use expertise. The older (10-20yoe) low-code team uses LLMs as a SO replacement to keep up with increasing ticket volume and is wildly productive in its unofficial usage. I am continuing to architect designs that make our systems compatible with agentic concepts while improving my Salesforce Kung fu to build more domain expertise. No matter whether LLMs can code or not, I will always get paid to improve outcomes on business reports.
Lalalyly@reddit
We use it to convert from one library to another when it’s a tedious task. Otherwise, I’m still using vim and pdb for most of my own work since we have a lot of custom internal libraries that the models don’t know anything about.
RegrettableBiscuit@reddit
I'm using Copilot in IntelliJ every day, and so is everybody I work with. Not really to write code, more to ask questions about APIs and stuff like that.
TheLion17@reddit
Yes. I constantly use it for small things, what I previously would have Googled (how to do x in y language), I now ask ChatGPT and 90% of the time get a more helpful response than Google/SO offers and without having to scroll through ads and bloat. Sometimes I will use it as a rubber duck, for example to get ideas for how to handle some particular edge case in an isolated piece of code or how to architect a piece of functionality. For bigger architectural discussions or in cases when what I am working on is too entangled in the internal code base, I do not trust it and prefer to have full control.
failsafe-author@reddit
I use it when I’m working in a new language or when I forget how to do something I haven’t done in a while. Sometimes I use it to rewrite example code from a different language into the one I’m using.
It’s fine for these tasks.
maclirr@reddit
Here are some ways I use AI in my work:
What I'm not doing is using copilot to write code for me. It seemed cool when I set it up but I just have not needed to use it. Maybe it's because I don't write so much code these days.
-ScaTteRed-@reddit
My team uses Cursor; it helps generate code faster, which increases the team's output. We're also trying LLMs to generate a TDD from a PRD, or to review PRDs (still experimental). For me, I use LLMs to enrich, analyze, and label data for my business.
eslof685@reddit
It's been writing half the code for me over the last year or two. If your devs aren't using AI then I feel truly sorry for you. What's next, you still write code by punching holes in paper cards? xD
WagwanKenobi@reddit (OP)
Half the code is incredible. Which language?
Any-Bodybuilder-5142@reddit
don’t believe these 🤡 they are likely bots / promoters from these chatgpt wrapper companies lol
eslof685@reddit
so sad that your own incompetence makes you lash out at others like this
Any-Bodybuilder-5142@reddit
incompetence is the kind that needs AI to write code for them
eslof685@reddit
I've mostly been working with C#, PHP, and Go; and I use Python a lot personally.
SympathyMotor4765@reddit
Our management spent 50 minutes out of 60 in the last all hands talking about AI. We're a firmware team with 95% of the code being ported forward and you'll be lucky if you get 4-8 hours per week of actual coding
Clean_Foundation6267@reddit
I work in the field of AI and am a developer. I use GitHub Copilot very frequently in VSCode. I've also found that OpenAI's LLMs, including ChatGPT, are very good at finding solutions to errors, and they make good code suggestions. They make coding very fast and efficient.
MindlessTime@reddit
I use GitHub Copilot because I'm used to it and find it less invasive. I use it for documentation stuff - updating readme files, adding doc strings, etc. I'll also use the chat when debugging; maybe 30%-40% of the time the chat will adequately diagnose an error faster than I could. I don't do the fully agentic/vibe coding thing though. If I ask the AI to write code, I do it for small parts like a function or class. And even when I'm asking the AI for snippets, I do it in a chat and type the change manually. Physically typing it helps my mind form a mental map of the codebase, and that's necessary for keeping it clean and maintainable.
Szpecku@reddit
We've just started adopting AI tools at our small company (4 developers and me, a hands-on engineering manager), and I've sorted them into 3 categories: code assistants, chats, agents.
We're a Java house, and we could see suggestions from code assistants kept improving in IntelliJ with its local model. We decided to take it further with Tabnine and can see it improving even more. Our junior and mid developers especially found it helped them implement a few functionalities that are well-known problems but that they'd never done before.
Then our architect quickly learned that he needs to tweak the prompt to get framework usage examples in the newer, more concise style, which was introduced later and has fewer examples on the Internet.
I like what he said: "use AI to explore how to implement a solution for a problem you're not sure how to resolve, but if you know how to implement something, do that first and then just ask AI for a review, to avoid getting into this vibe coding loop"
Chats we already use across the company - programming is not my main job and they help me create scripts whenever I need something, and our analysts use them to help with documentation.
And we're still early in using agents.
Overall, what we're trying to build is experience with which AI tools are useful, while staying pragmatic.
diaTRopic@reddit
It’s nice for stuff like converting a spec for a planned API into a data structure for its response, or writing out snippets of CI pipelines for specific tasks. It’s nowhere close to reliable enough to code an actual something out of nothing, though.
TinyAd8357@reddit
Senior at Google here: all the time. It’s made my job 10x easier
Coreo@reddit
Help with writing tests - when it actually makes substantial tests that pass.
Also help it with sanity checking my stuff from time to time. I treat it like an intern QA dev before handing over to the actual QA.
diggpthoo@reddit
Not me, but " has requested a review from copilot", so yeah... AI is actually using ME meaningfully at its job.
Admirable-Guide6145@reddit
I use chatgpt a lot for quick questions. It has a much friendlier tone than SO.
We also use one of those vector search services for internal documents which is so much better than confluence, but still worse than slack/discord search
We use OpenAI for some product purposes, and it takes a lot of examples and specific instructions to get good results.
When it comes to code it only gives garbage because I dont want to spend the time being explicit with my instructions
bilbo_was_right@reddit
Yes. ChatGPT frequently for harder-to-google questions; it's generally better than Stack Overflow. Cursor occasionally if I'm working in a codebase that's easy for an AI to understand, like a React project. Copilot is basically always enabled; I find its autocomplete is frequently as fast and good as what my actual LSP suggests, if not better. It also saves time because I frequently find it thinking the same thing as me. We also use AI review tools to cross-check our PRs in addition to other people's reviews. We transcribe and summarize meetings with AI. We use it as a knowledge base that integrates with our chat system, our ticketing system, and our documentation.
If your coworkers aren’t using ai by now, they’re basically choosing to be inefficient with their time. Even if you don’t use it to code, there are plenty of extremely useful professional applications
notkraftman@reddit
We were given windsurf Gemini and glean at work. Glean is incredible because we have so much in slack threads and confluence docs.
I use AI for everything, every day. It's like a free instant second opinion that you can take the advice of or ignore.
unixmonster@reddit
Glean is incredible. Easy for anyone in the org to use LLM/AI in a meaningful way. We find and share ways to speed up many generation, research, and clean up tasks.
ninseicowboy@reddit
I find it’s most useful at architecture and tradeoffs, and pointing me in the right direction for learning about something I didn’t know existed. Basically just using it as an ultra-literate search engine which hallucinates sometimes (thus requires fact checking)
Substantial-Tie-4620@reddit
I use it to organize meeting notes and shit
lesChaps@reddit
Documentation.
DrTinyEyes@reddit
I'm at a smallish startup. Each engineer has an AI budget and we have a copilot license. I've used AI to explain some complicated bash scripts, some undocumented legacy pandas code, and for writing unit tests. It's helpful but not a revolution.
slash_networkboy@reddit
I do, but it's a narrow use case. I'm QA and I use LLMs to make realistic datasets for test data. E.g. I need 100 person records that have a first name and last name, 30% need a middle name, all need a social security number but it needs to start with 900-999, address, etc. I also use LLMs to parse DOMs into accessors. I usually have to do some cleanup, but it takes what would be 4+ hours of annoying work and turns it into about an hour of fine tuning.
Have yet to see it make really good test cases though. It especially falls flat on e2e tests because of the lack of business logic knowledge.
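The constraints in a prompt like that are easy to verify mechanically after generation, which takes the trust question out of the cleanup step. A sketch of such a check (the field names are assumptions about whatever schema you asked the LLM for):

```python
import re

# Area number 900-999 keeps the SSNs safely outside the real allocation.
SSN_RX = re.compile(r"9\d{2}-\d{2}-\d{4}")

def validate_records(records, middle_name_ratio=0.30, tolerance=0.10):
    """Check LLM-generated test people against the prompt's constraints.
    Returns a list of human-readable problems (empty means all good)."""
    problems = []
    for i, r in enumerate(records):
        if not r.get("first_name") or not r.get("last_name"):
            problems.append(f"record {i}: missing name")
        if not SSN_RX.fullmatch(r.get("ssn", "")):
            problems.append(f"record {i}: SSN outside 900-999 test range")
    if records:
        with_middle = sum(1 for r in records if r.get("middle_name"))
        if abs(with_middle / len(records) - middle_name_ratio) > tolerance:
            problems.append("middle-name ratio drifted from ~30%")
    return problems
```

On small batches the ratio check will be noisy; it's only meaningful at the 100-record scale described above.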
TheRealJamesHoffa@reddit
I mostly use it as a much better Google/StackOverflow to answer more general kind of questions about concepts and ideas in a more digestible way. The best part is being able to ask it followup questions to clarify details, which isn’t really an option with posts on StackOverflow or whatever. And as someone who always is asking “why” in order to better understand things, it has accelerated my learning greatly and made me a more impactful engineer. Writing code is not its main use for me.
MattTheCuber@reddit
Yes. I am a tech lead for a startup software development group in our organization. We work a lot in R&D, computer vision AI, and application development. Our company pays for ChatGPT for each of us, which we use frequently for all sorts of things. We hire a lot of junior devs and interns who use it in place of Google. We use it for coming up with ideas, getting quick reminders, learning a tool/algorithm/concept in more depth, and just hashing out ideas with the internet wrapped in a chat bot.
One of the most overlooked benefits we have with ChatGPT is debugging. It's nearly impossible to measure how much time it has saved our company in manual debugging. Sometimes it can pinpoint a solution to a minor bug that would have taken us hours to track down.
We also recently started hosting a local LLM (Qwen2.5 coder 70B) since we work with government-controlled data (as DoD contractors). This has let us start using code completion, which has a significant impact on development time by reducing the time to write boilerplate classes, loops, ifs, functions, doc strings, etc. Our team definitely overuses and overvalues AI, but I believe it has improved efficiency to some degree.
Valivator@reddit
I'm a newly professional SWE, longtime hobbyist, and the only thing it's been helpful for is slightly better/longer text prediction. If you are applying a similar pattern on many places it can help as well.
As I am learning c++ on the job it can jumpstart my research by finding the appropriate keywords to plug into a search engine.
codemuncher@reddit
I use it to ask questions for technologies (like react) I’m only somewhat familiar with.
I use it to vet design ideas.
I use it for some lightweight research.
I use the “agent” stuff - cursor aider - a bit but it has a hard time dealing with complexity and I tend to not rely on it a ton there.
It’s alright but I’m a fast reader and good researcher and excellent coder (okay beyond excellent), and I don’t feel like it’s a game changer for me. Maybe I need to “vibe” more but on my projects which are security sensitive … just can’t trust the lying machine!
porkycloset@reddit
I use it as basically a shortcut around stack overflow. It’s quite good for smaller style questions like that. Anything serious or more complex, nope. And vibe coding is one of the dumbest things I’ve ever heard
franz_see@reddit
In my previous work, I used it often to scaffold projects fast. We also use CodeRabbit for AI code review - not a game changer, but it helps catch silly things that normal linters can't.
cajunjoel@reddit
I am trying to use a locally hosted LLM to mine data in old scientific books. Results aren't promising at the moment.
Full-Strike3748@reddit
Embedded firmware engineer here. My old corporate job looked down on anything AI, but it was very competitive and old school.
In the new job, we have a company-wide ChatGPT subscription. I basically use it in place of Google, or even like a graduate programmer. If I need to generate a bash script quickly, or a set of function prototypes, or even just a list of defines or includes, it's great. Things that just don't need to occupy my time or space in my head.
Obviously you need to check everything it produces, and i still do all the high level architecture myself. But it's been very helpful for all of the BS/repetitive tasks i couldn't be arsed to do anymore. Or even to generate a starting point for a block of code.
Sometimes I'll run my code through it and ask it to do a 'review' to see if there was anything stupid or obvious i missed. It generally has some good recommendations.
I'm always hyper-aware it's a slippery slope to vibe coding, but used properly, I think AI is a good helper.
GolfinEagle@reddit
I’m a senior SWE in the healthcare industry and I use Copilot basically as autocomplete and Copilot chat as an in-editor Google replacement. That’s the extent of my use of it in my workflows. We also have an in-house model we’re playing around with using for certain things.
Any time I see someone using gen AI heavily in their workflows, a la vibe coding, it’s because they suck at their job or suck at the language they’re using. Sorry but that’s the truth. Especially in healthcare, where quality and security standards are very high (at least where I am now), it really stands out when someone starts vibe coding. Their PRs get torn tf apart.
gollyned@reddit
There’s definitely a lot of GitHub tab completion in one project. It’s awful to work in.
Another dev used LLMs extensively to suggest things and try to understand what’s going on. I had no idea what he was talking about or its relation to our work. He tried to apply massive changes for simple problems. He had no idea what he was doing and got fired.
Another dev also uses them extensively but partially knows what’s going on. He doesn’t know the answer to any question and doesn’t contribute to discussions about anything. He makes massive, impossible to review PRs with tons of weird artifacts. He’s not bad enough to fire and doesn’t piss anyone off.
robobub@reddit
One use case I've found it quite helpful with has been migrations. We migrated some production quality Python code with tests to C++ for performance, and also ROS1 to ROS2.
Other useful cases are devops / system scripts, boilerplate, or bootstrapping projects in a domain/library/language you're not that familiar with
Using AI to design whole features and make architectural decisions is a recipe for a disaster currently
robobub@reddit
One use case I've found it quite helpful with has been migrations. We migrated some production quality Python code to C++ for performance, and also ROS1 to ROS2.
Unsounded@reddit
I’ve been using it more and more, it’s stupid to not use it.
It’s useful for getting started on unfamiliar code, writing unit tests where you end up meta programming anyways, and getting some junk/boiler plate setup.
It’s not great at bug fixes, editing prod code, and handling complex algorithms. It is good at getting you like 50% of the way there but then you see to take over. Basically it’s good at doing tedious repetitive stuff that folks commonly do and have to slog through. It’s not like you’re going to get 10x productivity, but there are tasks that are infinitely faster and that shows. I’d say I spend 20% less time on coding tasks, those were a small part of what I did but that’s still a huge improvement and worth its salt.
PressureAppropriate@reddit
I barely ever go straight to coding...
I have ChatGPT provide me with some garbage to start with and I hack the garbage until it does what I actually meant when I prompted it.
(I mean I get a raw piece of clay and I shape it until I have a piece of art)
ViveMind@reddit
100%. All day every day. Every task. Not joking
SerLarrold@reddit
Use it but more as a helper than anything:
- Writing tests and boilerplate
- Regex or other things I'd ordinarily have to look up but are straightforward and have lots of examples
- Algorithmic-type questions - being trained on all that leetcode makes it good for these
- Prototyping more complex features - I ask it to act as an architect or lead dev and kinda argue with it about how to structure code, as a way to find faults in my own thinking faster. Ultimately I'm doing the real work, but it's like a supplemented brainstorm almost
- Refactoring code - if I have a convoluted if statement or something similar, it's quite good at taking that and simplifying it. Haven't tried having it fully refactor components, but I suspect it could be quite helpful in something like porting Java code to Kotlin etc
It sucks for actually writing a ton of code for you though, especially if you have a complicated codebase which it doesn’t have access to. I’d spend more time trying to teach it what the actual problem is than just solving it myself for a lot of things.
Gofastrun@reddit
I use Cursor to offload grunt work like boilerplate, refactors, and first pass implementations.
It’s pretty decent at maintaining tests, but you need to watch it closely or else it will go off the rails.
I also use it to pre-review my code. It will find optimizations or missed corner cases that would have been caught (hopefully) in code review.
Chat GPT is pretty good at doing research on how to solve a problem. If you give it a problem definition it can write a report about how it was solved at other companies, what worked well, what didn’t work well, trade offs, etc with sources. Equivalent of days of manual research in minutes.
When it gets down to it, if I actually have to think about something and make decisions I’m using my organic brain.
I would say that for some tasks Cursor gets me from ticket open to ticket closed 25-50% faster. For other tasks it reduces velocity. Trick is knowing which and how to write the prompts for maximum effect.
depthfirstleaning@reddit
Day to day it’s kinda just a better autocomplete and google replacement. The code produced when you ask for anything substantive is generally too low quality for something that will be reviewed so it’s mostly local scripts.
We do use it in our system as a replacement for actual code: we use AI to gather information from various sources for customer reach-outs, and even in some automated operational tooling, where AI with MCP servers and a very strict, precise set of instructions creates a pull request on its own to change some configs.
Little-Bad-8474@reddit
I’m using it at a tier 1 almost daily. But evangelism is a real problem here (we have to use internal tooling). Also, it is very helpful for boilerplate stuff, but can write some god awful stuff with the wrong prompts. Junior devs won’t know the difference, so code reviews of stuff vibe coded by juniors will be something.
PPhysikus@reddit
I recently had to build up an Elastic Cloud system from scratch, and their documentation sucks. ChatGPT helped a lot in making sense of the mess.
fuckoholic@reddit
Yeah, what's up with people saying LLMs are useless. Had a similar thing today. I got fed up with unclear documentation and examples that did not fit my use case, let GPT instantly answer my questions and give me example code. I instantly knew the answer to my problem and voila, I'm done. Who knows how many more hours I would've spent searching for answers on the internet?
SubstantialListen921@reddit
A couple places where I've seen the tools really shine -
Cursor-based autocomplete is frequently a huge time saver. For boilerplate or repeated tasks, the sort of thing that you might be tempted to whip up a sed replacement for, it frequently nails it in one shot.
I've used Cursor as a first pass to translate code from one language to another. It's not perfect, and you definitely need to have some idea of the general encapsulation/decomposition you're aiming for, but if you give it that guidance it can do a lot of the grunt work. Obviously you need to read it carefully.
I was actually shocked how well ChatGPT 4o did on writing a script to translate between two different log file formats, given examples of both. That's a pretty sophisticated inference-of-a-sequence-to-sequence translator and it did a great job.
Cursor's IDE integration for things like adding a new argument to a function works very well. You can tab your way through a file and inspect the suggestion at each place; it's just a smart integration.
VizualAbstract4@reddit
A few things: data enrichment, release notes (still iterating on the prompts), flavor text for some descriptions and summary.
Everything else is just machine learning.
And user-facing tools to generate marketing messages.
We’ll likely start working on an assist in the coming year that will interface over text message, been planning and thinking through it for a few months.
That said, I’m personally weening off using AI in my day-to-day workflows, except for CoPilot.
It’s just getting increasingly worse and wasteful. Something that can take 6 minutes to do stretches to hours because AI is little more than a psychotic junior dev with a memory problem.
The_0bserver@reddit
We use it at our org. We have it writing emails - confirmation emails and a few others - and converting responses into simplified DB values for easier tracking. (Not my team or services, so not too sure tbh.)
Also, verification of some documents (which are passed across multiple hands), but it's also checked by humans, who I'm not sure are aware of it.
I personally do use ChatGPT etc. to source ideas, do some vibe coding, and get critiques. It's honestly quite nice to run some sections of code through these tools as long as you already know what's happening and how it should generally be. Pure vibe coding has resulted in a lot of lost hours though.
SaltyBawlz@reddit
I don't think I could work at a company that doesn't have Ai that I can use to help me code. I paste shit in there all the time or ask it for how I do things that I forgot how to do. I don't copy paste code back into our codebase from it, but I definitely use it to help come up with solutions every day.
daishi55@reddit
Yes, every day
QuietBandit1@reddit
It just replaced stack overflow and google search. But now I refer to the docs more idk if that makes sense
vinny_twoshoes@reddit
Yes! I'm a skeptic about many of the promises made by AI marketing, Andreessen and Altman and their ilk. But I use it a lot while coding, and that's true of the entire company I work for.
It generally can't come up with entire solutions, I still need to understand the problem well enough to describe solutions. For example recently I ran into some tedious leetcode type "detect overlaps in a list of ranges" problem that I delegated to AI. I could have done it myself, and I feel weird that that class of skill may atrophy, but there's no denying it came up with a suitable chunk of code faster than I would have.
The other major thing is writing tests. I do not enjoy writing tests. AI basically does it for me. I still check and edit everything quite heavily before submitting PRs, but it takes a few unsatisfying cycles out of the loop, and I don't mind that at all.
Wonderful_Device312@reddit
It's fantastic for putting together a quick tool or a proof of concept. But for my current main project, which is over 1 million LOC, it's useless except for very specific things. It can't be trusted to make any changes because it's blatantly wrong more often than not. It also loves to try to gaslight me about some basic concepts.
I tried using it at first but recently I've even turned off the AI auto completion and gone back to regular intellisense because it's much more predictable and reliable.
Sufficient_Nutrients@reddit
I work for a health insurance company and they block any use of AI.
possiblywithdynamite@reddit
I only touch the keyboard to copy and paste. About to start 4th job
Smallpaul@reddit
Yes, every developer at my company has access to either Co-Pilot or Cursor and almost every one uses them.
morgo_mpx@reddit
We use it a lot. Copilot is used mostly when there are tedious code blocks and I want to automate them. Also with testing, but I still review it all and never just assume it is correct or the way I want to build it, so I often massage it a bit.
Also, CodeRabbit is fantastic for PRs (the sequence diagrams it generates are super useful) and for pre-commit reviewing in my IDE.
Deckz@reddit
Yeah absolutely, I built out all of the CRUD for my TTRPG project I've been contracted to make in Unity. I had to edit it all by hand and fix a bunch of DRY issues. But it got me going really quickly with the Firebase API in Unity, which I had never used before. It cut my time down probably by half or more because I didn't have to reach out to docs, and it was able to write generic enough code for me to modify it and get what I need currently. If you use it for a containerized feature that's decoupled from the rest of your software and push code generation that way, it's pretty good. This is using Gemini 2.5 Pro, which seems to be the most useful model. Some of the solutions it came up with were better than what I would've written by hand, if I'm being completely honest. But I was able to validate its solution and simplify things a bit, works great.
talldean@reddit
I have a peer who does most of their writing using LLM to speed that up substantially.
I have multi-line code completion in the IDE that speeds me up writing code, which is my personal favorite.
I have a generative AI that accurately prioritizes which work across my department needs to move now, choosing the top 10% of, say, 100,000 hours of work that needs to be completed this year.
I have a chatbot pull up relevant documentation to reply to my users when they ask questions, which cut our operations load by half.
Each of those is, say, a 1% efficiency gain for all of engineering, which is >10,000 people, so doing any one of those well enough (not perfectly) saves us something like 100 engineers' worth of time.
Smooth_Syllabub8868@reddit
Pretty funny that everyone who uses it is being downvoted. Pretty pathetic to come here, ask a question, and downvote the answers.
Smooth_Syllabub8868@reddit
Same questions every day, guys.
brobi-wan-kendoebi@reddit
Working on some tool a staff engineer vibe coded in a week. It’s so insanely jumbled and broken and nonsensical it’s taken months to untangle, fix, and improve. When I reached out to him about problems in the past about it, the answer was “idk ask the LLM”. What the heck do we pay you half a million bucks a year for then???? Insanity.
I’ve been resistant to it, more accepting of it, kinda into it, disillusioned, and now I actively avoid it oftentimes after retroing how long a thing took using AI vs. traditional development. I will say it is more useful if you are in a common language using well-documented frameworks, etc.
saulgitman@reddit
If you're not using AI tools at all, you're shooting yourself in the foot. They're not going to replace software engineers like all the cringe LinkedIn psychopaths contend, but they are extremely useful tools. Like any other tool, its operator needs to know not only how to use it, but also when to use it. If you don't know how to do XYZ and you ask GPT to do it, you're going to have a bad time. However, if you know exactly how to do XYZ and just need to write some code to implement it, then GPT is fantastic. Do I normally need to fix or optimize its output? Yes. Do I occasionally find its output garbage and jettison it before writing it myself? Yes. Do I still think I'm much more productive by using it? Absolutely.
liqui_date_me@reddit
I’m an ML researcher at a big tech company and ML has saved me so much time it’s not even funny anymore, and I try and get others to use it as much as I can.
The way I see it, code is a means to an end in my job, and any way to write less code that achieves the same functionality is a win. Most of my time is spent writing boilerplate data processing, data visualizations and plotting code, and 10% of my time is actually training experiments.
With the newer reasoning models I’m able to come up with newer model architectures that work right out of the box, write scaffolding code to visualize data and model progress, debug training issues like overfitting or memory leaks, or even parallelize inference scripts across multiple machines. All of this could have been done by me alone, sure, but it’d be a lot of writing mind-numbing code that would take up a lot of my time. Now I have more time to run more experiments and analyze their results.
I don’t use any coding tools - just copy paste stuff from our company approved LLMs
bruceGenerator@reddit
Sure, I find it incredibly useful for React code like "convert this page into a reusable component", "scaffold out the boilerplate for Context", "lets make this a reusable custom hook", stuff like that saves me a lot of time. Keeping the scope and context of the prompt narrow, I can look over the code quickly and spot any discrepancies or hallucinations.
Main-Eagle-26@reddit
Yes. I'm at a f500 company and we use LLMs regularly to write code. I use Cursor (which uses Claude as its engine).
It's useful sometimes. Totally worthless other times. There's a balance to be found.
Fartstream@reddit
SWE with 7ish YOE at a 200 person series D.
We are AI driven from a product perspective, and I use it for the usual "dumb questions" and for boilerplate.
It's nice for some things but as everyone in here is well-aware, it lies all the time.
I would say it has increased my test writing speed by 5-10%?
I've found the only way I can really get remotely close to trusting it is to give it a snippet and say
"give me another test that tests xyz that STYLISTICALLY does not differ from the above unless absolutely necessary. Explain your reasoning."
__blahblahblah___@reddit
Pretty sure I can’t work without Claude and Cursor at this point. It does like 70% of the lift of my MRs these days. Especially if I jump between code bases I’m unfamiliar with.
It also corrects about 50-60% of my MR feedback I get from people.
skamansam@reddit
Yes. I work at a company that develops various AI models for a myriad of things. We have been using Claude for over a year to help write documents. Last year I convinced my team to use Windsurf and the boss bought a team license for us. We just finished a huge UX refresh where we relied heavily on Windsurf to get things done. I'm doing cleanup and testing now. Cleanup manually, and testing with Windsurf. These assistants are just tools. The biggest issue I've seen is the lack of knowledge to use them properly, just like most other tools.
PolyglotTV@reddit
Yes, developers are using LLMs. But like any new development tool, it takes time before folks are able to educate themselves and set aside time to change their development environment.
Compare it to other technologies like modern IDEs, static analyzers, or even Vim/Emacs. All these things are super helpful, but if you already have a good workflow you are disincentivised from disrupting it to try something new.
Even a few years ago I still had a lot of coworkers using Notepad++.
So it'll take time until everyone can be bothered updating their workflows but LLMs are generally really useful and so it is inevitable they will continue to be used more widely.
LateWin1975@reddit
All these people trying to add nuance to the question talking about vibe coding and replacing every thought with AI are overthinking this wayyyy too much.
Is AI a great tool, widely used, and aiding people in their tasks? Yes. And if you haven’t realized that yet the layoffs are coming for you sooner or later.
(I cannot attest to your internal tools)
zegrammer@reddit
Yep the cursor auto complete is insanely accurate
secondhandschnitzel@reddit
Yes. Every single day. I am dramatically less effective without a LLM backing me up. I use it for many things. Explain and fix a bug. Write me a test that does XYZ. Write me a function that takes in X and produces Y by doing Z. Write me a SQL query to get Y from table Z with these constraints oh and please join on this. Can you explain this file, function, or line to me? This is the pattern I want you to follow. Now do it for the next 5 things. Can you help me find where X is happening? Why would a developer do something I don’t think is a good idea? Am I reading this code wrong or is there an obvious bug? Please dockerize what I just wrote. I don’t follow it blindly but it’s a great way to augment my work.
It feels borderline unethical to try to work from an airplane without WiFi these days because I’m so much more effective with an LLM sidekick. It lets me focus on the important things.
llanginger@reddit
Imo about half of the scenarios you describe are good use cases for ai, and the other half sound like potential problem areas you might want to consider working on. If you need ai to explain why people might do things differently, or what a line in your codebase does, or if there’s an obvious bug it just -sounds- like there’s not much of “you” in your work. Not that my opinion should matter of course!
mia6ix@reddit
I have twice as many YOE as you do, am an expert in my field, and I use ai much the same way as the person you’re responding to. It’s a tool, and offloading cognitive tasks to it that I can otherwise perform frees up my brain to do more creative and complex things it can’t do. That’s the point.
llanginger@reddit
Asking so as not to assume - are you attempting to put me in my place with the first part? In any case - I agree! I’m not taking a position that using ai is some kind of inherent admission of a deficiency, or taking an unearned shortcut. But also it can be, and as I noted in my response to their followup it’s not possible to know what the unsaid context is in a Reddit post.
I don’t disbelieve your assertion that your use of ai is similar to what was described and leads to you being able to put -more- of yourself into your work. Another person might use similar words to describe “instead of talking to my colleagues I let ai tell me what to do”, and I stand very confidently by my suggestion that that person is reducing the amount of themselves in their work.
mia6ix@reddit
Not meant to put you in your place. In your original comment, your opinion seems to be that only half of OP’s uses of ai are “good use-cases” while the other half may be propping up incompetence. I’m pointing out that one can use ai in exactly the way OP describes without competence being in question - and if that’s the case, perhaps your understanding of a good ai use-case ought to be re-evaluated.
Without knowing the full picture of what a person is responsible for at work, how could you judge that there isn’t “enough” of them in what they produce? My overall productivity has doubled with ai. There absolutely is more of me - my perspective, my output, my leadership - in my org now than there was last year, period.
llanginger@reddit
Respectfully you are reading something into my initial comment that isn’t there, though in fairness a second pass at it would have resulted in “half of those sound like clearly good use cases, …”. I truly meant for “potential” to be the operative word in “potential problem areas”.
If you reread my last response to you you’ll see me anticipating and endorsing the idea that these tools can allow us (you) to bring more of what makes us “us” to our output.
In any case - I think the truth is there’s not really all that much distance between our positions on the matter :)
KhonMan@reddit
Work on your communication skills then my dog, because this sounds very catty:
llanginger@reddit
I disagree :), I stand pretty strongly behind that in context with the thread. Thanks for the feedback though!
KhonMan@reddit
Yeah obviously you disagree or you wouldn't have said it in the first place. Feel free to never change lol
llanginger@reddit
Ok then :)
secondhandschnitzel@reddit
I don’t need AI to explain things to me. It can read a 150 line file worlds faster than I can. I then get to focus on the parts that matter to what I’m working on sooner and with more holistic context.
Maybe you have infinite confidence, but when I see something that looks like an obvious bug, I presume I’m wrong. It made it through code review and is in prod and was written by someone who thought about the problem a lot more than I have. So before I go breaking something by trying to fix it, I love to be able to have a second opinion that I haven’t missed something. And again, it lets me move faster. I could take 15 minutes to really look deeply at something that doesn’t look right. Or I could flag it to my LLM, keep writing what I was doing, and then read its summary and start my assessment from there. The second is dramatically faster.
I do currently use it to make up for a SQL query skill gap but I generally (not always) use it as a means of improving that skill vs a means of not having to learn a skill.
llanginger@reddit
All of that makes sense! It’s hard to gauge what’s left unsaid in any given comment ‘round here :). Thanks for replying!
secondhandschnitzel@reddit
Yeah. The thing that really reminds me that “I’ve still got it” is that when AI can’t, I do. It does have significant limitations. It gets a lot wrong. I just know enough to know when it’s wrong and ignore it. I also tend to rather heavily guide it. I’m calling the shots. If I don’t like the result, it either tries again or I do it myself. When it goes off on a tangent, a new chat gets opened.
kerrwashere@reddit
“I don’t want to use a tool that is designed to replace my job, but it is still going to be used anyway”
MissionDosa@reddit
I use Copilot for assisting me in general. Helps me a lot to write throwaway scripts for one-time data processing/analysis.
tb5841@reddit
It's helpful if you forget syntax: "How do I remove an element from an array in Javascript by value?"
It's helpful for explaining syntax that's unfamiliar: "what does %w[a b] in Ruby mean?"
It's particularly helpful for explaining browser console errors, which I sometimes find hard to decode.
I find it helpful for writing CSS (maybe because my CSS is bad).
It's helpful for writing a general structure for tests, if you give it the file you want to make tests for (even if the actual tests it makes aren't so good).
It's extremely helpful for generating translations, if your code needs translating into multiple languages.
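The first question in that list has a direct Python analogue (shown in Python rather than JavaScript just to keep these notes in one language; purely illustrative):

```python
# Removing an element by value from a list, the Python analogue of the
# JavaScript question above.
nums = [1, 2, 3, 2]
nums.remove(2)                     # drops only the first matching value
assert nums == [1, 3, 2]

# To drop every occurrence, a comprehension is the idiomatic route:
nums = [n for n in [1, 2, 3, 2] if n != 2]
assert nums == [1, 3]
```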
newprince@reddit
Yeah we're still not sure how this will shake out. Some people believe we need to have a "bring your own agents" approach, meaning the company provides many LLM services, but shows you how to make your own agents to perform what your department/unit needs.
I'm skeptical that people will build their own agents and apps but I don't know of great alternatives. It seems daunting to embed with all the departments to build agents to do their very specific workflows with specialized knowledge bases, etc.
Banner80@reddit
I don't understand how anyone could be working with code and NOT using AI.
I have the code autocomplete on and it saves a ton. Like if I'm duplicating a line to add a field to a form, the first line is something like `fieldname: firstName`; then when I copy the next line, the autocomplete automatically changes it to `fieldname: lastName`. The autocomplete is reading the document and noticed the dataset schema a few pages up, and uses that to predict what I'm trying to do. That alone means writing code at least 30% faster than before.
Then there's talking to it. If I'm designing a database table, I say: hey, here's my plan for what's happening with the time entry log. Here is my planned schema, and here is the reason why we need this table and what we are planning to do with this moving forward. Then the AI gives me 2 pages of things to consider and potential touchups to my table schema. Even if nothing of what it said makes a difference, it's still nice to get a second opinion on the fly. But it usually says at least a couple of wise things to consider.
Then there's debugging. I use AI to get to a problem fast without having to consult the online function and syntax docs. The AI usually gets there fast and provides a nuanced response that includes the syntax hints I would have found online, but directly addressed what I'm working on.
And notice that up to this point, I haven't made it write any code outside of my own intended code. Asking it to write code is a different animal, but for small contained functions it can do a pretty decent job as long as you oversee closely for quality.
marx-was-right-@reddit
If you have to use AI to do this you are literally stealing a paycheck from your employer lol. Congrats i guess?
Cyral@reddit
Believe it or not employers would love for you to get more done in less time
marx-was-right-@reddit
You're not getting anything done in that scenario. You're just talking to a text prediction bot that spits out junk.
How about actually read the (up to date) documentation and apply the knowledge?
Cyral@reddit
You can tell yourself that all you want
bfffca@reddit
Your debugging part does not make any sense, have you asked AI to write it?
pwouet@reddit
Even the second opinion stuff feels silly to me. It's not nice to have 2 pages of obvious answers as a second opinion. It's just yapping.
Cyral@reddit
I love in this thread where people share ways AI is helping them and then others tell them that can’t be useful.
Banner80@reddit
Yep. Fascinating to watch.
This sub is not named properly. Same rookie mentality found everywhere else on Reddit.
Cyral@reddit
Everyone who is responding with how they use it is getting downvoted lol. There are like twenty comments sitting negative right now.
This same thread or a variant of it is posted basically every day here and it goes about the same.
quentech@reddit
I mean, adding a first name field to a form and feeling like you need a second opinion to schema out some DB tables sounds like some seriously junior dev shit.
WagwanKenobi@reddit (OP)
But that's saving you a few seconds: copy-pasting the block then copy-pasting the new field in a few places. I specifically avoid AI for such repetitive work because I'm afraid it will break my flow.
omgz0r@reddit
Yes, I never fully got why these savings are amazing. I have to guess that they are coming from a world where they would type the entire line, not copy the previous line and edit the one field that changed. So to them AI is the way when for say, me, it is more or less equivalent.
Plus, y’know, I’ve never been just frustrated that I can’t type the solution fast enough. Usually I am thinking as I go and modifying on the fly and so text generation can tend to rob me of those moments of thought.
Banner80@reddit
Yes, autocomplete only saves me 20 seconds once per minute. Add that up for an entire work day.
The bottom line is this is a powerful tool and you are going to have to learn it. All the regressives here bending over backwards to pretend there's no use for AI are only making it harder for themselves. The industry is 100% moving in this direction and you are only falling behind. Start learning how to use the tools. We are only going to use more AI in the future, so something as mundane as autocomplete has to be adopted already. Enough with the BS excuses.
Honestly, I was having this discussion last week, and at this point I would be very skeptical of hiring a dev that still doesn't use AI. I'd be worried there's something seriously wrong with their mentality. How can you be in a field about critical thinking, and yet not see the obvious immense value that AI tools have today?
mia6ix@reddit
If you’re still copy-pasting ai inputs and outputs, you’re not making full use of new ai-powered IDE or CLI workflows. Look into Windsurf, Cursor, Aider, Claude code, etc.
pancakecellent@reddit
I work for a SaaS shipping platform with about 80 employees. I just wrote tests for all the code in our LLM agent that carries out customer requests. Over 200 tests, and it would have been so much worse if I didn't have Windsurf to speed things up. To be fair, I try to outline exactly what I'm looking for with each test, so it's not vibe coding. However, it would easily have taken 3x as long to write it all myself.
mentally_healthy_ben@reddit
I mean, yeah? Definitely? They've honestly transformed my experience of software development.
I'm so much more productive with this permanent pair programmer / rubber ducky. Esp if I'm working in a domain that it can "understand" / generate decent code for.
kr00j@reddit
Principal at Atlassian (OAuth + Identity) - I keep most AI code agents away from my IDE, since I find them very disruptive, producing useless slop when having to hash out complex security concepts: essentially mapping RFCs to our own stack. Many of the OAuth RFCs - probably many other specs as well - outline a general concept and approach, but implementation details and edge cases are very much left up to individual installations. Just take a look at something like dynamic client registration.
Pretagonist@reddit
Yes all the time. But in an informed way as a multipurpose tool.
Vibe coding, though, is the most stupid concept for software development I've ever heard. By their very nature, AIs keep compounding on mistakes, digging themselves ever deeper into holes of complete disaster. I've seen AIs keep making the same mistake over and over even after it's fixed, just because it remembers the mistake and doesn't really have a concept of good and bad memory.
latchkeylessons@reddit
It's handy with small refactors or simple knowledge queries in the IDE. But also lately we've been using it to summarize commits and automatically post to our sprint tasks daily since the executive team is asking for daily recorded updates from everyone. Claude is pretty good at that task, actually.
CRoseCrizzle@reddit
Not really yet. I did consult AI on a regex pattern because I don't like regex. I'm sure the day will come soon enough.
adambjorn@reddit
Yup I work at a big enterprise and we have a huge focus on AI right now.
For development I use the Copilot agentic mode. I added a new capability calling some common (and well-documented) APIs. I just gave the model links to the documentation, added the appropriate files to the context window, and with Sonnet 3.7 it got me 80% of the way there. Saved me at least half a day's worth of work.
I use it to write boilerplate code, or simple python scripts that saves me 30-60 minutes here and there.
I wrote another service that calls an LLM to do a specific translation task that regular machine translation sucks at. This ended up being a six-figure savings for the company, since we had to have a human do it before but the LLM output was good enough.
Some teams have built testing frameworks that work really well, or just use it to generate test cases.
It's also really good at PO-type stuff like writing Jira stories in a specific format.
They can be a real time-saver if you put a little bit of time into learning how to use them properly.
ObsessiveAboutCats@reddit
GitHub Copilot has been very useful for me.
VooDooBooBooBear@reddit
I use AI daily and am encouraged to do so. It really just increases productivity tenfold. A task that might have taken an hour or two previously now takes 10 minutes.
LoadInSubduedLight@reddit
And a PR that used to take 10 minutes now takes an hour, and you don't know how to process the feedback you get.
the__dw4rf@reddit
I use it in a few capacities.
I've found it's good for small, well-defined tasks: "Give me a C# model that maps to this SQL table" or "Write me JavaScript code that will find the last continuous sequence of letters in a string after a dash, and strip it out".
Or simplish things I don't do often enough to be proficient at. Every now and then I need a regex. I've had a lot of success asking AI to write regexes for me.
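For the dash-stripping prompt above, one plausible reading is "drop a trailing dash-plus-letters suffix". A Python sketch of that interpretation (the function name and exact behavior are assumptions, since the prompt is ambiguous):

```python
import re

def strip_trailing_letters(s):
    # Remove the final run of letters if it follows a dash at the end
    # of the string, e.g. "SKU-1234-abc" -> "SKU-1234".
    return re.sub(r'-[A-Za-z]+$', '', s)
```

This is exactly the kind of prompt where spelling out an example input/output pair pins the model to the behavior you actually want.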
Same thing for SQL queries. I often go months without touching SQL. Sometimes I am stumbling trying to remember how to do something, and I can usually get a solid answer.
Another thing I have found is that when upgrading libraries, AI can give really good how-to guides. Recently I had to jump 12 years of jQuery versions. AI really helped guide me through that.
I have NOT had success with more complex stuff, or even simple stuff with large datasets. We have some SQL tables that have 40+ columns (I hate it), and when I give AI the table and ask for an EF mapping or whatever, it will just leave some shit out. I'll say hey, you forgot this column. And it'll say, you know, you're right! And give me back the same response it did the first time, lol.
1ncehost@reddit
I'm a SWE at a medium sized web business. 90% of my code by LOC is written by LLM. I invested a lot of time into creating a process that works, including writing my own LLM runner, https://github.com/curvedinf/dir-assistant. I also use codeium for code suggestions.
I kept it on the DL for a year because I didn't want my coworkers to know I was working about 1 day a week yet still closing more tickets than most.
I'd rather not maintain dir-assistant, but I haven't found anything that produces higher quality code yet. This is due to the custom RAG process I innovated in it.
Gloomy_Cod_9039@reddit
I recently started using AI-based tools in my work. It helps increase my productivity and is being used openly by most of the developers on my team. Even our leadership team has set up a productivity-increase goal tied to the adoption of AI-based tools.
Edg-R@reddit
Absolutely, it’s made me insanely productive.
I don’t “vibe code”, I review every line of code generated by the tool and I give detailed requests to get exactly what I want. Being technical and knowledgeable enough to know what to ask for and how to ask for it is a skill that many people may not have. AI tools can infer what you mean but that’s a recipe for disaster, it’s best to be explicit if possible.
I rejected using AI tools for a while mostly because it seemed like a fad and because learning how to use a whole new way of doing things seemed too tedious, I felt like I had better things to do. Now I’m more productive and shipping better quality code.
Brilliant-8148@reddit
Why not start building your own competing service and replace the management and leadership of your current company? That's their plan for you
killbot5000@reddit
I use it damn near every day but…
Cursor is only useful for writing boilerplate code, and even then it depends on the API/pattern you're following. I'm also convinced that Cursor has gotten dumber since I started using it.
ChatGPT is very helpful at explaining high level concepts and introducing me to nomenclature for things I need to spin up on. If you get too far in the weeds, though, it’ll hallucinate whatever details you’re asking it about.
choose_the_rice@reddit
I was a naysayer until I tried Claude 3.7 Sonnet and holy shit... the tool is too powerful to ignore at this point.
As a dev with a few decades of coding, this has made me excited to build again after feeling really burnt out. It frees me up to think about the design. It's like having a team of interns that never tire of my relentless feedback. It does require intervention and skill to use the tools correctly. I review every line. But it is a 2x to 10x productivity improvement.
Logical-Ad-57@reddit
Use it for intellectually unimportant work that is slow to produce, but very easy to test.
Three basic modalities I use AI for:
-Faster Google Search + Stack Overflow copy paste.
-Rough idea to something that I can search for the docs on. Something where you'd ask an experienced dev in a particular area for how to get started, then search for documentation based on what they tell you.
-Unimportant boilerplate. Someone making you write unit tests to hit a coverage requirement? They now get lousy AI mocks.
Think of all the hype around generative AI as marketers discovering that you can write a for loop to add the numbers from 0 to 100. Suddenly the computer makes everyone Gauss. But the reality is there's narrowly defined, unpleasant, often mundane work that we can automate away to leave time for the challenging intellectual work.
tn3tnba@reddit
My main use case is quickly developing a high-level mental model of something I don’t get yet. I think it’s amazing for this and makes me maybe twice as fast at getting a handle on new-to-me concepts. I also ask it to help me think through edge cases I’m worried about.
I don’t use much codegen, except for things I screw up like regex and bash arrays etc.
ryan42@reddit
I just used an app called Screenshot to Code today to go from a mockup to React, and it worked pretty well.
Saved me time and frustration and the results are good enough for an early version MVP app or even human quality by my standards
Impossible_Ad_3146@reddit
Yes
EmmitSan@reddit
I’m pretty sure there is no one who is NOT using it, unless you want to “no true Scotsman” about what “meaningful” means. It is just too useful, even in its most trivial applications.
DargeBaVarder@reddit
Yes.
It’s great for generating code. I can name a variable and AI autocomplete will get me most of the way there.
It’s great for reviewing code. I describe the change I want and it mostly gets the suggested change right (sometimes I tweak my feedback for the AI suggestion).
The built-in prompt is great for digging up examples and documentation quickly without having to dig through a fucking ton of internal documentation.
Also, one of the tools I built uses AI too. Overall it’s a big performance booster.
Tuxedotux83@reddit
It depends on what your team is in charge of: for complex, highly sensitive, high-impact code, AI is not utilized that much, for obvious reasons.
A top-tier software engineer will still beat any LLM on complex, sensitive, high-impact software architecture assignments. The only trade-off is that humans, while producing a much higher quality and 100% tailor-made solution, need a ton more time to do so, and top-tier companies have the time and resources.
“AI to replace software developers” is mostly stupid hype, normally pushed by either (1) company executives who have no idea what they are talking about but got some “consultant” to tell them what the best current thing is, or (2) a company selling you an AI product.
ArriePotter@reddit
You're forgetting the execs who want to scapegoat AI for off shoring
Tuxedotux83@reddit
That’s true, AI in some cases is indeed “Actually Indian” (no offense fellow Indian)
Least_Rich6181@reddit
I don't think it's really a competition. A skilled engineer will be even more productive with Gen AI tools.
Although you could say gen AI tools negate the need for as many lower-skilled engineers.
Tuxedotux83@reddit
AI tools can enhance and make the work of a highly skilled software engineer more effective- no question about it, but mostly as an assistant, a helping hand etc.
Still does not change the validity of what I originally wrote.
llanginger@reddit
Except that the way you get experienced engineers is by accepting and investing in the low skilled engineers :)
Least_Rich6181@reddit
Yup totally agree.... that is the conundrum.
It's almost like the entire industry is betting they won't need any mid level "line level" ICs anymore. Or we will rely less and less on handwritten code.
There's also the fact that the young'uns are vibe coding their way through everything as well, so they're mostly glossing over stuff....
It'll be interesting to see where we are 10 years from now
llanginger@reddit
Maybe :). I’m not of the opinion that AI is a fad - it’s good at some things. That said, I’m not sold on the idea that it will deliver on the big promises, and if it stops accelerating, or even begins to show signs of reaching a ceiling, I would expect “the industry” to adjust back to a more sane recognition that humans are, y'know, actually not a fad :)
Abadabadon@reddit
I use chatgpt the same way I use stackoverflow
idgaflolol@reddit
We use Cursor, but I often find myself going to ChatGPT to have “sub-conversations” that don’t require immense context of my codebase.
The primary ways it’s helped me:
- writing tests
- debugging Kubernetes weirdness
- writing one-off scripts to manipulate data
- designing DB schemas
LLMs get me like 80-90% of the way there, and through prompt engineering and manual work I get to the finish line.
YetMoreSpaceDust@reddit
IntelliJ's auto-complete has gotten a lot smarter all of a sudden; I'm guessing they're using some sort of AI enhancement. I've noticed that it's right about 50% of the time - it'll offer an auto-complete that's exactly what I was about to type and I'm a little shocked that it came up with that. About half the time, it's just funny what it thought should come next.
gigastack@reddit
Autocomplete is dumb, but I use AI constantly.
- Scaffolding out unit tests for me to check/tweak
- Simple refactoring
- Documentation (I edit, but it gets 80%)
- Asking for critiques
- Syntax help for Splunk queries or terminal commands
- PR reviews
AI models are getting better and better. "Agentic" IDE workflows are getting closer, but still too slow most of the time.
If you really don't use AI at all.. good luck
Eli5678@reddit
I don't use AI at my job for anything meaningful.
I have one coworker who does and I've had to clean up his bullshit enough already.
hidazfx@reddit
I've said it a million times and I'll say it again: I exclusively use GPT with the internet search feature, and then only as one tool in my toolbox. It's not always correct; it's often wrong, and confidently so. I almost never actually use any code it finds or generates unless it cites official documentation. Even then, it's still tested, of course.
CreativeGPX@reddit
In my organization there is a group of like 30 people across all disciplines who are tasked with evaluating AI and making policies regarding it. They are looking at everything from privacy and data ownership to accuracy to cost efficiency to bias to which tech is better to legal implications and custom contracts. That is to say, we're taking a pretty conservative approach while still allowing experimentation with it. You aren't prevented from using AI but are supposed to notify the group if you are and they help spot potential risks.
For me, I don't use AI for day to day tasks (partly because I don't find it that helpful, partly because of the privacy/legal/cost aspects), but the set of projects I'm developing includes a public facing AI agent so we're not anti-AI.
I'm not aware of any coworkers who heavily use AI, but I'm sure some use it for small things like text generation. I don't think people here use it for code generation.
PerspectiveLower7266@reddit
Every day, all day. It's an incredible rubber duck. It gives better answers than Google or Stack Overflow 99% of the time. I use it to rewrite functions all the time. It struggles at big tasks, but it's definitely better than a large number of tier 1 devs.
eddie_cat@reddit
Nobody at my job is interested in using it for anything beyond what Gemini produces at the top of the Google search results when we Google quick shit
trg1379@reddit
Working at a small startup and we use it a fair bit and share how we're using/testing new things out constantly.
Currently using Cursor + occasionally Claude while planning/actually implementing (and sometimes for SQL-related stuff), and then Sourcery for reviewing PRs. Been trying out a couple of things for generating tests and debugging but haven't found anything consistently good there.
deZbrownT@reddit
Here is one example: I work as a contractor and need to submit a monthly report with a list of my activities. Almost 100% of that is done with AI. It creates tickets, titles, descriptions, updates the comments, tracks the sprint goals, matches it all, and at the end it spits out the report. In reality, I would never spend that amount of time to create such a fine and easy-to-follow report. It makes my life so much better.
i_ate_god@reddit
Yes. We add chat bots to every thing.
Our customers don't care, but it makes the shareholders happy and that's all that really matters in the end. As long as the stock price goes up, the work is meaningful.
jamie-tidman@reddit
Yes. I use v0 for fast prototyping, O3 / deep research as a sounding board for architecture decisions, and Copilot with VSCode. I’ve had poor results with agents and tools like Cursor, though.
warofthechosen@reddit
Is copilot considered meaningful usage?
Proud_Refrigerator14@reddit
For me it's mostly fancy code completion and a livelier rubber duck. I wouldn't pay for it for hobby projects, but it takes a bit of the edge off the agony of a day job.
Howler052@reddit
Been using Augment Code for a couple of weeks now. It's the best one so far for me, on a large monolith codebase. It does a lot of the work for me, I then iterate on it. I don't really vibe code, I ask it to do the task, then I go through the actual code and ask it to make amendments. Genuinely made a difference for me.
Northbank75@reddit
We use Copilot a fair amount. It's good at spitting out boilerplate, it's not bad at reviewing a chunk of code for potential improvements, and it's really good for refactoring. It's not writing much for us, but it is helping us find weaknesses and flaws, and it is a definite net positive. This wasn't a decision; we just started playing with it since it was available in Visual Studio and we were curious. I had some gnarly recursive nonsense I had to spin up to flatten out bad hierarchy data and clean it up as part of a migration this week, and it helped me not punch my keyboard ....
It's just a tool; we are using the tool ... and when the execs ask about AI usage in our day to day, we let them know that is already a thing and they get this warm happy smiley thing going on haha ...
moduspol@reddit
I had an LLM walk me through troubleshooting why our Elasticsearch instance might be running out of memory. It was pretty good. Included things like “run this command and give me the output.” And then it’d interpret it and rule out one cause or another and have me try more things. It ended up working out and being a lot faster and cheaper than going through support.
rlgod@reddit
I use it when I frustratingly can’t find a specific thing in the AWS docs that I know exists because I’ve read it before. Usually it helps me find the right keyword/s or even gives me the link to the page I’m looking for. Does that count?
SunglassesEmojiUser@reddit
I use it pretty regularly to troubleshoot niche issues with git, Gradle, and whatever language I’m using. It’s like a Google search that I can ask follow-up questions of and give context to more easily, such as troubleshooting steps I’ve tried.
marx-was-right-@reddit
3
DoctorSchwifty@reddit
Yes, for unit tests (when it works), fixing my syntax errors, and most recently formatting JSON.
ravenclau13@reddit
Debugging weird AWS errors :D with claude.ai. Works pretty well. Better recommendations overall than going through SO or random blogs, and it gave good working code snippets.
chuckmilam@reddit
It writes crap Ansible, which is most of my life as an automation/sysadmin guy heavy on the Ops side of DevSecOps, so I mostly just use it for softening my grumpiness when I have to soft-skill blather in e-mail or Teams messages.
cur10us_ge0rge@reddit
Yeah. I use it to schedule quick meetings, rewrite important emails, summarize long chat threads, and find and summarize info on wikis and docs.
Individual-Praline20@reddit
Absolutely not. 🤭
bubberssmurff@reddit
I only use it because I forget syntax sometimes
loxagos_snake@reddit
We get a Copilot subscription for anyone who wants it.
I'm not sure if that qualifies as meaningful, but I use it for code completions, quick questions and digital chores. For instance, if I have a DTO class with a lot of properties and want to generate a test JSON, I'll let the AI handle it to avoid hand typing it.
Or maybe I can ask it to insert logging statements in a standard way, or remind me how to do a certain boilerplate thing, or modify some data or provide scaffolding for using a new library.
In general, I try to do the thinking and use it to save time, not to outsource my work. I never use it to generate actual code that I just trust and run with.
techblooded@reddit
Yes, it's a part of my daily workflow. Not exactly for coding but for various other things such as automations, workflows, etc.
Accomplished_End_138@reddit
It's great for explaining code, or finding things in your code when you don't know the specific string of words to search for. I think it's great at boilerplate and anything that has millions of examples.
I'm trying to make actually useful tools, but that's outside of work...
Sudden-War3241@reddit
Yes, GitHub Copilot. The inline code suggestions are not always helpful, but sometimes they do help. Copilot chat is really handy, but I believe it's only good when you know exactly what you want to do and are using it as an assistant to get it done quickly. Also, like someone in the comments suggested, making it summarize something is useful too. Since it has access to your exact code, it does an acceptable job. I use it at times to make small tool-type projects to read something and generate a report, etc. It does help me save a lot of time.
Bottom line is it is not helping me do something that I would never be able to do. It is just making it a bit simpler and quicker to be able to achieve it.
Inevitable-Hat-1576@reddit
I’ve used ChatGPT personally as a stack overflow replacement for a good while - occasionally getting it to write test stubs and algorithms for me. But nothing major.
As a company we are just starting to use copilot but we’re not sure for what yet
morosis1982@reddit
Using GitHub copilot in vscode, use it for refactoring, examples of new stuff that are relevant to our codebase, etc.
We're also looking at spinning up an ingest queue based on AWS bedrock that can do unstructured text parsing.
It is useful for certain things, but it often needs a proper review before being able to commit unless you're just after a quick prototype.
edgmnt_net@reddit
I suspect a variation on 2: others are using AI, but they don't do the same thing you do. I could see it taking off in feature factories that generated a lot of churn even before AI.
stukjetaart@reddit
Sr. backend dev. I use it quite extensively as a pair-programming buddy.
I have a lot of knowledge of the project, how it is structured, and how I want it to be structured, so I ask the LLM to implement my detailed request. Then I just look at how the AI has interpreted my request and what changes it made to the code, and most of the time I use that as a basis.
I also use it to quickly whip up automation scripts, or to generate CLI tools that make my life easier. I let it vibe code a frontend UI tool that would have taken me a long time to do myself (since I don't have that much FE experience) and that is now used for testing (since we are working with images and add stuff on top of these images, it was a bit hard to rely only on Postman for visual inspection of the test data).
IMHO I would say it is fine to generate code if you have enough seniority level.
For juniors it is amazing for getting to understand the project they have to work in. They can ask questions like "give me a quick rundown of what happens when we trigger this endpoint with this data, what external services are being called, and for what purpose?" or "I want to refactor this return type, are there any breaking changes" or "I have this error after my change, what could be the cause of this?"
iscottjs@reddit
Head of development at small agency. We have copilot and chat gpt licenses for everyone. Some people hate it, some people use it all the time, some people like using it but are also worried that too much reliance on it will make them rusty, so they use it cautiously.
We also have a research/innovation/training program where engineers can take a break from project work to do personal learning or build internal apps, and a lot of folks are choosing to learn more about building AI tools that can improve our own internal tooling, or something we could potentially productise.
We have a strict PR process so all code needs to be reviewed by 2 engineers, folks are free to code using AI but I take it pretty seriously if someone can’t explain a piece of code they vibe coded when submitting it for review.
Use AI to code if you want, but make sure you understand what you’re submitting.
Personally, I find AI the most useful for architecture discussion and planning out the best approach for the next feature. I can ask it for feedback on my plan and suggest alternatives with pros/cons.
It’s really good at setting up the boilerplate and scaffolding of an idea, but it still hallucinates stuff. Just yesterday we were trying to get some stuff working with AWS IoT and it got most stuff right, but the suggested config yaml file we needed to use was completely wrong and we had to cross reference it with the real docs.
And I think that’s a good workflow generally, use AI to get off a blank page, but it’s good to get into the habit of cross referencing the suggestions with the real documentation to correct any mistakes.
Other stuff we find it useful for, debugging really gnarly issues, writing tests and optimising queries/algorithms.
ImaginaryEconomist@reddit
The most productive approach I can think of, if the EMs and tech leads are too fixated on productivity gains, is to delegate all method/function-level code and unit-testing code to the assistant, then review and verify it as you would for a coworker.
Even this has drawbacks for codebase modifications and estimates: not having implemented or written the code firsthand means you'll have to go back and check the existing flow.
In general I feel vibe coding or excessive use of AI tools help more for teams who have to prototype a lot and are in early stages of product development, iteration etc. While for orgs with already paying customers code quality, robustness, matter far more.
Nonetheless, AI tools are wonderful for ideating and creating things, for everyone. How much of it you can use in your prod code and push confidently is up to you.
JimDabell@reddit
“Super-powered autocomplete” is the most obvious quick win. If I’ve got half a dozen cases in a switch, then I normally don’t even have to type out all of the first one before it’s written all the rest as well. Even if one of the cases needs further work, that’s still an easy win over writing all of them.
Beyond super-powered autocomplete, it’s also a very generic, very powerful refactor menu. Do you ever use your IDE to extract logic into its own method or anything like that? Well an LLM can rewrite code like that, except for far more complex transformations.
One of the biggest transformations it can do is to clean up crappy code. Have you ever inherited a large amount of spaghetti? Throw it at an LLM and tell it to clean it up and document it. Even just “WTF is this code doing?” gives a big boost to understanding a codebase.
Taking a first pass at bugs is great. It can spot stupid mistakes like off-by-one errors, incorrectly pasted field names, etc. in seconds. Sure, I can eventually find those problems myself, but an LLM can do it way faster.
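A made-up example of the kind of off-by-one an LLM tends to flag on a first pass (both functions are illustrative, not from the comment):

```python
def moving_sum_buggy(values, window):
    # Off-by-one: the last full window is skipped because the range
    # stops one start position too early.
    return [sum(values[i:i + window]) for i in range(len(values) - window)]

def moving_sum_fixed(values, window):
    # Correct bound: include the window that ends at the final element.
    return [sum(values[i:i + window]) for i in range(len(values) - window + 1)]

data = [1, 2, 3, 4, 5]
print(moving_sum_buggy(data, 2))  # [3, 5, 7] -- drops the final (4, 5) window
print(moving_sum_fixed(data, 2))  # [3, 5, 7, 9]
```

Both versions "work" on a quick glance, which is why this class of bug is tedious for a human and quick for a first-pass review tool.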
You mention vibe coding, but vibe coding is about having fun on throwaway weekend projects where you don’t care about the code at all. Exactly zero developers are vibe coding for work. If the code matters, then it’s not vibe coding by definition.
sfryder08@reddit
Yes, we were all faking it before they were officially approved but now that we have access to AI tools it’s hop on or get laid off. You aren’t impressing anyone by doing it yourself anymore. Use the tools you have available to you and make your life easier.
Haunting_Welder@reddit
AI is mostly good at shallow work, eg. Vibe coding high level designs, which is the opposite of big tech, so the smaller the company the more likely you’re going to use AI.
taotau@reddit
I have been using Copilot on and off for a couple of years. Recently I've found myself reaching for it less often, but that's very dependent on what sort of code I'm working on. I realised yesterday that I hadn't opened the Copilot chat in over two weeks.
Today I was working on setting up some low-level DB connection stuff in a project, something I'm not super enthusiastic about, so I pulled it up to debug an error. I shared the console output with it and it suggested some changes which didn't work, and when I shared that error it suggested changing back to my original code.
At one stage, its fix was to change one word in a comment to fix the error... That kinda made me realise this thing doesn't really understand much context.
It's fine at writing boilerplate code, but I find I waste a lot of time getting stuck in its misunderstanding loops quite often.
Also, VSCode's implementation of Copilot is annoying. It tries to suggest code in random places I never asked for, and if I accidentally hit tab, I have to waste time debugging WTF it tried to do.
ConstructionHot6883@reddit
Yes.
I'm working on a greenfield project that uses some languages I'm unfamiliar with. So it's quite easy to say to an LLM something like:
Using html and typescript make me a slider that the user can use to set an interval, anywhere between 5 minutes and 7 days. When the slider is moved you need to update the label to say how long the interval is, in human readable language.
async Rust, tcp. Listen on port 5678 and for each string received, spin off an asynchronous task that takes the string as an argument
I've got a program in Python and Pygame, show me how to put a screenshot in a shared memory. Also make an HTTP endpoint using Rust and axum that serves the screenshot
etc. etc.
So then the thing I need is up and running soon enough, and of course since I have ownership over this code I'm going to change it up as I develop it. Works great, if you know to take it with a pinch of salt.
I am just using the freebies that ChatGPT, or Claude provide. Oh and github's copilot or whatever it's called. I have not found that any of them can interact meaningfully with established codebases.
ljwall@reddit
Yes very much.
Smallish company, 100ish people, 25 software developers. Company has paid for Cursor and GH copilot. Quite a lot of other tools we're using are bringing out MCP servers. CTO has been pushing it hard as a productivity tool.
Honestly I don't like it, but have to admit that with careful use it really can work to speed up coding tasks.
I'm also using it to summarise information and reword it into pre-defined formats, e.g. for outage reports.
redMatrixhere@reddit
may i ask in which category of probs u use Cursor vs GH copilot
ljwall@reddit
Most of my colleagues seem to use Cursor in all cases; not much love for Copilot.
I'm using Copilot exclusively, but via the CodeCompanion plugin in neovim, so I guess that's really just using copilot as an LLM API.
I use that for everything, and so far so good.
kregopaulgue@reddit
I was very skeptical of it. Then my company provided us access to Copilot.
It wasn’t a big help until the edit mode update in VS Code. After that I was using it here and there, but still thought “it’s kind of helpful, but really useless in most cases”. And so I thought, until we hit a firewall problem and I couldn’t use Copilot.
I instantly realised that I handled so much mundane stuff with it, that without it I feel like I am really slowed down. And that’s how I realised AI is actually a good tool.
Still not using it for anything complex though, if I don’t understand the code, I don’t let AI handle it. In my experience it’s a recipe for disaster
Dexterus@reddit
We're trying; it's useless. It lacks even the most basic understanding of hardware, so even simple tests trip it up.
The first good thing it gave me ... I spent 2 days trying to find one LLM that could explain why the formula worked, beyond hallucinated words. I still have no proof that the formula is correct.
I also did a HW profiler implementation, and it just started going off the rails, adding shit I didn't need. I just manually removed stuff. It worked. Buuut, it adds so much overhead I just gave up and rewrote it myself - in this case extensibility, maintainability and clean code were bad.
Will keep trying.
fued@reddit
Yes constantly.
It is amazing for writing commits and documentation, and doing the job of a project manager.
It's pretty poor at doing code, so I leave it for boilerplate stuff at best.
Fadamaka@reddit
I am working at a really small company, which had a rough year so we shrank down to 8 employees including management.
For the past 7 months I was doing contract work, on behalf of my company, at a US Fortune 500 company. There we were only allowed to use Microsoft Copilot, and we strictly weren't allowed to generate any code with it. Previously I was at a bigger Global Fortune 500 company; there we were offered Copilot for GitHub Enterprise, and almost no one requested a license out of the 36 backend devs. Granted, the stack was Spring Boot microservices, and LLMs are pretty bad if you try to generate anything but JS/TS (I would guess Python too, but I haven't tested that).
Now my contract work has ended and I was handed a project that I need to solo, full stack. They specifically asked me to use Cursor to generate as much code as I can. So now I am developing (generating, rather) a full-stack project, React with Supabase, as a senior Java backend dev. I have been a web dev for a while, so I am mostly familiar with any code connected to this domain; although I have never written any React code, I can navigate the project easily. I have been working on this project for 5 working days and have managed to make significant progress. The project itself is a pretty generic webapp with trivial business logic. As a non-frontend person, I have the impression that Cursor agent mode can generate usable React code with minimal prompting. The end result is janky and has weird esoteric bugs - like nothing loading after a single focus loss, so the app needs to be reloaded in the browser - but it mostly works. I haven't needed to really look at React-specific code so far; everything just works, and if it doesn't, I tell Cursor to fix it and it delivers. On the backend side, though, Cursor is hit or miss. Sometimes it hallucinates endlessly, sometimes it one-shots. It is really inconsistent. The thing it one-shots one day, it fails to deliver even after 5 prompts the next day, and it is mind-bogglingly far from the correct solution.
I would say it is too early for me to draw a conclusion from this experience. I suspect there are a lot of hidden bugs that will be dreadful to fix. So far I have generated ~3k lines of code in 5 days, and the code works better than I would have expected.
I am pretty pessimistic about LLMs, especially code generation. I don't mind my current situation, because I get to try out a way of working that goes against almost everything I believe in, and get paid for it. It is like I have switched sides entirely.
otakudayo@reddit
Yes, a lot.
I don't "vibe code" except for small hobby projects.
I use LLMs for ideation, quickly generating POCs, figuring out how to do things I don't have much experience with, getting solution suggestions for domain specific problems, etc. I don't like to use them much in the editor - I use copilot for simple things, but mostly I use a chat or API.
I am far more productive than before. But I couldn't use these tools if I didn't already have decent skills/knowledge. The LLMs suggest a lot of stupid stuff so you need to carefully review the code they produce.
There's also a bit of an art/skill to actually making good prompts and in general using the LLMs efficiently. I have developed good intuition for that. I don't think this can be gained without just using the LLMs for a while. There are also differences between the different services; so there are different "rules" for, say, Claude and Gemini. I can't explain that very well. One concrete example is that Gemini conversations "die" (ie, they will no longer produce usable / useful output) way quicker than Claude conversations. I hardly use ChatGPT at all, I find it simply can't compete with Claude, and Gemini is now way better than it used to be.
birdparty44@reddit
I think AI represents an evolutionary change to workflows but it’s not a replacement for a team of engineers. It could result in small downsizes.
cant-find-user-name@reddit
Yes, to create internal dashboards and UIs that don't need the best code and just have to be functional.
idylist_@reddit
Large tech company. We are working on agents to do things like software operations, help with coding etc. We’re pretty big on using it any way we can
toblotron@reddit
I use ChatGPT to figure out specific technical solutions, which it is pretty darn good at (how to transform an XXBitmap into a YYBitmap, how to modify a 2D rectangle into a 3D-twisted one based on changes in tilts along different axes).
I wouldn't use it for deciding on architecture, though. Most of the things I talk to ChatGPT about end up in a separate function, not "running the show".
vvf@reddit
At a startup, yes. I’m using it like a souped up version of google/stackoverflow.
Any time I need to write boilerplate it saves me hours of googling and tweaking code from medium pages. Same with learning stuff about an unfamiliar framework or debugging unusual exceptions.
I don’t let it implement complex business logic or make arch decisions.
ForeverYonge@reddit
Yes. It sometimes gives useful answers for internal knowledge since it’s trained on docs and code. I also started using a LLM powered editor and for some simple changes it often guesses the next step well, saving me time.
What I don’t like is some of our engineers started vibe coding and presenting their untested and likely incorrect LLM PRs as an argument to support their preferred technical approach. “Show me the code” is no longer a useful way to evaluate the merit of something, unfortunately.
mullahshit@reddit
We write documentation in markdown in a repo which triggers pipelines, uploading the files to our RAG system. Makes internal ai chat pretty knowledgeable about our systems which is helpful for the TR-folks.
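A minimal sketch of the chunking step such a pipeline might run before upload. The names (`chunk_markdown`, `collect_chunks`) and the chunk size are invented for illustration, and the actual upload call to the RAG system is deliberately left out, since that API is specific to their setup:

```python
from pathlib import Path

def chunk_markdown(text: str, max_chars: int = 1200) -> list[str]:
    """Split a markdown doc on blank lines, packing paragraphs into
    chunks small enough for a RAG embedding step."""
    chunks, current = [], ""
    for para in text.split("\n\n"):
        if len(current) + len(para) > max_chars and current:
            chunks.append(current.strip())
            current = ""
        current += para + "\n\n"
    if current.strip():
        chunks.append(current.strip())
    return chunks

def collect_chunks(repo_root: str) -> list[tuple[str, str]]:
    # One (source_path, chunk) pair per chunk, ready to hand to
    # whatever uploader the CI pipeline calls next.
    pairs = []
    for md in Path(repo_root).rglob("*.md"):
        for chunk in chunk_markdown(md.read_text(encoding="utf-8")):
            pairs.append((str(md), chunk))
    return pairs
```

Keeping the docs in a repo means every merge re-runs this, so the internal chat's knowledge stays in sync with the markdown.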
szank@reddit
I use ChatGPT and Copilot (with a subscription) for some menial tasks: validating SQL queries, writing Python snippets (I don't know Python). Copilot is often good for filling in boilerplate.
Some people have a paid subscription for Cursor. That's encouraged by management; we are a small shop and productivity is better, IMO. I wouldn't touch a Python project if I didn't have AI assistance, and thanks to that I was able to spin up a proof-of-concept project quickly, as an example.
germansnowman@reddit
I use it as a better search engine, and as an assistant that I can ask to explain unknown concepts (e. g. a complex SQL query). I find it rather unreliable for generating actual code. It is very frustrating when it hallucinates things that don’t actually exist.
Intelligent_Water_79@reddit
Yes, Lovable + Cursor are very strong for UX/UI dev.
secondhandschnitzel@reddit
I have just heard about Lovable from your comment and it looks great. What’s your workflow with it in Cursor (which I use and adore), if you don’t mind sharing?
e_cubed99@reddit
If you’re using it as an aid it can be quite good. If you’re expecting it to do your job, not so much.
I find myself using it to generate test cases. I write the first one, tell it to make more in the style of, and it spits out a bunch. They all need some tweaking but the bones are there and usually good.
I’ll ask it to run a code review and about 3/4 the answers are nonsensical or not applicable. The last 1/4 are usually some form of improvement, but I don’t let it do the code changes. It screws them up every time. I use these as examples and ‘how-to’ but refactor the code myself.
Also useful in place of Google for simple stuff I just don’t remember - what’s the syntax for this command? Spit out a generic example of X pattern, show me a decorator function declaration, etc. Basically anything I only do once in a while and don’t have the need to memorize. Nice to get it in the IDE with a keyboard shortcut instead of adding another tab to the browser window.
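The test-scaffolding workflow described above (write one test, ask for more in the same style) might start from a seed like this; the `normalize_email` function and all the cases are hypothetical:

```python
def normalize_email(raw: str) -> str:
    # Hypothetical function under test.
    return raw.strip().lower()

# The first case is the hand-written seed; the rest are the kind of
# variations an LLM produces when asked for "more in the style of",
# which you then tweak or keep.
CASES = [
    ("  Alice@Example.COM ", "alice@example.com"),  # hand-written seed
    ("BOB@example.com", "bob@example.com"),
    ("\tcarol@Example.com\n", "carol@example.com"),
]

def test_normalize_email():
    for raw, expected in CASES:
        assert normalize_email(raw) == expected

test_normalize_email()
print("all cases passed")
```

The bones are there even when individual generated cases need fixing, which matches the "tweaking, but usually good" experience above.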
PredictableChaos@reddit
We use it in my company but I'm in a large software engineering group at a non-tech company in the Chicago area so not the same environment you're in.
I would say that the use of the tools is growing at a semi-steady pace in my company based on CoPilot usage numbers. Engineers are still figuring out how they are comfortable using it based on informal surveys/discussions. CoPilot in VSCode and the plugin for IntelliJ are how we use it most.
We are seeing different people use it for different purposes, though. Some use it to help write tests and many others also use it to help them when they're working on a task they don't do very often. In these cases they are having the agent write code actively. Some will use it to just ask questions or maybe just generate a specific function. Just depends on the engineer.
I don't think it's going anywhere, though. I've been using it on personal projects where I have a little more leeway and can experiment more and it's definitely a productivity gain for me. But it's still kind of like running with scissors. You can definitely get yourself in trouble if you don't already know what you're doing or what good looks like.
oceandocent@reddit
I use Cursor, previously used Copilot, most often I find it useful as a replacement for using a search engine or searching documentation as I don’t have to switch over to a browser from my editor.
Occasionally, I find it very useful for code generation for certain sorts of tasks that can be described easily in a series of small steps that can each be tested independently.
It’s also useful to get the ball rolling when I have “writer’s block” or am otherwise stuck; even if the answer it provides ends up being wrong, it gets me thinking of different approaches and ideas for solving a problem.
ivancea@reddit
Nothing to see here, people, it's yet another “experienced dev talking about how little he uses AI without even trying it”.
notger@reddit
I use it to summarise things which have low-density information in them.
So anything business/managerial most(!) of the time has way too much fluff for what it actually says, and summarising it works well. Legal stuff does not, and coding also does not work well enough for my taste. (ChatGPT cannot write a working program to connect to its own endpoint, funnily enough.)
I also use it to get ideas rolling and make sure I thought along all dimensions, e.g. "list me all the things I have to think about when I want to do this". It gets me there quicker; I usually tend to overlook aspects/dimensions otherwise, which then have to be pointed out later by others.
skyturnsred@reddit
The road-mapping you describe is something I stumbled upon recently, and it has been invaluable for the same reason. Love it.
devilslake99@reddit
I do use it to avoid writing boilerplate, or to write an initial test file with test cases. Usually it misses lots of things/test cases and creates unnecessary stuff, but it saves me at least half the keystrokes. Another great use case for LLMs is generic, common features that are cumbersome to write (e.g. a drag-and-drop file upload in React) but have been done by others lots of times already.
Current AI imo gets more and more useless the more domain specific complexity is involved.
AcrobaticAd198@reddit
We use company-provided Copilot and Rabbit AI to do PR reviews; recently we started using Devin, but for me that is more a pain in the butt than actual help.