What AI guidelines does your tech organization have in place?
Posted by FewWatercress4917@reddit | ExperiencedDevs | 26 comments
Both technical and non-technical people at our startup are in love with LLMs - Cursor, Devin, Lovable, etc. I agree that these tools help people do things faster, but I also can't help but notice a downside: even the most thoughtful senior engineers will, over time, trust the AI more and stop thinking about everything it is doing. If it works, has 95% test coverage, and the e2e Playwright tests pass - then it must be good! A few things I am worried about:
- Over time, the codebase will start feeling like it was written by 200 different people (we are a 15-person tech team). The standards for getting code in fall by the wayside as people just accept whatever Cursor/Devin produce.
- Stack Overflow and docs get a lot of deserved criticism, but people had ways to judge junk answers against answers from people who really knew what they were talking about, canonical sources, etc. That is being lost right now; engineers just accept what the AI tells them.
I think these tools bring benefits - but I am starting to be afraid of the downsides (i.e., making everyone dumber). How did you address this, and how do you use these tools in your organization?
DeterminedQuokka@reddit
Good senior engineers are not going to stop thinking about everything AI does. They are going to use it for speed and verify its output.
Code always feels like it was written by 200 people. Code by an AI is likely more consistent, just worse. The solution to that problem has not changed: it’s linters and style checkers.
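For example, here is a minimal sketch of a shared lint gate, assuming a TypeScript codebase using ESLint's flat config (the specific rules are placeholders, not a house style):

```ts
// eslint.config.js - an illustrative flat config, not a recommendation.
// Assumes @eslint/js and typescript-eslint are installed.
import js from "@eslint/js";
import tseslint from "typescript-eslint";

export default tseslint.config(
  js.configs.recommended,
  ...tseslint.configs.recommended,
  {
    rules: {
      // The same bar applies whether a human, Cursor, or Devin wrote it.
      "@typescript-eslint/no-unused-vars": "error",
      "@typescript-eslint/no-explicit-any": "error",
    },
  },
);
```

Run it as a required CI check and "the AI wrote it" stops being a way around the standard.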
You can judge the quality of AI. You do it by pushing back against the answers the AI gives you and verifying them. The point is not to outsource your brain; it’s to speed up the search-for-an-answer step.
thephotoman@reddit
The problem I’ve noticed is that AI isn’t faster. Most of the alleged speed increases have come not from AI, but from us overstating how long it takes to do things.
For example, I watched a coworker do an AI demo. And she boldly claimed that what she did saved a day of work. But it hadn’t: the work she did takes maybe, generously, an hour to 90 minutes. And what’s more, the AI fucked up enough that she would have spent an hour to 90 minutes cleaning up the mess from mistakes she wouldn’t have made if she’d done it right the first time.
AI allows us to produce more code, yes. But that’s just like directly copying and pasting from Stack Overflow was: not really an improvement.
Claims of AI improving productivity are deeply problematic because we can’t actually define and measure productivity.
Ok-Yogurt2360@reddit
Yeah, all those claims of productivity increases, and I'm like: wow, you've got a reliable measurement for productivity?
forgottenHedgehog@reddit
Measuring the productivity of people you don't directly work with is difficult; measuring your own relative to your own experience is not.
Ok-Yogurt2360@reddit
Even then, people tend to forget the whole process. Something can feel fast at first but cost you over time. The number of people who skip over the fundamentals of programming shows how easy this is. The only thing you can actually measure in your example is whether it feels faster.
nullpotato@reddit
I tested the new Copilot agent in VS Code this week. After two hours of watching it generate basic Python syntax errors and wreck unit tests, I reverted all its changes. So much time saved.
prescod@reddit
Skill issue.
TedW@reddit
I think it really depends on the problem and codebase, too. In my experience, AI struggles with problems that require more context or that involve multiple repos, but it can whip out 50-100 line functions.
We're still in the early stages and it will get better, of course, but for now I treat it as a tool, not a whole toolbox.
thephotoman@reddit
This was not a problem that required context. The prompt she used was “make a client to call this REST service based on this OpenAPI documentation.” This should be a task that is well suited to AI implementation.
The problem was that the result didn’t even compile properly. And she then couldn’t recognize the problems with the code.
This isn’t old: this happened literally two days ago.
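For scale, this kind of client is only a few dozen lines by hand. A minimal TypeScript sketch, with a hypothetical GET /users/{id} endpoint standing in for the real spec:

```ts
// A hand-written client sketch. The base URL, endpoint, and User
// shape are hypothetical stand-ins for the actual OpenAPI spec.
interface User {
  id: string;
  name: string;
}

class UserClient {
  constructor(private readonly baseUrl: string) {}

  async getUser(id: string): Promise<User> {
    const res = await fetch(`${this.baseUrl}/users/${encodeURIComponent(id)}`);
    if (!res.ok) {
      throw new Error(`GET /users/${id} failed with status ${res.status}`);
    }
    return (await res.json()) as User;
  }
}

// Usage:
// const client = new UserClient("https://api.example.com");
// const user = await client.getUser("42");
```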
TedW@reddit
I agree that sounds like a good place to use AI. In that example I blame the developer for not checking or understanding their work, not the tools they used.
thephotoman@reddit
She was doing a demo of AI.
If you cannot provide a rigorous quantification of productivity, you cannot honestly claim that it has been improved. You don’t have a means of measuring such an improvement. You can provide anecdote all day, but the plural of anecdote is anecdotes, not data.
And here’s the thing: software engineers have been trying to quantify their productivity for the last 50 years and ended up with nothing.
prescod@reddit
By your definition, we should all still be using assembly language and line editors, because nobody has definitively proven that IDEs, grep, and high-level languages are more productive.
I suspect that your request for rigour is specific to tools you don’t like. When you like the tool you just take your own anecdotal experience as definitive.
TedW@reddit
If you don't like using AI, that's fine, but it sounds like you're blaming AI for the developer's inability to use it well. I think that's the wrong thing to take away from this.
If someone struggles to drive a manual car, we usually don't blame the car.
AI is very capable of writing an API client. In this example the problem was the user, not the tool.
thephotoman@reddit
This isn’t about me “not liking AI”.
This is about a fundamental claim that AI vendors make that is unfalsifiable. Such statements should not be accepted as readily as you clearly are accepting them. You’re refusing to think critically about the claims that Sam Altman and Microsoft are making, and it’s leading you astray.
And again, AI failed to make writing the client faster. It generated code, yes, but it had so many compiler errors and obvious bugs that sorting it out took more time than actually doing it right by hand would have.
TedW@reddit
I know AI can do this type of task, because I can use it to do this type of task. It's a simple thing to demonstrate. You can do it for yourself. So was this the tool's fault, or the user's?
You can choose to believe whatever you like, about both me, and AI. I'm certainly not here to change your mind. Good luck with whatever you decide.
thephotoman@reddit
I’ve done it myself. I timed myself.
AI isn’t faster. It is rarely even correct. Its use takes the enjoyable task of writing code and eliminates it in favor of adding a lot more debugging tasks. You haven’t saved effort.
At this point, I no longer believe you’re acting in good faith. You want AI to work, and thus you’re ignoring the lack of evidence for its efficacy.
prescod@reddit
The direction this is going, AI will test and iterate on its code just as humans would. It’s incredible what these models can produce zero-shot, but it’s unreasonable for us to force them to work in that mode.
PragmaticBoredom@reddit
This is definitely a real phenomenon, although I think AI can actually help these people anyway.
Many of the people I’ve worked with who take an entire day to do simple tasks work that way because they have a hard time getting started. They may have perfectionist tendencies where they feel like they need to know the perfect solution before they write anything, and maybe need to talk it through with coworkers first.
AI makes it easy for these people to get started quickly and learn how to iterate on imperfect code. The need to discuss and pre-plan everything is circumvented because they can move that to an AI that they have zero qualms about rejecting.
It transforms the problem space from originating code to being a critic of code, which is easier for most people.
unskilledplay@reddit
This is the most insightful and accurate take on AI as a coding agent yet.
thephotoman@reddit
It’s also wrong, in my experience. People suck at code criticism. That’s why I see so much shitty new code.
johnpeters42@reddit
I remain skeptical that these people will actually apply useful criticism.
DeterminedQuokka@reddit
I mean, that doesn’t feel like proof AI doesn’t save time. It’s her using AI wrong. If you can do it faster without AI, don’t use AI.
I don’t use AI to generate a random number in my code. I use it to generate unit tests 2 through n once I have an initial test to base them on.
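A minimal sketch of that workflow, using a hypothetical slugify function and Vitest for illustration:

```ts
// Test 1 is written by hand and anchors the pattern; tests 2-n are
// generated to match it, then each one is read and verified.
// `slugify` is a hypothetical function used only for illustration.
import { describe, expect, test } from "vitest";
import { slugify } from "./slugify";

describe("slugify", () => {
  // Test 1: hand-written; sets the style and the assertions.
  test("lowercases and hyphenates spaces", () => {
    expect(slugify("Hello World")).toBe("hello-world");
  });

  // Tests 2-n: generated from the pattern above, then verified.
  test("strips punctuation", () => {
    expect(slugify("Hello, World!")).toBe("hello-world");
  });

  test("collapses repeated whitespace", () => {
    expect(slugify("Hello   World")).toBe("hello-world");
  });
});
```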
Using AI effectively is a large and complex conversation, and unfortunately it sounds like the demo you saw was not ideal.
thephotoman@reddit
No, she wasn’t “using AI wrong”.
AI is just not the productivity tool that its boosters want to think it is. And the reason is that it’s not clear what “productivity” even means. So AI boosters largely need to bullshit their way through the sales pitch.
If productivity were quantifiable, we’d have less handwringing over AI. There’d be less skepticism and less cause for skepticism, because there’d be actual data to support the claims that AI vendors are making.
UnnamedBoz@reddit
A senior iOS developer I am working with, 10 years of experience, is writing new code like an amateur. He never bothered to really learn SwiftUI and is writing code that is incredibly slow and bad.
Overall he isn’t a good coder; he just knows a bunch of stuff from being around a long time. And now he uses AI to put together crap, by his own admission, because AI is just so helpful for him. Essentially I have to babysit his PRs now.
There are also “AI-driven” projects being handed out, where management wants programmers to reinvent apps simply because AI can do everything, right?
The whole damn department and its organization are the problem, having compartmentalized so much that we don’t communicate much with designers or other people of interest. It’s a shit show and I really hate the self-delusional idiocy. Also, I have worked on improving many of these things, like automating the design handoff from Figma to code, but I might get side-tracked by these idiotic initiatives.
I am looking for a different programming job, somewhere people actually have some standards in what they do. Want to use AI? Fine, but at least understand the results well enough to tweak them as necessary.
tizz66@reddit
We have no real guidelines; we're still trying to figure out the best approaches. We do have pretty much carte blanche to use AI, though, with no real concern about letting it operate on entire codebases.
Personally, I share the same concern you raised about it making engineers dumber over the long term. I fear that once AI is writing the code and reviewing the code, engineers (as a whole) will get lazier and worse at solving problems. I'm trying hard to properly justify this mindset, to avoid it being just a Luddite reaction to something new.
There is no doubt in my mind that AI is making engineers more efficient; I just don't know whether the long-term tradeoffs are being considered enough (or maybe they don't even matter given the productivity boost).