You don't preface a pull request with "I wrote this by hand, but don't worry, I reviewed it!"
somebodddy@reddit
This makes no sense. If I wrote it by hand, I already did some implicit reviewing while writing it. Maybe it's not as good as formally reviewing my own code (which, in turn, is not as good as someone else reviewing it) but it's still worlds better than blindly trusting the overblown autocorrect and having zero idea about what you are actually submitting.
somebodddy@reddit
My company is full of these people. They are pushing LLMs and vibe-coding very hard, and in one of the internal presentations about it they officially said it's okay if the code is of lower quality, because the important thing is velocity.
JarateKing@reddit
I think the thing this article misses is that, frankly, I already see LLM users push worse code the more they embrace LLMs. I think part of that is correlation (i.e. vibecoders tend to be novices or non-programmers), but I think some of it is that LLMs encourage you to take your hands off the wheel a bit. If you're gonna take complete ownership and carefully go through each line to make sure it's written the exact way you want, there's not much point in writing code with an LLM in the first place.
We could say that's on them for using the tool badly, but when we see such a consistent trend we need to consider it might be a problem with the tool too.
vnordnet@reddit
I would say it might be a good thing to have high velocity but low control for the initial iterations, and then do a thorough pass over everything before submitting it for review.
tdammers@reddit
IME, the opposite is true.
The sane way of going about building a codebase is to start with the hard and essential parts - solve the core problems first, in a way that can act as a rock solid foundation for the rest. If your core functionality is sound, then you can throw all sorts of sloppily written frontends on it, you can half-ass all sorts of application code written on top of that foundation, and the damage that bad code can do will likely remain limited and easily contained.
Getting the foundation wrong, OTOH, is something you will never fully recover from - once you have a dozen things depending on your badly designed foundation, its bad API and many of the bad design decisions that went into it are effectively cemented into your system, and will hold you back indefinitely. Every bug that remains in the foundation long enough will become a feature, a part of the API that someone somewhere depends on, so even if you rewrite the entire foundation from scratch, you will likely have to effin' replicate those bugs, because fixing all the clients is just not realistic, even if you technically control them.
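The "every bug becomes a feature" effect (often called Hyrum's Law) can be sketched with a small hypothetical - every name in this snippet is invented for illustration:

```python
class FakeDB:
    """Stand-in for a real storage layer (purely illustrative)."""
    def fetch_all(self, table):
        # Storage order happens not to be id order.
        return [
            {"id": 3, "name": "carol"},
            {"id": 1, "name": "alice"},
            {"id": 2, "name": "bob"},
        ]

def get_users(db):
    # Documented contract: "returns all users". The sorting is an
    # implementation accident that nobody ever promised.
    return sorted(db.fetch_all("users"), key=lambda r: r["id"])

def newest_user(db):
    # A client that quietly depends on the accidental ordering:
    # "the last element has the highest id" is true today, by luck.
    return get_users(db)[-1]
```

"Fixing" get_users to return rows in storage order wouldn't make newest_user crash - it would make it silently return the wrong user, which is worse. The quirk has become part of the de facto API, so even a from-scratch rewrite has to replicate it.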
If you need high velocity during the early stages, then keep the foundation simple, but solid - cut corners on features and scope, but not quality. And if you have to write (or generate) sloppy code, do it in the periphery, where it's easy to replace, and where not a lot will depend on it.
vnordnet@reddit
I meant initial iterations of a diff against an already existing foundation, not initial iterations of the foundation itself. In the latter case I probably agree, though it really depends on who your customer is and what the goals are. If you need to establish viability, you can’t afford to invest heavily before you do so.
tdammers@reddit
Right, yeah, sure. If you already have the foundation, then it depends on what the requested change is.
Then again, if you already have the foundation, you're probably past the stage where establishing viability as fast as possible is as big of a concern.
edgmnt_net@reddit
I don't really see the point, unless you're just playing with some ideas - it might be easier to rewrite that mess than to review it. Or just write it well the first time around, at least the stuff you're confident about. I'm not very convinced it's a good thing to deliver a half-baked MVP; trying to attract clueless investors and customers who may be fooled by it is a dangerous game. I kinda get why they're doing it - tech debt is debt, and therefore leverage, after all - but this essentially amplifies both good and bad outcomes, and project failure rates are already pretty high.
vnordnet@reddit
I meant in the context of single diffs. To me genAI is just faster at typing than I am, so it gets me a useful draft faster than doing it manually. I’m going to review everything carefully either way.
tdammers@reddit
> If you're gonna take complete ownership and carefully go through each line to make sure it's written the exact way you want, there's not much point in writing code with an LLM in the first place.

That's a great way of capturing my experience with LLM coding so far. It's easy to make them spit out a bunch of "whatever" code, and sometimes it's even mostly correct, but the moment you try to get them to refine the code to be exactly what you want, you enter a spiral of badness - the more precise you are about what you want, the worse the code gets, to the point where they start hallucinating things that aren't even valid syntax.
Now, if I had any use for code that is of "whatever" quality, and only "somewhat correct", then this would be great news - but I really don't. Coding, to me, is precision work, and any part of it that isn't correct where it needs to be, or doesn't adhere to the coding standards that ensure a basic level of maintainability and certainty, is a liability that's going to backfire eventually.
I think the fallacy of using "lines of code" as a metric of the "amount of useful work" or "value" that a given codebase represents is at the heart of this. With an LLM, you can easily maximize your output in terms of lines of code per minute, but what you actually want to maximize is "amount of useful work", or "value", and LLMs are kind of gnarly when it comes to those.
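The lines-of-code fallacy can be made concrete with a small, purely hypothetical contrast (both function names are invented for illustration):

```python
# Two functionally identical ways to total an order. Measured in lines
# of code, the first looks like roughly five times the "output";
# measured in value delivered, they are exactly the same.

def total_verbose(items):
    total = 0
    for item in items:
        price = item["price"]
        qty = item["qty"]
        subtotal = price * qty
        total = total + subtotal
    return total

def total_concise(items):
    # Same behavior, one line of logic.
    return sum(item["price"] * item["qty"] for item in items)
```

A generator that maximizes the first style looks wildly productive by the LOC metric while adding nothing - and the extra lines are all surface area for bugs and review effort.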
edgmnt_net@reddit
Yeah, I also feel like the appetite for LLMs is at least partly due to already widespread odious practices. A lot of boring and simple stuff is blown up to epic proportions and programming is, to many, a matter of churning out huge amounts of mindless boilerplate and inconsequential code. Of course LLMs seem like a good alternative to them.
BinaryIgor@reddit
This 100%:
> People who cannot code will generate more bad code; also people who used to copy-paste stuff without thinking and understanding from Google have now much more powerful tools to do so. But people who cared, will still care as much, if not more.
DonaldStuck@reddit
We all write slop, but now we slap AI on the slop so it's AI slop
Silly-Sheepherder290@reddit
We already do
XLNBot@reddit
Everyone was already writing slop code