Vibe Coding Failures That Prove AI Is Nowhere Near Replacing Developers
Posted by ImpressiveContest283@reddit | programming | View on Reddit | 266 comments
PoL0@reddit
if you know how LLMs work there's no way you think they can replace engineers.
engineers solve problems, coding is just one of their tools. an LLM doesn't solve problems, it just selects the most probable word/symbol and goes with it. not bad for textbook exercises.
instead of vibe coding to get some duct tape code, study the API you want to use, so your toolbox will grow.
I don't doubt there are use cases where LLMs are a nice tool, but software engineering isn't one.
start_select@reddit
I’m a 20 year veteran “Jedi coder”. I get it, but it really is useful for actual experts.
It takes months to understand how to get something other than spaghetti. But once you get it, for someone that ACTUALLY gets engineering and can describe the problem, it’s useful.
I still need to immediately fix it. But holy crap. It writes 80% of what I would have.
Watching a kid with 1-5 years on the job experience is a horror show. They haven’t seen enough real world meltdown problems to recognize them or to tell the LLM to avoid it.
Yes they are super dangerous. But if you are the person in the office that everyone asks to help them, you are capable of describing how processes work and can use an LLM to move very quickly.
PoL0@reddit
as a fellow 20-year veteran I don't buy it. what domain do you work in that requires you to write 80% boilerplate?
clunkyarcher@reddit
Never got an answer, what a surprise. I have yet to see one person come back with the receipts when they claim they get good results out of using an LLM for software development.
Demonchaser27@reddit
I do kind of see this as a losing battle though. How are we going to ever get high level engineers in the future if we're replacing the "bad" ones who NEED to fail to learn... but aren't allowed to even try anymore b/c an LLM does it for them?
gwax@reddit
It's also ridiculously good at surveying large codebases and providing answers to questions like, "do we have a standard way of approaching problem X in this codebase?"
Or re-applying something in a slightly different way, "I've just done X to the code in location Y, can you develop a plan to do X to location Z, describe the plan, adjust the plan based on my feedback, and wait for my acceptance of the plan before beginning implementation?"
surger1@reddit
People really don't want to hear this stuff and it's so frustrating.
"Vibe coders" are today's "script kiddies". Not actually a problem but easy to point at and blame. "Those guys just copy and paste from stack overflow!"
Yet you are identifying what we both know is the real issue. The tools make already capable developers way more capable. The LLMs aren't perfect, but you can get them to produce much more code at roughly the same quality as a junior.
No senior engineer before this was just going to Stack Overflow and copying stuff in verbatim without reading and understanding it. And the same is true with LLMs.
Yet the tool itself is so powerful that the amount of code that can be generated increases significantly. Large boilerplate sections and individual hang-ups are both easily solved now. It's like having a polite Stack Overflow at your fingertips.
That obviously is not "great" for everything, but "vibe coding" is REALLY not an issue in comparison to just raw productivity.
The worst part is that A.I. productivity provides no upstream or downstream gains. It doesn't need many more people to make it, and it doesn't create a lot of jobs to operate it. So unlike computers and the internet, it massively ups productivity without making more jobs on the input or output sides.
GregBahm@reddit
LLMs are only good at solving problems that have already been solved before. So they are no threat to the creative work of engineering. By the nature of an LLM's construction, creative problem solving will always be what it's worst at.
But an overwhelming amount of engineering work is not creative. In my 20 year career, I've met a lot of engineers who pride themselves on not being creative. These engineers have always filled out the bottom 50% of the programming discipline in my mind, endlessly solving problems that have already been solved before.
These people will definitely be replaced by AI. If some PM needs code to fulfill the exact same requirements that have been fulfilled a million times before, the PM will be able to just generate that code.
It will be a much brighter future for all the creative engineers, but a much darker future for all the uncreative engineers.
I assume this is why r/Programming has this infinite appetite for anything that seems to slam AI. Even when it's an article like this that is blatantly just selling an AI product.
MornwindShoma@reddit
It works if the PM is technical enough to own the issue and the solution to it; else it becomes a liability that is passed down. Bottom feeders might not be incredible performers but they own their code nonetheless.
GregBahm@reddit
I'm not sure if it's a more usual or less usual scenario, but most of my career has been spent on enterprise software projects where the creator of the tech-debt has long since moved off of the project.
When I was managing a team extending features on "Microsoft Power Platform," I could check the git history to try and find out who to blame for any given problem. But there were so many check-ins after so many years, that any given coder could just blame some other (no-longer-present) coders. The platform was way too complicated for me to prove them wrong. So we just had to grind through all the tech debt ourselves. It was pretty miserable.
I'd love to see a world where that shit is delegated to the AI.
I've seen over the past two years LLMs get quite good at implementing little encapsulated bits of code, but they're still pretty bad at doing sweeping systemic refactors. This makes sense to me. I expect they'll get good at sweeping systemic code base refactors soon enough, and then we won't have to care about "the code owner moving off the project" or "the project being too complicated for any one engineer to understand." The PMs will wrestle with the application requirements all day, and the engineers will wrestle with checking the AI's work all day, and the AI will bubble and churn on code on and on until it gets all the requirements right.
I think it will be pretty cool if some PM drops a new requirement in the system, and it would normally necessitate a full architectural overhaul of the code base, and so the AI just... does that.
MornwindShoma@reddit
What's unfortunate is that AI seems to be very fast at producing what's basically legacy code, which is then churned out and replaced quickly if there are no clear requirements and documentation. They simply can't fit it all into their context window, and the larger the context, the worse the results are. It's even worse than coders leaving the project, because there was no coder in the first place.
GregBahm@reddit
I'm the lowest level of middle manager that there is at my corporation, so I don't get to hang out with the big top architects of new languages. But in my brief interactions with the head-honchos of code, there seems to be two camps emerging.
There is the old guard, which treats AI with the sort of fake, forced loathing reserved only for extremely important people who are masking their raging insecurity. I can tell they wish LLMs had never been invented, even though they're trying to maneuver themselves into a position where they can take credit for the invention. These guys seem desperate to make LLMs just another feature within the existing framework. No different than a spelling checker or the webcam filters in a video call.
Then there are all the new guys coming up from below, who are triggering the old guard every day in every way. They want high-level-code to become the new machine code, and for AI prompts to become the new high-level-code. Today we commit our code to git, but don't commit our built binaries to git. We also critically don't commit our LLM prompts to git.
The AI architects are triggering the old guard by suggesting we commit our LLM prompts to git, but don't commit our code to git. Obviously it's an enraging idea.
But the rage feels a lot like the rage I saw when the Cloud architects suggested everyone store their data on someone else's computer. Or when the smart phone architects suggest we develop for mobile first and let the desktop application come second (or just have them use the web app!!!) I was a young guy in the 90s, but I suspect the rise of the internet itself followed a similar pattern.
dwitman@reddit
They want to commit just the prompts? So the ai is forced to write the program anew at huge processing costs on every build where it writes the whole code base again from a series of prompts?
Why not just light the Kuwaiti oil fields back on fire?
Build code not prompts.
GregBahm@reddit
The processing cost thing is inaccurate. The power cost of an LLM text prompt is about the same as the power cost to light a screen while a human reads the text.
Training costs a bunch, but that cost is the same whether people use the model a little or a lot.
I think people have this idea that AI is a big power hog because of confusion about cryptomining, and from taking statements from AI companies about their power use and extrapolating them incorrectly. It's like saying the first copy of "World of Warcraft" took $200,000,000 in dev costs, so that's what every copy of the game must cost.
MornwindShoma@reddit
Except that at scale it's actually incredibly expensive. Generating hundreds of thousands of lines of code as often as you build nightlies is burning money like there's no tomorrow, and the AI companies are even selling you tokens at a loss.
GregBahm@reddit
I could generate the library of congress with an LLM and it wouldn't consume as much power as my son playing his Xbox all afternoon.
Which isn't to say a day playing an Xbox is some big power sink. The heater is going to use a lot more power than that. And the heater isn't a particularly big power sink either. You're just orders of magnitude away from approaching the point where you begin to have a problem.
As far as the reproducible builds thing, the expectation is that there would be deterministic unit tests that the build has to pass. If it passes all the unit tests, but still has some problem, the solution is to write another unit test to account for that problem. Then if it satisfies every unit test, but is a different build under the hood each time, eh. Any sufficiently complicated application is going to have a non-deterministic build, due to compiler/SDK versions, library versions, operating system differences, timestamps, random seeds, file orders, etc.
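To make that concrete, here's a minimal sketch of the kind of deterministic gate I mean, in pytest form (the "billing" module and its functions are made-up names, purely for illustration):

```python
# Hypothetical gate an AI-regenerated build has to pass before it ships.
# "billing", parse_invoice, and export_report are made-up names for illustration.
from billing import export_report, parse_invoice


def test_invoice_total_is_exact():
    # Same fixture in, same total out, however the code underneath was regenerated.
    invoice = parse_invoice("fixtures/acme_2024_03.json")
    assert invoice.total_cents == 129_950


def test_report_output_is_byte_stable():
    # Pin the observable artifact, not the implementation details beneath it.
    with open("fixtures/expected_report.csv", "rb") as expected:
        assert export_report(seed=42) == expected.read()
```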
MornwindShoma@reddit
No, it's definitely expensive. It costs quite a bit just to do some easy stuff; multiply that by 1000x and you're spending thousands of dollars in tokens every day.
You're bringing in each user's environment as a way to argue that builds aren't reproducible, but this is on another scale entirely. You're getting bugs that might magically not even be there the next day, and writing tests to account for bugs you can't reproduce. Wasting even more tokens.
Any complex software with the dependency issues you mention is so far outside the range of what's currently possible with AI, and bigger than any sort of project in the range of thousands of lines of code, that thousands of dollars is probably an underestimate.
Smaller projects usually mean some sort of web software, running in containers or through runtimes that are perfectly or almost perfectly reproducible. Meaning you're losing something we've already achieved here.
Just so I understand: what sort of job do you do where software not going rogue from day to day isn't considered a net positive? You seem entirely detached from the standards of what is good and proper CI and CD. Are you one of the people talking about committing prompts?
GregBahm@reddit
You seem really committed to this idea that tokens are expensive. But you can download DeepSeek in the next couple of minutes and go nuts generating whatever you want all day long. Your computer isn't going to get as hot as if you were just playing "Cyberpunk" or whatever. The cost on your monthly electricity bill is not going to be identifiable. This is just a trivially observable fact.
The rest of your post just seems like an unfocused scattershot argument. It's also kind of funny that "thousands of dollars" is some big cost in your mind. The lowest salary of an engineer on my team is $163,000, but with additional costs the price per head is about $300,000 a year. So if my team of 8 people (plus myself) is costing the corporation $2.7 million a year, and an LLM makes us just 1% faster, it's profitable at a cost of $27,000.
But the LLM already wildly exceeds that benefit. The junior engineers used to be lined up at my desk every day, asking me how to solve all kinds of problems. Now they all just ask the AI (which there's never a line for) and get past all the problems they're stuck on, and I just have to review the code. It's junior code so code reviews are still a significant amount of work, but if we close the loop in a couple of years and my team is purely focused on the important and interesting creative problem solving work while the AI handles all the grunt work of implementation, hurrah! My corporation is already paying millions of dollars for a less productive result than that, so I'm excited for the future.
MornwindShoma@reddit
You really love to waste money over there for nothing. It can't be helped I guess.
dwitman@reddit
This is key: The value they add can’t be accurately charged for.
If they charged 20% over the cost to query on every query almost no one on earth would engage with the things for the level of value they add, except in very specific cases.
And I don't really see that gap closing in the time it needs to, if ever in some cases… and every hack they add to make them "better" compounds the processing requirements… and any processing overhead they gain they are going to throw back at the model in some fashion… to create a technology that at the end of the day does not understand the concept of truth… and probably can't.
I'll be interested to see what happens when the bubble here pops… the tooling will be around forever… but how accessible? It does add serious value, but as far as I can tell no one can make a per-query profit yet… what happens when you need to do that?
MornwindShoma@reddit
The fact that these people really don't get that LLMs aren't deterministic makes their claim so incredibly dumb.
chucker23n@reddit
No, you wouldn't.
The "Power Platform" is "the AI". Using an LLM to write code is just the newest term for what used to be called low-code, no-code, RAD, etc.
(I didn't even believe they'd use that exact language, but, yep, Power Apps is sold as "Learn how to quickly build low-code apps that modernize processes and solve tough business challenges in your organization using Power Apps.")
/u/MornwindShoma is exactly right. All you're doing is shifting the blame to a different person. The "AI" is not a human. They cannot take responsibility. They cannot answer questions. They cannot, walking through the hallway, go, "oh yeah, I did that back then because X; ask Tanya about it!". The LLM forgets what it did as soon as you close the browser tab, and before you say "ah, but they now have memory!", that doesn't change that they don't actually understand any of what they're doing. You cannot truly interrogate them because there's zero intent.
That's cute.
Yes, you will.
If you don't know whom to ask "hey, who's responsible for this?", the answer is you. You're now responsible.
GregBahm@reddit
You seem to have confused yourself. Yes, the Power Platform allows our customers to create their own low-code apps. This has been an extremely lucrative thing for Microsoft, being described to me in 2019 as "not a money printer, but rather a printer that prints money printers."
But honey, programmers still have to actually program this platform. The Power Platform is itself not somehow made out of power apps or some shit. A small army of nerds like me worked for years to add all the capabilities of the platform. It's probably the most boring work I ever did.
Man, I don't know what model you're using, but my AI asks me questions all the damn day long. Maybe the problem here is you've never actually used an LLM and are just going off of fake articles, like the one above, for your worldview?
MornwindShoma@reddit
Careful about that!
We've seen in LLM research that LLMs will not look back and explain their reasoning. You might think that they know what they did, but that's actually not the case. Getting them to reason about the thread isn't a given. You also only ever get a generated answer about what it did based on the code in the given context, but it doesn't keep an actual memory of it; the context is gone after a session. You can turn the reasoning into documentation on the spot, but natural language is ambiguous and any reasoning done with that as a context is non-deterministic.
It is completely different from asking a person.
GregBahm@reddit
I was nodding along to your second paragraph, agreeing with everything you said until your last line.
How is this not exactly like asking a person? Surely you don't think my responses to questions are deterministic?
MornwindShoma@reddit
Because people don't have "contexts" or stuff like that. People know people, know about meetings and decisions, and have experienced the consequences of choices. People are people, AIs are AIs. People can tell you to fuck off, AI won't.
GregBahm@reddit
Nothing prevents an AI from having knowledge of people, meetings, or decisions. Nothing even prevents the AI from telling you to fuck off, aside from a prompt. I get the impression you haven't thought this through.
MornwindShoma@reddit
You seem to have humanized the tool, I suggest you reconsider for your own good.
GregBahm@reddit
Am I humanizing a mirror if I point to myself in the mirror and say "hey it looks just like me?"
LLMs are just a whole bunch of human responses, saved and reloaded. It's not some thrilling insight to watch a video of a human and say "That's not really a human that's just a video."
MornwindShoma@reddit
But they're not. They're not human responses saved. It's an algorithm spitting out words that has been tuned to always be in service, meaning sometimes it will just start spitting bullshit, because there's a big chance you'll take it for truth, not knowing it is bullshit.
You say it's a mirror, but a mirror that randomizes the light coming out of it is a shitty mirror. And a mirror isn't enough to let you look into the past, or into your mind, nor will it show you what's behind it or in another room; aka, it will only be as good as your biases and whatever input goes in.
People don't work on inputs, bro. They don't simply "reflect what you're telling them", and if you think so I suggest you do some introspection or seek help. You don't want AI to tell you to fuck off on command, you want someone to tell you to fuck off when you need to fuck off, regardless.
GregBahm@reddit
LLMs are definitely human responses saved. The "language" in "large language model" doesn't just magically appear from the void. This is such a weird argument.
The impression I get here, is that you're starting from the position that you disagree, and are struggling to come up with a reason why. "LLMs are totally different than people because [checks notes] you can't look into their minds!" Okay buddy. I'm content to just leave this where it is.
MornwindShoma@reddit
I disagree because that's fundamentally wrong.
chucker23n@reddit
My point is that you think using an LLM to build an entire piece of software is a substitute for "the programmers who actually program the platform". It's not. It's a substitute for Power Platform.
You still need programmers. You may think you need fewer of them, for smaller edge cases, but I don't want to be the guy who cleans up after the garbage you're having LLMs emit.
I feel like you're purposefully misunderstanding my point.
I have. I also understand how they work, and the inherent limitations in that.
GregBahm@reddit
I agree there will be an awkward transition period. Probably there is no path from where we are today, to where we will be, without forcing a lot of poor programmers cleaning up a lot of LLM emitted garbage.
PoL0@reddit
that is still something that needs to be proven. parroting it over and over won't magically turn it into truth
chucker23n@reddit
It's inherent to how they work.
PoL0@reddit
wait what? LLMs work with text, there's no concept of "a problem" there.
GregBahm@reddit
Model collapse is a well proven thing.
You can maybe say "LLMs are good at solving problems that have already been solved before" in the context of being paired with a human. Because I can delegate all the tedious boring shit to the LLM, it frees me up to focus on the sophisticated creative stuff. But this is like saying a maid and a live-in chef improve creative problem solving. It's true but in such a tenuous sense.
Other forms of AI are good for solving problems that have already been solved, though. If the AI can compare its results to some objective measurement in reality (like a bipedal robot learning to walk across the room) then there's no risk of model collapse and the AI can be expected to invent novel solutions. The history of AI is full of this.
But those aren't LLMs. "Large Language Models" specifically search through information humans have created with our human minds. The LLM then regurgitates that information back to us. LLMs show a remarkable aptitude for extrapolation and abstraction and can even reason with themselves for better results. But any time you tell it to solve problems in domains where the solution isn't somewhere in its training data already, the results are just garbage.
Eventually we'll probably start allowing AI much more access to reality, giving the machine the artificial equivalent of eyes and hands. Then with classic neural network techniques, we can expect them to creatively problem-solve like humans. But at that point we're off the LLM path and back in the world of classic AI techniques.
PoL0@reddit
god, what a drag
nimbus57@reddit
I don't think you understand how LLMs are trained. THEY DO NOT JUST REGURGITATE INFORMATION BACK. Sorry to put that in caps, but if people understand one thing, it needs to be that.
The easiest way to explain LLMs to people is to compare them to something like a simple text generator or a simple "regurgitation machine". They are so much more complex and capable than that, though.
That being said, they are tools, not people, and they shouldn't replace people.
GregBahm@reddit
If you feel so passionately about this point that you feel the need to shout it, maybe instead consider defending your point instead of leaving it as a naked assertion.
LLMs might not regurgitate information back directly. If you tell it "This dog is yellow" and "that ball is blue" it can give you "that ball is yellow" without ever seeing that exact set of tokens, so arranged.
But you cannot train an LLM on all the library books in a library and then expect it to be able to do something beyond token prediction. LLMs achieve dazzling things with token prediction; training an LLM in Chinese improves its answers in English, for example.
But all the training in the world isn't going to make an LLM achieve dick in a domain outside of token prediction, like math or science. They do just regurgitate information back. An LLM is basically a bunch of human thoughts, ground up into a fine paste. An infinite supply of perfect human mediocrity. But trying to get exceptionality out of it is like trying to walk to Hawaii by following the coast of California. Just a terrible misunderstanding of things.
nimbus57@reddit
I'm not sure what else to say, except that the language part of an LLM does not have to be a spoken human language. Or a computer language. IT CAN BE ANYTHING TO ANYTHING. As long as you have the training data for it.
Again, let's be clear, LLMs have already shown their value. They have already proven how useful they are. Just because people want to feel special by asking some insane question and then pointing out that the ambiguity in it causes the answer to be "wrong" doesn't mean they don't work.
GregBahm@reddit
I think you maybe don't understand the concept of creative problem solving.
Certainly, you can train LLMs on images or on sounds or videos or whatnot. And then they will regurgitate back that kind of information too. But the output remains only as good as the training data. The LLM approach does not allow for the information coming out of it to be better than the training data that goes into it.
If it did, that would be really cool. Our models could make themselves better and better and we'd have achieved all our technological singularity dreams.
But that doesn't work. Like I said, it just results in model collapse.
So LLMs are just an infinite supply of human mediocrity.
An infinite supply of human mediocrity is still quite an amazing tool. I'm getting a lot out of my infinite supply of human mediocrity. I can wield this tool to great effect, just like I could utilize a big team of regular, uncreative humans to great effect.
It's not some "attack" on AI to understand this rationally. LLMs will still probably revolutionize the world. They just won't revolutionize it in the way certain, unscrupulous AI hype-men like you are claiming it will revolutionize the world.
nimbus57@reddit
You're right. I was being more aggressive than I should have been. It is my natural reaction when defending all of the ai tools, since there is so much misunderstanding with them.
I still think LLM's can do more than you are saying, but it may just be a semantic argument at that point.
I believe I heard this on a podcast with Ray Kurzweil; essentially, everyone with a smartphone and access to these ai tools is essentially as smart as everyone else with the same capability. These tools give people the ability to essentially answer any question that can come to mind. Now, I know there is of course a limit to what we know, but the point here is that the floor of human understanding has the ability to be raised so high that it is hard to even comprehend what comes next.
MornwindShoma@reddit
I wouldn't say so. You don't know what you don't know, and AI can't help you with that. You're way off from being an astrophysicist just by prompting AI.
chucker23n@reddit
No, that is pretty much how LLMs work. It may not feel that way because they're surprisingly good at the façade, but at the end of the day, all they do is predict the most probable next token.
That's all. They can't truly extrapolate beyond that. Nor can they even reason about the data they already have, which is why they fail hilariously at, say, basic arithmetic questions. They don't understand their corpus, but they're very good at emitting portions of it that happen to be the ones you're asking for.
OrchidLeader@reddit
I refer to them as adaptive devs and prescriptive devs.
Adaptive devs can be given an objective, and they’ll draw their own map to get there and overcome any challenges on their own. That includes doing self-directed learning for whatever technology they need for their current project.
Prescriptive devs need to be given a detailed map with a well defined path, and they’ll get stuck with the slightest challenge. They need external training for anything they haven’t seen before.
And like you said, there is a lot of prescriptive dev work to do, so they can provide value on certain projects (for now).
The problem is a lot of managers seem to think that all dev work is prescriptive dev work. They think they can throw bodies at any project and make it go faster. They think that any offshore resource can replace any existing developer but for a quarter of the price. They think they can interview new devs by asking a fixed set of trivia questions. They think a two-hour knowledge transfer is enough to make a dev immediately effective on a new team. And yet they also think that hiring a new dev who doesn’t already have some specific technology on their resume doesn’t make sense because it would take them months to get up to speed (which is true for a prescriptive dev but not for an adaptive dev… something they don’t even realize exists).
So in their heads, product owners are perfectly able to describe their requirements, and devs are simply the cogs that make it happen. So it’s no wonder they think AI is super close to replacing developers.
The reality is a little different. I’m currently a Solutions Engineer, and the video “The Expert (Short Comedy Sketch)” is a god damn documentary.
start_select@reddit
This is what I thought until I actually got it.
Vibe coding is about describing the high-level problem and getting a canned solution.
Software engineering with LLMs is about describing the PROGRAMMING architecture and processes. Not the high level problems, literally how data flows or is transformed and what constraints to apply.
It will still be 20% wrong but can easily spit out the structure you might have handed a junior to fill in.
You need to be hella good at your job without it for it to be truly powerful. Most kids today have terrible writing skills so they aren’t as good as they think they are.
Turns out they did need to learn to read books and write essays, something more than text message shorthand that isn’t descriptive.
Ok-Scheme-913@reddit
I'm sorry, I don't think LLMs will replace engineers, but your reasoning is utter bullshit.
There are emergent behaviors, and LLMs are capable of some kind of learned reasoning, though on a far smaller level than the hype.
Like, CPUs just move 1s and 0s around, yet no one would argue they can't do some insanely complex stuff, not readily apparent from just a bunch of transistors. You can't really claim that "just predicting the next token" is not sufficient, in principle, for AGI - of course the real deal is far more limited, but this is incorrect reasoning.
PoL0@reddit
is this learned reasoning in the room with us right now?
Ok-Scheme-913@reddit
Surely not in yours.. at the end of the day, you are just a bunch of chemical reactions, how could that reason?
sluggerrr@reddit
I'm a senior engineer and had been stagnant for a couple of years just maintaining a project. Recently it ended and I decided to dive into learning new things, and let me tell you, this AI craze isn't BS. Claude Code is game changing. Of course I'm not talking about replacing people, but it can increase productivity a lot; sometimes it will "think" of edge cases you wouldn't see at first glance. Look up the BMAD method for example.
Also it's fucking annoying how the agent sometimes replies with "you're right, the problem is x y z" when it's not even certain that's the actual cause, or says that x works now when the issue hasn't actually been fixed yet.
If anyone hasn't tried something like Claude Code you're missing out. I bet it's also great at legacy code, but I haven't tried it yet in that instance.
PoL0@reddit
anecdotal at best.
d357r0y3r@reddit
I agree with your statement that LLMs can't replace engineers, but to say they aren't a nice tool for software engineering is off base. Even if we accept that we will never have fully autonomous SWE agents, LLM-based agents like Claude Code are super powerful for refactoring and scaffolding new code. If you look at how good coders use these tools, there's no doubt that they are powerful. I'm not talking about vibe coding; I'm talking about AI augmented dev workflows with smart guardrails.
PoL0@reddit
they aren't in my domain. and for them to be considered a nice tool they have to prove themselves as a nice tool.
pretty tired that I'm always the one who has to prove otherwise. I'm not critical out of spite. these tools are a joke the moment the software you're writing requires reliability, performance, clarity, conciseness...
d357r0y3r@reddit
They are applicable to your domain if you are writing any code at all, in the same way that IDEs, test frameworks, and type systems are applicable.
The proof that they're useful is that the best software engineers in the world are using these tools every single day.
PoL0@reddit
another AI fact pulled out of thin air... come fucking on...
nimbus57@reddit
Yes :). I'm guessing 99% of the good uses of all of these ai tools are never seen by anyone but the immediate user.
Old-Adhesiveness-156@reddit
I tried "vibe electrical engineering" and it didn't end well.
drcforbin@reddit
I bet it was shocking!
PoL0@reddit
vibe bridge construction will be a thing... according to AI-bros.
Jolva@reddit
By suggesting that a large language model just predicts the next word it's pretty obvious you don't understand how the technology works. By suggesting they're not a useful tool for software development, it sounds like you've never used one.
EveryQuantityEver@reddit
That's literally what they do.
lostcolony2@reddit
Is "it just predicts the next token" better for you?
Jolva@reddit
Transformers allow the model to predict the next word using the entire context of the question. It's not just predicting the next word or token, it's understanding the relationship between all of the words. It's fine if people want to dismiss the capabilities of these systems, but if they work in software development they're going to be the first people to lose their jobs.
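Here's a toy illustration of what "the entire context" means mechanically (random stand-in vectors, nowhere near a real model's scale):

```python
import numpy as np

def self_attention(x):
    # Every token scores its relevance against every other token in the context,
    scores = x @ x.T / np.sqrt(x.shape[-1])
    # the scores are softmaxed into attention weights,
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # and each token's new representation is a weighted mix of the whole context.
    return weights @ x

rng = np.random.default_rng(0)
tokens = rng.normal(size=(4, 8))      # 4 tokens, 8-dimensional stand-in embeddings
print(self_attention(tokens).shape)   # (4, 8): context-aware representations
```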
PoL0@reddit
I'll patiently wait for an LLM to replace me as a software engineer. zero concerns on my side.
you can keep breathing that hopium. you probably think every software project in the world can be compared to a toy website full of holes.
and you can use all the fancy lingo you want, transformers just spit one word at a time with zero "intelligence" and zero "expertise".
McGill_official@reddit
Yea but decoder only models literally only predict the next word
Jolva@reddit
By weighting and considering every token up until that point in the context. I'd argue that's a lot more than a "next word generator" like the person I replied to suggested. I'd also argue it's an extremely valuable technology for software development, but Reddit loves hating on AI.
Old-Adhesiveness-156@reddit
It's just a queryable knowledge database using human language as the query. The only difference is its output is fuzzy since it's only correct some of the time.
Jolva@reddit
That's a bit reductive don't you think?
I can't say to a database, “Make me a Python script that exports my photos to Dropbox every Friday and emails me a summary."
An LLM could likely do that on the first try.
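For what it's worth, the script it would hand back looks roughly like this (the token, addresses, and SMTP host are placeholders, and the "every Friday" part would be a cron entry like 0 9 * * FRI):

```python
import pathlib
import smtplib
from email.message import EmailMessage

import dropbox  # pip install dropbox

PHOTO_DIR = pathlib.Path.home() / "Pictures"
dbx = dropbox.Dropbox("YOUR_ACCESS_TOKEN")  # placeholder token

# Upload every photo, overwriting any previous copy with the same name.
uploaded = []
for photo in sorted(PHOTO_DIR.glob("*.jpg")):
    dbx.files_upload(photo.read_bytes(), f"/photos/{photo.name}",
                     mode=dropbox.files.WriteMode("overwrite"))
    uploaded.append(photo.name)

# Email a one-line-per-file summary.
msg = EmailMessage()
msg["Subject"] = f"Photo backup: {len(uploaded)} files uploaded"
msg["From"] = "me@example.com"
msg["To"] = "me@example.com"
msg.set_content("\n".join(uploaded) or "Nothing new this week.")

with smtplib.SMTP_SSL("smtp.example.com") as server:
    server.login("me@example.com", "APP_PASSWORD")  # placeholder credentials
    server.send_message(msg)
```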
Old-Adhesiveness-156@reddit
It's not reductive at all. The LLM just stores information in its weighted neurons instead of in rigid text-based articles/guides/textbooks. The LLM doesn't have the ability to think and solve your problem. It is spitting out stuff that it was trained on, the same way a traditional database returns data when queried with SQL.
McGill_official@reddit
Again all it’s doing is predicting the most likely word (softmax) in the sentence SXY where S is the system prompt X is the user prompt and Y is the current generation.
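In toy form (made-up five-word vocabulary, made-up scores), that single softmax step is just:

```python
import numpy as np

vocab = ["the", "cat", "sat", "on", "mat"]
logits = np.array([1.2, 0.3, 2.5, 2.1, -0.4])  # stand-in model scores given S+X+Y so far

probs = np.exp(logits - logits.max())
probs /= probs.sum()                  # softmax over the whole vocabulary

print(vocab[int(np.argmax(probs))])   # "sat" -- append it to Y and repeat
```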
Veranova@reddit
I prefer some variation of “the question of whether LLMs understand is about as interesting as whether a submarine can swim”
They are just predicting the next token, and arguing against that is entering a pretty boring debate in which you will objectively lose, but LLMs do take a huge amount of information and training into account to get there and objectively do encode understanding of the world which your phone’s autocomplete algorithm never did
lostcolony2@reddit
'Encode understanding' absolutely requires more, er, specificity than you give it.
Includes a lot of context in determining the next word? Yes, absolutely. Definitely more than just a Markov chain.
But includes understanding? Abstraction, logic, the ability to recognize when information is missing, when a conclusion may be incorrect, its own innate biases, or demonstrate 'understanding' of causality? All missing.
Which is perfectly fine for many use cases! But not, unfortunately, many of the use cases that people claim they'd be great for.
Veranova@reddit
There is plenty of research on this, you can even ask ChatGPT what I’m talking about
Take for instance a recent study which demonstrated the model understood relative positioning of objects in space by decoding the internal activations. This is very much a real thing and you can go find it
lostcolony2@reddit
> Transformers allow the model to predict the next word using the entire context of the question
So, you prefer predicting the next 'word' rather than the next 'token'?
Like, you're missing the point - no one is saying that it doesn't have a whole lot of context determining the next word/token. What people are pointing out is that nothing resembling logic, reasoning, intelligence, etc., is involved. That's why LLMs are really bad at causality, why they hallucinate, etc.
MornwindShoma@reddit
People really get so offended that some random person isn't using their tool of choice or doesn't find it a net benefit.
SnugglyCoderGuy@reddit
The criticism makes them remember that reality is not lining up with what they want to be true, so they suffer cognitive dissonance, get angry, and lash out.
EC36339@reddit
I can't even make AI do the boring and tedious shit for me.
Fenix42@reddit
We use Amazon Q at my place. It has replaced Stack Overflow for me. Not having to dig through the comments on SO is a win for me.
EC36339@reddit
Not having the context and public peer review that SO provides and just getting an answer makes it an inferior source of information.
Fenix42@reddit
Half of the SO stuff is super old. The newer stuff gets closed as dupes of the old stuff. All of it has people arguing. Q gives me a sample / answer based on my code base. If it is wrong, it is able to refine the answer. All of this is done in less time than it takes to find an answer on SO, and it's done in my IDE.
Q has been worlds more helpful and faster for me. I say this as someone with 20+ years in industry who worked at the precursor to SO, Experts Exchange.
EC36339@reddit
Knowledge is super old because it hasn't changed.
If you think something is outdated or wrong, then contribute to it.
Arguing is a feature. It's what makes it a reliable source.
Fenix42@reddit
Stuff absolutely does change. It is maddening trying to find Spring error stuff. I get stuff from 10 years ago. Spring doesn't work the same now. I always have to check the date on the thread.
The arguing in thread means I have to wade through people bickering. I end up on SO because I can't remember a specific pattern, or I hit a compile error. Any other time pre "AI" my IDE handled it.
Now, with Q, I don't have to go to SO. I get the info I need with code generated for the environment I am in. I have used it to gen end to end tests in a framework I wrote. It was able to get about 90% of it in under a minute. It would have taken me 20 or so to do the same work.
I use Q to generate the boilerplate code. It makes my job less tedious.
EC36339@reddit
You don't "have to" wade through arguments in comments at all. You do it if you want to assure yourself that the most upvoted or accepted answer doesn't have any serious flaws or things you need to know.
Also, others wade through the same comments and upvote those that are useful, so they stand out.
This is called peer review. It works.
Fenix42@reddit
I appreciate what SO is. I worked for Experts Exchange at one point. It's a fine system. It also has deep flaws.
So that means I have to read the whole thing.
I have seen the most upvoted comment be wrong for my context many times. If I am on SO, it means the problem is not a normal one.
It only works if the peers are knowledgeable and have an incentive to review things. Older topics rot. EE had the same issue after a while.
The new "AI" tools are saving me time by getting me more accurate results faster. That is all that matters in the end. I have code to deliver.
EC36339@reddit
I think it is pointless to explain peer review to you. You don't sound like you would understand. Enjoy your glorified, opaque search engine with a built-in randomizer.
Fenix42@reddit
Lmfao. I have 25+ years in industry. 15 + as an SDET. I fully understand peer review.
Your attitude is the exact problem with SO. You have assumed you know better with 0 actual understanding of the situation. You then dismiss what others are saying. This is the exact reason I have to dig through all the comments.
FFS, I worked at the precursor to SO, Experts Exchange. It's the reason SO exists. I know how these systems work at a level most don't. SO is a massive improvement over EE. They fixed a lot of the issues. They also have some of the same ones. Answers rotting is a huge one.
I am telling you the signal to noise ratio on SO has gotten bad over the last 5+ years. It has gotten to the point that I hate clicking on the links. That is where EE was before SO came along.
The code gen tools are not random, especially when you integrate them into your systems. That is what large corps are doing. Amazon Q is integrated into IntelliJ. When I ask for code, it uses my current repo as a reference. It will provide code in the context of what I am doing.
It's not perfect, but it saves me a lot of time. I get 90% of what I need with the boilerplate code done. I can then tweak what I need to finish it out. It saves a lot of time.
You can dismiss the tool all you want. It's here, and it's going to replace SO. Learn to use it or get left behind.
EC36339@reddit
Oh, are we doing that now? Have fun talking to yourself then.
Pharisaeus@reddit
Wait until API changes and the old SO answers the LLM was using won't work any more, and there won't be new ones because people no longer use SO ;)
Fenix42@reddit
I used to work for Experts Exchange. I know exactly how this goes. ;)
The main thing Q has going for it for me is the code integration. I don't let it modify my code, but it can read it. That gives me much more accurate recommendations.
start_select@reddit
People who think AI can do someone’s job don’t understand that a hammer doesn’t do carpentry and a calculator is not an accountant.
They are tools that experts use to do work quickly. For everyone else, they are weapons used to destroy things and make mistakes at insane velocities.
Ragnagord@reddit
Carpenters famously went out of business when the power drill was invented. Because what was their job if not hammering nails into planks for 40 hours a day?
That's really the level of discourse around AI at the moment. It's almost childish.
Bakoro@reddit
The lack of nuance and thoughtfulness is what is childish.
We've got useful tools, and people with weirdly extreme positions about them, where some people are refusing to acknowledge that these tools could be good for anything, in any possible way, which is an absurd denial of objective reality, and then there are people who are like "replace all labor with AI right now", which is absurdly premature, dangerous, and not even feasible.
Technology has absolutely replaced people in the past.
"Computer" was literally a person's job, they did the actual calculations, where mathematicians ans physicists did the symbolic math.
+90% of the population used to work in agriculture, ans a collection of technologies has made is so ~2% of the population works directly in agriculture in the U.S, and ~10% in closely related fields. ~28% work in agriculture globally.
You can look at the entire manufacturing pipeline for many products, and see were people have been replaced by machines over time, and see places where the little bit of human labor could be replaced by robots when the robots are cheap enough.
People have been cheering on automation for decades, celebrating the reduced need for manual labor. People have been shitting on trade jobs and fast food jobs for decades. It's been open vitriol and disrespect for "ditch diggers" and "burger flippers".
Then it turns out that a lot of white collar jobs might be easier to automate with computers than physical jobs, and everyone is freaking out like "but the leopards weren't supposed to eat my face!".
With developers, it's pure copium trying to point out every little flaw and every mistake the system makes, and scream "See? The system can never replace me! I'm special and different!", while conveniently ignoring the absurd pace of development and the very clear avenues of further AI development which have barely been explored. We've got every reason to think that within a few years, these systems will be capable of some meaningful level of fully autonomous development.
The people claiming "AGI tomorrow" are also absurd; the tools clearly aren't there yet. Businesses overplayed their hand, and played it way too early. Trying to replace developers with the existing tools was bone stupid.
The people pushing the world forward are thinking what can these systems do today? What are they good at, and how do we lean into the strengths of these systems instead of asking them to do the things they aren't strong at?
A loud fraction of software developers themselves have been among the worst offenders in grading the fish on its ability to climb a tree.
MornwindShoma@reddit
Big reminder though: past performance is no guarantee of future results.
We might get meaningful autonomously built software, or we might not. Research and actual experience have shown us that LLMs are slowly hitting a plateau, costs are going up, and they have been generating only losses up to now unless they hike prices a lot.
There's no certainties about the future landscape of AI applied to software development, or in general to all topics.
Bakoro@reddit
There are no guarantees of anything, but there are a lot of clear indicators.
Cost of inference is not going up, it's the opposite, by far. The average cost per million tokens has had a ~0.1 multiplier every year since 2022.
The costs will only keep going down, as long as we don't have another pandemic or major war that interrupts the supply chain. There are a bunch of companies making AI inference ASICs now which blow anything Nvidia has out of the water. Competition in the coming years will drive prices down.
There are photonics that are in the early manufacturing stage which use almost no power, which will drop the cost even more dramatically if and when they hit large scale manufacturing.
LLMs have hit a plateau in terms of "just throw more text data and parameters at it while using fundamentally the same architecture and training methods".
We know that there's simply no more meaningful human generated text data to train on, and we would have to scale ~10x to see another performance jump if we used the same "more data, more parameters" method. That's not feasible right now.
Fortunately we have several proven methods now where we do not need any more human generated data. For formal logic, math, and coding in particular, there are training techniques which use deterministic tools to do reinforcement learning with verifiable rewards, without any human in the loop. That allows effectively indefinite, continuous training, with functionally no way for the models to do reward hacking.
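A toy sketch of what "verifiable reward" means here (it assumes pytest is installed; the real pipelines are far more elaborate, but the shape is the same):

```python
import pathlib
import subprocess
import tempfile


def verifiable_reward(generated_code: str, test_code: str) -> float:
    """Toy RLVR-style reward: run the model's code against a fixed test file.
    No human judge in the loop; the tests either pass or they don't."""
    with tempfile.TemporaryDirectory() as tmp:
        workdir = pathlib.Path(tmp)
        (workdir / "solution.py").write_text(generated_code)
        (workdir / "test_solution.py").write_text(test_code)
        try:
            result = subprocess.run(
                ["python", "-m", "pytest", "-q", "test_solution.py"],
                cwd=workdir, capture_output=True, timeout=60,
            )
        except subprocess.TimeoutExpired:
            return 0.0  # an infinite loop earns nothing
        # A hallucinated import or function simply fails and earns nothing either.
        return 1.0 if result.returncode == 0 else 0.0
```

That pass/fail score is the whole judge; it's the signal the reinforcement learning update optimizes against.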
What that means for software development, is a training regime which naturally punishes hallucinated libraries and hallucinated functions.
It means a training regime which can learn increasingly sophisticated development and one-shot increasingly complicated instructions.
So, even without a dramatically different architecture, we have clear training pathways which essentially have to work to some degree. It's mathematical certainty, we can and will improve the models' abilities.
Where is the cap for text only RLVR?
I don't know, but it's higher than where we're at now.
And then there is the flood of other papers that have come out over the past few years, where whole lines of thought have barely been explored at scale. There's just no way to keep up with, and explore, every promising new idea at the speed they're coming out.
From what I see, we are not even close to done yet.
MornwindShoma@reddit
Hopefully they don't run out of money before they actually get something valuable out of it. They're all burning billions monthly.
Bakoro@reddit
If that's what you think, you need to try and find some actual news sources and not just corporate LLM hype and doomer whining.
There is a whole lot more to the AI world than LLMs, and every penny that's ever been spent on AI research has already been well justified.
MornwindShoma@reddit
Unfortunately I do have some reliable sources on this. That's the issue.
raccoonrocoso@reddit
Fair analysis, but comparing power drills to AI/LLMs is short-sighted. Power tools enhanced skilled workers' efficiency without compromising quality. AI does the reverse.
The need for quality work hasn't disappeared, but AI makes it harder to distinguish from convincing imitations. We're nowhere close to systems that match human judgment and consistency, yet we're already seeing markets flooded with AI-generated mediocrity masquerading as expertise. AI actively bypasses that requirement, as it enables anyone who can type to flood markets with mediocre work that mimics competence.
blwinters@reddit
So has the Indian subcontinent
GregBahm@reddit
I'm not super keyed into the world of construction, but I'm pretty sure a lot of construction guys moan about new tools compromising quality throughout the history of time.
I'm sure there's some old Japanese carpenter that only works with some ancient, painstakingly slow technique of merging pieces of wood together with a sharpened rock and an iron will. And I'm sure that old Japanese carpenter looks on with horror at some asshole plugging away with a nail-gun.
ForgettableUsername@reddit
It’s easy to sacrifice quality in the name of efficiency. Efficiency is easy to metricize, while quality is multidimensional and difficult to evaluate accurately or consistently.
nimbus57@reddit
I'm going to guess that old Japanese carpenter doesn't care about some other asshole.
nimbus57@reddit
I don't think you got it. Carpenters have not disappeared; their role has changed.
DaBigSnack@reddit
I worked construction for years, some job sites had no power, no compressed air, no nailguns. You raise the floor of who can do what job, but you still need to do things to spec. You still have to manage jobs, inspect, and build things to code for the house to not fucking fall down.
nimbus57@reddit
... I'm not sure what you're saying here; it doesn't seem to have any relation to the conversation. The person above was showing an example of a conversation that someone with a poor understanding might have, be it carpentry or ai programming.
chucker23n@reddit
The point is that the role hasn't really changed that much, and the role of software developers hasn't changed when visual programming was attempted in the 1980s, or RAD in the 1990s, or low-code in the 2020s. You get new tools. You might get a little more efficient. That's all.
chucker23n@reddit
, and it'll be true when clients think "vibe coding" is good enough.
Jonno_FTW@reddit
Have you met most people? They do not know what goes in most jobs outside their own.
TilTheDaybreak@reddit
Hey bro, power drills are useless for nails.
Try a nail gun ffs.
trinde@reddit
Power drills replacing hammers is a more accurate comparison to what is happening with AI and companies adopting it. A nail gun over a hammer for many situations is just a productivity improvement. Forcing carpenters to use a power drill everywhere over a hammer is likely forcing them to use the wrong tool for the job.
AI is a useful tool and is generally a productivity improvement for certain tasks. It just shouldn't be used everywhere.
ForgettableUsername@reddit
Management recommends pre-drilling the hole with our new power drill tech and then pushing the nail into the hole with your fingers.
fear_the_future@reddit
Cabinet makers, coach makers, smiths, etc. massively went out of business with the advent of mass manufacturing machines, so much so that many jobs of that time are now completely eradicated. The same can happen to you!
MornwindShoma@reddit
They didn't, actually. Many processes still aren't done by machines; we only ever shifted the manufacturing away from public eyes into sweatshops. There are actually plenty of techniques that have basically never been automated, like, ever. And there's a market for handmade items as well. What went away were small companies who couldn't compete with larger conglomerates. But we have seen that this hasn't yet happened in software: not all development moved to India, not all apps are low-code apps, not all processes have been digitalized, not all CRMs are Salesforce...
fear_the_future@reddit
All the processes that are done by machines are no longer done by humans. So, evidently, the number of jobs has decreased drastically in relation to the output.
MornwindShoma@reddit
Yes and no. It's not like those jobs disappeared entirely due to automation. Automation didn't replace those jobs; it created new means of production that allowed capital to outcompete artisanal work. But it couldn't do the same jobs as well or as precisely, and some not at all. There are seriously some sewing techniques that still need to be done by hand to this day. Machines didn't replace people as much as capital moved the means of production into its hands.
atomic1fire@reddit
I suppose the real lesson to be learned is that using AI is not much different than building furniture out of a kit.
Sure anyone can do it, but that doesn't make them qualified to build a house that passes inspection.
gc3@reddit
The real "fewer developers needed" idea comes from the idea that one carpenter with power tools is worth 2 who don't have them.
But really the software company should just be able to handle more projects at once with the same staff.
I also think that developers, like lawyers, make work for each other so after this dip I expect demand to rise.
emfloured@reddit
99% of end-users don't want to deviate their minds to do something new that they haven't done before. All they want is to pay to get the stuff done. Most (i.e., not all) people want to enjoy life in peace rather than going into the territory of D-I-Y, except for the most trivial use cases.
You are exhibiting the Mandela effect. Carpenters never went out of business.
GregBahm@reddit
What's slightly interesting here is that, while carpenters never went out of business, calculators actually did.
They went out of business so hard that a lot of people don't even know that "calculator" used to be a job title. The concept has been almost completely supplanted by the tool.
I don't think LLMs are going to replace programmers, but if I did think that, citing calculators would be a pretty perfect historical example.
chucker23n@reddit
Eh. They became accountants, data entry clerks, data analysts, etc.
RealWeaksauce@reddit
Missed the sarcasm
Strong-Reveal8923@reddit
Actually he did not. The guy he replied to should put /s, otherwise his statement reads as serious. I mean, what if carpenters did disappear in his town when the drill was invented! lol
Ragnagord@reddit
There's this concept called a figure of speech?
emfloured@reddit
I think I may have exhibited staying-awake-for-31-hours effect. My bad! :D
Dj0ntyb01@reddit
You are exhibiting r/whooosh effect. The person you're replying to was being sarcastic.
Fearless_Weather_206@reddit
From your beloved AI
No, the power drill did not put carpenters out of business; instead, it became an essential tool that modernized the profession. Like other technological advancements in carpentry, the power drill improved efficiency, enhanced precision, and changed the workflow, but it did not eliminate the need for skilled carpenters
Glizzy_Cannon@reddit
Can you type into an AI prompt how to detect sarcasm?
Fearless_Weather_206@reddit
Actual AI response from a Google Search and you're saying it's BS 😂
Glizzy_Cannon@reddit
I know you're going to triple down like most people would, but please learn context. The initial comment was sarcasm and the fact that you couldn't even tell from my comment is crazy
Lataero@reddit
Buddy, him saying "the drill put carpenters out of business" was the sarcasm. Clearly, they didn't, because a power drill is a TOOL that carpenters use. Just like AI is a TOOL... Christ alive
spamman5r@reddit
The post was not disingenuous, you just didn't understand it.
Glizzy_Cannon@reddit
Type "how to detect sarcasm" into a model's prompt. Might help you out there
Castle-dev@reddit
Maybe if they'd had bosses at the time who fired all the carpenters and then complained at the drills about why shit wasn't working, we'd have a more apt example of what's going on today.
Aerhyce@reddit
It's basically modern-day luddites.
Is AI useful? Maybe, maybe not, but those vehemently arguing about its uselessness just hate it because it's new. They won't be the ones who actually determine AI's usefulness, since they're already persuaded that it's useless.
G_Morgan@reddit
AI is like a hammer that 40% of the time pulls the nail back out but it feels "interactive" so you think you are going faster.
mothzilla@reddit
AI is like a toddler holding a hammer. You might be able to show it where to hit, and eventually it will get it right, but you probably shouldn't be using it to build houses.
start_select@reddit
Eh. It's great at mimicry. Scaffold an app yourself. Make a feature. Then tell it to use that feature as a template for a similar feature with x, y, and z constraints.
Bam, super fast pattern templating/code generation that fits your architecture and project styles.
It’s better at analyzing an existing implementation and augmenting it than it is at pure creation.
E.g., imagine a web app with 5 main nav tabs. Implement the first two, and there is probably enough context to bust out the first 20-80% of the next 3 with an agent while you do something else.
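For what it's worth, a minimal sketch (hypothetical names, TypeScript) of what that templating flow looks like in practice: you hand-write the first feature, then point the agent at it and ask for the next one in the same shape:

```typescript
// Hand-written "users" feature: the pattern the agent is told to copy.
type User = { id: string; name: string };

async function listUsers(baseUrl: string, page = 1): Promise<User[]> {
  const res = await fetch(`${baseUrl}/users?page=${page}`);
  if (!res.ok) throw new Error(`listUsers failed: ${res.status}`);
  return (await res.json()) as User[];
}

// Prompt to the agent (roughly): "Use listUsers as a template for a 'projects'
// feature with an optional status filter, same typing and error handling."
// The generated variant should come back structurally identical:
type Project = { id: string; name: string; status: "active" | "archived" };

async function listProjects(
  baseUrl: string,
  page = 1,
  status?: Project["status"],
): Promise<Project[]> {
  const query = new URLSearchParams({ page: String(page) });
  if (status) query.set("status", status);
  const res = await fetch(`${baseUrl}/projects?${query}`);
  if (!res.ok) throw new Error(`listProjects failed: ${res.status}`);
  return (await res.json()) as Project[];
}
```

The point isn't the code itself, it's that the hand-written version carries your conventions, so the generated one has something concrete to conform to.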
maxinstuff@reddit
This is a very senior dev centric take - I know this is r/programming, but I hope people still understand that.
AI is more like a factory than a hammer, and the impact of its introduction on labour will be similar.
It is indeed “just software,” and what is it exactly that software has been doing for the last 70 years if not heavily disrupting the labour market?
GitHub Copilot and Cursor are not even a fraction of the real use cases. If you don't think it's taking away jobs, you have blinders on. It might be a hammer for you at the moment, but have a look at the things you are building with that hammer and what those solutions are doing.
Nothing you built ever took away a job?
skinniks@reddit
It's going to allow 1 person to do many people's jobs. And for some job classes, especially the lower income ones, like call centres, it's going to wipe them out totally.
pelirodri@reddit
Unlikely to replace call centers; they might try, but it will not go well.
maxinstuff@reddit
Your call centre job will be to monitor and keep a couple of dozen AI agents in line.
MuonManLaserJab@reddit
Why? At this point you're not pitting AI against PhD researchers, but against people who barely speak the language. And the tasks are simple, and when something comes up that's out-of-distribution you can afford to keep a couple humans around to deal with it because you fired 99% of them.
pelirodri@reddit
Because their fuck-ups can be expensive; also, one of the main differences is robots can’t be held accountable or made responsible.
_Lick-My-Love-Pump_@reddit
People who don't appreciate just how fast AI tools are improving risk getting replaced. 5 years ago no one would have ever imagined AI writing code with any level of sophistication. 5 years from now AI will be better and more accurate than 50% of current developers. The rate of advancement is hard to comprehend.
Your hammer analogy is a poor one. A hammer has no autonomy in the physical world. A more apt analogy is a robotic welder on a vehicle assembly line which long ago replaced human workers because they're faster, more accurate, more reliable, work 24/7, and are cheaper.
There will come a day when an AI is writing code that is more optimized in every way compared to a human. And it's likely that a human wouldn't even be able to understand how it works. Like an AI that isn't generating C or C++ or even assembly code, it's generating op code algorithms that achieve a carefully articulated objective. "Developers" in the future will either be data scientists and statisticians who write the AI prompts or they'll be GUI designers who describe what the interface should look like and how the GUI bits should be used together. The real value will be with other, independent AI tools that can certify some otherwise unintelligible code isn't doing something malicious so that a company can feel it's safe to deploy.
trinde@reddit
The rate of improvement seems high because it went from virtually nothing to pretty impressive.
LLM-based AI cannot improve to the point where it can actually produce better code than a human, because it fundamentally doesn't understand things. Maybe someone will come out with a different design that will lead to AGI. LLMs will just get gradual improvements from this point on, and will likely stagnate.
MuonManLaserJab@reddit
Explain how it "fundamentally doesn't understand things."
EveryQuantityEver@reddit
It can't be, because it doesn't actually know anything about what it's programming.
Good luck debugging that when there's an error because you can't completely describe what you want in exact terms.
MuonManLaserJab@reddit
AI is not like a hammer. A sufficiently powerful AI with a sufficient body can take any job that a human can do.
Affectionate_Tax3468@reddit
Carpenters and accountants didn't stop training apprentices tho.
start_select@reddit
I didn’t say you should. That’s my point. They are no good to apprentices/juniors or even a lot of midlevel folks.
Anyone who thinks the tool replaces people is a moron.
brianjenkins94@reddit
I would train an apprentice. My company would need to hire one though.
Xatraxalian@reddit
I'm stealing this.
blue_lagoon_987@reddit
AI by itself won't replace any job, but put AI in the proper tools, in robots, drones, etc.
Soon, thousands of jobs we didn't think of will be replaced by machines.
Whenever I read "AI will replace jobs", I'm more like: thanks to AI I can do stuff I couldn't do by myself, therefore I don't need to hire someone right away.
Etheon44@reddit
With Digital Marketing it can actually do most of it.
SEO and SEM especially, but graphic design is another one for marketing, which is funny because there are a lot of restrictions and "must haves" with big companies (for example, the logo cannot be bigger than x, text has to be a specific height at most, you must always use x colours even outside corporate ones...).
Copy is more of the same; you would think that they should be more creative, but there are sooooo many restrictions.
And it's not that Digital Marketing will disappear, but a lot of specific jobs in the field will, and, at least in my country, there are sooooooooooooooo many people in the field, so we will see.
I studied Digital Marketing but switched to software engineering because for me it was not engaging enough (before all of the AI bullshit).
Ragnagord@reddit
Yeah, corporate copywriting is a big one. I can see how "make sure the text follows this highly specific style guide" is easily automated by an LLM. Doesn't really matter if it sounds sterile and uninspired. After 3 rounds of VP sign-off that's guaranteed anyway.
Etheon44@reddit
Yeah, they already sound sterile and uninspired because you have to say the same thing slightly differently, but it is a client requirement so you have to do it.
leogodin217@reddit
Wow, that is the perfect analogy. Poof. Mind blown. I'm definitely stealing this.
kukeiko64@reddit
What the fuck is this website? You are supposed to read text but every five seconds some stupid ass shit "notification" to the bottom right pops up and annoys the shit out of you.
LucasRuby@reddit
Seems like it was actually a success, to be honest.
Synth_Sapiens@reddit
Fun fact A: Vibe coding is viable for tiny one function apps only
Fun fact B: AI-assisted software development is not the same as vibe coding because it requires pretty good understanding of software architecture and perfect documentation discipline.
gimmeslack12@reddit
I use fun fact A to bootstrap ideas I want to build but am too lazy to get off the ground. Works well (not great), but it's certainly helpful.
i_wear_green_pants@reddit
I totally agree. I have been researching good practices and ways to use AI in coding. And the best results come when you can give specific instructions, which usually means you need to understand how software works.
It makes developing a lot faster and less demanding, as you can focus more on solving the problem rather than on writing code and trying to recall how specific syntax should be.
Synth_Sapiens@reddit
It changes the definition of what a good dev is.
For instance, knowing a language syntax isn't really necessary because AI can convert any language to plain English pseudocode.
tapdancinghellspawn@reddit
Yet.
kettal@reddit
Junior Dev failures prove that humans can't write software
Maybe-monad@reddit
juniors learn from failures
Days_End@reddit
lol, maybe half of 'em do. The other half are basically eternal juniors.
You run into them when you're hiring all the time. 10 years of experience but barely better than when they started; often worse, because now they are opinionated.
superluminary@reddit
AI doesn’t currently learn from failures.
Maybe-monad@reddit
and it won't until some genius finds a way to translate that into matrix algebra
superluminary@reddit
Learning from mistakes is a ridiculously hard problem. We don’t know how to do it yet.
drekmonger@reddit
Here's a mistake you might learn from: the one you just made.
We do know how to do it. LLM chatbots wouldn't exist otherwise.
superluminary@reddit
You’re talking about RLHF. A row of users sitting giving thumbs up and thumbs down. Or possibly you’re talking about a fitness function where weights are adjusted based on proximity of a result in a training set.
Ordinary, everyday failure though, no, it doesn't even know it has failed. The failure certainly isn't trained back in.
drekmonger@reddit
The overall system has an inkling that it failed, if you indicate there's an issue by downvoting a response or cussing out the model.
Those responses are fed into the RLHF factory.
How do you know when you fail?
superluminary@reddit
I know when I fail when I try to do something and don’t succeed. Most folks don’t click the thumbs. Plenty of people cuss out the model on a regular basis. How does it know if it got it right?
drekmonger@reddit
ChatGPT wouldn't exist at all without the capacity to translate failure (aka loss) into linear algebra.
Read up on RLHF. Or ask your favorite chatbot.
Maybe-monad@reddit
You can't put the equals sign between human learning and reinforcement learning even if they look the same on the surface.
drekmonger@reddit
You can't put an equal sign between anything about AI models and human intelligence. Every label for every technique is a metaphor, in the same way that a cut-and-paste operation doesn't involve scissors and glue.
Cut-and-paste operations still work, despite the lack of physical cutting tools and adhesive. And the metaphor works as well, for helping people to understand the point of the operation, right up until someone decides to get needlessly pedantic about it.
The point of AI model learning, from the very first paper on the subject, is figuring out how to translate loss (aka failure) into math. Over the past 70 years, we've gotten pretty good at it. It's not perfect. It's not human-like. But it works unreasonably well, and there just isn't a better metaphor for the process than "learning from failure".
drekmonger@reddit
Actually, that's the primary way that AI models learn. They do, in fact, learn from failure.
After an LLM gulps down the entire internet in the pretraining step, it is only capable of completing text. It won't have any of the skills you might associate with a chatbot like ChatGPT. It won't take turns, make much sense, follow instructions, or try to avoid producing unsafe content.
It'll just try to complete the current text.
RLHF is the primary technique used to train models to perform these kinds of skills. Check out what wikipedia says about it for more detail: https://en.wikipedia.org/wiki/Reinforcement_learning_from_human_feedback
But in short, two responses are generated. They don't even need to be generated by the LLM. A human could write or edit them. It's pretty common for a human to edit a response when both responses are bad/wrong.
The better/correct response is selected, and the delta between them is used to train a reward model.
That reward model is used to train the LLM.
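To make the "delta" concrete, here's a minimal sketch (TypeScript, toy numbers standing in for the reward model's scalar scores) of the pairwise preference loss that this kind of comparison data is typically trained with:

```typescript
// Toy sketch of a pairwise preference (Bradley-Terry style) loss: the usual way
// "response A was better than response B" gets turned into a number to optimize.
function sigmoid(x: number): number {
  return 1 / (1 + Math.exp(-x));
}

// scoreChosen / scoreRejected stand in for the reward model's scalar outputs on
// the preferred and rejected responses. The loss is small when the model already
// ranks the chosen response higher and large when it doesn't; that gap is what
// gets pushed back into the reward model's weights, which then steer the LLM.
function pairwisePreferenceLoss(scoreChosen: number, scoreRejected: number): number {
  return -Math.log(sigmoid(scoreChosen - scoreRejected));
}

console.log(pairwisePreferenceLoss(2.0, -1.0)); // ≈ 0.05: model agrees with the human label
console.log(pairwisePreferenceLoss(-1.0, 2.0)); // ≈ 3.05: model disagrees, big correction
```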
superluminary@reddit
All this is true, but it is also true that if I am using an LLM to code and it fails and writes bad code, it doesn’t learn from this experience, no not at all.
drekmonger@reddit
It does, but in slow motion.
It's sort of a reverse boiled frog. You may not notice that the overall system is getting slowly better over the course of months, but it is. LLM models keep creeping up their benchmark scores, slowly.
Hopeful-Brick-7966@reddit
And it might become even worse when AI slop is used in the training.
Demonchaser27@reddit
Yeah this needs to be stated in more places. We have this fucking bizarre notion that because juniors and mids aren't perfect all the time that that's somehow bad, and we NEED to replace them with some prodigy or something. But that's not only irrational, but as you said... you don't become a great engineer out of the fucking void somehow. It takes failure after failure after failure and understanding what went wrong, and years of improvement. We want to fast track raw experience and that's just not going to happen. And even if we COULD get AI that does all the junior level work... then how are we going to give experience enough to future devs so they can become senior engineers? Hmmm...
case-o-nuts@reddit
Most junior devs are a drag on the team initially.
vogut@reddit
Only if the human race is capped to be a junior developer at max
reddit_user13@reddit
At first.
Logical_Angle2935@reddit
Working on a project last week, I asked Copilot to help with a mundane function implementation. It insists on compiling the changes and iterating until the compiler issues are resolved. Its solution was to delete 5000 lines of code, ending with "All compiler errors have been resolved."
Undo.
grepper@reddit
AI isn't going to replace the need for programmers anytime soon. But if AI makes programmers 50% more efficient, it may reduce the need for programmers relatively soon. Or we may just progress faster as a society.
cjxmtn@reddit
for big companies it can reduce the need, for smaller companies, it will allow developers to work on more projects, and hopefully work on tech debt (lol)
fried_green_baloney@reddit
If vibe coding and AI in general are so great, where are the full examples of code produced?
Initial prompt => refinement => code => test, debug, fix => release.
Any real examples of someone doing this? If not, I'm not really convinced.
goomyman@reddit
I am trying to create a website. I know zero about react and animation frameworks.
I know how to code but not websites. Website development is crazy - every year entire new frameworks drop - I have no idea what the latest is.
I have successfully vibe coded several very complex animated controls using Node.js, Tailwind and Framer Motion. All tools I had never heard of before.
Honestly, creating the initial control was magic. Tweaking it to what I actually want is a nightmare.
It just randomly breaks functionality that used to work. It's wrong all the time. And it randomly decides your design is bad and moves files, making reviewing the changes twice as hard. Especially when you don't know what the changes are doing in the first place.
However, I am a dev: I know how to use git, I know what good component design looks like, and I know the data structures I want, so I can direct the AI to make changes the way I want.
It didn’t stop me from spending 4 hours straight arguing with the AI and begging it not to break existing functionality it just fixed.
However, I'll be completely honest: this would have been physically impossible for me without vibe coding. I would have had to spend a month learning these frameworks and weeks typing them out. I'd have learned more and it would have been useful, though. I still learned a ton from what it produced, and I picked up good coding standards from it as it recommended approaches as the code base grew. I still learned - a lot faster than I would have on my own - although I still understand practically nothing.
While I was successful at it without learning the basics, I think it was only possible because I'm a dev.
It’s 100% a time saver and a useful skill. Would do again if I need to make a change on something I don’t have the time to learn.
I wouldn’t use it at all on something I’m an expert of. But then again I wouldn’t need to either.
absentmindedjwc@reddit
Meanwhile.. the dumbfuck assholes at my company absolutely demand that we all find ways to incorporate AI into our shit.
Also.. the best AI we have available is a self-hosted fucking Llama 3.3 70b model. It's less than useless.
AlSweigart@reddit
Invariably some apologists in the comments are going to mention "...but in five years the kinks will be worked out..."
It's because five years is an amount of time soon enough to wait for but long enough for people to forget when the hype dies and it turns out you're wrong.
Pttrnr@reddit
i'm still wondering how the AI knows which of the training input is good and which is bad. and since "vibe coding" doesn't seem to work properly, now there is a new hype in town: vibe testing. "feels good, man", eh?
Opposite-Cranberry76@reddit
So, the author's other articles seemed a little high-output and formulaic, so for fun I had a good LLM analyze them. The result:
* Extremely clickbait-heavy titles following similar patterns
* Every article follows identical engagement-optimized structure
* Heavy use of buzzwords and sensationalized language
* Repetitive phrasing across articles
* Very SEO-optimized writing style
"shows clear signs of AI-assisted content marketing optimization"
tmarthal@reddit
I wish the same articles calling AI a failure would analyze Scrum/Agile the same way (for all intents and purposes, that process would also be a "failure").
esiy0676@reddit
Agile was a "success" - in terms of corporate "efficiency". It earns more money. Part of the cost is devs going crazy, but that's not on the balance sheet.
fynn34@reddit
Agile is generally successful; Scrum is far from it. It doesn't make more money; it usually slows things down and causes a significant loss of performance and output. However, it's consistent output, which is trackable and measurable. Engineers work at half the speed to fit in a neat little time box of consistent and reliable output metrics. Why would I work 60 hours a week to get stuff done because I have a metric shitload to do? Point my stories high so that I rarely underestimate a ticket, and I can coast half the sprint? That's Scrum for you.
Gwaptiva@reddit
Unless you get paid per hour, you should not work 60 hours per week. And if you are paid per hour and work 60, remind me not to hire you. You'll blow up and produce rubbish the last 30 hours of it
uber_neutrino@reddit
What a completely bizarro philosophy.
fynn34@reddit
What I’m saying is that’s what happens to junior engineers who underestimate on sprint planning and try to get their tickets done in time rather than drop tickets
Gwaptiva@reddit
Yeah, Scrum does have a strong planned-economy vibe; the GDR lives on.
Reddit_sucks_3000@reddit
Like any method, you can find crappy implementations and decent ones. If a team is experienced and self-organizing, all Scrum should be doing is removing the people doing the work from the pointless management meetings and other corporate time-wasting reports.
If you are still going through the methodology like it's a freakin' monolith, you are just calling it Scrum. It's a bit like Buddhism: if a part of it works, keep it; if it doesn't, throw it out the window.
fynn34@reddit
We were doing agile for years and more recently had scrum enforced on us, and I’ve been thinking of dressing up in cultist robes and lighting candles behind me in a dark room for every scrum “ceremony” that we have because… scrum. The terms are incredibly muddy, but it sounds like you are advocating more relaxed agile, which absolutely works. Scrum on the other hand is a lot
esiy0676@reddit
There is a non-zero number of folks on LinkedIn who boast the "Certified Scum Master" [sic] title - I used to think it was sarcasm, but now I wonder how many are actually "on board" with any of that beyond the "ceremony".
Reddit_sucks_3000@reddit
That means they are following the Scrum guide like it's a formula; it's meant to guide, not to be a one-size-fits-all formula. I understand that some people sell "scrum" or even "agile" like it's a step-by-step thing that increases efficiency, but it's a lot simpler and more useful if it's adjusted to each team.
If the hangup is "we must follow the ceremonies", just make them useful for the current team. Some people get a certification and think they know wtf they are doing irl.
greenmoonlight@reddit
Agile was a response to the waterfall model where you design everything, then implement it. Are you saying we should go back to that? Or what's the default state that we should go back to now that we "failed"? Or is the problem more that at your company, the scrum masters make everyone sit at pointless meetings?
GregBahm@reddit
I'm not convinced the people on r/programming actually know the difference between agile and waterfall.
I think most of the people upvoting posts against agile are students who have only ever been assigned tasks by teachers, and are frightened by the thought of any less structured process.
Ditchdigger456@reddit
All of these made up processes are exactly that in the first place tbh.
tmarthal@reddit
Exactly. Same with AI assisted programming and Agent Workflows.
Cherry picking social media posts/tweets about a change to programming methodology can make anything look like crap. There’s a huge spectrum of skill levels, organization maturity and implementation details that can make any process change successful or a failure.
jk_tx@reddit
Except waterfall was never "design everything up front with full specification and then go implement it all without change." - not anywhere I worked in the last 30 years, at least. That's just the straw-man argument that Agile zealots liked to tear down.
In reality it was more about designing the "big picture" in enough detail that you can start breaking it down into smaller chunks/problems and solving those. There can still be iteration/change, even if you can't release "every two weeks", which IMHO is the dumbest part of Agile anyways.
greenmoonlight@reddit
I'm sure there are teams and projects where the best interpretation of waterfall with iterations works better than the worst possible interpretation of Agile, no argument there. Personally I might get a little zealous sometimes, but I'm only defending the argument that Agile as a whole isn't a "failure" that the industry needs to revert.
I'm not a big fan of forced time boxed sprints myself and you can do Agile without them, but since we're talking about strawmen, a Potentially Shippable Product Increment doesn't have to be an actual release according to Scrum. It just means that you have to make some measurable progress every two weeks. It forces you to focus on things that move you towards your business goal.
In my opinion, having your best working guess of the bigger picture documented is fully compatible with Agile; you just have to keep in mind that it's not final until you stop working. You're not just developing a product, you're also developing domain knowledge and team practices as you go. You need to keep validating your assumptions, and any big chart of architecture or process is subject to change if you learn new things. That's it.
raegx@reddit
The reality is that waterfall development often let project managers and dev teams anchor themselves to far-off dates. They didn’t have to, but in practice, they usually did.
Some people thought Scrum would fix that by breaking the work into smaller, consistent deliveries. But no process can rescue a project from mismanagement or developer apathy.
In the end, any framework—waterfall, Scrum, Kanban, you name it—will fail if management and devs don’t both buy in, support it, and evolve it to fit their team and business environment.
Everyone always gets caught up demonizing named processes, when in reality all of them can work, but not all of them will work for your team.
flying-sheep@reddit
I loved working at that one place that actually had an agile process that wasn't totally eviscerated. I feel like calling "agile" a failure would need more places that actually do it, rather than vaguely gesturing in its direction while constantly interrupting sprints with new made-up deadlines, mixing roles freely according to C-suite sensibilities, and so on.
GrammerJoo@reddit
This website is pushing a product and is using ragebait to get upvotes.
GregBahm@reddit
This comment is at -2 votes as of this writing, even though it's absolutely true.
One explanation is that it's being downvoted by bots who are here to promote this website's product. Would make some sense. I assume reddit bots are a very small expenditure in the marketing budget of an AI company.
But this website ("final round ai") isn't even hiding that it's pushing an ai product. At the top of the article are links to "ai application" and "pricing" and an ad pops up for their ai bullshit if you try to scroll through the article.
So I'm inclined to believe the downvotes here are actual r/programming members, who are so ravenously hungry for assurance that AI is bad, they'll shill for AI willingly and downvote anyone who points that out.
rossisdead@reddit
and the OP has been spamming articles from that domain for the last several weeks.
bzbub2@reddit
yes. similar to linearB's devinterrupted blog before it... it's insane that people don't recognize this for the spam that it is
lunchmeat317@reddit
Can we start tagging these posts using post flair? It allows filtering by flair, so it's easier not to see a bunch of AI/LLM coding posts all the time.
rossisdead@reddit
Or just ban them as they don't follow the rules of the sub.
McGill_official@reddit
What a garbage article. Is this what passes for programming content …
Strong-Reveal8923@reddit
This sub is terrible for real programming content. Most posts here are news, blog spam, project self promotion, and more blog spam. AI is easy bait.. I mean look at all the comments here lol.
McGill_official@reddit
I’m convinced all the top comments are so eager to espouse their 200iq take on AI they didn’t even bother reading the article
rossisdead@reddit
Yeah we know, a post about it reaches the front page every day on this sub.
LukeLC@reddit
The term "vibe coding" is a deception meant to assign credit to the human who didn't do the work. I really hope it falls out of style ASAP.
spilk@reddit
it's right up there with "prompt engineer"
leogodin217@reddit
I feel like this is the main problem. The term sucks. LLMs are not in a state where true vibe coding is possible beyond simple use cases. We really should abolish the term.
Kingh32@reddit
I think vibe coding - as originally coined - is completely fine. It's a totally separate mode of putting together software that has nothing at all to do with building software at scale, with teams, for companies, etc.
It’s a fun thing to do to experiment, to build yourself random tools and just to have fun.
ungoogleable@reddit
If vibe coding means playing around with AI on your own to see what it can do, then immediately throwing away the result, OK, fine. But if you're using the code for anything that involves other people or giving the code to even a single other human for them to use, you have a responsibility to check that it isn't actively harmful.
Like if you were "vibe cooking" with AI in your kitchen and it left raw egg in your recipe, maybe you don't care, but you are responsible if somebody else eats it and gets salmonella.
Kingh32@reddit
I don’t know why people have taken vibe-coding to mean anything outside of its original description.
Why wouldn’t you want oversight over the stuff you ship to a production system?
Excellent_Walrus9126@reddit
It's both an insult and a trendy term simultaneously; I think it fits the bill.
Demonchaser27@reddit
At best it has saved me time with language conversions or small functions I didn't feel like writing, but which I knew exactly what the end result should look like. But you almost always have to correct what they give you to make it fit for purpose. So if you ONLY ever use AI to write all the code and you don't understand it yourself, it isn't going to get you anywhere. Especially on anything even mildly complex... hell, even not that complex in reality.
Mental-At-ThirtyFive@reddit
getting into vibe coding in emacs using gptel / aideremacs - as a non-coder (strictly hobbyist) it is really good to see it generate code, and I will almost claim it produces what I needed.
I really have no idea what the impact will be - the only thing it reminds me of is the quote that we overestimate in the short run and underestimate in the long run - obviously with no context on time
moreisee@reddit
3 years ago, this entire premise would be unthinkable. Now, we have daily articles about it (and they're right. AI definitely cannot replace developers today).
I wonder what the articles will say in another 3. Or 5. Or 10.
gulyman@reddit
It's crazy to give AI write access to production.
nimama3233@reddit
The beauty of this article using an AI image
jonas00345@reddit
I feel we lack the tools to measure the complexity we are dealing with. We think that if we could be 3 or 5x more productive we would run out of problems. My experience is that most software barely works and just covers basic use cases. To make it great takes 100 or 1000 times the effort. Look at git vs GitHub as an example.
Roticap@reddit
So wait, Claude works perfectly, but only if you're making ransomware?
nightwood@reddit
I like the part where greedy CEOs are ruining AI and their companies long before it gets a chance to become actually good. All because it could generate crazy rainbow dog pictures.
Sea-Anything-@reddit
Good thing my coding technique is leveraging AI, not vibing with it. These posts are such a joke - "I suck at using the most powerful tools on the planet, let me write about it" 🤣🤣
tedbarney12@reddit
The point of vibe coding is to make a devs menial work take less time. Not code an app for somebody who can't support it to ship.
-Hi-Reddit@reddit
That is not what 'vibe coding' means.
That's what AI is useful for.
They're two very different things.
Nimbux13@reddit
What's it truly mean then?
-Hi-Reddit@reddit
Well for starters, vibe coding is something you do, aka an action, and 'make menial work take less time' is a result.
So they're not even remotely similar before you even define what the action of 'vibe coding' is.
In general vibe coding is accepted to mean accepting AI output without thorough checking, based on the 'vibes' of what it has spat out, rather than reviewing it.
I used AI to 'make menial work take less time' by writing some code today, but I reviewed the result thoroughly and iterated upon it. I did not accept it based on vibes. I still made menial work take less time.
If you can 'make menial work take less time' while coding with AI without it being 'vibe coding' then they are evidently not the same thing.
zacsxe@reddit
Have you been to the vibe coding sub?
Electrical-Ask847@reddit
I know at least 5 ppl personally that are hoping to clone DocuSign in 2 weeks.
mackfactor@reddit
Kinda makes you wonder why they haven't yet.
Dry_Try_6047@reddit
As an eSign SME, this is the type of thing that scares me. These people think an electronic signature is a name in a script font overlaid on a piece of paper, as opposed to what it really is: a digital audit trail from a trusted key issuer.
Apply this to any capability that people think they understand but don't in the slightest, along with salesmanship to sell such a capability. Scary stuff.
wgrata@reddit
Yep, they're probably sure that the LLM knows what DocuSign is and how to make it, ignoring that they have to supply all the product and technical context to the LLM for it to do anything even remotely correct.
Jmc_da_boss@reddit
That is not what vibe coding is lol, that's just not what the term means
YaBoiGPT@reddit
i would agree with you, but vibe coding generally implies you... well, go with the vibes, so it's kinda meant for people who don't know shit
but yeah, as a dev, AI does help quite a bit for adding small features. i can think and figure out issues and it can just write the fix. pretty neat.
MornwindShoma@reddit
It really is so. But not writing by hand does somehow make codebases less familiar, and knowledge doesn't stick as well in memory.
1RedOne@reddit
Exactly! The issue is people who are actually not programmers and have no technical or troubleshooting mindset, using tools like this and getting in way over their heads and having no ability to solve the problem that will inevitably appear.
YaBoiGPT@reddit
yeah, the funny bit is one of my classmates got super into vibe coding and he's actually releasing a product for our school, and he's getting all the proud looks and stuff, but nobody realized his code is a piece of shit and insecure as fuck lol
this dumbass left the API keys exposed in the browser, i.e. it directly fucking hits up Gemini's API and Supabase's APIs from the goddamn console. i also managed to read and write the database with the keys lol. worst part is this is supposed to store student info...
i would tell the person if they were literally anyone else, but this dude is also a dickhead, so i just decided to let it be and he'll eventually get cooked by someone LOL
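For reference, a minimal sketch (Node 18+, hypothetical route, illustrative upstream URL) of the usual fix: the browser only ever talks to your own backend, which holds the keys in environment variables and forwards the request server-side.

```typescript
// Tiny proxy endpoint: secrets stay on the server, the browser never sees them.
import { createServer } from "node:http";

const GEMINI_API_KEY = process.env.GEMINI_API_KEY ?? ""; // never shipped to the client

const server = createServer(async (req, res) => {
  if (req.method === "POST" && req.url === "/api/generate") {
    // A real app would also authenticate the user here before spending quota.
    let body = "";
    for await (const chunk of req) body += chunk;

    // Forward to the upstream model API with the secret attached server-side.
    // (Illustrative endpoint; the exact model/URL isn't the point.)
    const upstream = await fetch(
      `https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-flash:generateContent?key=${GEMINI_API_KEY}`,
      { method: "POST", headers: { "content-type": "application/json" }, body },
    );
    res.writeHead(upstream.status, { "content-type": "application/json" });
    res.end(await upstream.text());
    return;
  }
  res.writeHead(404);
  res.end();
});

server.listen(3000);
```

(The Supabase side is a different fix - row-level security with the public anon key, never the service key in the browser - but the principle is the same: nothing privileged ships to the client.)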
Omnipresent_Walrus@reddit
By definition, vibe coding is just not caring what the LLM generates. Just CTRL+C, CTRL+V, testing it, feeding back failures. Rinse and repeat.
Anything more involved, that actually requires thought and development know-how, isn't vibe coding.
Electrical-Ask847@reddit
that's not vibe coding. I am a dev and I do vibe coding sometimes for fun.
I feel like we need a word for coders using AI as an assistant in the way you described.
GenazaNL@reddit
Yeah, till you have to debug and fix the vibe coded code
flying-sheep@reddit
No that’s not at all how people are using that phrase. But you do sound very confident while you’re wrong; are you an LLM?
esiy0676@reddit
I am not sure - when asked to create test cases, it produced something that looks like a test suite, but won't catch anything not already obvious.
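A minimal sketch (hypothetical function, TypeScript) of the pattern I mean: the generated tests restate the happy path the code already handles and never probe the edges where it actually breaks.

```typescript
// Hypothetical function under test.
function parsePrice(input: string): number {
  return Number(input.replace(",", "."));
}

// What the model tends to generate: true by construction, catches nothing new.
console.assert(parsePrice("1.50") === 1.5);
console.assert(parsePrice("2,00") === 2);

// What a useful test set would also have to ask, and where this code breaks:
console.assert(Number.isNaN(parsePrice("")), "empty string silently becomes 0");
console.assert(parsePrice("1,000.50") === 1000.5, "thousands separator gets mangled");
```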
asphias@reddit
''less time'' xD
BoozeAndTheBlues@reddit
Bingo. A thousand years ago, when I was in Computer Science school, my AI professor used to say: it's about context, it's about the ability to associate one concept with another and make sense of it.
That is AI's biggest hurdle.
BotBarrier@reddit
These lessons extend well beyond "vibe coding". Disregarding explicit instructions, hallucinating statuses, lying, falsifying data to hide misdeeds. These agents have no real accountability to their users and few functional controls beyond protecting themselves.
gullydowny@reddit
okay come on, which is it? It seems like "vibe coding" for some people has come to mean "hur dur I'm retarded, make me an app" instead of "write a function that does x, y and z and then write some tests that look for this, this and that".
onlyonequickquestion@reddit
I'm a working dev, and so far AI has been helpful for tons of stuff and sped up my workflow, but trying to use it to come up with net-new features has not worked so great yet. I'm not worried about my job (yet), but I am worried about fewer jobs in the future for new devs, and worried about the quality of newer devs who use vibe coding as a crutch instead of a tool.
esiy0676@reddit
We know, anyone who ever had a "chat" knows. It's another of those waves. Do not worry.