Had an existential crisis Friday afternoon at work…
Posted by oblongfuckface@reddit | ExperiencedDevs | 314 comments
Team lead with 10+ YOE here, been working for a small startup for the last 18 months or so. I lead a team of 3 other engineers, plus 4 contractors we picked up while we’re interviewing other potential candidates.
Last week, it dawned on me just how much AI is impacting our standards of quality as engineers. I’m starting to see a drastic decline in critical thinking skills all over the place - it’s like folks no longer care to challenge themselves. Instead of using AI to help them understand a problem and get to a solution, they’re letting the tool do their thinking for them. And it’s completely obvious.
Here’s an example: the other day I was interviewing a potential candidate who looked very promising on paper. Ex-FAANG employee, strong background in python/java, plus some tangential experience to our industry. Our tech stack is mostly node/typescript, so to show his technical prowess in an unfamiliar stack, he demo’d an app he made for a client using a React frontend and express for backend apis. I asked him what his development experience was like trying something new instead of his usual stack he was comfortable with. His response?
“I used Claude to develop it.”
That’s it. Period. Nothing about what he learned, what challenges he faced, what he liked or disliked about the experience, nothing to show me he has any interest/patience/ability to learn something new. When pressed, his only explanation was that he was on a tight timeline, and he couldn’t deliver unless he used AI to build the project. Okay, fine, but I have no idea if you’re able to think about solving problems for yourself without relying on the answer machine.
Ultimately, we passed on him. He’s just one example of what I’m talking about though. One of my contractors has been reaching out to me for two weeks now for a ticket that should have taken three days max to develop, even for more junior developers. I’ve given him advice and guidance along the way, but I refuse to flat out show him how to do it. We hired him to work, and I refuse to do anyone’s job for them (I’ve done plenty of that in this position already, and I don’t have the bandwidth or the patience to do it anymore).
So Friday, he calls me and tells me he's still having issues with his ticket and explains there's a problem with his SQL query not working as expected. Alright: send me what you have and let's take a look at it together. He proceeds to send me this monstrosity of a query that makes my eyes glaze over the second I see it. At least three nested CTEs, aliases that make absolutely no sense given the context of his problem, all for a problem that could be solved with a simple where clause on a distinct select statement. I asked him how he came up with this query. His response?
“I used ChatGPT to write this.”
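(For the record, here's the shape of what the ticket actually needed - tables and columns renamed, obviously, and sketched in our node/typescript stack with the pg client:)

```typescript
import { Pool } from "pg";

// Hypothetical stand-in for his ticket: the names are made up, but the
// whole thing boils down to one DISTINCT select with a WHERE clause.
const pool = new Pool(); // connection settings come from the PG* env vars

async function activeCustomerIds(): Promise<string[]> {
  const { rows } = await pool.query<{ customer_id: string }>(
    `SELECT DISTINCT customer_id
       FROM subscriptions
      WHERE status = $1`,
    ["active"]
  );
  return rows.map((r) => r.customer_id);
}
```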
You see what I’m saying? These are just two examples, every week I run into another situation where AI causes more problems than it solves, and on top of that, no one seems to remember how to use their brains anymore to think for themselves. Maybe I’m just becoming the old man yelling at clouds, but I’m very nervous for the future of our industry. No one seems to remember how to do shit for themselves anymore, it’s easier to let the machine think for them. What’s our industry going to look like with everyone delivering AI-generated garbage instead of sitting down, thinking about the problem, and coming up with a real solution?
Am I just getting old, or is this problem getting worse for anyone else too?
TLDR: AI tools are turning our brains to mush, is our industry as fucked as I think it is?
Glum_Cheesecake9859@reddit
I've been using Claude for the past 2-3 months, "successfully" converted a 5-year-old Angular 9 app that I wrote myself to React 19 as an upgrade. Used Claude for 90% of the code but it was quite frustrating, partly my fault for messing around with Tailwind etc.
There are certain things Claude (and most other mature models) does well, and some things not so well. The latter will cost you a lot of time and sanity to fix. All in all, AI is a great glorified autocomplete tool that can also be used in small doses to write new code. It won't replace good developers anytime soon. It can assist seasoned developers to get a 10-50% productivity gain depending on how they use it.
fuckoholic@reddit
I am strong at frontend and I avoid everything Claude or GPT give me. You ask them something as simple as a fade in on a modal and they will introduce a ref, a useEffect and some extra utility functions for something that you can do without any of it. And it's everywhere. The worse the model, the more useEffects you see. For me they are good search machines for human knowledge. Not good for quality code.
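To be concrete, here's a minimal sketch of what I mean (class names are made up): a CSS keyframe animation runs on mount, so the fade-in needs no ref, no useEffect, and no helper functions.

```tsx
import React from "react";

// Assumes a stylesheet containing:
//   @keyframes fadeIn { from { opacity: 0; } to { opacity: 1; } }
//   .modal { animation: fadeIn 200ms ease-in; }
// The animation fires when the element mounts -- no hooks involved.
export function Modal({ children }: { children: React.ReactNode }) {
  return <div className="modal">{children}</div>;
}
```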
Glum_Cheesecake9859@reddit
I agree, they are good for bite sized tasks, auto complete, and should be used sparingly. I would not create a brand new project from scratch with AI.
KnowledgePitiful8197@reddit
The more fucked the industry becomes, the more money there will be for the folks who still "got it" to collect
tr14l@reddit
I can see you have never been a manager. Here's what will happen: they'll be too invested in AI, so the entire industry, all at once, will start shelling out 10k PER USER a month for a product that, eventually, will do it, because it will be the second stage of the gold rush and research will increase even 10x from what it is now because it will become a multi-trillion dollar market.
Because of that, the previous top tier engineers who are refusing to use AI and upskill will take the mid-tier and lower jobs for a couple years until prices stabilize.
Is AI ready right now? Probably not. Does that matter? No. Because the people with the money to drive it think it is. Contracts are signed. Box is open. Egg is cracked. There is no "going back"
IceMichaelStorm@reddit
BUT for AI to become better it also needs to learn more.
Where does it learn from? Ok, partially reddit. But partially also other sources. And guess what? More and more content online is generated by AI. So bullshit will become more frequent, AI will learn more bullshit, quality will decline.
Well, that’s one potential path. Curious!
tr14l@reddit
It doesn't need to learn more. It already knows nearly everything every engineer on the planet knows. The problem is the context window. It's an engineering problem.
noxispwn@reddit
My bet is that the bubble will burst before AI becomes actually good enough to straight-out remove engineers from teams. You can generate all the hype you want with impressive demos in the limited scenarios where it kinda works, but as services and products become buggier, unscalable, unmaintainable and straight-out unfit for purpose, the companies that depend on them will collapse.
tr14l@reddit
Yes, the .com bubble burst too. But actually stop and think about what that looked like. You see desolation in the internet markets anywhere? How long did it slow things down? What did corporations do after the dust settled? They put in BILLIONS. And web app development is now, by far, the biggest sector of software development.
The California gold rush ended, but in the end it left California a powerhouse.
Right now is the gold rush.
Groove-Theory@reddit
Alaska had a gold rush too (in 1899). Not exactly a powerhouse.
Also California didn't become a powerhouse until around the early 20th century (due to the completion of transcontinental highways). Manufacturing a significant amount of US arms during World War II multiplied the population too. All almost a century after the original gold rush.
The analogy being... "gold rushes" do not dictate future long-term success.
Meta_Machine_00@reddit
Can you give a specific time for the pop of the AI bubble? Vague predictions are cheap. Let's get a solid month of a year so that we can see who knows what they are talking about.
Groove-Theory@reddit
No
Bubbles don't pop on a specific date. That's part of what makes them (scarily) bubbles.
Capital overextension, declining marginal returns, investor sentiment shifts, none of that obeys a calendar, it obeys dynamics.
If you want horse-race betting, Polymarket probably has odds.
Meta_Machine_00@reddit
Capital overextension relative to what? What else are they supposed to invest in? Land and buildings with restricted building? The most recent big investment idea was diversity and inclusion programs. That has all effectively backfired, but all the companies are still standing. The tech titans holding up the market aren't about to tank.
noxispwn@reddit
What happened after the .com bubble burst was that only the minority that was actually useful and valuable survived and eventually thrived, while everything that mostly relied on hype and "potential" imploded. When the same thing happens to AI, I expect companies and engineers that have a sustainable and practical use for AI tools to enhance things to benefit, while those who went all-in on AI based on what it might one day be able to do (but not today), will fail.
tr14l@reddit
I don't disagree that AI wielded by experts will be the premium product people will pay for in the long run.
I just don't agree there is going to be some pendulum swinging to the other side. The people who are acting like AI luddites are going to get shut out completely, and using GPT as Google search+ isn't going to get them much further than not using AI at all
You think letting AI write all the code is dumb? Fair enough, it's pretty dicey. You think everything is still going to be done by hand in 5 years? Well, it's a bold move, Cotton. Let's see if it pays off
noxispwn@reddit
I don't think that everything will be "done by hand" in 5 years. I agree that there are useful applications for AI right now and we should be taking advantage of them; I know I am. However, my point is that AI by itself is not close to being a replacement for the engineers that should be using it, as much of a wet dream as that is for some. Our most likely future is AI being a useful tool to speed up and augment software engineering rather than putting them out of work. As with any useful tool, those who use it will probably be more productive than those who do not, so it won't make sense to avoid it.
DefinitelyNotAPhone@reddit
How do you increase R&D improvements in LLMs by 10x when there's already literal trillions invested into it and the models have consumed 99% of the available data in existence? At some point investors are going to want their money back, and no AI product in existence has delivered on either the insane pie-in-the-sky promises made by the people hawking it or the profits that investors expect to see.
This is a bubble. This is tulips that cost a million dollars a piece. The overwhelming desire of boardrooms to make their payroll expenses disappear forever will prop it up for far longer than it should be, but at the end of the day reality will beat out investor hype.
djnattyp@reddit
Or... along with all the other dumb shit going on in business and politics the bubble pops and there's no work because of Great Depression 2.0.
SDplinker@reddit
You are getting downvoted but I think this is the more likely outcome. I get the OP's take and actually sympathize/empathize. But it's not fair to just blame employees or job candidates... there's such a massive push in all industries, and often heavy pressure from management to use it and justify the CFO/COO/CEO's pipe dreams (and thousands of dollars spent on tools), that, fine, good luck taking a stand and losing your job. Despite overall good pay, most of us won't take that career/job risk, not with the inflation and political climate we are in.
oblongfuckface@reddit (OP)
True, valid point. There’s pressure from all directions to use AI however you can, and it’s definitely not worth the pushback. Maybe I’m being overly critical, but I can’t help but feel there’s an overall decline in quality since the whole AI takeoff started.
Also love the vintage Jagr pfp btw!
SDplinker@reddit
it's definitely declined. I only have my personal experience to draw from but it's almost causing its own form of burnout. The landscape is changing so rapidly, the tools are very good at some things and mgmt gets gassed up about it but it's horrid at other things. It's difficult to manage expectations and also get flooded with all this new tech.
Meanwhile AI is enshittifying a lot of our consumer experience. It's also dehumanizing. Maybe someday it will be the tech utopia we see in some sci-fi but in the near term it kinda sucks.
KnowledgePitiful8197@reddit
Correct, never been a manager. But I don't mind using AI to help me with stuff I'm already familiar with. I treat it as an intern: it will produce something of unknown quality that needs to be triple checked before being used for anything other than internal tooling. But my scenario is not the main use case for AI, which seems to be more like a person trying to develop stuff they have no clue about. So let them burn. Your competition wants to destroy themselves? Why stop them ...
Only-Cheetah-9579@reddit
I think the same. We will be rare and should charge 10x more.
aidencoder@reddit
I think that new AI systems have their use. They're great as a "natural language mouse and cursor" ... a new UI model if you like. Amazing, people can talk to machines that have facts in their context.
But for programming? Writing the code is the easiest bit. Programmers are lazy by nature. We make languages that are expressive, logical, and unambiguous. Natural language is ONE of those things.
Great. A machine can write some kind of code for you. You just let a machine do the easiest 10% of the job. Now what?
Honestly, it is dire. Even if machines could _somehow_ iterate on complex code for us and the result was beautiful ... entropy will still win. The system will rot, and nobody will know what to do when the tangle gets too tight.
AI has its place, but it isn't the new compiler. It isn't that abstraction IMHO. It's a user interface, not the system to define logic.
gajop@reddit
There are some tasks where you can almost imagine all the code that's necessary to write, but it's just time consuming to write it. One-off scripts, refactoring, porting/migrating, etc. Surprisingly, that's a lot of the work in some positions.
slonermike@reddit
Yeah I use it a ton for one-off scripts, boilerplate, and integrating APIs with painfully obtuse documentation (ie Google).
NoobChumpsky@reddit
Great for heavy rote boilerplate as well.
simfgames@reddit
That's where my first 100% speed up came from, boilerplate and faster coding. The 10% you're talking about.
The next 200%, though, came from outsourcing all but the highest-level design decisions to the LLM.
You're correct as far as the tools you've seen go (and the way people use them). But I don't think the custom tooling I'm using is anything revolutionary, so I think you won't be correct for long.
lordnacho666@reddit
I'm perfectly happy with using modern tools to write stuff, but the test is can you explain what happened without saying it came out of the tool?
If you can act like you wrote it yourself, I don't care if Claude wrote it.
If you can't say anything beyond "Claude did it" you're not a senior dev.
oblongfuckface@reddit (OP)
This is totally fair and valid imo. I'm not saying AI should never be used; it's a tool to improve productivity and I use it as well. But if you're shipping code that's generated by AI and can't even bother to try and understand what it's doing, that is a major problem and will lead to entropy, as others have pointed out
As an experiment I tried using AI to fully work out a solution for me a week or two ago. What it gave me was garbage, so I scrapped it and did it myself. And I knew it was garbage because I used context and my own understanding to know what it gave me was incorrect
po-handz3@reddit
If you already had the context and the understanding, then why didn't you include that in your prompt?
Clear context, roles, actions, understanding, expected outcomes - these are the absolute most basic parts of effectively interacting with current genAI systems. Wtf is the point of 'testing' technology you don't understand how to use?
oblongfuckface@reddit (OP)
lol, for the record I know how to use AI tools for code assistance! When I say "experiment" what I mean is "can I give the tooling my current code, what I want to achieve, and what constraints I know exist, and will it write a feasible solution for me?" It didn't, so I scrapped it and wrote it myself as I planned to initially.
Which is more effort: iteratively prompting an AI tool with the cognitive reasoning ability of a 12 year old to give me something resembling a working solution, or coming up with a solution and coding it myself?
Ok_Individual_5050@reddit
Coding. You are describing coding, with a looser syntax than normal
wobblydramallama@reddit
tbh that person probably is more focused on delivery speed vs quality. for many startups and companies this enables them to get money faster. churn features asap and you look great for leadership
Upbeat-Conquest-654@reddit
One thing I'm looking for in people is a certain sense of curiosity. They can literally ask these tools to explain their solution to them. They can ask follow-up questions if there are parts they don't understand. It's simpler than it has ever been before. If someone can't be bothered to do that, that tells me all I need to know.
Ok_Individual_5050@reddit
Just to correct a common misconception: an LLM-based coding tool cannot explain the rationale behind the decisions made, but rather it can try to invent a rationale based on a summary of the code that exists at the end.
gillythree@reddit
I think that's actually what happens in our own brains too. Obviously, it doesn't feel like that, and I can't remember where I heard this, but there are experiments that demonstrate that we frequently make up our rationale on the spot and believe we're remembering it.
Then again, I don't think the context was coding. Maybe it only applies in smaller situations, and not in complex problem solving or creative situations.
Ok_Individual_5050@reddit
No. I can explain the steps I took to get to a given output. That's part of being a professional engineer.
congramist@reddit
I agree with you, but I think you might be missing the point.
They are saying that in times when we can't remember why we did something, we often tend to rationalize and interpret our rationalization as memory, when we aren't actually remembering squat; we just can see now why we did what we did. Sometimes only then does this jog a memory which supports our present understanding. Sometimes not.
The problem with their statement, imho, and why I agree with you, is that we are using actual reasoning, not summarizing-and-bullshitting predictive text calling itself "reasoning".
Schmittfried@reddit
They are both right, but in the most pointless kinda way. Yes, humans do come up with retrofitted explanations for why they decided something intuitively, but they can reason and we can explain our deliberate choices.
An LLM is a next token predictor. Whatever explanation it will give you is just as much repeated auto complete as the original solution. As such, it’s not an explanation at all. It’s a combination of words that are statistically likely to occur when sampling text in the context of the previously generated solution.
That’s a fundamental limitation of reasoning in LLMs and replying that humans also sometimes make stuff up doesn’t help at all.
Ok_Individual_5050@reddit
I think people really struggle with this distinction. Yes, people sometimes bluff and bullshit. LLMs CAN ONLY bluff and bullshit. When they're correct it's because, statistically, the correct answer is the one most likely to follow your prompt.
Schmittfried@reddit
I'd argue even bluffing and bullshitting are misleading terms. They may be fine for explaining the situation to normal folk (though I still prefer "repeated auto complete"). But here, when talking to other software engineers, we should be precise. The term bluffing implies agency, which an LLM doesn't have. WE are bullshitting ourselves by essentially failing the digital mirror test.
gillythree@reddit
I know this is true, logically, but it totally blows my mind! In fact, when I imagine how a human mind must work, I figure there must be a lot of this same thing going on. And then I start to have my own existential crisis! 😆
Meta_Machine_00@reddit
So now take your engineering skills and apply them to your own thought process. The explanations that are generated out of your head aren't coming out of nowhere. Your brain has some sort of context window and it generates the explanation out of you piece by piece. If you think about it, that sure sounds a lot like an LLM.
Schmittfried@reddit
That's not at all how the brain works. This is the classic fallacy of engineers over-applying CS concepts and thinking they've figured out life and consciousness.
Meta_Machine_00@reddit
My viewpoint is nothing more than Occam's razor. To assert anything different is to believe that people are magic.
Schmittfried@reddit
Your viewpoint is unfounded wishful thinking, that’s what it is. The dichotomy you stated is utter bs.
Ok_Individual_5050@reddit
No.
Look. I have a PhD in natural language processing. I know pretty well how LLMs work. The overlap between human cognition and neural networks was a common subject of discussion in my department. The notion that the brain operates on anything like an LLM's context window flies in the face of everything we know about psychology and neurolinguistics.
gillythree@reddit
Ok, I think I remember where I got that idea in my head. I remember an old CGP Grey video where he discussed experiments on split brain patients. Surely, you would know more about this than me. I'm curious to hear your thoughts on it, particularly with regard to a developer explaining the rationale behind a design decision. Have you ever studied the phenomenon discussed in the video?
Meta_Machine_00@reddit
You do agree that human sentences are generated from neurons firing, correct? And that sentences and thoughts are constructed across time from existing neuron firing pathways?
Ok_Individual_5050@reddit
The "neurons" used in "neural networks" look nothing like actual human neurons. As for your second sentence, no I do not know that because it is a drastic oversimplification.
Meta_Machine_00@reddit
It doesn't matter if it is a simplification. Brains generate outputs like these comments here when neurons are fired off in a chain. The neurons and the firing path generate the behaviors. There is no man in the middle that can pick and choose how the neurons fire away because the person is generated by the neurons in the first place. It must all be automated. It is impossible for it to work any differently.
hyrumwhite@reddit
Consider looking at a series of if statements.
You can look at them and reason your way through them. Understanding that certain states imply certain outcomes.
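Something like this (a made-up example):

```typescript
// A human can trace which inputs reach which branch, and why.
function shippingCost(weightKg: number, express: boolean): number {
  if (weightKg <= 0) throw new Error("invalid weight");
  if (express) return 25;
  if (weightKg < 1) return 5;
  return 10;
}
```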
An LLM cannot do this. It ingests the if statements as tokens it uses to influence the tokens it outputs.
For comparison, most people who’ve never seen code could reason through well written if statements. If an LLM has not been trained on code, it’d likely output something useless.
Although, now I am curious about that actual scenario.
Upbeat-Conquest-654@reddit
Valid point. But the explanations the LLMs provide are usually helpful for me, and I think that's what matters.
lllama@reddit
But... Claude did do it, right? I think it's problematic to pretend Claude did not, and to then retroactively explain away whatever Claude did as if you did it.
AI tools are not designed to "help you write code", they're extremely insistent on writing the code for you. So this is how people will use it. It'd be better if the commits were tagged as such.
big-papito@reddit
"My IDE autocomplete did it, donno, go ask it, I guess".
The worst part is that a ton of FAANG engineers are not that useful. They are junior and interview-smart, but to work with a big tech vibe coder at a small company sounds like a recipe for pain. The things they are required to know are very FAANG-specific, but for anything else, sounds like they are encouraged to just shut off their brains.
Ok_Individual_5050@reddit
The impression I get (I might be wrong, we don't have many ex-FAANG ppl in the north of England) is that the way those orgs are structured almost discourages having a sense of ownership of the code you wrote unless you're fairly senior?
The_Right_Trousers@reddit
It's not possible to have a sense of ownership of anything but the current project. There is an unbelievable amount of code. Billions of lines. Which 30 million lines are you going to own?
That being said, at my little part of the FAANG I work for, we are encouraged and incentivized to take ownership of what we work on. It's one of the things that officially differentiates seniors from juniors.
Refmak@reddit
I’d argue hard that if you can’t reason beyond “Claude did it”, then you’re not even a software developer.
How much software development did he do, versus how much did he offload to a tool?
plinkoplonka@reddit
I did this last week (I'm a principal by the way).
I had a fairly long and complex class almost finished. Logic was all written out and structured in pseudo-code before I started, all commented, formatted properly etc.
By Friday afternoon I was burned out (70+ hours last week again) and needed to get this off my desk for this week.
I couldn't figure out why the last loop I needed wasn't outputting properly, so used Amazon Q to find the problem.
It found it immediately. Stupid mistake in my nesting of logic that I should have spotted, but somehow missed.
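(Not the actual code, but it was this shape of mistake:)

```typescript
// Illustrative only: an output step nested one loop too deep, so it
// ran once per item instead of once per group.
type Group = { name: string; items: number[] };

function summarize(groups: Group[]): string[] {
  const out: string[] = [];
  for (const group of groups) {
    let total = 0;
    for (const item of group.items) {
      total += item;
      out.push(`${group.name}: ${total}`); // BUG: belongs one level up
    }
  }
  return out;
}
```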
Instead of needing to wait until Monday when I would have found it with fresh eyes, AI found it for me and then pointed me at the fix. I then had to update some of the code so it made sense in context of the rest of my class/methods, naming conventions etc.
I'm now free to do whatever I need on Monday instead of messing with this, so it can be helpful if used properly.
steveoc64@reddit
I recently wrote some new web hosted systems for work to do some complex data analysis and visualisation to highlight some serious problems in our existing legacy internal systems.
I did this in my own time out of normal working hours, so just chose the most appropriate stack to get the job done as neat and as quick as possible. PoC on steroids. So I wrote it in Zig, and used that to pull in some mature C libs. Easy peasy. The code is super easy to read, modify and understand.
Then I go on leave for a while, and writergate happens.
Zig is easy to understand.. But … even the best and most up to date LLMs have no idea how to work with Zig code, let alone help migrating any code to the new 0.15 Io.Writer breaking changes. All of them will 100% hallucinate nonsense suggestions, mix in imaginary Rust crates, or bits of JavaScript that obviously won’t even compile.
It’s not that hard - you just need to engage your brain and spend an hour reading the official docs to make the required changes.
None of my coworkers can touch it. A year ago, no problem, but today, no way. They have all gone head first into AI vibe coding, and have collectively lost their ability to reason in the space of 12 months.
fuckoholic@reddit
I had the same experience of GPT giving me Rust crates when I used Zig. In fact I had trouble installing zig and asked GPT for advice, it gave me cargo add zig
As for hallucinations, LLMs hallucinate even some of the heavily used npm packages, because they probably don't have enough data for the specific use case I wanted to do with them, so they just spew out one lie after another.
unconceivables@reddit
It's not really AI's fault, it's that the industry is full of people who have no idea what they're doing and were never good even before AI. When I was interviewing people 20 years ago it was completely different. Hiring these days is a nightmare.
oblongfuckface@reddit (OP)
The interview process is indeed a nightmare, especially remote interviews! I never know if the person I’m interviewing has AI listening in to give them the answers or if they’re being genuine.
I had a candidate a few months back that hit every single technical question thrown at them on the head. Every time, without hesitation. Even complex algorithmic questions, each one was a nail on the head with a perfectly worded response. Like come on buddy, it's obvious you're cheating. An engineer giving an honest answer is going to stumble on words, make mistakes during explanation and give corrections, etc. Nobody knows everything; it's more obvious than a lot of people realize when you're using AI to cheat
bigbry2k3@reddit
How are they answering tech interview questions with A.I.? It seems far too incredible. I don't think anyone could get away with this.
oblongfuckface@reddit (OP)
It’s happened more than once. There was another gentleman I interviewed that would have “network issues” every time I would ask him a question, despite the fact I could see and hear him perfectly fine on the call each time it happened. After 30 secs or so, he would “come back” with the answer. Maybe I’m just cynical, but my suspicion was he had a phone with ChatGPT listening to feed him information.
It’s not that hard to tell when someone is giving genuine answers. If it comes into question, we ask for an onsite interview to see if they’re legit (which we do anyway as part of our interview process). In my experience, every person we’ve suspected of using AI in some way has bowed out before the onsite
bigbry2k3@reddit
Wow! I believe you. I just feel it's so strange.
ThrobbingMaggot@reddit
100% agree. I have a different but similar issue in my job with a hero dev. As we are all WFH and annoyingly in a no camera culture, this dev will just counter anything you say by using prompts like 'what are the pitfalls of approach x'.
Also this is not paranoia as he has done this, then forgot and shared his screen on a 1 on 1 call and I have witnessed the prompts.
unconceivables@reddit
Yeah, you just have to dig into some technical parts of their past experience instead of generic technical questions they can feed into an LLM. I refuse to hire anyone who can't talk about things they've done.
Speaking of that, I love when I see on resumes (and I just saw this last week) something like "optimized a SQL query resulting in 20% performance improvement." Like sure, good, but why was that important, and why are you highlighting a single query as a bullet point on your resume? People who have accomplished so little that they highlight something so trivial on a resume are an automatic pass.
inglandation@reddit
I gotta say, it takes some balls to say you just used Claude at a job interview lol
fr4nklin_84@reddit
I've conducted a few interviews in the last 3 months and it's becoming a common theme. Ask them a question: "oh I'd just use ChatGPT". The other common theme is when you ask "what are your hobbies, what are you interested in?", most of them say they're studying AI. I honestly don't know heaps about AI - I'll say something like "oh I've been looking into RAG, it seems like something that would be very practical and achievable in our business" - and they have no idea what I'm talking about.
inglandation@reddit
Haha the bar is low, maybe I should interview at your company.
potato-cheesy-beans@reddit
They don't see it as ballsy, they're proud of it. I've got a friend who works at MS and he knows I'm not exactly sold on AI, so he constantly tells me stories of how they're all using Claude (not Copilot, hilariously) and anybody not using AI to code most of their stuff is seen as a relic or just being difficult. They're all paying for Claude out of their own pockets too.
Perfect-Campaign9551@reddit
OP is acting like a dinosaur. Many companies want you to embrace AI, these devs are doing that and learning new technology, and OP yells at cloud
potato-cheesy-beans@reddit
They’re not learning anything if they can’t explain what they’ve built… you can’t expect senior level pay and responsibilities if you can’t explain the basics of what you’ve done.
There’s nothing dinosaur about it, it’s not unreasonable to expect that if you’re making decisions you are able to back that up with more than “ChatGPT told me to do it”.
Honestly, relying too heavily on AI is just the new cut-and-paste from Stack Overflow, but because CEOs see it as a way of either speeding things up or reducing their biggest development cost (engineers), it's suddenly corporately backed and encouraged to not think about how you build things... for now at least. It's going to be an interesting decade ahead as we have juniors coming up and absorbing the culture of where they're starting out - half the industry are "all in" and aren't bothered about how things are built.
Perfect-Campaign9551@reddit
They don't need to learn it. That's the point. It's an abstraction that makes it so you shouldn't have to know.
I know that RIGHT now AI isn't good enough to use it that way, but my point is, them attempting to use it that way is still learning new things and keeping up with the future. For you to hold them back with "they don't know anything" is old thinking.
HelveticaNeueLight@reddit
You hit the nail on the head. I find it so funny to interact with devs I knew before LLMs took off that now embrace it whole heartedly.
Most of them were never particularly talented to begin with… but now they will immediately think you’re stupid if you voice any concern at all about AI.
One guy i know in particular is constantly reposting slop on linkedin about how “in the future, companies will only consist of a single employee (the CEO) that manages all the AI agents that run the company for them”. Like I’m not even particularly anti-AI, but some of these people need their head checked.
Meta_Machine_00@reddit
Human brains are generative machines too. They can only express precisely what their brain generates out of them at the time. The problem is that humans hallucinate about the nature of their own intelligence and think that they can willy-nilly hold different perspectives at different times. But that is simply not true.
What happens now and going forward is a simple generation of the physical universe itself. AI had to emerge, and all of the human behaviors we see are also inevitable.
HelveticaNeueLight@reddit
Are you addressing anything I said or do you just feel very smart going through this thread regurgitating whatever you heard the first time you read Robert Sapolsky? Of course thought is likely deterministic, I never implied it wasn’t. The neurons that power our brains and the perceptrons that power LLMs are clearly very similar.
But I’m not talking about that. My comment is about the actual real world use of the tech. As of 2025, AGI does not exist yet. I bet it will come about eventually but I don’t know when that will be. To get there will likely require some paradigm shift (or multiple) on at least the same level as transformers were for the original machine learning architecture.
My point is about the developers I work with using the tech rather than the tech itself. Many of these devs I interact with are obsessed with agentic AI, MCPs, prompting, but fail to understand how the algorithms actually work behind the scenes. I think this stuff is going to change so fast that techniques like prompting are going to become very quickly outdated.
Until AGI exists, I firmly believe devs still need to focus on the actual engineering understanding because otherwise you’re just a slave to the output of the LLM. Once AGI does exist… well none of us will have a job anyway so I’m not sure anything will matter.
Meta_Machine_00@reddit
AGI cannot exist. Intelligence in humans is not "general". You cannot diverge down a path. It is all straightforward and algorithmic over time. Just because humans cannot predict future outcomes does not mean that an outcome they observe at some point could have somehow been different.
HelveticaNeueLight@reddit
Saying “AGI cannot exist” is such a crazy absolutist claim that you cannot possibly back up.
Have you worked at any of the firms or universities researching this technology? Do you have a degree in philosophy, biology, computer science, mathematics, etc? What you are saying simply does not match the views of the majority of experts in the field.
You sound like a bullshit artist that consumed a crash course on determinism and now think you’re an expert on everything.
Meta_Machine_00@reddit
We can only comment how our brains write it out. Where do you think your words have been coming from?
HelveticaNeueLight@reddit
I’m not going to have a reddit debate with both of our grade school level understandings of philosophy, physics, consciousness.
Put up or shut up. Give me some credentials to reflect your expertise, otherwise I don’t believe you’re engaging in this conversation in good faith.
Meta_Machine_00@reddit
Your argument from authority is surely something, but I guess you don't understand the implications of how a person attains these credentials. Do you think all people around the world are capable of acquiring these demanded credentials or at the very least, does there need to be some preceding life circumstance involved?
r0ck0@reddit
It's funny how often techies fall into the trap of thinking that our jobs exist purely because other people don't have the tech skills to do them. And that non-tech people are going to spend their time making & fixing stuff themselves, instead of just giving some vague requirements in a couple of meetings. As if they have nothing else to do with their time.
CMSes like wordpress have been around forever... but like 1% of clients I've seen actually bother doing updates themselves. They usually just email shit through to the dev to update anyway.
Like... cleaners & dog walkers exist too. And it's not because people are incapable of doing that stuff themselves.
Time is not unlimited. So people hire others to do stuff for them. Especially like... CEO & managers... that's the main thing they do. That's why middle-managers exist.
Meta_Machine_00@reddit
Why do you think they could think any differently than what you see coming out of them? If you understand that brains are a type of generative output machine too, then human beliefs and behaviors are perfectly explained. It's just the neurons talking.
The major problem with AI is the fact that humans have always hallucinated their own intelligence. They assume that they can be the homunculus that controls how the neurons fire. But that concept is totally bogus.
Which-World-6533@reddit
I bet a few years ago he was telling us how block-chain was the solution to everything.
po-handz3@reddit
Never talented at what..? Memorizing syntax? Writing boiler plate code? Reproducing asinine algorithms that are widely available in open source libraries?
Something else?
HelveticaNeueLight@reddit
How do you single that out in my comment while forgetting the context of this entire thread? I was referring to what OP and the comment I replied to were discussing.
I’m talking about shipping code for PR review that either doesn’t work or is overly convoluted in a way a human would not write it. Unit tests that don’t actually test anything. Comments longer than the function they annotate. Etc etc. It’s so painfully obvious some of these people are not even reading a single line of the broken slop they submit and expect me to review.
I do believe this tech will continue to advance but its current state causes constant headaches for me today.
Sea-Us-RTO@reddit
lol. As the amount of AI-generated code ticks up in the training models, the quality of the output will drop to a level that will make it no longer profitable. It's a neat tool but it's already starting to show its age.
LargeSinkholesInNYC@reddit
It might happen like in 50 years, but won't happen overnight.
Just_Stirps_Opinions@reddit
It'll be a lot sooner than 50yrs. Plenty of popular video games have already been created by a single dev.
AI can make people exponentially more efficient. However, they still need to be skilled at what they do.
Mediocre Devs will use AI to create mediocrity. A talented and skilled Dev will find ways where it can make them more efficient, they will know how to prompt the AI to not write slop.
regrets123@reddit
Plenty? Name five. I can think of 3. There are what, 500-700 new steam games a week now? (Quick search told me 50/day.) While I agree smaller teams will constitute more of the market, I feel your comment is hyperbolic.
topological_rabbit@reddit
Reminds me of a scene from The Big Short:
"Why are they confessing?"
"They're not confessing, they're bragging."
chipper33@reddit
“It’s a bubble!”
Meta_Machine_00@reddit
Brains are generative machines. Why do you think it is physically possible for them to not express the perspective you see being generated out of them?
dlm2137@reddit
This is where the term “AI-pilled” is apt. It’s a fucking cult with some people.
Seriously nothing says “imposter syndrome” more than this obsessive fear of being “left behind”.
If you actually know how to do engineering rather than having become proficient in a series of fads, you are not gonna be so worried about being replaced.
syklemil@reddit
Yeah, it sounds like similar social dynamics as conspiracy theories.
Which also applied to blockchain & NFTs, and a lot of the grifters from that scene have moved on to "AI".
At some point it's not about achieving anything that a non-cult-member can understand, but just about proving their faith to the other cult members. And actually knowing stuff becomes a threat to the authority of the cult leader.
blahajlife@reddit
Paying out of their own pockets to put themselves out of work. Smart!
cyrenical@reddit
Wut
matthra@reddit
Let's be real, proper AI use is a skill, and should be something we talk about in interviews. It's like Google search: knowing how to use it is a plus, being completely reliant on it is a deal breaker.
alliswithin@reddit
It’s barely a skill.
Glotto_Gold@reddit
It's a skill like delegating effectively.
Mission_Cook_3401@reddit
It is more than delegating; skilled AI development rests first on architectural ability, and on matching business need to tech
Glotto_Gold@reddit
Do you mean building AI-based products, or introducing vibe-coding into your development flow?
Mission_Cook_3401@reddit
Vibe coding is telling the LLM what the end goal is. Architecture is the goal, and the path… the architecture is then broken into plans, and all plans match the architecture specs.
If your architecture is decent; then it is not always necessary to read every line of LLM code, only when there is an obvious smell, common error in LLM understanding, or in critical portions of the code, such as security or UI.
turningsteel@reddit
I disagree. You shouldn’t be shipping things that you yourself don’t understand. That’s ludicrous and just asking for trouble.
Mission_Cook_3401@reddit
You can understand something by the tests it passes, the metrics it hits, the end result for the client. You can even understand something fundamentally without understanding it particularly.
Accomplished_Pea7029@reddit
That sounds like the level of understanding the project manager should have. Not the developer who is responsible for the code
Mission_Cook_3401@reddit
Yea, perhaps ideally... but developers that leverage LLMs are moving faster than traditional chains of communication and planning.
The PM should move to a higher order organizational principle, across many systems, communicating more with marketing and data science. Cross-team collaboration and improvement.
A non technical pm can’t guide a technical system build
anonyuser415@reddit
Vibe coding means you don't read the code. You just ~vibe~
https://x.com/karpathy/status/1886192184808149383?lang=en
Mission_Cook_3401@reddit
Ok, fair
Awric@reddit
Proper AI use is definitely a skill that’ll be focused on in things like performance review.
There are a few companies I know of (including the one I work at) that already do this, which is controversial, but I can see how it makes sense. The people who are great at using AI are noticeably more productive than before, and they get the recognition they deserve. The people who aren’t great at it (like the people who “vibe code” and panic when they hit a wall) aren’t given a bad rating or anything for now, but it’s seen as something they need to improve on
Pretend_Listen@reddit
A skill you certainly lack
ccricers@reddit
Google search is barely encouraged at interviews, unfortunately. They expect you to know trivia answers off the top of your head.
oblongfuckface@reddit (OP)
Right?? And with no follow up! Credit where credit is due, I admire his gumption
Whole_Sea_9822@reddit
Was the guy unemployed?
Honestly it seems like he's just using your interview as practice or to pass time. Like people do all kinds of shit nowadays so it's not really that surprising.
tehfrod@reddit
Yeah. I've seen that advice given: to keep your interview skills up, apply and practice at a company you don't really intend to work at, once a year or so.
It seems like really shitty selfish advice to me.
lostburner@reddit
It’s not selfish if there’s at least a chance that you might say yes, given the right offer.
Job hunting is a whole set of skills and tools, and it’s much easier when you’re not under existential threat because you have no income. There’s no similarly effective way to practice the process and get a feel for how you are measuring up on the market and what kinds of offers, enthusiasm, etc. are out there.
tehfrod@reddit
Oh, agreed. The advice was to do this at places you definitely don't want to work, though, which is why I think of it as selfish. Better to get together with friends in your industry and trade mock interviews.
Whole_Sea_9822@reddit
Yea not only that but I don't know how real these stories are, there's quite a few on tech discord channels where top devs intentionally do interviews just to "kill time" and "troll". Like they'll give some absurd number for the expected salary and when asked how did they do XYZ, etc, they'll act all nonchalant about it.
The whole "I used claude" story guy fits this description. If he's employed then he doesn't give a fuck, he's just doing this for fun / practice / troll.
oblongfuckface@reddit (OP)
Not full time no. This might be the most plausible explanation honestly. Not really sure what he was practicing though, I feel like I got no useful information out of him during the interview 😂
r0ck0@reddit
If he was like this for all questions... Sounds like maybe it was just a classic case of a Sheldon-like nerd personality with missing soft/interview/social/communication skills? Could that be the simpler explanation? Not exactly unheard of in this industry.
Or did he otherwise come across as if he did actually have communication skills? But just chose not to use them?
Whole_Sea_9822@reddit
Yea he's just bored or something like really it's honestly not that crazy, like I post on reddit and spend hours, I literally get nothing out of this, I do it just to pass time and laugh on the inside, he's probably doing the same.
abrandis@reddit
Why???? If the app works and you can explain its innards, why does it matter? Assembler programmers would be clutching their pearls to think you used a compiler to write your code...
Let's get real, companies don't care how the sausage is made, they just don't. Does it work, is it reliable, is it performant, do you know how to modify it? Great, so use whatever tool spits out the most code to get the job done.
I know this is going to be heresy to a lot of neckbeards here, but the fact of the matter is, code only matters when it doesn't work and you need to debug it; if it works and accomplishes its purpose, it's invisible. That's really the question at hand: can you work with what AI produces? AI is a tool, nothing more... but saying no, you can't use this tool because it does too much of the thinking, is silly.
Ok_Individual_5050@reddit
You know, this notion that "the code doesn't matter to the client" is some of the most ridiculous shit I've ever heard when you have any development experience at all. The number one question that gets asked when there's a significant bug is "How did this happen". How do you ever answer that question without caring about the code and the quality of it?
abrandis@reddit
You're delusional if you think executives at a company care about the technical details. That's what I'm referring to, and that's why you're paid the big bucks...
They just want you to fix the issue and move on... Code only matters to the folks directly touching it... Do you care about the millions of lines of code you interact with every day (payment systems, GPS guidance, email etc.)? ...at the end of the day people only worry about things they can change or have authority over...
noxispwn@reddit
I think you're missing the point. The issue is not how the code is created, but rather that some of the people tasked with creating it are either unwilling or incapable of reviewing it themselves before passing it off as "done". It's fine to use whatever tools you have at your disposal to create it, but it is also your job to make sure that it meets standards and is fit for purpose. If you're using a tool that is not producing perfect results all the time, which is definitely the current situation with AI, then you need be able to identify the issues as they come and correct them. If you don't have the skills required to do that then you're going to do a bad job.
NoleMercy05@reddit
QA / UAT exist
noxispwn@reddit
Those are important but insufficient when it comes to evaluating code because they focus on the behavior of the software from the perspective of a user. Furthermore, their job is to find bugs, not to fix them; you still need someone able to correlate the erroneous behavior found with the source code and correct it.
Here’s an example of an issue QA / UAT wouldn’t catch: instead of using a vetted library for encrypting user data, the AI generated code that uses a custom implementation. While the code works, it exposes the data to unforeseen vulnerability issues and the app does not benefit from security patches that it would have otherwise.
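To make that concrete - and this is just a sketch assuming a Node stack, with key management deliberately left out - "use the vetted thing" looks like leaning on the platform's built-in primitives:

```typescript
import { randomBytes, createCipheriv } from "node:crypto";

// Sketch only: Node's built-in AES-256-GCM instead of a hand-rolled
// cipher. How the key is stored and rotated is out of scope here.
function encrypt(key: Buffer, plaintext: string) {
  const iv = randomBytes(12); // 96-bit nonce, the standard size for GCM
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const data = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  return { iv, data, tag: cipher.getAuthTag() };
}
```

A QA pass sees identical behavior either way; only a human reading the diff catches the custom cipher.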
Mission_Cook_3401@reddit
Bravo
Mission_Cook_3401@reddit
Vibe code is legacy code
MonthMaterial3351@reddit
> Let's get real, companies don't care how the sausage is made, they just don't. Does it work, is it reliable, is it performant, do you know how to modify it? Great, so use whatever tool spits out the most code to get the job done. It's 2025, folks, don't be stuck by the dogma of languages or systems of yesteryear.
This is total bs straight from the gold standard amateur echo chamber.
Potterrrrrrrr@reddit
No one is saying that. If you read the post, the interviewee was unable to explain how anything worked, just that they made it with Claude. AI use today is like the way you were expected to use Google. Being overly reliant on Google would be a red flag; same thing for AI use.
Understanding-Fair@reddit
It's one thing to use Claude, then actually read what it wrote, understand it, and maybe do some cleanup, but to just generate and ship is unfuckingbelievable.
norse95@reddit
I worry that there’s some hidden side effects in the code it spits out that I am just too “dumb” to see… meanwhile people are out here just pushing whatever it gives without even trying to understand it. Crazy
Understanding-Fair@reddit
Yeah I have that worry as well. I've actually been thinking of switching to a more functional style of programming to minimize side effects from AI. At least if it's writing small, pure functions we can reason about them easily.
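e.g. a toy example of the kind of function I mean - pure, so its behavior is fully determined by its inputs, and easy to review and test no matter who (or what) wrote it:

```typescript
type LineItem = { price: number; qty: number };

// Pure: no I/O, no mutation of inputs, same output for same input.
function orderTotal(items: readonly LineItem[], taxRate: number): number {
  const subtotal = items.reduce((sum, i) => sum + i.price * i.qty, 0);
  return subtotal * (1 + taxRate);
}

// orderTotal([{ price: 10, qty: 2 }], 0.1) === 22 (within float error)
```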
Atupis@reddit
Thing is, who cares if the code is good? It is like saying "I am using Vim or Emacs" in a job interview.
Ok_Individual_5050@reddit
I would genuinely be more likely to hire someone who says they're comfortable with IDEA-based IDEs than the VSCode-based ones in an interview. Choice of tools shows where your priorities lie as a developer.
r0ck0@reddit
I switched from jetbrains to vscode because of the extension ecosystem.
For me, the combo of vscode + my extensions is more tools that are useful to me than I had with jetbrains with its limited, buggy & dying plugin marketplace.
These debates about whether vscode is an "editor" or "IDE" are just subjective language haggling, and pretty much only focusing on default install state too.
Also more stable settings in vscode too. Lost my settings so many times trying to sync them with jetbrains IDEs. Their decision to split the IDEs into separate products made money for a while, but I couldn't be bothered dealing with that shit wasting my time any more. They claim IDEA Ultimate works with all languages, but it doesn't.
And their business decision to create Fleet was completely insane too... it's basically vscode without vscode's biggest asset... its extension ecosystem. wtf was the point of that? Aside from infuriating their long term loyal customer base.
It's such a pity, because if they'd just released an actual proper single all-languages IDE, and focused on making that stable instead of wasting resources on Fleet, a lot of us probably would have stayed.
So that's where my priorities lie as a developer. A stable editor, with more features in the end, that doesn't lose my settings just because I need to code in another language.
big-papito@reddit
They are REQUIRED to use AI at FAANGs now, or you are getting shitcanned. There is a lot of getting high on their own supply, and it will be their downfall. It won't happen right away. Slowly, security issues, downtime, performance problems, and just bugs will start creeping up
dizekat@reddit
FAANG pretty much requires you to claim you used Claude even if you didn’t.
DAA-007@reddit
Yes, same. I was conducting an interview for my company, and I asked the person what his day-to-day work looks like.
He very casually said that he is mostly working with ChatGPT and doesn't remember the syntax of array methods.
Glum_Cheesecake9859@reddit
Aren't employers pushing people to start using more AI? At least in FAANG companies.
anonyuser415@reddit
I would bet most employers do not want to hire an engineer who vibe codes apps and ships without understanding
MendaciousFerret@reddit
You'd be surprised. Engineers who know about excellence in software engineering care but CEOs definitely do not. In fact they are actively encouraging the opposite. It sounds like paranoia and negativity but a lot of tech leadership see software engineers as a big part of their problems; too slow, too expensive, too many problems. CTOs and engineering leaders who know better are all biting their tongues and nodding along.
anonyuser415@reddit
I am sure that there are employers fine with hiring someone unable to talk about their code.
I do not think most employers are at this point.
If an interview has you talk about your code, most employers will probably expect you to talk about your code.
Upbeat-Conquest-654@reddit
I wouldn't bet on that. I mean, most employers want engineers who solve problems fast. I don't think they really care about whether or not you understand the solution.
Glum_Cheesecake9859@reddit
Yeah.
coffee_beanz@reddit
This is true at my place. Not FAANG exactly but Big Tech. Our interview process has been completely revamped to allow for AI usage in interviews to see how “fluent” candidates are. It’s considered a plus. We are mandated to use AI in our jobs and our usage is tracked. I’ve been interviewing to move somewhere else and the advice I’ve been given generally is if a company allows you to use AI, then you definitely should to show you’re staying up to date and are comfortable using it. So far I’ve seen no in between in interviews. Either it’s banned entirely from the interview and any usage is an instant fail, or it’s encouraged and part of the evaluation.
intertubeluber@reddit
That’s some real IDGAF energy.
xampl9@reddit
Big Claude Energy, you might say
theshubhagrwl@reddit
The FAANG confidence I guess
mq2thez@reddit
The people who don’t fall into the trap will have even more job security.
Fidodo@reddit
These people who don't give a fuck about craftsmanship barely put in the work before AI either. At least now it's a lot easier to pick out the frauds because these tools enabling their own laziness are their own worst enemy.
My team is full of great people who care about doing high quality work. They haven't been affected one bit because AI or no AI they want the code to be high quality and they want to know how things work.
AI is great for people who want to learn because it's like a personal tutor that can rapidly advance your development, but that only works for curious people who have pride in their work. Uncurious people will offload their thinking and sabotage themselves.
PoopsCodeAllTheTime@reddit
I will keep repeating this to myself until it is true, it's been about two years already.
big-papito@reddit
Four years ago these tools were not that useful. Now you can easily fall into the trap of letting it do all the thinking.
PoopsCodeAllTheTime@reddit
But the marketing was useful ;) Especially in providing a cover excuse for layoffs
KeytarVillain@reddit
4 years? ChatGPT came out less than 3 years ago.
PoopsCodeAllTheTime@reddit
4 years ago we got GitHub Copilot and the post pandemic crash began
annoying_cyclist@reddit
I'm hopeful but doubtful about this as an answer.
The echo chamber around AI usage, vibe coding, and LLMs today is sort of singular, at least across my short career. Tons of people in the executive, VC, and investor class, and many managers, are convinced that it's the future, that it promises immense productivity gains, etc. We can observe that the reality is mixed: LLMs accelerate code generation and shift work/risk elsewhere in the SDLC (code review, defects, knowing what the hell you shipped a month ago), and points like OP's about engineer growth and brain rot are real. But the investor/executive/VC types are primed by their echo chamber to see such sentiments as engineers feeling their job security threatened, and in any case they do not understand enough about software to really grok the objections anyway. Unfortunately for us, they gatekeep job security, and their opinion about AI, however wrong or reductive, can matter more than reality.
(For any Futurama fans in the crowd, remember the episode with the spa planet, and how Hermes organized it so efficiently that all the work is done by one Australian man? Maybe being effective and missing the AI trap just turns you into the Australian man)
Norphesius@reddit
That still doesn't change the fact that (unless there is some drastic improvement in AI coding tools) organizations that over-rely on AI for code generation are going to collapse under immense tech debt. Like business-destroying amounts of tech debt. The higher the shit stack gets, the more damage it's going to do when it falls over.
Regardless, I would rather be the "one Australian man" anyway: actually employed, versus the dozens/hundreds/thousands that got laid off into a greatly constricted job market.
annoying_cyclist@reddit
Maybe I'm old and cynical, but I'm not so sure I buy that. Poorly used LLMs will absolutely amplify tech debt, tech debt will result in buggier products and a worse customer experience, but the end result of that is often customers getting used to worse software and companies accepting a level of toil to keep the pile going.
Amazon and AWS in particular are a common example here. Notorious for brutal on-call driven by tech debt, plenty of people predict its doom when it all falls over or they can't hire anyone to burn out, but in practice they have no problem replacing the folks they burn out to keep it going. That's going to become even easier if the market becomes more constricted, or if that sort of KTLO tech debt maintenance is rebranded as a support function rather than an engineering function and has its compensation adjusted down to match.
blahajlife@reddit
That's the key. It's going to be a question of what the likelihood and rate of that collapse is, and its impact. Some places may scrape along, others will collapse entirely. Will everyone understand what got them there? Stuff like writing good commit messages and PR descriptions is going to be more important than ever in capturing the why and sometimes the how of changes. AI summaries only really capture the what, which the code does itself. That's going to contribute to the tech debt burden as future folks struggle to understand past changes.
riskbreaker419@reddit
I think this is the answer. The more people let their critical thinking skills atrophy from depending on LLMs too much, the more valuable the people who don't allow that to happen will become.
I bring it up often in my job when I teach people how they can use LLMs to improve some parts of their work. I always note that it works best the more you know about your codebase and the more domain knowledge you have (either product knowledge or tech stack knowledge), so don't rely on it too much. Not sure yet how many are taking the advice, but we'll be able to tell soon enough.
fuckoholic@reddit
My main issue with LLM code is that even when it happens to work, it's usually the opposite of succinct. LLMs do offer solutions, but they usually include unnecessary steps, or outright use the wrong thing. Like: I already have a raycaster in the code that I gave you, why do you create another one? And then another one? It's all technical debt from line 1.
Fidodo@reddit
I don't allow my own code to go into production without having thorough code review from co workers. Why the hell do people think it's ok to let AI code in without thorough review?
TheSkaterGirl@reddit
Worse: even when you show them, they still manage to fail lmao
Anacrust@reddit
A lot of orgs have little or no real engineering culture. Everyone pumps out whatever "works" in the moment and moves on. No real critical thinking or long-term accountability. AI will probably be a multiplier for whatever culture exists.
nickelickelmouse@reddit
God so true it hurts lol.
bigbry2k3@reddit
I can't blame you with these examples. In the first example you did the right thing; it would have been a complete mess if you'd hired that dev. With the junior dev who uses ChatGPT to write SQL, you gotta get him off of that. Tell him that before he comes to you, he needs to take a look at the code and rubberduck it. Meaning: practice telling a rubber duck how he wrote that query, what he expects, and where it's going wrong, step by step, without ChatGPT. By the end of the day he needs to have a query that works. If he doesn't, then it's time to go to your boss and have ChatGPT's IP address blocked. The real problem with these kids using A.I. for everything is that they paste our proprietary code into ChatGPT, maybe even with sensitive information. I work in the healthcare industry, so that's so dangerous they don't even allow the IP address on any of our computers.
Blankaccount111@reddit
I'm not sure that's the cause. I think it's more a form of ennui. The "CEO LEADERS" of the world have unanimously proclaimed that they don't really care about workers and that they are certain AI/LLMs will be replacing all the troublesome employees. This industry has always had a lot of friction between doing the job and being perceived as doing the job. Rewards rarely followed good performance even before the AI craze. Now what's the point? If the boss says you must use AI and still doesn't care about the outcome, they have essentially disconnected you from the responsibility/rewards/consequences of your actions.
Fifthbloodline@reddit
AI is literally designed to glaze its user. I just completed a Cert IV in IT programming, and even I can tell you it uses way more lines and objects than necessary. It's like a high school student trying to hit a word count in an essay without doing research.
On a side note, I can't seem to find work. I'm new to the industry, have some related experience from before I got my certification. Any tips? Looking in Western Australia.
babuloseo@reddit
Are you Australian?
alonsonetwork@reddit
Yo, hit me up dawg. I don't mind doing contract work lol. I use my brain... and I use AI exactly how you suggested: to help me understand a problem and get to a solution (especially if it's very verbose)— mostly to catch my stupid errors like variable misspellings, forgotten schema updates, etc.
What you're seeing is 2 types of people: Those who are mentally lazy, and those who are results driven. A results driven person uses AI to get to his end-result faster—but he verifies along the way and does his due diligence. A lazy person lets AI do the thinking for them and doesn't bother to learn the fundamental problems being solved.
On the SQL problem: yeah, classic. I fight with people online all the time because of ORM laziness. Just learn SQL. If you understand your DB, there's no query too complex. And guess what— AI ain't gonna learn your DB for you... It can just assist you with syntax, columns, and the verbosity of typing out a query or procedure (see the sketch below).
Anyway, message me if interested. TS / SQL is my stack.
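(A hedged illustration of the SQL point above; every table and column name here is invented for the example, not taken from the thread. The first query is the kind of CTE pile an LLM tends to produce, the second is the direct query that returns the same result.)

```sql
-- Generated-style version: chained CTEs for a one-clause problem
WITH active_orders AS (
    SELECT * FROM orders WHERE status = 'active'
), customer_ids AS (
    SELECT customer_id FROM active_orders
), deduped AS (
    SELECT DISTINCT customer_id FROM customer_ids
)
SELECT customer_id FROM deduped;

-- Direct version: same result set
SELECT DISTINCT customer_id FROM orders WHERE status = 'active';
```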
rivasw@reddit
Those are post-pandemic engineers (the ones who took a Udemy course in a month). That's why you shouldn't rely only on whether a candidate can solve an algorithm; they should also be pragmatic and have a solid background.
mac1175@reddit
At least he didn't say he "vibe coded" it. I cringe when I hear that.
namonite@reddit
AI assisted vibing
anoncology@reddit
I wish there was a different word for vibe coding. Totally agree it is cringeworthy.
Upbeat-Conquest-654@reddit
Now you're giving me an existential crisis as well. The thing is, from a management standpoint, understanding the solution was never a priority. Management famously only cares about solving the problem at hand. And honestly, that is their job.
As a developer I had to understand the problem very well, often better than management, and then build the solution. But neither is really necessary anymore. Developers are becoming managers themselves, just having the answer machines solve their problems. And now I'm wondering, is it even necessary to understand the solution when it solves the problem?
When I use AI, I ask it to explain the solution to me and ask follow-up questions to make sure I understand why it did what it did. But let's be honest, that's just my own curiosity. It makes me grow as a person, but it does not help the company. I can totally imagine people applauding the lack of curiosity as a go-getter attitude of someone who is more interested in solving problems than technical details.
psyyduck@reddit
Yeah timelines will just shorten, and bothering to understand the code will be like bothering to understand the assembly.
Upbeat-Conquest-654@reddit
Those are good analogies. I don't like it, but it looks like that's where we're heading.
I'm too young to have experienced it myself, but I guess many of the people who programmed assembly thought the assembly code the compilers created was messy, needlessly complex and inefficient.
psyyduck@reddit
Yeah and in this case you can also just ask for quality code. I'm sure people will figure it out in a month or two.
I’m hopeful that having super cheap high-quality intelligence will improve society in the long term. Evil is almost always dumb. But it could be rocky in the short term.
Ill_Lead_9633@reddit
Yes on the first question.
No on the second. What we're seeing now with AI slop is not sustainable. Sooner or later people are going to be forced to come back to their senses.
IceMichaelStorm@reddit
Well, good reason to not employ the guy. And potential reason to fire the junior dev
Cool_As_Your_Dad@reddit
All I can say is... I'm f-cking tired of this AI push. We had our meeting last week about how AI will "enhance" our output etc.
I tried vibe coding with a co-worker. HOLY SHIT... what a waste of time and effort. Yes, it pumped out code, weeks' worth of it... but then try to make 1 change. It takes days to "fix" 1 issue.
The 10x multiplier they talk about is a pipe dream.
If only this bubble would pop so we can just go back to being normal...
Meta_Machine_00@reddit
It will never go back to being "normal" lol. Vibe coding is the new coding.
Cool_As_Your_Dad@reddit
Haha. Good luck to the people trying vibe coding on enterprise solutions.
Meta_Machine_00@reddit
Free thought and action are not real. The only reason you see vibe coding in the first place is because it was generated by the universe to be observable by the existing entities. We could not avoid vibe coding. So if it happens, it happens. The great thing about AI is that it doesn't have all of the emotional baggage that the meat bots hold.
Cool_As_Your_Dad@reddit
You might be chained to not free think. The rest of us will continue.
Meta_Machine_00@reddit
Please explain how free thought operates. Do you have direct control over which neurons are firing off?
bravopapa99@reddit
At 59, I see and share your pain. I am seeing elevated levels of PRs with AI slop and it is driving me down. I refuse them point blank until all variables match naming conventions and the author can explain every line like they wrote it; code ownership in the new regime! LMAO
I am not having our code base fill up with shit. I have had a rough time fighting management for 5 years now to reduce the amount of human tech debt I saw when I first joined; I *like* where our code is, and I will be damned if I am letting it backslide due to "AI" being pushed by management or any other bugger. They've seen the downtime reduce, UI errors reduce, etc.; lately we just had two months with nothing reported at all.
If your excuse is "I couldn't do it in time" then you failed to raise the matter and be given more time etc. That's not being a proper developer, that's being a slave.
Meta_Machine_00@reddit
Everyone is a slave to physics. You are a slave to your life's path and the current neural structure you have. AI is a mandatory generation of the physical universe. We could not avoid it appearing before us. If I am hiring, I am looking to avoid people that hallucinate about the capabilities of their own intelligence. Calling it "AI slop" demonstrates that you misunderstand both AI and your own brain.
bravopapa99@reddit
So, was free will baked into the Big Bang then? Hot topic right now!!
AI SLOP is a widely used common term. The training corpus contains all the good and bad code out there they could find. Hallucination feels like a problem they cannot solve, at least not yet. We have all seen the generated slop: eight fingers per hand, broken text, Will Smith eating noodles, faked legal precedents being taken to court by lazy legal teams, etc.
I use Devin almost daily for TDD test assistance, Jira population, etc., but I don't trust it to offer up too much code, especially core code; throwaway stuff is fine, e.g. moving stuff about trying to get a fresh Docker script up.
Your final two sentences clearly demonstrate a willingness to make snap character assessments based on... nothing, really. So you misunderstand me too. I have a long-standing interest in brains and consciousness; I have followed Roger Penrose / Stuart Hameroff for at least the last 5 years, as I find their proposal that microtubules in the brain could take part in the generation of what we call "our consciousness" compelling. Penrose also does not believe that our intelligence is the result of a computational process:
https://www.youtube.com/watch?v=iTVN6tFknCg
Finally, I'm now a lapsed lay Buddhist, but 15 years or so of meditation and introspection probably means I understand my brain more than most. My brain. Not yours.
Meta_Machine_00@reddit
You only understand your brain in the ways that can be generated out of you at the time. Meditation isn't actually useful. You just "print" out whatever your brain is forced to produce at the time. So once again, these beliefs demonstrate a lack of understanding of what is going on.
Yes, "AI Slop" is a common term, but 99% of humans hallucinate about what humans are. I didn't say it was avoidable, but it does demonstrate that you currently share that common misunderstanding with most everyone else.
silvergreen123@reddit
As a junior, I worked with three other juniors who completely relied on AI. They performed poorly, and either couldn't finish or took incredibly long and produced sloppy code.
I also worked with two other juniors who used it, but effectively, and they were able to implement features quickly and on time.
slayemin@reddit
Yeah… I can't say I have much exposure to this personally, since I rarely work with others. But this sounds like a lot of people aren't understanding how to use AI correctly. It's a tool to help guide your work, not do it for you. I have used AI sparingly and it's been hit and miss.
Back when I was learning precalc 2, we had these fancy graphing calculators which could do a lot! Our professor discouraged us from using the calculators too much. He said that the calculators could turn into a crutch, causing us to rely on them too much and be unable to do basic math without them. The warning was there: don't let your calculator replace your thinking, skills and intuition. The same principle is still true today with AI!
AI is no shortcut to avoid understanding a problem and solution deeply. AI hallucinates. It may not understand your prompt because you wrote it poorly, yet you take the generated response and treat it as gospel coming from an omniscient, mind-reading entity… The AI may even just give flat out wrong answers. How would you know the answer it gives is wrong? How do you check the work of an AI for correctness? How do you know the generated output isn't just overcomplicated AI slop?
The reality is that we humans have to own every line of code written, even if it's generated by an AI. Part of that ownership process is understanding precisely what each line of code does, as well as vetting that the solution is 100% correct and passes all unit tests. Code you do not understand or did not vet should never be checked in! It's incomplete. You will be the maintainer of that code and it will need to be updated in the future, and that requires fundamental understanding and good documentation.
Personally, I am wary of AI code. I recently needed to convert a cubemap image into a spheremap by ray sampling pixels from the cubemap surface. I had a 75% working solution, but decided to ask ChatGPT for one. It spent over a minute calculating a carefully thought out response and then gave me an overly complicated white-paper answer which was just garbage. I then spent a minute googling, found a 2017 forum post from a respected graphics engineer I knew, and his solution was about 15 lines of simple code. He had a minor error where he wasn't clamping his values to avoid floating point precision errors, but otherwise it was perfect. I used it, understood it, tested it thoroughly, fixed the minor issues, and made it mine. That's how you're supposed to do it.
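(For the curious, a minimal sketch of the cubemap-to-spheremap sampling idea described above, in TypeScript. This is not the post's actual code; the face orientation conventions here are assumptions and vary between graphics APIs. Note the clamps guarding against exactly the floating point drift mentioned.)

```typescript
type Vec3 = { x: number; y: number; z: number };

// Clamp guards against floating point drift at face edges and poles.
const clamp = (v: number, lo: number, hi: number) =>
  Math.min(hi, Math.max(lo, v));

// Equirectangular (spheremap) pixel -> world-space ray direction.
function equirectToDir(u: number, v: number, w: number, h: number): Vec3 {
  const lon = (u / w) * 2 * Math.PI - Math.PI; // -pi .. pi across the image
  const lat = Math.PI / 2 - (v / h) * Math.PI; // +pi/2 (top) .. -pi/2 (bottom)
  return {
    x: Math.cos(lat) * Math.sin(lon),
    y: Math.sin(lat),
    z: Math.cos(lat) * Math.cos(lon),
  };
}

// Ray direction -> cube face plus [0,1] texture coords on that face.
function dirToCubeFace(d: Vec3): { face: string; s: number; t: number } {
  const ax = Math.abs(d.x), ay = Math.abs(d.y), az = Math.abs(d.z);
  let face: string, sc: number, tc: number, ma: number;
  if (ax >= ay && ax >= az) {
    face = d.x > 0 ? "+x" : "-x"; ma = ax; sc = d.x > 0 ? -d.z : d.z; tc = -d.y;
  } else if (ay >= az) {
    face = d.y > 0 ? "+y" : "-y"; ma = ay; sc = d.x; tc = d.y > 0 ? d.z : -d.z;
  } else {
    face = d.z > 0 ? "+z" : "-z"; ma = az; sc = d.z > 0 ? d.x : -d.x; tc = -d.y;
  }
  // Map [-1,1] -> [0,1]; sc/ma can land just outside due to float error.
  return { face, s: clamp((sc / ma + 1) / 2, 0, 1), t: clamp((tc / ma + 1) / 2, 0, 1) };
}
```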
WittyCattle6982@reddit
That's one of the dangers. You really have to read and know what's going on. The problem is that you might know, but it's very easy to forget.
angrynoah@reddit
Indeed they are.
Yes it is.
Downtown_Isopod_9287@reddit
It’s just like using a random, untested library. You don’t know who wrote it, you don’t know if it really fits your needs or use case and you certainly don’t know if it’s even secure or free of show stopping bugs.
perfection-nerd@reddit
From my experience, for every piece of code written by AI, I take a look at it, try to understand what it's suggesting, and learn from it.
cballowe@reddit
On some level "have delivered a solution, on time, using a set of technologies acceptable to the customer" is a useful point. Being able to steer the various LLM code generators into producing an acceptable output is a skill. Not the skill you're looking for, but it is a skill.
I usually tell people that the job of a software engineer is to solve problems for a business that revolve around software. It isn't always to develop software. It'd be interesting to drill into their experience with Claude - how they prompted, how they knew if it was good enough, the kinds of errors that they encountered and needed to correct. Even questions about how they assessed the quality, whether the product was a one off or needed maintenance, whether they were responsible for that, etc. (Someone who is actually senior+ would understand the importance of being able to explain that.)
djnattyp@reddit
Typing "No do it again, and don't hallucinate for sure this time" at a nondeterministic system is not a "skill".
CanadianPropagandist@reddit
Apologies in advance but over the last few weeks specifically I've soured on our industry, so this will come off very bitter, but I also sadly think it's true.
It's gonna get way worse, and I think it's a multi-faceted problem with the industry and its race to the very bottom of driving down costs. This will end in some very strange places. The curse of the quarterly-results mindset.
For reference, I'm in ops with ops coding tasks and ops duties. I worked with very talented senior developers I respected until recently, when the entire top layer was laid off and replaced by the LLM-wrangling whiz kids of the hour. Amazing shit, and that situation is currently festering, with results you can imagine coming to fruition soon.
Anyway, my day to day has become a constant arm wrestle with management to justify my relevance too. They can't fire me, and they know it, but they badly want to, if only the boy wonders of AI could figure out how to get Claude to do my job. The hot minute they think they can I'm getting airlocked.
So the tech industry as a whole went from coveting the best people, to resenting us for our cost, and now they're in a phase where they imagine they don't need us at all, if they can only unlock the right agentic prompts to replace us.
It's all a fantasy of course, and this harsh reality is in the process of manifesting itself once the bubble bursts, but in the mean time.. who can even muster up the fucks to give?
Of course everyone's taking shortcuts using AI. It's almost spitefully passive aggressive at this point. We aren't valued. We were never "part of the family". And our careers are disposable. Why even bother for these people? Use AI. Let it shit the bed.
bucolucas@reddit
I noticed a big push for HATEOAS at my company and started wondering if it was to prepare for AI, and sure enough it was
ares623@reddit
How does HATEOAS prepare for AI? Is it because it's more 'accessible' for the LLM?
p1-o2@reddit
Yes, LLMs work very well with hypermedia.
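(A rough sketch of why, with entirely made-up endpoints: a hypermedia response carries its own next actions, so an LLM agent can follow links instead of guessing endpoint names.)

```typescript
// Hypothetical HATEOAS-style response: the payload tells the client
// (human or LLM agent) what it can do next.
const order = {
  orderId: "1234",
  status: "pending",
  _links: {
    self:   { href: "/orders/1234",        method: "GET"  },
    cancel: { href: "/orders/1234/cancel", method: "POST" },
    items:  { href: "/orders/1234/items",  method: "GET"  },
  },
};
```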
BalanceInAllThings42@reddit
Couldn't say it better myself. Disagree and commit. Leadership wants to force AI? Let's do it. To avoid reviewing AI-slop PRs, we even added AI to review the code as a first pass. We are replaceable; let it be.
RoadKill_11@reddit
You can’t outsource thinking
You should:
- Use AI to help plan your features
- Use AI to discuss ideas
- Use AI to understand the codebase
- Use AI to carry out the planned implementations
- Review and read the code once you're done and understand how it works
You should not:
- Try to one-shot everything
- Blindly accept AI outputs
- Use AI when you have no idea how you would do it yourself
Cahnis@reddit
The new batch of jr engineers are gonna be absolutely cooked by AI
Only-Cheetah-9579@reddit
you are not getting old. It's happening everywhere.
I have been working for around 10 years as a software dev and I completely understand your position.
A lot of people don't enjoy programming so they don't want to use their brains to make something.
It's one thing to use LLMs (I do that also) but it's another to just blindly accept the code they generate without thinking.
It's a trend in the industry that will result in layoffs imho, you don't have to work with these people if you don't want to.
LLMs accelerate enshittification.
NoleMercy05@reddit
I love programming. 35 Yoe. I use AI all day every day. I can build 5 apps at once now.
Only-Cheetah-9579@reddit
But do you actually read the source code for all the 5 apps?
I consider LLM-based development outsourcing work. If you don't even look at it and just take code from an LLM without review, you are not doing much programming; that sounds like project management to me.
If the LLM does the programming for you, then you are not the one programming.
Gwolf4@reddit
In my case it's the other way around. I love programming; the software development industry kinda makes me hate it.
TraceyRobn@reddit
It goes in cycles; this AI-replacing-techs wave is just the current one.
I've seen many come and go. There was CASE (computer-aided software engineering) in the late 1980s, then object orientation, then UML, then no-code, etc.
They were all meant to replace most of your engineers and let your company use cheaper engineers. It never really turned out that way.
AI is impressive, probably better than all these trends combined, but, if the past is a guide, they will still need people who can think.
AlwaysAtBallmerPeak@reddit
This is why I work alone. I did consulting for many years in large corporate contexts. I already have too many horror stories pre-genAi, so I don't dare imagine what it's like nowadays.
My opinion: yes, the industry, as it used to be, is completely and irreversibly fucked.
But, this is a good thing. Large corps hiring lots of lazy and/or junior devs will get displaced by smaller teams consisting of senior devs who work hard and use ai tooling.
Meta_Machine_00@reddit
Brains are a type of generative machine too. It is not a person's fault that they are perceivably "lazy" or junior. They are a product of physical circumstance. The problem is that society hallucinates that free thought and action are real.
Perfect-Campaign9551@reddit
Every level of software is another abstraction that lets a developer "not have to think" about stuff. Are you gonna fail a dev next for not knowing how assembly code works?
Are you not going to give credit for someone keeping up with latest tech?
Have you done any AI yourself?
NoleMercy05@reddit
This feels like when Java first came out.
So many of us C devs couldn't fathom letting the computer decide when and how to allocate and deallocate memory.
Their brains will turn to mush because they don't understand how memory works!!
The software will be slowed down dealing with inefficient garbage collection!!
These Java devs don't even know gcc optimization flags!!
_Kine@reddit
The way I've explained what I'm seeing to friends is that a strikingly high number of companies that have adopted the "AI" mindset for development have chosen to lower their bar of quality to match what AI can do out of the box rather than manage AI to meet their existing bar of quality. All comes down to money (surprise surprise). Why pay 100k for "good" when I can get "gud enuf" for 10k at the surface level?
NoleMercy05@reddit
Well, if you were buying something and they had 2 versions, 1 that works for $10,000 and another that works a bit better and was hand-crafted for $100,000...
drnullpointer@reddit
I totally feel you.
I am hard at work trying to explain to management why AI, *the way it is being used*, is not the solution.
They all seem to feel I am being dramatic and just trying to save my position...
NoleMercy05@reddit
To your management, all that code is a black box: input -> output. They don't care what the black box does inside. They care that it works well enough and how much it costs.
theSantiagoDog@reddit
I have noticed that when I don’t want to get too deep into a solution, I will lean on Claude to get me part of the way there. In that way, I can feel my critical thinking skills atrophy from disuse.
aidencoder@reddit
What's so wrong with a problem taking a lot of concentration?
Are people's brains that fucked that concentration is a bad thing? wtf
vegetablestew@reddit
Sometimes I don't want to be bogged down by the intermediary problems. Sometimes I don't really care about the solution at all; that may differ for everyone.
For me, anything that relates to configuration, networking, authn/authz, or elaborate bash scripting gets thrown right at the AI.
aidencoder@reddit
AI does your auth. Got it.
NoleMercy05@reddit
Auth is a known pattern. If you are creating your own Auth system you are not smart.
vegetablestew@reddit
Surprisingly, there is enough data on it for them to be pretty competent at it. Of course you should vet it, but it's a solved problem with a tedious solution.
NoleMercy05@reddit
Time
theSantiagoDog@reddit
I don’t feel like you’re understanding my point. Now that I have a tool at my disposal that can do some mental lifting, I tend to use it, at the expense of my own skills staying sharp. Whether that’s a good or bad thing is a matter of opinion.
aidencoder@reddit
It's like not doing exercise and developing a weak heart and poor health.
Clearly bad.
theSantiagoDog@reddit
The entire history of technology could be viewed as offloading traditionally human labor to machines in order to increase productivity, so that people can do other work. When was the last time you did arithmetic without a calculator? Is this technology fundamentally different? I'm not disagreeing with you, but it just doesn't matter. The technology cannot be put back in the box.
riskbreaker419@reddit
I think calculators are a great example here when comparing them to LLMs. We reliably know a calculator does what it's supposed to do 100% of the time (accuracy), it does the same thing reliably over and over again when given the same input (deterministic), and you can use it for nearly free (economically viable).
LLMs can't do any of those three things well enough yet (and may never be able to). Until they get at least 2 of those 3 things down, I don't see them being as trustworthy and reliable as a calculator, and as such, we can't let our own skills lapse.
Ok-ChildHooOd@reddit
That's fine if you can turn on the concentration switch and finish the job.
MorallyDeplorable@reddit
I can't wait until all the old curmudgeons who think AI is the devil retire
ThrobbingMaggot@reddit
Yes have also noticed this. The same people doing this also now seem devoid of life and engagement. Feel like this is because it is a hell of a boring day just prompting rather than using your own problem solving ability and getting the coding high.
aneasymistake@reddit
My crisis is coming from the direction of senior management, who WANT the engineers to work with AI in the way you describe. They have chosen speed over quality and refuse to recognise that the speed is destroying the quality. I only see job losses, falling sales and chaos ahead right now.
Riman-Dk@reddit
Yup. 100% valid. The extreme push from management towards total adoption doesn't help matters.
Now, take what you wrote here and blanket apply it all across society. This is very real and it's happening now, all around you, from schools to lawyers and policy makers to doctors and nurses and it's freaking terrifying!
ShadowStormDrift@reddit
Look I mean it's a serious problem.
Like a year ago I was blown away by how quick AI was able to do things for me.
And I've grown quite dependent on it I won't lie. I'm good at expressing what needs to happen in order to solve the problem. But I can definitely feel the "Wait how do I construct a lambda statement again?"- memory atrophying.
But at the same time, business has rapidly adjusted to the new pace with which we can bust out software and gone is the expectation of there being a barrier to writing in another language. Never written in C before? Bro all good, just have an LLM do it. And you feel increasingly like an ill informed driver behind the wheel of a car being assembled around you. Suddenly able to do things it would have taken you years to build the skills in, yet lacking the necessary experience to course correct.
And like sure, before you'd have needed to hire a React dev, and you can now instead get your intern to build that React Flow component, and hey that feels pretty awesome to feel the barrier to building be dropped substantially to anyone with the ability to express themselves cogently.
But I definitely feel that age-old pattern of the productivity increase going to the top to make life easier there, and industry just learning it can squeeze a little harder.
Red Queen Hypothesis and all that. Man life is exhausting.
Natural_Squirrel_666@reddit
I intentionally have no-AI days, where I challenge myself to solve everything the old-fashioned way. And that definitely helps. Even a week of using AI to code turns my brain to mush, so I do at least one day per week without AI as mental and professional fitness.
The biggest issue is that the degradation process is slow and seamless, and you don't notice it while you're in it. Only later does it strike you how much worse you've become. At least that's what I noticed about myself. Hence, no-AI days.
Sure-Business-6590@reddit
I use Claude almost daily, but I always triple-check and often clean up/optimize what it wrote. There is never a situation where I open a pull request with code that I wouldn't be able to understand and explain. Some people are just coasting, and I understand them tbh. The industry is shit, no raises, layoffs left and right, why would you care?
garn05@reddit
AI is a more robust version of Stack Overflow. Before, devs copied code from there; now they copy it from AI.
zabby39103@reddit
I don't think you're strict enough. Nobody at my work would dare come to me with a three-nested-CTE mess of a query and then have the audacity to say it was AI generated. Okay, I guess he's a contractor, but you do have levers with these guys too.
Honestly though, it is the job of senior devs to enforce standards. I don't have the time to go deep on every PR I approve, but I will occasionally grill people on theirs and if I get the impression they don't understand what they're doing I reject it and tell them to come back when they have code they can explain.
Additional_Rub_7355@reddit
They really don't care.
po-handz3@reddit
'It's like folks no longer want to challenge themselves'
Yeah, no shit. 30% inflation and zero wage growth basically removed all reward from hard work. There's no incentive to do anything other than grind LeetCode, because the only places with inflation-protected compensation are those with stock comp, and that's all they care about. All this blaming of 'AI' or 'gen Z' is a load of crap.
I'm 10 years into my career yet haven't had a raise, promotion or anything since I entered the field as a junior, despite getting a CS masters, winning Nvidia dev contests, mentoring, etc. And I interview regularly just to get a view of the field, and I will tell you 95% of positions are paying less for a senior than they paid a junior 6 years ago.
That was just a personal anecdote to add context, but I feel it's broadly applicable. The feedback loop from 'hard work' to 'career progress' is completely broken. At least as far as I can tell.
Beneficial-Ask-1800@reddit
Posts like these kinda make me sad.
I am a junior developer. I like programming because of the problem solving. Before programming, I enjoyed logic puzzles like sudoku, and programming gives me that same feeling, even better, since I am building something meaningful.
However, I have no job, so seeing people who don't enjoy it, or don't even bother trying to improve, get opportunities like that and not use them makes me jealous, lol
AConcernedCoder@reddit
I'm not worried about the industry long term. I'm worried about the immediate fallout that's coming when the consequences of relying on a platform that can only ever simulate intelligence, with none of its own, catch up with us, and I'm silently assessing the impact it will have. It's a shame, because I really do think that AI/ML has huge potential, if only organizations wouldn't jump to the immediate conclusion of eliminating jobs or having it do your thinking for you.
myAnonAcc0unt@reddit
You're not just getting old. I see the LLM mindset at my employer. There was always a critical thinking problem here, but now AI coding tools enable it. It puts undue strain on code reviewers, who end up correcting fundamental issues, aka doing the actual work for people who can't be bothered to think for themselves. People don't even read the output for basic things like adherence to conventions and spelling. These are people making six figures who could collectively be replaced by one junior and a Cursor subscription.
savornicesei@reddit
I believe critical thinking is shrinking as technology evolves. This reminds me of Asimov's Foundation, where entire civilizations lost the technical knowledge to build and operate nuclear reactors.
Linaran@reddit
I usually make it my business to 110% understand the crap (it's not always crap, but ya gotta remind yourself) AI generates. So someone just crapping out a node project for a client and shipping it feels like fraud.
Super_Field_7277@reddit
it is crap.
fire_in_the_theater@reddit
i've been out of the industry for a few months now and i dread going back in :/
bit_shuffle@reddit
I see a couple of things going on in your post.
1. The guy who was on a tight deadline, and used AI effectively (did it work or not?), you rejected because he used AI instead of writing it himself.
2. The guy who's trying but failing, you reject because he's failing using AI, instead of failing himself.
You then say "AI is turning our brains to mush."
Let's try an experiment. Let's replace "AI" with "IDE".
The guy is on a tight deadline. He writes his code with an IDE with a special plugin for Node. You reject him because he didn't write it in a standard editor like vi or emacs.
The guy is failing, because he doesn't understand how to load the SQL support module in his IDE. You reject him because he can't formulate the SQL query syntax from memory alone.
The technology exists. Genies don't go back into bottles. If you reject everyone who's trying to use new technologies, you're going to cripple your organization, and eventually, yourself.
Be an engineer. Understand how to make the technology work for your organization. Because your competitors are learning how to make it work for theirs.
hippydipster@reddit
Technologies aren't all equivalent and it doesn't make much sense to lexically substitute different words in, that have different meanings, just because they are all a "technology".
bit_shuffle@reddit
If you think my statement is meaningless, that's fine by me.
oblongfuckface@reddit (OP)
I see your point, and I think we agree on a few things. Like I stated in a few other comments, I'm not against AI, and I'm aware we need to use the correct tools to stay competitive. My issue is people using the tools as a replacement for critical thinking and problem solving.
Regarding the interviewee, I understand why he chose to use Claude to develop the product, given his deadline. But the problem is he was in a senior-level interview, and he used an app that he essentially vibe coded to show he's competent working with the technology. If he could talk to me about what he implemented, or at the very least about his experience working with the language, that would give me something to work with. My job as the interviewer is to gauge whether the candidate has the chops to do the job, and he wouldn't (or couldn't) tell me anything to make that determination.
As for the other guy, it's great he's at least making an effort to solve the problem, but he's leaning so hard on the AI he can't see the forest for the trees. He's doubling down on an overly complex, generated query instead of thinking about how he (the engineer) can solve the issue. At some point, hopefully, he will come to the solution, but I don't think it's fair to say I'm rejecting him. The interview candidate, sure, but that's a completely different context.
F133T1NGDR3AM@reddit
It's shitty people just treating AI like they used to treat Stack Overflow:
Copy, Paste, Refactor.
I sometimes use chatgpt and claude, but my questions are like -
"Do hashmaps in C# have some setup overhead"
"What linq function can I use to remove duplicates"
Most of the time, It's just because I've forgotten something.
It's never - "Write me a function to do this..." That's just crazy.
It's really good at converting data between things like sql and data models though, use it for that constantly.
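(A rough sketch of that last use case, with invented table and type names; this is the shape of the conversion, not anyone's production code.)

```typescript
// Given a hypothetical table:
//   CREATE TABLE users (
//     id BIGINT PRIMARY KEY,
//     email TEXT NOT NULL,
//     created_at TIMESTAMP NOT NULL
//   );
// the corresponding data model is mechanical to derive:
interface User {
  id: number;      // id BIGINT
  email: string;   // email TEXT NOT NULL
  createdAt: Date; // created_at TIMESTAMP, camel-cased
}
```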
Beneficial_Map6129@reddit
i no longer give a shit about my company. i work in big tech and they are pushing this along with lower salaries (the CEO and the big boys get pay raises though) and extremely pointless performance quotas etc
private_final_static@reddit
Dead internet theory? Dead jobs theory.
ashman092@reddit
I think it’s going to make already experienced devs who know how to use it judiciously more productive and harder for those who are juniors.
I like it for particularly boring or repetitive easy tasks. At least right now for anything more complex I’m having to fix errors too often for it to make me faster.
pySerialKiller@reddit
I don’t know man, I agree with you that there’s a lot of people using genai blindly building stuff, and that’s scary.
But I do not think this is making good engineers become bad. There have always been professionals with "demonstrated" expertise committing war crimes on production repos. The difference is that before, that code came from SO and other sites.
A great test of whether your coworker is a reliable lad is to see how they handle integrating AI tools into their workflow. A great engineer will be careful and use it as just another productivity tool. A bad engineer will try to pass code slop under the rug.
turningsteel@reddit
Geez, hearing these stories makes me feel like I’m a better engineer than I think I am. I think maybe that candidate was just bad and your contractor is not very good.
fdeslandes@reddit
Not sure if AI tools turn devs' brains to mush, or if devs with mushy brains who used to stay quiet and go unnoticed are now pushing AI PRs.
Zulban@reddit
Am I seeing this too? Yep. Enough that I wrote this: Why I'm declining your AI generated MR
Careful_Ad_9077@reddit
And with the market as it is right now, it's hard to blame them.
oblongfuckface@reddit (OP)
I don’t disagree the market is awful right now. But I see this in my org as well, I had a fellow senior engineer with 10 years experience on me submit a PR that was very obviously AI slop. It just feels like no one even wants to try anymore
Careful_Ad_9077@reddit
You see your friends getting fired left and right.
You see unemployed colleagues ( seniors, 10+ years of experience) struggling for months to land a job.
Even the employed ones are getting raises below inflation, or at worst they're getting their salary decreased by having bonuses and other extras killed.
All that context makes some people go "yeah, fuck this, Inal doing the minimum possible".
Dolo12345@reddit
damn that’s me
fun2sh_gamer@reddit
"Inal doing the minimum possible".... Like you did with this post.
Tired__Dev@reddit
A colleague of mine who is a good dev has been doing exactly this, due to burnout.
Many_Particular_8618@reddit
AI helps expose the shit.
aqjo@reddit
What you don’t see or mention is the amount of work AI is successfully doing for your team.
xampl9@reddit
The sad part is: in the future, having critical thinking skills is what will keep you employed.
Someone must be able to direct the AI and validate what it produces.
LargeSinkholesInNYC@reddit
Hold on, are they actually good at solving Leetcode problems? I bomb those tests, but I've always managed to get hired after doing 50 to 100 interviews, and I don't use LLMs to write code, I just use it when there's no StackOverflow question covering the issue I am having. Even when I am using it, I make sure I understand every line of code.
Mission_Cook_3401@reddit
If AI causes more problems than it solves, then your problems aren’t bigger than your code
Heavy-Report9931@reddit
We had a lead data engineer on our team who sat me through 30 excruciating minutes of him vibe coding lmao. It was fascinating how proud he was of not having to think.
I did use A.I. for that range mapping algorithm, and because of it I actually understood what it does.
A.I. should augment you, not replace your thinking. But it's fine, let them run off a cliff; better job security for the rest of us.
DaRubyRacer@reddit
“on a tight timeline, and he couldn’t deliver unless he used AI to build the project. Okay, fine, but I have no idea if you’re able to think about solving problems for yourself without relying on the answer machine.”
Why do you need to know if he’s capable of solving problems himself? I feel like you’re being very dogmatic. At the end of the day, the problem he solved was getting an app demo ready in a brand new stack in a short amount of time. It sounds to me that the guy knows what a deadline is, and knows what’s realistic within that timeframe.
drcforbin@reddit
I'm so frustrated with Claude. I know someone is going to come by and say "skill issue," but when I try something complicated, it's using out of date APIs and/or doesn't compile at all. When I ask it for simple things I end up just using it as a "better Google," and still having to massage its output and code it myself. I genuinely can't understand using it to make any whole thing.
liminite@reddit
Am I wrong? No, it is the industry that is wrong!
dryiceboy@reddit
Sometimes I get impostor syndrome and worry that I'd be replaced by a younger more energetic developer. But then I see posts like these and realize they need more guidance than ever in the coming years lol.
simfgames@reddit
As a vibe coder, I can only say...
You cannot stop me, I spend 30,000 lines a month.
Ok-ChildHooOd@reddit
Just want to comment and say we're in a very similar situation. We're trying to look for this during interviews. We don't want people who outright reject AI. But we also don't want people who can only use AI to solve problems.
oblongfuckface@reddit (OP)
Exactly. Just to clarify again, I’m not against using AI. It’s a great tool if you know how to use it. If I have a candidate and they say they use ChatGPT or Claude or whatever, I’m fine with it so long as they understand what they’re doing. But there’s a big difference between software engineering with AI to come to a production-grade solution and vibe coding
mistaekNot@reddit
tbh i don’t understand what’s the problem with the claude guy. whether you personally like it or not the AI tools help with productivity. especially working with a new programming language / framework. just because some people use them poorly doesn’t negate their usefulness. the claude guy is simply embracing the future
apartment-seeker@reddit
Yeah, but what kind of moron chooses a project he hardcore vibe coded and can't explain to discuss in an interview?
He might have done a sensible thing per his contract with the client, but it's an insensible thing to do as an interviewee.
No-District2404@reddit
Thanks for this post. I’m freaking happy seeing this because I knew that this could happen. These are still good days. I wanna see these CEOs, investors and decision makers suffering to find engineers who still use their brains. Then I will laugh with my ass
apartment-seeker@reddit
lol, is this a direct translation of an idiom from another language?
No-Economics-8239@reddit
I don't disagree with you, but I also don't see this as a really new problem. We used to joke about the clueless engineers who copied something they found on some forum or channel or Stack Overflow that they didn't understand. There was a time when it was practically a rite of passage to try and see if you could get a new dev to run some shell command that would bork their box or delete something important.
Critical thinking isn't common sense, and even common sense isn't that common. We need to teach the next generation the same as it ever was. If we want to push back at the tide of vibe coders, we just need to set the standard we want to see and work to encourage it in every place we work.
I've already had a few projects over my career that were unborking a code base written by contractors who only knew enough to be dangerous and didn't have any technical oversight to stop them from either bilking unsuspecting clients for a longer tenure than they needed or driving the truck over the architectural deep end or both. I hadn't really planned to end my career doing the same thing, but it already seems obvious that at least some of us will be doing that. Same as it ever was.
kevin074@reddit
God, this is the most hopeful thing I have read in months lol!!!
I am gonna be considered a rare find by employers among these brain-dead developers UwU!!!
chat_not_gpt@reddit
Our industry has evolved a LOT and VERY fast in the last 50 years. I've worked professionally for about 20 of those. AI is presenting a significant challenge and opportunity for us. In the end better and more quality software will come out of this but the road will be bumpy. If you are a good engineer with the ability and motivation to learn and evolve I think you have a bright future.
Twirrim@reddit
I've been using Gemini on a machine learning thing, something I'm totally new at... and I eventually realised that while I was getting the overall concepts, I could barely modify the code and couldn't really write any of it myself if needed. I barely understood how things were structured, certainly not well enough to achieve what I wanted without leaning back on Gemini, though I could read individual methods without too much difficulty.
It has been an eye opening experience. Yes, I got something functional done much quicker than if I'd learned everything from scratch and done it myself, which I guess is the appeal, but there's no way I could support it, or use it in a production environment. Without the grasp of the fundamentals I'd never be able to troubleshoot effectively.
I'm increasingly concerned about what we're going to do to ourselves with this tech if we're not careful.
sanityjanity@reddit
I'm an experienced dev, and I've written my code by hand in the olden days, and with increasingly sophisticated IDEs as time has gone by.
I recently attended a workshop on using ChatGPT and GitHub to write some code. But the workshop just had us cut-and-pasting code into the AI, which then generated more code. There was zero time built in to actually read the prompts we were using or the output they produced.
So, at the end, it was basically impossible to have learned anything about the subject.
Also, the person giving the workshop said that it almost never worked, and almost none of their students ever managed to get the desired app built.
So, the way this was done didn't teach the skill of prompt building, and it certainly didn't teach *anything* about how the app itself was constructed.
And that's how most people use AI. They receive an assignment, copy-paste it into the AI of their choice, and then paste the results out. There's no reading, analyzing, and it's not going to connect with their brains.
FWIW, there are a lot of experienced devs out there who are out of work and can't get interviews. Something about your filtration process is emphasizing the wrong thing. What you really need is a method that will help you to identify devs who can think instead of the ones who look good on paper.
minn0w@reddit
I am seeing the same internally and externally. Developers seem to be mentally checking out.
My problem is that I get called into meetings, and the meeting content is the same mentally absent logic. Then I spend the rest of the meeting trying to switch on their critical thinking, which, concerningly, doesn't always happen. My CEO is one of the worst offenders.
noxispwn@reddit
I think the problem is less that AI is turning their brains to mush and more that AI is making their flaws harder to spot. Even if they relied 100% on AI tools to generate the code, I would expect any competent engineer to be able to understand what's coming out the other end, and to put in some effort to find out if they don't. If they're unable to figure it out, then they're currently unqualified for the job, and if they can't be arsed to figure it out, then they don't have the right attitude for the job. The only thing AI is letting them do is pretend they're a good fit until you take a closer look and figure out that they're not; they should either never make it that far into the interview process, or shouldn't be able to keep the job for long.
vegetablestew@reddit
I am a bit torn on AI use. On one hand, I don't want to completely rely on it to the point where, as you say, my brain turns to mush. On the other hand, sometimes the ordeal isn't the point: there is no glory in wasting days finding the arcane incantation that makes things work, when you won't use that recipe often enough to remember it anyway.
noonemustknowmysecre@reddit
Oh don't be silly. It's not our industry. It's humanity in general.
How many people these days could navigate anywhere given a paper map instead of GPS directions? If you gave them a map, an address, and even the major intersection, AND they couldn't use the Internet to go find how to get there, how many just simply couldn't?
Now, GPS devices are great. They really do work better than what we had before. But without flexing that skillset, it has atrophied. Most Americans simply don't have it anymore, or never developed it.
Lonely-Leg7969@reddit
We’re definitely heading towards enshittification. My goal with our engineers is to get them to get used to the fact that LLMs are here to stay for the short to medium term and how to use it intelligently and be defensive against it if needed. These are people’s livelihoods at the end of the day so the general advice is to sharpen your craft and to not fall for the hype. If a company is pushing very hard on it, I’d suggest to keep an eye out for companies that are better. Unfortunately the hiring bar for new engineers are also going to rise to acct for LLM use.
DeterminedQuokka@reddit
So, for the guy you interviewed: if he showed you a project that Claude built, then he's an idiot and you dodged a bullet. Which is not to say you can't use Claude, but you aren't interviewing Claude. He could be a great developer, but he failed that interview because he chose badly. You just have to move on.
For the people who work for you, what I've done is start training people on how AI actually works. There is a human bias that a machine is always smarter than a human. If you can demystify the machine, a lot of people will stop treating it like that.
I started with a presentation on the 10 most common ways LLMs fail: what causes them, whether you can stop them, and why you should care. Then I did how you can tell what an LLM was trained on based on what it does. Then a deep dive into how LLMs work. Then RL. Then how to confuse RAG.
People don't need you to tell them AI is great; that's getting shoved down their throats. They need you to tell them how AI makes mistakes.
This has been the most effective of anything I've done, and it has led to a lot of people asking me out of the blue how to double-check AI, or how to use it effectively.
I also instituted a critical assessment of AI. It's basically a document you fill out the first time you use AI for something, and it asks you to think critically about how well the AI actually did, which forces a critical thought path into otherwise mindless AI usage. Basically, you earn mindless AI, and no one has earned it yet; I think the highest assessment we ever got was a B-, and that was for fixing private function usage in the type checker.
Before all of this I actually instituted case studies, which were like "I used AI to do this and this is how I got the answer". These did not work for my current coworkers, but they would probably work for someone else.
Ignoring all the AI and just talking about your contractor: SQL is a big stumbling block for people, even without AI. They aren't good at it, and because they aren't good at it they can do really wild things. If you want to help this person, I would try to figure out whether they know SQL and give them resources if they don't. I sent someone SQLZoo a few years ago when this happened. I also can't tell if you have a framework or something here, but I wrote a translation doc for SQL at my job, because we use Django and the ORM is not a great direct translation to SQL, which saves me a lot of time because I just link people to the appropriate section.
dexter2011412@reddit
For the first one, I hope there was a follow up question after "I used Claude". Because I can see myself saying that shit but actually seeing if it's useful or not.
But otherwise, I'm still fairly new to the industry so I don't have anything to add. Thank you for sharing this, really helpful.
StackOwOFlow@reddit
maybe the quality of your candidate pool is just very bad
Mindless_Ad_6310@reddit
I would have hired him on the spot for being honest in an interview… figure out how to work with him after the interview. Hire him based on personality, like all interviews outside of tech do. It's hard to find honest people nowadays. I would have been worried if he had obviously used AI to write it (easy to tell) but said he did it himself, lied, and made up some dumb answer. Anyway, good luck with the ones who pass your interview by lying to you.
lvlxlxli@reddit
"I would have hired him" - guy who has never hired anyone
TalesfromCryptKeeper@reddit
If you'd choose the mid candidate cause he's honest, what about skilled candidates who don't have to lie about being skilled? Or are those rare now?
Just-Ad3485@reddit
Lots of people are honest in interviews.. doesn’t make them all good candidates
aidencoder@reddit
I mean, honesty is an attribute not a qualification.
I've hired good programmers who gave honest, brutal answers in interviews.
I wouldn't hire a bad one on that basis.
DigmonsDrill@reddit
If I'm building a brand new app in a brand new language, then using Claude sounds like an excellent way to do it. Even without writing a single piece of code, I see the frameworks, how it's built, and things that I can change.
Spending 3 days on an SQL query is harder to justify.
GammaGargoyle@reddit
It sounds like the people you work with don’t even realize that they’re embarrassing themselves. They might benefit from someone actually speaking up about it.
Spirited-Fudge208@reddit
Hire me, I use AI when stuck but I make sure to take ownership of all that I commit.
iBN3qk@reddit
One successfully used a new tool to make an app. The other never bothered to learn sql. Both seem to lack ambition.
datadade@reddit
I couldn’t imagine telling someone “I did it with AI” in an interview setting. Honestly, that person just lacked common sense.
local-person-nc@reddit
🙄
Former_Dark_4793@reddit
chill old man, grab a beer 🍺
oblongfuckface@reddit (OP)
lol, trust me I have had plenty this weekend 😆