I really worry that ChatGPT/AI is producing very bad and very lazy junior engineers
Posted by ITried2@reddit | ExperiencedDevs | 540 comments
I feel an incredible privilege to have started this job before ChatGPT and others were around because I had to engineer and write code in the "traditional" way.
But with juniors coming through now, I am really worried they're not using critical thinking skills and are just offshoring it to AI. I keep seeing trivial issues crop up in code reviews where, with experience, I know why the code won't work, but because ChatGPT spat it out and the code does "work", the junior isn't able to discern what is wrong.
I had hoped it would be a process of iterative improvement, but I keep seeing the same thing now across many of our junior engineers. Seniors and mid-levels use it as well - I am not against it in principle - but in a limited way, such that these kinds of issues don't come through.
I am at the point where I wonder if juniors just shouldn't use it at all.
Sad_Selection_4232@reddit
Made a tool for lazy juniors to make them more lazy. Check it out: https://chromewebstore.google.com/detail/jpopmplgannacijpekjngmlkkpjclgbb?utm_source=item-share-cp
AI Sidebar: Effortless Chat Navigation for ChatGPT, Gemini, & Grok. Functions: * Quick Jump: Click a sidebar message to instantly scroll in the main chat. * Supported for: ChatGPT, Gemini, & Grok messages in one sidebar. * History Access: Easily browse and manage past conversations.
DarkTechnocrat@reddit
It's a huge problem. People will say that bad programmers have always copied code from Stack Overflow, but at no point could you create an entire app by copy/paste. What we're seeing is unprecedented.
It goes beyond junior engineers. How effectively are kids actually learning in college when GenAI does all their homework?
raichulolz@reddit
That's actually been my main question. I worry less about people right now and more about future talent (talking about 4-8 years from now). I've read articles saying it's already heavily affecting research skills and the ability to navigate problem-solving yourself.
I'm looking forward to talking to my SIL who is a researcher and asking her about what it looks like atm.
yubario@reddit
Have you ever interviewed a candidate from a coding boot camp? They spend 6 months learning how to code, graduate and then fail at simple programming questions like how to make a loop.
ings0c@reddit
Must have been a shitty bootcamp. My experience interviewing bootcamp juniors has been overwhelmingly positive, granted this was some time ago when they were a new thing.
The courses were intense, they only accepted people they expected to pass to keep their stats looking nice, and the people that did them were often experienced in another field and trying to change careers.
They usually put better people in front of us than we found just accepting CVs and trying to weed out the good ones.
MathmoKiwi@reddit
That's not normal now for bootcamp grads in 2025
ings0c@reddit
I feared that may be the case… I've been a contractor for quite a while now so haven't done any interviewing.
geopede@reddit
It changed when they went online because of the pandemic and became essentially open enrollment. Bootcamps prior to 2019 have essentially nothing in common with their 2025 iterations.
geopede@reddit
The change started around the beginning of the pandemic. From roughly 2012 to 2018/2019 there were bootcamps that were legitimately picky about who they admitted, some even to the degree that they had a money-back guarantee for graduates who couldn't get a job. Then they went open admissions and turned into scams.
forbiddenknowledg3@reddit
It sounds ridiculous right? But that's my experience too.
Lady couldn't write a loop. I helped her with the syntax (in her language that I never used myself). Then she refused to write a nested loop because "O(n^2) is bad" when there was no better approach.
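To be fair to the nested loop: some problems are inherently quadratic because the output itself can contain O(n^2) items. A hypothetical sketch (not her actual task) of a case where the pairwise loop is the honest answer:

```python
# Hypothetical example: report every pair of overlapping bookings.
# Comparing all pairs is O(n^2), and there is no asymptotically free
# lunch here -- the result itself can hold O(n^2) pairs.
def overlapping_pairs(bookings):
    pairs = []
    for i in range(len(bookings)):
        for j in range(i + 1, len(bookings)):
            a, b = bookings[i], bookings[j]
            if a[0] < b[1] and b[0] < a[1]:  # intervals overlap
                pairs.append((a, b))
    return pairs

print(overlapping_pairs([(1, 3), (2, 5), (6, 7)]))  # [((1, 3), (2, 5))]
```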
bluesky1433@reddit
Exactly this! I think AI is just making people more lazy and less creative. It's not only affecting coding but every other profession/area of life where people just jump into AI to do it for them. I think using AI has to be deliberate and minimal if humans want to keep their intelligence and critical thinking.
MsonC118@reddit
but... but... SPEEED, and QUARTERLY PROFITS! MONEY! Dolla dolla bills yooo! LLMs gonna make us fast and managers make the dev team go vroom vroom! \s
BoxyLemon@reddit
wrong. We just need to define problems that cannot be solved with AI
grobbler21@reddit
The issue is that solving problems that cannot be solved with AI invariably requires skills that are developed by manually solving the problems that can also be solved with AI.
annoying_cyclist@reddit
As a lead who works with heavy LLM users, LLMs actually being decent at junior level tickets throws me for a loop.
I've worked with a couple people who've gone from struggling with junior-level tickets to competently executing mid-level tickets as they've adopted Cursor/Windsurf. They eventually struggle with senior scoped work because they don't truly understand the engineering principles they were meant to learn by doing junior/mid tickets (having just fed the tickets to a model and put its output in a PR). That dynamic isn't new, and the fix isn't new either (working through the issues, leaning on more senior folks for guidance, etc), but it now happens much later than it used to, on larger scoped/riskier features than it used to, and at a point in the career ladder where it shouldn't, all of which makes addressing it more complex. It is much more work for me as a lead to mentor someone through unfucking a senior scoped feature than a tiny junior feature, especially if I have to fight against someone's misguided view of themselves as a senior engineer (vs. a successful end user of LLMs who doesn't understand their output) while I'm doing it.
sismograph@reddit
Yup, agreed. I can also see the trend; especially the confidence and the self-view of being a senior engineer is a major issue.
I'm experiencing this with some juniors now. LLMs enabled them to become confident at mid-level tasks, so they don't take feedback seriously anymore, because they feel experienced. Furthermore, the LLM helps them justify what they did, even if it does not make sense.
annoying_cyclist@reddit
It sounds mean, but I don't personally have a problem with engineers choosing to take shortcuts around being competent. I think it's a dumb, career-limiting choice, but I also think it's one they should be able to make for themselves. I just don't want to carry water for them in a team. Selfishly, that's what I worry about. If incompetent people get fired for being incompetent, I'm OK with that. If incompetent people get kept around because management have bought into an AI zeitgeist that defines competence as using LLMs and generating lots of shitty code, and what we think of as the hard parts of engineering (dealing with ambiguity, dealing with existing system constraints, risk management, etc) gets relegated to a class of necessary, invisible and undercompensated glue work, I'll have to find another line of work.
MsonC118@reddit
Yes! I couldn't have said it better myself. This is my biggest fear by far, and one that is very possible, IMO. It seems that decision-makers are prone to the same thing most of us dealt with earlier in our careers: "You don't know what you don't know."
Honestly, non-technical management is by far the worst. Of course, this is only based on my anecdotal experience, but it's definitely left a bad taste in my mouth over the years. If it's up to me, I refuse to work for a non-technical manager.
MathmoKiwi@reddit
Yeah it's not unusual for people to stall out their career at the mid level and never properly crack "Senior level" (unless they get there via title inflation / tenure).
But I suppose in this new era of AI it's going to become far, far more common.
DarkTechnocrat@reddit
Fantastic comment, and an interesting insight.
avatardeejay@reddit
they are not effectively learning in college. you are right. consider this angle:
the only reason those kids were learning anything in college to begin with is that their parents had money.
AI CAN be used to learn, and it can also be used to cheat. what's interesting is that now, the people who learn big are the people who want to, instead of the people whose ancestors had money. the internet had this effect, and AI is having it even more strongly.
Pavel_Tchitchikov@reddit
And you had to inspect the code from Stack Overflow to fit your own use case. Now you have devs going "can you adapt this code so that it uses … instead?" and barely reading the output.
MightyX777@reddit
I have experienced issues that took days of thinking and trial and error.
It's like learning chess. Chess masters back then had to study specific moves for weeks. Today the newbies just use AI and get a solution immediately.
The difference is that I am emotionally connected to everything I have learned, while new people just use something they found because it works.
A few days ago ChatGPT was down and it caused chaos for a lot of companies and people, suddenly not being able to be productive.
Guess which people can still work without ChatGPT fine?
tantrumizer@reddit
I worry for the future when I'm driving over a bridge or undergoing complex surgery, and the engineers or surgeons involved learnt this way!
Derrickillmatics@reddit
lol you can't ChatGPT your way through or into medical school
commonsearchterm@reddit
Real engineering and medicine seem to have a little more rigor and gatekeeping than software, though. Idk if it would be a real issue for those professions in that kind of way.
reddeze2@reddit
https://www.reddit.com/r/Skillsire/s/Hi7TURqmJE
segfaul_t@reddit
TBF he's using it to write notes, not asking it what to do or anything
DarkTechnocrat@reddit
That's crazy!
tantrumizer@reddit
Ok now I worry for the present too.
DarkTechnocrat@reddit
Yeah exactly. Same for chemists and such
ur_fault@reddit
That's why they have exams at the end. To test if the kids have actually learned the material.
It's crazy to me that people think all kids are going to sit there and skip out on their homework. Like, that was always possible even before LLMs lmao. I know this might be shocking, but guess what, a lot of kids who make it to college actually want to pass their classes and graduate.
Y'all sound like those scared moms from the 90s who thought Marilyn Manson's music was going to turn all the kids into devil worshippers. I'd always assumed this kind of thinking was a "middle-aged woman" kind of thing but apparently I was wrong
DarkTechnocrat@reddit
lol come on:
I feel ashamed for using ChatGPT to do all my homework and now it's the end of the semester
and the first response:
And there's this:
Educators Battle Plagiarism As 89% Of Students Admit To Using OpenAI's ChatGPT For Homework
89%!
ur_fault@reddit
None of that is specific to LLMs lol.
You really think plagiarism among students at that rate is something new?
You left out the rest of the response, which is funny because it actually supports the point I was making:
This person is talking about using it in classes that don't really have anything to do with their main area of study... you think that's some kind of new idea? Putting little to no effort in, and/or cheating through, classes that students see as "useless", lmao. LLMs are just another way to do the same thing that has always been done.
They also go on to say "use your judgment" make sure to "actually learn the stuff that matters". This is exactly how the tool should be used. Use it for "boilerplate", use it for prototyping, use it for one-off scripts, use it for shit that is not relevant for you to learn. Use it to support you when learning the stuff that is actually relevant.
If that's the top comment, then it seems like most people have their head on straight when it comes to LLM usage.
Or.... maybe you're right, chicken little, and the sky is falling.
DarkTechnocrat@reddit
What the fuck is your fixation on "middle aged mom" as an insult? My wife is a middle aged mom. You sound like a 14 year old.
Mundane-Raspberry963@reddit
How are we supposed to get the kids not to cheat on their online exams though?
ur_fault@reddit
Are you joking or serious?
Mundane-Raspberry963@reddit
Are you unaware that a significant number of exams in college courses are online these days?
ur_fault@reddit
No I'm fully aware, I just wasn't sure if you were making a joke or just not using your brain.
No one can prevent cheating with an unmonitored online exam. Which has nothing to do with LLMs... cheating has been going on since before LLMs, before online exams, before the internet.
The obvious solution would be to come up with a different/better way of assessing students' understanding of the material.
Mundane-Raspberry963@reddit
LLMs obviously increase the percentage of students who cheat. You haven't spent any time in a university have you?
ur_fault@reddit
"Obviously" eh? Lmao
People were already plagiarizing and cheating in one form or another long before LLMs.
What makes you think LLMs are increasing the number of cheaters? Because they're so easy to use? Oh yeah that must be it because plagiarism was so difficult before....
DigmonsDrill@reddit
And when the stackoverflow code didn't work, we'd look at it to figure out why, instead of "well just paste in another stackoverflow answer"
MathmoKiwi@reddit
Couldn't do that when you only found one Stack Overflow answer for your situation!
BoxyLemon@reddit
Hear me out: Information nowadays is inflationary. We have to start at square one with education systems
DarkTechnocrat@reddit
100% agree we need to adjust our education systems, I just wish I knew how.
What do you mean by information being inflationary?
BoxyLemon@reddit
information has less value. Hence we stop reading the AI output. Because information is in fact dirt cheap
DarkTechnocrat@reddit
Because AI is making it inflationary? You mean something like Dead Internet theory?
BoxyLemon@reddit
We will live in a future where we won't have to read books anymore. AI is doing that for us. This causes issues in developing critical thinking patterns.
DarkTechnocrat@reddit
Hah yeah I get that.
Mundane-Raspberry963@reddit
You could always steal somebody else's code and call it your own, which is the same as vibe coding.
DarkTechnocrat@reddit
I mean, with vibe coding you can say "write me a snake game with sharks instead of snakes". Much harder to find someone who already wrote such a game you can steal. Or if your boss asks you to write something specific to your app, it's unlikely you could steal it.
Mundane-Raspberry963@reddit
If the AI agent of the future logs into github and downloads someone's snake game, and changes the snake png to a shark png, would it be theft?
DarkTechnocrat@reddit
From GitHub I wouldn't say "theft", but I take your point. Taking code and modifying it would actually be insanely efficient.
NatoBoram@reddit
I don't know Python, but I've done someone's homework (a magical 8 ball game) by literally only copy/pasting from StackOverflow and Python's docs. But that's only because I know what to look for, it should be about the same in any language
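For scale, the whole assignment boils down to a few lines of stdlib Python. A rough sketch of what that kind of homework amounts to (the exact spec is assumed):

```python
# Magic 8-ball: everything here comes straight from the Python docs
# (random.choice, input, while loops) -- no framework knowledge needed.
import random

ANSWERS = ["It is certain.", "Ask again later.", "Don't count on it.", "Outlook good."]

while True:
    question = input("Ask the magic 8-ball a question (or 'quit'): ")
    if question.strip().lower() == "quit":
        break
    print(random.choice(ANSWERS))
```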
DarkTechnocrat@reddit
Right, you didn't just say "write me a todo app for …" and get one out of the box. You had to put the pieces together. You had to adapt the code, at the very least.
MorallyDeplorable@reddit
Sure you could
DarkTechnocrat@reddit
You have a link to an answer where someone codes an entire app for you? I'm happy to be proven wrong.
MorallyDeplorable@reddit
You have zero imagination if you can't figure out how copy+pasting off stackoverflow would allow an app to be built with a significantly lower bar of entry for the user.
ghostwilliz@reddit
I see people say this a lot, but they had to at least understand the problem and how this code will fix it.
Now, you just copy the error message and paste the "solution" in wholesale, no thought required
NotYourMom132@reddit
Yup, it's not just comp sci. The matter is even worse because job opportunities are declining at the same time. I feel sorry for young kids.
Old-Plant-4184@reddit
Because talented people will use it as a tool for progression and learning vs. lazy people using it as the path of least resistance.
PM_ME_YOUR_MECH@reddit
It's funny, I remember feeling guilty when I would copy-paste code from Stack Overflow at work. On reflection, that required a lot more knowledge than genai does now, at least we had to modify it for our situation and understand it on some level. It was always a tiny part of what we were doing, not the entire thing
0ut0fBoundsException@reddit
Yeah. You had to make minor tweaks to the copypasta at minimum. Now I ask a junior to make a few adjustments or replace a section with a call to an existing method, and they seemingly have no idea how to make those small fixes.
It'll take way too long and come back with significant rewrites that I have to review all over again. The PR review process is taking more time than ever.
VladislavSorokin@reddit
Interesting!
I feel that juniors are now all AIs; at least, my routine shifted from "coding" to "chatting about coding". And if you treat your coding agent as a junior, tolerating its flaws and patiently explaining five times what to do and how to do it, eventually the AI is much better.
But yeah, I agree that this totally disrupts the whole idea of a junior, and I have no idea how people are supposed to learn real dev practices.
K9ZAZ@reddit
Not quite the same, but I am a senior data scientist and was working with another senior data scientist who tried using ChatGPT to write a function to access some API. It wasn't working and we were trying to debug it. I couldn't make heads or tails of how the code would work, so I asked him if he had read the docs on how to use the API. He kinda sheepishly said no, and that this was just a zero-shot response.
Anyway, yeah. I made him go read the docs and rewrite the function.
Zeikos@reddit
That should be seen as negligence.
If someone I work with ever pulls a stunt like that, I am going to have a pointed discussion with them.
Using LLMs is completely fine, when they're used properly and not to pretend that you know something you don't.
K9ZAZ@reddit
Believe me, we've had conversations (plural)
sleepysundaymorning@reddit
When I was in your situation a couple weeks ago, I just rewrote the code myself instead of telling the LLM guy the problem he was getting into. That's because criticism of AI is viewed with extreme suspicion by management especially when senior folks do it, and I need to keep my job intact
Evinceo@reddit
This kinda lines up with my experience, Data Scientists are going way more nuts for this than engineers.
MsonC118@reddit
Just got an email from A16Z with a report on who's adopting LLMs the most over the past year or so. Yeah, to my surprise, data scientists were the highest. Even higher than creative roles (where I thought it'd be the most used).
Evinceo@reddit
My best guess is that data scientists have been vibe coding since before LLMs.
Slow-Entertainment20@reddit
That's because a lot of data scientists are frauds imo.
dashingThroughSnow12@reddit
"Back in my day" when I was a data scientist, we would do investigations on our data, read academic papers, write programs incorporating them, train them with a genetic algorithm we coded ourselves, then deploy and verify our accuracy with new, real data.
Nah, beep boop, put data in a model someone else wrote to train it, beep boop, get data out.
maggmaster@reddit
That first thing is data science, but the second is not going to be a job for long…
hohoreindeer@reddit
Where's the imposter syndrome trigger warning on this comment!!
thekwoka@reddit
It's not imposter syndrome when you're an actual imposter.
cava_yah@reddit
tbf it's a lot of companies' fault for hiring these people before covid
dubnobasshead@reddit
Not necessarily frauds per se, I think this is a little harsh.
Data scientists do not see themselves as software engineers. The reality is they are mathematicians/statisticians who can use scikit-learn or numpy. They typically don't know, or don't care to know, anything about software engineering; for them it's just an unfortunate side effect, a means to an end.
At least that's my experience from 10 years as a data and software engineer.
I fully believe most data engineers could do what most data scientists do; you cannot say the same for the other direction. But non-technical middle management love them. Data scientists are treated as high-value innovators whilst engineers get treated like low-value technicians.
I'm not bitter, I swear...
nrith@reddit
Data scientists are one of the reasons why I got out of the NLP/machine learning field a decade ago, after 14 years. I could not believe the insane amounts of self-confident bullshittery coming from the newly-minted PhDs my company was hiring.
Secret-Inspection180@reddit
It is at best an adjacent discipline. I have worked alongside many analytical roles over the years and the vast majority of them are writing scrappy/superficial code, not building maintainable, scalable systems or applications.
Where there is a conflation of those roles & responsibilities in the business and they are treated as engineers inside their actual domain of expertise, it rarely goes well in my experience.
marx-was-right-@reddit
Yeah, at my company I started as a data scientist in a rotational program and quickly realized it was simply a business-analyst-type paper-pusher role. Quickly pivoted.
MathmoKiwi@reddit
Title inflation has hit the position of "Data Scientist" hard.
germansnowman@reddit
One could say you ⦠rotated? (sorry)
marx-was-right-@reddit
Lol yeah, I could have chosen them as my permanent team if I wanted, but saw the writing on the wall. They're fully offshore now.
thinkoutsidetheblock@reddit
Wouldn't business analysts be less likely to be offshored, since they require more domain knowledge and communication? In my company, lots of highly technical roles are offshored to Eastern Europe, while the ones that involve more business stakeholders stayed in the west.
marx-was-right-@reddit
One would think
thekwoka@reddit
tbf, most engineers are too
dbalatero@reddit
A lot of engineers are fairly unimpressive as well, when it comes to iterative improvement/critical thinking vs. crapping out stuff to "get the job done".
Glum-Psychology-6701@reddit
Why? They just have different competencies and are not programmers
Nyefan@reddit
I'm seeing in real time people on teams that I work with surrendering their brains to the llms. Just yoloing untested, unread code into production like it's 1999. I'm much less worried about juniors failing to advance than I am about seniors' skills degrading without any reduction in influence or seniority.
thekwoka@reddit
People were really so ready to just give all thought to the machines. How often people just go "well chatgpt said this" as like...an argument for something.
pissstonz@reddit
I lived in seven states from 0-18... from Hawaii to Georgia... and I was always taught what makes a reputable source. I simply don't understand how people have zero critical thinking skills when it comes to where they ingest information from.
MalTasker@reddit
Devs will have to fill in the gaps that LLMs currently can't. They will build up skills that way.
ba-na-na-@reddit
Watch this video from 15:00 to 15:30. These are actual Y Combinator people discussing vibe coding. I laughed so hard at their suggestion to "just reroll instead of debugging".
https://youtu.be/riyh_CIshTs?t=900
JesseDotEXE@reddit
Lol I get what he's saying but it's so fucking dumb.
MsonC118@reddit
Well, when you don't have experience, it's probably faster LOL. I've noticed that the less experience someone has, the more they'll re-roll ChatGPT, hoping for a new answer. They'll spend absurd amounts of money on the API calls just to get nothing lol. Kinda blows my mind.
JesseDotEXE@reddit
Why learn when you can gamble lol.
MsonC118@reddit
The sad thing is, it probably won't change. They'll keep hoping for a different answer to preserve their external image and ego. It's a sad cycle that will eventually likely lead to a silent implosion.
ba-na-na-@reddit
I don't think it's even possible in any larger piece of code.
I got 3 failing tests after pushing my PR, now what, I tell it to reroll? Reroll what? Do I commit and run all unit tests after each GPT roll? Maybe I run integration tests too every few min just to be safe nothing breaks, because the code is a black box that I don't understand?
It's such a bizarre take, I cannot see this working realistically once you go past the proof of concept stage
JesseDotEXE@reddit
Right, like I just don't understand the use case outside of tutorial level stuff. Which is what I noticed a lot of these vibe coders are doing. They are just "vibing" through a to do app tutorial and claiming it's 10x-ing their productivity. What takes them 2-5hrs vibing an experienced dev could do in 1-2 properly.
That said test cases are actually a place I try to use AI because you can just tell it to reroll smaller unit tests pretty easily.
Any_Championship1075@reddit
Oh God, thank you for saying this. I thought I was going insane with all my non-technical friends telling me that AI is going to replace all software engineer jobs very soon. Like, what are you smoking? I've used AI sparingly, as a time saver for automating trivial work, for years now but if you're writing or debugging anything remotely complex, AI is just not particularly helpful. It will 99 times out of 100 just give you garbage and it won't actually understand the core problem. I don't understand how all these people are claiming AI will revolutionize this field, when outside of asking it write super basic CRUD applications, it's just shit.
JesseDotEXE@reddit
Yeah, AI is a good tool, but still just a tool. I know some engineers might unfortunately lose their jobs to AI, but they're the ones who don't ever skill up and essentially just copy-paste code. I think most skilled devs will be fine. I do have some concern at the junior level; I could see some companies just trying to replace junior devs with AI.
yubario@reddit
No, that's fairly accurate. If you factor in all the bullshit meetings and drive-bys, even making a simple to-do list app as an experienced dev could take hours or a few days.
You can try to multitask all you want in those bullshit meetings, but it can often more than halve your productivity.
Eagle_Smurf@reddit
Just delete the failing tests, obvs
Napolean_BonerFarte@reddit
If you follow the PRs that Copilot is making against the .NET runtime, that is literally what it does when tests fail and it is asked to fix them. Or it will just remove the Assert in the test so it doesn't fail. It's amazing at hiding problems rather than actually fixing them.
thekwoka@reddit
You have the LLM tooling run the tests locally and feed the results back into itself to write new code.
It's actually fairly decent for some kinds of changes.
yubario@reddit
It can be. If you set up unit tests and logging, the LLM can often fix bugs faster than raw debugging. I do that all the time, honestly; if it doesn't pan out after three attempts I generally just do it the old-fashioned way.
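For the curious, that loop is roughly this shape. A minimal sketch assuming pytest; ask_llm is a placeholder for whatever model call your tooling makes, not a real API:

```python
import subprocess
from pathlib import Path

def ask_llm(prompt: str) -> str:
    """Placeholder: wire this up to whatever model/tooling you use."""
    raise NotImplementedError

def fix_until_green(source: Path, max_attempts: int = 3) -> bool:
    for _ in range(max_attempts):
        result = subprocess.run(
            ["pytest", "-x", "--tb=short"], capture_output=True, text=True
        )
        if result.returncode == 0:
            return True  # tests pass; stop iterating
        # Feed the failure output back in, overwrite with the new attempt.
        source.write_text(ask_llm(
            f"These tests failed:\n{result.stdout}\n\n"
            f"Current code:\n{source.read_text()}\n"
            "Return a corrected version of the whole file."
        ))
    return False  # out of attempts: debug it the old-fashioned way
```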
bn_from_zentara@reddit
Yes, especially if you let AI drive a runtime debugger for you, like in Zentara Code. It can automatically set breakpoints for you, inspect stack frames and variable values, and pause and continue the run. In short, AI can talk to the running code, not just do static analysis. (DISCLAIMER: I am the maintainer)
JesseDotEXE@reddit
Fair enough, that's a realistic approach, having a cut off limit is a good way to go about it.
Franks2000inchTV@reddit
In my experience it's not about next-next-next. You do it, there's a bug, you ask it to try to fix it, and while that's happening you search and find a relevant page in the docs to try to identify the cause of the error.
If it fixes it, then great; if not, you either give more context or paste in the docs URL, and ask it to try again.
It's a collaborative approach when done well, not just a blind click click click.
PragmaticBoredom@reddit
The startup community in my area is being overrun with non-technical founders who are Tweeting, blogging, podcasting, and YouTubing about how the age of software developers is over. One of the largest local VCs Tweets constantly about how glad he is that companies can be done with annoying "low work ethic" developers and build products themselves.
Several of the vibe coding non-technical devs have been doing the build in public trend and applying to Y Combinator. Exactly zero of them got accepted.
motorbikler@reddit
The AI trend seems to be dotcom-like potential mixed with cryptobro-tier grift in a giant human centipede of hype. VCs at the top, and unfortunately devs' mouths sewn to the last ass in the chain.
LALLANAAAAAA@reddit
This might be the most modern sentence ever constructed, I hate it.
Valid point though.
clickrush@reddit
I guess AI has enabled a whole new generation of idea guys.
910_21@reddit
Vibe coding is retarded; it's impossible to actually make a working program with more than like 4 functions without doing at least something manually.
pninify@reddit
I've seen devs and CTOs with over a decade of experience twiddle around with an API without reading docs guessing at how it works. Lazy and bad devs exist and will exist with or without chatgpt.
sudojonz@reddit
While a good point, now with the "power of AI" these same lazy/shit devs can supercharge their shit code which does indeed make things worse than they were before.
GammaGargoyle@reddit
I'm starting to get tired of cleaning up everyone's mess. The amount of negative-value people in the software industry is unbelievable.
lift-and-yeet@reddit
This is why I'm a big proponent of at least basic algorithm challenges in interviewing. Fuck me, I don't want to have to stay late debugging a half-assed O(n^3) algorithm ever again.
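The classic shape of that cleanup, as a hypothetical example: a brute-force triple loop vs. spending O(n) memory on a set to drop a whole factor of n:

```python
# Hypothetical illustration: "does any triple of numbers sum to zero?"
def has_zero_triple_naive(nums):   # O(n^3): the version you debug late at night
    n = len(nums)
    return any(
        nums[i] + nums[j] + nums[k] == 0
        for i in range(n)
        for j in range(i + 1, n)
        for k in range(j + 1, n)
    )

def has_zero_triple(nums):         # O(n^2): replace the third loop with a set
    n = len(nums)
    for i in range(n):
        seen = set()               # values nums[i+1..j-1] seen so far
        for j in range(i + 1, n):
            if -(nums[i] + nums[j]) in seen:
                return True
            seen.add(nums[j])
    return False

print(has_zero_triple([-3, 1, 2, 7]))  # True: -3 + 1 + 2 == 0
```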
unreasonablystuck@reddit
Oh man, I'm not even good at leetcode style questions, and I don't even have a CS degree, but honestly, the amount of disregard for basic polynomial or even exponential complexity coming from supposedly well studied fellows... I have to agree
MsonC118@reddit
THIS! I get the criticism that they shouldn't be asking 5 different LC hards with DP (which I've only faced once out of thousands of interviews, mind you). However, it feels like a cop-out. You may not need to traverse a tree at your job, but the basic knowledge will help you out when writing production code.
sudojonz@reddit
I'll drink (too much) to that. Pfffff
MsonC118@reddit
Yep, just for the new bot that uses ChatGPT to "review" the sh*t code. It's the complete ensh*tification cycle turned up to 11 lol. Sh*t code already made it to prod with humans in the loop, imagine how bad this is gonna get LOL.
Hudell@reddit
You can say the same about Electron, WordPress, Delphi... and I don't know what else before that.
All stuff that has its fair uses but also gave a lot of bad devs the illusion of not being that bad.
sudojonz@reddit
Yes, yes you can. And each iteration only exacerbates this very problem. "AI" is an entirely different level of this, not just a linear increment.
Hudell@reddit
Heck, I remember a time when the complaint was "devs are just looking for whatever keyword seems to make sense in the autocomplete list".
MsonC118@reddit
This. One of my original arguments I posted on another site was basically highlighting this. Harmful code makes it to prod and loses companies money all the time, with humans in the loop. Imagine how bad it will become with LLMs and "vibes" these days. Don't even mention the new PR review tools using LLMs (I wanna pull my hair out after 30 seconds of using it, ARGGHHH). I swear, even with my everyday stuff like my iPhone and PC, I've seen more and more bugs recently that I *NEVER* would've seen before. IDK if that's LLMs, but it doesn't look good either way.
CorrectRate3438@reddit
To be scrupulously fair, I spent last week dealing with a vendor who scrapped one API and replaced it with another, and when you read the old docs (reasonably well written), they point you to the new docs, which either don't exist or consist of a YAML file. They haven't quite scrapped the old one, so some calls need to refer to it.
I have an entire rant on this, but long story short, the only way I can make this work at all is ignoring the pretend documentation and beating on the API until it confesses. I have learned that some optional parameters are in fact required, and that sometimes the URL is api.foo.com and sometimes it's foo.com/api. Why? Who can know?
I'd love to read the docs. I have asked to read the docs. They haven't gotten around to writing them yet, but they have managed to push half-written code into prod. I guess this makes them feel more agile. I swear this industry has actively gotten worse the more agile it gets. Yes I am old.
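"Beating on the API until it confesses" looks roughly like this, by the way. Every name below is hypothetical; it's the shape of the exercise, not the real vendor API:

```python
# Probe both candidate base URLs with every combination of "optional"
# parameters and record which ones the server actually accepts.
import itertools
import requests

BASES = ["https://api.foo.com/v1/orders", "https://foo.com/api/v1/orders"]
OPTIONAL_PARAMS = {"region": "us", "format": "json"}  # "optional" per the docs

for base in BASES:
    for n in range(len(OPTIONAL_PARAMS) + 1):
        for combo in itertools.combinations(OPTIONAL_PARAMS.items(), n):
            try:
                resp = requests.get(base, params=dict(combo), timeout=10)
                print(base, dict(combo), resp.status_code)
            except requests.RequestException as err:
                print(base, dict(combo), "failed:", err)
```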
7cans_short_of_1pack@reddit
Same. Just had a dev sit there for weeks hacking away at a problem. I had a look myself, couldn't find docs, pulled the source code, and worked out a solution from the source.
ALAS_POOR_YORICK_LOL@reddit
This was my first thought as well.
It's not that different than copy pasting from google
marx-was-right-@reddit
Google doesn't invent shit out of thin air that compiles and looks correct imo
ALAS_POOR_YORICK_LOL@reddit
Ime those who copy paste don't care about "looks correct"
Crafty-Confidence975@reddit
And by that you mean he went and pasted in the docs as part of the context for the next attempt!
Any-Ring6621@reddit
This is a really shitty way to use AI: "write me a function to do X" or "write a unit test that tests the thing I just wrote".
AI can augment skilled engineers in unimaginable ways, but this is not it. You use models as a partner in discovery and planning, and it can frequently (not always) get you to a confident answer and implementation significantly faster than just reading the docs or code with your own eyes
Engineers who use AI in this way will not be replaced. The rest of them absolutely will be.
educational_escapism@reddit
I have a senior engineer whose first response to any question is "ChatGPT said this", even when it's something he probably should know off the top of his head or something ChatGPT couldn't possibly know. It's wrong every time, but he insists it isn't for a good 10-30 minutes before you either give up or he finally thinks about it.
It's not just the juniors we should be worried about.
qwrtgvbkoteqqsd@reddit
ok, but real AI coders would have just copied the API docs and given them to ChatGPT.
Helpful-Desk-8334@reddit
He- ... he didn't even read the docs?
Docs and GitHub code examples exponentially increase the amount and quality of output I can generate.
I'm able to show the model exact implementations of frameworks THANKS to the docs and THANKS to people's organic works on GitHub.
People are becoming way, way too comfortable just writing two sentences into an input field and pretending that's all it takes.
ShardsOfSalt@reddit
Just tell the AI to explain each line of code.
hardolaf@reddit
Yeah, I'm also seeing all of the laziness by people in senior and above level roles. The juniors are largely trying to learn and rarely relying on AI to help them out.
WittyCattle6982@reddit
You should have had the AI read the docs.
forbiddenknowledg3@reddit
It is quite insane to me that AI is trained on all this data... yet it doesn't know how many basic APIs work. It is simply misleading people. All those saying "it'll only get better" are missing the point IMO.
RedTheRobot@reddit
This is a prime example of why what OP is posting isn't an accurate statement. ChatGPT/AI isn't making good devs bad; it is just making it easier to spot bad devs.
In your example, that dev would have had a Stack Overflow answer they would just copy, or maybe a post or a YouTube video. They weren't understanding the logic; they were just copying it and hoping it worked, and for the most part it did, because it was built by people who did read the docs.
So ChatGPT in the right hands makes a good dev more efficient, and in the wrong hands just makes a bad dev more obvious.
mguinhos@reddit
I think that if he had actually taken the documentation and fed it into the ChatGPT chat, it might have fixed the issue.
Usually works for me.
Routine_Owl811@reddit
I won't lie, I think this is becoming me on days I want to ship things fast. It all comes to a halt, though, when there's a bug, ChatGPT can't fix it, and I don't even know what the code does, so I can't fix it myself.
I need to stop being so reliant on it. Struggling to find the balance though because it's still a very good tool.
marx-was-right-@reddit
Yikes
Routine_Owl811@reddit
I agree.
seatangle@reddit
Is it you who wants to ship code fast or are you under pressure to do so? If it's the latter, I have some sympathy; those kinds of work environments suck. If it's the former, don't do that. Faster != better, and you're putting yourself and your team in a real pickle with a codebase you don't even understand.
Routine_Owl811@reddit
A mixture of both. Thanks for the advice, I need a kick up the backside to stop taking the "easy" route. Need to start becoming more familiar with the documentation than relying on AI generated code.
Brief-Knowledge-629@reddit
I try to avoid using AI as much as possible, but it's also important to point out how bad documentation has gotten. I imagine a lot of it is being written by AI now, using previous versions written poorly by humans as training data.
I got caught in a circular reference in the SQLAlchemy docs yesterday. The documentation for Method A said see Method B for more information; Method B said to see Method A for more information. Neither method explained what it did.
K9ZAZ@reddit
That's fair, but I read the docs! They were fine!
Infinite_Maximum_820@reddit
Did you file a bug against the project ?
RandomLettersJDIKVE@reddit
Taking someone else's time with your zero-shot attempt is rude.
K9ZAZ@reddit
Yeah, I wasn't pleased.
armahillo@reddit
That highlights one of the big overlooked drawbacks of using an LLM to generate solutions.
Normally when we write things organically, even if it incorporates copypasta from a website, we can roll back to an earlier iteration when things don't work right. You learn to start broadly and then hone the function until it gets you the output you want.
Generated code lacks the history and process needed to roll back iterations.
marx-was-right-@reddit
What the actual fuck
Hexorg@reddit
Did you guys forget whole departments having downtime when Google/the internet is down? It's just another tool; stop fearmongering. You're not helping anyone, just making your blood boil. There were plenty of junior devs before who went into programming for money and didn't care about good code, and some graduated by having someone else do their homework. And yeah, some will do the same with ChatGPT. And they'll get hired, and eventually their contribution will be a net negative, and they'll get fired. The world isn't ending.
hashkanIV@reddit
What?? I can stop anytime I want to.
Lordmaile@reddit
Not only engineers... why ask the well-stocked knowledge base when you can just copy the ticket into GPT and vibe-support from there?
da_supreme_patriarch@reddit
This has been a problem for some time. I used to receive PRs from some of my juniors that were letter-for-letter copy-pasted from Stack Overflow; when asked to actually explain the code, some would fold immediately because they didn't actually understand the answer. AI models make this problem worse, but the root cause of it all is still the same: the people who would mindlessly copy-paste code are still mindlessly copy-pasting code. Granted, the number of junior engineers at the moment has skyrocketed, so the percentage of people who don't really understand what they are writing has gone up as well. But I do believe it is not that bad; these people will either completely crash out once dropped into a real-world project or get "baptized" in it into somewhat of a competent engineer.
fuzzynyanko@reddit
I've run into people that created their part of the code by copy/pasting from StackOverflow, 90-98% of it. No error handling at all, so when the app size got larger, the crashes started to go up.
JamesRigoberto@reddit
I have done so. My first programs were basically like that. Quite a lot of the juniors I saw at the time wrote code like that. So I assume quite a lot of juniors today will write most of their code with AI and probably not realise the quality of the code.
Companies should have quality control to ensure that bad code doesn't reach production. Juniors will have feedback and the opportunity to learn. It's just a matter of them taking the opportunity.
This happens in all fields at all levels. There are always people interested in learning and improving, and people who are not.
JamesRigoberto@reddit
^ this ^
I started my career as one of those who copy pasted from stack overflow and other sites and I am very grateful for that possibility. It did help me a lot when I had no one around.
I believe AI is the same. It can be a great help for novice developers when facing the white page problem.
But eventually each person will either learn the ropes or remain low level or even find themselves out of their jobs, not replaced by AI but by another more competent developer.
Just_Information334@reddit
very bad and very lazy junior engineers
Hot take: you can remove the "junior" and it describes 90% of devs.
Do you read the documentation? Do you read books instead of random blog posts before implementing some new architectural pattern / method (DDD, CQRS, Scrum, Kanban, CI, CD, TDD etc.)? When was the last time you built something new instead of piping the results of multiple APIs or libraries together?
CorrectRate3438@reddit
You may find this interesting:
https://www.theneurondaily.com/p/here-s-what-your-brain-on-chatgpt-looks-like
As an aside, it mentions that GPS is making us more directionally inept; it's a relief to know that it isn't just in my imagination.
codemuncher@reddit
Motivation is either extrinsic or intrinsic.
The extrinsically motivated people will be using AI to "complete" tasks and generally try to look good to their business sponsors. Short-term thinking rules the day here.
The intrinsically motivated people - myself included - want, no, NEED to know WHY something works. They'll keep doing the digging.
Without speaking for someone else, it's like a mental obsession, or a mental flaw really. I can't be satisfied by surface-level answers; I need to dig in. When something is broken, you call me and I can get it Fixed with a capital F. But if you want me to vibe code a half-broken POS... uhhh, yeah, I can't do it.
Fit-Notice-1248@reddit
And the reality I've come to is that a lot of people are absolutely okay with just having surface level knowledge. They will not want to dig deeper, either because it requires effort or they don't think it's important
codemuncher@reddit
Agree.
I feel confident that people who know how things work will always have jobs. The other people? No idea.
Fit-Notice-1248@reddit
Unfortunately, in my experience those people do have jobs and remain in jobs. They do just enough to keep management happy and the project afloat even if the codebase is a ticking time bomb, due to their negligence of not understanding project requirements or understanding their own code.
GaTechThomas@reddit
There were plenty of those already. But they're very bad senior engineers now. AI is a tool. A portion of people will use the tool well. A portion will treat every tool as a hammer. Pay attention to the output and correct the dev behavior as needed.
ContextMission8629@reddit
Not at my company. In my workplace, ChatGPT is creating lazy "senior engineers" (pun intended).
I use AI as a productivity enhancement to my work and thought. I want the code to correctly reflect the logic and the underlying intentions/decisions, for long-term maintainability. This keeps the application flow/logic in my brain for a long time and helps when trying to solve new problems.
But I sometimes swear, because another *senior* engineer who onboarded a few months ago uses ChatGPT and Cursor extensively to write code. When I ask him something, he just throws me the AI-generated code and tells me to read it and try to run it :)
He is a data engineer, not a software engineer, so maybe he doesn't code better than software people. But the experience over the past few months has been draining me. I mean, how can a senior person work in such a bad style?
siammang@reddit
It's no different from when they just copied and pasted from Stack Overflow or GitHub code.
facinabush@reddit
Worst thing to happen since compilers started writing machine code.
RaKoViTs@reddit
It is. All students in computer science doing their projects by just vibe coding are in trouble. That's why being a good programmer will be OP in the near future. AI will not get to a point where it can produce 100% clean and correct code without guidance, so being good 5 years from now can give you a crazy advantage.
Jone469@reddit
or maybe it will be able to produce code without bugs in 5 years
RaKoViTs@reddit
Without supervision? No way
No_Heat2441@reddit
This kind of thinking is literally the only thing stopping me from leaving the industry. We just have to suck it up for a few more years and then hopefully things will get good again.
codesnik@reddit
the fun thing is that anyone making anything usable with llms will have to train and retrain their critical thinking, and throw away some previously acquired heuristics. Like, for example, "code that looks bad is bad, so code that looks good is good". Style gets decoupled from robustness.
juniors will have a really bad time getting hired the next few years. And then they'll be asked to produce much more from the start.
U4-EA@reddit
I am not worried at all. Software engineering is a skill that takes years of hard work to master. If juniors want to be lazy and if managers want to hire incompetent staff, that is on them. I expect there to be a huge backlash against AI in years to come and there will be a mountain of tech debt to get through. Juniors right now should spend their time learning the craft to get themselves into a position to benefit from that.
BrianHubble@reddit
The real worry is that LLMs and "AI" will make companies less likely to hire junior engineers in the first place.
Anxious_Algae9609@reddit
You should probably be more worried about your job.
blahajlife@reddit
It's going to be a problem across society, the lack of critical thinking skills.
And when these services have outages, people have nothing at all to fall back on.
SKabanov@reddit
I've had experienced coworkers attempt to use ChatGPT output as a point of authority in technical discussions, sometimes just plain copypasta-ing the output as if it were their own thoughts. It's mind-boggling how some people view critical thinking as such an onerous burden that they gleefully ceded it to the first credible-sounding technology the moment it came along, but moreover, it seems so myopic. You use LLMs to generate code, and you use LLMs to formulate arguments to justify said code; why would a company need *you*? You've turned yourself into a glorified pass-through for your LLM!
itsgreater9000@reddit
Coworker A "wrote" (used ChatGPT) a technical document that coworker B disagreed with. Coworker B then went to ChatGPT, copy and pasted the contents of the document into ChatGPT, and asked it to find what was wrong. ChatGPT responded to a specific subsection with the exact same text, which coworker B did not read, and then used it as an argument "against" coworker A's document.
I legitimately felt like I was stepping into the Twilight Zone that morning when I was reading the comments on the document.
Unlucky-Ice6810@reddit
Not sure if it's just me, but I've found ChatGPT to be a sycophant that will just spit out what you WANT to hear. If there's even a smidge of bias in the prompt, the model will generate an output in THAT direction, unless it's an obvious question like what is 1 + 1.
Sometimes it'd just straight up parrot back my prompt but in more verbiage.
ebtukukxnncf@reddit
According to chat gpt I am very useful
Opening_Persimmon_71@reddit
People have to learn that to an LLM, there is no difference between a hallucination and a "regular" output. It has absolutely no concept of the physical world. Even when its output is "correct" it still hallucinated it, it just happened to map onto reality enough to be acceptable.
People like your co-workers see it as a crystal ball when it's a magic 8-ball.
raediaspora@reddit
This is the concept I've been trying to get through to my colleagues, but they don't seem to get it. Their argument is always that humans aren't perfect all the time either. When all that makes an output from an LLM correct is the human brain making sense of it. The LLM has no intentions…
micseydel@reddit
I've started thinking of it this way: all the output is a hallucination, it may be useful, but until it's been verified through traditional means it's just a hallucination. Someone on a different sub recently shared a fact, I expressed surprise, and they revealed they'd asked 3 AIs thinking they were diversifying their sources. (I tested manually and falsified their claim.)
I think chatbot interfaces should legally have a warning on the screen telling users they're reading hallucinations they have to verify, to not just trust it.
AdnanM_@reddit
ChatGPT does have a warning but everyone ignores it.
micseydel@reddit
Thanks for the comment. I just pulled up ChatGPT and see two upsells but no warning - but once I start a chat, it does say at the bottom in gray text "ChatGPT can make mistakes. Check important info."
Again, thank you, I'll make sure to bring this up in the future.
RestitutorInvictus@reddit
That warning would just be ignored anyways
micseydel@reddit
Sure, but after they ignore it I can point to it. I don't think anything is going to stop people from being lazy other than shame (which is usually not a useful tool).
GoTeamLightningbolt@reddit
To be fair, it's like billions of weighted magic 8-balls, so on average they're right more often than they're wrong /s-kinda.
Hot_Slice@reddit
Before this they would just parrot what they read in a book - "design patterns", "hexagonal architecture" etc. I used to call that the Argument from Authority logical fallacy. But Argument from ChatGPT is just so much worse because they don't even have any credibility.
electrogeek8086@reddit
Hexagonal architecture?
motorbikler@reddit
It's really good architecture actually, some would even say six times better
Dolii@reddit
This is really crazy. I had a situation at work where someone from an external team told us we shouldn't wrap everything in forwardRef (a React utility, in case you're not familiar with it) due to performance concerns. My colleagues asked ChatGPT about it, and it responded that forwardRef doesn't cause any performance issues. I was really surprised. Why not check the real source of truth - React's source code? So I did, and I found out that it can impact performance during development because it does some extra work for debugging purposes.
PerduDansLocean@reddit
The kicker is they constantly outsource their critical thinking skills to AI, yet still panic about it taking their jobs??? It makes no sense to me.
Okay_I_Go_Now@reddit
It's pretty obvious. We've been brainwashed into thinking we can't compete without it, following all the proclamations that "AI won't take your job, someone who uses AI will".
So now everyone is in a race to become totally dependent on it.
PerduDansLocean@reddit
Tragedy of the commons, I guess. No worries, they'll get sidelined by people who use AI alongside their critical thinking skills.
pagerussell@reddit
The brain is a muscle. Developing critical thinking is like going to the gym, for your brain.
It should be the most important thing you cultivate.
eddie_cat@reddit
i agree with you in spirit, but the brain is not a muscle lol
JojoTheWolfBoy@reddit
I saw this with my own kids. When something doesn't work, they're helpless and have to ask me to look at it. When I ask what they've tried to do to troubleshoot or fix it, they have pretty much done nothing. I blame the fact that they've never really had to do a lot of things "manually" before, and a lot of things are designed to "just work" these days. However, when I help them, I don't just do it for them. I guide them through the thought process. "OK, let's think about the actual problem here. You do X, expect Y, and get Z instead. Fill in the blanks for me. OK, how does X work? What part of the process is X failing on? Why might that happen? What can we try to verify if that's the issue or not?" I'm not sure if that's helping, but I'd like to think they'll draw upon those experiences when they're out on their own in the world. And without that kind of experience, these junior developers are not going very far.
forbiddenknowledg3@reddit
Idiocracy.
carlemur@reddit
First they stole their attention with smartphones
Now they'll steal their critical thinking skills with LLMs
Ok-Tie545@reddit
Yup. And if the world was different, both of these things could help increase attention and critical thinking. But humans just aren't ready for these tools they created.
motorbikler@reddit
My body is ready for the Butlerian Jihad.
ings0c@reddit
Just playing devil's advocate, but this is nearly word for word what I was told in school:
"Don't use calculators all the time; you'll forget how to do mental arithmetic, and how are you going to cope when there's no calculator around? You aren't going to have one in your pocket all the time."
Correct-Anything-959@reddit
I'm more worried that right now we're paying money to train these models to become more sophisticated and the end state is going to be a disaster.
Kids were calling the wrong stuff late stage capitalism imo.
marx-was-right-@reddit
They aren't improving and they're bleeding money. Don't worry.
Correct-Anything-959@reddit
That's... Just not true.
The models are getting better because they are all semi supervised by free and paying customers who rely on them for work.
The roadmap always was to gather as much data as possible with free utilities -> train models -> when good enough people pay to train models at scale -> whoever has the best tuned model can do whatever.
Then there will be fewer people employed.
Sounds like universal basic income until they start to think about who the useless eaters are.
Maybe they'll come up with a different name this time.
marx-was-right-@reddit
Getting better at what? What business use case? Be specific. I haven't seen improvements in anything since 2022.
StackWeaver@reddit
Eek... finally you've been disagreed with! You're delusional. And if you are in fact experienced, you should be ashamed of your complete lack of understanding of how tech moves.
marx-was-right-@reddit
Sooo you can't give an example then?
StackWeaver@reddit
Course I can. Why would I give it to you, you contrarian prick?
Correct-Anything-959@reddit
Being obtuse about how AI works and what the plan has been at these large organizations doesn't make you correct.
It's getting better at a number of verticals.
marx-was-right-@reddit
So you can't give an example then? "A number of verticals" isn't an example.
electrogeek8086@reddit
Yeah, something is fishy here. Indeed, they're getting better at what? Also, if the paying customers are semi-supervising it, how is it getting better if said users can't even evaluate whether the outputs make sense?
marx-was-right-@reddit
They deleted their comment lol.
Any time I try to pin someone down on this, the only answers I ever seem to get are "AI art" and competitive programming problems, neither of which are business use cases.
StackWeaver@reddit
No, sweetheart. It's because they realize they are talking to a sub of flat earthers.
electrogeek8086@reddit
Yeah, well, I have a degree in engineering physics (although I graduated years ago), and if so many devs and data scientists really are that shitty, then I'm wondering if I could have a shot myself at a junior position loll
billcy@reddit
What do you mean "kids were calling the wrong stuff late stage capitalism"? Can you give me an example?
sudojonz@reddit
Right? This is all part and parcel of late stage capitalism, given that the tech has reached this point during this phase.
dlm2137@reddit
Fredric Jameson
blahajlife@reddit
Yup, the bait and switch is on the cards for sure. Free whilst people train it, then hike the prices. It's not "free because you're the product" this time; it's free because you're making the product.
Correct-Anything-959@reddit
Huh? It's not free, I'm a bit confused.
Everyone is paying to semi supervise these models
blahajlife@reddit
GPT has a free tier, I mean
Correct-Anything-959@reddit
Oh right but I mean that it's not quite good enough. So it's at a stage where we were the product back during data collection.
I believe that stage passed.
Now we're paying to become supervisors to these semi trained models.
That's why our move to open source must be swift.
Sheldor5@reddit
Society has always been stupid, but it's getting worse. Remember covid... one part thought it was going to kill humanity and that you were the devil if you didn't take an experimental vaccine, and another part thought it was a hoax to vax people with microchips... meanwhile it was one of the mildest pandemics ever recorded.
Ok-Yogurt2360@reddit
Knew people who thought in a similar way. They changed their minds after getting hit with a bad case of covid. They are still not completely recovered from the damage it did to their bodies.
Also don't forget that it matters what you compare it with. Black plague vs Covid would be a tricky comparison for example.
What I'm trying to say is: be careful about what you measure. This is also relevant as an SE. Measurements only tell you exactly what is measured; the rest is a combination of logic, experience, and the wisdom to recognize the limitations of measurements.
Sheldor5@reddit
Everybody can die of anything ... just because you know of one bad case means nothing, and it just shows how biased you are; that's why I look at both global numbers and inside my personal bubble.
inglandation@reddit
Going to be? My man, have you seen who's president in the US? A society with critical-thinking skills would not reach that point.
ZorbaTHut@reddit
This is the 2025 equivalent of "you won't always have a calculator with you!"
PragmaticBoredom@reddit
The heavily LLM-pilled young people I know already have subscriptions to multiple providers and they know all of the current free options as well.
An outage of one provider won't slow them down in the slightest.
The real problem is that the LLM-addicted seem to be most likely to copy and paste sensitive info into any random LLM they find on OpenRouter that is listed as "free", without reading the fine print that it's free because they're using prompts for training info.
Colorectal-Ambivalen@reddit
As a new parent I am incredibly concerned with how schools plan on integrating "AI" into their testing and curriculum.
I'd rather my child not have much of any technology in school, especially at a young age-- I can train him up on that, just as I learned about technology outside of k-12.
I worry that the society he'll grow up in will be populated by people that are essentially fleshy UIs for LLMs.
Quirky-Local559@reddit
I suppose this is true for us too? What do we fall back on if Stack Overflow or Google is dead? Back to flipping pages?
ChrisMartins001@reddit
Hopefully as you gain more experience, you can try to figure it out yourself. Play around with the code; it will take longer, but I always feel it's more rewarding when you do figure it out, as opposed to googling everything.
ba-na-na-@reddit
Flipping manual pages is not a problem; the problem is asking LLMs to generate every basic chunk of code for you.
chrisfathead1@reddit
I mentored a junior guy 3 years ago, and one thing I feel great about is that I taught him how to use the debugger in the IDE and walk through code execution line by line instead of relying on print statements to debug. He's crushing it now.
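(If anyone wants the zero-setup version in Python: breakpoint() drops you straight into pdb. The function and variable names here are just for illustration:)

    def apply_discount(price, rate):
        breakpoint()   # pauses execution here; 'n' steps to the next line, 'p price' prints a variable
        discounted = price * (1 - rate)
        return discounted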
TheLastMaleUnicorn@reddit
You have a context window bigger than an LLM's. Please use it to critically analyze your solutions.
zombie_girraffe@reddit
No reason to worry about that, it's job security for those of us who still know how to attach a debugger.
Constant-Listen834@reddit
The crazy thing is the new AI agents are able to effectively do this. If you give them an error message they will attach a debugger and fix the error themselves.
sneed_o_matic@reddit
Maybe for syntax errors and smaller logic errors. They won't figure out how business logic is broken though.
Strus@reddit
They do with enough context. It depends on the tooling you have available for your language and the tools you use for agentic coding, but Claude in Cursor has no issues adding debug logs and launching the app to check what's happening, then fixing the bug based on the output.
sneed_o_matic@reddit
I've done that with Cursor and Claude, and while it can work sometimes, it does have a tendency to paint itself into a corner. Don't get me wrong, it's impressive, but it's still not at the point where you can reliably set it and forget it, even with architecture documents and all that.
officerblues@reddit
I have been repeatedly telling juniors that something like:
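    # (representative of the pattern discussed in the replies below;
    #  do_something stands in for whatever call the junior is wrapping)
    try:
        result = do_something()
    except Exception as e:
        raise e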
is useless and doesn't need the try block. For whatever reason, Copilot spits this out a lot in our codebase and the kids insist on using it. I've even had pushback on my reviews from juniors defending this.
Yes, the next generation of seniors will have a lot to learn on the job.
Suspicious_State_318@reddit
Oh god that's terrible lol
Avocadonot@reddit
Isn't this the correct pattern as long as you add more context, like an error message, or wrap it in a different exception? I see this everywhere in Java codebases.
commonsearchterm@reddit
The 2nd raise (raise e) changes the original stack trace from the exception. You can just write a bare raise on its own.
officerblues@reddit
If you do more stuff, yes. I'm talking about literally that pattern, where you have an except purely for re-raising.
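To make commonsearchterm's point concrete (a tiny runnable sketch; load, noisy, and clean are invented names):

    def load():
        raise ValueError("bad input")

    def noisy():
        try:
            load()
        except ValueError as e:
            raise e   # the traceback gains an extra frame pointing at this line

    def clean():
        try:
            load()
        except ValueError:
            raise     # a bare raise re-raises with the traceback untouched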
larsmaehlum@reddit
I do this at times just because it's a convenient place to put a breakpoint, but it should never end up as part of the PR...
officerblues@reddit
Yeah, that pattern is also something I do a lot, which is likely why copilot spits it out, but you need to critically think about the things you want to commit and push, right?
larsmaehlum@reddit
Exactly. Things that are situationally useful should usually be cleaned up before they're committed, but it takes experience to understand why that is. And that understanding doesn't happen unless you get firm but constructive feedback from your more experienced peers.
If a junior is not getting that feedback, or not listening to it, they will end up as a junior with lots of years on their resume. I believe AI coding will make this a lot more common. Hopefully the seniors can enforce some standards so they can learn.
officerblues@reddit
I've also been noticing a reduced amount of care going into code reviews throughout the years (I think I've been doing this for a decade, now). Seniors who just rubber stamp LGTM, or that only care about the obvious stuff in PRs have been more and more common. I have been fighting that for a while, but I always feel like it's an uphill battle.
I used to worry or be angry about it, but then I realized that my teams would, long term, consistently deliver more and that my performance actually looks nice, so whenever people refuse to listen or just disagree on the obvious things I just make a note somewhere and plan my next vacation. Companies still need good people, even if they think they need AI. I'm happy to charge them extra for doing my normal job.
theNeumannArchitect@reddit
Why not just break on exceptions? VS Code can break on several different layers of exceptions (all, user-caught, uncaught).
larsmaehlum@reddit
Sure, but I might have proper handling further up the stack, and only really care about that one path.
theNeumannArchitect@reddit
Fair, yeah. I thought about that as I posted because I've done it before but wanted to make sure I wasn't missing anything else lol
bart007345@reddit
If you see this a lot, why don't you add it to the system prompt to not do it?
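E.g. a line like this in a rules file (.cursorrules, or .github/copilot-instructions.md for Copilot; the wording is just a sketch):

    Never wrap code in try/except just to re-raise the exception unchanged.
    Only catch exceptions that the surrounding code actually handles.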
officerblues@reddit
This is not the point, though. AI will spit out stupid code sometimes, the issue is that people get this and move on, with no critical thinking on whether that is a reasonable piece of code.
Yeah, I get it that people can code effectively with AI. People can use whatever tools make them happy, so long as they do what they have to do. This thread is complaining about how the new youngsters are not equipped to use this effectively, though.
bart007345@reddit
The point is you saw this pattern multiple times and you told the juniors.
If it keeps happening you either tell the LLM to stop doing it or you work out why your juniors aren't listening to you.
Either way don't blame the tools.
officerblues@reddit
Sorry brother, you look angry at something. Did I look like I was blaming the tools? Or even the juniors? Inexperienced is the expected state of juniors, friend. I was remarking on how little thought is going into coding nowadays, if that kind of pattern gets through multiple times. I don't get why you are so angry here.
ur_fault@reddit
I completely agree with this point:
If your juniors aren't listening to you when you request that they not do something, it has nothing to do with LLMs. They wouldn't have listened to you without LLMs. That's a "following directions" problem, not a "critical thinking" problem or a "thought going into code" problem.
There is a reason they aren't listening to you. It's either them or it's you... and based on your complete misunderstanding of the issue and your aggressive (and confused) response here:
I'd say it's probably you.
officerblues@reddit
I mean, they are listening to me, though. I don't know why you assume they aren't? There's just many of them and they all do the same thing. I just gave an example, and then was immediately attacked by people assuming I was an old man yelling at clouds. You guys really need to chill.
ur_fault@reddit
By "aren't listening to you" I meant that they aren't following directions regarding the error handling stuff you're telling them.
...which is what I meant when I said that it was a "following directions" problem in that same response.
Like, you can't even comprehend a simple response on reddit. Don't you think it's possible that maybe your juniors/LLMs aren't the issue here and maybe you are somewhat responsible for the issues you are having at work?
No one was attacking you; we were pointing out problems with your logic, and pointing out supporting evidence based on your responses. You are the one making assumptions here by assuming that this was an attack.
Do you know what that expression means? No one is assuming that. What we're saying here is that it's very likely that the communication/following-directions issues your juniors are having are in part due to your inability to correctly assess the situation and/or communicate your wishes effectively.
Again, based on your lack of reading comprehension, the fact that you immediately became defensive because you thought we were all attacking you, confidently using expressions that you don't even understand, and the fact that you're projecting your anger onto us... I really think it'd be worth doing some introspection here. I think it would really benefit you and your team.
officerblues@reddit
Brother, look at what you wrote. Here's my actual problem, which my post was alluding to: I have had multiple junior people come up with code reviews with patterns that are obviously wrong in the context. I have had to correct those, some of which had pushback.
Just look at that last response, man. How can you claim you don't need to chill, lol.
ur_fault@reddit
I feel bad for your juniors.
officerblues@reddit
I am trying really hard not to offend you here, even though you've been doing that to me for a while. Let me try to spell this out one more time:
I do not have a problem with people unable to follow directions. I have never stated that. What I have had is an influx of multiple people pushing PRs not only with bad code (because that's expected), but with obviously wrong patterns that you would not write unless suggested. I have corrected those, juniors understood and have moved on. I don't know why you would feel bad for my juniors, you don't know shit about me. You know some weird fantasy you made up about a reddit comment chain.
I have been doing this for a while now, I've trained multiple people, some of the seniors working with me were once people I mentored up into promotion. You wouldn't know this though, because this post here has nothing about my professional capacity, just a simple remark of one thing I observed. Why did that tickle you this hard and deep?
bart007345@reddit
No anger, my friend just questioning your point.
Constant-Listen834@reddit
Bro hasn't heard of error handling
pl487@reddit
I had this same issue. But this pattern creates a place for error handling to go when it is implemented. It signifies an error handling boundary. It isn't useless, it's just not what you would do if you were typing it.
officerblues@reddit
It also breaks the YAGNI principle. I'm not against copilot spitting it out, I'm against it making it to code review.
pl487@reddit
All principles are to be rethought in the AI world. Why do we avoid writing code we may not need? Because writing code has a cost. But that cost just got dramatically lower, and that changes the math.
Mapariensis@reddit
No, the reason why we avoid unnecessary code is because reading/understanding code has a cost. Writing it is the easy part in most code bases that I've touched.
officerblues@reddit
Yes. Reading code is harder than writing code, it has always been like that. Every line of code is a liability.
The_0bserver@reddit
I've been showing my juniors how I use ChatGPT.
I also regularly use ChatGPT in front of them, and specifically point out how I iteratively improve it. I still get shitty code (especially since everyone's on Python in this org, which helps them write shittier code). And it's not at all close to the point where I can enforce linters.
Main-Eagle-26@reddit
It is.
I remember when a lot of folks had the philosophy that new grads and juniors shouldn't even be copy pasting code so that they could build more functional memory with what they're writing by actually typing it out. That might be a tad silly, but this is a much more significant example of a crutch that is just preventing people from actually learning while they work.
I've had to push back on code from less experienced engineers several times because they don't understand the code they wrote.
beefz0r@reddit
Hell, it makes even seniors lazy
Some-Vermicelli-7539@reddit
Not just engineers. People in general.
Zambeezi@reddit
Sometimes in this field I wish we could talk to each other like traders do. When someone's acting f*ing dumb, we should be able to tell them unambiguously that they are being f*ing dumb.
The fact that people can say stuff like "I don't know, ChatGPT said it" and we can't unequivocally tell them to "use their f*ing brain" means we need to repeatedly use rhetorical arguments to convince them that, in fact, they are being f*ing dumb.
Sometimes there is real value in being told off. You won't remember all of the rhetorical arguments, but you will 100% remember the feeling of being schooled, and adjust accordingly.
This stuff is serious, and can cost money, time, reputation and potentially even lives (in certain industries).
/rant
bmxpert1@reddit
I've tried this vibe coding thing and I find it fucking miserable. I actually enjoy coding, not just being a middleman.
Accomplished_Pea7029@reddit
Same, I kind of hope LLMs will never get good enough for vibe coding to actually succeed; if they do, I'm getting out of this field.
Constant-Listen834@reddit
Unfortunately I've been using all the LLMs, as I was basically forced into vibe coding by my leadership. I was under the impression that LLM code is garbage, but honestly using Anthropic's new model (Claude 3.7) made me realize we're kinda screwed. Idk what they did, but that model can write the vast majority of code with good quality.
So yeah, I expect in another year or two we could be out of luck. Which sucks, because writing code with prompts was such a miserable experience I almost quit my job over this project.
kowdermesiter@reddit
I'm on 4.0 now and it's wonderful. Nobody forced me to use it though. The velocity I can achieve is unprecedented, and the code is indeed not bad, but not brilliant either; it's at a strong mid level. But it can refactor the code after it starts working, until I'm satisfied.
I don't think we are screwed. It takes a lot of experience to know exactly what to work on and how to architect the system and this knowledge will remain a valuable skill. Not for todo apps ofc, but building more complex software will still require human oversight.
VintageModified@reddit
Claude 3.7 is great at greenfield and thinking tasks. Try asking it to make a modification to an existing sensitive business product. Hell try asking it to make any non-trivial change to UI.
It consistently drops the ball for me. With all the handholding and guard rails and corrections I have to give it (thanks to my existing knowledge on the domain and tech), I can't imagine a jr dev using it effectively for that, much less a non-technical person.
If all you do is create proof of concepts or implement a UI based on a super detailed spec, then yeah, you might not be able to continue doing that for long. But if you're good at doing that, you can easily transfer your skills to something else. The core skill is problem solving, and that will always be needed.
NotYourMom132@reddit
It's good at building MVPs at best. Unfortunately most of my job is not building from scratch. I would be lucky to even get to build anything new.
RestitutorInvictus@reddit
While I agree with you, I still think Claude 3.7 gets into bizarre rabbit holes and overcomplicates things.
Constant-Listen834@reddit
It definitely does. I'm not saying it's the end-all to do your work, but it can be very effective at writing code.
ivorobioff@reddit
Yep, people say that with AI developers will be more efficient and do their work twice as fast, but they forget about the motivation that drives developers. I personally love coding, designing systems, etc., and I do that proactively, which makes me fast and efficient. But reviewing AI-generated crap, explaining to the AI what to do step by step, and explaining common sense on a daily basis is not fun at all, and that will make the job boring as hell, which will surely impact my performance significantly.
sarhoshamiral@reddit
It is good for one-off scripts or internal tools that won't be maintained beyond their initial version.
It sucks for code that needs to be maintained for years, though, since LLM output doesn't care about refactoring most of the time. It will happily repeat large sections of code over and over again.
crazyeddie123@reddit
what scares me is when it's good enough that "code that needs to be maintained for years" won't be a thing because you just vibe code a whole new app each time you want it to do something different
Famous-Spring-1428@reddit
I tried the Copilot Agent feature last weekend for a small side project. At first I was extremely impressed how fast I was able to setup a bare bones prototype, but the more I worked on it and the larger the project became, the more I wanted to rip my hair out.
I have finally "finished" it, but I gotta say, I really don't want to touch this code ever again.
bmxpert1@reddit
Yes, this is my exact experience. In the end I completed the project and it works, but I don't have a great understanding of the codebase in the way I inherently would had I written it myself, making it a nightmare to maintain.
SamWest98@reddit
Literally. I gave Jesus (Cursor) the wheel with a solo project I was building and the result was so bad I had to restore an old checkpoint and lost a half day of work
levnikolayevichleo@reddit
The annoying part is people who want to get work done without understanding how any of the code works. Just copy/paste stuff and then wonder why it doesn't work. I really miss the pre-AI days sometimes.
I've been working on a project with another teammate of mine who literally just copy-pastes code or asks ChatGPT without building an understanding or asking me questions.
Mind you this is someone who has more overall experience than me, but less experience in the current company. I'm trying my best to be nice and let them make mistakes as I sit next to them while they code.
But it's damn slow and I end up fixing their mistakes after going home. I really wish for a teammate who could work in parallel and research on their own. Asking for help is okay as long as you do your own research or ask me questions when you don't understand something.
As the deadline for the project nears, I think I'll have no choice but to take over and finish the thing in a weekend or so.
JojoTheWolfBoy@reddit
I would agree here. It's all well and good that you can quickly write code. But if you have no clue how it works or how to debug or fix it, then you're saving time up front just to waste that same amount of time (or more) later on. Intimate understanding of how things work is critical, even if something is automated.
ivorobioff@reddit
I've got a few words on Medium about why vibe coding will never replace traditional coding:
https://medium.com/@ivorobioff/vibe-coding-will-never-replace-traditional-coding-63be3dc0f859
ChessCommander@reddit
Don't sweat it. I think it will rapidly make those who would fail fail faster, and those who will succeed succeed faster. The tools aren't going away, and it would be unethical to tell some individuals they can't use them. The best we can do is try to educate and hope the rest falls into place.
Mr_Gonzalez15@reddit
I worry more and more that AI's primary goal is to tell people what they want to hear so that people like it and then it makes shit up in order to accomplish its mission.
phoenix823@reddit
I don't know man. Google and Stackoverflow were around for a long time. I think there are just a lot of unengaged employees.
botterway@reddit
From my perspective, this dumbing down via LLMs and vibe coding is awesome, because as an experienced senior engineer who's been coding for 35 years, this pretty much guarantees a lucrative career until I retire.
We're going to see absolute garbage piled into codebases thanks to people using AI, and it'll work great right up until the point where it doesn't.
Then there'll be huge security holes, massive performance issues, and completely unmaintainable tech debt generated by this BS. And of course, what that means is that - similar to Y2K - companies will panic and pay sky high rates for people with actual coding skills like me, to come and unravel the mess that's been made. I reckon that'll start kicking in in about 3 years, and give me 5 years of easy lucrative work to allow me to retire early.
Bring it on.
Mr-Canadian-Man@reddit
I agree. But I don't know if it's relatively close, like the 3 years you said, or more like 5-6.
Why 3, do you think?
botterway@reddit
Because LinkedIn is wall-to-wall vibe coding BS, and every company I know (including the one where I work) is doubling down on this nonsense. You might be right, it might be nearer 5, though.
samswanner@reddit
From what I've seen it's also producing very bad and lazy senior devs. Admitted, these weren't great senior devs to begin with, but they've found a new crutch
monkeyd911@reddit
Ok, it's your choice whether to use it or not. Fuck off and worry about yourself instead.
samswanner@reddit
My company is demanding all devs use it. It's ceasing to be a choice.
Helpful-Desk-8334@reddit
Yes, I stay solo because of this lol.
Thousands and thousands of lines of spaghetti code that I wouldn't even feel good paying someone to help me work on.
So... yes, your junior engineers are being turned into retards by GPT if they aren't actively bug testing and doing all the quality assurance required when using AI to generate code.
Hope to God these poor guys and gals know how to add debug logs to their code.
Most-Mix-6666@reddit
It's not the juniors' fault. Even before AI, university education was wildly inadequate in preparing you for a career in software engineering. But you'd learn on the job, because generally you were encouraged to find a mentor, and some people cared about quality and maintainability. Nowadays juniors are told not to bother the seniors and to use AI to answer their questions instead. And seniors are told not to waste time on clean code, cuz hey, we're a customer-oriented company, we gotta be shipping...
carnalcarrot@reddit
What's worse is I feel I am losing the skills I developed by not using AI in my career so far, now that I've started using it.
Clitaurius@reddit
Who cares. We could probably use a dumbing down of being able to post shit online.
BDHarrington7@reddit
Like any tool, it takes some experience to know when to use it and when not to.
As a full-stack engineer, I've started coming over to the bullish side on AI coding (though I'm still skeptical of full-on vibe coding), and here's why:
AI can take care of the really boring parts of development. For example, I had an old nodejs project with thousands of lines written using the promise pattern. With a simple prompt, I could have the agent convert the entire thing to async/await (with some prodding; I found it amusing, if not annoying, that it would stop partway through the conversion and then say "all done").
Another time, I needed a complex SQL query over a table that was structurally identical across different schemas. I am entirely capable of figuring this out on my own, but as I've had to do this a grand total of twice in my entire career, AI was definitely a handy tool to have around for one-off things like this.
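(Roughly the shape of the thing, for anyone curious; the schema and column names here are invented, and the Python wrapper is just to show the idea:)

    # build one query over a structurally identical table in several schemas
    schemas = ["region_us", "region_eu", "region_apac"]
    query = "\nUNION ALL\n".join(
        f"SELECT '{s}' AS source_schema, order_id, total FROM {s}.orders"
        for s in schemas
    )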
All that to say, I think we're still figuring out the limits of this thing. When autocomplete/intellisense became a feature, people could willy-nilly tab tab tab, but it didn't always produce the right output; it was just easier to see (and catch) that with single-token generation.
TeeeeeFarmer@reddit
I removed these LLM models. They are fine for generating patterns in a normal/simple codebase, but they break down past a certain complexity, and they try to solve only the specific problem instead of understanding the whole.
It hallucinated code that never existed in the codebase at all and suggested using it. It can't seem to understand what can and can't be changed.
Screw it.
knowitallz@reddit
It doesn't make them good, if that's what you mean. You have to know what should be done and use the tool to help you. And when the AI can't figure it out, you'll have to figure it out yourself; that's exactly when it isn't there to help you.
You can't always call on a senior dev who gets it.
h4l@reddit
If AI doesn't take all our jobs, there could well be a shortage of competent engineers in a few years due to students now opting not to learn to program, and those that do not learning to the same degree as before AI.
Fair_Local_588@reddit
That's actually the silver lining in this. They might have actually increased senior engineers' power in the market rather than reduced it.
NotYourMom132@reddit
correct, if you're already senior now, you'll be benefiting from this massively. Everyone else below got screwed.
ElectronicGrowth8470@reddit
It's not about senior vs. not senior; it's about lazy people vs. people who take the time to learn. Juniors who are actually learning without blindly trusting AI will do great in this market.
h4l@reddit
Agreed, it's also a good opportunity for those people. I'd not bet on the majority doing well out of this, though, given that humans have a long history of developing technology that makes things easier/better but also brings unintended consequences that are avoidable in theory, yet in practice affect a majority of people.
NotYourMom132@reddit
bruh, have you seen the market? my company hasn't even hired any junior the past 3 years.
ElectronicGrowth8470@reddit
My point is, when people need to hire more intermediate and senior devs, the only people who will make it that high are the juniors who didn't rely only on AI.
Many companies are still hiring mid-level and junior devs; there's just a ton of competition.
termd@reddit
Senior engineers are also being told to use it, and that we should be 5-10x more productive now. I have friends who are forgetting how to code because they're using genAI instead of thinking themselves.
Fair_Local_588@reddit
I think that's nuts. I can kind of see it, as it will "solve" whatever you tell it to, even if the solution is completely wrong. And it's very convincing. But I found out pretty quick that it's best for boilerplate, sometimes useful for algorithmic things, and rarely for anything that requires business logic.
alinroc@reddit
Sounds very similar to the loss of manufacturing capability in the US. Manufacturing sent offshore, tooling decommissioned with no ability to build replacements; but it doesn't matter anyway, because we've lost the skills, and there's nowhere for people to learn them because the equipment/facilities to rebuild everything don't exist.
bart007345@reddit
Not sure that applies to the knowledge industry. Physical goods yes.
alinroc@reddit
"Outsource" your programming to AI. Fewer people learn to program, those who do become dependent upon AI. The overall skill pool atrophies to the point where you don't have a population that can do the work.
bart007345@reddit
But the ones offshore do?
Adverpol@reddit
And added to that: the absolute clusterfuck a lot of codebases are rapidly turning into.
seg-fault@reddit
Has nothing to do with seniority and everything to do with credulity.
WeveBeenHavingIt@reddit
I don't think the term "vibe coder" best captures the silliness of what this is turning into.
I think the proper term for someone who blindly uses an llm to code should be an "imagineer".
steami@reddit
Nah should be "cope coder". Imagineer is reserved for professionals who work on Disney attractions.
cuntsalt@reddit
Copium coder? Elicits mental imagery of drug-induced hallucinations.
topological_rabbit@reddit
That's way too cool of a name for them. We need something more derogatory-sounding.
Fit-Notice-1248@reddit
~~Vibe~~ Hope Coding. As you hope the LLM outputs the right thing and doesn't break the entire codebase.
_Kine@reddit
You can remove the "I really worry that" part.
ElkChance815@reddit
How about lazy senior (by title) engineers?
MathmoKiwi@reddit
You mean with Google and Stackoverflow?
Nah, bring back the real "traditional way".
With punchcards and 1,000 page thick reference books.
diggpthoo@reddit
A senior is someone who can use whatever junior is assigned to him effectively. If you're complaining, you're still a junior.
Repulsive_Constant90@reddit
Oh, not even juniors. Even experienced devs' PRs don't pass review because of AI code. I ask a question and the dev can't answer. So: redo it, as human-readable code.
BigHammerSmallSnail@reddit
I got a couple of years of experience before GPT and all. I think I kind of made it just in time, because I can spot the places where it's only so-so, but I rely on it a lot now. Pretty useful; I love it for rubber ducking.
budd222@reddit
They haven't been through the process of either just figuring it out, or getting an answer on Stack Overflow that was somewhat close and having to modify it to work for their use case. And they never will, as long as they just ask AI. Their level of understanding won't match that of people who go through that, or who just read through documentation or a library's code to figure it out.
agumonkey@reddit
Either LLMs evolve to the point of solving large-scale needs, or they're going to make people unable to develop the skills to do so, because the temptation to rely on premade solutions will be too high.
PabloZissou@reddit
It is, and semi-senior developers are not getting better.
Evening-Gur5087@reddit
Quasi developers have it rough too
Ibuprofen-Headgear@reddit
Shit, I'm a sr/tech lead, and I have colleagues at or above my level that will (after I've done some research, tried various things, etc) simply respond to a question with ChatGPT output. Like mother fucker I can do that. The only reason I'm asking you is because I've tried most of my available avenues and I'm looking for your personal experience or history with this area. Previously, they might have taken a day or two to really respond, or grumbled about it, but now they just shit GPT code back to me (that ofc doesn't work, btw). It's not like I'm doing this often. As a sr/lead, I of course get asked questions too, but I still take the time to make sure my response is sane and tests out at a basic level before asking them to try the solution in their specific situation. And I still have coworkers who do that. But many are going to the dark side.
PabloZissou@reddit
This has become a massive problem.
Keyakinan-@reddit
Def the case. I don't know much Django or anything, but I just wrote an application in a few hours with ChatGPT and Copilot without doing barely any typing myself. Maybe because it was all LLM code it could more easily understand and add to it, but it's crazy how well it performs! Barely any breaking mistakes.
But this is just an easy app; I'm sure when the project gets larger it's way more difficult, and the LLMs will still fail at that point.
forbiddenknowledg3@reddit
Yeah in general people are thinking less thanks to AI.
The good news is if you genuinely have skills (and keep them) you'll stand out even more than in the past.
Huntersolomon@reddit
Maybe it's better to get the junior to explain what the code does. If they can't, tell them to fuck off until they understand what they're sending.
WrennReddit@reddit
I straight up have a junior just drop ChatGPT links into the meeting chat all the time. Just doesn't even bother.
kronik85@reddit
the total lack of self awareness is the most surprising part.
kronik85@reddit
they would just get ai to write them an explanation and shoot that over
Mart1127-@reddit
I'm still a student, learning and trying not to crutch on Copilot/Cursor, but I do use them. I can certainly understand using it to make a bit of code that I don't have the skillset for yet, or just can't make work, and then basically reverse engineering it. But what I don't understand is when I hear that some people have it make usable code they couldn't write themselves and then just move on to the next thing. If any LLM figures out my problem, the first thing I do is go line by line figuring out why it works, then add comments to my code so I actually learn what it does and can reference it for a similar problem next time. I usually rename different variables too so it's easier to remember.
TheNewOP@reddit
"You need to know what the code does? Hold on, lemme ask ChatGPT"
Refmak@reddit
This is great until the code kinda works, and the business as a whole benefits from just releasing it to production.
Now you got years of technical debt building from day 1ā¦
Defiant_Alfalfa8848@reddit
This here, writing code without syntax errors is worthless today. Designing it is still a skill to learn.
Thommasc@reddit
At my computer science school, EPITA (Paris), where I learned programming, if you submitted any piece of code where you couldn't explain what every single line was doing and why it was there, you would get -42/20 as your project grade.
Good luck getting your diploma if you did that.
That was in 2008, way before AI and even before Stack Overflow became super popular. They assumed people would just copy other students' code without understanding it.
I can tell this is going to separate good and bad devs moving forward.
particlecore@reddit
Instead of engineers that only know how to solve leetcode hard problems in 30 mins.
thatdudelarry@reddit
2015: I really worry that forums like StackOverflow are producing very bad and very lazy junior engineers
2005: I really worry that search engines and the availability of information on the web is producing very bad and very lazy junior engineers
Time immemorial: I really worry that {latest tool/trend} is bad for {industry}
yazilimciejder@reddit
Yeah, AI stole YouTubers' jobs first. Damn.
Batman_Punster@reddit
They said the same thing when the industry moved from assembly to higher-level programming languages. One of the last holdouts in my career was system BIOS. The seniors said we would have less control, take up more space, etc., and that it would not be feasible. Eventually, with the adoption of UEFI, everyone was practically forced to move from assembly to C. Everything worked out: code is more maintainable, easier to understand, etc.
AI is a paradigm shift still in its infancy. It is not there yet, but it is getting better. Anything it produces should be viewed as a suggestion and should only be the basis for your final solution, one that you understand. Understandable code is good code. Do not commit code you do not understand, or that a weary you will have trouble understanding at 3 AM when you are debugging a problem in production.
I, for one, do not want to get left behind. I use it as a learning experience. I tell it to do things like refactor code, do a code review, write a unit test, fix a CERT-C error, and when it is done, I ask it what would have been a better way to ask that. I often ask it to give me a numbered list of ways to improve code, then I tell it to fix the ones I want fixed. Then I read and understand the code. If I do not understand it, I discard that solution. If I do understand it, I look to see how I can make it better, or sometimes I have a better solution. Sometimes I just have to discard garbage, but in 2 or 3 years it will not be producing so much garbage.
People who use AI to avoid doing their work will not learn, will not grow, will not advance. People who learn AI as a tool to help them do their job, and to do their job better, are the ones who will be successful.
CarsonN@reddit
Agreed. I think it'll be interesting to see what solutions come up to address how leaky of an abstraction this stuff is. Previous abstractions like higher level languages tend to be pretty tight if they're good, leaving things like memory management to the innards. Seems like there's a case to be made for more investment into formal verification frameworks or something to give LLMs strict rails. I definitely think in the future there'll need to be more of the kind of rigid scaffolding layers than we've been used to in the past, and agents to help with all of it.
olionajudah@reddit
Until they realize that vibe coding requires real attention and experience, thorough review, validation and testing, probably. Might be harder to build that experience with such robust, independent, confident, and often confidently incorrect tooling, but it's always been hard to learn software engineering. The strong performers and the gifted will still find their roles.
blokelahoman@reddit
The bad and lazy aren't new. They merely migrated from Stack Overflow copy-paste to GPT. They exist as job security for those who can fix the trail of destruction they leave behind.
BanaTibor@reddit
I agree with you, though I have not seen this in person.
What I think is that AI is not good enough yet. It can produce code that looks deceptively good and needs an experienced SWE to see its faults.
So AI + senior dev = 1.5 senior devs, AI + junior = 2 juniors, and one of those is worse than the other.
The only way to combat this is by cultivating a strong engineering culture.
EveCane@reddit
I started limiting my usage because it usually takes me more time to correct its mistakes or to write a good enough prompt.
officerblues@reddit
I tried having it as autocomplete; now I've moved it to a hotkey because seeing the bad suggestions was more confusing than useful. I've been using that hotkey less and less...
nullpotato@reddit
I used the autocomplete for months as a POC and then turned it off, and I code noticeably faster with it disabled. Having to waste energy reading the stuff it creates, deleting it, and then writing the correct code is slower.
germansnowman@reddit
I just disabled Copilot because it was way too annoying and mostly not helpful.
topological_rabbit@reddit
When I last updated my IDE it came with a shiny new AI autocomplete automatically enabled and it just spammed me with nothing but wrong suggestions. Had to dive into the settings and shut off everything AI-related.
I'll never understand people who think using a statistical next-token generator is in any way a good idea for engineering.
Own_Candidate9553@reddit
The Cursor IDE has a "tab" mode like that, I had to disable it immediately. Felt like I was fighting it all the time. I prefer to do a mix of AI assisted coding and human coding, I'd much rather deliberately invoke the AI when needed, usually when I would have previously had to jump to Google or stack overflow.
k_dubious@reddit
As a general rule, code is harder to read than it is to write. So except for the most bootstrap-y or context-free code, or unless you're just vibe-coding and YOLOing anything it spits out into your repo, I can't imagine why anyone thinks AI would be a productivity improvement.
dimd00d@reddit
I've actually consciously increased my usage in the past couple of weeks, almost to the point of vibe coding. I thought I was living in another universe seeing people claim that they are getting massive increases in productivity, so I decided to keep forcing myself.
Yes, some tasks are faster. Some are slower. For some I just want to kms. At the end of the week, I can't really say if I was more productive, or less productive, or what.
(40 yoe btw - yeah, I know, I am practically decrepit)
Hopeful_Steak_6925@reddit
You mean tomorrow's senior engineers
Stubbby@reddit
It's not all bad. Laziness is often a powerful driver to make systemic improvements and create value at a fraction of the effort.
It's just a new class of developers. Just like today we have "WordPress developers" who are different from "full-stack developers". There is a use case for each, and they don't really overlap that much.
jedfrouga@reddit
Yeah, it's across the board. Hell, even I've gotten lazy with things.
Substantial_Law_842@reddit
"The future generation is going to shit."
It's never been true.
Handwriting might be useful to learn, but it's not a practical skill. Being able to manually search a library catalogue might be useful to learn, but it's not a practical skill.
The "shortcuts" and "laziness" of younger generations is really the standard of the future, and there will always be friction during the transitions.
adamos486@reddit
AI can turn a good coder great, but can't make a bad coder good. For now, competitive advantage for more experienced devs.
DrumAndGeorge@reddit
I lead the development of an RN app, and in all honesty I think it's also made me worse. There were always certain things I never bothered remembering the syntax for off the top of my head (looking at you, reducers), but GH Copilot has definitely made me a bit complacent; going off to the docs feels like a bit of a hassle sometimes now, haha. So I can't imagine how bad it must be for a junior!
Round_Head_6248@reddit
It's the juniors' problem. They have all the tools available to them (at least as long as Google search works); if they decide to phone it in, tough luck.
Some of them will learn the basics, others will try to coast and probably fail. There were lazy and incompetent devs when I started as well.
DrIcePhD@reddit
I worry they're producing very bad and very lazy senior engineers at the rate some of them are using it.
jcradio@reddit
A colleague and I were talking about this recently. It is a tool that can be useful in the hands of a seasoned developer, but even then I've noticed some laziness or frustration creeping in when I want something fast. Those of us who've devoted years of our lives to this craft can quickly spot useful or incorrect things.
In the hands of one still learning it will prevent them from learning.
One of the ten commandments of programming has always been "Don't copy and paste any code you don't understand."
It will still take 10,000 hours of doing something to master it. I encourage people to learn to do it before they find shortcuts.
AngoGablogian_artist@reddit
I'm excited because I see a ton more billable consulting hours untangling AI-generated spaghetti code versus the regular human-made spaghetti.
StackWeaver@reddit
And yet we have to figure it out. Bear in mind none of us are being paid to write code. If businesses deem a certain tech enough of a productivity boost to force it, and that means automated code, it's going to happen.
Megatherion666@reddit
One thing I have not tried but am looking forward to: when a junior justifies their code with "AI wrote it so it must be good", answer with "AI reviewed it and said it's bad".
On the topic of juniors not learning properly: IMHO that's not very different from Stack Overflow and shitty tutorials all over the internet. People inclined to learn will use AI as a tool and reach higher highs. People who couldn't care less, well, they will be useful to some degree, but ultimately helpless without AI. Not a big deal in a professional setting. Much worse in other aspects of human life.
BoBoBearDev@reddit
Nay, it is the same for me. The answer needs to be explored, tested, and verified, just like Stack Overflow answers. And devs still need to spend their time analyzing whether they did it nicely, not just produced slop. All of that existed in the past; AI slop is not new, it was just called slop. Nothing has changed.
If Sr Developers are also getting sloppy, it is because they were sloppy before and got away with it. Or because you've become more critical of them since they started using AI.
jakechance@reddit
LLMs are increasing the impact, visibility, and unfortunately longevity of poor developers but they are a catalyst, not the cause. There were and will always be devs who don't read docs, error messages, or try tutorials. In the past their inefficiency, lack of contribution, and peer feedback would often limit their tenure.
A developer who actually cares about learning how to produce high-quality software will use LLMs differently. They'll ask it to optimize functions, explain things they don't understand, or produce examples after reading documentation.
mxldevs@reddit
I would expect test-driven development to become even more emphasized, because now devs have tons of free time from not having to write the implementation themselves.
But of course, they won't be doing that either; they'll expect AI to generate the tests.
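(Test-first in its smallest form is barely any work; slugify here is an invented example:)

    def slugify(text: str) -> str:
        # the implementation comes second; the test below pins the behavior first
        return "-".join(text.lower().split())

    def test_slugify_collapses_whitespace():
        assert slugify("Hello  World") == "hello-world"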
savornicesei@reddit
Actually, it produces very lazy human beings. If you've watched Idiocracy and/or read Fahrenheit 451, you can see the signs.
NeuralHijacker@reddit
I'm old. I felt the same way about Stack Overflow. Suddenly code was full of copy-pasted crap done by people with no understanding.
TheAnxiousDeveloper@reddit
So, let me tell you a story. I'm a tech lead in a company that works with a very known E-commerce platform.
A few weeks ago I was interviewing a person for a position that required at least two years of experience with this platform.
We did a simple live coding interview where we took some code they had written in a (max) 4-hour assignment (doable in 1), and I provided some additional requirements to see how they would have implemented the feature. I made it extremely clear that he had access to a browser and should feel free to consult the documentation.
All throughout the interview, I saw that the candidate's eyes were constantly shifting away from the camera, pausing for a bit and then coming back with answers that weren't 100% on point with the request, usually going around the subject.
After the meeting, I went to HR and said that I wouldn't let the candidate pass because: 1) he really didn't know what he was talking about, and 2) I had suspicions he was using an AI tool to get the answers during the interview.
The answer from the HR manager was "can you safely say that none of the people on your team are using ChatGPT?". 1) It's not the same thing during an interview. 2) That's exactly why I have juniors who are underperforming and who go around problems, usually complicating them, rather than solving them.
NotYourMom132@reddit
Why? Good for us experienced folks: less competition.
Fit-Notice-1248@reddit
When you're working on a team it will start to become less about competition and more about managing stress, as you're going to have to constantly deal with colleagues that are breaking stuff due to this sort of negligence.
constant_flux@reddit
To be honest, this has been true for me before and after the widespread usage of AI. At least now, it's easier for me to deal with mishaps at work.
Also, code reviews are another check to make sure garbage isn't checked in. And you can also use AI to HELP you with code reviews.
NotYourMom132@reddit
Well then don't hire juniors. My company has been exclusively hiring seniors only for the past 2 years and it's worked really well.
thekwoka@reddit
AI will take all the jobs not because it gets so good, but because people get so stupid.
Competitive_Stay4671@reddit
It is definitely a big problem, and it's becoming worse. Something that works against it: sit next to people, discuss code, let them explain, and make sure they don't use an LLM for a few seconds to get an answer to a previous question when you quickly leave the room to get a coffee. From my own experience: after I came back from the coffee break, I suddenly got a surprisingly good answer, in 100% contrast to the situation 2 minutes before I left the room. The problem was: the answer was just memorized from an LLM... but not understood.
I expect programmers to "own" the code they produce, to know what each line / method / command / flag is doing, and to be able to debate it. If this is the case, then using an LLM is fine. If not, and people start shrugging their shoulders when asked to discuss a solution, it's a problem.
What I am not saying is that LLMs are bad. They are very helpful in a lot of aspects. But if you rely too much on them to produce stuff, it will just postpone the moment of pain a bit further into the future. There always comes a point where you need to know what you are doing.
thruc@reddit
A lot of the junior devs I work with are completely reliant on ChatGPT. They can get boilerplate code started, and then pester me for hours about why nothing works. It's because they don't understand what ChatGPT spat out. I use ChatGPT for busy work (like setting up base boilerplate code for routing or a simple component)... but when I'm working through a complicated problem, it's easier to just do it myself.
Willing_Sentence_858@reddit
yes
Turbulent-Week1136@reddit
That's called job security. Be happy not worried.
Organic_Battle_597@reddit
LLMs are job security at this point. How silly we were to see them as some kind of threat. But there is going to be some pain before we come to terms with where these tools fit into our process. For certain, vibe coding ain't it.
SomeEffective8139@reddit
I do think this will become a problem, but it's a problem we've seen with other technology. When ORMs took over, software developers largely lost knowledge of SQL, and tuning a query for optimal performance became a specialty only DB admins had. As code generation becomes more common, there will be an analogous situation. Developers will become better at getting things out the door and delivery will increase, but when there's a truly difficult bug or tricky refactoring, nobody will have the skills anymore to hunt it down the old-fashioned way.
South_Future_8808@reddit
It's messing up everyone, from managers to senior devs to juniors. No one wants to take some time to think through things like we normally would. The priority now seems to be shipping and moving faster.
look_at_tht_horse@reddit
Don't worry. Junior engineers were always pretty bad. At least they have AI to help now.
IlliterateJedi@reddit
I have a sample size of 1 on this with regards to a junior dev, but this has been my experience. Mind you, this person was someone hired out of a boot camp program that was hosted at a major university. They were heavily reliant on Chat-GPT but did not have the foundation to be able to really parse what the code was doing or know how to appropriately massage the output into useful code.
I would ask questions about the code or make suggestions, and those suggestions would be passed into Chat-GPT to implement. Ultimately we had to move that particular dev to a different department because programming wasn't their strong suit. They weren't dumb by any means. They were a college graduate in an unrelated but rigorous field. They just didn't have that 'just figure it out' quality that you really need to be successful in a field where you have to be constantly learning to stay on top of things.
FrustratedLogician@reddit
The only takeaway from AI code gen: do not let it do the thinking for you. You are an engineer, and it is your paramount business and responsibility to completely understand the inner workings of every part of the system you are adding to or changing. AI code gen can make you faster, but you need to be the driver and the final decision maker on whether the solution meets engineering quality standards.
Odd_knock@reddit
In Plato's Phaedrus, he recounts the Egyptian myth of Theuth (or Thoth), the inventor of writing. In this story, King Thamus criticizes the invention of writing, saying it will create forgetfulness in people's souls because they will rely on external marks rather than remembering things internally.
The key passage warns that writing will produce "forgetfulness in the learners' souls, because they will not use their memories; they will trust to the external written characters and not remember of themselves."
Plato presents this through the character of Socrates, who expresses concern that written words are like paintings: they seem alive but can't answer questions or defend themselves. He argues that true knowledge comes through dialogue and internal understanding, not from written texts.
newrandreddit2@reddit
Jr engineers? Hell, I think it's making staff/principals bad and lazy too.
SoapilyProne@reddit
I primarily use GPT for discussion when planning a new project. Occasionally I'll write a massive chunk of code and ask it to clean it up for me. But everything is double-checked, because GPT has screwed me in the past.
ur_fault@reddit
Juniors who have critical thinking skills will come to the conclusion that "offshoring" it all to AI is a bad idea. They'll use it strategically as a tool, improving their understanding of technologies and concepts, which will increase the quality of their output.
The ones who lack critical thinking will not.
The truth is that the juniors who end up being "very bad and very lazy" are the ones that would've ended up that way even without ChatGPT.
That's because before LLMs, these PRs wouldn't even exist. They'd still be working on that task, probably with no idea how to complete it, and it'd take them another few weeks to get up the courage to finally ask someone for help. At which point they'd need someone to hold their hand through coming up with and coding a solution.
To be honest I prefer the ChatGPT code... it starts the conversation sooner. Their PR is trash, but at least it's something that is produced quickly that we can iterate on. We can use it to figure out what's missing, what their understanding is, and point them in the right direction. Which is what we want anyway, we want to know where they are as soon as possible so that we can give feedback and let them revise and re-review as fast as possible. IMO this is much better than pre-LLM times when we'd get a full sprint of radio silence before we finally hear that they don't even know how to start a task.
There's so much fear and misunderstanding around AI. It reminds me of back in the day when moms thought video games were going to turn kids into serial killers lol. An LLM is just a tool... the outcome depends on the person using it.
remote_math_rock@reddit
I can't repeat it enough: you cannot outsource your thinking to these tools, especially as an engineer. You physically can't. I haven't yet run into a single serious design challenge or problem at work that GPTs have been able to help me with.
grizzlybair2@reddit
Yeah, I don't think it's helping from that perspective. A lot of those juniors are pretty lazy to begin with. I have one who has 3 cards, and they are all basically monitoring cards; so what are you actually working on? It's his choice, but my manager has noticed for sure that he doesn't try to stay busy. You've gotta at least pretend sometimes, imo. He's competing for worst on the team with another junior who I would say is worse across the board and frequently out, but who is at least trying to stay busy and learn some new stuff. I'm sure we will do more layoffs... I've warned both to try to stay active and avoid being at the bottom, but neither shows much improvement imo.
Meanwhile one of the other juniors is knocking it out of the park and is going to be promoted this year for sure.
Though I also wouldn't say this is new either. I separated myself from the pack when I was a junior mainly because I kept trying and would try new things while the rest, as a whole, wouldn't. It's not like I'm a super genius or had connections; hell, I barely talked and have terrible social anxiety. The bar was that low...
bart007345@reddit
None of that is about AI. It could be true in any situation.
grizzlybair2@reddit
Okay, I'll translate for you. The low-performing juniors are using AI to generate subpar code, which we notice. They aren't learning - maybe it would always have been like that for these guys, maybe not if they didn't lean on AI. Then we have to pay extra attention to them, we can't make good use of their time, and all managers have to rank their team members.
darthsata@reddit
One of my concerns is that it will greatly increase the cost of innovation. Fewer and fewer people will have the skills to do new things so the amount of cutting edge experimentation will rapidly decrease. Essentially we may lock in our current technology.
bart007345@reddit
You assume that innovation only comes from writing unique software. It doesn't - we can still write software, and hardware still drives innovation.
darthsata@reddit
As someone who owns the compiler and language development for tools and languages used for processor design at several companies, I tend to see a lot more of how hardware design happens than most. It is not like AI driven development is not also hitting hardware development. I also suspect hardware development is more software-like than you might think.
Which is to say, why do you think the same pressures which might stifle software innovation won't also stifle hardware innovation? (I also might have a fairly high threshold for what counts as innovative. The patent lawyers get annoyed at me about that.)
softwaredoug@reddit
People have written similar things about stackoverflow, intellisense, Google, etc over the years.
constant_flux@reddit
Same. Instead of training devs on how to build effective prompts and think critically, people just resign themselves to "iT SpItS OuT BaD cOde."
ITried2@reddit (OP)
I've never seen it this bad though in my eight years working professionally.
It's not just that AI spits out bad code, it's that it spits out bad code that looks like it ought to be right. And unless you have people explaining why it isn't, this problem is only going to get worse.
nickisfractured@reddit
Been working as a dev for 20+ yrs now and it's always been the same, the only difference is that now there's just more computer science grads and bootcamps and curious folks who are coding. Most code has always been bad, most devs don't know how to build consistency or have a good grasp on architecture. Most projects are rushed and have throw away code. AI was trained on the code that is out there and available, and it's telling that the quality of what it's trained on is bad code. Someone had to write all that bad code and push it to production at some point...
Brief_Yoghurt6433@reddit
Doesn't even need to be in prod. I mean, it has probably pulled my Advent of Code solutions or gross internal tools I only made to make my day easier. If I'm writing lazy code or trying to get a solution ASAP, well, I'd ask for a full rewrite if it was submitted as a PR.
softwaredoug@reddit
I mean similar things have happened with stackoverflow and relying on crazy dependencies
In the end those software engineers will be the ones responsible for maintaining whatever mess they produce, so I'm less worried. People will learn the limitations of these tools (eventually). Though near term the hype is insufferable.
puckoidiot@reddit
The software ends up maintained by the team, not individuals, so if other engineers in my company use these tools irresponsibly, I'm definitely responsible for maintaining the mess that my co-workers produce.
officerblues@reddit
Which means some engineers will learn good practices and evolve, others will keep pushing out crap. The real test here is for blameless processes and no finger-pointing in teams, imo.
ITried2@reddit (OP)
I really hope we do.
You raise a separate but also relevant point: I am convinced these tools are going to crash and burn at some point, and their use cases will turn out to be much more limited than claimed. The hype and buzzword bingo I am seeing makes me convinced I am right and we have another bubble on our hands.
meemoo_9@reddit
I totally agree, I catch a lot of terrible code that to a junior would look totally fine. It's the way it's confidently wrong and seems convincing.
marx-was-right-@reddit
I don't. Lol
njmh@reddit
Nowhere near the same. You still have to think critically while researching with Google/Stack Overflow.
geekfreak42@reddit
That has been around since the transition from machine code
Electrical-Top-5510@reddit
they were like this before gpt
constant_flux@reddit
I think there are very valid perspectives in the comments, but I also see a lot of throwing the baby out with the bathwater. AI has objectively made me a better senior dev. I can parse and summarize documentation, interrogate for clarity and sourcing, quickly build POCs and evaluate pros and cons, improve performance, test cases, and the show goes on.
I'm disappointed that so many people cannot competently use AI. However, the dark side of this is that there will be a growing gap between the devs that work better with AI, and those that don't. Devs who suck with AI will be competing with devs like me, who know how to think critically and constructively.
IDatedSuccubi@reddit
You know what really worries me? As soon as nearly every junior becomes dependent on AI, companies will rugpull the AI users with expensive monthly subscriptions and enshittification, and suddenly millions of junior devs will become completely unproductive. And by that point the code bases will be incomprehensible.
Fit-Notice-1248@reddit
This is already happening. In our org we have been bashed over the head to create projects around these LLMs and agent modes, only to find out that starting this month we are getting hit with rate limits - 300 requests per license. Going over the limit, we will get charged extra.
HappyFlames@reddit
I think education hasn't caught up with AI. Writing code line by line manually is becoming a thing of the past; we should be teaching people how to use AI properly and how to test and spot code issues.
soleDev@reddit
Wait until you learn that PMs use LLMs to clarify legal and compliance requirements.
RandomLettersJDIKVE@reddit
I've found bots useful for documentation. My company has a LOT of packages without docs. The in-house bot writes a reasonable ReadMe file when one doesn't exist. That's saved a ton of time.
LongjumpingGate8859@reddit
Just tried to use AI to show me how to implement a script that would allow dead-letter peeking in an Azure service bus topic.
Failed miserably. Gave me a bunch of questionable code and when I asked for a source, it gave me "on second thought, it looks like this may not be accurate" type shit.
It gave me several completely WRONG solutions, including suggesting the use of AZ CLI extensions which don't even exist!
Waste of my time and a complete disappointment in my first AI-assisted coding task.
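For reference: peeking a topic subscription's dead-letter queue is supported directly in the official @azure/service-bus package for Node (v7+), no CLI extensions needed. A minimal sketch, with the connection string, topic and subscription names as placeholder values rather than anything from this thread:
```typescript
import { ServiceBusClient } from "@azure/service-bus";

// Placeholder connection details; requires @azure/service-bus v7+.
const client = new ServiceBusClient("<connection-string>");

async function peekDeadLetters(topic: string, subscription: string): Promise<void> {
  // subQueueType: "deadLetter" points the receiver at the subscription's
  // dead-letter sub-queue instead of the live entity.
  const receiver = client.createReceiver(topic, subscription, {
    subQueueType: "deadLetter",
  });
  try {
    // peekMessages browses messages without locking or settling them.
    const messages = await receiver.peekMessages(10);
    for (const msg of messages) {
      console.log(msg.messageId, msg.deadLetterReason, msg.body);
    }
  } finally {
    await receiver.close();
  }
}

peekDeadLetters("<topic>", "<subscription>").finally(() => client.close());
```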
bart007345@reddit
Perhaps you used a bad tool? I mean, the competition is fierce, and the providers are definitely not all equal.
LongjumpingGate8859@reddit
I used Copilot, which has typically been better for coding questions, at least for me, than chatGPT.
bart007345@reddit
Copilot allows you to choose the model as they vary in performance.
bart007345@reddit
There are better tools right now. I use Windsurf, VS Code + Copilot and Claude Code.
Claude Code is very impressive.
drumnation@reddit
Since it's basically a junior engineer itself, two things:
It doesn't make sense to hire junior engineers when you can amplify seniors with it to multiply their output and still get quality code.
As the LLMs continue to improve they won't make those mistakes, maybe even in juniors' hands, but that still won't mean it's better to hire juniors.
The pipeline of real engineers is going to go dry.
dimd00d@reddit
It is *not* a junior engineer. It has the ability to solve the tasks that we usually give to junior engineers, so that we can yell at them and they learn and develop - their job is to eventually become senior engineers.
Most of us (anecdotal evidence) don't really want to deal with juniors, so I do agree with you - the pipeline is going to be fucked.
drumnation@reddit
Is your point that it's a task-solving machine and not a consciousness that grows and evolves over time?
I guess if you are looking at it like a junior is just a seed that grows into a senior over time, yeah, not the same. But if you are looking at it as an assistant that takes direction and frees the senior up to do more planning and architecture thinking by taking on more repetitive, boilerplate-y tasks... it is that.
There's also a pretty big difference between vanilla AI and AI you set up with rules and augment with MCPs. AI capability is what it is now, and we can only expect it to grow in both autonomy and capability as time goes on. Not sure what that means for new juniors as this gap gets wider.
The only way we get new seniors is some kind of guarantee that a junior will stay with the company that gave them a chance for X amount of time... otherwise they get hired for way too much compared to AI and then leave as soon as they are skilled enough to get a better gig. Net loss for any company that takes a chance on them. The economics aren't looking good for the beginning of the pipe.
dimd00d@reddit
I don't see it as consciousness. Hell, I don't know how to even define and explain what consciousness is. It is the realm of religion and philosophy.
I had the same discussion the other day with some friends who also have a software company, about training and retaining juniors. We do still hire them, and most of them do actually choose to stay with us, but yeah - the whole process even before AI was net negative if they chose to job hop.
Now, with AI - I don't know - maybe the solution is some sort of apprenticeship where you have to stay X years afterwards? (I think there is something like this with the WITCH companies in India - where you have to buy yourself out or something.)
Goldman7911@reddit
Thanks guys. This post is really a sum up of what I am perceiving.
Sum up all the bullshittery: upper management pushing AI without thinking, the race to the bottom with juniors, outsourcing brains to LLMs, the lack of care for minimum code quality - and I really can't imagine what the future will be.
Don't you think what happened with Google this week is related to all of this?
brunocas@reddit
I recently vibe coded a small Rust app, partially because I'm not very knowledgeable in Rust and partially because I wanted to experience what it is like for a new dev in a language they don't know very well.
My lesson is that it is very easy to start relying on the LLM's code without thinking and, more worrisome, to blindly trust the LLM without thinking. So yeah, we're fostering a whole new generation of people that do without any critical thinking.
For real-world work, I think LLMs are useful for speeding up boilerplate functions, but they don't help you organize and think ahead about how to design and develop code. Several times the LLM refactored my code, making it useless. For example, one time it refactored it in a way that, instead of starting 10 Tokio tasks in parallel, it was spinning my task function inside a single Tokio task in a loop lol.
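For illustration, the same shape of bug in TypeScript terms (the original was Rust/Tokio; `runTask` here is a hypothetical stand-in for the task function). Both versions do the same work, but only the first actually runs it concurrently:
```typescript
// Hypothetical stand-in for the real task function.
async function runTask(id: number): Promise<void> {
  // ... some async work ...
}

// Intended behavior: kick off all 10 tasks at once, then wait for all of them.
async function parallel(): Promise<void> {
  await Promise.all(Array.from({ length: 10 }, (_, i) => runTask(i)));
}

// The LLM-style "refactor": awaiting inside the loop makes the tasks
// run one after another, strictly sequentially.
async function sequential(): Promise<void> {
  for (let i = 0; i < 10; i++) {
    await runTask(i);
  }
}
```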
If you already have critical thinking ingrained in you, they can be good. You can ask why Rust does things a certain way, or ask for examples in different situations, etc., which is incredibly fast compared with searching and reading lengthy docs with all their inherent redundancy. LLMs also suffer from stale training data; in languages like Rust that evolve quickly, that is really problematic.
I can see how a junior dev may fool themselves into thinking they actually know things, and this is worrisome...
xSaviorself@reddit
I have to regularly remind coworkers that AI shouldn't be writing your solutions for you, but accelerating your ability to implement complex systems with advice curated and specialized for your use-case. If you rely on the AI to come up with the solution, then you do not have the ability to validate it conclusively.
Who validates what this AI tooling suggests is actually the best practice or right decision for this situation? I find that if you ask questions in different ways, you get wildly different answers. If you are negative about a style or pattern in your questioning the AI tooling will use that and it will taint the results of your query. That's not good when you are seeking objective truths.
There are still people building software by hand and writing about it, that's never changed, and that for me will always be the best way to figure out how things should be done. Talk to your fellow engineers. Do the planning together. Then have AI write the code and do hours of work in minutes. Then spend those hours reviewing it, scrutinizing it, and refactoring. That's how we know what works and what doesn't, the industry is always constantly communicating even if indirectly and trial and error will always be king.
Agifem@reddit
I have the feeling, just as you do, that ChatGPT is producing lazy and incompetent software engineers. However, it doesn't worry me.
synth003@reddit
It's hilarious how software engineers view themselves as something special.
seatangle@reddit
I don't work on an SWE team any longer (I'm basically the only tech person at my organization now) but I do see this trend increasingly online, where people describe their coding process and the first step is talking to ChatGPT.
I was also just at a conference for a software vendor my org uses where a non-technical person gave an entire presentation about how to use AI to write code for you without mentioning any of the downsides. I found it very concerning.
fal3ur3@reddit
Well, there's nothing to worry about. It is, in fact, doing exactly what you've said.
rharrow@reddit
I worry about the future generations of children. I know I sound like my grandparents saying this, but I feel that this technology is unprecedented. As schools and individuals lean into AI, critical thinking in general is going to go by the wayside.
SupermarketOld9056@reddit
At my workplace AI is encouraged, but you must understand what the code is doing. I like using it as a reference or to jog my memory. I really hesitate to use it to write a whole class; I feel like I'm cheating, or that I should be writing that code myself.
TheScapeQuest@reddit
I wonder if engineers in the 90s think the same about me not having to study books to learn languages because there were online resources and stack overflow.
I don't blame the junior engineers. I blame the product attitude of shipping features with so little emphasis on quality. How many times do you use an app only to be presented with a shitty WebView which doesn't integrate well, or where the same application seems to jump between different design systems?
Businesses have stopped caring about quality.
PragmaticBoredom@reddit
I will never understand the Reddit attitude that only one party can be blamed and, of course, it's never the individuals.
You can "blame" multiple parties, each for their own negative actions. You don't have to false-dichotomy yourself into picking only one thing to dislike.
TheScapeQuest@reddit
Absolutely, we bear responsibility as engineers to call out these bad practices too.
zero-dog@reddit
I started programming when you got all your information from books and man pages. Then people started transitioning to getting everything online. I remember the graybeards back then decrying the "instant answer culture" and people just doing straight copy & paste of unverified and untested code without understanding from first principles, yada yada... here we are in the next round... meh... 🤷‍♂️
TempestSkylark@reddit
As a lazy dev I am thankful every day that I graduated before any of this
konjooooo@reddit
idk feels like boomer mentality to me. Juniors are more productive than ever in terms of output. Making sure code is up to standard before it merges is the responsibility of more senior engineers.
The feedback cycle becomes much faster because of how much code juniors can output. So they also learn faster from it. And AI is only getting better at explaining code too.
If the quality is not up to par, that is not the juniors' or the AI's fault, but that of the more senior engineers or engineering leadership not valuing quality enough.
djnattyp@reddit
Juniors shoveling shit faster while seniors try to pick through it for diamonds doesn't equal "productive" unless you're pushing brain dead management metrics.
konjooooo@reddit
I agree. But they reach a level where they are no longer sending over pure trash much quicker than pre-AI. Or maybe I've just been blessed to work with awesome juniors.
DrNoobz5000@reddit
Yeah no shit dude
wrex1816@reddit
I worry that it's producing very bad and very lazy posts on this sub... Non-fucking-stop.
ITried2@reddit (OP)
I didn't use AI to write this.
Middle_Ask_5716@reddit
If you're a junior engineer then maybe stop posting on ExperiencedDevs. Stop projecting your own inability on this sub.
ITried2@reddit (OP)
I've been working for eight years now.
DustinBrett@reddit
Eventually the AI will be good enough that these people will be like wizards
Best_Recover3367@reddit
What you are describing is like saying you are sad because devs nowadays only know about frameworks and not the basics of memory management and such for building performant and reliable systems, and that you are happy you were born when C was the default language. Hey, I'm not at all downplaying your points here, just a funny comparison.
Here's my take: Things are changing, for better or worse. People are adapting. I'm a pro AI dev but I don't trust AI blindly. Think of this as a chance to weed out early those juniors who can't think for themselves. Focus on the brilliant ones. Also, use Claude, it's a much better AI for devs, $20 is not even that much on a dev's salary.
VoiceOfReason73@reddit
Generally pro AI programming here.
I see the comparison a lot that LLM-assisted/driven coding is like the transition to higher-level languages, where you need to worry less and less about the low-level details. While somewhat true, I think this falls apart when you consider that you still may need to understand what the LLM is doing, or else you end up with sprawling code that is difficult to maintain. Also, unlike memory-safe languages, where you genuinely no longer need to worry about memory, LLMs still make lots of mistakes, and you need to verify that they aren't introducing bugs.
Competitive-Vast2510@reddit
This was bound to happen if you think about it a bit:
Humans are lazy by nature.
Most companies prefer speed over quality, so there is a huge incentive for juniors to just "get work done" rather than asking "why" and "how".
Basically, AI exploits the laziness of humans, like most consumer-focused applications do. For those who have no desire to learn the craft, AI is the perfect tool.
dashingThroughSnow12@reddit
I'm not sure how much of this is to blame on ChatGPT and how much is to blame on remote work.
HolyPommeDeTerre@reddit
In pairing sessions with my mates, I always get frustrated at the time they spend on delusional solutions proposed by the LLMs... I'm going to start asking them to disable it when we're actually hands-on.
Poking for leads and hints is one thing, putting your brain aside is not.
ChrisMartins001@reddit
Lol I've noticed this too, its solutions will always be unnecessarily complex.
I have a colleague who used to use it regularly and he read it back once and was like "this massive wall of code could just be done with 1 or maybe 2 lines of CSS".
Fit-Notice-1248@reddit
One of my coworkers had a global variable that was set from an env variable. I told him to remove it and just read the variable inside the function that is using it, i.e. when you call the function it will properly set and use the env variable, like: let webUrl = process.env.WEB_URL. Instead of removing the global variable, he created 2 new functions, loadEnvVariable() and checkEnvVariable(). I looked at the code; it had emojis and very ChatGPT-like comments. So yeah, he basically asked ChatGPT how to do this and copy and pasted an incredibly complex solution...... for setting a variable.
Zestyclose_Worry6103@reddit
That's actually not the best advice, as process.env accesses do getenv under the hood, and you might run into performance issues if you read it often enough.
Fit-Notice-1248@reddit
Yeah, we are actually using a config file that reads process.env once and is done. I was just using it as an example of how complex he made the whole thing.
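For anyone curious, that pattern is only a few lines. A minimal sketch in Node/TypeScript, with WEB_URL taken from the example above and everything else illustrative:
```typescript
// config.ts: reads process.env exactly once, at module load time.
// Repeated process.env reads can hit getenv() under the hood (as noted
// above), so hot paths should import this object instead.

function requireEnv(name: string): string {
  const value = process.env[name];
  if (value === undefined) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

export const config = {
  webUrl: requireEnv("WEB_URL"),
} as const;

// Usage elsewhere:
//   import { config } from "./config";
//   fetch(config.webUrl);
```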
RenTheDev@reddit
I just came from a finance subreddit where someone posted a ChatGPT response that was both wrong and terrible advice. Once the hype dies down, I think we'll use AI tooling more appropriately.
Itoigawa_@reddit
At the same time, I'm a senior DS and use Copilot all the time.
From simple and straightforward refactors to getting big chunks of boilerplate code written.
It falls short on very complex tasks, but it does shave off some time.
geon@reddit
Using AI to write code is just like having a very junior/trainee write it. You get the same kind of quality, requiring the same amount of work to coach and fix it. Productivity wise, it is a net loss.
The difference is, within a few years, the junior has gained the experience. Helping them was an investment, and you now have a competent colleague.
With AI, you get nothing.
bart007345@reddit
You've made an assumption that the junior you trained stayed. Chances are they didn't, and went to a competitor for more money.
Either way, your company loses.
theNeumannArchitect@reddit
This is why junior engineers are being replaced with AI. They're all new grads that relied on AI all through college. Every new grad I've worked with the last few years is just an LLM wrapper committing spaghetti code they can't explain. And now they're screaming from the rooftops about how unfair it is that AI is replacing them and they can't get a job.
AI wouldn't be able to replace junior engineers 5+ years ago. That's why people with 5+ years experience aren't as replaceable. It's ironic.
bart007345@reddit
Maybe this will bring back some level of professionalism to the craft.
If you want to be lazy then don't expect a job.
SolarNachoes@reddit
We just had an employee work on a contract gig, and AI produced most of it. Only problem is, it doesn't work, and now they have no idea how to fix it. Nor can they explain the code. A more senior developer is having to fix it now. So much useless code that doesn't need to exist.
Comprehensive-Pea812@reddit
There was a time when using an IDE and Google was perceived as lazy.
As things currently stand, LLMs are not capable of consistently producing decent code, due to the nature of their training data, so being 100% reliant on them is indeed lazy.
For me, though, an LLM is much faster than digging through open-source documentation or Google for syntax I don't know or don't remember.
EnderMB@reddit
I do a lot of interviews, and this is a very real fear of mine.
It's most likely because Amazon has made several regressive moves over the last few years, but graduates this year seem to be far worse than any cohort I've ever interviewed - big tech or not.
Outside of the obvious AI cheaters that'll implement BFS and then be unable to tell you what any of their code does, I've personally seen:
These aren't uneducated people, several of them went to Harvard, Oxbridge, MIT, IIT. Some of them graduated with high honours, top of their class, published research at grad level, but holy shit they are utterly clueless when it comes to basic coding problems - even non-LC style stuff.
Aomix@reddit
I get horror stories from a friend of mine who manages a bunch of early in career developers. He fired one who would refuse to review or test the LLM generated code he put into PRs.
michaelbelgium@reddit
Its already happening unfortunately
They can't work without AI, its like oxygen for them
st4rdr0id@reddit
This bubble feels like Expert Systems 2.0 but with the funding of the .com era.
kyngston@reddit
The bottom line is productivity. Usage models that boost productivity are good, even if it means the dev can be lazier. Usage models that reduce productivity are bad, full stop.
Everyone keeps posting anecdotes of increased productivity and reduced productivity as if you can't demand one without tolerating the other
EwanMakingThings@reddit
I've seen some of this as well, even a bit in my own coding, but overall it makes people more productive, not less, so I think it's still a net positive.
I feel like you could have made a similar argument against something like Stack Overflow back in the day. Yes, of course it's bad if you copy and paste the code without understanding it, but you'd be crazy to say "I never take code from Stack Overflow, I write all of it myself".
squidazz@reddit
I know someone who is using ChatGPT to get through medical school. They have online tests with proctors watching through the PC camera. She set up a second PC with a monitor behind her school laptop so that she could cheat in spite of the proctor. When you badger her about how bad this is for her future patients, she just hand-waves it off with "I promise to learn everything later." She plays video games instead of studying.
RealFrux@reddit
I don't know if we are there yet, but in the near future I wonder if vibe coding will become the new normal. It sounds boring/scary as a dev who likes to code everything myself, but it might not be that bad if the focus is on control of, and responsibility for, the AI output in a larger context. Like: Use AI to help generate a spec. Validate the spec and make sure it covers all cases. Have AI generate tests for the spec with all input/output outcomes. Let AI work on the implementation until all tests are passing. Now you have a component that fulfills all your requirements. That you don't ever want to look at the inner implementation is not a problem, as it passes all your tests and requirements. Bad performance? Well, that was a missing requirement; add it to the spec and let the AI work until all requirements are fulfilled. Then move on. The human focus and work will be on making sure you use AI to create 100% tested building blocks, plus a few decisions about scope and size.
I find AI today is not yet good enough that this beats writing 100% correct components yourself, but I think the point where an approach like the one described above becomes more and more interesting is just around the corner.
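As a concrete sketch of what that "spec as tests" contract could look like (a hypothetical slugify component, jest-style assertions assumed): the test file is the artifact a human reviews and owns; the implementation is whatever the AI iterates on until every case passes.
```typescript
// slugify.spec.ts: a hypothetical spec-as-tests contract.
// A human reviews and owns these cases; the AI iterates on ./slugify
// until all of them pass.
import { slugify } from "./slugify";

test("lowercases and hyphenates whitespace", () => {
  expect(slugify("Hello World")).toBe("hello-world");
});

test("strips characters outside [a-z0-9-]", () => {
  expect(slugify("Rock & Roll!")).toBe("rock-roll");
});

test("collapses repeated separators", () => {
  expect(slugify("a  --  b")).toBe("a-b");
});
```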
SnaskesChoice@reddit
We were always bad and lazy!
loptr@reddit
It's worse than that. It isn't producing engineers at all, it's producing tech consumers.
Even if it was possible to become a good junior engineer by using GPT, it won't actually happen in the workplace because companies won't give the necessary time/leeway to go that route.
Barely any company today even accounts for any level of initial productivity loss/starting stretch when switching to an entirely new tool/way of working (i.e. Copilot and similar tools). Most juniors will have zero room for actual growth/reflecting/deep diving [because frankly that's where we are today in companies even without AI, and adding AI creates a tenfold expectation in output/productivity, so it will just get worse].
CurryNarwhal@reddit
Just throw more AI at this, it'll have to solve it at some point
Electrical-Mark-9708@reddit
I think you've missed the point: we no longer need junior engineers. I suspect in the future we will return to the age-old apprenticeship model.
DoingItForEli@reddit
I don't mind using ChatGPT to do the busy work, but when I depend on it for anything complicated, I basically find stuff immediately wrong with it every time, and it's simple stuff too. "Should we be setting that string to null given we're only checking a length of 0 later?" ChatGPT: "Thank you for that astute observation, you are so smart and amazing, here is the updated code blah blah blah."
The type of person who wholly trusts these apps to do their work for them just is not going to make it. You need to remain attentive to every detail. Don't just copy and paste functions into your code without knowing what they're doing. One day you'll be put on the spot and asked why you did something a certain way, and if you can't even explain it, your goose is cooked.
tOLJY@reddit
I'm in the same boat as you - but maybe more naive - is it really that different to us relying on Google searches and Stack Overflow and having to sift through the crap?
SoggyGrayDuck@reddit
Part of it is the planning and lack of structure. Everything is done in a way that delivers ASAP, but that's not scalable. It's so frustrating, because everything I learned in college about this stuff went out the window 5 years ago. Now companies get upset when you talk about the tech debt that needs to be addressed before we can start planning the future again.
deZbrownT@reddit
If you stop worrying, the world will keep on revolving.
Same thing happens when you stop using ChatGPT to write this slop to farm likes.
ITried2@reddit (OP)
I did not use AI to write this.
deZbrownT@reddit
Well thank you, but I will exercise my right to disagree. Your text is obviously written by ChatGPT; the dashes are a dead giveaway.
Its_me_Snitches@reddit
This doesnāt seem like GPT-generated writing to me. ChatGPT uses the emdash frequently, but that doesnāt mean that all writing with dashes is generated by ChatGPT.
Iāve been writing frequently with the emdash for about 20 years, and this assumption is the bane of my existence. My only saving grace is that my writing structure is often so bad that people know GPT didnāt make it.
deZbrownT@reddit
Yeah, sure, whatever floats your boat.
valkon_gr@reddit
Well, AI is here to stay and it will keep getting better. There is no point fighting it.
cran@reddit
I was hoping AI would do a lot of my work for me so I could relax more and not work so hard. Turns out I just get more done, and I love being productive, so now I'm up all night excited about my projects like I'm in my 20s again.
mrcomputey@reddit
It has its uses -- for unfamiliar languages and ecosystems it's great. I recently shipped something that did a bunch of Terraform provisioning in GCP, for example. Both were pretty unfamiliar to me, but by guiding the AI to what I wanted, I got it done a lot quicker and saner than I would've by myself. I was very pleased with the ultimate result and the ~90% accuracy of the unedited AI output.
And if juniors weren't just ripping code from chatgpt, they'd be ripping it from stack overflow or GitHub or whatever. When the printing press was invented, luddites complained that so much reading was going to make everyone blind
At the end of the day, it's up to the individual to strive to be better.
SynthRogue@reddit
I've been programming since I was 12, since 1997. I had no internet and no tutorial. And no one in my family or friends knew what programming was. All I had was the help section of turbo pascal when I pressed F1.
I also programmed in Notepad (the basic version) for over 20 years and used the terminal to compile my code. It wasn't until 2019 that I first used an IDE and started using third-party libraries. Before that, I programmed my own solutions for everything, using the basic/standard commands of the programming language and human logic, because I didn't know third-party libraries were even a thing.
I was very self-reliant for over two decades. So you can imagine my disappointment when I got my first software job in 2022 and learned that not implementing your own solutions, following design principles and patterns, and using frameworks were considered best practices. It instantly killed all the enjoyment of programming and turned it into an exercise of "copy-pasting" ready-made solutions, downloading libraries and using frameworks that do it all for you.
All of a sudden programming was no longer a puzzle to be solved and a means of expression (turning my logic into code), but a boring "colour inside the lines" activity. I understand how all those practices make it possible to develop quality code faster, and how that is advantageous for a business, but in my opinion it makes most developers too reliant on the work of others, instead of knowing how to program their own stuff and being self-reliant.
Maybe there's a balance to be reached there, but now with AI, the idea of not reinventing the wheel is taken to the extreme. Because why would you program anything anymore, if AI can do it all for you? According to the best practice principles in the software industry, it would make sense to have AI generate all code or as much of it as possible. Since doing it yourself would be a waste of time for the business.
So on the one hand you have people who program everything from scratch and implement their own solutions (e.g. Jonathan Blow) and on the other you have vibecoders. For the past few decades software engineers were somewhere in the middle of that spectrum, using google, documentation, tutorials, forums and stackoverflow to find their solutions, copy-paste and change those as needed in their code.
We were told implementing everything from scratch is bad practice and now we are told vibecoding is also bad practice. So which is it? To reinvent the wheel or not to reinvent the wheel? Why bother being in the middle and having to struggle with both?
Intrepid-Sir8293@reddit
You can't drive the car everywhere and expect to have the ability to run a mile
TrickyWookie@reddit
Why worry about increased job security?
polypolip@reddit
Artificial Intelligence producing Real Stupidity.
swollen_foreskin@reddit
I work as a platform engineer, aka customer support for developers, and let me tell you that it's pure hell. No one knows how to do anything, how to read up, how to try themselves. Everyone goes straight to pestering the platform team. When I started in this career it was heavily frowned upon to bother people if you hadn't tried yourself. But these days I get senior devs asking me basic shit that is in the docs. It's mind-blowing.
techie2200@reddit
I think this depends on your employer. I've worked at places where eng teams are not allowed to work on anything that's not ticketed and accountable to their current project, and I've worked at places where experimentation was encouraged.
polypolip@reddit
That's been persistent ever since forums got replaced by Discords and the like. And other users will berate you when you point out that the person asking the question could have made at least minimal effort before asking it. That's why I hate people hating on Stack Overflow's heavy moderation; it was helping with that problem and made finding actual answers easy.
nevon@reddit
Right there with you. The worst for me is when they have tried nothing and are all out of ideas, so they assert that it must be a problem with the platform/the network/"the cloud" and now I'm spending an hour trying to get them to do their own debugging instead of having me do their job for them.
swollen_foreskin@reddit
The last year I've been debugging Java for Java devs, C for C devs, Python for Python devs and teaching DBAs how to connect to Postgres 🤣 at least the job security is there...
Acceptable_Durian868@reddit
It's just another tool. There are always bad and lazy engineers, and they're not always juniors. Focus on mentoring them to use their tools effectively instead of trying to feel superior.
danthegecko@reddit
What may happen... the tooling for agent coding feedback loops will improve sufficiently that those junior devs will just watch the agent build the code, deploy it, watch it burn, then debug itself and apply a fix, rinse repeat. The dev will have NFI what the code is doing and most won't care. Their aptitude as an engineer will be measured differently to those of us pre-LLM.
It's ok to feel some nostalgia for those times though, it's part of being human.
ActContent1866@reddit
Bet a lot of people complaining about AI happily copy-pasted from Stack Overflow for years while they were junior and learning, just trying to get by and provide some value to their company. Rather than shame people for using what is there at their disposal, why not get them to flag any work done with AI help and have them explain what they learnt along the way and what they could do better. Lazy could equal someone trying to succeed without bothering that snappy senior dev with questions an LLM might help with.
Personal-Status-3666@reddit
Agree there is a negative feedback loop here.
joe190735-on-reddit@reddit
hold them accountable to their outputs
Fancy-Nerve-8077@reddit
I'd like to think that it can make engineers significantly better. Not only can it do the work, it can explain it.
Zeikos@reddit
I share the worry but I am optimistic.
Thing is, people need training in how to use LLMs effectively, and since they're so new, very few people have the needed expertise to share.
LLMs are completely fine for templating and quick iterative exploration of something you're not familiar with.
I was recently working with PL/SQL to spruce up some procedures, I had no clue how PL/SQL worked, and the chatbot helped me avoid pitfalls that were obvious in hindsight.
I learnt so many best practices in so little time, and the code was simple and readable from the get-go.
And now I feel far more confident to work with PL/SQL in the future.
I reached the same level of comfort in two weeks that would have taken me three-plus months with only the documentation.
Angryvegatable@reddit
Don't worry about it
pragmaticcape@reddit
I have had a few grads come into my team and been stunned to find out that they don't even have Copilot enabled on their IDE. Vanilla completions and no chat.
I tell them all the same thing: you will be fine for a few months so I can assess you. It's ok to use GPT to research, but no copy pasta.
They struggle, get feedback, then they don't. I enabled Copilot for one after 3 months; the other was closer to 6.
If I could force them to use SO I would lol.
I'm a massive user of LLMs and such. They save me plenty of grunt work, but I've been mashing keys for 40 yrs. A kid on their first job shouldn't be deprived of the learning.
supremeincubator@reddit
I'm a self-taught developer who later went on to take a degree (still an undergrad while working), and I see peers who can't explain their own assignment submissions. They breeze through their degree with ChatGPT, and I don't think many of them put in the effort to understand the AI-generated working code for their own assignments. You could argue it is a problem with the universities turning a blind eye to the obvious and letting them pass, but it is what it is!
thearn4@reddit
I share the same concerns. Part of me also wonders though, if eventually it's just the next step in the evolution of how expertise is applied, and the mental models will shift accordingly over time. In my first job my boss showed me a copy of the program he wrote for his dissertation, which was a thick stack of punch cards of old FORTRAN fed into a 70s mainframe, wildly different thought processes and pain points than what I was doing during my own dissertation at the same time. Maybe this step, once best practices are actually worked through, will be similar for the next generation?
Fit-Notice-1248@reddit
This is something that I have been having a problem with, but at times it isn't really the fault of the Juniors, but management is constantly pushing them to use AI tools/Copilot. I have stand-ups with the manager and the team and the manager is constantly asking why we aren't using Copilot for every line of code, and no matter how much I explain that it's not required in every situation, management doesn't want to hear it. Sure toxic workplace and all, but there is a push from higher-ups forcing engineers to use these tools, even when it's not necessary.
But to your point, yes. I have made it a point to tell the engineers on my team not to do this copy-and-paste nonsense. I have had so many problems with engineers just writing one-sentence prompts to Copilot, copy and pasting the output, and when I ask them why they implemented something this way, their response boils down to "I don't know, ChatGPT/Copilot told me to do it". It's a little infuriating, and I feel it definitely causes people to be lazy and not investigate/understand what they're trying to do.
Stock-Marsupial-3299@reddit
Some people work at jobs they have no real interest in and don't excel at. This has been the case long before AI. There have always been junior engineers who change careers because it does not work out for them; the difference now is that they will create more mess to be fixed until they make the eventual switch.
Fun-Dragonfly-4166@reddit
don't worry about it
either chatgpt can handle things and you just don't know how to prompt chatgpt (unlikely but possible)
or the junior will learn from experience
lordnacho666@reddit
Yes, and you should be happy about that.
Now you don't have to pull the ladder up yourself.
FetaMight@reddit
Is the insinuation here that tech is hard because people established in the field deliberately make it hard?
I have met a handful of people in real life who thought this way, and even had the misfortune of co-running a non-profit community organisation with one of them.
The common link I've noticed between all of them is that they refuse to acknowledge other people have put in the thousands of hours required to become an expert at anything. When they're faced with having to slog through the early stages of learning something complex, they quickly abandon it and spend more energy crafting stories about how their inability to gain expertise is everyone else's fault.
It's exhausting.
xian0@reddit
Wasn't so long ago that we thought there might not be any point in trying because the "digital native" geniuses were definitely going to whizz right past us.
FetaMight@reddit
Offloading != Outsourcing != Offshoring
Emotional_Street217@reddit
To me, leveraging genAI as an "initial exposure" to something new is okay, but when starting out, to really understand the edge cases of what can go wrong and to think outside the context of an IDE, you need to start with a fumble, slip, debug and eventually succeed on small, trivial concepts and ideas.
When you palm the fundamentals off to AI at the very beginning of your learning journey, you forget to ask why and how things work the way they do, and see only the outcome.
PositiveUse@reddit
This is not about junior vs senior.
This is about knowledge and brain loss in favor of laziness and employer driven productivity.
prashnts@reddit
One thing I could recommend trying is to create a team-wide Standards document. Since multiple devs are repeating the same mistakes, you can take inspiration from that. The goal of the document would be to formalize things like styles, plus links to established external docs to refer to. This would put you in a teaching position (which is part of your job anyway) so you can train the juniors.
ChatGPT is likely here to stay, and you can't stop people from using it. But you can teach them how to not rely on it.
simfgames@reddit
Very bad and very lazy junior devs have existed long before AI.
A very bad and lazy junior dev never turns into a great engineer, that's not a path that really exists. Because that path requires curiosity, and by definition someone with curiosity is not a very lazy dev.
And AI is now more useful than those people! So maybe it will just result in fewer of them getting into the career, which is a good thing!
And we'll also have more people joining who are using AI to learn faster and be way better than before. So there will be that kind of balancing going on. You don't need the traditional junior path to exist anymore. The market will find a way.
So yea long story short these posts seem silly to me. Worrying about this is mostly irrelevant.
casastorta@reddit
Bad, lazy and very self-confident.