the juniors who only learned to code with AI are going to have a rough time in about 5 years
Posted by Motor_Ordinary336@reddit | learnprogramming | 227 comments
Two juniors on my team. Both ship fast. Both grew up on Cursor and Claude Code basically
last week one of them pushed something that broke in staging and I watched them paste their own function back into Claude going "what does this do." code they wrote on monday. THEIR OWN CODE. that they merged
I know how I sound. every senior ever has complained about juniors not knowing X and I swear I'm trying not to be that guy. but when I came up you had no choice but to sit with broken shit for hours and slowly build a map of the system in your head, and that part sucked but it's also where the actual learning lived (for me anyway). now you don't have to suffer through it. you just ask.
(not an anti-AI post btw, I use it constantly)
year 1 is fine, year 1 they ship features. it's year 5 I keep thinking about. one of them on call at 2am, prod doing something insane, AI confidently wrong, and they need to reason through an unfamiliar codebase under real pressure. I don't know what that looks like for someone who never built the muscle
NeonQuixote@reddit
Business executives are going to have an even harder time when they discover where senior developers come from.
YetYetAnotherPerson@reddit
Are we talking about the same business executives who don't know how to build an Excel model because they just asked AI to do it?
Individual-Brief1116@reddit
Exactly. They'll cut junior positions to "save money" then wonder why their senior devs are all 45+ with no pipeline behind them.
AttorneyAdvice@reddit
senior devs retire at 45?
45t3r15k@reddit
We burn out at 50
Gazibaldi@reddit
I'm a senior dev and I'm 45 next month. I wish I could retire.
Keep on keeping on.
Just2Ghosts@reddit
Career switch into goose farming takes place around this time for most
no-curse@reddit
Carpentry đȘ too
Donstap@reddit
Damn, I'm only 28 and I already switched.
Plus, getting hired as a junior in this economy and this "AI era" is insane. I'd rather spend my time as a plumber or carpenter than go to LinkedIn hell another day
EnyaMorgan@reddit
I feel attacked!!!
InVultusSolis@reddit
I was about 6-7 years late to the party, so my goose-farming phase should hit closer to my early 50s.
But for now I'm still doing pretty well in my career.
karmiccloud@reddit
It was alpacas for one of my coworkers
demigodxy@reddit
lmfao
AtomicMac@reddit
Yeah, most times not voluntarily.
Dramatic_Win424@reddit
I wonder how this will pan out in the future. Nobody in a deciding role can tell anyone's real skill level during the hiring step anymore because AI tools can hide the true knowledge and skill level to an astonishing degree. It might destroy trust in all known indicators.
We might move onto word of mouth exclusively because there isn't any other way to know.
Do we have any comparable industry where hiring practices changed like that?
Humble_Warthog9711@reddit
Let's be honest - the industry has for the most part always held that the code people claim as their own is a beyond-awful indicator of their actual knowledge/proficiency.
It's why the fuss around here about personal projects is wayyyy overblown, though I don't think people here want to hear it.
shure_slo@reddit
It's insane that people expect personal projects to make up for years of production level experience. If I have 10+ years in high profile company what does it matter if I do not have projects in my personal Github???
Humble_Warthog9711@reddit
I think it's spoken of here as the way to make up for a lack of degree/internships for self taught devs and bootcampers mostly.
But anyone that has worked in tech knows that when people's livelihoods are at stake, cheating becomes the norm.
No one is going to look at a junior's personal projects unless they were being hired already
LeatherDude@reddit
"There must be a prompt we can use to get senior quality code"
need-original-name@reddit
It takes a few weeks, and access to LinkedIn.
Mejiro84@reddit
And money!
Sunstorm84@reddit
There is!
"Make me an advert for hiring a senior developer."
Geno0wl@reddit
On a long enough timescale, we ironically end up like Warhammer 40k Tech-Priests. If you are unfamiliar with the universe: Tech-Priests are basically the scientists and engineers, but technology is so complicated that nobody actually knows how any of it really works anymore, so they treat tech like religious rituals.
InVultusSolis@reddit
I would be happy with that. I can't stand the 3+ rounds of interviews that are basically just a byzantine set of hurdles to jump over.
If I were tasked with hiring a junior, I'd bring him/her in and simply sit next to them and work on a problem. I'd create a mock systems design exercise meant to test knowledge all the way from basic coding up to knowing how systems interact. The only thing I would disallow is "write the code for me" sorts of AI usage.
I know that sort of thing isn't always possible with remote positions, however.
PartBanyanTree@reddit
I wish we could somehow work in a reputation based mentor/apprenticeship style situation where it is all about who you know.
I realize this will lead to horrible inequalities and be rampant with abuse and worse. I get that it's a terrible idea
but also I wish I could work with people who I trust, help people I know are good, and that trash devs could be ostracized instead of promoted. combined with some type of union so that management is forced to respect people
but y'know somehow it's a good union and only helps people and isn't bad. and that somehow all the programmers who can't even agree on what to name a variable will somehow run things properly and fairly in a way that only has benefits and no drawbacks
Kane_ASAX@reddit
Yeah nah. This is science fiction.
First company I worked for. My boss was my best friend's dad. Repeatedly delayed my paycheck by 1-2 weeks every time. Never again. I do not want to work for "someone I know".
You can build that trust with strangers. It can be done, and that is exactly what I've done after the first shithole.
Important_Coffee_845@reddit
Ew... I wanna get away from "who you know" as much as possible. Forget that.
need-original-name@reddit
I will probably be hearing that some anime artists have started requiring that their job applicants draw in front of them.
Frolo_NA@reddit
its been trending that way already
TheRealKidkudi@reddit
It's always been that way for any technical role, including the world outside of programming. Where do you think leetcode-style interview challenges came from?
The phenomenon of a generally incompetent or incapable coworker is nearly universal. The problem is exacerbated by the fact that programming is basically magic to non-programmers, but most jobs face the same problem: it is hard to accurately assess someone's skill level without 1) having that skill yourself and/or 2) actually seeing them practice that skill.
I've seen it commonly mentioned that other industries have some form of a licensing requirement, and that is good in many ways, but that doesn't really solve the problem - it really just indicates a minimum level of competence, AKA "this person is at least capable of the bare minimum" or, in some cases, simply "this person is aware of the regulatory requirements for this job"
DigmonsDrill@reddit
It may just be that they give up on having any seniors and most development and maintenance is done by people prompting AI.
Beneficial_Noise_737@reddit
What will word of mouth be based on in 5 years when, as you say, real world tests have become an imperfect measure of an individual's skill due to AI?
wkynrocks@reddit
Indeed global dev positions are not going down.
Upstairs-Sentence512@reddit
If it takes 5 years for many companies to feel the impact, then it will be around 2030, but how far will AI coding tools have advanced by then?
We've already seen primitive exploit finders such as models like Mythos, so maybe one can simply do adversarial testing with red team agents.
TheStruttero@reddit
Don't they just spring out from the ground?
decrementsf@reddit
Only if you have the teeth of a dragon and sow them in soil freshly plowed during the new moon.
future_traveller@reddit
The roles are just shifting. IT has for the most part always been a tool set to support some other type of outcome. The roles in IT have shifted before and I don't expect this time to be wildly different. Sure, the work will change as it always does in IT, and your juniors will probably spend more time creating agents or adjusting them than they will writing code.
lasooch@reddit
Yes, let's have the juniors set up agents that will burn thrice their salary, while they (the juniors) don't even yet have the capability to assess whether the agent's output is correct. Brilliant idea.
future_traveller@reddit
All they will have to do is assess that the output is correct, though - that's the point. I no longer need someone to get syntax right, I need someone who can judge if output is correct and adjust/update instructions to get it there.
So while yes, the days of junior programmers are over, you still need junior folks doing this work......
madrury83@reddit
Centuries of education support the thesis that you cannot accurately judge the correctness of technical work you never learned, through practice, to produce. There is no magic spell here, LLM use almost definitionally weakens someone's ability to assess correctness.
future_traveller@reddit
I don't even need to program to judge if the programming work done met the outcome I needed. And I can have an agent scan the code for standards compliance, functionality, and security vulnerabilities.
I really just need a junior product manager to say this functions well and how I want it.
Pantzzzzless@reddit
Sure, if you want to happy path everything.
What happens when the size of your dependency tree grows beyond that of a grad project? Try getting Opus to accurately grasp how a system works when there are 5 layers of downstream calls. Half of which have no documentation at all.
Lol, ok. Sure, it will spit out a nicely formatted table with green checkboxes telling you "This is a very clean implementation!". But that pat on the back won't help much when half of your services are tripping circuit breakers because your agent didn't think to do any load testing.
If your measure of success is "functions well", then I truly feel bad for anyone that has to come behind you to clean your messes up.
A "junior product manager" is not gonna have any idea how to test edge cases.
future_traveller@reddit
As opposed to a junior engineer who is definitely going to produce high quality code, scale everything out correctly in complex scenarios and understands the whole app and code base? Reality is when you hire a junior engineer the quality you get isn't great at first. It requires solid process with checks and balances.
I'm suggesting you build the checks and balances into your AI devops workflow for things like writing test cases, executing them, etc. The jr product manager doesn't need to be worried about the code; they just need to worry about clearly defining outcomes and validating those outcomes.
glotzerhotze@reddit
You have no clue what you are talking about. Absolutely no clue, like most of the code I get handed by LLM juniors. Looks like "the shit", and works very shitty, too.
nachoaverageplayer@reddit
IT is not programming.
future_traveller@reddit
Correct programming is one very specific small part of the it world.
AUTeach@reddit
The world is large.
In Australia for example, IT is descriptive of systems/network and software. It's so entrenched that my university rebadged their computer science degree as information technology only offering CS to international students.
Also, wouldn't it be a bad recommendation for the next generation? Systems/networks are getting more and more code friendly with how you configure remote and cloud computing.
I know a lot of my systems administration time is less about smashing commands and more about writing ansible plays, and often scripts to patch stuff together.
I'd hate to be a junior systems/network administrator/engineer and not have a solid foundation in scripting.
Important_Coffee_845@reddit
H1-B programs?- I uh- I don't get it.
OrphicDionysus@reddit
This has been a growing problem across a lot of industries for decades now. Larger businesses had depended on smaller competitors to train their workforces, poaching them once they had some experience as a way to offload the costs of training onto companies that didn't have the hiring networks or prestige to do the same thing. Then through the post-antitrust era they ate those competitors through unchecked consolidation, until the only participants left in their markets were the other players large enough to have altered their hiring pipelines to work in the same ways.
Individual-Shame6481@reddit
From The Hundred Acre Wood?
farfromelite@reddit
They come from LinkedIn, duh
/s
BroaxXx@reddit
Do you know who won't have a hard time? Everyone that can work without an LLM... I can already feel the salary bump and job security in a couple of years...
simonbleu@reddit
Yup
I think that if it gets bad enough, what will grow the most are companies middlemanning contractors, and companies that train and/or test employees for you under synthetic scenarios. Likely the same companies
urbanhawk1@reddit
When a mommy developer and a daddy developer love each other very much...
tripleshielded@reddit
Mars ofc
SNsilver@reddit
I keep raising this concern at my job, and it's been brushed off so far. I have a few juniors that report to me that I've put on an "AI ban" because they kept putting up garbage in MRs. Not sure what's going to happen but I know it won't be good
gordonnowak@reddit
I don't understand this. I've just picked up claude code and if I insert it in the process loop where I would usually go off and look something up or actually type something out, generating roughly size-and-content-equivalent PRs to what I would normally produce, it takes 1/50th of the time and is indistinguishable in quality. what are you seeing that's garbage?
shamusl93@reddit
Most of it. "Works" and "good" aren't the same thing. If you're looking at what Claude/Codex are actually writing you'll see tons of over-engineered, overly defensive code that repeats itself frequently. That leads to a massive cognitive debt where it's difficult to reason about once it breaks, and, in any reasonably sized codebase, it will break. I use these tools, but I see the garbage they're constantly creating, and I fix it where appropriate. In tiny brand new codebases it's reasonable; throw it into a 250k LOC project and it'll ruin your codebase.
If you can't see the garbage it's producing, that's more a sign that you need to spend more time learning why what it's doing is such a bad idea.
gordonnowak@reddit
so far I've caught it generate some particularly naive architectural ideas over long sessions but the code itself is perfectly acceptable. it's pretty easy to bound them into effective patterns by mere suggestion, and one thing I won't accept that they're bad at is writing tests. so if the stuff you're seeing is bad and breaking, I'd suggest it's a sign that you don't know how to use these tools properly
Pantzzzzless@reddit
I would love to do the same. But my juniors are all offshore. And I don't have the power to block their EID on our licenses.
fractalskiesahead@reddit
This thread is pure cope.
Jahonay@reddit
I feel like we're in the flooding the market stage for AI, once prices readjust for prompts to become profitable, it will be a lot more cost prohibitive to solve all your problems with it.
tfhermobwoayway@reddit
Is the cost of AI going to go up? I heard that happened with Uber and delivery apps but are they using the same model for AI? How does that work?
AUTeach@reddit
Looking at the changes in claude, they are going to move everybody to a token based model sooner rather than later.
If you are on a subscription fee, they are going to give you daily and weekly limits.
API users are already on a token basis but the rumours are that they are burning through money there too. So, the prices are going to go up up up
Mrseedr@reddit
Doesn't claude already have multiple limit-windows per day? e.g., "your usage limits reset at 6pm"
Jahonay@reddit
It's just an assumption, that's the penetration pricing approach, you flood the market without making a profit, and then once you're everywhere the prices go up. The idea is generally to win over market share with low prices, and then once you establish a market you raise prices to become profitable.
I mean, I suppose we'll see, but I don't see how else they could be profitable.
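To make the penetration-pricing argument concrete, here's a toy back-of-envelope sketch. Every number in it is a made-up illustration (cost per token, usage, and subscription price are assumptions, not real vendor figures):

```python
# Hypothetical land-grab economics of a flat-fee AI subscription.
# All figures below are illustrative assumptions, not real vendor data.
cost_per_m_tokens = 2.50        # assumed compute cost per 1M tokens served
tokens_per_user_month = 40      # millions of tokens a heavy user burns monthly
subscription_price = 20.00      # flat monthly subscription fee

cost_to_serve = cost_per_m_tokens * tokens_per_user_month   # 100.0
loss_per_user = cost_to_serve - subscription_price          # 80.0 lost per heavy user

print(f"serving cost: ${cost_to_serve:.2f}/mo, loss: ${loss_per_user:.2f}/user/mo")
```

Under these toy numbers, flipping to per-token billing means that same heavy user pays the full serving cost instead of a flat fee, which is the price jump people in this thread are anticipating.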
tfhermobwoayway@reddit
I mean I guess that makes sense. I thought people would abandon food delivery apps once they shot up but it seems like more people are using them than ever. Some people just don't know how to budget.
Pantzzzzless@reddit
People are different from corporations though. If the board starts to see money hemorrhaging from the AI pipes, they will turn that shut-off valve without a thought.
subLimb@reddit
I wonder if this may be one reason why many firms are throwing AI so hard at their employees right now. If they expect vendors to become prohibitively expensive in the future, it might make sense to take advantage of as much cheap consumption as possible right now, while using that experience to train their workers and simultaneously come up with some in-house solutions that could take the place of vendors if it becomes necessary.
Jahonay@reddit
Yeah, we'll see how people budget AI after the prices readjust.
Pablo_Ameryne@reddit
Basically they are running these services at a loss with money from venture capital. When investors start to want to see a return they will start charging the real cost to users.
Meta-Four@reddit
This 100x. Soon it's going to be cheaper to throw human labor at the problem than AI.
derallo@reddit
Based on how good the local models are getting, that scenario seems unlikely.
AUTeach@reddit
A sizeable chunk of the people white knighting AI here are solo non-technical people using something like claude code or codex to be their entire development and operational pipeline.
The moment they are paying per token ingested and egested their entire operation is going to become ridiculously expensive.
movie_man@reddit
You didn't address the comment or the person you were replying to though.
Ok_Food4591@reddit
I agree with this 100%. AI is only "viable" for the full development process when you are able to pay once and prompt it to hell and back about everything. Once you start paying the actual cost of computation (and enterprise customers are first on the chopping block), unless quality instantly skyrockets, stepping away from using AI at scale will be the easiest way to cut costs
shawnaroo@reddit
Yeah, it'll be interesting to see how it plays out when these AI companies start telling their customers "you need to pay for every prompt. Yes, even the ones where it returns incorrect answers".
Not only is that going to massively raise the price of using those AI tools, but it just leaves a really bad taste in people's mouths when you're charging them for stuff that's straight up defective.
Teagana999@reddit
As soon as everyone is thoroughly addicted to AI.
gloriousGeeseGrease@reddit
This is never going to be the case. I'm sorry, but this is just wrong
Meta-Four@reddit
Why do you think that?
HannibalK@reddit
That sounds amazingly stupid. Do you really believe that?
Meta-Four@reddit
I do think we'll see intermittent jumps in progress as AI companies crank up monetization. Right now it's being rolled out as a solution to every problem, and I think we'll see places where it gets rolled back because of costs, especially if it doesn't fully replace the position it's being used for.
I also fully expect it to get better and cheaper in the future as well. But there will be periods where the cost of AI outweighs the cost of just having a human do the job.
Ok_Food4591@reddit
Honestly, I can see it getting better for sure, but cheaper? No sir. To get better, the existing data centers would need to be constantly modernized with better hardware, and the problem the existing ones have is that physically setting up the entire infra takes so long that a DC's gear is 3 years old at opening. They would play a constant game of catch-up, and that costs a ton of money with the amount of compute LLMs need for training and operation
Meta-Four@reddit
By cheaper I mean, what might require a higher cost/usage model now would be able to be done with a lower cost model in the future. But I made my first comment based on the opinion that I do think these models are going to get very expensive to users because of exactly the reasons you mentioned.
MufasaSaylum@reddit
Claude code is literally being removed from the Claude pro subscription as we speak
yopla@reddit
Doubtful. It will eventually become a cheap commodity; China is making sure of it just to damage the US. And even if the compute is still significant, for a decent business, buying a few 100k of GPUs to run a very large local model is not an issue; it's just more convenient for now to pay five times as much to Anthropic & co.
Jahonay@reddit
I'm not dogmatically committed to my guess, but how do you think the big AI models will become profitable if not by raising prices?
gordonnowak@reddit
by lowering costs? lol
Jahonay@reddit
Sure, but how? I'd assume most companies wouldn't want to cut on quality, where do you see massive budget savings? To my knowledge they're operating at a loss, that's likely a lot of ground to make up.
gordonnowak@reddit
government subsidies, favorable energy contracts, cheaper hardware (China). maybe I'm wrong, I don't fucking know. I don't know what these companies are burning so I can't say what a reasonable cost model for profit is. If it turns out to be $50,000 per million tokens, great. Something tells me they'll figure it out
Jahonay@reddit
I'm in the same boat, I'm not committed to my guess, I have no idea how it will go, I just know they will eventually need to balance their budget somehow.
I think those are reasonable guesses, I guess we will see if those sources end up closing the gap.
AaronPK123@reddit
I'm wondering once things balance out and VC money dries up whether a human coding versus an AI will be more "bang for your buck" in terms of how much work is done for how much money.
licorices@reddit
Yeah, if it is going as it currently is, no one will really afford AI for personal use, and even many companies will either have it extremely limited, or skip it all together. A lot of companies built around AI, wrappers and so on, will fall extremely hard the moment those costs fall onto them as well.
epic_pharaoh@reddit
Ask me what a function I wrote a week ago does, with or without AI, and I would need a minute to figure it out as well.
Just because they are asking the AI a question and using the AI doesn't mean they are relying on it. They are juniors; they are developing skills and trying to use the tools available to them.
If they aren't documenting their code well enough that they don't need an AI to summarize it for them, then you as the senior are responsible for teaching them how to do it.
I see a lot of seniors complaining that the juniors are leaning too much on AI. If that's the case, show them the tools and methods they should be using instead, and show them how those methods/tools are more reliable.
People will use the tool they think is best for the job; people learning to draw with a pencil will use an eraser. If there's a problem with how they use AI it's your job (or at the very least your company's responsibility) to point that out and correct it; or alternatively provide an approved coding environment where AI tools must be used externally from the IDE (in which case stackoverflow is often a more efficient solution on its own).
nenchi_@reddit
That tool is called head.
epic_pharaoh@reddit
You offering?
On a more serious note, no, it's called properly documented code.
quietcodelife@reddit
the on-call scenario is exactly right. I've had those 2am moments and what gets you through isn't knowing the answer, it's having a mental model of how the system is supposed to work so you can find the drift. AI doesn't help you build that model. it helps you build things faster without building it.
what I keep noticing is the debugging style. devs who learned through suffering tend to form hypotheses, test them, and narrow down. devs who learned through prompting tend to ask what might be wrong and hope something sticks. those are genuinely different failure modes under pressure and I don't know that you can shortcut your way to the first one.
gordonnowak@reddit
you won't need to do *any* of that pretty soon. the starry-eyed bullshittery in this thread is insane
quietcodelife@reddit
timeline is the part I genuinely can't call. if the window is 2 years you're probably right and most of what I said is moot. I just don't know, and I don't think anyone in this thread does either
Doooofenschmirtz@reddit
If you don't think AI will get infinitely better in 5 years, idk how you function
mrgudveseli@reddit
Big "if".
AhmetYaq8bi@reddit
I'm a junior still learning. I started using AI tools in the past few months, specifically because I realised it helps me do more with less time.
I take note of the changes it makes and why it makes them.
I realized it has sped up my learning process.
Although at times I wish I had a senior dev as a friend, someone I could discuss the things I have difficulty understanding with.
I know AI is good, but there are architectural-level details I wonder about, and I end up not being certain which path would be the decision a senior would make (as that is my end goal).
I don't find AI explanations of system-level decisions good, as its answers to my "why" questions simply don't seem logically good enough.
I feel like it lacks the ability to think system-wide. I have noticed one piece of reasoning doesn't connect to the next; it's as if the thoughts were chains, and many of the chains break rather than linking to the next set (maybe not the best analogy). What I mean is that something is off, but I'm not sure why I feel that way.
Have a great day and thank you for reading. đ
springhilleyeball@reddit
i learned how to code without AI but i am only regressing at my first swe job. i won't last much longer & i know it with the shoving of AI down our throats
internet4ever@reddit
I'm regressing, too. Three years into full-time and I'm noticeably worse.
Kane_ASAX@reddit
Same here. At my first job I was working with C++ Builder stuff and at some point creating the website for some inverters (C REST API + Vue). I did use ChatGPT back then just to get used to the codebase etc, but after like a week I could confidently code on my own. Moved to another job cause the first place was shit.
They wanted me on Cursor and I did as such, but damn, my coding skills degraded significantly. I am now basically telling Cursor to write like 80% of the code, then I'll go in, make sure it follows company standard format and fix the overbearing typechecks.
At the very least I still have my other skills.
ForJava@reddit
So you do your job just fine then?
Kane_ASAX@reddit
Well my boss is currently training me to "co-own" the entire product, so im doing something correctly I suppose
Pantzzzzless@reddit
That just means they want another person to take the fall for something in the near future.
Kane_ASAX@reddit
Do you have experience with this happening?
My current boss is now the HoD (division) and wants me and the previous product director to take over. PRs, client calls, deployments, and issues if they arise.
Mind you, they are both highly technical and built the product from the ground up, 9 years running.
I'm not detecting anything malicious going on
Pantzzzzless@reddit
I was honestly just being a bit cynical and facetious.
dunderball@reddit
What's the alternative nowadays though? It's starting to make less and less sense to try to write code by hand. Best I try to do is at least spend the time to code review / understand what the LLM spit out
RedCloakedCrow@reddit
I think you kinda hinted at it, and a fairly highly-placed google engineer I spoke with a little while ago painted it explicitly: the career path is moving from "can you produce code" to "can you review code for quality".
The difficult thing is that reviewing code is a skill, and it's one that the LLMs are very unlikely to become good at (IMO). You have to know how to write good code (and why that code is good) in order to be able to review effectively.
Kane_ASAX@reddit
It's certainly possible for LLMs to review code. But it only works up to a point. It can look at the code, compare it to what it has in its training data, and thus possibly find vulnerabilities or check standards like code structure.
But this has drawbacks: 1) it should not be trained on its own output - it will turn the output into jelly. 2) there is a significant delay between what the model has access to and what has been released (like a new version of C++)
RedCloakedCrow@reddit
It is possible, but it runs counter to the things that they're currently forced to optimize for. There are conditions in every code base (either architectural or infrastructural) that inform patterns that LLMs won't be able to contextualize, because they require in-depth investigation that will usually be avoided in an effort to save tokens. It's the problem of a cursory glance over the code base and comparing to known standards, vs adapting to the specific conditions that this code base lives under.
An example I recently saw was of a partitioned table in a rails application. There were explicit details about the partitioning of the table in the code, but Claude wasn't told explicitly to investigate the partitioning but instead had to tangentially touch this table, so instead of looping over the records it needed to create and saving each individually, it tried to use insert_all, which failed because there was no global primary key index available.
AUTeach@reddit
It can review code, but as the code gets more specific, it becomes less and less reliable. The only way to increase that reliability is to increase the token count for every review (more and more of your code needs to be ingested), which, as providers move to a token-based billing, is going to become an issue.
What are you going to do when it's as or more expensive to pay for tokens than it is to pay for developers?
sandspiegel@reddit
And to review code you need the experience of writing bad code yourself, so you know what not to do. You don't learn by watching an LLM code for you, which in my opinion is a big problem. How will a junior become a senior if there is no real know-how through experience?
Kane_ASAX@reddit
Yeah, if Composer especially had to create an image of how I treat it, it would definitely show a whip. There are some things that I still prefer to do manually, like meta flow JSONs. Or when I'm busy on my side project learning a new language, I try to go the old fashioned method.
IceSentry@reddit
You used C for a rest api?
Kane_ASAX@reddit
Yes. It was part of the inverter's drivers
sandspiegel@reddit
I think as more and more companies use these tools it's gonna get very expensive over the long run if the models don't get much more efficient. These AI companies are losing billions each year and will need to make money somehow. The only way I see that happening is to drive up prices once they have enough customers depending on their product. I actually hope that this will bite all companies who have the culture now to ship fast using AI.
I myself also use AI of course but only as a stackoverflow on steroids to ask questions to a chatbot. I think if I used some AI agent to do all the coding for me, I would quickly lose the know how. The brain is very efficient when it comes to "what you don't use, you lose" and this very much is true for coding.
AntDogFan@reddit
I write for a living. Research and writing. I'm lucky that I got my early stuff out pre-AI. But now it's so competitive I think I have to use it extensively to keep in work. Otherwise I'm just too slow. But I feel it corroding my ability to work independently. It's a bind, but one route leaves me with no job and the other doesn't.
ricey_is_my_lifey@reddit
It's bad enough that when I recall tasks I did at a previous internship, I need AI to relearn what I did.
Lorzweq@reddit
I'm a graduated software developer and haven't even got my first junior position. It's hard as a junior in the IT field. Companies think they can use AI to do junior stuff, but at some point AI becomes more expensive and they need to hire some medium-to-senior level dudes to fix the AI slop.
MrFlaneur17@reddit
Dont worry, you'll be working as a plumber in 5 years
Aleks_Zemz_1111@reddit
The problem isn't using AI, it's the Loss of the Mental Map.
I run industrial machinery where 'confidently wrong' results in a six-figure repair bill. Watching a junior paste their own code back into Claude to ask 'what does this do?' is like a mechanic not knowing which way to turn a wrench.
When you spend hours sitting with broken code, you aren't wasting time, you're hard coding the system's logic into your own brain. If you skip that friction, you never build the muscle memory required to debug at 2 am when the AI is hallucinating, and production is down.
These juniors are becoming Operators of an Oracle, not Engineers. They can ship features fast when the path is clear, but they are blind the moment they hit uncharted territory.
In 5 years, we're going to have a surplus of "Prompt Operators" and a shortage of people who actually understand how the machine works. Never merge a line of code you couldn't explain if the power went out.
mp5max@reddit
As a junior interviewing for my first technical role, I'm really curious as to how they passed a technical interview.
Important_Coffee_845@reddit
Im still in school and this is the kinda shit that keeps me up at night. Im already shipping code but... I refuse to ship anything I cant fully explain. I will not build a black box.
It just doesn't sit right with me. I feel you. Im a week 3 student. Im a little self taught nobody but...
I will not ship code I do not understand.
I will use AI to help me figure something out or to make shit I already know how to make to save time sure. But I design the piping.
gloriousGeeseGrease@reddit
Yeah bud, I'm sorry in advance for the suffering you will experience upon graduating. You got about 10 years tops before our career is underwater fully.
Important_Coffee_845@reddit
Nope. Don't take this personally, but I have a fully formed view on this that doesn't ignore the lessons of actual history. And I'm a young millennial who has been told the world is 5-10 years from ending my whole life.
The children of the people who said this about the internet are now saying it about AI. Im sick of this doomer stuff.
Shit was actually really bad in like '08-'13 when I graduated HS and entered the workforce. And it finally, slowly started getting better. And everyone forgot and acts like that struggle never happened. Just poof! And now ppl are forgetting the pandemic.
Everyone was scared of an Iran war; now everyone treats it like another Tuesday in real time. I was introduced to this doomer thinking when I was only 7, when Y2K started becoming a thing and the Gore-Bush election happened. And you know what?
Im over this kinda thinking.
You can go freak out in a corner. And I'm someone who believes modern globalism is facing a collapse, but even I don't buy this AI pessimism, especially considering much of it is driven by "oh no, it can make pictures now".
Yeah... its just the same shit over and over.
No offense to you. Im just not gonna play this game.
Pantzzzzless@reddit
I would already hire you, just for that attitude alone. You have no idea how rare that actually is for your age group. (I'm assuming 18-25)
Lately, I've been interviewing ~20 candidates per month. Some of them just straight up say that they don't write code, they spend most of their time massaging Kiro skills and managing context window usage. Which, don't get me wrong, that is an important skill to have now, but man that should be WAYYYY down on the list of priorities.
Important_Coffee_845@reddit
No I'm 33 lol. I, ya know, had to survive in customer service roles and management for shitty little local restaurant and hotel jobs while I taught myself to code and tried to find a way back into school. So I don't have that cockiness, you know the one, the one we can't talk about because it doesn't make the Zs feel "validated". I think in the near future you might see more re-skilled workers who are older, seasoned, mature and know how to work in general.
I'm banking a lot on merit going into the tech workforce. Thanks for that bit of confidence man I appreciate the hell outta that. I do hear this kinda thing a lot lately and think I've finally put myself where I belong.
Pantzzzzless@reddit
For sure dude. I obviously can't speak for everyone but for me and many of the other tech leads in my org., these are the best qualities you can bring to the table as a junior new hire:
I know you didn't really ask about any of this lol, but it was on my mind so I figured I'd share it for anyone who cares.
Important_Coffee_845@reddit
This is good shit. Exactly what I looked for when I was a hiring manager in hospitality. It's just universal stuff that all too often gets overlooked by the candidate.
patternrelay@reddit
Feels like a dependency shift more than a skill loss. If the system for reasoning gets outsourced early, you lose visibility into failure modes. It works until it doesn't, then debugging becomes guesswork instead of narrowing hypotheses.
jipai@reddit
What if AI learns that "bad code" pushed by AI-led developers counts as "good code" since it got pushed to prod? Won't that compromise the model's learning and, later, its integrity and output?
Mugiwara_Sora@reddit
Wow I really need to apply to more jobs then
surreal_goat@reddit
Your companies need to start hiring college grads and teaching them instead of throwing this boomer-grade, "back-in-my-day" bullshit out here.
Your companies did this, not us new grads competing with people with 10 YoE for junior roles, trying to stay relevant by self-learning AI tools that weren't even mentioned in the curriculum 10 months ago.
Piss off.
HootenannyNinja@reddit
Isn't this why we have CI tests and PR reviews?
gloriousGeeseGrease@reddit
Brother, in 5 years only like 5 of us will still have coding jobs anyway. We're all in for a rude awakening.
Mizarman@reddit
"Psst Hey. Come'ere kid. I got somethin' for ya!" First it was social media, now AI reliance. Early adopters, have fun being a lab rat in an even bigger unethical experiment.
Bacchaus@reddit
My dude... the fact that Claude can tell you what's happening in a few seconds while you had to "sit with broken shit for hours" is exactly the point.
Dangerous-Pen-2940@reddit
Isn't it fair to say that in five years' time, these models are going to be even more capable than they are today?
Kane_ASAX@reddit
Not the same way they have been for the last couple of years. Right now the main problem is energy. We can't just magically build power stations in a year, much less 5.
It's going to hit a cap, and it's closer than you think.
OnionsOnFoodAreGross@reddit
The chips are going to get way more efficient.
Kane_ASAX@reddit
Yes, but that just means the "chips" will pack MORE transistors or have a higher base/boost clock to hit the same wattage as the previous generation.
OnionsOnFoodAreGross@reddit
No it doesn't. There are all sorts of efficiencies that can be made in algorithms, software, and hardware. Besides, it really doesn't matter. We can figure out the energy problem anyway.
tfhermobwoayway@reddit
But they aren't going to be superhuman. Someone needs to fix the code. It's nice to have a machine that codes for you but if nobody understands how any of it works then you're basically just building all modern infrastructure on a ticking time bomb.
Little_Elia@reddit
Not sure about that. They've been fed fresh, high quality data that is up to date. As they become more prevalent they will regurgitate their own output again and again and the quality will inevitably degrade.
somewhereAtC@reddit
Maybe Star Trek has correctly predicted the future: only Geordi La Forge or Montgomery "Scotty" Scott can actually think, and those other 15 "engineers" hovering around the control consoles are just sucking up whatever the AI is telling them.
m0viestar@reddit
That's basically what jobs were like before AI too. Not everyone was some stud coder, most were one step away from dumb as rocks
Beneficial_Noise_737@reddit
And scott is superhuman by human standards.
Basically it's so over, (in style).
Wall E more accurately describes the near term state if you are right, though.
Scotty is unlikely to exist.
florinandrei@reddit
It's a sci-fi show.
iamthesam2@reddit
"grew up"? the product has only existed in public for less than a year lol
ColdTrky@reddit
Well, in 5 years the AI will one-shot every prompt and fix all the errors the previous AI did.
gordonnowak@reddit
for some reason people are convinced LLMs won't improve and/or have no idea how good they actually are right now
gordonnowak@reddit
year 5, when LLMs are significantly more capable, they're going to have a bad time for reasons that are unrelated to their ability (or not) to code. and it'll be your problem too
Evazzion@reddit
How did they pass an interview?
EssayOk9003@reddit
The models will have gotten way, way better in 5 years. So it's fine.
01000010110000111011@reddit
They will be way way more confidently incorrect too.
asl_somewhere@reddit
I had a discussion on this with our development director. As a senior dev, using AI does improve my productivity and helps on a day-to-day basis, but I feel that as we get junior devs, part of my job is going to be working with them to educate them about what they're doing. Not stopping them from using AI tools, but making sure that once they do, they know what's happening.
turozfooty@reddit
Signs are already out there. The company I work for is already hinting that AI should be used as a training aid, not for doing all the work. This is a complete shift from only a month ago, when you could not go a day without upper management mentioning AI in some way. Now they are hardly mentioning it.
Suspicious_Strain217@reddit
Ok, but what is your solution? I am a senior (and lead a team). If I don't use AI then I'm simply falling behind everyone that is. If a new grad didn't use AI to speed up their work, they probably wouldn't last.
StockMost7233@reddit
This resonates with me for a few reasons. I'm a new grad/junior who joined a big tech company, and at the start I was honestly overwhelmed (in a good way) by the amount of coding resources available. But at the same time, there's pressure to ship code. I remember trying to work through a task that was taking some time, and when I started getting pressed by my TL, I ended up just using Claude to generate the code.
It feels like if I slow down to properly learn something, I fall behind on tasks, which just adds more pressure. So I end up leaning on AI more than I probably should.
Realising now, it was partly due to imposter syndrome and not believing that I could understand and learn it.
I wonder if this is purely a result of new kinds of pressure, or if itâs more that new graduates are increasingly relying on AI anyway, especially coming out of college, possibly to complete assignments, so they donât fully learn the fundamentals.
Local-Scroller@reddit
I believe we need to convince juniors that speed isn't everything; many folks in my program fear falling behind without using AI because it's faster.
goodevibes@reddit
In 5 years they'll probably be better at coding with AI than you. They are learning from the beginning, using the latest and greatest tools available. Their ability to prompt and generate workflows will likely be the winning factor. They don't need to change their mindset on coding like OG devs do; instead they embrace AI tech and can learn much faster than those stuck in the past ways of code generation. 5 years is a long time for AI!
pr0cess1ng@reddit
If you're a junior you need to have discipline and use AI strictly for learning and NOT building.
Ghiren@reddit
They'll need to be pushed to find out "what does this do" before merging anything. Maybe we should be making a point that even if Claude did all the work, they're still the one responsible for accepting and signing off on it, so they'd better be sure that it's good.
OnionsOnFoodAreGross@reddit
Honestly in 5 years the AI will probably be so damned good the 2am thing won't happen in the first place. And if it does the AI will not only fix it but rewrite the whole thing.
Schnarfman@reddit
TDD got a whole lot easier and more valuable with LLMs. I don't think junior devs will have a hard time. Junior devs will learn to become good engineers in a different way because it's a different world.
That one junior engineer you're talking about may be genuinely bad, or they may have made one mistake with one subsystem. They may have failed to verify and validate. That is a legitimate mistake and I hope they take it as a lesson instead of as a portent for the rest of their career.
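A minimal sketch of the test-first loop being pointed at here (the function and its expected behavior are invented for illustration): pin the behavior with a test, watch it fail, then write the implementation.

```python
# Test written first: it fixes the expected behavior before any code exists.
def test_slugify():
    assert slugify("Hello, World!") == "hello-world"
    assert slugify("LLMs & TDD") == "llms-tdd"

def slugify(title: str) -> str:
    # Implementation written second, to satisfy the test above:
    # lowercase, replace non-alphanumerics with spaces, join with dashes.
    words = "".join(c if c.isalnum() else " " for c in title.lower()).split()
    return "-".join(words)

test_slugify()  # passes once the implementation catches up to the test
```

With an LLM in the loop, the test is the part a human writes and understands; the generated implementation is only accepted when it makes the pinned behavior pass.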
TheLoneTomatoe@reddit
I lol'd at the second statement, because I've 100% copy pasted an old function I wrote into Claude or Cursor with the same question. But
Accomplished_Fix_131@reddit
Happened with me too. A junior pushed an AI-written code change which had a very critical bug. We were clueless, as the change was done by AI. We asked Claude to find it, but unfortunately, after round after round of reasoning, it failed. Cost us days of debugging.
Sufficient-Source211@reddit
It's going to be like when you start a new job, and try to understand the inscrutable work of those who came before you, only way, way worse.
PocketCSNerd@reddit
The error is thinking that it's their own code. The code is the output of the predictive text machine.
Individual-Shame6481@reddit
No they don't. There will never exist a scenario where AI is not available. Just wake the fuck up already. It's embarrassing.
pyrrho314@reddit
it looks like good consulting $$$ in about two years, sadly, I don't want to wait it out and fuck em if they don't know a good thing.
yaycupcake@reddit
I'm more frustrated that I can't get my resume through ATS despite 20 years of coding experience, but people who literally don't know how the code they ship works are able to find work writing code.
You have to know what your code does. I don't care if you get help (AI or otherwise, we all need help when learning). But if you don't UNDERSTAND what you're shipping then it shouldn't be shipped!!! People like this lack the fundamental problem solving skills for this kind of job. It's the same lack of skill demonstrated by people who copy paste from stack overflow without understanding it and knowing how to tweak it if there's a problem or incompatibility with your codebase. Use tools as a resource but you have to be willing to understand it before shipping and breaking things. You won't know everything immediately but why did you ship if you didn't understand it...
denerose@reddit
Pay peanuts, get monkeys! I hope you're not actually competing with those new grads and our pittance salaries, because once the seniors start accepting junior wages our whole sector is even more cooked.
yaycupcake@reddit
Honestly I haven't had income for 3 years and need to stay afloat so I'd take what I could get. If someone offered me 60k plus good health insurance I think I'd take it. My experience is weird since I only worked 3 years at a company but I have been doing solo dev stuff since I was like 11 (32 now) including running large sites for online communities I'm in, and I learned and honed my craft over years and years and years of practice, before all these AI coding things existed. I know accepting shit wages would not be great but I haven't been able to get interviews for anything of any level, and I've applied to junior through staff positions.
Efficient_Honey_8894@reddit
The 2am prod incident test is real. I'm 16 and I code almost everything with AI. But I made one rule for myself: I don't merge anything I can't explain line by line without asking Claude what it does. Slows me down. Probably worth it.
007Artemis@reddit
My company won't allow juniors to use AI.
JustinTheCheetah@reddit
So what you're saying is in a few years my long history of shitting on AI and people who use it will be seen as proof of my credibility.
DaveCarradineIsAlive@reddit
I do the IT side of things at work, and even we're getting this problem with new hires.
They can handle generic windows stuff and seem to be learning troubleshooting well enough that we move them on to more complicated, niche systems. And a bunch of them immediately fall down, and are unable to progress. The AI has basically nothing in the training data about our weird software and hardware, so they can't get the same level of answer out of the chatbot.
And that's when it becomes very apparent which of them have been learning to do the process of isolating a problem and testing fixes, and which have been getting spoonfed answers for the whole training period.
It's good for my job security, bad in basically every other way.
Relevant-Western6468@reddit
I didn't major in computer science in college. AI has given me the ability to go deep into computers, and I am grateful for it.
chicago_scott@reddit
I feel old. I expressed the same sentiments 15 years ago about juniors who only learned a managed language and had little understanding of what's going on under the framework, or how the language itself worked under the hood.
SuperStone22@reddit
I have been programming since 2015 but I still haven't gotten a bachelor's degree. I'm wondering what it will be like when people think that I never programmed before AI. My resume will list a graduation date that will make it look like I didn't.
Vetril@reddit
In my opinion, AI will have the same impact as going from assembly to high-level languages - IF it doesn't reveal itself to be an unprofitable bubble that blows up in 2-3 years.
I guess that means we'll probably fixate less on whether a candidate knows some standard implementation of bubble sort or Dijkstra's algorithm, and more on checking whether they know the difference between a factory and an abstract factory, whether they can refactor an O(n^2) lookup down to O(1), and whether they can split a UI into MVVM components.
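The lookup refactor that kind of interview question usually probes, sketched with an invented duplicate-check example: the quadratic version re-scans a list on every lookup; a set makes each lookup O(1) on average.

```python
def has_duplicate_quadratic(items):
    # O(n^2): for each item, scan everything that came before it
    return any(items[i] in items[:i] for i in range(len(items)))

def has_duplicate_linear(items):
    # O(n) overall: each set membership test is O(1) on average
    seen = set()
    for item in items:
        if item in seen:
            return True
        seen.add(item)
    return False
```

Both return the same answers; the difference only shows up as the input grows, which is exactly the understanding being tested.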
tfhermobwoayway@reddit
To be fair, high level languages at least give a consistent output every time. If they found a way to translate a high-level language into an even higher-level language it would make sense (and probably also be grossly inefficient but who cares about that nowadays?) AI makes up a different response every time. I'm not sure that's the same as the largely logical, mathematical process of coding.
Vetril@reddit
To be fair, non-deterministic programming IS a thing; it's actually an entire branch of CS, so...
farfromelite@reddit
That's nonsense.
High level languages give repeatable, testable output. It's reliable. It requires you to do the hard work of understanding the system and the requirements.
AI does none of that.
It's built on stolen code and stolen research.
Vetril@reddit
Someone in the 80s could have objected that you didn't really understand the system if you coded in C++ rather than with machine code. Same for autocompletion, intellisense, debuggers, libraries... The job evolves - and that's good, otherwise we'd all still be coding with punched cards.
chockeysticks@reddit
This is the right answer. People were freaking out the same way when Java came out and it had automatic memory management.
"How will the kids know how to handle memory issues in the future?" and all.
Shadilios@reddit
I laugh each time I hear some CEO or one of those edgy entrepreneurs saying that AI will replace human programmers.
Especially when I try to use it to build something expandable & maintainable.
It's really great for learning, running ideas by or creating a starting template though.
Lopsided_Cap_6606@reddit
It's interesting for me to see the difference between opinions like yours and those of people saying AI will be advanced enough in the coming years to avoid such a problem, or that there won't even be such a problem.
xTruegloryx@reddit
I'd be more worried about where you'll be in 5 years. Nobody is safe. Unless you're all set for retirement.
immediate_push5464@reddit
I understand that senior programmers earned a lot of stripes learning this stuff. For some of them, there was no VMware or learning environment. You just fucked shit up and started over.
However, I think upcoming programmers face new challenges with new requirements and new standards in light of where we are now. And ultimately, even if it is embarrassing at times, they will be fine.
There is a level of stupidity where you draw the line, but I don't think this is it.
cdrun84@reddit
I'm a Senior Software Engineer and I only know a little bit of Ruby and have been doing this for 10 years. I rely 100% on Claude now, without it I don't even know what to do anymore.
hustla17@reddit
I am still in uni learning. We are forced to understand and explain the code personally in the internship, and will be kicked out if they notice that we don't know shit. We can use LLMs outside the internship to complete it, that's even encouraged, but at the end of the day if we do not understand what we have built, we can literally go home.
We also were forced to use the debugger in one exercise. It was the first time in my life that I used the debugger, and simultaneously the first time I felt like a programmer and not an imposter.
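That first debugger session can be as small as this (hypothetical bug, for illustration): run the script under `python -m pdb`, or uncomment the breakpoint and poke at the locals.

```python
# Hypothetical buggy function: step through it with pdb (`n` to step,
# `p total` / `p values` to inspect) instead of guessing.
def mean(values):
    total = 0
    for v in values:
        total += v
    # breakpoint()  # uncomment to stop here and inspect total and values
    return total / len(values)  # ZeroDivisionError when values is empty

print(mean([3, 4, 5]))  # 4.0
```

Watching `total` grow iteration by iteration, and seeing exactly which line blows up on an empty list, is the "feeling like a programmer" part.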
I fucking hate this shit , but my future self is going to be eternally grateful for this.
I am 100% pro LLM, but I am also pro critical thinking and understanding (and also, the act of physically typing code feels amazing and satisfying after finally getting it).
I simply love this shit tbh , I am going to keep programming till the end of my days. And even if end up making burgers at least I am going to make them with O(1) in the back of my mind lool.
Kane_ASAX@reddit
Debuggers are nice as hell. I used it a lot for C++ code like a year ago. Better than the shit typescript has
RedCloakedCrow@reddit
I've been worrying about how a trend I noticed back in 2022 is going to combine with AI ubiquity: companies seem much less willing to invest in junior engineers, and look to hire primarily upper-mid to senior engineers.
With how juniors are able to move much faster with the use of AI, I worry that a lot of them will move from junior -> mid -> senior in a shorter span, and will change jobs swiftly to increase their earnings (which is good for them to do). But what that'd produce is a group of engineers who can't really build the mental models necessary to understand a codebase, alongside companies who'd have to clean up AI-generated messes left behind by those former junior/mid-tier developers, and a generation that hasn't been trained and can't train its followers.
cheezballs@reddit
Said everyone.
mattblack77@reddit
You're assuming AI will be wrong in 5 years. Why? It's showing signs of improving all the time.
ohyayitstrey@reddit
I'm learning coding in college now and resisting every urge to use AI. But all my developer friends say their jobs are becoming full-time AI wranglers, so I'm struggling to even see the point going into this field.
EZPZLemonWheezy@reddit
Yeah, there is a big difference between telling AI to make something unguided and telling it how to structure the code. If you basically feed it pseudo code and have a generalized idea of how it should all fit together, it's more macaroni and less spaghetti code.
Lelouchtri@reddit
My question is what to do when management is pushing you to finish a 2-month project in 2 weeks. How can you review all of the code and work properly?
They are not giving any quarter. Speaking from experience at a startup.
JTP709@reddit
Junior developers? Now that's a job I've not seen anyone hire for in a long time...
dumpin-on-time@reddit
why are juniors merging code without it being reviewed by someone with more experience?
Thunar13@reddit
Grew up on Cursor and Claude Code? Wtf, nobody has "grown up" during AI times, it's been 2 years of fake hype only?
youafterthesilence@reddit
I've said this before but my middle school son and his friends are still learning to actually code- started with scratch but doing python now. I'm doing everything I can to encourage that because I think it'll pay off for them in the long run. So I have hope for that generation, whatever the industry ends up looking like by then.
jermany755@reddit
Do you think this is how people felt when compilers were first developed and coders had no idea what assembly language was getting churned out?
green_meklar@reddit
It's not clear how much work of economic value will be left for them to do in 5 years.
WanderingGalwegian@reddit
I imagine 5 years time AI will be even better than it is now and much more integrated into workflows.
Personally I've already shifted in my role to being more of an agent manager at work.
I build agents now, and skills for my team to utilize. We also have extensive AI-assisted discovery and planning phases. We keep the same human-in-the-loop approvals, but I would say the majority of code is written by AI.
One poor initial prompt can cascade to 1000s of burned tokens. Those that learn how to build with AI are the ones who will still be employed in five years.
The industry will be vastly different.
WellHung67@reddit
LLMs have a theoretical limit in terms of hallucinations: they're always going to hallucinate and it's not possible to prevent it. This is a mathematical determination, not a vibe. What's the endgame if this is the case? These things cannot get to perfect; it is inevitable and innate. How can you trust a tool's output when it can make things up, and the errors encompass the entire space of possible wrong answers?
Humans also can be wrong, but it's constrained to human stuff, very well studied. Decades of software best practices have inherently been built to account for human error. What evidence do we have that AI has a lower error rate, and that the error rate can be fixed? You need smart humans to figure that out. So the long game is what? You have to know how to code to spot the errors. And they can be subtle.
ZealousidealBet1878@reddit
If a human "hallucinates" like an LLM does, they'll be fired for being a fraud and a liar.
We call it bushing and we don't trust people who would do that in a professional environment.
We really need to hold LLMs to the same standards.
trichotomy00@reddit
You are allowed to say bullshitting on the internet
offsecthro@reddit
> I imagine 5 years time AI will be even better than it is now
How? Trained off of what?
grantrules@reddit
Why, AI code, of course! Can't go wrong there! Like an animal that only eats its own shit!
Beneficial_Noise_737@reddit
A good initial prompt would not be a requirement within the next 3-4 model iterations if the same order-of-magnitude scale of progress continues.
Architectural improvements like RLVR, latent state branching at test time, and maybe even completely new architectures are likely to continue the progress for some time.
mrburnerboy2121@reddit
This gives me great joy. I'm going to continue learning and making sure I understand what I'm doing. I only ever use AI to explain things my ADHD brain doesn't understand; never ever do I ask it for answers on anything.
HaikusfromBuddha@reddit
It's not just juniors but every level. Amazon had this problem, even with skilled higher ups that had to approve that code.
In reality the issue is management pushing AI to improve and speed things up. Sure, AI is good at that, but at the same time you lose learning what AI did because you have to iterate quickly. It's: do you want something fast, or something the dev understands inside and out? You can't have both, because if the dev knows the ins and outs of the code changes, they essentially lose the benefit of AI speeding things up; the dev is learning and understanding the code changes as if they implemented them.
There is going to be a loss of knowledge, and there is no way around it other than AI becoming good enough to debug its own issues.
FigStunning252@reddit
This is the best description of the problem I've seen.
I mentor self-taught devs and I'm seeing the exact same thing. The ones who lean on AI the hardest are the ones who can't debug without it. And debugging is where the real understanding lives.
The mental model thing is what people miss. When you spend 3 hours tracing a bug through the call stack, you're not just fixing the bug; you're building a map of the system that lives in your head permanently. AI skips that step entirely. You get the fix but not the understanding.
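A toy version of that trace, with hypothetical layers standing in for a real codebase: the traceback names every frame in the call chain you'd otherwise reconstruct in your head.

```python
import traceback

# Hypothetical layers of a request path, for illustration only.
def handler():
    route()

def route():
    load_user()

def load_user():
    raise KeyError("user_id")  # the buried bug

try:
    handler()
except KeyError:
    # format_exc() lists every frame: handler -> route -> load_user
    stack = traceback.format_exc()
```

Reading that chain frame by frame, instead of pasting it into a chat window, is the 3-hour exercise that builds the map.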
The scary part isn't that they can't code without AI. It's that they don't know what they don't know. They ship features, the PR looks clean, the tests pass, and nobody realizes the understanding isn't there until something breaks at 2 AM and the AI hallucinates a confident wrong answer.
I think the fix isn't "stop using AI"; it's deliberate practice without it. Like how musicians practice scales even though they could just play along to a track. Spend an hour a day debugging, reading other people's code, tracing through unfamiliar repos with nothing but the docs. Build the muscle while the stakes are low.
The juniors who do both, AI for speed AND manual debugging for understanding, are going to be absolutely lethal in 5 years. The ones who only do the first part... yeah, I share your concern.
UltraPoci@reddit
Good thing I'm not using AI
mrstealyourvibe@reddit
Maybe, maybe those complaining about how people use AI to deliver will be increasingly irrelevant
terrany@reddit
Most people live on a several paycheck basis. 5 years out is an extremely nice problem to put off for most lol
Ezazhel@reddit
That they merged? No one validated the MR?
dwoodro@reddit
I think like anything else, it's a matter of "how you use AI", not just whether it is "useful".
Coding large programs in a "one and done" approach, while it is getting better, still has limitations and will likely present issues in more complex programming aspects. I am waiting to see what happens when the AI "fishbowl" is emptied into the real "Coding Ocean".
Right now, it's not reliable enough to "simply" replace all coders. But coders using AI as a toolbox will get faster and better.
FourTwentyBaked@reddit
Hmm. I find it's just a different skill entirely. I think they will be ok. I spend more time on testing when I'm using AI, and overall I think that is a good thing.
Nviki@reddit
Five years ago, did you expect you would be using Claude today?