Juniors & AI
Posted by wilsonnn14@reddit | ExperiencedDevs | View on Reddit | 42 comments
First of all, apologies if this is not the correct subreddit to discuss this topic.
The other day a colleague said something that stuck with me.
We were talking about AI, and he pointed out that junior developers are leaning on it so heavily that they're skipping critical learning phases. And because of that, they can't prompt well or fix what AI gets wrong, leading to poor code.
His point made sense. But then I started second-guessing myself.
Because I probably didn't have to understand a lot of things engineers did 20 years ago. The abstraction layers I inherited already handled them. And nobody called that a problem. So maybe this is just the next layer. Maybe juniors today will be brilliant at things we're not even thinking about yet.
I honestly don't know which framing is right.
But if the first one is, if there's a real skill gap forming, what can we do about it?
- Teach them how to use AI correctly, not "just" use it?
- Limit access in early stages to force the fundamentals?
- Double down on mentorship and code review?
- Something else entirely?
I'd love to hear from people on both sides of this: those mentoring juniors today, and juniors themselves.
Are we protecting something worth protecting, or just being the "kids these days" generation?
bowlochile@reddit
Obvious AI slop. Oh, “it stuck with you”? Really? You’d “love to hear” others? Are you “just curious”? Why not the old “It’s not x it’s not y. Just z.” Too obvious of a giveaway?
wilsonnn14@reddit (OP)
So now everything assisted by AI is called slop?
bowlochile@reddit
Pretty much the definition of it, yeah
wilsonnn14@reddit (OP)
Ok so thanks for your reply. I just wanted to make my point clear, English is not my first language. So next time I want to open a topic for discussion I will skip the use of AI since some users don't like it.
maqnius10@reddit
I'd prefer your authentic and maybe unpolished voice. Ai generated content is just so off putting after all the slop we have to wade through these days.
wilsonnn14@reddit (OP)
Gotcha!! Next post I'll write it myself, thanks for taking your time to reply
paradoxxxicall@reddit
Totally valid, but it can be hard to differentiate from bot spam. A disclaimer will keep the haters off your back
paradoxxxicall@reddit
Compilers are completely accurate and deterministic to the point that you virtually never need to look at or understand assembly to get good results.
LLMs are as of yet not completely accurate or deterministic, and you still need to understand what the code should look like to get good results. They need to be guided.
I do think it’s a problem but I have no idea how to solve it. If you give someone an instant easy results button they WILL learn to lean on it.
Izkata@reddit
They're built not to be. Randomness is intentionally added through the "temperature" parameter.
OpenJolt@reddit
Even with temperature 0 they will not give the same result twice
paradoxxxicall@reddit
Sure, but when treating them as essentially a compiler from English to code, that fundamental behavior is a serious limitation.
overzealous_dentist@reddit
temperature allows probabilistic selection within a range of likely choices. if there is only 1 likely choice (as in the case of 1+1), it's practically speaking deterministic in its conclusions
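A toy sketch of the sampling behavior being described here (simplified; real LLM samplers layer on top-k/top-p filtering and more, and the function name is just illustrative): temperature rescales the logits before softmax, so when one choice dominates, sampling is practically deterministic even at nonzero temperature.

```python
import math
import random

def sample_with_temperature(logits, temperature):
    """Sample an index from logits scaled by temperature.

    As temperature -> 0 the distribution collapses onto the
    highest-logit choice; with a clearly dominant logit, even
    temperature 1.0 picks it almost every time.
    """
    if temperature <= 0:
        # Greedy decoding: always take the argmax.
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = random.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1

# One dominant logit (the "1+1" case): effectively deterministic.
dominant = [10.0, 0.1, 0.2]
# Several close logits: temperature spreads the choice around.
ambiguous = [1.0, 0.9, 1.1]
```

Raising the temperature flattens the `ambiguous` distribution further, while `dominant` keeps returning index 0 almost always, which is the distinction being made about "1+1"-style prompts.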
paradoxxxicall@reddit
Ok, but the tasks I’m talking about will never have that level of clarity unless you’re basically just giving it pseudo code to begin with.
Sheldor5@reddit
limitation?
this should be THE reason to not replace humans with AI ...
johnpeters42@reddit
And yes, "bUt ThE hUmAnS aRe NoNdEtErMiNiStIc", but I trust competent humans to learn the computer's language far more than I trust otherwise-competent computers to learn the human's language.
exomyth@reddit
Honestly, it depends on the person. Before AI there were the Stack Overflow developers who just copy-pasted code they found on Stack Overflow without trying to understand what it does.
You will have junior developers that do the same, but those junior developers are worthless, can and will be replaced by AI.
And you will have junior developers who use it for just-in-time expertise. They will learn about a thing as they need it, ask why things are the way they are, and supercharge their personal development. Those will be the better developers of the future.
HoratioWobble@reddit
Imagine AI wasn't a thing.
Imagine you had one very experienced colleague who would get things wrong on a fundamental level. Not maliciously, but because their answers are technically correct while being wrong from an engineering standpoint.
Now imagine they're solely responsible for training your junior developers, but more often than not, they just did the work for them.
What is the long term outcome to the business by keeping that person around?
bold_snowflake@reddit
I'm really concerned about junior engineers. They're not learning foundational skills and putting (not always by their own choice) too much trust into AI.
I get the argument that this is potentially just the evolution of technology, but the gap in understanding is just too vast. Would you like to drive over a bridge built by engineers with little understanding of how it is being held up?
This isn't the fault of junior engineers. It's the fault of senior leaders like me who are now expecting all engineers to be "all in" on AI development and not creating the space for juniors to learn foundational skills.
Ok-Leopard-9917@reddit
You can’t fix it. How good are you at optimizing assembly? This is why old people complain about change: they know what was lost with each transition.
KyxeMusic@reddit
The problem from my point of view is that there's no longer any struggle.
We used to learn why software was structured, abstracted, tested, etc. in a certain way after dealing with the pains of badly written software for long grueling hours. Patterns were engraved in our brains after making mistakes over and over again.
What I see now is that struggle is gone (not only for juniors, but for us seniors too). Those hard-learned lessons are no longer happening. Broken code is fixed by prompting away, and any kind of lesson to be learned goes in one ear and out the other. I try to write down things I've learned after a refactoring session with AI, but it just doesn't stick the same way as when we were doing it by hand.
Curious to hear whether others are experiencing the same.
Ok_Individual_5050@reddit
The way I describe it is there's a feeling in your gut you get 3 hours deep into a complete rabbithole where you go "Can anyone actually understand this? Is this really needed?" So you throw it out and do something much simpler. That's gone. I don't believe developers who claim they can get that while working with Claude.
Ok_Individual_5050@reddit
There's no such thing as a perfect abstraction and you absolutely *should* understand the things engineers were doing 30 years ago even if you don't need to use them in your day to day because they actually do matter.
chaoism@reddit
I say let them use it. They will make mistakes, much like how we did, but in different ways
They will then learn. The learning and learning curve will be different, but they will learn
What we used to do might not be suitable for them anymore. That's how I feel
wilsonnn14@reddit (OP)
Interesting point of view, thanks for sharing!
SpritaniumRELOADED@reddit
What is a "critical learning phase" by today's standards? I don't know how to write a compiler or use punch cards, but I've produced a lot of software over the years. Things are supposed to become easier over time.
srgylvn@reddit
I don't think the “critical learning phase” ever changes. It’s not about code generation itself, but about the ability to do root-cause analysis, which only comes with practice. Nowadays, LLMs let you copy/paste an error or put an agent in a loop to solve it. But the solution is not always good or optimal.
itix@reddit
Juniors are not likely to always find good or optimal solutions either.
symbiatch@reddit
And that’s why they should practice and learn…
srgylvn@reddit
Yep, usually they aren't. But with LLMs the Aha moment is lost. What I've seen a lot is juniors losing the ability to even read what's on the screen, and when you point it out, the answer is "I asked GPT and it told me this." Losing the Aha moment means staying junior forever.
CompassionateSkeptic@reddit
Understanding something enough to troubleshoot issues under it can be approached from many directions. How much is enough?
Enough can be:
- Deep knowledge if you work in a space where what’s under the hood often matters
- Spikey — whatever I came across in my career
- Familiarity with concepts — I can ask pointed questions but I can’t do anything novel
I started using Linux 20 years ago and I’m still just familiar with concepts. It’s been enough because all I have to do to solve most of my problems is observe, hypothesize, check hypotheses against the community, test, repeat until solved. Critical thinking plus familiarity is enough.
Same for compilers most of my career with a few exceptions that sent me into spikey territory.
I’ve been actively using AI assistance since just before GPT-3. There is absolutely no *old man yells at cloud* in me when I say critical thinking plus familiarity with what’s on the other side is enough. You have to be able to work the solution through the AI as through a dirty lens, or you have to have a use case where the stakes are low enough that you can defer to it. I’ll be shocked if that ever stops being true. It’s a non-human, non-person collaborator. When product asks me to build something, I’m not their compiler AND, granting some neurodiversity, we’re both doing mind and thought with the same meat. Non-human, non-person, but not merely a tool.
Ok_Influence8600@reddit
I agree with that colleague.
My understanding is that current AI simply retrieves information from web pages and presents it.
Therefore, I believe it is no different from those engineers who, before AI became popular, felt they had written code simply by copying and pasting.
Admittedly, it does produce something that works.
However, from an operational and maintenance perspective, it is problematic if the person who wrote the code cannot resolve any issues that arise.
Furthermore, once you have written the code, you bear responsibility for it, so you cannot simply say, ‘It was done by AI, so I don’t know anything about it.’
If you wish to build a career as an engineer, you must become a trustworthy professional.
Even if there were engineers who rose to prominence through AI, it would be the AI that is trusted, and it would be difficult to say that they have earned that trust as engineers.
boring_pants@reddit
tHey CaN'T pRoMPt wElL
Junior software engineers should indeed learn software engineering.
If your employer is no longer employing software engineers but instead just people whose job it is to beg chatgpt to please write fewer bugs then maybe learning software development practices is a distraction?
MoreRespectForQA@reddit
Every junior thinking this should read joel spolsky's essays on leaky abstractions and then ponder the fact that non deterministic agents are by their very nature the leakiest motherfucking abstractions to ever be created.
Feeling-Schedule5369@reddit
What does leaky abstraction mean?
MoreRespectForQA@reddit
No. Most abstractions you build upon these days are based on entirely deterministic code and have had years of bugfixes thrown at them.
Once they're good enough that they're almost bug-free, devs can essentially stop learning about how they work under the hood coz they'll never have to pop it.
I got into kernel dev coz I had to pop that hood decades ago, but I've not run up against a kernel bug that's seriously affected my day-to-day work in years now.
Feeling-Schedule5369@reddit
All that para and you haven't answered the first question. Also I disagree with the rest, coz you are mistaking AI for abstractions when in reality AI is analogous to humans, and both are non-deterministic. Check the Ray Kurzweil books on this topic.
Humans are biological non-deterministic machines but are able to produce deterministic abstractions.
paradoxxxicall@reddit
If I use visual studio to compile my code into assembly it will do it exactly the same way every time. It will do precisely what it was instructed to, nothing more or less. Sure, humans may change the next version, but then that version will always do the same thing every time.
An LLM will never “compile” my English into code exactly the same way twice. It will make assumptions and inferences about the instructions it was given, and produce a different level of quality and accuracy each time.
Feeling-Schedule5369@reddit
You will use llm to build deterministic code which will be tested. Just like how companies used non deterministic humans to build compilers. This is what I am saying. You can build complex things using non deterministic things.
paradoxxxicall@reddit
OP is comparing the removal of knowledge about this abstraction layer to the removal of past abstraction layers, such as assembly compilation. That’s what everyone is talking about.
You’re comparing the LLM output to human output, which is an entirely different comparison and topic.
miningape@reddit
Check Google for "leaky abstraction" so you can learn about this topic before forming any generalized opinions in your mind
Feeling-Schedule5369@reddit
Which is why I asked him coz I want his answer. Who knows the version of leaky abstraction in his mind might be different? Maybe you can Google it since you like Google so much
ApolloCreed@reddit
I think both sides have merit.
Better tools always change what juniors need to learn. That’s normal.
But AI is different because it can make someone feel productive before they actually understand the code.
I wouldn’t ban it. I’d teach one rule:
Don’t ship code you can’t explain.
Use AI, but test it, question it, and review it. The thing worth protecting isn’t the old way of coding. It’s understanding what you’re building.