I'm at a loss for how to manage my interns
Posted by AlaskanX@reddit | ExperiencedDevs | 29 comments
We've just got our interns for the summer, from a local program where the university pays the students' wages and places them with local startups to get work experience. Last summer we also participated in this program with great results. We got a really smart student who could take on free-form projects, back up her choices with solid reasoning, and she built a good standalone feature for our software over the course of the summer. We had just started dipping our toes into using LLMs beyond code completion at that point... I know she used it, we paid for it, but I didn't notice an impact on her work or a deference to the LLM. I hardly had to manage... I just gave her a task, we discussed it a bit, she discussed with stakeholders, and it got done to our satisfaction.
This summer is a whole different situation. At this point we're completely using LLMs in our daily routine. And so far, I'm seeing that at least one of the students is deferring to it to an uncomfortable degree. I'll ask "why did you make this choice?" and the answer is basically "I don't know, I'll ask my chatbot". How are other people managing around this? I'm not sure how to make them take ownership for the choices that are being made, and actually think about tradeoffs. Do I need to spend more time being involved, and more hands-on, maybe some pair programming sessions?
It feels a bit hypocritical for me to push back with "but why did it say that" or "how did the two of you come to that conclusion" when I'm frequently relying on it to the same degree. The only caveat is that when I'm discussing or guiding the LLM, it's from a place of knowing the stack by heart and having all the tech debt and tradeoffs in my head.
I guess the root of what I'm asking is basically, how are other people shepherding interns or green juniors in this weird new world?
kbielefe@reddit
Whenever I see this "they can't explain" complaint I think about abstraction layers that the asker likely can't explain. "Why isn't this loop unrolled?" "I don't know, the compiler made that decision." People have an uncanny ability to believe the necessary level of abstraction to understand is the level they personally understand.
Just stop asking why. Either it doesn't matter or you have a good reason it does matter. Explain the reason and ask for a fix. Explain how they could improve the prompt for next time.
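The compiler point above is easy to demo concretely: even CPython's bytecode compiler makes decisions nobody asked for and few could explain. A minimal sketch (stdlib only; constant folding here is a CPython implementation detail, not a language guarantee):

```python
import dis

# Nobody typed "6", but CPython's peephole optimizer folds
# 2 * 3 into a single constant at compile time.
code = compile("2 * 3", "<example>", "eval")
print(code.co_consts)  # the folded constant (6) lives here

# The disassembly contains no multiply instruction at all --
# the "why" lives inside an abstraction layer we rarely inspect.
dis.dis(code)
```

"Why is there no multiply in the bytecode?" gets the same honest answer the intern gave: "I don't know, the compiler made that decision."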
I think you got lucky with your intern last summer. Most require a fair bit of hand holding. Most don't fully understand what they are doing or why. Before they didn't know why the LLM did it a certain way, they didn't know why their mentor told them to do it a certain way, or why the example they googled did it a certain way.
xdevnullx@reddit
This is well said. I feel the same way. It's not like I can write CPU instructions.
Even saying that, I still have an emotional response when someone says “idk, Claude did it”.
I really want people to take ownership of their work.
kbielefe@reddit
I think that's something we can all agree on.
dudeaciously@reddit
A dev just did a whole app, including the design document and ERD, from Copilot. When I started to ask basic and obvious questions, they were totally lost.
Do over.
Yodiddlyyo@reddit
Yeah. This is simple. Interning is about learning. So you give them a task. Tell them they can use AI all they want, but the acceptance criteria also includes being able to explain what each line of code does, and why they chose to do it that way. They'll ask the LLM, but that's fine. They'll learn. If they say they don't know, they need to go back and figure it out. You don't push code that you personally don't understand, period.
shifty303@reddit
When you ask what the code does and they start reading the code back line by line while I sit in awkward silence... That's always the worst.
AintNoNeedForYa@reddit
Great idea. Maybe code reviews are done unplugged in a meeting room. Start with them walking through the logic.
throwaway_0x90@reddit
This is what I think traditionally experienced SWEs will have the most trouble accepting. Is there a **measurable** negative impact to just letting the intern continue to rely on the LLM as they are doing? Are there more bugs? Slower output? Code not working to spec? Maybe there is no problem and this is just where the new world is going.
Basically, I'm saying: if you keep second-guessing the intern's choices and slowing them down, and management notices, how would you explain to management, in a way they'd agree with, that the intern(s) are relying too heavily on the LLM?
AlaskanX@reddit (OP)
Definitely there's some of that. I've been working on this stack for 5 years so I'm burdened with the weight of knowing about all the good and bad decisions and tech debt that I've built along the way.
FWIW this is just like their first week, so this is more me being like "oh shit, that doesn't feel right" right off the cuff. I was reviewing their first PR and asking them questions and they... didn't have answers that didn't involve checking with the bot.
throwaway_0x90@reddit
I usually get downvoted into oblivion in this sub for saying positive things about LLM/AI, but I'll die on this hill and lose all my Reddit karma in the process because I feel strongly about it.
In the same way most of us don't care much about the ASM our code ultimately becomes, the SWE role is going to move a bit more abstract, and focusing on the Java/Python/Go/Rust/etc. source will decrease in importance. High-level decisions about which data structures & algorithms to tell the LLM to use, along with overall design, will matter more.
JChuk99@reddit
Not going to downvote you or sit here and hate, because… you’re right. There’s just no world in which a developer with AI is less efficient/productive than a developer without AI. It just seems weird to me, as someone who’s been in this field as a professional for 4 years, that there’s such a massive push to forget everything we’ve learned as a discipline over the last 70 years. Of course we need to test and re-evaluate principles in the age of AI, but data in -> process -> data out, with some type of requirement, will always be the core of software development. It follows that there are some core principles (manage complexity, fight entropy, abstract decisions that are likely to change, etc.) that will never change unless we undergo a fundamental paradigm shift. AI isn’t that shift.
It’s a tool across all fields (SWE, education, etc.) that widens the gap between high and low performers, because you need to be disciplined, knowledgeable & ultimately take pride in your own skills to use these tools effectively.
throwaway_0x90@reddit
It's tough for sure. I'm not exactly green myself. I'm old enough to have felt the impact of the dot-com crash, and I definitely hold pride in coding myself... or at least, I used to. A few years ago I hated AI, but now I've accepted that there is no going back, so I've reluctantly let go and advocate that jr.SWEs do more than just "learn to code".
JChuk99@reddit
Yeah, I’d disagree with this. There’s gatekeeping, & then there’s not being able to explain a design decision with confidence in an external meeting w/ various key stakeholders. That’s one of the big differences between seniors and juniors, and if you aren’t practicing that with your team in low-stakes situations, then that’s an issue. In the future, if Claude can jump into a meeting and explain the decision for me, then this isn’t an issue. But we aren’t there yet.
It’s very frustrating to accept, but there are just some things that are extremely important that we aren’t very good at measuring: developer productivity, being able to confidently explain the rationale behind your decisions, understanding when there’s a better option than the options presented and finding a new solution, implicitly catching system behavior that looks off. These are some of the nebulous, hard-to-measure skills that make a skilled developer a skilled developer. And they are the exact skills you don’t build when you offload your thinking to an LLM.
throwaway_0x90@reddit
I strongly believe that we're entering a world where if you can't measure it or demonstrate any negative impact to the company, then no matter how you feel about it personally you won't have management on your side.
xaveir@reddit
Not OP, but yes, IME the output is slower, the code is less maintainable, the approach is less coherent, and most importantly, it sinks way more time for my seniors to review AI slop generated by the juniors than it would take for that same senior to just do the right thing themselves.
At least if they're thinking about it themselves the same time sink results in both a feature AND an improved junior.
throwaway_0x90@reddit
Indeed, this issue I understand, and it's why it has become unclear how jr.SWEs even fit into this new world. A Sr.SWE with LLM/AI can do nearly the same work without the overhead of training a new jr.SWE.
But then this part,
Slower as in measurable performance of the application after the code is deployed? And as for maintainable & coherent, I believe the need for a human to understand the AI-gen code will decrease, and what will increase is testing & verifying the output against spec.
Connect_Detail98@reddit
"I don't know let me ask my Claude" felt horrible the first time someone said it to me. This feels like there's such a lack of responsibility and ownership.
sudoku7@reddit
Work to instill that they still need to own the results from using LLMs.
It's not personal or anything; we as engineers still have to own the code. Be charitable, and explain that this aspect is one of the harder parts of using LLMs.
RGBrewskies@reddit
their mindset is all wrong.
we talk to the LLM like it's a senior engineer. It's really smart, and it probably knows what we should do, if we ask it good questions
we let the LLM execute tasks like it's an... intern... it needs extreme supervision, and it should be working in baby steps. One small piece at a time.
They can't answer the "why" because they never asked the why.
Coach them not to use LLMs to complete tasks, coach them to propose a solution to the LLM, get feedback, and refine
iagovar@reddit
I don't really understand how people use LLMs without having an understanding of what's going on.
If something breaks and the LLM can't figure it out, then what?
Because that happens often in my job, where I have to steer them in the right direction, give them the context etc.
And I didn't write a single line of my data pipeline, but I know how it's organized and have some familiarity with the details, because I discussed the architecture with the LLM and we went step by step.
Own-Football4632@reddit
I'm not sure if "never be true developers" is accurate across the board. I've met some very young coders that rely heavily on AI but seem equally interested in understanding underlying code and such. I'd assume they would find HTTP codes interesting and meaningful.
Even when demand has been high for high level languages, there have been people that learned C anyway when they didn't technically have to, because they found it interesting and/or beneficial for their foundational skills. I'm sure there are old school C devs that would believe a bootcamp JS dev isn't a "true developer" from their perspective, but 2010s bootcamp devs include the seniors that can be considered the "true developers" relatively speaking. With each shift there are people who recognize some value in understanding lower levels than the job market demands specific experience in, and those people are often strong developers.
Also, as an aside: before AI, I already struggled, to maddening degrees, to work with developers that couldn't make sane HTTP status code choices.
RGBrewskies@reddit
When someone says "they" (a categorical group of people) will never "do XYZ",
they don't literally mean that zero members of that category will ever do XYZ.
They're referring to the average person in that categorical group.
And I am damn sure the average intern coming in right now, using AI to pump out React templates, will not eventually learn nuanced development fundamentals. Damn sure.
Hot_Money4924@reddit
If you can't explain the design decisions, you didn't review and understand the plan, and you can't summarize what the AI has done, then you can vibe code your way right to the door. Either acquire the essential, basic, software development skills or don't be in this industry.
cran@reddit
Come up with some projects they can own and demo at the end of their internship. Something new and significant. Let them decide.
AlaskanX@reddit (OP)
We came up with some tasks we thought were appropriate but now looking at them again, these are the scope of like 1 week with the bot, if that. 😞
I thought we were doing a good job of giving good, demo-able feature assignments that would take a significant amount of time; turns out my ability to estimate projects is still in need of recalibration.
Early_Rooster7579@reddit
You teach system design now.
SypeSypher@reddit
Realistically, to fix it you have to require no AI usage for their work. But what you're seeing is, I think, the beginning stages of a general stupidifying of a LOT of people: if AI can do all the thinking for you, you don't need to think, and fun fact about not thinking... you become worse at thinking.
Schools have not adapted to this yet (they're catching up - hence the whole "oral exams" and "essays written in class" thing - which is definitely going to leave some students behind as well, btw; whole other problem), but the reality is that everyone is becoming dumber and AI is the cause. I know for a fact that I'm a better producer at work with AI: I make more code changes, put out more features, etc., so to shareholders/managers/etc. I'm worth more today than I was 3 years ago. But I'm also fully aware that my personal skills feel like they've worsened.
And the reality is that beginner devs/interns shouldn't be using AI anyway. It's the whole "give them easy tasks so they work their way up, work through problems on their own" thing. It's the whole reason the first half of calculus 1 requires you to do derivatives the long way, and then once you've learned that part, the class is like "ok, btw, now here's a super fast shortcut". The goal is to LEARN, not to just finish.
Personally I'd say no AI allowed and make them work through problems on their own. The downsides of this is that a lot of schools are allowing AI usage so this might be the first time they've had to do that. But you shouldn't really expect useful output from interns anyway.
Bobertolinio@reddit
One thing is to ask the LLM for random ideas and apply them blindly; a totally different thing is to understand the options and apply them strategically, considering your current setup and where it will go in the future.
The question is not only why one option was chosen, but also what other options were there and why they were disqualified.
therealhappypanda@reddit
Well, there are two reasons that they would use the AI tools and not think for themselves. One is because they are seeking a shortcut (i.e. lazy), the other is because they are afraid and lacking psychological safety.
The former, in my experience, is really hard to dislodge from a management perspective. The latter is much easier, make them feel safe to express their ideas and fail.