How do you interview someone with the expectation they'll be using AI tooling?
Posted by vexstream@reddit | ExperiencedDevs | 33 comments
Mostly as title. Management has decided that candidates should be using AI tooling during the interview to actually produce code. Understandable, frankly, because most of our code is written with AI assistance, or entirely by AI, at this point.
But how do you do a coding interview like this? It's not like we can do a take-home anymore where we give them a service spec because Claude can just oneshot most of it. At most I can think of doing a subjective review of the code to grade it on sloppiness/style.
In the same vein, there's interest in internally promoting people who have never traditionally programmed into programming positions, on the assumption that Claude can just handle it. I use Claude a lot too, but I'm never ever happy with its one-shot results for anything nontrivial to lay out. What should be done here?
mirageofstars@reddit
Well, why are you interviewing them vs an intern? If Claude is truly doing 100% of the work then the dev isn’t needed.
But…there’s obviously a reason why you want a dev clicking that button. So interview on that.
Impressive_Knee_9586@reddit
I’ve been interviewing people on technical aspects for almost 5 years. One of the most important things is that they should be able to reason out loud. If they can’t understand the problem or explain a solution step by step, they are not what you’re looking for.
Also, design a tricky exercise with several incremental steps, ask them to solve it using AI, then ask for some breaking changes.
The old, traditional (boomer?) way of programming was about solving problems and then using coding skill to produce code. Today, producing the code might not be required anymore, since AI is fast and cheap, but coding skills and problem solving are still important in any productive environment.
Oh! And I forgot to mention: if the first thing they do is create a .md file, you probably shouldn’t hire them. Those people will never write a single line of code on their own.
PS: sorry for my bad English.
engineered_academic@reddit
Ask them questions AI won't be able to answer well. Have them evaluate already-written code and add a method to do something in the simplest way possible, or use a debugger to debug the code. AFAIK AI can't do that, yet.
You can also deliberately put errors in the code that are logical and not syntactical - things like generating an S3 bucket that is public by default, or including a typosquatted domain.
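A minimal sketch of the typosquat trap, with hypothetical domain names and an invented allowlist: the candidate reviews code that calls out to a list of URLs, one of which is off by a letter, and the check below makes the intended catch explicit.

```python
from urllib.parse import urlparse

# Hypothetical allowlist for the exercise; a real team would pin its own.
TRUSTED_DOMAINS = {"api.github.com", "hooks.slack.com"}

def untrusted_urls(urls):
    """Flag URLs whose host is not on the allowlist.

    The planted typosquat ("githib" vs "github") is the catch the
    reviewer is expected to make.
    """
    return [u for u in urls if urlparse(u).hostname not in TRUSTED_DOMAINS]
```

Whether the candidate spots "githib" unaided, or thinks to have the tooling diff the hosts against a known-good list, tells you something either way.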
Wide-Pop6050@reddit
You can totally put already written code into an AI tool and have it debug it.
engineered_academic@reddit
Not if you are screensharing and just looking at the code.
CrimsonVixenPixie@reddit
This is actually such a good idea. I never thought of this.
Watching how they step into and out of functions would be so illuminating… stealing this one
Dry_Bird1790@reddit
But if they will be using AI on the job shouldn't they be testing how well they use AI?
GarthTaltos@reddit
The "Review this code" interview has been working out for me for years. In a well run org, most engineering time is spent reviewing and debugging code these days; same thing should stand for interviews.
Wide-Pop6050@reddit
We've been asking people to share their whole screen, giving them a task that is similar to what you would actually do at work, and then saying they can use any tool they want as long as we see what they're doing. Not everyone uses AI tools but plenty have.
Only-Fisherman5788@reddit
the interview has to shift from "can they write code" to "can they judge whether ai output is actually right." the hard part isn't producing code anymore, it's noticing when the produced code is confidently wrong in a way that still compiles and reads fluently.
concrete practice that works: hand the candidate an ai-generated PR (200-400 lines, realistic service) with one or two subtle behavioral bugs planted in it. an off-by-one, a silently swallowed exception, a condition that's flipped on an edge case. ask them to review it for prod-readiness and explain what they'd change. you learn more in 30 minutes about how they use ai than a full take-home ever told you.
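for concreteness, a toy version of such a planted bug (function and numbers invented for illustration): the code compiles, reads fluently, and is wrong on exactly the edge cases named above.

```python
def apply_discount(prices, threshold=100):
    """Apply a 10% discount to any price at or above `threshold`."""
    discounted = []
    for p in prices:
        try:
            # Planted bug 1: '>' should be '>=', so a price of exactly
            # `threshold` silently misses the discount the docstring promises.
            if p > threshold:
                discounted.append(round(p * 0.9, 2))
            else:
                discounted.append(p)
        except TypeError:
            # Planted bug 2: a malformed price (e.g. None) is swallowed
            # and dropped, shrinking the output instead of failing loudly.
            pass
    return discounted
```

a candidate who pastes this into an ai and asks "any bugs?" may get one of the two back; the interesting signal is whether they verify the boundary behavior themselves.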
the internal-promotion thing is a different problem. using claude doesn't teach you the failure modes that only show up after a decade of being wrong about production systems. that's judgment. it doesn't transfer from the tool.
headinthesky@reddit
Incorporate AI into the interview! I have a test project with bugs and errors. See how they use AI to fix it. Do they actually plan with it? Use it to learn more about the code? Use it as a debugging tool to fix problems, and then do a proper review? Then that's good. If they don't do any of that and just one-shot it and it's slop, that's not gonna work for my team.
Flashy-Whereas-3234@reddit
We make it more about theory and attitude, and we ask some specific questions about the things that we dislike about AI to see what they say.
The idea being, you want someone who knows how things should be, shows a desire and willingness to deliver quality in the face of adversity, and an ability to learn and adapt.
We haven't done written code technicals for ages, but if you can't have a long-form tech bullshit conversation, that's gunna out you pretty quickly.
GuybrushThreepwo0d@reddit
This sub sucks now
TheOwlHypothesis@reddit
Just do a normal coding interview.
letsbreakstuff@reddit
Like asking a forklift operator how much they can lift
letsbreakstuff@reddit
Honestly, as AI does more and more, us engineers become product managers and architects. Test the candidate's systems thinking: if you're giving them requirements, give them ambiguous ones and see if they naturally refine those requirements to avoid brittle solutions.
BearyTechie@reddit
One round of interview should be onsite.
PixelPhoenixForce@reddit
we have 3 leetcode rounds and one AI round where you build an api purely with prompts
zugzwangister@reddit
How did you test for competency before?
Can they speak intelligently about the strengths and weaknesses of the tools available?
Can they walk through their past struggles with it? Somebody who has really been there and done that will have war stories.
MORPHINExORPHAN666@reddit
What kind of larp is this?
psyyduck@reddit
I don’t understand these questions. AI makes tons of mistakes. If your codebase is that big you already know. It makes crappy architectural decisions. It takes shortcuts and makes assumptions when you give it a difficult task. So just pick one, did the human notice and fix it?
Diligent-Seaweed-242@reddit
I’ve done a couple of these rounds recently and what I observed was actually the latter. If someone tried to one shot or use the AI for problem solving, they would typically get rejected. Instead the expectation was to use AI as an assistant, do spec driven cycles and iteratively build the solution. You still have to come up with the approach, the data structures and articulate how to build it etc. AI just speeds it up.
pkmn_is_fun@reddit
not sure why you're even worrying about this when it seems you'll be out of a job yourself in the future
interrupt_hdlr@reddit
Knowing AI tooling is a prerequisite ON TOP OF everything they already had to know before.
slowd@reddit
Right, ask them about agent memory, context management, and what should be documented for agent use. Failure modes they’ve encountered. Questions that make it clear they’re really pushing their tools and becoming an expert.
throwaway_0x90@reddit
Pretty sure the industry is still trying to figure that out.
But, start with the base mindset of those exams in school that are open-book and allow calculators.
rover_G@reddit
Ask them how they ensure the AI produces good outputs
polaroid_kidd@reddit
I heard this the other day: a company is giving candidates access to the Cursor model 1. It's powerful enough to get some decent responses but dumb enough not to do all of the heavy lifting. They have to build a fairly large app in an hour or two. It becomes more about architecture and clean code than just "implement X".
Another approach might be: "here's this crap codebase. What's wrong with it? Improve it and finish feature X."
laueos@reddit
You need to sit with them and let them walk you through their approach.
disposepriority@reddit
What dev can't use AI tools? Their difficulty floor is literally 0. Any interviews I have a say in have remained relatively unchanged, except when non-technical management says "we need more people faster" and difficulty drops hard.
PM_ME_UR_PIKACHU@reddit
Just say you expect them to go 10 times faster than they normally would and to not check any of their work.
MonochromeDinosaur@reddit
You test their AI usage methodology and their understanding and intuition/opinion of the implementation details and architecture of the generated code.
People who’ve divorced themselves from understanding the code and who haven’t established good AI practices with proper guard rails aren’t good hires or promotion candidates.
seanpuppy@reddit
I don't have an answer, as I think about this several times a week. I do actually think a take-home assignment could be good, but it needs to come with the expectation that they would use AI, and it would have to be complex enough to necessitate some good skills around wrangling coding agents.
Another option is a live coding session on a video call and watch them solve a problem with claude code. One logistical challenge is that it costs money for the interviewer.