Technical interviews with AI allowed on CoderPad — what are interviewers actually evaluating?
Posted by No_Bowl_6218@reddit | ExperiencedDevs | 18 comments
I’m preparing for interviews at product companies that explicitly state AI usage on CoderPad is encouraged. I make a point of understanding the business context well before the interview.
In my day-to-day workflow with Claude Code, I typically run /init, refine the CLAUDE.md to set context, use /plan mode with Opus for new features, then implement with Sonnet.
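For concreteness, here's a trimmed-down sketch of the kind of CLAUDE.md I end up with (the project details are invented, just to show the shape):

```markdown
# CLAUDE.md

## Project
Hypothetical order-management service: Python 3.12, FastAPI, Postgres.

## Conventions
- Run `pytest -q` before calling a task done.
- Keep diffs small; never touch `migrations/` without asking first.

## Workflow
- New features: plan mode first (Opus), get the plan approved, then implement (Sonnet).
```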
My question: in an AI-allowed interview setting, what are interviewers actually looking to observe? The ability to structure the problem before writing any code? How well you steer the AI rather than letting it drive? Whether you critically review outputs and push back? Writing tests first and using AI to make them pass?
My gut feeling is that what’s being assessed is no longer “can you code” but “can you orchestrate and reason with AI”. But I’m not sure what concretely separates a strong senior candidate from someone who just lets the AI spin in circles.
Have you been through this kind of interview, or do you run them? What did you notice?
ninetofivedev@reddit
My guess is they care about how you approach the problem, and how you validate you have a working solution.
The irony of the anti-AI crowd is that they forget SWE has always been about those two aspects, and never really about writing the code itself.
——
Now, the catch with simple problems is that you can probably just copy/paste them and say “solve this,” and it’ll get it right.
Versus how I might actually use AI: “We’re experiencing a problem: this build is intermittently failing. Please find when this build error first occurred over the last week and evaluate which runner instances the build is failing on.” It comes back and suggests a cause, and I push back: “No, that’s not the issue. You’re just not using valid credentials; use the API key located at … to read the logs.”
Empanatacion@reddit
I've been realizing how much my coworkers are missing out on the power of plugging it into shit. Give it access to Datadog. Give it read access to the database. Let it peek at the messages in the queues. Check out all the ancillary code locally and let it read it for context. Its ability to stitch those things together meaningfully is really powerful.
Eg, "The log shows a FooException for order 1234. I see a message in the DLQ for that order, but the database shows that order was updated. Ah, the OTEL span shows the message came from service X prior to that, and right here on line 56, service X is not rolling back."
And then you tell it to update the jira ticket with that information and create a branch for the fix.
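The wiring for all of that is mostly just MCP servers. A rough sketch of a project-level `.mcp.json`, assuming a read-only DB user and an in-house log-search server (the names, package, and connection string are all illustrative, not a recommendation):

```json
{
  "mcpServers": {
    "postgres-readonly": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-postgres",
        "postgresql://readonly_user@db.internal:5432/orders"
      ]
    },
    "log-search": {
      "command": "./tools/log-search-mcp",
      "args": ["--index", "prod-builds"]
    }
  }
}
```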
ninetofivedev@reddit
The caveat: giving it access to your logs is going to balloon your AI costs. I went from spending $70/day to $200/day on average when I started using it for this purpose.
Empanatacion@reddit
Totally. I'm building a tool that I absolutely would not use if I were paying for the tokens.
Putting my fictitious "entrepreneur hat" on, I'd probably still shell out the money for my expensive developers to use it.
ninetofivedev@reddit
Absolutely. $100/day on tokens is nothing considering how much faster it pattern-matches than I do.
Unless you’re a savant, AI is very good at writing CLI commands that parse the data exactly how you need it.
I will spend hours over the course of a day getting my regex strings right.
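For example, this is the kind of throwaway parser it'll one-shot while I'm still reading the regex docs (the log format here is made up for illustration):

```python
import re

# Hypothetical CI log line: "2024-05-01T12:03:44Z [runner-7] ERROR: build failed (exit 2)"
ERROR_LINE = re.compile(
    r"^(?P<ts>\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}Z)\s+"
    r"\[(?P<runner>[\w-]+)\]\s+ERROR:\s+(?P<msg>.*)$"
)

def failing_runners(log_lines):
    """Yield (timestamp, runner, message) for each ERROR line in the log."""
    for line in log_lines:
        if m := ERROR_LINE.match(line):
            yield m["ts"], m["runner"], m["msg"]

if __name__ == "__main__":
    sample = [
        "2024-05-01T12:03:44Z [runner-7] ERROR: build failed (exit 2)",
        "2024-05-01T12:03:45Z [runner-2] INFO: build ok",
    ]
    for ts, runner, msg in failing_runners(sample):
        print(ts, runner, msg)
```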
nkondratyk93@reddit
tbh most companies added ‘AI allowed’ to CoderPad without updating the rubric. still decomposition + edge cases + clean code. the AI changes your pace, not what you’re being graded on.
Legitimate_Key8501@reddit
The signal isn't "can you orchestrate AI." It's "when do you stop it." Strong candidates reject the first answer, catch the hallucinated API call, redirect when the approach drifts. Weak ones accept whatever compiles. Interviewers are watching your friction, not your throughput.
Medium_Ad6442@reddit
Nobody knows. All of this AI usage is going to be an experiment for the next few years.
Business_Average1303@reddit
It needs proper guidance and horizontal integration; there are so many things to get right versus just “use the damn thing and give me 10x output” lol
Medium_Ad6442@reddit
Say that to upper management. They only think about the LOC. Just increase the speed of coding and the sky is the limit.
Business_Average1303@reddit
I am indeed trying to talk to upper management about this!
This needs to be a new function in an organization.
Majestic-Watch-2025@reddit
We do interviews like this. It'll differ from company to company, but I would not overindex on showing how well you use Claude specifically. As another poster said, it's still about seeing your problem-solving skills. It's the equivalent of letting you use Google in the past: it's a tool you should know how to use and be comfortable with, but the point is to show your actual coding skills. I tell candidates that if they're using AI-generated code, they should be able to walk me through what it's doing and why.
OmegAIChungus@reddit
Are you able to solve the problem at hand, and how do you go about it? Really not much different than before.
SpritaniumRELOADED@reddit
Yeah listen to this comment, it's like mine but more polite
SpritaniumRELOADED@reddit
I feel like this goes without saying: they are hoping to observe/evaluate the way you work with AI.
This is going to cull out a bunch of applicants who just say "I don't use that stuff," which is a red flag (lack of interest in new industry trends).
Then you should get more practice to counteract your Dunning-Kruger effect.
chasectid@reddit
My company has these “AI-enabled rounds”: the sort of questions with multiple sub-questions that are revealed as you go. This is primarily done to combat cheating. However, since AI has gotten so much better, the round itself is a farce imo, because when the questions were designed AI wasn’t able to one-shot them that easily, but now it is.
Having said all this, since the entire round is farcical, the evaluation mostly comes down to one or two sentences the candidate says during the interview, or, if any errors come up, whether the candidate can debug and solve them.
Again, post Claude Opus 4.6+, it’s a complete shitshow: initially the questions were just obtuse and complex enough that AI wasn’t able to one-shot them, but that’s not true anymore.
chikamakaleyley@reddit
Honestly, you should ask.
But what I would probably gather, given the climate:

* Above all, there's probably less of an excuse now to submit something that isn't complete.
* What would you do if, for any reason, the tool stops working?
You should also find out if you'll be using their tooling. You mention CoderPad, so it could be in a controlled env.