Don't use AI for coding interviews
Posted by bytebux@reddit | ExperiencedDevs | View on Reddit | 160 comments
Just don't. We can tell.
I'm starting to see a lot of cheaters in interviews lately. I'm not sure if I should be shocked or not. Some are so blatant as to look over at a second monitor and read directly from it. Others work off a single monitor, ask clarifying questions, and even try to alter the code output a bit. Some probably even have clever prompts that spit out code more believable for a coding interview. But we still know.
Anyone else seeing a lot of this lately?
KronktheKronk@reddit
Jesus fuck, how many data points do you need before you cave and come to the conclusion that coding interviews are, full-on hard-R, stupid?
Doub1eVision@reddit
Nah, coding interviews have a lot of value. They’re a great tool for discerning if somebody passes a meaningful bar. The problem is that companies over-leverage this and try to make coding interviews determine more than they reasonably can. You can only get so much signal before you start getting mostly noise, but we all pretend the noise is signal.
KronktheKronk@reddit
No they're a shit tool for discerning if somebody passes a meaningful bar entirely because interviewers are ego-driven human beings.
There are easier ways, and coding interviews are no better than picking at random.
Doub1eVision@reddit
This is an argument fueled by hyperbole.
KronktheKronk@reddit
Go do research. Google, known for their hard coding interviews, admitted that they found zero correlation between interview performance and job performance.
Studies have shown that people who give tricky interview problems enjoy feeling superior when people don't get the questions right and turning away candidates as a result.
A bunch of Google interviewers turned themselves down when given their own anonymized hiring packets to review.
Zero correlation. No better than picking at random. So why put yourself and them through the headache? There are easier ways.
Doub1eVision@reddit
Again, they are over-leveraging it to the point that it injects a lot of noise. That doesn’t mean that any form of a coding interview is bad or a waste of time. Its most useful form is when it acts as a simple filter with fairly simple questions.
Also I’ve never seen a company do this, but I’d really like to see a coding interview give the interviewee a PR to review. You can get a ton of really valuable signal by seeing how a software engineer reviews code. It would be easy to sprinkle in issues to demonstrate high level and low level technical skills.
KronktheKronk@reddit
You sound like you interview people, do that
Doub1eVision@reddit
I do technical interviews, yes. And I try my best to choose questions that are direct and focus on getting an understanding of their thought process. In my mind, a coding question should remove as much stress and unnecessary points of failure as possible. We’re not trying to test how well they work under extreme time pressure.
Elegant_in_Nature@reddit
You’re most definitely middle management, given the amount of defense you’ve offered for coding interviews in this thread. Much respect, I suppose!
Doub1eVision@reddit
I’m a software engineer. What amount are you talking about? A single brief post? Are you also adding the post where I said somebody was using hyperbole?
There’s real value in testing that somebody can reasonably code. The problem is that companies rely on it way too much.
It’s a classic problem of diminishing returns. Invest A resources in coding interviews, get B return on it. Then companies crank it up by investing A×C resources and are stunned when they don’t get B×C returns. Sometimes they get even less than B because of how poorly they do it. But that doesn’t mean that there’s no value in doing it at all. It works best for filtering out frauds (harder now with AI), and it works best when your coding questions actually relate to the work done at your business.
Elegant_in_Nature@reddit
I completely agree honestly, I just have an angst that’s stayed with me since the dot com era, not to reminisce but interviews then and now are a whole different ballgame
nemec@reddit
uhhhhhhh
I know what you mean but it's far less common than the alternate meaning
ajfoucault@reddit
Lol, reminds me of [this](https://www.youtube.com/watch?v=MFDiuBomSuY)
TheOneTrueTrench@reddit
That's... the joke
PoopsCodeAllTheTime@reddit
Obligatory:
https://youtube.com/shorts/qqVkuGjThS4?si=TYrVb3R3QUFbRMaG
OkTourist@reddit
I understand that reference
ProfBeaker@reddit
My company has been having discussions about whether to just allow it for coding interviews. We're rolling out AI assistants for everyone to work with, so it's hard to argue that candidates shouldn't use it. I mean, should we not let them use an IDE either? Notepad, whiteboard, or GTFO! :P
I'm probably on the more AI-skeptical side, but it is pretty clearly becoming just another tool. If a candidate shows that they can use it well, good for them. If they prove they can't, that's also interesting.
Groove-Theory@reddit
In a few years AI is going to be waaay more ubiquitous, whether we like it or not (dependent in part on industry and context of course). I literally have no clue why there's this backlash against using AI in interviews if, like you said, everyone will NEED to work with it.
People who ban AI in interviews outright are just dinosaurs that will inevitably have to come to terms with a changing landscape.
Now... there can be SOME contexts that should be AI-free (such as a debugging problem where you need to read an AI-generated response and figure out why it isn't fulfilling all test cases), but to ban its use outright is ludicrous.
thekwoka@reddit
Mainly there is just the issue that YOU are being hired, not the AI tool.
If the AI tool is the thing doing the whole interview, what are YOU there for?
Groove-Theory@reddit
I'm also using an IDE which allows me to compile high level code with more business context into bytecode or machine level code. Are you going to hire the IDE?
thekwoka@reddit
Bruh, this thread is about people that just try to use the AI to answer everything.
Like, why make up some alternative situation to argue against it? It's a straw man.
Groove-Theory@reddit
> Bruh, this thread is about people that just try to use the AI to answer everything.
Because thats EXACTLY what I want to see during an interview. 100% I want to see how they use AI.
Developers will use tools to make them more efficient. I want to know if they can use them properly (i.e., can they use IntelliJ, an IDE, properly when working on Java Enterprise code if I'm looking for Java-experienced devs).
If they blindly put something into ChatGPT and don't know what's going on, then I want to know that. If Copilot recommends something and they don't analyze what's given to them, I want to know that.
Because whether you like it or not, they WILL (unless there's a company or industry regulation banning it) use AI in their day to day work. If there's no regulation against it in your company (which I already listed above as an exception in my first post), you must expect, and even embrace, the fact they will use AI.
So no, it's not a strawman at all; it's consistency in argumentation. It's your responsibility to refute it instead of crying fallacy without any backing.
Elegant_in_Nature@reddit
Do you think the existence of cars means someone should have to bike to the interview, just to prove that if their car breaks down they'll still be able to get there on time? Very silly
turturtles@reddit
Not a great comparison, considering biking is more efficient. If someone is within biking distance of the office, biking would be the superior option.
thekwoka@reddit
Why would they need to bike to their computer in their own home?
I don't pay them to come to an office. I pay them to do specific work.
80eightydegrees@reddit
You are hiring them to be productive with their tools and skills. You, the interviewer, should be judging their ability to do the job with the tools they have: whether or not they are over-reliant on AI, are tripped up by it, can adapt, provide good explanation and communication, etc.
No one is suggesting you give them some task they one-shot in a prompt and move on. Give a meaningful real-world task in a real-world codebase, and talk to them as they accept what the AI has provided or explain what changes they want to make to the generated code and why.
Sorry to say, yes, this is more work for the interviewer than "here's two leetcode mediums, solve them or don't move on."
apnorton@reddit
I think it's reasonable to allow someone to use AI in a testing environment only if you think it's also reasonable for someone to phone a friend and have them answer the question for them. After all, at work you'll be able to ask your colleagues for help if you get stuck.
The whole point of any kind of test (interview, exam at school, etc) is to test how you can perform.
I'm on team "in person, whiteboard" for coding interviews, tbh.
ProfBeaker@reddit
Ha, interesting point! Though we do somewhat allow phone-a-friend in the form of asking the interviewer questions, and also interviewers giving hints. You could argue that allowing pseudo-code is a similar form of this.
Hypothetically I could see allowing it, though in practice most interview questions should be simple enough that if you need to call anybody it's going to reflect very badly on you.
zylog413@reddit
Yeah my company has started allowing it as well. We ask them to share what prompts they use, and for their thoughts on the generated code.
It raises some interesting questions on what it is you're actually looking for in these kinds of tests, and what kind of problems are most suitable to discern the best candidates.
No_Indication_1238@reddit
I guess it depends on who you want to hire. Everyone can use AI and get the job done, but if you can get someone who knows and can do more for the same money, why not get them instead? The question is...how do you find who they are? For some people, coding without AI is the proof they need. Others disagree. It's a sensitive topic indeed.
Calcidiol@reddit
The point of USING A TOOL is that it makes a task's completion somehow more efficient, effective, ergonomic, expedient, safe, higher / more consistent quality, etc.
So if a workflow is proven to be satisfactorily and effectively accomplished using X, Y, Z tools, then there's zero justification or benefit in trying to find people who shun the most effective / expedient tools for accomplishing their tasks.
I'm sure I can manually translate SQL or C++ to machine code based on what I know. But every time I'll use a database or compiler for those jobs since it's 100x more efficient / effective at accomplishing the goal.
Doing something in a needlessly burdensome way isn't job relevant, it's at best a stunt / sport but is hardly meritorious.
It should be blatantly obvious that there's a wide world of possible approaches to solve most SW development tasks. Some of those architectural / design / implementation decisions are objectively bad, some of them are categorically about the same as each other with varying trade-offs, and some of them might be overall advantageous.
So whether one is using a library full of DSA text books or stack overflow or a ML model or chatting the ideas over with N peers to help ideate / explore possible approaches to any given problem / sub-problem at the end it's still often personal judgement / evaluation that informs what is ultimately a suitable selection for a final proposed solution to the task.
Sure the root concepts / ideas / algorithms / data structures / patterns / libraries / constructs are not novel, they all came from SOMEONE ELSE who invented it, publicized it, built it into a library / language, etc. So whether studying sorting algorithms by text book, wikipedia, stack overflow, ML assistant, dusty memories from a CS101 lecture, it doesn't matter one little bit how one does research, the only thing that matters is using one's discretion and judgement to select one out of many possible choices to proceed with a given problem most expediently.
It's a fallacy to suggest that we should all just "know everything" without using ML in the same way that it'd be ridiculous to suggest 20 years ago that we shouldn't use reference books, libraries, calculators, et. al.
Mnemonic aids / devices & reference materials are all (at best!) conceptually equivalent in potential utility / effect and there's no justification for deprecating them in virtually any realistic case.
Workflow aids, similarly. So there's not generally any extra merit to calculating with one's fingers & toes, using an abacus, pocket calculator, spreadsheet, or analysis SW. If the job is to get the calculation done quickly & correctly then whatever suffices suffices.
These things (sub-tasks / queries which can be merely looked-up or calculated or selected from a menu of common options) are NOT THE JOB and judging how / what is used is misplaced attention in most cases. They're simply tools to make baby steps along a workflow to ACCOMPLISH the true end job result.
Elegant_in_Nature@reddit
That’s really not necessarily true. I’ve seen many devs struggle to use AI or deploy anything with it. They just think differently. But to think AI is just simple input/output is wrong.
gemanepa@reddit
The issue has a pretty easy solution: The interviewer should provide problems that can’t be solved through AI alone 🤷🏼♂️
fireblyxx@reddit
You’d have to constantly be updating the question. If you want to figure out whether someone would be effective at using AI, ask them to diagram some new feature, component, or whatever, outside of code. That’s what they’d need to do in order to write effective prompts and to check the output against anyway.
Shit, maybe go more abstract and ask them to write a procedure. Like, write detailed instructions on how to do laundry.
vivalapants@reddit
Ya actually I’d prefer someone who can white board sudo code than pristine IDE shit
E3K@reddit
I'm sorry, "sudo code" has me absolutely dying. That's amazing.
vivalapants@reddit
Lmao I’ll blame my whole word dyslexia.
Sudo out.
a-rec@reddit
you can run vscode as a normal user /s
Calcidiol@reddit
Anything is "just another tool": slide rule, calculator, spelling checker, grammar checker, spreadsheet, word processor, shelf of tattered reference books by Knuth, Stroustrup, GoF, a computer itself, a search engine, ML.
A problem to be careful of when judging people on their perceived tool-use ability is access to the tool. Everyone supposedly has a "good development setup" at work, provided and trained on as necessary by the employer. At home, not everyone has immediate use of a modern developer workstation (e.g. maybe they're interviewing on a basic laptop or a no-frills shared home-office desktop), so maybe they don't have the "ordinary" developer-centric tech stack handy or installed on their personal machine.
So at work in an tool / tech environment they're equipped & trained & acclimatized to work in they can show good proficiency with the tools needed to do their jobs.
At home? It's unreasonable to hold a lack of demonstrated use against people who don't have home labs and expensive SOTA AI and SW development tool subscriptions, whether for lack of personal time, personal need, or budget (possibly unemployed -- job interview in progress!). The same goes for how well equipped their "I'm teleconferencing" cubicle is for doing serious SW development, which even if they do engage in at home may happen in an entirely separate room or on a separate computer for security, privacy, space-constraint, or whatever other reasons.
We hire accountants, lawyers, doctors, appliance repair technicians, carpenters, plumbers, electricians, architects, mechanical engineers, et al. In none of those (and almost no other trades or professions) do we expect people to live-demo the actual workflow activities of their trade while sitting at home in their bedroom on some old laptop, in some circus-style show. Nor do we grade them on showmanship (SW developers != actors / salesmen / advertising), irrelevant rote-memorization tasks, job-irrelevant trivial puzzles / tests, or how well they work when they don't even have a normal (realistic / sane) work task, context, and environment to work in.
A more reasonable take would be assuming that if you're hiring reasonably intelligent, capable people they can learn to effectively use ANY tool / technology that's reasonably appropriate for them to use. New computer language. New SW engineering tools. How to use the favorite ML assistant / IDE / CASE tool of the year. Whatever.
iggybdawg@reddit
Was coming in here to say upper management wanted this. They want people who can and will write code using AI.
Greykiller@reddit
I did an interview today where I was allowed to use Google and I was sitting there thinking about how I wish I could just use a prompt and show the prompt. I'm not sure it's the right thing to do, but trying to dig through and understand terrible search results which are _still_ probably AI generated is annoying even when I don't have someone watching me.
In this particular case, I was going to bomb that interview either way, but all I could think was "If I were doing this on my own I'd 100% have asked an LLM about this by now".
I agree with the fact that companies probably want someone who knows everything off-hand and then can use LLMs as a plus. I get that. It's a weird world we live in I suppose.
RandyHoward@reddit
I think if a company expects their devs to use AI tools on the job, then they should allow it in the interview. Lots of companies expect their devs to be using AI tools lately, so if you’re not allowing the candidate to use AI in the interview then you’re not really gauging how they’d perform on the job. But you damn well better be able to tell if the candidate understands the code they’re coming up with and not simply regurgitating what the AI spat out.
Lazy_Heat2823@reddit
The problem is that they have to overhaul the interview structure. Which is significantly more work.
eksx3@reddit
I work as head of engineering at a mid-sized company and encourage the use of AI in all interviews. Our candidates live-code a demo problem with me, and I can usually tell within a few minutes how capable someone is.
If they can’t prompt well then it probably means they don’t even understand the problem.
If they don’t spend time trying to understand the problem then it usually means they aren’t someone I am looking for.
Finally, AI hallucinates often. If the candidate can catch those issues quickly and understand the problem, they're exactly the person we're looking for. If the AI can easily outsmart them with its hallucinations, they aren't cut out for the role.
Testing with AI has been a game changer for us when it comes to quickly gauging skill level.
steampowrd@reddit
This kind of forward-thinking culture is the kind of place I like to work!
akc250@reddit
Exactly this. Anyone who disagrees is a dinosaur unable to keep up with the times. Why wouldn't you want to test a candidate using all the tools available to them in their everyday job?
rkeet@reddit
I'll make it worse for you. I teach IT things, and this past Monday/Tuesday it was Kubernetes. At the beginning of the training we now ask attendees to switch off AI tools and AI autocomplete.
This is needed because the material concerns logic and fundamentals, and for those the standard tools are more than adequate. Autocomplete, however, prevents learning.
Had 1 guy in there who, during the theory parts, was furiously typing with ChatGPT "because I prefer that explanation". So I told him that he, like the other attendees, was very welcome to ask for clarification if he didn't understand something. The others did. He kept at it though.
During the labs he was also continuously using the bot. And he was slow completing quite directed assignments (make a Pod, make a Service, tie them together, etc.). The second half of the second day is a use case. Quite basic: 2 backend services to be tied together. It uses nearly all the things covered. He couldn't get anything to work. Fair is fair, the others also struggled, but they got much further; 1 or 2 even finished. He didn't even get the pod plus service running (labs 1 and 2 from day 1 combined, without guidance).
Any critical thinking and learning moment is starting to get ignored, more and more. Sadly, this guy wasn't an exception nowadays.
The few times he spoke for a question it constantly started with "actually, chatgpt says X instead of what you said".
This is already a real problem, only getting worse.
Best we can do is not hire/enable these people, because critical thinking is, pun intended, a critical part of the job.
bytebux@reddit (OP)
That's crazy. I use AI for things everyday, as one should, but to treat its responses as gospel already is nuts. Maybe in another year or two, lol
kbielefe@reddit
My last interview we let the guy use AI because his Java was rusty, as long as he shared his screen and didn't outright ask it to solve the problem. It was more illuminating than I expected.
IngresABF@reddit
Can you elaborate on how it was interesting? We hired recently and might be doing so again soon, trying to evaluate our approach
kbielefe@reddit
I was surprised I could still get a feel for how good a programmer he was by the kinds of questions he asked the LLM.
hooahest@reddit
What do you mean by illuminating?
kbielefe@reddit
That I still felt like I had a good insight into how good a developer he was on the job.
keskesay@reddit
Those are just the ones you're catching. Some have it listening in and generating on the side.
Awric@reddit
This happens even for practical questions though. For example, an iOS interview question that I consider to be fair is: “Can you explain why this reference needs to be weak?” (For context, this is a question related to memory management that we encounter very often when building most apps.)
It’s a warm up question, but a lot of people rely heavily on AI to answer it, and they try to be sneaky about it.
Goducks91@reddit
What's the answer? I'd have no idea lol
Awric@reddit
Ah sorry, I didn’t include all the context. It’s usually supplied with code that you can look at, but I often catch people typing the code snippet into ChatGPT (or whatever program they’re using) to generate an answer
It’s related to retain cycles and reference counts. A weak reference prevents an object’s reference count from being incremented.
thekwoka@reddit
is it really about preventing the count being incremented?
Or just about not preventing it being cleaned up?
ResidentSwordfish10@reddit
that's the same thing.
thekwoka@reddit
Well, one is an implementation detail.
The other is the actual goal.
Not blocking cleanup is a goal that applies to weak references in reference counted or mark and sweep garbage collection.
The count being increased is a specific detail.
bcgroom@reddit
Yes
Goducks91@reddit
Ahh ok! I still wouldn't know the answer but that makes more sense lol. Thanks!
spicymato@reddit
Generally, the answer for "when do you want a weak reference?" is "When the thing holding the reference needs access to the thing so long as it is alive, but not be able to keep it alive."
In other words, as long as Foo is alive, Bar should be able to use it, but when Foo is deleted, Bar should be able to detect that Foo was deleted and not use it; Bar should not help keep Foo alive unless Bar is actively using Foo.
This is accomplished by converting the weak Foo reference into a strong one before Bar actually uses it, and then releasing the strong reference when Bar is finished. If Foo was already deleted, the conversion fails and Bar can exit without doing the work, but if Foo still lives, then Bar will increment the reference to keep Foo alive until it has finished working with Foo, and if all external references are deleted while Bar is working, then Bar's release of the strong reference will finally delete Foo.
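That weak-to-strong promotion can be sketched in a few lines of Swift (the `Owner`/`Worker` names here are made up for illustration, not anything from the thread):

```swift
class Owner {
    let name = "owner"
}

class Worker {
    // weak: Worker can use Owner while it's alive, but doesn't keep it alive
    weak var owner: Owner?

    func report() -> String {
        // Promote the weak reference to a strong one for the duration of use;
        // if Owner was already deallocated, the promotion yields nil
        guard let o = owner else { return "owner is gone" }
        return "working for \(o.name)"
    }
}

var owner: Owner? = Owner()
let worker = Worker()
worker.owner = owner
print(worker.report()) // "working for owner"

owner = nil // last strong reference released; Owner is deallocated
print(worker.report()) // "owner is gone"
```

Inside the `guard let`, `o` is a strong reference, so the object can't be deallocated mid-use even if every external strong reference goes away while `report()` is running.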
Toohotz@reddit
To really suss out the candidate further, you can ask whether there would be any caveats if we used unowned instead of weak. While we all know weak is always the safer option, unowned has its use case when we can guarantee that the reference has the same lifetime as its parent. It’s only when its lifetime is independent of its parent’s (view controllers are an example of this) that weak should be strongly preferred.
There’s a slight cost to the indirection of going through Optional with self that most of the time we can just write off, but the performance can add up in certain circumstances.
I tend to die on this hill in PR reviews when I see a bunch of weak self references that could be unowned due to lifetime and ownership, but it makes people a bit nervous 🙂
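A rough sketch of that weak-vs-unowned trade-off in closure capture lists (the `DataSource`/`Consumer` types are hypothetical, chosen just to make the example self-contained):

```swift
class DataSource {
    var value = 42
}

class Consumer {
    let source: DataSource

    init(source: DataSource) {
        self.source = source
    }

    // weak capture: the closure body sees an Optional, so every call pays
    // the unwrap indirection, but a deallocated referent is handled safely
    func makeWeakReader() -> () -> Int? {
        return { [weak source = self.source] in
            guard let s = source else { return nil }
            return s.value
        }
    }

    // unowned capture: no Optional, no unwrap -- but if this closure ever
    // outlives the DataSource, touching it traps instead of returning nil
    func makeUnownedReader() -> () -> Int {
        return { [unowned source = self.source] in
            source.value
        }
    }
}

let src = DataSource()
let consumer = Consumer(source: src)
print(consumer.makeWeakReader()() ?? -1) // 42
print(consumer.makeUnownedReader()())    // 42
```

Since `Consumer` holds `source` strongly here, the unowned reader is safe; if that lifetime guarantee didn't hold, the weak reader would degrade to `nil` while the unowned one would crash.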
DreamAeon@reddit
One method I like to do is not read out the entire question and use “this”, “that” with contextual info from the screen.
Interviewees will ask for clarification, but the AI prompters will repeat the entire question.
Orrison@reddit
Not true in my experience. It is extremely obvious when someone is reading off of, or consulting with, something off-screen.
Some are terrible, repeating interview questions back word for word, pausing, then spewing the first-page Google definition of something in the question. (I had one where I could literally see the AI responding in the reflection of their glasses.)
Some are okay at hiding it, but it’s obvious in the body patterns. And they easily get caught up with questions that dig more into personal experience or something related to previously asked questions when the answers don’t match up.
No-Emergency9224@reddit
The idiots are extremely obvious. My friends and I all used cheating tools for our interviews and were all across big tech.
I promise you, for every obvious idiot, there are at least 5 people who’ve practiced their cheating setup.
Obviously you can’t be completely clueless, but if you have a baseline understanding then you don’t actually have to use your brain at all. I know how to use all the data structures and algorithms, I’m just too lazy to figure out the problem during an interview.
ghostwilliz@reddit
God I couldn't imagine doing that. If I don't know something I'll just say I don't know. It's not a big deal, most interview questions are easy as fuck anyways
dat0dat1@reddit
"most interview questions are easy as fuck anyways"... tells me you haven't interviewed much...
ghostwilliz@reddit
I've done more than I'd like to have done, I'll tell you that much
Orrison@reddit
Same! And in interviews I have lead, I love when folks say they don’t know something! It speaks highly to their credibility, which is big.
Current-Fig8840@reddit
You seem seriously paranoid. I usually repeat the question again while using that time to structure an answer.
Orrison@reddit
I’m sure I’ve misunderstood at many points and made assumptions. In the scenario I was recalling with that point there was a lot more context that made it seem more than likely they were using some additional tooling like AI. But maybe I was wrong.
It can be difficult to suss out a person in the brief interaction of an interview, and I’m sure I’ll make mistakes. So far I’ve managed to put together a great team, so I figured some of them out, I guess. 😬
TedW@reddit
I see a confirmation bias here, where you'll convince yourself that you're right more often than you actually are.
You'll finish an interview convinced they used AI when they didn't, or vice versa, and walk away thinking you were right either way.
Orrison@reddit
Perhaps. It’s the job of the interviewee to present themselves as best they can. And the job of the interviewer to work to understand the skill level and fitment through the nerves and questions. It’s common as an interviewer to miss the mark in the brief interaction with/without AI. My anecdotes of possible AI usage are just anecdotes. Maybe you’re right on some of them and I missed out on someone great.
I try my best to think about all my biases but you’re right to remind me of this one here.
ryeguy@reddit
https://rationalwiki.org/wiki/Toupee_fallacy
bytebux@reddit (OP)
Ooo the glasses 😂 that's a good one
ResourceFearless1597@reddit
It wouldn’t be this bad if the companies didn’t ask grad students fucking stupid questions. In one interview they asked me about deep, advanced OS knowledge; keep in mind this was for a front-end intern role. Yeah, the market is brutal.
Southern_Space7425@reddit
BRO stop lying. You're saying you're a senior devops engineer at a FAANG but you're interviewing for intern frontend roles.
ResourceFearless1597@reddit
Oh my days I work at FAANG, but this was when I was interviewing
Southern_Space7425@reddit
You say you're a senior software engineer, which is 5-8 years of experience. You're saying back when you were interviewing for intern roles, so 6-9 years ago, the market was bad? Except you say the market IS brutal. And the market 6-9 years ago was good. So... which FAANG did you say you're at? Fortunata?
Astarothsito@reddit
If they are good enough that I don't notice as interviewer, then they are good enough to be hired.
I usually give an exercise that any AI could easily solve. The thing is, if they solve it in 5 minutes then I know they cheated; the exercise is designed to have them explain and question concepts that any (C++) programmer should know by memory. Just reading and typing from an AI wouldn't be enough.
A coding exercise is for evaluating communication and previous experience, the primary goal is not to implement a working solution.
Goingone@reddit
Yes, this is a real problem today. I’d estimate I see it in roughly 20% of people I interview.
But it’s incredibly easy to catch when people can answer technical questions but have no idea what their practical applications are.
Or when they don’t know how to do something simple, but can give a textbook answer to an extremely difficult concept.
Calcidiol@reddit
Studying 100s of advanced / new / difficult concepts well enough to pass a verbal / written / multiple choice test is the very definition of preparatory studying from early education onward.
And yet the more specialized one's knowledge is the more likely there are myriads of things that are simple that people are wholly unfamiliar with and unskilled with the practice of.
Perhaps one of the most useful skills these days isn't the depth and breadth of one's rote memory or being N% faster at some common workflows, but how to adapt to the continually evolving / advancing tool & technology stacks we have which are meaningfully advancing annually.
Hand anyone moderately clever / dextrous / able a rubik's cube or jigsaw puzzle they've never encountered before and they'll struggle to learn it initially. But soon they'll figure it out and succeed. That's life every year on the leading edge of the technology curve. It's not about what you know wrt. skills of last year / decade, it's about how adaptable you are to learn and evolve broad but maybe shallow skills as new things continually spring up that are relevant to use / overcome.
Being able to invent and use new / unfamiliar tools and succeed in new / unfamiliar environments has been one of the key enabling traits of the human species over its existence. It's in our DNA, literally.
Being able to either rote memorize or (equivalently!) retrieve those answers (ML, google, web, textbooks) for yesterday's problems isn't remarkable and it doesn't matter HOW one comes up with the non-novel answer if the answer is what's needed to accomplish a task. Being able to use any / all tools at one's disposal to answer the questions of 'tomorrow' when the answers are much less well known and familiar is much more interesting when the tools and topics themselves are endlessly changing.
Using mnemonic aids and adeptly navigating informational resources is perfectly reasonable, even existentially necessary. Whether they're textbooks, reference posters, wikis, blogs, libraries, ML models, doesn't matter, just sources of "already known" data / information.
Using workflow aids to make work more efficient / ergonomic is also perfectly reasonable, even existentially necessary. Whether fingers & toes, abacus, calculator, spreadsheet, assembler, compiler, database, ML model, analysis program, spelling & grammar checker / advisor. Doesn't matter, these are the tools our civilization needs to grow / succeed, they'll evolve and our evolution to continue to be a tool creating & tool using species with an ever wider / broader / deeper set of tools should / must, too.
Doing something necessary more efficiently & ergonomically is probably one of the great virtues of personal learning / evolution. Mostly we're not scientists / inventors of novel concepts and tools, but we can all continually climb the ladder of ever more diverse tools & concepts to maintain "literacy" and "capacity" to surf the wave as it pushes forward into the future.
RowbotWizard@reddit
I think it's a bit naïve to act as though AI assistants aren't becoming a core tool in most devs' toolkit.
If you need runnable code for the goal of your interview, then don't be surprised if an AI assistant is close at hand in their IDE or a browser. It saves time. It reduces struggle with the minutiae of code. Of course the details still matter -- if the person is just vibe coding their way through and can't properly assess or justify the solution they're offering, then that should drag their chances, but I wouldn't label the tool as the problem.
Heck, maybe you could even use AI to do a new kind of interview where you prototype architecture diagrams in an IDE with Mermaid and ask them to weigh pros/cons for you. That'd be fun!
I'm not interviewing lately in my current role so maybe I'm out of touch, but I think it's important to see how a candidate uses and remains _critical_ of AI in their workflow. All of my most talented peers are using AI assistants in their day-to-day and complaining about fighting its bad suggestions and mistakes.
If the goal of a coding interview is to arrive at the best solution to a given problem, then I think it's beneficial for interviewers to witness how a candidate evaluates options, makes plans, and produces code with an AI to bounce off of because that's what's realistic nowadays.
If you really want them to bounce off you as an interviewer rather than their AI assistant, try a pairing exercise where you drive and they navigate. Inverting responsibility for screen sharing keeps things in your control. You can work with an AI assistant on their behalf, if they wish. It includes you in the discussion _with_ AI rather than leaving you an observer trying to police their AI usage.
r_s@reddit
The motto for tech interviews/school assignments etc. in many people's eyes is "If you ain't cheating, you ain't trying".
Multiple AIs running, doing multiple interviews, taking multiple jobs, overemployment-style.
That is the current state of a very good portion of the industry. When you start paying elite comp ranges (well over $250k), maybe it changes.
Successful_Camel_136@reddit
I wouldn’t consider overemployment cheating. If you're still an above-average performer, it's a win-win for the company, which likely wouldn't have gotten that person at all if they weren't already employed elsewhere. It does cheat other devs out of opportunities, I suppose, if that's what you meant.
MagicalPizza21@reddit
When interviewing, I'm trying to get a job I can keep, not one I have to cheat to get in the first place.
My current job didn't even have a coding interview.
invest2018@reddit
How about updating interviews so they can’t be faked by AI?
MagicalPizza21@reddit
How? Make them in person again?
PoopsCodeAllTheTime@reddit
Maybe a normal conversation?
Lol
Successful_Camel_136@reddit
Normal conversational interviews about your past experiences can be very easily faked. Of course, if the interviewer is good/tough and goes deep on technical details with lots of follow-ups, you can’t fake that. But there are a lot of bad/easy interviewers.
invest2018@reddit
Final rounds should be in person unless there is external reason to trust the person.
Instead of having candidates solve leetcode-style problems from scratch, have the interviewer write the code while screen sharing and the candidate drive verbally.
bytebux@reddit (OP)
I think that's going to come very soon
Efficient_Sector_870@reddit
Who cares? Leetcode questions are bullshit anyway.
great-pikachu@reddit
Seriously, what’s the point of making interviews so disconnected from the daily dev experience? In interviews: leetcode, no AI. In daily tasks: no leetcode, plenty of AI.
They might as well ask people to show their yoga or carpentry skills while at it.
g-unit2@reddit
take-homes skew towards people who have more time/fewer responsibilities, i.e. single, no family.
debugging a function or small app seems like a decent evaluation. even if they don’t succeed. just seeing the process. but i haven’t personally seen this done before.
leetcode can provide a standard way to evaluate engineers that is somewhat consistent when you’re hiring 10,000 engineers.
outside of big tech, any company doing a leetcode question is pretty dumb.
Additional-Map-6256@reddit
Damn please require carpentry in interviews. I'll do better than these fucking useless leetcode questions
Groove-Theory@reddit
Don't give them any ideas
ghostwilliz@reddit
I have never actually been asked to do leetcode style questions in any interview surprisingly enough
PoopsCodeAllTheTime@reddit
I did two in the past week, one live and one async, medium/hard difficulty. I'm still unemployed lol.
Toxic_Biohazard@reddit
What role/seniority?
-ScaTteRed-@reddit
Could not agree more xD.
CommunicationDry6756@reddit
AI can answer more than leetcode questions.
mincinashu@reddit
They're hiring fork lift operators based on their bare handed shelving skills.
Groove-Theory@reddit
They're hiring fork lift operators based on how they lift their forks at the dinner table
ringohoffman@reddit
They didn't say they were asking Leetcode questions. I ask a pertinent, simple OOP question and still regularly catch interviewees trying to cheat.
drunkandy@reddit
Yeah I see it a bunch, it's pretty obvious when a candidate looks off into space and is suddenly struck by inspiration to type out a complete function start to finish.
Lazy_Heat2823@reddit
I dread getting laid off and having to interview. My eyes look up when thinking, it’s just a natural reflex.
drunkandy@reddit
I assure you that I can recognize nervous engineer tics.
Lazy_Heat2823@reddit
It’s not a tic. It’s literally just eyes looking up for 10 secs and thinking
PoopsCodeAllTheTime@reddit
Me too, I stare into the ether to formulate words, then I can actually speak.
j-random@reddit
I love it when you ask a question and the candidate faffs about with some neutral vague comment, then suddenly spits out a textbook answer to the question you asked. One of these days I'm going to snap and tell candidates that if I can't see both of their hands at all times, the interview is over.
MagicalPizza21@reddit
Idk about you, but my camera doesn't point at my keyboard, so I doubt any interviewer is going to see my hands, but that doesn't mean I'm using AI
drunkandy@reddit
I think sometimes they have a buddy listening in on WeChat sharing a screen or something
Current-Fig8840@reddit
Lool, this is how you should answer interview questions though. You should think about a full solution and run the test in your head, then start coding. Some of y'all have just become crazy paranoid.
akc250@reddit
How about keeping up with the times and interviewing based on how well someone uses the tools available to them? That doesn't mean the whole interview needs to be completed by an AI, but there are plenty of cases where AI fails. Test their knowledge using that. Keeping the old-style leetcode interview perpetuates a toxic interview practice akin to hazing that tests little of how well someone will perform at their actual job. I've worked with plenty of candidates who aced hard leetcode questions, and when it comes to on-the-job performance, their ability to write scalable and performant code is worse than some of my interns'.
imsupergreg@reddit
I've found about 1 in 6 interview candidates have obviously been using AI tools. My general go-to interview question is designed in such a way that it's obvious when you skip the iterative solutions and land straight on an optimal one.
That said, this is just the ones I catch. The smarter ones make it far less obvious. We really need to craft better coding questions based on realistic problems and do more onsite interviews.
CranberryDistinct941@reddit
Just look at the list of people getting called out for cheating after any weekly Leetcode contest
eddielee394@reddit
We provide access to a repo of a small mock application and have the candidate walk through a refactor of the app based on a given set of criteria. We spend about an hour discussing implementation, abstraction, architecture design, and all sorts of fun stuff. No arbitrary coding challenges, but we will dive into writing code based on where the conversation leads. AI isn't really helpful (except maybe as a sounding board) due to the amount of context involved.
It's also a lot less stressful for the candidate overall because it's highly collaborative. It really gives a lot of insight into how a candidate approaches actual day-to-day work and their thought processes.
Dreamin0904@reddit
This actually sounds fun in comparison to most technical interviews.
bytebux@reddit (OP)
This I can get behind. Aligns with the actual job description as well. The only trouble is it may be hard to roll this out widely with a large company.
BlackHumor@reddit
That's a clear toupee fallacy. You can tell when someone does it badly, but that doesn't prove you can always tell, because if someone did it well you wouldn't know.
pausethelogic@reddit
Caveat: unless you’re interviewing at an AI focused company or AI startup, where not using AI code assistants would likely cause you to not get the job
Additional-Map-6256@reddit
Better yet, have better interviews that actually assess skills used on the job, with the tools available on the job.
philip_laureano@reddit
That's why when I do my interviews, I often show my hands and say that this is me without googling, checking stack overflow, or asking ChatGPT.
They're interviewing a human, and if my responses are half baked, that's me, and hopefully, it makes sense.
If they're too obsessed with my flaws, it's a red flag, and if they respect me for giving it a red hot go without looking anything up, then I'm in the right place.
The interviewers that look for perfect answers will get turned off, but they're not the ones I'm looking to work with
bdtechted@reddit
And here I am still unable to pass a live coding interview meanwhile those who used AI could.
PoopsCodeAllTheTime@reddit
I'm with ya brother 😔
taylor__spliff@reddit
Very unlikely those people are passing.
TheKleverKobra@reddit
I mean, if hiring teams are expecting candidates to be Red Bull-addled leetcode champions, it seems like this is just a natural reaction to a broken process. You are seeing recommendations of 6 months of interview prep; it's completely crazy for working people. So yeah, when I run into a bullshit algo question in the wild, I will definitely be looking into ways to use AI to get past that round. I hope all candidates do the same.
Our profession is continually evolving, yet we are asking candidates to write sorting algorithms that were perfected by other nerds 20-30 years ago to see “how they problem solve” or “if they can code”. It's completely asinine; no dumber statements have ever been made. We need to hire people who can execute, function in a team, and contribute positively to culture. Why don't we throw them in shark-infested waters or have them lay some tile?
AthFish@reddit
Our company's CTO is literally looking into vibe coding and encourages it… why shouldn't that be part of the interview?
teddystan@reddit
Hot take even within my company but I encourage experienced (senior+) interview candidates to use AI tools if they want.
If your interview process can be made trivial by AI tools, maybe consider if the process could be improved.
One thing I find AI really struggles with is making continuous improvements especially for things that don’t have a single correct answer.
For system design interviews, if they can discuss tradeoffs with me smoothly, great. But if it’s clear they’re just waiting for a ChatGPT response, then I might as well hire ChatGPT instead.
Antares987@reddit
When I’m interviewed, I talk the interviewer’s ear off and go back to the 1980s. No way AI can match my level of speed. If you’re interviewing H1Bs, the biggest issue I’ve seen for decades is the person I interview not being the person who shows up.
summerloverrrr@reddit
Ok how can you tell?
Moloch_17@reddit
Just don't do coding interviews. If you can't get a good bead on someone's technical capability from a ten-minute conversation, you shouldn't be in a position to hire anyone.
bytebux@reddit (OP)
Oh no, it's all very obvious to me, or maybe I just haven't come across a pro cheater yet. I feel like only truly knowledgeable devs would be able to fool me anyway, since they'd have to answer some tough questions on the fly. I know this makes the coding portion just a vessel to get to the knowledge-based questions, like having them explain why the architecture or optimizations are the right choice, and I honestly wouldn't be opposed to a better interview strategy than leetcode-style. Maybe this AI push will cause everyone to rethink it.
Moloch_17@reddit
The AI issue here is the same one that education is having. The entire system hinges on methods of proving you learned something that the AI can generate in seconds. How the education systems adapt to that will be similar to how these kinds of interview processes adapt. I find it pretty interesting honestly.
bytebux@reddit (OP)
We're now fighting a battle that we'll ultimately lose. Wall-E is waiting for us around the corner 😭
D_D@reddit
/s?
Moloch_17@reddit
Absolutely not.
D_D@reddit
How many people have you hired in your career?
Moloch_17@reddit
A few dozen probably.
MeweldeMoore@reddit
I encourage it. I give candidates a hard problem and encourage them to use their preferred tools to solve it.
crummy@reddit
adjust your interview questions. "how would you efficiently sort a pile of containers in the napalm-making factory"
Current-Fig8840@reddit
You really don't know. Also, some of you are just paranoid like crazy.
bytebux@reddit (OP)
If the candidate is good enough to fool the interviewer, they probably deserve the job anyway.
CombinationNearby308@reddit
I conducted an interview this week where the candidate's video froze twice and came back less than a minute later. Guess how many simple coding questions we asked? Every time the candidate came back, they came back with an apology that their roommate was using the microwave, which drops the bandwidth, and oh yeah, by the way, here's the correct solution to your question.
KhellianTrelnora@reddit
Wait, why not?
With so many shops mandating that your day be spent babysitting an agentic AI coder, isn't showing that you already know how a good thing?
/s, maybe, but only kinda?
monsoon-man@reddit
I had a couple of such experiences this month. Left a bad taste in my mouth.
Both resumes were "impressive", with GitHub repos full of professionally written code. I was like "wow", look at this fresher. I wrote such terrible code when I was a fresher. I told my manager that the code seemed to be written by AI. "What's wrong with using AI to write code? It's just another tool," said my manager, who forwarded both resumes.
During the interview, there were a lot of red flags. I asked the candidate to write a "reverse lookup" for a Python dict after disabling the AI. He started writing code that was half JavaScript! I asked what `Object.keys` means; he drew a blank. Fortunately my manager was also on the call. I thanked him for his time and went for a stroll with my little daughter for a while. Mangoes are really nice this year!
The other candidate was only slightly better.
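(For reference, the "reverse lookup" question above is usually taken to mean finding a key by its value. A minimal Python sketch, assuming that interpretation -- the function names here are illustrative, not from the thread:)

```python
def reverse_lookup(d, target):
    """Return the first key in d whose value equals target.

    Raises KeyError if no key maps to target.
    """
    for key, value in d.items():
        if value == target:
            return key
    raise KeyError(f"no key maps to {target!r}")


def invert(d):
    """Build an inverted dict (values become keys) for repeated lookups.

    Assumes the values are hashable; with duplicate values, later keys win.
    """
    return {value: key for key, value in d.items()}
```

Either version is a reasonable answer; the inverted-dict variant trades O(n) setup for O(1) lookups afterwards.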
Whoz_Yerdaddi@reddit
Supposedly there's now software that puts an overlay on your screen and takes microphone input. Don't ask me where, as I don't know. The cut-and-paste ones are definitely out in the wild.
Cheating on interviews has been going on for a while with brain dumps, etc. It's just become rampant in highly competitive geographies.
pancakemonster02@reddit
We tell people to use AI tools in our interviews. We expect it.
greengoguma@reddit
Shhh. 🤫 Don't let them know
ChuyStyle@reddit
Honestly, it's 2025. Why are you not using AI tools? It's basically the same as searching Stack Overflow and sifting through garbage answers. As the interviewer you should be able to see the developer's ability to work with an AI tool in order to produce effective code.
Ffdmatt@reddit
Then spend hours rejecting AI code from merge requests?
If they can't explain the code without asking AI, they're not very useful. Especially in an interview. Who cares if you can spit out code? They care how you think.
ChuyStyle@reddit
How they prompt and mold the code to match your requirements tells a lot about their thinking. I understand the point of view in terms of cheaters but if we step back a bit we can see a larger evolution in software development. How we test this new skill set is important.
Hot-Sheepherder301@reddit
I know many that have successfully cheated their ways into jobs
Javeess@reddit
I caught 3 guys in interviews. I don't say anything; they just go in the DNH pile.
iBN3qk@reddit
I’d rather ask harder questions and see how well they can use the tools.
Affectionate-Tea3834@reddit
That's been happening for a while now. Cheating has become really easy nowadays. Try some anti-cheating assessments?
steampowrd@reddit
I can say for a fact you don’t always know. 😂