Software job posts barely mention AI
Posted by davidbasil@reddit | ExperiencedDevs | 145 comments
90% of the local software job postings barely mention AI in their descriptions or requirements: no ChatGPT, no Claude, no agentic workflows, no LLMs… nothing.
There are some AI/ML openings, but they're separate from standard web development roles and other general software positions. And even then, they're dwarfed by traditional .NET/Java/PHP jobs.
It feels very strange compared to what we hear online: “learn AI or you’ll be left behind,” “AI is transforming everything,” and so on. It seems that companies don't look at it like that.
And don't tell me that job descriptions lag behind reality. Companies have been using AI to filter out candidates for years, and they can't put the word "AI" into their job requirements?
I'm in Tbilisi, Georgia (Eastern Europe).
raven_785@reddit
This is going to sound harsh, but AI is being used by US engineers to do the work that would previously have been farmed out to Eastern Europe and other low-cost engineering talent centers. You shouldn't be seeing job listings requiring AI experience. You should just be seeing fewer job postings in general over time.
Ok-Hospital-5076@reddit
Realistically, AI is used to augment jobs. It is not as transformative as it’s sold. That’s why companies are still interviewing on core skill sets and not AI. Working with AI is such a low skill that it’s not even worth interviewing on. Your “prompt engineering” skills are only as good as the model and harnesses.
I recently interviewed for a “Senior AI Engineer” role at a big company. The first round was a LeetCode-style online assessment.
obelix_dogmatix@reddit
Nope. At Nvidia we have largely moved away from leet code style nonsense. There is a strong directive not to test candidates on syntax, and to focus instead on design/architectural questions.
Responsible_Try_1312@reddit
Completely agree. DSA isn't obsolete; it matters more now. When AI writes the code, you still need to question it, understand the tradeoffs, and own what ships. Syntax is the easy part. Designing and reasoning are the job.
nsfw_shtuff@reddit
??? Leet code style questions are the opposite of syntax testing
metaphorm@reddit
they're more like "have you specifically studied leetcode problems" rather than "do you have good software engineering fundamentals". it's low signal and is more testing whether or not the candidate did a leetcode prep grind than anything else.
Kultur_Cigany@reddit
Yeah lol I remember like 15 years ago my friend and I were interviewing for the same position on the same day. My friend was scheduled 2 hours before me. He told me the interview was writing a couple of binary tree related algos on a white board. I looked them up 30 minutes before I went and passed (we both did).
Then of course the project was doing bullshit on a legacy java app... :D
Main-Drag-4975@reddit
They’re both rote memorization checks with minimal predictive power for on the job performance
Prior_Section_4978@reddit
Leetcode doesn't "test candidates on syntax", what are you talking about ?
davidbasil@reddit (OP)
I think it's easier to filter out weak candidates by leetcode tasks rather than system design which can be memorized. I might be wrong though.
Fun_Hat@reddit
Every candidate that gets filtered by our interview process gets filtered out on system design. But we're not just doing "design a URL shortener" either.
GlobalCurry@reddit
People study for leetcode like it's a college exam, but it actually uses a totally different skill set and mental muscle than you apply on the actual job at most companies.
Unhappy-Ladder-4594@reddit
Yep. The main value that LeetCode serves as a screening tool is screening out people that haven't specifically prepared for LeetCoding.
obelix_dogmatix@reddit
I didn’t mean system design. I was talking about design of algorithms.
davidbasil@reddit (OP)
Ah ok. Then I think the main tool there would be pseudocode, right?
obelix_dogmatix@reddit
Not really. I don’t need to see any code, pseudo or not. I am just listening to the candidate go through steps on how they would go about designing an algorithm for a given question. Don’t care about any code or data structures. Only focus is to understand how the candidate is thinking.
Wonderful-Habit-139@reddit
There are a lot of bullshitters that sound really good when talking but are really bad at applying. Writing code that compiles and runs is a very useful bar to test for.
thekwoka@reddit
But what processes filter out people before they get to that point?
Since there are definitely loads of applications getting filtered out. They aren't doing these interviews with every resume submitted.
obelix_dogmatix@reddit
Resumes. Also, me and a lot of my colleagues interview on a rolling basis. 1 candidate at a time. It has been years since i interviewed multiple candidates at the same time. So if a posting has 100 applicants, we start with the first promising candidate on paper. It’s slow, but does the job.
thekwoka@reddit
so a notoriously poor indicator?
obelix_dogmatix@reddit
how so? has worked great for me and my colleagues. If resumes are a poor indicator for someone, respectfully, they don’t know what they are looking for or just don’t know how to weed through resumes.
410_clientGone@reddit
then why not just ask them to write code and run against test cases. what's pseudo achieving that leetcode style isn't? we are back to square one
lab-gone-wrong@reddit
Leetcode is generally memorized too
System design is not as commoditized as reversing a linked list now that LLM coding is the norm
boringfantasy@reddit
Yep. Got a couple jobs because I had just seen the leetcode question the night before.
UXyes@reddit
We're doing this as well. LLMs are great at handling the syntax, implementing known patterns, and other tedious aspects of programming. But they do all of that so fast that foundational skills like architecture, system design, and CI/CD pipeline management all become much more important.
Tcamis01@reddit
This should have been the case all along
davidbasil@reddit (OP)
Yes, I have the same feeling. Nobody will ever hire for Claude or ChatGPT prompting skills; that's kindergarten-level stuff.
sassyhusky@reddit
Yes, and you being 'behind' for not using these is literally just hype to sell tokens. You don't see any of those posts/comments in locally hosted LLM groups. In fact, if anyone is behind, it's people who became over-reliant on these extremely (and increasingly) expensive proprietary tools. Yes, I believe AI will replace more and more jobs, including devs, architects, managers, CEOs, your cleaning lady, your car mechanic, everyone eventually, but spending $200 a month on CC right now won't help anyone avoid any of it.
ericmutta@reddit
shhhh don't say this part out loud :)
sohang-3112@reddit
Yeah but they might if you know RAG, vector databases on Azure etc. very well - basically the surrounding stuff that's popular right now.
CorrectPeanut5@reddit
Sure they will. I've seen job reqs already ask for it. You have some old legacy system that needs to be modernized? The AI skills for analysis can and do save a lot of time.
SnugglyCoderGuy@reddit
Typing "AI please tell me how this program works" is not a skill.
CorrectPeanut5@reddit
If you think you can take a slate of 60 COBOL files and just say "AI please tell me how this program works", you will be sorely mistaken.
xt1nct@reddit
I'm about to take over some legacy stuff and will definitely ask Codex/Claude to tell me how it works.
It's not like it should be an issue. AI has read plenty of books and posts about old languages.
CorrectPeanut5@reddit
I've spent the last many months doing just that. People don't realize how much AI depends on well named functions/methods/variables to understand the logical context of more modern languages.
You get some COBOL and then tell it to do the call chain analysis on J0XI1245 (and everything is named with that kind of gibberish). The reality is that the analysis will chain into 20-40 other obtusely named components, plus a bunch of variable names that conform to some gibberish internal naming scheme someone long since dead or retired came up with. The system prompt for those skills is 2-3 pages of domain knowledge and company conventions.
Then you run into the issue that the code is so verbose that the context fills up way quicker than with modern stuff, and the tooling really wasn't optimized to handle that kind of thing. So the skills and prompts you write have to take compression approaches into account, and minimize the micro-decisions involved in doing that compression.
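To make that concrete, here is a rough, hypothetical sketch (in Python, not the actual tooling described above) of the kind of pre-processing involved: statically extract the CALL targets and prepend a naming-convention glossary before handing the source to a model. The file name, prefix meanings, and truncation limit are made up for illustration.

```python
import re
from pathlib import Path

# Hypothetical glossary mapping internal naming-scheme prefixes to meanings.
# In practice this is where the "2-3 pages of domain knowledge" ends up.
CONVENTIONS = {
    "J0": "nightly batch job entry point",
    "XI": "external interface module",
}

# Static COBOL calls typically look like: CALL 'PROGNAME' USING ...
CALL_RE = re.compile(r"\bCALL\s+'([A-Z0-9-]+)'", re.IGNORECASE)


def extract_calls(source: str) -> list[str]:
    """Return the statically visible CALL targets in a COBOL source file."""
    return sorted(set(CALL_RE.findall(source)))


def build_prompt(program: Path, max_chars: int = 20_000) -> str:
    """Assemble an analysis prompt: glossary + pre-computed call list + source."""
    source = program.read_text(errors="replace")
    calls = extract_calls(source)
    glossary = "\n".join(f"- prefix {k}: {v}" for k, v in CONVENTIONS.items())
    return (
        "You are analyzing legacy COBOL. Naming conventions:\n"
        f"{glossary}\n\n"
        f"Program {program.stem} statically CALLs: {', '.join(calls) or 'none found'}.\n"
        "Explain what this program does and which calls matter for the call chain.\n\n"
        # Crude truncation so verbose COBOL doesn't blow the context window;
        # real tooling would need proper chunking/compression instead.
        f"Source (truncated):\n{source[:max_chars]}"
    )


if __name__ == "__main__":
    # Hypothetical file name following the gibberish naming scheme above.
    print(build_prompt(Path("J0XI1245.cbl")))
```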
xt1nct@reddit
I appreciate your perspective. I will take notes as I start peeling the onion. I was hoping it would help, but some of the table names are in fact gibberish.
Wonderful-Habit-139@reddit
What do COBOL skills have to do with prompting skills? The former is actually difficult and has to do with your programming skills, not prompting skills.
Chuu@reddit
Asking good questions is definitely a skill though. Very closely related to how useful AI tools are to an individual.
Opt1m1st1cDude@reddit
Asking good questions is table stakes for a good software engineer. The whole point of this field is to be able to solve ambiguous problems.
gjionergqwebrlkbjg@reddit
And yet a good half of our candidates fail their system design interviews within the first 10 minutes precisely on that issue.
Ok-Hospital-5076@reddit
AI proficiency is not a standalone skill. A programmer writes effective prompts because they understand logic, just as a quant writes them because they understand data. Your core domain expertise is what makes AI output better
Nailcannon@reddit
I had several interviews where I did well on the technical parts, but didn't answer emphatically enough when prompted about my AI usage. I don't use it to 10x every workflow. It's a tool and I typically use it for the boilerplate time sucking tasks like writing unit tests. I got refused because of my AI interview answers. The only reason I seem to have finally gotten a job is because I found a manager who was equally skeptical about the current AI environment.
sohang-3112@reddit
I have been using Claude Opus - besides test writing, it's also helpful for things like debugging a bug given a log file.
montdidier@reddit
I detest Leetcode. Actually, I quite enjoy the problems, but the way it is used as a screening tool I detest. The fact that it's so disconnected from my day-to-day job just creates artificial friction.
takeda64@reddit
I guess it is a double-edged sword. At my place it makes bad developers worse.
They put up HUGE PRs with a large number of lines containing a lot of nonsense. When you ask why it is so big, they argue that it can't be done more simply.
sohang-3112@reddit
Automated reviews with Copilot on GitHub may help to some extent (they have to resolve all of its comments first - hopefully that makes them pay some attention to PR contents).
SplendidPunkinButter@reddit
“Prompt engineering” == “being good at hacking the shitty tool to make it seem less shitty than it actually is”
A skill of questionable benefit in the first place. Show me someone who's gung ho on prompt engineering and I'll show you someone who thinks a 20 minute programming task would normally take three days.
UXyes@reddit
AI is an amplifier, not the silver bullet it’s being sold as. If you have good practices built on a good foundation, it’s amazing how fast you can move with it. If you don’t, it’s just as amazing how fast you can fuck everything up.
AchillesDev@reddit
There are definitely differences in using them, but it's like being able to use a debugger well or whatever. You can pick them up pretty quickly and if you have good code review/code smell detection you have very quick feedback on how well you're using those tools.
PikachuPeekAtYou@reddit
Because prompting isn’t a skill
Elctsuptb@reddit
Maybe because the companies heavily using AI aren't as likely to be putting up job postings in the first place since they're replacing employees with AI
Timely-Maybe-1093@reddit
I think the concept of arming juniors with AI and increasing productivity is dying. I think now they want seniors again
Impressive_Chemist59@reddit
I would say my CTO announced we are going 100% AI coding by the EOY and brags about how designers and PMs use AI to prototype solutions.
Fluffy_Molasses_8968@reddit
I think AI tools are becoming something people expect you to use, even if the job post does not say it directly.
The real skill is still reviewing the output. A model can produce a lot of code quickly, but someone has to know whether it fits the codebase, handles edge cases, and is worth maintaining.
DeterminedQuokka@reddit
I just checked my company's job descriptions.
We are super AI-pilled. Execs talk about AI constantly. We have a token minimum that everyone has to hit. We cite the number of AI PRs as success stories. We have AI interviews.
There is a single bullet point about AI in our job descriptions.
GlobalCurry@reddit
I really don't get the token minimum thing. It feels like when AWS/cloud stuff was new and people were building microservices for the sake of microservices, and then we ended up with the FinOps role because companies realized they were overspending on supporting tons of microservices.
Less-Fondant-3054@reddit
As a microservices specialist it is exactly that. Except unlike microservice architecture the usefulness of LLMs is closer to the blockchain fad than microservices. Microservices done right really clean up and speed up development and enhance product reliability. LLMs are like blockchain - really trendy but not actually very useful
potatolicious@reddit
Yeah... I'm pretty "AI pilled" but token minimums just screams AI psychosis has taken over at the company. Burning tokens for its own sake is not a good thing!
I'm generally of the opinion that we should not change the goal metrics around AI-assisted development. Things like AI PRs and such are interesting intermediate numbers to look at but not actually the goal. The goal is shipping good product and always has been.
Making these types of numbers the goal is just wankery about process over outcomes.
thekwoka@reddit
Maybe I guess it's a kind of strange incentive to have the bot like..really "think" through the problem and double triple check itself?
GlobalCurry@reddit
Oh yeah, that makes more sense. I'm thinking of it from those stories I've heard of token leaderboards and connecting the two behaviors when they're not connected.
thekwoka@reddit
I mean, I doubt it is that, but that would be like... one way this kind of thing "incentivizes" using the AI responsibly... but it also incentivizes just throwing AI at problems it sucks at for a long time...
Less-Fondant-3054@reddit
The hype train has crashed. The prediction of 10xing with LLMs has been proved very untrue. In reality for all but the most trivial of things, which can be actually 10xed by things like templates, LLMs are a 1.5x accelerant at best and often a sub-1x, i.e. a decelerant.
johny_james@reddit
My experience has been completely different since I'm looking for Java positions; there are AI, React, Next.js positions like everywhere you turn.
CodelinesNL@reddit
I "taught" our developers how to use Claude Code effectively in a day. It's just a tool. It's powerful when used correctly, can hurt you when used incorrectly, but in the end it's just a tool. And not a complex one.
We always hired for software engineering skills, never for your ability to "write code" or use a specific IDE.
In addition; tools change, engineering fundamentals don't. Who knows what tools will be the best in class next year? But how we build software has not changed fundamentally, we now just have new tools that do a lot of the boring stuff for us.
johny_james@reddit
And how do you test those?
CorrectPeanut5@reddit
This is where I'm at. I know plenty of devs who code well and could pass a leetcode test, but are totally clueless when it comes to the steps of resolving issues like performance bottlenecks, dealing with eventually consistent data patterns, and horizontal scaling edge cases.
UnderstandingDry1256@reddit
Because this is baseline. Nobody writes that you're supposed to know how to use a code editor or the CLI haha.
kagato87@reddit
Companies have learned that being too vocal in AI adoption has consequences.
Announcements of AI initiatives have a remarkable correlation with dips in stock prices... And advertising an AI heavy role is likely to draw applicants that can basically vibe code and that's it.
I built a tool this week with heavy AI assistance. The tool works great. The code smell is so bad you really don't want to even look at the file structure...
Proper_666@reddit
The reason most JDs don't mention AI is that companies genuinely can't articulate what "good AI usage" looks like as a hireable skill. They know they want it, they just don't know how to screen for it. The ability to direct AI with architectural intent, catch when it's confidently wrong, and know when to stop prompting and start thinking, that reads identical to "senior engineering judgment" on a job description. So they hire for engineering judgment and assume the AI part follows.
That's not a bad bet either. Domain knowledge to evaluate AI output matters far more than prompting technique, and domain knowledge is exactly what those "5+ years Java" requirements are proxying for.
shozzlez@reddit
I mean, job postings don’t list “IDE” in their requirements either.
RequirementExtreme89@reddit
Everyone can use AI, it’s in no way a differentiator. It’s a bit like putting “outlook” on your resume.
montdidier@reddit
Consuming AI isn't really much of a skill.
overzealous_dentist@reddit
bringing the necessary context - and only the necessary context - to an AI is a new skill
montdidier@reddit
That's not a new skill. That's a basic skill that developers already have.
Ambitious-Garbage-73@reddit
Was on a hiring panel last month. We asked candidates about AI tools and got basically two kinds of answers.
Group A did the "I use Copilot daily it makes me 10x faster" thing. Couldn't name a single time it was wrong. Couldn't describe what they do when the output doesn't make sense. Just vibes.
Group B gave me a specific bug where AI led them in circles for an hour and they had to step back and actually think. One candidate described a PR where Claude suggested a "fix" that would've introduced a race condition. He caught it because he understood what the code was supposed to do, not because he's good at prompting. The AI didn't save him. He saved himself.
We hired from Group B obviously. But here's the thing that bugs me: there's no clean way to put "must know when AI is bullshitting you" in a job description. It sounds stupid written out. Like you're hiring for cynicism. But it's genuinely the skill that separates devs who ship broken AI code from the ones who ship working systems.
The job postings you're seeing without AI mentions are probably just being honest. The ones with "AI/LLM experience required" are the ones I'd be suspicious of. Either they don't know what they actually need or they're about to replace half their team with vibe coding and call it a transformation.
church-rosser@reddit
How about, "Must know when AI is bullshitting you"
xMisterSnrubx@reddit
So glad I just retired. I know personally of companies where they are already getting AI to detect and pick up a JIRA ticket, implement the changes, and create a PR. No dev intervention; in some cases another AI tool reviews the PR and approves it. They have a stated, enforced goal of the BA/Product folks driving AI coding from their stories, no devs. Insane.
church-rosser@reddit
Insane indeed
resurreccionista@reddit
I have been asked how much AI I used at my last position by almost every company I’ve interviewed with. And almost all of them mention they use it or even advertise as “AI-driven”
sogo00@reddit
I think you are seeing a few things:
Connect_Detail98@reddit
Yeah, I don't care if someone doesn't know how to use Claude as long as they have the core skills for the role. Anyone can configure and learn how to use Claude in 2-5 days. Those core skills take years to hone.
Agents are pretty much prompts thanks to the effort companies like Anthropic and Cursor are putting into their products.
micseydel@reddit
Is there any evidence yet that these things are worth their cost? They're still being subsidized, the marketing is still heavy, and people can't articulate clear and reliable use cases.
max123246@reddit
In my day to day it's definitely worth it. Prototyping new designs is a lot easier when I can look at actual code sketches instead of writing it out or thinking in my head
ep1032@reddit
I just had to sit through a company meeting about the successes of our new Agentic engineering team. The two engineers on the team are currently the stars of the department. They're nice guys, but I've seen the code from one of them pre-AI and it was a complete trainwreck that two separate teams have rewritten from scratch in order to make sense of it. And the other guy is well liked by my current team, but is quietly known as the guy who nearly took down production every week.
Right.
So during the meeting, upper management lauded the fact that one of these guys closed out, apparently, 20 bug tickets in a single day last week using agentic development flows. That's fantastic!
Apparently no one was allowed to ask why the hell a brand new, greenfield project, already has so many bugs that a developer on the team (THERE ARE ONLY THREE DEVELOPERS ON THE TEAM) can close out 20 bugs in a day.
Sigh. AI is powerful. But the dishonesty around the technology is exhausting.
max123246@reddit
Yup, I'm exhausted by the AI hype bros too. It's a dangerous tool that makes you think it's more capable than it really is. I've definitely fallen into traps using it before where it just led me down a very stupid path where if I paused and thought for myself for a bit, I'd have figured it out myself.
I'm not gonna say I'm perfect at using it, and I definitely understand why people would rather wait or stick to their tried and true ways. It's definitely not worth the valuation VCs are giving it, but I do think it lets me focus on the problems I care most about and the parts of the project that need the most scrutiny, while the things I just need "good enough", AI can do. I'm on a small greenfield project right now with not enough staffing, so it's like the best case scenario. When I was doing mostly bug fixes and incremental improvements for stable systems it was kinda useless.
Frolo_NA@reddit
CRC cards are a fast and easy way to do this
thekwoka@reddit
idk, I feel like at the point that I've done that much, it's easier to just finish it myself than try to get the AI to do it. It's such a major context-shift distraction to change to "talking" to the bot.
max123246@reddit
I mean to be fair, I don't pay for my AI usage, it's provided by work. I wouldn't pay for it myself, lol. But since it's already provided it's not too hard to send it 2-3 sentences and see if it can get something working.
The LLMs are definitely better now than they were last year. I used to never generate code with LLMs and only use them for querying a codebase
thekwoka@reddit
It's the waiting for responses and stuff.
Such a flow breaker.
micseydel@reddit
How specifically did you determine that it's worth it? Did you measure it? How?
max123246@reddit
It probably saved me 4-5 hours of writing the design document last week, since in the same session where I was sketching out the code, I could ask it to emit markdown and then start telling it to edit different sections or word things differently.
So I got to spend that 4-5 hours exploring different design approaches instead. Last time I wrote a design document, I wasn't nearly as thorough
And to be fair, for technical writing, I find the AI can explain it with greater clarity than I can. I love my run-on sentences which is great for tone and voice but not so great for people understanding what I meant
In my mind it feels like shaping clay instead of building brick by brick. You can get a rough shape faster, but it's pretty hard to get any specific details
I do think this work is its best case scenario though. Most of the time it doesn't make me faster and sometimes makes me slower
micseydel@reddit
https://metr.org/blog/2025-07-10-early-2025-ai-experienced-os-dev-study/ via https://www.reddit.com/r/ExperiencedDevs/comments/1lwk503/comment/n2f16hx/
If you measure it, I'd be curious, but claims without measurement in 2026 are, to me, just further evidence there's nothing to this stuff.
hardolaf@reddit
In my day to day, every minute possibly saved by AI has been wasted on keeping up with constantly changing AI tooling, corporate policies, configuration changes, upgrading tools, etc. Oh and dealing with the slop that my coworkers ask me to help on. I lost half a day recently because someone had AI write a test that was 99% correct but didn't actually work even though it looked like it should because no one ever bothered to read it before asking me for help on it (it was a concurrency issue).
CSAtWitsEnd@reddit
Yeah, you'd think if basically any company was making substantial money with AI, they wouldn't be able to shut up about it. We'd be seeing TONS of articles about the actual finances and wealth being generated. I don't think we're seeing that.
sogo00@reddit
"these things" = AI coding tools?
short answer: Yes.
Longer answer: there is, as usual, a middle ground between "not using them at all" and non-tech people vibe-coding everything.
I think they shine at simple routine tasks and are much better on greenfield projects; a lot depends on the architecture and risk appetite of the org (don't vibe-code your CC tokenizer, but if an internal dashboard has shitty UX - who cares, if it saved a week of engineering time).
They still break at large, complex legacy code bases, but even in that environment they accelerate the "side quests" a developer has all day long.
Most orgs that do not use them at all are either led by people who are afraid of the unknown, or blocked by internal bureaucracy that is unable to formulate processes and policies. In both cases, most of the time people then use these tools "underground".
oditogre@reddit
Aside from my other post in the thread, this is the other big one: Stuff that you weren't going to do otherwise. Those "Nice to have" things that sit in backlog hell for eternity because there's always something more valuable your team could be working on instead.
Half-distractedly prompting Claude during meetings to build a 'good enough' version of that for you is still a win, and especially if it's not external-facing, "good enough" is good enough.
oditogre@reddit
I've found it to be valuable for the stuff 'around' the code - keeping project docs up to date. Triaging bugs. It's not half bad at writing unit tests. Refactoring / rearranging existing code. Upgrading dependencies and fixing broken builds. If you know how to build your project but aren't intimately familiar with your company's / team's CI / Dist systems, it can get you a good 80%+ of the way to having a working pipeline, far enough that you can get the gist of it and edit it to be correct from there rather than starting from scratch.
Connect_Detail98@reddit
It personally helps me a ton. It saves me a lot of time debugging stuff, understanding open-source code, writing boilerplate, catching typos or minor mistakes in my code...
I think there are 2 ways people are using AI today:
As a tool to enhance performance.
As a tool that makes decisions and delivers software end to end.
The second one is pretty bad. It leads to spaghetti code that's really hard to manage. AI still lacks human awareness and decision making. It's like a very smart human that makes very stupid choices. That's why it shouldn't be allowed to tackle large scale problems. It needs to be fed small chunks with clear instructions.
I think the main reason is that AI doesn't know the history of a project or the context of the team the same way a human does. For example, an AI doesn't know how many maintainers are on the team and what their skill level is, in order to evaluate whether the complexity of a piece of code is acceptable or not. It will just deliver the code, and then a human should judge that sort of stuff... unless you want to have a document in the repo describing your team in detail, which could make someone feel bad.
Frolo_NA@reddit
this is one reason i'm not rushing to learn every little trick about these tools now.
the time investment to learn them seems relatively small, and better innovations and workflows will emerge over time. i can just wait a while and learn the improved version
metaphorm@reddit
coinbase is obviously overstaffed and cutting back due to contraction of the crypto market in general. they're AI-washing their layoffs like everyone else. cynical, dishonest, and shitty.
Dueeeei@reddit
I've noticed the same thing. Most posts still want traditional stack experience. The gap between LinkedIn AI hype and actual hiring requirements is pretty telling. Companies will catch up eventually but right now it feels like two separate conversations.
moonmop@reddit
I’m having the opposite problem where every single company that reaches out to me is a Series A-C AI startup and I have zero interest in any of them lol. For context I live in NYC and work in an automation startup (some AI in the product but not an “AI startup”)
metaphorm@reddit
developing skillful use of agentic coding assistants is a worthwhile thing. this is not the same thing as becoming an AI researcher or engineer.
NickW1343@reddit
They don't mention it mostly because it's like not seeing StackOverflow in a job ad 5 years ago. It's assumed it's a tool you'll be using. Same as a mechanic job never mentioning knowing how to use a wrench.
ILikeCutePuppies@reddit
Companies are not listing the particular IDE you must know either, or that you need to speak English (or whatever language they need).
It's a tool, and while there is some skill in using it, they are far more interested in whether you have the specific skillset for the role.
GlobalCurry@reddit
I interviewed with a consulting company a few weeks ago that said they don't use AI because their clients might start questioning why they hired them instead of just using AI themselves.
Unhappy-Ladder-4594@reddit
I mean, they're not wrong... But they're probably selling dubious bullshit, as is often the case with consultancies.
mickeymartooni@reddit
Thinking non-engineers can code using AI will never not be funny to me
vocal-avocado@reddit
lol
Aleks_Zemz_1111@reddit
A job description doesn't list the operator's tools, it lists the architectural output they are willing to pay for. I've spent 22 years in industrial s*it jobs, roofing, bed factories, and now running a multi million pound Gietz ROFO 870 and management has never once asked if I use a specific wrench or if I've leveraged a pneumatic lift. They pay me to ensure the foil stamps perfectly every time.
You're waiting for companies to put AI in the requirements because you still have the mindset of a corporate janitor looking for a checklist. Companies don't mention AI for the same reason they don't mention using a keyboard. It’s an expected efficiency of the modern operator. If you're waiting for permission to use the most powerful leverage tool in existence, you've already lost the architect's seat. Stop scanning for tool names and start building a distribution engine that proves you can out-deliver a team of 10 manual syntax laborers before the next redundancy wave hits.
Chuu@reddit
Adoption of code assist has been so rapid that at this point I feel most places assume devs use and are familiar with the tools.
Considering more and more shops are providing these tools during interviews to specifically assess skills using them, I think a lot of people are in for a surprise if they completely eschew them.
Zookeeper187@reddit
If they are not using it, they are behind and inefficient.
As always, with high hype and AI hate, the truth is somewhere in between.
PastaGoodGnocchiBad@reddit
Or they prioritize understanding what they are doing over raw output, as an engineer should. Not all domains value output quantity over understanding of the code base.
Zookeeper187@reddit
You can have both
PastaGoodGnocchiBad@reddit
By reviewing the code completely after each iteration. You still lose part of the understanding, and you still use about as much time as before, because reviewing from scratch and understanding takes about as much time as coding.
Not every use case needs that level of care (I think it's the minority), but when it is needed, you make sure the thing you are responsible for behaves exactly as it should.
Melodic_Crow_3409@reddit
My company has Claude and Copilot licenses for all of us. I use Claude to automate certain drudgery tasks, like writing tests. If I took a gig tomorrow where that was gone, I'd be completely fine.
I think we should all still be able to code.
skidmark_zuckerberg@reddit
Every job I’ve applied for and interviewed has explicitly asked me about my experience with AI tooling. Every recruiter I pre-screen with also asks.
From my experience in the last month, it seems like a lot of companies are asking for it and expect you to be competent using it. I used Claude for 12 months at my prior job and didn't even think to put it on my resume until a recruiter hit me up and, while talking, asked if I had used AI, to which I said yes, and then he said "okay can you add your experience with it to your resume and resend it to me?". I then realized it was a bigger deal than I thought.
Also, after adding more about my AI workflow experience to my LinkedIn, I got a decent uptick in recruiter messages as well. Not sure why your experience is so different, but at least in my area of full stack Typescript work, these places are really interested in people who know how to use it.
Plenty-Copy-15@reddit
What exactly did you tell them about your experience if I may ask? I wouldn't really know what to say other than "I used Copilot/Claude, wrote skills, wrote an AGENTS.md" or stuff like that
skidmark_zuckerberg@reddit
Yeah, I essentially just say something similar; that's really all it is. Maybe glaze them a bit with "policy first Agentic workflows", which is essentially exactly what you said already. I also mentioned use of Claude Plan mode to define and refine a plan file, which is also a part of the agent workflow.
I worked on the RAG AI chat at my last company and talk about that a little as well.
dalmathus@reddit
Just develop a few repos of agent skills you built yourself for a repeatable, repo-agnostic task.
Use the create-skill from anthropic to do it. That will be enough for recruiters.
kevin_whitley@reddit
TL;DR: It Depends™
We asked candidates about their AI tool usage/workflow in interviews long before adding it to the job descriptions (honestly, I'm not even sure it's in there now). As others have stated, it's less a requirement than a signal. Folks that haven't even touched or played with it, and thus have no thoughts years after launch, suggest a lack of curiosity, for instance. Similarly, if you've never heard of Claude Code by now, you might be living under a rock.
The flipside is that some folks broadcast an utter dependence on AI for everything, and appear to waste far too much time automating workflows, agents, subagents, etc., etc. We have to worry that they may try to overengineer solutions within the work product, because they're showing evidence of doing that very thing in their own attempts at efficiency. That said, even this can just be a sign that they are early in their AI journey and haven't realized what a time waste some of these things are.
So basically, it's just an interesting conversation piece that gives us signals about how the candidate thinks, learns, etc - rather than any sort of real requirement.
roger_ducky@reddit
Using AI is still kinda expensive. So smaller companies, not in countries where investors are tossing money at people attempting to use AI, won't try until the costs get cheaper.
sassyhusky@reddit
Twitter and Reddit are an echo chamber. What peop… bots say here, has little relevance to reality.
MagnetoManectric@reddit
Yeah, I ended up unsubscribing from this sub in particular, as there was so much obvious bot posting in the comments of every thread, along the lines of "I'm a 15 YoE Staff Engineer, and I was skeptical at first, but since (name and version number of specific model) came out, I've been blown away, and if you're not using (name and version number of specific model), you are a dinosaur that is going to be left in the dust".
It's really nakedly, transparently silly to anyone who actually is experienced in this job and has been through umpteen hype cycles.
You can learn to set up Copilot to review your PRs and provide suggestions in a day. All of this agent swarm workflow guff reminds me of those guys who spent the entirety of the 2010s switching frontend frameworks and configuring their VIM plugins. They've just moved on to a new kind of faffing instead of doing their actual work now. These kinds of engineers are a loud constant in the industry, and they're now being super-amplified by a quadrillion dollars of cap-ex and an army of smarmy bots to push their hobby horses. It's all so tedious...
Ratiocinor@reddit
Yes, but (maybe this is because I was actually laid off this time, where I wasn't for previous hype cycles) it feels different this time.
It feels like people are actually buying into the hype this time or at least the project managers, CEOs, and hiring managers are anyway (and they are the ones that matter when you're looking for a job)
"It's not real it's just social media hype" yeah but if social media is indistinguishable from reality which it is in 2026 does it even matter? It may as well be real at that point
MagnetoManectric@reddit
Oh sure, there's more buy in to this cycle than ever before.
I've definitely felt like my real world experience of how LLMs are being applied/hyped at work is much more measured than you see on here... Like sure, we use LLMs at work, but we've not completely abandoned our processes to let Claude do everything, like a lot of comments around here have implied. It's been much more of an incremental process improvement.
ares623@reddit
They post here because they have no-one to talk to about it in real life. (psst the reason is because you sound like a twat)
Empanatacion@reddit
Totally. The grumpy AI hate on this sub bears no resemblance to the attitudes of the people I actually work with or know.
Crazy-Platypus6395@reddit
The simple reason is they have a tremendous amount of capital invested in this globally, and they're likely just now seeing the costs rise and pulling out of the "AI ONLY" mindset.
RandyHoward@reddit
Local markets can be very different than the average everywhere else. I'm fully remote, and my job searches are worldwide. I see a lot of mentions of AI in job listings.
You should treat AI as any new tech in this field. Be aware of it and what it can do for you. Even if you don't use it, you should have awareness of its strengths and weaknesses, it has plenty of both. You want to be able to implement any tech you need in a given situation, and if you're not aware of the tech then you may never pick the best tool for the job.
dalmathus@reddit
My company doesn't advertise it outside of the skills wanted section.
But I have heard that in interviews, if you don't have AI dev experience, you are just a hard no.
But at this point, if you don't have a basic AI workflow, that's kind of on you. You definitely need to understand how to set up some useful skills for at least analysis of a codebase.
Ysilla@reddit
I'd argue that hiring good devs is even more important now than it used to be. AI tools really are mostly a multiplier, hire a bad dev now, and you'll get a lot more bad code than you'd have a few years ago.
"learn AI or you’ll be left behind" is kiiinda true, but the difference between a good dev who spent a few days learning some broad stuff around Claude use, and one who spent a month going in deep won't be that huge. The latter might be able to write better tools that the former can use (like more advanced commands/custom mcps/hooks/all that), but without much productivity difference between the 2 in the end when it comes to actual code. I'm pretty sure we'll see a clearer split between those 2 roles soon.
I've seen teams where everyone wants to take on that 2nd role, and it's always a very non-productive mess where everyone reinvents everything... often even multiple times. Like I've seen a team with 6 devs that had 13 different code review commands in their "private" team plugin market a few days ago, and we already have a few at company level, and also some more specialized ones at tech-specific guild level.
AchillesDev@reddit
It's table-stakes now. You notice how job postings also don't talk about specific IDEs to 'have skills' in or debugging? It's expected in the vast majority of companies to be able to use these tools as part of your workflow. I think it's too early to assume proficiency, because a lot of devs are shit at using these tools and we're still figuring out best practices, but that's easily learnable if the resources are available.
Doing both hiring and IC work for my clients, I can assure you that they do. There is just no need to list as a skill "be able to talk to a natural language interface" - you should theoretically have some level of communication skills as an adult human. They don't know how to assess that skill nor that it is assessable to begin with (you should see the dogshit things twitter loudmouths produce), so they just assume you can use tools that are now ubiquitous in the industry.
preetishpreetish@reddit
My AI startup is looking for a full stack dev, remote work - please DM if anyone is interested.
davidbasil@reddit (OP)
which stack?
TheTacoInquisition@reddit
They don't mention IDEs, or email clients, or whether CLI github or a desktop client is used either. AI is just a tool. You can ask them at interview what their workflows with it are like.
kosmos1209@reddit
I live in San Francisco, have a job I dislike and looking for a new job. Every single interview I had asked how I use AI, and they are all looking for some sort of positive engagement with it. AI is pretty much a requirement at this point, and skeptics are not going to pass the initial screens.
What generally happens in SF tends to flow out to the world so it’s a matter of time your country will start to require it too.
DurianDiscriminat3r@reddit
Not the case in the US. Not a sight of AI/LLM in the job descriptions, but when I asked "how do y'all feel about AI assist?": "yeah we use them, go with the flow or get left behind, right". And this is a 50+ year old engineer!
ProbablyBsPlzIgnore@reddit
Tech job descriptions are weird. “Do you have Java 1.8 experience?” more than a decade after it was released. Or requiring 5 years of experience in a framework you can learn on the job in a week. You get this filtered through someone whose expertise isn’t technical, and they seem to treat it like a list of ingredients. No other sector hires like that. Hiring a surgeon, must have experience with a particular model scalpel?
proof_required@reddit
Not sure what kind of jobs you are looking for but in my case (ML/DS) almost every job asks for LLM. I have 10+ years of work ex in DS/ML and I find myself so outdated. In addition lot of these roles also expect you to be some kind of backend/devops person managing the infra required for running LLM based software.
Great-Gecko@reddit
I think most companies update their job posting descriptions very infrequently. I doubt the company I work for has added AI to theirs, despite using it internally.
Own-Football4632@reddit
I agree with the mix of it being shops that simply aren't moving on it as fast and shops that just assume it's part of the workflow without needing to list it as a skill. The barrier to entry for using AI in some coding capacity isn't that high, which is why whatever you consider a senior-level engineer still makes the difference: when the barrier to entry is arguably lower, the ability to express intent with precision and to do good review is what's valuable, and that comes from experience of some kind.
And I think we can face the fact that there are people who may be "behind", but there are also always plenty of businesses that can stay in some sort of stone age, because they don't need cutting-edge SaaS etc. to continue making a profit. Some profit-making non-tech companies just have the tech they need and don't feel the need to hyper-optimize it. It varies by location and industry; you never know what tech is still being maintained and/or how much potential red tape there is around employee tooling.
Those types of jobs aren't necessarily engineering-ly rewarding if you're a coding goblin fiend like me, though if I was paid well to do AI-less work for someone, I'm not sure if I'd actually care that much. Typing is typing, I know my main domains fairly well from AI-less experience, I'm over going above and beyond what's expected of me as long as I feel like I can employ design that doesn't drive me insane, and I can do what I want in my free time.
Some of those jobs can be places of stagnation, but some tech teams can really survive without changing, like roaches, though members can always try to improve their skills and such outside of those constraints when they can. I don't think that's really changed with any of the previous generations of technology in general. No matter how fast tech moves, there is still always a large momentum toward old ways, and perhaps adoption windows of big new patterns don't really scale at the same rate as the technology itself.
davidbasil@reddit (OP)
Yes, I think it is also true regarding personal tooling. There are people out there still coding in vanilla vim, reading programming books, etc. There are even blind programmers and they're still successful.
OmegAIChungus@reddit
You're in Tbilisi, Georgia. Lovely city, I'm sure.
Wide_Signature1153@reddit
It is