Has using AI made you faster… but also kinda less sure of what you actually know?
Posted by Prior_Plum_9190@reddit | learnprogramming | View on Reddit | 40 comments
AI makes me faster, I’m not denying that. I finish things faster and ship way more than I used to. But at the same time, I’ve started noticing this weird feeling after I finish something: I ask myself, do I fully know it? Sometimes yes, sometimes no.
I can read the code, tweak it, explain most of it. But it doesn’t feel the same as when I used to sit with something for hours and finally get it.
Now it feels like I’m always in review mode: less building, more checking, less thinking from scratch. And AI is in everything now. People win interviews with it, pass exams with it, get through rounds they probably would’ve struggled with alone. I can’t tell what “being good” is supposed to mean anymore. Maybe that’s just the job now, I don’t know.
I do wonder if anyone else feels this weirdness where you’re clearly faster, maybe even more productive... but are we sure about what we’re doing?
Heavy-Report9931@reddit
AI has helped me tremendously.
I don't use it to write entire swaths of code. I just ask it specific things, like:
how do you do x in this library?
I then check how it does it and re-implement it for my use case. I've learned a few tricks from it actually.
I also ask it how it can improve or optimize a function block I wrote. I then study the optimization and incorporate it in my own repertoire.
But I've been using it a lot lately to fill in gaps in my knowledge: anything I don't understand or need clarification on.
Stuff like "what is this command doing? break it down."
To full-blown probing questions about how a certain concept works.
I've been asking it for hours to explain how TLS works at a high level, because TLS is frigging complicated, actually.
I've also always wanted a mentor: someone to answer all the stupid questions I have about everything. That mentor has finally come, and it's AI.
Technical-Fruit-2482@reddit
It hasn't sped me up at all. Since it's not that great at programming, I have to review, understand, and fix everything, and doing that takes just as long as writing the code myself anyway, so there's no benefit.
BitterCommission3987@reddit
You know, every time I see a comment like this I think it's either a bot or the person just doesn't know how to prompt an LLM. This is not 2023 anymore; LLMs are writing decent code these days. Obviously, if you ask it to make a Spotify-like app, absolute garbage will come out of it, but that's not how you're supposed to use it.
Technical-Fruit-2482@reddit
I mean, every time I see a comment like this I think it's either a bot or the person just doesn't know how to write good code. Even in 2026 LLMs are pretty bad at programming. Obviously if you just settle for mediocrity or worse in everything you produce you can speed up, but that's a pretty bad way to use it.
...
I know how to use the tools. I've been trying to get them to produce decent results for years now with each advancement in the models and the tooling surrounding them. The problem is that writing code isn't really the bottleneck, and if they produce more manual work for me than I would have by just writing the code myself then it's just not worth it.
silvses@reddit
What is knowing? How much understanding do you need behind something before you can say you know it? This is more of a feeling, and you don't feel confident in the fundamentals behind it. Procedural and factual knowledge are different: you can learn and understand facts pretty quickly, but can you carry out a task without assistance for syntax, control flow, etc.? If not, that gives you a guide to how much you'd be lacking.
BizAlly@reddit
AI makes you faster, but it shifts you from building to reviewing. That’s why it feels less solid: you skipped some of the struggle that builds intuition.
Individual-Brief1116@reddit
Yeah I feel this too. It's like the difference between learning to cook by following your mom's recipes step by step versus just heating up really good takeout. Both get you fed, but one teaches you something about ingredients and timing.
slyce49@reddit
Great analogy
Curious201@reddit
Yeah, I know exactly what you mean. The weirdest part for me is that I still feel ownership of the output but not of the learning. Ten years ago, solving a bug meant I actually understood the language better afterwards; now I just ship the fix and move on. The skill is shifting from "knowing things" to "knowing what to ask and what to verify", which is a real skill, but it feels different.
Feeling_Photograph_5@reddit
Much faster. 10x at least. But I learned to get comfortable with abstraction years ago. Do we know what's going on with a third party library? Or with code written by another team? Or in the codebase at a new job? Nope.
So, I'm good with it. With AI I focus less on syntax and more on systems. So far, it's been working fairly well.
iRhuel@reddit
Turns out researching, trying, failing at, and iterating on things is where the bulk of learning happens, and outsourcing or automating that part means you don't learn as much. Who knew.
JohnBrownsErection@reddit
I used to be an electrical technician in a factory, which involved a ton of troubleshooting (the job covered maintenance, fixing downed production lines, upgrading existing ones, and other related tasks). While troubleshooting something new was a grueling and difficult task each time, it basically guaranteed that you came away from the issue with an immensely greater understanding of how the machine and its code worked.
If you were to quantify that kind of skill on a graph, I think it'd look like a logistic population model: extremely slow initial growth, followed by explosive exponential growth, and finally leveling off as you approach the carrying capacity.
Anyway, basically, yes the struggle is where all the gains are made.
gabrielmuriens@reddit
Yes. This isn't the question anymore, though (if you are working as a professional developer; if you are still learning, you should absolutely not use AI to actually write your code).
The question is: can you keep being a good enough manager for your AI agents if you are no longer involved in the process of creating the actual code? Probably the more experienced you are, or the higher-level your responsibilities, the longer you'll keep being a useful part of the machine. Can you become a better programmer while working like this? Probably not, which, as someone who got into CS because I like learning new things and better ways of doing old things, makes me quite sad, actually.
bird_feeder_bird@reddit
Using a generated response gives you the knowledge for just long enough to do what you wanted with it. Troubleshooting the same problem for a few hours may feel less productive in the moment, but it ensures you'll never struggle with that problem again, because you need time to let the knowledge integrate.
Holiday_Custard_6538@reddit
feel the same way!
Complete_Instance_18@reddit
Totally get this feeling. It's like AI
HolyPommeDeTerre@reddit
I just told my team this week that we have a slow drift of the code base.
LLMs generate convincing noise. Most of the noise, read independently, is acceptable: it adds a ternary just in case, it adds an else to this if. But most of it is just useless. You wouldn't have written it; you don't need it. Most of the problems it fixes are impossible if you structure the code the way you would have.
Even tenured reviewers are going to think "yeah, why not", because putting a comment on every useless addition would flood the PR. Too many comments generally makes each individual comment less important, unfortunately (like alarms that go off all the time on false positives: you end up ignoring them).
The thing is, taken as a whole, this represents a real chunk of a PR (like 5%, gut feeling here). And it adds up quickly, making reviewing harder than it was. But reviewing code has always been harder than writing it.
It made me think a bit. Why would we let it write the code, when that's the easiest part? Why would we review the code, when that's the hardest part?
Reviewing is a safety net, so you can't rely only on an LLM for that. An external reviewer will still have to review the code. But you'll be faster reviewing your own code than the LLM's.
Coding is tiring at some point. You already have the solution in your brain, and typing for (let i = ... isn't really adding value; you just need to transfer your brain into code. So people are getting into the habit of not writing their code: just think about it, and check in review that the output matches.
When asked, some devs in my company (myself included) say they don't want to write code anymore. So... what do we do?
Anyway, that's me rambling and thinking.
InVultusSolis@reddit
Which I find extremely ironic, considering you still have to be a good programmer to do code review. If the "fastest" coders (those using AI) are the ones rewarded, eventually we'll end up with an industry full of people who have no idea what they're doing.
InVultusSolis@reddit
I fundamentally don't think that has changed.
Whenever I use AI to attempt to write code that's being used for something important, I always lose time debugging it, because it usually fails in a way I didn't anticipate. And the session where I ask "this is wrong, fix it" starts flailing: basically guessing at what the problem is rather than fixing the problem itself.
rustprogram@reddit
Problem is, I have never, ever been sure of what I am doing. And I have been doing this for over fifteen years...
TravelingSpermBanker@reddit
It’s reinforced the knowledge I already had.
I find it pathetic to read these posts. Not everyone is a shit coder lmaoo
No-Injury-1785@reddit
Mmmm, this hits. You didn’t get worse; the skill shifted.
Before: being good = figuring things out from scratch. Now: being good = knowing what the AI got right, what it got wrong, and how to adapt it. That “do I actually know this?” feeling is just the missing struggle phase we used to rely on for confidence. Quick fix I use: if I can’t explain it or rebuild a simple version without AI, I don’t count it as a skill yet, just assisted output. The real edge now is: AI → understand it → think without it when needed.
SyrupOutrageous1801@reddit
Using AI for coding feels like learning through osmosis: you absorb the end result without understanding the process. It's like having the answer key but never reading the textbook.
ViolentCrumble@reddit
AI loves to overcomplicate things. It spreads code out into so many files and separate functions needlessly. If I need assistance, I just ask my questions with AI now instead of posting to Stack Overflow, and I still write my code myself. Usually I just need to know the formula for something or remember a function name. AI is always trying to make things so much more complicated than they need to be.
dev_ski@reddit
A human-made, well-researched, and, more importantly, well-structured tutorial or book is the way to go. AI still kind of lacks in this department.
lowrads@reddit
Much of education has always just been staking out the boundaries of what you don't know. The cleverest people have an extensive inventory of known unknowns.
GlobalWatts@reddit
AI (specifically LLMs) has made me marginally more productive at very specific tasks. By "marginally" I mean like, if I/my employer had to pay basically any kind of subscription fee for it, it wouldn't make fiscal sense. I'm using it as a sounding board for ideas and at most code snippets, not to generate entire code blocks.
If anything, it made me more sure about what I actually know. The number of times I've had it use functions and keywords that I thought I just didn't know about, only to find they don't actually exist when I verify its output against the docs.
But I'm lucky enough to not yet have to do code reviews on some junior's LLM output, I imagine that would kill any productivity gains.
HolyPommeDeTerre@reddit
Oh, and, adding a real answer to your post instead of me rambling:
"Clearly faster" or "more productive": did you measure it scientifically, or is that your impression? If you're learning, you need to compare the quality of your knowledge over time. If you're working, you need to compare the quality of code merged over time.
We can all be faster by doing worse. We can all write random code and say "I wrote 100k LOC". That isn't a metric.
Some people have actually been measuring the time saved. Their finding:
people think they save 24% of their time
people actually lose 19% of their time
Mind your biases :)
If you want, I can find the article about this, but it's in French.
dazden@reddit
I know that feeling.
Since I am a father of two, my time for throwing myself at a self-made problem is minimal.
I just can't sit in front of my monitor the whole afternoon and try to brute-force something (I wish I could).
So I use the AI as a tutor.
I wrote chat instructions for the AI in VS Code and bound them to a workspace (my Harvard CS50 folder).
Autocomplete is also disabled.
Delicious-Macaron363@reddit
I think AI makes you faster, but only if you already understand what’s happening under the hood.
The real problem is when beginners skip fundamentals and rely on AI to “fill the gaps.” That’s where the feeling of being less skilled comes from.
What I’ve noticed works better is:
Otherwise it becomes copy-paste instead of learning.
So yeah, AI isn’t the issue - it just exposes whether we actually understand things or not.
Unhappy_Tomorrow3365@reddit
I agree with you. I feel this too now. Even when I finish something faster, I'm not always fully sure about everything like I used to be. And seeing what's happening around me makes it worse. I know friends who used interview copilot tools like lockedin and got placed after just two interviews, whereas I got rejected in four and only got this job on my fifth.
It makes me wonder whether people are actually getting better or just surviving.
Ok-Formal5641@reddit
I agree
jameyiguess@reddit
Well yeah, if you aren't typing it, you won't have it in your bones. You'll "know" it as much as you know somebody else's PRs.
JohnBrownsErection@reddit
I started coding in 2017 and have been a stunning mediocrity ever since.
With AI, I can be stupid at speeds never before thought possible!
BriefReflection2054@reddit
I think it has sped things up for me personally, but I set out with the goal to learn, not just create. My rule is to have no lines of code I don't understand. So if VS Code autofills something I don't understand, I research until I do.
I think the biggest issue for me with autofills is that typing/writing can be good for cementing things in my brain.
BobJutsu@reddit
No. Because I keep tight controls on ai assistance. It’s only building what it’s trained to build, under tight supervision. It’s less “AI programming” and more “AI typing assistant”.
Maggie7_Him@reddit
Counterpoint: senior engineers mostly read, review, and evaluate code, not produce it from scratch. AI might just be fast-forwarding that transition. The real question isn't whether you know less; it's whether you're developing the judgment to evaluate what's in front of you. That's the skill that actually scales.
amejin@reddit
You clearly haven't been put in the "we've decided to lay off 60% of your team" position yet.
The day will come when they want you to both produce and review everything yourself. If that's not you, you're going to be one of the ones let go.