Will AI lead to psychological dependency?
Posted by GolangLinuxGuru1979@reddit | ExperiencedDevs | View on Reddit | 26 comments
Full disclosure: I wrote this on my LinkedIn, but I wanted to reshare it here. Tell me what you think:
I want to talk about psychology and the second-order effects of AI. This may be a bit psychobabble-ish.
Over the past couple of years I’ve talked to developers who use AI coding tools. As the years have gone by, I’ve seen people defend them more ardently than they did in the past. This tells me that, psychologically, this isn’t just a tool for people. Then what is it?
Well, we need to start with imposter syndrome. Software systems are big and complex, and there are many layers to learn. It can often feel like there are constant gaps in your knowledge. And this is where AI comes in: it’s able to bridge that gap. It makes you feel better because it lets you do things you weren’t able to do before.
This starts the process of appealing to an AI authority. The side effect is that you start telling yourself AI is smarter than you. So you become more ambitious. You start building things you would never have dreamt of building before, and you may do it successfully. AI is unlocking competence for you.
So criticism becomes less “AI has flaws and makes mistakes” and more “you’re criticizing the thing that makes me relevant.” It’s an attack on your psychological identity.
This creates a dependency loop. The more AI appears successful, the more you rely on it. You start to depend on it more and more, and soon every engineering decision you make is about justifying why everyone should be using agents.
This is definitely not a good thing, because you stop understanding. You stop growing. You stop improving. Improvement is weighted only towards tool efficiency, not understanding the domain, the architecture, and the invariants. And this creates a cycle of blind trust.
I believe this dopamine loop is probably psychologically corrosive. But I am not a psychologist, just trying to predict the future like anyone else.
ExperiencedDevs-ModTeam@reddit
Rule 9: No Low Effort Posts, Excessive Venting, or Bragging.
Using this subreddit to crowd source answers to something that isn't really contributing to the spirit of this subreddit is forbidden at moderator's discretion. This includes posts that are mostly focused around venting or bragging; both of these types of posts are difficult to moderate and don't contribute much to the subreddit.
ChickenSaladHoagie@reddit
I think a lot of this is possible, and maybe even probable, but I’d be curious to see the argument developed a bit more.
Right now, I think you make a few big assumptions: that people defending AI more intensely indicates psychological dependence, that this dependence is tied to a sense of self or professional worth, and that there is a direct causal link between AI appearing correct and reliance increasing.
I don’t necessarily think those assumptions are wrong. In fact, some of them seem plausible to me. I just don’t think the post fully argues for them yet. It feels like there’s a potentially strong idea here, but I’d want to see more evidence or examples connecting those steps.
margmi@reddit
Did people develop a psychological dependency on Google?
EliSka93@reddit
That's a very bad comparison.
Google's job is to find you something and then you're gone.
To increase their ad revenue, they actually made their search worse, so you'd need more searches to find what you're looking for (this isn't a conspiracy, they were pretty open about it).
"AI" keeps you on their platform by flattery and sycophancy. It's waaaay more damaging psychologically.
margmi@reddit
OP is theorizing that it creates a dopamine reward system.
Dopamine builds from anticipation, rather than from immediately receiving the reward. Having worse results that you have to sift through actually increases the dopamine you get when you finally find the correct answer.
GolangLinuxGuru1979@reddit (OP)
To some degree, yes. But Google is just a tool for discovering and synthesizing information, not something that hands you a full solution. It’s a reference, not a full solution.
AI just completely implements a solution for you. There’s no synthesis or analysis; it’s not a reference. If it one-shots an implementation and you don’t fully understand it, then it can start to be something you rely on.
roger_ducky@reddit
Stack overflow handed solutions to juniors who didn’t want to bother for years too though.
Jmc_da_boss@reddit
No it did not. It did not hand curated, contextual solutions to anything they’d need.
It was in fact very rare that one could copy-paste directly from SO, and even then you had to know WHERE to copy it to.
roger_ducky@reddit
Which is why it’s vital to ensure everyone asks about the architecture and tradeoffs of the implemented solution, so that everyone is comfortable with the selected implementation.
Jmc_da_boss@reddit
This is a shitty comparison, LLMs are not Google
Ok-Hospital-5076@reddit
You don’t need a psychology degree to see this. Anecdotal evidence from my own life is enough. For a long time, I’ve run my email drafts through models, to the point where I no longer trust my own grammatical judgment. I write scattered thoughts for first drafts, knowing the AI will fix the structure. I’m stopping this now before I lose the ability to write coherently on my own.
potatolicious@reddit
Yes. Somebody blogged about this recently and I think the post is well worth reading.
Lots of people are very obviously (and sometimes very publicly) developing psychological dependency on these tools. Some are straight up falling in love with their agents (see: Richard Dawkins' latest piece) and others, I think, are getting hooked on the competence-vibes described in the post.
Jmc_da_boss@reddit
Yes it already has, the worst devs you know think it makes them 10x better
i_exaggerated@reddit
I mean it makes them 10x. But a negative times 10 is just a bigger negative.
Jmc_da_boss@reddit
This is very true
pl487@reddit
Or, people defend it because they've now learned how to use it properly and see how much of a benefit it is to their tasks. No psychological dependency, just a tool that makes things easier.
sweetno@reddit
Absolutely! The same way you develop a psychological dependency on your IDE and other tooling.
GolangLinuxGuru1979@reddit (OP)
You’re right and I don’t think that’s always a good thing. You should be effective when all of your tools are taken away.
I will say this: during COVID I was with a healthcare company that gave me a MacBook Air with 8 GB of RAM to work on. My JetBrains IDE would crash every hour on load. I literally could not do any work. They eventually replaced it with a proper MacBook Pro. But that ties back to something:
I really wasn’t effective without my IDE. Now I use Neovim and still use an LSP, but I rely a lot less on JetBrains autocomplete.
As a result I understand my toolchain thoroughly, and I could still code without any sort of autocomplete if I needed to.
AI is a greater liability. Despite not being that great without the IDE, my real gaps were syntax recall; I still knew how to structure my code and solution.
AI does this all for you. And as it produces more code, you rely on it more to keep up velocity. So when that tool goes away, you now have a ton of AI-generated code that most of your team doesn’t understand.
There are other reasons this isn’t a clean equivalence, like token cost.
spicymato@reddit
I've been using AI to write spec docs and think through ideas that I've been considering for years but never written down. Its first pass at interpreting my ideas is often incorrect, requiring plenty of review and editing.
However, that also highlights my unspoken assumptions; things I consider "obvious" that others don't. It writes the docs much more thoroughly than I would. Even though I need to provide directions and corrections, it is able to handle the actual "writing" part.
Altogether, I view it as a project partner. It's my project, and while the AI often tries to take it in the wrong direction, optimize for the wrong things, or over-engineer for the scope, it does still help a lot, and even occasionally convinces me to adopt its suggestions. Not often, but occasionally.
GolangLinuxGuru1979@reddit (OP)
Here is the issue: AI is probably not that bad at doing things you already know how to do, because you can evaluate where it’s flawed, where it’s making obvious mistakes, and how to correct it.
The bigger issue is that it’s being used to close knowledge gaps. So people who don’t really understand anything get what appears to be a solution from AI. They stop questioning the output and start just relying on it to build things they otherwise wouldn’t know how to build themselves.
This is where the dependency loop starts.
Sensitive-Ear-3896@reddit
Huge fan of AI, but yes, when I have to code by hand it feels so sluggish. But we are dependent on a lot of technologies (or think we are), going all the way down to basics like electricity, water, and transport.
thomasfr@reddit
For some people, yes. Some people in my circles have started to exhibit behaviors like those of cocaine or other stimulant addicts, almost entirely from heavy use of LLM code generation tools. I don't think we have scratched the surface of the mental issues that will come with these kinds of tools.
Decent-Lab-5609@reddit
You've made a lot of affirmative claims without a causal chain, and you haven't made an argument for why you would stop learning, growing, and understanding in this world. You might be right, but there's no argument made for it here.
WeHaveTheMeeps@reddit
Lemme ask Claude
MENDACIOUS_RACIST@reddit
Yes, and there are paths forward to mitigating and managing this harm. https://www.anthropic.com/research/disempowerment-patterns
No-Butterscotch-3641@reddit
I’m not sure I 100% agree with this. AI is an accelerator: it will allow you to write both good and bad code really fast. That doesn’t always equate to competence.