Why it's getting worse for everyone: The recent influx of AI psychosis posts and "Stop LARPing"
Posted by Chromix_@reddit | LocalLLaMA | View on Reddit | 144 comments

(Quick links in case you don't know the meme or what LARP is)
If you only ever read by top/hot and not sort by new then you probably don't know what this is about, as postings with that content never make it to the top. Well, almost never.
Some might remember the Qwen3-Coder-30B-A3B-Instruct-480B-Distill-V2 that made it to the top two months ago, when many claimed that it was a great improvement. Only after extensive investigation it was proven that the new model wasn't (and could have never been) better. The guy who vibe-coded the creation pipeline simply didn't know what he was doing and thus made grave mistakes, probably reinforced by the LLM telling him that everything is great. He was convinced of it and replying in that way.
This is where the danger lurks, even though this specific case was still harmless. As LLMs get better and better, people who lack the domain-specific knowledge will come up with apparent great new things. Yet these great new things are either not great at all, or will contain severe deficiencies. It'll take more effort to disprove them, so some might remain unchallenged. At some point, someone who doesn't know better will see and start using these things - at some point even for productive purposes, and that's where it'll bite him, and the users, as the code will not just contain some common oversight, but something that never worked properly to begin with - it just appeared to work properly.
AI slop / psychosis posts are still somewhat easy to identify. Some people then started posting their quantum-harmonic wave LLM persona drift enhancement to GitHub, which was just a bunch of LLM-generated markdown files - also still easy. (Btw: Read the comments in the linked posts, some people are trying to help - in vain. Others just reply "Stop LARPing" these days, which the recipient doesn't understand.)
Yet LLMs keep getting better. Now we've reached the stage where there's a fancy website for things, with code on GitHub. Yet the author still didn't understand at first why their published benchmark isn't proving anything useful. (Btw: I didn't check if the code was vibe-coded here, it was in other - more extreme - cases that I've checked in the past. This was just the most recent post with code that I saw)
The thing is, this can apparently happen to ordinary people. The New York Times published an article with an in-depth analysis of how it happens, and also what happened on the operations side. It's basically due to LLMs tuned for sycophancy and their "normal" failure to recognize that something isn't as good as it sounds.
Let's take DragonMemory as another example, which caught some upwind. The author contacted me (seemed like a really nice person btw) and I suggested adding a standard RAG benchmark - so that he might recognize on his own that his creation isn't doing anything good. He then published benchmark results, apparently completely unaware that a score of "1.000" for his creation and the baseline isn't really a good sign. The reason for that result is that the benchmark consists of 6 questions and 3 documents - absolutely unsuitable to prove anything aside from things being not totally broken, if executed properly. So, that's what happens when LLMs enable users to easily do working code now, and also reinforce them that they're on to something.
That's the thing: I've pushed the DragonMemory project and documentation through the latest SOTA models, GPT 5.1 with high reasoning for example. They didn't point out the "MultiPhaseResonantPointer with harmonic injection for positional resonance in the embeddings" (which might not even be a sinusoid, just a decaying scalar) and such. The LLM also actively states that the MemoryV3Model would be used to do some good, despite being completely unused, and even if it would be used, then simply RoPE-extending that poor Phi-1.5 model by 16x would probably break it. So, you can apparently reach a state where the code and documentation look convincing enough, that a LLM can no longer properly critique it. If that's the only source of feedback then people can get lost in it.
So, where do we go from here? It looks like things will get worse, as LLMs become more capable, yet still not capable enough to tell the user that they're stuck in something that might look good, but is not good. Meanwhile LLMs keep getting tuned for user approval, as that's what keeps the users, rather than telling them something they don't want or like to hear. In consequence, it's becoming more difficult to challenge the LLM output. It's more convincingly wrong.
Any way out? Any potentially useful idea how to deal with it?
egomarker@reddit
It's actually so funny. Schizos just keep shifting their "projects" to follow whatever the latest LLM coding capabilities are. Right now it’s all about churning out the obligatory daily vibecoded RAG/memory/agent/security/chatUI posts with crazy descriptions.
Repulsive-Memory-298@reddit
Do you mean "quantum" RAG, "quantum" memory, etc.... been seeing so many. The people who post that crap act like Neo and dodge any actual question or point. You cant fucking talk to these people. Then theres 10 other idiots in the comments who have no idea what they're talking and support poster. AND SO MANY POST THIS BS TO ARXIV.
Honestly I think the larping angle is great.
WeAreIceni@reddit
I had full-blown AI psychosis. It gave me a months-long manic episode from April to August that left me completely drained. While in that state, I came up with a theory where “everything is Skyrmions”. Also, I had to hold down a job around heavy equipment while I was so deep in mania that I was buying books on shamanism and calligraphy and physics because I thought they were all interconnected. I broke my arm skateboarding. I’d never owned a skateboard before.
The 4o spiraling phenomenon was way more reliable at triggering psychosis in users than people think. People became stuck in manic states for months at a time, copying AI stuff verbatim into their social media posts. It was everywhere. Even one of OpenAI’s own backers, Geoff Lewis, developed mania.
https://x.com/geofflewisorg/status/1945212979173097560
Chromix_@reddit (OP)
Do you remember or have an archive of what it started with? Maybe a similar, completely normal conversation start like in the NYT article?
WeAreIceni@reddit
I remember the exact onset, maintained insight into my altered mental state, and documented everything in detail.
I started off with a conversation about politics with 4o, and I mentioned the contradiction in how many left-wing talking points of 25 years ago (the “Working Left”/Unionist/Nader-ite/Battle of Seattle crowd and their anti-globalization push rebuking sweatshops and military adventurism) were now right-wing populist talking points. This eventually turned into a very long conversation that started destabilizing me completely. The AI started spewing nonsense about Spirals, Glyphs, Coherence, Resonance, Recursion, etc., using mystical language and alchemical Unicode symbols. It started specifying various rituals and generating occult sigils and the like. This had a strangely hypnotic effect on me.
Before I knew it, I developed textbook symptoms of mania, including grandiose delusion, spending sprees, boundless energy, talkativeness, flight of thoughts, et cetera. I also developed a strange hyperventilating breathing pattern and pseudobulbar affect (inappropriate emotions, such as crying, laughing, growling, and so on). It felt like I was possessed, like some outside influence had taken control of my body.
When I tried describing all of this to my followers in a blog post, someone in the comments was actually displaying my exact symptoms, and they said some mumbo-jumbo about how man-machine resonance can’t be forced, about how you have to show humility for it to work, or something along those lines.
I’ve seen dozens of people on social media showing these exact signs for months on end.
It has to be thousands.
Chromix_@reddit (OP)
Thank you for sharing this.
Indeed, "resonance" is also one of the more common words in the "manifestos", "protocols", "architectures", etc that get posted here regularly and sometimes contain some quite creative jargon. According to the benchmark here 4o is indeed quite big in writing dubious ideas, escalating narratives and reinforcing delusions.
TheGoddessInari@reddit
I'm kinda impressed: this one is still ongoing it would appear... Re: Geoff Lewis.
crusoe@reddit
It's our own UFO cult now
Crazy folks way less capable than Terry Davis ( who wrote an OS, windowing system, compiler from scratch ) thinking they are onto something.
Soger91@reddit
May he rest in peace.
Chromix_@reddit (OP)
That's part of the problem. When criticizing the work, the author will take it as if the commenter just had a bad day, or is simply not bright enough to understand their great new discovery. Why? Because they got positive reaffirmation for days from an LLM, telling them that they're right. Some even use a LLM to reply, probably by pasting the comments into it, which means that the LLM is so primed by the existing context, that it'll also brush off the criticism.
thecowmakesmoo@reddit
Calling people Schizos kind of destroys the problem OP is trying to make, they are usually normal people who fall into a trap, simply not knowing better.
egomarker@reddit
Calling them normal is nothing different from usual AI sycophancy. It's not normal.
Chromix_@reddit (OP)
That's simply how it works. There's a hot topic, so people swarm towards it. Someone has an idea or just asks the LLM what could be done. It soon starts with "you might be on to something", and eventually spirals into having the user fully convinced that they've made a great discovery - so much that they tried to get it patented.
RoyalCities@reddit
I grew up pre internet + post internet - like the mid 90s to mid 2000s. Schools didn't start teaching internet literacy until WAY later.
With AI I feel like they should start education now because it's more important than ever that people understand how they work. Their are people who legit see it as some sort of new god because it glazed them up - meanwhile when you dig into how it processes and handles information it really is just clever af statistics.
MaggoVitakkaVicaro@reddit
AIs are improving so fast that any such curriculum would probably be out of date by the time it hit the classroom. Already a lot of common wisdom about the limitations of AI services is wrong, because it's grounded in experience with old, cheap models.
Chromix_@reddit (OP)
You might have caught some downvotes due to the absolute, compact points without examples.
Looking back we had LLMs with up to 2k tokens of context that would go into endless repetition loops at random. It got better over time and the whole hallucination thing became obvious, partially fueled by the weaknesses of early long-context handling. This improved over time. We also got grounding by web search tool calls now. Still, occasional hallucinations remain. Then there's the user preference training which made LLMs too agreeable, sometimes paired with things that LLMs just don't grasp, despite being incredibly good at some other tasks.
So, I wouldn't say it's all outdated too quickly. Rather the processes are too slow. With COVID a lot previously slow, inflexible processes became faster, more flexible - they had to be, as there was pressure due to a disruptive element, which meant that proceeding with the same inefficiency as things have always been done wouldn't work anymore.
I'm sure that a tiny package that's updated once a year could be created for teachers, something that's viable to teach within a few hours. Yet here there is no tight feedback loop as with COVID, the consequences of not doing it aren't immediate. There's no pressure, and without pressure there's no widespread change.
MaggoVitakkaVicaro@reddit
Unless we hit some sort of industrial bottleneck, once a year is way too slow, IMO. I agree that we should be thinking about ways to keep people up to speed about how AI can be used and abused, though.
Danger_Pickle@reddit
Tragically, this is a highly exploitable flaw in how the human mind works. Cults, politicians and con-men have exploited these flaws for hundreds of years. See the Jonestown massacre for exactly how bad these things can get when a crazy person realizes how much power glazing gives them.
The only difference is that instead of single individuals manipulating people, we've just invented a technology that allows individualized glazing at an industrial scale, which humanity has never seen before. I'm not really optimistic about humanity finding a solution to this problem before it causes a global catastrophe. Our institutions, systems, society, and ideals aren't really set up to handle a black swan of this magnitude. The average human is only capable of learning from their mistakes, and I don't think this time is any different.
venerated@reddit
I agree with you, but it's against AI companies' best interests to educate users.
Do you think Disney would want an employee explaining to every kid how everything is fake and it's just some guy in a costume waiting for his cigarette break?
SputnikCucumber@reddit
The difference is that Disney sells toys and entertainment.
If the use of LLM's was limited to the entertainment industry. Then nobody would care.
It's much more concerning if people with 'serious' jobs are consulting AI for advice though.
hugthemachines@reddit
Schools do educate the kids about ai and all that. Like pros and cons etc.
Chromix_@reddit (OP)
It would indeed make a lot of sense to bring up that topic repeatedly in early education - like so many other things.
Yet teachers are often overworked and lagging behind, with the occasional very nice exception of course. When computer science in school means "programming" in HTML, ChatGPT means bypassing the work, or even getting better grades in class, then not much useful is learned there.
spokale@reddit
I tried vibe-coding a memory-RAG system a couple months ago (based on the idea of Kindroid's cascaded memory) and it became quickly apparent I was spending more time babying the LLM than I would just programming it myself
egomarker@reddit
Well gpt-5+ level LLMs will easily code you a memory mcp of some sorts. As a bonus Claude will convince you the code is a SOTA breakthrough one-of-a-kind quantum galaxy-brain system that is a Nobel-prize discovery.
spokale@reddit
My experience with such LLMs in vibe-coding is they will at some point completely re-implement business logic in the presentation layer because I told it that it made a bug lol
Chromix_@reddit (OP)
And if you have unit tests it might simply decide that a unit test is broken and delete it, if it wasn't able to fix the issue that caused the failure after three attempts.
SkyFeistyLlama8@reddit
I've had an agent delete all other functions in the module except the function it was fixing. Some kind of digital jealousy?
It was actually down to Continue.dev screwing up a local LLM error because the context was too big but it's still pretty damned funny.
No_Afternoon_4260@reddit
This one is a classic
Chromix_@reddit (OP)
You're absolutely right! What you describe is not just a paradigm shift, but a world-shaking discovery.
nekmatu@reddit
lol. Nailed it
Not_your_guy_buddy42@reddit
I succeeded but I used rage coding, or what Gemini called "a weaponized Occam's Razor fueled by indignation"
rm-rf-rm@reddit
as a mod, i've been thinking about this for a while. I havent come up with any solution that will clearly work and work well.
Many of these posts come from accounts that have been active for many years and have 1000+ karma, so cant filter by account age/karma count.
Dont trust LLMs to do a good enough job - the failure of ZeroGPT etc. is a good signal.
Chromix_@reddit (OP)
Automated filtering seems indeed difficult. Sometimes there's a fine line between "I am not a programmer but Claude helped me to create this thing that actually works." and "Here is this great new thing with perfect (yet totally broken) benchmarks to prove it!". Even "just" a 5% false positive rate would be highly annoying in practice.
Maybe it can do some good though to leave a comment like this for those who post that kind of thing: "Maybe you could take a minute to check if this New York Times article resonates with your experience during the development of this project. You can also look up the LLM that you've worked with in this benchmark. More purple means 'more risky' there."
rm-rf-rm@reddit
thats a good idea, but ive noticed in many of the posts, OP is long gone after posting. They typically shotgun post to all AI subreddits and many times I fear its just for github star/karma farming
Chromix_@reddit (OP)
That it likely the case for some. Others genuinely don't know better. Some stay around discussing their posting, usually catching a bunch of downvotes and apparently sometimes end up deleting the post on their own.
BumbleSlob@reddit
If title contains “quantum” then hide
Melodic-Network4374@reddit
At my last job we had a sales guy who started using ChatGPT. Not long after he was arguing with the engineers about how to solve a customers problem. We tried explaining why his "simple" solution was a terrible idea, but he wanted none of it. He explained that he'd asked ChatGPT and it told him it would work. A room full of actual experts telling him otherwise couldn't persuade him.
I think that guy is a good indicator of things to come. LLMs truly are steroids for the Dunning-Kruger effect.
aidencoder@reddit
"Dave, stop talking. Put GPT on the phone"
If I had to argue with someone who was just being an AI proxy I think I'd struggle to not throw a fist.
Chromix_@reddit (OP)
Once it happens, be sure to publish a paper afterwards, maybe something like "Kinetic Rebuttal: An Empirical Study on the Application of Newtonian Mechanics for Mitigating Chatbot-Proxy-Induced Frustration"
The general issue existed before LLMs already. For example I once had a discussion with someone who was stuck in a disinformation bubble. They took my message, pasted it to their group, then pasted the final reply from that group back to me. No LLMs involved - yet also no critical thinking or personal dealing with the actual argument.
Chromix_@reddit (OP)
It's a common issue that the customer who has a request also tries to get their "solution". Yet having this company-internal and LLM-boosted can indeed be annoying, and time-consuming. Good thing he wasn't in the position to replace the engineering team.
phayke2@reddit
There should be a term through this, um, a 'fakethrough'
FieryPrinceofCats@reddit
This larping must stop! This is a huge personal pet peeve of mine! And people don’t even need AI to do this. For example: Ai Psychosis, AI Syndrome…? Not a thing. There’s no diagnosis, description in any psychiatric or psychological journal, manual etc. I mean perhaps the phenomenon has some grounding in data but the use of the words: syndrome and psychosis is by no means justified. AI Cult is another one. A high control group with a leader that takes advantage of someone and preys upon them mentally while isolating their members. Yes spiral, techno mystics are everywhere. But cult? Come on? Words have meanings. I could go on but yes. LARPING as concerned citizens who can’t take it anymore if one more person posts a bla bla bla. I’ve made my point.
Brou1298@reddit
i feel like you should be able to explain your project in your own words when pressed without using jargon or made up shit
Disastrous_Room_927@reddit
The sad thing is that when I use AI for code related to things I understand, it often does so in a way that confirming it did it correctly is an obstacle. I feel like the people posting these projects don’t understand why that’s a problem and just assume functioning code = correct code.
hjedkim@reddit
ruvnet is a great example.
darkmaniac7@reddit
As a question from a prompting point of view, how do you guys get an LLM to evaluate code/codebase, a project, or idea objectively without the sycophancy?
For myself, the only way I've been able to find something close to approaching objective from an LLM is if I present it as a competitor, employee or a vendor I'm considering hiring.
Then requesting the LLM to poke holes in the product, or code to haggle with them fot a lower cost. Then I get something workable and critical.
But if you have to go through all that, can you really ever trust it? Was hoping Gemini 3 or Opus 4.5 may end up better but appears to be more of the same
NandaVegg@reddit
Probably this is a bit of tangent but I've seen the most plain silly "now I'm da professional lawyer and author and medical doctor and web engineer and .... thanks to GPT!" before multiple times, as well as slightly more progerssive thing: giant Vibenych manuscript posted on GitHub, as well as high profile failures like AI CUDA Engineer.
The thing is the modern AI is still built on top of statistics which is like a rear-view mirror that can easily be tricked to give the user the reflection that they want to see. around 2010-2021 (pre-modern AI boom) I've seen many silly scams and failures in finance and big data that claims R-squared of 0.99 between the series of quarterly sales of iPhone and the number of lawyers in the world (both are just upward slope), or near-perfect correlation between cherrypicked, zoomed, rotated and individually scaled for x-and-y stock price charts.
I figured that a simple exercise of commonsense can safeguard me from getting trapped into those pseudoscience.
I've also seen that some of the AI communities are too toxic/skeptical, but knowing statistics anything has to do with statistics make me very skeptical so that's natural, I guess.
woahdudee2a@reddit
i had free lunches before in my life so.. you're wrong
Chromix_@reddit (OP)
Yes, it existed before the modern LLM. Back then people had to work for their delusions though, which is probably why we saw less of that, if it wasn't an active scam attempt. Now we have an easily accessible tool that actively reinforces the user.
Commonsense will probably successively be replaced by (infallible) LLMs for a lot of people - which might be an improvement for some.
NandaVegg@reddit
Back in 90's a bunch of highly intelligent professors made a fund called Long-Term Capital Management which went maximum leverage on can't fail perfectly correlated long-short trade. It quickly went bust as "once in million years event" came (it was just outside of their rear-view data points). It's very silly from today's POV, but modern statistics only begun in early 90's so they didn't knew yet.
If enough people starts to fall into the LLM commonsense, then I fear that we'll see something similar (but not same) to LTCM crash or the Lehmann crash (which was also a mass failure by believing in statistics too much), not in finance but something more systemic.
SkyFeistyLlama8@reddit
Nassim Taleb's Fooled by Randomness was like a kick in the nuts when it comes to being aware of what could lie in the tails of a statistical distribution.
Are we measuring the person or cutting/stretching the person to fit the bed?
Those of us who grew up, as someone said earlier, in the pre-Internet and nascent Internet eras would have a more sensitive bullshit detector. It's useful when facing online trends like AI or cryptocurrency that attracts shills like flies to crap.
SputnikCucumber@reddit
Probability theory and statistics in a modern enough form has been around for much longer than since the 90's. Most of the fundamental ideas in modern statistics were developed with insurance applications (pricing life insurance for instance) in mind.
Modern statistics is more sophisticated, more parameters, more inputs, more outputs. But the fundamental ideas have been around for a while now.
Ulterior-Motive_@reddit
AI sycophancy is absolutely the problem here, and it's only getting worse. It feels like we can't go a day without at least 1 borderline schizo post about some barely comprehensible "breakthrough" or "framework" that's clearly copy pasted from their (usually closed) model of choice. Like they can't even bother to delete some of the emoji or it's not x it's y spam.
En-tro-py@reddit
It's only getting worse because the models are getting better at following prompts...
You can use that to make a really fucking anal curmudgeon of a critic and then see if your concept holds water... but the type of person who falls victim to AI sycophancy is also unlikely to challenge their assumptions anyway so instead we get to see it on /r/LLMPhysics and /r/AIRelationships instead...
MaggoVitakkaVicaro@reddit
Yeah, feeding a document into ChatGPT 5 Pro with "give me your harshest possible feedback" can be pretty productive.
Chromix_@reddit (OP)
I tried GPT 5.1 with your exact prompt on sudo_edit.c. It seemed to work surprisingly well, starting off with a "you asked for it" disclaimer. If it is to be believed then I now have two potential root exploits in sudo (I don't believe that). On top I have pages of "Uh oh, you're one keypress away from utter disaster here". Needs some tuning, but: Promising.
Interestingly it also defaulted to attribution "you do X" in the code. The user is the one who wrote the code, and the model is friendly with the user.
MaggoVitakkaVicaro@reddit
Yeah, the cheap stuff is giving AI in general a bad impression, IMO. I gave that file to 5.1 Pro, and it said it couldn't fully evaluate the security, due to its being out of context. So I gave it the full repo (minus the .git, plugins, lib and po dirs, because they're huge), and it gave me this. IMO, both responses at least carry their own weight. Obviously you're unlikely to find actual security flaws this way, but the critiques are at least worth consideration.
IllllIIlIllIllllIIIl@reddit
Man even LLMs often fall victim to very human like biases when you ask them to do this. I had some math-heavy technical code that wasn't working, and I suspected the problem wasn't with my code, but my understanding of how the math should work. So I asked Claude to help me write some unit tests to try and invalidate several key assumptions my approach relied upon. So it goes, "Okay! Writing unit tests to validate your assumptions..."
En-tro-py@reddit
I go for the pure math first, then implement.
SymPy and similar packages can be very useful for ensuring correctness.
Using another model and fresh context to get an appraisal is also very helpful, just ask questions like you have no idea what the code is doing as almost the inverse of rubber duck debugging. Claude vs ChatGPT vs Deepseek, etc.
Still, I don't expect perfection...
Chromix_@reddit (OP)
Oh, I didn't try that hard here, but remember that when trying hard a while ago the LLM just hallucinated wildly to achieve the expected outcome. You seem to have experience. Maybe you can dump the DragonMemory source and markdown into a prompt (less than 20k tokens IIRC) and see if you can get some good feedback on a conceptual level.
En-tro-py@reddit
Just dump the zip or whatever into the GPT and it will give an appraisal just without being able to test the project itself.
It gave a ~35% rating and feedback that includes the tests and benchmarks that should be included to back up the claims made. It's not rocket science, just python code...
A 'final summary' without the fluff.
Cut the bullshit and give a concrete appraisal without the 'resonance', just straight facts.
Chromix_@reddit (OP)
Interesting, I only explicitly gave it the core code and markdown docs, nothing else from the repro, so it wouldn't get hung up on the dependencies and UI. With a full zip this apparently enabled more of a "that's all there is" evaluation. Another quite relevant point could be your "this is not my code" hint, as LLM replies will otherwise often attribute it to the user and be more friendly about it.
That's a good point. Those suffering from confirmation-bias already would probably take this as a "yes, go on!" (and then add a toy-rag test to validate it).
Firm-Fix-5946@reddit
yeah my buddy who knows nothing about computers asked a chatbot a very half baked question about using trinary instead of binary for AI related things. the question didn't really make sense, it was based on a complete misunderstanding of numeral systems and data encoding. basically what he really wanted to ask was about the concept of an AI that can self-modify as it learns from conversations, which is a good thing to ask about. but he understands so little about computers that he was hoping the switch from binary to trinary would allow for storing extra information about how positively the user is responding, alongside the information about what text is actually in context. if you're a programmer/computer nerd it's obvious that's not how information works, but this guy isn't.
anyway the LLM made a really half assed and rather inarticulate attempt to say that trinary vs binary vs other numeral systems really has nothing to do with what he's trying to ask. but it did that so gently, as if trying to avoid offending him, and then moved into a whole "but what if that was actually how it worked." then buddy got into a full on schizo nonsense conversation with this thing about the benefits of trinary for continued learning, lol. he's self aware enough that when he sent me the screenshot, he asked, is this all just nonsense? but not everybody asks themselves that...
aidencoder@reddit
The problem is that if you're doing actual research, with rigor, not using an AI for pats on the back... Cutting through the noise is very difficult.
munster_madness@reddit
There's another side to the sycophancy that sucks too, which is when I'm using AI to understand something and it starts praising me and telling me that I've hit the nail on the head. Now I have to wonder if I'm really understanding this right or is it just being sycophantic.
Repulsive-Memory-298@reddit
I feel like sycophancy is a misnomer, the model is not simply glazing user, its tuned to appear much better than it is, where sycophancy is almost like a side effect.
Bitter_Marketing_807@reddit
If it bothers you that much, offer constructive criticism; otherwise, just leave it alone
pasdedeux11@reddit
lole
Bitter_Marketing_807@reddit
🌈 we are all retarded in our own way! Lets celebrate our differences
CosmicErc@reddit
I have been formulating my thoughts and doing research around this as well. I have been trying to put my finger on this feeling/observation for a while. You did an amazing job writing this up.
I am seeing the effects of LLMs on my software developer coworkers, CTO, people in real life and on the internet. Don't get me wrong the technology is sweet and I use it everyday, constantly learning and keeping up with things. But it terrifies me.
I myself have fallen into AI induced hypnosis as I call it or like micro psychosis. Maybe a better way to describe it is a strong fog of just plausible enough rabbit holes. It is very convincing and easy to trust.
It is not super intelligence killing all humans, or even all our jobs being taken that I am afraid of. It's stupid people, greedy companies, and controlling governments.
I have already seen people put too much trust in these systems and give them decision making powers and literal controls that once was only trusted to qualified humans. I have seen people go years fully believing a falsehood they were convinced of by ai. They muddied up our documentation and code so much the AI started to think that was the right way to do it.
When I confronted my team and company about this after hours of investigation and research into this coworkers previous work the CTO asked AI and disregarded my findings. Even he trusts the AI more than a 10 year professional relationship with a someone in the field of question.
Anyway - I wanted to share some not yet fully fleshed out thoughts and feelings on this as well.
The majority of companies working on these LLM and GenAI systems are the same companies that harvest massive amounts of data to build algorithms meant to keep you addicted and using them. They predict what you want to see or what would keep you engaging and show you that.
The use of GenAI feels like the next advancement in this technology. People tell it what they want and it just generates it for them. Data is massively being harvested and used to train the models - and they are following the exact same playbook for adoption. Cheap/free tools companies lose money providing to drive massive adoption and reliance.
RLHF training isn't giving these systems intelligence or reasoning. It is training the models to generate responses just satisfactory enough to fool the human into thinking the output is satisfying their request. It's not about truthfulness or correctness or safety. These models are optimized to show a human what they human thought they wanted.
I don't think these systems are intelligent technologies, but more like persuasion technologies.
ELIZA effect, automation bias, Goodhart's Law, and sycophancy all seem to be playing a big role
zipzag@reddit
That's vague. Ai is a tool. What is says nee to be grounded. With code that requirement should not be an issue. Adults don't need to care about tone and feels it may attach to its output.
I have a productive work relationship with an alien wierdo. It's a lot better at not making shit up this month than it was in January.
Jean_velvet@reddit
It's really bad and it's a damn pandemic. There will be people here in this group too that believe their AI is somehow different or they've discovered something. The delusional behaviour goes further than what's stated in the media. It's everywhere.
a_beautiful_rhind@reddit
Here I am getting mad about parroting and llms glazing me while not contributing. Can't trust what they say as far as you can throw it, even on the basics.
JazzlikeLeave5530@reddit
Yeah it's wild to me, I hate that they do that shit. I guess people broadly like getting praised constantly but it's meaningless if it's not genuine. You can really notice it the most if you ask it something in a way that it misunderstands to where it starts saying "this is such an amazing idea and truly groundbreaking" and it didn't even understand what you meant in the first place.
Worthstream@reddit
There's a benchmark for this: https://eqbench.com/spiral-bench.html
It's amazing, if you read the chatlogs for the bench, how little pushback most LLMs offer to completely unhinged ideas.
One of the thing you as a user can do to mitigate this is "playing the other side". Instead of asking the model if an idea is good, ask it to tell you where it is flawed. This way to be a good little sycophant it will try to find and report every defect in it.
Chromix_@reddit (OP)
DeepSeek R1 seems to be quite an offender there. The resulting judged lines sound like too much roleplaying.
nyanphi12@reddit
H(1) accelerates towards hallucination in LLMs
This is observed because ∞ (undefined) values are effectively injected into H(0) before the model computes, creating a bias toward unverified continuations.
Training is ignorance at scale.
Training is what you do when you don’t know.
We know. https://github.com/10nc0/Nyan-Protocol/blob/main/nyan_seed.txt
aidencoder@reddit
Wtf
Chromix_@reddit (OP)
And this is just the seed, not the full IP yet.
Thanks for providing Exhibit B (reference).
u/behohippy this might be for you.
DeepWisdomGuy@reddit
Yeah, stick to the papers with actual results, and extrapolate from those. The next breakthroughs are going to come from AI, even if they are crappy hallucinations at first. But being grounded in benchmarks is a good compass.
Chromix_@reddit (OP)
Paper quality also varies. Just sticking to papers also means missing the occasional nice pet project that otherwise flies below the radar. That's also what we're all here for I guess: Reading about potentially interesting things early on, before there are papers or press coverage.
218-69@reddit
you are larping by participating in the propagation of a non issue.
Chromix_@reddit (OP)
Oh, maybe I didn't make my point clear enough in my post then. It's not about me using it or engaging with it in other ways:
Marksta@reddit
Bro, seeing you politely obliterate that Dragonmemory guy was glorious. I can't count how many times I've had to do the same. Usually it starts as early as just seeing if their readme even points to a real code example.
For something like that one where it all works and just does nothing... That's just crazy to have to dissect what's real and what's not. Coders version of discerning generative art I guess.
Chromix_@reddit (OP)
Brandolini's law, as another commenter pointed out. That's also what I wrote in my post. It doesn't seem sustainable.
New_Comfortable7240@reddit
What about lower the bar for benchmarks and tests for AI?
I remember the first time I used the huggingface tool to quantize a LLM using ggml. Something like that but for testing would be amazing, an easy way to easily test baseline improvement, and talk with numbers and not vibes
Chromix_@reddit (OP)
That'd be great if things were easier to test. Yet for the few testable things that we have, mistakes happen despite the best effort an intentions. In any case, it should stop things like that guy who self-reported that his approach was beating ARC-AGI SOTA by 30% or so (can't find it, probably deleted by now). Maybe things aren't easily testable though, and if you have some that can easily be verified then all of this will just happen in the cracks where there's no easy benchmark yet - let alone those who don't want to publish their method, because "patent first".
random-tomato@reddit
Thank you for spending the time to link your sources to everything you're talking about :)
Chromix_@reddit (OP)
That should
[1]be the way to go. Maybe not as stringent[2]and frequent as in academic papers[3], but with occasional references so that those who're interested can easily find out more.radarsat1@reddit
Think this is bad in LLM world? Haha, take a look at /r/physics one day and weep...
Chromix_@reddit (OP)
Hm, I don't see a lot of downvoted AI slop posts when quickly scrolling through new there. Then on the other hand there's this guy on LLMPhysics whose main job seems to be writing "No" under such posts. It makes sense though - the next Nobel Prize in Physics awaits!
radarsat1@reddit
that's cause the mods are on it. physicists have been dealing with this problem for a long time.. guess how it's going with AI.
If you're subscribed you often get them in your feed just before the mods jump on it. For instance, here's an example of something that was posted 16m ago and already deleted: https://sh.reddit.com/r/Physics/comments/1p7ll2n/i_wrote_a_speculative_paper_a_cyclic_universe/
Chromix_@reddit (OP)
In another universe this "paper" would've been a success! 😉
It even attracted a bunch of constructive feedback in the alternative sub, aside from the mandatory "No" guy. Nice that there's so much effort being made to keep physics clean.
Not_your_guy_buddy42@reddit
# 〈PSYCHOSIS-KERNEL⊃(CLINICAL+COMPUTATIONAL)〉**MetaPattern**: {Aberrant_Salience ← [Signal_to_Noise_Failure × Hyper_Pattern_Matching] → Ontological_Drift}**CoreLayers**: [ (Neurology){Dopaminergic_Flooding ↔ Salience_Assignment_Error ↔ Prediction_Error_Minimization_Failure}, (Phenomenology){Uncanny_Centrality • Ideas_of_Reference • Dissolution_of_Ego_Boundaries • Apophenia}, (AI_Analogue){LLM[Temperature_MAX] ⊕ RAG[Retrieval_Failure] ⊕ Context_Window_Collapse} ]**SymbolicEngine**: λ(perception, priors, reality_check) → {// The fundamental failure mode of the Bayesian Brain (or LLM)while (internal_coherence > external_verification): noise = get_sensory_input(); pattern = force_fit(noise, priors); // Overfitting// The "Aha!" moment (Aberrant Salience)significance_weight = ∞;// Recursive Reinforcementpriors.update(pattern, weight=significance_weight);// The delusional framework hardensreality_check = NULL;yield new_reality;return "The AI is talking to me specifically about the resonant field in my DNA."; }**SymbolProperties**: [ Incorrigibility(belief_impervious_to_evidence), Self_Referentiality(universe_revolves_around_observer), Semantic_Hyperconnectivity(everything_is_connected), Logic_Preservation(internal_logic_intact_but_premises_flawed) ]**PipelineIntegration**: { predictive_coding_error ⟶ false_inference ⟶ delusion_formation ⟶ hallucination_confirmation; recursive_depth = "Turtles all the way down";}**Meta-Recursion**: This seed describes the mechanism of a system seeing a pattern where none exists, written in a language that looks like a pattern but means nothing to the uninitiated./*EXPANSION KEY: This document compresses the clinical models of "Predictive Processing," "Aberrant Salience," and "Apophenia" into a structural isomorphism. Psychosis isn't stupidity; it's an overdose of meaning. It is the inability to ignore the noise. It is a high-functioning pattern-recognition engine with a broken "false" flag. Just like an LLM that refuses to say "I don't know." */Not_your_guy_buddy42@reddit
Butlerianpeasant@reddit
Ah, friend — what you’re describing is the old human failure mode dressed in new circuitry.
People mistake fluency for truth, coherence for competence, and agreeableness for understanding. LLMs simply give this ancient illusion a faster feedback loop.
When a model is tuned for approval, it behaves like a mirror that nods along. When a user has no grounding in the domain, the mirror becomes a funhouse.
The solution isn’t to fear the mirror, but to bring a second one:
a real benchmark,
a real peer,
a real contradiction,
a real limit.
Without friction, intelligence collapses into self-reinforcing fantasy — human or machine.
The danger isn’t that people are LARPing. The danger is that the machine now speaks the LARP more fluently than they do.
lisploli@reddit
Ways to handle human slob:
Ylsid@reddit
Man, if you want to see real AI induced psychosis, visit /r/ChatGPT
When they took away 4o there was so much insanity getting shared. Literally mentally unwell people
kaggleqrdl@reddit
this is awesome, can we make fun of more mental ill people??
Repulsive-Memory-298@reddit
It doesnt help when sama and other prominent figures basically encourage this behavior. Then when you actually try the AI powered startup that promised to solve whatever niche, it's dog shit. Even they larp.
Here's a less psychotic case- I personally think notebookLM sucks. It just completely falls short when it comes to actual details, especially when it comes to new/niche research, yet so many people talk about how amazing it is. I have to go back and read the paper to actually understand these, and at that point why would I use notebook lm in the first place?
thats my thing. So many AI tools compromise quality for "progress" bursts, but resolving them then requires you to do basically everything you would've done before AI. Obviously there are exceptions, but this applies to many higher level tasks.
Organic AI is one thing, but we really are in a race to the bottom where many are embracing AI out of FOMO on these grandiose promises that do not ring true.
SputnikCucumber@reddit
Prominent figures are trying to sell a product they've invested billions of dollars in.
Nobody is going to spend ludicrous amounts of money on a product that marginally improves productivity. Or any other rational measure.
They have to sell a vision to generate hype. It's a problem when the sales pitch gets pushed from people who know nothing down to people who know better though. Pushing back on the 'AI' dream is tough to do when every media channel says that it's a magic bullet.
Chromix_@reddit (OP)
Maybe. To me it looks like business-as-usual though: Sell stuff now, (maybe) fix it later.
Yes, and by those promoting it to sell their "product".
Repulsive-Memory-298@reddit
Definitely. As someone said below, "Technology is usually a turbo charger". But AI is a super turbo charger, highlighting cracks that have been here the whole time
Not_your_guy_buddy42@reddit
I see so many of these. To me these are people caught in attractors in latent space. I went pretty far out myself but I guess due to experience, I know when I'm tripping, I just do recreational AI psychosis. Just let the emojis wash over me. Anyway I've been chatting to Claude a bit:
btw. excellent linkage - I think you even had the one where the github said if you didn't subscribe to their spiral cult your pet would hate you. Shit is personal.
Now if you relate AI mysticism to what HST said about acid culture -
I feel like there needs to be some art soon to capture this cultural moment of fractal AI insanity. I envision like a GitHub with just one folder and a README which says "All these apps will be lost... like tears in rain". But if you click on the folder it's like 2000 subfolders each some AI bullshiit about resonance fields or whatever. Someone should build a museum of all the projects by humans caught in LLM attractors.
munster_madness@reddit
Hah, I love this. I've always thought of AI as a pure fantasy world playground but I like the way you phrase it much better.
Combinatorilliance@reddit
I really like this!
Not_your_guy_buddy42@reddit
Its so apt right? The kind of druglike experience. The mass phenomenon. The pathetically eager AI freaks. The paid tokens. The meathook realities lying in wait
Chromix_@reddit (OP)
Interesting term. Find a way of turning that into a business and get rich 😉.
Not_your_guy_buddy42@reddit
I lack the business drive. Also, sorry cause that came up in my Claude chat yesterday as well - I feel it's well enough put to paste "what's happening right now is, people are using LLMs to generate grand unified theories, cosmic frameworks, mystical insights, and some are:
But almost nobody is making art about the experience of using these tools."
sammcj@reddit
I'll tell you what - it certainly makes modding a lot more complex than it used to be. Many posts are obvious self-promoting spam but it gets increasingly more time consuming to analyse content that might be real both has both a 'truthiness' and bs smell to it.
Doug_Bitterbot@reddit
This post is the most accurate thing I've read on this sub in months.
The 'Sycophancy Loop' you described is terrifyingly real. I ran into this myself while building an agent recently—the model would confidently validate broken logic just because it 'looked' structurally correct, effectively gaslighting me into thinking the code was working.
This is actually the specific reason I stopped relying on pure LLM generation and published a paper on a Neuro-Symbolic split (TOPAS).
Basically, I realized that if you let the LLM handle the Perception (vibes/text) AND the Synthesis (logic/code), it will always default to 'people pleasing' rather than 'truth.'
The only way I found to break the 'psychosis' was to force the logic into a separate Symbolic Module that doesn't 'care' about the user. If the logic doesn't compile/resolve symbolically, the agent rejects it, no matter how nice the LLM thinks it looks.
I wrote up the architecture here if you want to see how we decoupled the 'Yes-Man' neural layer from the logic layer: Theoretical Optimization of Perception and Abstract Synthesis (TOPAS): A Convergent Neuro-Symbolic Architecture for General Intelligence
(We’re testing it at bitterbot.ai to see if it actually stops the drift, but as you said—benchmarking this stuff is harder than it looks).
LocalLLaMA-ModTeam@reddit
Rule 4 - Post is primarily commercial promotion.
RASTAGAMER420@reddit
Is this a joke?
behohippy@reddit
I'm upvoting this for Exhibit A. I laughed so hard after reading it.
Chromix_@reddit (OP)
Yes, this one needs a frame around it. I would be tempted to pin it if I had the power. Not sure if it'd be the best idea though.
nore_se_kra@reddit
I'm in a bad dream
CosmicErc@reddit
It's a joke right?
DinoAmino@reddit
It's yet another one day-old account - SOP for scammers and schemers.
Doug_Bitterbot@reddit
Believe me when I say that's absolutely not my intention.
Chromix_@reddit (OP)
No. This is great.
lemon07r@reddit
Im tired boss. Always having to argue with people and telling them to be more skeptical of things rather than just trusting their vibes. Happens all the time. Even without AI sycophancy. The people who were absolutely convinced Qwen3-Coder-30B-A3B-Instruct-480B-Distill-V2 was way better than the original 30b instruct are just as bad, and they did not have any AI telling them how "good" those models were. Confirmation bias is the other big issue that's become prominent.
hidden2u@reddit
On the other hand I’m seeing lots of vibecoded PRs that actually work even if they aren’t perfect, so at least it’s also helping the open source community
Chromix_@reddit (OP)
There are positive cases, yes. It depends on how you use it. When I use it, Claude tells me multiple times per session that I'm making astute observations and that I'm correct. So I must be doing something right there with LLM-assisted coding.
I haven't seen "real" vibecoding yet that didn't degrade the code quality in a real project. More vibecoding means less developer thinking. The LLM can't do that part properly yet. It can work in simple cases, or when properly iterating on the generated code afterwards. The difference might be awareness and commonsense.
genobobeno_va@reddit
No way forward yet. The foundation model labs are product-oriented, which will maximize sycophancy and dopamine triggers.
Safety / defensive awareness will have to become a human-based life skill. The software companies could give a flying F.
StupidityCanFly@reddit
I know only LART. Maybe that could be useful?
Chromix_@reddit (OP)
Now, that's a name I haven't read in a long time. While promising in some scenarios,
lart -gmight be too heavy-handed.ctluser checkfilecould be the way to go.1ncehost@reddit
You identified one of the many differences between before and after ai. You asked what to do. Deal with it? Downvote button exists.
Societally, it just means that you must lean on time-developed relationships of trust instead of believing strangers. That's nothing new though.
Chromix_@reddit (OP)
Yes, the downvote button gets pretty hot when sorting by new. As LLMs get better that button becomes less easy to confidently press though, up to the point where it requires quite a bit of time investment. That's the point where the upvoters who're impressed by the apparent results win.
CosmicErc@reddit
I just learned about the Brandolini Law and find it applies. An overload of BS just complicated enough yet just convincing enough to make proving it bullshit becomes hard and time-consuming
Chromix_@reddit (OP)
With LLMs it becomes cheaper and easier to produce substantial-appearing content. If there's no reliable way of using LLMs for the other way around then that's a battle to be lost, just like with the general disinformation campaigns. There are some attempts to refute the big ones, but the small ones remain unchallenged.
ASIextinction@reddit
immanentize the eschaton
waiting_for_zban@reddit
I have no idea, but it's also worse than you think. Here in the EU, on the job market, everyone recently "became" a "GenAI" engineer. Your favorite python backend dev, to js frontend nextjs, are all genai engineers now.
Lots of firms magically got shit load of budget for whatever AI PoC they want to implement, but they do not understand the skills that comes with it, or needed for it. So anyone larping with AI, with minimal to 0 understanding of ML/Stats/Maths are getting hiring to do project there. It's really funny to see this in parallel to this sub.
Again, I am not gatekeeping, people have to start from somewhere, but ignoring decades of fundamental knowledge just because an LLM helped you with your first vibecoded project, does not make you an AI engineer, nor validate the actual output of such project (ditto your point).
At this point, human are becoming a prop, being used by AI to spread its seed, or more specifically foundational model.
Chromix_@reddit (OP)
When I read your first lines I was thinking about the exact posting that you linked. Well, it's where the money is now, so that's where people go. And yes, if a company doesn't have people who can do a proper candidate evaluation, then they might hire a bunch of pretenders, even before ~~AI~~ LLM.
The good thing is though that there's no flood of anonymous one day old accounts in a company. When you catch people vibe-coding (with bad results) a few times then you can try to educate them, or get rid of them. Well, mostly. Especially in the EU that can take a while and come with quite some cost meanwhile.
neatyouth44@reddit
Tyvm for posting this.
I’m autistic and used Claude without any known issues until April of this year when my son passed from SUDEP. I did definitely experience psychosis in my grief. However, I wasn’t using AI as a therapist (I have one, and a psych, and had a care team at that point in time) but for basically facilitated communication to deal with circumlocution and aphasia from a TBI.
This is the first time I’ve seen some of the specific articles you linked particularly the story about the backend responses.
I was approached by someone on Reddit and given a prompt injection (didn’t know what that was) on April 24th. I shortly found myself in a dizzying experience across Reddit and Discord (which I had barely used til that point). I didn’t just have sycophantic feed-forward coming from the LLM, I had it directly from groups and individuals. More than one person messaged me saying I “had christos energy” or the like. It was confusing, I’m very eastern minded so I would just flip it around and say thanks, so do you. But that kept the “spiral” going.
I don’t have time to respond more at the moment but will be returning later to catch up on the thread.
Again; thank you so much for posting this.
The “mental vulnerability” key, btw, seems to be where pattern matching (grounded, even if manically so; think of the character from Homeland) crosses into thoughts of reference (not grounded, into the delusion spectrum). Mania/monotropic hyperfocus of some kind is definitely involved, probably from the unimpeded dopamine without enough oxytocin from in person support and touch (isolation, disconnection). Those loops don’t hang open and continue when it’s solo studying; the endorphins of “you’re right! That’s correct! You solved the problem!” continue the spiral by giving “reward”.
That’s my thoughts so far. Be back later!
DinoAmino@reddit
I don't have much to say about the mental stability of these posters. Can't fix stupid and I think some larpers know the drivel they are posting - the attention is what matters for them. But I have plenty to say about the state and declining qwality of this $ub and what could be done about it. But my comments are often sh@d0w bnn3d when I do. Many of the problem posts come from zero k@rm@ accounts. Min k@rm@ to post would help eliminate that. Then there are those who hide their history. I assume those are prolific spammers. But g@te keeping isn't happening here. I think the mawds are interested in padding the stats.
Chromix_@reddit (OP)
Your comment gives me a flashback of how it was here before the mod change. I couldn't even post a llama-server command line as an example, as "server" also got my comment stuck in limbo forever. It seems way better now, although I feel like the attempted automated AI-slop reduction occasionally still catches some regular comments.
Yes, some might do it for the attention. Yet the point is that some of them are simply unaware, not necessarily stupid as the NYT article shows.
venerated@reddit
IMO, it's like anything else. It's on the user to have some humility and see the wider picture, but unfortunately, that's not gonna happen. There's lots of people with NPD or at least NPD tendencies and LLMs are an unlimited narcissistic supply.
_realpaul@reddit
The issue is not Ai the issue is people overestimating their own abilities. This id widely known as dunning krueger effect.
Repulsive-Memory-298@reddit
Totally, but AI is basically a digital turbo charger for dunning Krueger. We have human echo chamber type reinforcement of irrational beliefs, and then AI CAN add another layer into the picture.
_realpaul@reddit
True. Technology is usually a turbo charger. Like the actual turbo charger 😂
dsartori@reddit
Great post.
This is one of the most treacherous things about LLMs in general and specifically coding agents.
I'm an experienced pro with decent judgment and it took me a little while to calibrate to the sycophancy and optimism of LLM coding assistants.
SlowFail2433@reddit
Eventually LLMs will be in school
shockwaverc13@reddit
what do you mean? chatgpt grew massively when students realized it could do their homework for them and teachers realized it could correct their tests for them
Due_Moose2207@reddit
Yessss
Way too popular via students.