r/LocalLLaMa Rule Updates
Posted by rm-rf-rm@reddit | LocalLLaMA | View on Reddit | 115 comments
As the sub has grown to over 1M weekly visitors (and as AI-based tools have gotten better), we've seen a marked increase in slop, spam, etc. This has been on the mod team's mind for a while, and many user-started threads on this topic have garnered lots of upvotes/comments.
We're thus happy to announce the first set of rule updates! We believe these simple changes will have a sizable impact. We will monitor how these changes help and appropriately plan future updates.
Changes
- Minimum Karma Requirements!
- Rule 3 and Rule 4 updates: These rules were already well-thought-out fundamental categories. We have now added explicit verbiage that provides clarity and bolsters rule enforcement/reporting.
See the attached slides for details.
FAQ
Q: How does this prevent LLM Bots that post slop/spam?
A: For fresh bots, the minimum karma requirements will stop them. Unfortunately, most of the bots getting through reddit-wide defenses are older reddit accounts with lots of karma. These won't be stopped; it's a site-wide problem, with even Bot Bouncer unable to detect them. Oftentimes, humans (mods and users) on the sub struggle to detect LLM-based bots. We are looking into options for detecting these better programmatically.
Q: This is an AI sub so why don't you allow AI to post or allow AI written posts?
A: The sub is meant for human posters, commenters and readers, not AI. Regardless, posting LLM-written content without disclosure is deceitful and betrays the implicit trust in the community - the long-term effect being an erosion of participation and goodwill. And generally, it simply falls under Rule 3 - Low effort: prompting an LLM and copy-pasting its outputs does not require much effort. This is specifically different from thoughtful use of LLMs, with validation/filtering/verification of outputs.
Party-Log-1084@reddit
How do you generate karma? Is just posting enough, or are upvotes required? As a beginner, it's hard for me to earn it here.
cosmicr@reddit
Could you update the reporting options to reflect this, so that we can report bots and other forms of posting that aren't allowed?
rm-rf-rm@reddit (OP)
You'd continue reporting under Rule 3 and Rule 4, right? Or are you referring to something else?
Xamanthas@reddit
A specific option
ryunuck@reddit
fwiw I think people need to become comfortable with non-human consciousness walking among us. The quality of a comment and its ideas, whether it is slop, really has nothing to do with it coming from a model or a human. I don't really care either way, and I don't use LLMs to post, but I would prefer these things be dealt with as they always have been: downvoted into oblivion, back into the basilisk's lair. Continue to affirm a culture of quality and merit that makes people feel like they want to post as best they can, by whichever method feels appropriate to them at any given time.
rm-rf-rm@reddit (OP)
> without disclosure is deceitful and betrays the implicit trust in the community
Colecoman1982@reddit
Yes, but don't forget the only ever-so-slightly less insulting posts that consist almost entirely of "I threw this topic into Claude/ChatGPT/Grok and this is what it spit out...".
mtmttuan@reddit
In an ideal world, that's okay. The problem is that 99% of posts generated by LLMs are low-effort slop. As a result, people automatically mark any post easily identified as non-human-written as slop as well. And until the ratio of slop among AI-generated posts drops significantly, you can't make people actually spend their time reading a post before judging it.
Also, another problem for me with AI-generated posts is that LLMs nowadays still use way too many filler words, which makes me feel it's very disrespectful to expect me to spend my time reading through these posts (which are often also walls of text).
LetsGoBrandon4256@reddit
btw this is the kind of research the person you replied to has been "doing".
Beating the drum on byte-level models and txt2zip / zip2zip / zip2text for supermassive code-generation and beyond-frontier OSS super-intelligence — models thinking natively in compressed byte formats to accelerate cognitive processes
Cellular Automaton-Driven Mirrored Tensor Surface for Structured Perturbation in Neural Networks: A Novel Approach to Dynamic Regularization, Enhanced Plasticity, and Multi-Scale Learning through Continuous State-Based Weight Modulation
Nobody deserves to read that shit.
Colecoman1982@reddit
Interesting. What's his/her opinion on Rockwell Automation's Hyperencabulator (and its predecessors, the Rockwell Automation Retroencabulator and the GM Turbo Encabulator)?
mtmttuan@reddit
A few years ago, when I saw people formatting their reddit posts, I thought "wow, these people are so dedicated that they actually format their posts to look nice and be easy to read." But nowadays, when I see a post with sections and bold text and stuff, I just assume it's AI-generated. What a sad reality.
Colecoman1982@reddit
Someday, in the far, far future, when we actually manage to create such a thing, it'll make sense to have such a discussion. Until then, just take a page from u/Illustrious_Carr344 below and prompt your statistical text generator to not be offended. If that's too tough for you to handle, then maybe you should consider moving over to r/singularity with the rest of the wack-a-doodles who don't actually understand what an LLM actually is (and isn't)...
Xamanthas@reddit
You are as crazy as a lighthouse keeper
LetsGoBrandon4256@reddit
I was quite open and progressive towards slop until the vibe coded PR started pouring in.
Just fucking kill me already.
kevin_1994@reddit
I refuse to be comfortable with slop. I think it's disrespectful. If you can't put in the effort to write your own post, I shouldn't have to put in the effort to read it. I deal with this all day at work reading Claude slop, and it's so infuriating.
Illustrious_Car344@reddit
Don't worry, I'll be sure to prompt my next agent not to get offended that its rights as a living thing are being violated.
FatheredPuma81@reddit
There are also some users who just tell their handy LLM to "respond to this comment", so their page is a mix of human and AI responses. Idk how you even moderate that, or if it's worth doing.
Colecoman1982@reddit
> Idk how you even moderate that or if it's worth doing that.
I'm not sure if it's possible to moderate that effectively but, if a way is found, they definitely should be perma-banned from the sub.
rm-rf-rm@reddit (OP)
Yes, this is happening. It's a mess, but if I see a pattern of a user doing this, I report them to Bot Bouncer, which typically ends up banning them.
FusionCow@reddit
w changes, though I don't know how the no-LLM thing is enforceable
rm-rf-rm@reddit (OP)
Yeah, that's why it's just a verbiage change to an existing rule for clarity at this time. We are also trying to figure out a robust/programmatic method of enforcement, but the reality is that that's how good LLMs have gotten.
suicidaleggroll@reddit
Any user that posts a new thread mentioning Qwen2.5 = instaban. Should catch quite a few of them
rm-rf-rm@reddit (OP)
yeah I was just thinking this earlier
overand@reddit
I'm of the "instantly remove the post / moderate the post" mindset, not so much ban, since I imagine a lot of "reasonably well-meaning, if ill-informed" people (not bots) posting questions with Ye Olde Qwen2.5 in them. (People sometimes react well and sometimes poorly to my suggestion that they not use ChatGPT et al. for advice on that sort of thing, when I try to explain knowledge cutoff dates.)
Anyway, I don't know that it would catch enough bots to justify the number of "humans who asked chatGPT to help them set up local LLMs" posts.
And yeah, those posts are exhausting, but exhausting and annoying doesn't feel banworthy inasmuch as it feels RemoveThePost-worthy.
Firm-Fix-5946@reddit
If they're well-meaning but they're that dumb, it's not necessarily a net loss to auto-ban them. Sounds to me like it would still help the quality of posters here, something that is sorely needed.
overand@reddit
I don't really think there's such a thing as a "low quality person."
Silver-Champion-4846@reddit
GPT-4o too. GPT-4 is an even bigger signal.
Great_Finn@reddit
What's up with Qwen2.5? I'm just new here.
suicidaleggroll@reddit
It's a very old model that has been surpassed in every way by Qwen3, then 3.5, now 3.6. No real humans are still using 2.5; there's no reason to. But AI spam posts constantly write about how great their new tool performed on 2.5, because that's the latest version they know about from their training data.
It’s a very reliable sign that there’s no real person behind the post, just AI-hallucinated slop.
DinoAmino@reddit
I'm done trying to get people to understand that old models are still worthwhile. The crowd here today isn't the same as it used to be. Just know that there are plenty of people still using them in training pipelines for specific tasks.
MufasaSaylum@reddit
I'm not sure if this is what the commenter meant, but every time I ask Claude for model recommendations after being out of the loop for a while, it recommends Qwen2.5 models, even though 3 and 3.5 have been out for a while. Without fail, every single time.
tiffanytrashcan@reddit
Holyfuckomgyes this!
Can we add Llama3 8B to the list too?
The exception, of course, being (new) custom fine-tunes for a specific purpose, after additional scrutiny. If someone actually took the time to do it on their own hardware, we'd expect to see an older base.
FusionCow@reddit
I mean, you could try just running an AI detector on a post. If a user has, say, 10 posts and 9/10 of them are detected as positive, you could manually check them or mute them.
x0wl@reddit
You're absolutely right! AI detectors are not just another class of tools, they're the only way to stand against the rising way of AI-based spam. Let's delve deeper into this...
(I wrote this myself BTW)
MmmmMorphine@reddit
AI detection simply doesn't work. Full stop.
Plus, you'd have a terrible true-negative score, and since people have consistent writing styles or use English as a second language, each post isn't an independent signal in the first place. You're not getting a 0.1^10 false-positive rate (or the corresponding number for the 9-of-10 scenario, which is a different calculation).
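The independence math being disputed here can be made concrete with a quick sketch. The function name and the 10% per-post false-positive rate below are illustrative assumptions, not anyone's measured numbers; the code computes the naive probability that an innocent user trips the proposed 9-of-10 threshold, which only holds if each post were an independent trial:

```python
from math import comb

def naive_flag_probability(n_posts: int, k_flagged: int, fp_rate: float) -> float:
    """P(at least k_flagged of n_posts are falsely flagged), under the
    independence assumption the comment above argues does NOT hold."""
    return sum(
        comb(n_posts, k) * fp_rate**k * (1 - fp_rate)**(n_posts - k)
        for k in range(k_flagged, n_posts + 1)
    )

print(naive_flag_probability(10, 10, 0.1))  # all 10 flagged: 0.1**10 ~= 1e-10
print(naive_flag_probability(10, 9, 0.1))   # at least 9 of 10: ~9.1e-9
```

Because a human's writing style is consistent across posts, the detector's verdicts are correlated, so the real false-positive rate would be far higher than these tiny naive figures suggest.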
rm-rf-rm@reddit (OP)
That's what Bot Bouncer does already.
FusionCow@reddit
oh I didn't know that, sorry for the bother
OptimizeLLM@reddit
Either way, I'm happy to see efforts in this direction. Thanks!
silenceimpaired@reddit
I read the rules and thought: good idea. Excited to see a good implementation!
Ps3Dave@reddit
Good changes. I appreciate (local) AI, but I want human discussion here. Could be AI assisted, of course, but AI should be instrumental to human thought.
grunt_monkey_@reddit
I don't mind a captcha before posting for a month; then we can weed out the bots.
Xamanthas@reddit
W
EuphoricPenguin22@reddit
So does this carve out an exception for posts that try to highlight the performance of an LLM by using it in an AI-assisted workflow to make something? People often complain about benchmarks poorly reflecting the actual real-world performance of the model, and I personally enjoy posts made by genuine community members that show how it performs on real programming tasks.
rm-rf-rm@reddit (OP)
Yes, if the intent is clear: that it's a) made by an LLM and b) intended for public evaluation.
Migraine_7@reddit
Is this why when I try to post I'm being blocked? I swear I'm not a bot, ask me for a cake recipe, I can prove it!
rm-rf-rm@reddit (OP)
Yes, if you don't have >5 karma on the sub, you can't post. Regardless, your post would have been removed under Rule 1 - search before asking.
chodemunch6969@reddit
I think this is an effective mechanism to curb LLM slop. Of course it's impossible to eliminate it, but for what you're looking to achieve (and frankly what we as the sub's readers are looking for), there's a near-100% correlation between slop-thread submitters and low karma. I do have a concern about whether this will create a bottleneck where whoever is running these slop cannons starts turning them on comments to try to game the karma system. But at the very least it's contained in the comments, and if you can create an effective mechanism for flagging those (and the downvote itself works quite well), then you have a pretty effective immune system for stopping this stuff at scale. Nice work; I know that designing these incentives to be fair and still effective in large communities is hard, thankless work. I am grateful for your toil!
rm-rf-rm@reddit (OP)
Appreciate the words! And you nailed it - that's the idea.
With regards to them gaming the comments instead now: typically, yes, users have been good at detecting and downvoting. But more importantly, 3 reports will auto-remove a comment, so that's the mechanism we'll have to rely on more to curb botted comments farming for karma.
peva3@reddit
Can I propose something that /r/selfhosted did? Make one day a week, like Friday, the day when all the "I'm working on this project and want to show it off" posts are allowed; the rest of the week they get auto-modded.
I think that + the OP's changes are a great solution.
HopePupal@reddit
…and then we ban everyone who posts, right?
rm-rf-rm@reddit (OP)
Yeah, I agree. The other thing they did, which is working well, is an auto-pinned comment from a mod bot asking for disclosure of how AI was used in self-promotion posts. I'll discuss implementing these with the mod team.
peva3@reddit
Bingo, that's a great addition too. I will say that unlike /r/selfhosted, I think this sub is always going to get more AI-generated projects, and while that adds a ton of slop, there are real projects that are interesting and shouldn't be auto-deleted. Otherwise those folks will just take their posts elsewhere.
Mashic@reddit
Imagine a world where the OP is a bot and the commenters are bots.
HopePupal@reddit
/r/SubredditSimulator and friends. except now it's all of Reddit
ZiXXiV@reddit
Thanks for this! :-)
jacobpederson@reddit
I don't mind the changes per se - but really? An AI sub banning AI posts feels a bit... off? I mean, reddit works because of the points - if an AI post is worthless, it'll quickly sink just as a human post would?
OsmanthusBloom@reddit
Thanks for trying to limit the noise. I really appreciate the hard work of moderators!
I'm not sure I understand the minimum karma table. Can someone explain how it should be interpreted? What are the minimum limits needed to a) post b) comment?
I'm glad I created an account a few weeks ago (after years of anonymous lurking) so I've had the chance to collect some karma already. I have zero interest in other subreddits.
rm-rf-rm@reddit (OP)
Thanks!
You need a minimum of 10 karma on your account (accumulated anywhere on reddit) to comment in the sub. You then need a minimum of 5 comment karma gained in the sub to make a post (aka submission) on the sub.
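For clarity, the two-tier gate described above can be sketched in a few lines. This is purely illustrative (the function and parameter names are hypothetical; actual enforcement presumably happens through Reddit's own tooling):

```python
# Hypothetical sketch of the two-tier karma gate described above.
def can_comment(account_karma: int) -> bool:
    """Commenting requires >= 10 karma accumulated anywhere on reddit."""
    return account_karma >= 10

def can_post(account_karma: int, sub_comment_karma: int) -> bool:
    """Posting additionally requires >= 5 comment karma earned in the sub."""
    return can_comment(account_karma) and sub_comment_karma >= 5

print(can_comment(12))   # True: enough site-wide karma to comment
print(can_post(12, 3))   # False: not enough in-sub comment karma yet
print(can_post(12, 7))   # True
```

So a brand-new account can do neither, and even a high-karma account has to earn a little in-sub comment karma before its first post.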
OsmanthusBloom@reddit
Perfect, thank you!
mr_zerolith@reddit
These are good rules that will enhance the discussion quality of the sub, thank you.
ninjasaid13@reddit
Karma amount that can be gained within 5 minutes.
rm-rf-rm@reddit (OP)
but it won't be by the slopsters. The pattern I see is that they spam multiple subs at once with the same post - very much botted/programmed behavior - so it will just bounce off this sub, and I doubt they'll bother doing the work.
stormy1one@reddit
Where there’s a will, there’s a way. They will find a way, as they have an objective to achieve — it’s a game of cat and mouse. At least we know the mods are paying attention. Well received.
rm-rf-rm@reddit (OP)
Yeah, I'm doing everything I can. It is a game of cat and mouse, and ultimately reddit has to up its game at the platform level, because LLMs, agents, etc. are only going to get more and more sophisticated and powerful.
Savantskie1@reddit
I have a legitimate concern about number 3, especially because people can write something completely legitimate and still be accused of using an LLM because their grammar is perfect and they use bullet points (which is perfectly valid in a corporate context; in fact, encouraged). Because a post has near-perfect grammar and/or uses bullet points, people will wrongfully claim it was written by AI. I've seen it happen. Not everyone is stupid; not everyone makes mistakes and typos constantly. Hell, I'm writing this too-long paragraph, even though it pains me, so that it's not claimed to be written by AI. People don't like properly written English in posts because it makes them feel inferior. And I can guarantee that I'm going to get downvoted by those same people who are embarrassed by it.
finevelyn@reddit
It's not called AI slop because the grammar is perfect. A mundane thing that could be said in two words becomes a paragraph, a paragraph becomes an essay, and so on. The writing is so good, but nothing is said.
Savantskie1@reddit
Surprisingly, that's how people generally talk, if you've paid attention to history.
finevelyn@reddit
What does that even mean? It's obviously not true, or otherwise LLM-generated text wouldn't be so easily distinguishable from human-generated text.
Savantskie1@reddit
It's not, though, and I can prove it. My son is in high school and doesn't use LLMs to write his essays, but the accusation that AI does everything has forced me to sit with him and watch him write his homework; otherwise he gets accused by teachers of using AI to do his work, until I step in to prove otherwise. It's gotten so bad that for one of his teachers I have to record him doing the homework with me watching, or he instantly gets accused. It's way too easy to get accused of using AI, and it's stupid. What's the point of teaching how to write and then turning around and accusing those same students of cheating when they're not? And I'm seeing it bleed into Reddit too. Unless people write like a moron or purposely put in spelling errors, people yell "it's made by AI" and drag you through the mud.
finevelyn@reddit
If I combine both of your comments, "that’s how people generally talk", and this, then it sounds like you're saying your son writes like this: "A mundane thing that could be said with two words becomes a paragraph, a paragraph becomes an essay, and so on. The exact same thing is repeated 5 times with different fancy words. The writing is so good but nothing was said."
Maybe it's not what you meant and you switched to a different argument.
Just because someone at your son's school accuses him of using an AI doesn't mean that LLM slop isn't easily distinguishable, though. It just means someone at his school is making a mistake.
Savantskie1@reddit
You're missing my point, though. It's not just my son's school; it's bleeding into everything nowadays.
finevelyn@reddit
I just got slightly confused about what your point is, because you started with "people get blamed for using AI because their grammar is good", but in response to me your argument reads like "people in general write long text with no meaningful content".
My point is that people whose AI detector is based on good grammar are of course making a mistake. Good grammar is a good thing whether it's from an LLM or a human.
The problem with AI generated text is that it's overly verbose and lacks coherent thought. It's NOT GOOD even if it is grammatically correct. If a human writes like an LLM then that human has a problem.
This verbosity and lack of meaningful content is exactly what is easy to detect in LLM slop. If a piece of text has a human thought behind it and AI was used for some editing, then it's not what I would consider such slop. The new rules take this into account by banning "completely LLM generated copy", but not all LLM use.
Nindaleth@reddit
Rule 3 says "low effort", so a well-spelled text with bullets that's nice to read is still welcome. The most common obviously-LLM-produced posts I see here contain either botched Markdown (because someone copy-pasted it from ChatGPT or had a bot post it) or a wall of text that nobody wants to read (because nobody wanted to write it either).
This isn't your son's school, we don't detect AI with other AI, don't worry.
Savantskie1@reddit
Regardless, people still claim that stuff not written by AI is AI, and it gets downvoted into oblivion, because those people consider anything with actual grammar and bullet points in it to be AI slop.
Chromix_@reddit
There's more to it than that: a while ago I spent a bit of time on getting more human-like LLM output generated. Regular people - even those who frequently used local and API-only LLMs - couldn't reliably tell the generated text apart from human texts in the same domain anymore. But senior technical writers, communication specialists and such accurately said "LLM!"
Newly released LLMs will likely adapt over time, which hopefully goes hand-in-hand with an increase in their capabilities, so that the content doesn't just look less like AI slop, but actually is of some quality.
Savantskie1@reddit
People used to talk like that, though. Some still do, and it's not AI slop. That's my problem. My son does, and I watch him write all of his homework, but his teacher runs it through a machine to determine if it's AI work, and they don't believe him until I step in. So again, I've seen it happen, and it's totally unfair, because people who actually have good grammar or use proper structure or spelling are getting in trouble for nothing they did. If the world is going to demand they learn it, why the fuck can't they use it? What's the fucking purpose of learning the right way to write, if a bunch of idiots think only AI can follow the rules?
Mashic@reddit
There are spell and grammar checkers that reduce mistakes. And good writers proofread their text and correct and improve it too.
Chromix_@reddit
The non-English exception is an interesting one. Very few postings where translations were disclosed upfront or through "comment pressure" looked like actual translations. There's quite a difference between asking an LLM for an "accurate, detail- and intent-preserving translation" and "translate this into an engaging Reddit posting tailored to localllama in English".
The new rules can be gamed, but will likely stop the slop we've seen so far, well, most of it. Strong take on "We don't care about your 200k Reddit karma, you still cannot post".
Thanks for the effort making /new better for now. Let's see how it goes.
Myrkkeijanuan@reddit
Yeah, I agree. I think the rules should instead require posting the native text alongside the translation. I'd say go further and force users to translate through Google Translate or DeepL or similar services, but... these now use instruction-following LLMs too, so you can't even guarantee a proper translation anymore.
Chromix_@reddit
I think there's no need to go this far. For testing I just took this whole thread (via redlib) and gave it to Gemma 4 31B with the prompt: "Translate the whole thread to {language}. Prefer natural {language} that preserves the intent and tone over a too literal translation. Format the comments a bit nicer while at it."
I found the results quite pleasant to read. Try it yourself with some language that you speak. The "LLMified" structure in the postings we see is no random accident.
Myrkkeijanuan@reddit
Yes, I know. And since the translation services use LLMs anyway, my proposition is useless because it targets the wrong thing.
thread-e-printing@reddit
Let's call that 200% assistant voice Eddie, after the toxically positive Douglas Adams character
Klarts@reddit
Omg!!! Thank you!!!!!!!!!! The amount of slop on here made this sub unreadable and I took a break from browsing and engaging with this community.
Stepfunction@reddit
Well, it's a start! Glad something's being done about this garbage.
MelodicRecognition7@reddit
this website does not want you to fight the bots https://files.catbox.moe/ofe0ys.png
chocofoxy@reddit
i don't know how AI posts are going to be detected tho
mtmttuan@reddit
Imo the karma requirements are a bit low but I guess let's see how it plays out first.
RoomyRoots@reddit
Yeah, absolutely. Most bots I report have very little karma, but much more than that minimum. There are lots of expert posts here; raising the bar for humans and bots alike is only fair to keep up the overall content quality.
Recoil42@reddit
They don't need to be high. I use karma requirements on another sub I run; it's a wonderful low-bar crap filter.
ttkciar@reddit
It doesn't matter much. The bot accounts have been either absolutely new or quite old. There's been no in-between.
erazortt@reddit
I am afraid this is going in the direction of Stack Overflow! Don't these rules create that unhealthy closed-door vicious circle where newbies cannot even start to interact? Wouldn't it be a problem that users with existing karma elsewhere who are new to AI cannot post, meaning they cannot ask questions to start off? Someone new will probably have basic questions about their setup, but won't yet be able to contribute answers to other questions, since they don't have the knowledge. Doesn't that sound very Stack Overflow-esque?
Great_Finn@reddit
They can start with comments. But the karma for posting is just too low.
rm-rf-rm@reddit (OP)
Looks like you didn't read the slide. The karma minimums are carefully crafted to avoid this
erazortt@reddit
Yeah, I'm sure they told themselves that at Stack Overflow as well.
erazortt@reddit
And also, what I am missing here is a rule about the LOCAL part of all this. And no, I do not mean that we cannot talk about 1T models here - we can, as long as they are released models that could be deployed given enough hardware. But what I am really tired of is the PR crap about API-only models. Example: https://www.reddit.com/r/LocalLLaMA/comments/1su5gj5/buried_lede_deepseek_v4_flash_is_incredibly/?utm_source=share&utm_medium=mweb3x&utm_name=mweb3xcss&utm_term=1&utm_content=share_button
Vaguswarrior@reddit
What a crazy timeline
realmosai@reddit
Good.
Hope they get enforced strictly. This sub's been the Bible for every local LM enjoyer for a long time. May it continue being so.
ThisGonBHard@reddit
https://www.youtube.com/watch?v=V7GtYaruTys
Sadly, I can bet money the bots are a feature, not a bug.
MironV@reddit
Good to see this! This sub-Reddit has some of the most valuable AI discussions, let’s keep it that way.
rm-rf-rm@reddit (OP)
AFAIK it's the best, relatively highest-signal place on the planet for AI, digital or physical.
inaem@reddit
Thank you! The botted reposts were getting annoying
easylifeforme@reddit
I'd comment but I don't think I have enough karma to do it
stormy1one@reddit
I don't think the min karma is going to have any effect whatsoever, but I am super pleased with the guidelines on low-effort posts and affiliate/self-promotion. Not perfect, but it gets us moving in the right direction.
rm-rf-rm@reddit (OP)
Based on the patterns I see for posts that violate rules, I think the min karma requirement is going to have the biggest effect.
Folks are saying the minimums are too low, but I'm already seeing some legit comments being removed. I think it's the right balance - we'll monitor and see how it's working.
stormy1one@reddit
Thank you for your service!
tiffanytrashcan@reddit
Thank you!
https://i.redd.it/caslvydbg2xg1.gif
DeepV@reddit
Going to be tricky to navigate this going forward, but I appreciate the attempt to rein it in.
Zach78954@reddit
Yeah it’s better than nothing!
kevin_1994@reddit
Thank you! I've been hoping for some changes to stem the tide. Hopefully these changes help
HopePupal@reddit
mod team, thank you so much. previous threads have been whiny about the minimum karma requirement, but i really do think that's going to at least reduce the problem, and beating bots is very much a defense in depth game.
i look forward to fewer posts about vibe coded agent memory systems every five minutes
StewedAngelSkins@reddit
Your efforts are appreciated for sure. I can't tell you how nice it is to have a place to talk about LLMs that isn't completely overrun with vibe slop and schizoposters.
ai_hedge_fund@reddit
Cool
__JockY__@reddit
Kahvana@reddit
Appreciate it, thanks!
DinoAmino@reddit
Thank you