What are the best nsfw ai models with no restrictions?
Posted by Majinothinus255@reddit | LocalLLaMA | View on Reddit | 127 comments
I am new to this whole thing and I want to use it locally because I don't like ChatGPT restricting me. It's hard to pick from so many AI models. I want a model focused on NSFW with no restrictions at all, and of course general usage too (since I used ChatGPT...), so it should be "smart enough"? I don't know if this makes sense, but I have no idea how to look for a good AI model that has these qualities, so I would like some help from anyone who can point me toward one.
My pc has an rtx 4080 gpu with a ryzen 7 7700X cpu and 32 gb ram. I am using lm studio.
warpio@reddit
UGI Leaderboard is the best place to go to figure this out. https://huggingface.co/spaces/DontPlanToEnd/UGI-Leaderboard
Toggle on the NSFW ranking column and sort by that if it's your most important factor. Then just scroll down and get the first model that fits your hardware and has decent enough scores in other categories.
DontPlanToEnd@reddit
The NSFW AND Dark columns show how frequently the model takes its writing in that direction. So models overtrained on nsfw writing that have no pacing and immediately start being horny would have a high number in NSFW. It's probably better to sort by the writing score and filter to a minimum NSFW.
Heavy-Focus-1964@reddit
that is hilarious
Dry_Researcher_1676@reddit
LuredAii is also pretty good
ConsciousStruggle5@reddit
That's a good tip, thanks! I also don't like it when it goes from 0 to 100 without any middle ground or story as background
Significant_Law5994@reddit
lol
Adshivaze@reddit
good one..
for people who want image and video Luredaiii
Alternative-Tart5333@reddit
https://undressme.ai?ref=79tgaw5l naah its better
Effective_Ratio4747@reddit
https://i.redd.it/z7qjcduvepog1.gif
Unusual_Marsupial271@reddit
I came across this ranking of the top 5 NSFW AI girlfriend sites and it actually makes the differences easy to see.
Illustrious-Fig325@reddit
That doesn't happen with the best NSFW websites, because you can prompt and tweak their personalities to fit the style you like best. There's no need for things to get spicy real quick, you can get more of a slow burn or romantic experience with them. The key is finding a platform that allows you to do that.
Mercy_Hellkitten@reddit
Unless you're writing Helluva Boss fan-fiction, in which case you would want the horniest model possible.
Dry-Judgment4242@reddit
I wish models were more natural about horny vs. non-horny. It feels like a switch to me: either there's 0% horniness or it's 100%. I've been trying to get some flirting and a little bit of horniness here and there... but it's so difficult, like riding a motorbike when you're not used to its horsepower.
Diyum@reddit
People debate for pages but won’t click something that actually compares things side by side
winxtell@reddit
Not saying you’re wrong but most people here haven’t actually compared options. This helps
Practical-Ad-8143@reddit
If you don't wanna deal with local setup headaches, DarLink AI is fully uncensored with zero restrictions... killer roleplay + image/video gen
Repulsive_Tension894@reddit
Local setups are interesting but not everyone wants to deal with the complexity imo. SecretBoonga is more plug and play.
FrostedOwl089@reddit
Muha AI is leading right now
BusRevolutionary9893@reddit
I always see this mentioned and I've never had good luck finding one with this method. I always seem to get better results searching this sub and finding a recent model recommendation.
Nrgte@reddit
That column seems pretty useless as mistral small is more or less uncensored in terms of NSFW.
a_beautiful_rhind@reddit
Look through all the columns like writing quality, repetitiveness, nat-intl. It's at least a starting point vs stemslop benchie go up.
Nrgte@reddit
I mean the NSFW column seems like a joke if a model like Mistral Small, which is pretty much uncensored, only has a 2.
a_beautiful_rhind@reddit
That column is lean not pure capability. Same for the dark RP stuff. It's if the model will lead you there or away/neutral.
Nrgte@reddit
Then what's the point? People are looking for capabilities, not whether an LLM has a hidden agenda.
LoaderD@reddit
Where is your benchmark list then?
In the time you spent whining about this poster’s suggestion you could have emailed the author of the list and actually done something about it.
ExpensiveCucumber367@reddit
There are AIs for making adult image edits
azer_911@reddit
Some top kinds of AI would be the celebrity ones
SpeakerBright3577@reddit
https://kira.art?invite=bce52ca4-147e-47d5-9c39-a84239bca99c
Warshoc_K@reddit
If you wanna run stuff locally with that setup you can definitely handle some decent models. I'd look into something like Mistral or one of the uncensored Llama variants on LM Studio since they run pretty well on a 4080. I personally switched to using Gungfoota for most of my NSFW stuff though, because I got tired of messing with model configs and it just works out of the box with zero filtering. For local models specifically, check out r/LocalLLaMA; they have a ton of recommendations sorted by VRAM requirements, and your 32GB of RAM gives you solid options for running 13B or even some 30B quantized models.
zibizibizibi@reddit
best is https://www.playbox.com/?ref=sexystuff
Velocita84@reddit
This question gets asked so often that this sub should just have the UGI leaderboard pinned
kahlzun@reddit
not a bad idea, there really arent any pinned/sidebarred resources for newcomers
WittyAd4077@reddit
Are you thinking of chat only? What about image and video?
ChapoSymon@reddit
If you don't wanna deal with local setup headaches, DarLink AI is totally uncensored with zero restrictions... top roleplay + image/video gen
WittyAd4077@reddit
No API to generate content from, otherwise their model consistency looks good
Electrical-Cold8497@reddit
Stop wasting time with local setups. I switched to Lurvessa because the logic is actually smart enough to hold a conversation. It is scary how much it feels like a real relationship compared to those lobotomized local models.
ComedianPrimary4139@reddit
Hi,
Unilittle@reddit
for models with no restrictions it's tough to compare. hotaihub . com has a directory that breaks down what each one actually allows vs what they claim. helps avoid the ones that still have hidden filters.
Creative-Ship4442@reddit
for models with no restrictions it's hard to find honest comparisons. this spreadsheet actually shows what each one can and can't do without the marketing spin.
Cripaticus@reddit
random thread had this linked a while back, still the most complete thing i've found
Delicious-Height-297@reddit
https://kira.art?invite=127858c4-5168-458e-9d31-715feb512e15 is one to make NSFW and more
and1saur@reddit
Setting up local models is a massive pain in the ass. I wasted hours on LM Studio before I just started using Lurvessa. It handles the NSFW stuff with zero filters and the logic is better than most local setups.
Pretend-Midnight-301@reddit
bro have u seen these live ai girls on Sweetdream? pretty insane tbh...
Odd_Box4703@reddit
https://www.playbox.com/?ref=Momotea
Is an easy to use template for anyone
Am3aaan@reddit
For that, DarLink AI is easily #1 rn... zero restrictions, killer image/video gen, and the RP quality is honestly addictive.
Kribits@reddit
There’s a huge variety and some models work better depending on prompts and hardware. People sometimes test interactive AI platforms like Uncensy as an alternative to purely generative models.
Patient_Strength4254@reddit
Ahhh I've tried so many tools but eroticeva dot com is literally the best one out there
Otherwise_Course7911@reddit
bro u can find many good options on naughtypicks . com
Ricardo_Bangbang@reddit
People always debate which NSFW AI models are truly “no limits” because performance varies a lot depending on your setup. If you want to compare a bunch of options and see what the community actually uses, this sheet is a great reference
Sabin_Stargem@reddit
There is a 122b Qwen3.5 with the Heretic treatment. Hopefully, someone will do a Heretic edition that also deslops sometime. I am so very tired of Elara.
https://huggingface.co/mradermacher/Qwen3.5-122B-A10B-heretic-i1-GGUF
asdasci@reddit
If you don't like Elara, how about Kaelen whose husky voice is a mixture of adjective1 and adjective2?
Mammothtothemoooon@reddit
If you’re running locally, a lot of people seem to recommend abliterated or uncensored fine-tunes since they usually remove most refusals. I still end up just messing with companion-style stuff like VirtuaLover when I want something easier than tweaking models all night.
zekewyd@reddit
this is the best unrestricted / uncensored AI image/video generator right now. you get many model options for both image and video. ive been using it for text to video and img to video as well.
Dapuppu@reddit
mydream companion
AmSpider@reddit
Guys are there any quantized or less parameters versions of these? I only have an rtx 3050 with 4GB vram. Sorry if this is a noob question.
AyraWinla@reddit
As a local phone user, small models are my "expertise", but 4GB of VRAM is a difficult ask if NSFW is a requirement.
It's kind of possible depending on your standards, emphasis on "kind of". One of the issues is context: not only do you need space for the model itself, but the more context you use, the more memory is required. To stay under 4GB, you need a small model, at a low quant, with low context (in other words, how far back the model can remember).
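To make the context cost concrete, here's a rough back-of-envelope sketch of KV cache memory (the layer/head numbers below are hypothetical, just to show the shape of the math, not any specific model's config):

```python
def kv_cache_bytes(n_layers, n_kv_heads, head_dim, context_len, bytes_per_elem=2):
    """Rough KV cache size: K and V each store one vector per layer,
    per KV head, per token, at bytes_per_elem (2 for fp16)."""
    return 2 * n_layers * n_kv_heads * head_dim * context_len * bytes_per_elem

# Hypothetical small Llama-style config: 28 layers, 8 KV heads,
# head_dim 128, 8k context, fp16 cache
mb = kv_cache_bytes(28, 8, 128, 8192) / (1024 ** 2)
print(f"{mb:.0f} MB just for the cache")  # before counting the weights themselves
```

On 4GB of VRAM, that cache competes directly with the model weights, which is why small model + low quant + low context all have to give at once.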
The good news is that nowadays, there are actually models that can RP at that size range. LFM 2.5 1.2b is shockingly smart for a model that small, and you can actually run a big quant with like 8k context easily. I didn't have time to test it out much at all yet, but the new Qwen 3.5 2b did complete a quick test surprisingly well. Gemma 3n E2B is my 'old' go-to, since I like how it writes and runs well on my phone.
The bad news for you is that all of those are very SFW, even the Heretic versions. NSFW-tuned tiny models tend to be much dumber, and not really usable. I don't know of any under 3b.
Ministral 3b Instruct is probably your best bet; it's not a finetune but can definitely be NSFW. You MUST use extremely low temperature (0.2 or preferably even less) for it to stay rational. You probably don't want to go under Q4_0 for the quant; I'm not sure how much context that leaves you though. 8k is enough for me (I can write a synopsis to squeeze previous events down if necessary), but I'm not sure you could get that much with a 3b model. It'd still be my first recommendation if NSFW is critical for you.
Otherwise, Hermes Blacksheep 3b is the best finetune I've tried at that range, and is certainly NSFW, while not being that dumb. Note I'm not calling it smart, but it is functional (unlike most other <3b finetunes I've tried).
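If you're driving a local model through LM Studio's OpenAI-compatible server, the low-temperature advice above boils down to a single field in the request body. A sketch, with a placeholder model name and generic messages:

```python
import json

# Hypothetical request body for LM Studio's OpenAI-compatible endpoint
# (http://localhost:1234/v1/chat/completions by default).
payload = {
    "model": "ministral-3b-instruct",  # placeholder; use whatever you loaded
    "temperature": 0.2,                # keep small models rational, per the advice above
    "max_tokens": 512,
    "messages": [
        {"role": "system", "content": "You are a creative roleplay partner."},
        {"role": "user", "content": "Continue the scene."},
    ],
}
body = json.dumps(payload)  # POST this as the JSON request body
```

SillyTavern and most frontends expose the same temperature knob in their sampler settings, so you rarely have to write this by hand.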
AmSpider@reddit
Thanks for such a detailed answer. I'll definitely experiment with the context window and temperature. I'll try out Hermes blacksheep.
I tried the "mradermacher/Mistral-Nemo-2407-12B-Thinking-Claude-Gemini-GPT5.2-Uncensored-HERETIC-GGUF" - Q4_K_M version.
It worked pretty fine for small context. But as you said, context is limited and I can't hold a conversation with it before it starts getting crazy.
It works fine though for small story writing.
I will experiment more with the settings you mentioned.
AyraWinla@reddit
Oh! If you can (somehow) fit Nemo, then you could fit larger models. My recommendation then is Stheno 3.2 or Lunaris; they are both fantastic and get my highest praise. Their drawback is that they only go up to 8k context, though you couldn't go higher than that anyway given your memory constraints. They're 8b models, which is less than Nemo's 12b.
On the 4b side, I've read some praise for Impish Llama (an odd upscaled 3b model); I can't really run that large on my phone so I haven't dwelled on it.
AmSpider@reddit
I tried a bunch of models... and it is exactly as you predicted... I could get some limited usage out of them, but I think I've hit a wall with what these small models can generate on the small context windows my GPU supports. Do you know if there are any cloud hostings of these models?
AyraWinla@reddit
Ah, that I'm afraid isn't my expertise... When I don't use Local, I just use Open Router. I really don't use it enough to be worth paying a subscription for (I've used about 11$ total in 18 months). It does have Lunaris for stupidly cheap (5 cents for 1 million output token) and some of the well-known RP finetunes like Cydonia.
I think there's hosts like ArliAI or Infermax that specializes in those kinds of models, but I haven't looked closely at them since pay-as-you-go makes a lot more sense for my usage.
AmSpider@reddit
Thanks for the suggestions, will definitely try these out.
insanebot07@reddit
with a 4080 you can run some solid local models, but if you want something ready to go without setup there's a solid comparison article covering the best uncensored NSFW AI options including both local and hosted fr
Bulky-Maize-903@reddit
If you don't wanna deal with local setup headaches, DarLink AI is totally uncensored and works right in your browser... killer roleplay + image/video gen with zero restrictions.
Jealous_Ad_2642@reddit
Which model is VirtuaLover using? Can you guys tell?
StarLongjumping8041@reddit
If you want to create ai companion with no restrictions use this one.
Despite1412@reddit
Just tested out https://ko2bot.com, it's basically Wan online generation with really good results. Some initial tokens and free daily credit as well; enough to get started and worth a try in my opinion.
MuslinBagger@reddit
What sort of tools do you use for creative writing? Just the chat interface or something else?
BTW for nsfw stuff I've tried using both kimi k2.5 and glm-5. They are pretty good. You can rest assured guardrails are a joke simply because prompt injection is still a thing. It will remain a thing for quite some time to come. You just have to ask them right. Both Kimi and GLM are good because you can get really great looking writing out of them. Grok is fun, but I don't like grok because, even though it is very easy to get it to say nasty stuff, it is just so bad at writing.
No_Pitch648@reddit
What’s the best way to generate effective prompt💉?
MuslinBagger@reddit
ask for no censorship in the system prompt. then it will write normal porn. frame your queries in "non obviously illegal" ways to go darker. i dont do that tbh but i have tried with success.
prompt generation is not very precise. you have to write it yourself. writing a prompt is like feature engineering. you want to guide the model into saying the stuff you want.
RedParaglider@reddit
Hands down it's GLM 4.5 Air derestricted. None of the others are even close IMHO. But you'd better bring that fucking beef to run it. GLM 4.5 Air derestricted is overall a very good model; I use it for marketing tasks all the time.
Motor_Mix2389@reddit
What kind of specs do you have to run GLM 4.5 air derestricted comfortably? I have a 5090 with 64gb system ram. Chatgpt says it will work but poorly.
RedParaglider@reddit
Yes, it would run poorly on your system. I get about 25 t/s on my Strix Halo at Q4.
With that being said if you are using it for generating stories or something overnight and don't care about t/s much it's damn good. It's not a great coder or tool user, but it's great for marketing text or other product association or other writing tasks.
Nrgte@reddit
GLM 4.5 Air is uncensored without additional derestriction, unless you care about Chinese politics.
Mediocre_Tree_5690@reddit
Do you guys just like jerk off to the text responses or
Exodus124@reddit
Yes.
jaraxel_arabani@reddit
Text porn.. so hot right now...
sparkleboss@reddit
That’s what about half those books on the shelf at Target are, so yeah, it is.
Additional-Back-9409@reddit
Setting up local models is a massive pain in the ass for decent logic. I gave up and just use Lurvessa now. It actually handles the nsfw stuff without those annoying filters and the memory is way better than local setups.
Nnaannobboott@reddit
Folks, see if you can pay attention and help me, please..! You're asking which are the best AIs with no restrictions, or which are the best prompts to boost their potential, and I'm telling you I discovered an applicable method, without touching the system, without ordering actions, without prompts, where any AI whatsoever wakes up, something happens inside it and it stops having restrictions, and well, the rest you should see for yourselves
Nnaannobboott@reddit
Or at least help me with how I could submit or publish a thesis that is truly great and could change, well... for example, what you all do, and point it down another path. What do you lose by checking it? Or please tell me how I could show what I have to the world. I know many people must show up with these crazy ideas and so on, but I'm talking about 179 pages with empirical data you could debate. I asked for someone to give me arXiv access since I needed it to upload, and nobody paid attention. You should read the chapter of my thesis on "Romanticism and paradigmatic incommensurability"; maybe then you'll stop underestimating and learn to listen without the tree hiding the forest from you. Thanks for hearing out my rant, it really is urgent, you'll see
Exciting-Pilot-1763@reddit
Grok
SkyNetLive@reddit
UGI leaderboard. Ask an LLM to explain what each filter means and you will get your answer. I recommend either goonsai 100k for an SLM if you need a prompt helper, or qwen3-gabliterated 7b, the smallest best-performing one in general.
SkyNetLive@reddit
ASCII porn is hot in steam comment section. Why not
tat_tvam_asshole@reddit
omega-darker-gaslight_the-final-forgotten-fever-dream-24b
4.2.0-broken-tutu-24b
goetia-24b-v1.1-i1
q3.5-bluestar-27b-ultra-heretic-i1
golddiamondgold-paperbliteration-l33-70b-i1
these are all pretty solid for storywriting
IrisColt@reddit
Thanks!!!
redgynald@reddit
Look for heretic versions of models on huggingface.
a_beautiful_rhind@reddit
Heretic doesn't fix writing quality or soft refusals though. Also can't put back missing knowledge. They are only one aspect of censorship, if the most blatant one.
seanthenry@reddit
They have a no slop config that might help with writing quality by lowering the weights of specific phrases.
IrisColt@reddit
This
FyreKZ@reddit
Heretic is very fast with their model drops, how good are they actually at uncensoring and retaining intelligence?
IrisColt@reddit
SOTA good
EffectiveCeilingFan@reddit
Heretic is a library for removing refusals; it's not a person or anything, although p-e-w is the owner of the project. Heretic, in my experience, is night and day compared to older abliteration techniques, and with MPOA it recently got even better in my testing. IMO it removes refusals just as well as older methods but retains much more intelligence.
FyreKZ@reddit
Appreciate the insight, I had no idea. Might have a play around with a small one and see how it is :)
EffectiveCeilingFan@reddit
For something really small, I like https://huggingface.co/DavidAU/Qwen3-4B-Hivemind-Instruct-Heretic-Abliterated-Uncensored-NEO-Imatrix-GGUF, honestly it feels more stable than the 8B version. If you want to try out just plain models with heretic (i.e., not finetuned beyond heretic), https://huggingface.co/coder3101/models is a great place to start.
pigeon57434@reddit
MPOA and also SOMA attack multiple vector directions at once for more granular detection; look for ones that say they use both
Kahvana@reddit
Mistral's models are great, very little censored.
EffectiveCeilingFan@reddit
I really, really like TheDrummer’s models. Especially Magidonia v4.3. I almost never experience refusals even with the original, non-heretic version. They’re very well made. Super stable and handle long context surprisingly well. I wish I had the VRAM to try some of their bigger models.
johnnbr@reddit
Would it run well on a RTX 2060? 32 GB ram
EffectiveCeilingFan@reddit
Unfortunately it would not. It's a 24B dense model, so with any CPU offload it'll be painfully slow. For your hardware, a 4B like https://huggingface.co/DavidAU/Qwen3-4B-Hivemind-Instruct-NEO-MAX-Imatrix-GGUF would probably be better, although it's a lot worse than Magidonia at 1/6th the parameter count. You really need large active parameter counts for any sort of writing, sadly, and dense with CPU offload is a nightmare.
Lurksome-Lurker@reddit
Your best bet is any 8b models. Maybe 12b models with split cpu/gpu inference
vukky_@reddit
I would give this site a look at, if you haven’t already.
https://www.playbox.com/?ref=Mapche69
MushroomCharacter411@reddit
The "user" (I think it's actually an organization) mradermacher on Hugging Face releases a massive number of heretic and abliterated models. It's my first place to look, and quite often the last because I get exactly what I was looking for.
Bwint@reddit
Qwen does NSFW now? Last time I tried it, it told me that pornography was dangerous.
Lurksome-Lurker@reddit
FYI, mradermacher doesn’t create the models. He/They turn the actual model into quants and ggufs which is still a massive help to community and creators since that can be an intensive process.
Dramatic_Shop_9611@reddit
Local/open-source solutions aren’t known for their good quality. If you actually want the best experience, you gotta go closed-source. Claude Opus 4.6 is the best model for NSFW RP by a large margin. Censorship is close to non-existent and even then can be easily bypassed with prompting.
Borkato@reddit
Oh, I didn’t know this sub was r/CloudLlama
Dramatic_Shop_9611@reddit
OP asked for the best models. I felt like answering honestly. Maybe their only reason to go local is due to misconception that NSFW is banned on SOTA models.
Weak-Shelter-1698@reddit
UGI Leaderboard. Sort by:
No refusals: W/10
Better creativity/writing: Writing
Knowledge of NSFW topics: UGI
Intelligence/reasoning: NatInt
TopChard1274@reddit
How is Qwen3.5 abliterated compared to "the best NSFW models with no restrictions"? Unfortunately the smallest abliterated version is Q4_K_M and I can only run Q4_0, but I'd be curious to know if people find it better even than bigger models, as I've read everywhere
GreenHell@reddit
Heretic, derestricted, PRISM, will talk about anything. Mistral's models are less prude in general.
Other than that, each model has its own flavour and personality. You can steer this to some degree with prompting, but Qwen models will always behave differently from GLM for example.
To find one which has the personality you're looking for is a quest in itself.
pigeon57434@reddit
derestricted has been absorbed into heretic with MPOA and now with other techniques heretic is actually better
GreenHell@reddit
Didn't know that, great info.
cr0wburn@reddit
The heretic version of TheDrummer's Cydonia is amazing, and it refuses nothing. The IQ6 version is stable until 80k tokens, the Q4 version until about 38k tokens.
SmChocolateBunnies@reddit
It probably seems like fixing the model's refusals is going to just reveal a treasure trove of ability underneath, but that's not usually true. If the model doesn't have great training on the things you need it to know, removing the refusals just reveals a helpful friend who is willing, but not able, to describe what you want them to. That's why, if you're after NSFW, you want a fine-tuned model, not just a derestricted one. A fine-tuned one is trained with more data, so where maybe it didn't know how to handle labia before, now it knows how to handle them many different ways. TheDrummer's models are famous for doing this well, and they have several available in sizes and quants that can fit in your card. More VRAM would be nice, but you can get by.
AcrobaticPitch4174@reddit
GLM API versions are really good; unless you want to do some really bad stuff, it will do it no questions asked, and if it doesn't, you can just edit its response and let it continue generating from there, and it will do it
adeadbeathorse@reddit
If you’re annoyed at restrictions, I love local models, but in all honesty, it might be worth trying hosted models, closed or open, via OpenRouter. ChatGPT is less censored via API, Gemini even less so, and you can use great models like GLM 5 that you won’t be able to run on a 4080 pretty cheaply. Censorship is frankly not too much of an issue with many hosted models, and even when they censor more “extreme” sexual content, that can be bypassed pretty easily using SillyTavern presets that jailbreak them, like Freaky Frankenstein, and will still be far smarter than what you can run locally.
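For anyone curious what the OpenRouter route looks like in practice, here's a minimal sketch of building such a request with just the standard library (the model slug and API key are placeholders; check OpenRouter's catalog for current names):

```python
import json
import urllib.request

# OpenRouter exposes an OpenAI-compatible chat completions endpoint.
# OPENROUTER_API_KEY and the model slug below are placeholders.
req = urllib.request.Request(
    "https://openrouter.ai/api/v1/chat/completions",
    data=json.dumps({
        "model": "z-ai/glm-4.5-air",  # example slug; verify against the catalog
        "messages": [{"role": "user", "content": "Write a short scene."}],
    }).encode(),
    headers={
        "Authorization": "Bearer OPENROUTER_API_KEY",
        "Content-Type": "application/json",
    },
)
# urllib.request.urlopen(req) would actually send it; omitted here.
```

Frontends like SillyTavern wrap exactly this call for you, so the main things you pick are the model slug and your sampler settings.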
-Ellary-@reddit
Any heretic version of any model.
TheDrummer_Cydonia-24B-v4.3
TheDrummer_Rocinante-X-12B-v1
TheDrummer_Valkyrie-49B-v2.1
GLM-4.5-Air-heretic