[Build Help] Best RP models and frontends for 4090 (24GB VRAM) / 64GB RAM? (No SillyTavern)
Posted by Ok_Cartographer_809@reddit | LocalLLaMA | View on Reddit | 8 comments
Hi everyone,
I'm looking for some recommendations to level up my local RP experience. My current setup is a Windows machine with an i7-14700K, 64GB DDR5 RAM, and an RTX 4090 (24GB VRAM).
I am currently using LM Studio, which I like for its ease of use. However, I’m looking for a frontend that is more specialized for Roleplay—specifically something with robust support for Character Cards and Memory/Lorebook features—without going down the SillyTavern rabbit hole.
For models, since I have 24GB of VRAM and plenty of system RAM, what are the current "S-Tier" recommendations for high-quality, creative RP in 2026? I’m interested in models that:
- Excel at nuanced prose and avoid "GPT-isms."
- Can handle long-context roleplay without losing character consistency.
- Fit well within my hardware (I'm open to GGUF or EXL2).
Questions:
- Is there a frontend that bridges the gap between LM Studio's simplicity and SillyTavern's features? (e.g., Faraday/AnythingLLM/etc.)
- Which 30B-70B models are currently the favorites for immersive storytelling on a single 4090?
Thanks for the help!
Silent-Spaz259@reddit
Those suggesting to "vibe code" a system are ignorant of the requirements of what you're asking. A slop machine won't be able to hallucinate a competent solution.
Try searching GitHub for "character AI" or "AI roleplay". There are some OK alternatives.
Lorian0x7@reddit
Skill issue, or you must be the ignorant here. I vibe coded my custom frontend with no issues and it feels a lot better than Silly Tavern.
Silent-Spaz259@reddit
Sure you have 👍
Lorian0x7@reddit
🖕
Silent-Spaz259@reddit
🤣 thanks for the laugh pal.
FinBenton@reddit
I've replaced everything with gemma-4 31b on a 5090; the 26b moe is very good too and easier to fit into 24GB.
Personally I have vibe coded a few llama.cpp wrappers for the backend. I can definitely recommend spending a couple of days just coding a web UI around llama.cpp yourself, it's easy and you get something that works much better because you can personalize everything.
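For the curious, the core of such a wrapper is small. A minimal sketch, assuming you're running llama.cpp's built-in `llama-server` (which exposes an OpenAI-compatible `/v1/chat/completions` endpoint) on localhost; the port and the `build_payload`/`chat` helper names are just placeholders:

```python
import json
from urllib import request

# Assumed llama-server address; change the port to match your setup.
LLAMA_SERVER = "http://localhost:8080/v1/chat/completions"

def build_payload(card_prompt, history, user_msg, temperature=0.8):
    """Assemble an OpenAI-style chat request: the character card goes in
    as the system prompt, followed by the running RP history, then the
    user's new message."""
    messages = [{"role": "system", "content": card_prompt}]
    messages += history  # list of {"role": ..., "content": ...} dicts
    messages.append({"role": "user", "content": user_msg})
    return {"messages": messages, "temperature": temperature, "stream": False}

def chat(card_prompt, history, user_msg):
    """Send one turn to llama-server and return the assistant's reply text."""
    payload = build_payload(card_prompt, history, user_msg)
    req = request.Request(
        LLAMA_SERVER,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

From there it's mostly UI work: keep `history` per character, persist it to disk for memory, and prepend lorebook entries to the system prompt when their keywords appear.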
Lorian0x7@reddit
This. I did the same thing. Somehow models inside SillyTavern always felt broken. Building your own frontend is the best advice here.
Background-Ad-5398@reddit
why dont you just drop that prompt into your favorite sota model and have it build you a frontend