AesCoder 4B Debuts as the Top WebDev Model on Design Arena
Posted by Interesting-Gur4782@reddit | LocalLLaMA | 23 comments
Was messing around earlier today and saw a pretty strong model come up in some of my tournaments. Based on the UI and dark mode look I thought it was a GPT endpoint, but when I finished voting it came up as AesCoder-4B. I got curious so I took a look at its leaderboard rank and saw it was in the top 10 by elo for webdev and had the best elo vs speed ranking -- even better than GLM 4.6 / all of the GPT endpoints / Sonnet 4.5 and 4.5 thinking.
Then I looked the model up on hugging face. Turns out this is a 4 BILLION PARAMETER OPEN WEIGHT MODEL. For context, its closest open weight peer GLM 4.6 is 355 billion parameters, and Sonnet 4.5 / GPT 5 would be in the TRILLIONS TO TENS OF TRILLIONS OF PARAMETERS. WTAF?!!!?! Where did this come from and how have I never heard of it??

sleepy_roger@reddit
These are the kind of posts I love here so much. Testing this out this morning and it's pretty good actually; I'm getting quality I haven't seen in design since GLM 4 32B. On the downside, the results are a bit samey when asked for certain types of sites, such as a tech blog. But they look good. Thanks for finding and sharing this!
madaradess007@reddit
you know it was trained on snake games and web OS'es
lumos675@reddit
Do you know where to download this model?
Wow a 4b model as good as claude? WTF?
pmttyji@reddit
From the model card:
Note: This is the version of AesCoder-4B model for only webpage design.
Yesterday I replied with the model below to someone who was looking for a small Python FIM model. Try it & share your feedback.
https://huggingface.co/JetBrains/Mellum-4b-dpo-python
lumos675@reddit
Bro, I tested this model...
It only accepts 8192 context. What can I do with that amount?
Also, I asked it to create a ComfyUI node. This was the answer:
write a comfyui node
"a-node-name-without-spaces"
print("Your comfyui node name is: " + input_string)
input_string = input_string.replace(' ', '-')
AXYZE8@reddit
And that's the proper answer, because that's a FIM model.
It autocompletes your code; it doesn't follow instructions like an assistant.
8k context is ridiculously high for FIM.
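For anyone unfamiliar, FIM (fill-in-the-middle) models take a prefix and a suffix and fill in the code between them, rather than answering chat-style prompts. A minimal sketch of what that prompting looks like, assuming StarCoder-style marker tokens (Mellum's actual special tokens may differ, so check the model's tokenizer config):

# Fill-in-the-middle prompting: the model completes the gap between a
# prefix and a suffix instead of answering a chat-style instruction.
# The <fim_*> markers below follow the StarCoder convention and are only
# an illustration; the real tokens depend on the model's tokenizer.
prefix = "def slugify(name):\n    "
suffix = "\n    return slug\n"

prompt = f"<fim_prefix>{prefix}<fim_suffix>{suffix}<fim_middle>"

# A FIM model fed this prompt emits only the missing middle, e.g.:
#   slug = name.strip().lower().replace(" ", "-")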
lumos675@reddit
Ahaa so that's what the model does
pmttyji@reddit
Some people do use Qwen3-4B models for FIM, and those have more context length. Try the 2507 versions.
lumos675@reddit
Do you have any 4B or bigger models, up to 30B, for Java development? Kotlin or Java for Android development?
pmttyji@reddit
The same JetBrains has 2 other models:
https://huggingface.co/JetBrains/Mellum-4b-sft-kotlin
https://huggingface.co/JetBrains/Mellum-4b-dpo-all (for all coding languages, check model card)
There aren't many recent models under or around 30B. Here are some:
https://huggingface.co/bigcode/starcoder2-15b-instruct-v0.1
https://huggingface.co/inclusionAI/Ling-Coder-lite
https://huggingface.co/Qwen/Qwen3-Coder-30B-A3B-Instruct
Tesslate made some models for web & UI work. Keep an eye on their collection for future models & browse the current ones if anything fits:
https://huggingface.co/Tesslate/models
pmttyji@reddit
I see that they have one more model (7B size, but the repo is still blank, no model files or anything yet) on their HF page.
Interesting-Gur4782@reddit (OP)
thanks for linking! going to try it locally myself
lumos675@reddit
Wow, the model is really good, man.
I tried it and it's really fast. I asked it to generate a web OS and it one-shotted it: the calculator was working and the windows were movable.
Qwen3 Coder 30B A3B could not do this task as cleanly and beautifully as this 4B model. Wtf?
What kind of sorcery is this?
But to be honest, I already thought a model doesn't have to be big to have better coding capability. I commented exactly that last night.
Also, it generated a good ComfyUI node without any trouble.
So I will keep testing it in the coming days with more tasks to see how it performs.
Cool-Chemical-5629@reddit
From HF:
Authors:
Bang Xiao and Lingjie Jiang and Shaohan Huang and Tengchao Lv and Yupan Huang and Xun Wu and Lei Cui and Furu Wei
Of course a model this good is a Chinese model. 😀
On a serious note, I do believe its use case is very limited. Looks like its main focus is web UI development.
lumos675@reddit
I tested it with Python and the code worked without any issue.
So I think it might be more capable.
We need more tests.
crantob@reddit
More iteration and refinement on this work in different programming domains.
Cool-Chemical-5629@reddit
Well, I would be surprised if it was good as a general coding assistant. Let me explain why I think so.
Tesslate also produced 4B versions of their UIGEN and WEBGEN models, both very capable for web development.
Their focus is clearly oriented toward the aesthetics of regular websites, but ask one to generate a web-based game with SVG graphics (with SVG code defining it) and, instead of creating that game for you, it will still try to turn it into a regular website, which is obviously not what you would expect from a proper LLM. In other words, it will probably blow your mind with how good the website looks, and it will probably surpass even bigger models in that regard, but that's where the usability of this model ends. For actual game logic AND even proper game graphics, you would need a model that is better suited for that kind of task.
That's the caveat of training a small model on too much web UI data. It's going to be great for that one task, but fail at anything else.
lumos675@reddit
This is the game it generated.
I can shoot those pink-reddish blocks to earn score.
I must defend the Earth at the bottom.
For a 4B model, isn't that pretty good?
Cool-Chemical-5629@reddit
Sorry, I was busy. It looks good, but does it work well? No obvious issues? I tried it with my latest prompt for game creation and I have mixed feelings. It produced good 3D for such a small model, but the controls were not working correctly: WASD movement worked, but rotation with the mouse did not. As for the rest of the game, it produced a pretty UI, but most of it did not work. Truth be told, this prompt required the model to come up with its own story and design, so it kinda felt like it could do something cool but ultimately failed, because it's too small to connect the dots, so to speak. If the prompt were more detailed and described all the features instead of letting it go freestyle, maybe that would work better for this model. It is still pretty interesting and so far more capable, for what I tried to do with it, than the same-size models I mentioned earlier.
lumos675@reddit
Yeah... I feel like the model is really good for its size, especially in web design. I've never seen a 4B model get this close to way bigger models.
But you are right about those dots and the connection between them.
Still, I really like something about this model.
I asked it to create a simple ComfyUI node that receives an image, zooms in on it for a few seconds, and returns the images as output (roughly the kind of node sketched below). It did the task perfectly, but my PIL library didn't have the function the model used.
I showed the error to the AI, and it asked which version of PIL I have and gave me instructions on how to check.
It kept interacting with me like that until we managed to fix the problem.
I've never seen such a small model interact like this, to be honest.
Even Claude, after 5 to 10 failures, starts asking whether maybe you are doing something wrong.
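For reference, here is a minimal sketch of what such a node could look like, assuming ComfyUI's standard custom-node API where IMAGE inputs are float tensors of shape (B, H, W, C); the class name, parameters, and the crop-and-rescale zoom are invented for illustration, not what the model actually generated:

import torch
import torch.nn.functional as F

class ZoomImageFrames:
    # Example ComfyUI-style custom node: produces a batch of progressively
    # zoomed-in frames by center-cropping and scaling back up.
    @classmethod
    def INPUT_TYPES(cls):
        return {"required": {
            "image": ("IMAGE",),
            "zoom": ("FLOAT", {"default": 2.0, "min": 1.0, "max": 8.0, "step": 0.1}),
            "frames": ("INT", {"default": 24, "min": 1, "max": 240}),
        }}

    RETURN_TYPES = ("IMAGE",)
    FUNCTION = "zoom_in"
    CATEGORY = "image/transform"

    def zoom_in(self, image, zoom, frames):
        # ComfyUI passes images as float tensors of shape (B, H, W, C).
        _, h, w, _ = image.shape
        out = []
        for i in range(frames):
            z = 1.0 + (zoom - 1.0) * (i + 1) / frames
            ch, cw = max(1, int(h / z)), max(1, int(w / z))
            y0, x0 = (h - ch) // 2, (w - cw) // 2
            cropped = image[:, y0:y0 + ch, x0:x0 + cw, :]
            # F.interpolate expects (B, C, H, W), so permute before and after.
            resized = F.interpolate(cropped.permute(0, 3, 1, 2), size=(h, w),
                                    mode="bilinear", align_corners=False)
            out.append(resized.permute(0, 2, 3, 1))
        return (torch.cat(out, dim=0),)

NODE_CLASS_MAPPINGS = {"ZoomImageFrames": ZoomImageFrames}
NODE_DISPLAY_NAME_MAPPINGS = {"ZoomImageFrames": "Zoom Image Frames (example)"}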
crantob@reddit
Thumbs up for domain-specific local coding models.
noctrex@reddit
Seems nice. Uploaded some unquantized GGUFs here:
https://huggingface.co/noctrex/AesCoder-4B-GGUF
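If you want to try a GGUF like this locally, here is a minimal sketch using llama-cpp-python; the file name, context size, and sampling settings are placeholders rather than values from the repo, so adjust them to whatever is actually shipped there:

from llama_cpp import Llama  # pip install llama-cpp-python

# Placeholder path: point this at the GGUF file downloaded from the repo above.
llm = Llama(model_path="AesCoder-4B.gguf", n_ctx=8192, n_gpu_layers=-1)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Build a single-file landing page for a small coffee shop."}],
    max_tokens=2048,
    temperature=0.7,
)
print(out["choices"][0]["message"]["content"])

This relies on the chat template embedded in the GGUF metadata; if the output looks off, pass an explicit chat_format to the Llama constructor instead.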
ResidentPositive4122@reddit
Nice to see that this works even at 4b scales.
paper: https://arxiv.org/abs/2510.23272