What’s the training cost for models like Qwen3 Coder 30B, and is the code for training it open source or closed source?
Posted by NoFudge4700@reddit | LocalLLaMA | 4 comments
Is it also possible to grab Qwen3 Coder 4B and continue training it on new data?
Impressive_Half_2819@reddit
No one knows tbh.
NoFudge4700@reddit (OP)
So neither the code nor the data is publicly available?
MichaelXie4645@reddit
Nope, and there are levels to openness too: truly open source (code and data released) vs. open weights only.
__JockY__@reddit
You can do fine tunes on the 4B real easy with Unsloth. LoRAs and QLoRAs, too. It won’t take much VRAM.
You can fine tune a LoRA for Qwen3 30B A3B Instruct 2507 in BF16 on a single 96GB 6000 Pro in just a few hours.
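To get a feel for why LoRA fine-tuning is so cheap on VRAM, here is a back-of-envelope sketch of the trainable parameter count. The hidden size, layer count, and rank below are illustrative assumptions, not the real Qwen3 Coder 4B config:

```python
def lora_params(d_in: int, d_out: int, rank: int) -> int:
    """LoRA freezes the original W (d_out x d_in) and trains only a
    low-rank update B @ A, where A is (rank x d_in) and B is (d_out x rank)."""
    return rank * (d_in + d_out)

hidden = 2560  # assumed hidden size, for illustration only
layers = 36    # assumed number of transformer blocks
rank = 16      # a common LoRA rank

# Targeting the four attention projections (q, k, v, o), all hidden x hidden
per_layer = 4 * lora_params(hidden, hidden, rank)
total = layers * per_layer
print(f"trainable LoRA params: {total:,}")
print(f"fraction of a 4B model: {total / 4e9:.4%}")
```

Under these assumptions only a fraction of a percent of the weights get gradients and optimizer state, which is why a single consumer GPU is enough for the 4B and a 96GB card can handle the 30B in BF16.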