LLaDA2.0 (103B/16B) has been released

Posted by jacek2023@reddit | LocalLLaMA | 72 comments

LLaDA2.0-flash is a diffusion language model featuring a 100BA6B Mixture-of-Experts (MoE) architecture (roughly 100B total parameters with about 6B active per token). As an enhanced, instruction-tuned iteration of the LLaDA2.0 series, it is optimized for practical applications.

LLaDA2.0-mini is a diffusion language model featuring a 16BA1B Mixture-of-Experts (MoE) architecture (roughly 16B total parameters with about 1B active per token). As an enhanced, instruction-tuned iteration of the LLaDA series, it is optimized for practical applications.
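
For anyone new to the term: a diffusion language model does not decode left to right. It starts from a fully masked sequence and fills in tokens over a small number of refinement steps. The toy sketch below only illustrates that decoding loop; the model, vocabulary, and step counts are placeholders, not the actual LLaDA2.0 interface.

```python
# Toy sketch of masked-diffusion decoding, the generation scheme diffusion LMs
# such as LLaDA use instead of left-to-right autoregression. The "model" here
# is a random stand-in; nothing below is the real LLaDA2.0 API.
import random

VOCAB = ["the", "cat", "sat", "on", "a", "mat", "."]
MASK = "<mask>"

def toy_model(tokens):
    """Pretend denoiser: return a (token, confidence) guess for every masked
    position. A real diffusion LM predicts all masked tokens in parallel from
    the full bidirectional context."""
    return {
        i: (random.choice(VOCAB), random.random())
        for i, t in enumerate(tokens)
        if t == MASK
    }

def diffusion_decode(length=8, steps=4):
    # Start from a fully masked sequence and unmask it over a fixed number of steps.
    tokens = [MASK] * length
    per_step = max(1, length // steps)
    while MASK in tokens:
        guesses = toy_model(tokens)
        # Commit the highest-confidence predictions; leave the rest masked for later steps.
        best = sorted(guesses.items(), key=lambda kv: kv[1][1], reverse=True)[:per_step]
        for pos, (tok, _conf) in best:
            tokens[pos] = tok
    return " ".join(tokens)

if __name__ == "__main__":
    print(diffusion_decode())
```

The practical upshot of this scheme is that several tokens can be committed per forward pass, which is why diffusion LMs are often pitched as a latency win over token-by-token autoregressive decoding.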

llama.cpp support is in progress: https://github.com/ggml-org/llama.cpp/pull/17454