MotokoAGI@reddit
Oh wow, I just realized that they also released a 424b-vl model that really went under the radar in Jan. Their 300B model is really solid.
LagOps91@reddit
not really great to run locally with that many active parameters
No-Refrigerator-1672@reddit
424B is not great to run locally regardless of active parameter count. It'll require a system that costs as much as a fancy car, assuming you don't want it crawling along off CPU.
PangurBanTheCat@reddit
What would be the proposed benefit here compared to using something like Flux or Z-Image or Qwen?
silenceimpaired@reddit
OP have you posted to r/Stable diffusion , ComfyUi, etc?
PwanaZana@reddit
r/stable should be a horse dedicated sub :P
r/StableDiffusion
silenceimpaired@reddit
Sigh, stupid auto correct.
PwanaZana@reddit
ducking autocorrect, my man :P
LegacyRemaster@reddit
apache 2.0 ... sounds good
KokaOP@reddit
Image2Image possible?
ambient_temp_xeno@reddit
Seems decent. Thank god they released a turbo version.
pmttyji@reddit
https://huggingface.co/baidu/ERNIE-Image
https://huggingface.co/baidu/ERNIE-Image-Turbo
Released Versions
ERNIE-Image: our SFT model; it delivers stronger general-purpose capability and instruction fidelity, typically in 50 inference steps.
ERNIE-Image-Turbo: our Turbo model, optimized with DMD and RL; it achieves faster generation and higher aesthetics in only 8 inference steps.
License: apache-2.0
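If the checkpoints turn out to be loadable through Hugging Face diffusers (the thread doesn't confirm this, so treat the pipeline usage as an assumption), a minimal sketch of switching between the two released step counts might look like this. Only the model IDs and step counts come from the model card above; everything else is illustrative.

```python
# Step counts per the model card summary above.
STEPS = {
    "baidu/ERNIE-Image": 50,        # SFT model: stronger instruction fidelity
    "baidu/ERNIE-Image-Turbo": 8,   # DMD+RL-optimized: faster, 8 steps
}

def generation_kwargs(model_id: str, prompt: str) -> dict:
    """Build keyword arguments for a text-to-image call, picking the
    step count the model card recommends for the given checkpoint."""
    return {"prompt": prompt, "num_inference_steps": STEPS[model_id]}

# To actually generate (requires `pip install diffusers torch`, a large GPU,
# and a diffusers-compatible checkpoint -- unverified for this release):
#   from diffusers import DiffusionPipeline
#   import torch
#   pipe = DiffusionPipeline.from_pretrained(
#       "baidu/ERNIE-Image-Turbo", torch_dtype=torch.bfloat16
#   ).to("cuda")
#   kwargs = generation_kwargs("baidu/ERNIE-Image-Turbo", "a red bicycle in the rain")
#   pipe(**kwargs).images[0].save("out.png")
```

The only real difference in use is the step budget: the Turbo distillation trades the 50-step sampling schedule for 8 steps, so it should be roughly 6× faster per image if per-step cost is similar.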