Hunyuan-A13B released
Posted by kristaller486@reddit | LocalLLaMA | View on Reddit | 144 comments
From HF repo:
Model Introduction
With the rapid advancement of artificial intelligence technology, large language models (LLMs) have achieved remarkable progress in natural language processing, computer vision, and scientific tasks. However, as model scales continue to expand, optimizing resource consumption while maintaining high performance has become a critical challenge. To address this, we have explored Mixture of Experts (MoE) architectures. The newly introduced Hunyuan-A13B model features a total of 80 billion parameters with 13 billion active parameters. It not only delivers high-performance results but also achieves optimal resource efficiency, successfully balancing computational power and resource utilization.
Key Features and Advantages
Compact yet Powerful: With only 13 billion active parameters (out of a total of 80 billion), the model delivers competitive performance on a wide range of benchmark tasks, rivaling much larger models.
Hybrid Inference Support: Supports both fast and slow thinking modes, allowing users to flexibly choose according to their needs.
Ultra-Long Context Understanding: Natively supports a 256K context window, maintaining stable performance on long-text tasks.
Enhanced Agent Capabilities: Optimized for agent tasks, achieving leading results on benchmarks such as BFCL-v3 and τ-Bench.
Efficient Inference: Utilizes Grouped Query Attention (GQA) and supports multiple quantization formats, enabling highly efficient inference.
Expensive-Apricot-25@reddit
I don't have enough VRAM :'(
TheRealMasonMac@reddit
https://downloadmoreram.com/
Expensive-Apricot-25@reddit
Ah, thank you! This solved all my problems!!
ivari@reddit
At 13B active parameters and Q4, that's around 8 GB VRAM and 48 GB RAM required, right?
Calcidiol@reddit
You could run a Q4 model (given the right SW / format) with no VRAM at all, just 48 GB or however much RAM; then if you have N GB of VRAM, the model can use that much less RAM and that much VRAM instead, which gives a fractional speed benefit. There's no strictly required RAM/VRAM ratio, it depends on how you set it up.
If you have SW or a specific configuration that prioritizes using the VRAM to hold particular data, like the KV cache or certain model components, then of course you use up whatever amount of VRAM that takes instead of RAM.
Transferring from RAM to VRAM is slow, though, so usually you just pick a chunk of the inference data to live permanently in VRAM; even though it's only a small part of the total puzzle, it provides a speed benefit for whatever can be stored and processed there.
ivari@reddit
So, for example, I can just upgrade my 16 GB of RAM to 64 GB and stay with my RTX 3050 to use this model at Q4 at a good enough speed?
Calcidiol@reddit
Yeah, maybe. Look at what kind of RAM bandwidth your system can achieve (large sequential reads, e.g. 128-bit-wide reads over 128 MB to GB sized blocks) based on your CPU / RAM type and speed.
The A13B part of the model name means roughly 13B active parameters, so at Q4 (about half a byte per weight) it reads around 7 GB to generate a token. If your CPU can keep up and you get 21 GB/s of RAM bandwidth, that's around 3 T/s; at 70 GB/s it's around 10 T/s, etc.
So the possible speeds are usually in the 3 T/s to 14 T/s range with DDR4 or DDR5 RAM and a fast enough CPU, using only CPU+RAM.
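To make that back-of-the-envelope math concrete, here is a rough sketch assuming generation is purely memory-bandwidth-bound and that a Q4-ish quant costs about 4.5 bits per weight including overhead; real speeds will vary:

```python
# Rough estimate only: CPU token generation is roughly memory-bandwidth-bound,
# so tokens/s ~= sustained RAM bandwidth / bytes of weights read per token.

def estimate_tps(active_params_b: float, bits_per_weight: float, bandwidth_gbs: float) -> float:
    gb_per_token = active_params_b * bits_per_weight / 8  # GB of weights read per generated token
    return bandwidth_gbs / gb_per_token

# Hunyuan-A13B: ~13B active params; assume ~4.5 bits/weight for a Q4-ish quant.
for bw in (21, 50, 70):  # sustained RAM bandwidth in GB/s
    print(f"{bw:>3} GB/s -> ~{estimate_tps(13, 4.5, bw):.1f} tok/s")
```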
ivari@reddit
My CPU is currently a Ryzen 5 1600 lol. Will upgrade in a few months once I finish paying off my mortgage.
TeakTop@reddit
Wow this is a perfectly sized MoE. If the benchmarks live up, this model is one hell of a gift for local ai.
takuonline@reddit
Perfect for what setup?
ortegaalfredo@reddit
should be able to run quantized with 2x3090.
Goldkoron@reddit
My 2 3090s and 48gb 4090
DeProgrammer99@reddit
It's about perfect for 64 GB main memory if quantized to ~5 bits per weight with room for context. That's how much RAM I have in both my work and personal machines.
kyazoglu@reddit
Looks promising.
I could not make it work with vLLM and gave up after 2 hours of battling with dependencies. I didn't try the published docker image. Can someone who was able to run it share some important dependencies? versions of vllm, transformers, torch, flash-attn, cuda etc.?
getfitdotus@reddit
You need to use the vLLM docker image to make it work; the official PR is still pending.
nmkd@reddit
Wait a few days, then doubleclick koboldcpp and you're all set.
ttkciar@reddit
I agree it looks promising, but life is too short to struggle with dependency-hell.
Just wait for GGUFs and use llama.cpp. There's plenty of other work to focus on in the meantime.
Dr_Me_123@reddit
The online demo didn't yield any surprising results. So perhaps just an upgrade to Qwen3 30B with more VRAM.
getfitdotus@reddit
This model is actually really good. But I don't like the tags, and the vLLM implementation isn't 100% there; it's using a slow Python tokenizer instead.
DepthHour1669@reddit
That runs faster than Qwen 32b! 13b active means this will inference significantly faster than a dense 32b model.
Dr_Me_123@reddit
Well that's true if your VRAM can load an 80B model entirely. But if you need to load a part of it into your RAM, that depends.
jferments@reddit
80B-A13B is such a perfect sweet spot of power vs. VRAM usage .... and native 256k context 🫠🫠🫠
SkyFeistyLlama8@reddit
Nice sweet spot for 64 GB RAM laptops with unified memory too. At q4 we're looking at around 40 GB RAM to load the entire model. It should be fast if it has 13B active params.
Affectionate-Hat-536@reddit
Do you know if a GGUF for this model is available anywhere? I hope there's an Ollama or MLX version soon.
blurredphotos@reddit
Bingo
Affectionate-Hat-536@reddit
I am in this exact boat with M4 Max 64GB. Hope to try this weekend.
sourceholder@reddit
How much extra VRAM is required to achieve 256k context?
kmouratidis@reddit
Depends on the exact parameters and the framework's implementation. ~W4A16 should fit in ~120 GB, maybe a few tens of gigs more or less depending on other factors. Attention type and context-extension method probably affect this a lot too.
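For intuition on why long context costs so much, here is a generic GQA KV-cache estimate. The layer/head numbers below are made-up placeholders, not Hunyuan-A13B's actual config; plug in the values from the model's config.json for a real number:

```python
# KV cache per sequence = 2 (K and V) * layers * kv_heads * head_dim * context * bytes/elem

def kv_cache_gb(n_layers: int, n_kv_heads: int, head_dim: int, ctx_len: int, bytes_per_elem: int = 2) -> float:
    return 2 * n_layers * n_kv_heads * head_dim * ctx_len * bytes_per_elem / 1e9

# Hypothetical 32-layer model with 8 KV heads of dim 128 at the full 256K context, fp16 cache:
print(f"~{kv_cache_gb(32, 8, 128, 256_000):.0f} GB just for the KV cache of one sequence")
```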
mxforest@reddit
Holy wow cow!
ResidentPositive4122@reddit
Interesting, it's a 80B_13A model, which gives ~32B dense equivalent.
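That ~32B number is the usual geometric-mean rule of thumb for MoE models, nothing more rigorous than this quick arithmetic check (and the reply below rightly notes it is speculative):

```python
# Community rule of thumb, not a law: dense-equivalent ~ sqrt(total * active) parameters.
total_b, active_b = 80, 13
print(f"~{(total_b * active_b) ** 0.5:.1f}B dense equivalent")  # ~32.2B
```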
Evals look amazing (beating qwen3-32B across the board, close to qwen3-A22B and even better on some). I guess we'll have to wait for 3rd party evals to see if they match this in real-world scenarios. Interesting that this scores significantly higher on agentic benchmarks.
License sux tho, kinda like meta (<100monthly users) but with added restrictions for EU. Oh well...
Different_Fix_2217@reddit
The whole "dense equivalent" thing is unproven / speculative.
matteogeniaccio@reddit
it's 100 million monthly users
TheRealMasonMac@reddit
Just a casual 1/80th of the human population.
silenceimpaired@reddit
I was really hoping for Apache. Oh well. It’s a high bar I won’t hit. As long as it doesn’t have rug pull capabilities.
ResidentPositive4122@reddit
Hah, yes, my bad. Thanks, I'll edit.
a_beautiful_rhind@reddit
I don't like that we're topping out at 32b now. Let alone having 13b active only. Training data will make or break it.
For some reason they uploaded it yesterday and then hid/deleted it.
rdmkyran@reddit
Jjjjjjjjjjjjjjjjjjjjjjjk.jjjjkjjjjjjjjjjj jjjjjjjjjjjjjjj jjjjjjjj njj jjjjjjnjjjjjjjjjjjjjjjjjjn'''''k j j j kkkjnknk nj nnj. j nn n. Nnknkk knk nk k j n k. K k k n k j knnn k n kn n kn. n n nnnnn un k'''kkkkkkkk''kkkkkkk k nk kk kk'''''kkkk.
tengo_harambe@reddit
something's off with your chat template bro
mantafloppy@reddit
I think someone "pocket dial" on reddit :D
Wonderful_Second5322@reddit
GGUFs?
Admirable-Star7088@reddit
I wonder if this works out of the box in llama.cpp? Or if we must go through the usual steps first:
- An issue is opened on the llama.cpp GitHub requesting support
- Someone implements it in a PR
- The PR gets reviewed and merged
- GGUF quants get uploaded
If this model is good though, it will be very worth the wait!
Tenzu9@reddit
or... download the official Int4 quant and run it from the included .py file:
https://huggingface.co/tencent/Hunyuan-A13B-Instruct-GPTQ-Int4
xxPoLyGLoTxx@reddit
Downloading now...
So, I always just use LM Studio to run my models. Do you happen to know if I can convert the model to MLX format using the mlx-lm library in Python?
Tenzu9@reddit
Just be sure you know your way around Python before you waste 40 GB... This is a quantized transformers model, not a GGUF. I have no idea if it supports MLX.
xxPoLyGLoTxx@reddit
I have no idea either. But it's downloaded so let's see what happens. :)
Tenzu9@reddit
this mlx transformers fork should run it:
https://github.com/ToluClassics/mlx-transformers
xxPoLyGLoTxx@reddit
Regular transformers failed. Have to try this next. Thanks for the tip
Admirable-Star7088@reddit
I have previously only been using GGUFs because, to my (incorrect?) knowledge, other formats like GPTQ can only run on GPU/VRAM exclusively. Or can I offload to CPU/RAM also with GPTQ?
kmouratidis@reddit
It should be possible to run 100% on CPU with vLLM. Second row here:
https://docs.vllm.ai/en/latest/features/quantization/supported_hardware.html
I don't think they support mixed inferencing though.
Tenzu9@reddit
Good question... I'm not sure, to be honest. I have only used transformers with small models. I do know that transformers supports this via a library called accelerate. However, whether that works with GPTQ models is unknown to me.
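For reference, the generic accelerate pattern looks like this; whether the GPTQ-Int4 checkpoint actually tolerates being split across GPU and CPU this way is exactly the open question here, so treat it as a sketch rather than something verified:

```python
# Sketch of mixed GPU/CPU placement with transformers + accelerate via device_map.
# Not verified for this particular GPTQ checkpoint; a GPTQ runtime
# (e.g. the auto-gptq or gptqmodel package) also needs to be installed.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tencent/Hunyuan-A13B-Instruct-GPTQ-Int4"
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",                        # let accelerate spread layers over GPU and CPU
    max_memory={0: "22GiB", "cpu": "60GiB"},  # cap what each device may hold
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
```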
Severin_Suveren@reddit
I think it is possible, but extremely inefficient. Quants like GPTQ, EXL2 and AWQ are optimized for running in VRAM and excel at that.
Admirable-Star7088@reddit
Guess I will just wait for all the above steps to be done then, so I can run a GGUF. An issue has been opened on the llama.cpp GitHub to add support, so the very first step has been taken :D
martinerous@reddit
Tried the demo for creative writing. Liked the style - no annoying slop, good story flow and details. Disappointed about intelligence - it often mixes up characters and actions even in a single sentence.
silenceimpaired@reddit
What creative models do you like?
kristaller486@reddit (OP)
The license allows commercial use of up to 100 million users per month and prohibits the use of the model in the UK, EU and South Korea.
ortegaalfredo@reddit
> and prohibits the use of the model in the UK, EU and South Korea.
Lmao
StyMaar@reddit
As if it had any value. ¯\_(ツ)_/¯
stoppableDissolution@reddit
It does, in the sense that the company shields itself from the European Commission trying to go after it for whatever bullshit reason.
StyMaar@reddit
The European Commission has had a pro-business stance pretty much forever, and uses the tools at its disposal very lightly (see how many times they agreed to a privacy-violating data deal with US corporations, "Safe Harbor" then "Privacy Shield", that gets struck down by European courts every time because it really does violate European law).
But of course it's an attempt to say "no, of course not, we're not distributing this to the EU", except that doesn't give them actual legal protection. Should someone do harmful stuff with it in the EU, the AI makers could still be prosecuted for making it.
You can't smuggle drugs with a sticker saying "Consuming this in the EU is forbidden" and expect to be safe from prosecution.
stoppableDissolution@reddit
But it would be the smuggler who is prosecuted, not the producer.
And no amount of censorship during training can prevent a model from generating "hate speech" or whatever else they decide to restrict, so that regulation is simply impossible to comply with. Whether it's going to be enforced is just a question of the desire to exert pressure on a company.
StyMaar@reddit
The EU's "AI Act" isn't about censoring AI so that it cannot spit out "hate speech". That "regulation impossible to comply with" is actually just a strawman. (In fact, companies like Meta had such geographic restrictions even before the AI Act was passed; it's suspected that was retaliation for the constraints GDPR put on Facebook.)
stoppableDissolution@reddit
> Pretty sure a drug lord making drugs that get shipped to the EU can be prosecuted even if he isn't a EU resident
Yeah no, that's not how that works, you can't prosecute someone outside of your jurisdiction. By, well, the definition of jurisdiction.
> EU's “AI Act” isn't about censoring AI so that they cannot spit “hate speech”
https://www.reddit.com/r/LocalLLaMA/comments/1llndut/comment/n03hvbh/
DisturbedNeo@reddit
All places that have extensive data protection laws. Curious.
stoppableDissolution@reddit
Not data protection laws, but censorship, in that case. Fuck AI act, huge mistake that puts us behind the progress yet again.
StyMaar@reddit
I read this BS all over the place, but the fact is there's no provision for censoring hate speech in the European AI Act.
The key point in the AI Act that leads to these artificial restrictions is the obligation to respect the intellectual property of the material you train on, and there you see the actual reason it bothers model makers.
(As if the EU were enforcing its regulations anyway; GDPR, for instance, is routinely violated, but the pro-business stance of the regulators means they barely do anything about it.)
stoppableDissolution@reddit
https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=OJ%3AL_202401689
Art.55:
...providers of general-purpose AI models with systemic risk shall:
- perform model evaluation in accordance with standardised protocols and tools reflecting the state of the art, including conducting and documenting adversarial testing of the model with a view to identifying and mitigating systemic risks
- assess and mitigate possible systemic risks at Union level, including their sources, that may stem from the development, the placing on the market, or the use of general-purpose AI models with systemic risk
- keep track of, document, and report, without undue delay, to the AI Office and, as appropriate, to national competent authorities, relevant information about serious incidents and possible corrective measures to address them
What is systemic risk?
Recital 110:
General-purpose AI models could pose systemic risks which include, but are not limited to, any actual or reasonably foreseeable negative effects in relation to major accidents, disruptions of critical sectors and serious consequences to public health and safety; any actual or reasonably foreseeable negative effects on democratic processes, public and economic security; the dissemination of illegal, false, or discriminatory content
So anyone deploying big-enough models has to prune their dataset of anything the EU deems illegal (and it's not just about copyright), red-team to ensure the model is unable to generate it, and monitor it; if it does, that has to be reported immediately. What counts as "false" or "discriminatory" content? Well, whatever they decide to sue you over, if they so desire, lol.
Whether it will be enforced or not will totally depend on the political desire.
AssistBorn4589@reddit
The EU has an AI Act that basically forbids the existence of large enough models, plus hundreds of pages of other regulations, including ones prohibiting LLMs from generating hate speech and criminal content.
It's logical that the rest of the world doesn't want to engage with that.
hak8or@reddit
"Basically"? How is mistral handling this? I know their AI laws are quite specific, but I haven't heard of them being limiting to that degree.
JadedFig5848@reddit
Curious, how would they know?
eposnix@reddit
They are basically saying anyone can use it outside of huge companies like Meta or Apple that have the compute and reach to serve millions of people.
JadedFig5848@reddit
I agree but let's say a big company uses it. How can people technically sniff out the model?
I'm just curious
Freonr2@reddit
It's really hard to hide something like that in a large company. People find out.
It becomes a massive conspiracy involving more and more people. You have to hope every employee who knows is totally OK with "never tell anyone that we're stealing this model." I.e., you need to employ more and more people with questionable ethics.
One small leak opens the door to court-ordered discovery. The risk for large companies is too big to bother.
eposnix@reddit
Normally license breaches are detected through subtle leaks: a config file that points to "hunyuan-a13b", an employee who accidentally posts information, or marketing material that lists the model by name. Companies can also include watermarks in the training data that point back to their training set, or train the model to emit characters in unique ways.
JadedFig5848@reddit
I see, do you have any examples of the emission of chars in unique ways?
PaluMacil@reddit
You can append invisible Unicode code points (zero-width characters) that won't be visible but can encode whatever you want.
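A toy sketch of that idea, hiding an arbitrary tag in zero-width characters (the tag string here is made up, and real watermarking schemes are more subtle than this):

```python
# Encode a short ASCII tag as zero-width characters appended to normal-looking text.
ZW0, ZW1 = "\u200b", "\u200c"  # zero-width space, zero-width non-joiner

def embed(text: str, tag: str) -> str:
    bits = "".join(f"{ord(c):08b}" for c in tag)
    return text + "".join(ZW1 if b == "1" else ZW0 for b in bits)

def extract(text: str) -> str:
    bits = "".join("1" if c == ZW1 else "0" for c in text if c in (ZW0, ZW1))
    return "".join(chr(int(bits[i:i + 8], 2)) for i in range(0, len(bits), 8))

marked = embed("Totally normal model output.", "hy-a13b")  # renders identically to the plain text
print(extract(marked))                                     # -> "hy-a13b"
```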
eposnix@reddit
https://www.reddit.com/r/ChatGPT/comments/1l3bjq1/chatgpt_adds_invisible_characters_to_your_text/
thirteen-bit@reddit
That's to avoid EU AI act requirements if I understand correctly.
It was discussed e.g. here:
https://www.reddit.com/r/aiwars/comments/1g5bz3k/tencents_license_for_its_image_generator_now/
Meta does the same starting with Llama 3.2 if I recall correctly:
https://www.reddit.com/r/LocalLLaMA/comments/1jtejzj/llama_4_is_open_unless_you_are_in_the_eu/
lothariusdark@reddit
This doesn't work with llama.cpp, right?
matteogeniaccio@reddit
Not yet. This is the issue so you can track it: https://github.com/ggml-org/llama.cpp/issues/14415
random-tomato@reddit
Oh the PR (by ngxson of course) also: https://github.com/ggml-org/llama.cpp/pull/14425
Hopefully we can run it soon :o
OutlandishnessIll466@reddit
Yeah! Just pull and build that branch; no need to wait for the pull request to be merged. It's just that there's no GGUF up yet.
noeda@reddit
Lol, I saw this comment thread in the morning and came back intending to say that if I didn't see any activity or someone working on it, I'd have a stab at it. It feels like it's happened a few times now: I see some interesting model I want to hack support together for, but some incredibly industrious person shows up and puts it together much faster :D
If it's ngxson I'd expect it to be ready soonish; one of these super industrious people as far as I can tell :) It'll probably be ready before I can even look at it properly, but since the last comment says there's some gibberish, I can at least say that if there are no updates this weekend I'll probably look at the PR and maybe help verify the computation graph or wherever the problem seems to be.
I sometimes wonder where do people summon the time and energy to hack together stuff on such short notice!
LocoMod@reddit
You're a gentleman and a scholar. Thanks.
Mysterious_Finish543@reddit
Doesn't look like it at the moment.
However, support seems to be available for vLLM and SGLang.
kmouratidis@reddit
I think sglang can only run the full version right now (using the latest commits; it wasn't added in time for v0.4.8). Quantized versions might not be supported. Maybe FP8, but the INT4 has issues (which seem to be a more general INT4 problem: https://github.com/sgl-project/sglang/issues/7583).
lothariusdark@reddit
It doesn't quite fit into 24GB VRAM :D
So I need to wait until offloading is possible.
bigs819@reddit
What does offloading do? I thought making it fit into limited GPU ram solely relied on quantizing.
lothariusdark@reddit
No, offloading places part of the model in your GPU VRAM and whatever doesn't fit remains in normal RAM. This means you run mostly at CPU speeds, but it lets you run far larger models at the cost of longer generation times.
This makes large "dense" models (70B/72B/100B+) very slow. You get roughly around 1.5t/s with DDR4 and 2.5t/s with DDR5 RAM.
However, MoE models are still very fast with offloading, while having more parameters and thus better quality responses.
Qwen3 30B A3B for example is blazingly fast when running on GPU only, so fast in fact that you can't read or even skim as fast as it generates. (That's partially necessary because of the long thought processes, but the point stands.)
As such you can use larger quants, e.g. Q8, to get the highest quality out of the model while still retaining usable speeds. Or you can fill your VRAM with context, because even offloaded to RAM the model is still fast enough.
This means this new model technically has 80B parameters but runs on CPU as fast as a 13B model, which makes it very usable at that speed.
Keep in mind this all leaves aside coding tasks, where you want the highest speeds possible. But for everything else, offloading MoE models is awesome.
DepthHour1669@reddit
Someone post the where gguf picture please
Mybrandnewaccount95@reddit
Hopefully that 256k context is legit
Googulator@reddit
At first I read "Hunyadi-A13B", and thought, a Hungarian LLM?
vincentz42@reddit
The evals are incredible and trade blows with DeepSeek R1-0120.
Note this model has 80B parameters in total and 13B active parameters, so it requires roughly the same amount of memory as Llama 3 70B while offering ~5x the throughput thanks to MoE.
This is what the Llama 4 Maverick should have been.
datbackup@reddit
Salt in the wound… i’m still rooting for meta to turn it around with a llama 4.1 that comes roaring back to the top spot
DepthHour1669@reddit
Llama 4 architecture is LITERALLY just Deepseek V3 with a few tweaks (RoPE+NoPE etc) to add long context.
The problem isn't the architecture, it's Meta's data. Garbage in, garbage out.
Who knew Facebook comments make for shit data.
TheThoccnessMonster@reddit
Well, some of them anyway. Their data pile needs to be revisited.
HilLiedTroopsDied@reddit
Prices of used 3090s and other large-VRAM cards are going to get even higher! Intel, where are the B60 Pros?
Zugzwang_CYOA@reddit
I'm not so sure about that. Expensive VRAM is superior for the dense models of the past, but huge mixture-of-experts models seem to be the direction local is going now. CPU-maxxing is much better for big MoE stuff than 3090 stacking.
Expensive-Apricot-25@reddit
No, the vision is also fully native (i.e. it wasn't added after pre-training), which makes it one of the only open models with actual native vision.
Llama 4 has the most robust vision of any open model.
JustinPooDough@reddit
This is why Google will win it all. Google has all, Google knows all.
HilLiedTroopsDied@reddit
It'd be a shame if someone(s) hacked the big tech companies and torrented their training sets. You'd need a fat pipe to clear the terabytes of data.
datbackup@reddit
Sounds reasonable. Guess we have to wait till someone crowdfunds an open model that takes Anthropic's approach of buying a million books and scanning them to train on the highest-quality data. The door seems open now that the court ruled in their favor. Chinese models are probably training on mass-pirated PDFs, so unsurprisingly they're better than Llama 4.
Zulfiqaar@reddit
Well, Meta pirated 82 terabytes of books for training their models, so unfortunately they don't get that excuse. Looks like immediately after Anthropic's win, Meta also won based on precedent (training on copyrighted content); however, the piracy allegations remain to be determined. Apparently Meta engineers specifically tried to minimise seeding while sucking up pretty much every book torrent in existence... darn leechers haha. Which is probably in their favour though, as it avoids the illegal-redistribution charge.
datbackup@reddit
If this is true, there could be hope for a 4.1!
No-Cod-2138@reddit
llama4 is a lot more sparse so it's even harder to train than otherwise.
They should probably keep pretraining DSV3 lmao
AppearanceHeavy6724@reddit
What is interesting: their Maverick-experimental on LM Arena is actually a really fun, interesting model. Great creative writer, vibes similar to V3-0324. There is a very special reason why Meta botched Llama 4, and it is not the data.
dark-light92@reddit
LM Arena is not a good comprehensive benchmark. It's a vibe benchmark. And Meta's data is all vibes, so that's not surprising at all.
I second that the issue is most likely the training data.
Expensive-Apricot-25@reddit
Yeah, same.
Though I think it will take more time for them to regain traction, especially with all of the changes they are going through right now. I'd say give it 6 months.
MagicaItux@reddit
That's awesome, do you think we can merge that with the hyena hierarchy's context starting at 4T?
DepthHour1669@reddit
Eval scores table from the model page
These scores are pretty insane for Jan 2025. Wish they added o3 and Gemini 2.5 Pro for comparison, even if they're better.
starshade16@reddit
Wtf do we have to do to get these guys to include tools in their LLMs? Come on guys.
elij7@reddit
I’m new to the whole build your own LLM thing. Would this be a good starting point to build my own model? Better than Mixtral 8x7B?
random-tomato@reddit
Training LLMs from scratch takes millions, if not hundreds of millions, of dollars, at least if you want good performance. You can try fine-tuning though, it's a lot less expensive: https://docs.unsloth.ai/
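For a sense of what "a lot less expensive" looks like, here is the generic LoRA pattern with transformers + peft (not the Unsloth API the link above covers). The repo id and target module names are assumptions to check against the actual model, and the MoE expert layers may need special handling:

```python
# Generic LoRA fine-tuning sketch: train small adapter matrices instead of the 80B base weights.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained(
    "tencent/Hunyuan-A13B-Instruct",  # assumed repo id for the base instruct model
    device_map="auto",
    trust_remote_code=True,
)
lora_cfg = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # placeholder names, verify against the real modules
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()  # only a tiny fraction of the 80B total is trained
```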
Capable-Ad-7494@reddit
does anybody remember the command to throw the important bits into vram again?
matteogeniaccio@reddit
in llama.cpp the command I used so far is
--override-tensor "([0-9]+).ffn_.*_exps.=CPU"
It puts the non-important bits in the CPU, then I manually tune -ngl to remove additional stuff from VRAM.
random-tomato@reddit
If you have free VRAM you can also stack them like:
--override-tensor "([0-2]).ffn_.*_exps.=CUDA0" --override-tensor "([3-9]|[1-9][0-9]+).ffn_.*_exps.=CPU"
So that offloads the experts of the first three MoE layers to the GPU and the rest to the CPU. My speed on Llama 4 Scout went from 8 tok/sec to 18.5 from this.
fizzy1242@reddit
remember to use the --fmoe flag too if you use ik_llama.cpp fork
BumbleSlob@reddit
64Gb or higher Unified Memory gang, rise up!
OmarBessa@reddit
someone please tag the gguf troopers
lochyw@reddit
256k is not ultra long..
bene_42069@reddit
How broken can your standard be? lol. Even o3 is "just" that much.
lochyw@reddit
It's hardly 1-2M
bene_42069@reddit
What kind of tasks do you work on to need that much?
bene_42069@reddit
How broken can your standard be? lol. This is like saying 550 hp is mediocre in a sportscar.
datbackup@reddit
Just like these language models aren’t really “large”
ResearchCrafty1804@reddit
What a great release!
They even provide benchmarks for the Q8 and Q4 quants; I wish every model author would do that.
Looking forward to testing myself.
Kudos Hunyuan!
Educational-Shoe9300@reddit
Is it possible that the Hunyuan A13B has almost no precision loss at 4bit quantization? Or am I misreading this benchmark: https://github.com/Tencent-Hunyuan/Hunyuan-A13B?tab=readme-ov-file#int4-benchmark
VoidAlchemy@reddit
I've seen it before where smaller quants sometimes "beat" the original model on some benchmarks as shown in The Great Quant Wars of 2025 as well.
I like to measure perplexity and KL-divergence of various-sized quants relative to the full model. This lets us get some idea of how "different" the quantized output will be relative to the full-size model.
So yeah while the 4bit does score pretty similar to the original on most of those listed benchmarks, it is unlikely that it is always "better".
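A minimal sketch of that KL-divergence comparison: run the same text through the full-precision and quantized models and compare their next-token distributions. The tensors below are stand-ins for real logits:

```python
# KL(full || quant) averaged over token positions; logits shaped [seq_len, vocab_size].
import torch
import torch.nn.functional as F

def mean_kl(logits_full: torch.Tensor, logits_quant: torch.Tensor) -> torch.Tensor:
    logp_full = F.log_softmax(logits_full, dim=-1)
    logp_quant = F.log_softmax(logits_quant, dim=-1)
    return (logp_full.exp() * (logp_full - logp_quant)).sum(dim=-1).mean()

full = torch.randn(8, 32000)                  # stand-in for full-precision logits
quant = full + 0.05 * torch.randn_like(full)  # a good quant only perturbs them slightly
print(mean_kl(full, quant).item())            # small value -> outputs barely differ
```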
xxPoLyGLoTxx@reddit
Looks great! Quick someone make an mlx 8 bit version.
MagicaItux@reddit
Detected Pickle imports (4)
"torch._utils._rebuild_tensor_v2", "torch.BFloat16Storage", "torch.FloatStorage", "collections.OrderedDict"
If you really want to run it with keeping that in mind, I'd just drop the uri of the .bin file in the right hyena hierarchy
Detected Pickle imports (4)
So could you explain this?
BumbleSlob@reddit
When GGUF?
iansltx_@reddit
...and now to wait until it shows up in ollama-compatible q4. 64GB unified RAM here so this should perform nicely.
05032-MendicantBias@reddit
It feels like this should work wonders with 64GB RAM + 24GB VRAM?
Radiant_Hair_2739@reddit
Can't wait for llama.cpp or LM Studio!
Admirable-Star7088@reddit
Perfect size for 64GB RAM systems, this is exactly the MoE size the community has wanted for a long time! Let's goooooo!
stoppableDissolution@reddit
48gb too, q4 will fit just perfect. Maybe even q6 with good speed with some creative offloading.
m98789@reddit
Fine tune how
matteogeniaccio@reddit
I think it's in the documentation from their github: https://github.com/Tencent-Hunyuan/Hunyuan-A13B/blob/main/train/README.md
Alkaided@reddit
The first paragraph has a very very strong smell of Chinese…
RuthlessCriticismAll@reddit
does he know...
mxforest@reddit
I bet 10 cents he doesn't.
Barry_22@reddit
Wow, great. How many languages does it support?
jacek2023@reddit
Looks perfect!!! What a great time we are living in now
Classic_Pair2011@reddit
Who will provide this model on OpenRouter? I hope somebody picks it up.
Evolution31415@reddit
Hm...
https://huggingface.co/tencent/Hunyuan-A13B-Instruct-FP8 gives 404