mistralai/Magistral-Small-2506
Posted by yoracale@reddit | LocalLLaMA | View on Reddit | 151 comments
Building upon Mistral Small 3.1 (2503), with added reasoning capabilities from SFT on Magistral Medium traces and RL on top, it's a small, efficient reasoning model with 24B parameters.
Magistral Small can be deployed locally, fitting within a single RTX 4090 or a 32GB RAM MacBook once quantized.
Learn more about Magistral in Mistral's blog post.
Key Features
- Reasoning: Capable of long chains of reasoning traces before providing an answer.
- Multilingual: Supports dozens of languages, including English, French, German, Greek, Hindi, Indonesian, Italian, Japanese, Korean, Malay, Nepali, Polish, Portuguese, Romanian, Russian, Serbian, Spanish, Swedish, Turkish, Ukrainian, Vietnamese, Arabic, Bengali, Chinese, and Farsi.
- Apache 2.0 License: Open license allowing usage and modification for both commercial and non-commercial purposes.
- Context Window: A 128k context window, but performance might degrade past 40k. Hence we recommend setting the maximum model length to 40k.
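For anyone serving it locally, here is a minimal sketch of respecting that 40k cap with vLLM (the exact token count and the choice of vLLM are my assumption, not something from the model card):
# cap the context at ~40k tokens as recommended above
vllm serve mistralai/Magistral-Small-2506 --max-model-len 40960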
Benchmark Results
| Model | AIME24 pass@1 | AIME25 pass@1 | GPQA Diamond | Livecodebench (v5) |
|---|---|---|---|---|
| Magistral Medium | 73.59% | 64.95% | 70.83% | 59.36% |
| Magistral Small | 70.68% | 62.76% | 68.18% | 55.84% |
Only-Letterhead-3411@reddit
Very excited to see how it will perform against Qwen3 32B
jacek2023@reddit
I am not able to find matching Qwen3 32B benchmark results anywhere, please share if you know them
Healthy-Nebula-3603@reddit
So Qwen 32B is far more advanced ...
_raydeStar@reddit
Yeah. Though I'll say 24B performs significantly better than a 32B on a 4090. I still think I'll prefer qwen 30 though.
arty0mk@reddit
Hi. What token/s generation speed are you getting for the 24B and Qwen 32B on a 4090?
Healthy-Nebula-3603@reddit
It's not even a matter of preference... objectively Qwen 32B is just better...
Flag_Red@reddit
Benchmarks don't always tell the whole story. We won't know until it passes the vibe test.
windozeFanboi@reddit
Well for one, you can fit more context.
EU language support should be dramatically better than Qwen, and who knows what else.
But yeah, I guess it may not be as good for dedicated coding tasks.
Healthy-Nebula-3603@reddit
I made some tests for reasoning and code ... Unfortunately Qwen does it better...
_raydeStar@reddit
Oh. I shouldn't have said better. I meant performance is much faster.
Healthy-Nebula-3603@reddit
You mean speed ?
_raydeStar@reddit
I am a native English speaker, and fast and speed have always been in the same camp.
But it's more t/s AND performance combined to give the optimal model.
Healthy-Nebula-3603@reddit
Native English eh? Ok ...but
"Performance is much faster" doesn't sound strange ? 😅
_raydeStar@reddit
Yet - you already know what I'm referring to without needing to clarify further. That's a win. We can move on.
kerighan@reddit
Nope, qwen3 uses majority voting on 64 prompts. You're comparing apples and oranges
kerighan@reddit
You all got played. Look at the grey lines beneath the benchmark tables of Qwen3. AIME 24 and 25 results are from maj@64, so majority voting on 64 answers
jacek2023@reddit
This is why I wrote I can't find matching scores, so no, not "you all" :)
kerighan@reddit
Not you then <3
I find this kind of practice (Qwen3 table with hidden caveats) very disingenuous.
For comparison, Magistral Medium versus Qwen3-235B-A22B (with maj@64):
AIME 24: 90.0% / 85.7% (Magistral wins)
AIME 25: 83.3% / 81.5% (Magistral wins)
IShitMyselfNow@reddit
https://qwenlm.github.io/blog/qwen3/
jacek2023@reddit
I think Qwen3 32B is:
(please verify)
randomqhacker@reddit
Any smarter in non-thinking mode? (By preloading empty tags?)
danielhanchen@reddit
I made GGUFs for Magistral at https://huggingface.co/unsloth/Magistral-Small-2506-GGUF
You can run them with llama.cpp or Ollama.
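For example (the quant tag below is just one I'd pick, so treat it as an assumption and substitute whichever file you downloaded):
# llama.cpp, pulling the GGUF straight from Hugging Face and using the embedded chat template
llama-cli -hf unsloth/Magistral-Small-2506-GGUF:UD-Q4_K_XL --jinja
# or Ollama
ollama run hf.co/unsloth/Magistral-Small-2506-GGUF:UD-Q4_K_XL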
Also best to increase Ollama's context length to say 8K at least:
OLLAMA_CONTEXT_LENGTH=8192 ollama serve &
Some other details are in https://docs.unsloth.ai/basics/magistral
RickyRickC137@reddit
GGUF when.... Ah you beat me!
danielhanchen@reddit
We worked with Mistral behind the scenes to fix all issues this time :) I double checked on issues as well!
Pristine-Woodpecker@reddit
I find it weird the llama.cpp webui ignores the default system prompts. You explicitly have to go and copy paste it into the settings, even with --jinja. That feels silly.
freedom2adventure@reddit
Shouldn't be too hard to check /props and pull the vars that were loaded when the model loaded. Will have to take a peek later today.
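Something like this already exposes what the server loaded (the exact field names are from memory, so double-check them):
# dump the model's embedded chat template and the default generation settings
curl -s http://localhost:8080/props | jq '{chat_template, default_generation_settings}'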
randomqhacker@reddit
You're awesome, thanks!
danielhanchen@reddit
Oh yes that would be helpful! Tbh imo if there is a chat template it should be on by default
colin_colout@reddit
GOAT!
danielhanchen@reddit
:)
ZeroCool2u@reddit
LFGO Daniel! Mistral and Google better be paying you for fixing all their models!
regstuff@reddit
I salute you!
danielhanchen@reddit
Thanks!
theologi@reddit
why are all answers in this format?
\boxed{Nein}
how do i get rid of it?
MoffKalast@reddit
breathes in
Where min_p?
srtng@reddit
Awesome! That was fast!
inevitable-publicn@reddit
u/danielhanchen Thanks!
What's the right way to configure system prompt for this?
It seems to put every answer into a \boxed{}. If I add "Do not put the final answer in boxed." as a system prompt, the thinking goes away. If I try to adapt the system prompt from here, the boxed behavior comes back - for text answers!
danielhanchen@reddit
Oh use our system prompt we wrote in the docs!
A user will ask you to solve a task. You should first draft your thinking process (inner monologue) until you have derived the final answer. Afterwards, write a self-contained summary of your thoughts (i.e. your summary should be succinct but contain all the critical steps you needed to reach the conclusion). You should use Markdown to format your response. Write both your thoughts and summary in the same language as the task posed by the user. NEVER use \boxed{} in your response.
Your thinking process must follow the template below:
<think>
Your thoughts or/and draft, like working through an exercise on scratch paper. Be as casual and as long as you want until you are confident to generate a correct answer.
</think>
Here, provide a concise summary that reflects your reasoning and presents a clear final answer to the user. Don't mention that this is a summary.
Problem:
inevitable-publicn@reddit
Right, this I did try, but that ends up putting boxes everywhere, even in long text answers.
I think the `think` prompt needs some work.
kryptkpr@reddit
Thank you, kicked off a benchmark run on UD-Q4_K_XL to see how it compares to the API (which did not perform so hot)
danielhanchen@reddit
Oh fantastic!
kryptkpr@reddit
The UD4 quants are very good, generally speaking within confidence interval of the API.
Unfortunately something seems to be wrong here and I'm not yet sure exactly what. When the reasoning system prompt is injected, the official Mistral API goes off the rails and sometimes thinks for 10k+ tokens. I had to cap it at 8k because of my wallet.
With the local model I can apply thought-shaping (see the sketch below); limiting the reasoning trace to 2k brings the mean response length down and certainly improves over truncating the 8k traces, but it is still behind simple CoT without the reasoning.
This is a generally counter-intuitive result and does not match what I see with Qwen3. I'm collecting more data now to try to understand what's up a little better. Might have to cough up the 20 EUR to let the API run to the end to get to the bottom of this mystery.
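For anyone wondering what the thought-shaping above looks like mechanically, here is a rough sketch against a local llama-server style completions endpoint (the port, JSON fields, and rendered-prompt placeholder are assumptions on my part, not what my harness literally does):
# phase 1: let the model think, but stop after ~2k tokens or when it closes the think block itself
curl -s http://localhost:8080/v1/completions -H 'Content-Type: application/json' \
  -d '{"prompt": "<rendered chat template>", "max_tokens": 2048, "stop": ["</think>"]}'
# phase 2: append the draft thoughts plus "</think>" to the prompt and call the
# same endpoint again so the model writes only the final (shaped) answer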
json12@reddit
Do UD quants work well on Macs, or should I stick to Q8_0?
danielhanchen@reddit
Lower bit works but I think someone mentioned the BF16 and Q8_K_XL being slower due to Mac not liking BF16
SadWolverine24@reddit
This is too little, too late from Mistral. These two models were already obsolete at the time of release.
Qwen3 drastically outperforms.
GreatGatsby00@reddit
You might have to add a system prompt like this one to stop it from thinking too much:
"You have a tendency to overthink simple questions. Counter this by: 1) Trusting your first solid piece of knowledge, 2) Stating it clearly, 3) Adding only what's necessary for completeness, 4) Stopping immediately. If you find yourself generating multiple 'but maybe' thoughts, that's your signal to conclude. Excessive analysis is not accuracy - it's procrastination."
Delicious-Farmer-234@reddit
I believe there's an issue with the chat template because it ignores the system prompt. This is not the first thinking model I've used where the system prompt is ignored. I am using LM Studio, btw.
and_human@reddit
I'm getting outputs that end with math notation, e.g. \boxed{16027}.
I also got this on their own Le Chat so I don’t think it’s due to the quantization process.
CheatCodesOfLife@reddit
Same here. I pasted some bespoke code and asked if it understood it; it went on and on about it, then responded with \boxed{Yes } lol
Delicious-Farmer-234@reddit
Lol
OfficialHashPanda@reddit
On math and code queries, the model is trained to output in the format: ... thinking tokens ... \boxed{answer}
This was mentioned in the paper.
JadedSession@reddit
Note the system prompt specifically *forbids* the model from answering this way. So if you see it, it means you're using the wrong prompt.
stddealer@reddit
I remember, I think it was QwQ preview who was also returning boxed answers? It's just a way to make the final answer stand out
JadedSession@reddit
The model doesn't have the system prompt baked in like some other models do. You absolutely need to set it.
Wolly_Bolly@reddit
I'm getting the \boxed{... something ... } too.
HilLiedTroopsDied@reddit
To use it for coding, it's annoying that "Magistral-Small-2506_gguf does not support function calling." Why else would we want a GGUF of this model if not to use it in an IDE for function calls?
tarruda@reddit
If the model is really good, I'm sure someone will eventually fine tune it to support function calling
Iory1998@reddit
That's simple. Upgrade to the larger and better version: Magistral Medium. 🤦♂️ That seems to be the business model Mistral AI is following now.
JadedSession@reddit
Or upgrade to Qwen3 :-/
Iory1998@reddit
Absolutely!
GreenTreeAndBlueSky@reddit
What I like about the choice of parameters is that it makes sense compared to typical GPU VRAM sizes (3×8, 2×12, or 1×24). I really liked their old MoE though, and I think a 24B MoE would be so worth it, even if it's not the best in some benchmarks.
AltruisticList6000@reddit
I preferred the 22b since that fit perfectly in 16gb VRAM plus a good context size at Q4; the 24b barely fits at Q4_S and with a smaller context. Even with 24gb, unless you run it at something like Q6, the context for the 24b won't fit on the GPU. Also I'm pretty sure average GPU VRAM is not 2x12gb or 24gb+ but more like 12-16gb for most AI users.
In fact I wish other devs besides Mistral would make LLMs in the 21-22b parameter range, which is a pretty good sweet spot for 16gb VRAM, but somehow most of them ignore this size and go straight up to 27-32b, or down to 8-14b for 8-12gb VRAM users.
-lq_pl-@reddit
There are way too many rich people with 24gb cards here that distort the perception. The prices for (used) cards with 16gb+ are insane right now. I got a 4060Ti with 16gb in November for 460 EUR, that card now costs over 700EUR. I could sell my card for a profit right now.
kaisurniwurer@reddit
3090s hover around the same price as last year in the EU (about $700; the price rose relative to USD, but not in EUR).
Zangwuz@reddit
The 5060 Ti 16GB is 500 euros new here in the EU and it's available, so I'm not sure why someone would buy a used 4060 Ti for 700 euros.
AltruisticList6000@reddit
Yes similar thing I noticed around here too, used rtx 4090 or 3090 is way too overpriced and I don't really trust random used products around here, who knows why they sell these used high end GPUs. Even a rtx 4070 ti super 16gb is freaking expensive here and that still only has 16gb VRAM...
justGuy007@reddit
What are you using now? Somehow, I still like the 22b model.
AltruisticList6000@reddit
I'm still using the Mistral 22b 2409 for the reason above, and because the 24b is broken for me for some reason: very repetitive and dry, frequently going into infinite responses, weirdly giving 6-7 sentence replies/questions in one response that all sound very similar and repetitive when only 1-2 of those sentences were warranted, etc. It's just overall weird, as if it couldn't handle more than 2-3 responses without opening a new chat. Meanwhile the 22b is very solid for what I use it for.
justGuy007@reddit
Glad to hear it. I'm having the same experience with 22b. (also running on 16gb VRAM)
GreenTreeAndBlueSky@reddit
Yeah i get that but I think if you have 16gb or less and want the context on gpu you're better off with a smaller model, say a 14b one
Iory1998@reddit
Mixtral is still one of my favorite models of all time.
r4in311@reddit
It's amazing that they released that, but the statistics are incredibly misleading. AIME 24/25 consists of 2x15 questions per year; that's a super low sample count, and the answers to those are contained in pretty much all training datasets. You can test this yourself by just asking the LLM which AIME Q/A pairs it already knows. :-) They are just testing dataset contamination. Also, maj@4 and maj@64 are incredibly misleading too. Who runs the model 64 times in any practical scenario? The wait times are insane for DeepSeek already. They just want to hide that it's far behind DeepSeek, which is fine—it's a super tiny model and amazing for its size.
HiddenoO@reddit
Based on their own benchmarks, it's far behind comparable sizes of Qwen3 though, so I'm not sure how it'd be considered "amazing for its size"?
pseudonerv@reddit
If you haven’t tried, the vision projector mistral small mmproj gguf works with this model too! It’s the first open weight reasoning model with vision
XForceForbidden@reddit
QVQ-72B-Preview is there, not sure how it works
pseudonerv@reddit
True. Completely forgot about that
seventh_day123@reddit
Magistral uses the reinforce++-baseline algorithm from OpenRLHF.
CBW1255@reddit
When someone finds out, please share how to make it not "think". It's taking forever to spit out an answer to even the simplest of questions.
jacek2023@reddit
the thinking is disabled by default, how do you run it?
inevitable-publicn@reddit
Just adding any system prompt at all makes it lose its thinking capacity.
It only thinks if your system prompt asks it to (like the one shared in the blog posts).
AppearanceHeavy6724@reddit
Possibly absolutely dreadfully awful for non coding uses.
thereisonlythedance@reddit
You shouldn't be down-voted for saying this. If you look at analysis from the likes of Anthropic, over 70% of the usage of their models is not coding or maths. Yet all these companies are targeting those things at the expense of everything else. What I wouldn't give for just one of them to break the mold.
I personally think coding models should be specialised models.
And yes, checking via the API Magistral is not great at creative tasks, language is very sloppy.
IngenuityNo1411@reddit
I don't totally agree; top-tier models like Gemini 2.5 Pro, Claude 4, and DeepSeek R1 0528 are good at both STEM/coding stuff and creative writing. But I agree that for local models in an acceptable size range (below 32B), emphasizing STEM might harm the model's creativity, because at a certain size they can only remember so much. That's still proof that we need more specialized models for creative writing (and sadly, those RP fine-tunes don't quite fit the writing scenario).
thereisonlythedance@reddit
Yeah, though the recent Sonnet 4 model is a step back for non-coding work IMO. I've been impressed by Opus 4 as a generalist model; it bucks the trend. All the recent OpenAI models have been very heavily STEM focused.
DeepSeek is really interesting. I think they said in their paper that they actively had to do a special final pass to restore writing capability. V3 0324 is a great all-round model that proves it's possible to have everything. The new R1 is also very creative and more capable of long creative outputs than I'm used to.
Hoodfu@reddit
R1 0528 is absolutely fantastic. I asked GPT-4.1 to make a comedic image prompt about an HVAC unit on the 7th floor not talking to the control unit, with workmen. It basically just makes it with the workmen saying "it's not talking!" with some cartoon bits. R1 0528's output: Exasperated HVAC technicians on a skyscraper rooftop during golden hour, attempting to coax a sullen anthropomorphic AC unit (with cartoonish frown and crossed ductwork arms) into communicating with the 7th floor; below, office workers hang out windows waving heat-distorted protest signs reading "WE MELT!" while one technician offers the machine a bribe of frozen pizza slices, another uses a comically oversized tin-can telephone, and a third consults a "Talking to Moody Appliances" handbook; dramatic low-angle shot capturing reflective building glass, steam vents, and tangled wires, hyper-detailed textures on grimy uniforms and metallic surfaces, cinematic lighting with lens flare, Pixar-meets-Industrial-Revolution art style, 8K resolution, f/2.8 shallow depth of field
thereisonlythedance@reddit
It’s a very intelligent model. Just feels like something completely different and fresh to me. The level of fine detail it’s capable of in most tasks is super impressive.
IngenuityNo1411@reddit
Yes, and even top models involve trade-offs, back and forth. IMO the 03-25 preview is the best Gemini 2.5 Pro version for writing and was my favorite before R1 0528; sad that it's gone forever. Later versions tend to produce sloppier output, frequently using those GPT-isms (aligned on GPT-generated stuff?).
DeepSeek models, however, are totally "other" things, special ones. Personally I'm not that fond of the initial R1. It did write interesting stuff with shorter prompts, but it frequently went off on its own ideas and ignored longer instructions, and more problematically it always tended to write short outputs, so it wasn't good at long-form writing (I don't know if my prompts were just too bad). V3 0324 kind of got rid of these issues, yet it still writes short. The new R1 fixed them, with a minor creativity trade-off, but I'm totally pleased using it now.
AppearanceHeavy6724@reddit
Deepseek hired literature majors afaik to keep models good at non stem uses.
toothpastespiders@reddit
With thinking datasets as well that's a 'lot' of dry, factual if meandering, writing. While I don't have any proof, I'd still be surprised if that didn't push a model's language into that direction at least to some extent.
AppearanceHeavy6724@reddit
Precisely. The only somewhat usable for creative writing models <= 32b are Gemma 3 12b, Gemma 3 27b and perhaps GLM-4. Qwen and Mistral are unusable for fiction.
Kamimashita@reddit
Do you know of any benchmarks for creative writing? Now that I type that out I imagine it would be really difficult to benchmark other than just having a human eye test.
AppearanceHeavy6724@reddit
Eqbench.com and one by Lech Mazur
fish312@reddit
Gemma is absolute trash at creative writing and rp. It's drier than the sahara.
florinandrei@reddit
Maybe she's not getting enough ~~foreplay~~ prompting and context.
AppearanceHeavy6724@reddit
Hmm. No. It is not. It is actually very detailed, wordy and purple.
dark-light92@reddit
Yes.
I think reasons are twofold.
1) Measuring improvements in coding & math is easy. Measuring improvements in creative tasks is much harder.
2) People use models for coding and there is little to no backlash. Vibe coding is ridiculed but not vilified. If a company focuses their model on creative tasks they will be immediately labeled as anti-artist and it will be a PR nightmare.
dazl1212@reddit
It's a shame as Miqu and Mistral Small 22b were excellent for creative writing. But as you said most newly released models are aimed at STEM... Sucks really.
AltruisticList6000@reddit
Yeah, Mistral's strength was always writing, style, roleplay, instruction following, summaries, etc. I was already dissatisfied with the 24b model from a few months ago. The previous 22b is not that great at math and logic/puzzles (Qwen 14b is way better with thinking), but it is absolutely good at RP and summaries, and its writing/talking is way more natural and flexible than Qwen. For example, Qwen is obsessed with using "—" no matter what the hell I ask from it, while Mistral 22b only used it sparingly when needed, like book-style writing or default AI/assistant mode, and never inserted it into everything where it's not needed.
Particular_Rip1032@reddit
Holy Smokes! After a long while, we're finally really talking about Mistral again.
Eupolemos@reddit
With Devstral, I don't think we ever stopped?
BalaelGios@reddit
Is devstral good for code chat assist? Or only for agentic use?
CheatCodesOfLife@reddit
I mostly use it for "chat-coding" yeah.
No_Afternoon_4260@reddit
Its true potential is as an agent. But yeah, why not.
Zc5Gwu@reddit
I'm curious if it's good for FIM too.
Particular_Rip1032@reddit
No, I mean we had rarely talked about it in a significant, hyped way for a couple of months. Most likely due to Mistral themselves not releasing very often, though.
CheatCodesOfLife@reddit
Devstral replaced GLM4 as my main local coding model. It's the best; can't wait to try this new one.
AppearanceHeavy6724@reddit
GLM-4 can both code and write stories. I rarely meet anything like that, the last one was Mistral Small 22b. But as coder devstral is probably better.
INT_21h@reddit
I'm really surprised by how amoral this model is. It seems happy to answer questions about fabricating weapons, synthesizing drugs, committing crimes, and causing general mayhem. Even when it manages to refuse, the reasoning trace usually has a full answer, along with a strenuous internal debate about whether the user should follow guidelines or obey the user. I don't know where this came from: neither mistral nor devstral were like this.
a_beautiful_rhind@reddit
So we're not getting medium. How many parameters is it?
Leather-Term-30@reddit
Medium should be around 50B parameters, or (unlikely) around 70B at most. It’s a dense model, not a MoE.
Klutzy-Snow8016@reddit
The CEO confirmed on Twitter that the pirated model "miqu" was actually Mistral Medium 2. That model was 70b (it was a Llama 2 fine-tune), so Medium 3 might be a similar size.
Amgadoz@reddit
Could you please share the tweet?
Klutzy-Snow8016@reddit
It's this tweet: https://x.com/arthurmensch/status/1920136871461433620
toothpastespiders@reddit
I swear it'll be 2030 and nemo and miqu will still be among the best of their size.
pseudonerv@reddit
Waiting for some good soul to gift us miqu, eh, …, magiqu
ResidentPositive4122@reddit
IMO it's much better for Mistral to release small models under permissive licenses and offer larger ones under a license (for companies or via APIs) than for Mistral to go out of business and not launch anything...
ArsNeph@reddit
That would make sense if they hadn't released their biggest model, Mistral Large 123B, under a permissive license. Though, in my humble opinion, having Mistral Medium open-weight and Large closed-source would have been preferable for most local users, due to the size.
Ambitious_Subject108@reddit
Mistral is no longer my friend, Deepseek is my new best friend.
silenceimpaired@reddit
I am of the opinion that they could always release their large models, but only as a base with pretraining and no post-training. When they do that, they could compare their Apache-licensed base model against their private, closed-source instruct model, with its special-sauce instruct and safety training, to demonstrate their ability to tune a model for companies. This would still leave an incentive for large companies to hire them, and it would give hobbyists and small companies a starting point that's better than nothing. The datasets people used to fine-tune the base model would often be available (on Hugging Face) to Mistral, so they could integrate aspects of them if they thought their closed-source instruct model would perform better with them. Win-win for all.
a_beautiful_rhind@reddit
They gotta do what they gotta do.
mantafloppy@reddit
Reading is hard.
https://mistral.ai/news/magistral
a_beautiful_rhind@reddit
clearly it is
fuutott@reddit
I've downloaded the unsloth and mistralai GGUFs in LM Studio, and both (un) Q4_K_M and (un/mai) Q8 give me responses in this format: the enforced summary, final answer, and boxed(???). I like that it's following instructions, but shouldn't this really be system prompt territory? @danielhanchen
who was copernicus?
magistral-small-2506@q8_0 Thoughts Thought for 6.77 seconds
Alright, the user is asking about Copernicus. I know he's a famous name from history, but I need to recall more details about him. Let me break it down:
I recall that he had a revolutionary idea about the solar system. But what was it exactly?
His contributions to science:
I think this was during the Renaissance period, maybe 15th or 16th century?
More details:
His work challenged the long-held beliefs of the Church and other scholars at the time, which caused controversy.
Impact:
Now, to summarize all this in a concise way:
Summary: Nicolaus Copernicus was a Polish astronomer who proposed the heliocentric model of the solar system, where the Sun is at the center instead of the Earth. His theory, published in his book "De revolutionibus orbium coelestium," revolutionized our understanding of the universe and challenged the geocentric model that had been widely accepted since antiquity. His work laid the foundation for modern astronomy and had significant implications for science and philosophy.
Final Answer: Nicolaus Copernicus (1473–1543) was a Polish astronomer best known for proposing the heliocentric model of the solar system, where the Sun, rather than the Earth, is at the center. His revolutionary theory, presented in his book "De revolutionibus orbium coelestium" (On the Revolutions of the Celestial Spheres), challenged the geocentric model and laid the foundation for modern astronomy. This work marked a pivotal shift in scientific thought during the Renaissance.
[ \boxed{Nicolaus\ Copernicus} ]
danielhanchen@reddit
Yes, that's what Mistral trained it on, so it must adhere to that; otherwise you will get worse performance.
jacek2023@reddit
there is also official gguf
https://huggingface.co/mistralai/Magistral-Small-2506_gguf
intc3172@reddit
it's not "official" it's just llama.cpp default quantization output. that's what they said in model card
maikuthe1@reddit
It is "official." It was officially released by the official creators.
Jotschi@reddit
I'm currently working on adding Ollama think support for Langchain4j. Is it normal that the reasoning part is only generated for the first message? I don't see it for follow-up messages. I'm on Ollama 0.9.0.
Hujkis9@reddit
lovely
danigoncalves@reddit
I am curious about this one, mainly because it's the first "small" reasoning model from Mistral, and I want to test it with my native language (European Portuguese). Does anyone know if we can find it on OpenRouter?
AdamDhahabi@reddit
Sure, it will come; at the moment there's only Magistral Medium (non-free).
Salt-Advertising-939@reddit
i’d love to see this on live fiction benchmark
kryptkpr@reddit
Super quick benchmark run comparing llama 3.1 8b (AWQ), qwen3 8b/14b (AWQ/GGUF Q8), gpt-4.1-nano (openai API) and magistral-small (via the official mistral API)
I'm fairly disappointed; if I added Qwen3-4B to this table the results would be even more damning: the reasoning in magistral-small does not appear to be SOTA. I hope the medium is a stronger reasoner; it's running now.
(You can mess around with this benchmark here: https://github.com/the-crypt-keeper/ChatBench but it's not really ready for release yet, not all the tasks I'd like have been included and the documentation is scarce)
kryptkpr@reddit
Good news, it might not be terrible - I read the docs, and a specific system prompt is required to trigger generating the <think> block.
Consider the above "naive" results: the model didn't actually get a chance to reason!
jacek2023@reddit
https://huggingface.co/mistralai/Magistral-Small-2506/discussions/4
Could anyone verify or deny that?
ForsookComparison@reddit
Where's deephermes-mistral-24b ? I'd put my faith in a reasoning Mistral on that one. The preview was excellent
Ne00n@reddit
Holy shit, I tried the same task yesterday on Devstral and DeepCoder.
DeepCoder made it work, on the first try but it did look ugly.
Magistral however, just nailed the task, on the first run.
Educational-Shoe9300@reddit
It's going to be interesting to combine using Magistral & Devstral as an architect-editor combo in Aider.
Also, it's great to get Magistral with context > 32K :)
Predatedtomcat@reddit
How does this compare to Devstral ?
ForsookComparison@reddit
It's not married to one specific product so it's already off to a better start
ResidentPositive4122@reddit
I've been pleasantly surprised by devstral w/ cline. So whatever ft they did for openhands apparently carried over to how cline prompts it.
celsowm@reddit
2506 means it was available for some people, right?
Deathcrow@reddit
It means June 2025, which is the current month.
jacek2023@reddit
This is fantastic news
ForsookComparison@reddit
Apache 2.0 phewww
sky-syrup@reddit
wdymmmmmm it’s been 9 minutes ;-;