internlm/Intern-S2-Preview · Hugging Face
Posted by pmttyji@reddit | LocalLLaMA | View on Reddit | 8 comments
Introduction
We introduce Intern-S2-Preview, an efficient 35B scientific multimodal foundation model. Beyond conventional parameter and data scaling, Intern-S2-Preview explores task scaling: increasing the difficulty, diversity, and coverage of scientific tasks to further unlock model capabilities.
By extending professional scientific tasks into a full-chain training pipeline from pre-training to reinforcement learning, Intern-S2-Preview achieves performance comparable to the trillion-scale Intern-S1-Pro on multiple core professional scientific tasks, while using only 35B parameters (continually pretrained from Qwen3.5). At the same time, it maintains strong general reasoning, multimodal understanding, and agent capabilities.
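For anyone who wants to poke at it, here is a minimal loading sketch using the standard Hugging Face transformers remote-code path. The prompt and generation settings are illustrative assumptions; the model card's own quickstart is authoritative.

```python
# Hypothetical quickstart, assuming a standard transformers remote-code setup;
# consult the model card for the official loading snippet.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "internlm/Intern-S2-Preview"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # use the dtype the checkpoint specifies
    device_map="auto",    # requires accelerate; spreads the model across GPUs
    trust_remote_code=True,
)

prompt = "Propose a plausible crystal structure family for a ternary oxide."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```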
Features
- Scientific task scaling with full-chain training. Intern-S2-Preview scales hundreds of professional scientific tasks from pre-training to RL, enabling strong performance across multiple specialized domains at only 35B parameters. It further strengthens spatial modeling for small-molecule structures and introduces real-valued prediction modules (sketched after this list), making it the first open-source model to combine material crystal-structure generation with strong general capabilities.
- Enhanced agent capabilities for scientific workflows. Intern-S2-Preview significantly improves agentic abilities over the previous generation, achieving strong results on multiple scientific agent benchmarks.
- Efficient RL reasoning with MTP and CoT compression. During RL, Intern-S2-Preview adopts shared-weight MTP with a KL loss (sketched after this list) to reduce the mismatch between training and inference behavior, substantially improving the MTP acceptance rate and token generation speed. It also introduces CoT compression techniques that shorten responses while preserving strong reasoning capability, improving both performance and efficiency.
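On the real-valued prediction modules mentioned in the first feature, the release does not publish the architecture, but the general idea can be sketched as a small regression head on the backbone's hidden states. Everything below (class name, pooling choice, wiring) is an assumption for illustration.

```python
# Hypothetical sketch of a real-valued prediction module: a small regression
# head on the backbone's hidden states for numeric scientific targets
# (e.g., a material property). Not the released architecture.
import torch
import torch.nn as nn

class RealValuedHead(nn.Module):
    def __init__(self, hidden_size: int, num_targets: int = 1):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(hidden_size, hidden_size),
            nn.GELU(),
            nn.Linear(hidden_size, num_targets),
        )

    def forward(self, last_hidden: torch.Tensor) -> torch.Tensor:
        # Pool the final token's representation and regress a real value,
        # rather than decoding the number token by token.
        return self.mlp(last_hidden[:, -1, :])
```

Regressing the value directly sidesteps the usual failure mode of emitting numbers digit by digit through the language head.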
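The shared-weight MTP plus KL idea from the last bullet can likewise be made concrete. Below is a minimal sketch of one plausible form of the loss: the main head acts as a teacher for the draft head, so speculative drafts stay aligned with the policy as it changes during RL. The direction of the KL, the temperature, and the weighting are all assumptions, not the released formulation.

```python
# Illustrative sketch of a KL term aligning a shared-weight MTP draft head
# with the main head; names, shapes, and the KL direction are assumptions.
import torch
import torch.nn.functional as F

def mtp_kl_loss(main_logits: torch.Tensor, mtp_logits: torch.Tensor,
                temperature: float = 1.0) -> torch.Tensor:
    """Mean per-token KL(main || mtp) over next-token distributions.

    main_logits, mtp_logits: [batch, seq_len, vocab] predictions for the
    same target positions from the main head and the MTP draft head.
    """
    vocab = main_logits.size(-1)
    # Main head is a frozen teacher for the draft head.
    p_main = F.softmax(main_logits.detach() / temperature, dim=-1).reshape(-1, vocab)
    log_q_mtp = F.log_softmax(mtp_logits / temperature, dim=-1).reshape(-1, vocab)
    # F.kl_div(input=log q, target=p) computes KL(p || q).
    return F.kl_div(log_q_mtp, p_main, reduction="batchmean")
```

In a full setup this term would presumably be added with a small weight to the RL objective; a better-aligned draft head is what raises the acceptance rate and, with it, decoding speed.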
techlatest_net@reddit
Nice to see a 35B model punching above its weight on scientific tasks. The crystal structure generation + real-valued prediction is a cool addition—haven't seen that in many open models. Task scaling over just throwing more params at it feels like the right direction. Will check out the weights and see how it handles my use cases.
Zealousideal-Lie8829@reddit
damn nice bro gonna test it soon
MrBIMC@reddit
looks cool. MoE with less yapping to itself - sounds like a perfect candidate for strix halo to run at 8bit.
Awaiting ggufs. And hoping they also train a 122b model, given that alibaba dropped it from the 3.6 public release.
BlueSwordM@reddit
Honestly, considering how good Intern S1-Mini was in the first place, I'd be interested to try this out once I have the time later this week.
HavenTerminal_com@reddit
task scaling is interesting. harder training problems instead of bigger models, and somehow crystal structure generation ended up in the same package.
StupidityCanFly@reddit
Nice, this one looks interesting. Time to test.