Automated AI researcher running locally with llama.cpp
Posted by lewtun@reddit | LocalLLaMA
Hi everyone, I'm happy to share ml-intern, a harness that gives agents tighter integration with Hugging Face's open-source libraries (transformers, datasets, trl, etc.) and Hub infrastructure:
https://github.com/huggingface/ml-intern
The harness is quite simple (basically tools + a system prompt) and we built it initially for Claude Opus. However, now that open models are getting really good at agentic workflows, I've added support for running ml-intern with local models via llama.cpp or ollama. As you can see in the video, Qwen3.6-35B-A3B is able to SFT a model end-to-end by orchestrating CPU/GPU sandboxes and jobs on the Hub. I find this pretty neat because we can now have an AI researcher running 24/7 on a laptop, without maxing out token limits :)
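If you want to script against the local server directly, the plumbing looks roughly like this, assuming you've started llama.cpp's `llama-server` separately (the GGUF filename and prompts below are placeholders, not what ml-intern ships with):

```python
# Talk to a local llama.cpp server via its OpenAI-compatible endpoint.
# Assumes the server is already running, e.g.:
#   llama-server -m qwen3.6-35b-a3b-q4_k_m.gguf --port 8080
# (the GGUF filename above is a placeholder)
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8080/v1",  # llama.cpp's OpenAI-compatible API
    api_key="not-needed",                 # a local server ignores the key
)

response = client.chat.completions.create(
    model="local",  # llama.cpp serves one model; the name here is arbitrary
    messages=[
        {"role": "system", "content": "You are an ML research assistant."},
        {"role": "user", "content": "Draft an SFT plan for a small model."},
    ],
)
print(response.choices[0].message.content)
```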
Anyway, I hope this is useful to the community and please let me know if there are any features that you'd like us to include.
jochenboele@reddit
have you noticed big differences in agentic reliability between Claude and local models like Qwen3.6 so far?
lewtun@reddit (OP)
Yeah, at least for my local setup (MacBook with 32GB unified memory) the 4-bit quants tend to be a bit less consistent, especially once the context grows very large. However, running Qwen3.6-27B with MTP got pretty close to Claude in my testing (for that I had to switch to a GPU + vLLM, because MTP hasn't landed in llama.cpp yet ...)
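For reference, the vLLM path is roughly this (offline API shown for brevity; the model id mirrors the comment but is an assumption, and the MTP/speculative-decoding knobs are version-dependent so I've left them out):

```python
# Rough sketch of running the model under vLLM instead of llama.cpp.
# Enabling MTP depends on your vLLM version, so it's omitted here.
from vllm import LLM, SamplingParams

llm = LLM(model="Qwen/Qwen3.6-27B")  # hypothetical Hub id
params = SamplingParams(temperature=0.7, max_tokens=256)

outputs = llm.generate(["Plan an SFT run for a 1B model."], params)
print(outputs[0].outputs[0].text)
```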
jochenboele@reddit
In my experience they can look impressive in demos yet silently derail after enough tool calls/context growth.
EbbNorth7735@reddit
I've had the exact opposite experience myself. Fully coherent at 200k+ context.
LoafyLemon@reddit
Much to my surprise, considering my previous experiences with Qwen... I have to say this matches my own experience.
35B stays coherent and codes at over 150k context length. Insane...
lewtun@reddit (OP)
Yeah agreed, although the models are getting better almost every month so I'm optimistic about the future capabilities :)
MotokoAGI@reddit
what are the 2 most successful outcomes you have seen with both open models and closed models so far?
lewtun@reddit (OP)
With the closed models, ml-intern was able to port the talkie model from its raw torch format to transformers and make it 100x faster in the process: https://huggingface.co/lewtun/talkie-1930-13b-it-hf
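For the curious, a port like that is mostly checkpoint surgery, along the lines of the sketch below. This is illustrative rather than the actual ml-intern steps, and every name (checkpoint path, key prefixes) is hypothetical:

```python
# Illustrative sketch of porting a raw torch checkpoint into transformers:
# load the state dict, rename parameters to the target naming scheme, then
# save in HF format. Paths and key prefixes below are hypothetical.
import torch
from transformers import AutoConfig, AutoModelForCausalLM

state_dict = torch.load("talkie_raw.pt", map_location="cpu")  # hypothetical path

# Rename original parameter keys onto the transformers naming scheme.
remapped = {
    k.replace("tok_emb.", "model.embed_tokens."): v
    for k, v in state_dict.items()
}

config = AutoConfig.from_pretrained("lewtun/talkie-1930-13b-it-hf")
model = AutoModelForCausalLM.from_config(config)
model.load_state_dict(remapped, strict=False)  # strict=False while iterating
model.save_pretrained("talkie-hf")
```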
With open models, I'm still experimenting, but so far they have been decent at launching training runs automatically, roughly like the sketch below.
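To give a concrete picture, "launching a training run" boils down to something like this trl sketch (model and dataset ids are placeholders, not what the agent actually picks):

```python
# Minimal SFT run with trl's SFTTrainer; ids below are placeholders.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("trl-lib/Capybara", split="train")  # example dataset

trainer = SFTTrainer(
    model="Qwen/Qwen2.5-0.5B-Instruct",  # placeholder small model
    train_dataset=dataset,
    args=SFTConfig(output_dir="sft-demo", max_steps=100),
)
trainer.train()
```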
MotokoAGI@reddit
very nice