Liquid AI releases LFM2.5-VL-450M - structured visual understanding at 240ms
Posted by PauLabartaBajo@reddit | LocalLLaMA | View on Reddit | 7 comments
Today, we release LFM2.5-VL-450M, our most capable vision-language model for edge deployment. It processes a 512×512 image in 240ms, fast enough to reason about every frame of a 4 FPS video stream. It builds on LFM2-VL-450M with three new capabilities:
- bounding box prediction (81.28 on RefCOCO-M)
- multilingual visual understanding across 9 languages (MMMB: 54.29 → 68.09), and
- function calling support.
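The post doesn't show what the function-calling interface looks like, but a minimal sketch helps make the capability concrete. Everything below is illustrative: the tool name, field layout (OpenAI-style schema), and validator are assumptions, not Liquid's documented format.

```python
# Hypothetical tool definition a VLM function-calling loop might consume.
# Field names follow the common OpenAI-style schema; the actual LFM2.5-VL
# format may differ - check the model card / LEAP docs.
COUNT_OBJECTS_TOOL = {
    "type": "function",
    "function": {
        "name": "count_objects",
        "description": "Count instances of a named object class in the current frame.",
        "parameters": {
            "type": "object",
            "properties": {
                "label": {"type": "string", "description": "Object class to count."},
            },
            "required": ["label"],
        },
    },
}

def validate_tool_call(call: dict, tools: list[dict]) -> bool:
    """Check that a model-emitted tool call names a known tool and
    supplies every required argument before dispatching it."""
    by_name = {t["function"]["name"]: t["function"] for t in tools}
    fn = by_name.get(call.get("name"))
    if fn is None:
        return False
    required = fn["parameters"].get("required", [])
    args = call.get("arguments", {})
    return all(k in args for k in required)
```

Validating calls before dispatch matters more at 450M than at frontier scale: small models misformat tool calls more often, and a cheap schema check keeps a bad frame from crashing the loop.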
Most production vision systems are still multi-stage: a detector, a classifier, heuristic logic on top. This model does it in one pass:
- locating objects
- reasoning about context, and
- returning structured outputs directly on-device.
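On the consuming side, the single-pass output still needs a thin parsing layer. A sketch of that layer, assuming the model emits a JSON list of labeled pixel-space boxes (the actual output schema isn't specified in the post and may differ):

```python
import json

def parse_detections(model_text: str, width: int, height: int) -> list[dict]:
    """Parse a JSON detection list from model output and clip each box
    to the image bounds. Assumed schema per detection:
    {"label": str, "box": [x1, y1, x2, y2]} in pixels."""
    detections = []
    for det in json.loads(model_text):
        x1, y1, x2, y2 = det["box"]
        detections.append({
            "label": det["label"],
            "box": [
                max(0, min(x1, width)), max(0, min(y1, height)),
                max(0, min(x2, width)), max(0, min(y2, height)),
            ],
        })
    return detections

# Clipping guards against slightly out-of-frame coordinates, a common
# failure mode for generative bounding-box prediction:
# parse_detections('[{"label": "cat", "box": [-5, 10, 600, 200]}]', 512, 512)
# -> [{"label": "cat", "box": [0, 10, 512, 200]}]
```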
It runs on Jetson Orin, Samsung S25 Ultra, and AMD Ryzen AI Max+ 395. Open-weight, available now on Hugging Face, LEAP, and our Playground.
HF model checkpoint: https://huggingface.co/LiquidAI/LFM2.5-VL-450M
Blog post: https://www.liquid.ai/blog/lfm2-5-vl-450m
Specter_Origin@reddit
I feel they need to add a weight class in the 2-8B range to make the model more reliably usable in actual use cases.
_JustLivingLife_@reddit
I imagine 0.8B models are useful for very tailored fine-tuning
Specter_Origin@reddit
Yes, and they can add a new weight class while keeping the current one as is
DistanceSolar1449@reddit
4-8B, more like.
2B models just aren't there yet. Even Google couldn't make Gemma 4 2B functional.
WhoRoger@reddit
InternVL 3.5 1B seems pretty solid; I'm just playing with it now. I'll probably check the larger ones later.
Designer_Reaction551@reddit
The function calling support on a 450M model is the real story here, imo. It means you can wire this into an agent loop running entirely on-device: camera feed goes in, structured tool calls come out, no cloud roundtrip. 240ms per frame at 512×512 is genuinely usable for real-time robotics or quality-inspection workflows where you can't afford the latency of a remote API. The single-pass architecture replacing detector + classifier + heuristic stacks is also exactly the direction edge ML needs to go: fewer moving parts = fewer failure modes in production.
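The loop this comment describes is simple enough to sketch. Here the model is stubbed with a plain callable standing in for on-device LFM2.5-VL inference; the frame source, tool names, and call format (`{"name": ..., "arguments": ...}`) are all assumptions for illustration.

```python
from typing import Callable

def agent_loop(frames, model: Callable[[bytes], dict],
               tools: dict[str, Callable]) -> list:
    """Minimal on-device agent loop: each frame goes through the VLM,
    which emits one tool call; we dispatch it locally, with no network
    hop anywhere in the path."""
    results = []
    for frame in frames:
        call = model(frame)  # e.g. {"name": "flag_defect", "arguments": {...}}
        handler = tools.get(call.get("name"))
        if handler is not None:           # ignore calls to unknown tools
            results.append(handler(**call["arguments"]))
    return results

# Usage with a stub model, e.g. for a quality-inspection pipeline:
def stub_model(frame: bytes) -> dict:
    return {"name": "flag_defect", "arguments": {"severity": "high"}}

tools = {"flag_defect": lambda severity: f"flagged:{severity}"}
```

At 240ms/frame the model call dominates this loop, so keeping dispatch synchronous and in-process (rather than queueing to a service) is what preserves the latency budget.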
Foreign-Beginning-49@reddit
Omg, you guys did it again! Can't wait to test this out. Congrats on the new release, and thank you.