Liquid AI releases LFM2.5-VL-450M - structured visual understanding at 240ms

Posted by PauLabartaBajo | r/LocalLLaMA


Today, we release LFM2.5-VL-450M, our most capable vision-language model for edge deployment. It processes a 512×512 image in 240 ms, fast enough to reason about every frame of a 4 FPS video stream. It builds on LFM2-VL-450M with three new capabilities:
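The real-time claim can be sanity-checked with simple arithmetic: at 4 FPS each frame has a 250 ms budget, so a 240 ms inference pass fits, assuming no other per-frame overhead (decode, pre/post-processing are not counted here).

```python
# Sanity-check the real-time claim: 240 ms/image vs. a 4 FPS stream.
LATENCY_MS = 240  # reported time to process one 512x512 image
STREAM_FPS = 4    # target video frame rate

frame_budget_ms = 1000 / STREAM_FPS       # time available per frame: 250 ms
headroom_ms = frame_budget_ms - LATENCY_MS

print(f"Per-frame budget: {frame_budget_ms:.0f} ms, headroom: {headroom_ms:.0f} ms")
# A 240 ms pass leaves 10 ms of slack per frame at 4 FPS.
```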

Most production vision systems are still multi-stage: a detector, a classifier, and heuristic logic on top. This model does it all in one pass:
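To illustrate what single-pass structured output replaces: a multi-stage pipeline stitches separate model outputs together with glue code, whereas a model that emits structured JSON directly can be consumed in one parse. The schema below is purely hypothetical, not the model's documented output format.

```python
import json

# Hypothetical structured response from a single forward pass
# (the field names here are illustrative, not the model's actual schema).
raw_output = '''
{
  "objects": [
    {"label": "forklift", "bbox": [120, 48, 310, 260], "confidence": 0.91},
    {"label": "person",   "bbox": [402, 80, 470, 255], "confidence": 0.87}
  ]
}
'''

detections = json.loads(raw_output)["objects"]
high_conf = [d["label"] for d in detections if d["confidence"] > 0.9]
print(high_conf)  # ['forklift']
```

Because detection, classification, and filtering collapse into one model call plus one parse, there is no cross-stage heuristic logic to maintain.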

It runs on Jetson Orin, the Samsung Galaxy S25 Ultra, and the AMD Ryzen AI Max+ 395. Open weights are available now on Hugging Face, LEAP, and our Playground.

HF model checkpoint: https://huggingface.co/LiquidAI/LFM2.5-VL-450M
Blog post: https://www.liquid.ai/blog/lfm2-5-vl-450m