Liquid LFM 2.5 hits edge with vision and audio
Liquid AI's latest hybrid models deliver high-performance vision and language capabilities locally in under 1 GB of RAM. Optimized for edge deployment, LFM 2.5 offers stronger instruction following and 2x faster CPU throughput than Llama 3.2.
Liquid AI's hybrid architecture sidesteps the quadratic scaling of full attention, enabling long-context reasoning with a minimal memory footprint. The 1.2B Instruct model's IFEval performance supports local agentic workflows, while native NPU optimization signals a shift toward specialized hardware for private inference. Multimodal variants such as VL-1.6B and Audio-1.5B further enable low-latency vision and voice interfaces on-device, with no cloud dependency.
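To see why a hybrid architecture keeps memory flat at long context, compare a pure-attention layer's KV cache, which grows linearly with every token processed, against a fixed-size recurrent state. The sketch below uses illustrative layer counts and dimensions (not Liquid AI's actual architecture or numbers); it only demonstrates the scaling behavior the summary describes.

```python
# Illustrative sketch (assumed dimensions, not LFM 2.5's real config):
# attention KV caches grow with context length; a recurrent-style
# hybrid layer carries a constant-size state instead.

def attention_kv_cache_bytes(seq_len, n_layers=16, n_kv_heads=8,
                             head_dim=64, bytes_per_elem=2):
    # Keys and values are cached for every past token in every layer,
    # so memory scales linearly with seq_len (and attention compute
    # scales quadratically).
    return 2 * n_layers * n_kv_heads * head_dim * bytes_per_elem * seq_len

def recurrent_state_bytes(n_layers=16, state_dim=512, bytes_per_elem=2):
    # A recurrent/convolutional hybrid layer keeps a fixed-size state,
    # independent of how many tokens have been seen.
    return n_layers * state_dim * bytes_per_elem

for seq_len in (1_024, 32_768):
    kv = attention_kv_cache_bytes(seq_len)
    rec = recurrent_state_bytes()
    print(f"{seq_len:>6} tokens: KV cache {kv / 2**20:7.1f} MiB, "
          f"fixed state {rec / 2**10:5.1f} KiB")
```

With these toy dimensions the KV cache grows from tens of MiB at 1K tokens to roughly a GiB at 32K, while the recurrent state stays at a few KiB regardless of context length, which is the property that makes sub-1 GB long-context inference plausible on edge hardware.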
Discovered: 2026-03-22
Published: 2026-03-22
Author: Better Stack