Today, we release LFM2-VL, our first series of vision-language foundation models. These multimodal models are designed for low-latency and device-aware deployment. LFM2-VL extends the LFM2 family of open-weight Liquid Foundation Models (LFMs) into the vision-language space, supporting both text and image inputs with variable resolutions.

LFM2-VL offers a practical and versatile solution for various device environments, ranging from phones, laptops, and single-GPU instances to wearables and other embedded devices. Our models achieve competitive performance on vision-language tasks while offering significant efficiency gains, with up to 2× faster inference on GPU compared to existing models.

LFM2-VL comes in two variants: the hyper-efficient LFM2-VL-450M for highly resource-constrained settings, and the more capable yet still lightweight LFM2-VL-1.6B. Both integrate with the open-source ecosystem as well as LEAP for customization and multi-platform edge deployment.

Highlights of LFM2-VL

  • New efficient models based on LFM2: LFM2-VL-450M and LFM2-VL-1.6B, designed for resource-constrained environments
  • 2× faster inference speed on GPUs compared to existing VLMs while maintaining competitive accuracy
  • Flexible architecture with user-tunable speed-quality tradeoffs at inference time
  • Native resolution processing up to 512×512 with intelligent patch-based handling for larger images, avoiding upscaling and distortion

Figure 1. Demo: LFM2-VL-450M identifying a tropical beach with cows on the shore.

Architecture

LFM2-VL consists of three main components: a language model backbone, a vision encoder, and a multimodal projector.

Figure 2. LFM2-VL architecture and data flow.

For the language model tower, LFM2-VL builds upon the LFM2 backbone, inheriting from either LFM2-1.2B (for LFM2-VL-1.6B) or LFM2-350M (for LFM2-VL-450M).

For the vision tower, LFM2-VL uses SigLIP2 NaFlex encoders to convert input images into token sequences. Two variants are implemented:

  • Shape-optimized (400M) for more fine-grained vision capabilities for LFM2-VL-1.6B
  • Base (86M) for fast image processing for LFM2-VL-450M

The encoder processes images at their native resolution up to 512×512 pixels, efficiently handling smaller images without upscaling and supporting non-standard aspect ratios without distortion.

Larger images are split into non-overlapping square patches of 512×512 each, preserving detail. In LFM2-VL-1.6B, the model also receives a thumbnail (a small, downscaled version of the original image capturing the overall scene) to enhance global context understanding and alignment. Special tokens mark each patch’s position and indicate the thumbnail’s start.
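
To make the tiling step concrete, here is a minimal sketch of how an oversized image could be split into 512×512 tiles alongside a downscaled thumbnail. This is an illustration only, not LFM2-VL's actual preprocessing: how edge tiles and resizing are handled is an implementation detail of the real pipeline, and the function name is ours.

```python
# Illustrative sketch of 512x512 tiling with a global thumbnail.
# NOT the exact LFM2-VL preprocessing; edge tiles are simply cropped here.
from PIL import Image

TILE = 512

def tile_with_thumbnail(img: Image.Image, tile: int = TILE):
    """Split a large image into non-overlapping tiles and add a downscaled global view."""
    tiles = []
    for top in range(0, img.height, tile):
        for left in range(0, img.width, tile):
            box = (left, top, min(left + tile, img.width), min(top + tile, img.height))
            tiles.append(img.crop(box))
    thumb = img.copy()
    thumb.thumbnail((tile, tile))  # small downscaled version for scene-level context
    return tiles, thumb
```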

For the multimodal projector, we implement a 2-layer MLP connector with pixel unshuffle to reduce image token count. This allowed us to increase throughput without major quality loss. For example, a 256×384 image generates 96 image tokens, a 384×680 image produces 240 tokens, and a 1000×3000 image yields 1,020 tokens.
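
As a back-of-the-envelope illustration (not the exact NaFlex arithmetic), the token count for an image that fits within 512×512 can be approximated from SigLIP2's 16-pixel patches and an assumed 2×2 pixel unshuffle, which is consistent with the 256×384 → 96-token example above; exact counts for other resolutions depend on how NaFlex rounds and resizes.

```python
# Back-of-the-envelope image token count (illustration only).
# Assumes 16-pixel SigLIP2 patches and a 2x2 pixel unshuffle (4x token reduction).

def approx_image_tokens(height: int, width: int, patch: int = 16, unshuffle: int = 2) -> int:
    vision_patches = (height // patch) * (width // patch)
    return vision_patches // (unshuffle * unshuffle)

print(approx_image_tokens(256, 384))  # 96, matching the example above
```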

This flexible architecture enables users to adjust the speed-quality tradeoff during inference without retraining. Both the maximum number of image tokens (which controls effective input resolution) and the number of image patches are user-tunable, allowing performance optimization for specific use cases and latency requirements.

Training

LFM2-VL builds on the LFM2 base model. Vision and language capabilities are then fused during a joint mid-training phase, where the proportion of text data is gradually decreased from 95% to 30%.

This is followed by a joint supervised fine-tuning stage with an emphasis on image understanding. Vision training data comes from a combination of large-scale open-source datasets and in-house synthetic vision datasets, selected to balance coverage across diverse tasks. Overall, LFM2-VL is trained on the order of 100 billion multimodal tokens.

Evaluation

Benchmarks

We evaluate LFM2-VL on several public vision-language benchmarks. The models show strong performance in high-resolution image understanding and multimodal instruction following, while remaining competitive on other tasks.

| Benchmark | LFM2-VL-1.6B | LFM2-VL-450M | InternVL3-2B | InternVL3-1B | SmolVLM2-2.2B | SmolVLM2-500M |
|---|---|---|---|---|---|---|
| RealWorldQA | 65.23 | 52.29 | 65.10 | 57.00 | 57.50 | 49.90 |
| MM-IFEval | 37.66 | 26.18 | 38.49* | 31.14* | 19.42* | 11.27* |
| InfoVQA (Val) | 58.68 | 46.51 | 66.10* | 54.94* | 37.75* | 24.64* |
| OCRBench | 742 | 655 | 831 | 798 | 725 | 609 |
| BLINK | 44.40 | 41.98 | 53.10 | 43.00 | 42.30 | 40.70 |
| MMStar | 49.53 | 40.87 | 61.10 | 52.30 | 46.00 | 38.20 |
| MMMU (Val) | 38.44 | 33.11 | 48.70 | 43.20 | 41.60 | 34.10 |
| MathVista | 51.10 | 44.70 | 57.60 | 46.90 | 51.50 | 37.50 |
| SEEDBench_IMG | 71.97 | 63.5 | 75.00 | 71.20 | 71.30 | 62.2 |
| MMVet | 48.07 | 33.76 | 67.00 | 58.70 | 34.90 | 29.90 |
| MME | 1753.04 | 1239.06 | 2186.40 | 1912.40 | 1792.50 | 1448.30 |
| MMLU (text) | 50.99 | 40.16 | 64.80 | 49.80 | - | - |
Table 1. Benchmark results for vision-language evaluations.
*We obtained MM-IFEval and InfoVQA (Val) scores for InternVL3 and SmolVLM2 models using VLMEvalKit.

Inference speed

Our models lead in inference speed, achieving the fastest GPU performance among the models we compared. We evaluate LFM2-VL on a typical workload: one 1024×1024 image paired with a short prompt such as "Describe this image in detail," generating 100 output tokens under each model's default settings. Under these conditions, LFM2-VL runs up to 2× faster than the fastest comparable model while delivering competitive accuracy.

Figure 3. Processing time comparison across vision-language models.
Figure 4. Memory footprint (in GB) comparison across vision-language models.

Build with LFM2-VL

LFM2-VL models are available today on Hugging Face, together with example finetuning code in Colab. We're releasing them under an open license based on Apache 2.0. The license allows you to freely use LFM2-VL models for academic and research purposes. You can also use them commercially if you're a smaller company (under $10M in revenue); above this threshold, contact us at sales@liquid.ai to obtain a commercial license. You can find more details about our license here.

Since LFM2-VL models are designed for on-device efficiency, we recommend testing them privately and locally on your device. They are currently compatible with Hugging Face transformers and TRL. We are actively working with the community to integrate LFM2-VL into other popular inference and finetuning frameworks.
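
As a starting point, here is a minimal sketch of running LFM2-VL with Hugging Face transformers. It assumes a recent transformers version, that the model follows the standard image-text-to-text chat interface, and uses assumed repository IDs and a placeholder image path; see the model cards on Hugging Face for the exact, up-to-date snippet.

```python
# Minimal sketch: run LFM2-VL with Hugging Face transformers.
# Repository IDs and the image path are placeholders / assumptions.
import torch
from PIL import Image
from transformers import AutoProcessor, AutoModelForImageTextToText

model_id = "LiquidAI/LFM2-VL-450M"  # or "LiquidAI/LFM2-VL-1.6B"
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForImageTextToText.from_pretrained(model_id, torch_dtype=torch.bfloat16)

image = Image.open("example.jpg")  # placeholder image
conversation = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": image},
            {"type": "text", "text": "Describe this image in detail."},
        ],
    }
]

# Tokenize the multimodal chat and generate a short description.
inputs = processor.apply_chat_template(
    conversation,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
)
output = model.generate(**inputs, max_new_tokens=100)
print(processor.batch_decode(output, skip_special_tokens=True)[0])
```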

If you are interested in custom solutions with edge deployment, please contact our sales team at sales@liquid.ai.
